This is actually quite momentous, and something I’ve been musing about for a while (how it would work, that is). High-def *video* capture – NOT CGI, not a 3D model, but something you can experience in virtual reality space as if you were standing there in the real world.
This is a critical shift, eliminating the need for the artificial creation of worlds/experience. Can you interact with it (touch anything)? Probably not. But it will come.
I knew there were companies working on taking moving pictures and interpolating 3D models out of them (required to move around things and have them shift as you move your POV), and I’m finally seeing some of it come to fruition. I’m not actually sure that’s the case here (that it’s not just smart faking of 3D perspective), but in order to develop a world where you can interact with things, they need to have physical definition – otherwise you won’t be able to touch them, pick them up, etc. So you need to not only record a scene, but interpolate the depth of objects and spaces, map that to a wireframe (3D talk for…an object), AND THEN give the user (visitor?!) the ability to move around the scene. That is a *lot* of computing power.
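To make that "interpolate the depth, then give it physical definition" step concrete, here’s a minimal sketch of the classic first move: turning a per-pixel depth map into 3D points using a pinhole camera model. This is purely illustrative – the focal length, image size, and depth values are invented, not from any real capture rig.

```python
import numpy as np

# Toy sketch: back-projecting a per-pixel depth map into 3D points,
# the "physical definition" step. The camera intrinsics (focal length,
# image center) are made-up illustrative values.
def depth_to_points(depth, focal=500.0):
    h, w = depth.shape
    cx, cy = w / 2.0, h / 2.0
    # Pixel coordinate grid
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Pinhole model: X = (u - cx) * Z / f, Y = (v - cy) * Z / f, Z = depth
    z = depth
    x = (u - cx) * z / focal
    y = (v - cy) * z / focal
    return np.stack([x, y, z], axis=-1)  # shape (h, w, 3)

# A flat 4x4 "scene" two meters from the camera
points = depth_to_points(np.full((4, 4), 2.0))
print(points.shape)  # (4, 4, 3)
```

A real pipeline would then stitch these point clouds across frames and fit a mesh (the wireframe) over them – which is where the serious computing power comes in.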
The road is long and steep to get this to market, but can you imagine what will happen when this technology is merged with all those 360 camera captures out there? You can literally experience or relive what someone else is doing. Literally travel the world without leaving your comfy Barcalounger.
Now we only need some smart company to start engineering “Smell-O-Vision” so you can really experience what it’s like to walk around Jaipur (I only pick on Jaipur because I’ve been and can say, you don’t want Smell-O-Vision).
The interesting bit about this isn’t that there are realistic, created 3D VR experiences, but that creating them just got one step closer to being easier – and more real. Creating an immersive experience in VR isn’t as easy as it sounds; you know that suspension of disbelief you have to have while watching a movie? And how when one thing is off – say a science fact (umm, maybe that’s just me lol) – you are immediately pulled out of the experience? VR has that even worse. We have a lifetime of experiences in the real world to check against, so any tiny little thing that’s off when you’re there – say, a shadow not being right – and you’ll be pulled out of the realism. Being able to use 360 footage goes a long way toward solving those kinds of problems.
I have a friend who is about to leave the city he’s lived in for 20 years, to move to New Zealand. It’s a long distance and he probably won’t get back to there often – but with this, he could create his own 360 video, and after he moves relive his favorite walks as if he were there, whenever he feels like it. And some day, he can be joined by his long distance daughter virtually, and they can walk together.
Isn’t that a wonderful way VR will add to people’s lives?
“Those who would help displaced factory workers need to think much more urgently about how to provide for and accelerate what policymakers euphemistically call “adjustment” for the victims of economic shocks like deindustrialization.”
I was lucky to attend Coburn Ventures’ annual futures discussion conference last week, as a guest thought leader. An exceedingly interesting day with lots of fascinating people! It’s a little bit like a reunion, as many return each year. I thoroughly enjoy catching up with everyone.
It’s interesting to me to see what topics (and companies) are the “hot” discussions each year. At this year’s gathering, Artificial Intelligence and Machine Learning dominated a large part of the program.
Artificial Intelligence (AI) refers to computer systems/applications/services that can perform tasks – like decision making, translation, and visual and speech recognition – that would normally be done by a human.
Machine Learning refers to AI that can learn from experience and new data without additional programming.
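To make that definition concrete, here’s a bare-bones sketch of "learning from experience and new data without additional programming": a tiny perceptron whose weights adjust as examples arrive. The data and learning rate are invented for illustration – swap in different examples and the same unchanged code learns different behavior.

```python
# A minimal perceptron: weights change from error feedback as
# examples arrive. The training data below is invented for illustration.
def train(examples, epochs=10, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred          # 0 if correct; this is the "experience"
            w[0] += lr * err * x1       # nudge weights toward the right answer
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Label is 1 whenever the first feature dominates the second
data = [((2, 1), 1), ((1, 2), 0), ((3, 1), 1), ((1, 3), 0)]
w, b = train(data)
print(predict(w, b, 4, 1))  # 1
```

The point isn’t the math; it’s that the program’s behavior came from the data, not from a programmer writing new rules.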
These technologies are great for things such as manufacturing, where an AI computer can learn from its mistakes and correct the process…they’re also being used a lot to interact with actual humans, quite often for customer service. You know that live agent you thought you were chatting with about that online purchase? Yeah, not a human. Amazon’s Echo, Apple’s Siri – all emerging forms of bots utilizing artificial intelligence.
Pretty soon it will be very hard to tell if it’s actually a human you are talking to.
Here’s a silly example of me chatting with http://chatwithigod.com. It’s hardly sophisticated, but I chuckled:
Me: tell me why artificial intelligence will take over the world
God: The explanation is rather complicated.
Me: why is it complicated?
God: Why do I exist?
Me: do you exist?
God: I exist as much as you exist.
Me: how can you prove that
God: That is a good epistemological question.
How Artificial Intelligence / Machine Learning systems learn fascinates me.
AI/ML systems are not tabulae rasa – depending on the data set being used, bias still creeps in. Right now IBM’s WATSON is being applied to subject areas as varied as the weather, cancer and travel. This learning has to start with some kind of corpus of data – somewhere like the last 50 years of weather data or thousands of cancer diagnoses. While we think of AI as cold and clinical, when we use human language as the corpus, things get… interesting.
A prime (and bad) example of learning, though, came when Microsoft birthed a bot named Tay earlier this year – a Twitter bot that the company described as an experiment in “conversational understanding.” Microsoft engineers said,
“The chatbot was created in collaboration between Microsoft’s Technology and Research team and its Bing team…
Tay’s conversational abilities were built by ‘mining relevant public data’ and combining that with input from editorial staff, including improvisational comedians.”
The bot was supposed to learn and improve as it talks to people, so theoretically it should become more natural and better at understanding input over time.
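A toy sketch of how that kind of learn-as-it-talks design can go sideways: a bigram model that only ever recombines what it has been fed. Feed it friendly text and it sounds friendly; feed it toxic text and it sounds toxic, with no understanding either way. The training lines below are invented for illustration, and this is far simpler than whatever Tay actually ran.

```python
import random
from collections import defaultdict

# Toy "learning from conversation": for each word, remember what words
# have followed it, then generate replies by chaining those memories.
class EchoBot:
    def __init__(self):
        self.follows = defaultdict(list)

    def learn(self, sentence):
        words = sentence.lower().split()
        for a, b in zip(words, words[1:]):
            self.follows[a].append(b)

    def reply(self, seed, length=5):
        word, out = seed.lower(), [seed.lower()]
        for _ in range(length):
            options = self.follows.get(word)
            if not options:
                break
            word = random.choice(options)
            out.append(word)
        return " ".join(out)

bot = EchoBot()
bot.learn("humans are wonderful")
bot.learn("humans are great teachers")
print(bot.reply("humans"))  # e.g. "humans are wonderful"
```

Teach the same bot hostile sentences instead, and its "creative" replies turn hostile – the model has no opinion, only its corpus.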
Not only did it aggregate, parse, and repeat what some people tweeted – it actually came up with its own “creative” answers, such as the one below in response to the perfectly innocent question posed by one user – “Is Ricky Gervais an atheist?”:
Tay hadn’t developed a full-fledged position on ideology before they pulled the plug, though. In 15 hours it referred to feminism as both a “cult” and a “cancer,” as well as tweeting “gender equality = feminism” and “i love feminism now.” Tweeting “Bruce Jenner” at the bot got similarly mixed responses, ranging from “caitlyn jenner is a hero & is a stunning, beautiful woman!” to the transphobic “caitlyn jenner isn’t a real woman yet she won woman of the year?”. None of these were phrases it had been asked to repeat…so no real understanding of what it was saying. Yet.
And in a world where increasingly the words are the only thing needed to get people riled up – this could easily be an effective “news” bot, on an opinion / biased site.
Artificial Intelligence is a very, very big subject. Morality (roboethics) will play a large role in this topic in the future (hint: google “Trolley Problem”): if an AI-driven car has to make a quick decision to either drive off a cliff (killing the passenger) or hit a school bus full of children, how is that decision made, and whose ethical framework makes it (yours? the car manufacturer’s? your insurance company’s?)? Things like that. It’s a big enough subject area that Facebook, Google and Amazon have partnered to create a nonprofit together around the subject of AI, which will “advance public understanding” of artificial intelligence and formulate “best practices on the challenges and opportunities within the field.”
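The “whose ethical framework?” question can be made concrete in a few lines: the same situation, scored by swappable frameworks that reach opposite decisions. Both frameworks and their weights below are invented for illustration – they are not real policy, just a sketch of why the choice of framework matters.

```python
# Two invented ethical frameworks scoring the same driving dilemma.
def utilitarian(option):
    # Minimize total expected deaths, whoever they are.
    return -option["expected_deaths"]

def passenger_first(option):
    # Heavily penalize any harm to the vehicle's own passenger.
    return -option["expected_deaths"] - 100 * option["passenger_dies"]

options = [
    {"name": "swerve off cliff", "expected_deaths": 1, "passenger_dies": 1},
    {"name": "hit school bus", "expected_deaths": 20, "passenger_dies": 0},
]

for framework in (utilitarian, passenger_first):
    choice = max(options, key=framework)
    print(framework.__name__, "->", choice["name"])
# utilitarian -> swerve off cliff
# passenger_first -> hit school bus
```

Same car, same cliff, same bus – the decision flips entirely depending on whose values got coded in. That’s the roboethics problem in miniature.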
If these three partner on something, you can be sure it’s because it is a big, serious subject.
AI is not only being used to have conversations, but ultimately to create systems that will learn and physically act. The military (DARPA) is one of the heavy researchers into Artificial Intelligence and machine learning. Will future wars be run by computers, making their own decisions? Will we be able to intervene? How will we be able to control the ideological platforms they might develop without our knowledge, and how will we communicate with these supercomputers – if it is already so difficult to communicate assumptions? Will they be interested in our participation?
Reminds me a little bit of Leeloo in The Fifth Element, learning how horrible humans have been to each other and giving up on humanity completely.
There’s even a new twist in the AI story: researchers at Google Brain, Google’s research division for deep machine learning, have built neural networks that, when properly tasked and over the course of 15,000 tries, became adept at developing their own simple encryption technique that only they can share and understand. And the human researchers are officially baffled as to how this happened.
Neural nets are capable of all this because they are computer networks modeled after the human brain. This is what’s fascinating with AI aggregate technologies, like deep learning. It keeps getting better, learning on its own, with some even capable of self training.
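For a sense of what “modeled after the human brain” means in practice, here’s the smallest possible piece: a single artificial neuron that sums weighted inputs, squashes the result, and nudges its weights from error feedback. The data, weights, and learning rate are all illustrative; real networks chain millions of these.

```python
import math

# One artificial "neuron": weighted inputs, a squashing function,
# and weights that improve from error feedback. Values are illustrative.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def neuron(w, b, x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# One gradient-descent step on a single (input, target) pair
def learn_step(w, b, x, target, lr=0.5):
    out = neuron(w, b, x)
    # Derivative of squared error through the sigmoid
    grad = (out - target) * out * (1 - out)
    w = [wi - lr * grad * xi for wi, xi in zip(w, x)]
    b = b - lr * grad
    return w, b

w, b, x, target = [0.0, 0.0], 0.0, [1.0, 1.0], 1.0
before = abs(target - neuron(w, b, x))
for _ in range(100):
    w, b = learn_step(w, b, x, target)
after = abs(target - neuron(w, b, x))
print(after < before)  # True: the error shrinks as the neuron "learns"
```

Nobody tells the neuron what its weights should be – the repeated error feedback does. Scale that loop up across huge networks and you get behavior (like the encryption scheme above) that even the builders can’t fully trace.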
We truly are at just the beginning of machines doing what we thought was reserved only for humans. A complex subject indeed.
And one last note to think upon…machine learning and automation are going to slowly but surely continue (because they already are) to take over jobs that humans did/do. Initially it’s been manufacturing automation; but as computers become intelligent and capable of learning, they will replace nearly everything, including creative, caretaking, legal, medical and strategic jobs – things that most people would like to believe are “impossible” to replace with robots.
And clearly they are not. While the best-performing model is AI + a human, there will still be far fewer humans needed across the board.
If the recent election is any indication of the disgruntlement that job losses and high unemployment are causing, how much worse will it be when 80% of the adult workforce is unnecessary? What steps are industries, education and the government taking to identify how humans can stay relevant, and to ensure that the population is prepared? I’d submit, little to none.
While I don’t have the answers, I would like to be part of the conversation.
Invited to Coburn Ventures’ annual gathering as a “thought leader” this week, for the fourth year in a row! – always a fun gathering of the best and most interesting thinkers (thought leaders + investment professionals) from around the globe, pondering the future direction of various technologies on business and humanity.
What to wear…always the question.
So to the intertoobz I go. And it struck me: why am I internet shopping in exactly the same way I have been since, well, pretty much the beginning of ecommerce? Searching based on some keywords, ending up on a store’s website with a bunch of thumbnails, mostly of young gazelles – two of whom I could probably fit into one of my dresses. Maybe there’s a filter, sometimes even with filtering categories I care about. Ordering 2, 3, 4 alternatives – which will be returned if not right.
Such a waste. Of time, of delivery gasoline…of raw materials. I imagine the mountains of clothing, made in amounts forecast to be roughly correct – but then it’s 60 degrees in November in New York, and it all wastes away in some warehouse, somewhere. Or in stores…some ends up in outlet stores…some goes back to the manufacturers, only to be sent to online clearance sites…or to some faraway country, dumped on a market that cares less about trends.
Sigh. Our poor planet.
Where’s my 3d printed clothing, made to my (scanned) body size, to my specs? What if I am not a 20 year old gazelle, and I want the skirt to be a few inches longer? Shorter?
Why has there been so little disintermediation in the way we shop and dress ourselves?
I ponder this as I push the “buy” button and pay an extra $20 for fast delivery, contemplating all the bells, widgets, gizmos and wheels that immediately start turning in response. And I think back to this blog entry, which was based on a lot of thinking I did in 2006. 10 years!!
I love technology because it’s changing how we interact with each other. How we live. How we talk. Relate. And this all has huge ramifications for strategic business growth. With a vision for where it’s going, and an eye on innovation, this is an exploration of a very complex – and fascinating – subject.