“Those who would help displaced factory workers need to think much more urgently about how to provide for and accelerate what policymakers euphemistically call ‘adjustment’ for the victims of economic shocks like deindustrialization.”
I was lucky to attend Coburn Ventures’ annual futures discussion conference last week as a guest thought leader. An exceedingly interesting day with lots of fascinating people! It’s a little bit like a reunion, as many return each year. I thoroughly enjoy catching up with everyone.
It’s interesting to me to see what topics (and companies) are the “hot” discussions each year. At this year’s gathering, Artificial Intelligence and Machine Learning dominated a large part of the program.
Artificial Intelligence (AI) refers to computer systems, applications, and services that can perform tasks that would normally be done by a human, like decision making, translation, and visual and speech recognition.
Machine Learning refers to AI that can learn from experience and new data without additional programming.
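That “learn from experience without additional programming” idea can be sketched in a few lines of code. Here’s a toy illustration (entirely my own, not from any particular product or library): the adjustment rule is fixed up front, but everything the model ends up “knowing” comes from the example data it is shown.

```python
# A minimal sketch of "learning from data": fit y ≈ w*x by trial and
# error. The rule for adjusting w never changes; what the model
# "knows" comes entirely from the examples, not from new code.

def learn(examples, steps=1000, lr=0.01):
    w = 0.0  # start knowing nothing
    for _ in range(steps):
        for x, y in examples:
            error = w * x - y      # how wrong is the current guess?
            w -= lr * error * x    # nudge w to reduce the error
    return w

# Teach it "y is twice x" purely from data:
data = [(1, 2), (2, 4), (3, 6)]
w = learn(data)
print(round(w, 2))  # 2.0
```

Feed it different examples and the same unchanged program learns a different rule – that, in miniature, is the distinction between ML and ordinary software.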
These technologies are great for things such as manufacturing, where an AI system can learn from its mistakes and correct the process. They’re also being used a lot to interact with actual humans, quite often for customer service. You know that live agent you thought you were chatting with about that online purchase? Yeah, not a human. Amazon’s Echo, Apple’s Siri – all emerging forms of bots utilizing artificial intelligence.
Pretty soon it will be very hard to tell whether it’s actually a human you are talking to.
Here’s a silly example of me chatting with http://chatwithigod.com. Hardly a sophisticated example, but I chuckled:
Me: tell me why artificial intelligence will take over the world
God: The explanation is rather complicated.
Me: why is it complicated?
God: Why do I exist?
Me: do you exist?
God: I exist as much as you exist.
Me: how can you prove that
God: That is a good epistemological question.
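Bots of this vintage are usually just pattern-matchers in the lineage of ELIZA from the 1960s. Here’s a toy sketch of the idea (my own illustration – I have no idea how chatwithigod.com is actually implemented): canned pattern-to-response rules, with zero real understanding behind them.

```python
import re

# Toy ELIZA-style chatbot: each rule is (regex pattern, response
# template). First match wins; the catch-all ".*" guarantees a reply.
RULES = [
    (r"why (.+)", "Why do you think {0}?"),
    (r"do you (.+)", "I {0} as much as you do."),
    (r"tell me (.+)", "The explanation is rather complicated."),
    (r".*", "That is a good question."),
]

def reply(message):
    for pattern, template in RULES:
        m = re.match(pattern, message.lower())
        if m:
            return template.format(*m.groups())

print(reply("do you exist"))  # I exist as much as you do.
```

Notice how the second rule echoes your own words back – a trick that makes a few lines of regex feel eerily conversational, which is exactly why these transcripts can fool people.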
How Artificial Intelligence / Machine Learning systems learn fascinates me.
AI/ML systems are not tabulae rasae – depending on the data set being used, bias still creeps in. Right now IBM’s Watson is being applied to subject areas as varied as the weather, cancer, and travel. This learning has to start with some kind of corpus of data – the last 50 years of weather data, say, or thousands of cancer diagnoses. While we think of AI as cold and clinical, when we use human language as the corpus, things get… interesting.
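To make the “bias creeps in from the corpus” point concrete, here’s a deliberately tiny, hypothetical sketch of my own: a model that does nothing but count which label each word co-occurred with in its training corpus. If the corpus is skewed, the learned associations are skewed – no malice required.

```python
from collections import Counter

def train(corpus):
    # Count word/label co-occurrences in the training corpus.
    counts = {}
    for text, label in corpus:
        for word in text.split():
            counts.setdefault(word, Counter())[label] += 1
    return counts

def predict(counts, word):
    # The model's "belief" is just whatever the corpus said most often.
    if word not in counts:
        return "unknown"
    return counts[word].most_common(1)[0][0]

# A skewed corpus (hypothetical): "nurse" only ever appears near "she".
corpus = [
    ("she is a nurse", "female"),
    ("she was a nurse", "female"),
    ("he is a doctor", "male"),
]
model = train(corpus)
print(predict(model, "nurse"))  # "female" – learned from the skew, not the world
```

Real systems are vastly more sophisticated, but the principle scales: a model trained on human text inherits the statistics, and therefore the prejudices, of that text.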
A prime (and bad) example of this kind of learning came when Microsoft birthed a bot named Tay earlier this year – a Twitter bot that the company described as an experiment in “conversational understanding.” Microsoft engineers said:
The chatbot was created in collaboration between Microsoft’s Technology and Research team and its Bing team…
Tay’s conversational abilities were built by “mining relevant public data” and combining that with input from editorial staff, including improvisational comedians.
The bot was supposed to learn and improve as it talked to people, so theoretically it should have become more natural and better at understanding input over time.
Not only did it aggregate, parse, and repeat what some people tweeted – it actually came up with its own “creative” answers, such as its response to the perfectly innocent question posed by one user, “Is Ricky Gervais an atheist?”
Tay hadn’t developed a full-fledged position on ideology, though, before they pulled the plug. In 15 hours it referred to feminism both as a “cult” and a “cancer,” but also tweeted “gender equality = feminism” and “i love feminism now.” Tweeting “Bruce Jenner” at the bot got similarly mixed responses, ranging from “caitlyn jenner is a hero & is a stunning, beautiful woman!” to the transphobic “caitlyn jenner isn’t a real woman yet she won woman of the year?” None of these were phrases it had been asked to repeat – so no real understanding of what it was saying. Yet.
And in a world where, increasingly, words alone are enough to get people riled up, this could easily be an effective “news” bot on an opinionated, biased site.
Artificial Intelligence is a very, very big subject. Morality (roboethics) will play a large role in this topic in the future (hint: google “Trolley Problem”): if an AI-driven car has to make a split-second decision either to drive off a cliff (killing the passenger) or to hit a school bus full of children, how is that decision made, and whose ethical framework makes it (yours? the car manufacturer’s? your insurance company’s)? Things like that. It’s a big enough subject area that Facebook, Google and Amazon have partnered to create a nonprofit together around AI, which will “advance public understanding” of artificial intelligence and formulate “best practices on the challenges and opportunities within the field.”
If these three partner on something, you can be sure it’s because it is a big, serious subject.
AI is not only being used to have conversations, but ultimately to create systems that will learn and act physically. The military (DARPA) is one of the heaviest researchers into Artificial Intelligence and machine learning. Will future wars be run by computers making their own decisions? Will we be able to intervene? How will we control the ideological platforms they might develop without our knowledge, and how will we communicate with these supercomputers – if it is already so difficult to communicate assumptions? Will they be interested in our participation?
Reminds me a little bit of Leeloo in The Fifth Element, learning how horrible humans have been to each other and giving up on humanity completely.
There’s even a new twist in the AI story: researchers at Google Brain, Google’s deep learning research division, have built neural networks that, when properly tasked and over the course of 15,000 tries, became adept at developing their own simple encryption technique that only they can share and understand. And the human researchers are officially baffled as to how this happened.
Neural nets are capable of all this because they are computer networks loosely modeled on the human brain. This is what’s fascinating about aggregate AI technologies like deep learning: the systems keep getting better, learning on their own, with some even capable of self-training.
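The “modeled on the brain” idea can be sketched with the oldest building block of neural nets, a single perceptron: weighted inputs, a threshold, and weights adjusted by error feedback. This is an illustrative toy of my own, nothing like Google Brain’s actual setup, but the feedback loop (guess, compare, adjust) is the same basic idea, stacked millions of times over in deep learning.

```python
# A single artificial "neuron" learning the logical AND function via
# the classic perceptron rule: weighted inputs, a threshold, and
# weight updates driven by the error signal.

def step(x):
    return 1 if x >= 0 else 0  # the neuron "fires" or it doesn't

def train_neuron(examples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # connection strengths
    b = 0.0         # firing threshold (as a bias)
    for _ in range(epochs):
        for (x1, x2), target in examples:
            out = step(w[0] * x1 + w[1] * x2 + b)
            err = target - out          # feedback signal
            w[0] += lr * err * x1       # strengthen/weaken connections
            w[1] += lr * err * x2
            b += lr * err
    return w, b

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_neuron(data)
for (x1, x2), target in data:
    assert step(w[0] * x1 + w[1] * x2 + b) == target  # learned AND
```

No one told the neuron what AND means; it found weights that work purely from the error feedback – which is why what trained networks end up encoding can surprise even their builders.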
We truly are at just the beginning of machines doing what we thought was reserved for humans alone. Complex subject indeed.
And one last note to think upon… machine learning and automation are going to continue, slowly but surely (because they already are), to take over jobs that humans do. Initially it’s been manufacturing automation; but as computers become intelligent and capable of learning, they will encroach on nearly everything, including creative, caretaking, legal, medical, and strategic jobs – things that most people would like to believe are “impossible” for robots to replace.
And they are clearly not. While the best-performing model is AI plus a human, far fewer humans will be needed across the board.
If the recent election is any indication of the disgruntlement that job losses and high unemployment cause, how much worse will it be when 80% of the adult workforce is unnecessary? What steps are industry, education, and government taking to identify how humans can stay relevant and to ensure the population is prepared? I’d submit: little to none.
While I don’t have the answers, I would like to be part of the conversation.
Invited to the Coburn Ventures’ annual gathering as a “thought leader” this week, for the fourth year in a row! Always a fun gathering of the best and most interesting thinkers (thought leaders plus investment professionals) from around the globe, pondering the future direction of various technologies and their impact on business and humanity.
What to wear…always the question.
So to the intertoobz I go. And it struck me: why am I internet shopping in exactly the same way I have been since, well, pretty much the beginning of ecommerce? Searching based on a few keywords, ending up on a store’s website with a bunch of thumbnails, modeled mostly on young gazelles, two of whom I could probably fit into one of my dresses… maybe there’s a filter, sometimes even with filtering categories I care about. Ordering 2, 3, 4 alternatives – which will be returned if not right.
Such a waste. Of time, of delivery gasoline… of raw materials. I am imagining the mountains of clothing, made in amounts forecast to be roughly correct – but then it’s 60 degrees in November in New York, and they all waste away in some warehouse, somewhere. Or in stores… some end up in outlet stores… some go back to the manufacturers, only to be sent to online clearance sites… or to some faraway country, dumped on a market that cares less about trends.
Sigh. Our poor planet.
Where’s my 3D-printed clothing, made to my (scanned) body size, to my specs? What if I am not a 20-year-old gazelle, and I want the skirt a few inches longer? Shorter?
Why has there been so little disintermediation in the way we shop and dress ourselves?
I ponder this as I push the “buy” button and pay an extra $20 for fast delivery, contemplating all the bells, widgets, gizmos and wheels that immediately start turning in response. And think back to this blog entry, which was based on a lot of thinking I did in 2006. 10 years!!
So stoked… I am going up to MIT Media Lab’s all-day workshop this Saturday to learn about programming in Augmented and Virtual Reality as part of their Reality, Virtually Hackathon. While I freely admit that a portion of the nitty-gritty programming will undoubtedly be over my head, I’m going to get a crash course and overview of the essential process from the companies who are the big players in the space.
I’m well chuffed, as they say in the UK.
Companies presenting include Unity, maker of the engine used to build both Augmented Reality and Virtual Reality experiences; Microsoft, who is involved because of the HoloLens; Google, whose Tango technology helps devices understand where they are spatially in the world; and others.
Augmented Reality is projected to be a $120 billion market by 2020 in the US alone; I’m looking at starting a company there next. Fascinating technology with a ton of potential applications, far beyond mere gaming. Its advantage is that it overlays the digital onto the real world, versus Virtual Reality’s requirement that you be completely immersed, so it can be used throughout the day and in many natural environments – you don’t have to choose when to use it.
AR is less sexy than virtual reality but has more potential for growth, IMO, because 1) you don’t need a lot of hardware or gear for it; 2) you don’t need a dedicated space for it; 3) people aren’t getting sick from using it (although I have no doubt that will be remedied); and 4) you don’t need to immerse yourself in it completely, shutting out the world. Although I do seem to recall people said much the same about television when it launched (it would “never take off” since people have to sit and watch it, not doing anything else).
So much for predictions and futurists.
I’m going up to Boston to take part in MIT Media Lab’s Reality, Virtually hackathon this weekend (as a visitor, not a participant) – we’ll see what that’s like; I’m hoping to meet people, network, and get a real sense of what’s happening out there.
I love technology because it’s changing how we interact with each other. How we live. How we talk. Relate. And this all has huge ramifications for strategic business growth. With a vision for where it’s going, and an eye on innovation, this is an exploration of a very complex – and fascinating – subject.