AI: a pig in a poke?
Artificial Intelligence (AI) is everywhere. Or so we are told. The only problem is that we are not really sure what it looks like, or what it does.
The term “AI” has become a cultural fetish, almost the secular equivalent of invoking a supreme being. It is used indiscriminately, often to exalt quite ordinary systems as being something more. Some of these may just be throwing computing power at a problem and letting a machine do what it does best: execute algorithms really fast. The humble pocket calculator would be in with a shout here if it were invented tomorrow.
AI is now very much part of the corporate lingua franca. Bloomberg News has tracked references to “artificial intelligence” in earnings call transcripts and noticed a substantial increase in the last two years. AI is the new corporate golden child.
The mainstream press and social influencers loosely talk about “deep learning” mimicking the human brain. They describe AI-powered bots stealing our jobs and AI being a potential threat to our survival as a species.
This is a little tenuous at best. While some AI systems are impressive, they perform very specific tasks; a general AI capable of outwitting its human creators remains a distant and uncertain prospect.
As an antidote to these often sensationalist, sometimes misunderstood and always enthusiastic references to AI, we need to critically evaluate the evidence when we are told that a service we are using is based on AI.
This is really important for a few reasons. Firstly, we are investing billions in AI products, machine learning algorithms, and research. We need to understand what we are investing in and what we will get in return.
Secondly, AI is not limited to a single industry, and it is not just for computer scientists. It is not just a technical solution; it will have an impact across all sectors, much as electricity did a century ago. We all need to understand what it is and what it can do for us.
Thirdly, for people building or using AI, we need to know what the finished article looks like. We need to be able to tell the real deal from an imitation; otherwise the user experience will suffer, AI will get a bad name and delivery will fall short of the hype.
Finally, there is a debate raging between heavyweights like Elon Musk, Bill Gates, Mark Zuckerberg, Stephen Hawking and others about how dangerous AI could become. To fully understand and participate in this debate, we need to understand what AI does and, if we are to apply any brakes, which parts of it those brakes need to act on. What all these heavyweights do agree on is that AI affects us all and that it is a conversation we all need to be able to engage in, not one to leave to an elite few. It is not a stretch to say we need to democratise this debate and our understanding of AI and its capabilities.
The upshot is that we really need to challenge ourselves on what we call, or understand to be, AI. To do this we need to think about how we would describe AI: what key characteristics should it have?
First up, we should think about the TECHNOLOGY (or, more accurately, not think about the technology). It should not really matter how AI is built. Technology always changes. This is potentially bad news for all the neural net gurus out there. Even if we are using a convolutional neural network (CNN), or some other cutting-edge model for supervised machine learning, that in and of itself may not be AI. While today neural nets have top billing, that could change with an exciting new discovery in quantum computing. Will we stop calling it AI then? Probably not. The technology, complexity aside, is a sideshow, a means to an end.
Next up, we need to think about how PROACTIVE the tool is. At one end of the scale, reactive systems like self-dimming lamps turn up and down based on the light in their environment, but they won’t be taking over the world anytime soon (no matter how bright they think they are). At the other end of the scale, proactive AI tools take the initiative. For example, some AI systems used in customer service can predict which callers need escalation and are about to get shirty with the customer service agent. A proactive tool can figure out in advance what to do rather than wait until the horse has bolted. That is real AI.
Now consider: does the tool actually LEARN? The key question here is whether it can evaluate if its actions led to the right result and, if not, adjust them. Any tool that simply does the same thing over and over again and never learns from its actions is never going to be intelligent. The quote often attributed to Einstein, “The definition of insanity is doing the same thing over and over again, but expecting different results”, comes to mind, and we could even coin a new term: Artificial Insanity. Try getting a VC to invest in that.
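To make that distinction concrete, here is a minimal sketch in Python. The thermostat scenario, both controllers and the toy room physics are all invented purely for illustration; no real product works this way. The static controller repeats the same action regardless of outcome, while the learning controller evaluates how far its last action left it from the goal and adjusts.

```python
# A toy contrast between "Artificial Insanity" and learning.
# The thermostat scenario and all the numbers are invented for illustration.

TARGET = 21.0  # desired room temperature in degrees C

def static_controller(temp, heater_power):
    """Repeats the same action regardless of outcome: Artificial Insanity."""
    return 5.0

def learning_controller(temp, heater_power):
    """Checks how far the last action left us from the goal and adjusts."""
    error = TARGET - temp
    return heater_power + 0.5 * error  # nudge the power toward what works

def simulate(controller, steps=300):
    """Run a crude room model and return the final temperature."""
    temp, power = 15.0, 0.0
    for _ in range(steps):
        power = controller(temp, power)
        temp += 0.1 * power - 0.05 * (temp - 10.0)  # heating minus heat loss
    return temp

print("static:  ", round(simulate(static_controller), 1))    # settles short of 21
print("learning:", round(simulate(learning_controller), 1))  # converges on 21
```

The point is not the control loop itself (this is just a simple feedback rule) but the contrast: the second controller closes the gap to its goal precisely because it responds to the outcome of its own actions, while the first keeps doing the same thing and keeps missing.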
Now we need to think about how AUTONOMOUS the tool is. AI should figure out what it wants and create its own goals and objectives, goals that were never defined when it was coded by some not-so-lateral-thinking techies. It needs to go beyond being a deterministic system that, for the most part, simply executes algorithms.
To put it another way: we call cars that drive themselves autonomous, and they are lauded as the next big thing in transport. While these cars can do most of the decision-making along the journey from A to B, they cannot decide that instead of taking us to point B we would prefer to go to point C. Until that starts happening they are just following orders. Is that AI? Perhaps, but only just.
This brings us to CREATIVITY. As humans we are able to make interesting connections between concepts. In many cases there are no obvious links from idea A to idea B, but put them together and you have concepts like Stephen King's killer clown, Pennywise. Creativity is important because it is essential to coming up with novel solutions to problems. AI needs to think outside the code box and combine unrelated ideas into tangible solutions.
Different systems will have some or all of these characteristics of AI; they will be strong on some and weaker on others. But as long as we have a good concept of what we would expect to see when we are shown AI, we will be able to tell the Prosecco from the Champagne when it comes to AI-powered systems and solutions.
We should ask: is it just another form of data mining (even if it does use neural nets), or is it more than that? Is it proactive, or just some super-analytical tool that queries retrospective data? Does it innately recognise success and failure and adjust itself without instruction; can it learn? Is it merely very good at following orders, or is it autonomous in the sense that it can determine what to do and create its own strategy? Can it show creativity and link ideas it has never encountered before in novel ways? Know the answers to some of these and you will know whether you are looking at AI or not.
If you don’t know the answers to these questions then you could be the victim of a 21st-century confidence trick, where you invest in or use an AI service and end up with the proverbial pig in a poke. You could be putting a few million into a pocket calculator, or changing your business process based on a technology that is limited in its ability to learn, adapt and grow.
At 110%, these are some of the considerations we make when thinking about how we build and use AI. It is not about the pursuit of AI just to have AI, or adding more data mining and calling that AI.
It is about implementing a technology that delivers intelligence, regardless of how it’s done. It’s about technology that thinks in advance and can predict rather than react.
It’s about the tool’s ability to learn from success and failure, to autonomously decide what to do next, and to be creative in employing solutions.
That allows us to think about how technology can support our Personal Accelerators and, using some of these key characteristics of AI, deliver a champagne experience to the end user.