An AI roundtable discussion is a staple of the tech journalism circus — usually framed with a preamble about dystopian threats to human existence from the inexorable rise of ‘superintelligent machines’. Just add a movie still from The Terminator.
What typically results from such a set-up is a tangled back and forth of viewpoints and anecdotes, where a coherent definition of AI fails to be an emergent property of the assembled learned minds. Nor is there clear consensus about what AI might mean for the future of humanity. After all, how can even the most well-intentioned groupthink predict the outcome of an unknown unknown?
None of this is surprising, given we humans don’t even know what human intelligence is. Thinking ourselves inside the metallic shell of ‘machine consciousness’ — whatever that might mean — is about as fruitful as trying to imagine what our thoughts might be if our own intelligence were embodied inside the flesh of a pear, rather than the fleshy forms we do inhabit. Or if our consciousness existed fleetingly in liquid paint during the moment of animation by an artist’s intention. Philosophers can philosophize about the implications of AI, sure (and of course they do). But only an idiot would claim to know.
The panel discussion I attended this week at London’s hyper-trendy startup co-working hub Second Home trod plenty of this familiar ground. So I won’t rehash the usual arguments. Rather, and as some might argue acting more like a machine — in the sense of an algorithm trained to surface novelty from a mixed data dump — I’ve compiled a list (below) of some of the more interesting points that did emerge as panelists were asked to consider whether AI is “a force for good” (or not).
I’ve also listed some promising avenues for (narrow) AI mentioned by participants: areas where they see potential for learning algorithms to solve problems humans might otherwise find tricky to crack, and where those use-cases can be broadly considered socially beneficial — in an effort to steer the AI narrative away from bloodthirsty robots.
The last list is a summary of more grounded perceived threats/risks, i.e. those that don’t focus on the stereotypical doomsday scenario of future ‘superintelligent machines’ judging humans a waste of planetary space, but instead on risks associated with the kind of narrow but proliferating — in terms of applications and usage — ‘AI’ we already have.
One more point before switching to bullets and soundbites: the most concise description of (narrow) AI that emerged during the hour-long discussion came from Tractable founder Alexandre Dalyac, who summed it up thus: “Algorithms compared to humans can usually tend to solve scale, speed or accuracy issues.”
So there you have it: AI, it’s all about scale, speed and accuracy. Not turning humans into liquid soap. But if you do want to concern yourself with where machine intelligence is headed, then thinking about how algorithmic scale, speed and accuracy — applied over more and more aspects of human lives — will impact and shape the societies we live in is certainly a question worth pondering.
Panelists
- Calum Chace, author of ‘Surviving AI’
- Dan Crow, CTO, Songkick
- Alexandre Dalyac, founder, Tractable
- Dr Yasemin J Erden, Lecturer/Programme Director Philosophy, St Mary’s University
- Martina King, CEO, Featurespace
- Ben Medlock, founder, SwiftKey
- Martin Mignot, Principal, Index Ventures
- Jun Wang, Reader, Computer Science, UCL & Co-founder, CTO, MediaGamma
Interesting discussion points raised during the roundtable:
- Should AI research be open source by default? How can we be expected to control and regulate the social impact of increasingly clever computing when the largest entities involved in AI fields like deep learning are commercial companies such as Google that do not divulge their proprietary algorithms?
A movement to open source machine learning-related research could also be a way to lessen public fears about the future impact of AI technologies, added Jun.
- Will it be the case that the more generalist our machines become, the less capable and/or reliable for a particular task — and arguably, therefore, the less safe overall? Is that perhaps the trade-off when you try to make machines think outside a (narrow) box?
“I can imagine that the kind of flexibility of the human brain, the plasticity to respond to so many different scenarios requires a reduction in specific abilities to do particular tasks. I think that’s going to be one of the interesting things that will emerge as we start to develop AGI [artificial general intelligence] — whether actually it becomes useful for a very different set of reasons to narrow AI.”
“I don’t think artificial intelligence in itself is what I would be concerned about, it’s more artificial stupidity. It’s the stupidity that comes with either a narrow focus, or a misunderstanding of the broader issues,” added Erden. “The difficulty in trying to establish all the little details that make up the context in which individual specific tasks happen.
“Once you try to ask individual programs to do very big things, and they need therefore to take into account lots of issues, then it becomes much more difficult.”
- Should core questions of safety or wider ethical worries about machine-powered decision-making usurping human judgment be society’s biggest concern as learning algorithms proliferate? Can you even separate safety from ethics at that fuzzy juncture?
“A good example for the Web would be people believing that the laws of California were appropriate to everywhere around the world. And they aren’t, and they weren’t, and actually it took those Web companies a huge amount of time — and it was peer group pressure, lobby groups and so on — in order to get those organizations to behave actually appropriately for the laws of those individual countries they were operating in.”
“I’m a bit puzzled that people talk about AI ethics,” added Chace. “Machines may well be moral beings at some point but at the moment it’s not about ethics, it’s about safety. It’s about making sure that as AIs get more and more powerful they are safe for humans. They don’t care about us, they don’t care about anything. They don’t know they exist. But they can do us damage, or they can provide benefits and we need to think about how to make them safe.”
- Will society benefit from the increased efficiency of learning algorithms or will wealth be increasingly concentrated in the hands of (increasingly) few individuals?
“For example something that we’re working on is automating a task in the visual assessment of insurance claims. And the benefit of that would be to lower insurance premiums for car insurance… so this would be a case where the people who are usually employed to do this would find themselves out of work, so that might involve maybe 400 people in this country. But as a result you have 50 million people that benefit.”
- Should something akin to the ‘philosophy of AI’ be taught in schools? Given we’re encouraging kids to learn coding, what about contextualizing that knowledge by also teaching them to think about the social impacts of increasingly clever and powerful decision-making machines?
“I think that would help a lot with the discussion because today coders don’t really understand the limitations and the potential of technology. What does it mean to be a machine that can learn by itself and make decisions? It’s so abstract as a concept that I think for people who are not working in the field it’s either too opaque to even consider, or really scary.”
- Is the umbrella term ‘artificial intelligence’ actually an impediment to public awareness and understanding of myriad developments and (potential) benefits associated with algorithms that can adapt based on data input?
“When you describe it like that to people I don’t think they’re either scared by it or fail to understand it. But if you describe this under the umbrella term of AI you promise too much, you disappoint a lot and you also confuse people… What’s wrong with saying ‘clever computing’? What’s wrong with saying ‘clever programming’? What’s wrong with saying ‘computational intelligence’?”
- Is IBM’s ‘cognitive computing’ tech, Watson — purportedly branching out from playing Jeopardy to applying its algorithmic chops to very different fields, such as predictive medicine — more a case of clever marketing than an example of an increasingly broad AI?
“We’re looking at automating the assessment of damage on cars, and there’s a paper by IBM Watson in 2012 which, to be honest, uses very, very old school AI — and AI that I can say for sure has nothing to do with winning at Jeopardy,” he added.
Promising applications for learning algorithms cited during the roundtable:
- Helping websites weed out algorithmically generated ad clicks (the irony!)
- Analyzing gamblers’ patterns of play to identify problematic tipping points
- Monitoring skin lesions more effectively by using change point detection
- Creating social AIs that can interact with autistic kids to reduce feelings of isolation
- Tackling the complexity of language translation by using statistical approaches to improve machine translation
- Putting sensors on surgical tools to model (and replicate) the perfect operation
- Using data from motion sensors to predict when a frail elderly person might be at risk of falling by analyzing behavioral patterns
Grounded risks and concerns flagged during the roundtable:
- How to regulate and control increasingly powerful and sophisticated data processing across borders where different laws might apply?
- How to protect user privacy from predictive algorithms and ensure informed consent of data processing?
“Moving towards consumers thinking about data a little bit like a currency in the same way that they use and own their own money, and that they’re able to make decisions about where they share that data… Moving the processing, manipulation and storage of data from the murky depths, to something that people are at least aware of and can make decisions about intentionally.”
- How to respond to the accumulation of massive amounts of data — and the predictive insights that data can yield — in the hands of an increasingly powerful handful of technology companies?
“And that this is an area where government needs to play an effective role. I don’t think we know exactly what that looks like yet — I don’t think we’ve finished that discussion. But at least a discussion is happening now and I think that’s really important.”
- How to avoid algorithmic efficiencies destroying jobs and concentrating more and more wealth in the hands of fewer and fewer individuals?
Bottom line: if increasing algorithmic efficiency is destroying more jobs than it’s creating, then massive social restructuring is inevitable. So human brains asking questions about who benefits from such accelerated change, and what kind of society people want to live in, is surely just prudent due diligence — not to mention the very definition of (biological) intelligence.