Press "Enter" to skip to content

MEPs fire opening salvos in Artificial Intelligence war …

In 2017, members of the European Parliament (MEPs) began calling for legal and ethical frameworks to address the concerns raised by the emerging reality of artificial intelligence (AI) and robotics.

AI has arrived and we’ve been using it for years … Siri, Google, Facebook and any number of everyday applications rely on Machine Learning (ML) and, more recently, Deep Neural Networks (DNNs) have brought us Deep Machine Learning … to limit the number of terms and acronyms, for the purposes of this post we’ll just call it all ML – Machine Learning.

With ML, machines effectively program themselves: instead of being handed explicit rules, they infer behavior from examples. For instance, an autonomous car ‘watches’ a human drive and then learns for itself how to drive.
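
For the technically curious, here’s roughly what that ‘watching’ looks like in code. The usual name for it is behavioral cloning: fit a model to logged pairs of sensor readings and the steering commands a human produced. This is only a toy sketch – the sensor values, the network size and the numbers below are all made up for illustration:

```python
# Toy sketch of "learning to drive by watching": behavioral cloning.
# A model is fit to logged (sensor reading -> steering command) pairs
# recorded while a human drives. All data here is synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Pretend sensor log: lane offset and road curvature for 5,000 frames.
sensors = rng.normal(size=(5000, 2))
# Pretend the human steers to correct the offset and follow the curvature.
human_steering = (-0.8 * sensors[:, 0] + 0.5 * sensors[:, 1]
                  + rng.normal(scale=0.05, size=5000))

# The "self-programming" step: fit a small network to imitate the human.
policy = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
policy.fit(sensors, human_steering)

# The learned policy now maps a new sensor reading to a steering command,
# even though nobody ever wrote an explicit driving rule.
print(policy.predict([[0.3, -0.1]]))
```

The point is that the driving rule is never written down by a programmer – it emerges from the data.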

Everyone from Tesla to Google to the US Military (and, I would safely bet, other military organizations around the globe) is rolling out autonomous AI systems and machines. And these are not ‘proof of concept’ one-off toys and gadgets. These systems are being broadly implemented across the landscape of human activity.

A 2016 MIT/Google survey of 375 qualified respondents from a broad range of industries found that:

  • 95% have implemented or are in the process of implementing Analytics Processing Business Intelligence (APBI – a business-world version of “Big Data”)
  • 60% have already implemented an ML strategy
  • 18% plan to implement an ML strategy within 12 to 24 months
  • Only 5% have no plan to implement ML

The problem with Deep Machine Learning is that even its creators cannot say how their creations arrive at their decisions.

This is particularly troubling when we are designing autonomous battle tanks – when asking ‘Why did the tank destroy that village?’ one does not want to hear ‘We don’t know.’
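
To make that ‘we don’t know’ concrete, here is a toy sketch (tiny network, synthetic data) of what you actually have in hand after training: the model answers confidently, but the only record of its ‘reasoning’ is matrices of learned weights.

```python
# Minimal illustration of the black-box problem: after training, the only
# artifact encoding the model's "reasoning" is a pile of numeric weights.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 10))           # toy inputs
y = (X[:, 0] * X[:, 3] > 0).astype(int)   # hidden rule the network must discover

net = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=1)
net.fit(X, y)

# The trained model answers confidently ...
print("prediction:", net.predict(X[:1]))

# ... but asking "why" only gets you these arrays of floating-point numbers.
for i, weights in enumerate(net.coefs_):
    print(f"layer {i} weights, shape {weights.shape}")
```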

Society is built on a contract of expected social behavior. AI needs to respect and fit into our social norms. The problem is that social norms are incredibly complex, contextual and constantly evolving.

Humans spend the first twenty-plus years of life using ‘Deep Biological Learning’ (not sure if that’s a real term … but if not I’ll take credit) to learn how to behave and interact in society … they get positive and negative feedback along the way – reward and punishment – that reinforces this learning.

Say I met you on the street, ran up, jumped into the air and slammed into you with my hip – crazy behavior, right? Not if we are at a heavy metal concert and in the mosh pit. Of course, someone reading this paragraph may have no clue what a mosh pit is depending on their age demographic, and heavy metal may not even be legal depending on where they live.

Yet we are building machines that are capable of causing great harm, either intentionally or unintentionally. Somehow we anticipate that our AI creations will be able to respond appropriately to a developing situation in either Kandahar or Daytona Beach with a positive outcome. In Kandahar, if someone runs up to you wearing a leather vest in 100-degree heat you may duck for cover, whereas at Daytona Beach you may sell that person an “I Love Bike Week” bumper sticker.

But make no mistake: this is not a future issue that can be dealt with down the road. See “Deep Patient”, “Deep Driving” and “Deep Mind” for some current examples. I won’t recap all that’s available online, but here is one example: last year a Deep Learning machine examined 700,000 medical records and taught itself how to diagnose liver disease. Not only did it do very well, it was “way better” than traditional diagnostic tools.
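
The real systems use far richer data and much deeper architectures than anything that fits in a blog post, but the general recipe is roughly the sketch below (synthetic ‘records’, made-up features): encode each patient record as a feature vector, fit a classifier against known diagnoses, and score it on records it has never seen.

```python
# Toy sketch of record-based diagnosis (NOT the actual "Deep Patient" system):
# encode each patient record as numeric features, fit a classifier against
# known diagnoses, then evaluate it on records it has never seen.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)

# Synthetic "records": 20 lab values / history flags per patient.
records = rng.normal(size=(10000, 20))
# Synthetic diagnosis label loosely tied to a few of those features.
risk = records[:, 2] + 0.7 * records[:, 5] - 0.5 * records[:, 11]
diagnosis = (risk + rng.normal(scale=0.5, size=10000) > 0.8).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    records, diagnosis, test_size=0.2, random_state=2)

model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=2)
model.fit(X_train, y_train)

# Held-out performance is how such systems get compared to traditional tools.
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```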

Arguments are being made that being able to interrogate an AI system as to the ‘how and why’ of a particular decision is a fundamental right.

Beginning in 2018, the EU may require companies to be able to explain how a decision was reached by an automated system. Obviously that makes sense in a world where credit, employment, medical and even military decisions are based on AI-enhanced autonomous systems.
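
What might such an explanation even look like? One family of techniques audits the black box from the outside. Permutation importance, for example, shuffles one input at a time and measures how much the model’s performance drops. A rough sketch – the feature names (‘income’, ‘debt’ and so on) are invented for illustration:

```python
# One candidate form of "explanation": probe the black box from the outside.
# Permutation importance shuffles one input feature at a time and measures
# how much the model's score drops -- a rough ranking of what the decision
# depended on. Feature names here are purely illustrative.
import numpy as np
from sklearn.inspection import permutation_importance
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(2000, 5))                 # toy "credit application" features
y = (X[:, 0] - 0.5 * X[:, 2] > 0).astype(int)  # toy approval rule

model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=3)
model.fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=3)
for name, score in zip(["income", "age", "debt", "tenure", "postcode"],
                       result.importances_mean):
    print(f"{name:>8}: {score:.3f}")
```

Note that this is only an external audit of what the decision depended on, not a genuine account of the model’s internal reasoning.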

The problem is we don’t have a way to do that … for a fascinating sidebar, check out ‘Deep Dream’ … there is no way right now to analyze or determine how an ML algorithm developed its conclusions … intelligence, human or machine, does not easily lend itself to being broken down into exact component parts for analysis.

Later this week I’ll update this post with a link to resources where the reader can get engaged and involved in this all too important issue …
