Should We Trust Machine Learning Systems?

Machine learning systems are at the heart of most of the major AI advances of the last decade. Deep learning, the most advanced type of machine learning, has produced the computer vision systems that power self-driving cars and facial recognition, as well as machine translation, speech recognition, and AI-based medical diagnosis systems.

But can we trust them?  The goal for self-driving cars is to drive more safely than human drivers.  I want assurance that, if a human driver would stop for a baby stroller in the road, the car will stop for that same stroller.

Unfortunately, we'll never be able to completely trust deep learning systems to avoid that stroller.  Similarly, we'll never be able to trust cancer diagnosis systems to detect every cancer that a human doctor would detect -- even if the deep learning cancer detection system does better overall than a human doctor.

Let me explain why...

A deep learning system identifies patterns in its training data that enable it to do a particular task.  For example, if I want to train a deep learning system to distinguish pictures of cats from pictures of dogs, I can give it a large set of training examples that include pictures of cats and dogs.  Each picture would be labeled as a cat or dog.  The deep learning system will identify the patterns in the data that best distinguish the cats in the training set from the dogs in the training set.
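
To make that concrete, here is a minimal sketch of the kind of training loop involved, written in PyTorch. The folder layout (data/train/cat/..., data/train/dog/...), the pretrained network, and the hyperparameters are illustrative assumptions, not a specific recipe anyone's production system uses.

```python
# A minimal sketch of the cat/dog training described above (PyTorch).
# Assumes labeled images live in data/train/cat/ and data/train/dog/.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Each image is labeled by its folder name: "cat" or "dog".
train_set = datasets.ImageFolder("data/train", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Fine-tune a small pretrained network for the two classes.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        # Each step nudges the weights toward whatever patterns best
        # separate the cats from the dogs *in this training set*.
        optimizer.step()
```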

The fine print here is "in the training set."  If all the dog pictures in the training set were taken outdoors and all the cat pictures were taken inside homes, the deep learning system will likely key in on yard and home features.  Then, if I show the system a picture of a dog inside a home, it will probably label the picture a cat. One simple way to probe for that kind of shortcut is shown in the sketch below.
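
Continuing the sketch above, you can evaluate the trained model on a second labeled set in which the settings are deliberately reversed (dogs indoors, cats outdoors). The data/swapped_test folder is hypothetical; the point is only the comparison against in-distribution accuracy.

```python
# Probe for the background shortcut: test on images whose settings
# are reversed relative to training. Reuses `model` and `transform`
# from the training sketch above.
import torch
from torch.utils.data import DataLoader
from torchvision import datasets

swapped = datasets.ImageFolder("data/swapped_test", transform=transform)

model.eval()
correct = total = 0
with torch.no_grad():
    for images, labels in DataLoader(swapped, batch_size=32):
        preds = model(images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.size(0)

# A large drop from in-distribution accuracy suggests the model
# keyed in on backgrounds rather than the animals themselves.
print(f"Accuracy on swapped backgrounds: {correct / total:.1%}")
```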

Researchers have demonstrated this effect directly. In one well-known experiment, before researchers pasted a guitar onto a photo of a monkey, both computers and human subjects correctly labeled the monkey in the photo. Adding the guitar did not confuse people, but it made the computer think it was seeing a picture of a person instead of a monkey. The computer had not learned the visual characteristics that people use to recognize monkeys and people. Instead, it had learned that guitars appear only in pictures of people.

Researchers have also found that deep learning systems make similar mistakes in machine translation, music content analysis, question answering, sentiment analysis, topic identification, and many other tasks.

The problem for deep learning systems is that the data patterns in the training set are almost never exactly the same as the data patterns found in the real world.  The result is that deep learning systems make surprising mistakes that their human counterparts would not make.

What does this mean for self-driving cars?  The machine learning system in a self-driving car will only detect baby strollers in the road to the extent that the road, the strollers, and the other conditions are similar to examples in its training set.  When an Uber self-driving test car killed a pedestrian in 2018, the car's deep learning software first classified the pedestrian as an unknown object, then as a vehicle, and finally as a bicycle.  I don't know about you, but I don't want self-driving cars on the road that might fail to recognize me as a pedestrian or might fail to stop for a baby in a stroller.

Deep learning technology offers great benefits to society.  But we need to be smart about where and how to put it to work.


STEVE SHWARTZ is a successful serial software entrepreneur and investor. He uses his unique perspective as an early AI researcher and statistician to explain how AI works in simple terms, why people shouldn't worry about intelligent robots taking over the world, and the steps we need to take as a society to minimize the negative impacts of AI and maximize its positive influence.

He received his PhD in Cognitive Science from Johns Hopkins University, where he began his AI research, and also taught statistics at Towson State University. After he received his PhD in 1979, AI luminary Roger Schank invited him to join the Yale University faculty as a postdoctoral researcher in Computer Science. Learn more about Steve Shwartz at www.AIPerspectives.com and connect with him on Twitter, Facebook, and LinkedIn. His new book, Evil Robots, Killer Computers, and Other Myths: The Truth About AI and the Future of Humanity, will be available on Amazon and from other major booksellers in February 2021.
