AI–Building a Mind is Hard

This post is inspired by the book Rebooting AI: Building Artificial Intelligence We Can Trust, written by Gary Marcus and Ernest Davis, New York, 2019. Gary Marcus (see post Kluge) is a well-known author and artificial intelligence entrepreneur, and Ernest Davis is a professor of computer science at New York University. To oversimplify, the authors emphasize that the successes of AI are narrow and tend to be greedy, opaque, and brittle. They provide a history of AI seemingly on the verge of being ready for prime time, decade after decade after decade. Self-driving cars are almost there, except that they are not. Human frailties in driving result in a death about every 100,000,000 miles driven, but Marcus and Davis indicate that self-driving cars require human intervention about every 10,000 miles, which works out to 10,000 interventions in 100,000,000 miles. It may be a very long time before we are ready to sign off on self-driving cars, because the progress made thus far has been the easy part.

A good part of the AI hype is just trying to raise cash for your startup or raise the stock price for your tech giant. Or whatever it is Elon Musk does. Tesla has fairly widely deployed its brand of self-driving artificial intelligence with less than stellar results, while Musk has compared AI to demons and nukes. I guess he is just covering his bases. So far, AI can be helpful when finding associations most of the time is good enough. Identifying photos of your relatives is an example; AI does not have to work perfectly on that to be helpful. Translating languages is another example, where anything beats what we had before. Advertising and search also do not require near perfection. However, do not start relying on these things without checking their work.

You may have heard that AI dominates at the ancient game of Go, and you may know that Watson, the AI, won on Jeopardy. Go is a board game with a huge, but finite, number of moves. There are no outside variables. It is all on one 19 by 19 grid, and the rules never change. With respect to Jeopardy, Marcus and Davis find that Wikipedia titles are the answers to about 95% of the questions.

Meanwhile, computers cannot read, and robots require the most controlled and limited environments to work. If you wonder why those digitized medical records are not curing everything, keep in mind that doctors have to fill in the check boxes for the information to be useful. The notes that your doctor writes are not understandable to the system except by associations. This is all an improvement, but it is no revolution. That Roomba vacuum cleaner is a robot just like a self-driving car. It mostly works, but if your dog poops on the carpet, you will wish that you did not have a Roomba.

Much of AI's progress has come through what is called "deep learning." As described by Marcus and Davis, deep learning is based on hierarchical pattern recognition and learning. The sketch shows an example of hierarchical pattern recognition designed to take an image and ultimately identify the category it belongs to. Systems like these are called neural networks since they include nodes and connection weights. Learning involves changing the connection weights, largely through trial and error, until you get the right answer. Deep learning is greedy because it can take millions of repetitions with huge data sets. The authors indicate that Google Translate benefited from collecting things like the proceedings of the Canadian parliament, which are published in both English and French. In that way Google Translate can learn the correspondence between English words and phrases and their French counterparts. Unfortunately, the more the real-world problems differ from the data used to train the system, the less reliable the system is likely to be.
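To make the nodes-and-weights idea concrete, here is a minimal sketch in Python, mine rather than the authors', of a tiny neural network learning the XOR function by repeatedly nudging its connection weights. The layer sizes, learning rate, and number of repetitions are arbitrary choices for illustration.

```python
import math
import random

# A toy sketch (not from the book): a tiny network whose nodes pass weighted
# sums through a squashing function, and whose connection weights are nudged,
# over many repetitions, until the network produces the right answers.

random.seed(1)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

HIDDEN = 3  # a few hidden nodes between the two inputs and the single output
# Each hidden node has two input weights plus a bias; the output node has one
# weight per hidden node plus a bias. All start as small random guesses.
w_hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(HIDDEN)]
w_out = [random.uniform(-1, 1) for _ in range(HIDDEN + 1)]

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR truth table

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_hidden]
    y = sigmoid(sum(w_out[i] * h[i] for i in range(HIDDEN)) + w_out[-1])
    return h, y

LR = 0.5
for _ in range(20000):  # "greedy": thousands of passes over the same four examples
    for x, target in data:
        h, y = forward(x)
        # How wrong was the output, and how should that blame flow backward?
        d_out = (y - target) * y * (1 - y)
        d_hid = [d_out * w_out[i] * h[i] * (1 - h[i]) for i in range(HIDDEN)]
        # Nudge every connection weight slightly in the error-reducing direction.
        for i in range(HIDDEN):
            w_out[i] -= LR * d_out * h[i]
            w_hidden[i][0] -= LR * d_hid[i] * x[0]
            w_hidden[i][1] -= LR * d_hid[i] * x[1]
            w_hidden[i][2] -= LR * d_hid[i]
        w_out[-1] -= LR * d_out

for x, target in data:
    print(x, "target", target, "network says", round(forward(x)[1], 2))
```

With enough repetitions the answers typically drift toward the 0/1 targets; cut the number of passes down and they stay muddled, which is the "greedy" complaint in miniature.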

Deep learning is opaque. It is a black box that does not come up with human explanations. This makes it difficult to guess how applicable the findings are without further experimentation. Deep learning is also brittle. Even with jobs like image identification, which are its forte, it can come to stupid conclusions. For example, Marcus and Davis point to a turtle being identified as a rifle.

So it seems that AI's current strengths are in what Ken Hammond called correspondence (getting the right answer) and not in coherence (explaining your answer). Hammond saw common sense as the middle of his cognitive continuum, with correspondence on one end and coherence on the other. Marcus and Davis see this as a major shortcoming, and one area where AI should try to follow the human example a bit more. AI needs to know where it is, what its risks and opportunities are, and what it should do and how it should do it. This will require knowledge frameworks such as Kant's time, space, and causality. It will need rules and laws and safety margins an order of magnitude greater than it seems they need to be, and this will not be easy. The issues with the 737 MAX were basically those of a robot with unreliable inputs trying to do more than it could handle (see post Technology and the Ecological Hybrid).

I should note that deep learning, with its nodes and connection weights, is much like Brunswik's lens model and parallel constraint satisfaction theory, a couple of my favorites for explaining human automatic or intuitive decision making. Glöckner and Betsch (see post Deliberate Construction in Parallel Constraint Satisfaction) refer to the network installed spontaneously when encountering a decision situation as the primary network. This corresponds with deep learning. Deliberate processes are activated if the consistency of the resulting mental representation is below a threshold θ. I think this is interesting because it is the intuitive/automatic system requiring coherence of itself. That is what AI needs to do, and it will not be easy.
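To illustrate the threshold idea, here is a small sketch of my own, not Glöckner and Betsch's actual formulation, with made-up node names, link weights, and threshold value: activation spreads through a constraint-satisfaction network until it settles, a consistency score is computed, and deliberate processing is engaged only if that score falls below θ.

```python
# A hypothetical sketch, not Glöckner and Betsch's actual model: nodes linked by
# supporting (positive) or contradicting (negative) weights settle into a
# pattern, and a consistency score of that pattern is compared to a threshold
# theta. All names, weights, and the threshold are made up for illustration.

weights = {
    ("cue_A", "option_1"): 0.8,      # cue A supports option 1
    ("cue_B", "option_1"): 0.6,      # cue B supports option 1
    ("cue_A", "option_2"): -0.4,     # cue A speaks against option 2
    ("option_1", "option_2"): -1.0,  # the two options inhibit each other
}
nodes = ["cue_A", "cue_B", "option_1", "option_2"]
activation = {"cue_A": 1.0, "cue_B": 1.0, "option_1": 0.0, "option_2": 0.0}
clamped = {"cue_A", "cue_B"}  # observed cues keep their activation fixed

def neighbors(n):
    """Yield (other_node, link_weight) for every link touching node n."""
    for (a, b), w in weights.items():
        if a == n:
            yield b, w
        elif b == n:
            yield a, w

def settle(steps=50, rate=0.2):
    """Spread activation around the network until the pattern stabilizes."""
    for _ in range(steps):
        for n in nodes:
            if n in clamped:
                continue
            net = sum(w * activation[m] for m, w in neighbors(n))
            # Move part of the way toward the net input, kept within [-1, 1].
            activation[n] = max(-1.0, min(1.0, activation[n] + rate * (net - activation[n])))

def consistency():
    """Higher when linked nodes agree with the sign of their link weight."""
    return sum(w * activation[a] * activation[b] for (a, b), w in weights.items())

THETA = 0.5  # made-up consistency threshold

settle()
score = consistency()
if score < THETA:
    print(f"consistency {score:.2f} is below theta: engage deliberate processing")
else:
    print(f"consistency {score:.2f} meets theta: go with the intuitive answer")
```

With these made-up numbers the cues and the winning option end up mutually reinforcing, so consistency clears the threshold and the intuitive answer stands; weaken the supporting links and the same check would call in deliberation.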
