
Artificial intelligence in self-driving cars can’t reason like humans

Dr. Shahar Madjar

In April 2022, a Tesla vehicle crashed into a $3.5 million jet. At the time of the accident, the vehicle was using the “Smart Summon” feature of Tesla’s Autopilot, a highly sophisticated self-driving system that relies heavily on artificial neural networks. Why didn’t the Tesla stop before it hit the jet? The answer has to do with the differences between biological neural networks, such as the human brain, and artificial neural networks, such as the one at the core of self-driving cars.

In my last article, I described the process that takes place in my brain–a highly sophisticated biological neural network–as I drive my Subaru toward a railroad crossing. In brief: My Subaru isn’t equipped with self-driving capabilities. I need to make a singular, seemingly simple decision: to cross the railroad or stop. My brain is equipped with about 100 billion neurons. Each neuron is connected to some 10,000 other neurons from which it receives information and to which it delivers information. The information flows from one neuron to the next across small gaps between the neurons called synapses. To make a decision, my brain has to rely on outside stimuli and on previously acquired knowledge. At the crossing, my eyes constantly collect input in the form of images (the stop sign, other vehicles, road conditions, the weather). All the while, minute electrical currents travel along the neurons connecting my eyes, my ears, the centers in my brain where my memory is stored, and the areas charged with processing my emotions. At the end of this process, a group of cells within my brain analyzes and adds up the information. It feels like a decision I made, to cross or to wait, but from a purely physical standpoint, the decision is a confluence of electrical currents and chemical reactions: input gathered from multiple sources, processed in a “black box” that I do not fully understand and over which I have no control, and an output in the form of a simple decision–stop, or drive on.

In a way similar to the neural network in our brain, artificial neural networks use multiple layers of “neurons” (called nodes) that connect through synapse-like links. As a self-driving car approaches an intersection, the cameras mounted on the car translate the scene in front of it into a picture made of numerous tiny electrical signals. These signals move from the first layer of neurons, which absorbs the incoming light, into the next, and then into the layers beyond. Each neuron receives information from multiple neurons in the preceding layer and has to “decide” whether or not to propagate its signal to the next layer of neurons. As in the human brain, the final decision–to brake, to accelerate, to turn–is a confluence of electrical currents; inputs are gathered, then processed in a “black box” whose function isn’t fully understood.
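For readers curious about what this layered arithmetic looks like, here is a minimal sketch in Python. Everything in it is illustrative: the layer sizes, the random weights, and the three candidate actions are assumptions made for the sake of the example, not how any real self-driving system is built.

```python
# A minimal sketch of how signals move through layers of an artificial neural
# network. All numbers here (layer sizes, weights) are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, weights, biases):
    # Each "neuron" adds up the weighted signals it receives and "decides"
    # whether to pass anything on (here, a simple threshold at zero).
    return np.maximum(0.0, weights @ inputs + biases)

# Pretend the camera image has been flattened into 12 pixel intensities.
pixels = rng.random(12)

# Three layers of neurons, each feeding the next.
w1, b1 = rng.normal(size=(8, 12)), np.zeros(8)
w2, b2 = rng.normal(size=(4, 8)), np.zeros(4)
w3, b3 = rng.normal(size=(3, 4)), np.zeros(3)

hidden1 = layer(pixels, w1, b1)
hidden2 = layer(hidden1, w2, b2)
scores = w3 @ hidden2 + b3          # one score per possible action

actions = ["brake", "accelerate", "turn"]
print("chosen action:", actions[int(np.argmax(scores))])
```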

What gives artificial neural networks the ability to make such decisions? They learn from a flood of information, from billions upon billions of data points, and through repetitive, corrective feedback loops. Passing a stop sign, for example, the system takes in an image of red and white dots perched on an elongated structure. It doesn’t ‘see’ the image the way our brain comprehends it, nor does it ‘understand’ that this is a stop sign. Instead, after multiple occasions in which the system encountered a stop sign, observed the behavior of human drivers at the intersection, and witnessed the consequences of ignoring a stop sign, it calculates that stopping the car is the appropriate response.
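Here, again only as a rough illustration, is what such a corrective feedback loop can look like in code. The two input features, the tiny made-up data set, and the learning rate are assumptions for this sketch; real systems adjust millions of weights over billions of examples, but the principle of guess, measure the error, and nudge the weights is the same.

```python
# A toy illustration of "repetitive corrective feedback": the system guesses,
# is told how wrong it was, and nudges its weights. The data are invented.
import numpy as np

rng = np.random.default_rng(1)

# Two made-up input features per example: "red-and-white octagon present?"
# and "intersection ahead?". Label 1 means "the correct response is to stop".
X = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
y = np.array([1.0, 1.0, 0.0, 0.0])

weights = rng.normal(size=2)
bias = 0.0
learning_rate = 0.5

def predict(x):
    # Probability that "stop" is the right response, given the features.
    return 1.0 / (1.0 + np.exp(-(x @ weights + bias)))

for step in range(1000):                      # the feedback loop, repeated
    for x, target in zip(X, y):
        guess = predict(x)
        error = guess - target                # how wrong the guess was
        weights -= learning_rate * error * x  # corrective nudge to the weights
        bias -= learning_rate * error

print("P(stop | stop sign seen):", round(float(predict(np.array([1.0, 1.0]))), 3))
```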

On its website, Tesla claims that its cars are the safest, and that Tesla vehicles driven using Autopilot technology are several times less likely to be involved in an accident than Tesla vehicles that aren’t using it. Considering human nature and the limited computational power of the human brain, I believe this to be true. Humans are flawed: driven by emotions and constantly distracted–by our thoughts and worries, by our phones and social media. Our capacity to take in and analyze outside stimuli is often overwhelmed.

And then there is the case of the Tesla colliding with a jet, and other accidents that human drivers would likely have avoided. Why does this happen? One explanation is that Tesla’s Autopilot practiced on cars and highways; it wasn’t trained to identify and avoid jets on runways. Given enough time, and enough exposure to these different circumstances (more data), these accidents could have been avoided. But here is another explanation: The problem with artificial intelligence is more fundamental and cannot be solved by accruing more data. Unlike the human brain, artificial neural networks in their current forms aren’t able to decipher symbols, do not hold a true image of the world, and cannot imagine consequences. Driving toward the jet, a human driver would recognize the image of the plane as a symbol of a large, heavy, hard object; faced with uncertain circumstances, the human driver would reach into their memory and extrapolate from prior experience and an understanding of the world; and they would summon their imagination to predict the consequences of a collision.

The Tesla hit the jet because it doesn’t have common sense. It excels at processing the tremendous amounts of data fed into its system but fails when the unexpected happens. It has no internal knowledge or understanding of the world. It can’t stretch and wrap its mind around a problem. It suffers from a lack of imagination.

Can artificial intelligence take over medicine? Can it replace doctors? In my next article, I will tell you more.

Dr. Shahar Madjar is a member of Aspirus Medical Group based out of Laurium. He specializes in urology.
