
Artificial Neuroscience – The Dark Secret Scientists Are Struggling to Understand


How do you teach a machine to learn? What constitutes artificial intelligence? Is there a threshold machines cannot cross? These are the questions confronting data scientists and researchers working at the intersection of neuroscience and artificial intelligence. The unsettling part of artificial intelligence is that computers can now make decisions in ways humans can no longer follow. Even the engineers who built these systems cannot fully explain their behavior.

During the 1990s, IBM tackled the problem of AI with a brute-force approach to beating World Chess Champion Garry Kasparov. Deep Blue was programmed to search many moves ahead, evaluating candidate lines to find one that would win the game. The programmers understood how the computer won at chess because they wrote its chess algorithms, tweaked its system parameters, and heavily upgraded the hardware to win the 1997 rematch with Kasparov by a score of 3.5-2.5.

Evolution to Google DeepMind

Two years ago, Google’s DeepMind staged an even more impressive AI demonstration when its AlphaGo program defeated Lee Sedol, one of the world’s top Go players, by a score of 4-1. This was achieved by teaching the system the rules of Go and then letting it play against itself, learning the game along the way. It also found a way to estimate whether it was winning at any point during a game. That is not a trivial achievement, as even strong Go players are rarely able to tell who is ahead mid-game. The two decades between Deep Blue and DeepMind show the vast difference in power and direction that AI has taken.

The current work in AI is largely about teaching a computer how to learn. That is the fundamental idea behind neural networks, and this emulation of natural systems is about to lead to all kinds of automation and new research findings.

A Closer Look at Neural Networks

Artificial neural networks are computing systems that imitate the function of biological neural networks. The system progressively improves its performance on a task, analogous to learning in humans, by considering examples rather than relying on task-specific programming. Neural networks are a powerful approach to machine learning, allowing computers to understand images, translate sentences, recognize speech, and much more.

In image recognition, for example, an artificial neural network learns to identify images containing dogs by analyzing sample images that have been labeled “dog” or “no dog.” Using those results, it can then identify dogs in images it has never seen, without being given any prior description of what a dog looks like. The machine evolves its own set of characteristics of a dog from the training material it processes.
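To make that concrete, here is a minimal sketch of the idea in Python, assuming the PyTorch library; the images and labels are random stand-ins rather than real photographs, and the tiny network is purely illustrative.

```python
# A minimal sketch of supervised learning from labeled examples, assuming PyTorch.
# The "images" here are random stand-ins; a real system would load labeled photos.
import torch
import torch.nn as nn

# Tiny synthetic dataset: 64 fake 32x32 grayscale images, labeled dog (1) or no dog (0).
images = torch.randn(64, 1, 32, 32)
labels = torch.randint(0, 2, (64,)).float()

# A small network: flatten the image, then pass it through two layers of artificial neurons.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(32 * 32, 64),
    nn.ReLU(),
    nn.Linear(64, 1),
)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.BCEWithLogitsLoss()

# "Learning" is repeated adjustment of the weights to better match the labels.
for epoch in range(10):
    optimizer.zero_grad()
    logits = model(images).squeeze(1)
    loss = loss_fn(logits, labels)
    loss.backward()
    optimizer.step()

# The trained model can now score a new, unlabeled image.
new_image = torch.randn(1, 1, 32, 32)
print(torch.sigmoid(model(new_image)))  # probability the image contains a dog
```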

The artificial neurons are organized in layers, with different layers performing different kinds of transformations on their inputs. A signal enters at the first layer (the input), passes through one or more intermediate layers, and emerges at the last layer (the output).
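A rough sketch of that flow, using plain NumPy with arbitrary, illustrative layer sizes, might look like this:

```python
# A minimal sketch of a signal passing from the input layer to the output layer.
import numpy as np

rng = np.random.default_rng(0)

def layer(x, n_out):
    """One layer of artificial neurons: a weighted sum of inputs, then a nonlinearity."""
    w = rng.standard_normal((x.shape[0], n_out))
    b = rng.standard_normal(n_out)
    return np.maximum(0.0, x @ w + b)  # ReLU activation

signal = rng.standard_normal(8)   # the input layer: 8 input values
signal = layer(signal, 16)        # first hidden layer
signal = layer(signal, 16)        # second hidden layer
output = layer(signal, 2)         # the output layer: the network's "decision"
print(output)
```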

The original goal of the artificial neural network was to solve problems the same way a human brain would. In practice, however, the field drifted from biology as the focus shifted to performing specific tasks. Such networks are now used for speech recognition, computer vision, social network filtering, machine translation, game playing, and medical diagnosis.

The Growing Problem with Neural Networks

People are beginning to see the growing problem with neural networks: nobody really understands how the most advanced systems perform their tasks. For instance, nobody can predict how an autonomous vehicle will respond to an emergency, which means that once it is released onto the streets, nobody knows what the outcome will be.

“When there’s a lot of interest and funding around something, there are also people who are abusing it. I find it unsettling that some people are selling AI even before we make it, and are pretending to know what [problems it will solve],” says Tomas Mikolov, a research scientist at Facebook AI.

Understanding how a neural network functions is very difficult. With a conventionally written program, there is always an understanding of how it arrives at a specific decision; with a trained network, there is not. Research on how neural networks function has focused mainly on detecting which neurons in the network are activated. But even if researchers find that a particular neuron fires repeatedly, that alone does not give a clear picture of what is going on in the entire network.
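As a hypothetical illustration of that style of inspection, the sketch below (assuming PyTorch) records the activations of one hidden layer while a made-up network processes a single made-up input.

```python
# A minimal sketch of the "which neurons fired?" style of inspection, assuming PyTorch.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(10, 32), nn.ReLU(),
    nn.Linear(32, 2),
)

recorded = {}

def save_activations(module, inputs, output):
    # Store the hidden layer's output for later inspection.
    recorded["hidden"] = output.detach()

# Attach a hook to the hidden ReLU layer, then run one input through the network.
model[1].register_forward_hook(save_activations)
model(torch.randn(1, 10))

# We can now see which hidden neurons were active, but not *why* the network decided.
active = (recorded["hidden"] > 0).sum().item()
print(f"{active} of 32 hidden neurons fired for this input")
```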

Researchers at Google have spent considerable time studying how neural networks function, with much of that work, notably in feature visualization, built on remarkable research methods and tools. Google researchers have, in fact, published a paper titled “The Building Blocks of Interpretability,” which proposes new ideas for understanding how deep neural networks arrive at their decisions.

Google’s research does not aim to invent yet another interpretability technique, but to turn existing techniques into composable building blocks that can be combined into larger models, shedding light on the behavior of neural networks.

Is Humanity Becoming “the Matrix”?

Feature visualization makes it easier to see what an individual part of the network responds to, but on its own it does not explain how the network arrives at its overall decision. Attribution can better explain the relationships between neurons, but it cannot explain the role each individual neuron plays. By combining these building blocks, Google researchers have created interfaces that show what the network detects, how it assembles those individual pieces into a decision, and why that decision was made.
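For a sense of what one attribution technique looks like in practice, here is a sketch of gradient-based saliency in PyTorch. This is a generic illustration, not the specific method from Google’s paper, and the model and image are placeholders.

```python
# A minimal sketch of one common attribution technique: gradient x input saliency.
import torch
import torch.nn as nn

# A toy classifier over 28x28 images; weights are untrained, purely illustrative.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))

image = torch.randn(1, 1, 28, 28, requires_grad=True)
scores = model(image)
top_class = scores.argmax().item()

# Attribution question: how much did each input pixel push the predicted class's score?
scores[0, top_class].backward()
saliency = (image.grad * image).abs().detach().squeeze()

print(saliency.shape)  # a 28x28 map of how strongly each pixel contributed
```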

“Whether it’s an investment decision, a medical decision, or maybe a military decision, you don’t want to just rely on a ‘black box’ method,” says Tommi Jaakkola, a professor at MIT.

The main innovation of Google’s interpretability model is that it analyzes the decisions made by a neural network’s components at different levels: individual neurons, connected groups of neurons, and complete layers. The research also applies matrix factorization to analyze the impact that arbitrary groups of neurons have on the final decision.
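A rough sketch of that grouping step, assuming scikit-learn’s NMF implementation and a made-up activation matrix, might look like the following; in practice the factorization would be applied to activations recorded from a real network processing a real image.

```python
# A minimal sketch of grouping neuron activations with matrix factorization.
import numpy as np
from sklearn.decomposition import NMF

# Pretend activations: 196 spatial positions x 512 channels from some hidden layer.
activations = np.abs(np.random.default_rng(0).standard_normal((196, 512)))

# Factor the activations into a handful of neuron groups ("factors").
nmf = NMF(n_components=6, init="nndsvd", max_iter=500)
group_strength = nmf.fit_transform(activations)  # how strongly each position uses each group
group_neurons = nmf.components_                  # which neurons make up each group

print(group_strength.shape, group_neurons.shape)  # (196, 6) and (6, 512)
```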

The best way to understand Google’s interpretability blocks is to look at a model that can surface the decisions a network of neurons makes at different levels of abstraction, from the basic computations up to the final decision.

The Dark Secret at the Heart of AI

At its core, the darkest secret of artificial intelligence is that the most advanced algorithms of artificial neural networks are beyond the realm of human understanding.

Nvidia, famous as a chipmaker, released an experimental vehicle onto the streets of Monmouth County, New Jersey. The car looked much like the autonomous vehicles from Tesla, Google, and General Motors, but it differed in its use of artificial intelligence: while a typical autonomous vehicle follows instructions provided by programmers and engineers, the Nvidia car relied entirely on an algorithm that taught itself to drive by watching humans do it.

Making a car learn how to drive by itself is an extraordinary achievement, but it is also worrisome because it is not clear how the car makes its decisions. The vehicle’s many sensors feed information straight into its network of artificial neurons, which processes the data and delivers the commands that operate the steering wheel and the other systems that keep the car on the road under varying conditions. The network was able to match the responses expected of a human driver in similar conditions.

“There is simply no way to design a system that will explain why it did something that was not expected,” says John R. Miles.

The worry lies in the possibility that, at any given moment, the car may not respond as expected. If the car sits at a stoplight and refuses to move forward, or crashes into something, no one can explain how it happened. The network of artificial neurons is so complicated that even the engineers who designed it may not be able to explain why the unexpected response occurred.

In an autonomous car that follows a specific program, a malfunction can be corrected by going over the program and fixing the flaw. But in a vehicle that makes decisions by itself, there is no such program to review and correct.

The Growing Mystery of Neuroscience

The mystery of how such a vehicle functions points to the central problem with artificial intelligence. Deep learning, the technology behind it, is a powerful tool for solving problems: it can recognize images and voices and translate languages. Properly applied, it can help make crucial trading decisions and diagnose diseases. However, it remains a risky proposition that should not be adopted until we develop techniques that make deep learning more understandable to its creators and accountable to its users. Without a clear understanding of how the system works and what it will do, failure is always possible and no one will be able to correct it.

“We can build these models, but we don’t know how they work,” says Joel Dudley of Mount Sinai.

For all the achievements attributed to the Nvidia car, it is still experimental, and it will remain so until its creators can explain the decisions its automated systems make.

Are humans crossing a threshold with artificial intelligence? Is society building machines whose thought processes we cannot explain? More importantly, are people ready for the age of machines that think differently than humans do? These questions and more will have to be answered in the future.
