
"Can machines think?" What humans need to learn about artificial intelligence


One of people’s biggest fears about artificial intelligence is that machines will rise up and humanity will become obsolete – that we will give ‘them’ too much power and control, and they will take over through either benevolence or malevolence.

The root of many of these science fiction horror stories is the ability of machines to think for themselves and come to reasoned, logical conclusions. We have had a test for machine intelligence in the form of the Turing test, which asks whether a machine can fool a human interrogator into thinking they are talking to another human.

However, the test is not without its problems, and one of my favourite criticisms of Turing’s test came over Twitter.

Initially I thought “ok, fair point – we are defining that the only true intelligence is whatever properties humans exhibit.” One of the better responses did point out that cherry-picking a single feature is not the same as the Turing test, but my initial reading of the tweet still got me thinking.

In order to answer this big question, we need to simplify it somehow. Turing did this by reducing ‘Can machines think?’ to ‘Can machines fool humans into thinking they are human?’ By ‘human’, we also mean ‘able to converse in human language’.

One of the problems with equating thought to communication is that these are two disconnected abilities. I am a thinking being and have an appreciation of Russian, German and French, but ask me to hold a conversation in any of those languages and I’ll struggle unless I devote a lot of time to learning them.

I would be classed as a machine in a non-English Turing test. So the test falls down because of its reliance on natural language. Is there a better way to define whether something can think?


Commonly, thinking is associated with intelligence – would this be a better fit? The dictionary definition of intelligence is: the ability to acquire and apply knowledge and skills.

Computers are very good at acquiring information and making decisions based on it, traditionally in a very fixed way. With machine learning techniques, the acquisition of knowledge has become more refined, with semi-supervised and unsupervised techniques taking that process out of human hands.
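To make that concrete, here is a minimal sketch of unsupervised learning – an algorithm finding structure in data it has never been given answers for. The library (scikit-learn), the random data and the choice of three clusters are all illustrative assumptions rather than anything from this article.

```python
# Unsupervised "knowledge acquisition": the algorithm is handed unlabelled
# points and discovers groupings on its own, with no human-provided answers.
import numpy as np
from sklearn.cluster import KMeans

points = np.random.rand(200, 2)                       # 200 unlabelled 2-D observations
model = KMeans(n_clusters=3, n_init=10).fit(points)   # no labels supplied at any point

print(model.cluster_centers_)   # the structure the algorithm found for itself
print(model.labels_[:10])       # the cluster assigned to each of the first ten points
```

Nothing in that snippet is told what the ‘right’ groups are – which is exactly what I mean by the acquisition of knowledge moving out of human hands.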

So we have a type of artificial intelligence, but this doesn’t answer our original question.

Humans have the capacity to learn and make decisions without preconceived answers, whereas computers need to be programmed. Then again, we are born ‘pre-programmed’ too: we do not need to be taught how to see or hear or to make sounds, but we do need assistance to put those abilities into context – to make sounds that others can understand and to assign relevance to the shapes we see with our stereo vision.

An artificial thinking machine must be given an equal starting point. We are given education to train us and there is a point at which a human becomes self-aware. The baby in the mirror is suddenly perceived to be a reflection rather than a different child.

There is a fantastic book covering this topic from a scientific perspective called The Baby in the Mirror: A Child’s World from Birth to Three.


Machines can also have some limited sense of self-awareness. They can learn to recognise themselves, or parts of themselves, in mirrors, and have recently demonstrated basic self-awareness by working out whether or not they had been given a ‘pill’ that made them unable to speak.

So machines can demonstrate intelligence and (limited) self-awareness, but this is still far from a demonstration of thought. There’s a great quote from the film I, Robot in which Sonny, the robot, and the detective discuss what makes someone ‘human’:

I, Robot (2004)
Detective Del Spooner: Human beings have dreams. Even dogs have dreams, but not you, you are just a machine. An imitation of life. Can a robot write a symphony? Can a robot turn a… canvas into a beautiful masterpiece?
Sonny: Can *you*?

While not all humans possess the artistic creativity we so often use to distinguish ourselves as a higher intelligence, Google’s recent ‘deep dreaming’ research shows that machines are creating abstract art – original pieces that no human has ever imagined.

Some people might argue these images are more artistic than many humans could produce, but they are simply the result of a finite number of decisions, and their knock-on effects, from a network trained to see patterns.
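The core mechanism is simpler than the results suggest. Below is a minimal sketch of the activation-maximisation idea behind deep dreaming, assuming PyTorch and a pretrained torchvision model; the model, layer index and step count are illustrative choices, not Google’s actual code.

```python
# Start from noise and repeatedly nudge the image so that a chosen layer of a
# pretrained network responds more strongly - the "finite number of decisions"
# that ends up looking like abstract art.
import torch
from torchvision import models

network = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)   # begin with random noise
optimiser = torch.optim.Adam([image], lr=0.05)
target_layer = 20                                         # which layer's activations to amplify

for step in range(100):
    optimiser.zero_grad()
    x = image
    for index, layer in enumerate(network):
        x = layer(x)
        if index == target_layer:
            break
    loss = -x.norm()              # strengthen the layer's response by minimising its negative
    loss.backward()
    optimiser.step()
    with torch.no_grad():
        image.clamp_(0, 1)        # keep pixel values in a displayable range
```

Google’s published examples often start from an existing photograph rather than noise, but the principle is the same: the ‘art’ emerges from repeated, deterministic nudges towards patterns the network has already learned.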

Is human imagination any different? Does our subconscious combine fragments of our knowledge and simply spit them out in a form we can understand? Still, this is creativity, not thought.

While we can put electrodes on the outside of human heads and see the electrical potentials that arise as neurons fire, we can never truly know whether the person in front of us is thinking just because they tell us they are – we take it for granted because they are human.

In the same way we take for granted a machine is not truly self-aware. We can only test and verify what we believe are the results of independent thought: self-awareness, intelligence, and the ability to solve problems outside of past experience using the skills gained.

Machines are capable of all of these things individually, although they are yet to be combined. Even when that occurs, I believe many will deny that thought is occurring because ‘it’ is just a simple state machine. At that point I would have to ask: and are humans not?

Maybe if we can define ‘can humans think?’ in a scientific way that we can all agree on, then we will have an irrefutable set of metrics for the inevitable machine intelligences that will be part of our lives.

Dr Janet Bastiman is Chief Science Officer for SmartFocus
