AI: The Quest for Superintelligence

March 16, 2021 | Alldus Recruitment

“It is customary, in a talk or article on this subject, to offer a grain of comfort, in the form of a statement that some particularly human characteristic could never be imitated by a machine. I cannot offer any such comfort.”

It’s been 70 years since the founding father of AI research, Alan Turing, made this statement, and it’s just as relevant today as it was in 1951.

A quick search for the most popular TED Talks on artificial intelligence will bring up a series of results like “Can we build superintelligent AI without losing control over it?” and “The terrifying implications of computers that can learn”.

Elon Musk went as far as to say that superintelligent AI represents the biggest existential threat facing humanity, a sentiment echoed by the great Professor Stephen Hawking, who stated: “The development of full artificial intelligence could spell the end of the human race.”

Much of the debate on superintelligence has been framed around whether it should be developed, rather than whether it can be.

The AI Winter may have thawed, but we are still a long way off seeing machines reach parity with human intelligence. So how realistic is the prospect of a superintelligent AI?

Types of Artificial Intelligence 

Before examining the feasibility of superintelligence, it is important to properly define the different types of AI and assess where we currently stand.

There are three generally accepted types of AI:

Artificial Narrow Intelligence (ANI)

This category, also known as “Weak AI”, is the only form of intelligence that has been realized to date, and refers to intelligence that is highly specialized and task-oriented (fraud-detection systems, speech recognition programs, chatbots, etc.).

The term “weak” can be slightly misleading, as it covers everything from a basic chatbot to Watson, the computer system that won Jeopardy!

The important thing to note with Weak AI is that while machines can successfully replicate human behaviors, they cannot replicate human intelligence. They are able to perform fixed tasks, based on what human engineers have programmed them to do.

Take the example of AlphaGo, developed by DeepMind, which is probably the greatest player in the 4,000-year history of Go. It can easily trounce any human opponent entering its domain, but it would be of little use at anything else. Similarly, RedDot has reached parity with human physicians at detecting tumors, but you couldn’t teach it to play Go!

The key here is specialization. Weak AI operates under a narrow set of rules, and is constrained by what it has been programmed to do. The challenge lies in applying a machine’s intelligence across multiple domains, which brings us to general intelligence.
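To make “a narrow set of rules” concrete, here is a deliberately trivial sketch in Python (a hypothetical function with made-up thresholds, not any real fraud system): the program does exactly the one job its rules describe and has no notion of anything outside them.

```python
# Toy illustration of narrow, rule-based "intelligence": hand-written rules
# flag a card transaction, and that is all this program can ever do.

def flag_suspicious_transaction(amount: float, country: str, home_country: str) -> bool:
    """Flag a transaction using fixed, human-authored rules (hypothetical thresholds)."""
    if amount > 5_000:               # rule 1: unusually large amount
        return True
    if country != home_country:      # rule 2: spending outside the home country
        return True
    return False

print(flag_suspicious_transaction(7200.0, "US", "US"))   # True  (rule 1 fires)
print(flag_suspicious_transaction(45.0, "FR", "US"))     # True  (rule 2 fires)
print(flag_suspicious_transaction(45.0, "US", "US"))     # False (no rule fires)
```

Ask this program to spot a tumor or play Go and it has nothing to say; its “intelligence” is whatever its engineers wrote down.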


Artificial General Intelligence (AGI)

The next iteration of AI, General Intelligence (or Strong AI), is the hypothetical ability of a program to master a range of intellectual challenges in the same way a human can.

Machines displaying AGI would be capable of acquiring new knowledge independently and applying it to a range of tasks, rather than just one. Going back to our previous example, think of AlphaGo learning to detect tumors the way RedDot does, or vice versa.

Even the most sophisticated examples of narrow intelligence would struggle (or outright fail) to do tasks that most people do on a daily basis. This would not be the case for an AGI.

For AGI to be realized, machines would have to develop a level of consciousness and have the cognitive ability to solve problems across a range of disciplines. They would also be capable of passing the Turing test.

Artificial Superintelligence (ASI)

A superintelligent AI goes one step further than AGI. Not only can a machine perform any task a human can, it can do them all better. Think of a superintelligent Deep Blue that outperforms the greatest human minds not just at chess, but at innovation, medical research and engineering.

There’s a great quote from philosopher Nick Bostrom, who stated:

“Machine intelligence is the last invention that humanity will ever need to make. Machines will then be better at inventing than we are, and they’ll be doing so on digital timescales”.

While signals in biological neurons travel relatively slowly, computer signals travel at the speed of light (and machines can operate 24 hours a day). It is therefore entirely plausible that if, and it’s a big if, superintelligence were realized, machines would be able to innovate at a far faster rate than humans.

For superintelligence to be achieved, machines would have to become self-aware, a concept that (so far) exists only in the realms of science fiction.


Is Superintelligence possible?

As it stands, despite the major advancements that have been made in the field, we have yet to move beyond narrow intelligence.

A more appropriate question, therefore (for now at least), would be: is general intelligence possible? There are differing schools of thought on how best to answer this, both philosophically and practically.

Proponents of the computational theory of mind view intelligence as a matter of information processing. The brain operates like a computer: it takes stimuli from the outside world (inputs), processes them through mental algorithms, and produces mental states or physical actions (outputs).

In a TED Talk on artificial intelligence, Sam Harris argues:

“If intelligence is just a matter of information processing, and we continue to improve our machines, we will produce some form of superintelligence.”

It has been argued, however, that this is a rather simplistic view of intelligence.

One of the most famous critics of this theory is philosopher John Searle, who developed the “Chinese Room” thought experiment as a counter-argument to the computational theory of mind.

Searle argues that no matter how far computer science advances, machines will always lack the intentionality required to exhibit real intelligence.

And it’s not just philosophers who disagree on the feasibility of general intelligence.

Speaking to the New York Times in 2020 about the prospect of superhuman AI, Elon Musk prophesied:

“It’s going to be upon us very quickly. Then we’ll need to figure out what we should do, if we even have that choice.”

Offering a slightly more realistic (and less apocalyptic) timeframe, MIT roboticist Rodney Brooks has said he believes AGI won’t arrive until 2300.

But even this conservative estimate is too optimistic for some in the field.

Facebook’s VP & Chief AI Scientist, Yann LeCun, once stated:

“It’s hard to explain to non-specialists that AGI is not a ‘thing’, and that most venues that have AGI in their name deal in highly speculative and theoretical issues…”

Machine Learning and AI expert Andrew Ng expressed a similar sentiment, comparing worrying about AI to “worrying about overpopulation on Mars”.

So what advancements would have to be made to convince those who are skeptical of AGI?

In a 2020 report, McKinsey outlined the main capabilities needed to achieve general intelligence. These included:

  • Sensory perception – The ability of a machine to extract depth and determine the spatial characteristics of its environment.
  • Fine motor skills – The ability to perform intricate tasks and display the same level of dexterity humans can, even for simple tasks like retrieving a set of keys from your pocket.
  • Natural language understanding – The ability to consume and comprehend information from a number of sources: books, journals, videos, etc. Most articles are written with the understanding that humans have a certain level of innate or general knowledge and can infer meaning from context. So far, machines lack this level of general knowledge (see the toy sketch after this list).
  • Problem-solving – The ability to diagnose and address problems, without being programmed specifically to do so, even ones as simple as identifying the need to change a lightbulb.
  • Social engagement – The ability to interact with humans without being feared. Machines would need to interpret facial expressions and infer emotional states. The article points out that even humans find this task difficult, so a machine mastering emotional cues seems a distant prospect.
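As a rough, hypothetical illustration of the natural language point above (not drawn from the McKinsey report), keyword matching can retrieve stored facts, but it has no general knowledge with which to infer what a sentence implies:

```python
# Hypothetical sketch: "understanding" by keyword lookup.
# The program can only return facts it has stored; it cannot infer anything new.

facts = {
    "umbrella": "an umbrella keeps you dry",
    "keys": "keys open doors",
}

def answer(question: str) -> str:
    for keyword, fact in facts.items():
        if keyword in question.lower():
            return fact.capitalize() + "."
    return "I don't know."

print(answer("What are keys for?"))
# -> "Keys open doors."

print(answer("Sara left her umbrella at home and walked through the rain. Did she get wet?"))
# -> "An umbrella keeps you dry."  The keyword matches, but answering properly
#    requires background knowledge (rain plus no umbrella usually means getting wet),
#    which nothing in this program represents.
```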

How likely it is that we will see the necessary breakthroughs to satisfy the above criteria largely depends on who you ask, with some more optimistic than others.

Hubert Dreyfus once argued that human knowledge was largely tacit and couldn’t be programmed into a machine, an argument that has been somewhat countered by the development of neural networks, which allow machines to learn from examples without receiving explicit instructions.
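To see that contrast in miniature (a toy sketch, not any system Dreyfus or his critics discussed), the perceptron below is never told the rule for logical AND; it adjusts its own weights from labelled examples until its predictions match them:

```python
# Minimal sketch: learning from labelled examples instead of explicit rules.
# A single perceptron is shown the truth table of logical AND and tunes its
# own parameters; nobody writes the AND rule into the code.

examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]   # weights, learned from data
b = 0.0          # bias, learned from data
lr = 0.1         # learning rate

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Training loop: nudge the parameters whenever a prediction is wrong.
for _ in range(20):
    for x, target in examples:
        error = target - predict(x)
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x, _ in examples])   # [0, 0, 0, 1]
```

The learned behaviour comes from the data and the update rule rather than from an engineer writing the rule down, which is the sense in which neural networks sidestep part of Dreyfus’s objection.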

Similarly, if you went back in time 50 years and asked someone if they were worried about the prospect of smartphones making us less reliant on human connection, they may well have said that was akin to “worrying about overpopulation on Mars”, yet it is a very real problem today.

We’ll finish with a quote from Sir John Dermot Turing (nephew of Alan Turing):

“What AI is all about is simple, self-contained things like facial recognition, self-driving cars, voice recognition, algorithms for helping Amazon sell products — self-contained boxes. For as long as AI exists like this, in disparate groups, there is no risk that AI could escape from their boxes, and take over the planet.”

For now, AI remains very much in its box – highly specialized and limited to what it has been programmed to do. Whether this changes by 2300 remains to be seen.

If you’re interested in the pursuit of artificial general intelligence, check out our latest US Machine Learning Jobs or you can upload your resume today to stay up to date with the latest vacancies in your area. 
