AI: the Theory

With applications ranging from autonomous vehicles to fraud detection, AI is everywhere and it’s making a significant impact on how we live our lives.


Artificial intelligence (a.k.a. AI) is all the buzz. Although the field is still in its infancy, we know AI is at the core of autonomous vehicles, virtual personal assistants such as Cortana and Siri, video game realism and smart home applications. But did you know it’s also the brain behind fraud detection, online customer support, purchase prediction and entertainment recommendations? AI is everywhere and it’s making a significant impact on how we live our lives.

So, what exactly is AI? There is no consensus on what constitutes intelligence, which makes artificial intelligence equally hard to pin down. What is widely accepted is that there are different types of intelligence, and AI research has focused primarily on a handful of them: analytics, reasoning, learning, perception and language.

TOP-DOWN VERSUS BOTTOM-UP AI RESEARCH

The top-down method aims to create intelligence by analyzing cognition in terms of the processing of symbols. The bottom-up approach is based on creating neural networks that imitate the way the brain’s neurons connect with one another. Suppose we want a scanner to recognize the letters of the alphabet. A top-down method would have the application compare each image to programmed geometric descriptions of the letters. A bottom-up approach would present letters one at a time in different sizes, font structures, angles and light intensities, gradually adjusting the connections between the network’s artificial neurons as each new example arrives.
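To make the contrast concrete, here is a minimal Python sketch of both approaches on an invented toy problem: telling apart crude 5-by-5 pixel images of the letters "L" and "T". Everything in it, the letter grids, the function names and the training parameters, is hypothetical and chosen purely for illustration. The top-down routine compares an image against hand-coded templates, while the bottom-up routine nudges the weights of a tiny single-layer network each time it sees a noisy example.

# A toy illustration (not from the article) of top-down vs. bottom-up
# recognition of 5x5 pixel letters. All data here is invented.

import random

# Two hand-coded 5x5 templates, flattened to 25 pixels (1 = ink, 0 = blank).
L_TEMPLATE = [1,0,0,0,0,
              1,0,0,0,0,
              1,0,0,0,0,
              1,0,0,0,0,
              1,1,1,1,1]

T_TEMPLATE = [1,1,1,1,1,
              0,0,1,0,0,
              0,0,1,0,0,
              0,0,1,0,0,
              0,0,1,0,0]

# --- Top-down: compare the input against programmed geometric descriptions ---
def classify_top_down(pixels):
    """Pick whichever stored template the image differs from the least."""
    dist_l = sum(a != b for a, b in zip(pixels, L_TEMPLATE))
    dist_t = sum(a != b for a, b in zip(pixels, T_TEMPLATE))
    return "L" if dist_l <= dist_t else "T"

# --- Bottom-up: adjust connection weights from repeated noisy examples ---
def noisy(template, flips=3):
    """Simulate variation in size, font and lighting by flipping a few pixels."""
    copy = template[:]
    for i in random.sample(range(len(copy)), flips):
        copy[i] = 1 - copy[i]
    return copy

def train_perceptron(epochs=20, lr=0.1):
    """Learn weights that respond positively to 'L' and negatively to 'T'."""
    weights = [0.0] * 25
    bias = 0.0
    for _ in range(epochs):
        for template, target in [(L_TEMPLATE, 1), (T_TEMPLATE, -1)]:
            x = noisy(template)
            activation = sum(w * p for w, p in zip(weights, x)) + bias
            prediction = 1 if activation > 0 else -1
            if prediction != target:  # only adjust connections on mistakes
                weights = [w + lr * target * p for w, p in zip(weights, x)]
                bias += lr * target
    return weights, bias

def classify_bottom_up(pixels, weights, bias):
    activation = sum(w * p for w, p in zip(weights, pixels)) + bias
    return "L" if activation > 0 else "T"

if __name__ == "__main__":
    random.seed(0)
    weights, bias = train_perceptron()
    sample = noisy(T_TEMPLATE)
    print("top-down says:", classify_top_down(sample))
    print("bottom-up says:", classify_bottom_up(sample, weights, bias))

The difference in philosophy shows up clearly: the first routine only knows what its programmer described, while the second gradually encodes the letters in its connection weights from the examples alone.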

COGNITIVE SIMULATION, APPLIED AI AND STRONG AI

Cognitive simulation uses computer models to test theories about how the human mind works. It is considered a powerful tool for discerning how we use perception, language and memory; typical research questions include how people recognize faces and voices.

Applied AI is used to produce commercial smart systems. The greatest success has been with expert systems, which pair a knowledge base with an inference engine. The knowledge base stores specialized information of the kind known by experts in a given field. The inference engine applies logical rules, sometimes fuzzy ones, to the knowledge base to deduce new knowledge. Examples include medical diagnosis, credit ratings and financial document routing.
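To give a feel for how a knowledge base and an inference engine fit together, here is a minimal Python sketch of forward chaining over a handful of invented credit-rating rules. The facts, rule names and conclusions are all hypothetical; a production expert system would hold thousands of expert-curated rules and often weigh them with fuzzy or probabilistic logic rather than the crisp yes/no conditions used here.

# A toy, hypothetical forward-chaining inference engine. The facts and
# rules below are invented for illustration only.

# Knowledge base: facts we already know about one (fictional) applicant.
facts = {"income_over_50k", "no_missed_payments", "employed_2_years"}

# Rules: IF all conditions hold THEN conclude a new fact.
rules = [
    ({"income_over_50k", "employed_2_years"}, "stable_income"),
    ({"no_missed_payments"}, "good_payment_history"),
    ({"stable_income", "good_payment_history"}, "low_credit_risk"),
]

# Inference engine: keep applying rules until no new facts can be deduced.
def forward_chain(facts, rules):
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)  # new knowledge deduced from old
                changed = True
    return derived

print(forward_chain(facts, rules))
# The derived facts now include "stable_income", "good_payment_history"
# and "low_credit_risk".

Each pass through the loop applies every rule whose conditions are already satisfied, so a conclusion deduced in one pass (such as "stable_income") can trigger further rules in the next. That is the sense in which such a system deduces new knowledge from what the experts put in.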

Strong AI aspires to build artificial devices whose intellectual ability is indistinguishable from that of a human. In 1950, Alan Turing, the British mathematician generally recognized as the father of AI, proposed a simple test for computer intelligence now known as the Turing test. He asked, “Are there imaginable digital computers which would do well in the imitation game?” In the test, a human interrogator puts questions to both a computer and another person without knowing which is which. If the computer’s responses cannot be distinguished from the human’s, Turing argued, the computer can reasonably be said to be thinking.

There have been few, if any, advances toward strong AI. The early euphoric belief that it was within reach has given way to an appreciation of the extreme difficulties involved, given how little we currently understand about intelligence itself. Most researchers working on the other two approaches, cognitive simulation and applied AI, do not think strong AI is worth pursuing now. All existing systems that use AI, of any sort, are applied AI at best.

WHAT’S NEXT?

The next big AI target is language. Understanding language means more than understanding the linguistic meanings of the words; it also means grasping what the speaker actually intends. Researchers are currently attacking the problem with a range of technologies, such as voice recognition and natural language processing. While we’re a long way from having meaningful conversations with our smartphones, AI is poised to effect big change. Next month we’ll explore what AI will bring to our world of abundance over the next couple of years.

This post was originally published in CPA Magazine.