Monday 14 November 2016

From Reactive Robots to Sentient Machines: The 4 Types of AI


Machines need to be able to teach themselves, says one researcher who studies artificial intelligence.
Credit: kirill_makarov / Shutterstock.com
The common, and recurring, view of the latest breakthroughs in artificial intelligence research is that sentient and intelligent machines are just on the horizon. Machines understand verbal commands, distinguish pictures, drive cars and play games better than we do. How much longer can it be before they walk among us?
The new White House report on artificial intelligence takes an appropriately skeptical view of that dream. It says the next 20 years likely won't see machines "exhibit broadly-applicable intelligence comparable to or exceeding that of humans," though it does go on to say that in the coming years, "machines will reach and exceed human performance on more and more tasks." But its assumptions about how those capabilities will develop missed some important points.
As an AI researcher, I'll admit it was nice to have my own field highlighted at the highest level of American government, but the report focused almost exclusively on what I call "the boring kind of AI." It dismissed in half a sentence my branch of AI research, into how evolution can help develop ever-improving AI systems, and how computational models can help us understand how our human intelligence evolved.
The report focuses on what might be called mainstream AI tools: machine learning and deep learning. These are the sorts of technologies that have been able to play "Jeopardy!" well, and beat human Go masters at the most complicated game ever invented. These current intelligent systems are able to handle huge amounts of data and make complex calculations very quickly. But they lack an element that will be key to building the sentient machines we picture having in the future.
We need to do more than teach machines to learn. We need to overcome the boundaries that define the four different types of artificial intelligence, the barriers that separate machines from us – and us from them.
The most basic AI systems, the Type I class, are purely reactive, and have the ability neither to form memories nor to use past experiences to inform current decisions. Deep Blue, IBM's chess-playing supercomputer, which beat international grandmaster Garry Kasparov in the late 1990s, is the perfect example of this type of machine.
Deep Blue can identify the pieces on a chess board and know how each moves. It can make predictions about what moves might be next for it and its opponent. And it can choose the optimal move from among the possibilities.
But it doesn't have any concept of the past, nor any memory of what has happened before. Apart from a rarely used chess-specific rule against repeating the same move three times, Deep Blue ignores everything before the present moment. All it does is look at the pieces on the chess board as it stands right now, and choose from possible next moves.
This type of intelligence involves the computer perceiving the world directly and acting on what it sees. It doesn't rely on an internal concept of the world. In a seminal paper, AI researcher Rodney Brooks argued that we should only build machines like this. His main reason was that people are not very good at programming accurate simulated worlds for computers to use, what is called in AI scholarship a "representation" of the world.
The current intelligent machines we marvel at either have no such concept of the world, or have a very limited and specialized one for their particular duties. The innovation in Deep Blue's design was not to broaden the range of possible moves the computer considered. Rather, the developers found a way to narrow its view, to stop pursuing some potential future moves, based on how it rated their outcomes. Without this ability, Deep Blue would have needed to be an even more powerful computer to actually beat Kasparov.
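To make that narrowing idea concrete, here is a minimal sketch of evaluation-guided, depth-limited game-tree search in Python. It is not Deep Blue's actual algorithm; the helpers `legal_moves`, `apply_move` and `evaluate` are hypothetical placeholders that a real chess engine would have to supply.

```python
# Toy sketch: only pursue the few most promising moves, ranked by a static
# evaluation of the resulting position. Illustrative only, not Deep Blue.

def search(state, depth, maximizing, legal_moves, apply_move, evaluate, beam=5):
    """Depth-limited search that keeps just the `beam` best-rated children."""
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state), None

    # Narrow the view: rate each child position and keep only the best few.
    scored = sorted(
        ((evaluate(apply_move(state, m)), m) for m in moves),
        key=lambda sm: sm[0],
        reverse=maximizing,
    )[:beam]

    best_score, best_move = None, None
    for _, move in scored:
        child = apply_move(state, move)
        score, _ = search(child, depth - 1, not maximizing,
                          legal_moves, apply_move, evaluate, beam)
        if best_score is None or (maximizing and score > best_score) \
                or (not maximizing and score < best_score):
            best_score, best_move = score, move
    return best_score, best_move
```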
Similarly, Google's AlphaGo, which has beaten top human Go experts, can't evaluate all potential future moves either. Its analysis method is more sophisticated than Deep Blue's, using a neural network to evaluate game developments.
These methods do improve the ability of AI systems to play specific games better, but they can't be easily changed or applied to other situations. These computerized imaginations have no concept of the wider world – meaning they can't function beyond the specific tasks they're assigned and are easily fooled.
They can't interactively participate in the world, the way we imagine AI systems one day might. Instead, these machines will behave exactly the same way every time they encounter the same situation. This can be very good for ensuring an AI system is trustworthy: You want your autonomous car to be a reliable driver. But it's bad if we want machines to truly engage with, and respond to, the world. These simplest AI systems won't ever be bored, or interested, or sad.
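A purely reactive system can be pictured as a stateless mapping from observation to action. The toy sketch below, whose observation fields and actions are invented for illustration, shows why such a machine behaves identically every time it meets the same situation: nothing is stored between calls.

```python
# A purely reactive agent sketched as a stateless function: the same
# observation always produces the same action, because nothing is remembered.

def reactive_policy(observation: dict) -> str:
    """Map the current observation directly to an action, with no memory."""
    if observation.get("obstacle_ahead"):
        return "brake"
    if observation.get("light") == "red":
        return "stop"
    return "continue"

# Identical observations yield identical actions, every time.
assert reactive_policy({"light": "red"}) == reactive_policy({"light": "red"})
```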
This Type II class contains machines that can look into the past. Self-driving cars do some of this already. For example, they observe other cars' speed and direction. That can't be done in just one moment; it requires identifying specific objects and monitoring them over time.
These observations are added to the self-driving cars' preprogrammed representations of the world, which also include lane markings, traffic lights and other important elements, like curves in the road. They're included when the car decides when to change lanes, to avoid cutting off another driver or being hit by a nearby car.
But these simple pieces of information about the past are only transient. They aren't saved as part of the car's library of experience it can learn from, the way human drivers compile experience over years behind the wheel.
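One way to picture this transient, limited memory is a short rolling buffer of recent observations used to estimate another car's speed and then discarded. The sketch below is purely illustrative; the class name, fields and units are assumptions, not code from any real autonomous-driving system.

```python
# "Limited memory": a rolling window of recent observations that is never
# written to any long-term store, so nothing is learned across trips.

from collections import deque
from typing import Optional

class TransientTracker:
    def __init__(self, window: int = 10):
        self.history = deque(maxlen=window)  # old entries silently fall off

    def observe(self, timestamp: float, position: float) -> None:
        self.history.append((timestamp, position))

    def estimated_speed(self) -> Optional[float]:
        """Average speed over the window, or None if there is too little data."""
        if len(self.history) < 2:
            return None
        (t0, x0), (t1, x1) = self.history[0], self.history[-1]
        return (x1 - x0) / (t1 - t0) if t1 > t0 else None

tracker = TransientTracker()
tracker.observe(0.0, 0.0)
tracker.observe(1.0, 15.0)
print(tracker.estimated_speed())  # roughly 15.0 units per second
```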
So how can we build AI systems that build full representations, remember their experiences and learn how to handle new situations? Brooks was right in that it is very difficult to do this. My own research into methods inspired by Darwinian evolution can start to make up for human shortcomings by letting the machines build their own representations.
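As a rough illustration of that evolutionary approach, the generic loop below mutates and selects candidate solutions by fitness over many generations instead of hand-coding a solution. It is a toy sketch under stated assumptions, not the author's research code; `random_candidate`, `mutate` and `fitness` stand in for problem-specific components.

```python
# Minimal evolutionary loop: score candidates, keep the better half as
# parents, refill the population with mutated copies, repeat.

import random

def evolve(random_candidate, mutate, fitness,
           population_size=50, generations=200):
    population = [random_candidate() for _ in range(population_size)]
    for _ in range(generations):
        # Higher fitness is assumed to be better.
        population.sort(key=fitness, reverse=True)
        parents = population[: population_size // 2]
        population = parents + [mutate(random.choice(parents))
                                for _ in range(population_size - len(parents))]
    return max(population, key=fitness)
```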
We might stop here, and call this point the important divide between the machines we have and the machines we will build in the future. However, it is better to be more specific and discuss the types of representations machines need to form, and what they need to be about.
Machines in the next, more advanced, Type III class not only form representations about the world, but also about other agents or entities in the world. In psychology, this is called "theory of mind" – the understanding that people, creatures and objects in the world can have thoughts and emotions that affect their own behavior.
This is crucial to how we humans formed societies, because it allowed us to have social interactions. Without understanding each other's motives and intentions, and without taking into account what somebody else knows either about me or the environment, working together is at best difficult, at worst impossible.
If AI systems are indeed ever to walk among us, they'll have to be able to understand that each of us has thoughts and feelings and expectations for how we'll be treated. And they'll have to adjust their behavior accordingly.
The final step of AI development, Type IV, is to build systems that can form representations about themselves. Ultimately, we AI researchers will have to not only understand consciousness, but build machines that have it.
This is, in a sense, an extension of the "theory of mind" possessed by Type III artificial intelligences. Consciousness is also called "self-awareness" for a reason. ("I want that item" is a very different statement from "I know I want that item.") Conscious beings are aware of themselves, know about their internal states, and are able to predict feelings of others. We assume someone honking behind us in traffic is angry or impatient, because that's how we feel when we honk at others. Without a theory of mind, we could not make those sorts of inferences.
While we are probably far from creating machines that are self-aware, we should focus our efforts toward understanding memory, learning and the ability to base decisions on past experiences. This is an important step toward understanding human intelligence in its own right. And it is crucial if we want to design or evolve machines that are more than exceptional at classifying what they see in front of them.

Why Do Teeth Hurt?

That gnawing, throbbing pain, the sharp jolt from a cup of hot coffee – almost everyone alive today has experienced the intense pain of a toothache. But why exactly do we get toothaches? In short, it is because, unlike hair or nails, teeth are made up of living tissue, said Christine Wall, an evolutionary anthropologist at Duke University who studies the evolution of teeth. Pain is the brain's way of knowing something has gone wrong in the tissue, she said.
"Under the cap of enamel, there are two other layers that are living," Wall told Live Science. Those living tissues are threaded with nerves that send signals to the brain when encountering hot and cold foods, or when experiencing forces so high that a tooth could break, Wall said.

Living layers

Teeth are made up of several layers: The outer, hard surface, called enamel, is nonliving, but the inner portion of the tooth is made up of hard, bony cells called dentin. Below that, the pulp – soft tissue filled with blood vessels and nerves – anchors the root of the tooth into the gum and extends from the tooth crown to the root.
Cavities, or holes that occur when the enamel gets eroded, are the likeliest culprits in tooth pain. Carbohydrates, especially from highly processed, sugary foods, are gobbled up by the bacteria that form plaque on teeth. "The metabolic waste from the plaque bacteria is what rots the teeth," said Peter Ungar, a dental anthropologist at the University of Arkansas and the author of the forthcoming book "Evolution's Bite" (Princeton University Press).
Once the enamel erodes, the exposed dentin registers pain in response to heat, cold and pressure. If bacteria invade the pulp cavity, they can also cause inflammation and infection. Nerves in the cavity will scream with every sip of hot coffee and every bite of cold ice cream, and the tooth will often require a root canal, which scoops out the inflamed pulp and replaces it with a rubbery material, according to the American Association of Endodontists (AAE).
Cracked teeth can also cause pain when chewing, as the outer tooth fragments jostle against the pulp, irritating the sensitive inner portion of the tooth, according to the AAE.
Gum disease can also cause pain that mimics tooth pain. Gum disease occurs when those bacteria slip under the gum line and the immune system is mobilized to kill them. The body gets confused when distinguishing between the gum tissue and the plaque bacteria, leading it to attack the body's own tissue, Ungar said. "Gum disease is the No. 1 autoimmune disease in the world," Ungar said. Gum disease can also cause the gums to recede, which exposes a small amount of the tooth's root and makes people momentarily sensitive to heat or cold, according to the AAE.
Crowns that are too thick can also cause pain as people bite down, because they may either press against the gum or alter the force experienced in the tooth, according to the AAE.

Early tooth pain?

While most people know the feeling of a toothache, it may not have been a routine part of our evolutionary past, Ungar said. For instance, fossils of Homo erectus, Neanderthals and prehistoric humans show relatively little tooth decay. Even nonhuman primates probably weren't as prone to toothaches as modern people. Rates of tooth decay in modern humans rose after the agricultural revolution and skyrocketed in the 17th century, with the advent of highly refined carbohydrates in the diet, Ungar said.
Though some fossils do show signs of tooth decay, "the rates are way, way, way lower, and we typically see it less frequently in hunters and gatherers, at least those that don't consume sugar-rich, or carb-heavy diets," Ungar said. However, Ungar's most recent work has shown that the Hadza, a hunter-gatherer group in Africa, actually has a high rate of tooth decay, likely thanks to their habit of chewing on honeycombs and smoking.

Mammal pain

Animals more distantly related to humans also may not feel chronic tooth pain often. Unlike mammals, which have just one set of permanent teeth, reptiles such as crocodiles can regrow teeth when they lose them, Wall said.
Mammals may also be more aware of their teeth, which could affect their experience of pain. Mammals engage in extensive "mastication" – essentially, chewing before swallowing – so they need an exquisitely precise understanding of where the teeth are at any time. In turn, this requires more complex networks in the brain to interpret nerve signals from the teeth, Wall said. "This is a system that needs constant feedback. Every time you chew you change the material properties of the food," Wall said. "You need to know: If I chew with the same force in the next chewing cycle, is that going to be too much?"
And because it's unlikely that our ancient ancestors were gulping down lattes or eating very cold foods, the tooth's sensitivity to heat and cold may simply be a byproduct of the tooth's ability to sense pressures and the flow of fluids, Wall speculated.
Whatever the origins of tooth pain in humans' evolutionary past, the remedy in modern times is simple: Avoid sugary or acidic foods, brush and floss teeth regularly, and get regular dental checkups to prevent the buildup of plaque, Ungar said.


BREAKING: AAU shut down as students protest school fees increase

Okere Alexandar, Benin
The management of the Ambrose Alli University, Ekpoma, Edo State, has closed down the institution till further notice, following a violent protest by students over an alleged increase in school fees.

ASUU to begin strike on Wednesday

ASUU President, Prof. Biodun Ogunyemi
The Academic Staff Union of Universities (ASUU) said it would embark on a one-week warning strike over the Federal Government's failure to implement the 2009 Agreement and the 2013 Memorandum of Understanding (MoU).
ASUU National President, Prof. Biodun Ogunyemi, told a news conference on Monday that the National Executive Council of ASUU had resolved to embark on a one-week warning strike from Wednesday, November 16, after a nationwide consultation with members.
He said, “Many aspects of the 2013 MoU and the 2009 Agreement with the Federal Government have either been unimplemented or despairingly handled.
“The agreements are: Payments of staff entitlements since December 2015, funding of universities for revitalisation, pension, TSA and university autonomy and renegotiation of 2009 Agreement.
“Failure by the Federal Government to implement this agreement has put ASUU leadership in severe difficulty, responding to inquiries from members of the union about the state of our agreement.”
The ASUU president said that during the warning strike, there would be no teaching, no examinations and no attendance at statutory meetings in all branches.
He, however, called on all education-loving Nigerians to prevail on the Federal Government to address the patriotic demands of ASUU until the Nigerian university system is repositioned.
He said, “With the release of the 2016 Annual Budget, our union wondered aloud why allocation to education dropped from 11 per cent in 2015 to eight per cent in 2016.
“With the introduction of TSA, the federal universities find it extremely difficult to discharge their core responsibilities of teaching, research and community services.
“We tried to correct the erroneous impression in government circles that the capital and research grants to universities were being handled by the Tertiary Education Trust Fund.”
NAN