
A few years ago, a friend from Mexico visited me in Tokyo. As we strolled along the streets of Ginza after lunch, he noticed the buses and crowds of Asian tourists. He turned to me and asked, “Can you distinguish Japanese people from Chinese people?”

I told him yes, I probably could, then pointed out two ladies across the road who were probably Chinese. He was amazed (but doubtful), so we walked closer to hear what language they were speaking. It turned out I was correct—they were speaking Chinese.

Living in Mexico, my friend hadn’t met enough East Asian people to tell different nationalities apart. I, however, am a Chinese Singaporean who has lived in Japan for over thirteen years. This has helped me notice the subtle differences in the ways Asian people talk, walk, dress, wear makeup, style their hair, and so on. It didn’t happen all at once. It was a skill I developed through many cycles of hypothesis, validation, and adjustment.

That’s how humans learn.

When Alan Turing laid the theoretical groundwork for the modern computer in the late 1930s, he imagined that machines would one day exhibit humanlike intelligence. In the spirit of Turing’s vision, computer scientists everywhere are seeking ways for machines to do just that. But why? What value comes from a computer learning to think like a human when we already have humans to do that?

The answer depends on who you ask.

For companies like Tesla and Google, it’s about improving human efficiency through self-driving cars. iRobot, too, aimed for efficiency with its globally adored Roomba vacuum cleaner. But iRobot is also teaching computers to think like humans so they can perform human tasks in places where it isn’t safe for humans to go: the Fukushima nuclear power plant after 3/11, war zones, and Mars.

Then there are computer companions: SoftBank’s Pepper, Sony’s Aibo, and Groove X’s Lovot. What kind of companions would they be without humanlike intelligence?

Really, there are myriad benefits to teaching computers to think like us. And luckily, they already learn in similar ways.

How Computers Classify

Miku Hirano, CEO and founder of Cinnamon, developed disruptive AI to perform boring, repetitive office work so humans don’t have to. This involves things like extracting information from handwritten documents, which meant her AI needed to learn to classify.

Let’s imagine something simpler: say we want to train a computer to classify images of cats and dogs. The computer is initialized with a hypothesis, or “model,” as we call it in computer science. Given a cat or dog image as input, the model will arbitrarily output “cat” or “dog.” At this stage, the computer is just guessing.

But we don’t want the computer to guess. We want the computer to learn from its mistakes and improve its model. So we need to inform the computer what the right answer is. Thus, we need not only tons of input image data, but also the corresponding expected output.

Using many pairs of input and expected output, the computer will repeatedly run its model, test its guesses, and adjust until it achieves a good level of accuracy. Eventually, the computer will be able to take an image of a cat or dog it has never seen before and correctly classify it.

Classification problems like this are known as “supervised learning” problems, since we tell the computer what the right answer is. It’s a popular way to train computers; common algorithms include k-nearest neighbors and decision trees.
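To make this concrete, here’s a minimal sketch of supervised classification using scikit-learn’s k-nearest neighbors classifier. The two-number “features” standing in for each image are invented for illustration; a real image classifier would extract features from the pixels themselves.

```python
# Toy supervised classification with k-nearest neighbors (scikit-learn).
# The feature vectors are made up for illustration, e.g.
# [ear pointiness, snout length] on a 0-1 scale.
from sklearn.neighbors import KNeighborsClassifier

X_train = [[0.9, 0.2], [0.8, 0.3], [0.2, 0.9], [0.3, 0.8]]  # inputs
y_train = ["cat", "cat", "dog", "dog"]  # expected outputs (labels)

model = KNeighborsClassifier(n_neighbors=3)
model.fit(X_train, y_train)  # learn from labeled input/output pairs

# Classify an "image" the model has never seen before.
print(model.predict([[0.85, 0.25]]))  # -> ['cat']
```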

How Computers Handle Regression

Classification isn’t the only way machines learn with supervision. Another fundamental learning model for computers, this one based on estimating a numerical value (as opposed to a category), is known as regression. Japan’s C2C e-commerce giant Mercari, for example, ran a crowdsourced competition on Kaggle to build an AI that automatically suggests product prices to sellers.

Here’s a simpler example: imagine we want to estimate a person’s age. We humans have developed the ability to do this from appearance—and most of us are familiar with what happens when we get it wrong! By repeatedly guessing the ages of friends, family, celebrities, and even strangers, the compliments or scolding we get tell us if we’ve underestimated or overestimated.

We can train a computer to do the same thing by providing it with many data pairs of human portraits and accurate ages.

Regression problems are also a form of supervised learning, since we inform the computer what output we expect. One common technique is (the aptly named) regression analysis; linear regression is the simplest example.
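As a rough sketch, here’s the same idea in code, again assuming scikit-learn. The numeric “portrait features” are invented stand-ins for whatever a real system would learn from the image itself.

```python
# Toy regression with linear regression (scikit-learn).
# Each person is reduced to invented features, e.g.
# [wrinkle score, gray-hair ratio] on a 0-1 scale.
from sklearn.linear_model import LinearRegression

X_train = [[0.1, 0.0], [0.3, 0.1], [0.6, 0.4], [0.9, 0.8]]  # portraits
y_train = [21, 35, 52, 74]  # expected outputs: the true ages

model = LinearRegression()
model.fit(X_train, y_train)  # fit a line through the input/output pairs

# Estimate the age of someone the model has never seen.
print(model.predict([[0.5, 0.3]]))  # a numeric age estimate, not a category
```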

How Computers Learn Unsupervised

Humans and computers both use supervised learning to tackle classification and regression problems. But humans do not always need to be explicitly told something is right or wrong in order to learn.

The law of proximity (one of the Gestalt laws of grouping) states that humans perceive things that are close together as belonging to the same group. Psychologists used to think this ability was innate, but recent research suggests we acquire it as infants by interacting with our environment.

Can we get computers to perform unsupervised learning like this?

Actually, yes! In computer science terms, the law of proximity corresponds to clustering, a form of unsupervised learning. The idea is to calculate the distance between each pair of objects and form groups of objects that are close together. One common clustering algorithm is the k-means algorithm.
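Here’s a minimal k-means sketch, again assuming scikit-learn, to show that no labels are involved: the algorithm is handed only the points and finds the groups on its own.

```python
# Toy unsupervised clustering with k-means (scikit-learn).
# Note there are no labels: just points, grouped by proximity.
from sklearn.cluster import KMeans

points = [[1.0, 1.1], [1.2, 0.9], [0.9, 1.0],   # one tight cluster
          [8.0, 8.2], [8.1, 7.9], [7.9, 8.1]]   # another, far away

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print(kmeans.labels_)  # e.g. [0 0 0 1 1 1]: nearby points share a group
```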

Clustering algorithms are widely used in medical imaging—they can even help detect brain tumors.

What’s Next?

We’ve made great strides teaching computers to learn the ways humans do, both supervised and unsupervised. As Andrew Ng, cofounder of Google Brain, famously said, “If a typical person can do a mental task with less than one second of thought, we can probably automate it using AI, either now or in the near future.”

But human intelligence is much more than categorizing and grouping.

Nobel Laureate in Economics Daniel Kahneman categorized human cognition into System 1 and System 2. The current state of AI and machine learning is akin to System 1: intuitive, fast, unconscious, and habitual. These are all necessary for predicting things like nationality and age, or grouping dots together.

System 2 is more complex. It’s slow, and it entails logic and reasoning. This is the future of AI research, according to Yoshua Bengio and Yann LeCun, who won the Turing Award for their work on deep learning. They believe the future won’t be about supervised or unsupervised learning, but self-supervised learning. “Basically, it’s the idea of learning to represent the world before learning a task. This is what babies and animals do,” says LeCun.

The journey toward achieving human intelligence with machines is a long one, and there’s still a lot of research to be done. But every time a computer adjusts its model, it brings us a little closer to a world of higher efficiency, safer human workplaces, and better machine companions.
