Humans Learn Faster Than AI: Will This Continue, or Will Machines Overtake?

Researchers found that humans draw on a lot of background knowledge before starting to play a new game. This prior knowledge makes it easier for humans to learn the game quickly and play it well.

The field of artificial intelligence (AI) has progressed so far that its applications have replaced humans in a number of jobs. Various sectors, including BPO, automobile, law, and manufacturing, are already undergoing transformation owing to the rise of AI. Researchers and scientists across the globe continue to make progress in automating tasks and replacing humans in as many jobs as possible, and AI has penetrated almost every sector. A variety of learning algorithms have been implemented to train AI systems to perform these functions.

DeepMind Technologies published a paper in 2013 explaining how neural networks learn to play video games just by looking at the screen. The company used 1980s video games to demonstrate that AI could defeat the best human players at those games, and was later purchased by Google for $400 million. It also used a deep learning algorithm to train an AI system that defeated the world’s best players of the game Go. Though machines were able to achieve these feats, there was a major limitation: compared to humans, they took far too long to learn. The question arose: what enables humans to learn so much faster than machines?
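As a rough illustration of the pixel-to-action setup described in DeepMind's work, the sketch below shows a Q-network that maps a stack of screen frames to one value per joystick action and picks actions epsilon-greedily. This is an assumed, simplified example, not DeepMind's code: the network shape, action count, hyperparameters, and use of PyTorch are all illustrative choices.

import random
import torch
import torch.nn as nn

N_ACTIONS = 18  # full Atari joystick action set

class PixelQNetwork(nn.Module):
    def __init__(self, n_actions=N_ACTIONS):
        super().__init__()
        # Convolutional stack reads a stack of 4 grayscale 84x84 frames.
        self.features = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
            nn.Linear(512, n_actions),  # one Q-value per action
        )

    def forward(self, frames):
        return self.head(self.features(frames))

def select_action(q_net, frames, epsilon):
    # Epsilon-greedy: explore randomly with probability epsilon,
    # otherwise pick the action with the highest predicted Q-value.
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(q_net(frames.unsqueeze(0)).argmax(dim=1).item())

# Example call with a dummy frame stack:
q_net = PixelQNetwork()
dummy_frames = torch.zeros(4, 84, 84)
print(select_action(q_net, dummy_frames, epsilon=0.1))

Training such a network takes millions of frames of trial and error, which is exactly the slowness the study described below set out to explain.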

Rachit Dubey and colleagues from the University of California, Berkeley, found an answer to this question. They studied how humans interact with these games and found that humans draw on a lot of background knowledge before starting to play a new game. This prior knowledge makes it easier for humans to learn faster. On the other hand, when humans play games in which nothing is familiar to them, they learn much more slowly; in that case, their learning speed is roughly the same as that of machines.

Dubey and colleagues used a game based on Montezuma’s Revenge, which was released for the Atari 8-bit computer in 1984. They asked 40 workers from Amazon’s crowdsourcing site Mechanical Turk to play this game, offering $1 to finish it, and gave them no instructions or manual. The researchers said, “This is not overly surprising as one could easily guess that the game’s goal is to move the robot sprite towards the princess by stepping on the brick-like objects and using ladders to reach the higher platforms while avoiding the angry pink and the fire objects.”

Humans took nearly a minute of play and about 3,000 keyboard actions to finish the game. For machines, learning to finish it was far harder: it took nearly four million keyboard actions, equal to almost 37 hours of continuous play. Many deep learning algorithms could not finish the game at all, because the only feedback came from finishing it; the reward was too sparse to learn from. Humans, however, did not approach the game with a blank slate. They already knew to avoid fire, jump over gaps, and climb ladders. Machines had none of this knowledge.
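To see why such sparse feedback hurts so much, consider a toy corridor where reward is granted only at the far end. This is a hypothetical example, not the paper's setup, and the numbers are illustrative: an explorer with no prior has to stumble onto the goal by chance, while one with a human-like "head for the goal" prior reaches it far sooner.

import random

CORRIDOR_LENGTH = 30  # assumed size of the toy level

def steps_to_first_reward(right_bias, max_steps=1_000_000):
    # Walk left/right from position 0; reward is only given at the far end.
    # right_bias is the probability of stepping toward the goal.
    pos, steps = 0, 0
    while pos < CORRIDOR_LENGTH and steps < max_steps:
        pos += 1 if random.random() < right_bias else -1
        pos = max(pos, 0)
        steps += 1
    return steps

random.seed(0)
blank_slate = steps_to_first_reward(right_bias=0.5)  # no prior knowledge
with_prior = steps_to_first_reward(right_bias=0.9)   # "head for the goal" prior
print(f"random explorer:       {blank_slate} steps to first reward")
print(f"prior-guided explorer: {with_prior} steps to first reward")

Until the first reward is seen, a learning algorithm has nothing to learn from, which is why purely random exploration needs so many more actions than exploration guided by prior knowledge.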

In the next experiment, the researchers created another version of the game in which entities such as ladders, keys, platforms, and enemies were redrawn with different textures. This change was made to render the players’ prior knowledge irrelevant while keeping the underlying dynamics the same. The researchers found a significant decrease in the learning speed of humans after the textures were changed, whereas the change made no difference to the learning speed of the machine algorithms. The study helps computer scientists working on machine learning algorithms design them around the ways humans and machines actually learn, and these findings could lead to further developments that let machines catch up with humans, and perhaps outperform them, in the future.
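The re-skinning manipulation can be thought of as a simple palette remap that changes how entities are drawn while leaving positions, physics, and rewards untouched. The sketch below is an assumed illustration using NumPy, not the authors' code, and the palette indices are hypothetical: a pixel-based learner sees an equally (un)informative frame either way, while a human loses the familiar ladder, key, and enemy cues.

import numpy as np

def reskin(frame, palette_map):
    # Replace each palette index in a frame with a new one.
    # frame: 2-D array of palette indices (H x W), as an 8-bit era game would use.
    # palette_map: dict mapping original palette index -> replacement index.
    out = frame.copy()
    for old, new in palette_map.items():
        out[frame == old] = new
    return out

# Hypothetical palette indices, purely for the sake of the example.
LADDER, KEY, ENEMY, FIRE = 3, 5, 7, 9
scramble = {LADDER: 42, KEY: 17, ENEMY: 63, FIRE: 28}

frame = np.random.default_rng(0).integers(0, 16, size=(84, 84), dtype=np.uint8)
reskinned = reskin(frame, scramble)
# Only how entities are drawn changes; the game dynamics are untouched.
assert frame.shape == reskinned.shape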
