Overcoming ‘Catastrophic Forgetting’: A Leap in AI-Powered Lifelong Learning – Neuroscience News

Summary: Researchers are investigating a significant roadblock in machine learning known as ‘catastrophic forgetting’, a phenomenon in which AI systems lose information from previous tasks while learning new ones.

Research shows that, like humans, AI remembers information better when faced with different tasks rather than those sharing similar characteristics. Insights from the study could advance continuous learning in AI systems, helping them better mimic human learning processes and boosting their performance.

Key facts:

  1. “Catastrophic forgetting” is a challenge in AI systems, where they forget information from previous tasks while learning new ones.
  2. Artificial neural networks remember information better when presented with a variety of tasks, rather than tasks that share similar attributes.
  3. Insights from the study could bridge the gap between machine learning and human learning, potentially leading to more sophisticated AI systems.

Source: Ohio State University

Memories can be as difficult for machines to retain as they are for humans.

To help understand why artificial agents develop holes in their own cognitive processes, electrical engineers at The Ohio State University analyzed how much a process called continuous learning affects their overall performance.

In essence, the goal of these systems would be to one day mimic the learning abilities of humans. Credit: Neuroscience News

Continuous learning is when a computer is trained to continuously learn a sequence of tasks, using knowledge accumulated from old tasks to better learn new tasks.

Yet one of the major hurdles scientists have yet to overcome to achieve such feats is learning how to circumvent the machine learning equivalent of memory loss, a process known in AI agents as catastrophic forgetting.

As artificial neural networks are trained on one new task after another, they tend to lose information gained from previous tasks, a problem that could grow more serious as society increasingly relies on artificial intelligence systems, said Ness Shroff, an Ohio Eminent Scholar and professor of computer science and engineering at The Ohio State University.
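Catastrophic forgetting shows up even in the smallest possible model. The sketch below is a toy illustration of the phenomenon (not the researchers' actual setup): a one-parameter model is first fit to one task, then retrained on a conflicting task, and the error on the original task is measured before and after.

```python
import numpy as np

def train(w, xs, ys, lr=0.1, steps=200):
    # Plain gradient descent on squared error for the model y = w * x.
    for _ in range(steps):
        grad = np.mean(2 * (w * xs - ys) * xs)
        w -= lr * grad
    return w

def loss(w, xs, ys):
    # Mean squared error of the model on a task's data.
    return float(np.mean((w * xs - ys) ** 2))

xs = np.array([1.0, 2.0, 3.0])
ys_a = 2.0 * xs    # Task A: targets follow y = 2x
ys_b = -2.0 * xs   # Task B: conflicting targets, y = -2x

w = 0.0
w = train(w, xs, ys_a)
loss_a_before = loss(w, xs, ys_a)  # near zero: task A is learned

w = train(w, xs, ys_b)             # now learn the conflicting task B
loss_a_after = loss(w, xs, ys_a)   # large: task A has been overwritten

print(loss_a_before, loss_a_after)
```

Because both tasks pull on the same weight, fitting the second task drags the model away from the first task's solution, which is the overwriting dynamic the article describes at toy scale.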

“As applications of automated driving or other robotic systems are taught new things, it’s important they don’t forget the lessons they’ve already learned for their safety and ours,” Shroff said. “Our research delves into the intricacies of continuous learning in these artificial neural networks, and what we’ve discovered are insights that begin to bridge the gap between how a machine learns and how a human learns.”

The researchers found that in the same way that people might have difficulty recalling conflicting facts about similar scenarios but recall inherently different situations with ease, artificial neural networks may better recall information when faced with successively different tasks, rather than those that share similar characteristics, Shroff said.
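A minimal numerical sketch (my own toy construction, not the paper's model) of why dissimilar tasks interfere less: give a two-weight linear model one task per input feature. A follow-up task that reuses the first task's feature overwrites what was learned, while a follow-up task that relies on the other feature leaves the first task intact.

```python
import numpy as np

def train(w, x, y, lr=0.1, steps=300):
    # Gradient descent on squared error for the prediction w @ x.
    for _ in range(steps):
        err = w @ x - y
        w = w - lr * 2 * err * x
    return w

def loss(w, x, y):
    return float((w @ x - y) ** 2)

w = np.zeros(2)
task_a = (np.array([1.0, 0.0]), 2.0)  # task A depends only on feature 0
w = train(w, *task_a)

# Dissimilar follow-up task: uses feature 1, so it doesn't disturb feature 0.
w_dissim = train(w.copy(), np.array([0.0, 1.0]), 3.0)
# Similar follow-up task: reuses feature 0 with a conflicting target.
w_sim = train(w.copy(), np.array([1.0, 0.0]), -2.0)

print(loss(w_dissim, *task_a))  # task A retained
print(loss(w_sim, *task_a))     # task A forgotten
```

The dissimilar task updates a different weight, so the two tasks coexist; the similar task competes for the same weight and the newer task wins, mirroring the intuition that overlapping tasks interfere.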

The team, which includes Ohio State postdoctoral researchers Sen Lin and Peizhong Ju and professors Yingbin Liang and Shroff, will present their research this month at the 40th International Conference on Machine Learning (ICML) in Honolulu, Hawaii, a flagship conference on machine learning.

While it can be difficult to teach autonomous systems to exhibit this kind of dynamic and lifelong learning, possessing such capabilities would allow scientists to scale up machine learning algorithms at a faster rate and easily adapt them to handle changing environments and unexpected situations. In essence, the goal of these systems would be to one day mimic the learning abilities of humans.

Traditional machine learning algorithms train on data all at once, but this team’s findings showed that factors like task similarity, negative and positive correlations, and even the order in which an algorithm is taught a task matter in how long an artificial network retains certain knowledge.

For example, to optimize an algorithm’s memory, Shroff said, dissimilar tasks should be taught early in the continuous learning process. This method expands the network’s capacity for new information and improves its ability to learn more similar tasks later on.

Their work is especially important as understanding the similarities between machines and human brains could pave the way for a deeper understanding of artificial intelligence, Shroff said.

“Our work heralds a new era of intelligent machines that can learn and adapt like their human counterparts,” he said.

Funding: The study was supported by the National Science Foundation and the Army Research Office.

About this artificial intelligence and learning research news

Author: Tatyana Woodall
Source: Ohio State University
Contact: Tatyana Woodall – Ohio State University
Image: The image is credited to Neuroscience News

