June 9, 2023: New Research from the "Mathematics of Neural Networks" Team

A baseball player cannot literally keep their eye on the ball.  In reality, the speed of an incoming pitch far exceeds the limits of human vision.  To forecast the ball’s trajectory accurately, a hitter must draw on experience from previous pitches.
In this example, the hitter relies not on sight in the moment but on accumulated visual experience to predict the ball’s movement.  Like human brains, artificial neural networks are adept at anticipating what comes next in a sequence (ChatGPT, for example, predicts patterns in natural language), but developers do not fully understand how these networks make such connections.  This challenge is known as the “explainability problem.”

The “Mathematics of Neural Networks” team has developed a new mathematical technique, one of the first of its kind, that can look inside the connections of a neural network and make sense of how it predicts upcoming events.

To achieve this breakthrough, the team trained a computational model to predict the next frames in a video sequence.  Just as the baseball player must rely on memory to anticipate the trajectory of a speeding ball, the model received no visual input; instead, it relied on its internal dynamics.  Using this mathematical technique, developed over the course of the team’s tenure with the Western Academy, the team could trace how the model generates its predictions.
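For readers who want a concrete picture of this kind of experiment, here is a minimal sketch; it is not the team’s actual model or analysis technique, and the toy moving-dot video, network size, and training settings are all illustrative assumptions.  A small recurrent network is trained to predict the next frame of a synthetic video, then rolled out “closed-loop,” receiving no further visual input and continuing the sequence from its internal state alone.

```python
# Minimal sketch (illustrative only, not the published model): train a
# recurrent network to predict the next video frame, then let it continue
# a sequence without any visual input.
import torch
import torch.nn as nn

H = W = 16                       # toy frame resolution (assumption)
T = 20                           # frames per clip (assumption)

def moving_dot_clip():
    """Synthetic stand-in for video data: a dot sweeping across the frame."""
    clip = torch.zeros(T, H * W)
    for t in range(T):
        clip[t, (H // 2) * W + (t % W)] = 1.0
    return clip

class NextFramePredictor(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.rnn = nn.GRU(H * W, hidden, batch_first=True)
        self.readout = nn.Linear(hidden, H * W)

    def forward(self, frames, h=None):
        out, h = self.rnn(frames, h)
        return self.readout(out), h

model = NextFramePredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Teacher-forced training: predict frame t+1 from frames up to t.
for step in range(200):
    clip = moving_dot_clip().unsqueeze(0)       # shape (1, T, H*W)
    pred, _ = model(clip[:, :-1])
    loss = nn.functional.mse_loss(pred, clip[:, 1:])
    opt.zero_grad()
    loss.backward()
    opt.step()

# Closed-loop rollout: after a short visual "prime," the network sees
# nothing further -- each prediction is fed back as the next input, so
# any accurate continuation must come from internal dynamics alone.
with torch.no_grad():
    clip = moving_dot_clip().unsqueeze(0)
    _, h = model(clip[:, :5])                   # prime on 5 real frames
    frame = clip[:, 4:5]                        # last frame actually seen
    rollout = []
    for _ in range(10):
        frame, h = model(frame, h)
        rollout.append(frame.squeeze(0))
print("predicted", len(rollout), "frames with no visual input")
```

The closed-loop rollout is the analogue of the hitter swinging without sight: once frames stop arriving, the continuation of the sequence depends entirely on what the network has learned internally, which is exactly the regime an interpretability technique would need to explain.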

As AI technologies grow increasingly sophisticated, developers need a better understanding of how these systems actually work.  This study promises to be an important contribution to research on neural networks and their possibilities.

Read about it in the latest issue of Nature Communications: https://www.nature.com/articles/s41467-023-39076-2

Image: article title, “Waves traveling over a map of visual space can ignite short-term predictions of sensory input”

Article updated:  June 27, 2023