Artificial neural networks mimic human brains, but the technology has its roots in physics. Thom Leach/Science Photo Library via Getty Images
If your jaw dropped as you watched the latest AI-generated video, your bank balance was saved from criminals by a fraud detection system, or your day was made a little easier because you were able to dictate a text message on the run, you have many scientists, mathematicians and engineers to thank.
But two names stand out for foundational contributions to the deep learning technology that makes those experiences possible: Princeton University physicist John Hopfield and University of Toronto computer scientist Geoffrey Hinton.
The two researchers were awarded the Nobel Prize in physics on Oct. 8, 2024, for their pioneering work in the field of artificial neural networks. Though artificial neural networks are modeled on biological neural networks, both researchers' work drew on statistical physics, hence the prize in physics.

The Nobel committee announces the 2024 prize in physics.
Atila Altuntas/Anadolu via Getty Images
How a neuron computes
Artificial neural networks owe their origins to studies of biological neurons in living brains. In 1943, neurophysiologist Warren McCulloch and logician Walter Pitts proposed a simple model of how a neuron works. In the McCulloch-Pitts model, a neuron is connected to its neighboring neurons and can receive signals from them. It can then combine those signals to send signals to other neurons.
But there is a twist: It can weigh signals coming from different neighbors differently. Imagine that you are trying to decide whether to buy a new bestselling phone. You talk to your friends and ask them for their recommendations. A simple strategy is to collect all friend recommendations and go with whatever the majority says. For example, you ask three friends, Alice, Bob and Charlie, and they say yay, yay and nay, respectively. This leads you to a decision to buy the phone because you have two yays and one nay.
However, you might trust some friends more because they have in-depth knowledge of technical gadgets. So you might decide to give more weight to their recommendations. For example, if Charlie is very knowledgeable, you might count his nay three times, and now your decision is to not buy the phone – two yays and three nays. If you're unfortunate enough to have a friend whom you completely distrust in technical gadget matters, you might even assign them a negative weight, so that their yay counts as a nay and their nay counts as a yay.
Once you've made your own decision about whether the new phone is a good choice, other friends can ask you for your recommendation. Similarly, in artificial and biological neural networks, neurons can aggregate signals from their neighbors and send a signal to other neurons. This capability leads to a key distinction: Is there a cycle in the network? For example, if I ask Alice, Bob and Charlie today, and tomorrow Alice asks me for my recommendation, then there is a cycle: from Alice to me, and from me back to Alice.
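The weighted-vote story above is essentially what a McCulloch-Pitts-style neuron computes. Here is a minimal sketch (not from the article; the `neuron` function and the vote encoding are illustrative assumptions): each input is multiplied by a trust weight, the results are summed, and the neuron "fires" only if the total clears a threshold.

```python
# A McCulloch-Pitts-style neuron: weighted votes compared to a threshold.
def neuron(inputs, weights, threshold=0):
    """Fire (return 1) if the weighted sum of inputs meets the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Alice, Bob and Charlie vote yay (+1), yay (+1), nay (-1).
votes = [1, 1, -1]

# Equal trust: two yays beat one nay, so the neuron fires (buy the phone).
print(neuron(votes, [1, 1, 1]))   # 1

# Charlie's vote weighted three times: two yays vs. three nays, don't buy.
print(neuron(votes, [1, 1, 3]))   # 0
```

A negative weight works the same way: multiplying a distrusted friend's yay (+1) by -1 makes it count as a nay.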

In recurrent neural networks, neurons communicate back and forth rather than in just one direction.
Zawersh/Wikimedia, CC BY-SA
If the connections between neurons don't have a cycle, computer scientists call it a feedforward neural network. The neurons in a feedforward network can be arranged in layers. The first layer consists of the inputs. The second layer receives its signals from the first layer, and so on. The last layer represents the outputs of the network.
However, if there is a cycle in the network, computer scientists call it a recurrent neural network, and the arrangements of neurons can be more complicated than in feedforward neural networks.
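The layer-by-layer flow in a feedforward network can be sketched in a few lines (a toy illustration, not from the article; the weights here are arbitrary made-up numbers): each layer's outputs become the next layer's inputs, and signals never flow backward.

```python
# A tiny feedforward pass: signals flow layer by layer, with no cycles.
def layer(inputs, weights):
    """Each output neuron takes a weighted sum of all the layer's inputs."""
    return [sum(x * w for x, w in zip(inputs, row)) for row in weights]

inputs = [1.0, 0.5]                                 # first layer: the inputs
hidden = layer(inputs, [[0.6, -0.4], [0.3, 0.8]])   # middle layer
output = layer(hidden, [[1.0, 1.0]])                # last layer: the output
print(output)                                       # close to [1.1]
```

A recurrent network breaks this one-way picture: some outputs feed back into earlier neurons, so the computation has to be unrolled over time instead of read off layer by layer.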
Hopfield networks
The initial inspiration for artificial neural networks came from biology, but soon other fields began to shape their development, including logic, mathematics and physics. The physicist John Hopfield used ideas from physics to study a particular type of recurrent neural network, now called the Hopfield network. In particular, he studied their dynamics: What happens to the network over time?
Such dynamics are also important when information spreads through social networks. Everyone is aware of memes going viral and echo chambers forming in online social networks. These are all collective phenomena that ultimately arise from simple information exchanges between people in the network.
Hopfield was a pioneer in using models from physics, especially those developed to study magnetism, to understand the dynamics of recurrent neural networks. He also showed that their dynamics can give such neural networks a form of memory.
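This memory effect can be demonstrated in a small sketch (illustrative code, not from the article; the specific pattern and the simple one-pattern Hebbian weights are assumptions): a pattern is stored in the connection weights, and when the network starts from a corrupted copy, its dynamics pull it back to the stored pattern.

```python
# A sketch of a Hopfield network: store one pattern, then recover it
# from a corrupted copy by repeatedly updating the neurons.
def sign(x):
    return 1 if x >= 0 else -1

pattern = [1, -1, 1, -1, 1]   # the "memory" to store, as +1/-1 neuron states
n = len(pattern)

# Hebbian rule: neurons that agree in the pattern get a positive weight.
W = [[0 if i == j else pattern[i] * pattern[j] for j in range(n)]
     for i in range(n)]

state = [1, -1, -1, -1, 1]    # corrupted copy: the third neuron is flipped

# Run the dynamics: each neuron aligns with its weighted neighbors.
for _ in range(5):
    for i in range(n):
        state[i] = sign(sum(W[i][j] * state[j] for j in range(n)))

print(state == pattern)       # True: the stored memory is restored
```

This error-correcting recall is the "spellchecker-like" behavior mentioned below: a noisy input settles into the nearest stored pattern.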
Boltzmann machines and backpropagation
During the 1980s, Geoffrey Hinton, computational neurobiologist Terrence Sejnowski and others extended Hopfield's ideas to create a new class of models called Boltzmann machines, named for the 19th-century physicist Ludwig Boltzmann. As the name implies, the design of these models is rooted in the statistical physics pioneered by Boltzmann. Unlike Hopfield networks, which could store patterns and correct errors in patterns – like a spellchecker does – Boltzmann machines could generate new patterns, thereby planting the seeds of the modern generative AI revolution.
Hinton was also part of another breakthrough that happened in the 1980s: backpropagation. If you want artificial neural networks to do interesting tasks, you have to somehow choose the right weights for the connections between artificial neurons. Backpropagation is a key algorithm that makes it possible to select weights based on the performance of the network on a training dataset. However, it remained challenging to train artificial neural networks with many layers.
In the 2000s, Hinton and his co-workers cleverly used Boltzmann machines to train multilayer networks by first pretraining the network layer by layer and then using another fine-tuning algorithm on top of the pretrained network to further adjust the weights. Multilayered networks were rechristened deep networks, and the deep learning revolution had begun.
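The core idea of backpropagation – adjust each weight in the direction that reduces the network's error on training data – can be shown on the smallest possible case (a toy sketch, not the full algorithm from the article; the single-weight setup, learning rate and target values are illustrative assumptions):

```python
# A minimal sketch of the idea behind backpropagation: nudge a weight
# downhill on the squared error for one neuron and one training example.
x, target = 2.0, 1.0
w = 0.0                        # start from an arbitrary weight
lr = 0.1                       # learning rate: size of each nudge

for step in range(50):
    y = w * x                  # forward pass: compute the output
    error = y - target         # how far off is the output?
    grad = 2 * error * x       # backward pass: derivative of error**2 w.r.t. w
    w -= lr * grad             # update the weight against the gradient

print(round(w, 3))             # 0.5, since 0.5 * 2.0 hits the target 1.0
```

In a real multilayer network, the same gradient computation is propagated backward through every layer via the chain rule, which is where the name comes from.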
A computer scientist explains machine learning to a child, to a high school student, to a college student, to a grad student and then to a fellow expert.
AI pays it back to physics
The Nobel Prize in physics shows how ideas from physics contributed to the rise of deep learning. Now deep learning has begun to pay its debt back to physics by enabling accurate and fast simulations of systems ranging from molecules and materials all the way to the Earth's entire climate.
By awarding the Nobel Prize in physics to Hopfield and Hinton, the prize committee has signaled its hope in humanity's ability to use these advances to promote human well-being and to build a sustainable world.

Ambuj Tewari receives funding from the NSF.












