
The Brains Behind Today’s AI Revolution

Electronic brain, a concept of artificial intelligence. Credit: iStock/Chor muang


Today’s artificial intelligence revolution is radically reshaping the way we live, work and play. Decades ago, Professor Terry Sejnowski of the School of Biological Sciences and Salk Institute helped pioneer the foundation that set the stage for the current surge of AI advancements. A professor in the Department of Neurobiology, Sejnowski has conducted research in neural networks and computational neuroscience that has played an indispensable role in the AI we regularly interact with today. In recognition of these contributions, Sejnowski was honored with the 2024 Brain Prize, received an honorary doctorate in science from Princeton University and was named ARCS Foundation of San Diego Scientist of the Year. He was also recently recognized, alongside Nobel Prize winner Geoffrey Hinton, for his contributions to AI development. His book, “ChatGPT and the Future of AI,” was published by the MIT Press on October 29, and he writes a free Substack on Brains and AI. In this interview he discusses the path of brain research that led to today’s AI revolution, how systems such as ChatGPT are evolving and the future of AI.

Professor Terry Sejnowski, Department of Neurobiology, School of Biological Sciences, and Salk Institute.

What were the early days of artificial intelligence research like?

Multilayer neural network learning was invented in the 1980s when it was generally believed to be impossible. I was a pioneer in developing and using these learning algorithms. Back then, we didn’t know how much computation we would need to make progress with difficult problems in AI. Forty years later, we have the answer: Deep learning networks today have billions of neuron-like units and trillions of synapse-like weights.
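The multilayer learning Sejnowski describes can be illustrated with a toy example. The sketch below (illustrative code, not Sejnowski’s actual work) trains a tiny two-layer network by backpropagation on XOR, a task a single-layer network cannot learn; the layer of hidden units between input and output is what makes the problem solvable. The network size, learning rate and iteration count are arbitrary choices for the demonstration.

```python
import math
import random

random.seed(0)
sig = lambda z: 1.0 / (1.0 + math.exp(-z))

# XOR training data: input pair -> target output.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

H = 4                                              # number of hidden units
W1 = [[random.gauss(0, 1) for _ in range(H)] for _ in range(2)]
b1 = [0.0] * H
W2 = [random.gauss(0, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    """One forward pass: input -> hidden layer -> output."""
    h = [sig(sum(x[i] * W1[i][j] for i in range(2)) + b1[j]) for j in range(H)]
    out = sig(sum(h[j] * W2[j] for j in range(H)) + b2)
    return h, out

def loss():
    return sum((forward(x)[1] - y) ** 2 for x, y in data) / len(data)

initial_loss = loss()
lr = 1.0
for _ in range(5000):
    for x, y in data:
        h, out = forward(x)
        # Backpropagation: push the output error backward through the layers.
        d_out = (out - y) * out * (1 - out)
        d_h = [d_out * W2[j] * h[j] * (1 - h[j]) for j in range(H)]
        for j in range(H):
            W2[j] -= lr * d_out * h[j]
            b1[j] -= lr * d_h[j]
            for i in range(2):
                W1[i][j] -= lr * d_h[j] * x[i]
        b2 -= lr * d_out
final_loss = loss()
```

Each weight makes only a small adjustment on each step, yet the task ends up distributed across all the units, the same property Sejnowski notes in the early network models.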

What was your role in the convergence of neuroscience and AI?

Patricia Churchland and I wrote a book in 1992 called “The Computational Brain.” The patterns of activity of the units in early network models resembled the responses of neurons in brains. This demonstrated that with learning, it was possible to distribute a task across many units, each making a small contribution. These networks had hundreds of units and thousands of weights, tiny by today’s standards. I was a pioneer in computational neuroscience, which has become a vital part of neuroscience. I continue to have a leadership role in machine learning and AI as the President of the Neural Information Processing Systems Foundation, which organizes the largest AI meeting.

Your vision for AI was modeled after the human brain. Why did you decide to follow that path of thinking?

The only proof that problems in AI, from vision to language to planning, could be solved was that nature had already solved them. In contrast, AI researchers tried to solve difficult computational problems by writing computer programs using logic, rules and abstract symbols, with only modest results. However, this approach was the “only game in town” when I was starting my career. Over the last ten years, the shift in AI has vindicated our intuition that “the only game in town” is nature and that we have much more to learn by studying brains.

Professor Terry Sejnowski has authored "ChatGPT and the Future of AI," a new book on the deep language revolution.

How did neuroscience and neural networks research lead us to where we are today with AI?

Brains have many interacting neurons that learn through experience. Learning was the secret sauce missing from AI. Network learning scales beautifully with size when combined with abundant data. As network models grew larger, a threshold was reached in 2010 when object recognition in images and language translation became possible. More recently, you can talk with generative AI and ask it to produce photorealistic images on demand.

You’ve studied how ChatGPT mirrors its users. How so?

ChatGPT was trained on the world’s text database, which included many textbooks, novels and computer programs. GPT can mimic these authors and even write computer code. GPT will respond to sophisticated questions with sophisticated answers and to dumb questions with dumb answers. It mirrors your intelligence if you don’t tell GPT what persona you want it to take. In a way, ChatGPT is testing your intelligence, a reversal of the Turing Test, where humans judge the intelligence of AI.

Will the fields of AI and neuroscience continue to converge?

AI is an engineering task whose goal is to build an intelligent machine. Neuroscience is a scientific problem whose goal is to understand brains. But now that these two fields speak the same mathematical language, there is crosstalk between them that will benefit both. Both fields are undergoing revolutions that have accelerated progress. We are exploring a new computational universe, a far richer source of solutions to challenging problems across science and society than ever before.

What do you see coming next for the AI revolution? Are you working on anything now in this area that particularly excites you?

The transformers that power ChatGPT are feedforward models, in which information flows in one direction; however, the latest versions have incorporated recurrent network models, in which activity can circulate, resembling what happens in our cerebral cortex. This has implications for how brains generate thoughts and language. Transformers are pretrained and can be fine-tuned, but additional learning does not occur when answering questions. Transformers also need the equivalent of the hippocampus in our brains, which makes it possible for us to be lifelong learners.
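The feedforward-versus-recurrent distinction above can be made concrete with a small sketch (illustrative code with made-up weights, not an actual transformer). In a feedforward layer, each step's output depends only on that step's input; in a recurrent layer, a hidden state carries activity from step to step, so earlier inputs influence later outputs.

```python
import math
import random

random.seed(1)
N = 4                                              # units per layer
rand_mat = lambda r, c: [[random.gauss(0, 0.5) for _ in range(c)]
                         for _ in range(r)]
W_ff, W_in, W_rec = rand_mat(N, N), rand_mat(N, N), rand_mat(N, N)

def matvec(W, v):
    return [sum(W[i][j] * v[i] for i in range(len(v)))
            for j in range(len(W[0]))]

def feedforward(seq):
    # One-way flow: each output depends only on the input at that step.
    return [[math.tanh(z) for z in matvec(W_ff, x)] for x in seq]

def recurrent(seq):
    # A hidden state h circulates across steps, so activity from earlier
    # inputs persists and shapes later outputs.
    h = [0.0] * N
    out = []
    for x in seq:
        z = [a + b for a, b in zip(matvec(W_in, x), matvec(W_rec, h))]
        h = [math.tanh(v) for v in z]
        out.append(h)
    return out

seq = [[random.gauss(0, 1) for _ in range(N)] for _ in range(3)]
seq2 = [list(x) for x in seq]
seq2[0][0] += 1.0                                  # perturb only step 0

# Perturbing the first input leaves later feedforward outputs unchanged,
# but changes later recurrent outputs, because state circulates.
print(feedforward(seq)[1] == feedforward(seq2)[1])   # True
print(recurrent(seq)[1] == recurrent(seq2)[1])       # False
```

The final two lines show the key behavioral difference: only the recurrent model lets the past leak into the present, which is why circulating activity matters for sequential thought and language.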

