
A Deep Look into the AI Revolution

Virtual event explores the power of artificial intelligence to accelerate scientific discovery and shape biomedical research



“A Deep Look into the AI Revolution,” a virtual event hosted by the UC San Diego School of Biological Sciences and UCTV on Nov. 15, offered attendees from around the world a glimpse into how artificial intelligence is being used to accelerate scientific discovery and shape biomedical research, both in academia and industry.

“Artificial intelligence has reached into virtually every aspect of our lives and its capabilities continue to expand in multiple areas, from health care to education and commerce,” said Kit Pogliano, Dean of the School of Biological Sciences. “At UC San Diego, our world-class faculty and industry collaborators are leading the way in deploying this powerful new technology to dramatically accelerate discovery and take innovation to an entirely new level.”

The event featured four perspectives on the future of AI, ranging from AI-enabled simulations for developing new medicines and vaccines to unprecedented explorations of how the brain works, and beyond. Each member of the expert panel gave an informative and thought-provoking presentation before inviting questions submitted by participants. While their topics varied, one theme remained central: AI is a game-changer for scientific research, and its possible applications are endless.

Can AI untangle the brain’s complexities?

On any reliable list of 2023’s biggest buzzwords, ChatGPT is sure to be found near the top. The generative AI chatbot has dominated news headlines since its release by OpenAI just over a year ago, and in many ways has threatened to upend the way we live, work and learn.

Powered by deep learning algorithms and neural networks—which are modeled after the human brain—ChatGPT has astounded even the most seasoned experts with its ability to pass medical licensing exams, simplify complex articles and write computer programs with ease. If these large language models are based on the principles of neuroscience, then it stands to reason that their rapid advancement can help scientists learn more about how the brain makes decisions, solves problems or processes language—and how we might maximize its potential.

Terry Sejnowski

That’s exactly what Terry Sejnowski, distinguished professor in the Department of Neurobiology at UC San Diego and holder of the Francis Crick Chair at the Salk Institute for Biological Studies, explored during his presentation, which kicked off the virtual Deep Look event. A pioneering researcher who played a key role in the founding of deep learning and neural networks in the 1980s, Sejnowski is a leading expert at the intersection of AI and neuroscience, and offered attendees a glimpse into how the two fields are increasingly converging.

“We finally are making progress, both on understanding human intelligence by seeing how artificial intelligence is able to solve problems—and then, similarly, artificial intelligence, by looking at how nature solves problems, is improving the performance of large language models like ChatGPT,” said Sejnowski. “This is unprecedented in the development of AI: that these two groups are talking to each other and helping each other.”

Sejnowski explained that the transformer models built into ChatGPT that enable the chatbot to predict the next word in a sequence are already providing valuable insights into how the brain carries information and handles sensory input.
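The next-word prediction Sejnowski describes can be sketched in miniature: a language model assigns a raw score (logit) to every word in its vocabulary given the context, converts those scores to probabilities, and then picks or samples the next word. The tiny vocabulary and hand-set logits below are purely illustrative stand-ins for a trained transformer, not a real model.

```python
import numpy as np

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    z = np.exp(logits - np.max(logits))  # subtract max for numerical stability
    return z / z.sum()

# Toy vocabulary and hand-set logits for a context like "the brain processes ..."
vocab = ["information", "language", "banana", "signals"]
logits = np.array([3.1, 2.4, -1.0, 2.8])  # illustrative scores, not learned

probs = softmax(logits)
next_word = vocab[int(np.argmax(probs))]  # greedy decoding: pick the top word
```

A real transformer computes those logits with stacked attention layers over billions of parameters, but the final prediction step is exactly this distribution-over-vocabulary.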

“It’s a really interesting time,” Sejnowski said at the close of his presentation. “It’s a new era in AI: ‘terra incognita.’”

Can AI help us avert another pandemic crisis?

It’s been nearly four years since the COVID-19 pandemic spurred a global health emergency—and though restrictions have lifted and vaccines have enabled a return to relative normalcy, the virus’s ability to mutate and evolve new variants remains a real concern.

While there are still many unanswered questions about SARS-CoV-2—the virus that causes COVID-19—scientists today understand a lot more about its dynamics than they did in those early days of confusion and uncertainty. That’s thanks in part to the futuristic, AI-based simulations that have come out of the Amaro Lab at UC San Diego, providing key insights into how the virus moves and stays infectious when aerosolized.

Rommie Amaro

During her presentation, Rommie Amaro, professor of molecular biology and co-director of the university’s new Meta-Institute for Airborne Disease in a Changing Climate, discussed how state-of-the-art computational methods, biological data and AI have together enabled her team to build highly detailed, animated 3D computer models that offer a groundbreaking look at viruses at the molecular and atomic level. These simulations also can be used to reveal new binding sites for drugs and vaccines.
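Molecular dynamics, the workhorse behind simulations like these, advances every atom's position and velocity through many tiny time steps under computed forces. The single particle on a harmonic "spring" below is a deliberately minimal stand-in for the enormous force fields and atom counts of a real viral simulation, shown only to illustrate the integration step.

```python
import numpy as np

def velocity_verlet(x, v, force, dt, steps, m=1.0):
    """Integrate Newton's equations of motion with the velocity Verlet scheme."""
    a = force(x) / m
    for _ in range(steps):
        x = x + v * dt + 0.5 * a * dt**2   # advance the position
        a_new = force(x) / m               # force at the new position
        v = v + 0.5 * (a + a_new) * dt     # average old and new accelerations
        a = a_new
    return x, v

# Toy system: one particle on a harmonic spring, F = -k x with k = 1
spring = lambda x: -x
x, v = velocity_verlet(x=1.0, v=0.0, force=spring, dt=0.01, steps=1000)

# Velocity Verlet nearly conserves total energy: E = v^2/2 + x^2/2 stays near 0.5
energy = 0.5 * v**2 + 0.5 * x**2
```

The appeal of this integrator is its long-term energy stability, which is why variants of it underpin production molecular dynamics codes; AI enters by steering which configurations get simulated and by speeding up the force calculations.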

“One of the key things that these simulations allow us to do is to see things that experimentalists can’t see with their imaging techniques,” said Amaro. “What we’ve found is that AI makes our simulations more efficient,” she added, using an example of the SARS-CoV-2 spike protein—the first point of contact the virus has with human cells.

Not only do the Amaro Lab’s simulations of a COVID-19 spike protein give scientists an atomic-level view of its structure, but their ability to animate its motion is what sets them apart from traditional imaging techniques.

“Experimental scientists honestly really have no way to actually see what’s happening to the virus … but our simulations are giving these never-before-seen views of this really complex environment and how the different molecules that are surrounding the virus in these aerosol particles are affecting its structure and its ability to stay infectious,” said Amaro. “What we’re doing here with these AI-driven or enhanced simulations is really getting down into the details of airborne disease and pathogen transmission and hoping to keep everyone safer and healthier as we go forward.”

Can AI revolutionize scientific research?

At the start of his presentation, Gavin Hartigan, a vice president of research and development for analytical instruments at Thermo Fisher Scientific, asked attendees to consider a world where AI helps doctors predict diseases before they strike, scientists design new green materials, and semiconductor chip makers produce unimaginable computing power.

Bringing an industry perspective to the Deep Look event, Hartigan shared how AI is driving innovation and new product development across an array of disciplines, including life sciences and materials science.

Gavin Hartigan

At Thermo Fisher, Hartigan leads a large global team that develops advanced electron microscopy products that provide scientists with atomic-scale insights in the form of high-resolution images of specimens. In electron microscopy specifically, he explained, AI can be used to quickly detect the parts of an image that are most relevant to a scientist’s experiment.

“Getting images, interpreting those and turning them into actionable data for scientists to use to make important decisions or discoveries—that’s what we want to apply AI to,” Hartigan said.

Hartigan expects AI to transform the way scientists conduct research, aided by new instruments that enable them to put aside the more mundane tasks and focus on accelerating innovation and finding new insights. For example, AI and analytical software that can segment and label data can save considerable amounts of time—time that can instead be spent on higher-level tasks that only a human can perform. He predicts that AI is on the cusp of revolutionizing the world as we know it.
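The segment-and-label step Hartigan describes can be approximated with classical image processing: threshold an image into foreground and background, then group connected foreground pixels into labeled regions. The tiny synthetic "micrograph" and SciPy's connected-component labeling below are a simple stand-in for the learned models he is describing, which handle far noisier real data.

```python
import numpy as np
from scipy import ndimage

# Toy 'micrograph': two bright blobs on a dark background
image = np.zeros((8, 8))
image[1:3, 1:3] = 0.9   # blob 1: a 2x2 bright region
image[5:7, 4:7] = 0.8   # blob 2: a 2x3 bright region

mask = image > 0.5                       # threshold: foreground vs. background
labels, n_regions = ndimage.label(mask)  # assign each connected blob its own ID

# Measure each labeled region, e.g. its area in pixels
sizes = ndimage.sum(mask, labels, index=range(1, n_regions + 1))
```

Automating exactly this kind of bookkeeping, at scale and on far messier images, is where the time savings Hartigan mentions come from.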

“It seems to me that the future of science is becoming fairly inseparable from the future of AI,” said Hartigan. “AI is a tool for innovation, but it’s a very special tool as it can expose knowledge that was previously obscured. When you consider the impact AI can have on human health, sustainable technology, understanding the universe, I really don’t think it’s overly dramatic to think about the advent of the compass, the printing press or the internet when I think about the magnitude of what’s here in front of us.”

Will AI transform scientific imaging?

Several years ago, Uri Manor, an assistant professor in the Department of Cell and Developmental Biology, read an article about how AI could be used to increase the resolution of photographs. As a microscopist whose work involves developing and applying advanced computational and molecular tools for imaging living cells, this idea sparked a question in his mind: Could scientists similarly use AI to improve imaging data from microscopes?

“Ultimately, we found out that the answer is yes,” Manor told attendees during his presentation, going on to explain how he and an interdisciplinary team of researchers at the Salk Institute for Biological Studies developed a highly sophisticated computational device and trained a deep learning model to convert low-resolution images to high-resolution in a fraction of the time. “With this model, we can reconstruct 3D structures from the brain with higher speed and accuracy than we ever could before.”
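The low-to-high-resolution mapping Manor describes is learned by a deep network, but the task itself can be illustrated with the classical baseline such models are trained to surpass: simple interpolation that multiplies an image's pixel count without adding information. The nearest-neighbor upscaler below is a toy stand-in, not the team's model.

```python
import numpy as np

def upscale_nearest(img, factor=2):
    """Nearest-neighbor upscaling: replicate each pixel into a factor x factor block.
    A learned super-resolution model replaces this blocky copy with inferred detail."""
    return np.kron(img, np.ones((factor, factor), dtype=img.dtype))

low_res = np.array([[0.0, 1.0],
                    [1.0, 0.0]])
high_res = upscale_nearest(low_res, factor=2)  # 2x2 image becomes 4x4
```

Where interpolation can only smear or duplicate existing pixels, a trained model draws on patterns learned from many high-resolution examples to fill in plausible fine structure.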

Uri Manor

He believes that recent advancements in AI will contribute even further to its convergence with microscopy in the 21st century and could enable scientists to map the wiring diagram of the brain at a resolution high enough to capture all its synapses—or connections between neurons—with unprecedented clarity.

These advances could eliminate what Manor says is a constant debate faced by microscopists over whether they want more resolution, more speed or more sensitivity in their imaging, as currently available technology doesn’t typically allow them to have it all. “There are real tradeoffs that have impacted our research and our ability to gather knowledge,” he said.

As faculty director of the Goeddel Family Technology Sandbox at UC San Diego, which will be formally launched in 2024, Manor has seen firsthand what is possible when software engineers and computer scientists come together with domain experts to collect high-quality data—and then use that data to train machine learning algorithms to do things like track objects in images over time. The collaborative facility will bring high-powered computational capabilities alongside ingenuity in biological sciences to drive innovation.

“Ultimately, the goal should be for all biologists to band together and build a new ‘Library of Alexandria’ for biological data, where we all upload our gold standard data sets and help improve AI that can be making predictions on the next generation of drugs and biological insight—everything from medicine to basic fundamental research,” Manor said.
