Is Artificial General Intelligence Here?
Four cross-disciplinary faculty at UC San Diego believe the answer is yes
Will artificial intelligence ever be able to reason, learn and solve problems at levels comparable to humans? Four experts at the University of California San Diego believe the answer is yes – and that such artificial general intelligence has already arrived.
Computer scientist Alan Turing first posed this question in his landmark 1950 paper, though he didn’t use the term artificial general intelligence (AGI). His “imitation game,” now known as the Turing Test, asked whether a machine could pass as human in a text-based conversation with a human judge. Seventy-five years later, that future is here.
Four UC San Diego faculty members spanning the humanities, social sciences and data science tackle this debate in a recently published Comment invited by Nature.
Over the past year, Associate Professor of Philosophy Eddy Keming Chen, Professor of Artificial Intelligence, Data Science and Computer Science Mikhail Belkin, Associate Professor of Linguistics and Computer Science Leon Bergen and Professor of Data Science, Philosophy and Policy David Danks engaged in extensive dialogue on this question. These discussions happened as another set of researchers at UC San Diego found in March 2025 that the large language model GPT-4.5 was judged to be human 73% of the time in a Turing test — much more often than actual humans.
Approaching the AGI question from different disciplinary perspectives — philosophy, machine learning, linguistics and cognitive science — the four scholars converged on a controversial conclusion: by reasonable standards, current large language models (LLMs) already constitute AGI. Their argument addresses three key questions: What is general intelligence? Why does this conclusion provoke such strong reactions? And what does it mean for humanity?
Clarifying the bounds of general intelligence
Discussions about the capabilities of AI often center on technology. The UC San Diego faculty took a step back and approached the question through a philosophical and scientific lens. The essay defines breadth in intelligence as abilities across multiple domains — mathematics, language, science, practical reasoning, creative tasks — and depth as strong performance within those domains, not merely superficial engagement.
“There is a common misconception that AGI must be perfect — knowing everything, solving every problem — but no individual human can do that,” explains Chen, who is lead author. “The debate often conflates general intelligence with superintelligence. The real question is whether LLMs display the flexible, general competence characteristic of human thought. Our conclusion: insofar as individual humans possess general intelligence, current LLMs do too.”
The authors begin by identifying what is not required for general intelligence. It does not demand perfection, since few humans hold perfect knowledge, even within their specializations. Similarly, universal mastery is unrealistic; no individual can do every cognitive task. Intelligence does not have to follow human models, either: the human brain is just one form of cognitive architecture. And finally, artificial general intelligence does not need to be superintelligence, exceeding the mental capacity of humans.
Rather than seeking a single definitive test, the authors lay out a cascade of increasingly demanding evidence, the same way we evaluate human general intelligence. The first tier, Turing-test level, includes basic literacy and adequate conversation. The expert tier requires gold-medal olympiad performance, PhD-level problem solving in multiple domains, and competent creative and practical reasoning. The superhuman tier demands revolutionary scientific breakthroughs, a bar that few humans clear. Frontier large language models already meet the first two tiers.
While humans are paradigm examples of general intelligence, many qualities we associate with human cognition turn out to be inessential. Chen, Belkin, Bergen and Danks separate what intelligence requires from what humans merely happen to have as a result of biological evolution. They examine numerous objections in this effort to clarify the bounds of general intelligence. For instance, LLMs have been criticized for hallucinating, or presenting false, unreliable information at times. However, humans are also prone to false memories and cognitive biases, yet still make important intellectual contributions. Human error does not preclude intelligence, and it should not disqualify general intelligence in machines, the authors argue.
“There is a common misconception that AGI must be perfect — knowing everything, solving every problem — but no individual human can do that."
Contending with an ‘alien’ intelligence
Today, LLMs can produce outputs that are indistinguishable from those of humans, even surpassing depictions from science fiction. Yet the authors spot a persistent divide between what many people consider to be human-centered intelligence and the “alien” intelligence that machines possess.
For example, one prevailing counterpoint is that LLMs don’t have a body. Yet the writers remind us of examples like physicist Stephen Hawking, who interacted almost entirely through text and synthesized speech. His physical limitations did not diminish his intelligence; therefore, motor capabilities should not be a prerequisite for intelligence, the authors suggest.
“This is an emotionally charged topic because it challenges human exceptionalism and our standing as being uniquely intelligent,” says Belkin. “Copernicus displaced humans from the center of the universe, Darwin displaced humans from a privileged place in nature; now we are contending with the prospect that there are more kinds of minds than we had previously entertained.”
Acknowledging that machines are capable of intelligence matching that of humans can be a frightening prospect. Concern about potential social upheaval is enough for some to fervently deny the possibility, a “heads in the sand” response Turing described in his 1950 paper. Chen, Belkin, Bergen and Danks suggest embracing the emotions that arise with compassionate curiosity, not anxious evasion.
Risks and rewards
There’s no denying that we’re in the midst of an unprecedented technological revolution as artificial intelligence pervades our personal and professional lives. The authors position this period as both “remarkable and concerning,” with plentiful possibility and significant responsibility.
In the essay, the experts describe the economic demands placed on LLMs, which they argue can distort assessments of whether artificial general intelligence has arrived. Industry leaders often set standards based on profitability rather than intelligence itself, demanding perfect reliability, instant learning or revolutionary discoveries that exceed what we require of individual humans. Yet the UC San Diego faculty point out that speed, efficiency and profitability are potential outputs of general intelligence, not defining qualities.
A distinct objection centers on what critics call the ‘stochastic parrot’ problem — the claim that LLMs merely recombine patterns from their training data without genuine understanding, and therefore must fail on truly new problems. “We have built highly capable systems, but we do not understand why we were successful,” says Bergen. “LLMs learned about the world through processes unlike human learning, and we lack a detailed account of how their abilities emerged. This gap in understanding grows more important as the systems grow more capable.”
AI systems are also becoming more autonomous. The authors clarify that autonomy does not make these systems more intelligent, but it does make responsible design and shared governance an urgent priority.
“We're developing AI systems that can dramatically impact the world without being mediated through a human and this raises a host of challenging ethical, societal and psychological questions,” explains Danks. “AI is a future that we are building right now. Ultimately, we’re innovating because we want something better, and the very idea of better should have ethics and safety baked in.”
An unconventional team
The four faculty members who assembled to explore artificial general intelligence represent multiple disciplines across UC San Diego, a public research university that prioritizes cross-disciplinary collaboration.
“LLMs learned about the world through processes unlike human learning, and we lack a detailed account of how their abilities emerged. This gap in understanding grows more important as the systems grow more capable.”
Chen is part of the School of Arts and Humanities, a philosopher of science who explores big questions about the smallest parts of our universe, as well as questions about the nature of the mind and cognition. These studies complement the research of Bergen, a linguist and computer scientist in the School of Social Sciences who is investigating the science of LLMs.
This research intersects with work being done by Belkin, a data scientist focused on the theory and applications of machine learning at the Halıcıoğlu Data Science Institute in the School of Computing, Information and Data Sciences, with an affiliation in the Department of Computer Science and Engineering at the Jacobs School of Engineering. With a similar focus on data, Danks examines the ethical, psychological and policy issues around AI using methods from machine learning, philosophy and cognitive science.
“I’ve learned so much from this group,” said Chen. “UC San Diego’s institutional structure made this collaboration possible—we simply wouldn’t have crossed paths elsewhere. It’s a powerful example of what cross-disciplinary work can achieve when applied to fundamental questions facing humanity.”