UC San Diego Professor Explores Academia’s Role in AI’s Evolution at Major National Conference
At the inaugural HumanX conference in Las Vegas, David Danks, a UC San Diego professor of Data Science, Philosophy, and Policy and national expert in AI ethics, shared insights on the evolving international landscape of AI and the industry's unique dynamics.
HumanX gathered more than 3,000 attendees, including a self-deprecating, "unemployed" former Vice President of the United States. Event organizers said they wanted to shift the conversation around artificial intelligence away from the familiar utopia-versus-dystopia debate and toward real-world AI applications: their benefits, their challenges and the need for governance as AI becomes ubiquitous.

Danks, who spoke on a panel called "Redrawing boundaries: AI regulation on a global scale," noted a particularly strong emerging focus on agentic AI, meaning AI that can make decisions and perform tasks independently. That autonomy, of course, raises the complex task of orchestrating and governing such systems.
“It’s important for academics to know what everyone’s going to be talking about six months from now and to hear the new buzzwords (AI agents),” Danks said. “The subtext from many of the company presentations is that they are starting to recognize the importance of giving people some control over AIs and their data, so they can feel more comfortable about both.”
One of the organizers conceded as much in an opening address on the future of AI.
Building Trust in AI
“Hype is dictating decisions, and the race to not get left behind is pushing companies and governments to move fast, whether or not they actually understand what they’re building. So the problem isn’t just that AI is overhyped; it’s that the hype itself is making us irresponsible. It’s kind of like the Fyre Festival with better algorithms,” Stefan Weitz said, perhaps half-jokingly.
He also repeated a dire quote from J. Robert Oppenheimer, the physicist who led the development of the atomic bomb, saying the future of AI will be determined by the choices we make now.
Danks said that concerns over AI will persist until the technology and its uses are better defined. Academia, he said, can help make that future less bleak.
Building trust into AI systems was a key topic of discussion at the conference, which also highlighted how companies are shifting toward non-technical, user-experience factors to stand out. Rather than touting high-performance hardware that can run large language models faster than anyone else, companies emphasized more robust options for protecting consumer data and customizing controls. The shift reflects an effort to make AI systems more palatable to the general public by, at the very least, discussing safety and security measures.
Academics and AI's Future
Danks said it's not uncommon for an industry conference like this to draw so few academics, but on new tech topics of this complexity, academics could be an asset.

Colin Kaepernick, founder of Lumi Story AI, a software suite that leverages advanced AI technology to enhance the creative process and provide a platform for diverse and authentic stories, spoke at a presentation titled "How Kaepernick is changing the narrative."
“We have the ability to show them that being thoughtful doesn't mean being slow or missing out on an opportunity or launch,” Danks said. “Academics have a reputation for being slow, and some of it is warranted, BUT… it takes us, say, six months to do something because we are usually creating something that’s never been done before or something others thought could not be done.”
By contrast, industry tends to default to quick and easy solutions that can have negative downstream implications. That tendency was perhaps exemplified by a provocative AI company campaign boldly declaring "Stop hiring humans" on a screen of rotating ads outside the convention hall.
Danks said academics “by our nature have more knowledge and exposure to many different options, and that knowledge might be the key to unlocking something a company is trying to achieve quickly by taking an ill-informed shortcut.”
With a focus on creating a more balanced discussion around AI, Danks underscored the need for academic perspectives in an industry driven by rapid, and sometimes reckless, innovation.
Read more from David Danks in: Creating AI That Helps, Not Harms.
Learn more about research and education at UC San Diego in: Artificial Intelligence