
Creating AI That Helps, Not Harms

Professor David Danks illuminates a roadmap to ethical AI development

David Danks speaks at a podium.
David Danks is a member of the National AI Advisory Committee with research interests at the intersection of philosophy, cognitive science and machine learning. Photos by Erik Jepsen/University Communications.


There are plenty of reasons to love artificial intelligence (AI). It can process billions of data points in a flash, perform work in environments too dangerous for humans—like deep sea exploration or nuclear power plants—and make predictions about the weather and stock market.

But what happens when helpfulness turns into harm—like autonomous vehicles that go haywire, or when personal information is monitored without consent? This is one of the big questions that occupies David Danks, a professor in UC San Diego’s Halıcıoğlu Data Science Institute and Department of Philosophy. 

He was recently invited by the School of Arts and Humanities’ Institute for Practical Ethics to deliver a keynote address on the topic of “The Ethical and Policy Implications of Artificial Intelligence.” Using personal stories and insights garnered in part from his role as a member of the National AI Advisory Committee, Danks demystified the path to a more ethical and responsible form of AI that can benefit all.

Myth #1: AI is currently growing untamed, Wild West style

Hold your horses. While it’s true that popular tools like ChatGPT have permeated certain sectors of our economy in a seemingly uncontrolled fashion, Danks explained that “there’s an enormous amount of AI governance going on.” Much of it happens behind the scenes—and there is still plenty of work to be done.

He suggested visualizing it as a spectrum. On one end, there is peer pressure, a method of making decisions as a group about what’s wrong or right. On the opposite end, there is regulation in the form of official rules about what can and can’t be done. For example, the Equal Employment Opportunity Commission investigates discrimination in hiring that results from the use of AI.

Other types of governance are happening, like targeted funding. “Federal agencies, national foundations and private entities are using their financial might to help shape where AI goes,” explained Danks. There’s also an effort to establish a set of standards and best practices to identify and implement responsible AI across the globe.

“AI is having massive impacts,” said Danks. “We have AI being imposed on us, often without trying to figure out if it is the right thing for us. There is a need for governance. How can we get our AI systems from where we are now, to where we want them to be?”

Collage of three images, including a self-driving bus and technology used for financial predictions and shopping.

Myth #2: AI should be predictable

When we think about our technological values as a society—such as safety, privacy or accountability—it’s difficult to articulate how each should function. For instance, safety as it relates to driving could be evaluated in many ways, including mechanically, in how the vehicle is built, or behaviorally, in how the person behind the wheel drives.

Danks believes our values need to be defined before AI can be successful. Why? Because AI is designed to continually surprise us, to present answers that we never considered, or patterns we’ve failed to see. “If we know what AI should do, why would we use it?” said Danks. “The whole point is that it can discover patterns that we don’t recognize. It can discover solutions to problems that never would have occurred to us.”

To achieve the outcomes we want, we need more experts who know how to create responsible AI. Danks explained, “There are maybe 1,000 people in the world who can build a trustworthy AI system for any purpose. The real challenge is how to enable 100,000 people to gain these skills, to democratize the knowledge of how to build these systems responsibly.”

Currently, building an ethical AI system—for example, one that prevents discrimination and keeps data protected—is not the norm; it’s an afterthought, a supplement added to earn a certification. Instead of accepting this dichotomy, Danks argues that building responsible AI should be the only way to build AI.

“We need to invert the way we’re doing things,” he explained. “We need to normalize creating AI that respects human values. You don’t get a gold star; it’s just what you ought to be doing. This will require reshaping the social norms that we have within the computing profession.”

Audience members smiling while listening to David Danks

Myth #3: Ethical issues only arise at the initial development of an AI system

When is the right time for designers to think about ethics? According to Danks, it should be top of mind throughout the entire lifecycle. “Ethical issues arise from the very beginning—from design to development to refinement to the decision to sunset,” he said. “The decision to build an AI system in the first place is already an ethical choice.”

If ethical values are defined from the beginning based on a set of standards collectively generated, then success is inevitable, right? Not necessarily.

Danks shared the example of the Department of Defense, which has the most comprehensive set of AI principles of any U.S. government organization. These delineate what is acceptable behavior, and what is to be condemned. Many other militaries around the world have their own unique set of ethical guidelines. 

Even though each organization has its own values clearly established, complete alignment among them is highly unlikely. This can lead to ethical outsourcing, in which one organization or agency chooses to contract with another that has fewer restrictions. “Everyone has their own ethics; we might share outputs, but not actual code or the inner workings of our systems,” said Danks.

He continued, “We shouldn’t be able to ethically outsource—to say, ‘I’m not allowed to do that, but someone in another country can.' We can have ethical failings despite governance systems.”


Interrogating breakthroughs for the common good

The technology sphere is complex and perpetually evolving, requiring constant attention from ethicists, social scientists and policymakers. To promote socially responsible science, UC San Diego’s Institute for Practical Ethics unites interdisciplinary scholars from across the university. Collectively they conduct research on the ethical considerations of breakthrough science—from gene editing to big data and climate change.

Co-directed by Associate Dean of Social Sciences John Evans and Professor of Philosophy Craig Callender, the institute hosts an annual keynote talk to highlight an expert’s research findings. Past discussions have ranged from whether to resurrect the woolly mammoth to the “return of nature” for environmental conservation. Danks, who earned a doctoral degree in philosophy at UC San Diego and serves on the Institute for Practical Ethics advisory board, drew a sold-out audience for his talk.

“The Institute for Practical Ethics annual keynote address is the primary way we can interact with the public about topics most relevant to the mission of the institute,” said Dean of the School of Arts and Humanities Cristina Della Coletta, who opened the event with Vice Chancellor Corinne Peek-Asa and John Evans.

She added, “Artificial intelligence remains at the forefront of nearly everything these days, and I’m pleased the institute is focusing once again on this subject–which has such a broad reach and, perhaps, offers some of the biggest questions to answer.”

Can AI be more trustworthy than humans? Are there certain jobs that should never be replaced by AI? What is the role of empathy in the development of this technology? These are just a few of the questions raised by audience members at the keynote event. They serve as a reminder of the immediate and ongoing need for the work being done by Danks and scholars at the institute to ensure ethics remain at the forefront of all breakthroughs.

For those interested, Danks’ full talk is available on the School of Arts and Humanities YouTube page.


Cristina Della Coletta speaks with John Evans. Photo by Aretha Li.
Dean of the School of Arts and Humanities Cristina Della Coletta with Associate Dean of Social Sciences John Evans.