Behind Every Breakthrough

AI Creeps Closer to Human Agency

Philosopher and data scientist David Danks is interested in how artificial intelligence can augment what we’re doing to allow us to achieve what we’ve never done before. Image: iStock, SvetaZi.


Artificial intelligence (AI) systems are getting smarter. Will this technology eventually undermine our sense of who we are as humans? Can we develop AI safely in ways that augment our capabilities rather than overshadow them?

As both a philosopher and data scientist, David Danks is at the nexus of these big questions. He takes an interdisciplinary approach to these complex queries in the School of Arts and Humanities and School of Computing, Information and Data Sciences. A large portion of his work is made possible by grants from federal agencies, which have given him the freedom to innovate.

We recently spoke with Danks about the promise and hazards of AI autonomy, as well as the unique contributions this technology can make in fields like emergency triage and mental health treatment.

What fascinates you about AI?

We’re developing AI systems that can act in the world and thereby dramatically impact it, but without being mediated through a human. I think this raises a host of challenging ethical, social and psychological questions about who we are as humans. We remain the most autonomous beings on this planet, but this technology is getting better fast, approaching real agency.

Sometimes AI is perceived as “magic,” but it’s just code. I’m interested in the ways AI can help augment or expand what we’re capable of by providing methods that are additive to what we are already doing. Rather than replacing us, I’m exploring ways this technology can help us achieve things we could never do before.

What are two examples from your own research that demonstrate how AI can assist and expand what we’re capable of achieving?

For people who suffer from schizophrenia, there’s a longstanding belief that something has gone awry in the wiring of their brains. We have abundant neuroimaging data from fMRI scans, but how do we figure out which parts of the brain influence other parts of the brain, and how do we connect this with people’s mental health? By using AI, we’re trying to discern the underlying mechanisms in the brains of individuals who have been diagnosed with schizophrenia. The project, which is supported by a grant from the National Institute of Mental Health, is intended to improve targeted treatments.

"We’re developing AI systems that can act in the world and thereby dramatically impact it, but without being mediated through a human. I think this raises a host of challenging ethical, social and psychological questions about who we are as humans."
David Danks

Another project I’m working on explores the potential for AI to support triage decisions in situations where mass medical care is needed, like on a battlefield. My role is focused on ethical and policy considerations, thinking about whether and how AI can contribute to complex decision making that involves human values. For instance, when disaster strikes and you don’t have enough people to lead relief efforts, or if the person in charge is not experienced in triage, could AI assist with these kinds of rapid determinations that have life-or-death consequences? This work is funded by DARPA (Defense Advanced Research Projects Agency), a U.S. government agency that focuses on developing breakthrough technologies for national security.

What role does federal funding serve in advancing this kind of university research?

From my perspective, there are three important functions of federal funding at universities:

First, it fosters collaboration. I have the incredibly fortunate opportunity to do research, but many of the interesting problems out there can’t be solved by my brain alone. Often these problems have no profit incentive, so industry isn’t compelled to solve them. And the government is busy delivering services to the public. So, funding university research enables interdisciplinary scholars to come together to tackle these big problems.

Secondly, it shapes knowledge. The federal government has the incredible power to shape what kinds of questions are addressed through its funding. For example, the National Science Foundation helps shape the entire trajectory of our nation’s science agenda. Grants like these support research that is not driven by profit, especially projects that are speculative, foundational, or provide public benefit. And to the extent that the federal government gets out of that business, it is voluntarily giving up its voice as one of the leaders of science and technology.

And lastly, it provides validation. Federal funding confers validation and publicity, which might sound simplistic but are in fact really important. In particular, this validation makes it easier to get access to experts who can explain the very real problems faced by the federal government, so we can do work that is responsive to their needs.


According to Danks, research shows that regulating AI development improves innovation by putting safety first. Image: iStock, metamorworks.

Some argue that developing AI with safety in mind hinders its innovation. What are your thoughts?

That’s simply not the case. There is ample research that shows introducing constraints, whether regulation or otherwise, increases innovation. And right now, we have no regulation or governance, so we’re nowhere near the point that it would become problematic for progress.

Ultimately, we’re innovating because we want something better, and the very idea of better should have safety baked in. What’s the point of creating something that’s going to harm people? Safe AI practice is based on good design, and that requires a little extra time up front, but it will lead to cheaper, faster development later. If you’ve got a good design, you’re already most of the way to building a safe system.

Redrawing boundaries

Danks was recently featured at the inaugural HumanX conference in Las Vegas, where he spoke on the topic “Redrawing boundaries: AI regulation on a global scale.” He advocated for the need to regulate the rollout of AI, especially considering the emergence of “agentic AI” that can make decisions and perform tasks independently.
