
Entering Our ChatGPT Era

Amid the rise of generative AI, UC San Diego experts express “cautious optimism”

ChatGPT official app icon on an iPhone screen. (Photo by Robert Way/iStock)


This is part one of a two-part series on the use of ChatGPT across UC San Diego and beyond.

From the doctor’s office to the classroom, rapidly advancing generative AI tools like OpenAI’s ChatGPT are already changing life at UC San Diego, leaving no discipline or specialty untouched. In March, the introduction of multimodal GPT-4, trained on unprecedented amounts of data and capable of generating the most human-like text we’ve seen from a bot, prompted talk of an “AI apocalypse”—and raised real concerns about the future of work, academia, health care and beyond.

How are these tools being used at UC San Diego? What are the ethical implications? And what should we be considering as we await the next developments in generative AI? For this series, UC San Diego Today sat down with experts from across campus and UC San Diego Health to find out.

ChatGPT enters the syllabus

During the recent winter quarter, R. Stuart Geiger, an assistant professor in both the Halıcıoğlu Data Science Institute and the Department of Communication at UC San Diego, made a bold decision: not only to allow the use of ChatGPT in their communication courses, but to mandate it.

Stuart Geiger

Geiger, who uses either they/them or he/him pronouns, says that requiring their students to use the bot to generate essays fostered an improved understanding of its capabilities and limitations.

“Coming up with the right prompt that generates an essay that’s factual and isn’t hallucinating anything and gets the argument right is harder than one might think,” said Geiger. In fact, these assignments prompted important discussions around thought-provoking questions such as “What are the values of academia?” and “What is the future of work?”

While Geiger believes generative AI can be dangerous, they also believe it is important for critics to have a basic understanding of how it works and what it can and can’t do. Geiger found that by the end of the quarter, their students had become thoughtful and reasoned users of ChatGPT who were able to participate intelligently in conversations about the implications of AI.

With the acceleration of generative AI tools in recent months, students and faculty at UC San Diego are learning how to navigate this new "normal." (Photo by Erik Jepsen/University Communications)

Nevertheless, Geiger said they are concerned about the increased supply of disinformation that generative AI will introduce into public discourse. “We’re going to spend a lot more time being suspicious of each other,” Geiger predicts. And this goes far beyond text-generating large language models like ChatGPT. AI voice generators, now widely available, can take a short audio clip of anyone speaking—from President Joe Biden to your next-door neighbor—and make that voice say anything the user wants.

The institutions on which our society is built—from education to science to journalism—were already in crisis and are now struggling with these new advances, says Geiger, who believes the developers of these tools have a responsibility to build in safeguards that mitigate some of the potential harms. Geiger argues that while AI companies are reaping the benefits, the rest of society is bearing the bulk of the costs.

“We need to be thinking about the role that technologists play in their responsibilities to society,” said Geiger. “Many drop the bomb and then they walk away with their backs turned and don’t look at the explosion, and then it’s the rest of society that has to catch up.”

But despite these sentiments, Geiger doesn’t buy into the “hype” that this situation is unlike anything we’ve seen before, citing the old adage, “History repeats itself.” Blue-collar workers, they said, have for centuries dealt with new technologies that come in and threaten to displace or change the conditions of labor.

“I see it as yet another thing that we’re going to have to deal with,” said Geiger. “It feels novel and scary right now, but I think in a few years it’s going to be normalized.”

“Cognitive offloading” or cheating? Higher ed’s latest debate

It’s been an eventful few years in the realm of higher education. After shifting abruptly to virtual-only instruction at the onset of the pandemic, faculty are again being forced to reimagine the way they deliver instruction and assess students’ knowledge. This time, it’s not a public health crisis to blame—it’s the sudden accessibility of sophisticated large language models like ChatGPT.

Tricia Bertram Gallant

In this new reality, where cheating has never been easier and students could plausibly earn a degree by passing off AI-generated work as their own, many experts are concerned about the future of the university as an institution. 

Tricia Bertram Gallant, director of the Academic Integrity Office (AIO) and Triton Testing Center (TTC) at UC San Diego, believes these concerns are valid—but she’s also hopeful that with the right approach, the so-called “AI revolution” could bring about a positive sea change in the way both instructors and students approach teaching and learning.

“Most new things are exciting and scary at the same time. There are so many possibilities, and I think there are ways to really harness this for the good of the university, for the good of our students and their learning. We just need to be intentional and urgent about it,” said Bertram Gallant.

She draws a parallel to another controversy over educational technology—one that entered the scene in the 1970s when the handheld calculator became a household object. Despite fears that the use of calculators would negatively impact students’ ability to learn and understand key concepts, their implementation has had a largely positive effect.

“People assume that we don’t need to learn basic math anymore because we have calculators. However, educators know that students still need to learn basic math because that foundational knowledge is helpful for doing higher-order thinking and tasks. So, you teach it to students and then you say, ‘Now you can use a calculator for that.’ You allow them to cognitively offload,” said Bertram Gallant.

Educators, she says, will need to determine on an individual basis where to draw the line between cognitive offloading and cheating. And while the knee-jerk response might be to prohibit the use of AI in the educational setting altogether, the reality is that these tools are going to become an integral part of our daily lives.

“Students will need to use generative AI because they’ll be using it in both their personal and professional life, and we are going to be learning and writing with these tools in the future. So why would we stop them now?” said Bertram Gallant, who suggests the most successful approach may be to teach students to work responsibly with these tools and reflect on the process of using them.

How will professors ensure that their assessments give an accurate picture of a student’s knowledge and ability? Bertram Gallant believes the days of relying on products, like essays or research papers, for assessing student learning may soon be over. She’s leading a work group of experts from across the University of California system to explore the idea of establishing in-person centers, similar to UC San Diego’s Triton Testing Center, where students would be required to complete certain assessments in a proctored environment. She’s also energized by a growing interest in oral exams. In fact, a study published by UC San Diego researchers earlier this year found that the use of oral exams in undergraduate engineering courses increased students’ motivation to learn and improved their understanding of the content, evidenced by outcomes on subsequent written tests.

Huihui Qi, an assistant teaching professor in the Department of Mechanical and Aerospace Engineering at UC San Diego, administers an oral exam to an undergraduate engineering student. 

Bertram Gallant sees many potential benefits to the use of generative AI in education. She’s optimistic about the possibility of generative AI-powered one-on-one virtual tutors, such as one currently being developed by Seneca College in Canada. Because these tutors would be available 24/7, she believes their use may reduce incidents of cheating.

By focusing on the aspects of learning and assessment that can’t be replicated by a chatbot, Bertram Gallant is confident that higher education will overcome this newfound hurdle.

“I think what universities have been struggling with over the last 20 years since the internet came to be is the idea that we are not where people have to come for knowledge anymore,” she said. “So, what are they coming to us for? They’re coming to us for that social engagement, development of human skills, problem solving, critical thinking, collaborative, written and oral communication, ethical reasoning, interpersonal skills and empathy. Everything will have to become more active and more engaged than in the past if we want to retain our students and our integrity.”

A “massive experiment” comes with risks and benefits

An open letter published in March called on AI labs to enact an immediate six-month pause in training systems more powerful than GPT-4, citing the “profound risks to society and humanity” posed by AI systems with human-competitive intelligence.

David Danks

But we can’t put the proverbial genie back in the bottle, says David Danks, a professor of data science and philosophy at UC San Diego, who holds a joint appointment with the Halıcıoğlu Data Science Institute and the School of Arts and Humanities.

A brief pause in development, Danks believes, won’t solve the complex ethical challenges presented by the staggering growth of large language models like ChatGPT and other generative AI tools—which are now producing audio clips, images and even videos so realistic they are virtually indistinguishable from the real thing to the human eye or ear.

“AI provides us with capabilities that we as humans have never previously had,” said Danks. “It’s given us a whole new set of capabilities to generate plausible-sounding text very quickly—to rapidly sort through, in some sense, everything that humanity has ever known. But we have to recognize that those capabilities could be put to problematic uses. I think the key to all of this is less the AI and more the humans. We should be asking, not ‘What is the AI going to do to us?’ but rather, ‘What are we going to do to each other?’”

And while Danks, who also serves on the advisory board of the UC San Diego Institute for Practical Ethics, believes there are positive uses for generative AI—such as proofreading, editing or creative and artistic expression—he has grave concerns about the potential of these systems to inflict real harms on society.

These include an upsurge in disinformation—which Danks says could easily have a disastrous impact on the presidential election in 2024—and the “deskilling” of workers that will take place as the use of generative AI tools becomes widespread. We are, he says, “the subjects of a massive experiment.”

The potential for an upsurge in disinformation due to the acceleration of generative AI has many experts, like David Danks of UC San Diego, concerned about possible impacts on the 2024 presidential election. (Photo by M. Rolands/iStock)

Given that large language models like ChatGPT have a propensity to hallucinate, or create false content, Danks says it’s important for users to remember that ChatGPT is designed solely to produce the most probable completion of a prompt based on the massive amounts of data on which the model was trained—not to tell the truth. He references instances where the model has hallucinated academic research papers, attributing made-up titles to real people—as well as a recent story that made national headlines after ChatGPT invented a sexual harassment scandal, wrongly accusing a real law professor of sexual assault and citing a story from The Washington Post that didn’t exist.

Danks’ unique work at the intersection of AI, ethics and policy has attracted national attention. In 2022, he was invited to serve as a member of the National AI Advisory Committee, a role in which he helps inform the federal government on artificial intelligence alongside members from private industry, civil society, and academic and nonprofit organizations. He believes that academics play an important role in the discussion of AI and its implications: not only do they know what’s happening at the frontiers of research, but they also have a moral obligation to contribute to public debate and public policy.

“We (academics) are perhaps the most objective and unbiased observers of this space. As a university professor, my job is not to be an advocate for a particular company, software or system. It’s to conduct research and transmit that research to the world,” Danks said.

While his concerns about the exponential growth of generative AI tools are very real, the fact that the ethical implications are already at the forefront of public discourse gives him a sense of hope for the future.

“Unlike some technologies where it’s taken years for people to recognize the ethical challenges, I think we already see it after just a few months. Harms have been done and harms will continue to occur, but I am cautiously optimistic that as a community—academic community, the broader community, people working on these technologies and generative AI systems—that we can make real progress moving forward so that we can minimize the harms and maximize the benefits,” Danks said.
