7 AI-Powered Technologies You Should Know About
Move over, ChatGPT: These artificial intelligence-powered technologies and innovations being developed and implemented at UC San Diego could lead to the next developments in the “AI revolution.” From helping us manage chronic health conditions to deciding which movies to watch, advances in AI can help inform decision-making, accelerate scientific discovery—and even save lives. The following are just seven of the many tools and technologies being developed on campus with the potential to go from the research space to the real world:
1. A social robot to help people with cognitive impairments
An artificially intelligent robot being developed in UC San Diego’s Healthcare Robotics Lab could one day improve access to care and increase independence for individuals living with dementia or mild cognitive impairment. The Cognitively Assistive Robot for Motivation and Neurorehabilitation, or CARMEN, is a social robot designed to teach strategies related to memory, attention, organization, problem-solving and planning. Using custom AI algorithms, CARMEN can learn about the user and tailor its interactions based on the individual’s abilities and goals. These interactions might include teaching people to form memory-supporting habits, like putting things in familiar places in their home, or helping them set and meet their cognitive goals such as remembering names at a social gathering.
This project is spearheaded by the lab’s director, roboticist Laurel Riek, a professor of computer science and engineering with a joint appointment in the Department of Emergency Medicine. Riek has worked at the intersection of AI and robotics for decades and says that robots like CARMEN offer the potential for exciting advancements in the field. Prototypes of CARMEN are already being used to provide cognitive interventions for individuals affiliated with the George G. Glenner Alzheimer’s Family Centers in San Diego and, more recently, in people’s homes as part of the team’s research.
2. A mobile platform for managing chronic health conditions
From smart watches and fitness trackers to blood pressure monitors, patches and biosensors, wearable medical devices have exploded in popularity in recent years, giving both users and their clinicians real-time access to personal health data. But what if there were a way to combine those data to generate precise, individualized recommendations that could help people manage chronic conditions like hypertension and diabetes? Enter CIPRA.ai, a new mobile app that does just that, built on technology developed at UC San Diego.
CIPRA.ai is built on the idea that treatment for chronic conditions is not a “one-size-fits-all” solution. The artificial intelligence platform collects the multi-dimensional data available from a person’s wearable devices and health apps and feeds that data into machine learning algorithms that can learn about the user and pinpoint the primary cause of their condition. The app can then recommend one or two targeted interventions each day, chosen because they are predicted to be the most effective for that particular user in reversing the disease.
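To make the idea concrete, the sketch below shows, in rough terms, how a model trained on a user’s daily wearable readings might rank a couple of candidate interventions. The feature names, the gradient-boosting model, the synthetic data and the candidate interventions are all illustrative assumptions, not details of CIPRA.ai’s actual pipeline.

```python
# Illustrative sketch only -- not CIPRA.ai's actual method.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Hypothetical daily features from wearables and health apps:
# [sleep_hours, step_count_thousands, sodium_score, stress_score]
X = rng.normal([6.5, 5.0, 3.0, 4.0], [1.0, 2.0, 1.0, 1.5], size=(180, 4))
# Synthetic next-day systolic blood pressure driven by those features
y = (135 - 1.5 * X[:, 0] - 0.8 * X[:, 1] + 2.0 * X[:, 2] + 1.2 * X[:, 3]
     + rng.normal(0, 2, 180))

model = GradientBoostingRegressor().fit(X, y)

today = np.array([6.0, 3.0, 4.0, 5.0])      # one user's readings today
candidates = {
    "sleep one hour more":       today + [1.0, 0, 0, 0],
    "walk 2,000 extra steps":    today + [0, 2.0, 0, 0],
    "skip one high-sodium meal": today + [0, 0, -1.0, 0],
}
baseline = model.predict(today.reshape(1, -1))[0]

# Rank candidate interventions by predicted next-day blood pressure
ranked = sorted(candidates.items(),
                key=lambda kv: model.predict(kv[1].reshape(1, -1))[0])
for name, feats in ranked[:2]:
    change = model.predict(feats.reshape(1, -1))[0] - baseline
    print(f"{name}: predicted systolic change {change:+.1f} mmHg")
```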
“This came from just a technology that we were developing in a research lab, to a real product,” said Sujit Dey, a professor in the Department of Electrical and Computer Engineering and director of the Center for Wireless Communications at UC San Diego. Designed for deployment in partnership with health systems, which allows medical providers to access their patients’ recommendations and tracked progress, CIPRA.ai will soon become available to hypertensive patients at UC San Diego Health. The team is working to expand the tool to a multi-chronic disease platform that will provide personalized recommendations for the management of diabetes, mental health conditions and more.
3. Self-driving vehicles for delivery and micro-transit
At UC San Diego, to catch a glimpse of the future, one needs only to look around. Here, research conducted in the Autonomous Vehicle Laboratory extends beyond the walls of a building and into the roads and walkways that weave throughout the university’s 1,200-acre campus. Self-driving golf carts making mail deliveries have become a common sight on campus since making their debut in 2019, and Henrik Christensen, who leads the lab’s research team and directs the UC San Diego Contextual Robotics Institute, says this project barely scratches the surface of how artificial intelligence could transform delivery and micro-transit logistics on campuses, in cities and beyond.
Using the same underlying AI algorithms they’ve developed for the mail delivery vehicles, which are programmed to obey traffic laws en route to their intended destination and to detect cars, bicycles or pedestrians along the way, Christensen’s team aims to start the roll-out of its next project this fall. This time, it’s three-wheeled scooters programmed to drive themselves to high-demand locations on campus at certain times of day. In the morning, for example, several scooters might be found at the central campus trolley station, ready for commuters to pick up and ride to class. After the user reaches their destination, the scooter will then drive itself back to wherever it’s needed.
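As a rough illustration of the rebalancing idea, and not the lab’s actual system, the sketch below sends idle scooters to the stations with the highest forecast demand for a given hour; the station names and demand numbers are made up for the example.

```python
# Illustrative sketch: proportionally assign idle scooters to stations
# predicted to have the most riders at the current hour.
from collections import Counter

# Hypothetical demand forecast: expected riders per station by hour of day
DEMAND = {
    8:  {"trolley_station": 12, "library_walk": 4, "dorms": 1},
    17: {"trolley_station": 3,  "library_walk": 5, "dorms": 10},
}

def rebalance(idle_scooters: int, hour: int) -> Counter:
    """Assign idle scooters to stations in proportion to forecast demand."""
    demand = DEMAND.get(hour, {})
    total = sum(demand.values()) or 1
    plan = Counter()
    for station, riders in sorted(demand.items(), key=lambda kv: -kv[1]):
        plan[station] = round(idle_scooters * riders / total)
    return plan

print(rebalance(idle_scooters=10, hour=8))
# e.g. Counter({'trolley_station': 7, 'library_walk': 2, 'dorms': 1})
```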
According to Christensen, who is a distinguished professor of computer science in the Jacobs School of Engineering, developing AI algorithms that allow autonomous vehicles to safely navigate pedestrian-heavy routes like those found on university campuses presents an interesting research challenge. Self-driving technology that’s already commercialized can deftly handle highway travel, but dense urban environments remain a significant challenge.
“We’re trying to work on problems that the current self-driving companies have not yet solved,” Christensen said.
4. A tool that improves the prediction of atmospheric rivers
A team of atmospheric scientists and computer scientists in the Center for Western Weather and Water Extremes (CW3E) at Scripps Institution of Oceanography has created an artificial intelligence-enabled tool to improve the prediction of Integrated Water Vapor Transport, or IVT, the key variable for determining the presence and intensity of atmospheric rivers. The tool is already making a significant impact on decision-making by water managers across California.
Led by CW3E deputy director Luca Delle Monache, the team has developed machine learning algorithms that can sift through massive amounts of weather data in what they call a “post-processing framework.” This method improves today’s predictions by learning from the errors that forecasting models have made in the past. Through the center’s Forecast Informed Reservoir Operations (FIRO) program, these highly accurate, machine learning-fueled predictions help determine how much water should be released from reservoirs and when, which not only optimizes the state’s water supply but also reduces the risk of flooding. With better prediction of precipitation and inflow into reservoirs, researchers at CW3E have found that water managers can save approximately 25% more water each year.
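The sketch below illustrates the general idea of machine learning post-processing under assumed inputs, not CW3E’s actual framework: a model is trained on an archive of past forecasts and matching observations, then used to correct a new raw forecast of IVT.

```python
# Minimal sketch of ML post-processing of forecasts (illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

# Hypothetical training archive: each row is a past forecast of IVT plus
# context (lead time, forecast wind speed); the target is the observed IVT.
n = 500
forecast_ivt = rng.uniform(100, 900, n)        # kg m^-1 s^-1
lead_time = rng.choice([24, 48, 72, 96], n)    # hours
wind = rng.uniform(5, 40, n)                   # m/s
# Synthetic "observations" with a systematic, lead-time-dependent bias
observed_ivt = (forecast_ivt * (1 - 0.0005 * lead_time)
                + 0.8 * wind + rng.normal(0, 20, n))

X = np.column_stack([forecast_ivt, lead_time, wind])
post = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, observed_ivt)

# Post-process a new raw forecast
new_case = np.array([[650.0, 72, 25.0]])
print("raw forecast IVT:   ", new_case[0, 0])
print("post-processed IVT: ", round(post.predict(new_case)[0], 1))
```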
“The application of machine learning to the dynamical, physics-based model is a game changer,” said Delle Monache. “It’s an exciting time, where we’re really making meaningful improvements and contributions.”
5. A chatbot that gives movie recommendations
“Recommended For You:” We see it every time we log on to Netflix, Hulu, Disney+ or any other popular streaming app. Armed with data about what types of content you watch and how long you stay tuned in, these companies employ personalized machine learning algorithms to figure out your preferences. But what if these recommender systems could go one step further? What if you could talk to them about your likes and dislikes, and they could talk back to you and adjust their recommendations accordingly?
In his lab in the Jacobs School of Engineering, computer science professor Julian McAuley, who specializes in recommender systems, is in the early stages of making this idea a reality. With funding from Netflix, he and his team are building demo systems to explore what this technology could look like and how users might respond to it. In tandem with the rapid acceleration of generative AI tools like ChatGPT over the past year, McAuley has observed a surge of interest in conversational recommender systems. This work involves merging large language models, the subset of AI that underlies ChatGPT, with traditional recommender systems, which focus on coming up with suggestions in highly specific areas. To train the model, McAuley and his team are collecting datasets of movie reviews, conversations about movies from Reddit and more.
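The toy sketch below gestures at how the pieces might fit together; it is not McAuley’s system. A stand-in for the language model (here, simple keyword spotting) pulls likes and dislikes out of a chat message, and a conventional scoring step re-ranks a tiny, made-up movie catalog.

```python
# Toy conversational recommender (illustrative only).
from dataclasses import dataclass

@dataclass
class Movie:
    title: str
    genres: set

CATALOG = [
    Movie("Blade Runner", {"sci-fi", "noir"}),
    Movie("The Notebook", {"romance", "drama"}),
    Movie("Alien", {"sci-fi", "horror"}),
    Movie("Paddington", {"family", "comedy"}),
]

def extract_preferences(utterance: str):
    """Stand-in for an LLM: naive keyword spotting of liked/disliked genres."""
    likes, dislikes = set(), set()
    lowered = utterance.lower()
    for genre in {g for m in CATALOG for g in m.genres}:
        if f"no {genre}" in lowered or f"not {genre}" in lowered:
            dislikes.add(genre)
        elif genre in lowered:
            likes.add(genre)
    return likes, dislikes

def recommend(utterance: str, top_k: int = 2):
    """Score each movie by genre overlap with the stated preferences."""
    likes, dislikes = extract_preferences(utterance)
    scored = [(len(m.genres & likes) - 2 * len(m.genres & dislikes), m.title)
              for m in CATALOG]
    return [title for score, title in sorted(scored, reverse=True)[:top_k]]

print(recommend("I'm in the mood for sci-fi tonight, but no horror please"))
```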
“This idea has gone from seeming kind of impossible to seeming like something that is almost within reach,” said McAuley, who says this technology could have potential applications that extend far beyond movies, to include e-commerce, fashion, fitness and more. “Everyone wants to get on board with building and deploying these things.”
6. Robots that can perform automated lifesaving surgeries
Imagine you’ve just survived a car crash in a remote location that’s difficult for first responders to access. You have a deep cut on your arm from a piece of broken glass, and you’re bleeding profusely. The situation is dire, until a drone flying overhead drops an autonomous surgical robot, trained to perform hemorrhage control vessel repairs, onto the ground below. It sounds like a scene straight out of a sci-fi film—and while it’s not something we’re likely to see happening anytime soon, engineers at UC San Diego are already laying the groundwork.
Michael Yip, an associate professor of electrical and computer engineering, and his team of engineering and clinical collaborators are building surgical robots with artificial intelligence components that can recognize blood, control hemorrhaging, apply sutures, autonomously perform certain surgical procedures and more. Recently, in partnership with the UC San Diego School of Medicine, a 25-pound humanoid surgical robot that Yip co-developed with the U.S. Army’s Telemedicine & Advanced Technology Research Center and SRI International helped perform vessel repairs alongside human surgeons. Developing AI algorithms that can recognize anatomical differences from patient to patient is extremely complex work, but Yip finds it personally rewarding, and he says these advances could one day save people’s lives.
“Robotics and automation are not only a potential future—they are a future of medicine,” said Yip. “Statistics say that we don’t have enough doctors and surgeons to handle the rising population of patients, so something needs to be done to address the amount of care that people need.”
7. A brain-inspired approach to facial recognition
Facial recognition technology is all around us. From the smartphones we hold in our hands to security cameras in airports and retail stores, AI, in the form of deep learning algorithms and artificial neural networks, can learn what we look like and identify us later. The units in these artificial neural networks are connected by variable weights, modeled after the synapses between neurons in the human brain. But biological synapses are incredibly complex, and we don’t fully understand their inner workings. That’s one reason why typical AI technologies like facial recognition have traditionally been built with simple “synapses,” or weights, rather than complex, brain-inspired ones.
But what would happen if an artificial neural network for face familiarity detection were built to replicate these brain-like synapses instead? Would such a system be even better at remembering faces? Marcus Benna, an assistant professor of neurobiology at UC San Diego, and colleagues decided to find out, so they built one. In a study published last year, the team found that their complex-synapse memory system could recognize more faces than one built with simple synapses, and that as they added more synapses, its capacity grew more rapidly than a simple-synapse network’s would.
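For context on what a “simple synapse” means in this setting, the toy sketch below stores face-like patterns with ordinary scalar Hebbian weights and scores new patterns for familiarity. It does not implement the complex-synapse model from Benna’s study; it only illustrates the basic familiarity-detection task that model improves on.

```python
# Toy familiarity detector with simple (scalar) Hebbian synapses.
import numpy as np

rng = np.random.default_rng(2)
N = 200                      # "neurons" encoding a face as a +/-1 pattern
W = np.zeros((N, N))         # synaptic weights (simple scalars)

def store(pattern):
    """Hebbian update: strengthen connections between co-active units."""
    global W
    W += np.outer(pattern, pattern) / N

def familiarity(pattern):
    """Overlap-based score; stored patterns score higher than novel ones."""
    return pattern @ W @ pattern / N

seen = [rng.choice([-1, 1], N) for _ in range(50)]
for p in seen:
    store(p)

novel = rng.choice([-1, 1], N)
print("seen face score: ", round(familiarity(seen[0]), 2))
print("novel face score:", round(familiarity(novel), 2))
```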
Benna, who has extensively studied synaptic complexity, says his primary goal as a computational neuroscientist is to better understand how the brain works and how it can overcome its limitations—not to build machine learning applications. But as the fields of AI and neuroscience increasingly converge, their respective advances are proving to be mutually beneficial.