
CU Boulder advancing artificial intelligence research for real-world applications


A modified Boston Dynamics Spot robot operates in a mine as part of an experiment leading up to the DARPA Subterranean Challenge.

CU Boulder researchers maneuver a robot through fallen rock and debris in an experimental mine tunnel in Colorado. Somewhere in the darkness ahead, potential survivors could be running out of air. The robot stops, assesses the unstable ceiling above, then continues toward areas too dangerous for human rescuers.

Other teams in the College of Engineering and Applied Science are developing systems that analyze satellite imagery to identify environmental changes invisible to the human eye, while computers mine years of medical records in seconds to help doctors save lives.

Meanwhile, researchers are developing computer systems that describe unfamiliar spaces in detail, guiding someone through a building by noting, for example, "stairs ahead, handrail on your right, doorway opens to a lobby beyond."

These scenarios illustrate a few of the artificial intelligence (AI) discoveries happening at CU Boulder, where researchers collaborate across disciplines to develop trustworthy systems that work alongside humans to address challenges in emergency response, space and planetary exploration, medical diagnosis, education, environmental prediction, accessibility applications and more.

When robots venture where humans cannot

Sean Humbert, professor of mechanical engineering and director of CU Boulder's Robotics Program, oversees a lab where autonomous systems and bio-inspired robots learn to operate in places and situations where direct human engagement creates unacceptable risk. The algorithms developed by Humbert’s team must process sensor data within milliseconds, adapting to terrain that shifts without warning in contaminated areas, deep ocean trenches and collapsed structures where every decision could mean the difference between rescue and catastrophe.

Humbert, along with Christoffer Heckman and Eric Frew, faculty in computer science and aerospace engineering sciences, respectively, recently led a team of CU researchers in the DARPA Subterranean Challenge to test these systems in unmapped caves, where GPS and communication links were unavailable. Equipment failures and navigation problems in real conditions taught the team lessons impossible to learn in the lab. This iterative process of real-world testing has proved essential for developing truly reliable autonomous rescue systems.


CU Boulder's first humanoid robot is being prepared to perform electric vehicle (EV) lithium-ion battery pack disassembly.

Heckman has also partnered with the Army Research Laboratory to investigate the use of vision-language models, such as those employed in ChatGPT, for building robotic contextual awareness. Building on these AI agents, Heckman’s group is developing models that help interpret statements like “go check out the backpack in the rear of the building,” which provide spatial context to robotic assistants.

Morteza Lahijanian, associate professor of aerospace engineering sciences, approaches the same domain with a fundamentally different methodology. Where some systems learn from experience, Lahijanian applies formal verification techniques to help ensure safe behavior before deployment. His team is pioneering a new AI paradigm that complements learning with logic-based reasoning, enabling autonomous systems to make decisions grounded in both intuition and logic.

In space applications, where missions are costly and communication delays are unacceptable, Lahijanian develops autonomous systems for spacecraft that must dock with satellites and space stations without human intervention. His mathematical verification methods ensure these systems can handle situations no programmer could anticipate.

Teaching robots to work with people

Human-robot collaboration requires systems that can learn from and communicate with their human teammates. Brad Hayes, associate professor of computer science, combines advances in machine learning techniques, including explainable AI, reinforcement learning and imitation learning, with foundational principles from robotics and psychology to develop intelligent, reliable and adaptable autonomous collaborators. By working at the intersection of pervasive and personalized artificial intelligence, large language models, collaborative agents and decision support, his work produces technologies that empower human-robot teams to be greater than the sum of their parts.

Alessandro Roncone, assistant professor of computer science and associate director of the Robotics Program, designs robots that act as teammates by understanding human goals and adapting accordingly. His work addresses tricky questions: When should the robot override the human? How can it warn people about limitations without being annoying? Roncone's research suggests that some chemists hate when robots second-guess their decisions, while others become too dependent on robotic assistance. Achieving balance requires understanding individual personalities.

The challenge of making machines understand

James H. Martin, professor of computer science and faculty fellow at the Institute of Cognitive Science, works on natural language processing systems that can capture meaning in language and exploit that knowledge in practical applications. Language is often vague and ambiguous, requiring context and background knowledge to achieve adequate interpretations, and despite advances with large language models, current approaches still struggle with ambiguous language. Supported by awards from the National Institutes of Health, Martin and his colleagues have been developing systems to mine electronic medical records for temporal and causal patterns to improve clinical outcomes. As part of his work with the National Science Foundation AI Institute for Student-AI Teaming (iSAT), located at CU Boulder, Martin and his students are contributing to systems that analyze and improve instructional discourse in K-12 STEM classrooms.


Professor Alessandro Roncone and a PhD student are working on a collaborative robot that, in the future, will assist people across complex environments, from factories to hospitals.

Maria Pacheco, assistant professor of computer science, faces her own linguistic puzzle. While LLMs are adept at interpreting and producing language, how to use them effectively for knowledge-intensive purposes remains an open question. Whenever LLMs need access to knowledge they were not trained on, or to provide transparent, faithful explanations for their decisions, they fall short. Pacheco studies how to integrate the language capabilities of LLMs with external, structured knowledge resources to develop robust language technology employed in real-world applications.

Beyond recognition: AI that truly sees

Danna Gurari, assistant professor of computer science, applies neural network models to generate spatial descriptions rather than standard object classification. Developed with input from individuals who have vision impairments, her systems provide detailed image descriptions that enable AI-generated spatial navigation. People require different types of information in various contexts. Her work addresses the challenge of providing relevant details without overwhelming users with unnecessary information.

Standardizing how we teach and visualize AI architectures

Associate professor Tom Yeh's AI research, represented by the AI by Hand project, bridges human-computer interaction and computer vision to make deep learning more intuitive and accessible. With over 200,000 followers across social media platforms, the project has achieved global impact by demystifying complex AI concepts through hand-drawn visual explanations. Yeh invented a novel unified representation framework for neural network architectures, designed to help learners and practitioners better understand, compare and build models. By combining rigorous technical insight with human-centered design, Yeh's research empowers diverse audiences to engage more meaningfully with the foundations of modern AI.

Embedding Earth's data to improve environmental monitoring

Satellite data offer an unprecedented window into Earth and its changing conditions, but turning this firehose of data into usable information requires new technologies. Esther Rolf, assistant professor of computer science, develops machine learning systems that coalesce satellite data into succinct, general-purpose representations of the Earth. These embeddings unlock new large-scale applications in environmental monitoring; her recent research identified previously unmonitored artisanal mining activity across Sub-Saharan Africa. Rolf pushes the frontier of geospatial AI, aligning fundamental research advances with goals of computational efficiency and accessibility.

Looking forward

As AI systems integrate into scientific, industrial and public domains, CU Boulder researchers are addressing both technical challenges, such as ensuring reliability and explainability, and social dimensions, including fairness and privacy protection. Rather than replacing human judgment, research aims to create AI that enhances human capabilities and decision-making.

Whether developing robots for dangerous rescue missions, creating educational tools that help people understand AI, or building systems that track environmental changes across the globe, CU Boulder researchers are advancing technology that amplifies human capability and safeguards human agency.