Rocky Mountain Workshop on Control and Autonomy

Join us for a one-day workshop designed to spark connections and collaboration among control scientists and engineers across Colorado and beyond! Held just after the 2025 American Control Conference in Denver, this event offers a unique opportunity to meet experts in the field.
What to Expect:
- Inspiring plenary talks by two leading controls researchers, as well as a wind farm controls expert from NREL
- Engaging poster sessions for students and early-career researchers
- Networking and coffee breaks focused on curiosity, collaboration, and innovation
Program Schedule:
- 8:50-9:25 a.m. - Check In
- 9:00-9:25 a.m. - Welcome Coffee
- 9:25-9:30 a.m. - Opening Remarks
- 9:30-10:30 a.m. - Plenary Talk 1 by Jeff Shamma (³Ô¹ÏÍø of Illinois Urbana-Champaign)
- Talk Title: A control-theoretic perspective on game-theoretic learning
- 10:30-11:00 a.m. - Technical Talk 1 by Joeri Frederik (NREL)
- Talk Title: Advancements in active wake control for wind turbines
- 11:00 a.m.-1:00 p.m. - Lunch & Poster Session
- 1:00-1:30 p.m. - Technical Talk 2 by Alex Nelson & Christopher Grasso
- Talk Title: Deep Space Operations: Connecting Spacecraft Sequencing to Low Level Control Laws via Commanding
- 1:30-1:40 p.m. - "Best Poster Award" Announcement
- 1:40-2:15 p.m. - Coffee Break
- 2:15-3:00 p.m. - Plenary Talk 2 by Manxi Wu (UC Berkeley)
- Talk Title: Information Design and Learning with Strategic Agents
- 3:00-3:05 p.m. - Closing Remarks
Note: Registration for the workshop is now closed. If you would like to be added to the waitlist, please email Emily Jensen.
Contact Event Organizers:
- Emily Jensen: Assistant Professor, Electrical, Computer & Energy Engineering, CU Boulder
- Philip Brown: Assistant Professor, Computer Science, ³Ô¹ÏÍø of Colorado Colorado Springs
Workshop Speakers
Title: Information Design and Learning with Strategic Agents
Abstract: Strategic behavior in complex systems—such as digital platforms or autonomous agent networks—is shaped by how information is collected, shared, and used in decision-making. In this talk, I will present two perspectives on how information governs system outcomes: one from the platform designer’s point of view, and the other from the learner’s.
First, I will discuss how ride-hailing platforms like Uber can strategically share information about the realized demand distribution with drivers to influence repositioning behavior. Using a Bayesian persuasion framework, we show that in many practically relevant cases, the optimal mechanism takes a simple threshold form: the demand realization is fully revealed when below or above certain thresholds, and not revealed in the intermediate region. We also develop a computational method to solve for optimal information mechanisms in general settings, and extend the framework to the joint design of information and pricing.
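As a toy illustration of the threshold structure described above (a hypothetical sketch, not code from the talk), the simulation below implements a threshold information mechanism: demand is revealed exactly when it falls below a lower threshold or above an upper one, and only a pooled signal is sent in between, so under the pooled signal a driver's posterior collapses to the conditional mean of demand on that interval. The thresholds and the uniform demand distribution are assumptions chosen purely for illustration.

```python
import random

def threshold_signal(demand, lo, hi):
    """Threshold information mechanism: fully reveal demand when it is
    below lo or above hi; otherwise send only an uninformative pooled
    signal for the intermediate region."""
    if demand < lo or demand > hi:
        return ("reveal", demand)
    return ("pooled", None)

random.seed(0)
lo, hi = 0.3, 0.7                       # illustrative thresholds
draws = [random.random() for _ in range(10_000)]  # uniform demand, assumed
signals = [threshold_signal(d, lo, hi) for d in draws]

# Under the pooled signal, a Bayesian driver's belief about demand is its
# conditional mean on [lo, hi] -- about 0.5 for uniform demand.
pooled = [d for d, (s, _) in zip(draws, signals) if s == "pooled"]
posterior_mean = sum(pooled) / len(pooled)
```

The sketch only shows the signaling rule's structure; choosing the thresholds optimally is the design problem the Bayesian persuasion framework solves.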
Then, I will shift focus to settings without a central designer—where agents learn and adapt based solely on their observations of others. We introduce a new hypothesis-testing-based learning model in which agents form beliefs about opponents' strategies and revise them through a stochastic process based on hypothesis testing and utility-driven exploration. In any game, we show that the limiting behavior selects a particular Nash equilibrium that maximizes the minimum utility across all players. We extend this result to stochastic and repeated games, and discuss how such learning dynamics can support cooperative outcomes that emerge through repeated interactions.
Bio: Manxi Wu is an assistant professor in the Department of Civil and Environmental Engineering at the ³Ô¹ÏÍø of California, Berkeley. Her research develops methods in game theory, multi-agent learning, and market design to address emerging challenges in urban infrastructure and mobility systems. Prior to joining Berkeley, she was an Assistant Professor in the School of Operations Research and Information Engineering at Cornell ³Ô¹ÏÍø. She received her Ph.D. in Social and Engineering Systems from MIT in 2021. Manxi has also held positions as a Research Fellow in the Simons Institute program on Learning and Games and as a Postdoctoral Scholar in the EECS Department at UC Berkeley.
Title: A control-theoretic perspective on game-theoretic learning
Abstract: The framework of game-theoretic (or multi-agent) learning explores how individual agent strategies evolve in response to the strategies of others. A central question is whether these evolving strategies converge to classical solution concepts, such as Nash equilibrium.
This talk adopts a control-theoretic perspective by recognizing that learning agents interacting with one another form a feedback system. Learning dynamics are modeled as open dynamical systems that map payoffs, regardless of their source, into strategy updates, while the game itself provides the feedback interconnection.
The focus is on uncoupled learning, where agents update strategies based solely on observed payoffs, without explicit knowledge of utility functions (their own or of others). This perspective enables the use of control-theoretic tools to both analyze and synthesize learning dynamics.
We first exploit the fact that convergence to Nash equilibrium corresponds to feedback stability. The main part of the talk establishes that uncoupled learning can, in general, lead to mixed-strategy Nash equilibrium, while highlighting that the required learning dynamics are not universal and may sometimes involve seemingly irrational behavior. We go on to show how a control-theoretic perspective supports compositional analysis of learning in games, i.e., accommodating hybrid learning dynamics and game structures, using game-theoretic extensions of passivity theory.
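As a minimal illustration of uncoupled learning converging to a mixed-strategy equilibrium (a standard textbook dynamic, not the specific dynamics developed in the talk), the sketch below runs fictitious play in matching pennies: each player best-responds to the opponent's empirical action frequencies, using only its own payoffs and observed play. Individual plays never settle down, but the empirical frequencies approach the game's unique mixed Nash equilibrium (1/2, 1/2).

```python
# Fictitious play in matching pennies (actions: 0 = heads, 1 = tails).
# Row wins on a match; column wins on a mismatch. Each player tracks the
# opponent's empirical action counts and best-responds to that frequency.

def best_response_row(q_heads):
    # Row wants to match, so play heads if the opponent mostly plays heads.
    return 0 if q_heads >= 0.5 else 1

def best_response_col(p_heads):
    # Column wants to mismatch, so play tails if the row mostly plays heads.
    return 1 if p_heads >= 0.5 else 0

row_counts = [1, 1]  # empirical action counts for the row player
col_counts = [1, 1]  # empirical action counts for the column player

T = 50_000
for _ in range(T):
    p_heads = row_counts[0] / sum(row_counts)
    q_heads = col_counts[0] / sum(col_counts)
    row_counts[best_response_row(q_heads)] += 1
    col_counts[best_response_col(p_heads)] += 1

# Empirical frequencies converge toward the mixed equilibrium (1/2, 1/2),
# even though the stage-by-stage play keeps cycling.
freq_row = row_counts[0] / sum(row_counts)
freq_col = col_counts[0] / sum(col_counts)
```

Here convergence of the time averages, rather than of the actions themselves, is exactly the distinction the feedback-stability viewpoint makes precise.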
Bio: Jeff Shamma is Department Head of Industrial and Enterprise Systems Engineering and Jerry S. Dobrovolny Chair at the ³Ô¹ÏÍø of Illinois Urbana-Champaign. He previously held faculty positions at the King Abdullah ³Ô¹ÏÍø of Science and Technology (KAUST) and at Georgia Tech as the Julian T. Hightower Chair in Systems and Controls. Jeff received a PhD in Systems Science and Engineering from MIT in 1988. He is a Fellow of IEEE and IFAC, a past Distinguished Lecturer of the IEEE Control Systems Society, and a recipient of the IFAC High Impact Paper Award, AACC Donald P. Eckman Award, and NSF Young Investigator Award. Jeff has been a plenary/semi-plenary speaker at NeurIPS, the World Congress of the Game Theory Society, and the IEEE Conference on Decision and Control. He was Editor-in-Chief of the IEEE Transactions on Control of Network Systems from 2020-2024. Jeff's research focuses on decision and control, game theory, and multi-agent systems.
Title: Advancements in active wake control for wind turbines
Abstract: For economic reasons, wind turbines are often placed close together in so-called wind farms. However, as a wind turbine extracts energy from the wind, it also generates a wake: a region of lower wind speed and higher turbulence intensity. These wakes create interactions between turbines, and a wind farm can therefore be seen as a distributed system of individual turbines. As such, the optimal control settings for a single turbine are no longer necessarily the optimal settings for the distributed wind farm system. Wind farm flow control (WFFC) is the field of research that focuses on using the actuator degrees of freedom of a wind turbine (blade pitch, generator torque, yaw angle) to manipulate the flow in the wake of the turbine to benefit the wind farm as a whole. In this presentation, we will highlight NREL's most recent advancements in WFFC, with a focus on mitigating wake losses using active wake mixing strategies.
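As a back-of-the-envelope illustration of why wake losses matter (using the classic Jensen/Park top-hat wake model, a simplified textbook model rather than NREL's tooling), the sketch below estimates the wind speed seen by a turbine several rotor diameters downstream of another. All numbers here (free-stream speed, rotor radius, induction factor `a`, wake decay constant `k`) are assumed illustrative values; the point is that power scales with the cube of wind speed, so even a modest velocity deficit becomes a large power loss under full-wake conditions.

```python
def waked_speed(u_inf, x, rotor_radius, a=1/3, k=0.05):
    """Wind speed at distance x downstream, per the Jensen top-hat wake
    model: the velocity deficit 2a decays as the wake expands linearly
    with decay constant k. Assumes a full-wake (aligned) condition."""
    if x <= 0:
        return u_inf
    deficit = 2 * a / (1 + k * x / rotor_radius) ** 2
    return u_inf * (1 - deficit)

u_inf = 8.0            # free-stream wind speed, m/s (illustrative)
r0 = 60.0              # rotor radius, m (illustrative)
spacing = 7 * 2 * r0   # downstream turbine 7 rotor diameters away

u_wake = waked_speed(u_inf, spacing, r0)
# Power is proportional to wind speed cubed, so a ~23% speed deficit
# at this spacing cubes into a loss of roughly half the available power.
wake_loss = 1 - (u_wake / u_inf) ** 3
```

This cubic amplification is what makes even small wake-steering or wake-mixing gains economically significant at farm scale.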
Bio: Joeri Frederik works as a wind farm controls researcher at the National Renewable Energy Laboratory (NREL) in Boulder, Colorado. He earned his PhD in Systems & Control at the Delft ³Ô¹ÏÍø of Technology (TU Delft) in 2021, under the supervision of Professor Jan-Willem van Wingerden. During this time, he co-invented a patented wind farm control strategy called the "helix approach." He joined NREL in 2022, where he continues his work on wind farm control, with a specific focus on active wake mixing strategies that mitigate wake losses.

Bio: Mr. Nelson is an Aerospace Engineer at Rhea Space Activity, where he focuses on developing optical navigation routines for RSA's Jervis Autonomy Module. He completed his M.S. in Aerospace Engineering at the ³Ô¹ÏÍø of Colorado Boulder, focusing his education on astrodynamics and satellite navigation. He was involved in CubeSat development during this time, serving as the ADCS Lead on the RALPHIE and MAXWELL projects as well as the Chief Engineer/Project Manager of the RALPHIE project. At RSA, he worked on the Deep Reinforcement Learning Ecosystem for Guaranteed Safety project, which focused on developing safety guarantees for reinforcement-learning-based control laws. His experience spans both linear and nonlinear control law implementation in aerospace systems.

Bio: Dr. Grasso earned his PhD in electrical engineering from the ³Ô¹ÏÍø of Colorado Boulder and has gone on to accumulate over 30 years of experience in the space industry. He has developed software and sequences for autonomously operating a variety of deep space missions, including the Spitzer Space Telescope, Mars Odyssey, MRO, Dawn, and MSolo. He developed the Virtual Machine Language (VML) sequencing flight software for Spitzer, versions of which have gone on to be used on fifteen deep space missions. He used VML to sequence the successful autonomous entry, descent, and landing of Mars Phoenix, and assists with installing VML on other missions. Dr. Grasso is the lead instructor for the graduate course ASEN5148 Spacecraft Design at the ³Ô¹ÏÍø of Colorado.