Förderjahr 2024 / Stipendium Call #19 / ProjektID: 7383 / Projekt: Dynamic Power Management for Edge AI: A Sustainable Self-Adaptive Approach
How I built a context-aware reward that helps an RL agent stay smart and energy-efficient.
Designing a Reward Function That Actually Teaches the Agent What to Care About
In reinforcement learning (RL), the reward function defines what “success” means — it’s the feedback signal that shapes every decision the agent makes. Getting it right is surprisingly hard: if the reward is too simple, the agent plateaus; if it’s too complex, it chases noise. Reward design often feels like tuning a musical instrument that keeps changing pitch.
The Challenge
My agent had to balance two conflicting goals: maintain high detection confidence and preserve battery life on a solar-powered system. Push too hard for accuracy, and the device drains itself; play it too safe, and performance collapses. The reward function needed to capture that trade-off and adapt as sunlight and battery levels changed.
The Solution
The final version of my reward function combines two components at each step:

- r_conf: rewards detection confidence above a threshold and penalizes weak predictions.
- r_eff: penalizes excess power draw compared to the available solar energy.
A context signal called energy_context, derived from battery level and solar input, scales their importance:
reward = alpha * r_conf + beta * r_eff
When energy is plentiful, confidence matters more because alpha is large; when energy is scarce, efficiency dominates because beta outweighs alpha. If the battery depletes entirely, the agent receives a small fixed penalty (−0.3), enough to discourage downtime without breaking training stability.
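To make the idea concrete, here is a minimal sketch of such a context-aware reward in Python. The threshold value, the way energy_context is derived from battery level and solar input, and the mapping from energy_context to alpha and beta are illustrative assumptions, not the exact implementation used in the project.

```python
import numpy as np

# Illustrative constants; the actual values are assumptions, not project settings.
CONF_THRESHOLD = 0.6           # assumed detection-confidence threshold
EMPTY_BATTERY_PENALTY = -0.3   # fixed penalty from the text


def compute_reward(confidence, power_draw, solar_power, battery_level):
    """Blend a confidence term and an efficiency term based on energy context.

    confidence    : detection confidence in [0, 1]
    power_draw    : current power consumption (W)
    solar_power   : currently harvested solar power (W)
    battery_level : battery state of charge in [0, 1]
    """
    # Fixed penalty when the battery is fully depleted.
    if battery_level <= 0.0:
        return EMPTY_BATTERY_PENALTY

    # r_conf: reward confidence above the threshold, penalize weak predictions.
    r_conf = confidence - CONF_THRESHOLD

    # r_eff: penalize power draw that exceeds the available solar input.
    r_eff = -max(0.0, power_draw - solar_power)

    # energy_context in [0, 1]: high when battery and solar supply are plentiful.
    # (Simple average here; any monotone combination would fit the same idea.)
    solar_fraction = float(np.clip(solar_power / (power_draw + 1e-6), 0.0, 1.0))
    energy_context = 0.5 * battery_level + 0.5 * solar_fraction

    # Context-dependent weights: alpha dominates when energy is plentiful,
    # beta dominates when it is scarce.
    alpha = energy_context
    beta = 1.0 - energy_context

    return alpha * r_conf + beta * r_eff
```

For example, with a nearly full battery and strong sunlight, alpha approaches 1 and the agent is rewarded mainly for confident detections; with a low battery at night, beta takes over and the same power draw is punished much more heavily.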
The Takeaway
This context-aware reward function doesn’t just optimize performance; it teaches the agent when to care about each goal. Designing it was a long, iterative process, but it turned a fragile, confused learner into one that adapts intelligently to its environment.