Probabilistic Safety Guarantees for Learned Control Barrier Functions: Theory and Application to Multi-Objective Human–Robot Collaborative Optimization
Journal: Mathematics
ISSN: 2227-7390
Date Issued: 2026
Author(s)
Abstract
Designing provably safe controllers for high-dimensional nonlinear systems with formal guarantees is a fundamental challenge in control theory. While control barrier functions (CBFs) provide safety certificates through forward invariance, manually crafting these barriers becomes intractable for complex systems. Neural network approximation offers expressiveness but traditionally lacks formal guarantees on approximation error and Lipschitz continuity that are essential for safety-critical applications. This work establishes rigorous theoretical foundations for learned barrier functions through explicit probabilistic bounds relating neural approximation error to safety failure probability. The framework integrates Lipschitz-constrained neural networks trained via PAC learning within multi-objective model predictive control. Three principal results emerge: a probabilistic forward invariance theorem establishing (Formula presented.), explicitly connecting network parameters to failure probability; sample complexity analysis proving (Formula presented.) safe set expansion; and computational complexity bounds of (Formula presented.) enabling 50 Hz real-time control. Experimental validation across 648,000 time steps demonstrates a 99.8% success rate with zero safety violations, a measured approximation error of (Formula presented.) m against a matching theoretical bound of (Formula presented.) m, and a 16.2 ms average solution time. The framework achieves a 52% conservatism reduction compared to manual barriers and a 21% improvement in multi-objective Pareto hypervolume while maintaining formal safety guarantees. © 2026 by the author.
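The abstract's core idea of folding a known approximation-error bound into the barrier condition can be illustrated with a minimal sketch. All names below (`h_true`, `h_learned`, `EPS_APPROX`, `ALPHA`) are hypothetical stand-ins, not the paper's notation: a toy ground-truth barrier over the unit disk, a learned approximation with bounded error, and a discrete-time CBF condition tightened by that error bound so that satisfying the tightened inequality implies the true condition holds.

```python
# Hypothetical sketch: robustifying a learned barrier with its approximation bound.
# If |h_learned - h_true| <= EPS_APPROX everywhere, then the tightened condition
#   h_learned(x_next) - eps >= (1 - alpha) * (h_learned(x_curr) + eps)
# is sufficient for the true condition  h_true(x_next) >= (1 - alpha) * h_true(x_curr)
# whenever h_true(x_curr) >= 0 and 0 < alpha < 1.

def h_true(x):
    """Toy ground-truth barrier: the safe set is the closed unit disk."""
    return 1.0 - (x[0] ** 2 + x[1] ** 2)

def h_learned(x):
    """Stand-in for a Lipschitz-constrained neural approximation of h_true,
    modeled here as the true barrier plus a small bounded error."""
    return h_true(x) + 0.01

EPS_APPROX = 0.02   # assumed uniform bound on |h_learned - h_true|
ALPHA = 0.1         # decay rate in the discrete-time CBF condition

def is_certified_safe_step(x_next, x_curr):
    """Check the error-tightened discrete-time barrier condition."""
    lhs = h_learned(x_next) - EPS_APPROX
    rhs = (1.0 - ALPHA) * (h_learned(x_curr) + EPS_APPROX)
    return lhs >= rhs
```

In the paper's setting this check would sit inside the multi-objective MPC as a constraint on each predicted step, with the tightening margin derived from the PAC approximation bound and the network's Lipschitz constant rather than fixed by hand.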
