Time optimal control in stochastic systems
Abstract
Time-optimal control of a stochastic dynamic system is considered. It is assumed that noise-free observations of the state are available at all times. An optimal admissible feedback control policy is formulated that minimizes the expected time required to reach a desired terminal region. The dynamic programming formalism leads to a second-order nonlinear partial differential equation. The difference between the stochastic and deterministic equations is represented by a truncated power series, and the optimal switching surface for a "bang-bang" controller is then computed by direct search using repetitive simulations. Numerical results for the location of the stochastic switching curves of a specific second-order system are computed and discussed.
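The simulation-based search described in the abstract can be illustrated with a minimal sketch. The system, noise level, terminal region, and switching-curve parameterization below are all assumptions for illustration (a noisy double integrator, with the deterministic time-optimal switching function x1 + c·x2|x2|/2 perturbed by a scalar parameter c), not the specific second-order system studied in the thesis:

```python
import math
import random

def hitting_time(x1, x2, c, sigma=0.1, dt=0.01, t_max=20.0, eps=0.1, rng=None):
    """Euler-Maruyama simulation of a noisy double integrator:
         dx1 = x2 dt,  dx2 = u dt + sigma dW,  |u| <= 1,
       under bang-bang control u = -sign(s) with switching function
         s = x1 + c * x2 * |x2| / 2.
       c = 1 recovers the deterministic time-optimal switching curve;
       c is the parameter varied in the direct search."""
    rng = rng or random.Random(0)
    t = 0.0
    while t < t_max:
        if x1 * x1 + x2 * x2 <= eps * eps:  # reached terminal region
            return t
        s = x1 + c * x2 * abs(x2) / 2.0
        u = -1.0 if s > 0 else 1.0
        x1 += x2 * dt
        x2 += u * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t += dt
    return t_max  # truncate trajectories that never reach the target

def expected_time(c, n=100, seed=0):
    """Monte Carlo estimate of the expected time to reach the target."""
    rng = random.Random(seed)
    return sum(hitting_time(1.0, 0.0, c, rng=rng) for _ in range(n)) / n

def direct_search(candidates, n=100):
    """Direct search: score each candidate switching-curve parameter
       by repetitive simulation and keep the best."""
    return min(candidates, key=lambda c: expected_time(c, n))

best = direct_search([0.6, 0.8, 1.0, 1.2, 1.4])
print("best c:", best, " E[T] ~", expected_time(best))
```

In the thesis the search is over the switching surface itself rather than a single scalar, but the structure is the same: each candidate surface is scored by the sample mean of simulated hitting times, and the surface minimizing that estimate is retained.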
Degree
Ph. D.
Rights
Open Access.
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 License. Copyright held by author.