Adaptive Dynamic Programming for Control - Algorithms and Stability (Hardcover, 2013 ed.)





Product Description

There are many methods of stable controller design for nonlinear systems. In seeking to go beyond the minimum requirement of stability, Adaptive Dynamic Programming for Control approaches the challenging topic of optimal control for nonlinear systems using the tools of adaptive dynamic programming (ADP). The range of systems treated is extensive: affine, switched, singularly perturbed and time-delay nonlinear systems are discussed, as are the uses of neural networks and the techniques of value and policy iteration. The text features three main aspects of ADP in which the methods proposed for stabilization, tracking and games benefit from the incorporation of optimal control methods:
infinite-horizon control, for which the difficulty of solving partial differential Hamilton-Jacobi-Bellman equations directly is overcome, with proof that the iterative value-function updating sequence converges to the infimum of all the value functions obtained by admissible control law sequences (a representative form of this iteration is sketched below);
finite-horizon control, implemented in discrete-time nonlinear systems, showing the reader how to obtain suboptimal control solutions within a fixed number of control steps, with results more easily applied in real systems than those usually gained from infinite-horizon control;
nonlinear games, for which a pair of mixed optimal policies is derived for solving games both when the saddle point does not exist and, when it does exist, without having to verify the saddle point's existence conditions.
Non-zero-sum games are studied in the context of a single-network scheme in which policies are obtained that guarantee system stability and minimize the individual performance functions, yielding a Nash equilibrium.
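To give a flavour of the convergence result mentioned for infinite-horizon control, the value-iteration recursion studied in this setting typically takes the following form for a discrete-time affine system. This is a generic sketch in standard notation: the weighting matrices Q and R, the dynamics f and g, and the value functions V_i are assumptions of the sketch, not notation quoted from the book.

$$x_{k+1} = f(x_k) + g(x_k)\,u_k, \qquad V_0(\cdot) \equiv 0,$$
$$V_{i+1}(x_k) = \min_{u_k}\Big\{ x_k^{\top} Q\, x_k + u_k^{\top} R\, u_k + V_i\big(f(x_k) + g(x_k)\,u_k\big) \Big\},$$
$$u_i(x_k) = \arg\min_{u_k}\Big\{ x_k^{\top} Q\, x_k + u_k^{\top} R\, u_k + V_i\big(f(x_k) + g(x_k)\,u_k\big) \Big\}.$$

Starting from V_0 = 0, the sequence {V_i} is non-decreasing and, under the usual admissibility and positive-definiteness assumptions, converges to the optimal value V*, the infimum over admissible control law sequences, so the discrete-time Hamilton-Jacobi-Bellman equation is solved in the limit of the iteration rather than directly.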
In order to make the coverage suitable for the student as well as for the expert reader, Adaptive Dynamic Programming for Control: establishes the fundamental theory involved, with each chapter devoted to a clearly identifiable control paradigm;
demonstrates convergence proofs of the ADP algorithms, deepening understanding of how stability and convergence are derived for the iterative computational methods used; and
shows how ADP methods can be put to use both in simulation and in real applications.
This text will be of considerable interest to researchers interested in optimal control and its applications in operations research, applied mathematics, computational intelligence and engineering. Graduate students working in control and operations research will also find the ideas presented here to be a source of powerful methods for furthering their study.


Product Details

General

Imprint

Springer London

Country of origin

United Kingdom

Series

Communications and Control Engineering

Release date

December 2012

Availability

Expected to ship within 12 - 17 working days

First published

2013

Authors

Huaguang Zhang, Derong Liu, Yanhong Luo, Ding Wang

Dimensions

235 x 155 x 21mm (L x W x T)

Format

Hardcover

Pages

424

Edition

2013 ed.

ISBN-13

978-1-4471-4756-5

Barcode

9781447147565

LSN

1-4471-4756-1


