Research Activity
1. Control in deterministic discrete-continuous and hybrid systems.
Dynamic systems with discrete-continuous properties (discrete-continuous systems
(DCS), or hybrid systems) arise in various areas of the applied sciences, such
as mechanics, economics, and telecommunications. The research performed during
the last twenty years included the extension of classical methodology to this
new class of systems. A powerful method, which can be referred to as the method
of discontinuous time change, was developed for various control problems in
the area of DCS. By using a special transformation of the time scale, this
method makes it possible to reduce a complicated optimal control problem
with impulsive controls and discontinuous trajectories to a standard one,
stated for an auxiliary system described by ordinary differential
equations with bounded controls. The key point was the introduction of the
concept of a generalized solution in optimal control problems with discontinuous
paths, and the description of such solutions with the aid of the discontinuous
time transformation method.
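Schematically, the reduction can be sketched as follows (the notation here is introduced purely for illustration and is not taken from the original papers):

```latex
% Impulsive system: the control measure \mu may have atoms, so x(t) may jump:
%   \dot{x}(t) = f(x(t), u(t)) + g(x(t))\,\dot{\mu}(t).
% Introduce a new time s and a nondecreasing time change t = \theta(s) with
%   \theta'(s) = \alpha(s) \in [0, 1].
% In the new time one obtains an auxiliary system with bounded controls:
%   \frac{dy}{ds} = \alpha(s)\, f(y(s), u(s)) + \bigl(1 - \alpha(s)\bigr)\, g(y(s))\, v(s),
%   \quad |v(s)| \le 1.
% On intervals where \alpha(s) = 0 the physical time t stands still while y
% traverses the jump of x along the vector field g; where \alpha(s) > 0 the
% ordinary continuous dynamics are recovered.
```

This is why the auxiliary problem is "standard": all controls in the new time scale are bounded, so classical existence and optimality results apply.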
With the aid of this method, the following problems were successfully solved:
- representation of discontinuous paths by means of solutions of an auxiliary dynamic system with bounded controls;
- derivation of the dynamic equations as differential equations with measures;
- proof of existence theorems in optimal control problems with unbounded controls and discontinuous paths;
- derivation of necessary and sufficient optimality conditions within the class of generalized solutions in optimal control problems with discontinuous paths.
2. Stochastic discrete-continuous systems.
The extension of the discontinuous time change method to a class of stochastic
systems made it possible to solve the following problems:
- to derive filtering equations of Kalman type for linear and nonlinear discrete-continuous systems described by stochastic differential equations with measures;
- to prove existence theorems in optimal control problems for stochastic systems with impulsive controls and discontinuous paths;
- to derive filtering equations for stochastic systems described by differential equations with measures and driven by hidden Markov chains;
- to develop a theory of observation control in complex stochastic systems with various constraints on the composition and localization of observations, to prove existence theorems for the optimal generalized solution, and to derive necessary and sufficient optimality conditions.
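For orientation only, the flavor of a "Kalman-type" filter can be shown on the simplest discrete-time scalar model; this is a minimal textbook sketch with parameter names of my own choosing, not the continuous filtering equations with measures developed in the work above.

```python
import numpy as np

# Scalar linear model (illustrative stand-in):
#   x_{k+1} = a * x_k + w_k,   w_k ~ N(0, q)   (state)
#   y_k     = x_k + v_k,       v_k ~ N(0, r)   (observation)

def kalman_filter(ys, a=1.0, q=0.0, r=0.5, x0=0.0, p0=1.0):
    """Return the sequence of state estimates for observations ys."""
    x, p = x0, p0
    estimates = []
    for y in ys:
        # predict step: propagate mean and error variance
        x = a * x
        p = a * p * a + q
        # update step: blend prediction with the new observation
        k = p / (p + r)          # Kalman gain
        x = x + k * (y - x)
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

# usage: track a constant state from noisy observations
rng = np.random.default_rng(0)
truth = 1.0
ys = truth + rng.normal(0.0, 0.5, size=50)
est = kalman_filter(ys)
```

With `a = 1` and `q = 0` the filter simply fuses all observations of a constant state, so the estimate tightens toward the true value as data accumulate.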
3. Applications.
These theoretical methods were successfully applied in the following areas:
- development of suboptimal algorithms for motion estimation in dynamical systems with jumping parameters;
- modeling and optimization of computer-vision systems in the visible and infrared bands;
- image processing, description of image motion in airborne and spacecraft-based optical-electronic systems, and development of image motion compensation methods;
- control of mechanical systems with unilateral constraints.