
THEORY OF AUTOMATIC CONTROL FOR DUMMIES

K.Yu. Polyakov

St. Petersburg

© K.Yu. Polyakov, 2008

“In a university, you need to present the material at a high professional level. But since this level is well above the head of the average student, I will explain it on my fingers. It's not very professional, but it's understandable.”

Unknown teacher

Foreword

This manual is intended as a first acquaintance with the subject. Its task is to explain "on the fingers" the basic concepts of automatic control theory, so that after reading it you can make sense of the professional literature on this topic. You should consider this manual only as a foundation, a launching pad for the serious study of a serious subject, which can become very interesting and exciting.

There are hundreds of textbooks on automatic control. But the whole problem is that the brain, when perceiving new information, is looking for something familiar, for which you can “catch on”, and on this basis “tie” the new to already known concepts. Practice shows that it is difficult for a modern student to read serious textbooks. Nothing to grab onto. And behind strict scientific evidence, the essence of the matter, which is usually quite simple, often escapes. The author tried to "go down" to a lower level and build a chain from "everyday" concepts to the concepts of control theory.

The exposition is loose at every step: no proofs are given, and formulas are used only where it is impossible to do without them. A mathematician will find here many inconsistencies and omissions, since (in accordance with the aims of the manual) whenever there is a choice between rigor and intelligibility, it is always made in favor of intelligibility.

Little prior knowledge is required of the reader. You need to be familiar with a few topics from a course in higher mathematics:

1) derivatives and integrals;

2) differential equations;

3) linear algebra, matrices;

4) complex numbers.

Thanks

The author expresses his deep gratitude to Dr. Sci. A.N. Churilov, Ph.D. V.N. Kalinichenko, and Ph.D. I.V. Rybinsky, who carefully read the preliminary version of the manual and made many valuable comments, which helped to improve the presentation and make it more understandable.


Contents

1. Basic concepts
   1.1. Introduction
   1.2. Control systems
   1.3. What kinds of control systems are there?
2. Mathematical models
   2.1. What do you need to know in order to control?
   2.2. Connecting input and output
   2.3. How are models built?
   2.4. Linearity and nonlinearity
   2.5. Linearization of equations
   2.6. Control
3. Models of linear objects
   3.1. Differential equations
   3.2. State-space models
   3.3. Transition function (step response)
   3.4. Impulse response (weight function)
   3.5. Transfer function
   3.6. Laplace transform
   3.7. Transfer function and state space
   3.8. Frequency responses
   3.9. Logarithmic frequency responses
4. Typical dynamic links
   Amplifier
   Aperiodic link
   Oscillatory link
   Integrating link
   Differentiating links
   Lag (delay)
   "Inverse" links
   LAFCH plots of complex links
5. Block diagrams
   Conventions
   Transformation rules
   Typical single-loop system
6. Analysis of control systems
   Requirements for control
   Output process
   Accuracy
   Stability
   Stability criteria
   Transient process
   Frequency-domain quality estimates
   Root quality estimates
   Robustness
7. Synthesis of regulators
   Classical scheme
   PID controllers
   Pole placement method
   LAFCH correction
   Combined control
   Invariance
   The set of stabilizing regulators
Conclusion
Literature for further reading


1. Basic concepts

1.1. Introduction

Since ancient times, man has wanted to use the objects and forces of nature for his own purposes, that is, to control them. One can control inanimate objects (for example, rolling a stone to another place), animals (training), and people (boss and subordinate). Many control tasks in the modern world involve technical systems: cars, ships, aircraft, machine tools. For example, you need to maintain a given course of a ship, the altitude of an aircraft, the speed of an engine, or the temperature in a refrigerator or an oven. If these tasks are solved without human intervention, we speak of automatic control.

Control theory tries to answer the question "how should one control?". Until the 19th century, a science of control did not exist, although the first automatic control systems already did (for example, windmills "taught" to turn toward the wind). The development of control theory began during the industrial revolution. At first this direction in science was developed by mechanical engineers to solve regulation problems, that is, maintaining a given value of rotation speed, temperature, or pressure in technical devices (for example, in steam engines). This is where the name "regulation theory" comes from.

Later it turned out that the principles of management can be successfully applied not only in technology, but also in biology, economics, social sciences. The processes of control and processing of information in systems of any nature are studied by the science of cybernetics. One of its sections, connected mainly with technical systems, is called theory of automatic control. In addition to the classical tasks of regulation, she also deals with the optimization of control laws, issues of adaptability (adaptation).

Sometimes the names "automatic control theory" and "automatic regulation theory" are used interchangeably. In modern foreign literature, for example, you will find only one term: control theory.

1.2. Control systems

1.2.1. What is the control system?

In any control problem there are always two objects: the controlled one and the controlling one. The controlled object is usually called the control object, or simply the object, and the controlling one is called the regulator. For example, in controlling rotation speed the control object is an engine (an electric motor, a turbine); in the problem of stabilizing a ship's course, it is the ship itself, immersed in water; in the problem of maintaining the loudness level, it is a loudspeaker.

Regulators can be built on different principles. The best known of the first mechanical regulators is Watt's centrifugal governor, used to stabilize the rotation frequency of a steam turbine (in the figure on the right). When the rotation frequency increases, the balls swing apart because of the increased centrifugal force, and through a system of levers the damper closes slightly, reducing the flow of steam to the turbine.

The temperature regulator in a refrigerator or thermostat is an electronic circuit that turns on the cooling (or heating) mode if the temperature becomes higher (or lower) than the set value.

In many modern systems, regulators are microprocessor devices, computers. They successfully control aircraft and spacecraft without human intervention. A modern car is literally "stuffed" with control electronics, up to and including on-board computers.

Typically, the regulator does not act on the control object directly, but through actuators (drives), which can amplify and convert the control signal. For example, an electrical signal can "turn into" the movement of a valve that regulates fuel consumption, or into turning the steering wheel through a certain angle.

In order for the regulator to "see" what is actually happening with the object, sensors are needed. With the help of sensors, those characteristics of the object that need to be controlled are most often measured. In addition, the quality of control can be improved if additional information is obtained - by measuring the internal properties of the object.

1.2.2. System Structure

So, a typical control system includes an object, a controller, a drive, and sensors. However, a set of these elements is not yet a system. To turn into a system, communication channels are needed, through which information is exchanged between elements. Electric current, air (pneumatic systems), liquid (hydraulic systems), computer networks can be used to transmit information.

Interconnected elements are already a system that has (due to connections) special properties that individual elements and any combination of them do not have.

The main intrigue of control is that the environment acts on the object: external disturbances "prevent" the regulator from performing its task. Most disturbances cannot be predicted in advance, that is, they are random in nature.

In addition, sensors measure parameters not exactly, but with some error, albeit a small one. In this case one speaks of "measurement noise", by analogy with the noise in radio engineering that distorts signals.

Summing up, you can draw the block diagram of a control system like this:

[Block diagram: the regulator produces a control signal that acts on the object through the drive; disturbances act on the object; sensor measurements return to the regulator through the feedback loop]

For example, in a ship's course control system:

the control object is the ship itself, located in the water; to control its course, a rudder is used that changes the direction of the water flow;

the regulator is a digital computer;

the drive is a steering device that amplifies the control electrical signal and converts it into a turn of the rudder;

the sensors form a measuring system that determines the actual course;

the external disturbances are the sea waves and the wind, which deviate the ship from the given course;

the measurement noises are the sensor errors.

The information in the control system, as it were, "walks in a circle": the regulator issues a control signal to the drive, which acts directly on the object; then information about the object returns through the sensors back to the regulator, and everything starts anew. One says that the system contains feedback, that is, the regulator uses information about the state of the object to develop the control. Systems with feedback are called closed-loop, since information is transmitted around a closed loop.


1.2.3. How does the regulator work?

The regulator compares the setting signal (the "setpoint", or "desired value") with the feedback signals from the sensors and determines the mismatch (control error): the difference between the desired and the actual state. If it is zero, no control is required. If there is a difference, the regulator issues a control signal that seeks to reduce the mismatch to zero. Therefore, the regulator circuit in many cases can be drawn like this:

[Diagram: the setpoint and the feedback signal enter a comparison element; the resulting mismatch goes to the control algorithm, which produces the control signal]

This diagram shows control by error (or by deviation). This means that, for the regulator to act, the controlled variable must deviate from the set value. The block marked with ≠ finds the mismatch; in the simplest case, it subtracts the feedback signal (the measured value) from the set value.

Is it possible to control an object so that there is no error at all? In real systems, no. First of all, because of external influences and noises that are not known in advance. In addition, control objects have inertia, that is, they cannot pass instantly from one state to another. The capabilities of the regulator and the drives (that is, the power of the control signal) are always limited, so the speed of the control system (the rate of transition to a new mode) is also limited. For example, when steering a ship the rudder angle usually does not exceed 30-35°, which limits the rate of change of course.

We have considered the case when feedback is used to reduce the difference between the given and the actual state of the control object. Such feedback is called negative, because the feedback signal is subtracted from the reference signal. Could it be the other way around? It turns out it can. In that case the feedback is called positive; it increases the mismatch, that is, it tends to "rock" the system. In practice, positive feedback is used, for example, in generators to maintain undamped electrical oscillations.
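The error-feedback principle described above can be sketched in a few lines of code. This is a minimal illustration, not taken from the original text: a proportional regulator (gain kp) drives a simple first-order "inertial" object toward the setpoint; all numerical values are assumptions chosen for the example.

```python
# A minimal sketch (illustrative assumptions) of error-based negative
# feedback: a proportional regulator acts on a first-order inertial object.

def simulate(setpoint=1.0, kp=2.0, tau=5.0, dt=0.1, steps=300):
    y = 0.0                      # measured output of the object
    history = []
    for _ in range(steps):
        error = setpoint - y     # mismatch: desired minus actual value
        u = kp * error           # proportional control law
        # first-order object: tau * dy/dt + y = u  (Euler integration)
        y += dt * (u - y) / tau
        history.append(y)
    return history

out = simulate()
print(round(out[-1], 3))  # → 0.667, i.e. kp/(1+kp): a steady-state error remains
```

Note that the output settles at kp/(1+kp) of the setpoint, not at the setpoint itself: a purely proportional regulator needs a nonzero error to sustain a nonzero control signal, which illustrates the point above that the error cannot be driven exactly to zero.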

1.2.4. Open-loop systems

Is it possible to control without using feedback? In principle, yes. In this case, the regulator receives no information about the real state of the object, so it must be known exactly how this object behaves. Only then can one calculate in advance how it needs to be controlled (build the required control program). However, there is no guarantee that the task will be completed. Such systems are called program control systems or open-loop systems, since information is not transmitted around a closed loop, but only in one direction.

[Diagram: the program feeds the regulator, whose control signal goes to the object in one direction only; disturbances act on the object; there is no feedback]

A blind and deaf driver can also drive a car, for some time: as long as he remembers the road and can correctly estimate his position, and until pedestrians or other vehicles appear on the way that he cannot know about in advance. From this simple example it is clear that without feedback (information from sensors) it is impossible to take into account the influence of unknown factors and the incompleteness of our knowledge.

Despite these shortcomings, open-loop systems are used in practice: for example, the information board at a station, or the simplest engine control system that does not require very precise speed regulation. However, from the point of view of control theory, open-loop systems are of little interest, and we will not consider them further.

1.3. What kinds of control systems are there?

An automatic system is a system that works without human intervention. There are also automated systems, in which routine processes (the collection and analysis of information) are performed by a computer, but the system as a whole is controlled by a human operator who makes the decisions. Below we study only automatic systems.

1.3.1. Tasks of control systems

Automatic control systems are used to solve three types of problems:

stabilization, that is, maintaining a given operating mode that does not change for a long time (the reference signal is constant, often zero);

program control, that is, control according to a previously known program (the reference signal changes, but is known in advance);

tracking an unknown reference signal.

Stabilization systems include, for example, autopilots on ships (maintaining a given course) and turbine speed regulation systems. Program control systems are widely used in household appliances, such as washing machines. Tracking systems are used to amplify and convert signals; they are employed in drives and in transmitting commands over communication lines, for example, via the Internet.

1.3.2. One-dimensional and multidimensional systems

According to the number of inputs and outputs, there are

one-dimensional systems that have one input and one output (they are considered in the so-called classical control theory);

multidimensional systems having several inputs and/or outputs (the main subject of study in modern control theory).

We will study only one-dimensional systems, where both the plant and the controller have one input and one output signal. For example, when steering a ship along a course, it can be assumed that there is one control action (rudder turn) and one adjustable variable (heading).

However, in reality this is not entirely true. The fact is that when the course changes, the roll and trim of the ship also change. In the one-dimensional model we neglect these changes, although they can be quite significant: for example, during a sharp turn the roll can reach an unacceptable value. On the other hand, not only the rudder can be used for control, but also various thrusters, stabilizers, and so on; that is, the object has several inputs. Thus, a real course control system is multidimensional.

The study of multidimensional systems is a rather difficult task and is beyond the scope of this tutorial. Therefore, in engineering calculations, they sometimes try to simplistically represent a multidimensional system as several one-dimensional ones, and quite often this method leads to success.

1.3.3. Continuous and discrete systems

According to the nature of their signals, systems can be

continuous , in which all signals are functions of continuous time, defined on a certain interval;

discrete, which use discrete signals (sequences of numbers) that are determined only at certain points in time;


continuous-discrete, in which there are both continuous and discrete signals.

Continuous (or analog) systems are usually described by differential equations. These include all motion control systems that contain no computers or other discrete-action elements (microprocessors, logic integrated circuits).

Microprocessors and computers are discrete systems, since in them all information is stored and processed in discrete form. A computer cannot process continuous signals, because it works only with sequences of numbers. Examples of discrete systems can be found in economics (the reporting period is a quarter or a year) and in biology (the "predator-prey" model). Difference equations are used to describe them.
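As a small illustration (not from the original text) of a system described by a difference equation, here is a first-order discrete filter that smooths a sequence of numbers; the coefficient a is an arbitrary assumption.

```python
# An illustrative sketch of a discrete system given by a difference equation:
# y[k+1] = a*y[k] + (1-a)*u[k]  (a simple first-order smoothing filter).

def discrete_filter(u, a=0.8):
    y = [0.0]                        # zero initial state
    for k in range(len(u)):
        y.append(a * y[-1] + (1 - a) * u[k])
    return y[1:]

# step input: the output approaches 1 geometrically, one sample at a time
step = [1.0] * 40
out = discrete_filter(step)
print(round(out[0], 2), round(out[-1], 3))  # → 0.2 1.0
```

Unlike a differential equation, nothing here is defined between the sample instants k; the system's whole life consists of the sequence of numbers y[0], y[1], y[2], ...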

There are also hybrid continuous-discrete systems, for example, computer systems for controlling moving objects (ships, aircraft, cars, etc.). In them, some of the elements are described by differential equations, and some by difference equations. From the point of view of mathematics, this creates great difficulties for their study, therefore, in many cases, continuous-discrete systems are reduced to simplified purely continuous or purely discrete models.

1.3.4. Stationary and non-stationary systems

For management, the question of whether the characteristics of an object change over time is very important. Systems in which all parameters remain constant are called stationary, which means "not changing in time." This tutorial only deals with stationary systems.

In practical problems the situation is often not so rosy. For example, a flying rocket burns fuel, and its mass therefore changes; thus, a rocket is a non-stationary object. Systems in which the parameters of the object or the regulator change over time are called non-stationary. Although a theory of non-stationary systems exists (the formulas have been written down), it is not so easy to apply in practice.

1.3.5. Certainty and randomness

The simplest option is to assume that all object parameters are defined (specified) exactly, just like external influences. In this case, we are talking about deterministic systems that were considered in classical control theory.

However, in real problems we do not have exact data. This applies first of all to external influences. For example, to study the motion of a ship at a first stage, we can assume that the wave has the form of a sinusoid of known amplitude and frequency. This is a deterministic model. Is it like this in practice? Naturally not. With this approach, only approximate, rough results can be obtained.

According to modern concepts, the waveform is approximately described as the sum of sinusoids that have random, that is, unknown in advance, frequencies, amplitudes and phases. Interference, measurement noise are also random signals.

Systems in which random perturbations act or the parameters of an object can change randomly are called stochastic(probabilistic). The theory of stochastic systems allows obtaining only probabilistic results. For example, it cannot be guaranteed that the ship's course deviation will always be no more than 2°, but you can try to ensure such a deviation with some probability (99% probability means that the requirement will be met in 99 cases out of 100).
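The probabilistic guarantee mentioned above can be illustrated with a simple Monte Carlo estimate. This sketch is purely an assumption for illustration: the course deviation is modeled as Gaussian noise with a standard deviation of 1 degree, which is not a claim about real ships.

```python
# An illustrative sketch: estimating the probability that a random course
# deviation stays within 2 degrees, for an assumed Gaussian noise model.

import random

random.seed(42)                      # fixed seed for reproducibility
N = 100_000
within = sum(abs(random.gauss(0.0, 1.0)) <= 2.0 for _ in range(N))
p = within / N
print(f"P(|deviation| <= 2 deg) ≈ {p:.3f}")  # about 0.95 for this model
```

No single run of the system is guaranteed to stay within 2°; the theory of stochastic systems can only promise that the requirement holds with some probability.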

1.3.6. Optimal systems

Often the system requirements can be formulated in the form optimization problems. In optimal systems, the controller is constructed in such a way as to provide a minimum or maximum of some quality criterion. It must be remembered that the expression "optimal system" does not mean that it is really ideal. Everything is determined by the accepted criterion - if it is chosen successfully, the system will turn out to be good, if not, then vice versa.


1.3.7. Special classes of systems

If the parameters of the object or the disturbances are known inaccurately, or may change over time (in non-stationary systems), adaptive or self-adjusting regulators are used, in which the control law changes as conditions change. In the simplest case (when there are several previously known modes of operation), the system simply switches between several control laws. Often in adaptive systems the regulator estimates the parameters of the object in real time and changes the control law accordingly, following a given rule.

A self-adjusting system that tries to adjust the regulator so as to "find" the maximum or minimum of some quality criterion is called extremal (from the word extremum, meaning maximum or minimum).

Many modern household appliances (such as washing machines) use fuzzy controllers, built on the principles of fuzzy logic. This approach makes it possible to formalize the human way of making decisions: "if the ship has gone too far to the right, the rudder must be shifted hard to the left."

One of the popular trends in modern theory is the application of the achievements of artificial intelligence to the control of technical systems. The regulator is built (or only tuned) on the basis of a neural network, which is first trained by a human expert.


2. Mathematical models

2.1. What do you need to know in order to control?

The goal of any control is to change the state of the object in the required way (in accordance with the task). The theory of automatic control must answer the question: "how do we build a regulator that can control a given object so as to achieve the goal?" To do this, the developer needs to know how the control system will react to various influences; that is, models are needed of the object, the drive, the sensors, the communication channels, the disturbances, and the noise.

A model is an object that we use to study another object (the original). The model and the original must be similar in some respect, so that conclusions drawn from studying the model can (with some probability) be transferred to the original. We will be primarily interested in mathematical models, expressed as formulas. In addition, descriptive (verbal), graphical, tabular, and other models are used in science.

2.2. Connection of input and output

Any object interacts with the environment through its inputs and outputs. Inputs are the possible actions on the object; outputs are the signals that can be measured. For example, for an electric motor the inputs can be the supply voltage and the load, and the outputs the shaft rotation speed and the temperature.

The inputs are independent; they "come" from the external environment. When the input changes, the internal state of the object (as its changing properties are called) changes and, as a consequence, so does the output:

input x → [object] → output y

This means that there is some rule according to which the element transforms the input x into the output y. This rule is called an operator. The notation y = U[x] means that the output y is obtained as the result of applying the operator U to the input x.

Building a model means finding an operator that connects inputs and outputs. It can be used to predict the reaction of an object to any input signal.

Consider a DC motor. The input of this object is the supply voltage (in volts), and the output is the rotation speed (in revolutions per second). We will assume that at a voltage of 1 V the rotation speed is 1 revolution per second, and at a voltage of 2 V it is 2 revolutions per second, that is, the rotation frequency is equal in magnitude to the voltage¹. It is easy to see that the action of such an operator can be written as

U[x] = x.

Now let us assume that the same motor rotates a wheel, and as the output of the object we choose the number of revolutions of the wheel relative to the initial position (at the moment t = 0). In this case, with uniform rotation, the product x·∆t gives us the number of revolutions during the time ∆t, that is, y(t) = x·∆t (here the notation y(t) explicitly denotes the dependence of the output on time t). Can we assume that we have defined the operator U by this formula? Obviously not, because the resulting dependence is valid only for a constant input signal. If the voltage at the input x(t) changes (no matter how!), the angle of rotation will be written as an integral.

¹ Of course, this will only be true in a certain range of voltages.
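The distinction just made, between the simple product x·∆t for a constant input and the integral needed for a varying one, can be checked numerically. This sketch is an illustration, not part of the original text; the particular inputs are arbitrary assumptions.

```python
# An illustrative sketch: the motor as an integrating operator. For a varying
# input voltage x(t), the number of revolutions is the integral of x(t),
# approximated here by a Riemann sum.

def revolutions(x, t_end, dt=0.001):
    """Approximate y(t_end) = integral of x(t) dt from 0 to t_end."""
    total, t = 0.0, 0.0
    while t < t_end:
        total += x(t) * dt
        t += dt
    return total

# constant input: y = x * t, matching the simple product formula in the text
print(round(revolutions(lambda t: 2.0, 3.0), 2))  # ≈ 6.0 revolutions
# varying input: the product formula no longer applies, the integral does
print(round(revolutions(lambda t: t, 3.0), 2))    # ≈ 4.5 revolutions
```

For the constant input the sum reproduces x·t exactly (up to discretization error); for the ramp input only the integral gives the right answer, which is why the operator of this object is an integrator rather than a simple multiplication.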


In the modern world there is a great variety of automatic systems, and their number grows every year. All of them require high-quality, well-designed control, whose principles must be laid down by the design engineer at the design stage. After all, a smart house heats a room to the set temperature not because it has suddenly acquired intelligence of its own, and a quadcopter flies so smoothly not because it uses a magic crystal. Believe me, there is no magic in any of this: the theory of automatic control, or TAU for short, is "to blame" for everything.

For a room to heat up to a given temperature, and for a quadcopter to fly well, you need information about their state at the current moment and about the environmental conditions. A smart house needs information about the temperature in the room; for a copter, the relevant information is its altitude and position in space. All of this is collected by devices called sensors. There is a huge variety of them: sensors of temperature, humidity, pressure, voltage, current, acceleration, speed, magnetic field, and many others.

The information from the sensors then needs to be processed. This is done by regulators: mathematical expressions programmed into a microcontroller (or assembled as an electronic circuit) which, based on the reference (setpoint) input and the sensor data, generate a control signal for the actuator (the heating element in a smart heater, a motor, and so on).

Here, with the help of an information converter, feedback is formed, which keeps the automatic control system (ACS) aware of the latest changes and prevents the setpoint input from having a monopoly on control; without accounting for external disturbances, the system would run out of control. Because of the feedback, such systems are called closed-loop. There are also open-loop systems, which have no sensors or other means of observing the outside world. They are as simple as possible, but are hardly suitable for controlling complex objects, since one would have to know the object thoroughly and describe its behavior correctly in every possible situation. Therefore such systems control only simple units, usually on a schedule: for example, the simplest scheme for watering flowers on a timer.

Open-loop systems are of little practical interest, so from here on we will consider only closed-loop ones. The figure showed an example with a single loop, since there is only one feedback. But for more precise control of complex objects it is necessary to control several quantities that affect the behavior of the object as a whole, which means several sensors, several regulators and several feedbacks are required. As a result, the ACS becomes multi-loop.

In terms of structural organization, ACS with series and parallel correction are the most widespread.


ACS with sequential correction


ACS with serial and parallel correction

As can be seen from the diagrams above, these ACS organize their feedbacks and regulators differently. With series (cascade) correction, the output of the outer-loop controller is the input of the inner-loop controller: first one quantity is corrected, then the next, and so on along the whole chain. Such an ACS is also called a subordinate control system. With parallel correction, the signals from the converters all arrive at the input of a single regulator, which must process them together. Each scheme has its pros and cons. Systems with parallel correction are fast but very difficult to tune, because one regulator has to account for every nuance of the various feedbacks. With series correction, the regulators are tuned one after another without much trouble, but the speed of such systems suffers: the more loops, the more uncompensated time constants, and the longer the signal takes to reach the output.

There is also a combined ACS, which is capable of much, but it will not be considered in this course of lectures.

In the first lecture you will learn what the subject of the discipline (TAU) is, with a brief historical background
Classification of ACS (automatic control systems)

Transfer function
Frequency characteristics.
Time functions and characteristics
Block diagrams and their transformation
Typical links and their characteristics
Minimum and Non-Minimum-Phase Links
Frequency response of open systems
Connections of some typical links

The concept of stability of linear continuous ACS
Hurwitz stability criterion
Mikhailov stability criterion
Nyquist stability criterion
The concept of stability margin

Quality indicators
Criteria for the quality of the transition process
Sequential correction of dynamic properties
Parallel Correction



The question of implementing PID controllers is somewhat deeper than it seems. So much so that a lot of wonderful discoveries await the young do-it-yourselfers who decide to implement such a control scheme, and the topic is a relevant one. So I hope this opus is useful to someone; let's get started.

Attempt number one

As an example, let's try to implement a control scheme for turning in a simple 2D space arcade, step by step, starting from the very beginning (don't forget that this is a tutorial, right?).


Why not 3D? Because the implementation doesn't change, except that you would have to spin up PID controllers for pitch, yaw and roll. The question of correctly combining PID control with quaternions is genuinely interesting, and maybe I will cover it in the future, but even NASA prefers Euler angles to quaternions, so we'll get by with a simple model on a two-dimensional plane.


To begin with, let's create the spaceship game object itself: the ship object at the top level of the hierarchy, with a child Engine object attached to it (purely for the sake of special effects). Here's what it looks like for me:



On the spacecraft object itself we attach all the necessary components in the inspector. Looking ahead, here is a screenshot of how it will look in the end:



But that comes later; for now there are no scripts on it yet, only the standard gentleman's set: Sprite Renderer, RigidBody2D, Polygon Collider, Audio Source (why? we'll see).


Physics is the most important thing for us now: control will be carried out exclusively through it, otherwise using a PID controller would lose its meaning. Let's leave the mass of our spacecraft at 1 kg and set all friction and gravity coefficients to zero: we are in space.


Since, in addition to the spacecraft itself, there will be a bunch of other, less intelligent space objects, we first describe a parent class BaseBody, which will contain references to our components, initialization and destruction methods, as well as a number of additional fields and methods, for example, for implementing celestial mechanics:


BaseBody.cs

using UnityEngine;
using System.Collections;
using System.Collections.Generic;

namespace Assets.Scripts.SpaceShooter.Bodies
{
    public class BaseBody : MonoBehaviour
    {
        readonly float _defaultTimeDelay = 0.05f;

        public static List<BaseBody> _bodies = new List<BaseBody>();

        #region RigidBody
        public Rigidbody2D _rb2d;
        public Collider2D[] _c2d;
        #endregion

        #region References
        public Transform _myTransform;
        public GameObject _myObject;
        /// <summary>Object that appears when destroyed</summary>
        public GameObject _explodePrefab;
        #endregion

        #region Audio
        public AudioSource _audioSource;
        /// <summary>Sounds played when damaged</summary>
        public AudioClip[] _hitSounds;
        /// <summary>Sounds played when the object appears</summary>
        public AudioClip[] _awakeSounds;
        /// <summary>Sounds played before death</summary>
        public AudioClip[] _deadSounds;
        #endregion

        #region External Force Variables
        /// <summary>External forces acting on the object</summary>
        public Vector2 _ExternalForces = new Vector2();
        /// <summary>Current velocity vector</summary>
        public Vector2 _V = new Vector2();
        /// <summary>Current gravity force vector</summary>
        public Vector2 _G = new Vector2();
        #endregion

        public virtual void Awake()
        {
            Init();
        }

        public virtual void Start()
        {
        }

        public virtual void Init()
        {
            _myTransform = this.transform;
            _myObject = gameObject;
            _rb2d = GetComponent<Rigidbody2D>();
            _c2d = GetComponentsInChildren<Collider2D>();
            _audioSource = GetComponent<AudioSource>();
            PlayRandomSound(_awakeSounds);
            BaseBody bb = GetComponent<BaseBody>();
            _bodies.Add(bb);
        }

        /// <summary>Destruction of the character</summary>
        public virtual void Destroy()
        {
            _bodies.Remove(this);
            for (int i = 0; i < _c2d.Length; i++)
            {
                _c2d[i].enabled = false;
            }
            float _t = PlayRandomSound(_deadSounds);
            StartCoroutine(WaitAndDestroy(_t));
        }

        /// <summary>Wait some time before destroying</summary>
        /// <param name="waitTime">Waiting time</param>
        public IEnumerator WaitAndDestroy(float waitTime)
        {
            yield return new WaitForSeconds(waitTime);
            if (_explodePrefab)
            {
                Instantiate(_explodePrefab, transform.position, Quaternion.identity);
            }
            Destroy(gameObject, _defaultTimeDelay);
        }

        /// <summary>Play a random sound</summary>
        /// <param name="audioClip">Array of sounds</param>
        /// <returns>Sound duration</returns>
        public float PlayRandomSound(AudioClip[] audioClip)
        {
            float _t = 0;
            if (audioClip.Length > 0)
            {
                int _i = UnityEngine.Random.Range(0, audioClip.Length - 1);
                AudioClip _audioClip = audioClip[_i];
                _t = _audioClip.length;
                _audioSource.PlayOneShot(_audioClip);
            }
            return _t;
        }

        /// <summary>Taking damage</summary>
        /// <param name="damage">Damage level</param>
        public virtual void Damage(float damage)
        {
            PlayRandomSound(_hitSounds);
        }
    }
}


It seems we've described everything needed, even more than necessary (within the framework of this article). Now let's derive from it the ship class Ship, which should be able to move and turn:


SpaceShip.cs

using UnityEngine;
using System.Collections;
using System.Collections.Generic;

namespace Assets.Scripts.SpaceShooter.Bodies
{
    public class Ship : BaseBody
    {
        public Vector2 _movement = new Vector2();
        public Vector2 _target = new Vector2();
        public float _rotation = 0f;

        public void FixedUpdate()
        {
            float torque = ControlRotate(_rotation);
            Vector2 force = ControlForce(_movement);
            _rb2d.AddTorque(torque);
            _rb2d.AddRelativeForce(force);
        }

        public float ControlRotate(float rotate)
        {
            float result = 0f;
            return result;
        }

        public Vector2 ControlForce(Vector2 movement)
        {
            Vector2 result = new Vector2();
            return result;
        }
    }
}


For now there is nothing interesting in it; at the moment it is just a stub class.


We will also describe the base (abstract) class for all input controllers, BaseInputController:


BaseInputController.cs

using UnityEngine;
using Assets.Scripts.SpaceShooter.Bodies;

namespace Assets.Scripts.SpaceShooter.InputController
{
    public enum eSpriteRotation
    {
        Rigth = 0,
        Up = -90,
        Left = -180,
        Down = -270
    }

    public abstract class BaseInputController : MonoBehaviour
    {
        public GameObject _agentObject;
        public Ship _agentBody;  // Reference to the ship logic component
        public eSpriteRotation _spriteOrientation = eSpriteRotation.Up; // Due to the non-standard
                                                                        // orientation of the sprite: "up" instead of "right"

        public abstract void ControlRotate(float dt);
        public abstract void ControlForce(float dt);

        public virtual void Start()
        {
            _agentObject = gameObject;
            _agentBody = gameObject.GetComponent<Ship>();
        }

        public virtual void FixedUpdate()
        {
            float dt = Time.fixedDeltaTime;
            ControlRotate(dt);
            ControlForce(dt);
        }

        public virtual void Update()
        {
            //TODO
        }
    }
}


And finally, the player controller class PlayerFigtherInput:


PlayerInput.cs

using UnityEngine;
using Assets.Scripts.SpaceShooter.Bodies;

namespace Assets.Scripts.SpaceShooter.InputController
{
    public class PlayerFigtherInput : BaseInputController
    {
        public override void ControlRotate(float dt)
        {
            // Determine the position of the mouse relative to the player
            Vector3 worldPos = Input.mousePosition;
            worldPos = Camera.main.ScreenToWorldPoint(worldPos);

            // Store the mouse pointer coordinates
            float dx = -this.transform.position.x + worldPos.x;
            float dy = -this.transform.position.y + worldPos.y;

            // Pass the direction as a Vector2
            Vector2 target = new Vector2(dx, dy);
            _agentBody._target = target;

            // Calculate the target rotation angle
            float targetAngle = Mathf.Atan2(dy, dx) * Mathf.Rad2Deg;
            _agentBody._targetAngle = targetAngle + (float)_spriteOrientation;
        }

        public override void ControlForce(float dt)
        {
            // Pass the movement
            _agentBody._movement = Input.GetAxis("Vertical") * Vector2.up
                                 + Input.GetAxis("Horizontal") * Vector2.right;
        }
    }
}


That seems to be it; now we can finally move on to what all this was started for, i.e. the PID controller (you haven't forgotten, I hope?). Its implementation looks almost embarrassingly simple:


using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace Assets.Scripts.Regulator
{
    // This attribute is necessary for the regulator fields
    // to be displayed in the inspector and serialized
    [System.Serializable]
    public class SimplePID
    {
        public float Kp, Ki, Kd;

        private float lastError;
        private float P, I, D;

        public SimplePID()
        {
            Kp = 1f;
            Ki = 0;
            Kd = 0.2f;
        }

        public SimplePID(float pFactor, float iFactor, float dFactor)
        {
            this.Kp = pFactor;
            this.Ki = iFactor;
            this.Kd = dFactor;
        }

        public float Update(float error, float dt)
        {
            P = error;
            I += error * dt;
            D = (error - lastError) / dt;
            lastError = error;
            float CO = P * Kp + I * Ki + D * Kd;
            return CO;
        }
    }
}

We take the default coefficient values out of thin air: a trivial unit gain for the proportional law, Kp = 1; a small gain for the differential law, Kd = 0.2, which should suppress the expected oscillations; and zero for Ki, chosen because our software model has no static errors (though you can always introduce them and then heroically fight them with the integrator).
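Before wiring the regulator into Unity, you can exercise it in a bare console loop. A hedged sketch: the plant here is a deliberately trivial pure integrator (the angle simply accumulates the control signal), not the Unity rigid body, and the SimplePID class is repeated inline so the snippet compiles on its own:

```csharp
using System;

// Copy of the SimplePID above, so the sketch compiles on its own (no Unity).
class SimplePID
{
    public float Kp, Ki, Kd;
    private float lastError;
    private float integral;

    public SimplePID(float p, float i, float d) { Kp = p; Ki = i; Kd = d; }

    public float Update(float error, float dt)
    {
        integral += error * dt;
        float derivative = (error - lastError) / dt;
        lastError = error;
        return Kp * error + Ki * integral + Kd * derivative;
    }
}

static class LoopDemo
{
    // Plant: a pure integrator, angle += u * dt (much tamer than a rigid body).
    public static float Run(int steps)
    {
        var pid = new SimplePID(1f, 0f, 0.2f);
        float angle = 0f, dt = 0.02f;
        const float setpoint = 90f;
        for (int n = 0; n < steps; n++)
        {
            float u = pid.Update(setpoint - angle, dt);
            angle += u * dt;   // Euler step of the integrator plant
        }
        return angle;
    }

    static void Main()
    {
        // 500 steps of 0.02 s = 10 s of "game time"; settles close to 90.
        Console.WriteLine(Run(500).ToString("F1"));
    }
}
```

On such a tame plant the defaults behave nicely; the rigid body in the game, as we will see shortly, is another story.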


Now let's go back to our Ship class and try to use our creation as the spaceship's rotation controller in the ControlRotate method:


public float ControlRotate(float targetAngle)
{
    float MV = 0f;
    float dt = Time.fixedDeltaTime;

    // Calculate the error
    float angleError = Mathf.DeltaAngle(_myTransform.eulerAngles.z, targetAngle);

    // Get the corrective acceleration
    MV = _angleController.Update(angleError, dt);

    return MV;
}

The PID controller will carry out precise angular positioning of the spacecraft using torque alone. Everything is honest: physics and an ACS, almost like in real life.


And without any of those Quaternion.Lerp tricks of yours:

if (!_rb2d.freezeRotation)
    _rb2d.freezeRotation = true;

float deltaAngle = Mathf.DeltaAngle(_myTransform.eulerAngles.z, targetAngle);
float T = dt * Mathf.Abs(_rotationSpeed / deltaAngle);

// Transform the angle into a quaternion
Quaternion rot = Quaternion.Lerp(_myTransform.rotation,
                                 Quaternion.Euler(new Vector3(0, 0, targetAngle)), T);

// Change the rotation of the object
_myTransform.rotation = rot;


The resulting Ship.cs source code is under the spoiler

using UnityEngine;
using Assets.Scripts.Regulator;

namespace Assets.Scripts.SpaceShooter.Bodies
{
    public class Ship : BaseBody
    {
        public GameObject _flame;

        public Vector2 _movement = new Vector2();
        public Vector2 _target = new Vector2();
        public float _targetAngle = 0f;
        public float _angle = 0f;

        public SimplePID _angleController = new SimplePID();

        public void FixedUpdate()
        {
            float torque = ControlRotate(_targetAngle);
            Vector2 force = ControlForce(_movement);
            _rb2d.AddTorque(torque);
            _rb2d.AddRelativeForce(force);
        }

        public float ControlRotate(float rotate)
        {
            float MV = 0f;
            float dt = Time.fixedDeltaTime;
            _angle = _myTransform.eulerAngles.z;

            // Calculate the error
            float angleError = Mathf.DeltaAngle(_angle, rotate);

            // Get the corrective acceleration
            MV = _angleController.Update(angleError, dt);

            return MV;
        }

        public Vector2 ControlForce(Vector2 movement)
        {
            Vector2 MV = new Vector2();

            // A piece of engine-running special-effect code, just for show
            if (movement != Vector2.zero)
            {
                if (_flame != null)
                {
                    _flame.SetActive(true);
                }
            }
            else
            {
                if (_flame != null)
                {
                    _flame.SetActive(false);
                }
            }

            MV = movement;
            return MV;
        }
    }
}


All? Are we going home?



WTF! What's happening? Why is the ship turning so strangely? And why does it bounce off other objects so sharply? Is this stupid PID controller not working?


No panic! Let's try to figure out what's going on.


At the moment a new SP value arrives, there is a sharp (stepwise) jump in the mismatch error, which, as we remember, is calculated as the difference between the setpoint and the process variable; accordingly, there is a sharp jump in the derivative of the error, which we compute in this line of code:


D = (error - lastError) / dt;

You can, of course, try other differentiation schemes, for example three-point, or five-point, or... but it still won't help. Derivatives simply don't like sharp jumps: at such points the function is not differentiable. Still, it is worth experimenting with different differentiation and integration schemes, just not in this article.
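For the curious, here is what one such "other scheme" might look like: a three-point backward difference. This is only an illustrative sketch (the method name is mine); as said above, it will not cure a genuine step:

```csharp
using System;

static class Derivatives
{
    // Three-point backward difference: D ≈ (3·e_n − 4·e_(n−1) + e_(n−2)) / (2·dt).
    // Second-order accurate on smooth signals, but still explodes on a step.
    public static float ThreePoint(float e0, float e1, float e2, float dt)
    {
        return (3f * e0 - 4f * e1 + e2) / (2f * dt);
    }

    static void Main()
    {
        // On a smooth ramp e(t) = 5t sampled at dt = 0.02 the estimate is the true slope, 5.
        float dt = 0.02f;
        Console.WriteLine(ThreePoint(0.2f, 0.1f, 0.0f, dt));
    }
}
```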


I think the time has come to plot the transient process: a step action from SP(t) = 0 to SP(t) = 90 degrees for a body weighing 1 kg, a force arm 1 meter long and a differentiation grid step of 0.02 s, just like in our Unity3D example (actually not quite: these graphs ignore the dependence of the moment of inertia on the geometry of the rigid body, so the transient will differ slightly, but it is similar enough for demonstration). All values on the graph are given in absolute units:


Hmm, what's going on here? Where did the PID controller response go?


Congratulations, we've just encountered the "derivative kick" phenomenon. Obviously, at the moment when the process variable is still PV = 0 while the setpoint is already SP = 90, numerical differentiation yields a derivative on the order of 4500, which is multiplied by Kd = 0.2 and added to the proportional term, so at the output we get an angular acceleration of 990, which is already outright abuse of the Unity3D physics model (angular velocities will reach 18000 deg/s... I believe that is the limiting angular velocity for a RigidBody2D).
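By the way, the arithmetic of this kick is easy to check outside Unity. A minimal plain-C# sketch (no Unity API; it just repeats the very first Update call of our SimplePID after the setpoint jumps from 0 to 90):

```csharp
using System;

static class KickDemo
{
    // First controller output after a step: previous error was 0,
    // the new error is 90, dt = 0.02 s, Kp = 1, Kd = 0.2, Ki = 0.
    public static float FirstOutput(float error, float dt, float kp, float kd)
    {
        float lastError = 0f;                 // the regulator has just been created
        float P = error;                      // 90
        float D = (error - lastError) / dt;   // 90 / 0.02 = 4500
        return P * kp + D * kd;               // 90 + 0.2 * 4500 = 990
    }

    static void Main()
    {
        Console.WriteLine(FirstOutput(90f, 0.02f, 1f, 0.2f)); // prints 990
    }
}
```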


  • Maybe it's worth hand-picking the coefficients so that the jump is not so strong?
  • No! The best we can achieve this way is a smaller amplitude of the derivative jump; the jump itself will remain, and meanwhile we can easily degrade the differential component into complete uselessness.

However, you can experiment.

Attempt number two. Saturation

It is logical that the actuator (in our case, the SpaceShip's virtual maneuvering thrusters) cannot reproduce the arbitrarily large values that our insane regulator can put out. So the first thing we do is saturate the output of the regulator:


public float ControlRotate(float targetAngle, float thrust)
{
    float CO = 0f;
    float MV = 0f;
    float dt = Time.fixedDeltaTime;

    // Calculate the error
    float angleError = Mathf.DeltaAngle(_myTransform.eulerAngles.z, targetAngle);

    // Get the corrective acceleration
    CO = _angleController.Update(angleError, dt);

    // Saturate
    MV = CO;
    if (MV > thrust) MV = thrust;
    if (MV < -thrust) MV = -thrust;

    return MV;
}

The fully rewritten Ship class now looks like this:

using UnityEngine;
using Assets.Scripts.Regulator;

namespace Assets.Scripts.SpaceShooter.Bodies
{
    public class Ship : BaseBody
    {
        public GameObject _flame;

        public Vector2 _movement = new Vector2();
        public Vector2 _target = new Vector2();
        public float _targetAngle = 0f;
        public float _angle = 0f;
        public float _thrust = 1f;

        public SimplePID _angleController = new SimplePID(0.1f, 0f, 0.05f);

        private float _torque = 0f;
        private Vector2 _force = new Vector2();

        public void FixedUpdate()
        {
            _torque = ControlRotate(_targetAngle, _thrust);
            _force = ControlForce(_movement);
            _rb2d.AddTorque(_torque);
            _rb2d.AddRelativeForce(_force);
        }

        public float ControlRotate(float targetAngle, float thrust)
        {
            float CO = 0f;
            float MV = 0f;
            float dt = Time.fixedDeltaTime;

            // Calculate the error
            float angleError = Mathf.DeltaAngle(_myTransform.eulerAngles.z, targetAngle);

            // Get the corrective acceleration
            CO = _angleController.Update(angleError, dt);

            // Saturate
            MV = CO;
            if (MV > thrust) MV = thrust;
            if (MV < -thrust) MV = -thrust;

            return MV;
        }

        public Vector2 ControlForce(Vector2 movement)
        {
            Vector2 MV = new Vector2();

            if (movement != Vector2.zero)
            {
                if (_flame != null)
                {
                    _flame.SetActive(true);
                }
            }
            else
            {
                if (_flame != null)
                {
                    _flame.SetActive(false);
                }
            }

            MV = movement * _thrust;
            return MV;
        }

        public void Update()
        {
        }
    }
}


The final scheme of our ACS then becomes like this:


At the same time, it becomes clear that the controller output CO(t) now differs slightly from the manipulated variable MV(t).


From here you can already introduce a new game entity, the actuator (drive unit), through which the process is controlled. Its logic can be more complex than a mere Mathf.Clamp(): for example, you can discretize the values (so as not to overload the game physics with numbers running to six decimal places), add a dead zone (again, no sense in overloading the physics with ultra-small reactions), introduce a delay into the control loop, or a nonlinearity (a sigmoid, say) of the drive, and then see what happens.


When we start the game, we will find that the spaceship has finally become controllable:



If you build graphs, you can see that the controller's reaction has already become like this:


Normalized values are used here: the angles are divided by the SP value, and the controller output is normalized by the maximum value at which saturation occurs.

Below is the well-known table of the influence of increasing the PID controller parameters (how do I reduce the font? otherwise the table doesn't fit without hyphenation):



And the general algorithm for manual tuning of the PID controller is as follows:


  1. Select the proportional gain, with the differential and integral links switched off, until self-oscillations begin.
  2. Gradually increasing the differential component, get rid of the self-oscillations.
  3. If a residual control error (offset) remains, eliminate it with the integral component.

There are no universal values for the PID controller parameters: the specific values depend solely on the process (its transfer characteristic). A PID controller that works perfectly with one control object will prove inoperable with another. Moreover, the proportional, integral and differential coefficients are also interdependent.


Attempt number three. Once again derivatives

Having attached a crutch in the form of limiting the controller output, we still haven't solved the main problem of our regulator: the differential component does not feel well at a step change of the error at the controller input. There are plenty of other crutches: for example, "switch off" the differential component at the moment of an abrupt change of SP, or put a low-pass filter between SP(t) and the error computation so that the error grows smoothly, or go all the way and bolt on a genuine Kalman filter to smooth the input data. In short, there are many crutches; of course I would like to add an observer too, but not this time.


Therefore, we will return to the derivative of the mismatch error again and look at it carefully:



Didn't notice anything? If you look closely, you will find that, in general, SP(t) does not change in time (except at the moments of a step change, when the controller receives a new command), i.e. its derivative is zero:





In other words, instead of the derivative of the error, which is not differentiable everywhere, we can use the derivative of the process variable, which in the world of classical mechanics is usually continuous and differentiable everywhere, and the scheme of our ACS then takes the following form:




We modify the controller code:


using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace Assets.Scripts.Regulator
{
    public class SimplePID
    {
        public float Kp, Ki, Kd;

        private float P, I, D;
        private float lastPV = 0f;

        public SimplePID()
        {
            Kp = 1f;
            Ki = 0f;
            Kd = 0.2f;
        }

        public SimplePID(float pFactor, float iFactor, float dFactor)
        {
            this.Kp = pFactor;
            this.Ki = iFactor;
            this.Kd = dFactor;
        }

        public float Update(float error, float PV, float dt)
        {
            P = error;
            I += error * dt;
            D = -(PV - lastPV) / dt;
            lastPV = PV;
            float CO = Kp * P + Ki * I + Kd * D;
            return CO;
        }
    }
}

And let's change the ControlRotate method a bit:


public float ControlRotate(float targetAngle, float thrust)
{
    float CO = 0f;
    float MV = 0f;
    float dt = Time.fixedDeltaTime;

    // Calculate the error
    float angleError = Mathf.DeltaAngle(_myTransform.eulerAngles.z, targetAngle);

    // Get the corrective acceleration
    CO = _angleController.Update(angleError, _myTransform.eulerAngles.z, dt);

    // Saturate
    MV = CO;
    if (CO > thrust) MV = thrust;
    if (CO < -thrust) MV = -thrust;

    return MV;
}

And-and-and... if you run the game, it turns out that in fact nothing has changed since the last attempt, which is what we wanted to prove. However, if we remove the saturation, the regulator response graph will look like this:


The jump in CO(t) is still present, but it is no longer as big as at the very beginning, and most importantly, it has become predictable: it is produced solely by the proportional component and is bounded by the maximum possible mismatch error times the proportional gain of the PID controller (which already hints that it makes sense to choose Kp less than unity, for example 1/90f), and it no longer depends on the differentiation grid step (i.e. on dt). In general, I strongly recommend differentiating the process variable, not the error.
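The difference between the two kinds of derivative is easy to see side by side. A tiny sketch with hypothetical numbers (the method names are mine): at the very moment of a setpoint step the process has not moved yet, so the derivative of PV is zero, while the derivative of the error explodes:

```csharp
using System;

static class DerivativeKind
{
    // One sample after SP jumps 0 → 90 while PV is still 0 (dt = 0.02 s).

    // "Derivative on error" — the scheme from our first SimplePID.
    public static float DOnError(float error, float lastError, float dt)
        => (error - lastError) / dt;

    // "Derivative on measurement" — the scheme from the reworked SimplePID
    // (note the minus sign, matching D = -(PV - lastPV) / dt above).
    public static float DOnMeasurement(float pv, float lastPV, float dt)
        => -(pv - lastPV) / dt;

    static void Main()
    {
        float dt = 0.02f;
        Console.WriteLine(DOnError(90f, 0f, dt));       // 4500 — the kick
        Console.WriteLine(DOnMeasurement(0f, 0f, dt));  // 0 — no kick
    }
}
```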


I think by now it won't surprise anyone that other parts of the scheme can be reworked in the same way, but we won't dwell on this; experiment yourself and tell in the comments what came of it (that would be the most interesting part).

Attempt number four. Alternative implementations of the PID controller

In addition to the ideal representation of the PID controller described above, in practice the standard form is often used, without the coefficients Ki and Kd, in place of which time constants are used.


This approach is due to the fact that a number of PID tuning techniques are based on the frequency characteristics of the PID controller and the process. In fact, the whole of TAU revolves around the frequency characteristics of processes, so for those who want to dig deeper and may suddenly face the alternative nomenclature, here is an example of the so-called standard form of the PID controller:




where Td is the differentiation constant, which affects how far ahead the regulator predicts the state of the system, and Ti is the integration constant, which affects the error-averaging interval of the integral link.
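If you ever need to convert between the two nomenclatures in code: the standard form Kp·(1 + 1/(Ti·s) + Td·s), as implemented below, maps onto the ideal (parallel) gains as Ki = Kp/Ti and Kd = Kp·Td. A small helper sketch (the method names are mine, not from any library):

```csharp
using System;

static class PidForms
{
    // Standard (ISA) form Kp·(1 + 1/(Ti·s) + Td·s) → ideal/parallel gains.
    public static float KiParallel(float kp, float ti) => kp / ti;  // Ki = Kp / Ti
    public static float KdParallel(float kp, float td) => kp * td;  // Kd = Kp · Td

    static void Main()
    {
        // With the defaults used further below (Kp = 0.1, Ti = 10000, Td = 0.5)
        // the equivalent parallel gains are about 1e-5 and 0.05.
        Console.WriteLine(KiParallel(0.1f, 10000f));
        Console.WriteLine(KdParallel(0.1f, 0.5f));
    }
}
```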


The basic principles of tuning a PID controller in standard form are similar to an idealized PID controller:

  • increasing the proportional gain increases speed and reduces the stability margin;
  • decreasing the integration constant makes the control error decrease faster over time;
  • decreasing the integration constant reduces the stability margin;
  • increasing the differential component increases the stability margin and the speed.

The source code of the standard form can be found under the spoiler:

namespace Assets.Scripts.Regulator
{
    public class StandardPID
    {
        public float Kp, Ti, Td;
        public float error, CO;
        public float bias;

        public float P, I, D;
        private float lastPV = 0f;

        public StandardPID()
        {
            Kp = 0.1f;
            Ti = 10000f;
            Td = 0.5f;
            bias = 0f;
        }

        public StandardPID(float Kp, float Ti, float Td)
        {
            this.Kp = Kp;
            this.Ti = Ti;
            this.Td = Td;
        }

        public float Update(float error, float PV, float dt)
        {
            this.error = error;
            P = error;
            I += (1 / Ti) * error * dt;
            D = -Td * (PV - lastPV) / dt;
            CO = Kp * (P + I + D);
            lastPV = PV;
            return CO;
        }
    }
}

The default values are Kp = 0.1, Ti = 10000, Td = 0.5 (as in the constructor above); with these values the ship turns fairly quickly and has some margin of stability.


In addition to this form of the PID controller, there is the so-called recurrent (velocity) form:



We will not dwell on it, because it is relevant primarily for hardware developers working with FPGAs and microcontrollers, where such an implementation is much more convenient and efficient. In our case - we're making something in Unity3D - it is just another implementation of the PID controller, no better than the others and even less intuitive, so let us once again rejoice together at how pleasant it is to program in cozy C# rather than in, say, the creepy and scary VHDL.
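For completeness, a hedged sketch of what such a recurrent (velocity) implementation usually looks like: instead of the absolute output it computes the increment ΔCO from the last three errors and accumulates it. With zero initial state and the same coefficients, its first output matches our positional form (the class is mine, written for illustration only):

```csharp
using System;

class VelocityPID
{
    public float Kp, Ki, Kd;
    private float e1, e2;   // errors at steps n-1 and n-2
    private float co;       // accumulated controller output

    public VelocityPID(float kp, float ki, float kd) { Kp = kp; Ki = ki; Kd = kd; }

    public float Update(float e0, float dt)
    {
        // ΔCO_n = Kp·(e_n − e_(n−1)) + Ki·e_n·dt + Kd·(e_n − 2·e_(n−1) + e_(n−2)) / dt
        co += Kp * (e0 - e1) + Ki * e0 * dt + Kd * (e0 - 2f * e1 + e2) / dt;
        e2 = e1;
        e1 = e0;
        return co;
    }
}

static class VelocityDemo
{
    static void Main()
    {
        var pid = new VelocityPID(1f, 0f, 0.2f);
        // The same 90-degree step as before, dt = 0.02 s: the first output
        // matches the positional form (90 + 0.2 · 4500 = 990).
        Console.WriteLine(pid.Update(90f, 0.02f));
    }
}
```

On hardware this is convenient because only the increment has to be stored and the output inherently "remembers" saturation; here it is just a curiosity.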

Instead of a conclusion: where else to add a PID controller

Now let's try to complicate the ship's control a little using two-loop control: one PID controller, the already familiar _angleController, is still responsible for angular positioning, while the second, new one, _angularVelocityController, controls the rotation speed:


public float ControlRotate(float targetAngle, float thrust)
{
    float CO = 0f;
    float MV = 0f;
    float dt = Time.fixedDeltaTime;
    _angle = _myTransform.eulerAngles.z;

    // Rotation angle controller
    float angleError = Mathf.DeltaAngle(_angle, targetAngle);
    float torqueCorrectionForAngle = _angleController.Update(angleError, _angle, dt);

    // Angular velocity stabilization controller
    float angularVelocityError = -_rb2d.angularVelocity;
    float torqueCorrectionForAngularVelocity =
        _angularVelocityController.Update(angularVelocityError, -angularVelocityError, dt);

    // Total controller output
    CO = torqueCorrectionForAngle + torqueCorrectionForAngularVelocity;

    // Discretize with a step of 0.01
    CO = Mathf.Round(100f * CO) / 100f;

    // Saturate
    MV = CO;
    if (CO > thrust) MV = thrust;
    if (CO < -thrust) MV = -thrust;

    return MV;
}

The purpose of the second regulator is to damp excess angular velocity by adjusting the torque; this is akin to angular friction, which we switched off when creating the game object. Such a control scheme [perhaps] will yield more stable behavior of the ship, and may even get by with proportional gains alone: the second regulator damps all oscillations, performing a function similar to the differential component of the first.


In addition, we will add a new player input class, PlayerCorvetteInput, in which turns are carried out with the left/right keys, leaving mouse target designation for something more useful, for example controlling a turret. We also now have a parameter _turnSpeed, responsible for the speed/responsiveness of the turn (it isn't clear where it belongs better: in the InputController or in the Ship).


public class PlayerCorvetteInput : BaseInputController
{
    public float _turnSpeed = 90f;

    public override void ControlRotate(float dt)
    {
        // Find the mouse pointer
        Vector3 worldPos = Input.mousePosition;
        worldPos = Camera.main.ScreenToWorldPoint(worldPos);

        // Store the relative position of the mouse pointer
        float dx = -this.transform.position.x + worldPos.x;
        float dy = -this.transform.position.y + worldPos.y;

        // Pass the direction of the mouse pointer
        Vector2 target = new Vector2(dx, dy);
        _agentBody._target = target;

        // Calculate rotation according to the keys pressed
        _agentBody._targetAngle -= Input.GetAxis("Horizontal") * _turnSpeed * Time.deltaTime;
    }

    public override void ControlForce(float dt)
    {
        // Pass the movement
        _agentBody._movement = Input.GetAxis("Vertical") * Vector2.up;
    }
}

Also, for clarity, we'll knock together a quick script to display debugging information:

using UnityEngine;
using System.Collections.Generic;
using Assets.Scripts.Regulator;
using Assets.Scripts.SpaceShooter.Bodies;
using Assets.Scripts.SpaceShooter.InputController;

namespace Assets.Scripts.SpaceShooter.UI
{
    public class Debugger : MonoBehaviour
    {
        Ship _ship;
        BaseInputController _controller;
        List<SimplePID> _pids = new List<SimplePID>();
        List<string> _names = new List<string>();
        Vector2 _orientation = new Vector2();

        // Use this for initialization
        void Start()
        {
            _ship = GetComponent<Ship>();
            _controller = GetComponent<BaseInputController>();
            _pids.Add(_ship._angleController);
            _names.Add("Angle controller");
            _pids.Add(_ship._angularVelocityController);
            _names.Add("Angular velocity controller");
        }

        // Update is called once per frame
        void Update()
        {
            DrawDebug();
        }

        Vector3 GetDirection(eSpriteRotation spriteRotation)
        {
            switch (_controller._spriteOrientation)
            {
                case eSpriteRotation.Rigth: return transform.right;
                case eSpriteRotation.Up: return transform.up;
                case eSpriteRotation.Left: return -transform.right;
                case eSpriteRotation.Down: return -transform.up;
            }
            return Vector3.zero;
        }

        void DrawDebug()
        {
            // Direction of rotation
            Vector3 vectorToTarget = transform.position
                + 5f * new Vector3(-Mathf.Sin(_ship._targetAngle * Mathf.Deg2Rad),
                                   Mathf.Cos(_ship._targetAngle * Mathf.Deg2Rad), 0f);

            // Current direction
            Vector3 heading = transform.position + 4f * GetDirection(_controller._spriteOrientation);

            // Angular acceleration
            Vector3 torque = heading - transform.right * _ship._Torque;

            Debug.DrawLine(transform.position, vectorToTarget, Color.white);
            Debug.DrawLine(transform.position, heading, Color.green);
            Debug.DrawLine(heading, torque, Color.red);
        }

        void OnGUI()
        {
            float x0 = 10;
            float y0 = 100;
            float dx = 200;
            float dy = 40;
            float SliderKpMax = 1;
            float SliderKpMin = 0;
            float SliderKiMax = .5f;
            float SliderKiMin = -.5f;
            float SliderKdMax = .5f;
            float SliderKdMin = 0;

            int i = 0;
            foreach (SimplePID pid in _pids)
            {
                y0 += 2 * dy;
                GUI.Box(new Rect(25 + x0, 5 + y0, dx, dy), "");
                pid.Kp = GUI.HorizontalSlider(new Rect(25 + x0, 5 + y0, 200, 10),
                                              pid.Kp, SliderKpMin, SliderKpMax);
                pid.Ki = GUI.HorizontalSlider(new Rect(25 + x0, 20 + y0, 200, 10),
                                              pid.Ki, SliderKiMin, SliderKiMax);
                pid.Kd = GUI.HorizontalSlider(new Rect(25 + x0, 35 + y0, 200, 10),
                                              pid.Kd, SliderKdMin, SliderKdMax);

                GUIStyle style1 = new GUIStyle();
                style1.alignment = TextAnchor.MiddleRight;
                style1.fontStyle = FontStyle.Bold;
                style1.normal.textColor = Color.yellow;
                style1.fontSize = 9;
                GUI.Label(new Rect(0 + x0, 5 + y0, 20, 10), "Kp", style1);
                GUI.Label(new Rect(0 + x0, 20 + y0, 20, 10), "Ki", style1);
                GUI.Label(new Rect(0 + x0, 35 + y0, 20, 10), "Kd", style1);

                GUIStyle style2 = new GUIStyle();
                style2.alignment = TextAnchor.MiddleLeft;
                style2.fontStyle = FontStyle.Bold;
                style2.normal.textColor = Color.yellow;
                style2.fontSize = 9;
                GUI.TextField(new Rect(235 + x0, 5 + y0, 60, 10), pid.Kp.ToString(), style2);
                GUI.TextField(new Rect(235 + x0, 20 + y0, 60, 10), pid.Ki.ToString(), style2);
                GUI.TextField(new Rect(235 + x0, 35 + y0, 60, 10), pid.Kd.ToString(), style2);

                GUI.Label(new Rect(0 + x0, -8 + y0, 200, 10), _names[i], style2);
                i++;
            }
        }
    }
}


The Ship class has also undergone irreversible mutations and should now look like this:

namespace Assets.Scripts.SpaceShooter.Bodies
{
    public class Ship : BaseBody
    {
        public GameObject _flame;
        public Vector2 _movement = new Vector2();
        public Vector2 _target = new Vector2();
        public float _targetAngle = 0f;
        public float _angle = 0f;
        public float _thrust = 1f;

        public SimplePID _angleController = new SimplePID(0.1f, 0f, 0.05f);
        public SimplePID _angularVelocityController = new SimplePID(0f, 0f, 0f);

        private float _torque = 0f;
        public float _Torque { get { return _torque; } }

        private Vector2 _force = new Vector2();
        public Vector2 _Force { get { return _force; } }

        public void FixedUpdate()
        {
            _torque = ControlRotate(_targetAngle, _thrust);
            _force = ControlForce(_movement, _thrust);
            _rb2d.AddTorque(_torque);
            _rb2d.AddRelativeForce(_force);
        }

        public float ControlRotate(float targetAngle, float thrust)
        {
            float CO = 0f;
            float MV = 0f;
            float dt = Time.fixedDeltaTime;
            _angle = _myTransform.eulerAngles.z;

            // Angle controller
            float angleError = Mathf.DeltaAngle(_angle, targetAngle);
            float torqueCorrectionForAngle = _angleController.Update(angleError, _angle, dt);

            // Angular velocity stabilization controller
            float angularVelocityError = -_rb2d.angularVelocity;
            float torqueCorrectionForAngularVelocity = _angularVelocityController.Update(angularVelocityError, -angularVelocityError, dt);

            // Total controller output
            CO = torqueCorrectionForAngle + torqueCorrectionForAngularVelocity;

            // Quantize the output with a step of 0.01
            CO = Mathf.Round(100f * CO) / 100f;

            // Saturate
            MV = CO;
            if (CO > thrust) MV = thrust;
            if (CO < -thrust) MV = -thrust;

            return MV;
        }

        public Vector2 ControlForce(Vector2 movement, float thrust)
        {
            Vector2 MV = new Vector2();

            // Show the engine flame only while there is a movement command
            if (movement != Vector2.zero)
            {
                if (_flame != null) _flame.SetActive(true);
            }
            else
            {
                if (_flame != null) _flame.SetActive(false);
            }

            MV = movement * thrust;
            return MV;
        }

        public void Update()
        {
        }
    }
}


3) linear algebra, matrices;

4) complex numbers.

Thanks

The author expresses his deep gratitude to Dr. Sci. A.N. Churilov, Ph.D. V.N. Kalinichenko and Ph.D. IN. Rybinsky, who carefully read the preliminary version of the manual and made many valuable comments, which helped to improve the presentation and make it more understandable.


Contents

1. Basic concepts
   1.1. Introduction
   1.2. Control systems
   1.3. What are control systems like?
2. Mathematical models
   2.1. What do you need to know to control?
   2.2. Connecting input and output
   2.3. How are models built?
   2.4. Linearity and nonlinearity
   2.5. Linearization of equations
   2.6. Control
3. Models of linear objects
   3.1. Differential equations
   3.2. State-space models
   3.3. Transition function
   3.4. Impulse response (weight function)
   3.5. Transfer function
   3.6. Laplace transform
   3.7. Transfer function and state space
   3.8. Frequency responses
   3.9. Logarithmic frequency responses
4. Typical dynamic links
   4.1. Amplifier
   4.2. Aperiodic link
   4.3. Oscillating link
   4.4. Integrating link
   4.5. Differentiating links
   4.6. Delay
   4.7. "Inverse" links
   4.8. LAFC responses of complex links
5. Structural schemes
   5.1. Conventions
   5.2. Transformation rules
   5.3. Typical single-loop system
6. Analysis of control systems
   6.1. Requirements for control
   6.2. The process at the output
   6.3. Accuracy
   6.4. Stability
   6.5. Stability criteria
   6.6. Transient process
   6.7. Frequency-domain quality estimates
   6.8. Root quality estimates
   6.9. Robustness
7. Synthesis of regulators
   7.1. Classical scheme
   7.2. PID controllers
   7.3. Pole placement method
   7.4. LAFC correction
   7.5. Combined control
   7.6. Invariance
   7.7. The set of stabilizing regulators
Conclusion
Literature for further reading


1. Basic concepts

1.1. Introduction

Since ancient times, man has wanted to use the objects and forces of nature for his own purposes, that is, to control them. One can control inanimate objects (for example, rolling a stone to another place), animals (training), people (boss and subordinate). Many control tasks in the modern world are associated with technical systems: cars, ships, aircraft, machine tools. For example, one needs to maintain the given course of a ship, the altitude of an aircraft, the engine rotation speed, the temperature in a refrigerator or in an oven. If these tasks are solved without human participation, one speaks of automatic control.

Control theory tries to answer the question "how should one control?". Until the 19th century a science of control did not exist, although the first automatic control systems already did (for example, windmills were "taught" to turn into the wind). The development of control theory began during the industrial revolution. At first this direction in science was developed by mechanical engineers to solve problems of regulation, that is, maintaining a given value of rotational speed, temperature or pressure in technical devices (for example, in steam engines). This is where the name "automatic regulation theory" comes from.

Later it turned out that the principles of regulation can be successfully applied not only in technology, but also in biology, economics and the social sciences. The processes of control and information processing in systems of any nature are studied by cybernetics. One of its branches, concerned mainly with technical systems, is called the theory of automatic control. Besides the classical problems of regulation, it also deals with optimization of control laws and with questions of adaptability (adaptation).

Sometimes the names "automatic regulation theory" and "automatic control theory" are used interchangeably. For example, in modern foreign literature you will find only one term: control theory.

1.2. Control systems

1.2.1. What is a control system?

In control tasks there are always two objects: the controlled and the controlling. The controlled object is usually called the control object, or simply the object, and the controlling one is the regulator. For example, when controlling rotational speed the control object is the engine (electric motor, turbine); in the problem of stabilizing a ship's course, the ship submerged in water; in the task of maintaining the volume level, the loudspeaker.

Regulators can be built on different principles. The most famous of the first mechanical regulators is Watt's centrifugal governor for stabilizing the rotational speed of a steam turbine (in the figure on the right). When the rotational speed increases, the balls swing apart due to the increased centrifugal force, and through a system of levers they slightly close the damper, reducing the flow of steam to the turbine.

The temperature regulator in a refrigerator or a thermostat is an electronic circuit that turns on the cooling (or heating) mode if the temperature becomes higher (or lower) than the set value.

In many modern systems the regulators are microprocessor devices, computers. They successfully control aircraft and spacecraft without human intervention. A modern car is literally "stuffed" with control electronics, up to on-board computers.
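The refrigerator thermostat described above is the simplest relay ("on-off") regulator. A minimal sketch of this idea (all numbers, such as the cooling power, heat leak and setpoint, are invented for illustration) shows its characteristic behavior: the temperature does not settle exactly at the setpoint but cycles in a narrow band around it:

```python
def simulate_thermostat(setpoint=4.0, t0=10.0, steps=60, dt=1.0):
    """On-off controller: cooling is on above the setpoint, off below it."""
    temp = t0
    history = []
    for _ in range(steps):
        cooling = temp > setpoint        # relay decision
        leak = 0.3                       # heat leaking in from outside
        cool = -1.0 if cooling else 0.0  # cooling power while the relay is on
        temp += (leak + cool) * dt
        history.append(temp)
    return history

temps = simulate_thermostat()
print(min(temps[30:]), max(temps[30:]))  # a narrow band around the setpoint
```

The persistent oscillation around the setpoint is exactly why refrigerators hum in cycles: a relay regulator can never hold the error at zero, only keep it within a band.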

Typically, the regulator does not act on the control object directly, but through actuators (drives), which can amplify and convert the control signal; for example, an electrical signal can "turn into" the movement of a valve that regulates fuel consumption, or into a rotation of the rudder through a certain angle.

In order for the regulator to "see" what is actually happening with the object, sensors are needed. With the help of sensors, those characteristics of the object that need to be controlled are most often measured. In addition, the quality of control can be improved if additional information is obtained - by measuring the internal properties of the object.

1.2.2. System Structure

So, a typical control system includes an object, a controller, a drive, and sensors. However, a set of these elements is not yet a system. To turn into a system, communication channels are needed, through which information is exchanged between elements. Electric current, air (pneumatic systems), liquid (hydraulic systems), computer networks can be used to transmit information.

Interconnected elements are already a system that has (due to connections) special properties that individual elements and any combination of them do not have.

The main intrigue of control is connected with the fact that the environment acts on the object: external disturbances, which "prevent" the regulator from performing its task. Most disturbances cannot be predicted in advance, that is, they are random in nature.

In addition, sensors measure parameters not exactly, but with some error, even if a small one. In this case one speaks of "measurement noise", by analogy with the noise in radio engineering that distorts signals.

Summing up, you can draw a block diagram of the control system like this:

[Block diagram: the setpoint enters the regulator; the regulator's control signal goes through the drive to the object; disturbances act on the object; sensor measurements return to the regulator through the feedback loop.]

For example, in the ship's course control system:

the control object is the ship itself, located in the water; to control its course a rudder is used, which changes the direction of the water flow;

the regulator is a digital computer;

the drive is a steering device that amplifies the control electrical signal and converts it into a rudder rotation;

the sensors are a measuring system that determines the actual course;

the external disturbances are the sea waves and the wind, which deflect the ship from the given course;

the measurement noises are sensor errors.

The information in the control system "walks in a circle", as it were: the regulator issues a control signal to the drive, which acts directly on the object; then information about the object returns through the sensors back to the regulator, and everything starts anew. One says that there is feedback in the system, that is, the regulator uses information about the state of the object to develop the control. Systems with feedback are called closed, since information is transmitted around a closed loop.


1.2.3. How does the regulator work?

The regulator compares the setting signal ("setpoint", "reference", "desired value") with the feedback signals from the sensors and determines the mismatch (control error): the difference between the specified and the actual state. If it is zero, no control is required. If there is a difference, the regulator issues a control signal that seeks to reduce the mismatch to zero. Therefore, the regulator circuit in many cases can be drawn like this:

[Regulator diagram: the setpoint and the feedback signal enter a comparison element, which forms the mismatch; the control algorithm turns the mismatch into the control signal.]

This diagram shows control by error (or by deviation). This means that for the regulator to act, the controlled variable must deviate from the set value. The block marked with ≠ finds the mismatch. In the simplest case, it subtracts the feedback signal (the measured value) from the set value.

Is it possible to control the object so that there is no error at all? In real systems, no. First of all, because of external influences and noises, which are not known in advance. In addition, control objects have inertia, that is, they cannot pass instantly from one state to another. The capabilities of the regulator and the drives (that is, the power of the control signal) are always limited, so the speed of the control system (the speed of transition to a new mode) is limited as well. For example, when steering a ship the rudder angle usually does not exceed 30-35°, which limits the rate at which the course can change.
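The effect of limited actuator authority can be seen in a toy simulation: a proportional regulator steers a ship model toward a target course, but the commanded rudder angle is clipped to ±35°, so the initial turn rate is capped no matter how large the error is. This is only an illustrative sketch; the "ship dynamics" and all coefficients are invented:

```python
def steer(course0=90.0, target=0.0, kp=2.0, limit=35.0, gain=0.1, dt=0.1, steps=400):
    """Proportional course control with rudder saturation."""
    course = course0
    for _ in range(steps):
        error = target - course                        # mismatch (control error)
        rudder = max(-limit, min(limit, kp * error))   # clip the rudder command
        course += gain * rudder * dt                   # crude ship response model
    return course

print(round(steer(), 2))  # the course approaches the target value
```

While the error is large the rudder sits at its limit and the course changes at a constant maximum rate; only near the target does the proportional law take over, so the error shrinks but never reaches exactly zero in finite time.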

We considered the option when feedback is used in order to reduce the difference between the given and actual state of the control object. Such feedback is called negative because the feedback signal is subtracted from the driving signal. Could it be the other way around? It turns out yes. In this case, the feedback is called positive, it increases the mismatch, that is, it tends to "shake" the system. In practice, positive feedback is used, for example, in generators to maintain undamped electrical oscillations.
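The difference between the two feedback signs can be shown with a two-line iteration: with negative feedback a correction proportional to the error is subtracted, and the mismatch dies out; with positive feedback it is added, and the mismatch grows. The gain value here is arbitrary:

```python
def iterate(sign, error0=1.0, k=0.5, steps=20):
    """sign = -1: negative feedback; sign = +1: positive feedback."""
    e = error0
    for _ in range(steps):
        e += sign * k * e  # correction proportional to the current mismatch
    return e

print(iterate(-1))  # the mismatch dies out
print(iterate(+1))  # the mismatch grows without bound
```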

1.2.4. Open systems

Is it possible to control without using feedback? In principle, yes. In this case the regulator receives no information about the real state of the object, so it must be known exactly how this object behaves. Only then can one calculate in advance how it must be controlled (build the required control program). However, there is no guarantee that the task will be completed. Such systems are called program control systems or open-loop systems, since information is not transmitted around a closed loop, but only in one direction.

[Open-loop diagram: the program feeds the regulator, whose control signal goes straight to the object; disturbances act on the object, but no information returns to the regulator.]

A blind and deaf driver can also drive a car. For a while. As long as he remembers the road and can correctly calculate his position. Until there are pedestrians or other vehicles on the way, which he cannot know about in advance. From this simple example it is clear that without feedback (information from sensors) it is impossible to take into account the influence of unknown factors and the incompleteness of our knowledge.

Despite these shortcomings, open-loop systems are used in practice. For example, the information board at a station. Or the simplest engine control system, which does not require very precise speed regulation. However, from the point of view of control theory, open-loop systems are of little interest, and we will not consider them further.

1.3. What are control systems like?

An automatic system is a system that works without human intervention. There are also automated systems, in which routine processes (collecting and analyzing information) are performed by a computer, but the system as a whole is controlled by a human operator, who makes the decisions. From here on we will study only automatic systems.

1.3.1. Tasks of control systems

Automatic control systems are used to solve three types of problems:

stabilization, that is, maintaining a given operating mode that does not change for a long time (the setting signal is constant, often zero);

program control– control according to a previously known program (the master signal changes, but is known in advance);

tracking of an unknown master signal.

Stabilization systems include, for example, autopilots on ships (maintaining a given course) and turbine speed regulation systems. Program control systems are widely used in household appliances, such as washing machines. Tracking systems serve to amplify and convert signals; they are used in drives and in transmitting commands over communication lines, for example, over the Internet.

1.3.2. One-dimensional and multidimensional systems

According to the number of inputs and outputs, there are

one-dimensional systems that have one input and one output (they are considered in the so-called classical control theory);

multidimensional systems having several inputs and/or outputs (the main subject of study in modern control theory).

We will study only one-dimensional systems, where both the plant and the controller have one input and one output signal. For example, when steering a ship along a course, it can be assumed that there is one control action (rudder turn) and one adjustable variable (heading).

However, in reality this is not entirely true. The fact is that when the course changes, the roll and trim of the ship also change. In the one-dimensional model we neglect these changes, although they can be quite significant. For example, in a sharp turn the roll can reach an unacceptable value. On the other hand, not only the rudder can be used for control, but also various thrusters, stabilizers and so on, that is, the object has several inputs. Thus, the real course control system is multidimensional.

The study of multidimensional systems is a rather difficult task and is beyond the scope of this tutorial. Therefore, in engineering calculations, they sometimes try to simplistically represent a multidimensional system as several one-dimensional ones, and quite often this method leads to success.

1.3.3. Continuous and discrete systems

According to the nature of the signals, the system can be

continuous , in which all signals are functions of continuous time, defined on a certain interval;

discrete, which use discrete signals (sequences of numbers) that are determined only at certain points in time;


continuous-discrete, in which there are both continuous and discrete signals.

Continuous (or analog) systems are usually described by differential equations. These include all motion control systems that contain no computers or other elements of discrete action (microprocessors, logic integrated circuits). Microprocessors and computers are discrete systems, since in them all information is stored and processed in discrete form. A computer cannot process continuous signals, since it works only with sequences of numbers. Examples of discrete systems can be found in economics (the reporting period is a quarter or a year) and in biology (the "predator-prey" model). Difference equations are used to describe them.
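As an illustration of a discrete model, here is a sketch of the "predator-prey" system mentioned above, written as difference equations: the state at step k+1 is computed from the state at step k. The growth and interaction coefficients are invented for illustration, not taken from any real population:

```python
def predator_prey(prey=10.0, pred=5.0, steps=5):
    """Difference-equation model: the state at step k+1
    is computed from the state at step k."""
    a, b, c, d = 1.1, 0.02, 0.95, 0.01  # invented growth/interaction rates
    for _ in range(steps):
        prey, pred = (a * prey - b * prey * pred,
                      c * pred + d * prey * pred)
    return prey, pred

print(predator_prey())  # both populations stay positive
```

There is no continuous time here at all: the model "ticks" once per season, which is exactly the situation a difference equation describes.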

There are also hybrid continuous-discrete systems, for example, computer systems for controlling moving objects (ships, aircraft, cars, etc.). In them, some of the elements are described by differential equations, and some by difference equations. From the point of view of mathematics, this creates great difficulties for their study, therefore, in many cases, continuous-discrete systems are reduced to simplified purely continuous or purely discrete models.

1.3.4. Stationary and non-stationary systems

For management, the question of whether the characteristics of an object change over time is very important. Systems in which all parameters remain constant are called stationary, which means "not changing in time." This tutorial only deals with stationary systems.

In practical problems the situation is often not so rosy. For example, a flying rocket burns fuel, and because of this its mass changes. Thus, a rocket is a non-stationary object. Systems in which the parameters of the object or the regulator change over time are called non-stationary. Although a theory of non-stationary systems exists (the formulas have been written out), it is not so easy to apply it in practice.

1.3.5. Certainty and randomness

The simplest option is to assume that all object parameters are defined (specified) exactly, just like external influences. In this case, we are talking about deterministic systems that were considered in classical control theory.

However, in real problems, we do not have exact data. First of all, it refers to external influences. For example, to study the motion of a ship at the first stage, we can assume that the wave has the form of a sine of known amplitude and frequency. This is a deterministic model. Is it so in practice? Naturally not. With this approach, only approximate, rough results can be obtained.

According to modern concepts, the waveform is approximately described as the sum of sinusoids that have random, that is, unknown in advance, frequencies, amplitudes and phases. Interference, measurement noise are also random signals.

Systems in which random disturbances act, or in which the parameters of the object can change randomly, are called stochastic (probabilistic). The theory of stochastic systems allows one to obtain only probabilistic results. For example, it cannot be guaranteed that the ship's course deviation will always be at most 2°, but one can try to ensure such a deviation with some probability (a 99% probability means that the requirement will be met in 99 cases out of 100).
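This kind of probabilistic statement can be checked numerically. The sketch below draws random course deviations from a Gaussian distribution (an assumed model, with an invented spread) and estimates the probability that the deviation stays within 2°:

```python
import random

def estimate_probability(sigma=0.8, bound=2.0, trials=100_000, seed=1):
    """Monte Carlo estimate of P(|course deviation| <= bound degrees)."""
    rng = random.Random(seed)
    hits = sum(abs(rng.gauss(0.0, sigma)) <= bound for _ in range(trials))
    return hits / trials

p = estimate_probability()
print(p)  # close to the theoretical value of about 0.9876 for sigma = 0.8
```

The answer is a probability, not a guarantee: for any finite spread there is always a small chance that the deviation exceeds the bound.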

1.3.6. Optimal systems

Often the system requirements can be formulated in the form optimization problems. In optimal systems, the controller is constructed in such a way as to provide a minimum or maximum of some quality criterion. It must be remembered that the expression "optimal system" does not mean that it is really ideal. Everything is determined by the accepted criterion - if it is chosen successfully, the system will turn out to be good, if not, then vice versa.
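A miniature example of such a criterion-based choice: for a first-order plant with proportional control, compute a quadratic quality criterion (penalizing both the error and the control effort) for several gains and pick the best one. The plant, the criterion weights and the candidate gains are all invented for illustration:

```python
def cost(kp, rho=0.5, x0=1.0, dt=0.01, steps=1000):
    """Quadratic quality criterion J = sum (x^2 + rho*u^2)*dt
    for the plant x' = -x + u under proportional control u = -kp*x."""
    x, J = x0, 0.0
    for _ in range(steps):
        u = -kp * x                 # proportional control toward zero
        J += (x * x + rho * u * u) * dt
        x += (-x + u) * dt          # Euler step of the plant dynamics
    return J

gains = [0.0, 0.5, 1.0, 2.0, 4.0, 8.0]
best_cost, best_kp = min((cost(kp), kp) for kp in gains)
print(best_kp)  # neither the smallest nor the largest gain wins
```

Note how the criterion shapes the answer: a very small gain leaves the error to decay slowly, a very large gain is punished for control effort, and the "optimal" system is only optimal with respect to this particular choice of weights.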


1.3.7. Special classes of systems

If the parameters of the object or disturbances are known inaccurately or may change over time (in non-stationary systems), adaptive or self-adjusting controllers are used, in which the control law changes when conditions change. In the simplest case (when there are several previously known modes of operation), there is a simple switching between several control laws. Often in adaptive systems, the controller estimates the parameters of the object in real time and accordingly changes the control law according to a given rule.

A self-adjusting system that tries to tune the regulator so as to "find" the maximum or minimum of some quality criterion is called extremal (from the word extremum, which means a maximum or a minimum).

Many modern household appliances (such as washing machines) use fuzzy regulators, built on the principles of fuzzy logic. This approach makes it possible to formalize the human way of making decisions: "if the ship has gone too far to the right, the rudder must be shifted strongly to the left."

One of the popular trends in modern theory is the application of artificial intelligence achievements to control technical systems. The controller is built (or only adjusted) on the basis of a neural network, which is previously trained by a human expert.


2. Mathematical models

2.1. What do you need to know to control?

The goal of any control is to change the state of the object in the right way (in accordance with the task). The theory of automatic control should answer the question: "how to build a regulator that can control a given object in such a way as to achieve the goal?" To do this, the developer needs to know how the control system will respond to different influences, that is, a system model is needed: an object, a drive, sensors, communication channels, disturbances, and noise.

A model is an object that we use to study another object (the original). The model and the original must be similar enough that conclusions drawn from studying the model can (with some probability) be transferred to the original. We will be primarily interested in mathematical models, expressed as formulas. In addition, descriptive (verbal), graphical, tabular and other models are also used in science.

2.2. Connection of input and output

Any object interacts with its environment through inputs and outputs. Inputs are the possible actions on the object; outputs are the signals that can be measured. For example, for an electric motor the inputs may be the supply voltage and the load, and the outputs the shaft speed and the temperature.

The inputs are independent: they "come" from the external environment. When the input changes, the internal state of the object (as its changing properties are called) changes and, as a consequence, so do the outputs:

input x  →  [ U ]  →  output y

This means that there is some rule according to which the element transforms the input x into the output y. This rule is called an operator. The notation y = U[x] means that the output y is obtained as the result of applying the operator U to the input x.

Building a model means finding an operator that connects inputs and outputs. It can be used to predict the reaction of an object to any input signal.

Consider a DC motor. The input of this object is the supply voltage (in volts), the output is the rotational speed (in revolutions per second). We will assume that at a voltage of 1 V the rotational speed is 1 rps, and at a voltage of 2 V it is 2 rps, that is, the rotational speed is numerically equal to the voltage¹. It is easy to see that the action of such an operator can be written as

U[x] = x.

Now suppose that the same motor rotates a wheel, and as the output of the object we choose the number of revolutions of the wheel relative to the initial position (at the moment t = 0). In this case, with uniform rotation, the product x·∆t gives the number of revolutions during the time ∆t, that is, y(t) = x·∆t (here the notation y(t) explicitly indicates that the output depends on the time t). Can we assume that this formula defines the operator U? Obviously not, because the resulting dependence is valid only for a constant input signal. If the voltage at the input x(t) changes (no matter how!), the angle of rotation is written as the integral

y(t) = ∫₀ᵗ x(τ) dτ.

¹ Of course, this will only be true in a certain range of voltages.
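The difference between the two operators can be checked numerically. The sketch below is only an illustration: it approximates the integral by summing speed times a small time step, with the signal and step size chosen arbitrarily for the example.

```python
# The two operators from the text, checked numerically (illustrative only):
# the static operator U[x] = x (speed tracks voltage) and the integral
# operator y(t) = integral of x(tau) d tau (revolutions accumulate in time).

def static_operator(x):
    """Rotational speed equals the applied voltage (within its valid range)."""
    return x

def revolutions(voltages, dt):
    """Approximate the integral: sum speed * dt over each small time step."""
    y = 0.0
    for x in voltages:
        y += static_operator(x) * dt
    return y

# A constant 2 V held for 3 s gives 2 rps * 3 s = 6 revolutions, which
# matches the constant-input formula y = x * dt_total.
signal = [2.0] * 30
total = revolutions(signal, dt=0.1)  # approximately 6.0
```

For a constant input the sum reproduces y = x·∆t exactly; for a varying input only the integral form gives the right answer, which is exactly why the static formula cannot serve as the operator here.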

MINISTRY OF EDUCATION AND SCIENCE OF THE RUSSIAN FEDERATION

Federal State Autonomous Educational Institution of Higher Professional Education

"St. Petersburg State University of Aerospace Instrumentation"


M. V. Burakov

Theory of automatic control.

Study guide

St. Petersburg

Reviewers:

Candidate of Technical Sciences D. O. Yakimovsky (Federal State Enterprise "Research Institute of Command Instruments"); Candidate of Technical Sciences, Associate Professor A. A. Martynov

(St. Petersburg State University of Aerospace Instrumentation)

Approved by the Editorial and Publishing Council of the University

as a teaching aid

Burakov M.V.

D79 Theory of automatic control: study guide. Part 1 / M. V. Burakov. St. Petersburg: GUAP, 2013. 258 pp.: ill.

The textbook discusses the basics of the theory of automatic control - the basic course in the preparation of engineers in the field of automation and control.

The basic concepts and principles of control are given, mathematical models and methods of analysis and synthesis of linear and discrete control systems based on the apparatus of transfer functions are considered.

The textbook is intended for the preparation of bachelors and masters in the direction 220400 "Control in technical systems", as well as students of other specialties studying the disciplines "Theory of automatic control" and "Fundamentals of control theory".

1. BASIC CONCEPTS AND DEFINITIONS

1.1. Brief history of TAU development

1.2. Basic concepts of TAU

1.3. Ways to describe control objects

1.4. Linearization

1.5. Control quality criteria

1.6. Deviation regulators

Questions for self-examination

2. TRANSFER FUNCTIONS

2.1. Laplace transform

2.2. The concept of transfer function

2.3. Typical dynamic links

2.4. Time-domain characteristics

2.5. Transfer function of a system with feedback

2.6. Partial Transfer Functions

2.7. Accuracy in steady state

2.8. Block Diagram Conversion

2.9. Signal Graphs and Mason's Formula

2.10. Invariant systems

Questions for self-examination

3. ROOT ESTIMATES OF STABILITY AND QUALITY

3.1. Necessary and sufficient condition for stability

3.2. Algebraic stability criterion

3.3. Structurally unstable systems

3.4. Root indicators of transient quality

3.5. Selection of controller parameters

3.6. Root locus

Questions for self-examination

4. FREQUENCY METHODS OF ANALYSIS AND SYNTHESIS

4.1. Fourier transform

4.2. Logarithmic frequency response

4.3. Frequency characteristics of an open system

4.4. Frequency stability criteria

4.4.1. Mikhailov stability criterion

4.4.2. Nyquist stability criterion

4.4.3. The Nyquist criterion for systems with a delay

4.5. Frequency quality criteria

4.5.1. Stability margins

4.5.2. Harmonic Accuracy

4.6. Synthesis of corrective devices

4.6.1. Assessing tracking-system quality from the open-loop logarithmic frequency response (LAFC)

4.6.2. Correction using a differentiating device

4.6.3. Correction using an integro-differentiating circuit

4.6.4. Synthesis of a general corrective link

4.7. Analog correction links

4.7.1. Passive corrective links

4.7.2. Active corrective links

Questions for self-examination

5. DIGITAL CONTROL SYSTEMS

5.1. Analog-to-digital and digital-to-analog conversion

5.2. Implementation of DAC and ADC

5.3. Z-transform

5.4. Shift theorem

5.5. Synthesis of digital systems from continuous

5.6. Stability of discrete control systems

5.7. Dynamic Object Identification

5.7.1. Identification task

5.7.2. Deterministic identification

5.7.3. Building a least-squares model from the step-response curve

Questions for self-examination

6. ADAPTIVE CONTROL SYSTEMS

6.1. Classification of adaptive systems

6.2. Extreme control systems

6.3. Adaptive control with reference model

Questions for self-examination

CONCLUSION

Bibliographic list

1. BASIC CONCEPTS AND DEFINITIONS

1.1. A brief history of the development of automatic control theory

The theory of automatic control can be defined as the science of methods for determining laws of control of objects that can be implemented by technical means.

The first automatic devices were developed by man in ancient times, as can be judged from the written evidence that has come down to us. The works of ancient Greek and Roman scientists describe various automatic devices: the hodometer, an automatic device that measured distance by counting the revolutions of a wagon wheel; machines that opened temple doors and sold water; automatic theaters with cam mechanisms; an arrow-throwing device with automatic feed. At the turn of our era, the Arabs fitted the water clock with a float level regulator (Fig. 1.1).

In the Middle Ages, "android" automation was developed, when mechanical designers created devices that imitated individual human actions. The name "android" emphasizes the humanoid nature of the machine. Androids functioned on the basis of clockwork.

Several factors necessitated the development of control systems in the 17th–18th centuries:

1. the development of watchmaking, caused by the needs of the rapidly developing navigation;

2. the development of the flour-grinding industry and the need to regulate the operation of water mills;

3. the invention of the steam engine.

Fig. 1.1. Water clock design

Although it is known that centrifugal speed equalizers were used in water flour mills as early as the Middle Ages, the first feedback control system is considered to be the temperature regulator of the Dutchman Cornelis Drebbel (1600). In 1675, C. Huygens built a pendulum rate regulator into a clock. In 1681, Denis Papin invented the first pressure regulator for steam boilers.

The steam engine became the first object for industrial regulators, since it was unable to operate stably on its own, i.e. it did not possess "self-levelling" (Fig. 1.2).

Fig. 1.2. Steam engine with regulator

The first industrial regulators are the automatic float regulator for feeding the boiler of a steam engine, built in 1765 by I. I. Polzunov, and the centrifugal regulator of the speed of a steam engine, for which J. Watt received a patent in 1784 (Fig. 1.3).

These first regulators were direct control systems, i.e. no additional power sources were required to actuate them: the sensing element directly moved the regulating element. (Modern control systems are indirect control systems, since the error signal is almost always insufficient in power to drive the regulating element.)

Fig. 1.3. Watt's centrifugal regulator


Also worth noting is the importance of the first program control device: a loom controlled from a punched card (for reproducing patterns on carpets), built in 1808 by J. M. Jacquard.

Polzunov's invention was not accidental, since at the end of the 18th century the Russian metallurgical industry occupied a leading position in the world. In the future, Russian scientists and engineers continued to make a great contribution to the development of the theory of automatic control.

The first work on the theory of regulation appeared in 1823, and it was written by Chizhov, a professor at St. Petersburg University.

In 1854, K. I. Konstantinov proposed using the "electromagnetic speed regulator" he had developed in steam engines instead of the conical pendulum. In place of a centrifugal mechanism, it used an electromagnet that regulated the intake of steam into the machine. Konstantinov's regulator was more sensitive than the conical pendulum.

In 1866, A. I. Shpakovsky developed a regulator for a steam boiler heated by nozzles. The fuel supply through the nozzles was proportional to the change in steam pressure in the boiler: if the pressure dropped, the fuel flow through the nozzles increased, which raised the temperature and, as a result, the pressure.

In 1856, during the coronation of Alexander II, six powerful electric arc lamps with Shpakovsky's automatic regulator were installed in Moscow. This was the first practical experience of building and operating a series of electromechanical regulators over an extended period.

Between 1869 and 1883, V. N. Chikolev developed a number of electromechanical regulators, including a differential regulator for arc lamps, which played an important role in the history of control technology.

The date of birth of the theory of automatic control (TAU) is usually given as 1868, when J. Maxwell's paper "On Governors" was published, in which a differential equation was used as the model of a regulator.

A great contribution to the development of TAU was made by the Russian mathematician and engineer I. A. Vyshnegradsky. In his work "On the General Theory of Regulators", published in 1876, he considered the steam engine and the centrifugal regulator as a single dynamic system. Vyshnegradsky drew practically important conclusions about the stable motion of such systems, and he was the first to apply linearization of differential equations, thereby greatly simplifying the mathematical apparatus of the study.
