Control Theory in Engineering

Control theory is an interdisciplinary branch of engineering and mathematics that deals with the behavior of dynamical systems. The desired output of a system is called the reference. In a control system, a controller manipulates the inputs to the system so that one or more of its output variables follow the reference over time.

By manipulating the input, the controller aims to obtain the desired effect on the output of the system. The usual objective of control theory is to calculate the proper corrective action from the controller that results in system stability; that is, the system holds the set point rather than oscillating around it.

Control systems can be thought of as having four functions: Measure, Compare, Compute, and Correct. These four functions are carried out by five elements: detector, transducer, transmitter, controller, and final control element. The measuring function falls to the detector, transducer, and transmitter; in practical applications these three elements are typically contained in one unit. Consider a car’s cruise control, a device designed to maintain vehicle speed at a constant desired, or reference, speed provided by the driver. The controller is the cruise control, the plant is the car, and the system is the car and the cruise control together. The system output is the car’s speed, and the control itself is the engine’s throttle position, which determines how much power the engine generates.
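
As a rough sketch of how these four functions fit together in code, here is one pass through a control loop. The sensor and throttle routines are hypothetical stand-ins for real hardware, and the proportional law is only one possible compute step:

    # Measure, Compare, Compute, Correct as one loop iteration.
    def read_speed_sensor():
        # Measure: detector, transducer, and transmitter rolled into one stub.
        return 27.0  # m/s, a placeholder reading

    def set_throttle(adjustment):
        # Correct: the final control element (the throttle), stubbed out.
        print(f"throttle adjustment: {adjustment:+.2f}")

    def control_step(reference, gain=0.1):
        measurement = read_speed_sensor()   # Measure
        error = reference - measurement     # Compare
        correction = gain * error           # Compute: simple proportional law
        set_throttle(correction)            # Correct

    control_step(reference=30.0)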

A primitive way to implement cruise control is simply to lock the throttle position when the driver engages cruise control. However, if the cruise control is engaged on a stretch of flat road, the car will then travel slower going uphill and faster going downhill. This type of controller is called an open-loop controller because no measurement of the system output (the car’s speed) is used to alter the control (the throttle position). As a result, the controller cannot compensate for changes acting on the car, such as a change in the slope of the road.
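
A minimal simulation makes the open-loop weakness concrete. The first-order car model below (throttle gain, drag coefficient, gravity term) is purely illustrative; with the throttle locked, the steady-state speed drifts with the slope:

    # Open-loop cruise control: throttle fixed, speed never measured.
    def simulate_open_loop(throttle, slope, steps=1000, dt=0.1):
        v = 25.0  # initial speed, m/s
        for _ in range(steps):
            accel = 2.0 * throttle - 0.05 * v - 9.81 * slope  # toy dynamics
            v += dt * accel
        return v

    print(simulate_open_loop(throttle=0.7, slope=0.00))  # flat road: ~28 m/s
    print(simulate_open_loop(throttle=0.7, slope=0.03))  # uphill: ~22 m/s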

In a closed-loop control system, a sensor monitors the system output (the car’s speed) and feeds the data to a controller, which adjusts the control (the throttle position) as necessary to maintain the desired system output (matching the car’s speed to the reference speed). Now when the car goes uphill, the decrease in speed is measured and the throttle position is changed to increase engine power, speeding the vehicle back up. Feedback from measuring the car’s speed allows the controller to dynamically compensate for changes to the car’s speed. It is from this feedback that the paradigm of the control loop arises: the control affects the system output, which in turn is measured and looped back to alter the control.
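
Closing the loop on the same toy model shows the compensation at work: the controller now measures the speed every step and adjusts the throttle from the error. The proportional-integral law and its gains are illustrative, not tuned for any real vehicle:

    # Closed-loop cruise control: measure, compare, compute, correct each step.
    def simulate_closed_loop(reference, slope, steps=1000, dt=0.1,
                             kp=0.5, ki=0.1):
        v, integral = 25.0, 0.0
        for _ in range(steps):
            error = reference - v      # feedback: compare output to reference
            integral += error * dt
            throttle = kp * error + ki * integral
            v += dt * (2.0 * throttle - 0.05 * v - 9.81 * slope)
        return v

    print(simulate_closed_loop(reference=28.0, slope=0.00))  # ~28 m/s
    print(simulate_closed_loop(reference=28.0, slope=0.03))  # still ~28 m/s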

Although control systems of various types date back to antiquity, a more formal analysis of the field began with the physicist James Clerk Maxwell’s 1868 dynamics analysis of the centrifugal governor, ‘On Governors.’ The paper described and analyzed the phenomenon of ‘hunting,’ in which lags in the system can lead to overcompensation and unstable behavior. This generated a flurry of interest in the topic, during which Maxwell’s classmate Edward John Routh generalized Maxwell’s results to the broader class of linear systems. Independently, Adolf Hurwitz analyzed system stability using differential equations in 1895, resulting in what is now known as the Routh–Hurwitz theorem.
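
As a taste of what the Routh–Hurwitz theorem provides, here is its third-order special case, which decides stability from the coefficients of the characteristic polynomial alone, without computing any roots (in LaTeX notation):

    % Third-order Routh–Hurwitz criterion: for a_3 > 0, the polynomial
    \[
    a_3 s^3 + a_2 s^2 + a_1 s + a_0
    \]
    % has all roots in the open left half-plane (the system is stable) iff
    \[
    a_2, a_1, a_0 > 0 \quad\text{and}\quad a_2 a_1 > a_3 a_0 .
    \]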

A notable application of dynamic control was in the area of manned flight. The Wright brothers made their first successful test flights in 1903 and were distinguished by their ability to control their flights for substantial periods (more so than by the ability to produce lift from an airfoil, which was already known). Continuous, reliable control of the airplane was necessary for flights lasting longer than a few seconds. By World War II, control theory was an important part of fire-control systems (computers that calculate trajectories for naval guns), guidance systems, and electronics. Sometimes purely mechanical methods are used to improve the stability of systems. For example, ship stabilizers are fins mounted beneath the waterline and emerging laterally. In contemporary vessels, they may be gyroscopically controlled active fins that change their angle of attack to counteract roll caused by wind or waves acting on the ship.

The Sidewinder missile uses small control surfaces placed at the rear of the missile with spinning disks on their outer surfaces; these are known as rollerons. Airflow over the disks spins them to a high speed. If the missile starts to roll, the gyroscopic force of the disks drives the control surfaces into the airflow, cancelling the motion. Thus the Sidewinder team replaced a potentially complex control system with a simple mechanical solution.

To avoid the problems of the open-loop controller, control theory introduces feedback. A closed-loop controller uses feedback to control states or outputs of a dynamical system. Its name comes from the information path in the system: process inputs (e.g., voltage applied to an electric motor) have an effect on the process outputs (e.g., speed or torque of the motor), which are measured with sensors and processed by the controller; the result (the control signal) is used as input to the process, closing the loop. In some systems, closed-loop and open-loop control are used simultaneously; in such systems, the open-loop control is termed feedforward and serves to further improve reference-tracking performance.
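
A small extension of the earlier toy model shows the two working together: the feedforward term guesses the throttle needed at the reference speed from a model of the car (here an exact model, the best case), and the feedback term corrects whatever the guess misses. All constants remain illustrative:

    # Feedforward (model-based guess) plus feedback (proportional correction).
    def simulate_ff_fb(reference, slope, steps=600, dt=0.1, kp=0.5):
        v = 25.0
        feedforward = 0.05 * reference / 2.0  # throttle offsetting drag at v = reference
        for _ in range(steps):
            throttle = feedforward + kp * (reference - v)
            v += dt * (2.0 * throttle - 0.05 * v - 9.81 * slope)
        return v

    print(simulate_ff_fb(reference=28.0, slope=0.0))  # tracks 28 m/s on the flat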

Controllability and observability are central issues in the analysis of a system before deciding on the best control strategy to apply, or whether it is even possible to control or stabilize the system. Controllability is related to the possibility of forcing the system into a particular state by applying an appropriate control signal. If a state is not controllable, then no signal will ever be able to control it; if a state is not controllable but its dynamics are stable, the state is termed stabilizable. Observability, in turn, is related to the possibility of ‘observing,’ through output measurements, the state of a system. If a state is not observable, the controller can never determine its behavior and hence cannot use it to stabilize the system. However, analogous to the stabilizability condition above, a state that cannot be observed might still be detectable.
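
For linear systems x' = Ax + Bu with measurements y = Cx, both properties have standard algebraic tests, the Kalman rank criteria: the system is controllable when [B, AB, ..., A^(n-1)B] has full rank, and observable when the stacked matrix of C, CA, ..., CA^(n-1) does. A sketch with NumPy on a small example (the matrices here are arbitrary illustrations):

    import numpy as np

    def controllability_matrix(A, B):
        # Kalman controllability matrix [B, AB, ..., A^(n-1) B].
        n = A.shape[0]
        return np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])

    def observability_matrix(A, C):
        # Kalman observability matrix [C; CA; ...; C A^(n-1)].
        n = A.shape[0]
        return np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

    A = np.array([[0.0, 1.0], [-2.0, -3.0]])
    B = np.array([[0.0], [1.0]])
    C = np.array([[1.0, 0.0]])

    n = A.shape[0]
    print(np.linalg.matrix_rank(controllability_matrix(A, B)) == n)  # controllable?
    print(np.linalg.matrix_rank(observability_matrix(A, C)) == n)    # observable?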
