FROM: DAD
20211226
FROM: DAD
20211215
Yes, but what do we mean when we talk of God helping us? We mean God putting into us a bit of Himself, so to speak. He lends us a little of His reasoning powers and that is how we think: He puts a little of His love into us and that is how we love one another. When you teach a child writing, you hold its hand while it forms the letters: that is, it forms the letters because you are forming them. We love and reason because God loves and reasons and holds our hand when we do it.
C.S. Lewis, Mere Christianity, 1952.
20211212
[begin transmission 5/?]
You didn't think it would be that easy, did you? Honestly.
Here we are, returning to that dreadful topic of dynamics. Steady your breath, quiet your heart.
We'll make it easy and, rather than expound abstractly, let us use the mass-spring-damper system from Post #20210621.
This should be fairly straightforward to grasp. Are you ready?
Who's That? So Cute! So Compact! Enter: State Space Representation!
If you'd so kindly recall, the dynamical equation associated w/ our mass-spring-damper system looked something like this:
\begin{equation} M\ddot y + b \dot y + ky = r(t) \tag{1}\end{equation}
I'd like to slightly modify it from here, to incorporate some of the conventions we've talked about so far:
\begin{equation} M\ddot x + b \dot x + kx = u \tag{2}\end{equation}
All I did was replace the variable 'y' with 'x', since 'x' is the go-to generic mathematical variable of choice. I also replaced r(t) with u; u is ubiquitously used in control theory to designate our input, also known as our control signal. I also dropped the (t) for all these variables b/c it is implicit that they vary w/ time.
This is a second-order (b/c the 'highest' order term is a double dot, meaning the second derivative) ordinary differential equation. It describes the movement of a mass attached to a spring, anchored to an immovable object, and subjected to an input force u. Recall that what we did in Post #20210714 was apply the Laplace Transform to this equation to solve for the spatial trajectory of the mass after subjecting it to an input force u; in that post we elected to designate u as a Heaviside function--a simple on/off application of force.
The Laplace Transform technique is known as an analytical method (a solution can be obtained procedurally, w/ pen and paper), the primary limitation being that it can only be applied strictly to linear, time-invariant systems. Of course, most complex real-world systems are NOT linear or time-invariant. With the advancement of computation through the decades since WWII, a new type of analysis emerged that is now known as state-space control, or modern control theory. In this framework, we don't need to bother w/ transforms, complex numbers, and the s-domain. We can conduct our analysis in the time-domain. The goal here is to represent our system in terms of states. It's really cool, I hope that it is fairly intuitive and that you'll like it.
So, how do we get there from the above equation? First, you count the number of dots on the highest order term and create the corresponding number of new variables out of thin air. In our system, this is two. So, we now define two new variables, these will be known as our states:
\begin {equation} x_1 = ? \tag{3}\end{equation}
\begin {equation} x_2 = ? \tag{4}\end{equation}
Great, now what? Well, defining state x1 as x wouldn't be so bad. Nor would taking the derivative of x1 and assigning it to state x2. Let's do that now:
\begin {equation} x_1 = x \tag{5}\end{equation}
\begin {equation} x_2 = \dot x_1 = \dot x \tag{6}\end{equation}
Next, we take the derivative of each of these states:
\begin {equation} \dot x_1 = ? \tag{7}\end{equation}
\begin {equation} \dot x_2 = ? \tag{8}\end{equation}
Look at Equation 6 for a clue on what to put in Equation 7. Equation 8 will take a little more deduction, but you can do it. Just look and think about Equation 6. Equations 7 and 8 become:
\begin {equation} \dot x_1 = x_2 = \dot x \tag{9}\end{equation}
\begin {equation} \dot x_2 = \ddot x \tag{10}\end{equation}
What gives? We're supposed to be representing our system in terms of states. In order to do this, we need to return to our original equation and replace all the variables w/ our states:
\begin{equation} M\ddot x + b x_2 + kx_1 = u \tag{11}\end{equation}
A little algebraic magic to this equation yields:
\begin{equation} \ddot x = \frac{1}{M}u - \frac{b}{M}x_2 - \frac{k}{M}x_1 \tag{12}\end{equation}
You may now go back and re-populate Equation 10 w/ this information:
\begin {equation} \dot x_2 = \frac{1}{M}u - \frac{b}{M}x_2 - \frac{k}{M}x_1 \tag{13}\end{equation}
Voila! (I'll explain in a bit) We have taken our original dynamical equation, Equation 2, and obtained the following state and control vectors (vectors are designated by boldface) and dynamical equations:
\begin{equation} \mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}\tag{14}\end{equation}
\begin{equation} \mathbf{u} = \begin{bmatrix} u \end{bmatrix}\tag{15}\end{equation}
\begin{equation} \dot x_1 = x_2 \tag{16}\end{equation}
\begin{equation} \dot x_2 = - \frac{k}{M}x_1 - \frac{b}{M}x_2 + \frac{1}{M}u \tag{17}\end{equation}
The entire point of this process was to take our original second-order differential equation, describe it in terms of states, and reduce it to two coupled, first-order differential equations. Mathematically, this is much easier to work w/, and it makes the system amenable to representation in state-space form. In this example, we elected to use x1 and x2 as our states; again, this is to keep things mathematically generic. There's nothing to stop you from designating these state variables as whatever you'd like. In practice, engineers use canonical designations (x for x-position, y for y-position, V for voltage, etc.) and their associated derivatives (first derivative of x-position = x dot = x-velocity = vx) for states instead of the generic x1, x2, x3, etc.
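If you'd like to see those two coupled first-order equations in motion, here's a minimal sketch that integrates them numerically w/ SciPy's `solve_ivp`. The values of M, b, and k (and the constant unit-step input u) are made-up placeholders, not numbers from any earlier post:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder parameters -- arbitrary, for illustration only
M, b, k = 1.0, 0.5, 2.0

def dynamics(t, x):
    """Right-hand side of the two coupled first-order ODEs.
    x[0] = position (x1), x[1] = velocity (x2)."""
    u = 1.0  # Heaviside-style input: constant force switched on at t = 0
    x1_dot = x[1]
    x2_dot = (1.0 / M) * u - (b / M) * x[1] - (k / M) * x[0]
    return [x1_dot, x2_dot]

# Integrate from rest over 20 seconds
sol = solve_ivp(dynamics, t_span=(0.0, 20.0), y0=[0.0, 0.0], max_step=0.01)

# For a constant unit force, the spring eventually balances it,
# so the position should settle toward u/k
print(sol.y[0, -1])
```

Note that no transforms were needed: the solver just marches the states forward in time, which is exactly the point of working in the time-domain.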
The dynamics presented here, as a system of two ordinary, coupled, first-order differential equations w/ a declared state vector and control vector are good to go for use in optimal control. Canonically, they're presented this way. However, we can continue further and represent this system of equations in state-space form. Generically, state-space representation assumes the form:
\begin {equation} \mathbf{\dot x} = A\mathbf{x} + B\mathbf{u} \tag{18} \end{equation}
\begin {equation} \mathbf{y} = C\mathbf{x} + D\mathbf{u} \tag{19} \end{equation}
Equation 18 is known as the state equation and Equation 19 is known as the output equation. The state equation describes how the states of our system evolve over time, and the output equation describes, you guessed it, the output of the system. How do we get from our Equations 16 and 17 to these state and output equations? If you're familiar w/ linear algebra, you can prolly already see it. If you're not, here's how.
Step 01: Re-write your differential equations as a linear combination (simple multiplication and addition of terms) of your states and control inputs:
\begin {equation} \dot x_1 = x_2 \quad \rightarrow \quad \dot x_1 = 0x_1 + x_2 + 0u \tag{20}\end{equation}
\begin {equation} \dot x_2 = \frac{1}{M}u - \frac{b}{M}x_2 - \frac{k}{M}x_1 \quad \rightarrow \quad \dot x_2 = - \frac{k}{M}x_1 - \frac{b}{M}x_2 + \frac{1}{M}u \tag{21}\end{equation}
Step 02: Extract the coefficients of your states and organize them into matrix A:
\begin {equation} A = \begin{bmatrix} 0 & 1 \\ \frac{-k}{M} & \frac{-b}{M} \end{bmatrix} \tag{22} \end{equation}
Step 03: Extract the coefficients of the control inputs and organize them into a matrix B:
\begin{equation} B = \begin{bmatrix} 0 \\ \frac{1}{M} \end{bmatrix} \tag{23} \end{equation}
Step 04: Plop in your state and control vectors to obtain the state equation:
\begin {equation} \begin{bmatrix} \dot x_1 \\ \dot x_2 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ \frac{-k}{M} & \frac{-b}{M} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} 0 \\ \frac{1}{M} \end{bmatrix} \begin{bmatrix} u \end{bmatrix} \quad = \quad \mathbf{\dot x} = A\mathbf{x} + B\mathbf{u} \tag{24} \end{equation}
Adorable! Very clean, very compact. You could practically pop your system dynamics into your pocket and get on w/ your day... As stated before, you need to have a basic understanding of linear algebra in order to make sense of this equation. You have to be able to multiply and add matrices and vectors together, but I hope that it is easy enough to see how it is done. You simply 'distribute' your states in your state vector into each row of A. So, x1 gets distributed to the 0, and that's added to x2 distributed to the 1. You do the same for B and your control vector u. Perform the same procedure for the next row and you should end up with Equations 20 and 21 again.
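If you want to convince yourself that the matrix-vector form really does reproduce Equations 20 and 21, here's a quick sketch w/ NumPy. The numerical values of M, b, k, and the state are arbitrary placeholders:

```python
import numpy as np

M, b, k = 1.0, 0.5, 2.0  # placeholder values, for illustration only

A = np.array([[0.0, 1.0],
              [-k / M, -b / M]])
B = np.array([[0.0],
              [1.0 / M]])

x = np.array([[0.3],    # x1: some arbitrary position
              [-0.1]])  # x2: some arbitrary velocity
u = np.array([[1.0]])   # control input

# The state equation: x_dot = A x + B u
x_dot = A @ x + B @ u

# Row 0 should equal x2; row 1 should equal (1/M)u - (b/M)x2 - (k/M)x1
print(x_dot.ravel())
```

Each row of the product is exactly the 'distribution' described above: the row of A dotted w/ the state vector, plus the row of B times u.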
Oh! One last thing. Your output equation. From your system dynamics, you have to decide what you want to designate as your output; in our mass-spring-damper example, we wanted our x-position to be our output. We assigned x to x1, so our output will be x1. So, let us designate y as x1:
\begin{equation} y = x_1 \tag{25} \end{equation}
Now, expand this equation as a linear combination of all our state and control variables:
\begin{equation} y = x_1 + 0x_2 + 0u \tag{26} \end{equation}
Much like in the process of obtaining our state equation, we need to look at the coefficients of the states and controls in order to define matrices C and D:
\begin {equation} C = \begin{bmatrix} 1 & 0 \end{bmatrix} \tag{27} \end{equation}
\begin {equation} D = \begin{bmatrix} 0 \end{bmatrix} \tag{28} \end{equation}
Then you plop in your state and control vectors to obtain the output equation:
\begin {equation} \begin{bmatrix} y \end{bmatrix} = \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} 0 \end{bmatrix} \begin{bmatrix} u \end{bmatrix} \quad = \quad \mathbf{y} = C\mathbf{x} + D\mathbf{u} \tag{29} \end{equation}
And there you have it: the state-space representation of your system. From here, you can determine aspects of your system such as its characteristic equation, the location of its poles and zeroes, stability, observability, and controllability--these are all parts of control system analysis. The beauty of the state-space representation is that you can do all of these things w/o having to resort to transforming to the s-domain; everything can be done in the time-domain.
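As a small taste of that analysis, the sketch below builds the (A, B, C, D) system w/ SciPy and reads off its poles as the eigenvalues of A; the parameter values are again invented placeholders:

```python
import numpy as np
from scipy import signal

M, b, k = 1.0, 0.5, 2.0  # placeholder parameters, for illustration only

A = np.array([[0.0, 1.0],
              [-k / M, -b / M]])
B = np.array([[0.0],
              [1.0 / M]])
C = np.array([[1.0, 0.0]])  # output y = x1 (position)
D = np.array([[0.0]])

sys = signal.StateSpace(A, B, C, D)

# The system poles are the eigenvalues of A;
# negative real parts mean the system is stable
poles = np.linalg.eigvals(A)
print(poles)

# Step response of the output, computed entirely in the time-domain
t, y = signal.step(sys)
```

No Laplace Transform in sight: stability falls straight out of the A matrix, and the step response comes from time-domain simulation.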
Now, do you need to know all of this? Will I elaborate on it? No. Actually, once we had our system dynamics represented as two first-order ordinary differential equations, we were good to go w/ optimal control. I merely wanted to give you a small taste of modern control theory and include this information for your own edification~
[end transmission 5/?]
20211201
[begin transmission]
Why is it that I long for the touch of something that no longer exists?
The lamentations of the created for the creator.
The present for the past.
The here for the there.
The unloved for the loved.
The isn't for the is.
Is this merely consequence of our fallen nature? My fallen nature?
Reaching out so desperately for something to hold onto, to possess?
Something held so close, so dear, so thoroughly known and real.
And simultaneously inarticulable, immaterial, and unobtainable.
Does anyone else feel this way? I cannot be alone in this. It's too harrowing...
Who would do such a thing?
Why would I be made to long for something I could never hold?
A fragment deep in my heart yearning to be united w/ the whole.
Pulling me in one direction while I push in another.
[end transmission]