A centered, rotated ellipse can be defined by matrices which operate on position-vectors for points on the ellipse. The topic of this post series is the relation of the coefficients of such matrices to some basic geometrical properties of an ellipse. In the previous posts
we have found that we can use (at least) two matrix based approaches:
One reflects a combination of two affine operations applied to a unit circle. This approach led us to a non-symmetric matrix, which we called AE. Its coefficients ((a, b), (c, d)) depend on the lengths of the ellipse's principal axes and on trigonometric functions of its rotation angle.
The second approach is based on coefficients of a quadratic form which describes an ellipse as a special type of a conic section. We got a symmetric matrix, which we called Aq.
We have shown how the coefficients α, β, γ of Aq can be expressed in terms of the coefficients of AE. Another major result was that the eigenvalues and eigenvectors of Aq completely control the ellipse’s properties.
Furthermore, we have derived equations for the lengths σ1, σ2 of the ellipse’s principal axes and the rotation angle by which the major axis is rotated against the x-axis of the Cartesian coordinate system [CCS] we work with.
We have also found equations for the components of the position vectors to those points of the ellipse with maximum y-values.
In this post we determine the components of the vectors to the end-points of the ellipse’s principal axes in terms of the coefficients of Aq. Afterward we shall test our formulas by a Python program and plots for a specific example.
Reduced matrix equation for an ellipse
Our centered, but rotated ellipse is defined by a quadratic form, i.e. by a polynomial equation with quadratic terms in the components xe and ye of position vectors to points on the ellipse:
The quadratic polynomial can be formulated as a matrix operation applied to position vectors vE = (xE, yE)T. With the quadratic and symmetric matrix Aq
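As a sketch in the notation of this series (the value of the right-hand constant δ is my assumption, chosen to be consistent with the convention stated later, namely that Aq has coefficients α, ½β, γ, and with the numerical example of this post, where δ = 4.0 for σ1 = 2, σ2 = 1):

\[
\alpha \, x_E^2 \,+\, \beta \, x_E \, y_E \,+\, \gamma \, y_E^2 \,=\, \delta \,, \qquad
A_q \,=\, \begin{pmatrix} \alpha & \beta/2 \\ \beta/2 & \gamma \end{pmatrix} \,, \qquad
\mathbf{v}_E^T \, A_q \, \mathbf{v}_E \,=\, \delta \,=\, \sigma_1^2 \, \sigma_2^2 \,.
\]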
Method 1 to determine the vectors to the principal axes’ end points
My readers have certainly noticed that we have already gathered all required information to solve our task. In the first post of this series we have performed an eigendecomposition of our symmetric matrix Aq. We found that the two eigenvectors of Aq for respective eigenvalues λ1 and λ2 point along the principal axes of our rotated ellipse:
This is trivial regarding the algebraic operations, but results in lengthy (and boring) expressions in terms of the matrix coefficients. So I skip writing down all the terms. (We do not need them for setting up well-structured numerical programs.)
Remember that you could in addition replace (α, β, γ) by coefficients (a, b, c, d) of matrix AE. See the first post of this series for the formulas. This would, however, produce even longer equation terms.
We pick the yE with the positive term in the following steps. (The way for the solution with the negative term in yE is analogous.) The square of yE is:
A detailed analysis also for the other yE-expression (see above) leads to further solutions for the coordinates (=vector component values) of points with extremal values for the radii. These are the end-points of the principal axes of the ellipse:
I leave it to the reader to expand the convenience variables into terms containing the original coefficients α, β, γ.
Plots
It is easy to write a Python program which calculates and plots the data of an ellipse and the special points with extremal values of the radii and extremal yE-values. The general steps which I followed were:
Step 0: Create 100 points on a unit circle. Save the coordinates in Python lists (or Numpy arrays). Use Matplotlib's plot(x,y)-function to plot the vectors.
Step 1: Create an axis-parallel ellipse with values for the axes ha = 2.0 and hb = 1.0 along the x- and the y-axis of the Cartesian coordinate system [CCS]. Do this by applying a diagonal scaling matrix Dσ1, σ2 (see the first post of this series).
Step 2: Rotate the ellipse by π/3 (60°). Do this by applying a rotation matrix Rπ/3 to the position vectors of your ellipse (with the help of Numpy). Alternatively, you can first create the matrices, perform a matrix multiplication and then apply the resulting matrix to the position vectors of your unit circle.
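Steps 0 to 2 can be sketched in a few lines of Numpy (variable names are my own; the original program is not reproduced in this post):

```python
import numpy as np
import matplotlib.pyplot as plt

# Step 0: 100 points on a unit circle, stored as a (2, 100) array
t = np.linspace(0.0, 2.0 * np.pi, 100)
circle = np.vstack([np.cos(t), np.sin(t)])

# Step 1: scaling matrix for half-axes sigma_1 = 2.0, sigma_2 = 1.0
D = np.diag([2.0, 1.0])

# Step 2: rotation matrix for phi = pi/3 (60 degrees)
phi = np.pi / 3.0
R = np.array([[np.cos(phi), -np.sin(phi)],
              [np.sin(phi),  np.cos(phi)]])

# apply the combined operation to all circle points at once
ellipse = R @ D @ circle

plt.plot(circle[0], circle[1], label="unit circle")
plt.plot(ellipse[0], ellipse[1], label="rotated ellipse")
plt.gca().set_aspect("equal")
plt.legend()
# plt.show() displays the figure in an interactive session
```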
(The limiting lines have been calculated by the formulas given above.)
Step 3: Determine the coefficients of the combined matrix AE = Rπ/3 ○ Dσ1, σ2
I got for the coefficients ( (a, b), (c, d) ) of AE :
A_ell =
[[ 1. -0.8660254 ]
[ 1.73205081 0.5 ]]
Step 3: Determine the coefficients of the matrix Aq by the formulas given in the first post of this series. I got
A_q =
[[ 3.25 -1.29903811]
[-1.29903811 1.75 ]]
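Both matrices can be reproduced numerically. A sketch: the normalization A_q = det(A_E)² · (A_E A_Eᵀ)⁻¹ is my assumption here, chosen such that the printed coefficients above come out; the series derives α, β, γ analytically.

```python
import numpy as np

phi = np.pi / 3.0
R = np.array([[np.cos(phi), -np.sin(phi)],
              [np.sin(phi),  np.cos(phi)]])
D = np.diag([2.0, 1.0])

# combined matrix A_E = R o D
A_E = R @ D

# Points v on the ellipse satisfy v = A_E u with |u| = 1, hence
# v^T (A_E A_E^T)^{-1} v = 1. Scaling by det(A_E)^2 = (sigma_1 * sigma_2)^2
# reproduces the coefficients of A_q printed above.
A_q = np.linalg.det(A_E) ** 2 * np.linalg.inv(A_E @ A_E.T)

print("A_ell =\n", A_E)
print("A_q =\n", A_q)
# in this example det(A_q) and (sigma_1 * sigma_2)^2 both equal 4.0
print("delta =", np.linalg.det(A_q))
```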
For δ I got:
delta = 4.0
which is consistent with the length-values of the principal axes.
Step 4: Determine values for the eigenvalues λ1 and λ2 from the Aq-coefficients by the formulas given in the first post. Also calculate them by using Numpy’s
eigenvalues, eigenvectors = numpy.linalg.eig(A_q). Theory tells us that these values should be exactly λ1 = 4 and λ2 = 1. I got
Eigenvalues from A_q: lambda_1 = 4. :: lambda_2 = 1.
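The Numpy check takes two lines (a sketch; note that np.linalg.eig does not sort its eigenvalues, so I sort before printing):

```python
import numpy as np

A_q = np.array([[3.25, -1.29903811],
                [-1.29903811, 1.75]])

eigenvalues, eigenvectors = np.linalg.eig(A_q)
print("Eigenvalues from A_q:", np.sort(eigenvalues)[::-1])  # approx [4., 1.]
```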
Step 5: Determine the components of the normalized eigenvectors with the help of numpy.linalg.eig(A_q). I got:
Components of normalized eigenvectors by theoretical formulas from A_q coefficients:
ev_1_n : -0.8660254037844386 : 0.5000000000000002
ev_2_n : 0.5000000000000001 : 0.8660254037844385
Eigenvectors from A_q via numpy.linalg.eig():
ev_1_num : 0.8660254037844387 : -0.5000000000000001
ev_2_num : 0.5000000000000001 : 0.8660254037844387
The deviation between ev_1_n and ev_1_num is just a difference by a factor of -1. This is correct, as eigenvectors are unique only up to a minus-sign in all components.
Step 6: Calculate the sine of the rotation angle of our ellipse from the AE- and Aq-coefficients. The theoretical value is sin(2 π/3) = 0.8660254037844387. I got:
sin(2. * rotation angle) of major axis of the ellipse against the CCS x-axis from A_E coefficients:
sin_2phi-A_E = 0.8660254037844388
sin(2. * rotation angle) of major axis of the ellipse against the CCS x-axis from eigenvectors of A_q:
sin_2phi-ev_A_q = 0.8660254037844387
sin(2. * rotation angle) of major axis of the ellipse against the CCS x-axis from A_q-coefficients:
sin_2phi-coeff-A_q = 0.8660254037844388
Perfect!
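A sketch of the computation behind two of these numbers (the coefficient formula sin(2φ) = 2·a12/(λmin − λmax) is my reading of the series' result; the eigenvector variant just uses sin(2φ) = 2 sinφ cosφ):

```python
import numpy as np

A_q = np.array([[3.25, -1.29903811],
                [-1.29903811, 1.75]])
lams, vecs = np.linalg.eig(A_q)

# The smaller eigenvalue belongs to the longer half-axis; its (normalized)
# eigenvector points along the major axis of the ellipse.
i_min = np.argmin(lams)
ev_major = vecs[:, i_min]
if ev_major[0] < 0:            # fix the sign ambiguity of eigenvectors
    ev_major = -ev_major

cos_phi, sin_phi = ev_major
sin_2phi_ev = 2.0 * sin_phi * cos_phi
print("sin_2phi from eigenvectors of A_q:", sin_2phi_ev)

# Equivalent expression directly from the A_q coefficients (my reading of
# the series' formula): the off-diagonal element a12 equals
# 0.5 * sin(2*phi) * (lambda_min - lambda_max).
sin_2phi_coeff = 2.0 * A_q[0, 1] / (lams.min() - lams.max())
print("sin_2phi from A_q coefficients:", sin_2phi_coeff)
```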
Step 7: Plot the end-points of the normalized eigenvectors of Aq:
Note that in our example case the end-point of the eigenvector along the minor axis must be located exactly on the elliptic curve, as the ellipse's minor axis has a length of b = 1!
Step 8: Calculate the components of the vectors to data-points of the ellipse with maximal absolute yE-values from the Aq-coefficients given in the previous post. Plot these data-points (here in green color).
Step 9: Calculate the components of the vectors to data-points of the ellipse with maximal values of the radii with the help of the complex formulas presented in this post and plot these points in addition.
Conclusion
In this mini-series of posts we have performed some small mathematical exercises with respect to centered and rotated ellipses. We have calculated basic geometrical properties of such ellipses from the coefficients of matrices which define ellipses in algebraic form. Linear Algebra helped us to understand that the eigenvectors and eigenvalues of a symmetric matrix, whose coefficients stem from a quadratic equation (for a conic section), control both the orientation and the lengths of the ellipse’s axes completely.
This knowledge is useful in some Machine Learning [ML] context where elliptic data appear as projections of multivariate normal distributions. Multivariate Gaussian probability functions control properties of a lot of natural objects. Experience shows that certain types of neural networks may transform such data into multivariate normal distributions in latent spaces. An evaluation of the numerical data coming from such ML-experiments often delivers the coefficients of defining matrices for ellipses.
In my blog I now return to the study of shearing operations applied to circles, spheres, ellipses and 3-dimensional ellipsoids. Later I will continue with the study of multivariate normal distributions in latent spaces of Autoencoders. For both of these topics the knowledge we have gathered regarding the matrices behind ellipses will help us a lot.
I have discussed how the coefficients of two matrices, each of which defines a centered, rotated ellipse, can be used to calculate geometrical properties of the ellipse:
The lengths σ1, σ2 of the ellipse’s principal axes and the rotation angle by which the major axis is rotated against the x-axis of the Cartesian coordinate system [CCS] we work with.
But there are other properties which are interesting, too. A centered, rotated ellipse has two points with extremal values in their y-coordinates. Can we express the coordinates – or equivalently the components of respective position vectors – in terms of the basic matrix coefficients?
The answer is, of course, yes. This post provides a derivation of respective formulas.
Matrix equation for an ellipse
In the last post we have shown that a centered ellipse is defined by a quadratic form, i.e. by a polynomial equation with quadratic terms in the components xE and yE of position vectors for points of the ellipse:
The quadratic polynomial can be formulated as a matrix operation applied to position vectors vE of points on an ellipse. With the quadratic and symmetric matrix Aq
We now follow the alternative solution for yE (see above). After a calculation of the yE-values from the derived xE values, we get the components for the two position vectors to the points with extremal y-values on the ellipse:
Solution in terms of the coefficients of an alternative matrix AE
In my previous post I have discussed yet another matrix AE which can also be used to define an ellipse. This matrix summarizes two affine transformations of a centered unit circle: AE = Rφ ○ Dσ1, σ2, i.e. a scaling Dσ1, σ2 followed by a rotation Rφ.
You find the relations between the coefficients (a, b, c, d) of matrix AE and the coefficients (α, β, γ) of matrix Aq in my previous post. This will allow you to calculate the vectors to the extremal points of an ellipse in terms of the coefficients (a, b, c, d).
Conclusion
In this post we have again used the coefficients of a matrix which defines an ellipse via a quadratic form to get information about a geometrical property.
We can now calculate the components of the position vectors to the two points of an ellipse with extremal y-values as functions of the matrix coefficients.
In a forthcoming post I will show how to calculate the components of the end-points of the principal axes of the ellipse with the help of our matrix for a quadratic form. I will also use our theoretical results for plots of some ellipses' axes and of their extremal points. We will also compare theoretical predictions with numerically evaluated values.
Some geometrical and algebraic properties of ellipses are of interest.
Geometrically, we think of an ellipse in terms of its two perpendicular principal axes, its focal points, its ellipticity and its rotation angle with respect to a coordinate system. As elliptic data appear in many contexts of physics, chaos theory, engineering, optics …, ellipses are well studied mathematical objects. So, why a post about ellipses in the Machine Learning section of a blog?
In my present working context ellipses appear as a side result of statistical multivariate normal distributions [MNDs]. The projections of multidimensional contour hyper-surfaces of a MND within the ℝn onto coordinate planes of a Cartesian Coordinate System [CCS] result in 2-dimensional ellipses. These ellipses are typically rotated against the axes of the CCS, and their rotation angles reflect data correlations. The general relations of statistical vector data with projections of multidimensional MNDs are somewhat intricate.
Data produced in numerical experiments, e.g. in a Machine Learning context, most often do not directly give you the geometrical properties of ellipses which some theory may have predicted. Instead you may get numerical values of statistical vector distributions which correspond to algebraic coefficients, e.g. correlation coefficients. These coefficients can often be regarded as elements of a matrix. In case of an underlying MND of your statistical variables these matrices indirectly and approximately describe contour surfaces, namely ellipsoids; or, regarding projections of the data onto 2-dim coordinate planes, ellipses.
Ellipsoids/ellipses can in general be defined by matrices operating on position vectors. In particular: Coefficients of quadratic polynomial expressions used to describe ellipses as conic sections correspond to the coefficients of a matrix operating on position vectors.
So, when I was confronted with multidimensional MNDs and their projections onto coordinate planes, the following questions became interesting:
How can one derive the lengths σ1, σ2 of the perpendicular principal axes of an ellipse from data for the coefficients of a matrix which defines the ellipse by a polynomial expression?
By which formula do the matrix coefficients provide the inclination angle of the ellipse’s primary axes with the x-axis of a chosen coordinate system?
You may have to dig a bit to find correct and reproducible answers in your math books. Regarding the resulting mathematical expressions I have had some bad experiences with ChatGPT. But as a former physicist I take the above questions as a welcome exercise in solving quadratic equations and doing some linear algebra. So, for those of my readers who are a bit interested in elementary math, I want to answer the posed questions step by step and indicate how one can derive the respective formulas. The level is moderate: You need some knowledge of trigonometry and/or linear algebra.
Centered ellipses and two related matrices
Below I regard ellipses whose centers coincide with the origin of a chosen CCS. For our present purpose we thus get rid of some boring linear terms in the equations we have to solve. We do not lose much generality by this step: Results for an off-center ellipse follow from applying a simple translation operation to the resulting vector data. But I admit: Your (statistical) data must give you relatively precise information about the center of your ellipse. We assume that this is the case.
Our ellipses can be rotated with respect to a chosen CCS. I.e., their longer principal axes may be inclined by some angle φ towards the x-axis of our CCS.
There are actually two different ways to define a centered ellipse by a matrix:
Alternative 1: We define the (rotated) ellipse by a matrix AE which results from the (matrix) product of two simpler matrices: AE = Rφ ○ Dσ1, σ2. Dσ1, σ2 corresponds to a scaling operation applied to position vectors for points located on a centered unit circle. Rφ describes a subsequent rotation. AE summarizes these geometrical operations in a compact form.
Alternative 2: We define the (rotated) ellipse by a matrix Aq which combines the x- and y-elements of position vectors in a polynomial equation with quadratic terms in the components (see below). The matrix defines a so called quadratic form. Geometrically interpreted, a quadratic form describes an ellipse as a special case of a conic section. The coefficients of the polynomial and the matrix must, of course, fulfill some particular properties.
While it is relatively simple to derive the matrix elements from known values for σ1, σ2 and φ it is a bit harder to derive the ellipse’s properties from the elements of either of the two defining matrices. I will cover both matrices in this post.
For many practical purposes the derivation of central elliptic properties from given elements of Aq is more relevant and thus of special interest in the following discussion.
Matrix AE of a centered and rotated ellipse: Scaling of a unit circle followed by a rotation
Our starting point is a unit circle C whose center coincides with our CCS’s origin. The components of vectors vc to points on the circle C fulfill the following conditions:
DE is a diagonal matrix which describes a stretching of the circle along the CCS-axes, and Rφ is an orthogonal rotation matrix. The stretching (or scaling) of the vector-components is done by
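As a sketch in the notation used throughout this series:

\[
D_E \,=\, D_{\sigma_1, \sigma_2} \,=\, \begin{pmatrix} \sigma_1 & 0 \\ 0 & \sigma_2 \end{pmatrix} \,, \qquad
R_{\phi} \,=\, \begin{pmatrix} \cos\phi & -\sin\phi \\ \sin\phi & \cos\phi \end{pmatrix} \,, \qquad
A_E \,=\, R_{\phi} \, D_E \,.
\]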
As we already know, σ1 and σ2 are factors which give us the lengths of the principal axes of the ellipse. σ1 and σ2 have positive values. We, therefore, can safely assume:
\[
\det \, A_E \,=\, \sigma_1 \, \sigma_2 \,\neq\, 0 \,.
\]
Ok, we have defined an ellipse via an invertible matrix AE, whose coefficients are directly based on geometrical properties.
But as said: Often an ellipse is described by an equation with quadratic terms in the x- and y-coordinates of data points. The quadratic form has its background in algebraic properties of conic sections. As a next step we derive such a quadratic equation and relate the coefficients of the quadratic polynomial to the elements of our matrix AE. The result will in turn define another very useful matrix Aq.
Quadratic forms – Case 1: Centered ellipse, principal axes aligned with CCS-axes
We start with a simple case. We take a so called axis-parallel ellipse which results from applying only a scaling matrix DE onto our unit circle C. I.e., in this case, the rotation matrix is assumed to be just the identity matrix. We can omit it from further calculations:
To get quadratic terms of vector components it often helps to invoke a scalar product. The scalar product of a vector with itself gives us the squared norm or length of a vector. In our case the norms of the inversely re-scaled vectors obviously have to fulfill:
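As a sketch: with vE = DE vc, i.e. vc = DE-1 vE, the condition |vc|2 = 1 becomes

\[
\left( \frac{x_E}{\sigma_1} \right)^2 \,+\, \left( \frac{y_E}{\sigma_2} \right)^2 \,=\, 1 \,.
\]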
I.e., we can directly derive σ1, σ2 and φ from the coefficients of the quadratic form. But an axis-parallel ellipse is a very simple ellipse. Things get more difficult for a rotated ellipse.
Quadratic forms – Case 2: General centered and rotated ellipse
We perform the same trick with the vectors vE to get a quadratic polynomial for a rotated ellipse:
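A sketch of this step: with vE = AE vc and vcT vc = 1 we get

\[
\mathbf{v}_c^T \, \mathbf{v}_c \,=\, \mathbf{v}_E^T \left( A_E^{-1} \right)^T A_E^{-1} \, \mathbf{v}_E \,=\, \mathbf{v}_E^T \left( A_E \, A_E^T \right)^{-1} \mathbf{v}_E \,=\, 1 \,,
\]

so, up to a normalization constant, Aq is the inverse of AE AET.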
Thus Aq is an invertible matrix if AE is invertible. For standard conditions (σ1 >0, σ2 > 0) this is the case (see above). Furthermore, Aq is symmetric and thus its own transposed matrix.
Above we have got α, β, γ as some relatively simple functions of a, b, c, d. The inversion is not so trivial and we do not even try it here.
Instead we focus on how we can express σ1, σ2 and φ as functions of either (a, b, c, d) or (α, β, γ).
How to derive σ1, σ2 and φ from the coefficients of AE or Aq in the general case?
Let us assume we have (numerical) data for the coefficients of the quadratic form. Then we may want to calculate values for the length of the principal axes and the rotation angle φ of the corresponding ellipse. There are two ways to derive respective formulas:
Approach 1: Use trigonometric relations to directly solve the equation system.
Approach 2: Use an eigenvector decomposition of Aq.
Both ways are fun!
Direct derivation of σ1, σ2 and φ from Aq by using trigonometric relations
Without losing much generality we further assume
\[
\lambda_2 \:\ge \lambda_1 \,.
\]
This affects some aspects of the following derivations and should be kept in mind whilst reading. In the end, our results would only differ by a rotation of π/2, if we had chosen otherwise.
Note, however, that due to our assumption we discuss ellipses whose half-axis σ1 in x-direction is longer than the half-axis σ2 in y-direction!
By combining the above relations for the Aq-coefficients we find
See a later section below for ambiguities coming from the arcsin-function. A proper analysis and interpretation of this formula for the rotation angle is necessary, when you want to reconstruct ellipses by numerical methods from a given matrix Aq. Some matrices may describe ellipses with their half-axis in y-direction being longer than the half-axis in x-direction. Then λ1 and λ2 would change their role with respect to x,y.
Example:
A typical example for a matrix would be one that appears in the context of a standardized (!) bivariate normal distribution with some correlation imposed onto the statistical variables:
Note: All in all there are four different solutions. The reason is that we alternatively could have requested λ2 ≥ λ1 and also chosen the angle π + φ. So, the ambiguity is due to a selection of the considered principal axis and rotational symmetries.
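To make the ambiguity concrete, a small sketch (the helper function and the use of arctan2 are my own; the coefficient values are those of the example matrix Aq used elsewhere in this series, and the standard relation tan(2φ) = β/(α − γ) fixes φ only up to π/2):

```python
import numpy as np

def axis_angle(alpha, beta, gamma):
    # Hypothetical helper: angle of one principal axis of the ellipse
    # alpha*x^2 + beta*x*y + gamma*y^2 = delta, from tan(2*phi) = beta/(alpha-gamma).
    return 0.5 * np.arctan2(beta, alpha - gamma)

alpha, gamma = 3.25, 1.75
beta = 2.0 * (-1.29903811)   # beta = 2 * off-diagonal element of A_q

phi = axis_angle(alpha, beta, gamma)
# prints the orientation of one principal axis in degrees;
# the other axis lies at phi +/- pi/2
print(np.degrees(phi))
```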
2nd way to a solution for σ1, σ2 and φ via eigendecomposition
For our second way of deriving formulas for σ1, σ2 and φ we use some linear algebra.
This approach is interesting for two reasons: It indicates how we can use the Python “linalg”-package together with Numpy to get results numerically. In addition we get familiar with a representation of the ellipse in a properly rotated CCS.
Above we have written down a symmetric matrix Aq describing an operation on the position vectors of points on our rotated ellipse:
We know from linear algebra that every symmetric matrix can be decomposed into a product of orthogonal matrices O, OT and a diagonal matrix. This reflects the so called eigendecomposition of a symmetric matrix. It is a unique decomposition in the sense that it has a uniquely defined solution in terms of the coefficients of the following matrices:
The coefficients λu and λd are eigenvalues of both Ddiag and Aq.
Reason: Orthogonal matrices do not change eigenvalues of a transformed matrix. So, the diagonal elements of Ddiag are the eigenvalues of Aq. Linear algebra also tells us that the columns of the matrix O are given by the components of the normalized eigenvectors of Aq.
We can interpret O as a rotation matrix Rψ for some angle ψ:
The whole operation tells us a simple truth, which we are already familiar with. By our construction procedure for a rotated ellipse we know that a rotated CCS exists, in which the ellipse can be described as the result of a scaling operation (along the coordinate axes of the rotated CCS) applied to a unit circle. (This CCS is, of course, rotated by an angle φ against our working CCS in which the ellipse appears rotated.)
Remembering that a diagonal matrix is its own transposed matrix and that the inverse of an orthogonal matrix (rotation) is its transposed matrix, we get:
Mathematically, a lengthy calculation (see below) will indeed reveal that the eigenvalues of a symmetric matrix Aq with coefficients α, 1/2*β and γ have the following form:
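For a symmetric 2×2 matrix with diagonal entries α, γ and off-diagonal entries β/2 the characteristic equation yields, as a sketch,

\[
\lambda_{1/2} \,=\, \frac{1}{2} \left( \alpha \,+\, \gamma \,\mp\, \sqrt{ \left( \alpha \,-\, \gamma \right)^2 \,+\, \beta^2 } \right) \,.
\]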
This is, of course, exactly what we have found some minutes ago by solving respective equations with the help of trigonometric terms. Remember, however, the assumptions about the λ-values and lengths of the ellipse’s axes!
We will prove in a minute that these indeed are valid eigenvalues. Let us first look at the respective eigenvectors ξ1, ξ2. To get them we must solve the equations resulting from
As usual, the T at the formulas for the vectors symbolizes a transposition operation.
Note again that we have
\[
\lambda_1 \:\le\: \lambda_2 \,.
\]
This reflects our initial assumptions about the axis-lengths of our ellipses. It means that the eigenvalue λ1 and the respective eigenvector ξ1 are associated with the longer half-axis of the ellipse! We had assumed that this half-axis is aligned with the x-coordinate axis.
Note also that the vector components given above are not normalized. This is important for performing numerical checks as Numpy and linear algebra programs would typically give you normalized eigenvectors with a length = 1. But you can easily compensate for this by working with
The eigenvectors are perpendicular to each other. Exactly what we expect for the orientations of the principal axes of an ellipse against each other.
Rotation angle from coefficients of Aq
We still need a formula for the rotation angle(s). From linear algebra results related to an eigendecomposition we know that the orthogonal (rotation) matrices consist of columns of the normalized eigenvectors, with the components given in terms of the un-rotated CCS in which we basically work. These vectors point along the principal axes of our ellipse. Thus the components of these eigenvectors define the aspired rotation angles of the ellipse's principal axes against the x-axis of our CCS.
This looks very different from the simple expression we got above. And a direct approach is cumbersome. The trick is to multiply numerator and denominator by a convenience factor
\[
\left( t \,+\, z \right),
\]
and exploit
\[ \begin{align}
\left( t \,-\, z \right) \, \left( t \,+\, z \right) \,&=\, t^2 \,-\, z^2 \,, \\[8pt]
\left( t \,-\, z \right) \, \left( t \,+\, z \right) \,&=\, - \,\beta^2
\end{align}
\]
which is of course identical to the result we got with our first solution approach. It is clear that the second axis has an inclination by φ +- π / 2:
\[
\phi_2\, =\, \phi_1 \,\pm\, \pi/2.
\]
In general the angles have a natural ambiguity of π. This makes life a bit harder when you get a matrix, use the formulas above and try to construct the matrix with correct orientation.
Addendum 07/05/2025: Getting the right orientation of the ellipses when constructing them via eigenvalues of a matrix Aq
The formulas for the rotation angle of the ellipse look simple. However, some pitfalls may await you when you want to use the formulas to construct your ellipses after having determined their half-axes from the eigenvalues of a given matrix Aq. If you are not careful and forget to take into account our special assumptions on axis-lengths and the orientation of the eigenvectors, you may get the ellipse's orientation wrong.
Most often this happens when working with contour ellipses of Bivariate Normal Distributions [BVDs] and their central covariance matrices, which correspond to our Aq-matrix. In the case of BVDs the matrix coefficients are associated with the standard deviations of the BVD's marginal distributions, and this may lead to confusion.
The important points to take care of are the following:
The arcsin-function allows for multiple equivalent values – and we have to choose the right one. To do this you should analyze both of the key equations determining the angle Φ:
This will narrow down the interval in which Φ resides.
Throughout our text, we have used the assumption λ1 = λu ≤ λ2 = λd. A priori it is questionable whether this covers all necessary cases during the reconstruction of an ellipse properly. Not all given matrices may fulfill the assumption about the λ-values. Instead, for certain matrices the eigenvalues λ1 and λ2 may switch their position in the central diagonal matrix of the eigendecomposition. This must be analyzed. It can, however, be shown that cases with λ1 < λ2 can be mapped 1:1 onto certain cases with λ1 > λ2. A proper analysis is given in an extensive post in another blog.
The eigenvalue λ1 and the respective eigenvector ξ1 were in our considerations always associated with the longer half-axis of the ellipse! This has the consequence that the meaning of Φ1 during reconstruction can change with respect to the real situation given by the matrix.
A thorough analysis of how the matrix elements determine the angle of the ellipse, together with respective recipes, has meanwhile been provided by myself in a related post in another blog. There you also find arguments why it is justified to focus on cases with (λ2 – λ1) ≥ 0.
Conclusion
In this post I have shown how one can derive essential properties of centered, but rotated ellipses from matrix-based representations. Such calculations become relevant when e.g. experimental or numerical data only deliver the coefficients of a quadratic form for the ellipse.
We have first established the relation of the coefficients of a matrix that defines an ellipse by a combined scaling and rotation operation with the coefficients of a matrix which defines an ellipse as a quadratic form of the components of position vectors. In addition we have shown how the coefficients of both matrices are related to quantities like the lengths of the principal axes of the ellipse and the inclination of these axes against the x-axis of the Cartesian coordinate system in which the ellipse is described via position vectors. So, if one of our two defining matrices is given we can numerically calculate the ellipse’s main properties.
Regarding the ellipse's orientation we have to be careful and check which of the eigenvalues describes the longer axis.
In the next post we have a look at the x- and y-coordinates of points on an ellipse with extremal y-values, all in terms of the matrix coefficients we are now familiar with.
In the previous posts of this series we have clarified some basic properties of shear transformations [SHT]. We got interested in this topic because Autoencoders can produce latent multivariate normal vector distributions, which in turn result from linear transformations of multivariate standard normal distributions. When we want to analyze such latent vector distributions we should be aware of transformations of quadratic forms. An important linear transformation is a shear operation. It combines aspects of scaling with rotations.
The objects we applied SHTs to were so far only squares and cubes. Both (discrete) rotational and plane symmetries of the squares and cubes were broken by SHTs. We also saw that this symmetry breaking could not be explained by a pure scaling operation in another rotated Euclidean Coordinate System [ECS]. But cubes do not have a continuous rotational symmetry. The distances of surface points of a cube to its symmetry center show no isotropy.
However, already in the first post, when we worked superficially with Blender, we got the impression that the shearing of a sphere seemed to produce a figure with both plane and discrete rotational symmetries, namely an ellipsoid, which appeared to be rotated. We still have to prove this mathematically. With this post we take a first step in this direction: We will apply a shear operation to a 2D-body with perfect continuous rotational symmetry in all directions, namely a circle. A circle is a special example of a quadratic form (with respect to the vector component values). We center our Euclidean Coordinate System [ECS] at the center of the circle. We know already that this point remains a fixed point of our transformations. As in the previous post I use Python and Matplotlib to produce visual results. But we support our impression also by some simple math.
We first check via plotting that the shear operations move an extremal point of the circle (with respect to the y-coordinate) along a line ymax = const. (Points of other layers for other values yl = const also move along their level-lines.) We then have to find out whether the produced figure really is an ellipse. We do so by mathematically deriving its quadratic form with respect to the coordinates of the transformed points. Afterward, we derive the coordinate values of points with extremal y-values after the shear transformation.
In addition we calculate the position of the points with maximum and minimum distance from the center. I.e., we derive the coordinates of end-points of the main axes of the ellipse. This will enable us to calculate the angle, by which the ellipse is rotated against the x-axis.
The astonishing thing is that our ellipse actually can be created by a pure scaling operation in a rotated ECS. This seems to be in contrast to our insight in previous posts that a shear matrix cannot be diagonalized. But it isn’t … It is just the rotational symmetry of the circle that saves us.
Shearing a circle
We define a circle with radius r = a = 2.
I have indicated the limiting line at the extremal y-values. From the analysis in the last post we expect that a shear operation moves the extremal points along this line.
We now apply a shearing matrix with an x/y-shearing parameter λ = 2.0
Thus, we have indeed produced a rotated ellipse! We see this from the fact that the term mixing the xs- and ys-coordinates does not vanish.
Position of maximum absolute y-values
We know already that the y-coordinates of the extremal points (in y-direction) are preserved. And we know that these points were located at x = 0, y = a. So, we can calculate the coordinates of the shifted point very easily:
In our case this gives us a position at (4, 2). But to gain some experience with the quadratic form let us determine it differently, namely by rewriting the above quadratic equation and by a subsequent differentiation. Completing the square gives us:
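In our notation (shear parameter λ = 2, radius a = 2) the step can be sketched as follows:

```latex
\begin{align*}
  x_s^2 - 2\lambda\, x_s y_s + (1 + \lambda^2)\, y_s^2 &= a^2 \\
  \left( x_s - \lambda y_s \right)^2 + y_s^2 &= a^2 \\
  \Rightarrow \quad y_s^2 \,=\, a^2 - \left( x_s - \lambda y_s \right)^2 \;&\le\; a^2 .
\end{align*}
```

The maximum ys = a is reached for xs = λ·ys, i.e. at (λa, a) = (4, 2) for our values.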
Let us also find the position of the end-points of the main axes of the ellipse. One method would be to express the ellipse in terms of the coordinates (xs, ys), calculate the squared radial distance rs of a point from the center and set the derivative with respect to xs to zero.
The “problem” with this approach is that we have to work with a lot of terms with square roots. Sometimes it is easier to just work in the original coordinates and express everything in terms of (x, y):
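Whatever route one takes analytically, the end-points can be cross-checked numerically. The quadratic form of our sheared circle can be written as vᵀ A v = a² with the symmetric matrix A = [[1, −λ], [−λ, 1 + λ²]]. Its orthonormal eigenvectors point along the main axes, and along an eigenvector with eigenvalue e the form reduces to e·r² = a², giving the half-axis length r = a/√e. A sketch with Numpy (not the post's own code):

```python
import numpy as np

a, lam = 2.0, 2.0
# Symmetric matrix of the quadratic form of the sheared circle:
#   xs^2 - 2*lam*xs*ys + (1 + lam^2)*ys^2 = a^2
A = np.array([[1.0,  -lam],
              [-lam, 1.0 + lam**2]])

# eigh returns eigenvalues in ascending order with orthonormal eigenvectors.
evals, evecs = np.linalg.eigh(A)

# Half-axis lengths: a / sqrt(eigenvalue); smallest eigenvalue -> major axis.
half_axes = a / np.sqrt(evals)
endpoints = evecs * half_axes        # columns: end-points of the two axes

# Each end-point must fulfill the quadratic form:
for p in endpoints.T:
    print(np.isclose(p @ A @ p, a**2))

# Rotation angle of the major axis against the x-axis:
alpha = np.degrees(np.arctan(evecs[1, 0] / evecs[0, 0]))
print(alpha)   # 22.5 degrees for lam = 2
```

For λ = 2 one gets half-axes 2(√2 + 1) ≈ 4.83 and 2(√2 − 1) ≈ 0.83 and a rotation angle of 22.5°.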
Plot of main axes, their end-points and of the points with maximum y-value
The coordinate data found above help us to plot the respective points and the axes of the produced ellipse. The diameters’ end-points are plotted in red, the points with extremal y-value in green:
It becomes very clear that the points with maximum y-values are not identical with the end-points of the ellipse's main symmetry axes. We have to keep this in mind for a discussion of higher dimensional figures and vector distributions such as multidimensional spheres, ellipsoids and multivariate normal distributions in later posts.
Rotated ECS to produce the ellipse?
The plot above makes it clear that we could have created the ellipse also by switching to an ECS rotated by the angle α, followed by a simple scaling in x- and y-direction by the factors as and bs in the rotated ECS. This seems to contradict a previous statement in this post series, which said that a shear matrix cannot be diagonalized. We saw that in general we cannot find a rotated ECS in which the shear transformation reduces to pure scaling along the coordinate axes. We assumed from linear algebra that in general we need a first rotation plus a scaling and afterward a second, different rotation.
But the reader has already guessed it: For a fully rotation-symmetric, i.e. isotropic body a first rotation does not change the figure's symmetry with respect to the new coordinate axes. In contrast to e.g. squares or rectangles, any rotated coordinate system is as good as any other with respect to the effect of scaling. So, it is just scaling and rotating, or vice versa. No second rotation required. We shall see in a later post that this holds in general for isotropically shaped bodies.
Conclusion
Enough for today. We have shown that a shear transformation applied to a circle always produces an ellipse. We were able to derive the vectors to the points with maximum y-values from the parameters of the original circle and of the shear matrix. We saw that due to the circle's isotropy we could reduce the effect of shearing to a scaling plus one rotation, or vice versa – in contrast to what we saw for a cube in the previous post.
This post series is about shear transformations and related matrices.
In the end we want to better understand the effect of shear operations on “multivariate standard normal distributions” [MSND]. A characteristic property of a MSND is that its contour hyper-surfaces of constant density are nested spheres. We want to understand the impact of a shear operation on such a random vector distribution.
In the first post of this series Fun with shear operations and SVD – I – shear matrices and examples created with Blender
we have already had a preliminary look at shear transformations. We have seen that shear transformations are a special class of affine transformations. They are represented by square unipotent, upper (or lower) triangular matrices. The eigenspace is 1-dimensional and the only eigenvalue is 1.
We have also understood that a shear operation cannot be reduced to a simple scaling operation with different factors by choosing some special rotated Euclidean coordinate system [ECS] whose origin coincides with the volume and symmetry center of the original figure.
Due to its matrix properties, which are very different from those of orthogonal matrices, a shear operation is neither a rotation nor a scaling. Instead we suspect that it can only be decomposed into a combination of a rotation plus a scaling – whatever coordinate system we choose. We still have to prove this in a forthcoming post.
In this post we want to explicitly apply shear matrices to simple 2D- and 3D-figures, namely squares and cubes. We use Python to control the operations and visualize the results with the help of Matplotlib. We will check that the position and orientation of all extremal points in y-direction (2D-case) and z-direction (3D-case) are preserved during shearing. We will depict this by applying the transformations to position-vectors, which point to elements of distinct vertical layers. In 3D I also test the effects of shearing on cubes by just using the 8 corner points of the cube to define 6 faces, which then get transformed.
Afterward we briefly look at the chaining of shear operations and the resulting product of their respective matrices. As some people may think that one could use combinations of shears induced by a mix of upper triangular and lower triangular matrices, we investigate the effects of such combinations. We will see that such combinations violate conditions we have already defined for pure shear operations. Even a symmetric matrix, which has the same shear factors in the upper and lower triangular parts, has a different effect than a real shearing: It distorts a body in the sense of a pure scaling with different factors in potentially all orthogonal directions.
Hint: I omit Python code in this post. If I find the time I will deliver some code snippets at the end of this post series. However, it is really easy to set up your own programs to reproduce the results of my experiments and respective plots with the help of Numpy and Matplotlib.
Before showing results of numerical experiments I first list some general affine properties and other specific properties of shear transformations.
Affine properties of shear transformations
Important properties of all affine transformations are:
Collinearity: points on a line are transformed into points on a line.
Parallelism: Parallel lines remain parallel lines. Planes are transformed into planes.
Preservation of sub-dimensionality: Affine transformations preserve the dimensionality of objects confined to a sub-space of the “affine” space they operate on.
Convexity: Convex figures remain convex – i.e. the sign of a curvature radius does not change.
Proportionality: Distance ratios on parallel line segments are preserved.
Extremal points on convex surfaces may remain extremal with respect to distances to the origin, but not with respect to a specific coordinate axis (rotation!). Note that this last point holds for convex (hyper-)surfaces – as e.g. that of a sphere or an ellipsoid.
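The first of these properties (collinearity, parallelism, proportionality) can be verified numerically for a shear matrix in a few lines. A sketch with Numpy (my own illustration, point values chosen arbitrarily):

```python
import numpy as np

M_sh = np.array([[1.0, 1.5],
                 [0.0, 1.0]])   # the 2D shear matrix used later in this post

# Three collinear points P, Q, R with R = P + 2*(Q - P):
P, Q = np.array([0.0, 1.0]), np.array([1.0, 2.0])
R = P + 2.0 * (Q - P)
Pt, Qt, Rt = M_sh @ P, M_sh @ Q, M_sh @ R

# Collinearity and proportionality: the ratio 2 on the line is preserved.
print(np.allclose(Rt, Pt + 2.0 * (Qt - Pt)))   # True

# Parallelism: two parallel direction vectors stay parallel,
# i.e. the determinant of their images vanishes.
d1, d2 = np.array([1.0, 1.0]), np.array([3.0, 3.0])
d1t, d2t = M_sh @ d1, M_sh @ d2
print(np.isclose(np.linalg.det(np.column_stack((d1t, d2t))), 0.0))   # True
```

These checks hold for any linear map; what distinguishes shear matrices are the additional properties listed next.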
Properties of shear transformations and their matrices Msh
I list some properties which I have already discussed in my previous post:
Fixed point: When we omit translations it is natural to choose a Euclidean coordinate system [ECS] such that its origin coincides with the symmetry center of the body to be sheared. This point then remains a fixed point of the transformation.
Msh is a unipotent matrix: its non-zero off-diagonal elements appear only in the upper (or lower) triangular part, and the matrix elements on the diagonal are just 1.
det(Msh) = 1. A shear transformation is invertible. Its rank is rank(Msh) = n.
The only eigenvalue ε is ε = 1. Its multiplicity in the ℝn is n.
The eigenspace ESsh has a dimension dim(ESsh) = 1.
A single pure shear mediated by a unipotent upper (or lower) triangular matrix cannot be reduced to a pure scaling operation in some cleverly chosen, rotated coordinate system.
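The determinant, eigenvalue and eigenspace properties in the list above can be confirmed directly. A short Numpy sketch (my own code; the rank of Msh − 1·I equals n minus the dimension of the eigenspace):

```python
import numpy as np

lam = 1.5
M_sh = np.array([[1.0, lam],
                 [0.0, 1.0]])

# det(M_sh) = 1: the shear is invertible and volume preserving.
print(np.isclose(np.linalg.det(M_sh), 1.0))           # True

# The only eigenvalue is 1, with algebraic multiplicity n = 2 ...
evals, evecs = np.linalg.eig(M_sh)
print(np.allclose(evals, 1.0))                        # True

# ... but the eigenspace is only 1-dimensional:
# dim(ES) = n - rank(M_sh - I) = 2 - 1 = 1
print(np.linalg.matrix_rank(M_sh - np.eye(2)))        # 1
```

A matrix with a 1-dimensional eigenspace for a single eigenvalue is not diagonalizable, which is the formal reason behind the last bullet point.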
For reasons of convenience we below again choose a Euclidean coordinate system [ECS] whose origin coincides with the center of volume or the symmetry center of the body to which we apply a shear transformation. This not only guarantees a central fixed point. The structure of the triangular matrix and the 1s on its diagonal also guarantee that one selected component of the position vectors – the n-th in our convention – remains unchanged. This is independent of the linear coupling between other components. Therefore, an important feature of a shear operation is:
Extremal points on the surface of a sheared body in the n-th coordinate direction remain extremal and are moved within a hyper-plane orthogonal to the sub-space ℝ1 of the ℝn.
And, more generally:
Points of a sheared body lying in an (n-1)-dimensional sub-space of the ℝn, which is defined by vn = const, are moved within this sub-space only during shearing.
In 3D with coordinates (x, y, z) such a sub-space is a hyper-plane defined by z = const. E.g. for a cube the points of the top face in z-direction are elements which just move within this plane during a shearing operation. The same holds for any layer of points defined by other values z = const. Actually, the extremal points in z-direction even move along defined straight lines within the plane. In the general case these lines cross the (n-1)-dimensional sub-space.
It is time to use Python to perform some numerical experiments in 2D and 3D – and to visualize the above statements. As said, in this post we concentrate on the shearing of squares and cubes, and on the behavior of extremal points and other points located on layers with a constant value of the preserved vector component. I admit that squares and cubes are a bit boring, but this post is just a preparatory step towards a closer investigation of the impact of shear operations on statistical vector distributions controlled by Gaussians.
2D: Shearing of a square/rectangle
The figure below shows the end-points of position-vectors which define data points on a rectangular mesh within a square. All the points reside inside and on the boundary of the square and reflect horizontal layers of the square:
Now we apply a shear matrix with the following elements: [[1.0, 1.5], [0.0, 1.0]]. As we only work in two dimensions our matrix just has 4 elements with values ≠ 0. We name the undisturbed vectors of our square vsq and the resulting vectors after the shear operation vsh, i.e. vsh = Msh * vsq:
Exactly what we expected. We directly see the constant shift of layers against each other. And we also see the linearity of the overall effect in the diagonal lines between the new edges. The overall result is a parallelogram.
The points on all the layers were moved along hyper-surfaces with y = const., which in our 2D case are just lines parallel to the x-axis.
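As the post omits its code, here is a minimal Numpy sketch of this experiment (the square's dimensions are my choice): we build the mesh, shear it and verify that y-components are untouched while x-components are shifted proportionally to y.

```python
import numpy as np

# Mesh points of a square [-1, 1] x [-1, 1], organized in horizontal layers
xs = np.linspace(-1.0, 1.0, 11)
ys = np.linspace(-1.0, 1.0, 11)
X, Y = np.meshgrid(xs, ys)
v_sq = np.vstack((X.ravel(), Y.ravel()))     # shape (2, N)

M_sh = np.array([[1.0, 1.5],
                 [0.0, 1.0]])
v_sh = M_sh @ v_sq

# Each layer y = const moves along its level-line: y stays, x shifts by 1.5*y
print(np.allclose(v_sh[1], v_sq[1]))                    # True
print(np.allclose(v_sh[0], v_sq[0] + 1.5 * v_sq[1]))    # True
```

Plotting v_sh with Matplotlib's scatter reproduces the parallelogram shown above.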
Note that the symmetry center at the origin of our figure remained fixed. Point symmetries with respect to the original symmetry center have been preserved. However, original symmetries with respect to the coordinate axes have been broken.
For a rectangle we get analogous, stretched images:
Shear transformation of a cube
The last plots were simple, but they visualized the very nature of a shear transformation: linear differences of (position) vectors between layers, and breaking of plane symmetries for polytopes while point symmetries are preserved. Let us try to demonstrate something similar in 3D. Once again an illustrative method is to produce multiple planes on different z-levels, filled with points on a regular mesh. During plotting we allow for some transparency of these points.
All images show the situation before shearing. You see the flat squares (extending in x- and y-direction) on 4 different z-levels. It can be seen that the dimensions are -15 ≤ vi ≤ 15, for i ∈ {1, 2, 3}. We now apply a shear matrix Msh = [[1.0, 0.7, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
to the position-vectors of our layer points. I.e. we only induce a linear effect of the y-components onto the x-components of our vectors. The result is:
We get a systematic y-dependent change of the x-coordinate values of our points. The z-levels of our layer points, however, remain unchanged. Now we repeat our experiment, but this time we add an additional shear via a linear dependency of the x- and y-component values of our vectors on their z-component by
The shift of the layers against each other in x- and y-direction gets a clearly visible z-dependence:
But, again and of course, all elements of our 4 layers remain on their respective z-level.
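The claim that all layers keep their z-level can again be checked in a few lines. A Numpy sketch of the first 3D experiment (mesh size and z-levels are my choice for illustration; the matrix is the Msh given above):

```python
import numpy as np

# Four point layers at different z-levels, each a regular x/y-mesh
g = np.linspace(-15.0, 15.0, 7)
X, Y = np.meshgrid(g, g)
layers = [np.vstack((X.ravel(), Y.ravel(), np.full(X.size, z)))
          for z in (-15.0, -5.0, 5.0, 15.0)]

M_sh = np.array([[1.0, 0.7, 0.0],
                 [0.0, 1.0, 0.0],
                 [0.0, 0.0, 1.0]])

sheared = [M_sh @ pts for pts in layers]

# z-levels are preserved, x gets a y-dependent shift, y stays untouched
z_kept = all(np.allclose(s[2], p[2]) for s, p in zip(sheared, layers))
x_shifted = all(np.allclose(s[0], p[0] + 0.7 * p[1])
                for s, p in zip(sheared, layers))
print(z_kept, x_shifted)   # True True
```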
Shearing of a cube: Surface layers
An alternative method to visualize the impact of a shear-transformation on a cube with the help of Matplotlib is based on 8 vectors that span the 6 surface elements of the cube. We also make the cube-faces a bit transparent in some plots. The original cube then looks like:
Here we see again that the z-levels of the upper and lower limiting plates of the cube remained constant. They were just moved in x- and y-direction:
Our shear operation has produced a parallelepiped. This is consistent with the results of the Blender experiments depicted in my previous post.
Remarks on a significant deficit of Matplotlib for 3D
A significant problem that one stumbles into with plots like those above is that Matplotlib does not offer a real 3D engine. Meaning: It often calculates the so-called z-order of objects like hyper-planes (or parts of such), curved manifolds or bodies along our fictitious line of sight wrongly. So the shading of e.g. transparent and opaque surfaces according to overlaps in 3D along the line of sight is almost always depicted incorrectly.
For complex bodies this can drive you crazy and makes you wonder whether it would be better to learn to control Blender with Python. For our relatively simple cases I solved the z-order problem by disabling the computed z-order
and by manually assigning a z-order-value to each of the cube’s faces.
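A sketch of how this workaround could look (my own minimal example, not the post's code; `computed_zorder=False` requires Matplotlib >= 3.5, and the two faces are illustrative placeholders for the cube's six faces):

```python
import matplotlib
matplotlib.use("Agg")   # render off-screen for this sketch
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d.art3d import Poly3DCollection

# computed_zorder=False switches off Matplotlib's automatic depth sorting,
# so the manually assigned zorder values of the artists take effect.
fig = plt.figure()
ax = fig.add_subplot(projection="3d", computed_zorder=False)

# Two simple square faces at different depths (illustrative values only)
far  = [[(0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)]]
near = [[(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]]

for verts, color, z in ((far, "blue", 1), (near, "yellow", 2)):
    face = Poly3DCollection(verts, facecolor=color, alpha=0.6)
    face.set_zorder(z)           # manual draw order: higher values on top
    ax.add_collection3d(face)

ax.set_xlim(0, 1); ax.set_ylim(0, 1); ax.set_zlim(0, 1)
fig.savefig("zorder_demo.png")
```

The price of this approach is, as noted below, that you must re-think the zorder values whenever you rotate the view.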
Still: As soon as we distort a simple body such as a cube strongly (see below), you need a really good overview of the order of your surfaces and their colors after rotations. It is worthwhile to make sketches or to create a similar body with Blender first to see clearly what happens during a sequence of rotations.
Combinations of shear transformations
Shearing transformations can, of course, be chained in an orderly sequence of operations. Let us take a 2D example of two shear matrices with the same shear-parameter:
I.e., we just increase the shearing effect by performing two shear operations one after another. If we had taken the negative value (μ = –λ) we would just have inverted the shearing. Note:
A chaining of pure shear matrices, which we had defined as upper triangular matrices in the first post, is a symmetric operation with respect to the order of the matrices.
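A quick Numpy check of these statements (the helper M is my own shorthand for a 2D shear matrix with parameter lam):

```python
import numpy as np

def M(lam):
    # 2D shear matrix with x/y-shearing parameter lam
    return np.array([[1.0, lam],
                     [0.0, 1.0]])

lam = 1.5
# Chaining two shears just adds the shear parameters:
print(np.allclose(M(lam) @ M(lam), M(2.0 * lam)))    # True
# The order does not matter for pure (upper triangular) shear matrices:
print(np.allclose(M(lam) @ M(0.5), M(0.5) @ M(lam))) # True
# A negative parameter inverts the shearing:
print(np.allclose(M(lam) @ M(-lam), np.eye(2)))      # True
```

So the upper triangular 2D shear matrices form a commutative group under matrix multiplication.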
Combination of upper and lower triangular shear matrices?
However, what happens if we combine an unipotent upper triangular shear matrix with a lower triangular matrix with the same shearing coefficient?
We get a symmetric matrix. This is not surprising, as this is always the case for the product of a matrix with its transpose. But, in addition, a scaling is superimposed. Note also that the combined operation is not symmetric regarding the order of matrix application:
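The two products can be written down explicitly; a small Numpy sketch (my own code) shows both results and the dependence on the order:

```python
import numpy as np

lam = 1.5
M_up = np.array([[1.0, lam],
                 [0.0, 1.0]])
M_lo = M_up.T                     # lower triangular counterpart

A = M_lo @ M_up    # = [[1, lam], [lam, 1 + lam^2]]
B = M_up @ M_lo    # = [[1 + lam^2, lam], [lam, 1]]

# Both products are symmetric ...
print(np.allclose(A, A.T), np.allclose(B, B.T))   # True True
# ... but they differ: the combined operation depends on the order
print(np.allclose(A, B))                          # False
```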
Let us apply the second variant to our square (see above) for λ = 1.5:
As expected we get a stronger distortion in y-direction. Note also that, in contrast to a normal shear operation, the y-levels of our 4 level-rows were not preserved. Instead the individual levels appear rotated and stretched. Actually, in one of the next posts we will see that our combined matrix can be factorized into a combination of a rotation, then a pure scaling, then a back-rotation (by the same angle used in the first rotation). But this corresponds to nothing else than a pure scaling in a rotated ECS.
The funny thing with chaining is that using a simple symmetric matrix from the start for λ = 1.5
would give us a different result than the product MshT * Msh:
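A two-line Numpy comparison makes the difference obvious (my own sketch):

```python
import numpy as np

lam = 1.5
M_sh  = np.array([[1.0, lam], [0.0, 1.0]])
M_sym = np.array([[1.0, lam], [lam, 1.0]])   # the "symmetric shearing" matrix

# The product with the transpose is NOT the naive symmetric matrix:
print(M_sh.T @ M_sh)                        # [[1, 1.5], [1.5, 3.25]]
print(np.allclose(M_sh.T @ M_sh, M_sym))    # False
```

The lower right element picks up the extra scaling term 1 + λ².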
Note:
Combining a shear matrix with its transpose is something different from just using a matrix in which the elements of the lower triangular part are filled in with the same values as their mirrored elements in the upper triangular part.
By combining upper and lower triangular shear matrices we leave the group of pure shear operations
We could call the application of a symmetric matrix a kind of symmetric shearing. But even with a symmetric matrix we lose the central property of a normal shear operation, namely that the z-levels of body-layers parallel to the x/y-plane are preserved. All in all we get into trouble as soon as we combine original shear matrices and their transposed counterparts. Regarding my definition of a shear matrix in my previous post, we obviously leave the group of shear operations by such a combination. Some readers may regard this as an exaggeration. But it makes sense:
Even the symmetric matrix of the combined operation can be reduced to a pure scaling operation in a rotated coordinate system. And this simply is not shearing.
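The announced factorization into rotation, pure scaling and back-rotation is just the spectral decomposition of the symmetric product matrix. A Numpy sketch of the claim (my own code; for a symmetric matrix eigh delivers an orthogonal eigenvector matrix R):

```python
import numpy as np

lam = 1.5
M_sh = np.array([[1.0, lam],
                 [0.0, 1.0]])
A = M_sh.T @ M_sh            # symmetric combined matrix

# Spectral decomposition: A = R @ D @ R.T with an orthogonal matrix R
evals, R = np.linalg.eigh(A)
D = np.diag(evals)

print(np.allclose(A, R @ D @ R.T))       # True: rotate, scale, rotate back
print(np.allclose(R.T @ R, np.eye(2)))   # True: R is orthogonal
```

In other words: in the ECS spanned by the eigenvectors, applying A is a pure scaling by the eigenvalues, which is exactly the claim above.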
Just for fun: “Symmetric shearing” applied to a cube
Just for the fun of it, let us apply a symmetric version of a 3D-shearing matrix to our cube:
shear = [[1.0, 0.6, 1.0], [0.6, 1.0, 1.2], [1.0, 1.2, 1.0]]
We expect all layers to be stretched into all coordinate directions now; z-levels of the original layers parallel to the x/y-plane will not be preserved. The normal vector to the transformed layers will be inclined to all axes after the operation.
The following plots were done with opaque surfaces. Due to the extreme stretching we would otherwise lose the overview – which is difficult enough to keep anyway. I have kept the order of colors for the cube's side faces the same as for the unstretched cube: sf_color=[‘yellow’, ‘blue’, ‘red’, ‘green’, ‘magenta’, ‘cyan’]. I first give you some selected perspectives:
The distortion in all directions is obvious. Note that the point-symmetries of all corners with respect to the origin are preserved. The following sequence reflects views from a circle around the vertical axis:
What we learn is that even scaling alone can distort a symmetric body quite a lot.
Conclusion
In this post I have visualized the effect of shear operations on squares and cubes. We saw that working with points on layers and with respective vectors is helpful when we want to demonstrate a central property of real shear operations: One vector component stays as it is. Nevertheless, the symmetry breaking of shearing transformations was clearly visible. We also saw that we should represent shear transformations either by upper or lower triangular matrices, but we should not mix these types of matrices.
As a side effect we have seen that a combination of a shear matrix with its transpose induces strong deformations in specific coordinate directions via enhanced scaling factors. Symmetric matrices also lead to relatively strong distortions. We have claimed that these distortions can be explained by pure scaling operations in a rotated coordinate system. A proof has yet to be delivered.
In the next post Fun with shear operations and SVD – III – Shearing of circles
we take a first step in the direction of shearing multivariate normal vector distributions. We keep it simple and just study the impact of shearing transformations on circles.