Fun with shear operations and SVD – IV – Shearing of ellipses

This post requires Javascript to display formulas!

In the previous posts of this series we got acquainted with shear operations:

Fun with shear operations and SVD – I – shear matrices and examples created with Blender
Fun with shear operations and SVD – II – Shearing of rectangles and cubes with Python and Matplotlib
Fun with shear operations and SVD – III – Shearing of circles

Already established results for shearing a circle

Post III focused on the shearing of a circle, which was centered in the Euclidean coordinate system [ECS] we worked with. The shear operation resulted in an ellipse with an inclination against the coordinate axes of our ECS. This was interesting regarding four points:

  • A circle, which is centered in a chosen ECS, exhibits a continuous rotational symmetry (isotropy). This obviously allows for a decomposition of a shear operation into a sequence of two affine operations in the chosen ECS: a scaling operation (with different factors along the coordinate axes) followed by a rotation (or the other way round). Equivalently: We could switch to another specific ECS which is already rotated by a proper angle against our originally chosen ECS and just perform a scaling operation there.
    The rotation angle is determined by the shear parameter λ.
    This seems to stand in some contrast to the shearing of figures with only discrete rotational symmetries: We saw for rectangles and cubes that an additional rotation was required to replace the shear operation by a sequence of scaling and rotation operations.
  • Points (x, y) of circles and ellipses are described by quadratic forms in two dimensions (with some real coefficients α, β, γ, δ):
    \[
    \alpha \,x^2 \, + \, \beta \, x \, y \, + \, \gamma \, y^2 \:=\: \delta
    \]

    Quadratic forms play a general role in the mathematical description of cone-sections. (Ellipses are the results of specific cone-sections.)

  • Ellipses also result from projections of multi-dimensional ellipsoids onto two-dimensional coordinate planes. Multi-dimensional ellipsoids are described by quadratic forms in an ECS covering the ℝn.
  • Hyper-surfaces for constant probability density values of multivariate normal vector distributions form multi-dimensional ellipsoids. Here we have a link to Machine Learning where key properties of certain objects are often ruled by Gaussian distributions.

From the first point we may expect that a shear operation applied to a multi-dimensional sphere will result in a multi-dimensional ellipsoid – and that such an operation could be replaced by scaling the original sphere (with different factors along the n coordinate axes of an n-dimensional ECS) followed by a rotation (or vice versa). We will explicitly investigate this for a 3-dimensional sphere in the next post.

If our assumption were true we would get a first glimpse of the fact that a general multivariate normal distribution can be created by applying a sequence of distinct affine (i.e. linear) operations to a spherically symmetric standard distribution. This is discussed in detail in another post-series in this blog.

What is a bit confusing at the moment is that replacing a shear operation by simpler affine operations seems, in general, to require at least two rotations, but only one when we work with centered isotropic bodies. We come back to this point when we discuss the decomposition of a shear matrix by the so-called SVD-procedure.
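As a quick numerical cross-check of this point: Numpy's SVD factorizes a shear matrix into a product of an orthogonal matrix, a diagonal scaling matrix and another orthogonal matrix. A minimal sketch (the shear parameter value λ = 0.8 is just an assumed example):

```python
import numpy as np

lam = 0.8                      # example value of the shear parameter lambda
M = np.array([[1.0, lam],      # shear along the x-direction
              [0.0, 1.0]])

# SVD: M = U @ diag(S) @ Vt, with U and Vt orthogonal
U, S, Vt = np.linalg.svd(M)

# the factorization reconstructs the shear matrix
print(np.allclose(U @ np.diag(S) @ Vt, M))   # True

# neither orthogonal factor is trivial: the decomposition involves two rotations
print(np.allclose(U, np.eye(2)))             # False
print(np.allclose(Vt, np.eye(2)))            # False

# the product of the singular values reproduces det(M) = 1
print(np.isclose(S[0] * S[1], 1.0))          # True
```

Both orthogonal factors differ from the identity, which illustrates why a general shear needs two rotations in its decomposition.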

In the previous post of this series we have used the radius of the circle and the shearing parameter λ to derive analytical expressions for the coordinates of special points with extremal values on our ellipse:

  • Points with maximal and minimal y-coordinate values.
  • Points with a maximal or minimal distance to the symmetry center of the ellipse. I.e. the end-points of the principal diameters of the ellipse.

From the fact that shearing does not change extremal values along the axis perpendicular to the shearing direction we could easily determine the lengths of the ellipse’s principal axes and the inclination angle of the longer axis against the x-axis of our Euclidean coordinate system [ECS].

What do we have in addition? In another mini-series on ellipses

Properties of ellipses by matrix coefficients – I – Two defining matrices (and two more posts)

I have meanwhile described how the geometry of an ellipse is related to its quadratic form and respective coefficients of a symmetric matrix. I call this matrix Aq. It forces the components of position vectors to fulfill an equation based on a quadratic polynomial. Furthermore Aq‘s eigenvalues and eigenvectors define the lengths of the ellipse’s principal axes and their inclination to the axes of our chosen ECS. The matrix coefficients in addition allow us to determine the coordinates of the points with extremal y-values on an ellipse. We will use these results later in the present post.

Objectives of this post: Shearing of a centered, rotated ellipse

In this post I want to show that shearing a given centered, but rotated original ellipse EO results in another ellipse ES with a different inclination angle and different sizes of the principal axes.

In addition we will derive the relations of the shearing parameter λS with the coefficients of the symmetric matrix \(\pmb{\operatorname{A}}_q^S \) that defines ES. I also provide formulas for the dependence of ES‘s geometrical properties on the shear parameter λS.

There are two basic prerequisites:

  1. We must show that the application of a shear transformation to the variables of the quadratic form which describes an ellipse EO results in another proper quadratic form and a related matrix \(\pmb{\operatorname{A}}_q^S \).
  2. The coefficients of the resulting quadratic form and of \(\pmb{\operatorname{A}}_q^S \) must fulfill a mathematical criterion for an ellipse.

We expect point 1 to be valid because a shear operation is just a linear operation.

To get some exercise we approach our goals by first looking at the simple case of shearing an axis-parallel ellipse before extending our considerations to general ellipses with an inclination angle against the coordinate axes of our chosen ECS.


Properties of ellipses by matrix coefficients – I – Two defining matrices


For my two current post series on multivariate normal distributions [MNDs] and on shear operations

Multivariate Normal Distributions – II – random vectors and their covariance matrix
Fun with shear operations and SVD

some geometrical and algebraic properties of ellipses are of interest. Geometrically, we think of an ellipse in terms of its two perpendicular principal axes, its focal points, its ellipticity and its rotation angle with respect to a coordinate system. As elliptic data appear in many contexts of physics, chaos theory, engineering, optics … ellipses are well studied mathematical objects. So, why a post about ellipses in the Machine Learning section of a blog?

In my present working context ellipses appear as a side result of statistical multivariate normal distributions [MNDs]. The projections of multidimensional contour hyper-surfaces of a MND within the ℝn onto coordinate planes of a Euclidean Coordinate System [ECS] result in 2-dimensional ellipses. These ellipses are typically rotated against the axes of the ECS – and their rotation angles reflect data correlations. The general relations of statistical vector data with projections of multidimensional MNDs are somewhat intricate.

Data produced in numerical experiments, e.g. in a Machine Learning context, most often do not directly give you the geometrical properties of ellipses which some theory may have predicted. Instead you may get averaged values of statistical vector distributions which correspond to algebraic coefficients. These coefficients can often be regarded as elements of a matrix. The mathematical reason is that ellipses can in general be defined by matrices operating on position vectors. In particular: Coefficients of quadratic polynomial expressions used to describe ellipses as conic sections correspond to the coefficients of a matrix operating on position vectors.

So, when I became confronted with multidimensional MNDs and their projections on coordinate planes the following questions became interesting:

  • How can one derive the lengths σ1, σ2 of the perpendicular principal axes of an ellipse from data for the coefficients of a matrix which defines the ellipse by a polynomial expression?
  • By which formula do the matrix coefficients provide the inclination angle of the ellipse’s primary axes with the x-axis of a chosen coordinate system?

You may have to dig a bit to find correct and reproducible answers in your math books. Regarding the resulting mathematical expressions I have had some bad experiences with ChatGPT. But as a former physicist I take the above questions as a welcome exercise in solving quadratic equations and doing some linear algebra. So, for those of my readers who are a bit interested in elementary math I want to answer the posed questions step by step and indicate how one can derive the respective formulas. The level is moderate – you need some knowledge in trigonometry and/or linear algebra.

Centered ellipses and two related matrices

Below I regard ellipses whose centers coincide with the origin of a chosen ECS. For our present purpose we thus get rid of some boring linear terms in the equations we have to solve. We do not lose much general validity by this step: Results for an off-center ellipse follow from applying a simple translation operation to the resulting vector data. But I admit: Your (statistical) data must give you some information about the center of your ellipse. We assume that this is the case.

Our ellipses can, however, be rotated with respect to a chosen ECS. I.e., their longer principal axes may be inclined by some angle φ towards the x-axis of our ECS.

There are actually two different ways to define a centered ellipse by a matrix:

  • Alternative 1: We define the (rotated) ellipse by a matrix AE which results from the (matrix) product of two simpler matrices: AE = RφDσ1, σ2. Dσ1, σ2 corresponds to a scaling operation applied to position vectors for points located on a centered unit circle. Rφ describes a subsequent rotation. AE summarizes these geometrical operations in a compact form.
  • Alternative 2: We define the (rotated) ellipse by a matrix Aq which combines the x- and y-elements of position vectors in a polynomial equation with quadratic terms in the components (see below). The matrix defines a so called quadratic form. Geometrically interpreted, a quadratic form describes an ellipse as a special case of a conic section. The coefficients of the polynomial and the matrix must, of course, fulfill some particular properties.

While it is relatively simple to derive the matrix elements from known values for σ1, σ2 and φ it is a bit harder to derive the ellipse’s properties from the elements of either of the two defining matrices. I will cover both matrices in this post. For many practical purposes the derivation of central elliptic properties from given elements of Aq is more relevant and thus of special interest in the following discussion.

Matrix AE of a centered and rotated ellipse: Scaling of a unit circle followed by a rotation

Our starting point is a unit circle C whose center coincides with our ECS’s origin. The components of vectors vc to points on the circle C fulfill the following conditions:

\[
\pmb{C} \::\: \left\{ \pmb{v}_c \:=\: \begin{pmatrix} x_c \\ y_c \end{pmatrix} \:=\: \begin{pmatrix} \operatorname{cos}(\psi) \\ \operatorname{sin}(\psi) \end{pmatrix}, \quad 0\,\leq\,\psi\, \le 2\pi \right\}
\]

and

\[
x_c^2 \, +\, y_c^2 \,=\; 1
\]

We define an ellipse Eσ1, σ2 by the application of two linear operations to the vectors of the unit circle:

\[
\pmb{E}_{\sigma_1, \, \sigma_2} \::\: \left\{ \, \pmb{v}_e \:=\: \begin{pmatrix} x_e \\ y_e \end{pmatrix} \:=\: \pmb{\operatorname{R}}_{\phi} \circ \pmb{\operatorname{D}}_E \circ \pmb{v_c} , \quad \pmb{v_c} \in \pmb{C} \, \right\}
\]

DE is a diagonal matrix which describes a stretching of the circle along the ECS-axes, and Rφ is an orthogonal rotation matrix. The stretching (or scaling) of the vector-components is done by

\[
\pmb{\operatorname{D}}_E \:=\: \begin{pmatrix} \sigma_1 & 0 \\ 0 & \sigma_2 \end{pmatrix},
\]
\[
\pmb{\operatorname{D}}_E^{-1} \:=\: \begin{pmatrix} {1 \over \sigma_1} & 0 \\ 0 & {1 \over \sigma_2} \end{pmatrix},
\]

The coefficients σ1, σ2 obviously define the lengths of the principal axes of the yet unrotated ellipse. To be more precise: σ1 is half of the diameter in x-direction, σ2 is half of the diameter in y-direction.

The subsequent rotation by an angle φ against the x-axis of the ECS is done by

\[
\pmb{\operatorname{R}}_{\phi} \:=\:
\begin{pmatrix} \operatorname{cos}(\phi) & -\,\operatorname{sin}(\phi) \\ \operatorname{sin}(\phi) & \operatorname{cos}(\phi)\end{pmatrix}
\:=\: \begin{pmatrix} u_1 & -\,u_2 \\ u_2 & u_1 \end{pmatrix}
\]
\[
\pmb{\operatorname{R}}_{\phi}^T \:=\: \pmb{\operatorname{R}}_{\phi}^{-1} \:=\: \pmb{\operatorname{R}}_{-\,\phi}
\]

The combined linear transformation results in a matrix AE with coefficients ((a, b), (c, d)):

\[ \begin{align}
\pmb{\operatorname{A}}_E \:=\: \pmb{\operatorname{R}}_{\phi} \circ \pmb{\operatorname{D}}_E \:=\:
\begin{pmatrix} \sigma_1\,u_1 & -\,\sigma_2\,u_2 \\ \sigma_1\,u_2 & \sigma_2\,u_1 \end{pmatrix} \:=:\: \begin{pmatrix} a & b \\ c & d \end{pmatrix}
\end{align}
\]

This is the first set of matrix coefficients we are interested in.

Note:

\[ \pmb{\operatorname{A}}_E^{-1} \:=\:
\begin{pmatrix} {1 \over \sigma_1} \,u_1 & {1 \over \sigma_1}\,u_2 \\ -{1 \over \sigma_2}\,u_2 & {1 \over \sigma_2}\,u_1 \end{pmatrix}
\]
\[
\pmb{v}_e \:=\: \begin{pmatrix} x_e \\ y_e \end{pmatrix} \:=\: \pmb{\operatorname{A}}_E \circ \begin{pmatrix} x_c \\ y_c \end{pmatrix}
\]
\[
\pmb{v}_c \:=\: \begin{pmatrix} x_c \\ y_c \end{pmatrix} \:=\: \pmb{\operatorname{A}}_E^{-1} \circ \begin{pmatrix} x_e \\ y_e \end{pmatrix}
\]

We use

\[ \begin{align}
u_1 \,&=\, \operatorname{cos}(\phi),\quad u_2 \,=\,\operatorname{sin}(\phi), \quad u_1^2 \,+\, u_2^2 \,=\, 1 \\
\lambda_1 \,&: =\, \sigma_1^2, \quad\quad \lambda_2 \,: =\, \sigma_2^2
\end{align}
\]

and find

\[ \begin{align}
a \,&=\, \sigma_1\,u_1, \quad b \,=\, -\, \sigma_2\,u_2, \\
c \,&=\, \sigma_1\,u_2, \quad d \,=\, \sigma_2\,u_1
\end{align}
\]
\[
\operatorname{det}\left( \pmb{\operatorname{A}}_E \right) \:=\: a\,d \,-\, b\,c \:=\: \sigma_1\, \sigma_2
\]
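These relations are easy to verify numerically. A small sketch with assumed example values for σ1, σ2 and φ:

```python
import numpy as np

sigma1, sigma2, phi = 2.0, 1.0, np.pi / 6    # example axis factors and angle
u1, u2 = np.cos(phi), np.sin(phi)

D_E = np.diag([sigma1, sigma2])              # scaling of the unit circle
R_phi = np.array([[u1, -u2],
                  [u2,  u1]])                # subsequent rotation

A_E = R_phi @ D_E                            # combined matrix ((a, b), (c, d))
(a, b), (c, d) = A_E

# the coefficients derived above
print(np.allclose([a, b, c, d],
                  [sigma1 * u1, -sigma2 * u2, sigma1 * u2, sigma2 * u1]))  # True

# det(A_E) = a*d - b*c = sigma1 * sigma2
print(np.isclose(np.linalg.det(A_E), sigma1 * sigma2))                     # True
```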

σ1 and σ2 are factors which give us the lengths of the principal axes of the ellipse. σ1 and σ2 have positive values. We therefore demand:

\[
\operatorname{det}\left( \pmb{\operatorname{A}}_E \right) \:=\: a\,d \,-\, b\,c \:\gt\: 0
\]

Ok, we have defined an ellipse via a matrix AE, whose coefficients are directly based on geometrical properties. But as said: Often an ellipse is described by an equation with quadratic terms in x and y coordinates of data points. The quadratic form has its background in algebraic properties of conic sections. As a next step we derive such a quadratic equation and relate the coefficients of the quadratic polynomial with the elements of our matrix AE. The result will in turn define another very useful matrix Aq.

Quadratic forms – Case 1: Centered ellipse, principal axes aligned with ECS-axes

We start with a simple case. We take a so called axis-parallel ellipse which results from a scaling matrix DE, only. I.e., in this case, the rotation matrix is assumed to be just the identity matrix. We can omit it from further calculations:

\[
\pmb{\operatorname{R}}_{\phi} \:=\:
\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \,=\, \pmb{\operatorname{I}}, \quad u_1 \,=\, 1,\: u_2 \,=\, 0, \: \phi = 0
\]

We need an expression in terms of (xe, ye). To get quadratic terms of vector components it often helps to invoke a scalar product. The scalar product of a vector with itself gives us the squared norm or length of a vector. In our case the norms of the inversely re-scaled vectors obviously have to fulfill:

\[
\left[\, \pmb{\operatorname{D}}_E^{-1} \circ \begin{pmatrix} x_e \\ y_e \end{pmatrix} \, \right]^T \,\bullet \, \left[\, \pmb{\operatorname{D}}_E^{-1} \circ \begin{pmatrix} x_e \\ y_e \end{pmatrix} \,\right] \:=\: 1
\]

(The bullet represents the scalar product of the vectors.) This directly results in:

\[
{1 \over \sigma_1^2} x_e^2 \, + \, {1 \over \sigma_2^2} y_e^2 \:=\: 1
\]

We eliminate the denominator to get a convenient quadratic form:

\[
\lambda_2\,x_e^2 \,+\, \lambda_1\, y_e^2 \:=\: \lambda_1 \lambda_2 \quad \left( = \, \operatorname{det}\left(\pmb{\operatorname{D}}_E\right)^2 \right) \phantom{\huge{(}}
\]

If we were given the quadratic form more generally by coefficients α, β and γ

\[
\alpha \,x_e^2 \,+\, \beta\, x_e y_e \,+\, \gamma\, y_e^2 \:=\: \delta
\]

we could directly relate these coefficients with the geometrical properties of our ellipse:

Axis-parallel ellipse:

\[ \begin{align}
\alpha \,&=\, d^2 \,=\, \sigma_2^2 \,=\, \lambda_2 \\
\gamma \,&=\, a^2 \,=\, \sigma_1^2 \,=\, \lambda_1 \\
\beta \,&=\, b \,=\, c \,=\, 0 \\
\delta \,&=\, a^2\, d^2 \,=\, \sigma_1^2 \, \sigma_2^2 \,=\, \lambda_1 \lambda_2 \\
\phi &= 0
\end{align}
\]

I.e., we can directly derive σ1, σ2 and φ from the coefficients of the quadratic form. But an axis-parallel ellipse is a very simple ellipse. Things get more difficult for a rotated ellipse.

Quadratic forms – Case 2: General centered and rotated ellipse

We perform the same trick to get a quadratic polynomial with the vectors ve of a rotated ellipse:

\[
\left[ \,\pmb{\operatorname{A}}_E^{-1} \circ \begin{pmatrix} x_e \\ y_e \end{pmatrix} \, \right]^T \, \bullet \,
\left[ \, \pmb{\operatorname{A}}_E^{-1} \circ \begin{pmatrix} x_e \\ y_e \end{pmatrix} \,\right] \:=\: 1
\]

I skip the lengthy, but simple algebraic calculation. We get (with our matrix elements a, b, c, d):

\[
\left( c^2 \,+\, d^2 \right)\,x_e^2 \,-\, 2\left( a\,c\, +\, b\,d \right)\,x_e y_e \,+\, \left(a^2 \,+\, b^2\right)\,y_e^2
\:=\: \sigma_1^2 \, \sigma_2^2
\]

The rotation has obviously led to a mixing of the components in the polynomial. The coefficient of the mixed term xeye is non-zero in the non-trivial case (σ1 ≠ σ2 and φ ≠ 0).
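One can convince oneself of this quadratic form numerically by sampling points of a rotated ellipse (again with assumed example values for σ1, σ2 and φ):

```python
import numpy as np

sigma1, sigma2, phi = 2.0, 1.0, np.pi / 5          # example ellipse data
u1, u2 = np.cos(phi), np.sin(phi)
a, b = sigma1 * u1, -sigma2 * u2                   # coefficients of A_E
c, d = sigma1 * u2,  sigma2 * u1

psi = np.linspace(0.0, 2.0 * np.pi, 200)           # parameter of the unit circle
xe = a * np.cos(psi) + b * np.sin(psi)             # points mapped by A_E onto the ellipse
ye = c * np.cos(psi) + d * np.sin(psi)

# the quadratic polynomial evaluates to sigma1^2 * sigma2^2 for all points
lhs = (c**2 + d**2) * xe**2 - 2.0 * (a*c + b*d) * xe * ye + (a**2 + b**2) * ye**2
print(np.allclose(lhs, sigma1**2 * sigma2**2))     # True
```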

Quadratic form: A matrix equation to define an ellipse

We rewrite our equation again with general coefficients α, β and γ

\[
\alpha\,x_e^2 \, + \, \beta \, x_e y_e \, + \, \gamma \, y_e^2 \:=\: \delta
\]

These are coefficients which may come from some theory or from averages of numerical data. The quadratic polynomial can in turn be reformulated as a matrix operation with a symmetric matrix Aq:

\[
\pmb{v}_e^T \circ \pmb{\operatorname{A}}_q \circ \pmb{v}_e \:=\: \delta
\]

with

\[ \pmb{\operatorname{A}}_q \:=\:
\begin{pmatrix} \alpha & \beta / 2 \\ \beta / 2 & \gamma \end{pmatrix}
\]
\[ \pmb{\operatorname{A}}_q \:=\:
\begin{pmatrix} c^2 \,+\, d^2 & -\left( a\,c \, +\, b\,d \right) \\ -\left( a\,c \, +\, b\,d \right) & a^2 \,+\, b^2 \end{pmatrix}
\]
\[ \begin{align}
\alpha \:&=\: c^2 \,+\, d^2 \:=\: \sigma_1^2 u_2^2 \, + \, \sigma_2^2 u_1^2 \\
\gamma \:&=\: a^2 \,+\, b^2 \:=\: \sigma_1^2 u_1^2 \, + \, \sigma_2^2 u_2^2 \\
\beta \:&=\: -\, 2\left(a c \,+\, b d \right) \:=\: -2 \left( \sigma_1^2 \,-\, \sigma_2^2 \right) u_1 u_2 \\
\delta \:&=\: \left(ad \,-\, bc\right)^2 \:=\: \sigma_1^2 \, \sigma_2^2
\end{align}
\]

Note also:

\[ \begin{align}
\alpha \,+\, \gamma \:&=\: a^2 \,+\, b^2 \,+\, c^2 \,+\, d^2 \:=\: \sigma_1^2 \, + \, \sigma_2^2 \\
\gamma \,-\, \alpha \:&=\: a^2 \,+\, b^2 \,-\, c^2 \,-\, d^2 \:=\: \left( \sigma_1^2 \,-\, \sigma_2^2 \right) \operatorname{cos}\left(2\phi\right) \end{align}
\]

These terms are intimately related to the geometrical data; expect them to play a major role in further considerations.

With the help of the coefficients of AE we can also show that det(Aq) > 0:

\[
\operatorname{det}\left( \pmb{\operatorname{A}}_q \right) \:=\: \left(\alpha\gamma \,-\, {1\over 4}\beta^2\right) \:=\: \left(bc \,-\, ad\right)^2 \:=\: \sigma_1^2 \, \sigma_2^2 \:\gt\: 0
\]

Thus Aq is an invertible matrix (as was to be expected).
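A short numerical cross-check of the coefficient relations and of the determinant condition (example values for σ1, σ2 and φ assumed):

```python
import numpy as np

sigma1, sigma2, phi = 3.0, 1.5, 0.4                 # example ellipse data
u1, u2 = np.cos(phi), np.sin(phi)
a, b = sigma1 * u1, -sigma2 * u2                    # coefficients of A_E
c, d = sigma1 * u2,  sigma2 * u1

alpha = c**2 + d**2                                 # coefficients of the quadratic form
gamma = a**2 + b**2
beta  = -2.0 * (a*c + b*d)
delta = (a*d - b*c)**2

A_q = np.array([[alpha,    beta / 2],
                [beta / 2, gamma   ]])

# det(A_q) = alpha*gamma - beta^2/4 = (bc - ad)^2 > 0  ->  A_q is invertible
print(np.isclose(np.linalg.det(A_q), (b*c - a*d)**2))   # True
print(np.linalg.det(A_q) > 0.0)                          # True

# delta equals (sigma1 * sigma2)^2
print(np.isclose(delta, (sigma1 * sigma2)**2))           # True
```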

Above we have got α, β, γ, δ as some relatively simple functions of a, b, c, d. The inversion is not so trivial and we do not even try it here. Instead we focus on how we can express σ1, σ2 and φ as functions of either (a, b, c, d) or (α, β, γ, δ).

How to derive σ1, σ2 and φ from the coefficients of AE or Aq in the general case?

Let us assume we have (numerical) data for the coefficients of the quadratic form. Then we may want to calculate values for the length of the principal axes and the rotation angle φ of the corresponding ellipse. There are two ways to derive respective formulas:

  • Approach 1: Use trigonometric relations to directly solve the equation system.
  • Approach 2: Use an eigenvector decomposition of Aq.

Both ways are fun!

Direct derivation of σ1, σ2 and φ by using trigonometric relations

We start with the hard way, namely by solving the equations for λ1, λ2 and φ directly. This requires some knowledge in trigonometry. So far, we know the following:

\[ \begin{align}
\gamma \:&=\: a^2 \,+\, b^2 \:=\: \lambda_1 \operatorname{cos}^2\phi \, + \, \lambda_2 \operatorname{sin}^2\phi \\
\alpha \:&=\: c^2 \,+\, d^2 \:=\: \lambda_1 \operatorname{sin}^2\phi \, + \, \lambda_2 \operatorname{cos}^2\phi \\
\beta \:&=\: -\, 2\left(a c \,+\, b d \right) \:=\: -2 \left( \lambda_1 \,-\, \lambda_2 \right) \operatorname{cos}\phi \, \operatorname{sin}\phi \\
\delta \:&=\: \lambda_1 \, \lambda_2
\end{align}
\]

Trigonometric relations which we can use are:

\[ \begin{align}
\operatorname{sin}(2 \phi) \:&=\: 2 \,\operatorname{cos}(\phi)\, \operatorname{sin}(\phi) \\
\operatorname{cos}(2 \phi) \:&=\: 2 \,\operatorname{cos}^2(\phi)\, -\, 1 \\
\:&=\: 1 \,-\, 2\,\operatorname{sin}^2(\phi) \\
\:&=\: \operatorname{cos}^2(\phi)\, -\, \operatorname{sin}^2(\phi)
\end{align}
\]

Without losing generality we assume

\[
\lambda_1 \:\ge \lambda_2
\]

Had we chosen otherwise, the results would only differ by a rotation of π/2. This leads to

\[ \begin{align}
2 \gamma \:&=\: 2\left( a^2 \,+\, b^2 \right) \:=\: \lambda_1 \left( 1 \,+\, \operatorname{cos}(2\phi) \right) \, + \,
\lambda_2 \left( 1 \,-\, \operatorname{cos}(2\phi) \right) \\
2 \alpha \:&=\: 2 \left( c^2 \,+\, d^2 \right) \:=\: \lambda_1 \left( 1 \,-\, \operatorname{cos}(2\phi) \right) \, + \,
\lambda_2 \left( 1 \,+\, \operatorname{cos}(2\phi) \right) \\
-\, \beta \:&=\: 2 \left(a c \,+\, b d \right) \:=\: \left( \lambda_1 \,-\, \lambda_2 \right) \operatorname{sin}(2\phi)
\end{align}
\]

We rearrange terms and get:

\[ \begin{align}
\left( \lambda_1 \,-\, \lambda_2 \right) \operatorname{cos}(2\phi) \,+\, \lambda_1 \,+\, \lambda_2 \:&=\: 2 \left( a^2 \,+\, b^2 \right) \\
\left( \lambda_1 \,-\, \lambda_2 \right) \operatorname{cos}(2\phi) \,-\, \lambda_1 \,-\, \lambda_2 \:&=\: – 2 \left( c^2 \,+\, d^2 \right) \\
\left( \lambda_1 \,-\, \lambda_2 \right) \operatorname{sin}(2\phi) \:&=\: 2 \left( a\,c \,+\, b\,d \right)
\end{align}
\]

Let us define some further variables before we add and subtract the first two of the above equations:

\[ \begin{align}
r \:&=\: {1 \over 2} \left( a^2 \,+\, b^2 \,+\, c^2 \,+\, d^2 \right) \:=\: {1 \over 2} \left( \gamma \,+\, \alpha \right) \\
s_1 \:&=\: {1 \over 2} \left( a^2 \,+\, b^2 \,-\, c^2 \,-\, d^2 \right) \:=\: {1 \over 2} \left( \gamma \,-\, \alpha \right) \\
s_2 \:&=\: \left( a\,c \,+\, b\,d \right) \:=\: -\, {1 \over 2} \, \beta \\
\pmb{s} \:&=\: \begin{pmatrix} s_1 \\ s_2 \end{pmatrix} \:=\: {1 \over 2} \begin{pmatrix} \gamma \,-\, \alpha \\ -\, \beta \end{pmatrix} \phantom{\Huge{(}} \\
s \:&=\: \sqrt{ s_1^2 \,+\, s_2^2 } \:=\: {1 \over 2} \left[ \, \beta^2 \,+\, \left( \gamma \,-\, \alpha \right)^2 \,\right]^{1/2} \phantom{\huge{(}}
\end{align}
\]

Adding the first two of the rearranged equations gives the cos(2φ)-component; combining this with the third equation gives:

\[
{1 \over 2} \left( \lambda_1 \,-\, \lambda_2 \right) \begin{pmatrix} \operatorname{cos}(2\phi) \\ \operatorname{sin}(2\phi) \end{pmatrix} \:=\: \begin{pmatrix} s_1 \\ s_2 \end{pmatrix}
\]

Taking the vector norm on both sides (with λ1 ≥ λ2) and, in addition, subtracting the second of the rearranged equations from the first results in:

\[ \begin{align}
\lambda_1 \,-\, \lambda_2 \:&=\: 2 s \:=\: \left[ \, \beta^2 \,+\, \left( \gamma \,-\, \alpha \right)^2 \,\right]^{1/2} \\
\lambda_1 \, + \, \lambda_2 \:&=\: 2 r \:\:=\: \gamma \,+\, \alpha
\end{align}
\]

This gives us:

\[ \begin{align}
\sigma_1^2 \,=\, \lambda_1 \:&=\: r \,+\, s \\
\sigma_2^2 \,=\, \lambda_2 \:&=\: r \,-\, s
\end{align}
\]

In terms of a, b, c, d:

\[ \begin{align}
\sigma_1^2 \,=\, \lambda_1 \:&=\: {1 \over 2} \left[ \, a^2+b^2+c^2 +d^2 \,+\, \left[ 4 (ac + bd)^2 \, +\, \left( c^2+d^2 -a^2 -b^2\right)^2 \, \right]^{1/2} \right] \\
\sigma_2^2 \,=\, \lambda_2 \:&=\: {1 \over 2} \left[ \, a^2+b^2+c^2 +d^2 \,-\, \left[ 4 (ac + bd)^2 \, +\, \left( c^2+d^2 -a^2 -b^2\right)^2 \, \right]^{1/2} \right]
\end{align}
\]

Who said that life has to be easy? In terms of α, β, γ, δ it looks a bit better:

\[ \begin{align}
\sigma_1^2 \,=\, \lambda_1 \:&=\: {1 \over 2} \left[ \, \left( \gamma \,+\, \alpha \right) \,+\, \left[ \beta^2 \, +\, \left( \gamma \,-\, \alpha \right)^2 \, \right]^{1/2} \right] \\
\sigma_2^2 \,=\, \lambda_2 \:&=\: {1 \over 2} \left[ \, \left( \gamma \,+\, \alpha \right) \,-\, \left[ \beta^2 \, +\, \left( \gamma \,-\, \alpha \right)^2 \, \right]^{1/2} \right]
\end{align}
\]

The reader can convince himself that with the definitions above we do indeed reproduce

\[
\lambda_1 \, \lambda_2 \:=\: r^2 \,-\, s^2 \:=\: \left(a\, d\,-\, b\,c \right)^2 \,=\, \operatorname{det}\left(\pmb{\operatorname{A}}_E\right)^2
\]
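For readers who want to check these formulas numerically: the following sketch builds α, β, γ from assumed example values of σ1, σ2 and φ and verifies that r ± s reproduces the squared axis lengths:

```python
import numpy as np

sigma1, sigma2, phi = 2.5, 1.0, 0.7          # example values with sigma1 > sigma2
u1, u2 = np.cos(phi), np.sin(phi)
a, b = sigma1 * u1, -sigma2 * u2             # coefficients of A_E
c, d = sigma1 * u2,  sigma2 * u1

alpha = c**2 + d**2                          # coefficients of the quadratic form
gamma = a**2 + b**2
beta  = -2.0 * (a*c + b*d)

r = 0.5 * (gamma + alpha)
s = 0.5 * np.sqrt(beta**2 + (gamma - alpha)**2)

lam1, lam2 = r + s, r - s                    # lambda_1 = sigma_1^2, lambda_2 = sigma_2^2
print(np.isclose(lam1, sigma1**2))           # True
print(np.isclose(lam2, sigma2**2))           # True
print(np.isclose(lam1 * lam2, (a*d - b*c)**2))  # True
```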

Determination of the inclination angle φ

For the determination of the angle φ we use:

\[
\begin{pmatrix} \operatorname{cos}(2\phi) \\ \operatorname{sin}(2\phi) \end{pmatrix} \:=\: {1 \over s} \begin{pmatrix} s_1 \\ s_2 \end{pmatrix}
\]

If we choose

\[
-\pi/2 \,\lt\, \phi \le \pi/2
\]

we get:

\[
\phi \:=\: {1 \over 2} \operatorname{arctan}\left({s_2 \over s_1}\right) \:=\: -\, {1 \over 2} \operatorname{arctan}\left( { -\, 2\left(ac \,+\, bd\right) \over (a^2 \,+\, b^2 \,-\, c^2 \,-\, d^2) } \right)
\]
\[
\phi \:=\: -\, {1 \over 2} \operatorname{arctan}\left( {\beta \over \gamma \,-\, \alpha } \right)
\]

Or equivalently with respect to α, β, γ:

\[
\operatorname{sin}\left(2\phi\right) \:=\: -\, {\beta \over \left[ \beta^2 \, +\, \left( \gamma \,-\, \alpha \right)^2 \, \right]^{1/2} }
\]
\[
\phi \:=\: {1 \over 2} \operatorname{arcsin}\left({s_2 \over s}\right) \:=\: -\, {1 \over 2} \operatorname{arcsin}\left( {\beta \over \left[ \beta^2 \, +\, \left( \gamma \,-\, \alpha \right)^2 \, \right]^{1/2} } \right)
\]
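The angle formula can be tested in the same way for a known example rotation angle within the chosen range:

```python
import numpy as np

sigma1, sigma2, phi = 2.0, 1.0, 0.3          # example angle within (-pi/2, pi/2]
u1, u2 = np.cos(phi), np.sin(phi)
a, b = sigma1 * u1, -sigma2 * u2             # coefficients of A_E
c, d = sigma1 * u2,  sigma2 * u1

alpha = c**2 + d**2                          # coefficients of the quadratic form
gamma = a**2 + b**2
beta  = -2.0 * (a*c + b*d)

# recover the inclination angle from the quadratic-form coefficients
phi_rec = -0.5 * np.arctan(beta / (gamma - alpha))
print(np.isclose(phi_rec, phi))              # True
```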

Note: All in all there are four different solutions. The reason is that we could alternatively have requested λ2 ≥ λ1 and could also have chosen the angle φ + π. So, the ambiguity is due to the selection of the considered principal axis and to rotational symmetries.

In the special case that we have a circle

\[
\lambda_1 \,=\, \lambda_2 \,=\, r
\]

and then, of course, any angle φ will be allowed.

2nd way to a solution for σ1, σ2 and φ via eigendecomposition

For our second way of deriving formulas for σ1, σ2 and φ we use some linear algebra. This way is interesting for two reasons: It indicates how we can use the Python “linalg”-package together with Numpy to get results numerically. In addition we get familiar with a representation of the ellipse in a properly rotated ECS.

Above we have written down a symmetric matrix Aq describing an operation on the position vectors of points on our rotated ellipse:

\[
\pmb{v}_e^T \circ \pmb{\operatorname{A}}_q \circ \pmb{v}_e \:=\: \delta
\]

We know from linear algebra that every symmetric matrix can be decomposed into a product of orthogonal matrices O, OT and a diagonal matrix. This reflects the so called eigendecomposition of a symmetric matrix. It is a unique decomposition in the sense that it has a uniquely defined solution in terms of the coefficients of the following matrices:

\[
\pmb{\operatorname{A}}_q \:=\: \pmb{\operatorname{O}} \circ \pmb{\operatorname{D}}_q \circ \pmb{\operatorname{O}}^T
\]

with

\[
\pmb{\operatorname{D}}_{q} \:=\: \begin{pmatrix} \lambda_{u} & 0 \\ 0 & \lambda_{d} \end{pmatrix}
\]

The coefficients λu and λd are eigenvalues of both Dq and Aq. Reason: An orthogonal similarity transformation does not change the eigenvalues of a matrix. So, the diagonal elements of Dq are the eigenvalues of Aq. Linear algebra also tells us that the columns of the matrix O are given by the components of the normalized eigenvectors of Aq.

We can interpret O as a rotation matrix Rψ for some angle ψ:

\[
\pmb{v}_e^T \circ \pmb{\operatorname{A}}_q \circ \pmb{v}_e \:=\: \pmb{v}_e^T \circ
\pmb{\operatorname{R}}_{\psi} \circ \pmb{\operatorname{D}}_q \circ \pmb{\operatorname{R}}_{\psi}^T \circ \pmb{v}_e \:=\: \delta
\]

This means

\[
\left[ \pmb{\operatorname{R}}_{-\psi} \circ \pmb{v}_e \right]^T \circ
\pmb{\operatorname{D}}_q \circ \left[ \pmb{\operatorname{R}}_{-\psi} \circ \pmb{v}_e \right] \:=\: \delta \:=\: \sigma_1^2\, \sigma_2^2
\]
\[
\pmb{v}_{-\psi}^T \circ \pmb{\operatorname{D}}_q \circ \pmb{v}_{-\psi} \:=\: \sigma_1^2\, \sigma_2^2
\]

The whole operation tells us a simple truth, which we are already familiar with:

By our construction procedure for a rotated ellipse we know that a rotated ECS exists, in which the ellipse can be described as the result of a scaling operation (along the coordinate axes of the rotated ECS) applied to a unit circle. (This ECS is, of course, rotated by an angle φ against our working ECS in which the ellipse appears rotated.)

Indeed:

\[
\left(\, x_{-\psi}, \,y_{-\psi}\,\right) \circ \begin{pmatrix} \lambda_u & 0 \\ 0 & \lambda_d \end{pmatrix} \circ \begin{pmatrix} x_{-\psi} \\ y_{-\psi} \end{pmatrix} \:=\: \sigma_1^2\, \sigma_2^2
\]
\[
{\lambda_u \over \sigma_1^2\, \sigma_2^2} \, x_{-\psi}^2 \, + \, {\lambda_d \over \sigma_1^2\, \sigma_2^2} \, y_{-\psi}^2 \:=\: 1
\]

We know exactly by which angle ψ we have to rotate our ECS to get this result: ψ = φ. Therefore:

\[ \begin{align}
x_{-\psi} \:&=\: \sigma_1 \, x_c, \\
y_{-\psi} \:&=\: \sigma_2 \, y_c, \\
\lambda_u \:&=\: \lambda_2 \,=\, \sigma_2^2, \\
\lambda_d \:&=\: \lambda_1 \,=\, \sigma_1^2
\end{align}
\]

This already makes it plausible that the eigenvalues of our symmetric matrix Aq are just λ1 and λ2.

Mathematically, a lengthy calculation will indeed reveal that the eigenvalues of a symmetric matrix Aq with coefficients α, β/2 and γ have the following form:

\[
\lambda_{u/d} \:=\: {1 \over 2} \left[\, \left(\alpha \,+\, \gamma \right) \,\pm\, \left[ \beta^2 + \left(\gamma \,-\, \alpha \right)^2 \,\right]^{1/2} \, \right]
\]

This is, of course, exactly what we have found some minutes ago by directly solving the equations with the trigonometric terms.
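With Numpy's linalg package this comparison can be done in a few lines (example coefficients assumed; numpy.linalg.eigh returns the eigenvalues of a symmetric matrix in ascending order):

```python
import numpy as np

alpha, beta, gamma = 2.0, -1.2, 4.0              # example quadratic-form coefficients
A_q = np.array([[alpha,    beta / 2],
                [beta / 2, gamma   ]])

evals, O = np.linalg.eigh(A_q)                   # ascending order; columns of O = eigenvectors

# closed-form eigenvalues derived above
S = np.sqrt(beta**2 + (gamma - alpha)**2)
lam_minus = 0.5 * ((alpha + gamma) - S)
lam_plus  = 0.5 * ((alpha + gamma) + S)
print(np.allclose(evals, [lam_minus, lam_plus]))  # True

# O is orthogonal and A_q = O @ diag(evals) @ O.T
print(np.allclose(O @ O.T, np.eye(2)))            # True
print(np.allclose(O @ np.diag(evals) @ O.T, A_q)) # True
```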

We will prove the fact that these indeed are valid eigenvalues in a minute. Let us first look at respective eigenvectors ξ1/2. To get them we must solve the equations resulting from

\[
\left( \begin{pmatrix} \alpha & \beta / 2 \\ \beta / 2 & \gamma \end{pmatrix} \,-\, \begin{pmatrix} \lambda_{1/2} & 0 \\ 0 & \lambda_{1/2} \end{pmatrix} \right) \,\circ \, \pmb{\xi_{1/2}} \:=\: \pmb{0},
\]

with

\[
\pmb{\xi_1} \,=\, \begin{pmatrix} \xi_{1,x} \\ \xi_{1,y} \end{pmatrix}, \quad \pmb{\xi_2} \,=\, \begin{pmatrix} \xi_{2,x} \\ \xi_{2,y} \end{pmatrix}
\]

Again a lengthy calculation shows that the following vectors fulfill the conditions (up to a common factor in the components):

\[ \begin{align}
\lambda_1 \: &: \quad \pmb{\xi_1} \:=\: \left(\, {1 \over \beta} \left( (\alpha \,-\, \gamma) \,+\, \left[\, \beta^2 \,+\, \left(\gamma \,-\, \alpha \right)^2\,\right]^{1/2} \right), \: 1 \, \right)^T \\
\lambda_2 \: &: \quad \pmb{\xi_2} \:=\: \left(\, {1 \over \beta} \left( (\alpha \,-\, \gamma) \,-\, \left[\, \beta^2 \,+\, \left(\gamma \,-\, \alpha \right)^2\,\right]^{1/2} \right), \: 1 \, \right)^T
\end{align}
\]

for the eigenvalues

\[ \begin{align}
\lambda_1 \:&=\: {1 \over 2} \left(\, \left(\alpha \,+\, \gamma \right) \,-\, \left[ \beta^2 \,+\, \left(\gamma \,-\, \alpha \right)^2 \,\right]^{1/2} \, \right) \\
\lambda_2 \:&=\: {1 \over 2} \left(\, \left(\alpha \,+\, \gamma \right) \,+\, \left[ \beta^2 \,+\, \left(\gamma \,-\, \alpha \right)^2 \,\right]^{1/2} \, \right)
\end{align}
\]

The superscript T in the formulas for the vectors denotes transposition.

Note that the vector components given above are not normalized. This is important for performing numerical checks, as Numpy and other linear algebra programs typically give you normalized eigenvectors of length 1. But you can easily compensate for this by working with

\[ \begin{align}
\lambda_1 \: &: \quad \pmb{\xi_1^n} \:=\: {1 \over \|\pmb{\xi_1}\|}\, \pmb{\xi_1} \\
\lambda_2 \: &: \quad \pmb{\xi_2^n} \:=\: {1 \over \|\pmb{\xi_2}\|}\, \pmb{\xi_2}
\end{align}
\]
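These closed-form expressions are easy to cross-check numerically. The following NumPy sketch compares them with the output of np.linalg.eigh() for arbitrarily chosen, purely illustrative coefficients α, β, γ (the values are assumptions, not taken from the derivation):

```python
import numpy as np

# Arbitrary illustrative coefficients of the quadratic form (assumed values)
alpha, beta, gamma = 3.0, 1.2, 1.5

# Symmetric matrix A_q of the quadratic form
A_q = np.array([[alpha,      0.5 * beta],
                [0.5 * beta, gamma     ]])

# Analytic eigenvalues: 0.5 * ((alpha + gamma) +/- sqrt(beta^2 + (gamma - alpha)^2))
z = np.sqrt(beta**2 + (gamma - alpha)**2)
lam_plus  = 0.5 * ((alpha + gamma) + z)
lam_minus = 0.5 * ((alpha + gamma) - z)

# Analytic (un-normalized) eigenvectors; the "+" eigenvalue pairs with the "+" component
xi_plus  = np.array([((alpha - gamma) + z) / beta, 1.0])
xi_minus = np.array([((alpha - gamma) - z) / beta, 1.0])
xi_plus  /= np.linalg.norm(xi_plus)
xi_minus /= np.linalg.norm(xi_minus)

# Numerical eigendecomposition
evals, evecs = np.linalg.eigh(A_q)

print(np.allclose(evals, [lam_minus, lam_plus]))        # True
print(np.isclose(abs(xi_minus @ evecs[:, 0]), 1.0))     # True: same direction up to sign
print(np.isclose(abs(xi_plus  @ evecs[:, 1]), 1.0))     # True
print(np.isclose(xi_plus @ xi_minus, 0.0))              # True: orthogonal
```

Note that eigh() returns eigenvalues in ascending order and eigenvectors only up to sign, which is why the comparison uses the absolute value of the scalar products.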

Proof for the eigenvalues and eigenvector components

As an example we prove that the eigenvector conditions are fulfilled for the components of the second eigenvector ξ2 and λ2 = λu.

\[ \begin{align}
\left(\alpha \,-\, \lambda_2 \right) * \xi_{2,x} \,+\, {1 \over 2} \beta * \xi_{2,y} \,&=\, 0 \\
{1 \over 2} \beta * \xi_{2,x} \,+\, \left( \gamma \,-\, \lambda_2 \right) * \xi_{2,y} \,&=\, 0
\end{align}
\]

(The steps for the first eigenvector are completely analogous).

We start with the condition for the first component

\[ \begin{align}
&\left( \alpha \,-\,
{1\over 2}\left[\,\left(\alpha \, + \, \gamma\right) \,+\, \left[ \beta^2 \,+\, \left( \alpha \,-\, \gamma \right)^2 \right]^{1/2} \right] \right) * \\
& {1 \over \beta}\,
\left[\, \left(\alpha \,-\, \gamma\right) \,+\, \left[ \beta^2 \,+\, \left(\alpha \,-\, \gamma \right)^2 \right]^{1/2} \,\right] \,+\, {\beta \over 2 }
\,=\, 0
\end{align}
\]
\[ \begin{align}
& {1 \over 2 } \left[ \left(\alpha \,-\,\gamma\right) \,-\, \left[ \beta^2 \,+\, \left( \alpha \,-\, \gamma \right)^2 \right]^{1/2} \right] * \\
& {1 \over \beta}\,
\left[\, \left(\alpha \,-\, \gamma\right) \,+\, \left[ \beta^2 \,+\, \left(\alpha \,-\, \gamma \right)^2 \right]^{1/2} \,\right] \,+\, {\beta \over 2 }
\,=\, 0
\end{align}
\]
\[
{1 \over 2 \beta} \left[ (\alpha \,-\,\gamma)^2 \,-\, \beta^2 \,-\, (\alpha \,-\,\gamma)^2 \right] \,+\, {\beta \over 2 } \,=\, 0
\]

The last relation is obviously true. You can perform a similar calculation for the other eigenvector component:

\[ \begin{align}
{1 \over 2} \, \beta & {1 \over \beta}\,
\left[\, \left(\alpha \,-\, \gamma\right) \,+\, \left[ \beta^2 \,+\, \left(\alpha \,-\, \gamma \right)^2 \right]^{1/2} \,\right] \,+\, \\
&
\left( \gamma \, -\,
{1\over 2}\left[\,\left(\alpha \, + \, \gamma\right) \,+\, \left[ \beta^2 \,+\, \left( \alpha \,-\, \gamma \right)^2 \right]^{1/2} \right] \right) * 1 \,=\, 0
\end{align}
\]

Thus:

\[ \begin{align}
&{1 \over 2} \, \left(\alpha \,-\, \gamma\right) \,+\, {1 \over 2} \left[ \beta^2 \,+\, \left(\alpha \,-\, \gamma \right)^2 \right]^{1/2} \,-\, \\
&{1\over 2}\left(\alpha \,-\, \gamma\right) \,-\, {1\over 2}\left[ \beta^2 \,+\, \left( \alpha \,-\, \gamma \right)^2 \right]^{1/2} \,=\, 0
\end{align}
\]

True, again. In a very similar exercise one can show that the scalar product of the eigenvectors is equal to zero:

\[ \begin{align}
& {1 \over \beta}\,
\left[\, \left(\alpha \,-\, \gamma\right) \,+\, \left[ \beta^2 \,+\, \left(\alpha \,-\, \gamma \right)^2 \right]^{1/2} \,\right] \,*\, \\
& {1 \over \beta}\,
\left[\, \left(\alpha \,-\, \gamma\right) \,-\, \left[ \beta^2 \,+\, \left(\alpha \,-\, \gamma \right)^2 \right]^{1/2} \,\right] \,+\, 1\,*\,1 \\
&=\, {1 \over \beta^2}\, *\, (-\beta^2) \,+\, 1 \,=\, 0
\end{align}
\]

I.e.:

\[
\pmb{\xi_1} \bullet \pmb{\xi_2} \,=\, \left( \xi_{1,x}, \, \xi_{1,y} \right) \circ \begin{pmatrix} \xi_{2,x} \\ \xi_{2,y} \end{pmatrix} \,= \, 0,
\]

which means that the eigenvectors are perpendicular to each other. Exactly what we expect for the orientations of the principal axes of an ellipse relative to each other.
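The eigen conditions and the orthogonality can also be verified symbolically; a small sketch with SymPy, treating α, β, γ as real symbols:

```python
import sympy as sp

a, b, g = sp.symbols('alpha beta gamma', real=True)
z = sp.sqrt(b**2 + (g - a)**2)

lam = (a + g + z) / 2                       # eigenvalue with the "+" sign
xi  = sp.Matrix([((a - g) + z) / b, 1])     # matching (un-normalized) eigenvector
A_q = sp.Matrix([[a, b/2], [b/2, g]])

# Both eigenvector conditions reduce to zero symbolically
res = (A_q * xi - lam * xi).applyfunc(sp.simplify)
print(res)                                  # should be the zero vector

# Orthogonality of the two eigenvectors
xi_m = sp.Matrix([((a - g) - z) / b, 1])
print(sp.simplify(xi.dot(xi_m)))            # should be 0
```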

Rotation angle from coefficients of Aq

We still need a formula for the rotation angle(s). From linear algebra results related to an eigendecomposition we know that the orthogonal (rotation) matrices consist of columns of the normalized eigenvectors, with the components given in terms of the un-rotated ECS in which we basically work. These vectors point along the principal axes of our ellipse. Thus the components of the eigenvectors define the desired rotation angles of the ellipse’s principal axes against the x-axis of our ECS.

Let us prove this. By assuming

\[ \begin{align}
\operatorname{cos}(\phi_1) \,&=\, \xi_{1,x}^n \\
\operatorname{sin}(\phi_1) \,&=\, \xi_{1,y}^n
\end{align}
\]

and using

\[
\operatorname{sin}(2\phi_1) \,=\, 2\, \operatorname{sin}(\phi_1) \, \operatorname{cos}(\phi_1)
\]

we get:

\[ \begin{align}
\operatorname{sin}(2 \phi_1) \,=\,
2 * { \xi_{1,x} * \xi_{1,y} \over \xi_{1,x}^2 \, + \, \xi_{1,y}^2 }
\end{align}
\]

and thus

\[ \begin{align}
\operatorname{sin}(2 \phi_1) \,&=\,
2 \, { {1 \over \large{\beta}} \left( (\alpha \,-\, \gamma) \,-\, \left[\, \beta^2 \,+\, \left(\gamma \,-\, \alpha \right)^2\,\right]^{1/2} \right) \,*\, 1
\over
\left[\, \left( {1 \over \large{\beta}} \left( (\alpha \,-\, \gamma) \,-\, \left[\, \beta^2 \,+\, \left(\gamma \,-\, \alpha \right)^2\,\right]^{1/2} \right) \right)^2
\,+\, 1^2 \right] } \\
&=\, 2\, { {1 \over \large{\beta}} \left( t \,-\, z \right)
\over
{1 \over \large{\beta}^2 } \left[\, \beta^2 \,+\, \left(\, t \,-\, z \,\right)^2 \right] }
\end{align}
\]

with

\[ \begin{align}
t \,&=\, (\alpha \,-\, \gamma) \\
z \,&=\, \left[\, \beta^2 \,+\, \left(\gamma \,-\, \alpha \right)^2\,\right]^{1/2}
\end{align}
\]

This looks very different from the simple expression we got above, and a direct simplification is cumbersome. The trick is to multiply numerator and denominator by the convenient factor

\[
\left( t \,+\, z \right),
\]

and exploit

\[ \begin{align}
\left( t \,-\, z \right) \, \left( t \,+\, z \right) \,&=\, t^2 \,-\, z^2 \\
\left( t \,-\, z \right) \, \left( t \,+\, z \right) \,&=\, -\,\beta^2
\end{align}
\]

to get

\[ \begin{align}
&2 * \beta { (t\,-\, z) * ( t\,+\, z) \over \left[ \beta^2 \, + \,( t \,-\, z )^2 \right] * (t \,+\, z) } \\
=\, &2 * \beta { -\,\beta^2 \over \beta^2 (t\,+\,z) \,-\, \beta^2 (t\,-\,z) } \\
=\, & -\, {\beta \over \left[\, \beta^2 \,+\, (\alpha \,-\, \gamma)^2 \,\right]^{1/2} }
\end{align}
\]

This means that our 2nd solution approach provides the result

\[
\operatorname{sin}(2 \phi_1) \, =\, -\, { \beta \over \left[\, \beta^2 \,+\, (\alpha \,-\, \gamma)^2 \,\right]^{1/2} }\,,
\]

which is of course identical to the result we got with our first solution approach. It is clear that the second principal axis has an inclination of φ1 ± π/2:

\[
\phi_2\, =\, \phi_1 \,\pm\, \pi/2.
\]

In general the angles have a natural ambiguity of π.
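As a numerical plausibility check of this angle formula (again with arbitrarily chosen, purely illustrative coefficients), we can compute φ1 directly from the components of the un-normalized eigenvector ξ1 and compare sin(2φ1) with the closed-form expression:

```python
import numpy as np

# Assumed illustrative coefficients of the quadratic form
alpha, beta, gamma = 3.0, 1.2, 1.5
t = alpha - gamma
z = np.sqrt(beta**2 + t**2)

# Un-normalized eigenvector xi_1 in the component form derived in the text
xi_1 = np.array([(t - z) / beta, 1.0])

# Inclination angle of this principal axis against the x-axis of the ECS
phi_1 = np.arctan2(xi_1[1], xi_1[0])

# Closed-form expression for sin(2*phi_1)
sin_2phi = -beta / np.sqrt(beta**2 + (alpha - gamma)**2)

print(np.isclose(np.sin(2.0 * phi_1), sin_2phi))  # True
```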

Conclusion

In this post I have shown how one can derive essential properties of centered, but rotated ellipses from matrix-based representations. Such calculations become relevant when e.g. experimental or numerical data only deliver the coefficients of a quadratic form for the ellipse.

We have first established the relation of the coefficients of a matrix that defines an ellipse by a combined scaling and rotation operation with the coefficients of a matrix which defines an ellipse as a quadratic form of the components of position vectors. In addition we have shown how the coefficients of both matrices are related to quantities like the lengths of the principal axes of the ellipse and the inclination of these axes against the x-axis of the Euclidean coordinate system in which the ellipse is described via position vectors. So, if one of the matrices is given we can numerically calculate the ellipse’s main properties.

In the next post of this mini-series

Properties of ellipses by matrix coefficients – II – coordinates of points with extremal y-values

we have a look at the x- and y-coordinates of points on an ellipse with extremal y-values. All in terms of the matrix coefficients we are now familiar with.

 

Fun with shear operations and SVD – I – shear matrices and examples created with Blender

This post requires Javascript to display formulas!

In Machine Learning, samples of data vectors often undergo linear transformations. Typical examples are the adaption of vector component values due to the choice of a shifted and/or rotated coordinate system. Another example is the scaling of vectors. As we have seen in an investigation of Autoencoders, they produce multivariate normal distributions which actually are the result of linear transformations of elementary Gaussian distributions. And let us not forget that the transport between layers of neural networks includes linear transformations.

In most cases we can interpret a linear transformation in the ℝn in a geometrical way. Linear transformations are members of a sub-class of affine transformations. Affine transformations in turn can be regarded as a combination of

  • a translation,
  • a scaling in either coordinate direction,
  • a reflection = mirroring operation,
  • a rotation
  • and/or a shear operation.

Translations are trivial. We also understand the effects of rotation and scaling transformations intuitively. A mirror operation can be understood as a kind of inverted scaling with factors -1.

Scaling operations with different scaling factors along different coordinate directions destroy certain rotational symmetries of a 3-dimensional or n-dimensional body. But scaling transformations do not always eliminate symmetries with respect to mirroring planes: If we first choose an Euclidean coordinate system whose axes are aligned with symmetry axes of a body with plane symmetries, we are able to keep up at least some of the planar symmetries of the body during the scaling operation.

Something that is a bit more difficult to grasp is a shear operation. The reason is that a shear operation destroys one or multiple plane symmetries of an impacted figure – whatever coordinate system we choose. And a shear operation cannot be reduced to a pure scaling operation in some cleverly chosen coordinate system. Neither can it be reduced to a rotation.

This alone is a good reason to have a closer look at this special kind of affine transformation. Another reason is to study what impact a shear operation has on the construction or transformation of a multivariate normal distribution – a topic which is at the center of another post series in this blog.

In this post I first want to discuss what a matrix that controls a shear operation looks like. We discuss some of its mathematical properties. Afterward I want to demonstrate what effects shearing has on two simple objects – a cube and a sphere. These examples will be done with Blender and the available mouse driven shear operations there.

Blender performs the required math in the background – and the examples below only provide visual impressions.

To get a better understanding of the nature of shear operations we, in the second post of this series, switch to Python and apply shear operations via explicit matrix operations onto vectors. We start with 2D-figures. Afterward we turn to 3D-cuboids and find that we produce spats (parallelepipeds). Another important example (not only with respect to multivariate normal distributions) is then given by the shearing of 3D-spheres: We will find that such an operation always produces ellipsoids. We will check this mathematically.

In a fourth post I want to discuss an important result from linear algebra, namely the so called SVD-decomposition of a linear transformation and its meaning in our context of shear operations. SVD tells us that we can always replace a shear operation by a sequence of simpler operations. We will test this statement for our 2D- and 3D-objects. We then also find that the creation of an ellipsoid at any orientation can be done by applying just one shear operation to a unit sphere. Alternatively, we can perform a scaling of a sphere and rotate it once. This will help us to better understand the creation of multivariate normal distributions in a parallel post series.
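To anticipate the SVD statement with a tiny example: NumPy's SVD factorizes a 2D shear matrix into a rotation, a scaling and a second rotation. The shear parameter below is an arbitrary illustrative value:

```python
import numpy as np

# 2D shear matrix with shear parameter lambda = 0.8 (illustrative value)
lam = 0.8
M_sh = np.array([[1.0, lam],
                 [0.0, 1.0]])

# SVD: M_sh = U @ diag(S) @ Vt, i.e. rotation * scaling * rotation
U, S, Vt = np.linalg.svd(M_sh)

print(np.allclose(U @ np.diag(S) @ Vt, M_sh))  # True: decomposition reproduces the shear
print(np.isclose(S[0] * S[1], 1.0))            # True: singular values multiply to det = 1
```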

Shear operations break symmetries of the affected figures

A shear operation Msh introduces an asymmetry into originally symmetric figures. It does so by coupling vector components in a specific way. For reasons of simplicity we reduce our argumentation to the ℝn. We describe objects in this space with the help of position vectors to points inside and on the surface of these objects. We specify the components of position vectors with respect to an Euclidean coordinate system [ECS].

To make things easier we assume that the origin of our ECS coincides with the symmetry center of the body we apply the shear operation to. If the body has plane and point symmetries then the origin is placed at the cutting lines or the cutting point of such planes; i.e., the ECS origin coincides with the symmetry center of the body and coordinate planes will coincide with some of the symmetry planes. For the cases we look at – spheres, cubes, cuboids – the ECS origin thus gets identical with the body’s volume center.

A shear transformation applied to a vector couples at least two of the vector’s components such that a growing value for the coordinate value in the direction of a chosen coordinate axis (e.g. vz) impacts the coordinate value in another coordinate direction (e.g. vx) in a linear way.

Here is an example for the x- and z-components of a 3-dimensional vector (vx, vy, vz)T:

\[ \begin{align}
v_x^{sh} \, &= \, v_x \, + \, \alpha_{xz} * v_z \\ v_y^{sh} \,&=\, v_y \\ v_z^{sh}\,&=\, v_z \end{align}
\]
\[ \pmb{\operatorname{M}}_{sh}: \quad \pmb{v} \,=\, \begin{pmatrix} v_x \\ v_y \\ v_z \end{pmatrix} \:\Rightarrow \:
\pmb{v}_{sh} \,=\, \begin{pmatrix} v_x^{sh} \, = \, v_x \, + \, \alpha_{xz} * v_z \\ v_y^{sh} \,=\, v_y \\ v_z^{sh}\,=\, v_z \end{pmatrix},
\]

Here αxz is some constant. This operation breaks original mirror symmetries of a body with respect to the x-y- and the y-z-plane.
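The component equations above are equivalent to multiplying the vector by a shear matrix. A minimal NumPy sketch, with an assumed illustrative value αxz = 0.5:

```python
import numpy as np

# Shear matrix coupling v_z into v_x with alpha_xz = 0.5 (illustrative value)
alpha_xz = 0.5
M_sh = np.array([[1.0, 0.0, alpha_xz],
                 [0.0, 1.0, 0.0],
                 [0.0, 0.0, 1.0]])

v = np.array([1.0, 2.0, 3.0])
v_sh = M_sh @ v

print(v_sh)  # v_sh = [2.5, 2.0, 3.0]: x picked up alpha_xz * v_z, y and z unchanged
```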

A shear operation preserves the orientation of tangent planes at some surface points

But note also, that a shear operation has another unique property that keeps up a geometrical property of the affected body:

A shear operation preserves the orientation of tangent planes to the body’s surface at some particular points. The most important of these points define extrema on the surface with respect to the component whose value is linearly added in the shearing transformation. The tangent planes there do not only keep up their orientation, but also their position.

In the case defined above we would look at extrema with respect to the z-component:

\[ z \,=\, f(x,\,y), \quad { \partial f \over \partial x } \,=\, { \partial f \over \partial y } \, = \, 0
\]

Actually, we would need a bit of multivariate calculus to derive this statement for general bodies. But the example of a cube helps to grasp the central point: The transformation defined above would keep up the orientation and position of the top and the bottom square surfaces of the cube. See below for respective graphics.

In the case of a general n-dimensional body whose surface is closed, continuous and differentiable at all surface points we will find at least two such points whose tangent planes keep their orientation and position during a shearing operation.

Coupling of more than two vector components – and a restriction

Of course we could have introduced more than just one linear coupling of a selected vector component to other components. In three or n dimensions we can couple 2 components to a third one. In n dimensions we can add up shearing contributions of (n-1)-coordinate values to each of the components. But we introduce a restriction:

To avoid confusion and double shearing effects we always set αzx = 0. And analogously for the other components:

\[ \begin{align} \alpha_{ij} \, &= \, const. \, \neq \, 0 \:\: \mbox{allowed}, \quad \forall\, i < j \\ \alpha_{ij} \, &= \, 0, \quad \forall\, i > j \end{align} \]

Once again: We avoid a kind of double coupling between the same pair of components. E.g., vx may become impacted by vz, but vz then shall not get impacted by vx at the same time.

Obviously, this is an asymmetric policy in the formal handling of vector component coupling. But it has geometrical consequences (see below). Of course, we follow this asymmetric policy also for other pairs of vector components and respective coupling parameters. This gives us a number of n*(n – 1)/2 potential constant shearing parameters.

A first example produced with the help of Blender

To get a visual impression of what shearing does to a body with some symmetries, let us look at a simple example: a cube.

Regarding the first image we define that the x-coordinate direction goes from the left to the right, the y-direction from the front to the back and the z-direction from the bottom vertically upwards. The last image from the top shows that the x- and y-side-lengths of our cube are equal. Among other symmetries, a cube and also cuboids obviously have a mirror symmetry with respect to planes parallel to the x/y-plane, the y/z-plane and the x/z-plane through their centers of volume.

Now let us apply two shear operations which couple the x/z-coordinates and the y/z-coordinates. Then we get something like the following:

The first two images show the sheared object from the front, but with the camera moved to different heights. We see that the cube was sheared in two directions – in the x- and in the y-direction. The z-values of the upper and lower surfaces remain constant during the operation. As a result the object gets inclined in two directions: We get a spat (or parallelepiped) as the result of applying a shear operation to a cube or cuboid.

The last image shows the object from exactly the same top position as before the shearing. We see that the original symmetry is destroyed with respect to all 3 orthogonal planes (parallel to coordinate planes) through the center of volume of our object.

Also all of the following objects are the results of simple shearing operations. While this is clear for the parallelepiped (derived from a cuboid) and the inclined cylinder (derived from a straight cylinder), it is not directly obvious for the ellipsoid, which in the depicted case actually is the result of a shearing applied to a sphere (with x being impacted by y and z). In all cases rotations and translations have been applied in addition to arrange the scene.

We get the impression that some simple geometric shapes like a spat or an ellipsoid could be the direct result of one or more shearing operations applied to originally simple objects with multiple symmetries – as cubes and spheres.

Shear transformations are mathematically done by unipotent upper triangular matrices

Let us return to a more mathematical description of a shear operation. As any other linear operation applied to vectors of the ℝn we can represent a shear operation by a matrix. We just have to express the linear coupling between two vector components by a suitable arrangement of our shear-constants as matrix-elements. By experimenting a bit we find that a square matrix which only has non-zero elements on its diagonal and in its upper right triangular part does the job. We speak of an “upper triangular matrix“:

\[ \pmb{\operatorname{M}} \, = \,
\begin{pmatrix}
m_{11} & m_{12} &\cdots & m_{1n} \\
0 & m_{22} &\cdots & m_{2n} \\
\vdots &\vdots &\ddots &\vdots \\
0 & 0 &\cdots & m_{nn} \end{pmatrix}
\]

That such a matrix mixes vector components in the intended way follows directly from the definition of standard matrix multiplications. (Note: In Python/Numpy we talk about “matmul”-operations.)

Note that not all elements in the upper triangular part (aside the diagonal) are required to be different from zero. But at least one should be ≠ 0.

Not a full and neither a symmetric matrix

Now we come back to a point introduced already above: In principle we could allow for a coupling of two selected vector components in both logically possible ways. E.g., when we have an impact of the z- onto the x-component, why not also allow for an analogous impact of x- onto z at the same time? Maybe with a different linear factor?

Well, we then would get a full linear operation with possibly all n**2 elements having different values, or we would get a symmetric matrix. A full generalization is not our objective. We want a special sub-case of a linear transformation. But why not allow for a symmetric matrix?

The reason for this will become clear when we discuss the decomposition and factorization of shear matrices in future posts. For the time being the following hint may help: The complexity of a linear operation can be measured by the minimum number of elementary operations required in a well chosen ECS. Can we find a coordinate system where the transformation reduces to just one elementary operation (translation, scaling, reflection, rotation)?

For a shearing transformation this would have to be a scaling operation as this kind of operation can at least potentially break rotational and planar symmetries.

Now we remember a result of linear algebra: Any symmetric matrix corresponds to an operation for which we always can find a rotated ECS (at the center of an originally highly symmetric figure) in which the transformation of the figure reduces to a pure scaling operation.

For a shear operation we even want to exclude this kind of reduction: We shall not be able to find an ECS in which the operation just becomes a scaling. (A shearing shall in any coordinate system at least require a scaling and a rotation.) How can we achieve this? Well, we must not even allow for a symmetric matrix. For bodies which show original symmetries with respect to planes through their center this means that a shear operation destroys at least some of their planar symmetries whatever coordinate system we choose. Despite the fact that a shear operation can be inverted (see below)!

So, our elimination of some rotational and at least one planar symmetry in an originally highly symmetric body is reflected by the triangular form of the matrix. We avoid a 2-fold coupling between two selected coordinates at the same time. So when we have a matrix element mi,j, with j > i, we always set the corresponding element to zero: mj,i = 0.

This restricts the distortions we are going to get. Some surface elements are going to remain congruent and some point symmetries are kept up.

Why did we not use a lower triangular matrix?

Well, this is just a convention. Actually, the transposed matrix of an upper triangular one would also do the job – so we could also take matrices with non-zero elements only in the lower left triangle. But, for convenience, we will stick to matrices with non-zero elements in the upper right triangular part.

What about the diagonal elements of a shear matrix?

First of all note that none of the elements on the diagonal of the matrix Msh should be zero – because otherwise we would eliminate a vector component completely. But for a shear operation we do not want any of the vector components to disappear! In addition: As diagonal elements different from 1 would reflect a simple scaling, we request that all of the diagonal matrix elements be equal to 1 for a pure shear operation:

\[ \pmb{\operatorname{M}}_{sh} \, = \,
\begin{pmatrix}
1 & m_{12} &\cdots & m_{1n}(\ne0) \\
0 & 1 &\cdots & m_{2n} \\
\vdots &\vdots &\ddots &\vdots \\
0 & 0 &\cdots & 1 \end{pmatrix}
\]

At least some of the mi,j (with j > i) must be ≠ 0. We then speak of a unipotent or unitriangular matrix.

An example for 4 dimensions would be:

\[ \pmb{\operatorname{M}}_{sh}^{4D} \: = \: \begin{pmatrix}
1.0 & 0 & 1.5 & 0.8 \\
0 & 1.0 & 0.2 & 0 \\
0 & 0 & 1.0 & 2.0 \\
0 & 0 & 0 & 1.0
\end{pmatrix}
\]

As the diagonal is fixed by elements mi,i = 1, we have reduced the number of free and independent (off-diagonal) parameters Nindsh of a pure shear matrix Msh from n**2 to:

\[ N^{ind}_{sh}(\pmb{\operatorname{M}}_{sh}) :\: = \: n\,*\, (n\,-\,1)/2
\]

So, in 2 dimensions instead of 4 parameters we set the diagonal elements to 1 and choose just 1 free off-diagonal parameter. In 3 dimensions we only have to set 3 instead of 6 off-diagonal parameters.

Basic properties of a shear matrix

A quick calculation shows: The determinant of a shear matrix is directly given by the product of its diagonal elements, and thus we have:

\[ \operatorname{det} \left(\pmb{\operatorname{M}}_{sh}\right) \: = \: 1
\]

Therefore, a shear matrix is always invertible. This means its effects can be reverted – also in geometrical terms. The matrix also has full rank r = n.
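These properties can be checked directly for the 4-dimensional example matrix given above:

```python
import numpy as np

# The 4D example shear matrix from above
M_sh = np.array([[1.0, 0.0, 1.5, 0.8],
                 [0.0, 1.0, 0.2, 0.0],
                 [0.0, 0.0, 1.0, 2.0],
                 [0.0, 0.0, 0.0, 1.0]])

print(np.isclose(np.linalg.det(M_sh), 1.0))   # True: det = 1
print(np.linalg.matrix_rank(M_sh))            # 4: full rank

# The inverse exists and undoes the shear
M_inv = np.linalg.inv(M_sh)
print(np.allclose(M_inv @ M_sh, np.eye(4)))   # True
```

Note that the inverse of a unipotent upper triangular matrix is again unipotent upper triangular – the shear can be undone by a shear.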

The eigenspace and the eigenvalue(s) of a shear matrix Msh are also interesting. We can use a property of triangular matrices to get the eigenvalue(s):

The diagonal elements of a triangular matrix are its eigenvalues.

This in turn can be understood from analyzing the characteristic polynomial det(M – λI) = 0.

Thus: The one and only eigenvalue ev of Msh is 1.

\[ \pmb{ev}_{\operatorname{M}_{sh}} \, = \, 1
\]

The eigenvalue’s algebraic multiplicity in the characteristic polynomial is n. This does not yet tell us much about the geometric multiplicity, i.e. the dimension of the eigenspace.

The eigenspace ES belonging to this n-fold eigenvalue is the kernel of the matrix (Msh – λI). Therefore, we can derive the eigenspace’s dimension by the dimension formula for this particular matrix:

\operatorname{dim}_{ES} \, = \, n \,-\, \operatorname{rank}\left(\pmb{\operatorname{M}}_{sh} \,-\, \lambda*\pmb{\operatorname{I}} \right) \,=\, n \,-\, (n\,-\,1) \,=\, 1
\]

as the rank of the combined matrix on the right side is (n – 1).

You can also see this by looking at the 4-dim example matrix given above:
When you write down the equations you will find that only 1 variable can be chosen freely. So the kernel of the matrix (Msh – λI) is only 1-dimensional. Geometrically, this is clear, as any non-zero coordinate value beyond the first one mixes in contributions from other components.

It is easy to prove that the eigenspace ES can be based upon the eigenvector e1.

\[ \pmb{e}_1 \, = \, \left( \, 1,\, 0,\, \cdots,\,0\right)^T, \: \\ \mbox{with}\, (n\,-\, 1) \, \mbox{ zero components}
\]

A shear matrix cannot be diagonalized

Can Msh be diagonalized? Answer: No! From linear algebra we know that this would require n linearly independent eigenvectors. But we have just one! So:

\[ \pmb{\operatorname{M}}_{sh} \: \neq \: \pmb{\operatorname{Q}} \circ \pmb{\operatorname{D}}_{sh} \circ \pmb{\operatorname{Q}^T}
\]

Think a bit about the consequences:
An operation like the one on the right side of the inequality above would correspond to the representation of our shear matrix in a different (rotated) coordinate system. A diagonalization corresponds to an eigendecomposition; in geometrical terms a diagonal matrix represents a pure scaling operation. But, obviously, such a description of our shear operation in some (cleverly) selected ECS is not possible:

We cannot find a (translated and rotated) ECS in which a shear operation reduces to a scaling transformation. In any chosen (rotated) ECS there is always an additional second rotation involved which we cannot simply invert or get rid of!

We shall see this in more detail when we discuss decompositions. We simply cannot get rid of the fundamental asymmetry we have built into Msh by not allowing non-zero elements in the lower triangular part of the matrix! No change to whatever coordinate system we select will help.

So, a shear operation really is a special symmetry breaking operation which cannot be reduced to just one elementary operation. In any ECS it requires at least a rotation and a scaling!

Summary:

A pure shear operation is introduced by some off-diagonal and non-zero element in the upper right triangular part of a unipotent matrix. All off-diagonal elements in the lower left triangular part are set to zero.

More illustrations from Blender

First some images for a cuboid that was more extremely deformed by shearing in x- and y-direction than the first one:

Now we come to an interesting example: We apply shearing in x- and y-direction to a sphere. We get something whose shape looks much more regular than that of a sheared cuboid. We pick a unit sphere and apply two shearing operations in x- and y-direction:

The first image shows the original sphere from some point of view. The second image was taken from the same perspective, but now after the shearing. The resulting figure looks very regular – and it seems to have plane symmetries.

Another example with a bit of different lighting gives us:

Actually, we get an ellipsoid. We clearly can identify the two points which have maximum absolute z-values and whose tangent planes did not change their orientation.

Ellipsoids (in 3D) are figures with 3 main axes and planar symmetries. The original full rotational symmetry (first image of the series above) is obviously broken. However, we seem to have saved at least some planar symmetries. Hmmm … Just when we thought we had understood that a shearing operation breaks original planar symmetries, our new example case has kept some.

Why is this? What is so special about the shearing of spheres? Keep this question in mind.

Conclusion

Shearing transformations are a special class of linear and thus affine transformations. Shearing obviously can break some plane symmetries of originally highly symmetric bodies – such as cubes. In contrast to other elementary affine operations no coordinate system can be found where the number of operations to reproduce the effects of a shearing can be reduced to one. In particular it is not possible to find an Euclidean coordinate system where it is just reduced to a (symmetry breaking) scaling with different factors in different coordinate directions.

Blender offers a simple graphical method to apply shearing to simple geometrical bodies. This allowed us to get an impression of the effects on cubes and spheres. Something that was a bit astonishing was the shearing effect on a unit sphere: It just created ellipsoids – again highly symmetric bodies WITH planar symmetries.

In the next post of this series
Fun with shear operations and SVD – II – Shearing of rectangles and cubes with Python and Matplotlib
we will look closer at shear operations with the help of Python and Matplotlib.