Fun with shear operations and SVD – II – Shearing of rectangles and cubes with Python and Matplotlib

This post requires Javascript to display formulas!

This post series is about shear transformations and related matrices.

In the end we want to better understand the effect of shear operations on “multivariate standard normal distributions” [MSND]. A characteristic property of an MSND is that its contour hyper-surfaces of constant density are nested spheres. We want to understand the impact of a shear operation on such a random vector distribution.

In the first post of this series
Fun with shear operations and SVD – I – shear matrices and examples created with Blender
we have already had a preliminary look at shear transformations. We have seen that shear transformations are a special class of affine transformations. They are represented by square unipotent, upper (or lower) triangular matrices. The eigenspace is 1-dimensional and the only eigenvalue is 1.

We have also understood that a shear operation cannot be reduced to a simple scaling operation with different factors by choosing some special rotated Euclidean coordinate system [ECS] whose origin coincides with the volume and symmetry center of the original figure.

Due to its matrix properties, which are very different from those of orthogonal matrices, a shear operation is neither a rotation nor a scaling. Instead we suspect that it might only be decomposed into a combination of a rotation plus a scaling – whatever coordinate system we choose. We still have to prove this in a forthcoming post.

In this post we want to explicitly apply shear matrices to simple 2D- and 3D-figures, namely squares and cubes. We use Python to control the operations and visualize the results with the help of Matplotlib. We will check that the position and orientation of all extremal points in y-direction (2D-case) and z-direction (3D-case) are preserved during shearing. We will depict this by applying the transformations to position-vectors which point to elements of layers at distinct vertical levels. In 3D I also test the effects of shearing on cubes by just using the 8 corner points of the cube to define 6 faces, which then get transformed.

Afterward we briefly look at the chaining of shear operations and the resulting product of their respective matrices. As some people may think that one could use combinations of shears induced by a mix of upper triangular and lower triangular matrices, we investigate the effects of such combinations. We will see that such combinations violate conditions we have already defined for pure shear operations. Even a symmetric matrix, which has the same shear factors in its upper and lower triangular parts, has a different effect than a real shearing: It distorts a body in the sense of a pure scaling with different factors in potentially all orthogonal directions.

Hint: I omit Python code in this post. If I find the time I will deliver some code snippets at the end of this post series. However, it is really easy to set up your own programs to reproduce the results of my experiments and respective plots with the help of Numpy and Matplotlib.

Before showing results of numerical experiments I first list some general affine properties and other specific properties of shear transformations.

Affine properties of shear transformations

Important properties of all affine transformations are:

  • Collinearity: points on a line are transformed into points on a line.
  • Parallelism: Parallel lines remain parallel lines. Lines remain lines. Planes are transformed into planes.
  • Preservation of sub-dimensionality: Affine transformations preserve the dimensionality of objects confined to a sub-space of the “affine” space they operate on.
  • Convexity: Convex figures remain convex – i.e. the sign of a curvature radius does not change.
  • Proportionality: Distance ratios on parallel line segments are preserved.

Extremal points on convex surfaces may remain extremal with respect to distances to the origin, but not with respect to a specific coordinate axis (rotation!). Note that the third point holds for convex (hyper-) surfaces – as e.g. that of a sphere or ellipsoid.

Properties of shear transformations and their matrices Msh

I list some properties which I have already discussed in my previous post:

  • Fix point: When we omit a translation it is natural to choose a Euclidean coordinate system [ECS] such that its origin coincides with the symmetry center of the body to be sheared. This point then remains a fixed point of the transformation.
  • Msh is a unipotent matrix with non-zero elements only on its diagonal and in its upper (or lower) triangular part. The matrix elements on the diagonal are just 1.
  • det(Msh) = 1. A shear transformation is invertible. Its rank is rank(Msh) = n.
  • The only eigenvalue ε is ε = 1. Its algebraic multiplicity in the ℝn is n.
  • The eigenspace ESsh has a dimension dim(ESsh) = 1.
  • A single pure shear mediated by a unipotent upper (or lower) triangular matrix cannot be reduced to a pure scaling operation in some cleverly chosen, rotated coordinate system.

For reasons of convenience we again choose a Euclidean coordinate system [ECS] below whose origin coincides with the center of volume or the symmetry center of the body which we apply a shear transformation to. This not only guarantees a central fixed point. The triangular structure of the matrix and the 1s on the diagonal

\[ \begin{align} \pmb{\operatorname{M}}_{sh} \, &= \,
\begin{pmatrix}
1 & m_{12} &\cdots & m_{1n} \\
0 & 1 &\cdots & m_{2n} \\
\vdots &\vdots &\ddots &\vdots \\
0 & 0 &\cdots & 1 \end{pmatrix}, \\
\pmb{v}^{sh} \: &= \: \pmb{\operatorname{M}}_{sh} \circ \pmb{v} \phantom{\Huge{(}}
\end{align}
\]

make it clear that the last component of any affected vector remains constant during a shear transformation:

\[
\begin{align}
\pmb{v} \: &= \: (\, v_1, \, v_2, \,\cdots \, v_n)^T, \\
\pmb{v}^{sh} \: &= \: (\, v_1^{sh}, \, v_2^{sh}, \,\cdots \, v_n^{sh})^T \phantom{\huge{(}} \\
v^{sh}_n \: &= \: 1.0 \, *\, v_n \phantom{\huge{(}}
\end{align}
\]

This is independent of the linear coupling between other components. Therefore, an important feature of a shear operation is:

Extremal points on the surface of a sheared body in the n-th coordinate direction remain extremal and are moved within a hyper-plane orthogonal to the sub-space ℝ1 of the ℝn.

And, more generally:

Points of a sheared body lying in an (n-1)-dimensional sub-space of the ℝn defined by vn = const are only moved within this sub-space during shearing.

In 3D with coordinates (x, y, z) such a sub-space is a hyper-plane defined by z = const. E.g., for a cube the points of the top face in z-direction are elements which just move within this plane during a shearing operation. The same holds for any layer of points defined by other values z = const. Actually, the extremal points in z-direction even move along well-defined straight lines within the plane. In the general case these lines run diagonally through the (n-1)-dimensional sub-space, i.e. they are not aligned with a single coordinate axis.

It is time to use Python to perform some numerical experiments in 2D and 3D – and visualize the above statements. As said, in this post we concentrate on the shearing of squares and cubes, and on the behavior of extremal points and other points located on layers with a constant value of the preserved vector component. I admit that squares and cubes are a bit boring, but this post is just a preparatory step towards a closer investigation of the impact of shear operations on statistical vector distributions controlled by Gaussians.

2D: Shearing of a square/rectangle

The figure below shows the end-points of position-vectors, which define data points on a rectangular meshes within a square. All the points reside inside and on the boundary of the square and reflect horizontal layers of the square:

Now we apply a shear matrix with the following elements: [[1.0, 1.5], [0.0, 1.0]]. As we only work in two dimensions, our matrix has just 4 elements, three of which are ≠ 0. We name the undisturbed vectors of our square vsq. The resulting vectors after the shear operation will be called vsh. I.e.:

\[ \pmb{\operatorname{M}}_{sh} \: = \: \begin{pmatrix} 1.0 & 1.5 \\ 0.0 & 1.0\end{pmatrix}
\]

and

\[ \pmb{v}_{sh} \: = \: \pmb{\operatorname{M}}_{sh} \circ \pmb{v}_{sq}
\]

A plot of all the vsh gives us:

Exactly what we expected. We directly see the constant shift of the layers against each other. And we also see the linearity of the overall effect in the diagonal lines between the new edges. The overall result is a parallelogram.
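
For readers who want to reproduce such a plot: below is a minimal sketch with Numpy and Matplotlib. The side length of the square, the mesh resolution and the coloring are my own assumptions and not the exact settings behind the figures above.

import numpy as np
import matplotlib.pyplot as plt

# mesh of points inside and on the boundary of a square, arranged in horizontal layers
pts = np.linspace(-5.0, 5.0, 11)
X, Y = np.meshgrid(pts, pts)
v_sq = np.vstack([X.ravel(), Y.ravel()])          # shape (2, N): one column per position vector

M_sh = np.array([[1.0, 1.5],
                 [0.0, 1.0]])                     # the shear matrix used above
v_sh = M_sh @ v_sq                                # shear all position vectors at once

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 5))
ax1.scatter(v_sq[0], v_sq[1], c=v_sq[1], cmap='viridis', s=12)   # original square
ax2.scatter(v_sh[0], v_sh[1], c=v_sq[1], cmap='viridis', s=12)   # sheared square: a parallelogram
for ax in (ax1, ax2):
    ax.set_aspect('equal')
plt.show()

Coloring the points by their (unchanged) y-values makes the constant shift of the layers against each other directly visible.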

The points on all the layers were moved along hyper-surfaces with y = const., which in our 2D case are just lines parallel to the x-axis.

Note that the symmetry center at the origin of our figure remained fixed. Point symmetries with respect to the original symmetry center have been preserved. However, original symmetries with respect to the coordinate axes have been broken.

For a rectangle we get analogous, stretched images:

Shear transformation of a cube

The last plots were simple, but visualized the very nature of a shear transformation: Linear differences of (position) vectors between layers. Breaking of plane symmetries for polytopes, while point symmetries are preserved. Let us try to demonstrate something similar in 3D. Once again an illustrative method to do this is to produce multiple planes on different z-levels filled with points on a regular mesh. During plotting we allow for some transparency of these points.

All images show the situation before shearing. You see the flat squares (extending in x- and y-direction) on 4 different z-levels. It can be seen that the dimensions are -15 ≤ vi ≤ 15, for i ∈ {1, 2, 3}. We now apply a shear matrix Msh = [[1.0, 0.7, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]

\[ \pmb{\operatorname{M}}_{sh} \: = \: \begin{pmatrix} 1.0 & 0.7 & 0.0
\\ 0.0 & 1.0 & 0.0 \\
0.0 & 0.0 & 1.0
\end{pmatrix}
\]

to the position-vectors of our layer points. I.e. we only induce a linear effect of the y-components onto the x-components of our vectors. The result is:

We get a systematic y-dependent change of the x-coordinate values of our points. The z-levels of our layer points, however, remain unchanged. Now we repeat our experiment, but this time we add an additional shear via a linear dependency of the x- and y-component values of our vectors on their z-component:

\[ \pmb{\operatorname{M}}_{sh} \: = \: \begin{pmatrix} 1.0 & 0.7 & 1.2
\\ 0.0 & 1.0 & 1.0 \\
0.0 & 0.0 & 1.0
\end{pmatrix}
\]

The shift of the layers against each other in x- and y-direction gets a clearly visible z-dependence:

But, again and of course, all elements of our 4 layers remain on their respective z-level.
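
A sketch of how such layer plots can be produced with Numpy and Matplotlib; the four z-levels and the mesh spacing are assumptions on my side:

import numpy as np
import matplotlib.pyplot as plt

# four flat point layers (z = const.) inside the region -15 <= v_i <= 15
pts = np.linspace(-15.0, 15.0, 13)
X, Y = np.meshgrid(pts, pts)
layers = [np.vstack([X.ravel(), Y.ravel(), np.full(X.size, z)])
          for z in (-15.0, -5.0, 5.0, 15.0)]
v_lay = np.concatenate(layers, axis=1)            # shape (3, N)

M_sh = np.array([[1.0, 0.7, 1.2],
                 [0.0, 1.0, 1.0],
                 [0.0, 0.0, 1.0]])
v_sh = M_sh @ v_lay                               # x and y get z-dependent contributions

fig = plt.figure(figsize=(11, 5))
ax1 = fig.add_subplot(121, projection='3d')
ax2 = fig.add_subplot(122, projection='3d')
ax1.scatter(v_lay[0], v_lay[1], v_lay[2], c=v_lay[2], cmap='coolwarm', s=8, alpha=0.6)
ax2.scatter(v_sh[0],  v_sh[1],  v_sh[2],  c=v_sh[2],  cmap='coolwarm', s=8, alpha=0.6)
plt.show()

The z-coordinates of the scattered points are identical in both plots – only the x- and y-values change.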

Shearing of a cube: Surface layers

An alternative method to visualize the impact of a shear-transformation on a cube with the help of Matplotlib is based on 8 vectors that span the 6 surface elements of the cube. We also make the cube-faces a bit transparent in some plots. The original cube then looks like:

The result for our last shear matrix

\[ \pmb{\operatorname{M}}_{sh} \: = \: \begin{pmatrix} 1.0 & 0.7 & 1.2
\\ 0.0 & 1.0 & 1.0 \\
0.0 & 0.0 & 1.0
\end{pmatrix}
\]

is:

Here we see again that the z-levels of the upper and lower limiting plates of the cube remained constant. They were just moved in x- and y-direction:

Our shear operation has produced a parallelepiped (a spat). This is consistent with the results of the Blender experiments depicted in my previous post.

Remarks on a significant deficit of Matplotlib for 3D

A significant problem that one stumbles into with plots like those above is that Matplotlib does not offer a real 3D engine. Meaning: It often calculates the so-called z-order of objects like hyper-planes (or parts of them), curved manifolds or bodies along our fictitious line of sight wrongly. So the shading of e.g. transparent and opaque surfaces according to overlaps in 3D along the line of sight is almost always depicted in a wrong way.

For complex bodies this can drive you crazy and make you wonder whether it would be better to learn how to control Blender with Python. For our relatively simple cases I solved the z-order problem by disabling the computed z-order

ax = fig.add_subplot(111, projection='3d', computed_zorder=False)

and by manually assigning a z-order-value to each of the cube’s faces.
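
A hedged sketch of this face-based approach; the side length, the colors and the manual z-order values are my own assumptions, and computed_zorder=False requires a not too old Matplotlib version:

import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d.art3d import Poly3DCollection

a = 10.0                                          # assumed half side length of the cube
corners = np.array([[x, y, z] for x in (-a, a) for y in (-a, a) for z in (-a, a)])

M_sh = np.array([[1.0, 0.7, 1.2],
                 [0.0, 1.0, 1.0],
                 [0.0, 0.0, 1.0]])
sheared = corners @ M_sh.T                        # shear all 8 corner points

# each of the 6 faces is given by the indices of 4 corner points
faces_idx = [(0, 1, 3, 2), (4, 5, 7, 6),          # x = -a, x = +a
             (0, 1, 5, 4), (2, 3, 7, 6),          # y = -a, y = +a
             (0, 2, 6, 4), (1, 3, 7, 5)]          # z = -a, z = +a
sf_color = ['yellow', 'blue', 'red', 'green', 'magenta', 'cyan']

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d', computed_zorder=False)
for k, idx in enumerate(faces_idx):
    face = Poly3DCollection([sheared[list(idx)]], facecolor=sf_color[k],
                            edgecolor='k', alpha=0.5, zorder=k)   # manual z-order per face
    ax.add_collection3d(face)
lim = 1.1 * np.abs(sheared).max()
ax.set_xlim(-lim, lim); ax.set_ylim(-lim, lim); ax.set_zlim(-lim, lim)
plt.show()

Depending on the chosen viewing angle the manual z-order values have to be adapted to get a consistent depiction.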

Still: As soon as you distort a simple body like a cube strongly (see below) you need a really good overview of the order of your surfaces and their colors after rotations. It is worthwhile to make sketches or to create a similar body with Blender first to clearly see what happens during a sequence of rotations.

Combinations of shear transformations

Shearing transformations can, of course, be chained in an orderly sequence of operations. Let us take a 2D example of two shear matrices with the same shear-parameter:

\[ \begin{align} \pmb{\operatorname{M}}_{sh} \circ \pmb{\operatorname{M}}_{sh} \: &= \:
\begin{pmatrix} 1 & \lambda \\ 0 & 1 \end{pmatrix} \circ
\begin{pmatrix} 1 & \lambda \\ 0 & 1 \end{pmatrix}
\\
&= \: \begin{pmatrix} 1 & 2\lambda \\ 0 & 1 \end{pmatrix} \phantom{\Huge{(}^T}
\end{align}
\]

For different shear parameters we get

\[ \begin{align} \pmb{\operatorname{M1}}_{sh} \circ \pmb{\operatorname{M2}}_{sh} \: &= \:
\begin{pmatrix} 1 & \lambda \\ 0 & 1 \end{pmatrix} \circ
\begin{pmatrix} 1 & \mu \\ 0 & 1 \end{pmatrix}
\\
&= \: \begin{pmatrix} 1 & \lambda \, +\, \mu \\ 0 & 1 \end{pmatrix} \phantom{\Huge{(}^T}
\end{align}
\]

I.e., we just increase the shearing effect by performing two shear operations one after another. If we had taken the negative value (μ = -λ) we would just have inverted the shearing. Note:

In 2D a chaining of pure shear matrices of the type above, which we had defined as upper triangular matrices in the first post, is symmetric with respect to the order of the matrices.
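
A quick numerical cross-check with Numpy (the parameter values are arbitrary assumptions):

import numpy as np

lam, mu = 1.5, 0.7
M1 = np.array([[1.0, lam], [0.0, 1.0]])
M2 = np.array([[1.0, mu],  [0.0, 1.0]])

print(M1 @ M2)                              # [[1. 2.2] [0. 1.]] -> shear parameter lam + mu
print(np.allclose(M1 @ M2, M2 @ M1))        # True for these 2D shear matrices
M_inv = np.array([[1.0, -lam], [0.0, 1.0]])
print(M1 @ M_inv)                           # the identity: mu = -lam inverts the shearing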

Combination of upper and lower triangular shear matrices?

However, what happens if we combine a unipotent upper triangular shear matrix with a lower triangular one with the same shearing coefficient?

\[ \begin{align} \pmb{\operatorname{M}}_{sh} \circ \pmb{\operatorname{M}}_{sh}^T \: &= \:
\begin{pmatrix} 1 & \lambda \\ 0 & 1 \end{pmatrix} \circ
\begin{pmatrix} 1 & 0 \\ \lambda & 1 \end{pmatrix}
\\
&= \: \begin{pmatrix} 1 \,+\, \lambda^2 & \lambda \\ \lambda & 1 \end{pmatrix} \phantom{\Huge{(}^T}
\end{align}
\]

We get a symmetric matrix. This is not surprising, as this is always the case for the product of a matrix with its transpose. But, in addition, a scaling is superimposed. Note also that the combined operation is not symmetric regarding the order of matrix application:

\[ \begin{align} \pmb{\operatorname{M}}_{sh}^T \circ \pmb{\operatorname{M}}_{sh} \: &= \:
\begin{pmatrix} 1 & 0 \\ \lambda & 1 \end{pmatrix} \circ
\begin{pmatrix} 1 & \lambda \\ 0 & 1 \end{pmatrix}
\\
&= \: \begin{pmatrix} 1 & \lambda \\ \lambda & 1 \,+\, \lambda^2 \end{pmatrix} \phantom{\Huge{(}^T}
\end{align}
\]

Let us apply the second variant to our square (see above) for λ = 1.5:

As expected we get a stronger distortion in y-direction. Note also that, in contrast to a normal shear operation, the y-levels of our 4 level-rows were not preserved. Instead the individual levels appear rotated and stretched. Actually, in one of the next posts we will see that our combined matrix can be factorized into a combination of a rotation, then a pure scaling, then a back-rotation (by the same angle used in the first rotation). But this corresponds to nothing else than a pure scaling in a rotated ECS.
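
We can already cross-check the claimed factorization numerically: a symmetric matrix can be eigendecomposed into an orthogonal matrix Q (a rotation, possibly combined with a reflection), a diagonal scaling matrix D and the transposed matrix QT. A small sketch with Numpy:

import numpy as np

lam = 1.5
M_sh = np.array([[1.0, lam], [0.0, 1.0]])
M_comb = M_sh.T @ M_sh                      # [[1. 1.5] [1.5 3.25]]

evals, Q = np.linalg.eigh(M_comb)           # eigendecomposition of the symmetric matrix
D = np.diag(evals)
print(np.allclose(M_comb, Q @ D @ Q.T))     # True: rotation -> scaling -> back-rotation
print(evals)                                # two different positive scaling factors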

The funny thing with chaining is that using a simple symmetric matrix from the start for λ = 1.5

\[ \pmb{\operatorname{M}}_{symm} \: = \:
\begin{pmatrix} 1 & \lambda \\ \lambda & 1 \end{pmatrix}
\]

would give us a different result than a sequence of MshT * Msh:

Note:

Combining a shear matrix with its transpose is something different from just filling the lower triangular part of a matrix with the same (mirrored) values as in its upper triangular part.

By combining upper and lower triangular shear matrices we leave the group of pure shear operations

We could call the application of a symmetric matrix a kind of symmetric shearing. But even with a symmetric matrix we lose the central property of a normal shear operation, namely that the z-levels of body-layers parallel to the x/y-plane are preserved. All in all we get into trouble as soon as we combine original shear matrices and their transposed counterparts. Regarding my definition of a shear matrix in my previous post, we obviously leave the group of pure shear operations by such a combination. Some readers may regard this as an exaggeration. But it makes sense:
Even the symmetric matrix of the combined operation can be reduced to a pure scaling operation in a rotated coordinate system. And this simply is not shearing.

Just for fun: “Symmetric shearing” applied to a cube

Just for the fun of it, let us apply a symmetric version of a 3D-shearing matrix to our cube:
shear = [[1.0, 0.6, 1.0], [0.6, 1.0, 1.2], [1.0, 1.2, 1.0]]

\[ \pmb{\operatorname{M}}_{symm} \: = \: \begin{pmatrix}
1.0 & 0.6 & 1.0 \\
0.6 & 1.0 & 1.2 \\
1.0 & 1.2 & 1.0
\end{pmatrix}
\]

We expect all layers to be stretched into all coordinate directions now; z-levels of the original layers parallel to the x/y-plane will not be preserved. The normal vector to the transformed layers will be inclined to all axes after the operation.

The following plots were done with opaque surfaces. Due to the extreme stretching we would otherwise lose the overview – which is difficult enough anyway. I have kept the order of colors for the cube's side faces the same as for the unstretched cube: sf_color=['yellow', 'blue', 'red', 'green', 'magenta', 'cyan']. I first give you some selected perspectives:

The distortion in all directions is obvious. Note that the point-symmetries of all corners with respect to the origin are preserved. The following sequence reflects views from a circle around the vertical axis:

What we learn is that even scaling alone can distort a symmetric body quite a lot.

Conclusion

In this post I have visualized the effect of shear operations on squares and cubes. We saw that working with points on layers and with the respective position-vectors is helpful when we want to demonstrate a central property of real shear operations: One vector component stays as it is. Nevertheless, the symmetry breaking of shearing transformations was clearly visible. We also saw that we should represent shear transformations either by upper or by lower triangular matrices, but we should not mix these types of matrices.

As a side effect we have seen that the combination of a shear matrix with its transpose induces strong deformations in specific coordinate directions via enhanced scaling factors. Symmetric matrices also lead to relatively strong distortions. We have claimed that these distortions can be explained by pure scaling operations in a rotated coordinate system. A proof has yet to be delivered.

In the next post
Fun with shear operations and SVD – III – Shearing of circles
we take a first step in the direction of shearing multivariate normal vector distributions. We keep it simple and just study the impact of shearing transformations on circles.

 

Fun with shear operations and SVD – I – shear matrices and examples created with Blender

This post requires Javascript to display formulas!

In Machine Learning samples of data vectors often undergo linear transformations. Typical examples are the adaption of vector component values due to the choice of a different shifted and/or rotated coordinate system. Another example is the scaling of vectors. As we have seen in an investigation of Autoencoders they produce multivariate normal distributions which actually are the result of linear transformations of elementary Gaussian distributions. And let us not forget that the transport between layers of neural networks includes linear transformations.

In most cases we can interpret a linear transformation in the ℝn in a geometrical way. Linear transformations are members of a sub-class of affine transformations. Affine transformations in turn can be regarded as a combination of

  • a translation,
  • a scaling in either coordinate direction,
  • a reflection = mirroring operation,
  • a rotation
  • and/or a shear operation.

Translations are trivial. We also understand the effects of rotation and scaling transformations intuitively. A mirror operation can be understood as a kind of scaling with factors of -1.

Scaling operations with different scaling factors along different coordinate directions destroy certain rotational symmetries of a 3-dimensional or n-dimensional body. But scaling transformations do not always eliminate symmetries with respect to mirroring planes: If and when we first choose a Euclidean coordinate system whose axes are aligned with symmetry axes of a body with plane symmetries, we are able to keep up at least some of the planar symmetries of the body during a scaling operation.

Something that is a bit more difficult to grasp is a shear operation. The reason is that a shear operation destroys one or multiple plane symmetries of an impacted figure – whatever coordinate system we choose. And a shear operation cannot be reduced to a pure scaling operation in some cleverly chosen coordinate system. Neither can it be reduced to a rotation.

This alone is a good reason to have a closer look at this special kind of affine transformation. Another reason is to study what impact a shear operation has on the construction or transformation of a multivariate normal distribution – a topic which is at the center of another post series in this blog.

In this post I first want to discuss what a matrix that controls a shear operation looks like. We discuss some of its mathematical properties. Afterward I want to demonstrate what effects shearing has on two simple objects – a cube and a sphere. These examples will be done with Blender and the available mouse driven shear operations there.

Blender performs the required math in the background – and the examples below only provide visual impressions.

To get a better understanding of the nature of shear operations we, in the second post of this series, switch to Python and apply shear operations via explicit matrix operations onto vectors. We start with 2D-figures. Afterward we turn to 3D-cuboids and find that we produce spats. Another important example (not only with respect to multivariate normal distributions) is then given by the shearing of 3D-spheres: We will find that such an operation always produces ellipsoids. We will check this mathematically.

In a fourth post I want to discuss an important result from linear algebra, namely the so-called SVD-decomposition of a linear transformation and its meaning in our context of shear operations. SVD tells us that we can always replace a shear operation by a sequence of simpler operations. We will test this statement for our 2D- and 3D-objects. We then also find that the creation of an ellipsoid of any orientation can be done by applying just one shear operation to a unit sphere. Alternatively, we can perform a scaling of a sphere and rotate it once. This will help us to better understand the creation of multivariate normal distributions in a parallel post series.

Shear operations break symmetries of the affected figures

A shear operation Msh introduces an asymmetry into originally symmetric figures. It does so by coupling vector components in a specific way. For reasons of simplicity we restrict our argumentation to the ℝn. We describe objects in this space with the help of position vectors to points inside and on the surface of these objects. We specify the components of position vectors with respect to a Euclidean coordinate system [ECS].

To make things easier we assume that the origin of our ECS coincides with the symmetry center of the body we apply the shear operation to. If the body has plane and point symmetries then the origin is placed at the cutting lines or the cutting point of such planes; i.e., the ECS origin coincides with the symmetry center of the body and coordinate planes will coincide with some of the symmetry planes. For the cases we look at – spheres, cubes, cuboids – the ECS origin thus becomes identical with the body’s volume center.

A shear transformation applied to a vector couples at least two of the vector’s components such that a growing value for the coordinate value in the direction of a chosen coordinate axis (e.g. vz) impacts the coordinate value in another coordinate direction (e.g. vx) in a linear way.

Here is an example for the x- and z-components of a 3-dimensional vector (vx, vy, vz)T:

\[ \begin{align}
v_x^{sh} \, &= \, v_x \, + \, \alpha_{xz} * v_z \\ v_y^{sh} \,&=\, v_y \\ v_z^{sh}\,&=\, v_z \end{align}
\]
\[ \pmb{\operatorname{M}}_{sh}: \quad \pmb{v} \,=\, \begin{pmatrix} v_x \\ v_y \\ v_z \end{pmatrix} \:\Rightarrow \:
\pmb{v}_{sh} \,=\, \begin{pmatrix} v_x^{sh} \, = \, v_x \, + \, \alpha_{xz} * v_z \\ v_y^{sh} \,=\, v_y \\ v_z^{sh}\,=\, v_z \end{pmatrix},
\]

With αxz being some constant. This operation breaks original plane symmetries of the body, e.g. the mirror symmetries with respect to the x-y-plane and the y-z-plane.
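
In matrix form this particular coupling can be written down directly. A tiny Numpy illustration with an assumed value for the coupling constant:

import numpy as np

alpha_xz = 0.8                              # assumed coupling constant
M_sh = np.array([[1.0, 0.0, alpha_xz],
                 [0.0, 1.0, 0.0],
                 [0.0, 0.0, 1.0]])

v = np.array([1.0, 2.0, 3.0])
print(M_sh @ v)                             # [3.4 2. 3.]: only the x-component has changed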

A shear operation preserves the orientation of tangent planes at some surface points

But note also, that a shear operation has another unique property that keeps up a geometrical property of the affected body:

A shear operation preserves the orientation of tangent planes to the body’s surface at some particular points. The most important of these points define extrema on the surface with respect to the component whose value is linearly added in the shearing transformation. The tangent planes there do not only keep up their orientation, but also their position.

In the case defined above we would look at extrema with respect to the z-component:

\[ z \,=\, f(x,\,y), \quad { \partial f \over \partial x } \,=\, { \partial f \over \partial y } \, = \, 0
\]

Actually, we would need a bit of multivariate calculus to derive this statement for general bodies. But the example of a cube helps to grasp the central point: The transformation defined above would keep up the orientation and position of the top and the bottom square surfaces of the cube. See below for respective graphics.

In the case of a general n-dimensional body whose surface is closed, continuous and differentiable at all surface points we will find at least two such points whose tangent planes keep their orientation and position during a shearing operation.

Coupling of more than two vector components – and a restriction

Of course we could have introduced more than just one linear coupling of a selected vector component to other components. In three or n dimensions we can couple 2 components to a third one. In n dimensions we can add up shearing contributions of up to (n-1) other coordinate values to a single component. But we introduce a restriction:

To avoid confusion and double shearing effects we always set αzx = 0. And analogously for the other components:

\[ \begin{align} \alpha_{ij} \, &= \, const. \, \neq \, 0, \quad \forall\, i < j \\ \alpha_{ij} \, &= \, 0, \quad \forall\, i > j \end{align} \]

Once again: We avoid a kind of double coupling between the same pair of components. E.g., vx may become impacted by vz, but vz then shall not get impacted by vx at the same time.

Obviously, this is an asymmetric policy in the formal handling of vector component coupling. But it has geometrical consequences (see below). Of course, we follow this asymmetric policy also for other pairs of vector components and the respective coupling parameters. This gives us a number of n*(n – 1)/2 potential constant shearing parameters.

A first example produced with the help of Blender

To get a visual impression of what shearing does to a body with some symmetries, let us look at a simple example: a cube.

Regarding the first image we define that the x-coordinate direction goes from the left to the right, the y-direction from the front to the back and the z-direction from the bottom vertically upwards. The last image from the top shows that the x- and y-side-length of our cube are equal. Among other symmetries, a cube and also cuboids obviously have a mirror symmetry with respect to planes parallel to the x/y-plane, the y/z-plane and the x/z-plane through their centers of volume.

Now let us apply two shear operations which couple the x/z-coordinates and the y/z-coordinates. Then we get something like the following:

The first two images show the sheared object from the front, but with the camera moved to different heights. We see that the cube was sheared in two directions – in the x- and in the y-direction. The z-values of the upper and lower surfaces remain constant during the operation. As a result the object gets inclined in two directions: We get a spat (or parallelepiped) as the result of applying a shear operation to a cube or cuboid.

The last image shows the object from exactly the same top position as before the shearing. We see that the original symmetry is destroyed with respect to all 3 orthogonal planes (parallel to coordinate planes) through the center of volume of our object.

Also all of the following objects are the results of simple shearing operations. While this is clear for the parallelepiped (derived from a cuboid) and the inclined cylinder (derived from a straight cylinder), it is not directly obvious for the ellipsoid, which in the depicted case actually is the result of a shearing applied to a sphere (with x being impacted by y and z). In all cases rotations and translations have been applied in addition to arrange the scene.

We get the impression that some simple geometric shapes like a spat or an ellipsoid could be the direct result of one or more shearing operations applied to originally simple objects with multiple symmetries – as cubes and spheres.

Shear transformations are mathematically done by unipotent upper triangular matrices

Let us return to a more mathematical description of a shear operation. Like any other linear operation applied to vectors of the ℝn we can represent a shear operation by a matrix. We just have to express the linear coupling between two vector components by a suitable arrangement of our shear constants as matrix elements. By experimenting a bit we find that a square matrix which only has non-zero elements on its diagonal and in its upper right triangular part does the job. We speak of an “upper triangular matrix“:

\[ \pmb{\operatorname{M}} \, = \,
\begin{pmatrix}
m_{11} & m_{12} &\cdots & m_{1n} \\
0 & m_{22} &\cdots & m_{2n} \\
\vdots &\vdots &\ddots &\vdots \\
0 & 0 &\cdots & m_{nn} \end{pmatrix}
\]

That such a matrix mixes vector components in the intended way follows directly from the definition of standard matrix multiplications. (Note: In Python/Numpy we talk about “matmul”-operations.)

Note that not all elements in the upper triangular part (aside the diagonal) are required to be different from zero. But at least one should be ≠ 0.

Not a full and neither a symmetric matrix

Now we come back to a point introduced already above: In principle we could allow for a coupling of two selected vector components in both logically possible ways. E.g., when we have an impact of the z- onto the x-component, why not also allow for an analogous impact of x- onto z at the same time? Maybe with a different linear factor?

Well, we then would get a full linear operation with possibly all n**2 elements having different values, or we would get a symmetric matrix. A full generalization is not our objective. We want a special sub-case of a linear transformation. But why not allow for a symmetric matrix?

The reason for this will become clear when we discuss the decomposition and factorization of shear matrices in future posts. For the time being the following hint may help: The complexity of a linear operation can be measured by the minimum number of elementary operations required in a well chosen ECS. Can we find a coordinate system where the transformation reduces to just one elementary operation (translation, scaling, reflection, rotation)?

For a shearing transformation this would have to be a scaling operation as this kind of operation can at least potentially break rotational and planar symmetries.

Now we remember a result of linear algebra: Any symmetric matrix corresponds to an operation for which we always can find a rotated ECS (at the center of an originally highly symmetric figure) in which the transformation of the figure reduces to a pure scaling operation.

For a shear operation we even want to exclude this kind of reduction: We shall not be able to find an ECS in which the operation just becomes a scaling. (A shearing shall in any coordinate system at least require a scaling and a rotation.) How can we achieve this? Well, we must not even allow for a symmetric matrix. For bodies which show original symmetries with respect to planes through their center this means that a shear operation destroys at least some of the planar symmetries whatever coordinate system we choose. Despite the fact that a shear operation can be inverted (see below)!

So, our elimination of some rotational and at least one planar symmetry in an originally highly symmetric body is reflected by the triangular form of the matrix. We avoid a 2-fold coupling between two selected coordinates at the same time. So when we have a matrix element mi,j, with j > i, we always set the corresponding element to zero: mj,i = 0.

This restricts the distortions we are going to get. Some surface elements are going to remain congruent and some point symmetries are kept up.

Why did we not use a lower triangular matrix?

Well, this is just a convention. Actually, the transposed matrix of an upper triangular one would also do the job – so we could also take matrices with non-zero elements only in the lower left triangle. But, for convenience, we will stick to matrices with non-zero elements in the upper right triangular part.

What about the diagonal elements of a shear matrix?

First of all note that none of the elements on the diagonal of the matrix Msh should be zero – otherwise we would eliminate one of the vector components and collapse our original figure. But for a shear operation we do not want any of the vector components to disappear! In addition: As diagonal elements different from 1 would reflect a simple scaling, we request that all of the diagonal matrix elements are equal to 1 for a pure shear operation:

\[ \pmb{\operatorname{M}}_{sh} \, = \,
\begin{pmatrix}
1 & m_{12} &\cdots & m_{1n}(\ne0) \\
0 & 1 &\cdots & m_{2n} \\
\vdots &\vdots &\ddots &\vdots \\
0 & 0 &\cdots & 1 \end{pmatrix}
\]

With at least one of the mi,j (j > i) being ≠ 0. We then speak of a unipotent or unitriangular matrix.

An example for 4 dimensions would be:

\[ \pmb{\operatorname{M}}_{sh}^{4D} \: = \: \begin{pmatrix}
1.0 & 0 & 1.5 & 0.8 \\
0 & 1.0 & 0.2 & 0 \\
0 & 0 & 1.0 & 2.0 \\
0 & 0 & 0 & 1.0
\end{pmatrix}
\]

As the diagonal is fixed by elements mi,i = 1, we have reduced the number of free and independent (off-diagonal) parameters Nindsh of a pure shear matrix Msh from n**2 to:

\[ N^{ind}_{sh}(\pmb{\operatorname{M}}_{sh}) :\: = \: n\,*\, (n\,-\,1)/2
\]

So, in 2 dimensions instead of 4 parameters we set the diagonal elements to 1 and choose just 1 free off-diagonal parameter. In 3 dimensions we only have to set 3 instead of 6 off-diagonal parameters.

Basic properties of a shear matrix

A quick calculation shows: The determinant of a shear matrix is directly given by the product of its diagonal elements, and thus we have:

\[ \operatorname{det} \left(\pmb{\operatorname{M}}_{sh}\right) : = \: 1
\]

Therefore, a shear matrix is always invertible. This means its effects can be reverted – also in geometrical terms. The matrix also has full rank r = n.
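
These properties can be verified numerically for the 4-dim example matrix given above, e.g. with Numpy:

import numpy as np

M_sh_4d = np.array([[1.0, 0.0, 1.5, 0.8],
                    [0.0, 1.0, 0.2, 0.0],
                    [0.0, 0.0, 1.0, 2.0],
                    [0.0, 0.0, 0.0, 1.0]])

print(np.linalg.det(M_sh_4d))                     # 1.0 (up to floating point errors)
print(np.linalg.matrix_rank(M_sh_4d))             # 4 -> full rank
M_inv = np.linalg.inv(M_sh_4d)                    # the shearing can be reverted
print(np.allclose(M_inv @ M_sh_4d, np.eye(4)))    # True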

The eigenspace and the eigenvalue(s) of a shear matrix Msh are also interesting. We can use a property of triangular matrices to get the eigenvalue(s):

The diagonal elements of a triangular matrix are its eigenvalues.

This in turn can be understood from analyzing the characteristic polynomial det(M – λI) = 0.

Thus: The one and only eigenvalue ev of Msh is 1.

\[ \pmb{ev}_{\operatorname{M}_{sh}} \, = \, 1
\]

The eigenvalue's algebraic multiplicity in the characteristic polynomial is n. This does not yet tell us much about the geometrical multiplicity, i.e. the dimension of the eigenspace.

But all the n eigenvalues share the same eigenspace ES. ES is the kernel of the matrix (Msh – λI). Therefore, we can derive the eigenspace's dimension from the dimension formula for this particular matrix:

\[ \operatorname{dim}_{ES} \, = \, n \,- \, \operatorname{rank}\left(\pmb{\operatorname{M}}_{sh} \, - \, \lambda*\pmb{\operatorname{I}} \right) \,=\, n \, - \, (n\,-\,1) \,=\, 1
\]

as the rank of the matrix on the right side is (n – 1) for a generic shear matrix.

You can also understand this by writing down the equations (Msh – λI) v = 0 explicitly:
In the generic case only one variable can be chosen freely. So the kernel of the matrix (Msh – λI) is only 1-dimensional. Logically and geometrically, this is clear, as any other coordinate value (≠ 0) beyond the first one mixes in contributions from the shearing.

It is easy to prove that the eigenspace ES can be based upon the eigenvector e1.

\[ \pmb{e}_1 \, = \, \left( \, 1,\, 0,\, \cdots,\,0\right)^T, \: \\ \mbox{with}\, (n\,-\, 1) \, \mbox{ zero components}
\]
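
A numerical cross-check of these statements, here for a generic 3-dim shear matrix (the concrete values are assumptions):

import numpy as np

M_sh = np.array([[1.0, 0.7, 1.2],
                 [0.0, 1.0, 1.0],
                 [0.0, 0.0, 1.0]])
n = M_sh.shape[0]

# the eigenvalues of a triangular matrix are just its diagonal elements
print(np.diag(M_sh))                                    # [1. 1. 1.] -> the only eigenvalue is 1
dim_ES = n - np.linalg.matrix_rank(M_sh - np.eye(n))
print(dim_ES)                                           # 1 -> the eigenspace is 1-dimensional
e1 = np.array([1.0, 0.0, 0.0])
print(np.allclose(M_sh @ e1, e1))                       # True: e_1 spans the eigenspace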

A shear matrix cannot be diagonalized

Can Msh be diagonalized? Answer: No! From linear algebra we know that this would require n linearly independent eigenvectors. But we have just one! So:

\[ \pmb{\operatorname{M}}_{sh} : \neq \: \pmb{\operatorname{Q}} \circ \pmb{\operatorname{D}}_{sh} \circ \pmb{\operatorname{Q}^T}
\]

Think a bit about the consequences:
An operation like that on the right side of the inequality above would correspond to the representation of our shear matrix in a different (rotated) coordinate system. A diagonalization corresponds to an eigendecomposition; in geometrical terms a diagonal matrix represents a pure scaling operation. But, obviously, such a description of our shear operation in some (cleverly) selected ECS is not possible:

We cannot find a (translated and rotated) ECS in which a shear operation reduces to a scaling transformation. In any chosen (rotated) ECS there is always an additional second rotation involved which we cannot simply invert or get rid of!

We shall see this in more detail when we discuss decompositions. We simply cannot get rid of the fundamental asymmetry we have built into Msh by not allowing non-zero elements in the lower triangular part of the matrix! No change to whatever selected coordinate system will help.

So, a shear operation really is a special symmetry breaking operation which cannot be reduced to just one elementary operation. In any ECS it requires at least a rotation and a scaling!

Summary:

A pure shear operation is introduced by some off-diagonal and non-zero element in the upper right triangular part of a unipotent matrix. All off-diagonal elements in the lower left triangular part are set to zero.

More illustrations from Blender

First some images for a cuboid that was more extremely deformed by shearing in x- and y-direction than the first one:

Now we come to an interesting example: We apply shearing in x- and y-direction to a sphere. We get something whose shape looks much more regular than that of a cuboid. We pick a unit sphere and apply two shearing operations in x- and y-direction:

The first image shows the original sphere from some point of view. The second image was taken from the same perspective, but now after the shearing. The resulting figure looks very regular – and it seems to have plane symmetries.

Another example with a bit of different lighting gives us:

Actually, we get an ellipsoid. We clearly can identify the two points with maximum absolute z-values whose tangent planes did not change their orientation.

Ellipsoids (in 3D) are figures with 3 main axes and planar symmetries. The original full rotational symmetry (first image of the series above) is obviously broken. However, we seem to have saved at least some planar symmetries. Hmmm … Just when we thought we had understood that a shearing operation breaks original planar symmetries, we have kept some in our new example case.

Why is this? What is so special about the shearing of spheres? Keep this question in mind.

Conclusion

Shearing transformations are a special class of linear and thus affine transformations. Shearing obviously can break some plane symmetries of originally highly symmetric bodies – such as cubes. In contrast to other elementary affine operations no coordinate system can be found in which the number of operations required to reproduce the effects of a shearing reduces to one. In particular it is not possible to find a Euclidean coordinate system where shearing is just reduced to a (symmetry breaking) scaling with different factors in different coordinate directions.

Blender offers a simple graphical method to apply shearing to simple geometrical bodies. This allowed us to get an impression of the effects on cubes and spheres. Something that was a bit astonishing was the shearing effect on a unit sphere. It just created ellipsoids – again highly symmetric bodies WITH planar symmetries.

In the next post of this series
Fun with shear operations and SVD – II – Shearing of rectangles and cubes with Python and Matplotlib
we will look closer at shear operations with the help of Python and Matplotlib.

Multivariate Normal Distributions – II – random vectors and their covariance matrix

This post requires Javascript to display formulas!

This post series is based on notes I took when I recently studied the mathematical properties of Multivariate Normal Distributions [MNDs]. A motivation to publish the notes came from the analysis of a Machine Learning experiment: A Convolutional Autoencoder [CAE] trained on human face images had produced a MND in its latent space. But MNDs are, of course, relevant in other contexts, too.

In this post we take some preparatory steps: I want to introduce the term “random vector” to describe statistical multivariate distributions in vector form. We then have a look at the definition of a probability density function [pdf] in multiple dimensions. Furthermore we discuss linear transformations of a random vector, its expectation values and its covariance matrix. The latter both marks and quantifies (linear) relations between its components. I will also point out a simple but useful equation for the behavior of the covariance matrix when a matrix operates on its random vector. Eventually we also have a look at marginal distributions and their probability density function.

I will proceed not too formally and my discussion of the topics will not be really complete in a pure mathematical sense. On our way I will also remind you of some basic ideas and equations from statistics. People who are familiar with all the terms named above can jump over this post.

Random Vectors and multivariate distributions

Let us assume that we can describe an object of interest, e.g. in the context of an ML problem, by a set of n indexed, real-valued variables sj. Instead of a manifest object we could also use n variables to characterize the result of a well defined experiment. When I speak of “objects” below I include this more abstract option.

Each object can mathematically be represented by a data point in the ℝn. We simply relate each of the indexed coordinate axes of a corresponding Euclidean coordinate system with a certain object property. The coordinates of the data points are then given by the variable values.

Let us now order a complete set of parallel measured property values sjc for a concrete object as indexed components of a vector sc. The vector then represents our object:

\[ \pmb{s}^c \: = \: \left( \, s_1^c, \, s_2^c, \, ….\, s_n^c\, \right)^T \,
\]

The T indicates the transposition operation. Note: Throughout this post series we assume that untransposed vectors are written in vertical form. I.e. the components are arranged as rows.

Samples

Let us assume that we have m concrete observations of individual objects (or experimental results). The values of the n real variables for each object are always measured or qualified in parallel:

  • We either randomly pick a bunch of m concrete objects out of a wider population of available objects and determine the values of their n characteristic sj-variables.
  • Or we repeat a concrete experiment, whose outcome is described by values for a set of n measurable sj-variables, many times. The setup for our experiment defines precisely what the variables are and how we measure their values. The experimental result can be interpreted as an abstract “object”.

Thus we get samples of object data. Our experimental setup and the properties of our objects define a sample space Ω, for which we have a clear description of how we produce one or multiple vectors s. A particular observation maps Ω to an individual vector.

Let us assume that at least in principle our vector components can take any real value. Producing a vector s can then formally be regarded as a function to the ℝn:

FS: Ω   =>   ℝn,     s = FS(Ω, experiment number, … ).

A sequence of many observations, i.e. creating a sample, formally corresponds to a sequence of many independent applications of this function. An implicit (and sometimes problematic) assumption is that picking multiple objects or repeating our experiments does not influence Ω and the statistics of the basic population.

The outcome, namely a set of resulting s-vectors, depends on the statistical properties of the underlying object population. We index the k-th vector by sk.

Each sample can be visualized by a bunch of data points in an n-dimensional space. A concrete sample in 3 dimensions may e.g. look like this from two perspectives:

But our sample can also be represented by a data matrix. We can e.g. arrange the columns to be given by the sk vectors. If we have a sample of m objects then we get a (n x m) data matrix DS ∈ ℝnxm. A row with index j then provides us with sample statistics for the j-th object property.

\[
\pmb{\operatorname{D}}_S \: = \: \begin{pmatrix}
s^1_1 & s^2_1 &\cdots & s^m_1 \\
s^1_2 & s^2_2 &\cdots & s^m_2 \\
\vdots &\vdots &\ddots &\vdots \\
s^1_n & s^2_n &\cdots & s^m_n \end{pmatrix}
\]
\[ \pmb{\operatorname{D}}_S \: = \: \left(\, \pmb{s}^1, \pmb{s}^2, \cdots …, \pmb{s}^m\right).
\]

Often you will find the transposed matrix for reasons of an efficient notation. I prefer this version as it later allows for matrix operations on the vector components from the left side, e.g. in a Python program.
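
A small sketch of this data arrangement with Numpy; the toy sample values drawn from a standard normal distribution are an assumption for illustration only:

import numpy as np

rng = np.random.default_rng(42)

n, m = 3, 1000                              # n object properties, m observed objects
D_S = rng.normal(size=(n, m))               # data matrix: one column per observed vector s^k

s_1   = D_S[:, 0]                           # the first observed vector s^1 (a column)
row_j = D_S[1, :]                           # sample statistics of the 2nd property (a row)
print(D_S.shape, s_1.shape, row_j.shape)    # (3, 1000) (3,) (1000,)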

Random vectors

We indicate the statistical element in creating a concrete vector from an underpinning population by a vector symbol S. By using this symbol we understand implicitly that any concrete vector S = sc is created by applying our well defined function FS.

It is natural to split S up into n individual univariate random variables Sj: Ω => ℝ, which represent the statistics per vector component. We write S in vector form:

\[ \pmb{S} \: = \: \left( \, S_1, \, S_2, \, ….\, S_n\, \right)^T
\]

So:

\[ \pmb{S} \:=\: \pmb{s}^c\: =: \left(\, S_1=s_1, S_2=s_2, \cdots …, S_n=s_n \, \right)^T.
\]

The vector variable S gets concrete component values by a well defined statistical process. But, if you like to see it more globally, S represents any object from the population and thus in a certain meaning the whole population – and its statistics.

We call S a multivariate random variable or a random vector.

Thus, our statistical multivariate distribution can now be represented by a random vector – and we know how to create samples by turning S repeatedly into concrete vectors. Why all the fuss?

Linear operations on random vectors

The vector notation allows us to combine a random vector with mathematical vector operations, in particular linear transformations. We implicitly understand that such an operation is applied to any vector of the population (and in certain contexts also to elements of our samples). Such a transformation consists of a mapping of vectors of the original population or a sample to new vectors in the same or a different vector space by one and the same linear transformation. The mapping gives us new component values.

In linear algebra we learn that a linear vector transformation can be represented by a matrix plus a translation vector. If the transformation maps a random vector S of the ℝm to a random vector X in the ℝn then the linear (affine) transformation is defined by a (n x m)-matrix A ∈ ℝnxm and a vector b ∈ ℝn:

\[ \begin{align}
\pmb{x} \,&=\, (x_1, x_2, …, x_n)^T \,=\, \pmb{\operatorname{A}} \pmb{s} \, + \, \pmb{b} \\
&\, \\
\pmb{X} \, &= \, \pmb{\operatorname{A}S} + \pmb{b} \end{align}
\]
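
With the column arrangement of the data matrix introduced above, such a transformation can be applied to a whole sample by one matrix multiplication from the left. The concrete values of A and b below are arbitrary assumptions:

import numpy as np

rng = np.random.default_rng(0)
D_S = rng.normal(size=(3, 1000))            # sample of the random vector S, columns = vectors

A = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.0],
              [0.2, 0.0, 1.0]])
b = np.array([1.0, -2.0, 0.5])

D_X = A @ D_S + b[:, None]                  # X = A S + b, applied to all sample vectors at once
print(D_X.shape)                            # (3, 1000)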

In close similarity to standard operations in vector spaces a linear transformation of a random vector can have two meanings:

  • The operation either actively transforms our random vector (and the underlying vector distribution) – e.g. by a rotation in the ℝn.
  • Or we perform a passive switch to another Euclidean coordinate system (e.g. by a rotation of the coordinate axes) and the transformation adapts the component values accordingly.

Invertible transformations?

If and when we have a transformation within the ℝn the matrix A would typically be a real-valued square (n x n)-matrix. In many cases we will assume that the transformation and its matrix A are invertible. This imposes a certain condition on the matrix: A must then have a determinant det(A) ≠ 0.

\[ \operatorname{det}\left(\pmb{\operatorname{A}}\right) \, =\, \left|\pmb{\operatorname{A}}\right|
\, \ne \, 0
\]

But note that in the more general case also rectangular matrices can have a “left”- or a “right”-inverse matrix; see https://en.wikipedia.org/wiki/Generalized_inverse.

Rectangular matrices, e.g. with m (< n) rows and n columns, have a rank r ≤ m < n, and not all of the column vectors of such a matrix are linearly independent.

Probability density functions in the ℝn

Let us assume that the population behind a random vector covers the ℝn completely and in a continuous way. By “continuous” we mean that a transition to infinitesimal small volumes at any position in the ℝn is possible: We still will find vectors of the population there and component values change smoothly between adjacent elements.

We can apply a frequency analysis to the vector component values of the objects of a huge sample or of a number of samples. E.g. we can define intervals into which the components may fall or not. The analysis gives us a probability P(S=sΔV) for the occurrence of position vectors sΔV which point into a certain finite region ΔV of the ℝn.

If you better like to think in terms of data points:

We get a probability that a data point representing a statistically picked object lies within a certain defined and finite region of the ℝn.

How we split up the ℝn into separate volume regions depends on available information about the object population or on an analysis of sufficiently large samples.

If each component of the vectors s can take any real value we may under certain circumstances make a transition to probability densities. Probability densities can be distilled from huge samples or alternatively very many samples picked from the underlying population. To get an idea of the density function we, in practice, raise the number of observations and reduce the volume regions of the ℝn to a proper averaging size that avoids wild fluctuations. The latter is called “sampling”.

If we can assume, backed by many experiments, a converging limiting process to infinitesimal real value intervals dsj for each of the vector components, then we end up with a bunch of n continuous 1-dim probability density functions pj(sj): ℝ => ℝ. A concrete probability value for a component must be derived via integration of pj() over a finite interval a ≤ sj ≤ b:

\[ \operatorname{P}(a \le s_j \le b) \,=\, \int_a^b p_j(s_j)\,ds_j
\]

An interpretation as a probability requires, of course, a normalization of such an integral over the whole ℝ.
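
As a simple numerical illustration, assuming a standard normal density for one of the components, the probability for a finite interval follows from such an integral:

import numpy as np

def p_j(s):
    # assumed 1-dim density: standard normal with mean 0 and standard deviation 1
    return np.exp(-0.5 * s**2) / np.sqrt(2.0 * np.pi)

a, b, ds = -1.0, 1.0, 1.0e-4
s = np.arange(a, b, ds) + 0.5 * ds          # midpoint rule on a fine grid
print(np.sum(p_j(s)) * ds)                  # P(-1 <= s_j <= 1) ~ 0.6827

s_all = np.arange(-10.0, 10.0, ds) + 0.5 * ds
print(np.sum(p_j(s_all)) * ds)              # ~ 1.0 -> the density is normalized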

It is clear that the full probability density function pS(s) of the vector distribution varies continuously with the 1-dim probability density functions of its components.

In this post series we are interested in examples of vector distributions which can be described by a continuous multidimensional probability density function [pdf] for infinitesimal volume elements dVS = ds1 ds2 … dsn:

\[ p_{\pmb{S}}(\pmb{s}) \: = \: p_{\pmb{S}} \left( \, s_1, \, s_2, \, ….\, s_n\, \right)
\]
\[ \begin{align} P(\pmb{s}, V) \, &= \, \int_V p_{\pmb{S}}(\pmb{s}) \, ds_1ds_2…ds_n \\
&= \, \int\int…\int_V p_{\pmb{S}} \left( \, s_1, \, s_2, \, ….\, s_n\, \right) \, ds_1ds_2 \cdots ds_n
\end{align}
\]

where V marks some finite volume element in ℝn. Note again: A normalization of such an integral over the whole ℝn is required to get well defined probability values. The normalization factor above is assumed to be integrated in the mathematical expression for pS(s).

For the purposes of this post series we assume that both pS and the pj() are differentiable.

Correlations of random vector components

There may be dependencies which control how the components of a random vector vary together. So the variation of pS(s) may depend in a rather complicated way on the density functions of its components pj(sj):

\[ p_{\pmb{S}}(\pmb{s}) \: = \: p_{\pmb{S}} \left( \, p_1(s_1), \, p_2(s_2), \, ….\, p_n(s_n) \, \right)
\]

The variation of components may e.g. happen in pairwise correlated or uncorrelated ways. I.e., we can in general not assume that the total probability density function pS(s) of a random vector is e.g. a product of the 1-dimensional probability density functions pj(sj) of its components.

Such dependencies make the statistics of multivariate vector distributions more complex than the statistics of their constituting (marginal) univariate components. The relation between pS and the individual pj can be difficult to derive and it may even reflect non-linear component relations. Luckily we will see that MNDs contain only linear couplings of the components and thus simple correlations described by constant coefficients.

In ML we have only samples with a limited number of individual vectors available. The investigated samples have to be big enough to reveal the properties of the total probability density function pS(s), which assumedly describes the variation of the underlying continuous vector population in the ℝn.

Probability density after linear transformations and contour hyper-surfaces

A transformation of the components of a random vector has, of course, also an impact on the resulting probability density:

\[ p_{\pmb{X}}(\pmb{x}) \,=\, p_{\pmb{X}} \left( \pmb{\operatorname{A}s} \,+\, \pmb{b} \right) \, = \, p_{\pmb{X}} \left(\, x_1, \, x_2, \, ….\, x_n\, \right)
\]

and for a finite volume VX in the target space

\[ P\left(\pmb{x}, V_X\right) \,=\, \int\int…\int_{V_X} \, p_{\pmb{X}} \left( x_1, \, x_2, \, ….\, x_n \right)\, dx_1dx_2 \cdots dx_n
\]

In case of a well defined distribution S we can hope to find an analytical expression for the multidimensional probability pX(x) of the transformed random vector X. But as we will see we have to invest some work into the calculation. And we will answer the question by what parameters the probability density of a MND is defined in a unique way.

We expect that the concrete form of pX is influenced in a characteristic way by the properties of the original pj(sj)-pdfs and, of course, by our transformation matrix A. This is the case for MNDs and we will study the effects of linear transformations on MNDs and their projections to sub-spaces in more detail in forthcoming posts.

As the matrix A mixes components we may also expect the creation of (linear) relations between the X-components. Therefore, the shape of the density distribution may change due to scaling, mirroring, shearing and rotation effects. This results from the fact that linear transformations are affine transformations in geometrical terms.

This leads to the question by what method we can describe the “shape” of a multivariate distribution. A simple method is to determine hyper-surfaces of constant probability density values. I.e. contour-surfaces:

\[ p_{\pmb{X}}(\pmb{x}) \, = \, p_{\pmb{X}} \left(\, x_1, \, x_2, \, ….\, x_n\, \right) \:=\: const.
\]

This actually is the definition of a (n-1)-dimensional hypersurface. The dependency on the component values can be complicated. But for the special case of a MND we will find a rather comprehensible form which we can interpret. So we will get a chance to understand and qualify the effects of a linear transformation by its impact on the contour hyper-surfaces of the probability density of a MND.

Expectation value and covariance of a statistical vector distribution

The covariance of two univariate distributions X and Y is defined as

\[
\begin{align}
\operatorname{cov}(X, Y)
&= \operatorname{\mathbb{E}}\left[\,\left(\,X \,-\, \operatorname{\mathbb{E}}\left(X\right)\,\right) \, \left(\,Y \,-\, \operatorname{\mathbb{E}}\left(Y\right)\, \right)\,\right] \\
&= \operatorname{\mathbb{E}}\left(X Y\right) \,-\, \operatorname{\mathbb{E}}\left(X\right) \operatorname{\mathbb{E}}\left(Y\right)
\end{align}
\]

The expectation value and the covariance of a general vector distribution S are defined by:

\[ \begin{align} \pmb{\mu}\left(\pmb{S}\right) \,=\, \pmb{\mu}_S \: &= \: \operatorname{\mathbb{E}}\left(\pmb{S} \right) \, := \, \left( \operatorname{\mathbb{E}}(S_1), \, \operatorname{\mathbb{E}}(S_2), …, \, \operatorname{\mathbb{E}}(S_n) \right)^T \\
\operatorname{Cov}\left(\pmb{S}\right) \: &:= \: \operatorname{\mathbb{E}}\left[\left(\pmb{S} \,-\, \operatorname{\mathbb{E}}(\pmb{S}) \right)\, \left(\pmb{S} \,-\, \operatorname{\mathbb{E}}(\pmb{S}) \right)^T \right] \end{align}
\]

Note the order of transposition in the definition of Cov! A (vertical) column vector is combined with a transposed (horizontal) row vector. The rules of matrix multiplication then give you a matrix as the result! And the expectation value has to be taken for every element of that matrix.

Thus, the interpretation of the notation given above is:

Pick all pairwise combinations (Sj, Sk) of the component distributions, calculate the covariance cov(Sj, Sk) of each pair and put it at the (j,k)-position of the matrix.

Meaning:

\[ \begin{align}
\operatorname{Cov}\left(\pmb{S}\right)\:&=\:\operatorname{\mathbb{E}} {\begin{pmatrix} (S_{1}-\mu_{1})^{2}&(S_{1}-\mu_{1})(S_{2}-\mu_{2})&\cdots &(S_{1}-\mu_{1})(S_{n}-\mu_{n})\\(S_{2}-\mu_{2})(S_{1}-\mu_{1})&(S_{2}-\mu_{2})^{2}&\cdots &(S_{2}-\mu_{2})(S_{n}-\mu_{n})\\\vdots &\vdots &\ddots &\vdots \\(S_{n}-\mu_{n})(S_{1}-\mu_{1})&(S_{n}-\mu_{n})(S_{2}-\mu_{2})&\cdots &(S_{n}-\mu_{n})^{2} \end{pmatrix}}
\\\\
&=\quad {\begin{pmatrix} \operatorname{Var} (S_{1})&\operatorname{Cov} (S_{1},S_{2})&\cdots &\operatorname{Cov} (S_{1},S_{n})\\ \operatorname{Cov} (S_{2},S_{1})&\operatorname{Var} (S_{2})&\cdots &\operatorname{Cov} (S_{2},S_{n})\\\vdots &\vdots &\ddots &\vdots \\ \operatorname{Cov} (S_{n},S_{1})&\operatorname{Cov} (S_{n},S_{2})&\cdots &\operatorname {Var} (S_{n}) \end{pmatrix}}
\end{align}
\]

Keep the details of this definition in mind!

The above matrix is the (variance-) covariance matrix. As Wikipedia tells you, some people use slightly different names for it. Later on I will refer to it simply as the covariance matrix (of a random vector).
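To make the definition a bit more tangible, here is a minimal Numpy sketch (the sample data and the mixing matrix M are assumed example values) which computes the covariance matrix of a sampled random vector once directly from the definition and once with np.cov:

```python
import numpy as np

# Sketch: covariance matrix of a sampled random vector, once written out
# directly as E[(S - mu)(S - mu)^T] and once via np.cov (which uses the
# same definition, by default with a 1/(N-1) normalization).
rng = np.random.default_rng(0)
N = 100_000

# assumed example: 3 correlated components built from independent normals
Z = rng.standard_normal((N, 3))
M = np.array([[1.0, 0.0, 0.0],
              [0.4, 1.0, 0.0],
              [0.2, 0.3, 1.0]])
S = Z @ M.T                               # rows = sample vectors s

mu = S.mean(axis=0)
D = S - mu
cov_manual = (D.T @ D) / (N - 1)          # sample version of E[(S-mu)(S-mu)^T]
cov_numpy = np.cov(S, rowvar=False)       # the same thing, computed by Numpy

print(np.allclose(cov_manual, cov_numpy))  # True
```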

Useful properties of the covariance matrix of a random vector

Note that the diagonal elements of the covariance matrix are just the variances of the individual component distributions, whilst the off-diagonal elements contain the pairwise covariances of the vector components. In case of a vector distribution with uncorrelated (or even independent) components Sj the matrix becomes diagonal:

\[
\begin{pmatrix}
(\sigma_1^s)^2 & 0 &\cdots & 0 \\
0 & (\sigma_2^s)^2 &\cdots & 0 \\
\vdots &\vdots &\ddots &\vdots \\
0 & 0 &\cdots & (\sigma_n^s)^2 \end{pmatrix}
\]


Other properties can be derived from the linearity of expectation values and the fact that

\[ \begin{align} \operatorname{Cov}\left(aX+bY,\,cW+dV\right) \,=\, ac*\operatorname{Cov}\left(X,\, W\right)\,&+\,ad*\operatorname{Cov}\left(X,\,V\right)\,\\ +\,bc*\operatorname{Cov}\left(Y,\,W\right)\,&+\,bd*\operatorname{Cov}\left(Y,\,V\right) \end{align}
\]
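A quick numerical cross-check of this bilinearity rule may be helpful; the four coupled variables and the coefficients below are just assumed example values.

```python
import numpy as np

# Sketch: numerical check of the bilinearity rule above with sampled,
# mutually coupled variables X, Y, W, V (assumed example construction).
rng = np.random.default_rng(1)
N = 500_000
base = rng.standard_normal((N, 4))
X = base[:, 0]
Y = 0.5 * base[:, 0] + base[:, 1]
W = base[:, 2] + 0.3 * base[:, 0]
V = base[:, 3] - 0.2 * base[:, 1]

a, b, c, d = 2.0, -1.0, 0.5, 3.0

def cov(u, v):
    return np.mean((u - u.mean()) * (v - v.mean()))

lhs = cov(a * X + b * Y, c * W + d * V)
rhs = a*c*cov(X, W) + a*d*cov(X, V) + b*c*cov(Y, W) + b*d*cov(Y, V)
# the two numbers agree up to floating point rounding, since the sample
# covariance is itself bilinear
print(lhs, rhs)
```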

It is relatively easy to show that the expectation value of a vector distribution, as defined above, behaves linearly with respect to a linear transformation of S with a matrix A:

\[
\operatorname{\mathbb{E}}\left( \pmb{\operatorname{A}S} + \pmb{b} \right) \, = \pmb{\operatorname{A}} \operatorname{\mathbb{E}}\left(\pmb{S} \right) \,+\, \pmb{b} \vphantom{(}
\]

And the covariance matrix Cov transforms as follows:

\[ \begin{align}
\operatorname{Cov}\left(\pmb{\operatorname{A}S}\right) \: &= \: \operatorname{\mathbb{E}}\left[\, \left( \pmb{\operatorname{A}} (\pmb{S} \,-\, \mu_S ) \right) \, \left( \pmb{\operatorname{A}} (\pmb{S} \,-\, \mu_S ) \right)^T \, \right] \\ &= \: \operatorname{\mathbb{E}}\left[\, \pmb{\operatorname{A}} ( \pmb{S} \,-\, \mu_S ) \, (\pmb{S} \,-\, \mu_S)^T \pmb{\operatorname{A}}^T \,\right] \\
&= \: \pmb{\operatorname{A}} \operatorname{\mathbb{E}}\left[\, ( \pmb{S} \,-\, \mu_S ) \, (\pmb{S} \,-\, \mu_S)^T \,\right] \pmb{\operatorname{A}}^T \\
\operatorname{Cov}\left(\pmb{\operatorname{A}S}\right) \: &= \: \pmb{\operatorname{A}} \operatorname{Cov}\left(\pmb{S}\right) \pmb{\operatorname{A}}^T \end{align}
\]

The intermediate steps for the Cov – and in particular the 3rd one – are in my opinion best understood when you just write the stuff down for e.g. (3×3)-matrices and carefully regard the Cov-definition for random vectors. You will see that the definition of Cov just allows for the right index juggling! The extension to more dimensions is straightforward and could formally be done by induction.
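If you prefer a numerical check over the index juggling: the short Numpy sketch below (the matrix A, the offset b and the parameters of S are assumed example values) verifies both transformation rules on sampled data.

```python
import numpy as np

# Sketch: empirical check that the mean transforms as A*mu + b and the
# covariance matrix as A Cov(S) A^T (assumed example values).
rng = np.random.default_rng(7)
N = 200_000
S = rng.multivariate_normal(mean=[1.0, -2.0, 0.5],
                            cov=[[2.0, 0.3, 0.0],
                                 [0.3, 1.0, 0.2],
                                 [0.0, 0.2, 0.5]],
                            size=N)
A = np.array([[1.0, 0.8, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])          # e.g. a shear matrix
b = np.array([0.5, 0.0, -1.0])

X = S @ A.T + b

print(np.allclose(X.mean(axis=0), A @ S.mean(axis=0) + b, atol=1e-2))
print(np.allclose(np.cov(X, rowvar=False),
                  A @ np.cov(S, rowvar=False) @ A.T, atol=1e-2))
```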

See e.g. also https://cds.nyu.edu/ wp-content/ uploads/ 2021/05/ covariance_matrix.pdf and https://stats.stackexchange.com/ questions/ 113700/ covariance-of-a-random-vector-after-a-linear-transformation.

Other useful properties are

\[
\begin{align}
\operatorname{Cov}\left(V_j,\, V_k\right) \,&=\, \operatorname{\mathbb{E}}\left(V_j \,V_k\right) \,-\, \operatorname{\mathbb{E}}\left(V_j\right)\operatorname{\mathbb{E}}\left(V_k\right) \\
\operatorname{Cov}\left(\pmb{S}\right) \,=\,
\operatorname{Cov}\left(\pmb{S},\, \pmb{S}\right) \,&=\, \operatorname{\mathbb{E}}\left(\pmb{S} \,\pmb{S}^T\right) \,-\, \operatorname{\mathbb{E}}\left(\pmb{S}\right)\operatorname{\mathbb{E}}\left(\pmb{S}\right)^T
\end{align}
\]

By the way, for two different random vectors X and Y the matrix

\[
\operatorname{Cov}\left(\pmb{X},\, \pmb{Y}\right) \,=\, \operatorname{\mathbb{E}}\left(\pmb{X} \,\pmb{Y}^T\right) \,-\, \operatorname{\mathbb{E}}\left(\pmb{X}\right)\operatorname{\mathbb{E}}\left(\pmb{Y}\right)^T
\]

is called the cross-covariance matrix.
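As a minimal sketch of this definition, the following lines compute the cross-covariance matrix of two coupled, sampled random vectors; the two mixing matrices are just assumed example values.

```python
import numpy as np

# Sketch: cross-covariance matrix of two sampled random vectors X and Y,
# written out as E(X Y^T) - E(X) E(Y)^T (assumed example data).
rng = np.random.default_rng(3)
N = 100_000
Z = rng.standard_normal((N, 2))
X = Z @ np.array([[1.0, 0.0], [0.5, 1.0]]).T    # 2-dim random vector
Y = Z @ np.array([[0.3, 1.0], [1.0, -0.2]]).T   # another 2-dim vector, coupled to X

cross_cov = (X.T @ Y) / N - np.outer(X.mean(axis=0), Y.mean(axis=0))
print(cross_cov)     # a 2x2 matrix, non-zero because X and Y share Z
```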

Remark on independence and correlation

Two univariate random variables X and Y are statistically (!) independent when their joint probability density, i.e. the density for the occurrence of a value pair (x, y), is the product of the individual densities for the occurrence of x and y:

\[
p_{X,Y}(x,y) \,=\, p_X(x)*p_Y(y)
\]

However, uncorrelatedness means something else, namely that the covariance of the two distributions is 0:

\[ \begin{align}
\operatorname{Cov}\left(X,\, Y\right) \,&=\, 0 \\
\Rightarrow \: \operatorname{\mathbb{E}}\left(X Y\right) \,&=\, \operatorname{\mathbb{E}}\left(X\right) \operatorname{\mathbb{E}}\left(Y\right)
\end{align}
\]

Independence and uncorrelatedness are two different (!) things and uncorrelatedness does in general not imply statistical independence!

Thus, unfortunately, the uncorrelatedness of a pair of components (Sj, Sk) of a random vector S does not automatically imply their independence!

We shall see later in this series that this is actually a bit different for a MND.
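A classic counter-example makes the general statement concrete: take X as a standard normal variable and Y = X^2. Then Y is completely determined by X, yet Cov(X, Y) = E(X^3) - E(X) E(X^2) = 0. A quick numerical check:

```python
import numpy as np

# Counter-example: X standard normal and Y = X^2 are clearly dependent
# (Y is a function of X), yet their covariance vanishes.
rng = np.random.default_rng(5)
X = rng.standard_normal(1_000_000)
Y = X**2

cov_xy = np.mean(X * Y) - X.mean() * Y.mean()
print("Cov(X, Y) ~", cov_xy)     # close to 0, despite full dependence
```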

Conclusion

In this post we have collected and clarified some basic terms which we will use in forthcoming texts about multivariate normal distributions. An important step was to describe a statistical vector distribution in the form of a random vector. We have also seen that a random vector can be subject to linear transformations.

We have discussed a limiting transition to probability density functions for statistical vector distributions. We understood that dependencies and correlations may make it difficult to derive the form of the probability density for the random vector from the probability density functions for the vector components.

In addition we have defined the mean value and the covariance matrix for a random vector. We saw how a linear transformation affects the covariance matrix.

In the next post we will apply our knowledge to a simple random vector based on Gaussian probability density functions without correlations. We will study what effects orthogonal and shearing transformations have on such a distribution.