Fun with shear operations and SVD – IV – Shearing of ellipses

This post requires Javascript to display formulas!

In the previous posts of this series we got acquainted with shear operations:

Fun with shear operations and SVD – I – shear matrices and examples created with Blender
Fun with shear operations and SVD – II – Shearing of rectangles and cubes with Python and Matplotlib
Fun with shear operations and SVD – III – Shearing of circles

Already established results for shearing a circle

Post III focused on the shearing of a circle centered in the Euclidean coordinate system [ECS] we worked with. The shear operation resulted in an ellipse inclined against the coordinate axes of our ECS. This was interesting with respect to four points:

  • A circle, which is centered in a chosen ECS, exhibits a continuous rotational symmetry (isotropy). This obviously allows for a decomposition of a shear operation into a sequence of two affine operations in the chosen ECS: a scaling operation (with different factors along the coordinate axes) followed by a rotation (or the other way round). Equivalently: We could switch to another specific ECS which is already rotated by a proper angle against our originally chosen ECS and just perform a scaling operation there.
    The rotation angle is determined by the shear parameter λ.
    This seems to stand in some contrast to the shearing of figures with only discrete rotational symmetries: We saw for rectangles and cubes that an additional rotation was required to replace the shear operation by a sequence of scaling and rotation operations.
  • Points (x, y) of circles and ellipses are described by quadratic forms in two dimensions (with some real coefficients α, β, γ, δ):
    \[
    \alpha \,x^2 \, + \, \beta \, x \, y \, + \, \gamma \, y^2 \:=\: \delta
    \]

    Quadratic forms play a general role in the mathematical description of conic sections. (Ellipses result from specific conic sections.)

  • Ellipses also result from projections of multi-dimensional ellipsoids onto two-dimensional coordinate planes. Multi-dimensional ellipsoids are described by quadratic forms in an ECS covering ℝn.
  • Hyper-surfaces for constant probability density values of multivariate normal vector distributions form multi-dimensional ellipsoids. Here we have a link to Machine Learning where key properties of certain objects are often ruled by Gaussian distributions.

From the first point we may expect that a shear operation applied to a multi-dimensional sphere will result in a multi-dimensional ellipsoid – and that such an operation could be replaced by scaling the original sphere (with different factors along the n coordinate axes of an n-dimensional ECS) followed by a rotation (or vice versa). We will explicitly investigate this for a 3-dimensional sphere in the next post.

If our assumption were true we would get a first glimpse of the fact that a general multivariate normal distribution can be created by applying a sequence of distinct affine (i.e. linear) operations to a spherical standard probability distribution. This is discussed in detail in another post-series in this blog.

What is a bit confusing at the moment is that a replacement of a shear operation by simpler affine operations in general seems to require at least two rotations, but only one when we work with centered isotropic bodies. We come back to this point when we discuss the decomposition of a shear matrix by the so-called SVD-procedure.

In the previous post of this series we used the radius of the circle and the shearing parameter λ to derive analytical expressions for the coordinates of special points with extremal values on our ellipse:

  • Points with maximal and minimal y-coordinate values.
  • Points with a maximal or minimal distance to the symmetry center of the ellipse, i.e. the end-points of the principal diameters of the ellipse.

From the fact that shearing does not change extremal values along the axis perpendicular to the shearing direction we could easily determine the lengths of the ellipse's principal axes and the inclination angle of the longer axis against the x-axis of our Euclidean coordinate system [ECS].

What do we have in addition? In another mini-series on ellipses

Properties of ellipses by matrix coefficients – I – Two defining matrices (and two more posts)

I have meanwhile described how the geometry of an ellipse is related to its quadratic form and the respective coefficients of a symmetric matrix. I call this matrix Aq. It forces the components of position vectors to fulfill an equation based on a quadratic polynomial. Furthermore, Aq's eigenvalues and eigenvectors define the lengths of the ellipse's principal axes and their inclination against the axes of our chosen ECS. The matrix coefficients in addition allow us to determine the coordinates of the points with extremal y-values on an ellipse. We will use these results later in the present post.

Objectives of this post: Shearing of a centered, rotated ellipse

In this post I want to show that shearing a given centered, but rotated original ellipse EO results in another ellipse ES with a different inclination angle and different sizes of the principal axes.

In addition we will derive the relations between the shearing parameter λS and the coefficients of the symmetric matrix \(\pmb{\operatorname{A}}_q^S \) that defines ES. I also provide formulas for the dependence of ES's geometrical properties on the shear parameter λS.

There are two basic prerequisites:

  1. We must show that the application of a shear transformation to the variables of the quadratic form which describes an ellipse EO results in another proper quadratic form and a related matrix \(\pmb{\operatorname{A}}_q^S \).
  2. The coefficients of the resulting quadratic form and of \(\pmb{\operatorname{A}}_q^S \) must fulfill a mathematical criterion for an ellipse.

We expect point 1 to be valid because a shear operation is just a linear operation.
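To make the first point plausible before the formal derivation: the coordinate substitution behind a shear is linear, so inserting the inversely sheared coordinates into the quadratic form of EO must again yield a quadratic polynomial. A minimal sympy sketch (the symbol names are mine; shear along the x-axis with parameter lam) illustrates this:

import sympy as sp

# Hypothetical symbols; lam stands for the shear parameter lambda_S in the text
x, y, lam = sp.symbols('x y lam', real=True)
al, be, ga = sp.symbols('alpha beta gamma', real=True)

# Quadratic form of the original ellipse E_O: alpha*x**2 + beta*x*y + gamma*y**2 (= delta)
q = al*x**2 + be*x*y + ga*y**2

# A shear along the x-axis maps (x, y) to (x + lam*y). Points of the sheared
# figure therefore fulfill the original form under the inverse substitution
# x -> x - lam*y.
q_sheared = sp.expand(q.subs(x, x - lam*y))

# The result is again a quadratic form in x and y:
print(sp.Poly(q_sheared, x, y).coeffs())
# [alpha, beta - 2*alpha*lam, alpha*lam**2 - beta*lam + gamma]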

To get some exercise we approach our goals by first looking at the simple case of shearing an axis-parallel ellipse before extending our considerations to general ellipses with an inclination angle against the coordinate axes of our chosen ECS.


Properties of ellipses by matrix coefficients – III – coordinates of points with extremal radii

This post requires Javascript to display formulas!

A centered, rotated ellipse can be defined by matrices which operate on position-vectors for points on the ellipse. The topic of this post series is the relation of the coefficients of such matrices to some basic geometrical properties of an ellipse. In the previous posts

Properties of ellipses by matrix coefficients – I – Two defining matrices
Properties of ellipses by matrix coefficients – II – coordinates of points with extremal y-values

we have found that we can use (at least) two matrix based approaches:

  • One reflects a combination of two affine operations applied to a unit circle. This approach led us to a non-symmetric matrix, which we called AE. Its coefficients ((a, b), (c, d)) depend on the lengths of the ellipse's principal axes and trigonometric functions of its rotation angle.
  • The second approach is based on coefficients of a quadratic form which describes an ellipse as a special type of a conic section. We got a symmetric matrix, which we called Aq.

We have shown how the coefficients α, β, γ of Aq and a further coefficient δ of the quadratic form can be expressed in terms of the coefficients of AE. Furthermore, we have derived equations for the lengths σ1, σ2 of the ellipse’s principal axes and the rotation angle by which the major axis is rotated against the x-axis of the Euclidean coordinate system [ECS] we work with. We have also found equations for the components of the position vectors to those points of the ellipse with maximum y-values. A major result was that the eigenvalues and eigenvectors of Aq completely control the ellipse’s properties.

In this post we determine the components of the vectors to the end-points of the ellipse’s principal axes in terms of the coefficients of Aq. Afterward we shall test our formulas by a Python program and plots for a specific example.

Reduced matrix equation for an ellipse

Our centered, but rotated ellipse is defined by a quadratic form, i.e. by a polynomial equation with quadratic terms in the components xe and ye of position vectors to points on the ellipse:

\[
\alpha\,x_e^2 \, + \, \beta \, x_e y_e \, + \, \gamma \, y_e^2 \:=\: \delta
\]

The quadratic polynomial can be formulated as a matrix operation applied to position vectors ve = (xe, ye)T. With the quadratic and symmetric matrix Aq

\[ \pmb{\operatorname{A}}_q \:=\:
\begin{pmatrix} \alpha & \beta / 2 \\ \beta / 2 & \gamma \end{pmatrix}
\]

we can rewrite the polynomial equation for the centered ellipse as

\[
\pmb{v}_e^T \circ \pmb{\operatorname{A}}_q \circ \pmb{v}_e \:=\: \delta \,=\, \sigma_1^2\, \sigma_2^2, \quad \operatorname{with}\: \pmb{v_e} \,=\, \begin{pmatrix} x_e \\ y_e \end{pmatrix}.
\]

Just to cover another relation you may find in books: We could have included the δ-term in a somewhat artificial (3×3)-matrix

\[
\pmb{\operatorname{A}}_q^e \:=\:
\begin{pmatrix} \alpha & \beta / 2 & 0 \\ \beta / 2 & \gamma & 0 \\ 0 & 0 & -\, \delta \end{pmatrix},
\]

and sandwiched this matrix between artificially extended position vectors to reproduce our definition equation:

\[
\left( x_e, \; y_e, \; 1 \right) \, \circ \, \pmb{\operatorname{A}}_q^e \, \circ \, \begin{pmatrix} x_e \\ y_e \\ 1 \end{pmatrix} \:=\: 0
\]

However, this formal aspect will not help much with solving the equations below. We want to describe the vectors to the principal axes' end-points by mathematical expressions that depend on α, β, γ, δ – and both matrices will of course deliver the same results. But: There are still two different approaches to achieve our objective.

Method 1 to determine the vectors to the principal axes’ end points

My readers have certainly noticed that we have already gathered all required information to solve our task. In the first post of this series we have performed an eigendecomposition of our symmetric matrix Aq. We found that the two eigenvectors of Aq for respective eigenvalues λ1 and λ2 point along the principal axes of our rotated ellipse:

\[ \begin{align}
\lambda_1 \: &: \quad \pmb{\xi_1} \:=\: \left(\, {1 \over \beta} \left( (\alpha \,-\, \gamma) \,-\, \left[\, \beta^2 \,+\, \left(\gamma \,-\, \alpha \right)^2\,\right]^{1/2} \right), \: 1 \, \right)^T \\
\lambda_2 \: &: \quad \pmb{\xi_2} \:=\: \left(\, {1 \over \beta} \left( (\alpha \,-\, \gamma) \,+\, \left[\, \beta^2 \,+\, \left(\gamma \,-\, \alpha \right)^2\,\right]^{1/2} \right), \: 1 \, \right)^T
\end{align}
\]

The T symbolizes a transposition operation. The eigenvalues are related to the Aq-coefficients by the following equations:

\[ \begin{align}
\lambda_1 \:&=\: {1 \over 2} \left(\, \left( \alpha \,+\, \gamma \right) \,+\, \left[ \beta^2 \,+\, \left(\gamma \,-\, \alpha \right)^2 \,\right]^{1/2} \,\right) \\
\lambda_2 \:&=\: {1 \over 2} \left(\, \left( \alpha \,+\, \gamma \right) \,-\, \left[ \beta^2 \,+\, \left(\gamma \,-\, \alpha \right)^2 \,\right]^{1/2} \,\right)
\end{align}
\]
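A quick sympy cross-check of these eigenvalue expressions (just a sketch; the symbols stand in for the Aq-coefficients):

import sympy as sp

al, be, ga = sp.symbols('alpha beta gamma', real=True)
A_q = sp.Matrix([[al, be/2], [be/2, ga]])

# Eigenvalues of the symmetric matrix A_q
for ev in A_q.eigenvals():
    print(sp.simplify(ev))
# alpha/2 + gamma/2 +- sqrt(beta**2 + (gamma - alpha)**2)/2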

These eigenvalues correspond to the squares of the lengths of the ellipse’s axes.

\[ \begin{align}
\lambda_1 \:&=\: \sigma_1^2 \\
\lambda_2 \:&=\: \sigma_2^2 \\
\end{align}
\]

Therefore, we can simply take the components of the normalized vectors

\[ \begin{align}
\lambda_1 \: &: \quad \pmb{\xi_1^n} \:=\: {1 \over \|\pmb{\xi_1}\|}\, \pmb{\xi_1} \\
\lambda_2 \: &: \quad \pmb{\xi_2^n} \:=\: {1 \over \|\pmb{\xi_2}\|}\, \pmb{\xi_2}
\end{align}
\]

and multiply them by the square-root of the respective eigenvalues to get the vector components of the end-points of the ellipse's axes:

\[ \begin{align}
\pmb{\xi_1}^{rmax} \:&=\: \sqrt{\lambda_1} * \pmb{\xi_1^n} \\
\pmb{\xi_2}^{rmax} \:&=\: \sqrt{\lambda_2} * \pmb{\xi_2^n}
\end{align}
\]

This is trivial regarding the algebraic operations, but results in lengthy (and boring) expressions in terms of the matrix coefficients. So I skip writing down all the terms. (We do not need them for setting up orderly numerical programs.)
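For numerical work the whole method fits into a few lines of Numpy. Below is a minimal sketch (function and variable names are my own). Note that with the normalization veT ∘ Aq ∘ ve = δ used above, the radius r along a normalized eigenvector with eigenvalue λ fulfills r²λ = δ; using r = sqrt(δ/λ) automatically pairs each eigen-direction with the correct axis length.

import numpy as np

def principal_axis_end_points(A_q, delta):
    # Eigendecomposition of the symmetric matrix A_q (eigenvalues in ascending order)
    lam, xi = np.linalg.eigh(A_q)
    # Along a normalized eigenvector xi_i the quadratic form gives r**2 * lam_i = delta,
    # i.e. the radius in that direction is r_i = sqrt(delta / lam_i)
    r = np.sqrt(delta / lam)
    # End points of the two principal axes (each determined up to a sign)
    return r[0] * xi[:, 0], r[1] * xi[:, 1]

# Example values taken from the plot section below (sigma_1 = 2, sigma_2 = 1, 60° rotation)
A_q = np.array([[ 3.25,       -1.29903811],
                [-1.29903811,  1.75      ]])
p_major, p_minor = principal_axis_end_points(A_q, delta=4.0)
print(p_major)   # ~ [1.0, 1.7320508]  -> major axis, length 2
print(p_minor)   # ~ [-0.8660254, 0.5] -> minor axis, length 1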

Remember that you could in addition replace (α, β, γ, δ) by coefficients (a, b, c, d) of matrix AE. See the first post of this series for the formulas. This would, however, produce even longer equation terms.

Equation for points with maximum radius values

We define again some convenience variables:

\[ \begin{align}
a_h \,& =\, {\alpha \over \gamma} \\
b_h \,& =\, {1 \over 2 } {\beta \over \gamma} \\
d_h \,& =\, {\delta \over \gamma} \\
g_h \,& =\, a_h \,-\, b_h^2 \\
f_h \,& =\, 1 \,+\, b_h^2 \,-\, g_h \phantom{\huge{(}}
\end{align}
\]

and

\[
\xi_h \,=\, { \left[\, 4\,d_h\, g_h\, b_h^2 \,+\, d_h\, f_h^2 \,\right] \over 2\, \left[\, 4\,b_h^2\,g_h^2 \,+\, g_h\,f_h^2\,\right] } \phantom{\Huge{(}} \\
\]
\[
\eta_h \,=\, { b_h^2 \, d_h^2 \over \left[\, 4\,b_h^2\,g_h^2 \,+\, g_h\,f_h^2\,\right] } \phantom{\Huge{(}}
\]

Dividing the ellipse equation by γ and solving the resulting quadratic equation for ye, we find:

\[ \begin{align}
y_e \:&=\: – b_h \, x_e \, \pm \, \left[\,d_h \,-\, \left( a_h \,-\, b_h^2 \right)\, x_e^2 \, \right]^{1/2} \\
\:&=\: – b_h \, x_e \, \pm \, \left[\,d_h \,-\, g_h\, x_e^2 \, \right]^{1/2} \phantom{\huge{(}}
\end {align}
\]

We pick the ye with the positive term in the following steps. (The way for the solution with the negative term in ye is analogous.) The square of ye is:

\[
y_e^2 \:=\: d_h \,+\, \left(b_h^2 \,-\, g_h\right)\, x_e^2 \,-\, 2 \, b_h \, x_e \,\left[d_h \,-\, g_h\,x_e^2 \right]^{1/2}
\]

To find an extremal value of the radius we differentiate and set the derivative to zero:

\[
{\partial \, \left(y_e^2 \,+\, x_e^2\right) \over \partial \, x_e} \:=\: 0 \: \Rightarrow
\]
\[ \begin{align}
& \left(\, 1\,+\, b_h^2 \,-\, g_h\,\right) \, x_e \,-\, b_h \, \left[\, d_h \,-\, g_h\, x_e^2 \, \right]^{1/2} \\
&+\, b_h\,g_h\,x_e^2 \, {1 \over \left[\, d_h \,-\, g_h \, x_e^2 \, \right]^{1/2} } \:=\: 0
\end{align}
\]

This results in

\[
f_h\, x_e \, \left[ d_h \,-\, g_h\, x_e^2\right]^{1/2} \:=\: b_h\,d_h \,-\, 2\, b_h\,g_h\,x_e^2 \, .
\]

Solution for xe-values of the end-points of the principal axes

We take the square of both sides and reorder terms to get

\[
\left[\, 4\,b_h^2\,g_h^2 \,+\, g_h\,f_h^2\,\right]\, x_e^4 \,-\, \left[\, 4\,d_h\, g_h\, b_h^2 \,+\, d_h\, f_h^2 \,\right]\, x_e^2 \,+\, b_h^2 \, d_h^2 \:=\: 0
\]
\[
x_e^4 \,-\, { \left[\, 4\,d_h\, g_h\, b_h^2 \,+\, d_h\, f_h^2 \,\right] \over \left[\, 4\,b_h^2\,g_h^2 \,+\, g_h\,f_h^2\,\right] } \, x_e^2 \:=\:
-\, { b_h^2 \, d_h^2 \over \left[\, 4\,b_h^2\,g_h^2 \,+\, g_h\,f_h^2\,\right] }
\]

With

\[ \begin{align}
\xi_h \,&=\, { \left[\, 4\,d_h\, g_h\, b_h^2 \,+\, d_h\, f_h^2 \,\right] \over 2\, \left[\, 4\,b_h^2\,g_h^2 \,+\, g_h\,f_h^2\,\right] } \\
\eta_h \,&=\, { b_h^2 \, d_h^2 \over \left[\, 4\,b_h^2\,g_h^2 \,+\, g_h\,f_h^2\,\right] } \phantom{\Huge{)^A}}
\end{align}
\]

we have

\[
x_e^4 \, -\, 2 \,\xi_h \,x_e^2 \:=\: – \eta_h
\]

By completing the square we get

\[
\left[\, x_e^2 \, -\, \xi_h \right]^2 \:=\: \xi_h^2 \:-\: \eta_h
\]

and find the solution

\[
x_e \:=\: \pm \, \sqrt{ \xi_h \,\pm\, \sqrt{\, \xi_h^2 \,-\,\eta_h \,} }
\]

A detailed analysis for the other ye-expression (see above) leads to further solutions for the coordinates (i.e. vector component values) of points with extremal values of the radii. These are the end-points of the principal axes of the ellipse:

\[ \begin{align}
x_{e1}^{rmax} \:&=\: -\, \sqrt{ \, \xi_h \,-\, \sqrt{\, \xi_h^2 \,-\,\eta_h \,} } \\
y_{e1}^{rmax} \:&=\: +\, \sqrt{ \, d_h \,-\, g_h \, \left(x_{e1}^{rmax}\right)^2 \,} \,-\, b_h \, x_{e1}^{rmax} \\
x_{e2}^{rmax} \:&=\: -\, x_{e1}^{rmax} \phantom{\huge{)}} \\
y_{e2}^{rmax} \:&=\: -\, y_{e1}^{rmax} \\
x_{e3}^{rmax} \:&=\: +\, \sqrt{ \, \xi_h \,+\, \sqrt{\, \xi_h^2 \,-\,\eta_h \,} } \phantom{\huge{)}} \\
y_{e3}^{rmax} \:&=\: +\, \sqrt{ \, d_h \,-\, g_h \, \left(x_{e3}^{rmax}\right)^2 \,} \,-\, b_h \, x_{e3}^{rmax} \\
x_{e4}^{rmax} \:&=\: -\, x_{e3}^{rmax} \phantom{\huge{)}} \\
y_{e4}^{rmax} \:&=\: -\, y_{e3}^{rmax}
\end {align}
\]

I leave it to the reader to expand the convenience variables into terms containing the original coefficients α, β, γ, δ.
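For readers who prefer code over expanded algebra: a compact Python sketch of these formulas (the function name is my own; the helper variables follow the convenience variables defined above):

import numpy as np

def extremal_radius_points(alpha, beta, gamma, delta):
    # Convenience variables as defined above
    a_h = alpha / gamma
    b_h = 0.5 * beta / gamma
    d_h = delta / gamma
    g_h = a_h - b_h**2
    f_h = 1.0 + b_h**2 - g_h
    den = 4.0 * b_h**2 * g_h**2 + g_h * f_h**2
    xi_h  = (4.0 * d_h * g_h * b_h**2 + d_h * f_h**2) / (2.0 * den)
    eta_h = (b_h**2 * d_h**2) / den
    root = np.sqrt(xi_h**2 - eta_h)
    x_e1 = -np.sqrt(xi_h - root)
    x_e3 = +np.sqrt(xi_h + root)
    # y_e-branch with the positive square root (see the derivation above)
    def y_plus(x):
        return np.sqrt(d_h - g_h * x**2) - b_h * x
    # The four end points of the principal axes
    return [( x_e1,  y_plus(x_e1)), (-x_e1, -y_plus(x_e1)),
            ( x_e3,  y_plus(x_e3)), (-x_e3, -y_plus(x_e3))]

# Example: the rotated ellipse of the plot section below
for p in extremal_radius_points(3.25, -2.59807621, 1.75, 4.0):
    print(p)
# ~ (-0.866, 0.5) and (0.866, -0.5)  -> end points of the minor axis
# ~ (1.0, 1.732) and (-1.0, -1.732)  -> end points of the major axis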

Plots

It is easy to write a Python program which calculates and plots the data of an ellipse and the special points with extremal values of the radii and extremal values of ye. The general steps I followed are listed below; a condensed code sketch follows after the list of steps.

Step 0: Create 100 points on a unit circle. Save the coordinates in Python lists (or Numpy arrays). Use Matplotlib's plot(x,y)-function to plot the points.

Step 1: Create an axis-parallel ellipse with values for the axes ha = 2.0 and hb = 1.0 along the x- and the y-axis of the Euclidean coordinate system [ECS]. Do this by applying a diagonal scaling matrix Dσ1, σ2 (see the first post of this series).

Step 2: Rotate the ellipse by π/3 (60°). Do this by applying a rotation matrix Rπ/3 to the position vectors of your ellipse (with the help of Numpy). Alternatively, you can first create the matrices, perform a matrix multiplication and then apply the resulting matrix to the position vectors of your unit circle.

(The limiting lines have been calculated by the formulas given above.)

Step 3: Determine the coefficients of the combined matrix AE = Rπ/3Dσ1, σ2.

For the coefficients ((a, b), (c, d)) of AE I got:

A_ell = 
[[ 1.         -0.8660254 ]
[ 1.73205081  0.5       ]]

Step 4: Determine the coefficients of the matrix Aq by the formulas given in the first post of this series. I got

A_q = 
[[ 3.25       -1.29903811]
[-1.29903811  1.75      ]]

For δ I got:

delta =  4.0

which is consistent with the length-values of the principal axes.

Step 5: Determine values for the eigenvalues λ1 and λ2 from the Aq-coefficients by the formulas given in the first post. Also calculate them by using Numpy's
eigenvalues, eigenvectors = numpy.linalg.eig(A_q). Theory tells us that these values should be exactly λ1 = 4 and λ2 = 1. I got

Eigenvalues from A_q:  lambda_1 = 4. :: lambda_2 = 1.

Step 6: Determine the components of the normalized eigenvectors with the help of numpy.linalg.eig(A_q). I got:

Components of normalized eigenvectors by theoretical formulas from A_q coefficients: 
ev_1_n :  -0.8660254037844386  :  0.5000000000000002
ev_2_n :  0.5000000000000001  :  0.8660254037844385

Eigenvectors from A_q via numpy.linalg.eig():
ev_1_num :  0.8660254037844387  :  -0.5000000000000001
ev_2_num :  0.5000000000000001  :  0.8660254037844387 

The deviation between ev_1_n and ev_1_num is just a factor of -1. This is fine, as eigenvectors are unique only up to a sign in all components.

Step 7: Calculate the sine of twice the rotation angle of our ellipse from the AE- and Aq-coefficients. The theoretical value is sin(2 π/3) = 0.8660254037844387. I got:

sin(2. * rotation angle) of major axis of the ellipse against the ECS x-axis from A_E coefficients: 
sin_2phi-A_E  =  0.8660254037844388

sin(2. * rotation angle) of major axis of the ellipse against the ECS x-axis from eigenvectors of A_q:
sin_2phi-ev_A_q =  0.8660254037844387 

sin(2. * rotation angle) of major axis of the ellipse against the ECS x-axis from A_q-coefficients:
sin_2phi-coeff-A_q =  0.8660254037844388 

Perfect!

Step 8: Plot the end-points of the normalized eigenvectors of Aq:

Note that in our example case the end-point of the eigenvector along the minor axis must be located exactly on the elliptic curve, as the ellipse's minor axis has a length of b = 1!

Step 9: Calculate the components of the vectors to data-points of the ellipse with maximal absolute ye-values from the Aq-coefficients, by the formulas given in the previous post. Plot these data-points (here in green color).

Step 10: Calculate the components of the vectors to data-points of the ellipse with maximal values of the radii with the help of the formulas presented in this post and plot these points in addition.
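The following condensed Python sketch covers the essential computations of steps 0 to 8 (steps 9 and 10 just reuse the formulas of this and the previous post). The variable names, and the shortcut of computing Aq as δ times the inverse of AEAET, are my own choices; this shortcut reproduces the Aq-numbers quoted above.

import numpy as np
import matplotlib.pyplot as plt

# Step 0: 100 points on a unit circle
t = np.linspace(0.0, 2.0 * np.pi, 100)
circle = np.vstack((np.cos(t), np.sin(t)))          # shape (2, 100)

# Steps 1 and 2: scale by (sigma_1, sigma_2) = (2, 1), then rotate by pi/3
sigma_1, sigma_2, phi = 2.0, 1.0, np.pi / 3.0
D = np.diag([sigma_1, sigma_2])
R = np.array([[np.cos(phi), -np.sin(phi)],
              [np.sin(phi),  np.cos(phi)]])

# Step 3: combined matrix A_E, applied to the unit circle
A_E = R @ D
ellipse = A_E @ circle
print("A_ell =\n", A_E)

# Step 4: A_q and delta; with v = A_E u and |u| = 1 one gets
# v^T (A_E A_E^T)^{-1} v = 1, i.e. A_q = delta * inv(A_E A_E^T)
delta = (sigma_1 * sigma_2)**2
A_q = delta * np.linalg.inv(A_E @ A_E.T)
print("A_q =\n", A_q, "\ndelta = ", delta)

# Steps 5 and 6: eigenvalues and normalized eigenvectors of A_q
lam, xi = np.linalg.eig(A_q)
print("eigenvalues: ", lam)                         # 4.0 and 1.0
print("eigenvectors (columns):\n", xi)

# Step 7: sin(2 * rotation angle) from the eigenvector of the major axis;
# with r**2 * lam = delta the smaller eigenvalue belongs to the longer axis
i_maj = np.argmin(lam)
phi_e = np.arctan2(xi[1, i_maj], xi[0, i_maj])
print("sin(2*phi) = ", np.sin(2.0 * phi_e))         # ~ 0.8660254

# Step 8: plot the ellipse and the normalized eigenvectors
fig, ax = plt.subplots()
ax.plot(ellipse[0], ellipse[1])
ax.plot([0.0, xi[0, 0]], [0.0, xi[1, 0]])
ax.plot([0.0, xi[0, 1]], [0.0, xi[1, 1]])
ax.set_aspect('equal')
plt.show()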

Conclusion

In this mini-series of posts we have performed some small mathematical exercises with respect to centered and rotated ellipses. We have calculated basic geometrical properties of such ellipses from the coefficients of matrices which define ellipses in algebraic form. Linear Algebra helped us to understand that the eigenvectors and eigenvalues of a symmetric matrix, whose coefficients stem from a quadratic equation (for a conic section), control both the orientation and the lengths of the ellipse’s axes completely.

This knowledge is useful in some Machine Learning [ML] context where elliptic data appear as projections of multivariate normal distributions. Multivariate Gaussian probability functions control properties of a lot of natural objects. Experience shows that certain types of neural networks may transform such data into multivariate normal distributions in latent spaces. An evaluation of the numerical data coming from such ML-experiments often delivers the coefficients of defining matrices for ellipses.

In my blog I now return to the study of shear operations applied to circles, spheres, ellipses and 3-dimensional ellipsoids. Later I will continue with the study of multivariate normal distributions in latent spaces of Autoencoders. For both of these topics the knowledge we have gathered regarding the matrices behind ellipses will help us a lot.

 

Statistical vector generation for multivariate normal distributions – I – multivariate and bi-variate normal distributions from CAEs

This post requires Javascript to display formulas!

Convolutional Autoencoders and multivariate normal distributions

Experiments such as my own on convolutional Autoencoders [CAE] show: A CAE maps a training set of human face images (e.g. CelebA) onto an approximate multivariate vector distribution in the CAE's latent space Z. Each image corresponds to a point (z-point) and a corresponding vector (z-vector) in the CAE's multidimensional latent space. More precisely, the results of numerical experiments showed:

The multidimensional density function which describes the inner dense core of the z-point distribution (containing more than 80% of all points) was (aside from normalization factors) equivalent to the density function of a multivariate normal distribution [MND] for the respective z-vectors in a Euclidean coordinate system.

For results of my numerical CAE-experiments see
Autoencoders and latent space fragmentation – X – a method to create suitable latent vectors for the generation of human face images
and related previous posts in this blog. After the removal of some outliers beyond a high sigma-level (≥ 3) of the original distribution the remaining core distribution fulfilled conditions of standard tests for multivariate normal distributions like the Shapiro-Wilk test or the Henze-Zirkler test.
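As a hedged illustration of such tests: scipy offers the Shapiro-Wilk test for the univariate marginals, and the pingouin package provides an implementation of the Henze-Zirkler test. The array Z below is only a stand-in for the real z-vector samples:

import numpy as np
from scipy.stats import shapiro
import pingouin as pg   # provides multivariate_normality() (Henze-Zirkler)

# Stand-in for the core of a z-vector distribution (outliers already removed)
rng = np.random.default_rng(0)
Z = rng.multivariate_normal(mean=np.zeros(4),
                            cov=np.diag([4.0, 2.0, 1.0, 0.5]),
                            size=2000)

# Shapiro-Wilk per component: necessary, but not sufficient for a MND
for j in range(Z.shape[1]):
    stat, p = shapiro(Z[:, j])
    print(f"V_{j}: Shapiro-Wilk p = {p:.3f}")

# Henze-Zirkler test of the full multivariate distribution
hz, pval, normal = pg.multivariate_normality(Z, alpha=0.05)
print("Henze-Zirkler: ", hz, pval, normal)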

After a normalization with an appropriate factor the continuous density functions controlling the multivariate vector distribution can be interpreted as a probability density function [p.d.f.]. The components vj (j=1, 2,…, n) of the vectors to the z-points are regarded as logically separate, but not uncorrelated variables. For each of these variables a component specific value distribution Vj is given. All these marginal distributions contribute to a random vector distribution V, in our case with the properties of a MND:

\[ \boldsymbol{V} \: = \: \left( \, V_1, \, V_2, \, ….\, V_n\, \right) \: \sim \: \boldsymbol{\mathcal{N_n}} \, \left( \, \boldsymbol{\mu} , \, \boldsymbol{\Sigma} \, \right), \\ \quad \mbox{with} \: \boldsymbol{\mathcal{N_n}} \: \mbox{symbolizing a MND in an n-dimensional space}
\]

μ is a vector with all mean values μj of the Vj component distributions as its components. Σ abbreviates the covariance matrix relating the distributions Vj with one another.

The point distribution of a CAE's MND forms a complex rotated multidimensional ellipsoid with its center somewhere off the origin in the latent space. The latent space itself typically has many dimensions. In the case of my numerical experiments the number of dimensions was n ≥ 256. The number of sample vectors used was between 80,000 and 200,000 – enough data to approximate the vector distribution by a continuous density function. The densities for the Vj-distributions formed smooth Gaussian functions (for a reasonable sampling interval). But one has to be careful: The fact that the Vj have a Gaussian form is not a sufficient condition for a MND. (See the next post.) But if a MND is given, all Vj have a Gaussian form.

Generative use of MNDs in multidimensional latent spaces of high dimensionality

When we want to use a CAE as a generative tool we need to solve a problem: We must create statistical vectors which point into the (multidimensional) volume of our point distribution in the latent space of the encoding algorithm. Only such vectors provide useful information to the Decoder of the CAE. A full multivariate normal distribution and the contour hypersurfaces of its multidimensional density function are difficult to analyze and to control when developing a proper numerical algorithm. Therefore I want to reduce the problem of vector generation to a sequence of viewable and controllable 2-dimensional problems. How can this be achieved?

A central property of a multivariate normal distribution helps: Any sub-selection of m vector-component distributions forms a multivariate normal distribution, too (see below). For m = 2 and for vector components indexed by (j, k) with respective distributions Vj, Vk we get a so-called "bivariate normal distribution" [BND]:

\[ V_{jk} \: \sim \: \mathcal{N}_2\left(\, (\mu_j, \, \mu_k), \; \begin{pmatrix} \sigma_j^2 & \sigma_{jk} \\ \sigma_{jk} & \sigma_k^2 \end{pmatrix} \, \right)
\]

A MND has n*(n-1)/2 such subordinate BNDs. The 2-dim density function of a bivariate normal distribution

\[ g_{jk}\,\left( \, v_j, \, v_k\, \right) \: = \: g_{jk}\,\left( \, v_j, \, v_k, \, \mu_j, \, \mu_k, \, \sigma_j, \sigma_k, \, \sigma_{jk}, … \right)
\]

for vector component values vj, vk defines a point density of the sample data in the (j,k)-coordinate plane of the Euclidean coordinate system in which the MND is described. The density functions of the marginal distributions Vj showed the typical Gaussian form of a univariate normal distribution.

The density function of a BND has some interesting mathematical properties. Among other things: The contour lines of constant density of a BND's density function form ellipses. This is illustrated by the following plots showing such contour lines for selected pairs (Vj, Vk) of a real point-distribution in a 256-dimensional latent space. The point distribution was created by a CAE in its latent space for the CelebA dataset.

Contour lines for selected (j,k)-pairs. The thick lines stem from theory and calculated correlation coefficients of the univariate distributions.

The next plot shows the contours of selected vector-component pairs after a PCA-transformation of the full MND. (Main ellipse axes are now aligned with the axes of the PCA-coordinate system):

These ellipses with axes along the coordinate axes are relatively easy to handle. They can be used for vector creation. But they require a full PCA transformation of the MND-distribution, a PCA-analysis for complexity reduction and an application of the inverse PCA-transformation. The plot below shows the point-density compared to a 2.2-σ confidence ellipse. The orange points are the results of a proper statistical numerical vector generation algorithm based on a PCA-transformation of the MND.

See my post quoted above for the application of a PCA-transformation of the multidimensional MND for vector creation.

However, we get the impression that we could also use these rotated ellipses in projections of the MND onto coordinate planes of the original latent space system directly to impose limiting conditions on the component values of statistical vectors pointing to an inner region of the MND. Of course, a generated statistical vector must then comply with the conditions of all such ellipses. This requires an analysis and combined use of the ellipses of all of the subordinate BNDs of the original MND during an iterative or successive definition of the values for the vector components.

Objective of this post series

In my last post about CAEs (see the link given above) I have explicitly asked the question whether one can avoid performing a full PCA-transformation of the MND when creating statistical vectors pointing to a defined inner region of a MND.

The objective of this post series is to prove the answer: Yes, we can. And we will use the BNDs resulting from projections of the original MND onto coordinate planes. We will in particular explore the properties of the confidence ellipses of the n*(n-1)/2 BNDs. As said: These ellipses are rotated against the coordinate system's axes. We will have to deal with this in detail. We will also use properties of their 1-dimensional marginal distributions (projections onto the coordinate axes, i.e. the Vj).

In addition we need to prepare a variety of formulas before we are able to define a numerical procedure for the vector generation without a full PCA-transformation of the MND with around 100,000 vectors. Some of the derived formulas will also allow for a deeper insight into how the multiple BNDs of a MND are related to each other and to confidence hypersurfaces of the MND.

Ellipses in general lead to equations governed by quadratic or fourth-power polynomials. We will in addition use some elementary correlation formulas from statistics and, for some exercises, a simple optimization via derivatives. The series can be regarded as an excursion into some of the math which governs bivariate distributions resulting from a MND.

As MNDs may also be the result of other generative Machine Learning algorithms in respective latent spaces, the whole approach to statistical vector generation for such cases should be of general interest. Note also that the so-called "central limit theorem" almost guarantees the appearance of MNDs in many multivariate datasets with sufficiently large samples and value dependencies on many singular observations.

Distributions of a variety of variables may result in a MND if the variables themselves depend on many individual observables with limited covariance values of their distributions. In particular, pairwise linearly correlated Gaussian density distributions of individual variables (seen as vector components) may constitute a MND if the conditional probabilities fulfill some rules. We will see a glimpse of this in 2 dimensions when we analyze integrals over Gaussians in the bivariate normal case.

Other approaches to statistical vector generation?

Well, we could try to reconstruct the multidimensional density function of the MND. This is a challenge which appears in some problems of pure statistics, but also in experimental physics. See e.g. a paper of Rafey Anwar, Madeline Hamilton, Pavel M. Nadolsky (2019, Department of Physics, Southern Methodist University, Dallas; https://arxiv.org/pdf/1901.05511.pdf). Then we would have to find the elements of the (inverse) covariance matrix or – equivalently – the elements of a multidimensional rotation matrix. But the most efficient algorithms to get the matrix coefficients again work with projections onto coordinate planes. I prefer to use properties of the ellipses of the bivariate distributions directly.

Note that using the multidimensional density function of the MND directly is not of much help if we want to keep the vectors' end points within a defined multidimensional inner region of the distribution. E.g.: You want to limit the vectors to some confidence region of the MND, i.e. to keep them inside a certain multidimensional ellipsoidal contour hyper-surface. The BND-ellipses in the coordinate planes reflect the multidimensional ellipsoidally shaped contour hypersurfaces of the full distribution. Actually, when we vertically project such a multidimensional contour hypersurface onto a coordinate plane, then the outer 2-dim border line coincides with a contour ellipse of the respective BND. (This is due to properties of a MND. We will come back to this in a future post.) The problem of properly limiting individual vector component values is thus again best solved by analyzing properties of the BNDs.

Steps, methods, mathematical level

As a first step I will, for the sake of completeness, write down the formula for a multivariate normal distribution and discuss a bit its mathematical construction from uncorrelated univariate normal distributions. I will also list some basic properties of a MND (without proof!). These properties will justify our approach to create statistical vectors pointing into a defined inner region of the MND by investigating projected contour ellipses of all subordinate BNDs. As a special aspect I want to make it at least plausible why the projected contour ellipses define infinitesimal regions of the same relative probability level as their multidimensional counterparts – namely the multidimensional ellipsoidal hypersurfaces which were projected onto coordinate planes.

Then as a first productive step I want to motivate the specific mathematical form of the probability density function [p.d.f.] of a bivariate normal distribution. In contrast to many of the math papers I have read on the topic I want to use a symmetry argument to derive the basic form of the p.d.f. I will point out an important, but plausible assumption about conditional distributions. An analogous assumption on the multidimensional level is central for the properties of a MND.

As the distributions Vj and Vk can be correlated, we then want to understand the impact of the correlation coefficients on the parameters of the 2-dimensional density function. To achieve this I will again derive the density function by using our previous central assumption and some simple relations between the expectation values of the two constituting univariate distributions in the linear correlation regime. This concludes the part of the series where we get familiar with BNDs.

Furthermore we are interested in features and consequences of the 2-dimensional density functions. The contour lines of the 2-dim density function are ellipses – rotated by some specific angle. I will look at a formal mathematical process to construct such ellipses – in particular confidence ellipses. I will refer to the results Carsten Schelp has provided in an Internet article on this topic.

His construction process starts with a basic ellipse, which I will call the base correlation ellipse [BCE]. The lengths of the axes of this ellipse are given by the eigenvalues of the covariance matrix of the standardized marginal distributions constituting the BND. The main axes of this elementary ellipse are in addition aligned with the two selected axes of a basic Euclidean coordinate system in which the bivariate distribution is defined. The lengths of the BCE's main axes can be shown to depend on the correlation coefficient of the two vector component distributions Vj and Vk. This coefficient also appears in the precision matrix of the BND. Points on the base correlation ellipse can be mapped with two steps of an affine transformation onto points on the real contour ellipses, in particular to points of the confidence ellipses.
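A minimal Matplotlib sketch of this construction, following Schelp's published recipe (the sample data are hypothetical stand-ins for a (Vj, Vk)-pair): for standardized marginals the BCE has half-axes sqrt(1+ρ) and sqrt(1−ρ), and an affine transformation – a 45° rotation, a scaling with n_std·σj and n_std·σk, and a translation to the mean point – maps it onto the confidence ellipse.

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Ellipse
import matplotlib.transforms as transforms

def confidence_ellipse(x, y, ax, n_std=2.2, **kwargs):
    # Pearson correlation coefficient of the two marginal distributions
    cov = np.cov(x, y)
    pearson = cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1])
    # Base correlation ellipse [BCE]: axis-parallel, depends on rho only
    r_x, r_y = np.sqrt(1.0 + pearson), np.sqrt(1.0 - pearson)
    bce = Ellipse((0.0, 0.0), width=2.0*r_x, height=2.0*r_y,
                  facecolor='none', **kwargs)
    # Affine mapping: rotate by 45 deg, scale with n_std * sigma, shift to the mean
    transf = (transforms.Affine2D()
              .rotate_deg(45.0)
              .scale(n_std * np.sqrt(cov[0, 0]), n_std * np.sqrt(cov[1, 1]))
              .translate(np.mean(x), np.mean(y)))
    bce.set_transform(transf + ax.transData)
    return ax.add_patch(bce)

# Hypothetical correlated sample standing in for a (V_j, V_k)-pair
rng = np.random.default_rng(42)
data = rng.multivariate_normal([1.0, -0.5], [[2.0, 1.2], [1.2, 1.0]], size=10000)
fig, ax = plt.subplots()
ax.scatter(data[:, 0], data[:, 1], s=0.5)
confidence_ellipse(data[:, 0], data[:, 1], ax, n_std=2.2, edgecolor='red')
ax.set_aspect('equal')
plt.show()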

The whole construction process is of immense help when designing visualization programs for the contour ellipses of our distribution with many (around 100,000) individual vectors. The process itself gives us some direct geometrical insights. It also helps to avoid a numerical solution of the usual eigenvector-problems when answering some specific questions about the rotated contour ellipses. Normally we solve an eigenvalue-problem for the covariance matrix of the multi- or the many subordinate bi-variate distributions to get precise information about contour ellipses. This corresponds to a transformation of the distributions to a new coordinate system whose axes are aligned with the main axes of the ellipses. Numerically this transformation is directly related to a PCA transformation of the vector distributions. However, such a PCA-transformation can be costly in terms of CPU time.

Instead, we only need a numerical determination of all the mutual correlation coefficients of the univariate marginal distributions of the MND. Then the eigenvalue problem on the BND-level is already solved analytically. We therefore perform neither a full numerical PCA analysis of the MND with a multidimensional rotation of the roughly 100,000 sample vectors, nor an analysis of explained variance ratios to determine the most important PCA components for dimensionality reduction, nor a numerical PCA analysis of the individual BNDs.

Most important: Our problem of vector generation is formulated in the original latent space coordinate system and it gets a direct solution there. The nice thing is that Schelp’s construction mechanism reduces the math to the solution of quadratic polynomial equations for the BNDs. The solutions of those equations, which are required for our ultimate purpose of vector generation, can be stated in an explicit form.

Therefore, the math in this series will mostly remain on high school level (at least at a level given when I was young). Actually, it was fun to dive back into exercises reminding me of school 50 years ago. I hope the interested reader has some fun, too.

Solutions to some particular problems with respect to the confidence ellipses of the MND’s BNDs

In particular we will solve the following problems:

  • Problem 1: The two points on the BCE-ellipse with the same vj-value are not mapped onto points with the same vj-value on the confidence ellipse. We therefore derive the coordinates of points on the BCE-ellipse that give us one and the same vj-value on the real confidence ellipse.
  • Problem 2: Plots for a real MND vector distribution indicate that all (n-1) confidence ellipses of distribution pairs of a common Vj with other marginal distributions Vk (for the same confidence level and with k ≠ j) have a common tangent parallel to one coordinate axis. We will derive the maximum vj-value for all ellipses of (j,k)-pairs of vector component distributions and prove that it is identical for all k. This will define the common interval of allowed vj-component-values for a bunch of confidence ellipses for all (Vj, Vk)-pairs with a common Vj.
  • Problem 3: The BCE-ellipses for a common j-, but different k-values depend on different values of the correlation coefficients ρj,k of Vj with its various Vk counterparts. Therefore we need a formula that relates a point on the BCE-ellipse leading to a concrete vj-value of the mapped point on the confidence ellipse of a particular (Vj, Vk)-pair to respective points on other BCE-ellipses of a different (Vj, Vm)-pair with the same resulting vj-value on their confidence ellipses. I will derive such a formula. It will help us to apply multiple conditions onto the vector component values.
  • Problem 4: As a supplemental exercise we will derive a mathematical expression for the sizes of the main axes and the rotation angle of the ellipses. We should, of course, get values that are identical to the results of the eigenvalue-problem for the correlation matrix (describing a PCA coordinate transformation). This gives us additional confidence in Schelp's approach.

In the end we can use our results to define a numerical algorithm for the direct creation of vectors pointing to a defined inner region of the multivariate normal distribution. As said, this algorithm does not require a costly PCA transformation of the full MND or many, namely n*(n-1)/2, such PCA-transformations of its BNDs.

I intend to visualize all results with the help of a concrete multivariate example distribution created by a CAE for the CelebA dataset. The plots will use Schelp's construction algorithm for the confidence ellipses extensively.

Conclusion and outlook

Convolutional Autoencoders create approximate multivariate normal distributions [MND] for certain input data (with Gaussian pattern properties) in their latent space. MNDs appear in other contexts of machine learning and statistics, too. For evaluation and generative purposes one may need statistical vectors with end points inside a region bounded by a multidimensional hypersurface corresponding to a certain confidence level and a certain constant density value of the MND's density function. These hypersurfaces are multidimensional ellipsoids.

We have the hope that we can use mathematical properties of the MND's subordinate bivariate normal distributions [BNDs] to create statistical vectors with end points inside the multidimensional confidence ellipsoids of a MND. Typically such an ellipsoid resides off the origin of the latent space's coordinate system and the ellipsoid's main axes are rotated against the axes of the coordinate system. We intend to base the confining conditions for the components of the desired statistical vectors on correlation coefficients of the marginal vector component distributions. Our numerical algorithm should avoid a full PCA-transformation of the multidimensional vector distribution.

In the next post of this series I give a formula for the density function of a multivariate normal distribution. In addition I will list some basic properties which justify the vector generation approach via bivariate normal distributions.