Properties of ellipses by matrix coefficients – I – Two defining matrices

This post requires Javascript to display formulas!

For my two current post series on multivariate normal distributions [MNDs] and on shear operations

Multivariate Normal Distributions – II – random vectors and their covariance matrix
Fun with shear operations and SVD

some geometrical and algebraic properties of ellipses are of interest. Geometrically, we think of an ellipse in terms of its two perpendicular principal axes, its focal points, its ellipticity and its rotation angle with respect to a coordinate system. As elliptic data appear in many contexts of physics, chaos theory, engineering, optics … ellipses are well studied mathematical objects. So, why a post about ellipses in the Machine Learning section of a blog?

In my present working context ellipses appear as a side result of statistical multivariate normal distributions [MNDs]. The projections of multidimensional contour hyper-surfaces of a MND within the ℝn onto coordinate planes of a Euclidean Coordinate System [ECS] result in 2-dimensional ellipses. These ellipses are typically rotated against the axes of the ECS – and their rotation angles reflect data correlations. The general relation of statistical vector data to projections of multidimensional MNDs is somewhat intricate.

Data produced in numerical experiments, e.g. in a Machine Learning context, most often do not give you the geometrical properties of ellipses, which some theory may have predicted, directly. Instead you may get averaged values of statistical vector distributions which correspond to a kind of algebraic coefficients. These coefficients can often be regarded as elements of a matrix. The mathematical reason is that ellipses can in general be defined by matrices operating on position vectors. In particular: Coefficients of quadratic polynomial expressions used to describe ellipses as conic sections correspond to the coefficients of a matrix operating on position vectors.

So, when I was confronted with multidimensional MNDs and their projections onto coordinate planes, the following questions became interesting:

  • How can one derive the lengths σ1, σ2 of the perpendicular principal axes of an ellipse from data for the coefficients of a matrix which defines the ellipse by a polynomial expression?
  • By which formula do the matrix coefficients provide the inclination angle of the ellipse’s primary axes with the x-axis of a chosen coordinate system?

You may have to dig a bit to find correct and reproducible answers in your math books. Regarding the resulting mathematical expressions I have had some bad experiences with ChatGPT. But as a former physicist I take the above questions as a welcome exercise in solving quadratic equations and doing some linear algebra. So, for those of my readers who are a bit interested in elementary math I want to answer the posed questions step by step and indicate how one can derive the respective formulas. The level is moderate – you need some knowledge in trigonometry and/or linear algebra.

Centered ellipses and two related matrices

Below I regard ellipses whose centers coincide with the origin of a chosen ECS. For our present purpose we thus get rid of some boring linear terms in the equations we have to solve. We do not lose much general validity by this step: Results for an off-center ellipse follow from applying a simple translation operation to the resulting vector data. But I admit: Your (statistical) data must give you some information about the center of your ellipse. We assume that this is the case.

Our ellipses can, however, be rotated with respect to a chosen ECS. I.e., their longer principal axes may be inclined by some angle φ towards the x-axis of our ECS.

There are actually two different ways to define a centered ellipse by a matrix:

  • Alternative 1: We define the (rotated) ellipse by a matrix AE which results from the (matrix) product of two simpler matrices: AE = RφDσ1, σ2. Dσ1, σ2 corresponds to a scaling operation applied to position vectors for points located on a centered unit circle. Rφ describes a subsequent rotation. AE summarizes these geometrical operations in a compact form.
  • Alternative 2: We define the (rotated) ellipse by a matrix Aq which combines the x- and y-elements of position vectors in a polynomial equation with quadratic terms in the components (see below). The matrix defines a so called quadratic form. Geometrically interpreted, a quadratic form describes an ellipse as a special case of a conic section. The coefficients of the polynomial and the matrix must, of course, fulfill some particular properties.

While it is relatively simple to derive the matrix elements from known values for σ1, σ2 and φ it is a bit harder to derive the ellipse’s properties from the elements of either of the two defining matrices. I will cover both matrices in this post. For many practical purposes the derivation of central elliptic properties from given elements of Aq is more relevant and thus of special interest in the following discussion.

Matrix AE of a centered and rotated ellipse: Scaling of a unit circle followed by a rotation

Our starting point is a unit circle C whose center coincides with our ECS’s origin. The components of vectors vc to points on the circle C fulfill the following conditions:

\[
\pmb{C} \::\: \left\{ \pmb{v}_c \:=\: \begin{pmatrix} x_c \\ y_c \end{pmatrix} \:=\: \begin{pmatrix} \operatorname{cos}(\psi) \\ \operatorname{sin}(\psi) \end{pmatrix}, \quad 0\,\leq\,\psi\, \le 2\pi \right\}
\]

and

\[
x_c^2 \, +\, y_c^2 \,=\; 1
\]

We define an ellipse Eσ1, σ2 by the application of two linear operations to the vectors of the unit circle:

\[
\pmb{E}_{\sigma_1, \, \sigma_2} \::\: \left\{ \, \pmb{v}_e \:=\: \begin{pmatrix} x_e \\ y_e \end{pmatrix} \:=\: \pmb{\operatorname{R}}_{\phi} \circ \pmb{\operatorname{D}}_E \circ \pmb{v_c} , \quad \pmb{v_c} \in \pmb{C} \, \right\}
\]

DE is a diagonal matrix which describes a stretching of the circle along the ECS-axes, and Rφ is an orthogonal rotation matrix. The stretching (or scaling) of the vector-components is done by

\[
\pmb{\operatorname{D}}_E \:=\: \begin{pmatrix} \sigma_1 & 0 \\ 0 & \sigma_2 \end{pmatrix},
\]
\[
\pmb{\operatorname{D}}_E^{-1} \:=\: \begin{pmatrix} {1 \over \sigma_1} & 0 \\ 0 & {1 \over \sigma_2} \end{pmatrix},
\]

The coefficients σ1, σ2 obviously define the lengths of the principal axes of the yet unrotated ellipse. To be more precise: σ1 is half of the diameter in x-direction, σ2 is half of the diameter in y-direction.

The subsequent rotation by an angle φ against the x-axis of the ECS is done by

\[
\pmb{\operatorname{R}}_{\phi} \:=\:
\begin{pmatrix} \operatorname{cos}(\phi) & -\,\operatorname{sin}(\phi) \\ \operatorname{sin}(\phi) & \operatorname{cos}(\phi)\end{pmatrix}
\:=\: \begin{pmatrix} u_1 & -\,u_2 \\ u_2 & u_1 \end{pmatrix}
\]
\[
\pmb{\operatorname{R}}_{\phi}^T \:=\: \pmb{\operatorname{R}}_{\phi}^{-1} \:=\: \pmb{\operatorname{R}}_{-\,\phi}
\]

The combined linear transformation results in a matrix AE with coefficients ((a, b), (c, d)):

\[ \begin{align}
\pmb{\operatorname{A}}_E \:=\: \pmb{\operatorname{R}}_{\phi} \circ \pmb{\operatorname{D}}_E \:=\:
\begin{pmatrix} \sigma_1\,u_1 & -\,\sigma_2\,u_2 \\ \sigma_1\,u_2 & \sigma_2\,u_1 \end{pmatrix} \:=\:: \begin{pmatrix} a & b \\ c & d \end{pmatrix}
\end{align}
\]

This is the first set of matrix coefficients we are interested in.

Note:

\[ \pmb{\operatorname{A}}_E^{-1} \:=\:
\begin{pmatrix} {1 \over \sigma_1} \,u_1 & {1 \over \sigma_1}\,u_2 \\ -{1 \over \sigma_2}\,u_2 & {1 \over \sigma_2}\,u_1 \end{pmatrix}
\]
\[
\pmb{v}_e \:=\: \begin{pmatrix} x_e \\ y_e \end{pmatrix} \:=\: \pmb{\operatorname{A}}_E \circ \begin{pmatrix} x_c \\ y_c \end{pmatrix}
\]
\[
\pmb{v}_c \:=\: \begin{pmatrix} x_c \\ y_c \end{pmatrix} \:=\: \pmb{\operatorname{A}}_E^{-1} \circ \begin{pmatrix} x_e \\ y_e \end{pmatrix}
\]

We use

\[ \begin{align}
u_1 \,&=\, \operatorname{cos}(\phi),\quad u_2 \,=\,\operatorname{sin}(\phi), \quad u_1^2 \,+\, u_2^2 \,=\, 1 \\
\lambda_1 \,&: =\, \sigma_1^2, \quad\quad \lambda_2 \,: =\, \sigma_2^2
\end{align}
\]

and find

\[ \begin{align}
a \,&=\, \sigma_1\,u_1, \quad b \,=\, -\, \sigma_2\,u_2, \\
c \,&=\, \sigma_1\,u_2, \quad d \,=\, \sigma_2\,u_1
\end{align}
\]
\[
\operatorname{det}\left( \pmb{\operatorname{A}}_E \right) \:=\: a\,d \,-\, b\,c \:=\: \sigma_1\, \sigma_2
\]

σ1 and σ2 are factors which give us the lengths of the principal axes of the ellipse. σ1 and σ2 have positive values. We therefore demand:

\[
\operatorname{det}\left( \pmb{\operatorname{A}}_E \right) \:=\: a\,d \,-\, b\,c \:\gt\: 0
\]
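For readers who like to verify such relations numerically, a minimal Numpy sketch may help. The concrete values for σ1, σ2 and φ below are arbitrary illustrative choices, not data from any experiment:

```python
import numpy as np

# Illustrative values for the half-axes and the rotation angle
sig1, sig2, phi = 2.0, 1.0, np.pi / 6.0

# Scaling matrix D_E and rotation matrix R_phi
D_E   = np.array([[sig1, 0.0], [0.0, sig2]])
R_phi = np.array([[np.cos(phi), -np.sin(phi)],
                  [np.sin(phi),  np.cos(phi)]])

# Combined matrix A_E = R_phi o D_E
A_E = R_phi @ D_E

# Map points of the unit circle onto the rotated ellipse
psi = np.linspace(0.0, 2.0 * np.pi, 200)
v_c = np.vstack([np.cos(psi), np.sin(psi)])    # unit-circle points, shape (2, 200)
v_e = A_E @ v_c                                # corresponding ellipse points

# det(A_E) should equal sig1 * sig2
print(np.isclose(np.linalg.det(A_E), sig1 * sig2))   # True
```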

Ok, we have defined an ellipse via a matrix AE, whose coefficients are directly based on geometrical properties. But as said: Often an ellipse is described by an equation with quadratic terms in the x and y coordinates of data points. The quadratic form has its background in algebraic properties of conic sections. As a next step we derive such a quadratic equation and relate the coefficients of the quadratic polynomial to the elements of our matrix AE. The result will in turn define another very useful matrix Aq.

Quadratic forms – Case 1: Centered ellipse, principal axes aligned with ECS-axes

We start with a simple case. We take a so called axis-parallel ellipse which results from a scaling matrix DE, only. I.e., in this case, the rotation matrix is assumed to be just the identity matrix. We can omit it from further calculations:

\[
\pmb{\operatorname{R}}_{\phi} \:=\:
\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \,=\, \pmb{\operatorname{I}}, \quad u_1 \,=\, 1,\: u_2 \,=\, 0, \: \phi = 0
\]

We need an expression in terms of (xe, ye). To get quadratic terms of vector components it often helps to invoke a scalar product. The scalar product of a vector with itself gives us the squared norm or length of a vector. In our case the norms of the inversely re-scaled vectors obviously have to fulfill:

\[
\left[\, \pmb{\operatorname{D}}_E^{-1} \circ \begin{pmatrix} x_e \\ y_e \end{pmatrix} \, \right]^T \,\bullet \, \left[\, \pmb{\operatorname{D}}_E^{-1} \circ \begin{pmatrix} x_e \\ y_e \end{pmatrix} \,\right] \:=\: 1
\]

(The bullet represents the scalar product of the vectors.) This directly results in:

\[
{1 \over \sigma_1^2} x_e^2 \, + \, {1 \over \sigma_2^2} y_e^2 \:=\: 1
\]

We eliminate the denominator to get a convenient quadratic form:

\[
\lambda_2\,x_e^2 \,+\, \lambda_1\, y_e^2 \:=\: \lambda_1 \lambda_2 \quad \left( = \, \operatorname{det}\left(\pmb{\operatorname{D}}_E\right)^2 \right) \phantom{\huge{(}}
\]
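A quick numerical illustration (values chosen freely): for σ1 = 2, σ2 = 1, i.e. λ1 = 4, λ2 = 1, the two equivalent forms of the equation read

\[
{1 \over 4}\, x_e^2 \,+\, y_e^2 \:=\: 1 \quad \Longleftrightarrow \quad 1 \cdot x_e^2 \,+\, 4 \cdot y_e^2 \:=\: 4 \,.
\]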

If we were given the quadratic form more generally by coefficients α, β and γ

\[
\alpha \,x_e^2 \,+\, \beta\, x_e y_e \,+\, \gamma\, y_e^2 \:=\: \delta
\]

we could directly relate these coefficients with the geometrical properties of our ellipse:

Axis-parallel ellipse:

\[ \begin{align}
\alpha \,&=\, d^2 \,=\, \sigma_2^2 \,=\, \lambda_2 \\
\gamma \,&=\, a^2 \,=\, \sigma_1^2 \,=\, \lambda_1 \\
\beta \,&=\, b \,=\, c \,=\, 0 \\
\delta \,&=\, a^2\, d^2 \,=\, \sigma_1^2 \, \sigma_2^2 \,=\, \lambda_1 \lambda_2 \\
\phi &= 0
\end{align}
\]

I.e., we can directly derive σ1, σ2 and φ from the coefficients of the quadratic form. But an axis-parallel ellipse is a very simple ellipse. Things get more difficult for a rotated ellipse.

Quadratic forms – Case 2: General centered and rotated ellipse

We perform the same trick to get a quadratic polynomial with the vectors ve of a rotated ellipse:

\[
\left[ \,\pmb{\operatorname{A}}_E^{-1} \circ \begin{pmatrix} x_e \\ y_e \end{pmatrix} \, \right]^T \, \bullet \,
\left[ \, \pmb{\operatorname{A}}_E^{-1} \circ \begin{pmatrix} x_e \\ y_e \end{pmatrix} \,\right] \:=\: 1
\]

I skip the lengthy, but simple algebraic calculation. We get (with our matrix elements a, b, c, d):

\[
\left( c^2 \,+\, d^2 \right)\,x_e^2 \,\, - \,\, 2\left( a\,c\, +\, b\,d \right)\,x_e y_e \,\, + \,\, \left(a^2 \,+\, b^2\right)\,y_e^2
\:=\: \sigma_1^2 \, \sigma_2^2
\]

The rotation has obviously led to a mixing of components in the polynomial. The coefficient of the mixed term xeye is non-zero in the non-trivial case (i.e. for φ ≠ 0, π/2 and σ1 ≠ σ2).

Quadratic form: A matrix equation to define an ellipse

We rewrite our equation again with general coefficients α, β and γ

\[
\alpha\,x_e^2 \, + \, \beta \, x_e y_e \, + \, \gamma \, y_e^2 \:=\: \delta
\]

These are coefficients which may come from some theory or from averages of numerical data. The quadratic polynomial can in turn be reformulated as a matrix operation with a symmetric matrix Aq:

\[
\pmb{v}_e^T \circ \pmb{\operatorname{A}}_q \circ \pmb{v}_e \:=\: \delta
\]

with

\[ \pmb{\operatorname{A}}_q \:=\:
\begin{pmatrix} \alpha & \beta / 2 \\ \beta / 2 & \gamma \end{pmatrix}
\]
\[ \pmb{\operatorname{A}}_q \:=\:
\begin{pmatrix} c^2 \,+\, d^2 & -\left(a\,c \, +\, b\,d\right) \\ -\left(a\,c \, +\, b\,d\right) & a^2 \,+\, b^2 \end{pmatrix}
\]
\[ \begin{align}
\alpha \:&=\: c^2 \,+\, d^2 \:=\: \sigma_1^2 u_2^2 \, + \, \sigma_2^2 u_1^2 \\
\gamma \:&=\: a^2 \,+\, b^2 \:=\: \sigma_1^2 u_1^2 \, + \, \sigma_2^2 u_2^2 \\
\beta \:&=\: -\, 2\left(a c \,+\, b d \right) \:=\: -2 \left( \sigma_1^2 \, - \, \sigma_2^2 \right) u_1 u_2 \\
\delta \:&=\: \left(ad \,-\, bc\right)^2 \:=\: \sigma_1^2 \, \sigma_2^2
\end{align}
\]

Note also:

\[ \begin{align}
\alpha \,+\, \gamma \:&=\: a^2 \,+\, b^2 \,+\, c^2 \,+\, d^2 \:=\: \sigma_1^2 \, + \, \sigma_2^2 \\
\gamma \,-\, \alpha \:&=\: a^2 \,+\, b^2 \,-\, c^2 \,-\, d^2 \:=\: \left( \sigma_1^2 \, - \, \sigma_2^2 \right) \operatorname{cos}\left(2\phi\right) \end{align}
\]

These terms are intimately related to the geometrical data; expect them to play a major role in further considerations.

With the help of the coefficients of AE we can also show that det(Aq) > 0:

\[
\operatorname{det}\left( \pmb{\operatorname{A}}_q \right) \:=\: \left(\alpha\gamma \,-\, {1\over 4}\beta^2\right) \:=\: \left(bc \,-\, ad\right)^2 \:=\: \sigma_1^2\,\sigma_2^2 \, \gt \, 0
\]

Thus Aq is an invertible matrix (as was to be expected).

Above we have got α, β, γ, δ as some relatively simple functions of a, b, c, d. The inversion is not so trivial and we do not even try it here. Instead we focus on how we can express σ1, σ2 and φ as functions of either (a, b, c, d) or (α, β, γ, δ).
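Before we do so, a short numerical cross-check of the relation between AE and Aq may be useful. The sketch below (again with arbitrary example values) exploits that the construction of the quadratic form implies Aq = det(AE)² · (AE AE^T)^(-1):

```python
import numpy as np

# Arbitrary example ellipse
sig1, sig2, phi = 2.0, 1.0, np.pi / 6.0
u1, u2 = np.cos(phi), np.sin(phi)

# A_E = R_phi o D_E with elements a, b, c, d as derived above
A_E = np.array([[sig1 * u1, -sig2 * u2],
                [sig1 * u2,  sig2 * u1]])
(a, b), (c, d) = A_E

# A_q built from the coefficients alpha, beta, gamma
alpha =  c**2 + d**2
gamma =  a**2 + b**2
beta  = -2.0 * (a * c + b * d)
A_q_coeff = np.array([[alpha, beta / 2.0], [beta / 2.0, gamma]])

# Equivalent matrix relation following from the construction of the quadratic form
A_q_matrix = np.linalg.det(A_E)**2 * np.linalg.inv(A_E @ A_E.T)

print(np.allclose(A_q_coeff, A_q_matrix))                       # True
print(np.isclose(np.linalg.det(A_q_coeff), (a*d - b*c)**2))     # True
```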

How to derive σ1, σ2 and φ from the coefficients of AE or Aq in the general case?

Let us assume we have (numerical) data for the coefficients of the quadratic form. Then we may want to calculate values for the lengths of the principal axes and the rotation angle φ of the corresponding ellipse. There are two ways to derive respective formulas:

  • Approach 1: Use trigonometric relations to directly solve the equation system.
  • Approach 2: Use an eigenvector decomposition of Aq.

Both ways are fun!

Direct derivation of σ1, σ2 and φ by using trigonometric relations

We start with the hard tour, namely by solving equations for λ1, λ2 and φ directly. This requires some knowledge in trigonometry. So far, we know the following:

\[ \begin{align}
\gamma \:&=\: a^2 \,+\, b^2 \:=\: \lambda_1 \operatorname{cos}^2\phi \, + \, \lambda_2 \operatorname{sin}^2\phi \\
\alpha \:&=\: c^2 \,+\, d^2 \:=\: \lambda_1 \operatorname{sin}^2\phi \, + \, \lambda_2 \operatorname{cos}^2\phi \\
\beta \:&=\: -\, 2\left(a c \,+\, b d \right) \:=\: -2 \left( \lambda_1 \,-\, \lambda_2 \right) \operatorname{cos}\phi \, \operatorname{sin}\phi \\
\delta \:&=\: \lambda_1 \, \lambda_2
\end{align}
\]

Trigonometric relations which we can use are:

\[ \begin{align}
\operatorname{sin}(2 \phi) \:&=\: 2 \,\operatorname{cos}(\phi)\, \operatorname{sin}(\phi) \\
\operatorname{cos}(2 \phi) \:&=\: 2 \,\operatorname{cos}^2(\phi)\, -\, 1 \\
\:&=\: 1 \,-\, 2\,\operatorname{sin}^2(\phi) \\
\:&=\: \operatorname{cos}^2(\phi)\, -\, \operatorname{sin}^2(\phi)
\end{align}
\]

Without losing generality we assume

\[
\lambda_1 \:\ge \lambda_2
\]

Had we chosen otherwise, the results would only differ by a rotation of π/2. This leads to

\[ \begin{align}
2 \gamma \:&=\: 2\left( a^2 \,+\, b^2 \right) \:=\: \lambda_1 \left( 1 \,+\, \operatorname{cos}(2\phi) \right) \, + \,
\lambda_2 \left( 1 \,-\, \operatorname{cos}(2\phi) \right) \\
2 \alpha \:&=\: 2 \left( c^2 \,+\, d^2 \right) \:=\: \lambda_1 \left( 1 \,-\, \operatorname{cos}(2\phi) \right) \, + \,
\lambda_2 \left( 1 \,+\, \operatorname{cos}(2\phi) \right) \\
-\, \beta \:&=\: 2 \left(a c \,+\, b d \right) \:=\: \left( \lambda_1 \,-\, \lambda_2 \right) \operatorname{sin}(2\phi)
\end{align}
\]

We rearrange terms and get:

\[ \begin{align}
\left( \lambda_1 \,-\, \lambda_2 \right) \operatorname{cos}(2\phi) \,+\, \lambda_1 \,+\, \lambda_2 \:&=\: 2 \left( a^2 \,+\, b^2 \right) \\
\left( \lambda_1 \,-\, \lambda_2 \right) \operatorname{cos}(2\phi) \,-\, \lambda_1 \,-\, \lambda_2 \:&=\: – 2 \left( c^2 \,+\, d^2 \right) \\
\left( \lambda_1 \,-\, \lambda_2 \right) \operatorname{sin}(2\phi) \:&=\: 2 \left( a\,c \,+\, b\,d \right)
\end{align}
\]

Let us define some further variables before we add and subtract the first two of the above equations:

\[ \begin{align}
r \:&=\: {1 \over 2} \left( a^2 \,+\, b^2 \,+\, c^2 \,+\, d^2 \right) \:=\: {1 \over 2} \left( \gamma \,+\, \alpha \right) \\
s_1 \:&=\: {1 \over 2} \left( a^2 \,+\, b^2 \,-\, c^2 \,-\, d^2 \right) \:=\: {1 \over 2} \left( \gamma \,-\, \alpha \right) \\
s_2 \:&=\: \left( a\,c \,+\, b\,d \right) \:=\: -\, {1 \over 2} \, \beta \\
\pmb{s} \:&=\: \begin{pmatrix} s_1 \\ s_2 \end{pmatrix} \:=\: {1 \over 2} \begin{pmatrix} \gamma \,-\, \alpha \\ -\, \beta \end{pmatrix} \phantom{\Huge{(}} \\
s \:&=\: \sqrt{ s_1^2 \,+\, s_2^2 } \:=\: {1 \over 2} \left[ \, \beta^2 \,+\, \left( \gamma \,-\, \alpha \right)^2 \,\right]^{1/2} \phantom{\huge{(}}
\end{align}
\]

Adding the first two of the above equations gives the cos(2φ)-component, while the third equation provides the sin(2φ)-component:

\[
{1 \over 2} \left( \lambda_1 \,-\, \lambda_2 \right) \begin{pmatrix} \operatorname{cos}(2\phi) \\ \operatorname{sin}(2\phi) \end{pmatrix} \:=\: \begin{pmatrix} s_1 \\ s_2 \end{pmatrix}
\]

Taking the vector norm on both sides (with λ1 ≥ λ2) gives the difference of the two λ-values; subtracting the second of the equations above from the first gives their sum:

\[ \begin{align}
\lambda_1 \,\, - \,\, \lambda_2 \:&=\: 2 s \:\:=\: \left[ \, \beta^2 \,+\, \left( \gamma \,-\, \alpha \right)^2 \,\right]^{1/2} \\
\lambda_1 \, + \, \lambda_2 \:&=\: 2 r \:\:=\: \gamma \,+\, \alpha
\end{align}
\]

This gives us:

\[ \begin{align}
\sigma_1^2 \,=\, \lambda_1 \:&=\: r \,+\, s \\
\sigma_2^2 \,=\, \lambda_2 \:&=\: r \,-\, s
\end{align}
\]

In terms of a, b, c, d:

\[ \begin{align}
\sigma_1^2 \,=\, \lambda_1 \:&=\: {1 \over 2} \left[ \, a^2+b^2+c^2 +d^2 \,+\, \left[ 4 (ac + bd)^2 \, +\, \left( c^2+d^2 -a^2 -b^2\right)^2 \, \right]^{1/2} \right] \\
\sigma_2^2 \,=\, \lambda_2 \:&=\: {1 \over 2} \left[ \, a^2+b^2+c^2 +d^2 \,-\, \left[ 4 (ac + bd)^2 \, +\, \left( c^2+d^2 -a^2 -b^2\right)^2 \, \right]^{1/2} \right]
\end{align}
\]

Who said that life has to be easy? In terms of α, β, γ, δ it looks a bit better:

\[ \begin{align}
\sigma_1^2 \,=\, \lambda_1 \:&=\: {1 \over 2} \left[ \, \left( \gamma \,+\, \alpha \right) \,+\, \left[ \beta^2 \, +\, \left( \gamma \,-\, \alpha \right)^2 \, \right]^{1/2} \right] \\
\sigma_2^2 \,=\, \lambda_2 \:&=\: {1 \over 2} \left[ \, \left( \gamma \,+\, \alpha \right) \,-\, \left[ \beta^2 \, +\, \left( \gamma \,-\, \alpha \right)^2 \, \right]^{1/2} \right]
\end{align}
\]

The reader can convince himself that with the definitions above we do indeed reproduce

\[
\lambda_1 \, \lambda_2 \:=\: r^2 \,-\, s^2 \:=\: \left(a\, d\,-\, b\,c \right)^2 \,=\, \operatorname{det}\left(\pmb{\operatorname{A}}_E\right)^2
\]

Determination of the inclination angle φ

For the determination of the angle φ we use:

\[
\begin{pmatrix} \operatorname{cos}(2\phi) \\ \operatorname{sin}(2\phi) \end{pmatrix} \:=\: {1 \over s} \begin{pmatrix} s_1 \\ s_2 \end{pmatrix}
\]

If we choose

\[
-\pi/2 \,\lt\, \phi \le \pi/2
\]

we get:

\[
\phi \:=\: {1 \over 2} \operatorname{arctan}\left({s_2 \over s_1}\right) \:=\: -\, {1 \over 2} \operatorname{arctan}\left( { -\, 2\left(ac \,+\, bd\right) \over (a^2 \,+\, b^2 \,-\, c^2 \,-\, d^2) } \right)
\]
\[
\phi \:=\: -\, {1 \over 2} \operatorname{arctan}\left( {\beta \over \gamma \,-\, \alpha } \right)
\]

Or equivalently with respect to α, β, γ:

\[
\operatorname{sin}\left(2\phi\right) \:=\: -\, {\beta \over \left[ \beta^2 \, +\, \left( \gamma \,-\, \alpha \right)^2 \, \right]^{1/2} }
\]
\[
\phi \:=\: {1 \over 2} \operatorname{arcsin}\left({s_2 \over s}\right) \:=\: -\, {1 \over 2} \operatorname{arcsin}\left( {\beta \over \left[ \beta^2 \, +\, \left( \gamma \,-\, \alpha \right)^2 \, \right]^{1/2} } \right)
\]

Note: All in all there are four different solutions. The reason is that we alternatively could have requested λ2 ≥ λ1 and also chosen the angle π + φ. So, the ambiguity is due to a selection of the considered principal axis and rotational symmetries.

In the special case of a circle we have

\[
\lambda_1 \,=\, \lambda_2 \,=\, r
\]

and then, of course, any angle φ will be allowed.
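Before we turn to the second approach, here is a compact numerical version of the formulas derived so far. It is a sketch under the normalization used above (δ = λ1 λ2) and with the convention λ1 ≥ λ2; the function name is my own choice:

```python
import numpy as np

def ellipse_from_quadratic(alpha, beta, gamma):
    """Half-axis lengths sigma1 >= sigma2 and inclination phi of the centered
    ellipse alpha*x**2 + beta*x*y + gamma*y**2 = lambda1*lambda2 (the
    normalization with delta = lambda1*lambda2 used in the text)."""
    r = 0.5 * (gamma + alpha)
    s = 0.5 * np.sqrt(beta**2 + (gamma - alpha)**2)
    lam1, lam2 = r + s, r - s                       # lambda1 >= lambda2
    # 2*phi = arctan2(s2, s1) with s1 = (gamma - alpha)/2, s2 = -beta/2;
    # arctan2 resolves the quadrant and keeps phi within (-pi/2, pi/2]
    phi = 0.5 * np.arctan2(-beta, gamma - alpha)
    return np.sqrt(lam1), np.sqrt(lam2), phi

# Cross-check with an ellipse of known geometry
sig1, sig2, phi = 2.0, 1.0, np.pi / 6.0
u1, u2 = np.cos(phi), np.sin(phi)
alpha = sig1**2 * u2**2 + sig2**2 * u1**2
gamma = sig1**2 * u1**2 + sig2**2 * u2**2
beta  = -2.0 * (sig1**2 - sig2**2) * u1 * u2

print(ellipse_from_quadratic(alpha, beta, gamma))   # approx (2.0, 1.0, 0.5236)
```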

2nd way to a solution for σ1, σ2 and φ via eigendecomposition

For our second way of deriving formulas for σ1, σ2 and φ we use some linear algebra. This way is interesting for two reasons: It indicates how we can use the “linalg”-module of Numpy to get results numerically. In addition we get familiar with a representation of the ellipse in a properly rotated ECS.

Above we have written down a symmetric matrix Aq describing an operation on the position vectors of points on our rotated ellipse:

\[
\pmb{v}_e^T \circ \pmb{\operatorname{A}}_q \circ \pmb{v}_e \:=\: \delta
\]

We know from linear algebra that every symmetric matrix can be decomposed into a product of an orthogonal matrix O, a diagonal matrix and the transposed matrix OT. This is the so called eigendecomposition of a symmetric matrix. It is unique up to the ordering of the eigenvalues and the signs of the normalized eigenvectors:

\[
\pmb{\operatorname{A}}_q \:=\: \pmb{\operatorname{O}} \circ \pmb{\operatorname{D}}_q \circ \pmb{\operatorname{O}}^T
\]

with

\[
\pmb{\operatorname{D}}_{q} \:=\: \begin{pmatrix} \lambda_{u} & 0 \\ 0 & \lambda_{d} \end{pmatrix}
\]

The coefficients λu and λd are eigenvalues of both Dq and Aq. Reason: A similarity transformation with an orthogonal matrix does not change the eigenvalues. So, the diagonal elements of Dq are the eigenvalues of Aq. Linear algebra also tells us that the columns of the matrix O are given by the components of the normalized eigenvectors of Aq.
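Numerically, this is exactly what Numpy's linalg module delivers. A small sketch (with the example quadratic form used in the snippets above) could look as follows. Note that np.linalg.eigh returns the eigenvalues in ascending order, so the first column of the returned eigenvector matrix belongs to the smaller eigenvalue σ2² and points along the longer principal axis:

```python
import numpy as np

# Example quadratic form for sigma1 = 2, sigma2 = 1, phi = 30 degrees
sig1, sig2, phi = 2.0, 1.0, np.pi / 6.0
u1, u2 = np.cos(phi), np.sin(phi)
alpha = sig1**2 * u2**2 + sig2**2 * u1**2
gamma = sig1**2 * u1**2 + sig2**2 * u2**2
beta  = -2.0 * (sig1**2 - sig2**2) * u1 * u2
A_q   = np.array([[alpha, beta / 2.0], [beta / 2.0, gamma]])

# eigh returns eigenvalues in ascending order: [sigma2**2, sigma1**2]
eig_vals, eig_vecs = np.linalg.eigh(A_q)
print(eig_vals)                          # approx [1.0, 4.0]

# The eigenvector of the *smaller* eigenvalue points along the longer axis
major_axis_vec = eig_vecs[:, 0]
phi_num = np.arctan2(major_axis_vec[1], major_axis_vec[0])
print(np.degrees(phi_num) % 180.0)       # approx 30.0
```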

We can interpret O as a rotation matrix Rψ for some angle ψ:

\[
\pmb{v}_e^T \circ \pmb{\operatorname{A}}_q \circ \pmb{v}_e \:=\: \pmb{v}_e^T \circ
\pmb{\operatorname{R}}_{\psi} \circ \pmb{\operatorname{D}}_q \circ \pmb{\operatorname{R}}_{\psi}^T \circ \pmb{v}_e \:=\: \delta
\]

This means

\[
\left[ \pmb{\operatorname{R}}_{-\psi} \circ \pmb{v}_e \right]^T \circ
\pmb{\operatorname{D}}_q \circ \left[ \pmb{\operatorname{R}}_{-\psi} \circ \pmb{v}_e \right] \:=\: \delta \:=\: \sigma_1^2\, \sigma_2^2
\]
\[
\pmb{v}_{-\psi}^T \circ \pmb{\operatorname{D}}_q \circ \pmb{v}_{-\psi} \:=\: \sigma_1^2\, \sigma_2^2
\]

The whole operation tells us a simple truth, which we are already familiar with:

By our construction procedure for a rotated ellipse we know that a rotated ECS exists, in which the ellipse can be described as the result of a scaling operation (along the coordinate axes of the rotated ECS) applied to a unit circle. (This ECS is, of course, rotated by an angle φ against our working ECS in which the ellipse appears rotated.)

Indeed:

\[
\left(\, x_{-\psi}, \,y_{-\psi}\,\right) \circ \begin{pmatrix} \lambda_u & 0 \\ 0 & \lambda_d \end{pmatrix} \circ \begin{pmatrix} x_{-\psi} \\ y_{-\psi} \end{pmatrix} \:=\: \sigma_1^2\, \sigma_2^2
\]
\[
{\lambda_u \over \sigma_1^2\, \sigma_2^2} \, x_{-\psi}^2 \, + \, {\lambda_d \over \sigma_1^2\, \sigma_2^2} \, y_{-\psi}^2 \:=\: 1
\]

We know exactly by which angle ψ we have to rotate our ECS to get this result: ψ = φ. Therefore:

\[ \begin{align}
x_{-\psi} \:&=\: \sigma_1\, x_c, \\
y_{-\psi} \:&=\: \sigma_2\, y_c, \\
\lambda_u \:&=\: \lambda_2 \,=\, \sigma_2^2, \\
\lambda_d \:&=\: \lambda_1 \,=\, \sigma_1^2
\end{align}
\]

This already makes it plausible that the eigenvalues of our symmetric matrix Aq are just λ1 and λ2.

Mathematically, a lengthy calculation will indeed reveal that the eigenvalues of a symmetric matrix Aq with coefficients α, 1/2*β and γ have the following form:

\[
\lambda_{1/2} \:=\: {1 \over 2} \left[\, \left(\alpha \,+\, \gamma \right) \,\pm\, \left[ \beta^2 + \left(\gamma \,-\, \alpha \right)^2 \,\right]^{1/2} \, \right]
\]

This is, of course, exactly what we have found some minutes ago by directly solving the equations with the trigonometric terms.

We will prove the fact that these indeed are valid eigenvalues in a minute. Let us first look at respective eigenvectors ξ1/2. To get them we must solve the equations resulting from

\[
\left( \begin{pmatrix} \alpha & \beta / 2 \\ \beta / 2 & \gamma \end{pmatrix} \,-\, \begin{pmatrix} \lambda_{1/2} & 0 \\ 0 & \lambda_{1/2} \end{pmatrix} \right) \,\circ \, \pmb{\xi_{1/2}} \:=\: \pmb{0},
\]

with

\[
\pmb{\xi_1} \,=\, \begin{pmatrix} \xi_{1,x} \\ \xi_{1,y} \end{pmatrix}, \quad \pmb{\xi_2} \,=\, \begin{pmatrix} \xi_{2,x} \\ \xi_{2,y} \end{pmatrix}
\]

Again a lengthy calculation shows that the following vectors fulfill the conditions (up to a common factor in the components):

\[ \begin{align}
\lambda_2 \: &: \quad \pmb{\xi_1} \:=\: \left(\, {1 \over \beta} \left( (\alpha \,-\, \gamma) \,-\, \left[\, \beta^2 \,+\, \left(\gamma \,-\, \alpha \right)^2\,\right]^{1/2} \right), \: 1 \, \right)^T \\
\lambda_1 \: &: \quad \pmb{\xi_2} \:=\: \left(\, {1 \over \beta} \left( (\alpha \,-\, \gamma) \,+\, \left[\, \beta^2 \,+\, \left(\gamma \,-\, \alpha \right)^2\,\right]^{1/2} \right), \: 1 \, \right)^T
\end{align}
\]

for the eigenvalues

\[ \begin{align}
\lambda_1 \:&=\: {1 \over 2} \left(\, \left(\alpha \,+\, \gamma \right) \,+\, \left[ \beta^2 \,+\, \left(\gamma \,-\, \alpha \right)^2 \,\right]^{1/2} \, \right) \\
\lambda_2 \:&=\: {1 \over 2} \left(\, \left(\alpha \,+\, \gamma \right) \,-\, \left[ \beta^2 \,+\, \left(\gamma \,-\, \alpha \right)^2 \,\right]^{1/2} \, \right) \\
\end{align}
\]

The T at the formulas for the vectors symbolizes a transposition operation. Note the crossing of the indices: The eigenvector ξ1, which points along the principal axis of half-length σ1, belongs to the smaller eigenvalue λ2 = σ2² of Aq, whereas ξ2, pointing along the σ2-axis, belongs to λ1 = σ1². This is consistent with the assignment λu = λ2, λd = λ1 which we found above.

Note that the vector components given above are not normalized. This is important for performing numerical checks as Numpy and linear algebra programs would typically give you normalized eigenvectors with a length = 1. But you can easily compensate for this by working with

\[ \begin{align}
\lambda_1 \: &: \quad \pmb{\xi_1^n} \:=\: {1 \over \|\pmb{\xi_1}\|}\, \pmb{\xi_1} \\
\lambda_2 \: &: \quad \pmb{\xi_2^n} \:=\: {1 \over \|\pmb{\xi_2}\|}\, \pmb{\xi_2}
\end{align}
\]
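A short numerical cross-check of the analytic eigenvectors against the normalized ones returned by Numpy (same illustrative example as in the earlier snippets) might look like this:

```python
import numpy as np

# Same example ellipse as before: sigma1 = 2, sigma2 = 1, phi = 30 degrees
sig1, sig2, phi = 2.0, 1.0, np.pi / 6.0
u1, u2 = np.cos(phi), np.sin(phi)
alpha = sig1**2 * u2**2 + sig2**2 * u1**2
gamma = sig1**2 * u1**2 + sig2**2 * u2**2
beta  = -2.0 * (sig1**2 - sig2**2) * u1 * u2
R = np.sqrt(beta**2 + (gamma - alpha)**2)

# Analytic (un-normalized) eigenvectors as given in the text, then normalized
xi_1 = np.array([((alpha - gamma) - R) / beta, 1.0])   # along the sigma1-axis
xi_2 = np.array([((alpha - gamma) + R) / beta, 1.0])   # along the sigma2-axis
xi_1 /= np.linalg.norm(xi_1)
xi_2 /= np.linalg.norm(xi_2)

# Numpy's normalized eigenvectors (columns, ascending eigenvalue order)
A_q = np.array([[alpha, beta / 2.0], [beta / 2.0, gamma]])
_, eig_vecs = np.linalg.eigh(A_q)

# Compare up to an overall sign of each eigenvector
print(np.allclose(np.abs(eig_vecs[:, 0]), np.abs(xi_1)))   # True
print(np.allclose(np.abs(eig_vecs[:, 1]), np.abs(xi_2)))   # True
```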

Proof for the eigenvalues and eigenvector components

We just prove that the eigenvector conditions are e.g. fulfilled for the components of the second eigenvector ξ2 and its eigenvalue λ1 = λd.

\[ \begin{align}
\left(\alpha \,-\, \lambda_1 \right) * \xi_{2,x} \,+\, {1 \over 2} \beta * \xi_{2,y} \,&=\, 0 \\
{1 \over 2} \beta * \xi_{2,x} \,+\, \left( \gamma \,-\, \lambda_1 \right) * \xi_{2,y} \,&=\, 0
\end{align}
\]

(The steps for the first eigenvector are completely analogous).

We start with the condition for the first component

\[ \begin{align}
&\left( \alpha \,-\,
{1\over 2}\left[\,\left(\alpha \, + \, \gamma\right) \,+\, \left[ \beta^2 \,+\, \left( \alpha \,-\, \gamma \right)^2 \right]^{1/2} \right] \right) * \\
& {1 \over \beta}\,
\left[\, \left(\alpha \,-\, \gamma\right) \,+\, \left[ \beta^2 \,+\, \left(\alpha \,-\, \gamma \right)^2 \right]^{1/2} \,\right] \,+\, {\beta \over 2 }
\,=\, 0
\end{align}
\]
\[ \begin{align}
& {1 \over 2 } \left[ \left(\alpha \,-\,\gamma\right) \,-\, \left[ \beta^2 \,+\, \left( \alpha \,-\, \gamma \right)^2 \right]^{1/2} \right] * \\
& {1 \over \beta}\,
\left[\, \left(\alpha \,-\, \gamma\right) \,+\, \left[ \beta^2 \,+\, \left(\alpha \,-\, \gamma \right)^2 \right]^{1/2} \,\right] \,+\, {\beta \over 2 }
\,=\, 0
\end{align}
\]
\[
{1 \over 2 \beta} \left[ (\alpha \,-\,\gamma)^2 \,-\, \beta^2 \,-\, (\alpha \,-\,\gamma)^2 \right] \,+\, {\beta \over 2 } \,=\, 0
\]

The last relation is obviously true. You can perform a similar calculation for the other eigenvector component:

\[ \begin{align}
{1 \over 2} \, \beta & {1 \over \beta}\,
\left[\, \left(\alpha \,-\, \gamma\right) \,+\, \left[ \beta^2 \,+\, \left(\alpha \,-\, \gamma \right)^2 \right]^{1/2} \,\right] \,+\, \\
&
\left( \gamma \, -\,
{1\over 2}\left[\,\left(\alpha \, + \, \gamma\right) \,+\, \left[ \beta^2 \,+\, \left( \alpha \,-\, \gamma \right)^2 \right]^{1/2} \right] \right) * 1 \,=\, 0
\end{align}
\]

Thus:

\[ \begin{align}
&{1 \over 2} \, \left(\alpha \,-\, \gamma\right) \,+\, {1 \over 2} \left[ \beta^2 \,+\, \left(\alpha \,-\, \gamma \right)^2 \right]^{1/2} \,-\, \\
&{1\over 2}\left(\alpha \,-\, \gamma\right) \,-\, {1\over 2}\left[ \beta^2 \,+\, \left( \alpha \,-\, \gamma \right)^2 \right]^{1/2} \,=\, 0
\end{align}
\]

True, again. In a very similar exercise one can show that the scalar product of the eigenvectors is equal to zero:

\[ \begin{align}
& {1 \over \beta}\,
\left[\, \left(\alpha \,-\, \gamma\right) \,+\, \left[ \beta^2 \,+\, \left(\alpha \,-\, \gamma \right)^2 \right]^{1/2} \,\right] \,*\,
{1 \over \beta}\,
\left[\, \left(\alpha \,-\, \gamma\right) \,-\, \left[ \beta^2 \,+\, \left(\alpha \,-\, \gamma \right)^2 \right]^{1/2} \,\right] \,+\, 1\,*\,1 \\
&=\, {1 \over \beta^2}\, *\, (-\beta^2) \,+\, 1 \,=\, 0
\end{align}
\]

I.e.:

\[
\pmb{\xi_1} \bullet \pmb{\xi_2} \,=\, \left( \xi_{1,x}, \, \xi_{1,y} \right) \circ \begin{pmatrix} \xi_{2,x} \\ \xi_{2,y} \end{pmatrix} \,= \, 0,
\]

which means that the eigenvectors are perpendicular to each other. Exactly what we expect for the orientations of the principal axes of an ellipse.

Rotation angle from coefficients of Aq

We still need a formula for the rotation angle(s). From linear algebra results related to an eigendecomposition we know that the columns of the orthogonal (rotation) matrix are given by the normalized eigenvectors, with their components expressed in terms of the un-rotated ECS in which we basically work. These vectors point along the principal axes of our ellipse. Thus the components of these eigenvectors define our aspired rotation angles of the ellipse’s principal axes against the x-axis of our ECS.

Let us prove this. By assuming

\[ \begin{align}
\operatorname{cos}(\phi_1) \,&=\, \xi_{1,x}^n \\
\operatorname{sin}(\phi_1) \,&=\, \xi_{1,y}^n
\end{align}
\]

and using

\[
\operatorname{sin}(2\phi_1) \,=\, 2\, \operatorname{sin}(\phi_1) \, \operatorname{cos}(\phi_1)
\]

we get:

\[ \begin{align}
\operatorname{sin}(2 \phi_1) \,=\,
2 * { \xi_{1,x} * \xi_{1,y} \over \xi_{1,x}^2 \, + \, \xi_{1,y}^2 }
\end{align}
\]

and thus

\[ \begin{align}
\operatorname{sin}(2 \phi_1) \,&=\,
2 \, { {1 \over \beta} \left( (\alpha \,-\, \gamma) \,-\, \left[\, \beta^2 \,+\, \left(\gamma \,-\, \alpha \right)^2\,\right]^{1/2} \right) \,*\, 1
\over
\left( {1 \over \beta} \left( (\alpha \,-\, \gamma) \,-\, \left[\, \beta^2 \,+\, \left(\gamma \,-\, \alpha \right)^2\,\right]^{1/2} \right) \right)^2
\,+\, 1^2 } \\
&=\, 2\, { {1 \over \beta} \left( t \,-\, z \right)
\over
{1 \over \beta^2 } \left[\, \beta^2 \,+\, \left(\, t \,-\, z \,\right)^2 \right] }
\end{align}
\]

with

\[ \begin{align}
t \,&=\, (\alpha \,-\, \gamma) \\
z \,&=\, \left[\, \beta^2 \,+\, \left(\gamma \,-\, \alpha \right)^2\,\right]^{1/2}
\end{align}
\]

This looks very different from the simple expression we got above, and a direct simplification is cumbersome. The trick is to multiply numerator and denominator by a convenient factor

\[
\left( t \,+\, z \right),
\]

and exploit

\[ \begin{align}
\left( t \,-\, z \right) \, \left( t \,+\, z \right) \,&=\, t^2 \,-\, z^2 \\
\left( t \,-\, z \right) \, \left( t \,+\, z \right) \,&=\, -\, \beta^2
\end{align}
\]

to get

\[ \begin{align}
&2 * \beta { (t\,-\, z) * ( t\,+\, z) \over \left[ \beta^2 \, + \,( t \,-\, z )^2 \right] * (t \,+\, z) } \\
=\, &2 * \beta { -\, \beta^2 \over \beta^2 (t\,+\,z) \,-\, \beta^2 (t\,-\,z) } \\
=\, & -\, {\beta \over \left[\, \beta^2 \,+\, (\alpha \,-\, \gamma)^2 \,\right]^{1/2} }
\end{align}
\]

This means that our 2nd solution approach provides the result

\[
\operatorname{sin}(2 \phi_1) \, =\, -\, { \beta \over \left[\, \beta^2 \,+\, (\alpha \,-\, \gamma)^2 \,\right]^{1/2} }\,,
\]

which is of course identical to the result we got with our first solution approach. It is clear that the second principal axis has an inclination of φ1 ± π/2:

\[
\phi_2\, =\, \phi_1 \,\pm\, \pi/2.
\]

In general the angles have a natural ambiguity of π.
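A quick numerical sanity check of this equivalence, again with the illustrative example values used in the earlier snippets:

```python
import numpy as np

# Same example as before: sigma1 = 2, sigma2 = 1, phi = 30 degrees
sig1, sig2, phi = 2.0, 1.0, np.pi / 6.0
u1, u2 = np.cos(phi), np.sin(phi)
alpha = sig1**2 * u2**2 + sig2**2 * u1**2
gamma = sig1**2 * u1**2 + sig2**2 * u2**2
beta  = -2.0 * (sig1**2 - sig2**2) * u1 * u2
R = np.sqrt(beta**2 + (gamma - alpha)**2)

# Angle from the eigenvector components (2nd approach) ...
xi_x = ((alpha - gamma) - R) / beta          # x-component of xi_1, y-component = 1
sin_2phi_eig = 2.0 * xi_x / (xi_x**2 + 1.0)

# ... and from the closed trigonometric formula (1st approach)
sin_2phi_trig = -beta / R

print(np.isclose(sin_2phi_eig, sin_2phi_trig),
      np.isclose(sin_2phi_eig, np.sin(2.0 * phi)))   # True True
```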

Conclusion

In this post I have shown how one can derive essential properties of centered, but rotated ellipses from matrix-based representations. Such calculations become relevant when e.g. experimental or numerical data only deliver the coefficients of a quadratic form for the ellipse.

We have first established the relation between the coefficients of a matrix that defines an ellipse by a combined scaling and rotation operation and the coefficients of a matrix which defines the ellipse as a quadratic form of the components of position vectors. In addition we have shown how the coefficients of both matrices are related to quantities like the lengths of the principal axes of the ellipse and the inclination of these axes against the x-axis of the Euclidean coordinate system in which the ellipse is described via position vectors. So, if one of the matrices is given, we can numerically calculate the ellipse’s main properties.

In the next post of this mini-series

Properties of ellipses by matrix coefficients – II – coordinates of points with extremal y-values

we have a look at the x- and y-coordinates of points on an ellipse with extremal y-values. All in terms of the matrix coefficients we are now familiar with.

 

Multivariate Normal Distributions – I – objectives

This post requires Javascript to display formulas!

Machine Learning [ML] algorithms are applied to multivariate data: Each individual object of interest (e.g. an image) is characterized by a set of n distinct and quantifiable variables. The variable values may e.g. come from measurements.

A sample of such objects corresponds to a data distribution in a multidimensional space, most often the ℝn. We can visualize our objects as data points in an Euclidean coordinate system of the ℝn: Each axis represents the values a specific variable can take; the position of a data point is given by the variable values.

Equivalently, we can use (position-) vectors to these data points. Thus, when training ML algorithms we typically deal with vector distributions, which by their very nature are multivariate. But also the outputs of some types of neural networks like Autoencoders [AE] form multivariate distributions in the networks’ latent spaces. For today’s ML-scenarios the number of dimensions n can become very big – even if we compress information in latent spaces. For a variety of tasks in generative ML we may need to understand the nature and shape of such distributions.

An elementary kind of a continuous multivariate vector distribution, for which major properties can be derived analytically, is the so called Multivariate Normal Distribution [MND]. MNDs, their marginal and their conditional distributions are of major importance in the fields of statistics, Big Data and Machine Learning. One reason for this is the “central limit theorem” of statistics (in its vector form).

Some conventional ML-algorithms are even based on the assumption that the population behind the concrete data samples can be approximated by a MND. Due to the central limit theorem we find that averages of big samples of multivariate training data for a population of specific types of observed objects tend to form a MND. But also data samples in latent spaces of neural networks may show a multivariate normal distribution – at least in parts.

For the concrete problem of human face generation via a trained convolutional Autoencoder [CAE] I have actually found that the data produced in the CAE’s latent space can very well be described by a MND. See the posts on Autoencoders in this blog. This alone is motivation enough to dive a bit deeper into the (beautiful) mathematical properties of MNDs.

Just to illustrate it: The following plots show projections of the approximate MND onto coordinate planes.

We find the typical elliptic contour lines which are to be expected for a MND. And here are some generated face images from statistical vectors which I derived from an analysis of the characteristic features of the 2-dim projections of the latent MND which my CAE had produced:

ML is math in the end – and MNDs are no exception

Some of my readers may have noticed that I wanted to start a series on the topic of creating random vectors for a given MND-like vector distribution. The characteristic parameters for the n-dimensional MND can either stem from an analysis of experimental ML data or come from theoretical sources. This was in April. But, I have been silent on this topic for a while.

The reason was that I got caught up in the study of the math of MNDs, of their properties, their marginal distributions and of quadratic forms in multiple dimensions (ellipsoids and ellipses). I had to re-collect a lot of mathematical information which I once (45 years ago) had learned at university. Unfortunately, multivariate analysis (i.e. data analysis in multidimensional spaces) requires some (undergraduate) university math. Regarding MNDs, knowledge in linear algebra, statistics and vector analysis is required. In particular matrices, their decomposition and their geometrical interpretation play a major role. And when you try to understand a particular problem which obviously is characterized by an overlap of multiple mathematical disciplines the amount of information can quickly grow – without the connections and consistency becoming clear at first sight.

This is in part due to the different fields the authors of papers on MNDs work in and the different focuses they have on properties of MNDs. Although much introductory information about MNDs is available on the Internet, I have so far missed a coherent and comprehensive presentation which illustrates the theoretical insights by both ideal and real world examples. Too often the texts are restricted to pure formal derivations. And none of the texts discussed the problem of vector generation within the limits of MND confidence levels. But this task can become important in generative ML: At high confidence levels outliers get a strong weight – and deviations from an ideal MND may cause disturbances.

One problem with appropriate vector generation for creative ML purposes is that ML experiments deliver (latent) data which are difficult to analyze as they reside in high-dimensional spaces. Even if we already knew that they form a MND in some parts of a latent space we would have to perform a drill down to analytic formulas which describe limiting conditions for the components of the statistical vectors we want to create.

The other problem is that we need a solid understanding of confidence levels for a multidimensional distribution of data points, which we approximate by a MND. And on the way to understanding related properties of MNDs you pass a lot of interesting side aspects – e.g. degenerate distributions, matrix decompositions, affine transformations and projections of multidimensional hypersurfaces onto coordinate planes. Far too interesting to refrain from writing something about it …

After having read many publicly available articles on MNDs and related math I had collected a bunch of notes, formulas and numerical experiments. The idea of a general post series on MNDs grew in parallel. From my own experiences I thought that ML people who are confronted with latent representations of data and find indications of a MND would like to have an introduction which covers the most relevant aspects of MNDs. On a certain mathematical level, and supported by illustrations from a concrete example.

But I will not forget about my original objective, namely the generation of random vectors within confidence levels. In the end we will find two possible approaches: One is based on a particular linear transformation, whose mathematical form is determined by a covariance analysis of our data distribution, and random number generators for multiple Gaussian distributions. The other solution is based on a derivation of precise conditions on random vector components from ellipses which are produced by projections of our real experimental data distribution onto coordinate planes. Such limiting conditions can be given in form of analytic expressions.

The second approach can also be understood as a reconstruction of a multivariate distribution from low-dimensional projection data:

We create vectors of a concrete MND-like vector distribution in n dimensions by only referring to characteristic data of its two-dimensional projections onto coordinate planes.

This is an interesting objective in itself as the access to and the analysis of 2-dimensional (correlated) data may be a much easier endeavour than analyzing the full distribution. But such an approach has to be supported by mathematical arguments.

Objectives of this post series

Objectives of this post series are:

  1. We want to find out what a MND is in mathematical and statistical terms and how it can be based on a simpler vector distribution within the ℝn.
  2. We want to study the basic role of a standardized multivariate normal distribution in the game and the impact of linear affine transformations on such a distribution – in terms of linear algebra and from a geometrical point of view.
  3. We also want to describe and interpret the difference between normal MNDs and so called degenerate MNDs.
  4. We want to understand the most important mathematical properties of MNDs. In particular we want to better grasp the mathematical meaning of correlations between the vector components and their impact on the probability density function. Furthermore the relation of a MND to its marginal distributions in sub-spaces of lower dimensions is of major interest.
  5. We want to formally create a MND-approximation to a real multivariate data distribution by an analysis of the real distribution’s properties, in particular of parameters describing the correlations between the vector components. Of particular interest are the covariance matrix and the precision or correlation matrix.
  6. We want to study the role of projections when turning from a MND to its marginal distributions and the impact of such projections on the matrices qualifying the original and its marginal distributions.
  7. We want to understand the form of contour hyper-surfaces for constant probability density values of a MND. We also want to derive what the projections of these hyper-surfaces onto coordinate planes look like.
  8. We want to show that both contour hyper-surfaces of the MND and of its projections in marginal distributions contain the same proportions of integrated data points and, equivalently, the same probability proportions resulting from an integration of the probability density from the distribution’s center up to the hyper-surfaces.
  9. We want to illustrate basic MND-creation principles and the effects of linear affine transformations during the construction process by an ideal 3-dimensional MND example and by projections of a real vector distribution from an ML-experiment onto 2-dimensional and 3-dimensional sub-spaces. We also want to illustrate the relation between the MND and its marginal distributions by plotting concrete 3-dimensional examples and their projections onto coordinate planes.
  10. We want to use the derived MND properties for the creation of statistical vectors v which fulfill the following conditions:
    • Each of the generated v is a member of a vector population, which has been derived from a ML experiment and which to a good approximation can be described by a MND (and its extracted basic parameters).
    • Each v has an endpoint within the multidimensional volume enclosed by a contour-hypersurface of the MND’s probability density function [p.d.f.],
    • The limiting hypersurface is defined by a chosen confidence level.
  11. We want to create statistical vectors within the limit of contour hyper-surfaces by using elementary construction principles of a MND.
  12. In a second approach we want to reduce vector creation to solving a sequence of 2-dimensional problems. I.e. we want to work with 2-dim marginal distributions in 2-dim sub-spaces of the ℝn. We hope that the probability density functions of the relevant distributions can be described analytically and provide computable limiting conditions on vector components.
    Note: The production of statistical vectors from data of projected low-dimensional marginal distributions corresponds to a reconstruction of the full MND from its projections.
  13. During random vector creation we want to avoid PCA-transformations of the whole real data distribution or of projections of it.
  14. Based on MND-parameters we want to find analytic expressions for the vector component limits whenever possible.

The attentive reader has noticed that the list above includes an assumption – namely that a multidimensional contour hypersurface of a MND can be associated with something like a confidence level. In addition we have to justify mathematically that the reduction to data of 2-dimensional projections of the full vector distribution is a real option for statistical vector creation.

The last three points are a bit tough: Even if we believe in math textbooks and get limiting hyper-curves of a quadratic form in our coordinate planes, the main axes of the respective ellipses may be inclined against the coordinate axes (see the example images above). All this would have to be taken care of in a precise analytic form of the limits which we impose on the components of our aspired statistical vectors.

So, this series is, at least in parts, going to be a tough, but also very satisfactory journey. Eventually, after having clarified diverse properties of MNDs and their marginal distributions in lower dimensional spaces, we will end up with quadratic equations and some simple matrix operations.

Objectives of the next post

We must not forget that statistics plays a major role in our business. In ML we deal with finite collections (samples) of individual object data which are statistically picked from a greater population (with assumed statistical properties). An example is a concrete collection of images of human faces and/or their latent vectors. The data can be organized in form of a two-dimensional data matrix: Its rows may indicate individual objects and its columns properties of these objects (or vice versa). In either direction we have vectors which focus on a particular aspect of the data: Individual objects or the statistics of a specific object property.

While we are used to univariate “random variables” we have to turn to so called “random vectors” to describe multidimensional statistical distributions and respective samples picked from an underlying population. A proper vector notation will give us the advantage of writing down linear transformations of a whole multidimensional vector distribution in a short and concise form.

Besides introducing random vectors and their components the next post

Multivariate Normal Distributions – II – random vectors and their covariance matrix

will also discuss related probability densities, expectation values and the definition of a covariance matrix for a random vector. Some simple properties of the covariance matrix will help us in further posts.

 

Statistical vector generation for multivariate normal distributions – I – multivariate and bi-variate normal distributions from CAEs

This post requires Javascript to display formulas!

Convolutional Autoencoders and multivariate normal distributions

Experiments like my own with convolutional Autoencoders [CAE] show: A CAE maps a training set of human face images (e.g. CelebA) onto an approximate multivariate vector distribution in the CAE’s latent space Z. Each image corresponds to a point (z-point) and a corresponding vector (z-vector) in the CAE’s multidimensional latent space. More precisely the results of numerical experiments showed:

The multidimensional density function which describes the inner dense core of the z-point distribution (containing more than 80% of all points) was (aside from normalization factors) equivalent to the density function of a multivariate normal distribution [MND] for the respective z-vectors in an Euclidean coordinate system.

For results of my numerical CAE-experiments see
Autoencoders and latent space fragmentation – X – a method to create suitable latent vectors for the generation of human face images
and related previous posts in this blog. After the removal of some outliers beyond a high sigma-level (≥ 3) of the original distribution the remaining core distribution fulfilled conditions of standard tests for multivariate normal distributions like the Shapiro-Wilk test or the Henze-Zirkler test.

After a normalization with an appropriate factor the continuous density functions controlling the multivariate vector distribution can be interpreted as a probability density function [p.d.f.]. The components vj (j=1, 2,…, n) of the vectors to the z-points are regarded as logically separate, but not uncorrelated variables. For each of these variables a component specific value distribution Vj is given. All these marginal distributions contribute to a random vector distribution V, in our case with the properties of a MND:

\[ \boldsymbol{V} \: = \: \left( \, V_1, \, V_2, \, ….\, V_n\, \right) \: \sim \: \boldsymbol{\mathcal{N_n}} \, \left( \, \boldsymbol{\mu} , \, \boldsymbol{\Sigma} \, \right), \\ \quad \mbox{with} \: \boldsymbol{\mathcal{N_n}} \: \mbox{symbolizing a MND in an n-dimensional space}
\]

μ is a vector with all mean values μj of the Vj component distributions as its components. Σ abbreviates the covariance matrix relating the distributions Vj with one another.

The point distribution of a CAE’s MND forms a complex rotated multidimensional ellipsoid with its center somewhere off the origin in the latent space. The latent space itself typically has many dimensions. In the case of my numerical experiments the number of dimensions was n ≥ 256. The number of sample vectors used was between 80,000 and 200,000 – enough data to approximate the vector distribution by a continuous density function. The densities for the Vj-distributions formed a smooth Gaussian function (for a reasonable sampling interval). But note: one has to be careful. The fact that the Vj have a Gaussian form is not a sufficient condition for a MND (see the next post). But if a MND is given, all Vj have a Gaussian form.
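Just to make the setting concrete: an ideal MND with chosen parameters can easily be sampled with Numpy, and the empirical mean vector and covariance matrix of a large sample approach the chosen μ and Σ. The values below are made up for illustration and have nothing to do with the CAE data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Made-up parameters of a 3-dimensional MND (not the CAE data)
mu    = np.array([1.0, -2.0, 0.5])
Sigma = np.array([[2.0, 0.8, 0.3],
                  [0.8, 1.5, 0.6],
                  [0.3, 0.6, 1.0]])

# Draw a large sample of z-vectors
V = rng.multivariate_normal(mu, Sigma, size=100_000)

# Empirical mean vector and covariance matrix approach mu and Sigma
print(np.allclose(V.mean(axis=0), mu, atol=0.05))             # True
print(np.allclose(np.cov(V, rowvar=False), Sigma, atol=0.05)) # True
```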

Generative use of MNDs in multidimensional latent spaces of high dimensionality

When we want to use a CAE as a generative tool we need to solve a problem: We must create statistical vectors which point into the (multidimensional) volume of our point distribution in the latent space of the encoding algorithm. Only such vectors provide useful information to the Decoder of the CAE. A full multivariate normal distribution and the contour hyper-surfaces of its multidimensional density function are difficult to analyze and to control when developing a proper numerical algorithm. Therefore I want to reduce the problem of vector generation to a sequence of viewable and controllable 2-dimensional problems. How can this be achieved?

A central property of a multivariate normal distribution helps: Any sub-selection of m vector-component distributions forms a multivariate normal distribution, too (see below). For m=2 and for vector components indexed by (j,k) with respective distributions Vj, Vk we get a so called “bivariate normal distribution” [BND]:

\[ V_{jk} \: \sim \: \mathcal{N}_2\left(\, (\mu_j, \mu_k), \: \boldsymbol{\Sigma}_{jk} \, \right), \quad \mbox{with} \: \boldsymbol{\Sigma}_{jk} \: \mbox{being the (2x2) covariance matrix of} \: (V_j, \, V_k)
\]

A MND has n*(n-1)/2 such subordinate BNDs. The 2-dim density function of a bivariate normal distribution

\[ g_{jk}\,\left( \, v_j, \, v_k\, \right) \: = \: g_{jk}\,\left( \, v_j, \, v_k, \, \mu_j, \, \mu_k, \, \sigma_j, \sigma_k, \, \sigma_{jk}, … \right)
\]

for vector component values vj, vk defines a point density of the sample data in the (j,k)-coordinate plane of the Euclidean coordinate system in which the MND is described. The density functions of the marginal distributions Vj showed the typical Gaussian form of a univariate normal distribution.
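For a single (j,k)-pair such a 2-dim marginal density can be evaluated directly, e.g. with scipy's multivariate_normal. The following sketch uses made-up values for the means and the (2x2) covariance sub-matrix:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Made-up parameters of one (j, k)-marginal: means and 2x2 covariance sub-matrix
mu_jk    = np.array([0.5, -1.0])
Sigma_jk = np.array([[2.0, 0.8],
                     [0.8, 1.5]])
g_jk = multivariate_normal(mean=mu_jk, cov=Sigma_jk)

# Evaluate the density g_jk(v_j, v_k) on a grid in the (j,k)-coordinate plane
vj, vk = np.meshgrid(np.linspace(-4, 5, 200), np.linspace(-5, 3, 200))
density = g_jk.pdf(np.dstack([vj, vk]))

# Points of equal density lie on ellipses around mu_jk
print(density.shape)        # (200, 200)
print(density.max())        # maximum of the evaluated grid lies near mu_jk
```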

The density function of a BND has some interesting mathematical properties. Among other things: The curves of constant density of a BND’s density function form ellipses. This is illustrated by the following plots showing such contour lines for selected pairs (Vj, Vk) of a real point-distribution in a 256-dimensional latent space. The point distribution was created by a CAE in its latent space for the CelebA dataset.

Contour lines for selected (j,k)-pairs. The thick lines stem from theory and calculated correlation coefficients of the univariate distributions.

The next plot shows the contours of selected vector-component pairs after a PCA-transformation of the full MND. (Main ellipse axes are now aligned with the axes of the PCA-coordinate system):

These ellipses with axes along the coordinate axes are relatively easy to handle. They can be used for vector creation. But they require a full PCA transformation of the MND-distribution, a PCA-analysis for complexity reduction and an application of the inverse PCA-transformation. The plot below shows the point-density compared to a 2.2-σ confidence ellipse. The orange points are the results of a proper statistical numerical vector generation algorithm based on a PCA-transformation of the MND.

See my post quoted above for the application of a PCA-transformation of the multidimensional MND for vector creation.

However, we get the impression that we could also use these rotated ellipses in projections of the MND onto coordinate planes of the original latent space system directly to impose limiting conditions on the component values of statistical vectors pointing to an inner region of the MND. Of course, a generated statistical vector must then comply with the conditions of all such ellipses. This requires an analysis and combined use of the ellipses of all of the subordinate BNDs of the original MND during an iterative or successive definition of the values for the vector components.
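Just to illustrate the idea (this is my own schematic helper, not the algorithm of the post quoted above): for each index pair (j,k) one can take the (2x2) covariance sub-matrix and demand that the 2-dim Mahalanobis distance of the candidate vector stays below a chosen level, e.g. 2.2:

```python
import numpy as np
from itertools import combinations

def inside_all_pair_ellipses(v, mu, Sigma, level=2.2):
    """Check whether vector v lies inside the 'level'-sigma confidence
    ellipse of every (j, k)-pair of the distribution N(mu, Sigma).
    Illustrative helper only, not an exact confidence criterion for
    the full multidimensional distribution."""
    n = len(mu)
    for j, k in combinations(range(n), 2):
        d  = np.array([v[j] - mu[j], v[k] - mu[k]])
        S2 = Sigma[np.ix_([j, k], [j, k])]          # 2x2 covariance sub-matrix
        m2 = d @ np.linalg.inv(S2) @ d              # squared Mahalanobis distance
        if m2 > level**2:
            return False
    return True

# Example with the made-up 3-dim MND from above
mu    = np.array([1.0, -2.0, 0.5])
Sigma = np.array([[2.0, 0.8, 0.3],
                  [0.8, 1.5, 0.6],
                  [0.3, 0.6, 1.0]])
print(inside_all_pair_ellipses(mu, mu, Sigma))           # True (center)
print(inside_all_pair_ellipses(mu + 10.0, mu, Sigma))    # False (far outside)
```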

Objective of this post series

In my last post about CAEs (see the link given above) I have explicitly asked the question whether one can avoid performing a full PCA-transformation of the MND when creating statistical vectors pointing to a defined inner region of a MND.

The objective of this post series is to prove the answer: Yes, we can. And we will use the BNDs resulting from projections of the original MND onto coordinate planes. We will in particular explore the properties of the n*(n-1)/2 BNDs’ confidence ellipses. As said: These ellipses are rotated against the coordinate system’s axes. We will have to deal with this in detail. We will also use properties of their 1-dimensional marginal distributions (projections onto the coordinate axes, i.e. the Vj).

In addition we need to prepare a variety of formulas before we are able to define a numerical procedure for the vector generation without a full PCA-transformation of the MND with around 100,000 vectors. Some of the derived formulas will also allow for a deeper insight into how the multiple BNDs of a MND are related to each other and to confidence hypersurfaces of the MND.

Ellipses in general lead to equations governed by quadratic or fourth power polynomials. We will in addition use some elementary correlation formulas from statistics and, for some exercises, a simple optimization via derivatives. The series can be regarded as an excursion into some of the math which governs bivariate distributions resulting from a MND.

As MNDs may also be the result of other generative Machine Learning algorithms in respective latent spaces, the whole approach to statistical vector generation for such cases should be of general interest. Note also that the so called “central limit theorem” almost guarantees the appearance of MNDs in many multivariate datasets with sufficiently large samples and value dependencies on many singular observations.

Distributions of a variety of variables may result in a MND if the variables themselves depend on many individual observables with limited covariance values of their distributions. In particular, pairwise linearly correlated individual variables with Gaussian density distributions (seen as vector components) may constitute a MND if the conditional probabilities fulfill some rules. We will see a glimpse of this in 2 dimensions when we analyze integrals over Gaussians in the bivariate normal case.

Other approaches to statistical vector generation?

Well, we could try to reconstruct the multidimensional density function of the MND. This is a challenge which appears in some problems of pure statistics, but also in experimental physics. See e.g. a paper by Rafey Anwar, Madeline Hamilton and Pavel M. Nadolsky (2019, Department of Physics, Southern Methodist University, Dallas; https://arxiv.org/pdf/1901.05511.pdf). Then we would have to find the elements of the (inverse) covariance matrix or – equivalently – the elements of a multidimensional rotation matrix. But the most efficient algorithms to get the matrix coefficients again work with projections onto coordinate planes. I prefer to use properties of the ellipses of the bivariate distributions directly.

Note that using the multidimensional density function of the MND directly is not of much help if we want to keep the vectors’ end points within a defined multidimensional inner region of the distribution. E.g.: You want to limit the vectors to some confidence region of the MND, i.e. to keep them inside a certain multidimensional ellipsoidal contour hyper-surface. The BND-ellipses in the coordinate planes reflect the multidimensional, ellipsoidally shaped contour hyper-surfaces of the full distribution. Actually, when we project such a multidimensional contour hyper-surface vertically onto a coordinate plane, the outer 2-dim border line coincides with a contour ellipse of the respective BND. (This is due to properties of a MND. We will come back to this in a future post.) The problem of properly limiting individual vector component values is thus again best solved by analyzing properties of the BNDs.

Steps, methods, mathematical level

As a first step I will, for the sake of completeness, write down the formula for a multivariate normal distribution and briefly discuss its mathematical construction from uncorrelated univariate normal distributions. I will also list some basic properties of a MND (without proof!). These properties will justify our approach of creating statistical vectors pointing into a defined inner region of the MND by investigating projected contour ellipses of all subordinate BNDs. As a special aspect I want to make it at least plausible why the projected contour ellipses define infinitesimal regions of the same relative probability level as their multidimensional counterparts – namely the multidimensional ellipsoidal hypersurfaces which were projected onto the coordinate planes.

Then, as a first productive step, I want to motivate the specific mathematical form of the probability density function [p.d.f.] of a bivariate normal distribution. In contrast to many of the math papers I have read on the topic I want to use a symmetry argument to derive the basic form of the p.d.f. I will point out an important, but plausible assumption about conditional distributions. An analogous assumption on the multidimensional level is central for the properties of a MND.
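For orientation: the standard textbook form of this density for a centered (Vj, Vk)-pair with standard deviations σj, σk and correlation coefficient ρ, which the derivation is meant to make plausible, reads

\[
g(v_j, v_k) \;=\; \frac{1}{2\pi\,\sigma_j\,\sigma_k\,\sqrt{1-\rho^2}}\,
\exp\!\left[\,-\,\frac{1}{2\,(1-\rho^2)}\left(\frac{v_j^2}{\sigma_j^2}
\;-\;2\,\rho\,\frac{v_j\,v_k}{\sigma_j\,\sigma_k}
\;+\;\frac{v_k^2}{\sigma_k^2}\right)\right] .
\]

Setting the expression in the round brackets to a constant defines the elliptic contour lines mentioned above.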

As the distributions Vj and Vk can be correlated, we then want to understand the impact of the correlation coefficients on the parameters of the 2-dimensional density function. To achieve this I will again derive the density function by using our previous central assumption and some simple relations between the expectation values of the two constituting univariate distributions in the linear correlation regime. This concludes the part of the series in which we get familiar with BNDs.
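For reference: in the centered, linearly correlated case the relations alluded to here are the well-known conditional moments of a bivariate normal pair,

\[
E\big[\,V_k \,\big|\, V_j = v_j\,\big] \;=\; \rho\,\frac{\sigma_k}{\sigma_j}\,v_j\,,
\qquad
\operatorname{Var}\big[\,V_k \,\big|\, V_j = v_j\,\big] \;=\; \sigma_k^2\,\big(1-\rho^2\big)\,.
\]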

Furthermore we are interested in features and consequences of the 2-dimensional density functions. The contour lines of the 2-dim density function are ellipses – rotated by some specific angle. I will look at a formal mathematical process to construct such ellipses – in particular confidence ellipses. I will refer to the results Carsten Schelp has provided in an Internet article on this topic.

His construction process starts with a basic ellipse, which I will call the base correlation ellipse [BCE]. The half-axes of this ellipse are given by the square roots of the eigenvalues of the covariance matrix of the standardized marginal distributions constituting the BND. In addition, the main axes of this elementary ellipse are aligned with the two selected axes of the basic Euclidean coordinate system in which the bivariate distribution is defined. The lengths of the BCE’s main axes can be shown to depend on the correlation coefficient of the two vector component distributions Vj and Vk. This coefficient also appears in the precision matrix of the BND. Points on the base correlation ellipse can be mapped by two steps of an affine transformation onto points on the real contour ellipses, in particular onto points of the confidence ellipses.
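To fix ideas, here is a small numpy sketch of this two-step mapping for a centered (Vj, Vk)-pair. The function name and its signature are mine; the construction itself – a BCE with half-axes √(1+ρ) and √(1−ρ), followed by a 45° rotation and a scaling with n_std·σj and n_std·σk along the coordinate axes – follows Schelp’s article.

```python
import numpy as np

def confidence_ellipse_points(rho, sigma_j, sigma_k, n_std=2.2, num=200):
    """Hypothetical helper: map points of the base correlation ellipse [BCE]
    of a centered (Vj, Vk)-pair onto its n_std confidence ellipse,
    following the construction described by Carsten Schelp.

    rho              : correlation coefficient of Vj and Vk
    sigma_j, sigma_k : standard deviations of the marginal distributions
    n_std            : confidence level in units of sigma (e.g. 2.2)
    """
    t = np.linspace(0.0, 2.0 * np.pi, num)
    # BCE: axis-parallel ellipse of the standardized pair with
    # half-axes sqrt(1 + rho) and sqrt(1 - rho)
    bce = np.stack([np.sqrt(1.0 + rho) * np.cos(t),
                    np.sqrt(1.0 - rho) * np.sin(t)])
    # step 1: rotation by 45 degrees
    c = np.sqrt(0.5)
    rot = np.array([[c, -c], [c, c]])
    # step 2: scaling with n_std * sigma along the two coordinate axes
    scale = np.diag([n_std * sigma_j, n_std * sigma_k])
    return scale @ rot @ bce   # shape (2, num): rows contain vj- and vk-values
```

For an off-center distribution one would simply add the mean values of the two marginal distributions as a final translation.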

The whole construction process is not only of immense help when designing visualization programs for the contour ellipses of our distribution with many (around 100,000) individual vectors; the process itself also gives us some direct geometrical insights. Furthermore, it helps to avoid the numerical solution of the usual eigenvector-problems when answering some specific questions about the rotated contour ellipses. Normally we solve an eigenvalue-problem for the covariance matrix of the multivariate or of the many subordinate bivariate distributions to get precise information about contour ellipses. This corresponds to a transformation of the distributions to a new coordinate system whose axes are aligned with the main axes of the ellipses. Numerically, this transformation is directly related to a PCA transformation of the vector distributions. However, such a PCA-transformation can be costly in terms of CPU time.

Instead, we only need a numerical determination of all the mutual correlation coefficients of the univariate marginal distributions of the MND. Then the eigenvalue problem on the BND-level is already solved analytically. We therefore neither perform a full numerical PCA analysis of the MND with a multidimensional rotation of the roughly 100,000 sample vectors, nor do we analyze explained variance ratios to determine the most important PCA components for dimensionality reduction. Nor do we need to perform a numerical PCA analysis of the individual BNDs.
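The reason is elementary: for the standardized pair the covariance matrix is just the 2×2 correlation matrix, and its eigenvalues and eigenvectors can be written down directly:

\[
C \;=\; \begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix},
\qquad
\lambda_{1,2} \;=\; 1 \pm \rho\,,
\qquad
\mathbf{e}_{1,2} \;=\; \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ \pm 1 \end{pmatrix} .
\]

This is exactly why the contour ellipses of the standardized pair have half-axes √(1+ρ) and √(1−ρ) along directions rotated by 45° against the coordinate axes – and why the BCE can be drawn axis-parallel and rotated afterwards.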

Most importantly: Our problem of vector generation is formulated in the original latent space coordinate system and it gets a direct solution there. The nice thing is that Schelp’s construction mechanism reduces the math to the solution of quadratic polynomial equations for the BNDs. The solutions of those equations, which are required for our ultimate purpose of vector generation, can be stated in explicit form.

Therefore, the math in this series will mostly remain at high school level (at least at the level taught when I was young). Actually, it was fun to dive back into exercises reminding me of school 50 years ago. I hope the interested reader has some fun, too.

Solutions to some particular problems with respect to the confidence ellipses of the MND’s BNDs

In particular we will solve the following problems:

  • Problem 1: The two points on the BCE-ellipse with the same vj-value are not mapped onto points with the same vj-value on the confidence ellipse. We therefore derive the coordinates of points on the BCE-ellipse that give us one and the same vj-value on the real confidence ellipse.
  • Problem 2: Plots for a real MND vector distribution indicate that all (n-1) confidence ellipses of distribution pairs of a common Vj with the other marginal distributions Vk (for the same confidence level and with k ≠ j) have a common tangent parallel to one coordinate axis. We will derive the maximum vj-value for all ellipses of (j,k)-pairs of vector component distributions and prove that it is identical for all k. This defines the common interval of allowed vj-component-values for a bunch of confidence ellipses of all (Vj, Vk)-pairs with a common Vj.
  • Problem 3: The BCE-ellipses for a common j, but different k-values depend on different values of the correlation coefficients ρj,k of Vj with its various Vk counterparts. Therefore we need a formula that relates a point on the BCE-ellipse, which leads to a concrete vj-value of the mapped point on the confidence ellipse of a particular (Vj, Vk)-pair, to the respective points on the BCE-ellipses of other (Vj, Vm)-pairs with the same resulting vj-value on their confidence ellipses. I will derive such a formula. It will help us to apply multiple conditions onto the vector component values.
  • Problem 4: As a supplemental exercise we will derive mathematical expressions for the lengths of the main axes and the rotation angle of the ellipses (see the formulas quoted right after this list). We should, of course, get values that are identical to the results of the eigenvalue problem for the correlation matrix (describing a PCA coordinate transformation). This gives us additional confidence in Schelp’s approach.
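For orientation – the derivation itself follows later in the series – the standard eigenvalue results for the covariance matrix of a centered (Vj, Vk)-pair give the squared half-axes of the 1-σ contour ellipse and its rotation angle φ against the vj-axis as

\[
\sigma_{1,2}^2 \;=\; \frac{\sigma_j^2 + \sigma_k^2}{2}
\;\pm\; \sqrt{\left(\frac{\sigma_j^2 - \sigma_k^2}{2}\right)^{\!2} + \rho^2\,\sigma_j^2\,\sigma_k^2}\,,
\qquad
\tan(2\varphi) \;=\; \frac{2\,\rho\,\sigma_j\,\sigma_k}{\sigma_j^2 - \sigma_k^2}\,.
\]

For standardized marginals (σj = σk = 1) these reduce to the eigenvalues 1 ± ρ and to φ = ±45°, consistent with the correlation matrix results quoted above.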

In the end we can use our results to define a numerical algorithm for the direct creation of vectors pointing to a defined inner region of the multivariate normal distribution. As said, this algorithm does not require a costly PCA transformation of the full MND or many – namely n*(n-1)/2 – such PCA-transformations of its BNDs.

I intend to visualize all results with the help of a concrete multivariate example distribution created by a CAE for the CelebA dataset. The plots will make extensive use of Schelp’s construction algorithm for the confidence ellipses.

Conclusion and outlook

Convolutional Autoencoders create approximate multivariate normal distributions [MND] for certain input data (with Gaussian pattern properties) in their latent space. MNDs appear in other contexts of machine learning and statistics, too. For evaluation and generative purposes one may need statistical vectors whose end points lie inside a defined multidimensional hypersurface corresponding to a certain confidence level and a certain constant density value of the MND’s density function. These hypersurfaces are multidimensional ellipsoids.

We hope that we can use mathematical properties of the MND’s subordinate bivariate normal distributions [BNDs] to create statistical vectors with end points inside the multidimensional confidence ellipsoids of the MND. Typically such an ellipsoid resides off the origin of the latent space’s coordinate system, and the ellipsoid’s main axes are rotated against the axes of that coordinate system. We intend to base the confining conditions on the components of the desired statistical vectors on correlation coefficients of the marginal vector component distributions. Our numerical algorithm should avoid a full PCA-transformation of the multidimensional vector distribution.

In the next post of this series I will give a formula for the density function of a multivariate normal distribution. In addition I will list some basic properties which justify the vector generation approach via bivariate normal distributions.