Fun with shear operations and SVD – III – Shearing of circles

This post requires Javascript to display formulas!

In the two previous posts of this series

Fun with shear operations and SVD – I – shear matrices and examples created with Blender
Fun with shear operations and SVD – II – Shearing of rectangles and cubes with Python and Matplotlib

we have clarified some basic properties of shear transformations [SHT]. We got interested in this topic because Autoencoders can produce latent multivariate normal vector distributions, which in turn result from linear transformations of multivariate standard normal distributions. When we want to analyze such latent vector distributions, we should be aware of transformations of quadratic forms. An important linear transformation is a shear operation. It combines aspects of scaling with rotations.

The objects we applied SHTs to were so far only squares and cubes. Both (discrete) rotational and plane symmetries of the squares and cubes were broken by SHTs. We also saw that this symmetry breaking could not be explained by a pure scaling operation in another rotated Euclidean Coordinate System [ECS]. But cubes do not have a continuous rotational symmetry. The distances of surface points of a cube to its symmetry center show no isotropy.

However, already in the first post, when we superficially worked with Blender, we got the impression that the shearing of a sphere seemed to produce a figure with both plane and discrete rotational symmetries – namely an ellipsoid, which appeared to be rotated. We still have to prove this mathematically. With this post we take a first step in this direction: We will apply a shear operation to a 2D-body with perfect continuous rotational symmetry in all directions, namely a circle. A circle is a special example of a quadratic form (with respect to the vector component values). We center our Euclidean Coordinate System [ECS] at the center of the circle. We know already that this point remains a fixed point of our transformations. As in the previous post I use Python and Matplotlib to produce visual results. But we support our impression also by some simple math.

We first check via plotting that the shear operations move an extremal point of the circle (with respect to the y-coordinate) along a line y_max = const. (Points of other layers for other values y_l = const also move along their level lines.) We then have to find out whether the produced figure really is an ellipse. We do so by mathematically deriving its quadratic form with respect to the coordinates of the transformed points. Afterward, we derive the coordinate values of points with extremal y-values after the shear transformation.

In addition we calculate the position of the points with maximum and minimum distance from the center. I.e., we derive the coordinates of end-points of the main axes of the ellipse. This will enable us to calculate the angle, by which the ellipse is rotated against the x-axis.

The astonishing thing is that our ellipse actually can be created by a pure scaling operation in a rotated ECS. This seems to be in contrast to our insight in previous posts that a shear matrix cannot be diagonalized. But it isn’t … It is just the rotational symmetry of the circle that saves us.

Shearing a circle

We define a circle with radius r = a = 2.

I have indicated the limiting line at the extremal y-values. From the analysis in the last post we expect that a shear operation moves the extremal points along this line.

We now apply a shearing matrix with an x/y-shearing parameter λ = 2.0:

\[ M_{sh} \: = \: \begin{pmatrix} 1.0 & 2.0 \\ 0.0 & 1.0\end{pmatrix}
\]
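Applying this matrix to points on the circle and plotting the result takes only a few lines of Python and Matplotlib. The following is a minimal sketch with my own variable names, not the exact code behind the figures of this post:

import numpy as np
import matplotlib.pyplot as plt

a   = 2.0      # circle radius
lam = 2.0      # shear parameter lambda

# points on the original circle
phi    = np.linspace(0.0, 2.0 * np.pi, 400)
circle = np.stack([a * np.cos(phi), a * np.sin(phi)])    # shape (2, 400)

# shear matrix and sheared points
M_sh    = np.array([[1.0, lam],
                    [0.0, 1.0]])
sheared = M_sh @ circle

fig, ax = plt.subplots(figsize=(6, 6))
ax.plot(circle[0],  circle[1],  'b-', label='circle, r = a = 2')
ax.plot(sheared[0], sheared[1], 'r-', label='sheared circle')
ax.axhline( a, color='gray', ls='--', lw=0.8)    # level line y = +a
ax.axhline(-a, color='gray', ls='--', lw=0.8)    # level line y = -a
ax.set_aspect('equal')
ax.legend()
plt.show()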

The result is:

As expected! This really looks like an ellipse. Can we prove it?

Transformation of the circle

We use the definition of a shear transformation in 2D (see the first post of this series):

\[ \begin{align}
x_s \: &= \: x \, + \, \lambda * y \\
y_s \: &= \: y
\end{align}
\]

and the definition of the original circle with radius a:

\[ \begin{align}
{x^2 \over \, a^2} \, + \, {y^2 \over \, a^2} \:&=\: 1 \\
\Rightarrow \: {\left(\, x_s \,-\, \lambda\, y_s \,\right)^2 \over \, a^2} \, + \, {y_s^2 \over \, a^2} \:&=\: 1 \phantom{\Huge{(}}
\end{align}
\]

This actually gives us a quadratic expression in the new coordinate values:

\[
x_s^2 \, - \, 2\, \lambda * x_s * y_s \, + \, \left(\,1\,+\,\lambda^2\, \right)*y_s^2 \:=\: a^2
\]

Thus, we have indeed produced a rotated ellipse! We see this from the fact that the term mixing the x_s- and the y_s-coordinates does not vanish.
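A quick numerical cross-check with the sheared points of the sketch above confirms this quadratic form (the arrays circle and sheared and the parameters a, lam are taken from that sketch):

x_s, y_s = sheared
q = x_s**2 - 2.0 * lam * x_s * y_s + (1.0 + lam**2) * y_s**2
print(np.allclose(q, a**2))    # -> True for all points on the sheared circle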

Position of maximum absolute y-values

We know already that the y-coordinates of the extremal points (in y-direction) are preserved. And we know that these points were located at x = 0, y = a. So, we can calculate the coordinates of the shifted point very easily:

\[ x_s \:=\: 0 \, + \, \lambda * a \:\Rightarrow \: x_s \:=\: \lambda * a \quad (\, = 4\,)
\]

In our case this gives us a position at (4, 2). But for getting some experience with the quadratic form let us determine it differently, namely by rewriting the above quadratic equation and by a subsequent differentiation. Completing the square gives us:

\[
\left[\, y_s \,- \, {\lambda \over 1\,+\, \lambda^2} \, x_s \, \right]^2 \:=\:
{ 1 \over \left( 1\,+\, \lambda^2 \right)^2} \left[\, a^2\left(\, 1\,+\,\lambda^2 \,\right) \, - \, x_s^2 \,\right],
\]
\[
y_s \:=\: {1 \over 1\,+\, \lambda^2}\, \left( \, \lambda x_s \, \pm \, \left[\, a^2\left(\, 1\,+\,\lambda^2 \,\right) \, - \, x_s^2 \,\right]^{1/2}\, \right)
\]

and

\[
{\partial y_s \over \partial x_s } \:=\: {1 \over 1\,+\, \lambda^2 }\, \left( \, \lambda \, \pm \, {(-x_s) \over \left[\, a^2\left(\, 1\,+\,\lambda^2 \,\right) \, - \, x_s^2 \,\right]^{1/2} } \, \right)
\]

We demand that the derivative becomes zero:

\[
{\partial y_s \over \partial x_s } \:=\: 0 \: \Rightarrow \: x_s^2 \, = \, \lambda^2 \, \left[\, a^2\left(\, 1\,+\,\lambda^2 \,\right) \, - \, x_s^2 \,\right]
\]

Thus – as expected:

\[
x_s^2 \, = \, \lambda^2 \, a^2
\]

Points with maximum radial distance

Let us also find the position of the end-points of the main axes of the ellipse. One method would be to express the ellipse in terms of the coordinates (xs, ys), calculate the squared radial distance rs of a point from the center and set the derivative with respect to xs to zero.

\[
r_s^2 \, = \, x_s^2 \, +\, \left(y_s(x_s)\right)^2, \quad {\partial r_s \over \partial x_s} \,= \, 0 \,.
\]

The “problem” with this approach is that we have to work with a lot of terms with square roots. Sometimes it is easier to just work in the original coordinates and express everything in terms of (x, y):

\[
r_s^2(x) \, = \, \left(x_s(x)\right)^2 \, +\, \left(y_s(x)\right)^2, \quad {\partial r_s(x) \over \partial x} \,= \, 0 \,.
\]

With the basic circle equation and the transformation rules this gives us:

\[
{\partial \over \partial x} \left[x^2 \,+\, 2 \lambda x y \,+\, \lambda^2 y^2 \,+\, y^2 \right] \,= \, 0
\]
\[
\quad {\partial \over \partial x} \left[a^2\left(1 \,+\, \lambda^2\right) \,-\, \lambda^2 x^2 \,+\, 2 \lambda x \left(a^2\,-\, x^2\right)^{1/2} \right] \,= \, 0
\]

We do not get rid of the square roots. But it is easy to handle. Sorting of terms and getting rid of the denominator leads us to

\[
\left( a^2\,-\, x^2 \right) \, =\, x^2 \,+\, \lambda x \left( a^2 \,-\, x^2 \right)^{1/2}
\]

Completing the square by adding a suitable term on both sides gives:

\[
\left( a^2\,-\, x^2 \right) \,+\, {\lambda^2 \over 4} \left( a^2\,-\, x^2 \right) \, =\, \left[ \, x \,+\, {\lambda \over 2 } \left( a^2 \,-\, x^2 \right)^{1/2} \, \right]^2
\]

Taking the square root and reordering results in:

\[
x \,=\, \left( a^2 \,-\, x^2 \right)^{1/2} \, \left[\,\pm\, \left( 1\,+\, {\lambda^2 \over 4} \right) ^{1/2} -\, {\lambda \over 2} \,\right]
\]

Squaring leads to:

\[ \begin{align}
x^2 \,&=\, \left( a^2 \,-\, x^2 \right) \, \left[\,1 \,+\, {1 \over 2} \lambda^2 \,\pm\, {\lambda \over 2} \left(4\,+\, \lambda^2\right)^{1/2} \,\right] \\
&:=\ \xi * \left( a^2 \,-\, x^2 \right)
\end{align}
\]

So, in the end, we get a very simple expression for the x-coordinate of the axes’ end-points:

\[
x \,=\, \pm\, a\, \sqrt{\eta \phantom{\large{(}} }, \quad \mbox{with} \quad \eta \,=\, {\xi \over 1\, +\, \xi }
\]

This gives us in total 4 solutions, as ξ takes two values:

\[ \begin{align}
\xi_1 \,&=\, \left[\,1 \,+\, {1 \over 2} \lambda^2 \,-\, {\lambda \over 2} \left(4\,+\, \lambda^2\right)^{1/2} \,\right], \quad \eta_1 \, = \, {\xi_1 \over 1\, +\, \xi_1} \\
\xi_2 \,&=\, \left[\,1 \,+\, {1 \over 2} \lambda^2 \,+\, {\lambda \over 2} \left(4\,+\, \lambda^2\right)^{1/2} \,\right], \quad \eta_2 \, = \, {\xi_2 \over 1 \,+\, \xi_2}
\end{align}
\]

This was to be expected as there are 4 end-points of 2 main axes of an ellipse.

Of course, we need the transformed coordinates. A small calculation shows:

End points of 1st main axis

\[ \begin{align}
x_{s1} \,&=\,\, \: a\, \sqrt{\eta_1 \phantom{\large{(}} } \,+\, \lambda a \, \sqrt{1 \,-\,\eta_1 \phantom{\large{(}} }, \quad
y_{s1} \,=\, \: a\, \sqrt{1 \,-\,\eta_1 \phantom{\large{(}} } \\
x_{s2} \,&=\, -\, a \, \sqrt{\eta_1 \phantom{\large{(}} } \,-\, \lambda a \, \sqrt{1 \,-\,\eta_1 \phantom{\large{(}} }, \quad
y_{s2} \,=\, -\, a \, \sqrt{1 \,-\,\eta_1 \phantom{\large{(}} }
\end{align}
\]

End points of 2nd main axis

\[ \begin{align}
x_{s3} \,&=\,\,\: a \, \sqrt{\eta_2 \phantom{\large{(}} } \,-\, \lambda a \, \sqrt{1 \,-\,\eta_2 \phantom{\large{(}} }, \quad
y_{s3} \,=\, -\, a \, \sqrt{1 \,-\,\eta_2 \phantom{\large{(}} } \\
x_{s4} \,&=\, -\, a \, \sqrt{\eta_2 \phantom{\large{(}} } \,+\, \lambda a \, \sqrt{1 \,-\,\eta_2 \phantom{\large{(}} }, \quad
y_{s4} \,=\,\, \: a \, \sqrt{1 \,-\,\eta_2 \phantom{\large{(}} } \\
\end{align}
\]

These formulas also enable us to calculate the lengths of our ellipse’s main axes (half-diameters!), a_s and b_s:

\[ \begin{align}
a_s \,&=\,\,\: \sqrt{\,x_{s1}^2 \, + \, y_{s1}^2 \phantom{\large{(}} }, \quad \left(\,\approx 4.83 \,\right) \\
b_s \,&=\,\,\: \sqrt{\,x_{s3}^2 \, + \, y_{s3}^2 \phantom{\large{(}} }, \quad \left(\,\approx 0.83\, \right)
\end{align}
\]

For our shearing in x-direction a_s gives us the longer axis. The rotational angle α between the longer main axis and the x-axis can be calculated via:

\[
\alpha \,=\, \arctan \left( { y_{s1} \over x_{s1} } \right) , \quad \left( \,\approx 22.5 \deg \right)
\]
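A few lines of Python reproduce the numbers quoted above for our example with a = 2 and λ = 2 (a sketch based on the formulas derived in this post):

import numpy as np

a, lam = 2.0, 2.0
root   = np.sqrt(4.0 + lam**2)

xi1 = 1.0 + 0.5 * lam**2 - 0.5 * lam * root     # ~ 0.172
xi2 = 1.0 + 0.5 * lam**2 + 0.5 * lam * root     # ~ 5.828
eta1, eta2 = xi1 / (1.0 + xi1), xi2 / (1.0 + xi2)

# end-point of the long main axis: (x_s1, y_s1)
x_s1 = a * np.sqrt(eta1) + lam * a * np.sqrt(1.0 - eta1)
y_s1 = a * np.sqrt(1.0 - eta1)
# end-point of the short main axis: (x_s3, y_s3)
x_s3 = a * np.sqrt(eta2) - lam * a * np.sqrt(1.0 - eta2)
y_s3 = -a * np.sqrt(1.0 - eta2)

a_s   = np.sqrt(x_s1**2 + y_s1**2)              # ~ 4.83
b_s   = np.sqrt(x_s3**2 + y_s3**2)              # ~ 0.83
alpha = np.degrees(np.arctan2(y_s1, x_s1))      # ~ 22.5 deg
print(a_s, b_s, alpha)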

Plot of main axes, their end-points and of the points with maximum y-value

The coordinate data found above help us to plot the respective points and the axes of the produced ellipse. The diameters’ end-points are plotted in red, the points with extremal y-value in green:

It becomes very clear that the points with maximum y-values are not identical with the end-points of the ellipse’s main symmetry axes. We have to keep this in mind for a discussion of higher dimensional figures and vector distributions such as multidimensional spheres, ellipsoids and multivariate normal distributions in later posts.

Rotated ECS to produce the ellipse?

The plot above makes it clear that we could have created the ellipse also by switching to an ECS rotated by the angle α, followed by a simple scaling along the new axes with factors a_s/a and b_s/a. This seems to be a contradiction to a previous statement in this post series, which said that a shear matrix cannot be diagonalized. We saw that in general we cannot find a rotated ECS in which the shear transformation reduces to pure scaling along the coordinate axes. We assumed from linear algebra that we in general need a first rotation plus a scaling and afterward a second, different rotation.

But the reader has already guessed it: For a fully rotation-symmetric, i.e. isotropic body any first rotation does not change the figure’s symmetry with respect to the new coordinate axes. In contrast e.g. to squares or rectangles any rotated coordinate system is as good as any other with respect to the effect of scaling. So, it is just scaling and rotating or vice versa. No second rotation required. We shall in a later post see that this holds in general for isotropically shaped bodies.

Conclusion

Enough for today. We have shown that a shear transformation applied to a circle always produces an ellipse. We were able to derive the vectors to the points with maximum y-values and to the end-points of the main axes from the parameters of the original circle and of the shear matrix. We saw that due to the circle’s isotropy we could reduce the effect of shearing to a scaling plus one rotation (or vice versa), in contrast to what we saw for a cube in the previous post.

In the next post

Fun with shear operations and SVD – IV – Shearing of ellipses

we will apply a shearing transformation to an ellipse.

 

Statistical vector generation for multivariate normal distributions – I – multivariate and bi-variate normal distributions from CAEs

This post requires Javascript to display formulas!

Convolutional Autoencoders and multivariate normal distributions

Experiments like my own with convolutional Autoencoders [CAE] show: A CAE maps a training set of human face images (e.g. CelebA) onto an approximate multivariate vector distribution in the CAE’s latent space Z. Each image corresponds to a point (z-point) and a corresponding vector (z-vector) in the CAE’s multidimensional latent space. More precisely, the results of numerical experiments showed:

The multidimensional density function which describes the inner dense core of the z-point distribution (containing more than 80% of all points) was (aside from normalization factors) equivalent to the density function of a multivariate normal distribution [MND] for the respective z-vectors in an Euclidean coordinate system.

For results of my numerical CAE-experiments see
Autoencoders and latent space fragmentation – X – a method to create suitable latent vectors for the generation of human face images
and related previous posts in this blog. After the removal of some outliers beyond a high sigma-level (≥ 3) of the original distribution the remaining core distribution fulfilled conditions of standard tests for multivariate normal distributions like the Shapiro-Wilk test or the Henze-Zirkler test.
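For readers who want to run such tests on their own samples: the sketch below assumes that the latent vectors of the core distribution are available as rows of a NumPy array and that the pingouin package with its Henze-Zirkler implementation (multivariate_normality) is installed. The file name is, of course, hypothetical.

import numpy as np
from scipy.stats import shapiro
from pingouin import multivariate_normality   # assumed to provide the Henze-Zirkler test

# hypothetical file with the core of the z-point distribution (outliers removed)
z = np.load('z_core.npy')                     # shape: (n_samples, n_dim)

# univariate Shapiro-Wilk tests per component (necessary, but not sufficient for a MND)
p_vals = np.array([shapiro(z[:, j]).pvalue for j in range(z.shape[1])])
print("components failing Shapiro-Wilk at the 5% level:", np.sum(p_vals < 0.05))

# multivariate Henze-Zirkler test
print(multivariate_normality(z, alpha=0.05))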

After a normalization with an appropriate factor the continuous density functions controlling the multivariate vector distribution can be interpreted as a probability density function [p.d.f.]. The components vj (j=1, 2,…, n) of the vectors to the z-points are regarded as logically separate, but not uncorrelated variables. For each of these variables a component specific value distribution Vj is given. All these marginal distributions contribute to a random vector distribution V, in our case with the properties of a MND:

\[ \boldsymbol{V} \: = \: \left( \, V_1, \, V_2, \, ….\, V_n\, \right) \: \sim \: \boldsymbol{\mathcal{N_n}} \, \left( \, \boldsymbol{\mu} , \, \boldsymbol{\Sigma} \, \right), \\ \quad \mbox{with} \: \boldsymbol{\mathcal{N_n}} \: \mbox{symbolizing a MND in an n-dimensional space}
\]

μ is a vector with all mean values μj of the Vj component distributions as its components. Σ abbreviates the covariance matrix relating the distributions Vj with one another.
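Just to fix the notation with a small toy example (not the CAE data): NumPy lets us draw samples of such a distribution directly from μ and Σ, and the empirical moments reproduce the chosen parameters:

import numpy as np

mu    = np.array([1.0, -0.5, 2.0])           # toy mean vector
Sigma = np.array([[1.0, 0.6, 0.2],           # toy covariance matrix (positive definite)
                  [0.6, 2.0, 0.4],
                  [0.2, 0.4, 0.5]])

rng = np.random.default_rng(42)
V   = rng.multivariate_normal(mu, Sigma, size=100_000)   # shape (100000, 3)

print(V.mean(axis=0))                        # ~ mu
print(np.cov(V, rowvar=False))               # ~ Sigma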

The point distribution of a CAE’s MND forms a complex rotated multidimensional ellipsoid with its center somewhere off the origin in the latent space. The latent space itself typically has many dimensions. In the case of my numerical experiments the number of dimensions was n ≥ 256. The number of sample vectors used was between 80,000 and 200,000 – enough data to approximate the vector distribution by a continuous density function. The densities for the Vj-distributions formed a smooth Gaussian function (for a reasonable sampling interval). But note: The fact that the Vj have a Gaussian form is not a sufficient condition for a MND. (See the next post.) But if a MND is given, all Vj have a Gaussian form.

Generative use of MNDs in multidimensional latent spaces of high dimensionality

When we want to use a CAE as a generative tool we need to solve a problem: We must create statistical vectors which point into the (multidimensional) volume of our point distribution in the latent space of the encoding algorithm. Only such vectors provide useful information to the Decoder of the CAE. A full multivariate normal distribution and the contour hypersurfaces of its multidimensional density function are difficult to analyze and to control when developing a proper numerical algorithm. Therefore I want to reduce the problem of vector generation to a sequence of viewable and controllable 2-dimensional problems. How can this be achieved?

A central property of a multivariate normal distribution helps: Any sub-selection of m vector-component distributions forms a multivariate normal distribution, too (see below). For m=2 and for vector components indexed by (j,k) with respective distributions Vj, Vk we get a so-called “bivariate normal distribution” [BND]:

V_{jk} \: \sim \: \mathcal{N}_2\left(\, (\mu_j, \mu_k), \, \boldsymbol{\Sigma}_{jk} \, \right), \quad \mbox{with} \: \boldsymbol{\Sigma}_{jk} \: \mbox{denoting the covariance matrix of the pair}
\]

A MND has n*(n-1)/2 such subordinate BNDs. The 2-dim density function of a bivariate normal distribution

g_{jk}\,\left( \, v_j, \, v_k\, \right) \: = \: g_{jk}\,\left( \, v_j, \, v_k, \, \mu_j, \, \mu_k, \, \sigma_j, \sigma_k, \, \sigma_{jk}, … \right)
\]

for vector component values vj, vk defines a point density of the sample data in the (j,k)-coordinate plane of the Euclidean coordinate system in which the MND is described. The density functions of the marginal distributions Vj showed the typical Gaussian form of a univariate normal distribution.

The density function of a BND has some interesting mathematical properties. Among other things: The curves of constant density of a BND’s density function are ellipses. This is illustrated by the following plots showing such contour lines for selected pairs (Vj, Vk) of a real point-distribution in a 256-dimensional latent space. The point distribution was created by a CAE in its latent space for the CelebA dataset.

Contour lines for selected (j,k)-pairs. The thick lines stem from theory and calculated correlation coefficients of the univariate distributions.

The next plot shows the contours of selected vector-component pairs after a PCA-transformation of the full MND. (Main ellipse axes are now aligned with the axes of the PCA-coordinate system):

These ellipses with axes along the coordinate axes are relatively easy to handle. They can be used for vector creation. But they require a full PCA transformation of the MND-distribution, a PCA-analysis for complexity reduction and an application of the inverse PCA-transformation. The plot below shows the point-density compared to 2.2-σ confidence ellipses. The orange points are the results of a proper statistical numerical vector generation algorithm based on a PCA-transformation of the MND.

See my post quoted above for the application of a PCA-transformation of the multidimensional MND for vector creation.

However, we get the impression that we could also use these rotated ellipses in projections of the MND onto coordinate planes of the original latent space system directly to impose limiting conditions on the component values of statistical vectors pointing to inner regions of the MND. Of course, a generated statistical vector must then comply with the conditions of all such ellipses. This requires an analysis and combined use of the ellipses of all of the subordinate BNDs of the original MND during an iterative or successive definition of the values for the vector components.

Objective of this post series

In my last post about CAEs (see the link given above) I have explicitly asked the question whether one can avoid performing a full PCA-transformation of the MND when creating statistical vectors pointing to a defined inner region of a MND.

The objective of this post series is to prove the answer: Yes, we can. And we will use the BNDs resulting from projections of the original MND onto coordinate planes. We will in particular explore the properties of the confidence ellipses of the n*(n-1)/2 BNDs. As said: These ellipses are rotated against the coordinate system’s axes. We will have to deal with this in detail. We will also use properties of their 1-dimensional marginal distributions (projections onto the coordinate axes, i.e. the Vj).

In addition we need to prepare a variety of formulas before we are able to define a numerical procedure for the vector generation without a full PCA-transformation of the MND with around 100,000 vectors. Some of the derived formulas will also allow for a deeper insight into how the multiple BNDs of a MND are related to each other and to confidence hypersurfaces of the MND.

Ellipses in general lead to equations governed by quadratic or fourth power polynomials. We will in addition use some elementary correlation formulas from statistics and, for some exercises, a simple optimization via derivatives. The series can be regarded as an excursion into some of the math which governs bivariate distributions resulting from a MND.

As MNDs may also be the result of other generative Machine Learning algorithms in respective latent spaces, the whole approach to statistical vector generation for such cases should be of general interest. Note also that the so called “central limit theorem” almost guarantees the appearance of MNDs in many multivariate datasets with sufficiently large samples and value dependencies on many singular observations.

Distributions of a variety of variables may result in a MND if the variables themselves depend on many individual observables with limited covariance values of their distributions. In particular, individual variables (seen as vector components) with pairwise linearly correlated Gaussian density distributions may constitute a MND if the conditional probabilities fulfill some rules. We will see a glimpse of this in 2 dimensions when we analyze integrals over Gaussians in the bivariate normal case.

Other approaches to statistical vector generation?

Well, we could try to reconstruct the multidimensional density function of the MND. This is a challenge which appears in some problems of pure statistics, but also in experimental physics. See e.g. a paper of Rafey Anwar, Madeline Hamilton, Pavel M. Nadolsky (2019, Department of Physics, Southern Methodist University, Dallas; https://arxiv.org/pdf/1901.05511.pdf). Then we would have to find the elements of the (inverse) covariance matrix or – equivalently – the elements of a multidimensional rotation matrix. But the most efficient algorithms to get the matrix coefficients again work with projections onto coordinate planes. I prefer to use properties of the ellipses of the bivariate distributions directly.

Note that using the multidimensional density function of the MND directly is not of much help if we want to keep the vectors’ end points within a defined multidimensional inner region of the distribution. E.g.: You want to limit the vectors to some confidence region of the MND, i.e. to keep them inside a certain multidimensional ellipsoidal contour hyper-surface. The BND-ellipses in the coordinate planes reflect the multidimensional ellipsoidally shaped contour hypersurfaces of the full distribution. Actually, when we vertically project such a multidimensional contour hypersurface onto a coordinate plane, the outer 2-dim border line coincides with a contour ellipse of the respective BND. (This is due to properties of a MND. We will come back to this in a future post.) The problem of properly limiting individual vector component values thus again is best solved by analyzing properties of the BNDs.

Steps, methods, mathematical level

As a first step I will, for the sake of completeness, write down the formula for a multivariate normal distribution, discuss a bit its mathematical construction from uncorrelated univariate normal distributions and list some basic properties of a MND (without proof!). These properties will justify our approach to create statistical vectors pointing into a defined inner region of the MND by investigating projected contour ellipses of all subordinate BNDs. As a special aspect I want to make it at least plausible why the projected contour ellipses define infinitesimal regions of the same relative probability level as their multidimensional counterparts – namely the multidimensional ellipsoidal hypersurfaces which were projected onto coordinate planes.

Then as a first productive step I want to motivate the specific mathematical form of the probability density function [p.d.f] of a bivariate normal distribution. In contrast to many of the math papers I have read on the topic I want to use a symmetry argument to derive the basic form of the p.d.f. I will point out an important, but plausible assumption about conditional distributions. An analogous assumption on the multidimensional level is central for the properties of a MND.

As the distributions Vj and Vk can be correlated we then want to understand the impact of the correlation coefficients on the parameters of the 2-dimensional density function. To achieve this I will again derive the density function by using our previous central assumption and some simple relations between the expectation values of the constituting two univariate distributions in the linear correlation regime. This concludes the part of the series where we get familiar with BNDs.

Furthermore we are interested in features and consequences of the 2-dimensional density functions. The contour lines of the 2-dim density function are ellipses – rotated by some specific angle. I will look at a formal mathematical process to construct such ellipses – in particular confidence ellipses. I will refer to the results Carsten Schelp has provided in an Internet article on this topic.

His construction process starts with a basic ellipse, which I will call base correlation ellipse [BCE]. The lengths of the axes of this ellipse are determined by the eigenvalues of the covariance matrix of the standardized marginal distributions constituting the BND. The main axes of this elementary ellipse are in addition aligned with the two selected axes of a basic Euclidean coordinate system in which the bivariate distribution is defined. The lengths of the BCE’s main axes can be shown to depend on the correlation coefficient for the two vector component distributions Vj and Vk. This coefficient also appears in the precision matrix of the BND. Points on the base correlation ellipse can be mapped with two steps of an affine transformation onto points on the real contour ellipses, in particular to points of the confidence ellipses.
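To make the construction concrete, here is a condensed sketch along the lines of the publicly available Matplotlib implementation of Schelp’s construction (adapted and simplified by me; not the exact code used for the plots in this series):

import numpy as np
from matplotlib.patches import Ellipse
import matplotlib.transforms as transforms

def confidence_ellipse(x, y, ax, n_std=2.0, **kwargs):
    # correlation coefficient of the two marginal distributions
    cov = np.cov(x, y)
    rho = cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1])

    # base correlation ellipse [BCE] for the standardized variables,
    # with its axes aligned with the coordinate axes
    bce = Ellipse((0.0, 0.0),
                  width=2.0 * np.sqrt(1.0 + rho),
                  height=2.0 * np.sqrt(1.0 - rho),
                  facecolor='none', **kwargs)

    # affine transformation: rotate, scale by n_std * sigma, translate to the mean
    transf = (transforms.Affine2D()
              .rotate_deg(45.0)
              .scale(n_std * np.sqrt(cov[0, 0]), n_std * np.sqrt(cov[1, 1]))
              .translate(np.mean(x), np.mean(y)))
    bce.set_transform(transf + ax.transData)
    return ax.add_patch(bce)

# usage for a component pair (j, k) of a latent vector array z:
# confidence_ellipse(z[:, j], z[:, k], ax, n_std=2.2, edgecolor='red')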

The whole construction process is not only of immense help when designing visualization programs for the contour ellipses of our distribution with many (around 100,000) individual vectors. The process itself gives us some direct geometrical insights. It also helps to avoid finding a numerical solution of the usual eigenvector-problems when answering some specific questions about the rotated contour ellipses. Normally we solve an eigenvalue-problem for the covariance matrix of the multi- or the many subordinate bi-variate distributions to get precise information about contour ellipses. This corresponds to a transformation of the distributions to a new coordinate system whose axes are aligned to the main axes of the ellipses. Numerically this transformation is directly related to a PCA transformation of the vector distributions. However, such a PCA-transformation can be costly in terms of CPU time.

Instead, we only need a numerical determination of all the mutual correlation coefficients of the univariate marginal distributions of the MND. Then the eigenvalue problem on the BND-level is already solved analytically. We therefore need neither a full numerical PCA analysis of the MND with a multidimensional rotation of the roughly 100,000 sample vectors, nor an analysis of explained variance ratios to determine the most important PCA components for dimensionality reduction, nor a numerical PCA analysis of the individual BNDs.

Most important: Our problem of vector generation is formulated in the original latent space coordinate system and it gets a direct solution there. The nice thing is that Schelp’s construction mechanism reduces the math to the solution of quadratic polynomial equations for the BNDs. The solutions of those equations, which are required for our ultimate purpose of vector generation, can be stated in an explicit form.

Therefore, the math in this series will mostly remain on high school level (at least at a level given when I was young). Actually, it was fun to dive back into exercises reminding me of school 50 years ago. I hope the interested reader has some fun, too.

Solutions to some particular problems with respect to the confidence ellipses of the MND’s BNDs

In particular we will solve the following problems:

  • Problem 1: The two points on the BCE-ellipse with the same v_j-value are not mapped onto points with the same v_j-value on the confidence ellipse. We therefore derive the coordinates of points on the BCE-ellipse that give us one and the same v_j-value on the real confidence ellipse.
  • Problem 2: Plots for a real MND vector distribution indicate that all (n-1) confidence ellipses of distribution pairs of a common Vj with other marginal distributions Vk (for the same confidence level and with k ≠ j) have a common tangent parallel to one coordinate axis. We will derive the maximum v_j-value for all ellipses of (j,k)-pairs of vector component distributions and prove that it is identical for all k. This will define the common interval of allowed v_j-component-values for a bunch of confidence ellipses for all (Vj, Vk)-pairs with a common Vj.
  • Problem 3: The BCE-ellipses for a common j-, but different k-values depend on different values for the correlation coefficients ρj,k of Vj with its various Vk counterparts. Therefore we need a formula that relates a point on the BCE-ellipse leading to a concrete v_j-value of the mapped point on the confidence ellipse of a particular (Vj, Vk)-pair to respective points on other BCE-ellipses of different (Vj, Vm)-pairs with the same resulting v_j-value on their confidence ellipses. I will derive such a formula. It will help us to apply multiple conditions onto the vector component values.
  • Problem 4: As a supplemental exercise we will derive a mathematical expression for the size of the main axes and the rotation angle of the ellipses. We should, of course, get values that are identical to the results of the eigenvalue problem for the correlation matrix (describing a PCA coordinate transformation). This gives us additional confidence in Schelp’s approach.

In the end we can use our results to define a numerical algorithm for the direct creation of vectors pointing to a defined inner region of the multivariate normal distribution. As said, this algorithm does not require a costly PCA transformation of the full MND or many, namely n*(n-1)/2, such PCA-transformations of its BNDs.

I intend to visualize all results with the help of a concrete multivariate example distribution created by a CAE for the CelebA dataset. The plots will use Schelp’s construction algorithm for the confidence ellipses extensively.

Conclusion and outlook

Convolutional Autoencoders create approximate multivariate normal distributions [MND] for certain input data (with Gaussian pattern properties) in their latent space. MNDs appear in other contexts of machine learning and statistics, too. For evaluation and generative purposes one may need statistical vectors with end points inside the region bounded by a multidimensional hypersurface corresponding to a certain confidence level and a certain constant density value of the MND’s density function. These hypersurfaces are multidimensional ellipsoids.

We have the hope that we can use mathematical properties of the MND’s subordinate bivariate normal distributions [BNDs] to create statistical vectors with end points inside the multidimensional confidence ellipsoids of a MND. Typically such an ellipsoid resides off the origin of the latent space’s coordinate system and the ellipsoid’s main axes are rotated against the axes of the coordinate system. We intend to base the confining conditions for the components of the desired statistical vectors on correlation coefficients of the marginal vector component distributions. Our numerical algorithm should avoid a full PCA-transformation of the multidimensional vector distribution.

In the next post of this series I will give a formula for the density function of a multivariate normal distribution. In addition I will list some basic properties which justify the vector generation approach via bivariate normal distributions.

 

Autoencoders and latent space fragmentation – X – a method to create suitable latent vectors for the generation of human face images

My present post series explores options to use a standard convolutional Autoencoder [AE] for the creation of images with human faces. The face generation should be based on random input to the AE’s Decoder. On our quest for a suitable method we have meanwhile learned a lot about other aspects of Autoencoders, vector distributions in multi-dimensional latent spaces and generative methods for our special case:

  • Methods to create statistical latent vectors [z-vectors] as input for the AE’s Decoder must be chosen carefully. Among other things: It is difficult to create a bunch of random vectors which cover wider areas in the vastness of a multidimensional space. So the z-vector creation must be adjusted to specific requirements.
  • After having been trained with CelebA images a convolutional AE fills a limited and coherent region in the latent space with z-points for the training images. This latent space region appears to be critical for successful image creation: Statistically generated z-vectors should point to this region. The core of the z-point distribution gets filled relatively densely.
  • A convolutional AE maps human face images onto an approximate multivariate normal distribution. This gives the inner core of the z-point distribution the structure of a multidimensional ellipsoid. The projections of this ellipsoid onto 2-dimensional coordinate planes show characteristic nested elliptic contour lines.
  • As the main axes of these ellipses were inclined at different angles against the axes of the chosen coordinate planes, we concluded that linear correlations mark average dependencies between the z-vector components. Limiting conditions imposed by these correlations must also be fulfilled by z-vectors used as the Decoder’s input.

See previous posts in this series for more details. In particular, the last 2 posts

Autoencoders and latent space fragmentation – IX – PCA transformation of the z-point distribution for CelebA

Autoencoders and latent space fragmentation – VIII – approximation of the latent vector distribution by a multivariate normal distribution and ellipses

have shown that the density distribution for the z-points really exhibits elliptic contour lines in the original coordinate system of the latent space and (!) in the target coordinate system of a PCA transformation.

In this post we use our gathered knowledge: I present a first simple method to generate z-vectors which point to the latent space region filled by z-points for CelebA images. These z-vectors will fulfill the general and limiting elliptic conditions for their components.

Decomposing the full problem of latent vector generation into a sequence of 2-dimensional problems

The nice thing about multivariate Gaussian distributions with linear correlations between the vector components is the following: We can reduce the problem of choosing proper component values to a series of 2-dimensional restrictions. Firstly we can use characteristic properties of the Gaussian distribution for each component. And secondly we can use confidence ellipses in 2-dimensional coordinate planes to restrict the component values to allowed intervals.

Ellipses are most easy to handle when their axes are aligned with the axes of the coordinate system in which we describe them. So, let us assume that we know an affine transformation T to a new coordinate system which also has orthogonal axes and supports the following special transformation properties for a multivariate normal density distribution:

  1. T maps nested elliptic contour lines of the multidimensional density distribution and in particular confidence ellipses for component pairs in the original coordinate system to nested elliptic contours and confidence ellipses in the new coordinate system.
  2. T aligns the centers of the transformed ellipses with the origin of the new coordinate system.
  3. T aligns the main axes of the mapped ellipses with the axes of the new coordinate system.
  4. T is reversible.

How could we then use the transformed data for vector-creation?

In the new coordinate system, a contour ellipse in a chosen coordinate plane for the axes-indices (i, j) may have main diameters of size

d1 = 2 * a    and    d2 = 2 * b.

We then can first select a random v_i value to fall into a range [-a * fact, a * fact].

-fact * a    <    v_i    <    fact * a

Here, fact is a proper factor which defines a confidence level in the new coordinate system. With the value of v_i fixed and b being the half-diameter in the orthogonal direction, the correlation condition for the z-point distribution says that the v_j value must fall into an interval [-c, c] defined by:

-c    <    v_j    <    c,
with c = b * fact * sqrt(1 - v_i**2 / (fact * a)**2)

But within these limits we can again choose the v_j-value freely. Below I use a simple random-function for a constant probability density to pick a value.
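For a single component pair this amounts to only a few lines (a sketch with my own variable names; a and b are the half-diameters of the respective contour ellipse in the transformed system):

import numpy as np
rng = np.random.default_rng()

def pick_pair(a, b, fact=0.7):
    # pick v_i uniformly within its allowed range ...
    v_i = rng.uniform(-fact * a, fact * a)
    # ... and v_j within the limits imposed by the ellipse condition
    c   = b * fact * np.sqrt(1.0 - v_i**2 / (fact * a)**2)
    v_j = rng.uniform(-c, c)
    return v_i, v_j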

However: It would not be enough to restrict the coordinates to the conditions of just one ellipse! The components of the created vectors must in parallel fulfill the elliptic conditions for all of the possible pairs of vector components. I.e. we may need to adapt the v_j values gained from the analysis of a first 2D-ellipse to further conditions of other ellipses and component pairs. This can be achieved by an iteration. For z_dim = 256 this involves a total of 32640 checks and possible value-adaptions to each and all of the allowed value ranges.

In addition: The order by which the component-pairs and their conditions are investigated must be randomized to get real statistical vector distributions.

Eventually the resulting vector components must be re-transformed into the original coordinate system of the latent space.

The ellipse for the “core’s boundary” in the original coordinate system will be defined by the chosen confidence level of the ellipsoidal normal distribution. We saw already that a confidence level of σ = 2.0 defines the transition to outer regions of the z-point density distribution quite well.

This all sounds manageable by relative simple Python programs. But: Do we know a proper transformation T? Yes, we do: A PCA-transformation of the z-point density distribution has all the properties discussed above.

Using half maximum values after a PCA transformation of the z-point distribution

The last post proved that a PCA transformation maps ellipses onto ellipses for component pairs in the transformed PCA coordinate system. The advantage of the ellipses there is that their main axes are on average well aligned with the orthogonal PCA coordinate axes. Gaussians for the number density distribution per component are mapped to Gaussians for the new components in the transformed coordinate system. So, the basic idea for a proper z-vector generation is:

  1. Take the multivariate normal z-point distribution for the training images in the AE’s latent space.
  2. Apply a PCA analysis to diagonalize the correlation matrix and transform the z-vector components to the PCA coordinate system.
  3. Use the ellipses in coordinate planes of the PCA coordinate system to create random z-vector components fulfilling all required conditions there.
  4. Re-transform the resulting z-vector components into the original coordinate system of the latent space.
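A condensed sketch of these four steps could look like the code below. It assumes the latent vectors of the training images are given as rows of an array Z, and it only checks each component against the condition of its predecessor in a randomized order; the full iterative adaption against all component pairs, as described above, is omitted for brevity.

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng()

def generate_z_vectors(Z, n_vec=20, n_comp=100, fact=0.7):
    # steps 1 and 2: PCA-transformation of the z-point distribution
    pca = PCA(n_components=n_comp)
    Z_p = pca.fit_transform(Z)
    sig = Z_p.std(axis=0)                      # standard deviations along the PCA axes

    new_p = np.zeros((n_vec, n_comp))
    for v in new_p:
        # step 3: choose component values within axis-parallel elliptic conditions
        order = rng.permutation(n_comp)        # randomized order of the components
        for idx, j in enumerate(order):
            a_j = fact * sig[j]
            if idx == 0:
                v[j] = rng.uniform(-a_j, a_j)
            else:
                k = order[idx - 1]
                c = a_j * np.sqrt(max(0.0, 1.0 - (v[k] / (fact * sig[k]))**2))
                v[j] = rng.uniform(-c, c)

    # step 4: back-transformation into the original latent space coordinates
    return pca.inverse_transform(new_p)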

Point 3 in our method is covered by a numerical analysis of the Gaussians in the PCA-coordinate system. We determine the half-width numerically by analyzing the density distribution with the help of sampling intervals. This simple method has resolution limits related to the size of the sampling interval. This has consequences for PCA components with a small standard deviation. We saw already in the last posts that such distributions appear for higher PCA components at the lower end of the explained variance.

Does the suggested method work?

The convolutional AE we work with was defined in previous posts with 4 Conv2D layers in the Encoder and 4 Conv2DTranspose layers in the Decoder. The number of latent space dimensions was z_dim = 256. The AE network was trained on CelebA images. I do not want to bore you with details of the codes for the creation of z-vectors consistent with the resulting elliptic conditions. It is all standard. The PCA-transformation can e.g. be taken from the sklearn-package.

I have applied a constant probability density to choose a random value within the allowed ranges for the component values of the aspired z-vectors in the PCA coordinate system. For the plots below I have used the most important 50 to 105 PCA components (out of 256). The plots include confidence ellipses on a level of σ = 2.2. I derived the confidence ellipses by directly evaluating the standard deviations of the transformed distribution data in all coordinate directions.

The first plot shows you such an ellipse for the coordinate plane corresponding to the first two, most important PCA components. The orange points mark 20 z-points defined by 20 randomly created z-vectors fulfilling all elliptic conditions. The plot contains 120,000 z-points for images out of the 170,000 CelebA pictures used during training.

Generated statistical vectors in the PCA coordinate system

For elliptic contour lines see the last post before the present one in this series. The next plot shows the same generated 20 z-vectors for other component-combinations among the first 20 of the most important PCA-components. The plots contain a selection of 60,000 z-points.

The outer z-points do not always indicate that we have elliptic contours in the denser core of the displayed 2-dimensional distributions. But see the last post for proofs that the inner core inside the red ellipse really displays elliptic contours. You see that all random vectors lie within the 2-σ-ellipses.

The next plot shows the generated z-vectors in the original coordinate system of the latent space. The component values were back-transformed from the PCA-system to the original coordinate system.

Generated statistical z-vectors after an inverse PCA transformation to the original coordinate system of the latent space

We get similar plots for other component pairs. And of course for other generated vectors.

Generated statistical z-vectors in the PCA coordinate system

Generated statistical z-vectors after an inverse PCA transformation to the original coordinate system of the latent space

Technically we have obviously achieved what we wanted: Our generated statistical vectors are distributed within the core of our multidimensional ellipsoid.

Note that this method fortunately works even when we use a limited number of the PCA components, only. This is due to intricate properties of a PCA transformation which guarantee that a back-transformation puts the resulting points close to the original ones even when we omit less important PCA components. I cannot discuss the math-details in this blog. You have to see scientific literature for this. An introduction is e.g. provided by https://arxiv.org/pdf/1404.1100.pdf.

For me this property of the PCA transformation was helpful when I ran into the resolution problem for a proper half-width of the Gaussians. Taking 256 components led to errors, as elliptic conditions for very narrow Gaussians were not properly defined and some of the created vectors left the allowed value ranges.

Resulting face images

Let us look at some results. First I want to remind you from where we started:

Failed trials with improper random z-vectors based on constant probability densities

A simple random generator used in the beginning was totally inapt to feed the AE’s Decoder with proper statistical z-vectors. And now – look at the following plots. They were produced for a varying number of PCA components between 50 and 120, with 100,000 statistically selected z-points within a 3 σ-level for the PCA-transformation and various factors 0.6 < fact < 0.8 applied to a half-width corresponding to a confidence level of 2.35 σ:

In some cases – for a higher number of PCA components – we even see smaller details of the face images and a reasonable transition to some kind of hairdo. Please remember that z_dim = 256 is a pretty low number for the latent space to cover the encoding of face details. And celebrities as covered by CelebA use make-up ….

In case you think the above result is not noteworthy: Please remember that we talk about a simple standard Autoencoder and not about a Variational Autoencoder, and neither about a transformer based Autoencoder. No fancy additions to cost functions or special layers. And whoever has read the very instructive book of D. Foster on “Generative Deep Learning” (1st edition, O’Reilly) may compare his images to mine. And I have used a lower resolution of the original images than D. Foster. Just to motivate people to look a bit deeper into properties of data distributions in latent spaces.

Conclusion and outlook

We have come a lot closer to our objective of using a standard minimal Autoencoder for generative purposes. On our way, we got a much deeper understanding of the vector-distribution a trained AE creates in its latent space for human face images.

The method presented in this post to create reasonable statistical z-vectors still has its limits and there is a lot of open space for improvements. Attentive readers may e.g. ask: Why did he not use confidence ellipses directly? And why not the ellipses found in the original coordinate system of the latent space? And what about micro-correlations? And are there clusters for certain properties such as hair color, sex, smiling, etc. in the multivariate z-point distribution in the AE’s latent space?

I will discuss these topics in further posts. In the meantime keep in mind that the basic point for turning a standard Autoencoder into a generative tool is to understand how it fills its latent space.

Note also that I myself have speculated in other posts of this blog that failures of using standard AEs for generative purposes may have their ultimate reason in the micro-structure of the z-point distribution. The present results render these previous ideas of mine plain wrong.

Links to previous posts of this series

Autoencoders and latent space fragmentation – IX – PCA transformation of the z-point distribution for CelebA

Autoencoders and latent space fragmentation – VIII – approximation of the latent vector distribution by a multivariate normal distribution and ellipses

Autoencoders and latent space fragmentation – VII – face images from statistical z-points within the latent space region of CelebA

Autoencoders and latent space fragmentation – VI – image creation from z-points along paths in selected coordinate planes of the latent space

Autoencoders and latent space fragmentation – V – reconstruction of human face images from simple statistical z-point-distributions?

Autoencoders and latent space fragmentation – IV – CelebA and statistical vector distributions in the surroundings of the latent space origin

Autoencoders and latent space fragmentation – III – correlations of latent vector components

Autoencoders and latent space fragmentation – II – number distributions of latent vector components

Autoencoders and latent space fragmentation – I – Encoder, Decoder, latent space

 

And before we forget it: Besides the Putler in the east there is also an extremist right-wing, semi-fascistic party in Germany at a record-high support level of 18% in the population. This is a party which wants to stop all sanctions against the Russian aggressor in the ongoing war in Ukraine. Do you see the pattern behind this? This party is presently growing bigger in its number of supporters than the social democrats who lead the government. So, there is more at stake at present in Europe than the war in Ukraine. We need to defend our democracies with all the means of democracies. And it is time to ask for more decisive legal action against a party which already is under observation of the German internal secret service.