Autoencoders and latent space fragmentation – IX – PCA transformation of the z-point distribution for CelebA

This series of posts is about standard convolutional Autoencoders [AEs]. We wish to use the creative ability of an AE’s Decoder for the creation of human face images. We trained a relatively simple example AE on the CelebA data set. For the generation of new human face images we wanted to feed the Decoder with randomly created z-vectors in the AE’s latent space. As arbitrary latent vectors did not deliver reasonable images, we had a look at the properties of the z-point distribution created in the latent space after training. And we found that z-vectors with end-points within the region populated by this distribution indeed gave us some first reasonable images.

But during the first posts we also had to learn that it requires substantial effort to produce “randomly” created z-vectors which point to the coherent and confined latent space region populated by the z-points for CelebA images. In addition we found that we obviously must fulfill complex correlation conditions between the latent vector components. In the last posts we have, therefore, investigated the structure of the latent space region populated by z-points for CelebA images a bit more closely. See:

Autoencoders and latent space fragmentation – VIII – approximation of the latent vector distribution by a multivariate normal distribution and ellipses

Autoencoders and latent space fragmentation – VII – face images from statistical z-points within the latent space region of CelebA

This gave us a somewhat surprising insight:

For our special case of human face images the density distribution of the z-points in the multidimensional latent space roughly resembled a multivariate normal distribution. The variables of this distribution were, of course, the components of the latent z-vectors (reaching from the origin of the latent space coordinate system to the z-points). The number density distributions for the values of the z-vector components could very well be approximated by Gaussian functions. In addition we found strong linear correlations between the z-vector components. This was completely consistent with diagonally oriented elliptic contour lines of the number density distribution for pairs of z-vector components in the respective coordinate planes. The angles between the ellipses’ axes and the axes of the (orthogonal) coordinate system varied.

In this post we test my thesis of a multivariate normal distribution in a different and more complicated way. If the number density distribution of the z-points for CelebA really is close to a multivariate normal distribution then we would expect that the contour lines of the 2-dimensional number density function for coordinate planes still are ellipses after a PCA transformation. Note that the coordinate planes then are planes of a new coordinate system (with orthogonal axes) moved and rotated with respect to the original coordinate system. We call the transformed coordinate system the “PCA coordinate system“. (It is clear that the vector components must be transformed consistently with respect to the new coordinate axes.)

This test is much harder because asymmetries, imperfections and deviations from a normal distribution will become clearly visible. Nevertheless we shall see that the z-point distribution decomposes roughly into uncorrelated Gaussian number density (or probability) functions for the (transformed) components of the z-vectors – as expected.

Note that number density distributions can be interpreted as probability distributions after a proper normalization. To understand the results and implications of this post some basic knowledge about multivariate normal probability density distributions in multidimensional spaces is required. Transformation properties of such distributions are important. In particular the effect of a PCA or SVD transformation, which diagonalizes the Pearson correlation matrix, on a multivariate normal distribution, should be familiar: In the moved and rotated coordinate system original linear correlations between vector components disappear. The coordinate axes of the transformed coordinate system get aligned to the eigenvectors (and main axes) of the probability density distribution. In addition elliptic contour lines prevail and the main axes of the ellipses get aligned with the axes of the PCA coordinate system.

For valuable information in previous posts see the link list at the bottom of this article.

Imperfections in the multivariate normal distribution and their consequences for PCA

In our case the latent space had 256 dimensions. The real number density functions for the values of the (256) latent vector components did not always show the full symmetry of their Gaussian fits. Deviations appeared near the center and at the flanks of the individual distributions. Still we got convincing overall ellipses in contour plots of the probability density for selected pairs of components. But there were also relatively many points which lay in regions outside the 2 to 3 sigma confidence ellipses for these 2-dimensional distributions – and the distribution of the points there seemed to violate the elliptic symmetry.

A perfect multivariate normal distribution (with only linear correlations between its variables) decomposes into (seemingly) uncorrelated normal distributions for the new variables, namely the latent vector components in the transformed coordinate system of the latent space. The coordinate transformation corresponds to a diagonalization of the Pearson correlation matrix via finding orthogonal eigenvectors. So we should not only get ellipses after the PCA transformation. The main axes of the resulting ellipses should also be aligned with the coordinate axes of the new translated and rotated PCA coordinate system. (This is a standard feature of “uncorrelated” Gaussians).

Note that the de-correlation is a pure coordinate effect which does not eliminate the dependencies of the original variables. The new coordinates and the respective variables do not have the same meaning as the old ones. We “only” change perspectives such that the mathematical description of the multivariate distribution gets simpler. However, the ratios of the axes of the new ellipses depend on the axis-ratios of the old ellipses in the original standard coordinate system of the latent space.

When we translate and rotate a coordinate system to diagonalize the Pearson matrix of an imperfect multivariate normal distribution the coordinate axes may afterward not fully align with the main axes of the distribution’s inner core – and then we might get clearly visible asymmetries in z-point projections to coordinate planes. Note that points outside the inner core have a relatively large impact on the PCA transformation and especially on the rotation of the new axes with respect to the old ones.

Reduction of outer z-point contributions

Outer z-points often correspond to CelebA images with strong variations in the background. As these z-points have a large distance from the center of the distribution they have a heavy impact on a PCA transformation to eigenvectors of the Pearson matrix. Therefore, I eliminated some outer points from the distribution. I did this by deleting points with coordinate values smaller or bigger than a symmetric threshold in the outer flanks of the 256 Gaussian distributions – e.g.:

z_j < mu_j – fact * FWHM_j   or   z_j > mu_j + fact * FWHM_j.

Here FWHM_j means the full width at half maximum and mu_j the center of the approximate Gaussian for the j-th vector component of the z-vectors. I tried multiple values of fact between 1.25 and 1.7.

One has to be careful with such an elimination process. Even though we set our limits beyond the 2 σ-level of the individual Gaussians ahead of the transformation, a step-wise consecutive elimination over all components reduces the number of surviving vectors significantly for small values of fact. The plots below correspond to a value of fact = 1.7 (with one discussed exception). This reduced the number of available vectors from the originally 170,000 used during the AE’s training to 160,000. From this distribution a random sub-selection of 80,000 z-points was taken to perform the PCA transformation. I call the resulting sample of z-points the “reduced distribution” below. The results did not change much for a sub-selection of more points. The second reduction, however, saved me some CPU time.
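Just to illustrate the logic: a minimal Numpy sketch of such a reduction step could look like the function below. The array names (z, mu, fwhm) are placeholders for my actual data structures; only the filtering and sub-selection steps matter here.

import numpy as np

def reduce_distribution(z, mu, fwhm, fact=1.7, n_select=80_000, rng=None):
    # z: (N, 256) array of latent vectors; mu, fwhm: per-component Gaussian centers and FWHM values
    # keep only vectors whose components all stay within mu_j +/- fact * FWHM_j
    mask = np.all(np.abs(z - mu) <= fact * fwhm, axis=1)
    z_red = z[mask]
    # random sub-selection to save CPU time during the PCA transformation
    rng = rng or np.random.default_rng()
    idx = rng.choice(len(z_red), size=min(n_select, len(z_red)), replace=False)
    return z_red[idx]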

PCA transformation of the z-point distribution – explained variance and cumulative importance

Below you see the explained variance ratio of the first 40 and the cumulative importance of the first 120 main components after a PCA transformation of the reduced z-point distribution. (The PCA-algorithm was based on the SVD-algorithm, as provided by the sklearn-package.)
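For readers who want to reproduce this kind of plot: with sklearn the transformation and the explained variance data can be obtained along the following lines. This is a sketch with assumed array names, not a literal copy of my code.

import numpy as np
from sklearn.decomposition import PCA

def pca_transform(z_red, var_target=0.80):
    # z_red: reduced z-point distribution, e.g. an (80000, 256) array
    pca = PCA(n_components=z_red.shape[1], svd_solver='full')   # SVD-based PCA from sklearn
    z_pca = pca.fit_transform(z_red)          # z-vector components in the PCA coordinate system
    expl = pca.explained_variance_ratio_      # explained variance ratio per PCA component
    cum = np.cumsum(expl)                     # cumulative importance
    n_needed = int(np.searchsorted(cum, var_target)) + 1
    print("PCA components needed for", var_target, "of the variance:", n_needed)
    return z_pca, expl, cum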

We see that we need around 120 PCA components to explain 80% of the variance of the original z-point data. The contribution of the first 15 components is significant, but it is NOT really dominant. Also interesting is the step-wise decline of the explained variance over the first 10 components, which does not follow a smooth curve.

I have in addition checked that the normalized Pearson correlation coefficient matrix was perfectly diagonalized. All off-diagonal values were smaller than 2.e-7 (which is consistent with error propagation) and the diagonal elements were 1.0 with an accuracy of better than eight digits after the decimal point. As it should be.
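Such a check can be done directly on the transformed data, e.g. like this (again a sketch with an assumed array name):

import numpy as np

def check_diagonalization(z_pca):
    # z_pca: PCA-transformed z-vectors (rows = vectors, columns = components)
    corr = np.corrcoef(z_pca, rowvar=False)       # normalized Pearson correlation matrix
    off_diag = corr - np.diag(np.diag(corr))
    print("max |off-diagonal element| :", np.abs(off_diag).max())
    print("max |diagonal element - 1| :", np.abs(np.diag(corr) - 1.0).max())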

Gaussians per PCA-vector component?

We expect Gaussian number density functions for each of the z-vector components after the PCA coordinate transformation. The following line plots show the number density distribution (colored line) and a related Gaussian fit (dashed line) for selected vector components after the PCA transformation. The first plot gives an overview over 120 PCA components:

We see that all distributions look similar to Gaussians. In addition the centers of the distributions for all components are very close to or coincide with the origin of the PCA coordinate system. This is something we would expect for an original multivariate normal distribution before the PCA transformation to a new moved and rotated coordinate system: An affine transformation between coordinate systems with orthogonal axes maps Gaussians to Gaussians.

Components 0 and 1:

Components 2 and 3:

Components 4 and 5:

Components 6 and 7:

Components 8 and 9:

We see from these plots that component 3 shows values at the distribution’s flanks well above the values of its Gaussian fit. This may lead to deformed ellipses. In addition component 7 shows a slight asymmetry – probably triggering deviations both at the center and in the outer regions. Similar effects can be seen for other, higher components – but not as pronounced. The chosen sampling interval (0.5) in the new coordinates did smear out small asymmetric wiggles in all distributions.

Ellipses?

Calculating approximate contour lines is CPU intensive. On my presently available laptop I had to focus on some selected examples for pairs of components of z-vectors in the transformed coordinate system. Experiments showed that the most critical components with respect to asymmetries were 2, 3, 7 and 8.

Let us first look at combinations of the first 9 PCA components with other components in the range between the 10th and the 120th PCA component. Just scroll down to see more images:

The fatter lines represent confidence ellipses derived both from correlation data, i.e. elements of the normalized correlation coefficient matrix, and from properties of the Gaussian distributions. See the previous post for details on the method of deriving confidence ellipses from the data of a probability distribution; a condensed sketch is given below.
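For convenience, here is a condensed Matplotlib sketch of that method, essentially following the approach published by Carsten Schelp (see the link list of the previous article further below). x and y denote the value arrays of two selected (PCA-) components, ax a Matplotlib axis object:

import numpy as np
from matplotlib.patches import Ellipse
from matplotlib import transforms

def confidence_ellipse(x, y, ax, n_std=3.0, **kwargs):
    # x, y: value arrays of two selected vector components
    cov = np.cov(x, y)
    pearson = cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1])
    # unit ellipse whose half-axes follow from the Pearson correlation coefficient
    ell = Ellipse((0, 0), width=2 * np.sqrt(1 + pearson), height=2 * np.sqrt(1 - pearson),
                  facecolor='none', **kwargs)
    # rotate by 45 degrees, stretch by n_std standard deviations, move to the mean values
    transf = (transforms.Affine2D()
              .rotate_deg(45)
              .scale(np.sqrt(cov[0, 0]) * n_std, np.sqrt(cov[1, 1]) * n_std)
              .translate(np.mean(x), np.mean(y)))
    ell.set_transform(transf + ax.transData)
    return ax.add_patch(ell)

Calling the function e.g. with n_std = 2.0 or 3.0 adds the corresponding confidence ellipse to a plot.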

Ellipses for density distributions in coordinate planes for pairs of the most important PCA components

Before you think that everything is just perfect we should focus on the most important 10 PCA components and their mutual pair-wise density distributions. We expect trouble especially for combinations with the components 2, 3 and 7.

Component 0 vs other components < 10

We already see here that asymmetries in the density distributions, which appear relatively small in the Gaussian-like curves per component, leave their imprint on the elliptic contours. The contour lines are not always centered with respect to the overall confidence ellipses.

Component 1 vs other components < 10

Component 2 vs other components < 10

The plot for components 2 and 3 was based on fact = 1.35 instead of fact = 1.7. See a section below for more information.

Component 3 vs other components < 10

Component 4 vs other components < 10

Component 5 vs other components < 10

Component 6 vs other components < 10

Component 7 vs other components < 10

Component 8 vs other components < 10

Component 9 vs other components < 10

Ellipses for other selected component pairs

Comments

The z-point distribution is rather close to a multivariate normal distribution, but it is not perfect. The PCA transformation revealed clearly that the center of the overall z-point distribution does not completely coincide with the centers of the individual density distributions for the z-vector components. And not all individual curves are fully symmetrical. Some plots show the expected ellipses, but not fully centered. We also see signs of non-linear correlations as not all ellipses show a complete alignment with the axes of the PCA coordinate system.

The impact of outer z-points becomes obvious when we compare plots for the component pair (2, 3) for different values of fact. The plots are for fact = 1.35, 1.45, 1.55, 1.7 and fact = 1.8 (in this order from left to right and downward). (Regarding z-point elimination the fact-values correspond to 122,000, 138,000, 149,000, 159,000 and 164,000 remaining z-points.)

The flips in the left-right orientation between some of the plots can be ignored. They depend on random aspects of the transformations and plotting routines used.

We see that z-points (for some strange images) which have a large distance from the center of the distribution have a major impact on the central density distribution. The outer 10% of the points influence the orientation of the central ellipses via their coordinates – although these points are not part of the inner core of the distribution.

BUT: Overall we find the effect we wanted to see. The elliptic contours are clearly visible and most of them are aligned with the axes of the PCA coordinate system and the main axes of the confidence ellipses for the z-point distribution in the PCA coordinate system. The inner core is aligned with the axes of the coordinate system even for the critical PCA components 2 and 3, when we omit between 12% and 25% of the points (outside a 3 σ-level).

This means: The PCA transformation confirms that an inner core of the z-point distribution is well described by a multivariate normal distribution – with linear correlations between the number density distributions for various components.

This is an interesting finding by itself.

Conclusion

The last and the present post of this series have shown that a convolutional AE maps human face images of the CelebA data set to a multivariate normal distribution in its multidimensional latent space. Although this is interesting by itself we must not forget our ultimate goal – namely to generate random z-vectors which should deliver reasonable human face images via the AE’s Decoder. But the results of our analysis provide solid criteria to generate such vectors fulfilling complex correlation conditions.

We can now define statistical generation methods which restrict the components of our aspired random z-vectors such that the resulting z-points lie within regions surrounded by confidence ellipses of the multivariate normal distribution. One such method is the topic of the next post.

Autoencoders and latent space fragmentation – X – a method to create suitable latent vectors for the generation of human face images

Links to other posts of this series

Autoencoders and latent space fragmentation – VIII – approximation of the latent vector distribution by a multivariate normal distribution and ellipses

Autoencoders and latent space fragmentation – VII – face images from statistical z-points within the latent space region of CelebA

Autoencoders and latent space fragmentation – VI – image creation from z-points along paths in selected coordinate planes of the latent space

Autoencoders and latent space fragmentation – V – reconstruction of human face images from simple statistical z-point-distributions?

Autoencoders and latent space fragmentation – IV – CelebA and statistical vector distributions in the surroundings of the latent space origin

Autoencoders and latent space fragmentation – III – correlations of latent vector components

Autoencoders and latent space fragmentation – II – number distributions of latent vector components

Autoencoders and latent space fragmentation – I – Encoder, Decoder, latent space

 

Autoencoders and latent space fragmentation – VIII – approximation of the latent vector distribution by a multivariate normal distribution and ellipses

This post series is about creative abilities of convolutional Autoencoders [AE] which have been trained on a set of human face images. The objectives of this series and its numerical experiments are:

  • We want to create images with human faces from statistical z-vectors and related z-points in the AE’s latent space [z-space or LS]. Image creation will be done with the help of the AE’s Decoder after a training on the CelebA dataset.
  • We work with a standard Autoencoder, only. I.e., we do NOT add any artificial layers and cost terms to the Autoencoder’s layer structure (as it is done e.g. in Variational Autoencoders).
  • We analyze the position, shape and internal structure of the multidimensional z-vector distribution created by the AE’s Encoder after training. We assume that generated statistical z-vectors must point to respective regions of the latent space to guarantee images with reasonable content.
  • We raise the question whether simple statistical generator algorithms are sufficient to cover these regions with statistical z-vectors.

Our numerical experiments gave us some indications that such an endeavor is indeed feasible. In addition the third objective may give us some insight into the rules a trained AE follows when it encodes information about human faces into vectors of its latent space.

We have already studied the “natural” z-vector distribution created by a convolutional Autoencoder for CelebA images after a thorough training. The related z-point distribution fortunately filled just one confined and coherent off-center region of the AE’s latent space. Our experiments have furthermore shown that we must indeed restrict the statistical z-vector creation such that the vectors point to this particular region. Otherwise we will not get reasonable images. For details see the previous posts.

Autoencoders and latent space fragmentation – VII – face images from statistical z-points within the latent space region of CelebA
Autoencoders and latent space fragmentation – VI – image creation from z-points along paths in selected coordinate planes of the latent space
Autoencoders and latent space fragmentation – V – reconstruction of human face images from simple statistical z-point-distributions?
Autoencoders and latent space fragmentation – IV – CelebA and statistical vector distributions in the surroundings of the latent space origin
Autoencoders and latent space fragmentation – III – correlations of latent vector components
Autoencoders and latent space fragmentation – II – number distributions of latent vector components
Autoencoders and latent space fragmentation – I – Encoder, Decoder, latent space

The frustrating point so far was that simple methods for creating statistical vectors fail to put the end-points of the z-vectors into the relevant latent space region. In particular methods based on constant probability distributions within a common value interval for all z-vector components are doomed to miss the interesting region due to intricate mathematical reasons.

Afterward we tried to restrict the component values of test vectors to intervals defined by the shape of the number distribution for the values of each component of the CelebA related z-vectors. Such a distribution is nothing else than a one-dimensional probability density function for our special set of encoded CelebA samples: The function describes the probability that a component of a z-vector for human face images gets a value within a certain small value range. The probability distributions for all z-vector components were bell shaped and showed clear transitions to flat wings with very low values. See the plots below. This allowed us to define a value range

d_j_l   <   x_j   <   d_j_h

for each vector component x_j.
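A sketch of this kind of naive, per-component restricted vector generation could look as follows. d_low and d_high are hypothetical arrays holding the boundary values d_j_l and d_j_h per component:

import numpy as np

def naive_z_vectors(n, d_low, d_high, rng=None):
    # draw each component independently and uniformly within its own interval [d_j_l, d_j_h];
    # note that this ignores any correlation conditions between the components
    rng = rng or np.random.default_rng()
    return rng.uniform(d_low, d_high, size=(n, len(d_low)))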

But keeping statistical values per component within the identified respective interval was not a sufficient restriction. We saw this clearly in the last post from significant irregular fluctuations in the reconstructed images. Obviously the components of statistically generated z-vectors must in addition fulfill correlation conditions.

The questions which I want to answer in this post are:

  • Can we approximate the 1-dimensional probability density functions for the z-vector components by some simple and common mathematical function?
  • What kind of correlations do we find between the components of the z-vectors encoding the information of human face images?
  • Can we derive some mathematical description of the multivariate z-vector distribution created by convolutional AEs for human face images in the AE’s multidimensional latent space?

Correlations are to be expected …

Please note: We deal with a multidimensional problem. A single latent vector encodes information about a human face image via all of its component values and by relations between these values. Regarding the purpose and the task an AE has to fulfill, it would be naive to assume that the components of our multi-dimensional z-vectors were independently organized. A z-vector encodes information for a convolutional Decoder to combine patterns detected by the Encoder and represented in neural (feature) maps of the networks to create an image. This is a subtle business. Just think about what you do when you draw a sketch of a human face. There are a lot of rules you follow.

When you think about the properties of basic feature patterns in a human face you would certainly assume that the pixel data of a corresponding image show strong correlations. This is among other things due to obvious symmetries – not excluding fluctuations of basic parameters describing human face features. But a nose tends to be at a position below the eyes and at mid-distance between the eyes. In addition, fluctuations of face features would on average respect certain limits given by natural proportions of a face. It would therefore be unreasonable to assume that the input for a Decoder to create a superposition of elementary patterns consists of un-correlated data. Instead the patterns in the original data should not only lead to well adjusted weights in the convolutional networks’ feature maps, but also to well regulated structural elements in the data distribution in the target space of the information encoding, namely in the latent space.

If the relations of the vector components were of a complex, highly non-linear kind and involved many dimensions at the same time we might be lost. But the results we have gained so far indicate a proper common structure of at least the density functions for the individual components. This gives us some hope that the multidimensional problem somehow involves well defined 1-dimensional constituents. Whether this is a sign that the multidimensional structure of the z-vector distribution can be decomposed into low-dimensional relations remains to be seen.

Observations regarding the z-vector distribution created by convolutional Autoencoders for human face images

Coordinate values of the z-points are identical to z-vector component values when we attach each vector to the origin of the latent space coordinate system. The z-vector distribution thus directly corresponds to a z-point density distribution in the orthogonal coordinate system of the AE’s multi-dimensional LS. We have already made three interesting observations regarding these distributions:

  • The individual probability density function for a selected component of the latent vectors has a bell-shaped form. One is, therefore, tempted to think of a Gaussian function. This would indicate a possible normal distribution for the coordinate values of the z-points along each of the selected coordinate axes.
    Note: This does not exclude that the probability distributions for the components are correlated in some complex way.
  • When we plotted the projection of the z-point distribution onto 2-dimensional coordinate planes (for selected pairs of coordinate axes) then almost all of the resulting 2-dimensional density distributions seemed to have a defined core with an ellipsoidal form of its boundary.
  • For certain component- or axis-pairs the main axes of the apparent ellipses of the pair-wise density functions appeared rotated against the coordinate axes. The elongated, regular and more or less symmetric forms showed a diagonal orientation (with different angles). This alone signals a strong correlation between the two related vector components. Indeed we found high values for certain elements of the matrix of normalized Pearson correlation coefficients for the multi-dimensional distribution of z-vector component values.

These observations are not unrelated; they indicate a clear pattern of dependencies and correlations of the distributions for the variables in place. Regarding the data basis we have to keep five things in mind:

  • We treat the z-point distribution for CelebA images as a multi-dimensional probability density distribution. During the analysis we look in particular at 2-dimensional projections of this distribution onto planes spanned by a selected pair of orthogonal axes of the LS coordinate system. We also consider the one-dimensional value distributions for z-vector components. In this sense we regard the z-vector components as logically separate variables.
  • The data used are numbers of z-points counted in finite 1d-intervals, 2d-rectangles or multidimensional cuboids. We fit idealized functions to the respective discrete bar plots. Even if there is a good 1d-fit, fluctuations may become especially visible in multidimensional plots for correlated data. A related probability density requires a normalization. We drop the resulting constant factors in the qualitative discussions below.
  • Statistical (un-)correlation of statistical variable distributions must NOT be confused with underlying variable (in-)dependency. Linear correlations can be reduced to zero by coordinate transformations without eliminating the original variable dependencies.
  • Pearson correlation coefficients are sensitive to linear elements in the relations of logically separate variable distributions. They cannot fully capture non-linear distribution relations or hidden variable dependencies.
  • A transformation to a local coordinate system whose axes are aligned to the so called main axes of the multidimensional distributions does not remove the original data relations – but there may exist a coordinate system in which the distribution data can be described in a simple, factorized form corresponding to a composition of seemingly un-correlated data distributions.

Anyway – by discussing density distributions we work on overall and large scale average relations between statistical value distributions for our variables, namely the z-vector components. We do not cover local micro-relations that may be in place in addition.

The relation of ellipses with Gaussian probability densities

Probability density functions for two logically separate, but maybe not un-correlated variables have to be multiplied. In our case this reflects the following point: First we determine the probability that the value of component x_i lies in a certain (infinitesimal) interval and then we determine the probability that (for the given value of x_i) the component x_j falls into another value range. The distributions for a specific variable can include variable relations and thus the probability density g(x_j) can include a dependency g(x_j(x_i)).

In the case of uncorrelated normal distributions per coordinate we can just multiply the individual Gaussians g(x_i) * g(x_j). Due to the quadratic terms in the exponent of the Gaussians we then get a sum of quadratic expressions in the common exponent, having the form fac1 * (x_i-mu_i)**2 + fac2 * (x_j-mu_j)**2.

By setting this expression to a constant value we get contour lines of the probability density distribution for the (x_i, x_j)-distribution. Quadratic sums correspond to the definition of an ellipse having main axes which are aligned with the x_i- and x_j-axes of the coordinate system. Thus the contour lines of a 2-dimensional distribution composed of un-correlated Gaussians are ellipses having an orientation aligned with the coordinate axes.
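In LaTeX notation the statements of the last two paragraphs can be summarized as follows (with mu_i, mu_j denoting the mean values, sigma_i, sigma_j the standard deviations of the two Gaussians and c the constant chosen for the contour):

g(x_i)\, g(x_j) \;\propto\; \exp\left[\, -\frac{(x_i-\mu_i)^2}{2\sigma_i^2} \;-\; \frac{(x_j-\mu_j)^2}{2\sigma_j^2} \,\right],
\qquad
\frac{(x_i-\mu_i)^2}{2\sigma_i^2} + \frac{(x_j-\mu_j)^2}{2\sigma_j^2} \;=\; c
\;\Longleftrightarrow\;
\frac{(x_i-\mu_i)^2}{a^2} + \frac{(x_j-\mu_j)^2}{b^2} \;=\; 1,
\quad a = \sigma_i\sqrt{2c}, \;\; b = \sigma_j\sqrt{2c}.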

This was for un-correlated density-distributions of two vector components. Mathematically a linear correlation between a pair of Gaussians-distributions corresponds to an affine transformation of the contour-ellipses. The transformation can be expressed by a defined sequence of matrix operations describing a translation, rotations (in a defined order) and a dilation.

This means: The contour lines for a 2-dimensional probability density composed of linearly correlated Gaussians are still ellipses. But these ellipses will appear to be shifted, rotated and stretched along the main axes in comparison with their originally un-correlated Gaussian counterparts. The angle of rotation depends on details of the correlation function and the original standard deviations. The Pearson correlation matrix for linearly correlated distributions is a positive-definite one and, of course, shows off-diagonal elements different from zero. This result can be extended to multivariate normal distributions in spaces with many dimensions and related affine transformations of the coordinate system.

A multivariate normal distribution with linear correlations between the Gaussians results in elliptic contour lines for pair-wise density distributions in the respective 2D-coordinate planes of an orthogonal coordinate system. When we define the contours via multiples of the standard deviations of the underlying Gaussian functions we arrive at so called confidence ellipses.

A really nice mathematical aspect is that the basic parameters of the confidence ellipses can be derived from the normalized correlation coefficients of the Pearson matrix of the multivariate probability distribution. I will come back to this point in forthcoming posts in more detail. For now we just need to know that a multidimensional probability density comes along with confidence ellipses which can be calculated with the help of Pearson correlation coefficients.

Before we go on a word of caution: For a general multi-variate distribution it is not at all clear that it should decompose into a factorized form. However, for a multivariate normal distribution with un-correlated or only linearly correlated components this is by definition different. In this case a transformation to a coordinate system can be found which leads to a complete decomposition into a product of (seemingly) un-correlated Gaussians per component. The latter point lies at the center of PCA and SVD algorithms, which diagonalize the Pearson correlation matrix.

Do we really have Gaussian probability distributions for the individual z-vector-components?

After this short tour into the world of (multi-variate) normal distributions, Gaussian functions and related ellipses we are a bit better equipped to understand the density distributions in the latent space of our Autoencoders for human face images.

Let me remind you about the shapes of the number distributions for our concrete z-vector components resulting for CelebA face images. The first plot shows the number densities on sampling intervals of width 0.25 for selected vector components resulting for case I of our experiments. The second plot shows the number densities for the values of selected components of case II.

Ok, these curves do resemble Gaussians and some fluctuations are normal. But can we substantiate the Gaussian character of the curves a bit better?

Well, for case II I have drawn the best fits by Gaussian functions with the help of SciPy’s optimize.curve_fit() – first for 3 and then for another 4 selected components of the latent vectors and their respective number distribution curves. The dashed lines show the approximations by Gaussian functions:

The selected components are part of the list of around 20 dominant component distributions – due to their relatively large standard deviations. But the Gaussian form is consistently found for all components (with some small deviations regarding the symmetry of the curves).
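The fits themselves require only a few lines of code. A sketch – with a hypothetical array comp_vals holding the values of one latent vector component and an assumed value range for the sampling:

import numpy as np
from scipy.optimize import curve_fit

def gauss(x, amp, mu, sigma):
    # simple Gaussian model for the number distribution of one component
    return amp * np.exp(-0.5 * ((x - mu) / sigma)**2)

def fit_component(comp_vals, v_min=-12.0, v_max=12.0, width=0.25):
    # comp_vals: 1D array with the values of one z-vector component
    counts, edges = np.histogram(comp_vals, bins=np.arange(v_min, v_max + width, width))
    centers = 0.5 * (edges[:-1] + edges[1:])
    p0 = [counts.max(), centers[np.argmax(counts)], 1.0]      # rough start values
    (amp, mu, sigma), _ = curve_fit(gauss, centers, counts, p0=p0)
    return amp, mu, sigma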

So all in all it looks as if our convolutional AE has indeed created a multivariate normal z-point distribution in the latent space. As said: This does not exclude correlations …

Pairwise linear correlations of the (normal) probability distributions for the latent vector components?

Now we are a bit bold – and assume the best case for us: Could the approximate Gaussian distributions for the component values be pair-wise and linearly correlated? What would be a clear indication of a pair-wise linear correlation of our component distributions?

Well, we should find an elliptic form of contour lines in the 2-dimensional distribution for the component pair in the respective coordinate plane of the basic orthogonal LS coordinate system. This imposes quite strong symmetry conditions on the contour lines. The ellipses can be shifted and rotated – but they should remain being ellipses. If non-linear contributions to the correlation had a significant impact this would not be the case.

Practically it is not trivial to prove that we have approximately rotated ellipses in 2 dimensions. Scatter plots alone do not help: Ellipses fit a lot of plotted distributions of discrete data points quite well. We really need to count number densities to get reliable contour lines. The following plots show such contour lines based on number sampling in rectangles and local smoothing operations with the help of scipy.stats.gaussian_kde().
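The smoothing and contour calculation behind these plots can be sketched as follows (x and y denote the value arrays of the two selected components; grid size and number of contour levels are arbitrary choices):

import numpy as np
from scipy.stats import gaussian_kde

def density_contours(x, y, ax, n_grid=100, n_levels=8):
    # kernel density estimate of the 2-dimensional number density distribution
    kde = gaussian_kde(np.vstack([x, y]))
    xg = np.linspace(x.min(), x.max(), n_grid)
    yg = np.linspace(y.min(), y.max(), n_grid)
    X, Y = np.meshgrid(xg, yg)
    Z = kde(np.vstack([X.ravel(), Y.ravel()])).reshape(X.shape)
    return ax.contour(X, Y, Z, levels=n_levels)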

The fat red and dark orange lines show corresponding confidence ellipses derived from the original CelebA distribution. See below for some remarks on confidence ellipses.

The contours are basically of elliptic shape, although they do not show the complete symmetry expected for purely and linearly correlated Gaussian distributions. But overall the confidence ellipses fit quite well into the general form and orientation of the distributions. We also see that for higher σ-levels the coincidence with nearby contours is quite good. The wiggles in the contours change a bit with the selection of z-vectors.

We conclude that our basic impression regarding an elliptic shape of the z-point distributions is basically consistent with only linearly correlated Gaussian probability density distributions for the component values of the latent vectors.

Approximation of the core of the multivariate z-point distribution by confidence ellipses for component pairs

Above I referred to the boundary of a core of the probability density for two selected vector components. But how would we define the “boundary” of a continuous distribution in the coordinate planes? Answer: As we like – but based on the decline of the approximate Gaussian curves.

We can e.g. pick two times the half-width in each direction or we can use contours defined by confidence levels.
For σ-levels between 2 and 3 we already saw that the contour lines could well be fitted by confidence ellipses. A 3 σ-level ellipse covers around 97% of all data points or more. A 2 σ-level ellipse encircles between 70% and 90% of all data points, depending on the eccentricity of the ellipse. Note that the numbers are smaller for ellipses than for rectangles, i.e. the standard 68-95-99.7 rule does not apply.

The plots below give you an impression of how well ellipses for a chosen confidence level approximate the core of the CelebA distribution in selected 2D coordinate planes of the latent space:

Each of the sub-plots was based on 10,000 statistically selected vectors of the 170,000 available in my test runs. This is a relatively low number. Therefore, for a certain diameter of the points in the scatter plot only the inner core appears to be densely populated. The next plot shows the results for a 3 σ-level of the ellipse – but this time for 50,000 vectors. With more vectors we could visually fill the outer regions of the core.

The orange points mark the center of the multidimensional distribution derived from the one-dimensional distribution curves for the components. We see that it does not always appear to be optimally centered. There are multiple reasons: Our functions are not as fully symmetric as ideal Gaussians. And equally important: The accuracy of the position depends on the sampling resolution, which was coarse. Outliers of the distribution do have an impact.

And how would we explain the appearance of Gaussians and ellipses?

This all looks quite good, despite some notable deviations regarding symmetry and maxima. Gaussians fit at least most of the important probability density curves very well, though not by 100%. The appearance of an elliptic shape of the inner core of the distribution and the appearance of overall elliptic contour curves can be explained by linear correlations of the Gaussian distributions for the components.

The appearance of normal distributions per component and basically linear correlations is something that really should be explained. I mean, dwell a bit on what we have found:

A convolutional Autoencoder network with more than 10 million adjustable parameters encoded information about human face images in the form of a roughly multivariate normal distribution of z-points in its latent space – with basically linear correlations between the Gaussian curves describing the probability densities functions for the component values of the z-vectors.

I find this astonishing and not at all self-evident. It is one of the most simple solutions for a multidimensional situation one can imagine. The following questions automatically came to my mind:

Does such a result only appear for training images of defined objects with some Gaussian variation in their features? Are the normal distributions a reflection of variations of relevant features in the original data?
Is this a typical result for (convolutional) AEs? How does it depend on the dimensionality of the latent space? Does it automatically come with a large number of z-space dimensions? Is it an efficient way to encode feature differences in the latent space, which (convolutional) AEs in general tend to use due to their structure?

Do I personally have a convincing explanation? No. Especially not, as the data shown above stem from convolutional neural networks [CNNs] without any batch-normalization layers.

A first idea would be that the dominant features of a human face themselves show variations described by Gaussian normal distributions already in the original data and that convolutional filtering does not destroy such distributions during optimization. A problem with this idea lies in the (non-)linear activation functions used at the nodes of the neural maps. Though ReLU, Leaky ReLU and SELU do contain linear segments.

The other problem is the linear form of the correlations. This is a rather simple kind of correlation. But why should an AE choose this simple form for its mapping of image information to latent space vectors during training?

How to generate statistical vectors for the creation of human face images?

The positive message which comes with the above results is that our problem of how to create proper statistical z-vectors decomposes into a sequence of two-dimensional problems. We can use the data of the ellipses appearing in the density-distributions for pairs of vector components to confine the components of statistically generated z-vectors to the relevant region in the latent space. All ellipses together restrict the component values in a well defined form. In the next post I will shortly outline some methods of how we can use the information contained in the ellipses with available algorithms.

Conclusion

In this post we have seen that for the case of a convolutional Autoencoder trained on CelebA human face images the latent vector distributions showed some remarkable properties:

  • The probability density functions for all component values can roughly be approximated by Gaussian functions.
  • The components appear to be pairwise linearly correlated – at least to first order analysis. This automatically implies elliptic contour curves for the pairwise number density functions of coordinate values.
  • Such contour curves were indeed found with first order accuracy. The core of the probability density for the z-points in the latent space could therefore be approximated by confidence ellipses for a σ-level above σ = 2.5.
  • The elliptic conditions correspond to a multivariate normal distribution with linear correlations of the variables.

Before we get too enthusiastic about these findings we should be careful and await a further test. All statements refer to first order approximations. A real multivariate normal distribution would decompose into un-correlated Gaussians and 2D-ellipses of probability densities of component pairs after a PCA transformation.

In the next post

Autoencoders and latent space fragmentation – IX – PCA transformation of the z-point distribution for CelebA

I shall present the results of a PCA analysis. In later posts I will introduce a related method to restrict the components of statistical vectors to the relevant region in the latent space of our Autoencoder.

Links and literature

On first sight my short description of the relation between multivariate Gaussian normal distributions and ellipses as the contour lines of the projected density distributions on coordinate planes may appear plausible. But in the general multi-dimensional case the question of linear correlations requires some more math than indicated. For details I just refer to some articles on the Internet – but any good book on multivariate analysis will give you the relevant information:
https://de.wikipedia.org/wiki/Multivariate_Normalverteilung
https://de.wikipedia.org/wiki/Mehrdimensionale_Normalverteilung
http://www.mi.uni-koeln.de/~jeisenbe/Vortrag2.pdf
https://methodenlehre.uni-mainz.de/files/2019/06/Multivariate-Distanz-Normalverteilung-MDC-Bayes.pdf
https://en.wikipedia.org/wiki/Multivariate_normal_distribution
https://en.wikipedia.org/wiki/Confidence_region
https://users.cs.utah.edu/~tch/CS6640F2020/resources/How to draw a covariance error ellipse.pdf
https://biotoolbox.binghamton.edu/Multivariate Methods/Multivariate Tools and Background/pdf files/MTB%20070.pdf

Regarding the intimate relation of the ellipses’ main axes to normalized Pearson correlation coefficients I also refer to
https://carstenschelp.github.io/2018/09/14/Plot_Confidence_Ellipse_001.html
I am very grateful that the author Carsten Schelp saved me a lot of time when trying to find a way to program a solution for confidence ellipses. Thank you, Mr. Schelp for the great work.

 

Google Colab, RAM, VRAM and GPU usage limits – I – no clear conditions over multiple sessions

I am a retired physicist with a hobby: Machine Learning [ML]. I travel sometimes. I would like to work with my ML programs even when I only have a laptop available, with inadequate hardware. One of my ex-colleagues recommended Google Colab as a solution for my problem. Well, I am no friend of the tech giants and for all they offer as “free” Cloud services you actually pay a lot by giving them your personal data in the first place. My general experience is also that you sooner or later have to pay for resources a serious project requires. I.e. when you want and need more than just a playground.

Nevertheless, I gave Colab a try some days ago. My first impression of the alternative “Paperspace” was unfortunately not a good one. “No free GPU resources” is not a good advertisement for a first time visitor. When I afterward tried Google’s Colab I directly got a Virtual Machine [VM] providing a Jupyter environment and an optional connection to a GPU with a reasonable amount of VRAM. So, is everything nice with Google Colab? My answer is: Not really.

Google’s free Colab VMs have hard limits regarding RAM and VRAM. In addition there are unclear limits regarding CPU/GPU usage over multiple sessions in an unknown period of days. In this post series I first discuss some of these limits. In a second post I describe a few general measures on the coding side of ML projects which may help to make your ML project compatible with RAM and VRAM limitations.

The 12.7 GB RAM limit of free Colab VMs

Even for mid-size datasets you soon feel the 12.7 GB limit on RAM as a serious obstacle. Some RAM (around 0.9 to 1.4 GB) is already consumed by the VM for general purposes. So, we are left with around 11 GB. My opinion: This is not enough for mid-size projects with either big amounts of text or hundreds of thousands of images – or both.

When I read about Colab I found articles on the Internet saying that 25 GB RAM was freely available. The trick was to drive the VM into a crash by allocating too much RAM. Afterward Google would generously offer you more RAM. Really? Nope! This has no longer worked since July 2020. Read through the related discussions linked at the end of this post.

Google instead wants you to pay for Colab Pro. But as reports on the Internet will tell you: You still get only 25 GB RAM with Pro. So as soon as you want to do some serious work with Colab you are supposed to pay – quite a lot for Colab Pro+. This is what many professionals will do – as it often takes more time to rework the code than to just pay a limited amount per month. I shall go a different way …

Why is a high RAM consumption not always a bad thing?

I admit: When I work with ML experiments on my private PCs, RAM seldom is a resource I think about much. I have enough RAM (128 GB) on one of my Linux machines for most of the things I am interested in. So, when I started with Colab I naively copied and ran cells from one of my existing Jupyter notebooks without much consideration. And pretty soon I crashed the VMs due to an exhaustion of RAM.

Well, normally we do not use RAM to a maximum for fun or to irritate Google. The basic idea of having the objects of a ML dataset in a Numpy array or tensor in RAM is a fast transfer of batch chunks to and from the GPU – you do not want to have a disk involved when you do the real number-crunching. Especially not for training runs of a neural network. But the limits of Colab VMs make a different and more time consuming strategy obligatory. I discuss elements of such a strategy in the next post.

15 GB of GPU VRAM

The GPU offer is OK from my perspective. The GPU is not the fastest available. However, 15 GB is something you can do a lot with. Still there are data sets, for which you may have to implement a batch based data-flow to the GPU via a Keras/TF2 generator. I discuss also this approach in more detail in the next post.

Sometimes: No access to a GPU or TPU

Whilst preparing this article I was “punished” by Google for my Colab usage during the last 3 days. My test notebook was not allowed to connect to a GPU any more – instead I was asked to pay for Colab Pro. Actually, this happened after some successful measures to keep RAM and VRAM consumption rather low during some “longer” test runs the day before. Two hours later – and after having worked on the VM’s CPU only – I got access to a GPU again. By what criterion? Well, you have neither control over nor a clear overview of usage limits and how close you have come to such a limit (see below). And uncontrollable phases during which Google may deny you access to a GPU or TPU are not conditions you want to see in a serious project.

No clear resource consumption status over multiple sessions and no overview over general limitations

Colab provides an overview over RAM, GPU VRAM and disk space consumption during a running session. That’s it.

On a web page about Colab resource limitations you find the following statement (05/04/2023): “Colab is able to provide resources free of charge in part by having dynamic usage limits that sometimes fluctuate, and by not providing guaranteed or unlimited resources. This means that overall usage limits as well as idle timeout periods, maximum VM lifetime, GPU types available, and other factors vary over time. Colab does not publish these limits, in part because they can (and sometimes do) vary quickly. You can relax Colab’s usage limits by purchasing one of our paid plans here. These plans have similar dynamics in that resource availability may change over time.”

In short: Colab users get no complete information and have no control over resource access – independent of whether they pay or not. Not good. And there are no price plans for students or elderly people. We understand: In the mindset of Google’s management serious ML is something for the rich.

The positive side of RAM limitations

Well, I am retired and have no time pressure in ML projects. For me the positive side of limited resources is that you really have to care about splitting project processes into cycles for scalable batches of objects. In addition one must take care of Python’s garbage collection to free as much RAM as possible after each cycle. This is a good side-effect of Colab: it teaches you to cope with resource limits you may meet on other systems in the future.
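To indicate what this means in practice: a minimal sketch of such a cleanup step after a cycle could look like the lines below. The array name is just a placeholder – and note that clear_session() also discards the current Keras model state, so the model must be rebuilt or reloaded afterward if it is still needed.

import gc
import tensorflow as tf

# 'batch_imgs' is a placeholder for a large Numpy array of the finished cycle
batch_imgs = None                      # drop the reference to the large array
tf.keras.backend.clear_session()       # reset the Keras/TF2 backend state
gc.collect()                           # trigger Python's garbage collection explicitly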

My test case

As you see from other posts in this blog I presently work with (Variational) Autoencoders and study data distributions in latent spaces. One of my favorite datasets is CelebA. When I load all of my 170,000 prepared training images into a Numpy array on my Linux PC, more than 20 GB RAM are used. (And I already use centered and cropped images of 96×96 pixel resolution.) This will not work on Colab. Instead we have to work with much smaller batches of images and process them consecutively. From my image arrays I normally take slices and provide them to my GPU for training or prediction. The tool is a generator. This should work on Colab, too.
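A batch generator of the kind meant here can e.g. be based on the Keras Sequence class. The following sketch assumes an image array imgs (possibly a memory-mapped Numpy array) and feeds normalized batches as both input and target of the Autoencoder:

import numpy as np
from tensorflow.keras.utils import Sequence

class ImgBatchSequence(Sequence):
    def __init__(self, imgs, batch_size=128):
        self.imgs = imgs                   # e.g. an array of shape (N, 96, 96, 3)
        self.batch_size = batch_size
    def __len__(self):
        return int(np.ceil(len(self.imgs) / self.batch_size))
    def __getitem__(self, idx):
        batch = self.imgs[idx * self.batch_size : (idx + 1) * self.batch_size]
        x = batch.astype("float32") / 255.0    # normalize on the fly to keep RAM usage low
        return x, x                            # Autoencoder: input = target

Such a sequence object can be passed directly to the model’s fit() or predict() methods.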

One of my network models for experiments with CelebA is a standard Convolutional Autoencoder (with additional Batch Normalization layers). The model was set up with the help of Keras for Tensorflow 2.

First steps with Colab – and some hints

The first thing to learn with Colab is that you can attach your Google MyDrive (coming with a Google account) to the VM environment where you run your Jupyter notebooks. But you should not interactively work with data files and data sets on the mounted disk (at /content/MyDrive on the VM). The mount is done over a network and not via a local system bus. Transfers to MyDrive are pretty slow – actually slower than what I have experienced with sshfs-based mounts on other hosted servers. So: Copy individual files to and from MyDrive, but afterward work with such files in some directory on the VM (e.g. under /home).

This means: The first thing you have to take care of in a Colab project is the coding of a preparation process which copies your ML datasets, your own modules for details of your (Keras) based ML model architecture, ML model weights and maybe latent space data from your MyDrive to the VM.
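In a Jupyter cell on the Colab VM this preparation boils down to something like the following. The mount point corresponds to the convention used above; all file and directory names are placeholders for your own data (note that the Drive contents appear in a MyDrive sub-folder of the mount point):

from google.colab import drive
drive.mount('/content/MyDrive')

# copy the packed dataset once from MyDrive to the fast local VM disk and unpack it there
!mkdir -p /home/ml_work
!cp /content/MyDrive/MyDrive/celeba_96x96.tar.gz /home/ml_work/
!tar -xzf /home/ml_work/celeba_96x96.tar.gz -C /home/ml_work/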

A second thing which you may have to do is to install some helpful Python modules which the standard Colab environment may not contain. One of these is the Python interface to Nvidia’s smi tool. It took me a while to find out that the right smi-module for present Python 3 versions is “nvidia-ml-py3”. So the required Jupyter cell command is:

!pip install nvidia-ml-py3
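Afterward the GPU memory consumption can be queried from within a notebook cell. The package provides, among others, the pynvml module; a minimal sketch:

import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)      # first (and only) GPU of the Colab VM
mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
print("VRAM used [MiB]:", mem.used // 1024**2, "- free [MiB]:", mem.free // 1024**2)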

Other modules (e.g. seaborn) work with their standard names.

Conclusion

Google Colab offers you a free Jupyter based ML environment. However, you have no guarantee that you always can access a GPU or a TPU. In general the usage conditions over multiple sessions are not clear. This alone, in my opinion, disqualifies the free Colab VMs as an environment for serious ML projects. But if you have no money for adequate machines it is at least good for development and limited tests. Or for learning purposes.

In addition the 12 GB limit on RAM usage is a problem when you deal with reasonably large data sets. This makes it necessary to split the work with such data sets into multiple steps based on batches. One also has to code such that Python’s garbage collection can do its work at short intervals. In the next post I present and discuss some simple measures to control the RAM and VRAM consumption. It was a bit surprising for me that one sometimes has to manually take care of the Keras backend status to keep the RAM consumption low.

Links

Tricks and tests
https://damor.dev/your-session-crashed-after-using-all-available-ram-google-colab/
https://github.com/googlecolab/colabtools/issues/253
https://www.analyticsvidhya.com/blog/2021/05/10-colab-tips-and-hacks-for-efficient-use-of-it/

Alternatives to Google Colab
See a YouTube video by an Indian developer who calls himself “1littlecoder” and discusses three alternatives to Colab: https://www.youtube.com/watch?v=xfzayexeUss

Kaggle (which is also Google)
https://towardsdatascience.com/kaggle-vs-colab-faceoff-which-free-gpu-provider-is-tops-d4f0cd625029

Criticism of Colab
https://analyticsindiamag.com/explained-5-drawback-of-google-colab/
https://www.reddit.com/r/GoogleColab/comments/r7zq3r/is_it_just_me_or_has_google_colab_suddenly_gotten/
https://www.reddit.com/r/GoogleColab/comments/lgz04a/regarding_usage_limits_in_colab_some_common_sense/
https://github.com/googlecolab/colabtools/issues/1964
https://medium.com/codex/can-you-use-google-colab-free-version-for-professional-work-69b2ba4392d2