Autoencoders and latent space fragmentation – IX – PCA transformation of the z-point distribution for CelebA

This series of posts is about standard convolutional Autoencoders [AEs]. We wish to use the creative ability of an AE’s Decoder for the creation of human face images. We trained a relatively simple example AE on the CelebA data set. For the generation of new human face images we wanted to feed the Decoder with randomly created z-vectors in the AE’s latent space. As arbitrary latent vectors did not deliver reasonable images, we had a look at the properties of the z-point distribution created in the latent space after training. And we found that z-vectors with end-points within this particular region indeed gave us some first reasonable images.

But during the first posts we also had to learn that it requires substantial effort to produce “randomly” created z-vectors which point to the coherent and confined latent space region populated by z-points for CelebA images. In addition we found that we obviously must fulfill complex correlation conditions between the latent vector components. In the last posts we have, therefore, investigated the structure of the latent space region populated for CelebA images a bit more closely. See:

Autoencoders and latent space fragmentation – VIII – approximation of the latent vector distribution by a multivariate normal distribution and ellipses

Autoencoders and latent space fragmentation – VII – face images from statistical z-points within the latent space region of CelebA

This gave us a somewhat surprising insight:

For our special case of human face images the density distribution of the z-points in the multidimensional latent space roughly resembled a multivariate normal distribution. The variables of this distribution were, of course, the components of the latent z-vectors (reaching from the origin of the latent space coordinate system to the z-points). The number density distributions for the values of the z-vector components could very well be approximated by Gaussian functions. In addition we found strong linear correlations between the z-vector components. This was completely consistent with diagonally oriented elliptic contour lines of the number density distribution for pairs of z-vector components in the respective coordinate planes. The angles between the ellipses’ axes and the axes of the (orthogonal) coordinate system varied.

In this post we test my thesis of a multivariate normal distribution in a different and more complicated way. If the number density distribution of the z-points for CelebA really is close to a multivariate normal distribution then we would expect that the contour lines of the 2-dimensional number density function for coordinate planes still are ellipses after a PCA transformation. Note that the coordinate planes then are planes of a new coordinate system (with orthogonal axes) moved and rotated with respect to the original coordinate system. We call the transformed coordinate system the “PCA coordinate system“. (It is clear that the vector components must be transformed consistently with respect to the new coordinate axes.)

This test is much harder because asymmetries, imperfections and deviations from a normal distribution will become clearly visible. Nevertheless we shall see that the z-point distribution decomposes roughly into uncorrelated Gaussian number density (or probability) functions for the (transformed) components of the z-vectors – as expected.

Note that number density distributions can be interpreted as probability distributions after a proper normalization. To understand the results and implications of this post some basic knowledge about multivariate normal probability density distributions in multidimensional spaces is required. Transformation properties of such distributions are important. In particular the effect of a PCA or SVD transformation, which diagonalizes the Pearson correlation matrix, on a multivariate normal distribution should be familiar: In the moved and rotated coordinate system the original linear correlations between vector components disappear. The coordinate axes of the transformed coordinate system get aligned to the eigenvectors (and main axes) of the probability density distribution. In addition elliptic contour lines prevail and the main axes of the ellipses get aligned with the axes of the PCA coordinate system.
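
As a compact reminder (standard textbook material, not specific to our AE), the density of a multivariate normal distribution with mean vector μ and covariance matrix Σ and the diagonalization of Σ read in LaTeX notation:

p(\mathbf{x}) \;=\; \frac{1}{(2\pi)^{N/2}\,|\boldsymbol{\Sigma}|^{1/2}}\; \exp\!\Big( -\tfrac{1}{2}\,(\mathbf{x}-\boldsymbol{\mu})^{T}\,\boldsymbol{\Sigma}^{-1}\,(\mathbf{x}-\boldsymbol{\mu}) \Big), \qquad \boldsymbol{\Sigma} \;=\; \mathbf{V}\,\boldsymbol{\Lambda}\,\mathbf{V}^{T}

In the transformed coordinates \mathbf{y} = \mathbf{V}^{T}(\mathbf{x}-\boldsymbol{\mu}), with \mathbf{V} holding the orthonormal eigenvectors and \boldsymbol{\Lambda} the eigenvalues \lambda_j of Σ, the density factorizes into independent one-dimensional Gaussians:

p(\mathbf{y}) \;=\; \prod_{j=1}^{N} \frac{1}{\sqrt{2\pi\,\lambda_j}}\; \exp\!\Big( -\frac{y_j^{2}}{2\,\lambda_j} \Big)

A contour condition like y_1²/λ_1 + y_2²/λ_2 = const then directly describes an ellipse whose main axes are aligned with the axes of the PCA coordinate system.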

For valuable information in previous posts see the link list at the bottom of this article.

Imperfections in the multivariate normal distribution and their consequences for PCA

In our case the latent space had 256 dimensions. The real number density functions for the values of the (256) latent vector components did not always show the full symmetry of their Gaussian fits. Deviations appeared near the center and at the flanks of the individual distributions. Still we got convincing overall ellipses in contour plots of the probability density for selected pairs of components. But there were also relatively many points which lay in regions outside the 2 to 3 sigma confidence ellipses for these 2-dimensional distributions – and the distribution of the points there seemed to violate the elliptic symmetry.

A perfect multivariate normal distribution (with only linear correlations between its variables) decomposes into (seemingly) uncorrelated normal distributions for the new variables, namely the latent vector components in the transformed coordinate system of the latent space. The coordinate transformation corresponds to a diagonalization of the Pearson correlation matrix via finding orthogonal eigenvectors. So we should not only get ellipses after the PCA transformation. The main axes of the resulting ellipses should also be aligned with the coordinate axes of the new translated and rotated PCA coordinate system. (This is a standard feature of “uncorrelated” Gaussians).

Note that the de-correlation is a pure coordinate effect which does not eliminate the dependencies of the original variables. The new coordinates and the respective variables do not have the same meaning as the old ones. We “only” change perspectives such that the mathematical description of the multivariate distribution gets simpler. However, the ratios of the axes of the new ellipses depend on the axis-ratios of the old ellipses in the original standard coordinate system of the latent space.

When we translate and rotate a coordinate system to diagonalize the Pearson matrix of an imperfect multivariate normal distribution the coordinate axes may afterward not fully align with the main axes of the distribution’s inner core – and then we might get clearly visible asymmetries in z-point projections to coordinate planes. Note that points outside the inner core have a relatively large impact on the PCA transformation and especially on the rotation of the new axes with respect to the old ones.

Reduction of outer z-point contributions

Outer z-points often correspond to CelebA images with strong variations in the background. As these z-points have a large distance from the center of the distribution they have a heavy impact on a PCA transformation to eigenvectors of the Pearson matrix. Therefore, I eliminated some outer points from the distribution. I did this by deleting points with coordinate values smaller or bigger than a symmetric threshold in the outer flanks of the 256 Gaussian distributions – e.g.:

z_j < mu_j – fact * FWHM or z_j > mu_j + fact * FWHM.

With FWHM meaning the full width at half maximum and mu_j being the center of the approximate Gaussian for the j-th vector component of the z-vectors. I tried multiple values of fact between 1.25 and 1.7.

One has to be careful with such an elimination process. Even though we set our limits beyond the 2 σ-level of the individual Gaussians ahead of the transformation, a step-wise consecutive elimination for all components reduces the number of surviving vectors significantly for small values of fact. The plots below correspond to a value of fact = 1.7 (with one discussed exception). This reduced the number of available vectors from the originally 170,000 used during the AE’s training to 160,000. From this distribution a random sub-selection of 80,000 z-points was taken to perform the PCA transformation. I call the resulting sample of z-points the “reduced distribution” below. The results did not change much for a sub-selection of more points. The second reduction saved me some CPU-time, however.
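
A minimal sketch of this elimination step, assuming that z_points holds the (170,000, 256) array of latent vectors and that ay_mu and ay_fwhm are arrays with the fitted Gaussian centers and FWHM values per component (these names are my choice, not from the original code):

import numpy as np

# Hypothetical arrays: ay_mu[j] = Gaussian center, ay_fwhm[j] = FWHM
# of the number distribution for vector component j
fact = 1.7

lower = ay_mu - fact * ay_fwhm      # per-component lower thresholds
upper = ay_mu + fact * ay_fwhm      # per-component upper thresholds

# keep only vectors whose components ALL lie inside the thresholds;
# this performs the consecutive elimination in one vectorized step
mask = ((z_points > lower) & (z_points < upper)).all(axis=1)
z_red = z_points[mask]
print("surviving vectors : ", z_red.shape[0])   # ~ 160,000 for fact = 1.7

# random sub-selection of 80,000 z-points for the PCA transformation
idx = np.random.choice(z_red.shape[0], size=80000, replace=False)
z_red_sub = z_red[idx]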

PCA transformation of the z-point distribution – explained variance and cumulative importance

Below you see the explained variance ratio of the first 40 and the cumulative importance of the first 120 main components after a PCA transformation of the reduced z-point distribution. (The PCA-algorithm was based on the SVD-algorithm, as provided by the sklearn-package.)
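
The transformation itself requires only a few lines with sklearn. Below is a minimal sketch, using z_red_sub from the sketch above; the original run may have used different parameters:

import numpy as np
from sklearn.decomposition import PCA

pca = PCA(n_components=256)            # keep all main components
z_pca = pca.fit_transform(z_red_sub)   # z-points in the PCA coordinate system

evr = pca.explained_variance_ratio_
print("variance explained by the first 15 components : {:.1%}".format(evr[:15].sum()))

cum = np.cumsum(evr)                   # cumulative importance
n80 = np.argmax(cum >= 0.80) + 1
print("components needed for 80% of the variance     : ", n80)   # ~ 120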

We see that we need around 120 PCA components to explain 80% of the variance of the original z-point data. The contribution of the first 15 components is significant, but it is NOT really dominant. Also interesting is the step-wise decline of the explained variance across the first 10 components, which results in a not-so-smooth curve.

I have in addition checked that the normalized Pearson correlation coefficient matrix was perfectly diagonalized. All off-diagonal values were smaller than 2.e-7 (which is consistent with error propagation) and the diagonal elements were 1.0 with an accuracy better than eight digits after the decimal point. As it should be.
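
With Numpy this check is a one-liner on the transformed data (z_pca from the sketch above):

# The Pearson correlation matrix of the transformed components
# should be the unit matrix - up to numerical errors
corr = np.corrcoef(z_pca.T)                  # shape (256, 256)
off_diag = corr - np.diag(np.diag(corr))
print("max |off-diagonal element| : ", np.abs(off_diag).max())        # < 2.e-7
print("max |diagonal - 1.0|       : ", np.abs(np.diag(corr) - 1.0).max())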

Gaussians per PCA-vector component?

We expect Gaussian number density functions for each of the z-vector components after the PCA coordinate transformation. The following line plots show the number density distribution (colored line) and a related Gaussian fit (dashed line) for selected vector components after the PCA transformation. The first plot gives an overview over 120 PCA components:

We see that all distributions look similar to Gaussians. In addition the centers of the distributions for all components are very close to or coincide with the origin of the PCA coordinate system. This is something we would expect for an original multivariate normal distribution before the PCA transformation to a new moved and rotated coordinate system: An affine transformation between coordinate systems with orthogonal axes maps Gaussians to Gaussians.
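
Such Gaussian fits can be reproduced e.g. with scipy’s curve_fit. A sketch for one transformed component; the bin width of 0.5 corresponds to the sampling interval mentioned below, while the concrete bin limits are my assumption:

import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a, mu, sigma):
    return a * np.exp(-0.5 * ((x - mu) / sigma)**2)

j = 3                                            # index of the PCA component
counts, edges = np.histogram(z_pca[:, j], bins=np.arange(-12.0, 12.5, 0.5))
centers = 0.5 * (edges[:-1] + edges[1:])         # bin centers

popt, _ = curve_fit(gaussian, centers, counts, p0=[counts.max(), 0.0, 2.0])
print("fit results: a = {:.1f}, mu = {:.3f}, sigma = {:.3f}".format(*popt))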

Components 0 and 1:

Components 2 and 3:

Components 4 and 5:

Components 6 and 7:

Components 8 and 9:

We see from these plots that component 3 shows values at the distribution’s flanks well above the values of its Gaussian fit. This may lead to deformed ellipses. In addition component 7 shows a slight asymmetry – probably triggering deviations both at the center and in the outer regions. Similar effects can be seen for other higher components, but they are not as pronounced. The chosen sampling interval (0.5) in the new coordinates smeared out small asymmetric wiggles in all distributions.

Ellipses?

Calculating approximate contour lines is CPU intensive. On my presently available laptop I had to focus on some selected examples for pairs of components of z-vectors in the transformed coordinate system. Experiments showed that the most critical components with respect to asymmetries were 2, 3, 7 and 8.

Let us first look at combinations of the first PCA components (0 to 8) with other components in the range between the 10th and the 120th PCA component. Just scroll down to see more images:

The fatter lines represent confidence ellipses derived both from correlation data, i.e. elements of the normalized correlation coefficient matrix, and from properties of the Gaussian distributions. See the last post for details on the method of getting confidence ellipses from the data of a probability distribution.
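
For readers who do not want to go back to the last post: one common way to derive such confidence ellipses is via the eigen-decomposition of the 2×2 covariance matrix of the two selected components. A sketch in my own formulation, equivalent in spirit to the method described there:

import numpy as np

def confidence_ellipse_params(x, y, n_sigma=2.0):
    # 2x2 covariance matrix of the two selected components
    cov = np.cov(x, y)
    eig_vals, eig_vecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    half_axes = n_sigma * np.sqrt(eig_vals)    # half-axes of the n-sigma ellipse
    # rotation angle of the main axis against the x-axis (in degrees)
    angle = np.degrees(np.arctan2(eig_vecs[1, 1], eig_vecs[0, 1]))
    return (x.mean(), y.mean()), half_axes, angle

center, axes, angle = confidence_ellipse_params(z_pca[:, 0], z_pca[:, 1])
print("center :", center, " half-axes :", axes, " angle [deg] :", angle)

For the PCA components we expect angles close to 0 or 90 degrees, i.e. ellipses aligned with the coordinate axes.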

Ellipses for density distributions in coordinate planes for pairs of the most important PCA components

Before you think that everything is just perfect we should focus on the most important 10 PCA components and their mutual pair-wise density distributions. We expect trouble especially for combinations with the components 2, 3 and 7.

Component 0 vs other components < 10

We see already here that asymmetries in the density distributions, which appear relatively small in the Gaussian-like curves per component, leave their imprint on the elliptic curves. The contour lines are not always centered with respect to the overall confidence ellipses.

Component 1 vs other components < 10

Component 2 vs other components < 10

The plot for components 2 and 3 was based on fact = 1.35 instead of fact = 1.7. See a section below for more information.

Component 3 vs other components < 10

Component 4 vs other components < 10

Component 5 vs other components < 10

Component 6 vs other components < 10

Component 7 vs other components < 10

Component 8 vs other components < 10

Component 9 vs other components < 10

Ellipses for other selected component pairs

Comments

The z-point distribution is rather close to a multivariate normal distribution, but it is not perfect. The PCA transformation revealed clearly that the center of the overall z-point distribution does not completely coincide with the centers of the individual density distributions for the z-vector components. And not all individual curves are fully symmetrical. Some plots show the expected ellipses, but not fully centered. We also see signs of non-linear correlations as not all ellipses show a complete alignment with the axes of the PCA coordinate system.

The impact of outer z-points becomes obvious when we compare plots for the component pair (2, 3) for different values of fact. The plots are for fact = 1.35, 1.45, 1.55, 1.7 and fact = 1.8 (in this order from left to right and downward). (Regarding z-point elimination the fact-values correspond to 122,000, 138,000, 149,000, 159,000 and 164,000 remaining z-points.)

The flips in the left-right orientation between some of the plots can be ignored. They depend on random aspects of the transformations and plotting routines used.

We see that z-points (for some strange images) which have a large distance from the center of the distribution have a major impact on the central density distribution. 10% of the outer points influence the orientation of the central ellipses by their coordinates – although these outer points are not part of the inner core of the distribution.

BUT: Overall we find the effect we wanted to see. The elliptic contours are clearly visible and most of them are aligned with the axes of the PCA coordinate system and the main axes of the confidence ellipses for the z-point distribution in the PCA coordinate system. The inner core is aligned with the axes of the coordinate system even for the critical PCA components 2 and 3, when we omit between 12% and 25% of the points (outside a 3 σ-level).

This means: The PCA transformation confirms that an inner core of the z-point distribution is well described by a multivariate normal distribution – with linear correlations between the number density distributions for various components.

This is an interesting finding by itself.

Conclusion

The last and the present post of this series have shown that a convolutional AE maps human face images of the CelebA data set to a multivariate normal distribution in its multidimensional latent space. Although this is interesting by itself we must not forget our ultimate goal – namely to generate random z-vectors which should deliver us reasonable human face images via the AE’s Decoder. But the results of our analysis provide solid criteria to generate such vectors fulfilling complex correlation conditions.

We can now define statistical generation methods which restrict the components of our aspired random z-vectors such that the resulting z-points lie within regions surrounded by confidence ellipses of the multivariate normal distribution. One such method is the topic of the next post.

Autoencoders and latent space fragmentation – X – a method to create suitable latent vectors for the generation of human face images

Links to other posts of this series

Autoencoders and latent space fragmentation – VIII – approximation of the latent vector distribution by a multivariate normal distribution and ellipses

Autoencoders and latent space fragmentation – VII – face images from statistical z-points within the latent space region of CelebA

Autoencoders and latent space fragmentation – VI – image creation from z-points along paths in selected coordinate planes of the latent space

Autoencoders and latent space fragmentation – V – reconstruction of human face images from simple statistical z-point-distributions?

Autoencoders and latent space fragmentation – IV – CelebA and statistical vector distributions in the surroundings of the latent space origin

Autoencoders and latent space fragmentation – III – correlations of latent vector components

Autoencoders and latent space fragmentation – II – number distributions of latent vector components

Autoencoders and latent space fragmentation – I – Encoder, Decoder, latent space


Autoencoders and latent space fragmentation – VII – face images from statistical z-points within the latent space region of CelebA

I continue with my analysis of the z-point and latent vector distribution a trained Autoencoder creates in its latent space for CelebA images. These images show human faces. To make the Autoencoder produce new face images from statistically generated latent vectors is a problem. See some previous posts in this series for reasons.

Autoencoders and latent space fragmentation – I – Encoder, Decoder, latent space
Autoencoders and latent space fragmentation – II – number distributions of latent vector components
Autoencoders and latent space fragmentation – III – correlations of latent vector components
Autoencoders and latent space fragmentation – IV – CelebA and statistical vector distributions in the surroundings of the latent space origin
Autoencoders and latent space fragmentation – V – reconstruction of human face images from simple statistical z-point-distributions?

These problems are critical for a generative usage of standard Autoencoders. Generative tasks in Machine Learning very often depend on a clear and understandable structure of the latent space regions an Encoder/Decoder pair uses. In general we would like to create statistical latent vectors such that a reasonable object creation (here: image creation) is guaranteed. In the last post

Autoencoders and latent space fragmentation – VI – image creation from z-points along paths in selected coordinate planes of the latent space

we saw that we at least get some clear face features when we make use of some basic information about the shape and location of the z-point distribution for the images the AE was trained with. This distribution is specific for an Autoencoder, the image set used and details of the training run. In our case the z-point distribution could be analyzed by rather simple methods after the training of an AE with CelebA images had been concluded. The number distribution curves per vector component revealed value limits per latent vector component. The core of the z-point distribution itself appeared to occupy a single and rather compact sub-volume inside the latent space. (The exact properties depend on the AE’s layer structure and the training run.) Of the N=256 dimensions of our latent space only a few determined the off-origin position of the center of the z-point distribution’s core. This multidimensional core had an overall ellipsoidal shape. We could see this both from the Gaussian like number distributions for the components and more directly from projections onto 2-dimensional coordinate planes. (We will have a closer look at these properties which indicate a multivariate normal distribution in forthcoming posts.)

As long as we kept the statistical values for artificial latent vector components within the value ranges set by the distribution’s core our chances that the AE’s Decoder produced images with new and clearly visible faces rose significantly. So far we have only used z-points along defined paths crossing the distribution’s core. In this post I will vary the components of our statistically created latent vectors a bit more freely. This will again show us that correlations of the vector components are important.

Constant probability for each component value within a component specific interval

In the first posts of this series I naively created statistical latent vectors from a common value range for the components. We saw this was an inadequate approach – both for general mathematical and for problem specific reasons. The following code snippet shows an approach which takes into account value ranges coming from the Gaussian-like distributions for the individual components of the latent vectors for CelebA. The arrays “ay_mu_comp” and “ay_mu_hw” have the following meaning:

  • ay_mu_comp: Component values of a latent vector pointing to the center of the CelebA related z-point distribution
  • ay_mu_hw: Half-width of the Gaussian like number distribution for the component specific values
import numpy as np

# Context: z_dim = 256, ay_mu_comp and ay_mu_hw as described above,
# AE is the trained Autoencoder whose Decoder is used below
num_per_row  = 7
num_rows     = 3
num_examples = num_per_row * num_rows

fact = 1.0

# Get component specific value ranges into an array 
li_b = []
for j in range(0, z_dim):  
    add_val = fact * abs(ay_mu_hw[j])
    b_l = ay_mu_comp[j] - add_val      # left border of the value interval
    b_r = ay_mu_comp[j] + add_val      # right border of the value interval
    li_b.append((b_l, b_r))
    
# Statistical latent vectors with uniformly distributed component values
ay_stat_zpts = np.zeros( (num_examples, z_dim), dtype=np.float32 )     
for i in range(0, num_examples): 
    for j in range(0, z_dim):
        b_l = li_b[j][0]
        b_r = li_b[j][1]
        val_c = np.random.uniform(b_l, b_r) 
        ay_stat_zpts[i, j] = val_c

# Prediction 
reco_img_stat = AE.decoder.predict(ay_stat_zpts)
# print("Shape of reco_img = ", reco_img_stat.shape)

The main difference is that we take random values from real value intervals defined per component. Within each interval we assume a constant probability density. The factor “fact” controls the width of the value interval we use. A small value covers the vicinity of the center of the CelebA z-point distribution; a larger fact leads to values at the border region of the z-point distribution.

Image results for different value ranges

fact=0.4

fact=0.5

fact=0.6

fact=0.7

fact=0.8

fact=0.9

fact=1.0

Selected individuals

Below you find some individual images created for a variety of statistical vectors. They are ordered by a growing distance from the center of the CelebA related z-point distribution.

Quality? Missing correlations?

The first thing we see is that we get problems for all factors fact. Some images are OK, but others show disturbances and the contrasts of the face against the background are not well defined – even for small factors fact. The reason is that our random selection ignores correlations between the components completely. But we know already that there are major correlations between certain vector components.

For larger values of fact the risk to place a generated latent vector outside the core of the CelebA z-point distribution gets bigger. Still, some images show interesting face variations.

Obviously, we have no control over the transitions from face to hair and from hair to background. Our suspicion is that micro-correlations of the latent vector components for CelebA images may encode the respective information. To understand this aspect we would have to investigate the vicinity of a z-point a bit more in detail.

Conclusion

We are able to create images with new human faces by using statistical latent vectors whose component values fall into component specific defined real value intervals. We can derive the limits of these value ranges from the real z-point distribution for CelebA images of a trained AE. But again we saw:

One should not ignore major correlations between the component values.

We have to take better care of this point in a future post when we perform a transformation of the coordinate system to align with the main axes of the z-point distribution. But there is another aspect which is interesting, too:

Micro-correlations between latent vector components may determine the transition from faces to complex hair and background-patterns.

We can understand such component dependencies when we assume that the superposition of small-scale patterns, which a convolutional Decoder must arrange during image creation, is a subtle balancing act. A first step to understand such micro-correlations better could be to have a closer look at the nearest CelebA z-point neighbors of an artificially created latent z-point. If they form some kind of pattern, then maybe we can change the components of our z-point a bit in the right direction?
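
Such a neighbor analysis could e.g. be based on sklearn’s NearestNeighbors class. A sketch, assuming z_points holds the CelebA latent vectors and ay_stat_zpts the artificial vectors from the snippet above:

import numpy as np
from sklearn.neighbors import NearestNeighbors

# index structure over the CelebA z-points
nn = NearestNeighbors(n_neighbors=10).fit(z_points)

# distances and indices of the 10 nearest CelebA neighbors
# of the first artificial z-point
dists, idxs = nn.kneighbors(ay_stat_zpts[0:1])
print("distances to the 10 nearest CelebA z-points : ", dists[0])

# the component-wise mean of the neighbors could serve as a direction
# in which to shift our artificial z-point
mean_neighbor = z_points[idxs[0]].mean(axis=0)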

Or do we have to deal with correlations on a much coarser level? What do the Gaussians and the roughly elliptic form of the core of the z-point distribution for CelebA images really imply? This is the topic of the next post.

Autoencoders and latent space fragmentation – VIII – approximation of the latent vector distribution by a multivariate normal distribution and ellipses


Autoencoders and latent space fragmentation – V – reconstruction of human face images from simple statistical z-point-distributions?

In this post series we have so far studied the distribution of latent vectors and z-points for CelebA images in the latent space of an Autoencoder [AE]. The CelebA images show human faces. We want to reconstruct images with new faces from artificially created, statistical latent vectors. Our latent space had N=256 dimensions. For basics of Autoencoders, terms like latent space, z-points, latent vectors etc. see the first post of this series:

Autoencoders and latent space fragmentation – I – Encoder, Decoder, latent space

During the experiments and analysis discussed in the other posts

Autoencoders and latent space fragmentation – II – number distributions of latent vector components
Autoencoders and latent space fragmentation – III – correlations of latent vector components
Autoencoders and latent space fragmentation – IV – CelebA and statistical vector distributions in the surroundings of the latent space origin

we have learned that the multi-dimensional latent space volume which the Encoder of a convolution AE fills densely with z-points for CelebA images has a special shape and location. The bulk of z-points is confined to a multi-dimensional ellipsoid which is relatively small. Its center has a position, which does not coincide with the latent space’s origin. The bulk center is located within or very close to a hyper-sub-volume spanned by only a few coordinate axes.

We also saw that we have difficulties hitting this region with artificially created z-points via latent vectors, especially if the approach for statistical vector generation is a simple one. In the beginning we had naively assumed that we can treat the vector components as independent variables. We assigned each vector component a value taken from a value-interval [-b, b] with a constant probability density. But we saw that such a simple and seemingly well-founded method for statistical vector creation has a variety of disadvantages with respect to the bulk distribution of latent vectors for CelebA images. To name a few problems:

  • The components of the latent vectors for CelebA images are correlated and not independent. See the third post for details.
  • If we choose b too large (> 2.0) then the length or radius values of the created vectors will not fit typical radii of the CelebA vectors. See the 2nd post for details.
  • The generated vectors only correspond to z-points which fill parts of a thin multi-dimensional spherical shell within the cube filled by points with coordinate values -b < x_j < +b. This is due to mathematical properties of the distribution in multi-dimensional spaces. See the second post for more details.
  • Optimal parameter values are 1.0 < b < 2.0, just to guarantee at least the right vector length. However, due to the fact that the origin of the latent space is located in a border region of the CelebA bulk z-point distribution, most points will end up outside the bulk volume.

In this post I will show you that our statistical vectors and the related z-points do indeed not lead to a successful reconstruction of images with clearly visible human faces. Afterward I will briefly discuss whether we still can have some hope to create new human face images by a standard AE’s Decoder and statistical points in its latent space.

Relevant values of parameter b for the interval from which we choose statistical values for the vector components

What does the number distribution for the length of latent CelebA vectors look like? For case I, explained in the 2nd post of this series, we get:

For case II:

Now let me remind you of a comparison with the lengths of statistical vectors for different values of b:

See the 2nd post of this series for more details. Obviously, for our simple generation method used to create statistical vectors a parameter value b = 1.5 is optimal. b=1.0 and b=2.0 mark the edges of a reasonable range of b-values. Larger or smaller values will drive the vector lengths out of the required range.
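
A quick numerical check (not part of the original analysis) makes the length argument transparent: for components drawn uniformly from [-b, b] we have E[x_j²] = b²/3, so the vector length concentrates around b·sqrt(N/3), i.e. around 13.9 for b = 1.5 and N = 256:

import numpy as np

N = 256
for b in (1.0, 1.5, 2.0, 5.0):
    v = np.random.uniform(-b, b, size=(10000, N))
    radii = np.linalg.norm(v, axis=1)
    print("b = {:.1f} : mean radius {:6.2f} (theory {:6.2f}), std {:.2f}"
          .format(b, radii.mean(), b * np.sqrt(N / 3.0), radii.std()))

The small standard deviation of the radii also illustrates the thin spherical shell which the generated z-points populate.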

Image reconstructions for statistical vectors with independent component values x_j in the range -1.5 < x_j < 1.5

I created multiples of 512 images by feeding 512 statistical vectors into the Decoder of our convolutional Autoencoder which was trained on CelebA images (see the 1st and the 2nd post for details). The vector components x_j fulfilled the condition:

-1.5 ≤   x_j   ≤ 1.5,    for all j in [0, 255]

The result came out to be frustrating every time. Only a singular image showed something like the outlines of a human face. The following plot is one example out of a series of tens of failures.

And this was an optimal b-value! 🙁 But due to the analysis of the 4th post in this series we had to expect this.

Image reconstructions for statistical vectors with independent component values x_j in the range -1.0 < x_j < 1.0

The same for

-1.0 ≤   x_j   ≤ 1.0,    for all j in [0, 255]

Image reconstructions for statistical vectors with independent component values x_j in the range -2.0 < x_j < 2.0

The same for

-2.0 ≤   x_j   ≤ 2.0,    for all j in [0, 255]

Image reconstructions for statistical vectors with independent component values in the range -5.0 < x_j < 5.0

If you have read the previous posts in this series then you may think: Some of the component values of latent vectors for CelebA images had bigger values anyway. So let us try:

-5.0 ≤   x_j   ≤ 5.0,    for all j in [0, 255]

And we get:

Although some component values may fall into the value regions of latent CelebA vectors most of them do not. The lengths (or radii) of the statistical vectors for b=5.0 do not at all fit the average radii of latent vectors for CelebA images.

What should we do?

I have described the failure to create reasonable images from statistical vectors in some regions around the origin of the latent space also in other posts on standard Autoencoders and in posts on Variational Autoencoders [VAEs]. For VAEs one enforces that regions around the origin are filled systematically by special Encoder layers.

But is all lost for a standard Encoder? No, not at all. It is time that we begin to use our knowledge about the latent z-point distribution which our AE creates for CelebA images. As I have shown in the 2nd post it is simple to get the number distribution for the component values. Because these distributions were similar to Gaussians we have rather well defined value regions per component which we can use for vector creation. I.e. we can perform a kind of first order approximation to the main correlations of the components and thus put our artificially created values inside the densely populated bulk of the z-point region for CelebA images. If that region in the latent space has a meaning at all then we should find some interesting reconstruction results for z-points within it.

We can do this even with our simple generation method based on constant probability densities, if we use individual value regions -b_j ≤ x_j ≤ b_j for each component (instead of a common value interval for all components). But even better would be Gaussian approximations to the real number distributions for the component values. In any case we have to restrict the values for each component much more strongly than before.
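
A sketch of the Gaussian variant suggested above; ay_mu and ay_sigma are hypothetical arrays holding the fitted center and standard deviation per component (with sigma = FWHM / 2.355 for a Gaussian):

import numpy as np

num_vecs = 512
z_dim    = 256

# draw each component from its own fitted normal distribution;
# loc and scale broadcast over the 512 generated vectors
ay_stat = np.random.normal(loc=ay_mu, scale=ay_sigma, size=(num_vecs, z_dim))

# optionally clip to the core region to avoid extreme outliers
ay_stat = np.clip(ay_stat, ay_mu - 2.0 * ay_sigma, ay_mu + 2.0 * ay_sigma)

Note that this still treats the components as independent variables; it only fixes the per-component value ranges and thus remains a first order approximation to the real distribution.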

Conclusion

What we already had expected from previous analysis became true. A simple method for the creation of statistical latent vectors does not give us z-points which are good for the creation of reasonable images by a trained AE’s Decoder. The simplifications

  1. that we can treat the component values of latent vectors as independent variables
  2. and that we can assign the components x_j values from a common interval -b ≤ x_j ≤ b

cause us to miss the bulk region in the AE’s latent space which gets filled by its trained Encoder. We have demonstrated this for the case of an AE which had been trained on images of human faces.

In the next post

Autoencoders and latent space fragmentation – VI – image creation from z-points along paths in selected coordinate planes of the latent space

I will show that even a simple vector creation method can give us latent vectors within the region filled for CelebA images. And we will see that such points indeed lead to the reconstruction of images with clearly visible human faces by the Decoder of a standard AE trained on the CelebA dataset.


Autoencoders and latent space fragmentation – IV – CelebA and statistical vector distributions in the surroundings of the latent space origin

I continue with my investigation of the z-point- and latent vector distribution which a convolutional Autoencoder [AE] creates in its latent space for CelebA images. Such images show human faces – and our objective is to find out whether we can force the AE’s Decoder to create human face images from artificially generated and statistically distributed z-points in the latent space. E.g. for creative tasks – without using a Variational Autoencoder.

The first posts of this series

Autoencoders and latent space fragmentation – I – Encoder, Decoder, latent space
Autoencoders and latent space fragmentation – II – number distributions of latent vector components
Autoencoders and latent space fragmentation – III – correlations of latent vector components

have revealed that the multi-dimensional volume region filled with z-points for CelebA images is rather small and has an ellipsoidal shape. The region is extended in the direction of a few main axes. Its center is located at some distance from the origin of the latent space. Its position is rather close to or within a hyper-volume of the latent space spanned by a few axes, only. The origin of the latent space is instead located close to the border of the bulk region of CelebA z-points.

We have also found out that artificially created z-points may miss the region of the CelebA z-points. In particular when we generate respective vectors under the assumption that the vector components are independent variables and can be filled with values obeying a constant probability distribution within a real value interval [-b, b]. See the second post for links to a study of the mathematical properties of such artificial vector distributions. We saw that the radii of the artificial vectors only match those of CelebA vectors if we choose 1.0 < b < 2.0. An optimal value appeared to be b = 1.5. This means that the created statistical vectors would have positions relatively close to the origin. We had hoped that such artificial vectors overlap at least in parts with the latent vector distribution for CelebA. Such an overlap may be required to get a reconstruction of images with clearly visible human faces.

In this post I, therefore, have a look at the surroundings of the latent space origin. We focus on projections of the neighboring z-points onto planes formed by selected latent vector components. We choose these components such that the border position of the origin with respect to the volume occupied by the bulk of CelebA z-points becomes clear. We afterward look at real and artificial z-points close to a slice of the multi-dimensional latent space volume. The vectors to the z-points in this slice fulfill the following condition: All components x_j, with the exception of two selected ones, have values |x_j| ≤ 1.5. This will reduce projection effects with respect to the selected projection plane. The results will show us that many of the artificial z-points unfortunately fall into empty regions (voids). It is sufficient to show this for some selected coordinate pairs. The latent space of our AE has N=256 dimensions.

Position of the origin with respect to the CelebA z-point distribution

First I want to remind you of the border position of the latent space’s origin with respect to the bulk of the CelebA z-point-distribution. The following plots show again 5000 randomly selected z-points corresponding to latent vectors for CelebA images (blue points). The yellow point marks the origin of the latent space. The red dots correspond to 10 artificially created z-points for b = 1.5. The individual plots correspond to selected pairs of vector components and planes spanned by respective axes.

That the center of the distribution appears extremely densely populated is a bit due to the chosen diameter of the blue points. When interpreting these plots, please note: We are looking at orthogonal projections. Therefore we always have to take into account projection effects.

A closer look at the environment of the latent space’s origin

The following plot shows the environment of the origin with a higher resolution for our 5000 z-points. Despite the fact that this is a projection of many points onto the selected plane we get a first impression that the CelebA z-point distribution is not really a homogeneous one – although it is relatively dense around the center of the ellipsoidal bulk distribution.

Some of our artificial z-points seem in both cases to mix with the CelebA z-points. Below I want to show that this is only a projection effect.

The surroundings of the origin in a flat cuboid

In the second post of this series we had derived that a parameter b = 1.5 is optimal to get the right vector length of our artificial statistical vectors to match the length of the latent CelebA vectors. Therefore, I have reduced the amount of CelebA z-points by imposing the following conditions on the components x_j:

-1.5 ≤   x_j   ≤ 1.5,    for all j in [0, 255], with the exception of the two selected indices j = j1 and j = j2

I.e. we look at CelebA z-points close to the plane defined by the axes corresponding to our specially selected vector components x_j1 and x_j2. Thus we get rid of projection effects from any points outside the multi-dimensional slice. We only get projections from points inside our multi-dimensional slice, which contains the cube defined by side-lengths -1.5 ≤ x_j ≤ +1.5 around the origin. Our statistically generated vectors have end-points inside this multi-dimensional cube. The result is:

Ooops, only two out of our 5000 CelebA points are present in the slice region, which I also have populated with 200 artificial z-points. So, clearly this is not a region which the AE’s Encoder fills densely for CelebA images.

Even for 80,000 CelebA z-points the situation does not improve so much. Only 56 latent CelebA vectors point to our region.

Most of the artificially created z-points (in red) thus come to fall into empty volume regions – regions not used by CelebA z-points. This already diminishes our chances to reconstruct reasonable human face images by our artificial distribution of latent vectors.

Situation for a second and a third plane

Can we reproduce this also for other component pairs? Yes, indeed, e.g. for the pair (177, 242):

For 5000 CelebA z-points:

Only one out of 5000 CelebA vectors points to the relevant slice:

For 80,000 images only 39 regular CelebA z-points survive. I skip the respective image.

Vector components (30, 118)
Another interesting pair of components and respective coordinate axes is (30, 118):

And for our slice we get:

From 80,000 points only around 70 are located in our slice of the multidimensional space:

Vector components (118, 156)
For the pair (118, 156) the respective plots are:

We see some overlaps between the artificially created points and the CelebA z-points. However, you should keep in mind that the probability that an artificial point falls into a void in the multi-dimensional space gets bigger with every individual component value putting the point outside the CelebA bulk region. And: Our “overlaps” are still the result of a (significantly reduced) projection effect. Furthermore, the plots do not distinguish the components of an individual point from those of other points. If one component shows an overlap with CelebA points, another component for the same point may not. And one component is enough to determine a position outside the bulk.

Radii of the artificially created z-points

When rating probabilities of our artificially created z-points to hit a region populated by CelebA z-points you should also remember that our artificially created points fall into a rather narrow spherical shell for so many dimensions as our latent space has. See the second post of this series for this phenomenon.

Conclusion

What have we learned? The second post in this series gave us hope that at least some of the artificially created z-points (based on independent component values taken with a constant probability from a common value interval) would get a position within the confined region populated by the real CelebA z-points. A closer look, however, showed us that the origin of the latent space resides within a border-region of the ellipsoidal bulk of the multi-dimensional CelebA z-point distribution. Only very few CelebA z-points are found in this border region and within slices close to selected coordinate planes.

What does this mean? The chances that most of the artificially created z-points for b = 1.5 will fall into a void not used by the AE’s Decoder for CelebA images are much bigger than we originally may have thought. In addition our statistical points only populate a spherical shell within a multi-dimensional cube around the origin of the latent space with a side length of 2b. Even if we compensate this effect by generating vectors for different b-values we do not gain much. This raises the fundamental question whether a method that generates statistical z-points via independent component values is a reasonable choice for our objective to reconstruct human face images.

In the next post

Autoencoders and latent space fragmentation – V – reconstruction of human face images from simple statistical z-point-distributions?

I will show that the results of such reconstruction efforts are indeed frustrating. As a consequence I will discuss how we could simply adjust our generating method to the real distribution of latent vectors for CelebA images.


Autoencoders and latent space fragmentation – III – correlations of latent vector components

The topics of this post series are

  • convolutional Autoencoders,
  • images of human faces, provided by the CelebA dataset
  • and related data point and vector distributions in the AEs’ latent spaces.

In the first post

Autoencoders and latent space fragmentation – I – Encoder, Decoder, latent space

I have repeated some basics about the representation of images by vectors. An image corresponds e.g. to a vector in a feature space with orthogonal axes for all individual pixel values. An AE’s Encoder compresses and encodes the image information in form of a vector in the AE’s latent space. This space has many, but significantly fewer dimensions than the original feature space. The end-points of latent vectors are so-called z-points in the latent space. We can plot their positions with respect to two coordinate axes in the plane spanned by these axes. The positions reflect the respective vector component values and are the result of an orthogonal projection of the z-points onto this plane. In the second post

Autoencoders and latent space fragmentation – II – number distributions of latent vector components

I have discussed that the length and orientation of a latent vector correspond to a recipe for a constructive process of the AE’s (convolutional) Decoder: The vector component values tell the Decoder how to build a superposition of elementary patterns to reconstruct an image in the original feature space. The fundamental patterns detected by the convolutional AE layers in images of the same class of objects reflect typical pixel correlations. Therefore the resulting latent vectors should not vary arbitrarily in their orientation and length.

By an analysis of the component values of the latent vectors for many CelebA images we could explicitly show that such vectors indeed have end points within a small coherent, confined and ellipsoidal region in the latent space. The number distributions of the vectors’ component values are very similar to Gaussian functions. Most of them with a small standard deviation around a central mean value very close to zero. But we also found a few dominant components with a wider value spread and a central average value different from zero. The center of the latent space region for CelebA images thus lies at some distance from the origin of the latent space’s coordinate system. The center is located close to or within a region spanned by only a few coordinate axes. The Gaussians define a multidimensional ellipsoidal volume with major anisotropic extensions only along a few primary axes.

In addition we studied artificial statistical vector distributions which we created with the help of a constant probability distribution for the values of each of the vector components. We found that the resulting z-points of such vectors most often are not located inside the small ellipsoidal region marked by the latent vectors for the CelebA dataset. Due to the mathematical properties of this kind of artificial statistical vectors only rather small parameter values 1.0 ≤ b ≤ 2.0 for the interval [-b, b], from which we pick all the component values, allow for vectors with at least the right length. However, whether the orientations of such artificial vectors fit the real CelebA vector distribution also depends on possible correlations of the components.

In this post I will show you that there indeed are significant correlations between the components of latent vectors for CelebA images. The correlations are most significant for those components which determine the location of the center of the z-point distribution and the orientation of the main axes of the z-point region for CelebA images. Therefore, a method for statistical vector creation which explicitly treats the vector components as statistically independent properties may fail to cover the interesting latent space region.

Normalized correlation coefficient matrix

When we have N variables (x_1, x_2, …, x_N) and M parallel observations of the variable values then we can determine possible correlations by calculating the so-called covariance matrix with elements C_ij. A normalized version of this matrix provides the so-called “Pearson product-moment correlation coefficients” with values in the range [-1.0, 1.0]. Values with an absolute magnitude close to 1.0 indicate a significant correlation of the variables x_i and x_j. For more information see e.g. the following links to the documentation on Numpy’s versions for the calculation of the (normalized) covariance matrix from an array containing the observations in an ordered matrix form: “numpy.cov” and “numpy.corrcoef“.

So what are the “variables” and “observations” in our case?

Latent vectors and their components

In the last post we have calculated the latent vectors that a trained convolutional AE produces for 170,000 images of the CelebA dataset. As we chose the number N of dimensions of the latent space to be N=256, each of the latent vectors had 256 components. We can interpret the 256 components as our “variables” and the latent vectors themselves as “observations”. An array containing M rows for individual vectors and N columns for the component values can thus be used as input for Numpy’s algorithm to calculate the normalized correlation coefficients.

When you try to perform the actual calculations you will soon detect that determining the covariance values based on a statistics for all of the 170,000 latent vectors which we created for CelebA images requires an enormous amount of RAM with growing M. So, we have to choose M << 170,000. In the calculations below I took M = 5000 statistically selected vectors out of my 170,000 training vectors.

Some special latent vector components

Before I give you the Pearson coefficients I want to remind you of some special components of the CelebA latent vectors. I had called these components the dominant ones as they had either relatively large absolute mean values or a relatively large half-width. The indices of these components, the related mean values mu and half-widths hw are listed below for an AE with filter numbers in the Encoder’s and Decoder’s 4 convolutional layers given by (64, 64, 128, 128) and (128, 128, 64, 64), respectively:

 15   mu : -0.25 :: hw:  1.5
 16   mu :  0.5  :: hw:  1.125
 56   mu :  0.0  :: hw:  1.625
 58   mu :  0.25 :: hw:  2.125
 66   mu :  0.25 :: hw:  1.5
 68   mu :  0.0  :: hw:  2.0
110   mu :  0.5  :: hw:  1.875
118   mu :  2.25 :: hw:  2.25
151   mu :  1.5  :: hw:  4.125
177   mu : -1.0  :: hw:  2.25
178   mu :  0.5  :: hw:  1.875
180   mu : -0.25 :: hw:  1.5
188   mu :  0.25 :: hw:  1.75
195   mu : -1.5  :: hw:  2.0
202   mu : -0.5  :: hw:  2.25
204   mu : -0.5  :: hw:  1.25
210   mu :  0.0  :: hw:  1.75
230   mu :  0.25 :: hw:  1.5
242   mu : -0.25 :: hw:  2.375
253   mu : -0.5  :: hw:  1.0

The first column provides the component number.

Pearson correlation coefficients for dominant components of latent CelebA vectors

For the latent space of our AE we had chosen the number N of its dimensions to be N=256. Therefore, the covariance matrix has 256×256 elements. I do not want to bore you with a big matrix having only a few elements with a size worth mentioning. Instead I give you a code snippet which should make it clear what I have done:

import numpy as np
#np.set_printoptions(threshold=sys.maxsize)

# The Pearson correlation coefficient matrix 
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
print(z_points.shape)
print()
num_pts      = 5000

# Special points in slice 
num_pts_spec = 100000
#jc1_sp = 118; jc2_sp = 164   # (first choice, overridden below)
jc1_sp = 177; jc2_sp = 195

len_z = z_points.shape[0]

ay_sel_ptsx = z_points[np.random.choice(len_z, size=num_pts, replace=False), :]
print(ay_sel_ptsx.shape)

# special points 
threshcc = 2.0
ay_sel_pts1 = ay_sel_ptsx[( abs(ay_sel_ptsx[:,:jc1_sp])         < threshcc).all(axis=1)] 
print("shape of ay_sel_pts1 :  ", ay_sel_pts1.shape )
ay_sel_pts2 = ay_sel_pts1[( abs(ay_sel_pts1[:,jc1_sp+1:jc2_sp]) < threshcc).all(axis=1)] 
print("shape of ay_sel_pts2 :  ", ay_sel_pts2.shape )
ay_sel_pts3 = ay_sel_pts2[( abs(ay_sel_pts2[:,jc2_sp+1:])       < threshcc).all(axis=1)] 
print("shape of ay_sel_pts3 :  ", ay_sel_pts3.shape )
ay_sel_pts_sp  = ay_sel_pts3

ay_sel_pts = ay_sel_ptsx.transpose()
print("shape of ay_sel_pts :  ", ay_sel_pts.shape)

ay_sel_pts_spec = ay_sel_pts_sp.transpose()
print("shape of ay_sel_pts_spec :  ",ay_sel_pts_spec.shape)
print()
       
# Correlation coefficients for the selected points  
corr_coeff = np.corrcoef(ay_sel_pts)
nd = corr_coeff.shape[0]

print(corr_coeff.shape)
print()

for k in range(1,7): 
    thresh = k/10.
    print( "num coeff >", str(thresh), ":", int( ( (np.absolute(corr_coeff) > thresh).sum() - nd) / 2) )

The result was:

(170000, 256)

(5000, 256)
shape of ay_sel_pts1 :   (101, 256)
shape of ay_sel_pts2 :   (80, 256)
shape of ay_sel_pts3 :   (60, 256)
shape of ay_sel_pts :   (256, 5000)
shape of ay_sel_pts_spec :   (256, 60)

(256, 256)

num coeff > 0.1 : 1456
num coeff > 0.2 : 158
num coeff > 0.3 : 44
num coeff > 0.4 : 25
num coeff > 0.5 : 16
num coeff > 0.6 : 8

The lines at the end give you the number of pairs of component indices whose correlation coefficients are bigger than a threshold value. All numbers vary a bit with the selection of the random vectors, but in narrow ranges around the values above. The intermediate part reduces the amount of CelebA vectors to a slice where all components have small values < 2.0, with the exception of 2 special components. This reflects z-points close to the plane spanned by the axes for the two selected components.

Now let us extract the component indices which have a significant correlation coefficient > 0.5:

li_ij = []
li_ij_inverse = {}
# threshc  = 0.2      
threshc  = 0.5

ncc = 0
for i in range(0, nd):
    for j in range(0, nd):
        val = corr_coeff[i,j]
        if( j!=i and abs(val) > threshc ): 
            # Check if we have the index pair already 
            if (i,j) in li_ij_inverse.keys():
                continue 
            # save the inverse combination
            li_ij_inverse[(j,i)] = 1
            li_ij.append((i,j))
            print("i =",i,":: j =", j, ":: corr=", val)
            ncc += 1

print()
print(ncc)
print()
print(li_ij)

We get 16 pairs:

i = 31  :: j = 188 :: corr= -0.5169590614268832
i = 68  :: j = 151 :: corr=  0.6354094560888554
i = 68  :: j = 177 :: corr= -0.5578352818543628
i = 68  :: j = 202 :: corr= -0.5487381785057351
i = 110 :: j = 188 :: corr=  0.5797971250208538
i = 118 :: j = 195 :: corr= -0.647196329744637
i = 151 :: j = 177 :: corr= -0.8085621658509928
i = 151 :: j = 202 :: corr= -0.7664405924287517
i = 151 :: j = 242 :: corr=  0.8231503928254471
i = 177 :: j = 202 :: corr=  0.7516815584868468
i = 177 :: j = 242 :: corr= -0.8460097558498094
i = 188 :: j = 210 :: corr=  0.5136571387916908
i = 188 :: j = 230 :: corr= -0.5621165900366926
i = 195 :: j = 242 :: corr=  0.5757354150766792
i = 202 :: j = 242 :: corr= -0.6955230633323528
i = 210 :: j = 230 :: corr= -0.5054635808381789

16

[(31, 188), (68, 151), (68, 177), (68, 202), (110, 188), (118, 195), (151, 177), (151, 202), (151, 242), (177, 202), (177, 242), (188, 210), (188, 230), (195, 242), (202, 242), (210, 230)]

You note, of course, that most of these are components which we already identified as the dominant ones for the orientation and lengths of our latent vectors. Below you see a plot of the number distributions for the values the most important components take:

Visualization of the correlations

It is instructive to look at plots which directly visualize the correlations. Again a code snippet:

import numpy as np
import matplotlib.pyplot as plt

num_per_row = 4
num_rows    = 4
num_examples = num_per_row * num_rows

li_centerx = []
li_centery = []
li_centerx.append(0.0)
li_centery.append(0.0)

#num of plots
n_plots = len(li_ij)
print("n_plots = ", n_plots)

plt.rcParams['figure.dpi'] = 96 
fig = plt.figure(figsize=(16, 16))
fig.subplots_adjust(hspace=0.2, wspace=0.2)

#special CelebA point 
n_spec_pt = 90415

# statistical vectors for b=4.0 
delta = 4.0
num_stat = 10
ay_delta_stat = np.random.uniform(-delta, delta, size = (num_stat,z_dim))

print("shape of ay_sel_pts : ", ay_sel_pts.shape)

n_pair = 0 
for j in range(num_rows): 
    if n_pair == n_plots:
        break
    offset = num_per_row * j
    # move through a row 
    for i in range(num_per_row): 
        if n_pair == n_plots:
            break
        j_c1 = li_ij[n_pair][0]
        j_c2 = li_ij[n_pair][1]
        li_c1 = []
        li_c2 = []
        for npl in range(0, num_pts): 
            #li_c1.append( z_points[npl][j_c1] )  
            #li_c2.append( z_points[npl][j_c2] )  
            li_c1.append( ay_sel_pts[j_c1][npl] )  
            li_c2.append( ay_sel_pts[j_c2][npl] )  
        
        # special CelebA point 
        li_spec_pt_c1=[]
        li_spec_pt_c2=[]
        li_spec_pt_c1.append( z_points[n_spec_pt][j_c1] )  
        li_spec_pt_c2.append( z_points[n_spec_pt][j_c2] )  
        
        # statistical vectors 
        li_stat_pt_c1=[]
        li_stat_pt_c2=[]
        for n_stat in range(0, num_stat):
            li_stat_pt_c1.append( ay_delta_stat[n_stat][j_c1] )  
            li_stat_pt_c2.append( ay_delta_stat[n_stat][j_c2] )  
        
        # plot 
        sp_names = [str(j_c1)+' - '+str(j_c2)]
        axc = fig.add_subplot(num_rows, num_per_row, offset + i +1)
        #axc.axis('off')
        axc.scatter(li_c1, li_c2, s=0.8 )
        axc.scatter(li_stat_pt_c1, li_stat_pt_c2, s=20, color="red", alpha=0.9 )
        axc.scatter(li_spec_pt_c1, li_spec_pt_c2, s=80, color="black" )
        axc.scatter(li_spec_pt_c1, li_spec_pt_c2, s=50, color="orange" )
        axc.scatter(li_centerx, li_centery, s=100, color="black" )
        axc.scatter(li_centerx, li_centery, s=60, color="yellow" )
        axc.legend(labels=sp_names, handletextpad=0.1)
        n_pair += 1 

        

The result is:

The (5000) blue dots show the component values of the randomly selected latent vectors for CelebA images. The yellow dot marks the origin of the latent space’s coordinate system. The red dots correspond to artificially created random vectors for b=4.0. The orange dot marks the values for one selected CelebA image. We also find indications of an ellipsoidal form of the z-point region for the CelebA dataset. But keep in mind that we are only looking at projections onto planes. Also watch the different scales along the two axes!

Interpretation

The plots clearly show some average correlation for the depicted latent vector components (and their related z-points). We also see that many of the artificially created vector components seem to lie within the blue cloud. This appears a bit strange as we had found in the last post that the radii of such vectors do not fit the CelebA vector distribution. But you have to remember that we only look at projections of the real z-points down to some selected 2D-planes within the multi-dimensional space. The location in particular projections does not tell you anything about the radius. In later sections I also show you plots where the red dots quite often fall outside the blue regions for other components.

I want to draw your attention to the fact that the origin seems to be located close to the border of the region marked by some components. At least in the present projection of the z-points to the 2D-planes. If we only had the plots above then the origin could also have a position outside the bulk of CelebA z-points. The plots confirm however what we said in the last post: The CelebA vector distribution has its center off the origin.

We also see an indication that the density of the z-points drops sharply towards most of the border regions. In the projections this is not so clear due to the sheer amount of points. See the plot below for only 500 randomly selected CelebA vectors and the plots in other sections below.

Border position of the origin with respect to the latent vector distribution for CelebA

Below you find a plot for 1000 randomly selected CelebA vectors, some special components and b=4.0. The components which I selected in this case are NOT the ones with the strongest correlations.

These plots again indicate that the latent space’s origin is located in a border region of the CelebA z-point distribution. But as mentioned above: We have to be careful regarding projection effects. But we also have the plot of all number distributions for the component values; see the last post for this. And there we saw that all the curves cover a range of values which includes the value 0.0. Together with the plots above this is actually conclusive: The origin is located in a border region of the latent z-point volume resulting from CelebA images after the training of our Autoencoder.

This fact also makes artificial vector distributions with a narrow spread around the origin, determined by b ≤ 2.0, a bit special. The reason is that in certain directions the component value may force the generated artificial z-point outside the border of the CelebA distribution. The range 1.0 < b < 2.0 had been found to be optimal for our special statistical distribution. The next plot shows red dots for b=1.5.

This does not look too bad for the selected components. So we may still hope that our statistical vectors may lead to reconstructed images by the Decoder which show human faces. But note: The plots are only projections and already one larger component value can be enough to put the z-point into a very thinly populated region outside the main volume of CelebA z-points.

Conclusion

The values for some of the components of the latent vectors which a trained convolutional AE’s Encoder creates for CelebA images are correlated. This is reflected in plots that show an orthogonal projection of the multi-dimensional z-point distribution onto planes spanned by two coordinate axes. Some other components also revealed that the origin of the latent space has a position close to a border region of the distribution. A lot of artificially created z-points, which we based on a special statistical vector distribution with constant probabilities for each of the independent component values, may therefore be located outside the main z-point distribution for CelebA. This might even be true for an optimal parameter b=1.5, which we found in our analysis in the last post.

We will have a closer look at the border topic in the next post:

Autoencoders and latent space fragmentation – IV – CelebA and statistical vector distributions in the surroundings of the latent space origin