Autoencoders and latent space fragmentation – X – a method to create suitable latent vectors for the generation of human face images

My present post series explores options to use a standard convolutional Autoencoder [AE] for the creation of images with human faces. The face generation should be based on random input to the AE’s Decoder. On our quest for a suitable method we have meanwhile learned a lot about other aspects of Autoencoders, vector distributions in multi-dimensional latent spaces and generative methods for our special case:

  • Methods to create statistical latent vectors [z-vectors] as input for the AE’s Decoder must be chosen carefully. Among other things: It is difficult to create a bunch of random vectors which cover wider areas in the vastness of a multidimensional space. So the z-vector creation must be adjusted to specific requirements.
  • After having been trained with CelebA images a convolutional AE fills a limited and coherent region in the latent space with z-points for the training images. This latent space region appears to be critical for successful image creation: Statistically generated z-vectors should point to this region. The core of the z-point distribution gets filled relatively densely.
  • A convolutional AE maps human face images onto an approximate multivariate normal distribution. This gives the inner core of the z-point distribution the structure of a multidimensional ellipsoid. The projections of this ellipsoid onto 2-dimensional coordinate planes show characteristic nested elliptic contour lines.
  • As the main axes of these ellipses were inclined at different angles towards the axes of the chosen coordinate planes, we concluded that linear correlations mark average dependencies between the z-vector components. Limiting conditions imposed by these correlations must also be fulfilled by z-vectors used as the Decoder’s input.

See previous posts in this series for more details. In particular, the last 2 posts

Autoencoders and latent space fragmentation – IX – PCA transformation of the z-point distribution for CelebA

Autoencoders and latent space fragmentation – VIII – approximation of the latent vector distribution by a multivariate normal distribution and ellipses

have shown that the density distribution for the z-points really exhibits elliptic contour lines in the original coordinate system of the latent space and (!) in the target coordinate system of a PCA transformation.

In this post we use our gathered knowledge: I present a first simple method to generate z-vectors which point to the latent space region filled by z-points for CelebA images. These z-vectors will fulfill the general and limiting elliptic conditions for their components.

Decomposing the full problem of latent vector generation into a sequence of 2-dimensional problems

The nice thing about multivariate Gaussian distributions with linear correlations between the vector components is the following: We can reduce the problem of choosing proper component values to a series of 2-dimensional restrictions. Firstly we can use characteristic properties of the Gaussian distribution for each component. And secondly we can use confidence ellipses in 2-dimensional coordinate planes to restrict the component values to allowed intervals.

Ellipses are easiest to handle when their axes are aligned with the axes of the coordinate system in which we describe them. So, let us assume that we know an affine transformation T to a new coordinate system which also has orthogonal axes and supports the following special transformation properties for a multivariate normal density distribution:

  1. T maps nested elliptic contour lines of the multidimensional density distribution and in particular confidence ellipses for component pairs in the original coordinate system to nested elliptic contours and confidence ellipses in the new coordinate system.
  2. T aligns the centers of the transformed ellipses with the origin of the new coordinate system.
  3. T aligns the main axes of the mapped ellipses with the axes of the new coordinate system.
  4. T is reversible.

How could we then use the transformed data for vector-creation?

In the new coordinate system, a contour ellipse in a chosen coordinate plane for the axes-indices (i, j) may have main diameters of size

d1 = 2 * a    and    d2 = 2 * b.

We can then select a random v_i value within the range [-fact * a, fact * a]:

-fact * a    <    v_i    <    fact * a

Here fact is a proper factor; it defines a confidence level in the new coordinate system. With the value of v_i fixed and b being the half-diameter in the orthogonal direction, the correlation condition for the z-point distribution says that the v_j value must fall into an interval [-c, c] defined by:

-c    <    v_j    <    c,
with c = fact * b * sqrt(1 – v_i**2 / (fact * a)**2)

But within these limits we can again choose the v_j-value freely. Below I use a simple random-function for a constant probability density to pick a value.
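
A minimal sketch of this two-step sampling for a single component pair, assuming a and b are the half-axes of the respective ellipse in the transformed coordinate system (the function name is illustrative only):

```python
import numpy as np

rng = np.random.default_rng()

def sample_pair(a, b, fact=0.8):
    """Pick (v_i, v_j) inside the axis-aligned ellipse with half-axes fact*a, fact*b."""
    # step 1: choose v_i with constant probability density in [-fact*a, fact*a]
    v_i = rng.uniform(-fact * a, fact * a)
    # step 2: the elliptic condition limits v_j to [-c, c]
    c = fact * b * np.sqrt(1.0 - v_i**2 / (fact * a)**2)
    v_j = rng.uniform(-c, c)
    return v_i, v_j
```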

However: It would not be enough to restrict the coordinates to the conditions of just one ellipse! The components of the created vectors must in parallel fulfill elliptic conditions for all of the possible pairs of vector-components. I.e. we may need to adapt the v_j values gained from the analysis of a first 2D-ellipse to further conditions of other ellipses and component pairs. This can be achieved by an iteration. For z_dim = 256 this involves a total of 32640 checks and possible value-adaptations to each and all of the allowed value ranges.

In addition: The order by which the component-pairs and their conditions are investigated must be randomized to get real statistical vector distributions.

Eventually the resulting vector components must be re-transformed into the original coordinate system of the latent space.

The ellipse for the “core’s boundary” in the original coordinate system will be defined by the chosen confidence level of the ellipsoidal normal distribution. We saw already that a confidence level of σ = 2.0 defines the transition to outer regions of the z-point density distribution quite well.

This all sounds manageable by relatively simple Python programs. But: Do we know a proper transformation T? Yes, we do: A PCA-transformation of the z-point density distribution has all the properties discussed above.

Using half maximum values after a PCA transformation of the z-point distribution

The last post proved that a PCA transformation maps ellipses onto ellipses for component pairs in the transformed PCA coordinate system. The advantage of the ellipses there is that their main axes are on average well aligned with the orthogonal PCA coordinate axes. Gaussians for the number density distribution per component are mapped to Gaussians for the new components in the transformed coordinate system. So, the basic idea for a proper z-vector generation is:

  1. Take the multivariate normal z-point distribution for the training images in the AE’s latent space.
  2. Apply a PCA analysis to diagonalize the correlation matrix and transform the z-vector components to the PCA coordinate system.
  3. Use the ellipses in coordinate planes of the PCA coordinate system to create random z-vector components fulfilling all required conditions there.
  4. Re-transform the resulting z-vector components into the original coordinate system of the latent space.
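
As a rough sketch (not my original code), these four steps could look like the following with sklearn’s PCA and numpy. Here z_points stands for the array of latent vectors of the CelebA training images; the standard deviations of the transformed components are used as a stand-in for the numerically determined half-widths, and the adaptation loop is a strongly simplified version of the iteration described above:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng()

# z_points: (N, 256) array of latent vectors for the CelebA training images (assumed given)
pca = PCA(n_components=256)                      # step 2: transformation to the PCA system
z_pca = pca.fit_transform(z_points)

sig  = z_pca.std(axis=0)                         # stand-in for the numerically determined half-widths
fact = 0.8                                       # confidence factor (see text)

def sample_z_vector(sig, fact, n_sweeps=3):
    """Step 3 (simplified): draw components within +/- fact*sig and iteratively
    adapt them until all pair-wise elliptic conditions are fulfilled."""
    n = len(sig)
    v = rng.uniform(-fact * sig, fact * sig)
    pairs = np.array([(i, j) for i in range(n) for j in range(i + 1, n)])
    for _ in range(n_sweeps):
        for i, j in rng.permutation(pairs):      # randomized order of the pair-wise checks
            c = fact * sig[j] * np.sqrt(max(0.0, 1.0 - v[i]**2 / (fact * sig[i])**2))
            v[j] = np.clip(v[j], -c, c)          # adapt v_j to the (i, j)-ellipse
    return v

v_pca  = np.array([sample_z_vector(sig, fact) for _ in range(20)])
v_orig = pca.inverse_transform(v_pca)            # step 4: back-transform to the latent space
```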

Point 3 in our method is covered by a numerical analysis of the Gaussians in the PCA-coordinate system. We determine the half-width numerically by analyzing the density distribution with the help of sampling intervals. This simple method has resolution limits related to the size of the sampling interval. This has consequences for PCA components with a small standard deviation. We saw already in the last posts that such distributions appear for higher PCA components at the lower end of the explained variance.
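
The numerical half-width determination might look like this simple sketch; the bin width plays the role of the sampling interval, and values stands for the transformed values of one PCA component:

```python
import numpy as np

def half_width(values, bin_width=0.5):
    """Approximate the full width at half maximum (FWHM) of the number density
    of one (transformed) vector component by simple interval sampling."""
    bins = np.arange(values.min(), values.max() + bin_width, bin_width)
    counts, edges = np.histogram(values, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    above = centers[counts >= counts.max() / 2.0]   # sampling points above half maximum
    return above.max() - above.min()                # resolution limited by bin_width
```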

Does the suggested method work?

The convolutional AE we work with was defined in previous posts with 4 Conv2D layers in the Encoder and 4 Conv2DTranspose layers in the Decoder. The number of latent space dimensions was z_dim = 256. The AE network was trained on CelebA images. I do not want to bore you with details of the code for the creation of z-vectors consistent with the resulting elliptic conditions. It is all standard. The PCA-transformation can e.g. be taken from the sklearn-package.

I have applied a constant probability density to choose a random value within the allowed ranges for the component values of the desired z-vectors in the PCA coordinate system. For the plots below I have used the most important 50 to 105 PCA components (out of 256). The plots include confidence ellipses on a level of σ = 2.2. I derived the confidence ellipses by directly evaluating the standard deviations of the transformed distribution data in all coordinate directions.

The first plot shows you such an ellipse for the coordinate plane corresponding to the first two, most important PCA components. The orange points mark 20 z-points defined by 20 randomly created z-vectors fulfilling all elliptic conditions. The plot contains 120,000 z-points for images out of the 170,000 CelebA pictures used during training.

Generated statistical vectors in the PCA coordinate system

For elliptic contour lines see the last post before the present one in this series. The next plot shows the same generated 20 z-vectors for other component-combinations among the first 20 of the most important PCA-components. The plots contain a selection of 60,000 z-points.

The outer z-points do not always indicate that we have elliptic contours in the denser core of the displayed 2-dimensional distributions. But see the last post for proofs that the inner core inside the red ellipse really displays elliptic contours. You see that all random vectors lie within the 2-σ-ellipses.

The next plot shows the generated z-vectors in the original coordinate system of the latent space. The component values were back-transformed from the PCA-system to the original coordinate system.

Generated statistical z-vectors after an inverse PCA transformation to the original coordinate system of the latent space

We get similar plots for other component pairs. And of course for other generated vectors.

Generated statistical z-vectors in the PCA coordinate system

Generated statistical z-vectors after an inverse PCA transformation to the original coordinate system of the latent space

Technically we have obviously achieved what we wanted: Our generated statistical vectors are distributed within the core of our multidimensional ellipsoid.

Note that this method fortunately works even when we use only a limited number of the PCA components. This is due to intricate properties of a PCA transformation which guarantee that a back-transformation puts the resulting points close to the original ones even when we omit less important PCA components. I cannot discuss the math details in this blog. You have to consult the scientific literature for this. An introduction is e.g. provided by https://arxiv.org/pdf/1404.1100.pdf.

For me this property of the PCA transformation was helpful when I ran into the resolution problem for a proper half-width of the Gaussians. Taking all 256 components led to errors, as the elliptic conditions for very narrow Gaussians were not properly defined and some of the created vectors left the allowed value ranges.

Resulting face images

Let us look at some results. First I want to remind you from where we started:

Failed trials with improper random z-vectors based on constant probability densities

A simple random generator used in the beginning was totally inapt to feed the AE’s Decoder with proper statistical z-vectors. And now – look at the following plots. They were produced for a varying number of PCA components between 50 and 120, 100,000 statistically selected z-points within a 3 σ-level for the PCA-transformation and various factors 0.6 < fact < 0.8 applied to a half-width corresponding to a confidence level of 2.35 σ:

In some cases – for a higher number of PCA components – we even see smaller details of the face images and a reasonable transition to some kind of hairdo. Please remember that z_dim = 256 is a pretty low number for the latent space to cover the encoding of face details. And celebrities as covered by CelebA use make-up ….

In case you think the above result is not noteworthy: Please remember that we talk about a simple standard Autoencoder and not about a Variational Autoencoder and neither about a transformer based Autoencoder. No fancy additions to cost functions or special layers. And whoever has read the very instructive book of D. Foster on “Generative Deep Learning” (1st edition, O’Reilly) may compare his images to mine. And I have used a lower resolution of the original images than D. Foster. Just to motivate people to look a bit deeper into properties of data distributions in latent spaces.

Conclusion and outlook

We have come a lot closer to our objective of using a standard minimal Autoencoder for generative purposes. On our way, we got a much deeper understanding of the vector-distribution a trained AE creates in its latent space for human face images.

The method presented in this post to create reasonable statistical z-vectors still has its limits and there is a lot of open space for improvements. Attentive readers may e.g. ask: Why did he not use confidence ellipses directly? And why not the ellipses found in the original coordinate system of the latent space? And what about micro-correlations? And are there clusters for certain properties such as hair color, sex, smiling, etc. in the multivariate z-point distribution in the AE’s latent space?

I will discuss these topics in further posts. In the meantime keep in mind that the basic point for turning a standard Autoencoder into a generative tool is to understand how it fills its latent space.

Note also that I myself have speculated in other posts of this blog that failures of using standard AEs for generative purposes may have their ultimate reason in the micro-structure of the z-point distribution. The present results render these previous ideas of mine plain wrong.

Links to previous posts of this series

Autoencoders and latent space fragmentation – IX – PCA transformation of the z-point distribution for CelebA

Autoencoders and latent space fragmentation – VIII – approximation of the latent vector distribution by a multivariate normal distribution and ellipses

Autoencoders and latent space fragmentation – VII – face images from statistical z-points within the latent space region of CelebA

Autoencoders and latent space fragmentation – VI – image creation from z-points along paths in selected coordinate planes of the latent space

Autoencoders and latent space fragmentation – V – reconstruction of human face images from simple statistical z-point-distributions?

Autoencoders and latent space fragmentation – IV – CelebA and statistical vector distributions in the surroundings of the latent space origin

Autoencoders and latent space fragmentation – III – correlations of latent vector components

Autoencoders and latent space fragmentation – II – number distributions of latent vector components

Autoencoders and latent space fragmentation – I – Encoder, Decoder, latent space

 

And before we forget it: Besides the Putler in the east there is also an extremist right-wing, semi-fascistic party in Germany at a record-high support level of 18% in the population. This is a party which wants to stop all sanctions against the Russian aggressor in the ongoing war in Ukraine. You see the pattern behind this? This party is presently gaining more supporters than the government-leading Social Democrats. So, there is more at stake at present in Europe than the war in Ukraine. We need to defend our democracies with all the means of democracies. And it is time to ask for more decisive legal action against a party which already is under observation of the German internal secret service.

 

Autoencoders and latent space fragmentation – IX – PCA transformation of the z-point distribution for CelebA

This series of posts is about standard convolutional Autoencoders [AEs]. We wish to use the creative ability of an AE’s Decoder for the creation of human face images. We trained a relatively simple example AE on the CelebA data set. For the generation of new human face images we wanted to feed the Decoder with randomly created z-vectors in the AE’s latent space. As arbitrary latent vectors did not deliver reasonable images, we had a look at properties of the z-point distribution created in the latent space after training. And we found that z-vectors with end-points within this particular region indeed gave us some first reasonable images.

But during the first posts we also had to learn that it requires substantial effort to produce “randomly” created z-vectors which point to the coherent and confined latent space region populated by z-points for CelebA images. In addition we found that we obviously must fulfill complex correlation conditions between the latent vector components. In the last posts we have, therefore, investigated the structure of the latent space region populated for CelebA images a bit more closely. See:

Autoencoders and latent space fragmentation – VIII – approximation of the latent vector distribution by a multivariate normal distribution and ellipses

Autoencoders and latent space fragmentation – VII – face images from statistical z-points within the latent space region of CelebA

This gave us a somewhat surprising insight:

For our special case of human face images the density distribution of the z-points in the multidimensional latent space roughly resembled a multivariate normal distribution. The variables of this distribution were, of course, the components of the latent z-vectors (reaching from the origin of the latent space coordinate system to the z-points). The number density distributions for the values of the z-vector components could very well be approximated by Gaussian functions. In addition we found strong linear correlations between the z-vector components. This was completely consistent with diagonally oriented elliptic contour lines of the number density distribution for pairs of z-vector components in the respective coordinate planes. The angles between the ellipses’ axes and the axes of the (orthogonal) coordinate system varied.

In this post we test my thesis of a multivariate normal distribution in a different and more complicated way. If the number density distribution of the z-points for CelebA really is close to a multivariate normal distribution then we would expect that the contour lines of the 2-dimensional number density function for coordinate planes still are ellipses after a PCA transformation. Note that the coordinate planes then are planes of a new coordinate system (with orthogonal axes) moved and rotated with respect to the original coordinate system. We call the transformed coordinate system the “PCA coordinate system“. (It is clear that the vector components must consistently be transformed with respect to the new coordinate axes.)

This test is much harder because asymmetries, imperfections and deviations from a normal distribution will become clearly visible. Nevertheless we shall see that the z-point distribution decomposes roughly in uncorrelated Gaussian number density (or probability) functions for the (transformed) components of the z-vectors – as expected.

Note that number density distributions can be interpreted as probability distributions after a proper normalization. To understand the results and implications of this post some basic knowledge about multivariate normal probability density distributions in multidimensional spaces is required. Transformation properties of such distributions are important. In particular the effect of a PCA or SVD transformation, which diagonalizes the Pearson correlation matrix, on a multivariate normal distribution, should be familiar: In the moved and rotated coordinate system original linear correlations between vector components disappear. The coordinate axes of the transformed coordinate system get aligned to the eigenvectors (and main axes) of the probability density distribution. In addition elliptic contour lines prevail and the main axes of the ellipses get aligned with the axes of the PCA coordinate system.

For valuable information in previous posts see the link list at the bottom of this article.

Imperfections in the multivariate normal distribution and their consequences for PCA

In our case the latent space had 256 dimensions. The real number density functions for the values of the (256) latent vector components did not always show the full symmetry of their Gaussian fits. Deviations appeared near the center and at the flanks of the individual distributions. Still we got convincing overall ellipses in contour plots of the probability density for selected pairs of components. But there were also relatively many points which lay in regions outside the 2 to 3 sigma confidence ellipses for these 2-dimensional distributions – and the distribution of the points there seemed to violate the elliptic symmetry.

A perfect multivariate normal distribution (with only linear correlations between its variables) decomposes into (seemingly) uncorrelated normal distributions for the new variables, namely the latent vector components in the transformed coordinate system of the latent space. The coordinate transformation corresponds to a diagonalization of the Pearson correlation matrix via finding orthogonal eigenvectors. So we should not only get ellipses after the PCA transformation. The main axes of the resulting ellipses should also be aligned with the coordinate axes of the new translated and rotated PCA coordinate system. (This is a standard feature of “uncorrelated” Gaussians).

Note that the de-correlation is a pure coordinate effect which does not eliminate the dependencies of the original variables. The new coordinates and the respective variables do not have the same meaning as the old ones. We “only” change perspectives such that the mathematical description of the multivariate distribution gets simpler. However, the ratios of the axes of the new ellipses depend on the axis-ratios of the old ellipses in the original standard coordinate system of the latent space.

When we translate and rotate a coordinate system to diagonalize the Pearson matrix of an imperfect multivariate normal distribution the coordinate axes may afterward not fully align with the main axes of the distribution’s inner core – and then we might get clearly visible asymmetries in z-point projections to coordinate planes. Note that points outside the inner core have a relatively large impact on the PCA transformation and especially on the rotation of the new axes with respect to the old ones.

Reduction of outer z-point contributions

Outer z-points often correspond to CelebA images with strong variations in the background. As these z-points have a large distance from the center of the distribution they have a heavy impact on a PCA transformation to eigenvectors of the Pearson matrix. Therefore, I eliminated some outer points from the distribution. I did this by deleting points with coordinate values smaller or bigger than a symmetric threshold in the outer flanks of the 256 Gaussian distributions – e.g.:

z_j < mu_j – fact * FWHM    or    z_j > mu_j + fact * FWHM.

With FWHM meaning the full width at half maximum and mu_j being the center of the approximate Gaussian for the j-th component of the z-vectors. I tried multiple values of fact between 1.25 and 1.7.

One has to be careful with such an elimination process. Even though we set our limits beyond the 2 σ-level of the individual Gaussians ahead of the transformation, a step-wise consecutive elimination for all components reduces the number of surviving vectors significantly for small values of fact. The plots below correspond to a value of fact = 1.7 (with one discussed exception). This reduced the number of available vectors from originally 170,000 used during the AE’s training to 160,000. From this distribution a random sub-selection of 80,000 z-points was taken to perform the PCA transformation. I call the resulting sample of z-points the “reduced distribution” below. The results did not change much for a sub-selection of more points. The second reduction saved me some CPU-time, however.
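
The consecutive elimination can be sketched as follows; z_points, mu and fwhm are assumed to hold the CelebA latent vectors, the per-component centers and the numerically determined full widths at half maximum:

```python
import numpy as np

def reduce_distribution(z_points, mu, fwhm, fact=1.7):
    """Consecutively drop z-points whose j-th component lies outside
    [mu_j - fact*FWHM_j, mu_j + fact*FWHM_j] for any component j."""
    mask = np.ones(len(z_points), dtype=bool)
    for j in range(z_points.shape[1]):
        mask &= np.abs(z_points[:, j] - mu[j]) <= fact * fwhm[j]
    return z_points[mask]

# z_red = reduce_distribution(z_points, mu, fwhm, fact=1.7)   # ~160,000 points remain
```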

PCA transformation of the z-point distribution – explained variance and cumulative importance

Below you see the explained variance ratio of the first 40 and the cumulative importance of the first 120 main components after a PCA transformation of the reduced z-point distribution. (The PCA-algorithm was based on the SVD-algorithm, as provided by the sklearn-package.)

We see that we need around 120 PCA components to explain 80% of the variance of the original z-point data. The contribution of the first 15 components is significant, but it is NOT really dominant. Also interesting is the step-wise decline of the contributions of the first 10 components in a not so smooth curve.

I have in addition checked that the normalized Pearson correlation coefficient matrix was perfectly diagonalized. All off-diagonal values were smaller than 2.e-7 (which is consistent with error propagation) and the diagonal elements were 1.0 with an accuracy better than eight digits after the decimal point. As it should be.
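
A sketch of how the explained variance ratios, the cumulative importance and the diagonalization check can be obtained with sklearn and numpy; z_red stands for the reduced distribution of 80,000 z-points:

```python
import numpy as np
from sklearn.decomposition import PCA

pca = PCA(n_components=256, svd_solver='full')   # SVD-based PCA as provided by sklearn
z_pca = pca.fit_transform(z_red)                 # z_red: the reduced (80,000 x 256) distribution

expl_var = pca.explained_variance_ratio_         # explained variance ratio per PCA component
cum_imp  = np.cumsum(expl_var)                   # cumulative importance
n_80     = np.argmax(cum_imp >= 0.80) + 1        # number of components needed for 80%

# check the diagonalization of the normalized Pearson correlation matrix
corr = np.corrcoef(z_pca, rowvar=False)
off_diag = corr - np.diag(np.diag(corr))
print(n_80, np.abs(off_diag).max())              # off-diagonal elements should be ~0
```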

Gaussians per PCA-vector component?

We expect Gaussian number density functions for each of the z-vector components after the PCA coordinate transformation. The following line plots show the number density distributions (colored line) and a related Gaussian fit (dashed lines) per named vector-components after PCA-transformation. The first plot gives an overview over 120 PCA components:

We see that all distributions look similar to Gaussians. In addition the center of the distributions for all components are very close to or coincide with the origin of the PCA coordinate system. This is something we would expect for an original multivariate normal distribution before the PCA transformation to a new moved and rotated coordinate system: An affine transformation between coordinate systems with orthogonal axes maps Gaussians to Gaussians.

Components 0 and 1:

Components 2 and 3:

Components 4 and 5:

Components 6 and 7:

Components 8 and 9:

We see from these plots that component 3 shows values at the distribution’s flanks well above the values of its Gaussian fit. This may lead to deformed ellipses. In addition component 7 shows a slight asymmetry – probably triggering deviations both at the center and in outer regions. Similar effects can be seen for other higher components – but not so pronounced. The chosen sampling interval (0.5) in the new coordinates did smear out small asymmetric wiggles in all distributions.

Ellipses?

Calculating approximate contour lines is CPU intensive. On my presently available laptop I had to focus on some selected examples for pairs of components of z-vectors in the transformed coordinate system. Experiments showed that the most critical components with respect to asymmetries were 2, 3, 7, 8.

Let us first look at combinations of the first 9 PCA components (0) with other components in the range between the 10th and 120th PCA component. Just scroll down to see more images:

The fatter lines represent confidence ellipses derived both from correlation data, i.e. elements of the normalized correlation coefficient matrix, and properties of the Gaussian distributions. See the last blog for details on the method of getting confidence ellipses from data of a probability distribution.

Ellipses for density distributions in coordinate planes for pairs of the most important PCA components

Before you think that everything is just perfect we should focus on the most important 10 PCA components and their mutual pair-wise density distributions. We expect trouble especially for combinations with the components 2, 3 and 7.

Component 0 vs other components < 10

We see already here that asymmetries in the density distributions which appear relatively small in the Gaussian-like curves per component leave their imprint on the elliptic curves. The contour lines are not always centered with respect to the overall confidence ellipses.

Component 1 vs other components < 10

Component 2 vs other components < 10

The plot for components 2 and 3 was based on fact = 1.35 instead of fact = 1.7. See a section below for more information.

Component 3 vs other components < 10

Component 4 vs other components < 10

Component 5 vs other components < 10

Component 6 vs other components < 10

Component 7 vs other components < 10

Component 8 vs other components < 10

Component 9 vs other components < 10

Ellipses for other selected component pairs

Comments

The z-point distribution is rather close to a multivariate normal distribution, but it is not perfect. The PCA transformation revealed clearly that the center of the overall z-point distribution does not completely coincide with the centers of the individual density distributions for the z-vector components. And not all individual curves are fully symmetrical. Some plots show the expected ellipses, but not fully centered. We also see signs of non-linear correlations as not all ellipses show a complete alignment with the axes of the PCA coordinate system.

The impact of outer z-points becomes obvious when we compare plots for the component pair (2, 3) for different values of fact. The plots are for fact = 1.35, 1.45, 1.55, 1.7 and fact = 1.8 (in this order from left to right and downward). (Regarding z-point elimination the fact-values correspond to 122,000, 138,000, 149,000, 159,000 and 164,000 remaining z-points.)

The flips in the left-right orientation between some of the plots can be ignored. They depend on random aspects of the transformations and plotting routines used.

We see that z-points (for some strange images) which have a large distance from the center of the distribution have a major impact on the central density distribution. 10% of the outer points influence the orientation of the central ellipses by their coordinates – although these outer points are not part of the inner core of the distribution.

BUT: Overall we find the effect we wanted to see. The elliptic contours are clearly visible and most of them are aligned with the axes of the PCA coordinate system and the main axes of the confidence ellipses for the z-point distribution in the PCA coordinate system. The inner core is aligned with the axes of the coordinate system even for the critical PCA components 2 and 3, when we omit between 12% and 25% of the points (outside a 3 σ-level).

This means: The PCA transformation confirms that an inner core of the z-point distribution is well described by a multivariate normal distribution – with linear correlations between the number density distributions for various components.

This is an interesting finding by itself.

Conclusion

The last and the present post of this series have shown that a convolutional AE maps human face images of the CelebA data set to a multivariate normal distribution in its multidimensional latent space. Although this is interesting by itself we must not forget our ultimate goal – namely to generate random z-vectors which should deliver us reasonable human face images by the AE’s Decoder. But the results of our analysis provide solid criteria to generate such vectors fulfilling complex correlation conditions.

We can now define statistical generation methods which restrict the components of our aspired random z-vectors such that the resulting z-points lie within regions surrounded by confidence ellipses of the multivariate normal distribution. One such method is the topic of the next post.

Autoencoders and latent space fragmentation – X – a method to create suitable latent vectors for the generation of human face images

Links to other posts of this series

Autoencoders and latent space fragmentation – VIII – approximation of the latent vector distribution by a multivariate normal distribution and ellipses

Autoencoders and latent space fragmentation – VII – face images from statistical z-points within the latent space region of CelebA

Autoencoders and latent space fragmentation – VI – image creation from z-points along paths in selected coordinate planes of the latent space

Autoencoders and latent space fragmentation – V – reconstruction of human face images from simple statistical z-point-distributions?

Autoencoders and latent space fragmentation – IV – CelebA and statistical vector distributions in the surroundings of the latent space origin

Autoencoders and latent space fragmentation – III – correlations of latent vector components

Autoencoders and latent space fragmentation – II – number distributions of latent vector components

Autoencoders and latent space fragmentation – I – Encoder, Decoder, latent space

 

Autoencoders and latent space fragmentation – VIII – approximation of the latent vector distribution by a multivariate normal distribution and ellipses

This post series is about creative abilities of convolutional Autoencoders [AE] which have been trained on a set of human face images. The objectives of this series and its numerical experiments are:

  • We want to create images with human faces from statistical z-vectors and related z-points in the AE’s latent space [z-space or LS]. Image creation will be done with the help of the AE’s Decoder after a training on the CelebA dataset.
  • We work with a standard Autoencoder, only. I.e., we do NOT add any artificial layers and cost terms to the Autoencoder’s layer structure (as it is done e.g. in Variational Autoencoders).
  • We analyze the position, shape and internal structure of the multidimensional z-vector distribution created by the AE’s Encoder after training. We assume that generated statistical z-vectors must point to respective regions of the latent space to guarantee images with reasonable content.
  • We raise the question whether simple statistical generator algorithms are sufficient to cover these regions with statistical z-vectors.

Our numerical experiments gave us some indications that such an endeavor is indeed feasible. In addition the third objective may give us some insight into the rules a trained AE follows when it encodes information about human faces into vectors of its latent space.

We have already studied the “natural” z-vector distribution created by a convolutional Autoencoder for CelebA images after a thorough training. The related z-point distribution fortunately filled just one confined and coherent off-center region of the AE’s latent space. Our experiments have furthermore shown that we must indeed restrict the statistical z-vector creation such that the vectors point to this particular region. Otherwise we will not get reasonable images. For details see the previous posts.

Autoencoders and latent space fragmentation – VII – face images from statistical z-points within the latent space region of CelebA
Autoencoders and latent space fragmentation – VI – image creation from z-points along paths in selected coordinate planes of the latent space
Autoencoders and latent space fragmentation – V – reconstruction of human face images from simple statistical z-point-distributions?
Autoencoders and latent space fragmentation – IV – CelebA and statistical vector distributions in the surroundings of the latent space origin
Autoencoders and latent space fragmentation – III – correlations of latent vector components
Autoencoders and latent space fragmentation – II – number distributions of latent vector components
Autoencoders and latent space fragmentation – I – Encoder, Decoder, latent space

The frustrating point so far was that simple methods for creating statistical vectors fail to put the end-points of the z-vectors into the relevant latent space region. In particular methods based on constant probability distributions within a common value interval for all z-vector components are doomed to miss the interesting region due to intricate mathematical reasons.

Afterward we tried to restrict the component values of test vectors to intervals defined by the shape of the number distribution for the values of each component of the CelebA related z-vectors. Such a distribution is nothing else than a one-dimensional probability density function for our special set of encoded CelebA samples: The function describes the probability that a component of a z-vector for human face images gets a value within a certain small value range. The probability distributions for all z-vector components were bell shaped and showed clear transitions to flat wings with very low values. See the plots below. This allowed us to define a value range

d_j_l   <   x_j   <   d_j_h

for each vector component x_j.
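
A crude sketch of this per-component restriction; the quantile levels standing in for the visually determined limits d_j_l and d_j_h are an assumption for illustration, and z_points is the array of CelebA latent vectors:

```python
import numpy as np

# z_points: (N, 256) array of latent vectors for the CelebA images (assumed given)
# crude stand-in for the visually determined interval limits d_j_l, d_j_h per component:
d_l = np.quantile(z_points, 0.005, axis=0)       # lower flank of the number distribution
d_h = np.quantile(z_points, 0.995, axis=0)       # upper flank of the number distribution

# naive statistical z-vector: each component drawn independently within its interval
rng = np.random.default_rng()
v = rng.uniform(d_l, d_h)                        # shape (256,) - ignores any correlations
```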

But keeping statistical values per component within the identified respective interval was not a sufficient restriction. We saw this clearly in the last post from significant irregular fluctuations in the reconstructed images. Obviously the components of statistically generated z-vectors must in addition fulfill correlation conditions.

The questions which I want to answer in this post are:

  • Can we approximate the 1-dimensional probability density functions for the z-vector components by some simple and common mathematical function?
  • What kind of correlations do we find between the components of the z-vectors encoding the information of human face images?
  • Can we derive some mathematical description of the multivariate z-vector distribution created by convolutional AEs for human face images in the AE’s multidimensional latent space?

Correlations are to be expected …

Please note: We deal with a multidimensional problem. A single latent vector encodes information about a human face image via all of its component values and by relations between these values. Regarding the purpose and the task an AE has to fulfill, it would be naive to assume that the components of our multi-dimensional z-vectors were independently organized. A z-vector encodes information for a convolutional Decoder to combine patterns detected by the Encoder and represented in neural (feature) maps of the networks to create an image. This is a subtle business. Just think about what you do when you draw a sketch of a human face. There are a lot of rules you follow.

When you think about the properties of basic feature patterns in a human face you would certainly assume that the pixel data of a corresponding image show strong correlations. This is among other things due to obvious symmetries – not excluding fluctuations of basic parameters describing human face features. But a nose tends to be at a position below the eyes and at a mid-distance between the eyes. In addition, fluctuations of face features would on average respect certain limits given by natural proportions of a face. It would therefore be unreasonable to assume that the input for a Decoder to create a superposition of elementary patterns consists of un-correlated data. Instead the patterns in the original data should not only lead to well adjusted weights in the convolutional networks’ feature maps, but also to well regulated structural elements in the data distribution in the target space of the information encoding, namely in the latent space.

If the relations of the vector components were of a complex, highly non-linear kind and involved many dimensions at the same time we might be lost. But the results we have gained so far indicate a proper common structure of at least the density function for the individual components. This gives us some hope that the multidimensional problem somehow involves well defined 1-dimensional constituents. Whether this is a sign that the multidimensional structure of the z-vector distribution can be decomposed into low-dimensional relations remains to be seen.

Observations regarding the z-vector distribution created by convolutional Autoencoders for human face images

Coordinate values of the z-points are identical to z-vector component values when we fix the end of each vector to the origin of the latent space coordinate system. The z-vector distribution thus directly corresponds to a z-point density distribution in the orthogonal coordinate system of the AE’s multi-dimensional LS. We have already made three interesting observations regarding these distributions:

  • The individual probability density function for a selected component of the latent vectors has a bell-shaped form. One, therefore, is tempted to think of a Gaussian function. This would indicate a possible normal distribution for the coordinate values of the z-points along each of the selected coordinate axes.
    Note: This does not exclude that the probability distributions for the components are correlated in some complex way.
  • When we plotted the projection of the z-point distribution onto 2-dimensional coordinate planes (for selected pairs of coordinate axes) then almost all of the resulting 2-dimensional density distributions seemed to have a defined core with an ellipsoidal form of its boundary.
  • For certain component- or axis-pairs the main axes of the apparent ellipses for the pair-wise density function appeared rotated against the coordinate axes. The elongated regular and more or less symmetric forms showed a diagonal orientation (with different angles). This alone signals a strong correlation between the two related vector components. Indeed we found high values for certain elements of the matrix of normalized Pearson correlation coefficients for the multi-dimensional distribution of z-vector component values.

These observations are not unrelated; they indicate a clear pattern of dependencies and correlations of the distributions for the variables in place. Regarding the data basis we have to keep five things in mind:

  • We treat the z-point distribution for CelebA images as a multi-dimensional probability density distribution. During the analysis we look in particular at 2-dimensional projections of this distribution onto planes spanned by a selected pair of orthogonal axes of the LS coordinate system. We also consider the one-dimensional value distributions for z-vector components. In this sense we regard the z-vector components as logically separate variables.
  • The data used are numbers of z-points counted in finite 1d-intervals, 2d-rectangles or multidimensional cuboids. We fit idealized functions to the respective discrete bar plots. Even if there is a good 1d-fit, fluctuations may become especially visible in multidimensional plots for correlated data. A related probability density requires a normalization. We drop the resulting constant factors in the qualitative discussions below.
  • Statistical (un-)correlation of statistical variable distributions must NOT be confused with underlying variable (in-)dependency. Linear correlations can be reduced to zero by coordinate transformations without eliminating the original variable dependencies.
  • Pearson correlation coefficients are sensitive to linear elements in the relations of logically separate variable distributions. They can not fully cover non-linear distribution relations or covered variable dependencies.
  • A transformation to a local coordinate system whose axes are aligned to the so called main axes of the multidimensional distributions does not remove the original data relations – but there may exist a coordinate system in which the distribution data can be described in a simple, factorized form corresponding to a composition of seemingly un-correlated data distributions.

Anyway – by discussing density distributions we work on overall and large scale average relations between statistical value distributions for our variables, namely the z-vector components. We do not cover local micro-relations that may be in place in addition.

The relation of ellipses with Gaussian probability densities

Probability density functions for two logically separate, but maybe not un-correlated variables have to be multiplied. In our case this reflects the following point: First we determine the probability that the value of component x_i lies in a certain (infinitesimal) interval and then we determine the probability that (for the given value of x_i) the component x_j falls into another value range. The distributions for a specific variable can include variable relations and thus the probability density g(x_j) can include a dependency g(x_j(x_i)).

In the case of uncorrelated normal distributions per coordinate we can just multiply the individual Gaussians g(x_i) * g(x_j). Due to the quadratic terms in the exponent of the Gaussians we then get a sum of quadratic expressions in the common exponent, having the form fac1 * (x_i-mu_i)**2 + fac2 * (x_j-mu_j)**2.
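
Written out with explicit factors (fac = 1 / (2 * sig**2)), the product of the two Gaussians reads:

g(x_i) * g(x_j)  ∝  exp[ – (x_i – mu_i)**2 / (2 * sig_i**2)  –  (x_j – mu_j)**2 / (2 * sig_j**2) ]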

By setting this expression to a constant value we get contour lines of the probability density distribution for the (x_i, x_j)-distribution. Quadratic sums correspond to the definition of an ellipse having main axes which are aligned with the x_i- and x_j-axes of the coordinate system. Thus the contour lines of a 2-dimensional distribution composed of un-correlated Gaussians are ellipses having an orientation aligned with the coordinate axes.

This was for un-correlated density distributions of two vector components. Mathematically a linear correlation between a pair of Gaussian distributions corresponds to an affine transformation of the contour ellipses. The transformation can be expressed by a defined sequence of matrix operations describing a translation, rotations (in a defined order) and a dilation.

This means: The contour lines for a 2-dimensional probability density composed of linearly correlated Gaussians are still ellipses. But these ellipses will appear to be shifted, rotated and stretched along the main axes in comparison with their originally un-correlated Gaussian counterparts. The angle of rotation depends on details of the correlation function and the original standard deviations. The Pearson correlation matrix for linearly correlated distributions is a positive-definite one and, of course, shows off-diagonal elements different from zero. This result can be extended to multivariate normal distributions in spaces with many dimensions and related affine transformations of the coordinate system.

A multivariate normal distribution with linear correlations between the Gaussians results in elliptic contour lines for pair-wise density distributions in the respective 2D-coordinate planes of an orthogonal coordinate system. When we define the contours via multiples of the standard deviations of the underlying Gaussian functions we arrive at so called confidence ellipses.

A really nice mathematical aspect is that the basic parameters of the confidence ellipses can be derived from the normalized correlation coefficients of the Pearson matrix of the multivariate probability distribution. I will come back to this point in forthcoming posts in more detail. For now we just need to know that a multidimensional probability density comes along with confidence ellipses which can be calculated with the help of Pearson correlation coefficients.
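
A minimal sketch of how such a confidence ellipse can be plotted from the Pearson coefficient and the standard deviations, following the approach of C. Schelp referenced in the link section at the end of this post; x and y stand for the value arrays of two selected z-vector components and ax for a matplotlib axis:

```python
import numpy as np
from matplotlib import transforms
from matplotlib.patches import Ellipse

def confidence_ellipse(x, y, ax, n_std=2.0, **kwargs):
    """Add the n_std confidence ellipse for the distributions of two components to an axis."""
    cov = np.cov(x, y)
    pearson = cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1])
    # unit ellipse of a normalized, correlated bivariate Gaussian
    rad_x, rad_y = np.sqrt(1 + pearson), np.sqrt(1 - pearson)
    ellipse = Ellipse((0, 0), width=2 * rad_x, height=2 * rad_y,
                      facecolor='none', **kwargs)
    # rotate by 45 deg, stretch by n_std * standard deviations, shift to the mean values
    transf = (transforms.Affine2D()
              .rotate_deg(45)
              .scale(n_std * np.sqrt(cov[0, 0]), n_std * np.sqrt(cov[1, 1]))
              .translate(np.mean(x), np.mean(y)))
    ellipse.set_transform(transf + ax.transData)
    return ax.add_patch(ellipse)

# usage: confidence_ellipse(z[:, i], z[:, j], ax, n_std=2.0, edgecolor='red')
```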

Before we go on a word of caution: For a general multi-variate distribution it is not at all clear that it should decompose into a factorized form. However, for a multivariate normal distribution with un-correlated or only linearly correlated components this is by definition different. In this case a transformation to a coordinate system can be found which leads to a complete decomposition into a product of (seemingly) un-correlated Gaussians per component. The latter point lies at the center of PCA and SVD algorithms, which diagonalize the Pearson correlation matrix.

Do we really have Gaussian probability distributions for the individual z-vector-components?

After this short tour into the world of (multi-variate) normal distributions, Gaussian functions and related ellipses we are a bit better equipped to understand the density distributions in the latent space of our Autoencoders for human face images.

Let me remind you about the shapes of the number distributions for our concrete z-vector components resulting for CelebA face images. The first plot shows the number densities on sampling intervals of width 0.25 for selected vector components resulting for case I of our experiments. The second plot shows the number densities for the values of selected components of case II.

Ok, these curves do resemble Gaussians and some fluctuations are normal. But can we prove the Gaussian properties of the curves a bit better?

Well, for case II I have drawn the best fits by Gaussian functions with the help of SciPy’s optimize.curve_fit() for 3 and yet another 4 selected components of the latent vectors and the respective number distribution curves. The dashed lines show the approximations by Gaussian functions:

The selected components are part of the list of around 20 dominant component distributions – due to their relatively large standard deviations. But the Gaussian form is consistently found for all components (with some small deviations regarding the symmetry of the curves).
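
Such a fit for a single component can be sketched like this; vals is assumed to hold the values of one z-vector component for all CelebA images, and the 0.25 bin width matches the sampling intervals of the plots above:

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((x - mu) / sigma)**2)

# number distribution of one z-vector component on sampling intervals of width 0.25
counts, edges = np.histogram(vals, bins=np.arange(vals.min(), vals.max() + 0.25, 0.25))
centers = 0.5 * (edges[:-1] + edges[1:])

# best Gaussian fit to the discrete number distribution
p0 = [counts.max(), centers[np.argmax(counts)], vals.std()]
(amp, mu, sigma), _ = curve_fit(gauss, centers, counts, p0=p0)
```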

So all in all it looks as if our convolutional AE has indeed created a multivariate normal z-point distribution in the latent space. As said: This does not exclude correlations …

Pairwise linear correlations of the (normal) probability distributions for the latent vector components?

Now we are a bit bold – and assume the best case for us: Could the approximate Gaussian distributions for the component values be pair-wise and linearly correlated? What would be a clear indication of a pair-wise linear correlation of our component distributions?

Well, we should find an elliptic form of contour lines in the 2-dimensional distribution for the component pair in the respective coordinate plane of the basic orthogonal LS coordinate system. This imposes quite strong symmetry conditions on the contour lines. The ellipses can be shifted and rotated – but they should remain being ellipses. If non-linear contributions to the correlation had a significant impact this would not be the case.

Practically it is not trivial to prove that we have approximately rotated ellipses in 2 dimensions. Scatter plots alone do not help: Ellipses fit a lot of plotted distributions of discrete data points quite well. We really need to count number densities to get reliable contour lines. The following plots show such contour lines based on number sampling in rectangles and local smoothing operations with the help of scipy.stats.gaussian_kde().

The fat red and dark orange lines show corresponding confidence ellipses derived from the original CelebA distribution. See below for some remarks on confidence ellipses.

The contours are basically of elliptic shape although they do not show the complete symmetry expected for pure and linearly dependent Gaussian distributions. But overall the confidence ellipses fit quite well into the general form and orientation of the distributions. We also see that for higher σ-levels the coincidence with nearby contours is quite good. The wiggles in the contour change with the z-vector selection a bit.
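
The counting and smoothing step behind these contour plots can be sketched as follows for one component pair (i, j); z stands for the array of latent vectors (or a sub-selection of it), ax for a matplotlib axis, and the grid resolution is an assumption:

```python
import numpy as np
from scipy.stats import gaussian_kde

def density_contours(z, i, j, ax, grid_size=100, levels=8):
    """Smoothed number density contours of the z-point projection onto the (i, j)-plane."""
    xi, xj = z[:, i], z[:, j]
    kde = gaussian_kde(np.vstack([xi, xj]))            # local smoothing of the point counts
    xg = np.linspace(xi.min(), xi.max(), grid_size)
    yg = np.linspace(xj.min(), xj.max(), grid_size)
    X, Y = np.meshgrid(xg, yg)
    Z = kde(np.vstack([X.ravel(), Y.ravel()])).reshape(X.shape)
    ax.contour(X, Y, Z, levels=levels)
    ax.scatter(xi, xj, s=0.2, alpha=0.2)
```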

We conclude that our basic impression regarding an elliptic shape of the z-point distributions is basically consistent with only linearly correlated Gaussian probability density distributions for the component values of the latent vectors.

Approximation of the core of the multivariate z-point distribution by confidence ellipses for component pairs

Above I referred to the boundary of a core of the probability density for two selected vector components. But how would we define the “boundary” of a continuous distribution in the coordinate planes? Answer: As we like – but based on the decline of the approximate Gaussian curves.

We can e.g. pick two times the half-width in each direction or we can use contours defined by confidence levels.
For confidence levels between 2 σ and 3 σ we saw already that the contour lines could well be fitted by confidence ellipses. A 3 σ-level ellipse covers around 97% of all data points or more. A 2 σ-level ellipse encircles between 70% and 90% of all data points, depending on the eccentricity of the ellipse. Note that the numbers are smaller for ellipses than for rectangles. I.e. the standard 68-95-99.7 rule does not apply.

The plots below give you an impression of how well ellipses at a given confidence level approximate the core of the CelebA distribution in selected 2D coordinate planes of the latent space:

Each of the sub-plots was based on 10,000 statistically selected vectors of the 170,000 available in my test runs. This is a relatively low number. Therefore, for a certain diameter of the points in the scatter plot only the inner core appears to be densely populated. The next plot shows the results for a 3 σ-level of the ellipse – but this time for 50,000 vectors. With more vectors we could visually fill the outer regions of the core.

The orange points mark the center of the multidimensional distributions derived from the one-dimensional distribution curves for the components. We see that it does not always appear to be optimally centered. There are multiple reasons: Our functions are not fully symmetric as ideal Gaussians. And equally important: The accuracy of the position depends on the sampling resolution which was coarse. Outliers of the distribution do have an impact.

And how would we explain the appearance of Gaussians and ellipses?

This all looks quite good, despite some notable deviations regarding symmetry and maxima. Gaussians fit at least most of the important probability density curves very well, though not by a 100%. The appearance of an elliptic shape of the inner core of the distribution and the appearance of overall elliptic contour curves can be explained by linear correlations of the Gaussian distributions for the components.

The appearance of normal distributions per component and basically linear correlations is something that really should be explained. I mean, dwell a bit on what we have found:

A convolutional Autoencoder network with more than 10 million adjustable parameters encoded information about human face images in the form of a roughly multivariate normal distribution of z-points in its latent space – with basically linear correlations between the Gaussian curves describing the probability densities functions for the component values of the z-vectors.

I find this astonishing and not at all self-evident. It is one of the most simple solutions for a multidimensional situation one can imagine. The following questions automatically came to my mind:

Does such a result only appear for training images of defined objects with some Gaussian variation in their features? Are the normal distributions a reflection of variations of relevant features in the original data?
Is this a typical result for (convolutional) AEs? How does it depend on the dimensionality of the latent space? Does it automatically come with a large number of z-space dimensions? Is it an efficient way to encode feature differences in the latent space, which (convolutional) AEs in general tend to use due to their structure?

Do I personally have a convincing explanation? No. Especially not, as the data shown above stem from convolutional neural networks [CNNs] without any batch-normalization layers.

A first idea would be that the dominant features of a human face themselves show variations described by Gaussian normal distributions already in the original data and that convolutional filtering does not destroy such distributions during optimization. A problem of this idea lies in the (non-) linear activation functions used at the nodes of the neural maps. Though ReLU, Leaky ReLU and SeLU contain linear parts.

The other problem is the linear form of the correlations. This is a rather simple kind of correlation. But why should an AE choose this simple form for its mapping of image information to latent space vectors after training?

How to generate statistical vectors for the creation of human face images?

The positive message which comes with the above results is that our problem of how to create proper statistical z-vectors decomposes into a sequence of two-dimensional problems. We can use the data of the ellipses appearing in the density-distributions for pairs of vector components to confine the components of statistically generated z-vectors to the relevant region in the latent space. All ellipses together restrict the component values in a well defined form. In the next post I will shortly outline some methods of how we can use the information contained in the ellipses with available algorithms.

Conclusion

In this post we have seen that for the case of a convolutional Autoencoder trained on CelebA human face images the latent vector distributions showed some remarkable properties:

  • The probability density functions for all component values can roughly be approximated by Gaussian functions.
  • The components appear to be pairwise linearly correlated – at least in a first order analysis. This automatically implies elliptic contour curves for the pairwise number density functions of the coordinate values. Such contour curves were indeed found with first order accuracy.
  • The core of the probability density for the z-points in the latent space could therefore be approximated by confidence ellipses for a σ-level above σ = 2.5.
  • The elliptic conditions correspond to a multivariate normal distribution with linear correlations of the variables.

Before we get too enthusiastic about these findings we should be careful and await a further test. All statements refer to a first order approximation. A real multivariate normal distribution would, after a PCA transformation, decompose into uncorrelated Gaussians – with the 2D probability densities of component pairs then showing ellipses aligned with the new coordinate axes.
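As a quick plausibility check (my own illustration, not data from this series), one can verify numerically that a PCA transformation – here performed via an eigendecomposition of the covariance matrix – decorrelates a linearly correlated Gaussian sample; the covariance matrix becomes diagonal in the PCA target coordinate system:

import numpy as np

rng = np.random.default_rng(42)

# a synthetic 2D Gaussian sample with a strong linear correlation
cov = np.array([[4.0, 2.4],
                [2.4, 3.0]])
sample = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=20000)

# PCA = eigendecomposition of the covariance matrix of the centered data
centered = sample - sample.mean(axis=0)
eig_vals, eig_vecs = np.linalg.eigh(np.cov(centered, rowvar=False))
transformed = centered @ eig_vecs      # coordinates in the PCA target system

# the off-diagonal covariance of the transformed data is (numerically) zero
print(np.round(np.cov(transformed, rowvar=False), 3))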

In the next post

Autoencoders and latent space fragmentation – IX – PCA transformation of the z-point distribution for CelebA

I shall present the results of a PCA analysis. In later posts I will introduce a related method to restrict the components of statistical vectors to the relevant region in the latent space of our Autoencoder.

Links and literature

At first sight my short description of the relation between multivariate Gaussian normal distributions and ellipses as the contour lines of the projected density distributions on coordinate planes may appear plausible. But in the general multi-dimensional case the question of linear correlations requires some more math than indicated. For details I refer to some articles on the Internet – but any good book on multivariate analysis will give you the relevant information:
https://de.wikipedia.org/wiki/Multivariate_Normalverteilung
https://de.wikipedia.org/wiki/Mehrdimensionale_Normalverteilung
http://www.mi.uni-koeln.de/~jeisenbe/Vortrag2.pdf
https://methodenlehre.uni-mainz.de/files/2019/06/Multivariate-Distanz-Normalverteilung-MDC-Bayes.pdf
https://en.wikipedia.org/wiki/Multivariate_normal_distribution
https://en.wikipedia.org/wiki/Confidence_region
https://users.cs.utah.edu/~tch/CS6640F2020/resources/How to draw a covariance error ellipse.pdf
https://biotoolbox.binghamton.edu/Multivariate Methods/Multivariate Tools and Background/pdf files/MTB%20070.pdf

Regarding the intimate relation between the ellipses’ main axes and normalized Pearson correlation coefficients I also refer to
https://carstenschelp.github.io/2018/09/14/Plot_Confidence_Ellipse_001.html
I am very grateful that the author Carsten Schelp saved me a lot of time when I was trying to find a way to program a solution for confidence ellipses. Thank you, Mr. Schelp, for the great work.

 

Autoencoders, latent space and the curse of high dimensionality – I

Recently, I had to give a presentation about standard Autoencoders (AEs) and related use cases. Whilst preparing examples I stumbled across a well-known problem: The AE solved tasks like reconstructing faces hidden in extremely noisy or leaky input images perfectly. But the reconstruction of human faces from arbitrarily chosen points in the so called “latent space” of a standard Autoencoder did not work well.

In this series of posts I want to discuss this problem a bit, as it illustrates why we need Variational Autoencoders for a systematic creation of faces with varying features from points and clusters in the latent space. But the problem also raises some fundamental and interesting questions

  • about a certain “blindness” of neural networks during training in general, and
  • about the way we save or conserve the knowledge which a neural network has gained about patterns in input data during training.

This post requires experience with the architecture and principles of Autoencoders.

Note, 02/14/2023: I have revised and edited this post to get consistent with new insights from extended experiments with AEs and VAEs.

Standard tasks for conventional Autoencoders

For preparing my talk I worked with relatively simple Autoencoders. I used Convolutional Neural Networks [CNNs] with just 4 convolutional layers to create the Encoder and Decoder parts of the Autoencoder. As typical applications I chose the following:

  • Effective image compression and reconstruction by using a latent space of relatively low dimensionality. The trained AEs were able to compress input images into latent vectors with only a few components and reconstruct the original image from the compressed format.
  • Denoising of images where the original data were obscured by the superposition of statistical noise and/or statistically dropped pixels. (This is my favorite task for AEs which they solve astonishingly well.)
  • Recolorization of images: The trained AE in this case transforms images with only gray pixels into colorful images.

Such challenges for AEs are discussed in standard ML literature. In a first approach I applied my Autoencoders to the usual MNIST and Fashion MNIST datasets. For the task of recolorization I used the Cifar 10 dataset. But a bit later I turned to the Celeb A dataset with images of celebrity faces. Just to make all of the tasks a bit more challenging.

Standard Autoencoders and low dimensions of the latent space for (Fashion) MNIST and Cifar10 data

My Autoencoders excelled in all the tasks named above – for MNIST, CELEB A and, regarding recolorization, CIFAR 10.

Regarding MNIST and Fashion MNIST, 4-layer CNNs for the Encoder and Decoder are almost an overkill. For MNIST the dimension z_dim of the latent space can be chosen to be pretty small:

z_dim = 12 gives a really good reconstruction quality of (test) images compressed to minimum information in the latent space. z_dim = 4 still gave an acceptable quality, and even with z_dim = 2 most test images were reconstructed well enough. The same was true for the reconstruction of images superimposed with heavy statistical noise – such that the human eye could no longer guess the original information. For Fashion MNIST a dimension number 20 < z_dim < 40 gave good results. Also for recolorization the results were very plausible. I shall present these results in other blog posts in the future.

Face reconstructions of (noisy) Celeb A images require a relatively high dimension of the latent space

Then I turned to the Celeb A dataset. By the way: I got interested in Celeb A when reading the books of David Foster on “Generative Deep Learning” and of Tariq Rashid, “Make Your First GAN with PyTorch” (see the complete references in the last section of this post).

The Celeb A dataset contains images of around 200,000 faces with varying contours, hairdos and very different, inhomogeneous backgrounds. And the faces are displayed from very different viewing angles.

For a good performance of image reconstruction in all of the named use cases one needs to raise the number of dimensions of the latent space significantly. Instead of 12 dimensions of the latent space as for MNIST we now talk about 200 up to 1200 dimensions for CELEB A – depending on the task the AE gets trained for and, of course, on the quality expectations. For reconstruction of normal images and for the reconstruction of clear images from noisy input images higher numbers of dimensions z_dim ≥ 512 gave visibly better results.

Actually, the impressive quality of the reconstruction of test images of faces which were almost totally obscured by superimposed statistical noise or by the statistical removal of pixels – after a self-supervised training on around 100,000 images – surprised me. (Totalitarian states and security agencies certainly are happy about the superb face reconstruction capabilities of even simple AEs.) Part of the explanation, of course, is that 20% un-obscured or un-blurred pixels out of 30,000 pixels still means 6,000 clear pixels. Obviously enough for the AE to choose the right pattern superposition to compose a plausible clear image.

Note that we are not talking about overfitting here – the Autoencoder handled test images, i.e. images which it had never seen before, very well. AEs based on CNNs just seem to extract and use patterns characteristic for faces extremely effectively.

But how is the target space of the Encoder, i.e. the latent space, filled for Celeb A data? Do all points in the latent space give us images with well recognizable faces in the end?

Face reconstruction after a training based on Celeb A images

To answer the last question I trained an AE with 100,000 images of Celeb A for the reconstruction task named above. The dimension of the latent space was chosen to be z_dim = 200 for the results presented below. (Actually, I used a VAE with a tiny amount of KL loss – scaled down by a factor of 1.e-6 relative to the standard Binary Cross-Entropy loss for reconstruction – to get at least a minimum confinement of the z-points in the latent space. But the results are basically similar to those of a pure AE.)

My somewhat reworked and centered Celeb A images had a dimension of 96×96 pixels. So the original feature space had a number of dimensions of 27,648 (almost 30,000). The challenge was to reproduce the original images from latent data points created from test images presented to the Encoder. To be more precise:

After a certain number of training epochs we feed the Encoder (with fixed weights) with test images the AE has never seen before. Then we get the components of the vectors from the origin to the resulting points in the latent space (z-points). After feeding these data into the Decoder we expect the reproduction of images close to the test input images.
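Technically this round trip is just two predict() calls on the Keras sub-models. A minimal sketch, assuming a trained Autoencoder object ae with functional sub-models ae.encoder and ae.decoder, and a batch x_test of preprocessed 96x96x3 images scaled to [0, 1] (all names are my own placeholders, not code from this post; if the Encoder were built with multiple outputs, one would take its first output instead):

import numpy as np

# x_test: numpy array of shape (n_images, 96, 96, 3), pixel values scaled to [0, 1]
z_points = ae.encoder.predict(x_test)     # latent vectors, shape (n_images, z_dim)
x_reco   = ae.decoder.predict(z_points)   # reconstructed images, same shape as x_test

# a simple per-image reconstruction error (mean squared error over all pixels)
mse = np.mean((x_test - x_reco) ** 2, axis=(1, 2, 3))
print("mean reconstruction MSE:", mse.mean())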

With a balanced training controlled by an Adam optimizer I already got a good resemblance after 10 epochs. The reproduction got better and very acceptable also with respect to tiny details after 25 epochs for my AE. Due to possible copyright and personal rights violations I do not dare to present the results for general Celeb A images in a public blog. But you can write me a mail if you are interested.

Most of the data points in the latent space were created in a region of 0 < |x_i| < 20 with x_i meaning one of the vector components of a z-point in the latent space. I will provide more data on the z-point distribution produced by the Encoder in later posts of this mini-series.

Face reconstruction from randomly chosen points in the latent space

Then I selected arbitrary data points in the latent space with randomly chosen and uniformly distributed components 0 < |x_i| < boundary. The values for boundary were systematically enlarged.

Note that most of the resulting points will have a tendency to be located in outer regions of the multidimensional cube with an extension in each direction given by boundary. This is due to the big chance that one of the components will get a relatively high value.
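A quick numerical check (my own illustration) makes this tendency obvious: for uniformly drawn components the largest |x_i| of a point – and with it its distance from the origin – almost always ends up close to the chosen boundary once the number of dimensions is large:

import numpy as np

rng = np.random.default_rng(0)
boundary, z_dim, n_points = 10.0, 200, 10000

# components drawn uniformly from [-boundary, boundary]
z = rng.uniform(-boundary, boundary, size=(n_points, z_dim))

max_abs = np.abs(z).max(axis=1)        # largest |x_i| per point
radius  = np.linalg.norm(z, axis=1)    # Euclidean distance from the origin

print("mean max |x_i|:", max_abs.mean())   # close to the boundary (around 9.95)
print("mean radius   :", radius.mean())    # around boundary * sqrt(z_dim / 3), i.e. ~ 81.6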

Then I fed these arbitrary z-points into the Decoder. Below you see the results after 10 training epochs of the AE; I selected only 10 of 100 data points created for each value of boundary (the images all look more or less the same regarding the absence or blurring of clear face contours):

boundary = 0.5

boundary = 2.5

boundary = 5.0

boundary = 8.0

boundary = 10.0

boundary = 15.0

boundary = 20.0

boundary = 30.0

boundary = 50

This is more a collection of face hallucinations than of usable face images. (Interesting for artists, maybe? Seriously meant …).

So, most of the points in the latent space of an Autoencoder do NOT represent reasonable faces. Sometimes our random selection came close to a region in latent space where the results do resemble a face. See e.g. the central image for boundary = 10.

From the images above it becomes clear that some arbitrary path inside the latent space will contain more points which do NOT give you a reasonable face reproduction than points that result in plausible face images – despite a successful training of the Autoencoder.

This result supports the impression that the latent space of well trained Autoencoders is almost unusable for creative purposes. It also raises the interesting question of what the distribution of “meaningful points” in the latent space really looks like. I do not know whether this has been investigated in depth at all. Some links to publications which prove a certain scientific interest in this question are given in the last section of this post.

I also want to comment on an article published in the Quanta Magazine lately. See “Self-Taught AI Shows Similarities to How the Brain Works”. This article refers to “masked” Autoencoders and self-supervised learning. Reconstructing masked images, i.e. images with a superimposed mask hiding or blurring pixels, indeed works very well with a reasonably equipped Autoencoder. Regarding this point I totally agree – also with the term “self-supervised learning”.

But to suggest that an Autoencoder with this (rather basic) capability reflects methods of the human brain is in my opinion a massive exaggeration. On the contrary, in my opinion an AE reflects a dumbness regarding the storage and usage of otherwise well extracted feature patterns. This is due to its construction and the nature of its mapping of image contents to the latent space. A child can, after some teaching, draw characteristic features of human faces – out of nothing on a plain white piece of paper. The Decoder part of a standard Autoencoder (in some contrast to a GAN) can not – at least not without help to pick a meaningful point in latent space. And this difference is a major one, in my opinion.

A first interpretation – the curse of many dimensions of the latent space

I think the reason why arbitrary points in the multi-dimensional latent space cannot be mapped to images with recognizable faces is yet another effect of the so called “curse of high dimensionality”. But this time also related to the latent space.

A normal Autoencoder (i.e. one without the Kullback-Leibler loss) uses the latent space in its vast extension to produce points where typical properties (features) of faces and background are encoded in a most unique way for each of the input pictures. But the distinct volume filled by such points is a pretty small one – compared to the extensions of the high dimensional latent space. The volume of data points resulting from a mapping-transformation of arbitrary points in the original feature space to points of the latent space is of course much bigger than the volume of points which correspond to images showing typical human faces.

This is due to the fact that there are many more images with arbitrary pixel values already in the original feature space of the input images (with, let's say, 30,000 dimensions for 100×100 color pixels) than images with reasonable values for faces in front of some background. The region of points in the feature space which corresponds to reasonable images of faces (right colors and dominant pixel values for face features) is certainly small compared to the extension of the original feature space. Therefore: If you pick a random point in latent space – even within a confined (but multidimensional) volume around the origin – the chance that this point lies outside the particular volume of points which make sense regarding face reproduction is big. I guess that for z_dim > 200 the probability is pretty close to 1.

In addition: As the mapping of a neural Encoder network such as a CNN is highly non-linear, it is difficult to say what the boundary hypersurfaces of the mapping regions for faces look like. Complicated, probably – but due to the enormous number of original images with arbitrary pixel values we can safely guess that they enclose a rather small volume.

The manifold of data points in the z-space giving us recognizable faces in front of a reasonably separated background may follow a curved and wiggly “path” through the latent space. In principle there could even be isolated, unconnected regions separated by areas of “chaotic reconstructions”.

I think this kind of argumentation line holds for standard Autoencoders and variational Autoencoders with a very small KL loss in comparison to the reconstruction loss (BCE (binary cross-entropy) or MSE).

Why do Variational Autoencoders [VAEs] help?

The first point is: VAEs reduce the total occupied volume of the latent space. Due to the mu-related term in the Kullback-Leibler loss the whole distribution of z-points gets condensed into a limited volume around the origin of the latent space.

The second reason is that the distribution of meaningful points is smeared out by the logvar-related term of the Kullback-Leibler loss.
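For reference, the standard Kullback-Leibler loss of a VAE against a standard normal prior reads (per sample, summed over the latent dimensions, with a scaling factor fact):

KL = -0.5 * fact * Σ_j ( 1 + log_var_j - mu_j² - exp(log_var_j) )

The mu_j²-term is what condenses the z-point centers around the origin, while the log_var_j-terms drive the per-sample variances towards 1 and thereby smear the sub-distributions out. The same expression appears in the code of my VAE implementation in the series on the technical realization of VAEs in this blog.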

Both effects enforce overlapping regions of meaningful, standard Gaussian-like z-point distributions in the latent space. So VAEs significantly increase the probability to hit a meaningful z-point in latent space – if you choose points around the origin within a distance of “1” per coordinate (or vector component).

The total distance of a point (i.e. the length of its vector) in z-space has to be measured with some norm, e.g. the Euclidean one. Actually we should get meaningful reconstructions around a multidimensional sphere of radius “16”. Why this is reasonable will be discussed in forthcoming posts.
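As a hint (a quick numerical check of my own, not part of the original argument): vectors drawn from a multidimensional standard normal distribution concentrate their Euclidean length tightly around sqrt(z_dim), so for latent space dimensions in the low hundreds the typical radius indeed lies in the range of roughly 14 to 16:

import numpy as np

rng = np.random.default_rng(1)
for z_dim in (200, 256):
    z = rng.standard_normal(size=(10000, z_dim))
    radii = np.linalg.norm(z, axis=1)
    # the radii cluster tightly around sqrt(z_dim)
    print(z_dim, round(np.sqrt(z_dim), 2), round(radii.mean(), 2), round(radii.std(), 2))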

Please, also look at the series on the technical realization of VAEs in this blog. The last posts there prove the effects of the KL-loss experimentally for Celeb A data. Below you find a selection of images created from randomly chosen points in the latent space of a Variational Autoencoder with z_dim=200 after 10 epochs.

Conclusion

Enough for today. Whilst standard Autoencoders solve certain tasks very well, they seem to produce very specific data distributions in the latent space for CelebA images: Only certain regions seem to be suitable for the reconstruction of “meaningful” images with human faces.

This problem may have its origin already in the feature space of the original images. Also there, only a small minority of points represents humanly interpretable face images. This becomes obvious when you look at the vast number of possible pixel value combinations in a feature space of, let's say, 96x96x3 = 27,648 dimensions. Each of these dimensions can take a value between 0 and 255, which gives us 256^27,648 possible combinations – an astronomically large number. Only a tiny fraction of these possible images will show reasonable faces in the center with a reasonably structured background around.
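Just to put a number to this arithmetic (my own side note): the count of distinct images in such a feature space is 256 raised to the power of the number of dimensions – a number with tens of thousands of decimal digits:

import math

n_dim = 96 * 96 * 3                       # 27,648 dimensions (pixels x color channels)
digits = int(n_dim * math.log10(256)) + 1 # number of decimal digits of 256**n_dim
print(digits)                             # -> 66584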

From a first experiment the chance of hitting a data point in latent space which gives you a meaningful image seems to be small. This result appears to be a variant of the curse of high dimensionality – this time including the latent space.

In a forthcoming post
Autoencoders, latent space and the curse of high dimensionality – II – a view on fragments and filaments of the latent space for CelebA images
we will investigate the z-point distribution in latent space with a variety of tools. And find that this distribution is fragmented and that the z-points for CelebA images are arranged in certain regions of the latent space. In addition we will get indications that the distribution contains filament-like structures.

Links

https://towardsdatascience.com/exploring-the-latent-space-of-your-convnet-classifier-b6eb862e9e55

Felix Leeb, Stefan Bauer, Michel Besserve, Bernhard Schölkopf, “Exploring the Latent Space of Autoencoders with Interventional Assays”, 2022,
https://arxiv.org/abs/2106.16091v2 // https://arxiv.org/pdf/2106.16091.pdf
https://wiredspace.wits.ac.za/handle/10539/33094?show=full
https://www.elucidate.ai/post/exploring-deep-latent-spaces

Books:
T. Rashid, “GANs mit PyTorch selbst programmieren”, 2020, O’Reilly, dpunkt.verlag, Heidelberg, ISBN 978-3-96009-147-9
D. Foster, “Generatives Deep Learning”, 2019, O’Reilly, dpunkt.verlag, Heidelberg, ISBN 978-3-96009-128-8

 

Variational Autoencoder with Tensorflow – VIII – TF 2 GradientTape(), KL loss and metrics

I continue with my series on options for an implementation of the Kullback-Leibler divergence as a loss [KL loss] contribution in Variational Autoencoder [VAE] models:

Variational Autoencoder with Tensorflow – I – some basics
Variational Autoencoder with Tensorflow – II – an Autoencoder with binary-crossentropy loss
Variational Autoencoder with Tensorflow – III – problems with the KL loss and eager execution
Variational Autoencoder with Tensorflow – IV – simple rules to avoid problems with eager execution
Variational Autoencoder with Tensorflow – V – a customized Encoder layer for the KL loss
Variational Autoencoder with Tensorflow – VI – KL loss via tensor transfer and multiple output
Variational Autoencoder with Tensorflow – VII – KL loss via model.add_loss()

Our objective is to avoid or circumvent potential problems with the eager execution mode of present Tensorflow 2 versions. I have already described three solutions based on standard Keras functionality:

  • Either we add loss contributions via the function layer.add_loss() and a special layer of the Encoder part of the VAE
  • or we add a loss to the output of a full VAE-model via function model.add_loss()
  • or we build a complex model which transports required KL-related tensors from the Encoder part of the VAE model to the Decoder’s output layer.

In all these cases we invoke native Keras functions to handle loss contributions and related operations. Keras controls the calculation of the gradient components of the KL related tensors “mu” and “log_var” in the background for us. This comprises partial derivatives with respect to trainable weight variables of lower Encoder layers and related operations. The same holds for partial derivatives of reconstruction tensors at the Decoder’s output layer with respect to trainable parameters of all layers of the VAE-model. Keras does most of the job

  • of derivative calculation and the registration of related operation sequences during forward pass
  • and the correct application of the registered operations and values in later weight corrections during backward propagation

for us in the background as long as we respect certain rules for eager mode execution.

But Tensorflow 2 [TF2] gives us a much more flexible and low-level option to control the calculation of gradients under the conditions of eager execution. This option requires that we inform the TF/Keras machinery which processes the training steps of an epoch of how to exactly calculate losses and their partial derivatives. Rules to determine and create metrics output must be provided in addition.

TF2 provides a context for registering operations for loss and derivative evaluations. This context is provided by a functional object called GradientTape(). In addition we have to write an encapsulating function “train_step()” to control gradient calculations and output during training.

In this post I will describe how we integrate such an approach with our class “MyVariationalAutoencoder()” for the setup of a VAE model based on convolutional layers. I have discussed the elements and methods of this class MyVariationalAutoencoder() in detail during the last posts.

Regarding the core of the technical solution for train_step() and GradientTape() I follow more or less the recommendations of one of the masters of Keras: F. Chollet. His original code for a TF2-compatible implementation of a VAE can be found here:
https://keras.io/examples/generative/vae/

However, in my opinion Chollet’s code contains a small problem, which I have allowed myself to correct.

The general recipe presented here can, of course, be extended to more complex tasks beyond the optimization of KL and reconstruction losses of VAEs. Therefore, a brief study of the methods to establish detailed loss control is really worth it for ML and VAE beginners. But TF2 and Keras experts will not learn anything new from this post.

I provide the pure code of the classes in this post. In the next post you will find Jupyter cell code for an application to the Celeb A dataset. To prove that the classes below do their job in the end I show you some faces which have been created from arbitrarily chosen points in the latent space after training.

These faces do not exist in reality. They are constructed by the VAE based on compressed and “normalized” data for face patterns and face attribute distributions in the latent space. Note that I used a latent space with a dimension of z_dim =200.

Layer setup by class MyVariationalAutoencoder()

We have already many of the required methods ready. In the last posts we used the flexible functional interface of Keras to set up Neural Network models for both Encoder and Decoder, each with sequences of (convolutional) layers. For our present purposes we will not change the elementary layer structure of the Encoder or Decoder. In particular the layers for the “mu” and “log_var” contributions to the KL loss and a subsequent sampling-layer of the Encoder will remain unchanged.

In the course of the last two posts I have already introduced a parameter “solution_type” to control specifics of our VAE model. We shall use it now to invoke a child class of Keras’ Model() which allows for detailed steps of loss and gradient evaluations.

A child class of keras.models.Model() for loss and gradient evaluation

The standard Keras method Model.fit() normally provides a convenient interface for Keras users. We do not have to think about calling the low-level functions at all if we do not want to or do not need to control gradient calculations in detail. In our present approach, however, we use the low level functionality of GradientTape() directly. This requires to overwrite a specific method of the standard Keras class Model() – namely the function “train_step()”.

If you have never worked with a self-defined train_step() and GradientTape() before, I recommend reading the following introductions first:
https://www.tensorflow.org/guide/autodiff
customizing what happens in fit() and its relation to train_step()
These articles contain valuable information about how to operate at a low level with train_step() regarding losses, derivatives and metrics. This information will help to better understand the methods of the new class VAE() which I am going to derive from Keras’ class Model() below.

Let us first briefly repeat some imports required.

Imports

# Imports 
# ~~~~~~~~ 
import sys
import numpy as np
import os
import pickle

import tensorflow as tf
import tensorflow.keras as keras
from tensorflow.keras.layers import Layer, Input, Conv2D, Flatten, Dense, Conv2DTranspose, Reshape, Lambda, \
                                    Activation, BatchNormalization, ReLU, LeakyReLU, ELU, Dropout, AlphaDropout
from tensorflow.keras.models import Model
# to be consistent with my standard loading of the Keras backend in Jupyter notebooks:  
from tensorflow.keras import backend as B      
from tensorflow.keras import metrics
#from tensorflow.keras.backend import binary_crossentropy

from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import ModelCheckpoint 
from tensorflow.keras.utils import plot_model

#from tensorflow.python.debug.lib.check_numerics_callback import _maybe_lookup_original_input_tensor

# Personal: The following works only if the path in the notebook is supplemented by the path to /projects/GIT/mlx
# The user has to organize his paths for modules to be referred to from Jupyter notebooks himself and 
# replace these settings  
from mynotebooks.my_AE_code.utils.callbacks import CustomCallback, VAE_CustomCallback, step_decay_schedule    
from tensorflow.keras.callbacks import ProgbarLogger

Now we define a class VAE() which inherits basic functionality from the Keras class Model() and overwrite the method train_step(). We shall later create an instance of this new class within an object of class MyVariationalAutoencoder().

New Class VAE

from tensorflow.keras import metrics
...
...
# A child class of Model() to control train_step with GradientTape() 
class VAE(keras.Model): 
    
    # We use our self defined __init__() to provide a reference MyVAE 
    # to an object of type "MyVariationalAutoencoder" 
    # This in turn allows us to address the Encoder and the Decoder  
    def __init__(self, MyVAE, **kwargs):
        super(VAE, self).__init__(**kwargs)
        self.MyVAE   = MyVAE 
        self.encoder = self.MyVAE.encoder
        self.decoder = self.MyVAE.decoder
        
        # A factor to control the ratio between the KL loss and the reconstruction loss 
        self.fact = MyVAE.fact
        
        # A counter 
        self.count = 0 
        
        # A factor to scale the absolute values of the losses 
        # e.g. by the number of pixels of an image
        self.scale_fact = 1.0  # no scaling
        # self.scale_fact = tf.constant(self.MyVAE.input_dim[0] * self.MyVAE.input_dim[1], dtype=tf.float32)
        self.f_scale    = 1. / self.scale_fact
        
        # loss type : 0: BCE, 1: MSE 
        self.loss_type = self.MyVAE.loss_type
        
        # track loss development via metrics 
        self.total_loss_tracker = keras.metrics.Mean(name="total_loss")
        self.reco_loss_tracker  = keras.metrics.Mean(name="reco_loss")
        self.kl_loss_tracker    = keras.metrics.Mean(name="kl_loss")

    def call(self, inputs):
        x, z_m, z_var = self.encoder(inputs)
        return self.decoder(x)  

    # Overwrite the metrics() of Model() - use getter mechanism  
    @property
    def metrics(self):
        return [
            self.total_loss_tracker,
            self.reco_loss_tracker,
            self.kl_loss_tracker
        ]

    # Core function to control all operations regarding eager differentiation operations, 
    # i.e. the calculation of loss terms with respect to tensors and differentiation variables 
    # and metrics data 
    def train_step(self, data):
        # We use the GradientTape context to record differentiation operations/results 
        #self.count += 1 
        
        with tf.GradientTape() as tape:
            z, z_mean, z_log_var = self.encoder(data)
            reconstruction = self.decoder(z)
            #reco_shape = tf.shape(self.reconstruction)
            #print("reco_shape = ", reco_shape, self.reconstruction.shape, data.shape)
            
            #BCE loss (Binary Cross Entropy) 
            if self.loss_type == 0: 
                reconstruction_loss = tf.reduce_mean(
                    tf.reduce_sum(
                        keras.losses.binary_crossentropy(data, reconstruction), axis=(1, 2)
                    )
                ) * self.f_scale
            
            # MSE loss (Mean Squared Error) 
            if self.loss_type == 1: 
                reconstruction_loss = tf.reduce_mean(
                    tf.reduce_sum(
                        keras.losses.mse(data, reconstruction), axis=(1, 2)
                    )
                ) * self.f_scale
            
            kl_loss = -0.5 * self.fact * (1 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var))
            kl_loss = tf.reduce_mean(tf.reduce_sum(kl_loss, axis=1))  
            total_loss = reconstruction_loss + kl_loss 
        
        grads = tape.gradient(total_loss, self.trainable_weights)
        self.optimizer.apply_gradients(zip(grads, self.trainable_weights))
        #if self.count == 1: 
            
        self.total_loss_tracker.update_state(total_loss)
        self.reco_loss_tracker.update_state(reconstruction_loss)
        self.kl_loss_tracker.update_state(kl_loss)
        return {
            "total_loss": self.total_loss_tracker.result(),
            "reco_loss": self.reco_loss_tracker.result(),
            "kl_loss": self.kl_loss_tracker.result(),
        }
        
    def compile_VAE(self, learning_rate):

        # Optimizer
        # ~~~~~~~~~ 
        optimizer = Adam(learning_rate=learning_rate)
        # save the learning rate for possible intermediate output to files 
        self.learning_rate = learning_rate
        self.compile(optimizer=optimizer)

Explanation of class VAE(): Details of the methods of the additional class

First, we need to import an additional library tensorflow.keras.metrics. Its functions, as e.g. Mean(), will help us to print out intermediate data about various loss contributions during training – averaged over the batches of an epoch.

Then, we have added four central methods to class VAE:

  • a function __init__(),
  • a function metrics() together with Python’s getter-mechanism
  • a function call()
  • and our central function train_step().

All functions overwrite the defaults of the parent class Model(). Be careful to distinguish the range of batches which keras.metrics() and train_step() operate on (see the small example after this list):

  • A “training step” covers just one batch, which is provided to the training mechanism by the Model.fit()-function.
  • Averaging performed by functions of keras.metrics instead works across all batches of an epoch.
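To illustrate the difference, here is a small standalone example of mine (not part of the VAE code): a keras.metrics.Mean tracker keeps a running average over all values it has seen since the last reset, i.e. over all batches of an epoch, whereas the loss computed inside train_step() always refers to the current batch only.

import tensorflow as tf

tracker = tf.keras.metrics.Mean(name="total_loss")

# loss values as they might come in from three successive training steps (batches)
for batch_loss in (2.0, 1.0, 0.6):
    tracker.update_state(batch_loss)
    print("batch loss:", batch_loss, " epoch average so far:", float(tracker.result()))

tracker.reset_states()   # Keras does this automatically at the start of each epoch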

Functions “__init__() ” and call() to instantiate a Model based on class VAE()

In general we can use the standard interface of __init__(inputs, outputs, …) or a call()-interface to instantiate an object of class-type Model(). See
https://www.tensorflow.org/api_docs/python/tf/keras/Model
https://docs.w3cub.com/tensorflow~python/tf/keras/model.html

We have to be precise about the parameters of __init__() or the call()-interface if we intend to use properties of the standard compile()– and fit()-interfaces of a model – at least in application cases where we do not control everything regarding losses and gradients ourselves.

To define a complete model for the general case we therefore add the call()-method. At the same time we “misuse” the __init__() function of VAE() to provide a reference to our instance of class “MyVariationalAutoencoder”. Actually, providing “call()” is done only for the sake of flexibility in other use cases than the one discussed here. For our present purposes we could actually omit call().

The __init__()-function retrieves some parameters from MyVAE. You see the factor “fact” which controls the ratio of the KL-loss to the reconstruction loss. In addition I provided an option to scale the loss values by a division by the number of pixels of input images. You just have to un-comment the respective statement. Sorry, I have not yet made it controllable by a parameter of MyVariationalAutoencoder().

Finally, the parameter loss_type is evaluated; for a value of “1” we take MSE as a loss function instead of the standard BCE (Binary Cross-Entropy); see the Jupyter cells in the next post. This allows for some more flexibility in dealing with certain datasets.

Function “metrics()” to produce loss values as output during training

With the function metrics() we are able to establish our own “tracking” of the evolution of the Model’s loss contributions during training. In our case we are particularly interested in the evolution of the “reconstruction loss” and the “KL-loss“.

Note that the @property decorator is added to the metrics()-function. This allows us to define its output via the getter-mechanism for Python classes. In our case the __init__()-function defines the mechanism to fill required variables:
The three “tracker”-variables there get their values from the function tensorflow.keras.metrics.Mean(). Note that the names given to the loss-trackers in __init__() are of importance for later output handling!

Note also that keras.metrics.Mean() calculates averages over values derived for all batches of an epoch. The tf.reduce_mean()-statements in the GradientTape() section of the code above, instead, refer to averages calculated over the samples of a single batch.

Actualized loss output is later delivered during each training step by the method update_state(). You find a description of the methods of keras.metrics.Mean() in the Keras documentation.

The result of all this is that metrics() delivers loss values by actualized tracker-variables of our child class VAE(). Note that neither __init__() nor metrics() define what exactly is to be done to calculate each loss term. __init__() and metrics() only prepare the later output technically by formal class constructs. Note also that all the data defined by metrics() are updated and averaged per epoch without the requirement to call the function “reset_states()” (see the Keras docs). This is automatically done at the beginning of each epoch.

train_step() and GradientTape() to control losses and their gradients

Let us turn to the necessary calculations which must be performed during each training step. In an eager environment we must watch the trainable variables, on which the different loss terms depend, to be able to calculate the partial derivatives and record related operations and intermediate results already during forward pass:

We must track the differentiation operations and resulting values to know exactly what has to be done in reverse during error backward propagation. To be able to do this TF2 offers us a recording mechanism called GradientTape(). Its results are kept in an object which often is called a “tape”.

You find more information about these topics at
https://debuggercafe.com/basics-of-tensorflow-gradienttape/
https://runebook.dev/de/docs/tensorflow/gradienttape
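For readers who have never seen GradientTape() in isolation, here is a minimal standalone example of mine (not part of the VAE code) which records a simple operation on the tape and then asks the tape for the derivative:

import tensorflow as tf

x = tf.Variable(3.0)

with tf.GradientTape() as tape:
    y = x ** 2                  # operation recorded on the "tape"

dy_dx = tape.gradient(y, x)     # derivative of y = x^2 at x = 3
print(float(dy_dx))             # -> 6.0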

Within train_step() we need some tensors which are required for loss calculations in an explicit form. So, we must change the Keras model for the Encoder to give us the tensors for “mu” and “log_var” as its output.

This is no problem for us. We have already made the output of the Encoder dependent on a variable “solution_type” and discussed a multi-output Encoder model already in the post Variational Autoencoder with Tensorflow 2.8 – VI – KL loss via tensor transfer and multiple output.

Therefore, we just have to add a new value 3 to the checks of “solution_type”. The same is true for the input control of the Decoder (see a section about the related methods of MyVariationalAutoencoder() below).

The statements within the section for GradientTape() deal with the calculation of loss terms and record the related operations. All the calculations should be familiar from previous posts of this series.

This includes an identification of the trainable_weights of the involved layers. Quote from
https://keras.io/guides/writing_a_training_loop_from_scratch/#using-the-gradienttape-a-first-endtoend-example:

Calling a model inside a GradientTape scope enables you to retrieve the gradients of the trainable weights of the layer with respect to a loss value. Using an optimizer instance, you can use these gradients to update these variables (which you can retrieve using model.trainable_weights).

In train_step() we need to register that the total loss is dependent on all trainable weights and that all related partial derivatives have to be taken into account during optimization. This is done by

        grads = tape.gradient(total_loss, self.trainable_weights)
        self.optimizer.apply_gradients(zip(grads, self.trainable_weights))

To be able to get actualized output during training we update the state of all tracked variables:

        self.total_loss_tracker.update_state(total_loss)
        self.reco_loss_tracker.update_state(reconstruction_loss)
        self.kl_loss_tracker.update_state(kl_loss)

A small problem with F. Chollet’s code

The careful reader may have noticed that my code of the function “train_step()” deviates from F. Chollet’s recommendations. Regarding the return statement I use

        return {
            "total_loss": self.total_loss_tracker.result(),
            "reco_loss": self.reco_loss_tracker.result(),
            "kl_loss": self.kl_loss_tracker.result(),
        }

whilst F. Chollet’s original code contains a statement like

        return {
            "loss": self.total_loss_tracker.result(),     # here lies the main difference - different "name" than defined in __init__!
            "reconstruction_loss": self.reconstruction_loss_tracker.result(),  # ignore my abbreviation to reco_loss 
            "kl_loss": self.kl_loss_tracker.result(),
        }

Chollet’s original code unfortunately gives inconsistent loss data: The sum of his “reconstruction loss” and the “KL (Kullback Leibler) loss” do not add up to the (total) “loss”. This can be seen from the data of the first epochs in F. Chollet’s example on the tutorial at
keras.io/examples/generative/vae.

Some of my own result data for the MNIST example with this error look like:

Epoch 1/5
469/469 [==============================] - 7s 13ms/step - reconstruction_loss: 209.0115 - kl_loss: 3.4888 - loss: 258.9048
Epoch 2/5
469/469 [==============================] - 7s 14ms/step - reconstruction_loss: 173.7905 - kl_loss: 4.8220 - loss: 185.0963
Epoch 3/5
469/469 [==============================] - 6s 13ms/step - reconstruction_loss: 160.4016 - kl_loss: 5.7511 - loss: 167.3470
Epoch 4/5
469/469 [==============================] - 6s 13ms/step - reconstruction_loss: 155.5937 - kl_loss: 5.9947 - loss: 162.3994
Epoch 5/5
469/469 [==============================] - 6s 13ms/step - reconstruction_loss: 152.8330 - kl_loss: 6.1689 - loss: 159.5607

Things do get better from epoch to epoch – but we want a consistent output from the beginning: The averaged (total) loss should always be printed as equal to the sum of the averaged KL loss plus the averaged reconstruction loss.

The deviation is surprising as we seem to use the right tracker-results in the code. And the name used in the return statement of the train_step()-function here should only be relevant for the printing …

However, the name “loss” is NOT consistent with the name defined in the statement Mean(name=”total_loss”) in the __init__() function of Chollet, where he defines his tracking mechanisms.

self.total_loss_tracker = keras.metrics.Mean(name="total_loss")

This has consequences: The inconsistency triggers a different output than a consistent use of names. Just try it out on your own …

This is not only true for the deviation between “loss” in

return {
            "loss": self.total_loss_tracker.result(),
            ....
       }

and “total_loss” in the __init__() function

self.total_loss_tracker = keras.metrics.Mean(name="total_loss")

– a mismatch which results in printed values lacking the proper averaging – but also for deviations in the names used for the other loss contributions. In case of an inconsistency Keras seems to fall back to a default here which does not reflect the standard linear averaging of Mean() over all values calculated for the batches of an epoch (without any special weights).

That there is some common default mechanism at work can be seen from the fact that wrong names for all individual losses (here the KL loss and the reconstruction loss) give us at least a consistent sum-value for the total amount again. But all the values derived by the fallback are much closer to the start values at an epoch's beginning than to the mean values averaged over an epoch. You may test this yourself.

To get on the safe side we use the correct “names” defined in the __init__()-function of our code:

        return {
            "total_loss": self.total_loss_tracker.result(),
            "reco_loss": self.reco_loss_tracker.result(),
            "kl_loss": self.kl_loss_tracker.result(),
        }

For MNIST data fed into our VAE model we then get:

Epoch 1/5
469/469 [==============================] - 8s 13ms/step - reco_loss: 214.5662 - kl_loss: 2.6004 - total_loss: 217.1666
Epoch 2/5
469/469 [==============================] - 7s 14ms/step - reco_loss: 186.4745 - kl_loss: 3.2799 - total_loss: 189.7544
Epoch 3/5
469/469 [==============================] - 6s 13ms/step - reco_loss: 181.9590 - kl_loss: 3.4186 - total_loss: 185.3774
Epoch 4/5
469/469 [==============================] - 6s 13ms/step - reco_loss: 177.5216 - kl_loss: 3.9433 - total_loss: 181.4649
Epoch 5/5
469/469 [==============================] - 6s 13ms/step - reco_loss: 163.7209 - kl_loss: 5.5816 - total_loss: 169.3026

This is exactly what we want.

A general recipe to use train_step()

So, the general recipe is:

  • Define what metric properties you are interested in. Create respective tracker-variables in the __init__() function.
  • Use the getter mechanism to define your metrics() function and its output via references to the trackers.
  • Define your own training step by a function train_step().
  • Use Tensorflow’s GradientTape context to register statements which control the calculation of loss contributions from elementary tensors of your (functional) Keras model. Provide all layers there, e.g. by references to their models.
  • Register gradient-operations of the total loss with respect to all trainable weights and updates of metrics data within function “train_step()”.
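The recipe condenses into a small structural skeleton. The following is a generic sketch of mine (not the VAE code itself) for an arbitrary wrapped Keras model with a single custom loss; it only illustrates the pattern of tracker variables, the metrics() getter and train_step():

import tensorflow as tf
from tensorflow import keras

class CustomTrainModel(keras.Model):
    def __init__(self, core_model, **kwargs):
        super().__init__(**kwargs)
        self.core_model   = core_model                        # any functional Keras model
        self.loss_tracker = keras.metrics.Mean(name="loss")   # step 1: tracker variable

    @property
    def metrics(self):                                        # step 2: getter for metrics output
        return [self.loss_tracker]

    def train_step(self, data):                               # step 3: own training step
        x, y = data
        with tf.GradientTape() as tape:                       # step 4: record loss operations
            y_pred = self.core_model(x, training=True)
            loss   = tf.reduce_mean(keras.losses.mse(y, y_pred))
        grads = tape.gradient(loss, self.trainable_weights)   # step 5: gradients and updates
        self.optimizer.apply_gradients(zip(grads, self.trainable_weights))
        self.loss_tracker.update_state(loss)
        return {"loss": self.loss_tracker.result()}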

Actually, I have used the GradientTape() mechanism already in this blog when I played a bit with approaches to create so called DeepDream images. See
https://linux-blog.anracom.com/category/machine-learning/deep-dream/
for more information – there in a different context.

How to combine the classes “VAE()” and “MyVariationalAutoencoder()” ?

Where do we stand? We have defined a new class “VAE()” which modifies the original Keras Model() class. And we have our class “MyVariationalAutoencoder()” to control the setup of a VAE model.

Next we need to address the question of how we combine these two classes. If you have read my previous posts you may expect a major change to the method “_build_VAE()“. This is correct, but we also have to modify the conditions for the Encoder output construction in _build_enc() and the definition of the Decoder’s input in _build_dec(). Therefore I give you the modified code for these functions. For reasons of completeness I add the code for the __init__()-function:

Function __init__()

    def __init__(self
        , input_dim                  # the shape of the input tensors (for MNIST (28,28,1)) 
        , encoder_conv_filters       # number of maps of the different Conv2D layers   
        , encoder_conv_kernel_size   # kernel sizes of the Conv2D layers 
        , encoder_conv_strides       # strides - here also used to reduce the spatial resolution 
                                     # (used instead of pooling layers) 
        , encoder_conv_padding       # padding - valid or same  
        
        , decoder_conv_t_filters     # number of maps in Con2DTranspose layers 
        , decoder_conv_t_kernel_size # kernel sizes of Conv2D Transpose layers  
        , decoder_conv_t_strides     # strides for Conv2dTranspose layers - inverts spatial resolution
        , decoder_conv_t_padding     # padding - valid or same  
        
        , z_dim                      # A good start is 16 or 24  
        , solution_type  = 0         # Which type of solution for the KL loss calculation ?
        , act            = 0         # Which type of activation function?  
        , fact           = 0.65e-4   # Factor for the KL loss (0.5e-4 < fact < 1.e-3 is reasonable) 
        , loss_type      = 0         # 0: BCE, 1: MSE   
        , use_batch_norm = False     # Shall BatchNormalization be used after Conv2D layers? 
        , use_dropout    = False     # Shall statistical dropout layers be used for regularization purposes ? 
        , dropout_rate   = 0.25      # Rate for statistical dropout layer  
        , b_build_all    = False     # Added by RMO - full Model is build in 2 steps 
        ):
        
        '''
        Input: 
        The encoder_... and decoder_.... variables are Python lists,
        whose length defines the number of Conv2D and Conv2DTranspose layers 
        
        input_dim : Shape/dimensions of the input tensor - for MNIST (28,28,1) 
        encoder_conv_filters:     List with the number of maps/filters per Conv2D layer    
        encoder_conv_kernel_size: List with the kernel sizes for the Conv-Layers   
        encoder_conv_strides:     List with the strides used for the Conv-Layers   

        z_dim : dimension of the "latent_space"
        solution_type : Type of solution for KL loss calculation (0: Customized Encoder layer, 
                                                                  1: transfer of mu, var_log to Decoder 
                                                                  2: model.add_loss()
                                                                  3: definition of the training step with GradientTape()
        
        act :  determines activation function to use (0: LeakyRELU, 1:RELU , 2: SELU)
               !!!! NOTE: !!!!
               If SELU is used then the weight kernel initialization and the dropout layer need to be special   
               https://github.com/christianversloot/machine-learning-articles/blob/main/using-selu-with-tensorflow-and-keras.md
               AlphaDropout instead of Dropout + LeCunNormal for kernel initializer
        fact = 0.65e-4 : Factor to scale the KL loss relative to the reconstruction loss
                         Must be adapted to the way of calculation - 
                         e.g. for solution_type == 3 the loss is not averaged over all pixels 
                         => at least factor of around 1000 bigger than normally 
        loss-type = 0:   Defines the way we calculate a reconstruction loss 
                         0: Binary Cross Entropy - recommended by many authors 
                         1: Mean Square error - recommended by some authors especially for "face arithmetics"
        use_batch_norm = False   # True : We use BatchNormalization   
        use_dropout    = False   # True : We use dropout layers (rate = 0.25, see Encoder)
        b_build_all    = False   # True : Full VAE Model is build in 1 step; 
                                   False: Encoder, Decoder, VAE are build in separate steps   
        '''
        
        self.name = 'variational_autoencoder'

        # Parameters for Layers which define the Encoder and Decoder 
        self.input_dim                  = input_dim
        self.encoder_conv_filters       = encoder_conv_filters
        self.encoder_conv_kernel_size   = encoder_conv_kernel_size
        self.encoder_conv_strides       = encoder_conv_strides
        self.encoder_conv_padding       = encoder_conv_padding
        
        self.decoder_conv_t_filters     = decoder_conv_t_filters
        self.decoder_conv_t_kernel_size = decoder_conv_t_kernel_size
        self.decoder_conv_t_strides     = decoder_conv_t_strides
        self.decoder_conv_t_padding     = decoder_conv_t_padding
        
        self.z_dim = z_dim

        # Check param for activation function 
        if act < 0 or act > 2: 
            print("Range error: Parameter act = " + str(act) + " has unknown value ")  
            sys.exit()
        else:
            self.act = act 
        
        # Factor to scale the KL loss relative to the Binary Cross Entropy loss 
        self.fact = fact 
        
        # Type of loss - 0: BCE, 1: MSE 
        self.loss_type = loss_type
        
        
        # Check param for solution approach  
        if solution_type < 0 or solution_type > 3: 
            print("Range error: Parameter solution_type = " + str(solution_type) + " has unknown value ")  
            sys.exit()
        else:
            self.solution_type = solution_type 

        self.use_batch_norm = use_batch_norm
        self.use_dropout    = use_dropout
        self.dropout_rate   = dropout_rate

        # Preparation of some variables to be filled later 
        self._encoder_input  = None  # receives the Keras object for the Input Layer of the Encoder 
        self._encoder_output = None  # receives the Keras object for the Output Layer of the Encoder 
        self._shape_before_flattening = None # info of the Encoder => is used by Decoder 
        self._decoder_input  = None  # receives the Keras object for the Input Layer of the Decoder
        self._decoder_output = None  # receives the Keras object for the Output Layer of the Decoder

        # Layers / tensors for KL loss 
        self.mu      = None # receives special Dense Layer's tensor for KL-loss 
        self.log_var = None # receives special Dense Layer's tensor for KL-loss 

        # Parameters for SELU - just in case we may need to use it somewhere 
        # https://keras.io/api/layers/activations/ see selu
        self.selu_scale = 1.05070098
        self.selu_alpha = 1.67326324

        # The number of Conv2D and Conv2DTranspose layers for the Encoder / Decoder 
        self.n_layers_encoder = len(encoder_conv_filters)
        self.n_layers_decoder = len(decoder_conv_t_filters)

        self.num_epoch = 0 # Initialization of the number of epochs 

        # A matrix for the values of the losses 
        self.std_loss  = tf.TensorArray(tf.float32, size=0, dynamic_size=True, clear_after_read=False)

        # We only build the whole AE-model if requested
        self.b_build_all = b_build_all
        if b_build_all:
            self._build_all()


Changes to the Encoder and Decoder code

We just need to set the right options for the output tensors of the Encoder and the input tensors of the Decoder. The relevant code parts are controlled by the parameter “solution_type”.

Modified code of _build_enc() of class MyVariationalAutoencoder

    def _build_enc(self, solution_type = -1, fact=-1.0):
        '''  Your documentation '''
        # Checking whether "fact" and "solution_type" for the KL loss shall be overwritten
        if fact < 0:
            fact = self.fact  
        if solution_type < 0:
            solution_type = self.solution_type
        else: 
            self.solution_type = solution_type
        
        # Preparation: We later need a function to calculate the z-points in the latent space 
        # The following function will be used by a Lambda layer of the Encoder 
        def z_point_sampling(args):
            '''
            A point in the latent space is calculated statistically 
            around an optimized mu for each sample 
            '''
            mu, log_var = args # Note: These are 1D tensors !
            epsilon = B.random_normal(shape=B.shape(mu), mean=0., stddev=1.)
            return mu + B.exp(log_var / 2) * epsilon

        
        # Input "layer"
        self._encoder_input = Input(shape=self.input_dim, name='encoder_input')

        # Initialization of a running variable x for individual layers 
        x = self._encoder_input

        # Build the CNN-part with Conv2D layers 
        # Note that stride>=2 reduces spatial resolution without the help of pooling layers 
        for i in range(self.n_layers_encoder):
            conv_layer = Conv2D(
                filters = self.encoder_conv_filters[i]
                , kernel_size = self.encoder_conv_kernel_size[i]
                , strides = self.encoder_conv_strides[i]
                , padding = 'same'  # Important ! Controls the shape of the layer tensors.    
                , name = 'encoder_conv_' + str(i)
                )
            x = conv_layer(x)
            
            # The "normalization" should be done ahead of the "activation" 
            if self.use_batch_norm:
                x = BatchNormalization()(x)

            # Selection of activation function (out of 3)      
            if self.act == 0:
                x = LeakyReLU()(x)
            elif self.act == 1:
                x = ReLU()(x)
            elif self.act == 2: 
                # RMO: Just use the Activation layer to use SELU with predefined (!) parameters 
                x = Activation('selu')(x) 

            # Fulfill some SELU requirements 
            if self.use_dropout:
                if self.act == 2: 
                    x = AlphaDropout(rate = 0.25)(x)
                else:
                    x = Dropout(rate = 0.25)(x)

        # Last multi-dim tensor shape - is later needed by the decoder 
        self._shape_before_flattening = B.int_shape(x)[1:]

        # Flattened layer before calculating VAE-output (z-points) via 2 special layers 
        x = Flatten()(x)
        
        # "Variational" part - create 2 Dense layers for a statistical distribution of z-points  
        self.mu      = Dense(self.z_dim, name='mu')(x)
        self.log_var = Dense(self.z_dim, name='log_var')(x)

        if solution_type == 0: 
            # Customized layer for the calculation of the KL loss based on mu, var_log data
            # We use a customized layer according to a class definition  
            self.mu, self.log_var = My_KL_Layer()([self.mu, self.log_var], fact=fact)


        # Layer to provide a z_point in the Latent Space for each sample of the batch 
        self._encoder_output = Lambda(z_point_sampling, name='encoder_output')([self.mu, self.log_var])

        # The Encoder Model 
        # ~~~~~~~~~~~~~~~~~~~
        # With extra KL layer or with vae.add_loss()
        if self.solution_type == 0 or self.solution_type == 2: 
            self.encoder = Model(self._encoder_input, self._encoder_output, name="encoder")
        
        # Transfer solution => Multiple outputs 
        if self.solution_type == 1  or self.solution_type == 3: 
            self.encoder = Model(inputs=self._encoder_input, outputs=[self._encoder_output, self.mu, self.log_var], name="encoder")

The only difference to the previous version is that the multi-output variant of the Encoder now also covers “solution_type == 3”: for this solution, too, mu and log_var are provided as additional outputs. For the Decoder we have:

Modified code of _build_dec() of class MyVariationalAutoencoder

    def _build_dec(self):
        ''' Your documentation       '''       
 
        # Input layer - aligned to the shape of z-points in the latent space = output[0] of the Encoder 
        self._decoder_inp_z = Input(shape=(self.z_dim,), name='decoder_input')
        
        # Additional Input layers for the KL tensors (mu, log_var) from the Encoder
        if self.solution_type == 1  or self.solution_type == 3: 
            self._dec_inp_mu       = Input(shape=(self.z_dim,), name='mu_input')
            self._dec_inp_var_log  = Input(shape=(self.z_dim,), name='logvar_input')
            
            # We give the layers later used as output a name 
            # Each of the Activation layers below just corresponds to an identity mapping (pass-through) 
            #self._dec_mu            = self._dec_inp_mu 
            #self._dec_var_log       = self._dec_inp_var_log 
            self._dec_mu            = Activation('linear',name='dc_mu')(self._dec_inp_mu) 
            self._dec_var_log       = Activation('linear', name='dc_var')(self._dec_inp_var_log) 

        # Here we use the tensor shape info from the Encoder          
        x = Dense(np.prod(self._shape_before_flattening))(self._decoder_inp_z)
        x = Reshape(self._shape_before_flattening)(x)

        # The inverse CNN
        for i in range(self.n_layers_decoder):
            conv_t_layer = Conv2DTranspose(
                filters = self.decoder_conv_t_filters[i]
                , kernel_size = self.decoder_conv_t_kernel_size[i]
                , strides = self.decoder_conv_t_strides[i]
                , padding = 'same' # Important ! Controls the shape of tensors during reconstruction
                                   # we want an image with the same resolution as the original input 
                , name = 'decoder_conv_t_' + str(i)
                )
            x = conv_t_layer(x)

            # Normalization and Activation 
            if i < self.n_layers_decoder - 1:
                # Also in the decoder: normalization before activation  
                if self.use_batch_norm:
                    x = BatchNormalization()(x)
                
                # Choice of activation function
                if self.act == 0:
                    x = LeakyReLU()(x)
                elif self.act == 1:
                    x = ReLU()(x)
                elif self.act == 2: 
                    #x = self.selu_scale * ELU(alpha=self.selu_alpha)(x)
                    x = Activation('selu')(x)
                
                # Adaptions to SELU requirements 
                if self.use_dropout:
                    if self.act == 2: 
                        x = AlphaDropout(rate = 0.25)(x)
                    else:
                        x = Dropout(rate = 0.25)(x)
                
            # Last layer => Sigmoid output 
            # => This requires scaled input => Division of pixel values by 255
            else:
                x = Activation('sigmoid', name='dc_reco')(x)

        # Output tensor => a scaled image 
        self._decoder_output = x

        # The Decoder model 
        # solution_type == 0/2/3: Just the decoded input 
        if self.solution_type == 0 or self.solution_type == 2 or self.solution_type == 3: 
            self.decoder = Model(self._decoder_inp_z, self._decoder_output, name="decoder")
        
        # solution_type == 1: The decoded tensor plus the transferred tensors mu and log_var of the variational distribution 
        if self.solution_type == 1: 
            self.decoder = Model([self._decoder_inp_z, self._dec_inp_mu, self._dec_inp_var_log], 
                                 [self._decoder_output, self._dec_mu, self._dec_var_log], name="decoder")

Changes to the method _build_VAE() for building the VAE model

For solution_type == 3 our VAE model is now set up with the help of the __init__() method of our new class VAE. We just have to hand over the object created by MyVariationalAutoencoder as a reference.

Modified code of _build_VAE() of class MyVariationalAutoencoder

    def _build_VAE(self):     
        ''' Your documentation '''
        
        # Solution with train_step() and GradientTape(): Control is transferred to class VAE  
        if self.solution_type == 3:
            self.model = VAE(self)   # Here parameter "self" provides a reference to an instance of MyVariationalAutoencoder  
            self.model.summary()
        
        # Solutions with layer.add_loss or model.add_loss() 
        if self.solution_type == 0 or self.solution_type == 2:
            model_input  = self._encoder_input
            model_output = self.decoder(self._encoder_output)
            self.model = Model(model_input, model_output, name="vae")

        # Solution with transfer of data from the Encoder to the Decoder output layer
        if self.solution_type == 1: 
            enc_out      = self.encoder(self._encoder_input)
            dc_reco, dc_mu, dc_var = self.decoder(enc_out)
            # We organize the output and later association of cost functions and metrics via a dictionary 
            mod_outputs = {'vae_out_main': dc_reco, 'vae_out_mu': dc_mu, 'vae_out_var': dc_var}
            self.model = Model(inputs=self._encoder_input, outputs=mod_outputs, name="vae")

Note that we keep the resulting model within the object of class MyVariationalAutoencoder. See the Jupyter cells in my next post.
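Just as a rough orientation – a minimal call sequence, under the assumption that an object “MyVae” of class MyVariationalAutoencoder has already been instantiated with suitable layer parameters (the name “MyVae” and the learning rate are placeholders only; the real parameter values follow in the Jupyter cells of the next post):

    # Hypothetical call sequence; "MyVae" and the learning rate are placeholders
    MyVae._build_enc()                          # builds MyVae.encoder
    MyVae._build_dec()                          # builds MyVae.decoder
    MyVae._build_VAE()                          # builds MyVae.model (a Keras Model or an instance of class VAE)
    MyVae.compile_myVAE(learning_rate=0.0005)   # placeholder learning rate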

Changes to the method compile_myVAE()

The modification of the function compile_myVAE() is simple:

    def compile_myVAE(self, learning_rate):

        # Optimizer
        # ~~~~~~~~~ 
        optimizer = Adam(learning_rate=learning_rate)
        # save the learning rate for possible intermediate output to files 
        self.learning_rate = learning_rate
        
        # Parameter "fact" will be used by the cost functions defined below to scale the KL loss relative to the BCE loss 
        fact = self.fact
        
        # Function for solution_type == 1
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~  
        @tf.function
        def mu_loss(y_true, y_pred):
            loss_mux = fact * tf.reduce_mean(tf.square(y_pred))
            return loss_mux
        
        @tf.function
        def logvar_loss(y_true, y_pred):
            loss_varx = -fact * tf.reduce_mean(1 + y_pred - tf.exp(y_pred))    
            return loss_varx

        # Function for solution_type == 2 
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        # We follow an approach described at  
        # https://www.tensorflow.org/api_docs/python/tf/keras/layers/Layer
        # NOTE: We can NOT use @tf.function here 
        def get_kl_loss(mu, log_var):
            kl_loss = -fact * tf.reduce_mean(1 + log_var - tf.square(mu) - tf.exp(log_var))
            return kl_loss


        # Required operations for solution_type==2 => model.add_loss()
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        res_kl = get_kl_loss(mu=self.mu, log_var=self.log_var)

        if self.solution_type == 2: 
            self.model.add_loss(res_kl)    
            self.model.add_metric(res_kl, name='kl', aggregation='mean')
        
        # Model compilation 
        # ~~~~~~~~~~~~~~~~~~~~
        
        # Solutions with layer.add_loss or model.add_loss() 
        if self.solution_type == 0 or self.solution_type == 2: 
            if self.loss_type == 0: 
                self.model.compile(optimizer=optimizer, loss="binary_crossentropy",
                                   metrics=[tf.keras.metrics.BinaryCrossentropy(name='bce')])
            if self.loss_type == 1: 
                self.model.compile(optimizer=optimizer, loss="mse",
                                   metrics=[tf.keras.metrics.MeanSquaredError(name='mse')])
        
        # Solution with transfer of data from the Encoder to the Decoder output layer
        if self.solution_type == 1: 
            if self.loss_type == 0: 
                self.model.compile(optimizer=optimizer
                                   , loss={'vae_out_main':'binary_crossentropy', 'vae_out_mu':mu_loss, 'vae_out_var':logvar_loss} 
                                   #, metrics={'vae_out_main':tf.keras.metrics.BinaryCrossentropy(name='bce'), 'vae_out_mu':mu_loss, 'vae_out_var': logvar_loss }
                                   )
            if self.loss_type == 1: 
                self.model.compile(optimizer=optimizer
                                   , loss={'vae_out_main':'mse', 'vae_out_mu':mu_loss, 'vae_out_var':logvar_loss} 
                                   #, metrics={'vae_out_main':tf.keras.metrics.MSE(name='mse'), 'vae_out_mu':mu_loss, 'vae_out_var': logvar_loss }
                                   )
       
        # Solution with train_step() and GradientTape(): Control is transferred to class VAE  
        if self.solution_type == 3:
            self.model.compile(optimizer=optimizer)

Note the adaptations for the new parameter “loss_type”, which we have added to the __init__() function!
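A side remark on the two partial cost functions mu_loss() and logvar_loss() defined above for solution_type == 1: Their sum reproduces the full KL expression used by get_kl_loss() for solution_type == 2, because the mean is a linear operation. A small NumPy check (my own addition, not part of the class) illustrates this:

    import numpy as np

    # Numerical cross-check: mu_loss + logvar_loss == full KL term (same scaling factor "fact")
    fact    = 1.0
    mu      = np.array([0.3, -0.2, 0.7])
    log_var = np.array([0.1,  0.4, -0.3])

    mu_part     =  fact * np.mean(np.square(mu))
    logvar_part = -fact * np.mean(1.0 + log_var - np.exp(log_var))
    full_kl     = -fact * np.mean(1.0 + log_var - np.square(mu) - np.exp(log_var))

    print(np.isclose(mu_part + logvar_part, full_kl))   # => True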

Changes to the method train_myVAE() – inclusion of a dataflow “generator”

It gets a bit more complicated for the function “train_myVAE()”. The reason is that we use the opportunity to include the output of so-called generators which create limited batches on the fly from disk or memory.

Such a generator is very useful if you have to handle datasets which you cannot load into the VRAM of your graphics card as a whole. A typical case is the CelebA dataset on older graphics cards like mine.

In such a case we provide a dataflow to the function. The batches of this dataflow are created continuously as needed and handed over to Tensorflow’s data processing on the graphics card. So, “x_train” as an input variable should not be taken literally in this case: It is then replaced by the generator’s dataflow. See the code for the Jupyter cells in the next post.
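To make the idea concrete, here is a minimal sketch of such a dataflow, assuming Keras’ ImageDataGenerator and a hypothetical directory with resized CelebA images (the path, image size and batch size are placeholders only):

    from tensorflow.keras.preprocessing.image import ImageDataGenerator

    # A minimal sketch; the directory path and sizes are placeholders
    data_gen  = ImageDataGenerator(rescale=1./255)     # scaling to [0, 1] fits the sigmoid output layer
    data_flow = data_gen.flow_from_directory(
        'celeba/imgs'                                  # hypothetical directory with one sub-folder of images
        , target_size = (96, 96)                       # hypothetical resized resolution
        , batch_size  = 128
        , class_mode  = 'input'                        # yields (batch, batch) => target data == input data
        , shuffle     = True
    )
    # MyVae.train_myVAE(data_flow, b_use_generator=True, epochs=10)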

In addition we prepare for cases where the target data, which the reconstruction is compared to, deviate from the input data “x_train”. Typical cases are the application of AEs/VAEs for denoising or recolorization.
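For illustration, a small sketch of such a case – creating noisy input images for a denoising training, under the assumption that “x_train” holds images already scaled to the interval [0, 1] (the noise level and the number of epochs are placeholders):

    import numpy as np

    # A minimal denoising sketch; noise level and epochs are placeholders
    noise_factor = 0.2
    x_noisy = x_train + noise_factor * np.random.normal(size=x_train.shape)
    x_noisy = np.clip(x_noisy, 0.0, 1.0)
    # Train with noisy input and clean targets:
    # MyVae.train_myVAE(x_noisy, x_target=x_train, b_target_ne_train=True, epochs=5)

Now to the modified function itself: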

    # Function to initiate training 
    # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    def train_myVAE(self, x_train, x_target=None
                    , b_use_generator   = False 
                    , b_target_ne_train = False
                    , batch_size = 32
                    , epochs = 2
                    , initial_epoch = 0, 
                    t_mu=None, 
                    t_logvar=None 
                    ):

        ''' 
        @note: Sometimes x_target MUST be provided - e.g. for Denoising, Recolorization 
        @note: x_train will come as a dataflow in case of a generator 
        '''

        # cax = ProgbarLogger(count_mode='samples', stateful_metrics=None)
        
        class MyPrinterCallback(tf.keras.callbacks.Callback):
            # def on_train_batch_begin(self, batch, logs=None):
            #     # Do something on begin of training batch
        
            def on_epoch_end(self, epoch, logs=None):
                # Get overview over available keys 
                #keys = list(logs.keys())
                print("\nEPOCH: {}, Total Loss: {:8.6f}, // reco loss: {:8.6f}, mu Loss: {:8.6f}, logvar loss: {:8.6f}".format(epoch, 
                      logs['loss'], logs['decoder_loss'], logs['decoder_1_loss'], logs['decoder_2_loss'] 
                                            ))
                print()
                #print('EPOCH: {}, Total Loss: {}'.format(epoch, logs['loss']))
                #print('EPOCH: {}, metrics: {}'.format(epoch, logs['metrics']))
        
            def on_epoch_begin(self, epoch, logs=None):
                print('-'*50)
                print('STARTING EPOCH: {}'.format(epoch))
                
        if not b_target_ne_train : 
            x_target = x_train

        # Data are provided from tensors in the Video RAM 
        if not b_use_generator: 

            # Solutions with layer.add_loss or model.add_loss() 
            # Solution with train_step() and GradientTape(): Control is transferred to class VAE  
            if self.solution_type == 0 or self.solution_type == 2 or self.solution_type == 3: 
                self.model.fit(     
                    x_train
                    , x_target
                    , batch_size = batch_size
                    , shuffle = True
                    , epochs = epochs
                    , initial_epoch = initial_epoch
                )
            
            # Solution with transfer of data from the Encoder to the Decoder output layer
            if self.solution_type == 1: 
                self.model.fit(     
                    x_train
                    , {'vae_out_main': x_target, 'vae_out_mu': t_mu, 'vae_out_var':t_logvar}
    #               also working  
    #                , [x_train, t_mu, t_logvar] # we provide some dummy tensors here  
                    , batch_size = batch_size
                    , shuffle = True
                    , epochs = epochs
                    , initial_epoch = initial_epoch
                    #, verbose=1
                    , callbacks=[MyPrinterCallback()]
                )
    
        # If data are provided as a batched dataflow from a generator - e.g. for Celeb A 
        else: 

            # Solution with transfer of data from the Encoder to the Decoder output layer
            if self.solution_type == 1: 
                print("We have no solution yet for solution_type==1 and generators !")
                sys.exit()

            # Solutions with layer.add_loss or model.add_loss() 
            # Solution with train_step() and GradientTape(): Control is transferred to class VAE  
            if self.solution_type == 0 or self.solution_type == 2 or self.solution_type == 3: 
                self.model.fit(     
                    x_train   # coming as a batched dataflow from the outside generator - no batch size required here 
                    , shuffle = True
                    , epochs = epochs
                    , initial_epoch = initial_epoch
                )

As I have not yet tested a solution for solution_type==1 in combination with generators, I leave the writing of working code to the reader. Sorry, I did not find the time for experiments. Presently, I use generators only in combination with the add_loss() based solutions and the solution based on train_step() and GradientTape().

Note also that if we use generators they must take care of providing a flow of target data, too. As said: You should not take “x_train” literally in the case of generators. It is then more of a continuously created “dataflow” of batches – both for the training’s input and target data.

Conclusion

In this post I have outlined how we can use the method train_step() and the tape context of Tensorflow’s GradientTape() to control loss contributions and their gradients. Though this was done for the specific case of the KL loss of a VAE, the general approach should have become clear.

I have added a new class to create a Keras model from a pre-constructed Encoder and Decoder. For convenience reasons we still create the layer structure with our old class “MyVariationalAutoencoder”. But we then switch control to a new instance of a child class of Keras’ Model. This class uses customized versions of train_step() and GradientTape().

I have added some more flexibility in addition: We can now include a dataflow generator for input data (e.g. images) which do not fit into the VRAM (video RAM) of our graphics card, but into the PC’s standard RAM. We can also switch to MSE for the reconstruction loss instead of BCE.

The task of the KL-loss is to compress the data distribution in the latent space and normalize the distribution around certain feature centers there. In the next post
Variational Autoencoder with Tensorflow – IX – taming Celeb A by resizing the images and using a generator
we apply this to images of faces. We shall use the “CelebA” dataset for this purpose. We are going to see that the scaling factor of the KL loss in this case has to be chosen rather big in comparison to simple cases like MNIST. We will also see that choosing a high dimension of the latent space does not really help to create reasonable faces from statistically chosen points in the latent space.

And before I forget it:
Ceterum Censeo: The worst fascist and war criminal living today is the Putler. He must be isolated at all levels, be denazified and imprisoned. Long live a free and democratic Ukraine!