Autoencoders and latent space fragmentation – III – correlations of latent vector components

The topics of this post series are

  • convolutional Autoencoders,
  • images of human faces, provided by the CelebA dataset
  • and related data point and vector distributions in the AEs’ latent spaces.

In the first post

Autoencoders and latent space fragmentation – I – Encoder, Decoder, latent space

I have repeated some basics about the representation of images by vectors. An image corresponds e.g. to a vector in a feature space with orthogonal axes for all individual pixel values. An AE’s Encoder compresses and encodes the image information in form of a vector in the AE’s latent space. This space has many, but significantly fewer dimensions than the original feature space. The end-points of latent vectors are so called z-points in the latent space. We can plot their positions with respect to two coordinate axes in the plane spanned by these axes. The positions reflect the respective vector component values and are the result of an orthogonal projection of the z-points onto this plane. In the second post

Autoencoders and latent space fragmentation – II – number distributions of latent vector components

I have discussed that the length and orientation of a latent vector correspond to a recipe for a constructive process of the AE's (convolutional) Decoder: The vector component values tell the Decoder how to build a superposition of elementary patterns to reconstruct an image in the original feature space. The fundamental patterns detected by the convolutional AE layers in images of the same class of objects reflect typical pixel correlations. Therefore the resulting latent vectors should not vary arbitrarily in their orientation and length.

By an analysis of the component values of the latent vectors for many CelebA images we could explicitly show that such vectors indeed have end points within a small coherent, confined and ellipsoidal region in the latent space. The number distributions of the vectors' component values are very similar to Gaussian functions. Most of them have a small standard deviation around a central mean value very close to zero. But we also found a few dominant components with a wider value spread and a central average value different from zero. The center of the latent space region for CelebA images thus lies at some distance from the origin of the latent space's coordinate system. The center is located close to or within a region spanned by only a few coordinate axes. The Gaussians define a multidimensional ellipsoidal volume with major anisotropic extensions only along a few primary axes.

In addition we studied artificial statistical vector distributions which we created with the help of a constant probability distribution for the values of each of the vector components. We found that the resulting z-points of such vectors most often are not located inside the small ellipsoidal region marked by the latent vectors for the CelebA dataset. Due to the mathematical properties of this kind of artificial statistical vectors only rather small parameter values 1.0 ≤ b ≤ 2.0 for the interval [-b, b], from which we pick all the component values, allow for vectors with at least the right length. However, whether the orientations of such artificial vectors fit the real CelebA vector distribution also depends on possible correlations of the components.

In this post I will show you that there indeed are significant correlations between the components of latent vectors for CelebA images. The correlations are most significant for those components which determine the location of the center of the z-point distribution and the orientation of the main axes of the z-point region for CelebA images. Therefore, a method for statistical vector creation which explicitly treats the vector components as statistically independent properties may fail to cover the interesting latent space region.

Normalized correlation coefficient matrix

When we have N variables (x_1, x_2, …, x_N) and M parallel observations of their values, we can determine possible correlations by calculating the so-called covariance matrix with elements C_ij. A normalized version of this matrix provides the so-called "Pearson product-moment correlation coefficients" with values in the range [-1.0, 1.0]. Values with an absolute magnitude close to 1.0 indicate a significant correlation of the variables x_i and x_j. For more information see the documentation of Numpy's functions for the calculation of the (normalized) covariance matrix from an array containing the observations in an ordered matrix form: "numpy.cov" and "numpy.corrcoef".
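A minimal toy example (with purely hypothetical data, just to illustrate the Numpy calls and the normalization r_ij = C_ij / sqrt(C_ii · C_jj)) may look like this:

import numpy as np

# hypothetical toy data: 2 variables, 1000 observations
rng = np.random.default_rng(0)
x1  = rng.normal(size=1000)
x2  = 0.7 * x1 + 0.3 * rng.normal(size=1000)   # x2 partially correlated with x1

obs  = np.stack([x1, x2])      # rows = variables, columns = observations
cov  = np.cov(obs)             # covariance matrix C_ij
corr = np.corrcoef(obs)        # normalized coefficients r_ij

print(cov)
print(corr)                    # off-diagonal elements around +0.9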

So what are the “variables” and “observations” in our case?

Latent vectors and their components

In the last post we have calculated the latent vectors that a trained convolutional AE produces for 170,000 images of the CelebA dataset. As we chose the number N of dimensions of the latent space to be N=256, each of the latent vectors had 256 components. We can interpret the 256 components as our "variables" and the latent vectors themselves as "observations". An array containing M rows for individual vectors and N columns for the component values can thus be used as input for Numpy's algorithm to calculate the normalized correlation coefficients.

When you try to perform the actual calculations you will soon detect that determining the covariance values based on statistics for all of the 170,000 latent vectors which we created for CelebA images requires an enormous amount of RAM with growing M. So, we have to choose M << 170,000. In the calculations below I took M = 5000 randomly selected vectors out of my 170,000 training vectors.

Some special latent vector components

Before I give you the Pearson coefficients I want to remind you of some special components of the CelebA latent vectors. I had called these components the dominant ones as they had either relatively large absolute mean values or a relatively large half-width. The indices of these components, the related mean values mu and half-widths hw are listed below for an AE with filter numbers in the Encoder's and Decoder's 4 convolutional layers given by (64, 64, 128, 128) and (128, 128, 64, 64), respectively:

 15   mu : -0.25 :: hw:  1.5
 16   mu :  0.5  :: hw:  1.125
 56   mu :  0.0  :: hw:  1.625
 58   mu :  0.25 :: hw:  2.125
 66   mu :  0.25 :: hw:  1.5
 68   mu :  0.0  :: hw:  2.0
110   mu :  0.5  :: hw:  1.875
118   mu :  2.25 :: hw:  2.25
151   mu :  1.5  :: hw:  4.125
177   mu : -1.0  :: hw:  2.25
178   mu :  0.5  :: hw:  1.875
180   mu : -0.25 :: hw:  1.5
188   mu :  0.25 :: hw:  1.75
195   mu : -1.5  :: hw:  2.0
202   mu : -0.5  :: hw:  2.25
204   mu : -0.5  :: hw:  1.25
210   mu :  0.0  :: hw:  1.75
230   mu :  0.25 :: hw:  1.5
242   mu : -0.25 :: hw:  2.375
253   mu : -0.5  :: hw:  1.0

The first column provides the component index.

Pearson correlation coefficients for dominant components of latent CelebA vectors

For the latent space of our AE we had chosen the number N of its dimensions to be N=256. Therefore, the covariance matrix has 256×256 elements. I do not want to bore you with a big matrix having only a few elements with a size worth mentioning. Instead I give you a code snippet which should make it clear what I have done:

import numpy as np
#np.set_printoptions(threshold=sys.maxsize)

# The Pearson correlation coefficient matrix 
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
print(z_points.shape)
print()
num_pts      = 5000

# Special points in slice 
num_pts_spec = 100000
jc1_sp = 118; jc2_sp = 164
# the following assignment overrides the previous one - the pair (177, 195) is used below
jc1_sp = 177; jc2_sp = 195

len_z = z_points.shape[0]

ay_sel_ptsx = z_points[np.random.choice(len_z, size=num_pts, replace=False), :]
print(ay_sel_ptsx.shape)

# special points 
threshcc = 2.0
ay_sel_pts1 = ay_sel_ptsx[( abs(ay_sel_ptsx[:,:jc1_sp])         < threshcc).all(axis=1)] 
print("shape of ay_sel_pts1 :  ", ay_sel_pts1.shape )
ay_sel_pts2 = ay_sel_pts1[( abs(ay_sel_pts1[:,jc1_sp+1:jc2_sp]) < threshcc).all(axis=1)] 
print("shape of ay_sel_pts2 :  ", ay_sel_pts2.shape )
ay_sel_pts3 = ay_sel_pts2[( abs(ay_sel_pts2[:,jc2_sp+1:])       < threshcc).all(axis=1)] 
print("shape of ay_sel_pts3 :  ", ay_sel_pts3.shape )
ay_sel_pts_sp  = ay_sel_pts3

ay_sel_pts = ay_sel_ptsx.transpose()
print("shape of ay_sel_pts :  ", ay_sel_pts.shape)

ay_sel_pts_spec = ay_sel_pts_sp.transpose()
print("shape of ay_sel_pts_spec :  ",ay_sel_pts_spec.shape)
print()
       
# Correlation coefficients for the selected points  
corr_coeff = np.corrcoef(ay_sel_pts)
nd = corr_coeff.shape[0]

print(corr_coeff.shape)
print()

for k in range(1,7): 
    thresh = k/10.
    print( "num coeff >", str(thresh), ":", int( ( (np.absolute(corr_coeff) > thresh).sum() - nd) / 2) )

The result was:

(170000, 256)

(5000, 256)
shape of ay_sel_pts1 :   (101, 256)
shape of ay_sel_pts2 :   (80, 256)
shape of ay_sel_pts3 :   (60, 256)
shape of ay_sel_pts :   (256, 5000)
shape of ay_sel_pts_spec :   (256, 60)

(256, 256)

num coeff > 0.1 : 1456
num coeff > 0.2 : 158
num coeff > 0.3 : 44
num coeff > 0.4 : 25
num coeff > 0.5 : 16
num coeff > 0.6 : 8

The lines at the end give you the number of pairs of component indices whose correlation coefficients are bigger than a threshold value. All numbers vary a bit with the selection of the random vectors, but in narrow ranges around the values above. The intermediate part reduces the number of CelebA vectors to a slice where all components have small values < 2.0 with the exception of 2 special components. This reflects z-points close to the plane spanned by the axes for the two selected components.

Now let us extract the component indices which have a significant correlation coefficient > 0.5:

li_ij = []
li_ij_inverse = {}
# threshc  = 0.2      
threshc  = 0.5

ncc = 0
for i in range(0, nd):
    for j in range(0, nd):
        val = corr_coeff[i, j]
        if j != i and abs(val) > threshc: 
            # Check if we have the index pair already 
            if (i, j) in li_ij_inverse:
                continue 
            # save the inverse combination
            li_ij_inverse[(j, i)] = 1
            li_ij.append((i, j))
            print("i =", i, ":: j =", j, ":: corr=", val)
            ncc += 1

print()
print(ncc)
print()
print(li_ij)

We get 16 pairs:

i = 31  :: j = 188 :: corr= -0.5169590614268832
i = 68  :: j = 151 :: corr=  0.6354094560888554
i = 68  :: j = 177 :: corr= -0.5578352818543628
i = 68  :: j = 202 :: corr= -0.5487381785057351
i = 110 :: j = 188 :: corr=  0.5797971250208538
i = 118 :: j = 195 :: corr= -0.647196329744637
i = 151 :: j = 177 :: corr= -0.8085621658509928
i = 151 :: j = 202 :: corr= -0.7664405924287517
i = 151 :: j = 242 :: corr=  0.8231503928254471
i = 177 :: j = 202 :: corr=  0.7516815584868468
i = 177 :: j = 242 :: corr= -0.8460097558498094
i = 188 :: j = 210 :: corr=  0.5136571387916908
i = 188 :: j = 230 :: corr= -0.5621165900366926
i = 195 :: j = 242 :: corr=  0.5757354150766792
i = 202 :: j = 242 :: corr= -0.6955230633323528
i = 210 :: j = 230 :: corr= -0.5054635808381789

16

[(31, 188), (68, 151), (68, 177), (68, 202), (110, 188), (118, 195), (151, 177), (151, 202), (151, 242), (177, 202), (177, 242), (188, 210), (188, 230), (195, 242), (202, 242), (210, 230)]

You note, of course, that most of these are components which we already identified as the dominant ones for the orientation and length of our latent vectors. Below you see a plot of the number distributions for the values which the most important components take:

Visualization of the correlations

It is instructive to look at plots which directly visualize the correlations. Again a code snippet:

import numpy as np
import matplotlib.pyplot as plt   # needed for the plots below
num_per_row = 4
num_rows    = 4
num_examples = num_per_row * num_rows

li_centerx = []
li_centery = []
li_centerx.append(0.0)
li_centery.append(0.0)

#num of plots
n_plots = len(li_ij)
print("n_plots = ", n_plots)

plt.rcParams['figure.dpi'] = 96 
fig = plt.figure(figsize=(16, 16))
fig.subplots_adjust(hspace=0.2, wspace=0.2)

#special CelebA point 
n_spec_pt = 90415

# statistical vectors for b=4.0 
delta = 4.0
num_stat = 10
ay_delta_stat = np.random.uniform(-delta, delta, size = (num_stat, z_dim))   # z_dim = 256

print("shape of ay_sel_pts : ", ay_sel_pts.shape)

n_pair = 0 
for j in range(num_rows): 
    if n_pair == n_plots:
        break
    offset = num_per_row * j
    # move through a row 
    for i in range(num_per_row): 
        if n_pair == n_plots:
            break
        j_c1 = li_ij[n_pair][0]
        j_c2 = li_ij[n_pair][1]
        li_c1 = []
        li_c2 = []
        for npl in range(0, num_pts): 
            #li_c1.append( z_points[npl][j_c1] )  
            #li_c2.append( z_points[npl][j_c2] )  
            li_c1.append( ay_sel_pts[j_c1][npl] )  
            li_c2.append( ay_sel_pts[j_c2][npl] )  
        
        # special CelebA point 
        li_spec_pt_c1=[]
        li_spec_pt_c2=[]
        li_spec_pt_c1.append( z_points[n_spec_pt][j_c1] )  
        li_spec_pt_c2.append( z_points[n_spec_pt][j_c2] )  
        
        # statistical vectors 
        li_stat_pt_c1=[]
        li_stat_pt_c2=[]
        for n_stat in range(0, num_stat):
            li_stat_pt_c1.append( ay_delta_stat[n_stat][j_c1] )  
            li_stat_pt_c2.append( ay_delta_stat[n_stat][j_c2] )  
        
        # plot 
        sp_names = [str(j_c1)+' - '+str(j_c2)]
        axc = fig.add_subplot(num_rows, num_per_row, offset + i +1)
        #axc.axis('off')
        axc.scatter(li_c1, li_c2, s=0.8 )
        axc.scatter(li_stat_pt_c1, li_stat_pt_c2, s=20, color="red", alpha=0.9 )
        axc.scatter(li_spec_pt_c1, li_spec_pt_c2, s=80, color="black" )
        axc.scatter(li_spec_pt_c1, li_spec_pt_c2, s=50, color="orange" )
        axc.scatter(li_centerx, li_centery, s=100, color="black" )
        axc.scatter(li_centerx, li_centery, s=60, color="yellow" )
        axc.legend(labels=sp_names, handletextpad=0.1)
        n_pair += 1 

        

The result is:

The (5000) blue dots show the component values of the randomly selected latent vectors for CelebA images. The yellow dot marks the origin of the latent space's coordinate system. The red dots correspond to artificially created random vectors for b=4.0. The orange dot marks the values for one selected CelebA image. We also find indications of an ellipsoidal form of the z-point region for the CelebA dataset. But keep in mind that we are only looking at projections onto planes. Also watch the different scales along the two axes!

Interpretation

The plots clearly show some average correlation for the depicted latent vector components (and their related z-points). We also see that many of the artificially created vector components seem to lie within the blue cloud. This appears a bit strange as we had found in the last post that the radii of such vectors do not fit the CelebA vector distribution. But you have to remember that we only look at projections of the real z-points down to some selected 2D-planes within the multi-dimensional space. The location in particular projections does not tell you anything about the radius. In later sections I also show you plots where the red dots quite often fall outside the blue regions of other components.
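To check the radius argument directly one can compare the L2 lengths of both vector sets. A small sketch, reusing the arrays ay_sel_ptsx (selected CelebA latent vectors) and ay_delta_stat (artificial vectors for b=4.0) from the snippets above:

import numpy as np

# L2 radii of the randomly selected CelebA latent vectors ...
r_celeba = np.linalg.norm(ay_sel_ptsx, axis=1)
# ... and of the artificial uniform vectors for b = 4.0
r_stat   = np.linalg.norm(ay_delta_stat, axis=1)

print("CelebA vectors     :  mean radius = ", r_celeba.mean())
# expected for uniform components in [-4, 4]: sqrt(256 * 16/3) ≈ 37
print("artificial vectors :  mean radius = ", r_stat.mean())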

I want to draw your attention to the fact that the origin seems to be located close to the border of the region marked by some components. At least in the present projection of the z-points to the 2D-planes. If we only had the plots above then the origin could also have a position outside the bulk of CelebA z-points. The plots confirm however what we said in the last post: The CelebA vector distribution has its center off the origin.

We also see an indication that the density of the z-points drops sharply towards most of the border regions. In the projections this is not so clear due to the sheer number of points. See the plot below for only 500 randomly selected CelebA vectors and the plots in other sections below.

Border position of the origin with respect to the latent vector distribution for CelebA

Below you find a plot for 1000 randomly selected CelebA vectors, some special components and b=4.0. The components which I selected in this case are NOT the ones with the strongest correlations.

These plots again indicate that the latent space's origin is located in a border region of the CelebA z-point distribution. But as mentioned above: We have to be careful regarding projection effects. However, we also have the plot of all number distributions for the component values; see the last post for this. There we saw that all the curves cover a range of values which includes the value 0.0. Together with the plots above this is actually conclusive: The origin is located in a border region of the latent z-point volume resulting from CelebA images after the training of our Autoencoder.

This fact also makes artificial vector distributions with a narrow spread around the origin, i.e. with b ≤ 2.0, a bit special. The reason is that in certain directions a component value may push the generated artificial z-point beyond the border of the CelebA distribution. The range 1.0 < b < 2.0 had been found to be optimal for our special statistical distribution. The next plot shows red dots for b=1.5.

This does not look too bad for the selected components. So we may still hope that our statistical vectors may lead to reconstructed images of the Decoder which show human faces. But note: The plots are only projections and already one larger component value can be enough to put the z-point into a very thinly populated region outside the main volume of CelebA z-points.

Conclusion

The values for some of the components of the latent vectors which a trained convolutional AE’s Encoder creates for CelebA images are correlated. This is reflected in plots that show an orthogonal projection of the multi-dimensional z-point distribution onto planes spanned by two coordinate axes. Some other components also revealed that the origin of the latent space has a position close to a border region of the distribution. A lot of artificially created z-points, which we based on a special statistical vector distribution with constant probabilities for each of the independent component values, may therefore be located outside the main z-point distribution for CelebA. This might even be true for an optimal parameter b=1.5, which we found in our analysis in the last post.

We will have a closer look at the border topic in the next post:

Autoencoders and latent space fragmentation – IV – CelebA and statistical vector distributions in the surroundings of the latent space origin

 

Autoencoders, latent space and the curse of high dimensionality – II – a view on fragments and filaments of the latent space for CelebA images

I continue with experiments regarding the structure which an Autoencoder [AE] builds in its latent space. In the last post of this series

Autoencoders, latent space and the curse of high dimensionality – I

we have trained an AE with images of the CelebA dataset. The Encoder and the Decoder of the AE consist of a series of convolutional layers. Such layers have the ability to extract characteristic patterns out of input (image) data and save related information in their so-called feature maps. CelebA images show human heads against varying backgrounds. The AE was obviously able to learn the typical features of human faces, hair-styling, background etc. After a sufficient number of training epochs the AE's Encoder produces "z-points" (vectors) in the latent space. The latent space is a vector space which has a relatively low number of dimensions compared with the number of image pixels. The Decoder of the AE was able to reconstruct images from such z-points which resembled the originals closely and with good quality.

We saw, however, that the latent space (or “z-space”) lacks an important property:

The latent space of an Autoencoder does not appear to be densely and uniformly populated by the z-points of the training data.

We saw that this makes the latent space of an Autoencoder almost unusable for creative and generative purposes. The z-points which gave us good reconstructions in the sense of recognizable human faces appeared to be arranged and positioned in a very special way within the latent space. Below I call a CelebA related z-point for which the Decoder produces a reconstruction image with a clearly visible face a "meaningful z-point".

We could not reconstruct “meaningful” images from randomly chosen z-points in the latent space of an Autoencoder trained on CelebA data. Randomly in the sense of random positions. The Decoder could not re-construct images with recognizable human heads and faces from almost any randomly positioned z-point. We got the impression that many more non-meaningful z-points exist in latent space than meaningful z-points.

We would expect such a behavior if the z-points for our CelebA training samples were arranged in tiny fragments or thin (and curved) filaments inside the multidimensional latent space. Filaments could have the structure of

  • multi-dimensional manifolds with almost no extensions in some dimensions
  • or almost one-dimensional string-like manifolds.

The latter would basically be described by a (wiggled) thin curve in the latent space. Its extensions in other dimensions would be small.

It was therefore reasonable to assume that meaningful z-points are surrounded by areas from which no reasonable interpretable image with a clear human face can be (re-) constructed. Paths from a “meaningful” z-point would only in a very few distinct directions lead to another meaningful point. As it would be the case if you had to follow a path on a thin curved manifold in a multidimensional vector space.

So, we had some good reasons to speculate that meaningful data points in the latent space may be organized in a fragmented way or that they lie within thin and curved filaments. I gave my readers a link to a scientific study which supported this view. But without detailed data or some visual representations the experiments in my last post only provided indirect indications of such a complex z-point distribution. And if there were filaments we got no clue whether these were one- or multidimensional.

Important Addendum, 03/18/2023:

I have to correct this post regarding the basic line of thought: Even if we find that the z-points for CelebA images are arranged in filaments the failure we saw in the first post of this series may not have its direct cause in missing these filaments in latent space by randomly chosen z-points. It could also be that we miss a much larger, coherent region where meaningful points are located. The filaments then would correspond to a correlation of certain features, only, which may not be decisive for the reconstruction of a face. So, the investigation of the existence of filaments is interesting – but the explanation of the AE’s reconstruction failure may require a more thorough analysis. I have done the calculations already, but have not yet found the time to write about them. As soon as the posts are ready I am going to provide a link. See also an added comment at the end of this post.

Do we have a chance to get a more direct evidence about a fragmented or filamental population of the latent space? Yes, I think so. And this is the topic of this post.

However, the analysis is a bit complicated as we have to deal with a multidimensional space. In our case the number of dimensions of the latent space is z_dim = 256. No chance to plot any clusters or filaments directly! However, some other methods will help to reduce the dimensionality of the problem and still get some valid representations of the data point correlations. In the end we will have a very strong evidence for the existence of filaments in the AE’s z-space.

Methods to work with data distributions in many dimensions

Below I will use several methods to investigate the z-point distribution in the multidimensional latent space:

  • An analysis of the variation of the z-point number-density along coordinate axes and vs. radius values.
  • An application of t-SNE projections from the standard multidimensional coordinate system onto a 2-dimensional plane.
  • PCA analysis and subsequent t-SNE projections of the PCA-transformed z-point distribution and its most important PCA components down to a 2-dim plane. Note that such an approach corresponds to a sequence of projections:
    1) Linear projections onto PCA rotated coordinates.
    2) A non-linear SNE-projection which scales and represents data point correlations on different scales on a 2-dim plane.
  • A direct view on the data distribution projected onto flat planes formed by two selected coordinate axes in the PCA-coordinate system. This will directly reveal whether the data (despite projection effects) exhibit filaments and voids on some (small?) scales.
  • A direct view on the data distribution projected onto a flat plane formed by two coordinate axes of the original latent space.

The results of all methods combined strongly support the claim that the latent space is neither populated densely nor uniformly on (small) scales. Instead data points are distributed along certain filamental structures around voids.

Layer structure of the Autoencoder

Below you find the layer structure of the AE's Encoder. It has four Conv2D layers. The Decoder has a corresponding reverse structure consisting of Conv2DTranspose layers. The full AE model was constructed with Keras. It was trained on CelebA for 24 epochs with a small step size. The original CelebA images were reduced to a size of 96×96 pixels.

Encoder

Decoder

Number density of z-points vs. coordinate values

Each z-point can be described by a vector, whose components are given by projections onto the 256 coordinate axes. We assume orthogonal axes. Let us first look at the variation of the z-point number density vs. reasonable values for each of the 256 vector-components.

Below I have plotted the number density of z-points vs. coordinate values along all 256 coordinate axes. Each curve shows the variation along one of the 256 axes. The data sampling was done on intervals with a width of 0.25:
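(A minimal sketch of how such curves can be computed – assuming the latent vectors are stored in an array z_points of shape (num_images, 256):)

import numpy as np
import matplotlib.pyplot as plt

bins    = np.arange(-12.0, 12.0, 0.25)        # sampling intervals of width 0.25
centers = 0.5 * (bins[:-1] + bins[1:])

fig, ax = plt.subplots(figsize=(10, 6))
for j in range(z_points.shape[1]):            # one curve per coordinate axis
    counts, _ = np.histogram(z_points[:, j], bins=bins)
    ax.plot(centers, counts, linewidth=0.5)
ax.set_xlabel("coordinate value")
ax.set_ylabel("number of z-points")
plt.show()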

Most curves look like typical Gaussians with a peak at the coordinate value 0.0 with a half-width of around 2.

You see, however, that there are some coordinates which dominate the spatial distribution in the latent vector-space. For the following components the number density distribution is relatively broad and peaks at a center different from the origin of the z-space. To pick a few of these coordinate axes:

 52, center:  5.0,  width: 8
 61; center:  1.0,  width: 3 
 73; center:  0.0,  width: 5.5  
 83; center: -0.5,  width: 5
 94; center:  0.0,  width: 4
116; center:  0.0,  width: 4
119; center:  1.0,  width: 3
130; center: -2.0,  width: 9
171; center:  0.7,  width: 5
188; center:  0.75, width: 2.75
200; center:  0.5,  width: 11
221; center: -1.0,  width: 8

The first number is just an index of the vector component and the related coordinate axis. The next plot shows the number density along some of these specific coordinate axes:

What have we learned?
For most coordinate axes of the latent space the number density of the z-points peaks at 0.0. We see an approximate Gaussian form of the number density distribution. There are around 5 coordinate directions where the distribution has a peak significantly off the origin (52, 130, 171, 200, 221). Along the corresponding axes the distribution of z-points obviously has an elongated form.

If there were only one such special vector component then we would speak of an elongated, ellipsoidal and almost cigar-like distribution with the thickest area at some position along the specific coordinate axis. For a combination of more axes with elongated distributions, each with a center off the origin, we get instead diagonally oriented multidimensional and elongated shapes.

These findings show again that large regions of the latent space of an AE remain empty. To get an idea just imagine a three dimensional space with all data in x-direction culminating at a coordinate value of 5 with a half-width of, let's say, 8. In the other directions y and z we have our Gaussian distributions with a total half-width of 1 around the mean value 0. What do we get? A cigar-like shape confined around the x-axis and stretching over -3 < x < 13. And the rest of the space: More or less empty. We have obviously found something similar at different angular directions of our multidimensional latent space. As the number of special coordinate directions is limited these findings tell us that a PCA analysis could be helpful. But let us first have a look at the variation of number density with the radius value of the z-points.

Number density of z-points vs. radius

We define a radius via an Euclidean L2 norm for our 256-dimensional latent space. Afterward we can reduce the visualization of the z-point distribution to a one dimensional problem. We can just plot the variation of the number density of z-points vs. the radius of the z-points.

In the first plot below the sampling of data was done on intervals of 0.5 .
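A sketch of how such a radius histogram can be produced (again assuming the z_points array):

import numpy as np
import matplotlib.pyplot as plt

radii = np.linalg.norm(z_points, axis=1)      # L2 norm of each latent vector
bins  = np.arange(0.0, 40.0, 0.5)             # bin width of 0.5

plt.hist(radii, bins=bins)
plt.xlabel("radius R")
plt.ylabel("number of z-points")
plt.show()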

The curve does not remain that smooth on smaller sampling intervals. See e.g. for intervals of width 0.05

Still, we find a pronounced peak at a radius of R=16.5. But do not get misled: 16 appears to be a big value, but this is mainly due to the high number of dimensions!

How does the peak in the close vicinity of R=16 fit to the above number density data along the coordinate axes? Answer: Very well. If you assume a z-point vector with an average value of 1 per coordinate direction we actually get a radius of exactly R=16!

But what about Gaussian distributions along the coordinate axes? Then we have to look at resulting expectation values. Let us assume that we fill a vector of dimension 256 with numbers for each component picked statistically from a normal distribution with a width of 1. And let us repeat this process many times. Then what will the expectation value for each component be?

A coordinate value contributes with its square to the squared radius. The math, therefore, requires an evaluation of the integral ∫ x² · g(x) dx per coordinate, with g(x) being the normal distribution. This integral gives us the expectation value for the contribution of each coordinate to the squared vector length (on average). The integral indeed has a resulting value of 1.0 – the variance of a standard normal distribution. From this it follows that the expectation value for the distance according to an Euclidean L2-metric would be avg_radius = sqrt(256) = 16. Nice, isn't it?
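A quick numerical check of this reasoning – a sketch with purely synthetic vectors, not the CelebA data:

import numpy as np

rng = np.random.default_rng()
v = rng.normal(size=(100000, 256))             # standard normal components, sigma = 1

print(np.mean(v**2))                           # expectation value of x**2 per component ≈ 1.0
print(np.mean(np.linalg.norm(v, axis=1)))      # average radius ≈ sqrt(256) = 16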

However, due to the fact that not all Gaussians along the coordinate axes peak at zero, we get, of course, some deviations and the flank of the number distribution on the side of larger radius values becomes relatively broad.

What do we learn from this? Regions very close to the origin of the z-space are not densely populated. And above a radius value of 32, we do not find z-points either.

t-SNE correlation analysis and projections onto a 2-dimensional plane

To get an impression of possible clustering effects in the latent space let us apply a t-SNE analysis. A non-standard parameter set for the sklearn-variant of t-SNE was chosen for the first analysis

from sklearn.manifold import TSNE   # import added; n_components=2 for a 2-dim projection
tsne2 = TSNE(n_components=2, early_exaggeration=16, perplexity=10, n_iter=1000) 
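The projection step itself then looks roughly as follows (a sketch; I assume the latent vectors are available in an array z_points):

import numpy as np
import matplotlib.pyplot as plt

num_show = 20000
idx      = np.random.choice(z_points.shape[0], size=num_show, replace=False)
z_emb    = tsne2.fit_transform(z_points[idx])     # shape (num_show, 2); this takes a while

plt.scatter(z_emb[:, 0], z_emb[:, 1], s=0.5)
plt.show()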

The first plot shows the result for 20,000 randomly selected z-points corresponding to CelebA images

Also this plot indicates that the latent space is not populated with uniform density in all regions. Instead we see some fragmentation and clustering. But note that this might happen on different length scales. t-SNE arranges its projections such that correlations on different scales get clearly indicated. So the distances in this plot must not be confused with the real spatial distances in the original latent space. The axes of the t-SNE plot do not reflect any axes of the latent space and the plotted distribution is not the real data point distribution after a linear and orthogonal projection onto a plane. t-SNE works non-linearly.

However, the impression of clustering remains for a growing number of z-points. In contrast to the first plot the next plots for 80,000 and 165,000 z-points were calculated with standard t-SNE parameters.

We still see gaps everywhere between locally dense centers. At the center the size of the plotted points leads to overlapping. If one could zoom into some of the centers then gaps would again appear on smaller scales (see more plots below).

PCA analysis and t-SNE-plots of the z-point distribution in the (rotated) PCA coordinate system

The z-point distribution can be analyzed by a PCA algorithm. There is one dominant component and the explained variance smooths out to an almost constant value after the first 10 components.
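A sketch of how this importance curve can be obtained with sklearn (variable names are my own; z_points again holds the latent vectors):

from sklearn.decomposition import PCA
import matplotlib.pyplot as plt

pca = PCA(n_components=256)
pca.fit(z_points)

plt.plot(pca.explained_variance_ratio_, marker='.')
plt.xlabel("PCA component")
plt.ylabel("explained variance ratio")
plt.show()

# rotated and centered z-point coordinates for the scatter plots further below
z_pca = pca.transform(z_points)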

This is consistent with the above findings. Most of the coordinates show rather similar Gaussian distributions and thus contribute in almost the same manner.

The PCA-analysis transforms our data to a rotated coordinate system with its origin at a position such that the transformed z-point distribution gets centered around this new origin. The orthogonal axes of the new PCA-coordinate system point in the directions of the main components.

When projections of all points onto planes formed by two selected PCA axes do not show a uniform distribution but a fragmented one, then we can safely assume that there really is some fragmentation going on.

t-SNE after PCA

Below you see t-SNE-plots for a growing number of leading PCA components up to 4. The filamental structure gets a bit smeared out, but it does not really disappear. Especially the elongated empty regions (voids) remain clearly visible.
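A sketch of the two-step projection behind these plots (my own variable names; standard t-SNE parameters; one may want to subsample for speed):

from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
import numpy as np
import matplotlib.pyplot as plt

n_pca  = 4                                    # 2, 4 or 10 leading PCA components
idx    = np.random.choice(z_points.shape[0], size=80000, replace=False)
z_red  = PCA(n_components=n_pca).fit_transform(z_points[idx])
z_tsne = TSNE(n_components=2).fit_transform(z_red)

plt.scatter(z_tsne[:, 0], z_tsne[:, 1], s=0.5)
plt.show()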

t-SNE after PCA for the first 2 main components – 80,000 randomly selected z-points

t-SNE after PCA for the first 2 main components – 165,000 randomly selected z-points

t-SNE after PCA for the first 4 main PCA components – 165,000 randomly selected z-points

For 10 components t-SNE gets a presentation problem and the plots get closer to what we saw when we directly operated on the latent space.

But still the 10-dim space does not appear to be uniformly populated. Despite an expected smear-out effect due to the non-linear projection the empty areas seem to be at least as many and as extended as the populated areas.

Direct view on the z-point distribution after PCA in the rotated and centered PCA coordinate system

t-SNE blows correlations up to make them clearly visible. Therefore, we should also answer the following question:

On what scales does the fragmentation really happen ?

For this purpose we can make a scatter plot of the projection of the z-points onto a plane formed by the leading two primary component axes. Let us start with an overview and relatively large limiting values along the two (PCA) axes:
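A hedged sketch of how such zoomed scatter plots can be produced (using the z_pca array from the PCA sketch above):

import matplotlib.pyplot as plt

lim = 40.0        # try e.g. 40.0, 2.5, 0.25 for the zoom levels shown below
plt.scatter(z_pca[:, 0], z_pca[:, 1], s=0.5)
plt.xlim(-lim, lim); plt.ylim(-lim, lim)
plt.xlabel("PCA axis 1"); plt.ylabel("PCA axis 2")
plt.show()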

Yeah, a PCA transformation obviously has centered the distribution. But now the latent space appears to be filled densely and uniformly around the new origin. Why?

Well, this is only a matter of the visualized length scales. Let us zoom in to a square of side-length 5 at the center:

Well, not so densely populated as we thought.

And yet a further zoom to smaller length scales:

And eventually a really small square around the origin of the PCA coordinate system:

z-point distribution at the center of a two-dim plane formed by the coordinate axes of the first 2 primary components
The chosen square has its corners at (-0.25, -0.25), (-0.25, 0.25), (0.25, -0.25), (0.25, 0.25).

Obviously, not a dense and not a uniform distribution either! Even after a PCA transformation we still see how thinly the latent space is populated and that the "meaningful" z-points from the CelebA data lie along narrow and curved lines with some point-like intersections. Between such lines we see extended voids.

Let us see what happens when we look at the 2-dim pane defined by the first and the 18th axes of the PCA coordinate system:

Or the distribution resulting for the plane formed by the 8th and the 35th PCA axis:

We could look at other flat planes, but we do not get rid of the line-like structures around void-like areas. This is really a strong indication of filamental structures.

Interpretation of the line patterns:
The interesting thing is that we get lines for z-point projections onto multiple planes. What does this tell us about the structure of the filaments? In principle we have the two possibilities already named above: 1) thin multidimensional manifolds or 2) thin and basically one-dimensional manifolds. If you think a bit about it, you will see that projections of multidimensional manifolds would not give us lines or curves on all projection planes. However curved string- or tube-like manifolds do appear as lines or line segments after a projection onto almost all flat planes. The prerequisite is that the extension of the string in other directions than its main one must really be small. The filament has to have a small diameter in all but one direction.
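To make this argument tangible, here is a small, purely illustrative toy experiment (synthetic data, not CelebA): a thin, slightly noisy 1D curve embedded in a 256-dimensional space shows up as a line-like structure in projections onto arbitrary coordinate planes:

import numpy as np
import matplotlib.pyplot as plt

rng  = np.random.default_rng(42)
t    = np.linspace(0.0, 1.0, 2000)
dirs = rng.normal(size=(3, 256))                        # a few random directions

curve  = np.outer(np.sin(2.0 * np.pi * t), dirs[0])
curve += np.outer(t**2, dirs[1])
curve += np.outer(np.cos(3.0 * np.pi * t), dirs[2])
curve += 0.05 * rng.normal(size=curve.shape)            # small "diameter" in all directions

# projections onto arbitrary coordinate planes - each one again shows a thin curve
fig, axes = plt.subplots(1, 3, figsize=(12, 4))
for ax, (i, j) in zip(axes, [(0, 1), (50, 200), (123, 17)]):
    ax.scatter(curve[:, i], curve[:, j], s=0.5)
    ax.set_title("axes " + str(i) + " - " + str(j))
plt.show()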

So, if the filaments really are one-dimensional string-like objects: Should we not see something similar in the original z-space? Let us for example look at the plane formed by axis 52 and axis 221 in the original z-space (without PCA transformation). You remember that these were axes where the distribution got elongated and had centers at 5 and -1, respectively. And indeed:

Again we see lines and voids. And this strengthens our idea about filaments as more or less one-dimensional manifolds.

The “meaningful” z-points for our CelebA data obviously get positioned on long, very thin and basically one-dimensional filaments which surround voids. And the voids are relatively large regarding their area/volume. (Reminds me of the galaxy distribution in simulations of the development of the early universe, by the way.)

Therefore: Whenever you chose a randomly positioned z-point the chance that you end up in an unpopulated region of the z-space or in a void and not on a filament is extremely big.

Conclusion

We have used a whole set of methods to analyze the z-point distribution of an AE trained on CelebA images. We found that the z-point distribution is dominated by the number density variation along a few coordinate axes. Elongated shapes in certain directions of the latent space are very plausible on larger scales.

We found that the number density distributions along most of the coordinate axes have a thin Gaussian form with a peak at the origin and a half-width of 1. We have no real explanation for this finding. But it may be related to the fact that some dominant features of human faces show Gaussian distributions around a mean value. With Gaussians given we could however explain why the number density vs. radius showed a peak close to R=16.

A PCA analysis finds primary directions in the multidimensional space and transforms the z-point distribution into a corresponding one for orthogonal primary component axes. For logical reasons we can safely assume that the corresponding projections of the z-point distribution onto the new axes would still reveal existing thin filamental structures. Actually, we found lines surrounding voids independently of which flat plane we projected the data onto. This finding indicates thin, elongated and curved but basically one-dimensional filaments (like curved strings or tubes). We could see the same pattern of line-like structures in projections onto flat coordinate planes in the original latent space. The volume of the void areas is obviously much bigger than the volume occupied by the filaments.

Non-linear t-SNE projections onto 2-dim flat hyperplanes, which in addition reproduce and normalize correlations on multiple scales, should make things a bit fuzzier, but still show empty regions between denser areas. Our t-SNE projections all showed signs of complex correlation patterns of the z-points with a lot of empty space between curved structures.

Important Addendum, 03/18/2023:
The following original conclusion is misleading and in parts wrong:

The experiments all in all indicate that z-points of the training data, for which we get good reconstructions, lie within thin filaments on characteristic small length scales. The areas/volumes of the voids between the filaments instead are relatively big. This explains why the chances that a randomly chosen point in the z-space falls into a void are very high.
The results of the last post are consistent with the interpretation that z-points in the voids do not lead to reconstructions by the Decoder which exhibit standard objects of the training images. In the case of CelebA such z-points do not produce images with clear face or head like patterns. Face-like features obviously correspond to very special correlations of z-point coordinates in the latent space. These correlations correspond to thin manifolds consuming only a tiny fraction of the z-space with a volume close to zero.

Due to a new analysis I would like to replace my original statements with a question:

Do our findings of the existence of filaments and large surrounding voids really explain the results of the first post that randomly chosen z-points miss areas in the latent space which allow for a reconstruction of “faces”?

I am going to answer this question in another better prepared post series, soon. To make you a bit curious I leave you with the fact that the following picture shows a face reconstructed by an AE from a randomly selected point in the latent space – with some simple conditions applied:

 

Variational Autoencoder with Tensorflow – XI – image creation by a VAE trained on CelebA

I continue with my series on Variational Autoencoders [VAEs] and related methods to control the KL-loss.

Variational Autoencoder with Tensorflow – I – some basics
Variational Autoencoder with Tensorflow – II – an Autoencoder with binary-crossentropy loss
Variational Autoencoder with Tensorflow – III – problems with the KL loss and eager execution
Variational Autoencoder with Tensorflow – IV – simple rules to avoid problems with eager execution
Variational Autoencoder with Tensorflow – V – a customized Encoder layer for the KL loss
Variational Autoencoder with Tensorflow – VI – KL loss via tensor transfer and multiple output
Variational Autoencoder with Tensorflow – VII – KL loss via model.add_loss()
Variational Autoencoder with Tensorflow – VIII – TF 2 GradientTape(), KL loss and metrics
Variational Autoencoder with Tensorflow – IX – taming Celeb A by resizing the images and using a generator
Variational Autoencoder with Tensorflow – X – VAE application to CelebA images

VAEs fall into a section of ML which is called "Generative Deep Learning". The reason is that we can use VAEs to create images which contain objects with features learned from training images. One interesting category of such objects are human faces – of different color, with individual expressions and features and hairstyles, seen from different perspectives. One dataset which contains such images is the CelebA dataset.

During the last posts we came so far that we could train a CNN-based Variational Autoencoder [VAE] with images of the CelebA dataset. Even on graphics cards with low VRAM. Our VAE was equipped with a GradientTape()-based method for KL-loss control. We still have to prove that this method works in the expected way:

The distribution of data points (z-points) created by the VAE’s Encoder for training input should be confined to a region around the origin in the latent space (z-space). And neighboring z-points up to a limited distance should result in similar output of the Decoder.

Therefore, we have to look a bit deeper into the results of some VAE-experiments with the CelebA dataset. I have already pointed out why creating rather complex images from arbitrarily chosen points in the latent space is a suitable and good test for a VAE. Please remember that our efforts regarding the KL-loss have to do with the following fact:

A standard AE does not create reasonable images/objects from arbitrarily chosen z-points in the latent space.

This eliminates the use of an AE for creative purposes. A VAE, however, should be able to solve this type of task – at least for z-points within a limited region around the latent space's origin. Thus, by creating images from randomly selected z-points with the Decoder of a VAE, which has been trained on the CelebA data set, we cover two points:

  • Test 1: We test the functionality of the VAE-class, which we have developed and which includes the code for KL-loss handling via TF2’s GradientTape() and Keras’ train_step().
  • Test 2: We test the ability of the VAE’s Decoder to create images with convincing human-like face and hairstyle features from random z-points within an area close to the origin of the latent space.

Most of the experiments discussed below follow the same prescription: We take our trained VAE, select some random points in the latent space, feed the z-point-data into the VAE’s Decoder for a prediction and plot the images created on the Decoder’s output side. The Encoder only plays a role when we want to test reconstruction abilities.
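In code the prescription looks roughly like this (a sketch; the name of the Decoder model – here simply decoder – depends on your own VAE class):

import numpy as np
import matplotlib.pyplot as plt

n_to_show = 28
z_dim     = 256

# random z-points close to the origin and a prediction of the Decoder
z_points    = np.random.normal(size=(n_to_show, z_dim))
reconst_new = decoder.predict(z_points)          # assumed output shape: (n_to_show, 96, 96, 3)

# plot the generated images in a grid
fig = plt.figure(figsize=(16, 8))
for i in range(n_to_show):
    ax = fig.add_subplot(4, 7, i + 1)
    ax.axis('off')
    ax.imshow(reconst_new[i])
plt.show()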

For a low dimension z_dim=256 of the latent space we will find that the generated images display human faces reasonably well. But the images appear a bit blurry or unsharp – as if not fully focused. So, we need to discuss what we can do about this point. I will also name some plausible causes for the loss of accuracy in the representation of details.

Afterwards I want to show you that a VAE Decoder reconstructs original images relatively badly from the z-points calculated by the Encoder. At least when one looks at details. A simple AE with a sufficiently high dimension of the latent space performs much better. One may feel disappointed about the reconstruction ability of a VAE. But actually it is the ability of a VAE to forget about details and instead to focus on general features which enables it (the VAE) to create something meaningful from randomly chosen z-points in the latent space.

In a last step in this post we are going to look at images created from z-points with a growing distance from the origin of the multidimensional latent space [z-space]. (Distance can be defined by a L2-Euclidean norm). We will see that most z-points which have some z-coordinates above a value of 3 produce fancy images where the face structures get dominated or distorted by background structures learned from CelebA images. This effect was to be expected as the KL-loss enforced a distribution of the z-points which is confined to a region relatively close to the origin. Ideally, this distribution would be characterized by a normal distribution in all coordinates with a sigma of only 1. So, the fact that z-points in the vicinity of the origin of the latent space lead to a construction of images which show recognizable human faces is an indirect proof of the confining impact of the KL-loss on the z-point distribution. In another post I shall deliver data which prove this more directly.

Below I will call the latent space of a (V)AE also z-space.

Characteristics of the VAE tested

Our trained VAE with four Conv2D-layers in the Encoder and 4 corresponding Conv2DTranspose-Layers in the Decoder has the following basic characteristics:

(Encoder-CNN-) filters=(32,64,128,256), kernels=(3,3), stride=2,
reconstruction loss = BCE (binary crossentropy), fact=5.0, z_dim=256

The order of the filter- (= map-) numbers is, of course, reversed for the Decoder. The factor fact to scale the KL-loss in comparison to the reconstruction loss was chosen to be fact=5, which led to a 3% contribution of the KL-loss to the total loss during training. The VAE was trained on 170,000 CelebA images with 24 epochs and a small epsilon=0.0005 plus the Adam optimizer.

When you perform similar experiments on your own you may notice that the total loss values after around 24 epochs ( > 5015) are significantly higher than those of comparable experiments with a standard AE (4850). This already is an indication that our VAE will not reproduce a similarly good match between an image reconstructed by the Decoder and the original input image fed into the Encoder.

Results for z-points with coordinates taken from a normal distribution around the origin of the latent space

The picture below shows some examples of generated face-images coming from randomly chosen z-points in the vicinity of the z-space’s origin. To calculate the coordinates of such z-points I applied a normal distribution:

z_points = np.random.normal(size = (n_to_show, z_dim)) # n_to_show = 28

So, what do the results for z_dim=256 look like?

Ok, we get reasonable images of human-like faces. The variations in perspective, face forms and hairstyles are also clearly visible and reflect part of the related variety in the training set. You will find more variations in more images below. So, we take this result as a success! In contrast to a pure AE we DO get something from random z-points which we clearly can interpret as human faces. The whole effort of confining z-points around the origin and at the same time of smearing out z-points with similar content over a region instead of a fixed point-mapping (as in an AE) has paid off. See for comparison:
Autoencoders, latent space and the curse of high dimensionality – I

Unfortunately, the images and their details appear a bit blurry and not very sharp. Personally, this reminded me of the times when the first CCD-chips with relatively low resolution were introduced in cameras and the raw image data looked disappointing as long as we did not apply some sharpening filters. The basic information to enhance details was there, but it had to be used explicitly to improve the plain raw data of the CCD.

The quality in details is about the same as what we see in example images in the book of D. Foster on "Generative Deep Learning", 2019, O'Reilly – despite the fact that Foster used a slightly higher resolution of the input images (128x128x3 pixels). The higher input resolution there also led to a higher resolution of the maps of the innermost convolutional layer. Regarding quality see also the images presented in:
https://datagen.tech/guides/image-datasets/celeba/

Enhancement processing of the images ?

Just for fun, I took a screenshot of my result, saved it and applied two different sharpening filters from the ShowFoto program:

Much better! And we do not have the impression that we added some fake information to the images by our post-processing ….

Now I can already hear arguments saying that such an enhancement should not be done. Frankly, I do not see any reason against post-processing of images created by a VAE-algorithm.

Remember: This is NOT about reproduction quality with respect to originals or a close-to-reality show. This is about generating new images of human-like faces based on basic features which a VAE-algorithm hopefully has learned from training images. All of what we do with a VAE is creative. And it also comes close to a proof that ML-algorithms based on convolutional layers really can "learn" something about the basic features of objects presented to them. (The learned features are e.g. in the Encoder's case saved in the sensitivity of the convolutional maps to typical patterns in the input images.)

And as in the case of raw-images of CCD or CMOS camera chips: Sometimes some post-processing is required to utilize the information optimally for sharpness.

Sharpening by PIL’s enhancement functionality

Of course we do not want to produce images in a ML run, take screenshots and sharpen each image individually. We need some tool that fits into the ML process pipeline. The good old PIL library for Python offers sharpening as one of multiple enhancement options for images. The next examples are results from the application of a PIL enhancement procedure:

These images look quite OK, too. The basic code fragment I used for each individual image in the above grid:

    # imports assumed: from PIL import Image, ImageEnhance; import numpy as np
    # reconst_new is the output from my VAE's Decoder 
    ay_img      = reconst_new[i, :,:,:] * 255
    ay_img      = np.asarray(ay_img, dtype="uint8" )
    img_orig    = Image.fromarray(ay_img)
    img_shr_obj = ImageEnhance.Sharpness(img_orig)
    sh_factor   = 7   # Specified Factor for Enhancing Sharpness
    img_sh      = img_shr_obj.enhance(sh_factor)

The sharpening factor I chose was quite high, namely sh_factor = 7.

The effect of PIL’s sharpening factor

Just to further demonstrate the effect of different factors for sharpening by PIL you find some examples below for sh_factor = 0, 3, 6.

sh_factor = 0

sh_factor = 3

sh_factor = 6

Obviously, the enhancement is important to get clearer and sharper images.
However, when you enlarge the images sufficiently you see some artifacts in the form of crossing lines. These artifacts partially exist already in the Decoder's output, but they are enhanced by the sharpening mechanism used by PIL (unsharp masking). The artifacts become more pronounced with a growing sh_factor.
Hint: According to ML-literature the use of Upsampling layers instead of Conv2DTranspose layers in the Decoder may reduce such artifacts a bit. I have not yet tried it myself.
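For illustration, a hedged sketch of the two alternatives for one Decoder stage in Keras (not the exact layer definitions of my VAE):

from tensorflow.keras import layers

# variant used in this post: transposed convolution (may produce checkerboard-like artifacts)
def decoder_stage_transpose(x, n_maps):
    return layers.Conv2DTranspose(n_maps, (3, 3), strides=2, padding='same',
                                  activation='relu')(x)

# alternative: upsampling followed by a normal convolution
def decoder_stage_upsample(x, n_maps):
    x = layers.UpSampling2D(size=(2, 2))(x)
    return layers.Conv2D(n_maps, (3, 3), padding='same', activation='relu')(x)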

Assessment

How do we assess the point of relatively unclear, unsharp images produced by our VAE? What are plausible reasons for the loss of details?

  1. Firstly, AEs with a latent space dimension of z_dim=256 in general do not reconstruct brilliant images from z-points in the latent space. To get a good reconstruction quality even from an AE, which does nothing else than compress and reconstruct images of size (96x96x3), z_dim values > 1000 are required in my experience. More about this in another post in the future.
  2. A second important aspect is the following: Enforcing a compact distribution of similar images in the latent space via the KL-loss automatically introduces a loss of detail information. The KL-loss is designed to lead to a smear-out effect in z-space. Only basic concepts and features will be kept by the VAE to ensure a similarity of neighboring images. Details will be omitted and “smoothed” out. This has consequences also with respect to sharpness of detail structures. A detail as an eyebrow in a face is to be considered as an average of similar details found for images in the same region of the z-space. This alone brings some loss of clarity with it.
  3. Thirdly, a simple (V)AE based on some directly connected Conv2D-layers has limited capabilities in general. The reason is that we systematically reduce resolution whilst information is propagated from one Conv2D layer to the next neighboring one. Remember that we use a stride ≥ 2 or pooling layers to cover filters on larger image scales. Due to this information processing a convolutional network automatically suppresses details in its inner layers – their resolution shrinks with growing distance from the input layer. In later posts of this blog we shall see that using ResNets instead of CNNs in the Encoder and Decoder already helps a bit regarding the reconstruction of clearer images. The correlation between details and large-scale information is kept up better there than in plain CNNs.

Regarding the first point one may think of increasing z_dim. This may not be the best idea. It contradicts the whole idea of a VAE which at its core is a reduction of the degrees of freedom for z-points. For a higher dimensional space we may have to raise the ratio of KL-loss to reconstruction loss even further.

Regarding the third point: Of course it would also help to increase kernel sizes for the first two Conv2D layers and the number of maps there. A higher resolution of the input images would also be of advantage. Both methods may, however, conflict with your VRAM or GPU time limits.

If the second point were true then a reduction of fact in our models, which controls the ratio of KL-loss to reconstruction loss, would lead to a better image quality. In this case we are doomed to find an optimal value for fact – satisfying both the need for generalization and clarity of details in our images. You cannot have both … here we see a basic problem related to VAEs and the creation of realistic images. Actually, I tried this out – the effect is there, but the gain actually is not worth the effort. And for too small values of fact we eventually lose the ability to create reasonable images from arbitrary z-points at all.

All in all post-processing appears to be a simple and effective method to get images with somewhat sharper details.
Hint: If you want to create images of artificially generated faces with a really high quality, you have to turn to GANs.

Further examples – with PIL sharpening

In this example you see that not all points give you good images of faces. The z-point of the middle image in the second to last row of the first illustration below has a relatively high distance from the origin. The higher the distance from the origin in z-space the weirder the images get. We shall see this below in a more systematic way.

Reconstruction quality of a VAE vs. an AE – or the “female” side of myself

If I were not afraid of copyright and personal rights aspects of using CelebA images directly I could show you now a comparison of the reconstruction ability of an AE with that of a VAE. You find such a comparison, though a limited one, by looking at some images in the book of D. Foster.

To avoid any problems I just tried to work with an image of myself. Which really gave me a funny result.

A plain Autoencoder with

  • an extended latent space dimension of z_dim = 1600,
  • a reasonable convolutional filter sequence of (64, 64, 128, 128)
  • a stride value of stride=2
  • and kernels ((5,5),(5,5),(3,3),(3,3))

is well able to reproduce many detailed features of one's face after a training on 80,000 CelebA images. Below you see the result for an image of myself after 24 training epochs of such an AE:

The left image is the original, the right one the reconstruction. The latter is not perfect, but many details have been reproduced. Please note that the trained AE never had seen an image of myself before. For biometric analysis the reproduction would probably be sufficient.

Ok, so much about an AE and a latent space with a relatively high dimension. But what does a VAE think of me?
With fact = 5.0, filters like (32,64,128,256), (3,3)-kernels, z_dim=256 and after 18 epochs with 170,000 training images of CelebA my image really got a good cure:

My wife just laughed and said: Well, now in the age of 64 at least an AI has found something soft and female in you … Well, had the CelebA included many faces of heavy metal figures the results would have looked differently. I bet …

So with generative VAEs we obviously pay a price: Details are neglected in favor of very general face features and hairstyle aspects. And we lose sharpness. Which is good if you have wrinkles. Good for me and the celebrities, too. 🙂

However, I recommend anybody who wants to study VAEs to check the reproduction quality for CelebA test images (not from the training set). You will see the generalization effect for a broader range of images. And, of course, a better reproduction with smaller values for the ratio of the KL-loss to the reconstruction loss. However, for too small values of fact you will not be able to create realistic face images at all from arbitrary z-points – even if you choose them to be relatively close to the origin of the latent space.

Dependency of the creation of reasonable images on the distance from the origin

In another post in this blog I have discussed why we need VAEs at all if we want to reconstruct reasonable face images from randomly picked points in the latent space. See:
Autoencoders, latent space and the curse of high dimensionality – I

I think the reader is meanwhile convinced that VAEs do a reasonably good job to create images from randomly chosen z-points. But all of the above images were taken from z-points calculated with the help of a function assuming a normal distribution in the z-space coordinates. The width of the resulting distribution around the origin is of course rather limited. Most points lie within a 3 sigma distance around the origin. This is OK as we have put a lot of effort into the KL-loss to force the z-points to approach such a normal distribution around the origin of the latent space.

But what happens if and when we increase the distance of our random z-points from the origin? An easy way to investigate this is to create the z-points with a function that creates the coordinates randomly, but uniformly distributed in an interval [-limit, limit]. The chance that at least one of the coordinates gets a high absolute value is rather big then. This in turn ensures relatively high radius values (in terms of an L2 distance norm).

Below you find the results for z-points created by the function random.uniform:

r_limit = 1.5
l_limit = -r_limit
znew = np.random.uniform(l_limit, r_limit, size = (n_to_show, z_dim))

r_limit is varied as indicated:

r_limit = 0.5

r_limit = 1.0

r_limit = 1.5

r_limit = 2.0

r_limit = 2.5

r_limit = 3.0

r_limit = 3.5

r_limit = 5.0

r_limit = 8.0

Well, this proves that we get reasonable images only up to a certain distance from the origin – and only in certain areas or pockets of the z-space at higher radii.

Another notable aspect is the fact that the background variations are completely smoothed out at low distances from the origin. But they get dominant in the outer regions of the z-space. This is consistent with the fact that we need more information to distinguish various background shapes, forms and colors than basic face patterns. Note also that the faces appear relatively homogeneous for r_limit = 0.5. The farther we move away from the origin the larger the volumes become which cover and distinguish certain features of the training images.

Conclusion

Our VAE with the GradientTape()-mechanism for the control of the KL-loss seems to do its job. In contrast to a pure AE the smear-out effect of the KL-loss now allows for the creation of images with interpretable contents from arbitrary z-points via the VAE's Decoder – as long as the selected z-points are not too far away from the z-space's origin. Thus, by indirect evidence we can conclude that the z-points for training images of the CelebA dataset were distributed and at the same time confined around the origin. The strongest indication came from the last series of images. But we pay a price: The reconstruction abilities of a VAE are far below those of AEs. A relatively low number of dimensions of the latent space helps with an effective confinement of the z-points. But it leads to a significant loss in detail sharpness of the generated images, too. However, part of this effect can be compensated by the application of standard procedures for image enhancement.

In the next post
Variational Autoencoder with Tensorflow – XII – save some VRAM by an extra Dense layer in the Encoder
I will discuss a simple trick to reduce the VRAM consumption of the Encoder. In a further post we shall then analyze the confinement of the z-point distribution with the help of more explicit data.
