Autoencoders, latent space and the curse of high dimensionality – I

Recently, I had to give a presentation about standard Autoencoders (AEs) and related use cases. Whilst preparing examples I stumbled across a well-known problem: The AE perfectly solved tasks like the reconstruction of faces hidden in extremely noisy or leaky input images. But the reconstruction of human faces from arbitrarily chosen points in the so-called "latent space" of a standard Autoencoder did not work well.

In this series of posts I want to discuss this problem a bit, as it illustrates why we need Variational Autoencoders for a systematic creation of faces with varying features from points and clusters in the latent space. But the problem also raises some fundamental and interesting questions:

  • about a certain “blindness” of neural networks during training in general, and
  • about the way we save or conserve the knowledge which a neural network has gained about patterns in input data during training.

This post requires experience with the architecture and principles of Autoencoders.

Note, 02/14/2023: I have revised and edited this post to get consistent with new insights from extended experiments with AEs and VAEs.

Standard tasks for conventional Autoencoders

For preparing my talk I worked with relatively simple Autoencoders. I used Convolutional Neural Networks [CNNs] with just 4 convolutional layers to create the Encoder and Decoder parts of the Autoencoder. As typical applications I chose the following:

  • Effective image compression and reconstruction by using a latent space of relatively low dimensionality. The trained AEs were able to compress input images into latent vectors with only a few components and to reconstruct the original image from the compressed format.
  • Denoising of images where the original data were obscured by the superposition of statistical noise and/or statistically dropped pixels. (This is my favorite task for AEs which they solve astonishingly well.)
  • Recolorization of images: The trained AE in this case transforms images with only gray pixels into colorful images.

Such challenges for AEs are discussed in standard ML literature. In a first approach I applied my Autoencoders to the usual MNIST and Fashion MNIST datasets. For the task of recolorization I used the CIFAR 10 dataset. But a bit later I turned to the Celeb A dataset with images of celebrity faces – just to make all of the tasks a bit more challenging.

Standard Autoencoders and low dimensions of the latent space for (Fashion) MNIST and Cifar10 data

My Autoencoders excelled in all the tasks named above – for MNIST, Celeb A and, regarding recolorization, CIFAR 10.

Regarding MNIST and Fashion MNIST, 4-layer CNNs for the Encoder and Decoder are almost an overkill. For MNIST the dimension z_dim of the latent space can be chosen to be pretty small:

z_dim = 12 gives a really good reconstruction quality of (test) images compressed to minimum information in the latent space. z_dim = 4 still gave an acceptable quality, and even with z_dim = 2 most of the test images were reconstructed well enough. The same was true for the reconstruction of images superimposed with heavy statistical noise – such that the human eye could no longer guess the original information. For Fashion MNIST a dimension number 20 < z_dim < 40 gave good results. Also for recolorization the results were very plausible. I shall present these results in other blog posts in the future.
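For readers who want to rebuild such a setup: Below is a minimal sketch of the kind of 4-layer convolutional Encoder/Decoder pair I am talking about, written with Keras/TensorFlow for MNIST-like input. The layer widths, kernel sizes and strides are illustrative assumptions and not my exact code.

import tensorflow as tf
from tensorflow.keras import layers, Model

z_dim = 12                      # latent space dimension, e.g. for MNIST

# --- Encoder: 4 convolutional layers, then a Dense map into the latent space
enc_in = layers.Input(shape=(28, 28, 1))
x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(enc_in)  # 28 -> 14
x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)       # 14 -> 7
x = layers.Conv2D(64, 3, strides=1, padding="same", activation="relu")(x)
x = layers.Conv2D(64, 3, strides=1, padding="same", activation="relu")(x)
x = layers.Flatten()(x)
z = layers.Dense(z_dim, name="z")(x)
encoder = Model(enc_in, z, name="encoder")

# --- Decoder: Dense layer back to a feature map, then 4 transposed convolutions
dec_in = layers.Input(shape=(z_dim,))
x = layers.Dense(7 * 7 * 64, activation="relu")(dec_in)
x = layers.Reshape((7, 7, 64))(x)
x = layers.Conv2DTranspose(64, 3, strides=1, padding="same", activation="relu")(x)
x = layers.Conv2DTranspose(64, 3, strides=1, padding="same", activation="relu")(x)
x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)   # 7 -> 14
dec_out = layers.Conv2DTranspose(1, 3, strides=2, padding="same", activation="sigmoid")(x)  # 14 -> 28
decoder = Model(dec_in, dec_out, name="decoder")

# --- Full Autoencoder: Encoder and Decoder chained together
ae = Model(enc_in, decoder(encoder(enc_in)), name="autoencoder")
ae.compile(optimizer="adam", loss="binary_crossentropy")

For Celeb A images of 96×96×3 pixels you would, of course, adapt the input shape, the feature map sizes and z_dim accordingly.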

Face reconstructions of (noisy) Celeb A images require a relatively high dimension of the latent space

Then I turned to the Celeb A dataset. By the way: I got interested in Celeb A when reading the books of David Foster on "Generative Deep Learning" and of Tariq Rashid, "Make Your First GAN with PyTorch" (see the complete references in the last section of this post).

The Celeb A dataset contains images of around 200,000 faces with varying contours, hairdos and very different, inhomogeneous backgrounds. In addition, the faces are displayed from very different viewing angles.

For a good performance of image reconstruction in all of the named use cases one needs to raise the number of dimensions of the latent space significantly. Instead of 12 latent dimensions as for MNIST we now talk about 200 up to 1200 dimensions for Celeb A – depending on the task the AE gets trained for and, of course, on the quality expectations. For the reconstruction of normal images and for the reconstruction of clear images from noisy input images higher dimension numbers z_dim ≥ 512 gave visibly better results.

Actually, the impressive quality of the reconstruction of test images of faces which were almost totally obscured by the superposition of statistical noise or by the statistical removal of pixels – after a self-supervised training on around 100,000 images – surprised me. (Totalitarian states and security agencies are certainly happy about the superb face reconstruction capabilities of even simple AEs.) Part of the explanation, of course, is that 20% un-obscured or un-blurred pixels out of almost 30,000 pixel values still means around 6,000 clear values – obviously enough for the AE to choose the right superposition of patterns to compose a plausible clear image.
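To give an idea of how such obscured training inputs can be produced: the following small sketch superimposes Gaussian noise and randomly keeps only about 20% of the pixels. The noise level and the keep-fraction are illustrative assumptions.

import numpy as np

def obscure(images, noise_std=0.3, keep_fraction=0.2, rng=np.random.default_rng(42)):
    # images: float array in [0, 1] with shape (N, H, W, C)
    noisy = images + rng.normal(0.0, noise_std, size=images.shape)
    # keep ~20 % of the pixels, set the other ~80 % to zero
    mask = rng.random(images.shape[:3])[..., None] < keep_fraction
    return np.clip(noisy * mask, 0.0, 1.0)

# Training pairs for the denoising task: (obscured input, clean target), e.g.
# ae.fit(obscure(x_train), x_train, epochs=25, batch_size=128)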

Note that we are not talking about overfitting here – the Autoencoder handled test images, i.e. images which it had never seen before, very well. AEs based on CNNs just seem to extract and use patterns characteristic for faces extremely effectively.

But how is the target space of the Encoder, i.e. the latent space, filled for Celeb A data? Do all points in the latent space give us images with well recognizable faces in the end?

Face reconstruction after a training based on Celeb A images

To answer the last question I trained an AE with 100,000 images of Celeb A for the reconstruction task named above. The dimension of the latent space was chosen to be z_dim = 200 for the results presented below. (Actually, I used a VAE with a tiny amount of KL loss – scaled by a factor of 1.e-6 relative to the standard Binary Cross-Entropy loss for reconstruction – to get at least a minimum confinement of the z-points in the latent space. But the results are basically similar to those of a pure AE.)

My somewhat reworked and centered Celeb A images had a size of 96×96 pixels. So the original feature space had 27,648 dimensions (almost 30,000). The challenge was to reproduce the original images from latent data points created from test images presented to the Encoder. To be more precise:

After a certain number of training epochs we feed the Encoder (with fixed weights) with test images the AE has never seen before. Then we get the components of the vectors from the origin to the resulting points in the latent space (z-points). After feeding these data into the Decoder we expect the reproduction of images close to the test input images.
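In code (assuming separate encoder and decoder models as in the sketch further above) this test flow amounts to just two prediction steps:

# x_test: unseen test images, e.g. of shape (N, 28, 28, 1)
z_points = encoder.predict(x_test)          # latent vectors (z-points) of the test images
reconstructions = decoder.predict(z_points) # images reconstructed from the z-points
print(z_points.shape)                       # (N, z_dim)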

With a balanced training controlled by an Adam optimizer I already got a good resemblance after 10 epochs. The reproduction got better, and very acceptable also with respect to tiny details, after 25 epochs. Due to possible copyright and personal rights violations I do not dare to present the results for general Celeb A images in a public blog. But you can write me a mail if you are interested.

Most of the data points in the latent space were created in a region of 0 < |x_i| < 20 with x_i meaning one of the vector components of a z-point in the latent space. I will provide more data on the z-point distribution produced by the Encoder in later posts of this mini-series.

Face reconstruction from randomly chosen points in the latent space

Then I selected arbitrary data points in the latent space with randomly chosen and uniformly distributed components 0 < |x_i| < boundary. The values for boundary were systematically enlarged.

Note that most of the resulting points will have a tendency to be located in outer regions of the multidimensional cube with an extension in each direction given by boundary. This is due to the big chance that at least one of the many components will get a relatively high value.
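A small numerical sketch illustrates this concentration effect for uniformly sampled z-points (the values are illustrative; the decoding step assumes a decoder model as sketched further above):

import numpy as np

rng = np.random.default_rng(0)
z_dim, boundary, n = 200, 10.0, 100

# uniformly distributed components with |x_i| < boundary
z_points = rng.uniform(-boundary, boundary, size=(n, z_dim))

# The radii concentrate around boundary * sqrt(z_dim / 3) (about 82 for these values):
radii = np.linalg.norm(z_points, axis=1)
print(radii.min(), radii.mean(), radii.max())

# Feeding the points into the Decoder:
# images = decoder.predict(z_points)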

Then I fed these arbitrary z-points into the Decoder. Below you see the results after 10 training epochs of the AE; I selected only 10 of 100 data points created for each value of boundary (the images all look more or less the same regarding the absence or blurring of clear face contours):

[Decoded images for randomly chosen z-points, shown for boundary = 0.5, 2.5, 5.0, 8.0, 10.0, 15.0, 20.0, 30.0 and 50.0]

This is more a collection of face hallucinations than of usable face images. (Interesting for artists, maybe? Seriously meant …).

So, most of the points in the latent space of an Autoencoder do NOT represent reasonable faces. Sometimes our random selection came close to a region in latent space where the results do resemble a face. See e.g. the central image of the row for boundary = 10.

From the images above it becomes clear that an arbitrary path through the latent space will contain more points which do NOT give you a reasonable face reproduction than points which result in plausible face images – despite a successful training of the Autoencoder.

This result supports the impression that the latent space of well trained Autoencoders is almost unusable for creative purposes. It also raises the interesting question of what the distribution of "meaningful points" in the latent space really looks like. I do not know whether this has been investigated in depth at all. Some links to publications which prove a certain scientific interest in this question are given in the last section of this post.

I also want to comment on an article published in Quanta Magazine lately: "Self-Taught AI Shows Similarities to How the Brain Works". This article refers to "masked" Autoencoders and self-supervised learning. Reconstructing masked images, i.e. images with a superimposed mask hiding or blurring pixels, with a reasonably equipped Autoencoder indeed works very well. Regarding this point I totally agree – also with the term "self-supervised learning".

But to suggest that an Autoencoder with this (rather basic) capability reflects methods of the human brain is, in my opinion, a massive exaggeration. On the contrary, an AE reflects a certain dumbness regarding the storage and usage of otherwise well extracted feature patterns. This is due to its construction and the nature of its mapping of image contents to the latent space. A child can, after some teaching, draw characteristic features of human faces – out of nothing, on a plain white piece of paper. The Decoder part of a standard Autoencoder (in some contrast to a GAN) cannot – at least not without help to pick a meaningful point in the latent space. And this difference is a major one, in my opinion.

A first interpretation – the curse of many dimensions of the latent space

I think the reason why arbitrary points in the multi-dimensional latent space cannot be mapped to images with recognizable faces is yet another effect of the so-called "curse of high dimensionality" – but this time related to the latent space.

A normal Autoencoder (i.e. one without the Kullback-Leibler loss) uses the vast extension of the latent space to produce points where the typical properties (features) of faces and background are encoded in a way that is as unique as possible for each of the input pictures. But the distinct volume filled by such points is pretty small compared to the extension of the high-dimensional latent space. The volume of data points resulting from a mapping of arbitrary points in the original feature space to points of the latent space is of course much bigger than the volume of points which correspond to images showing typical human faces.

This is due to the fact that there are many more images with arbitrary pixel values in the original feature space of the input images (with, let's say, 30,000 dimensions for 100×100 color pixels) than images with reasonable pixel values for faces in front of some background. The volume of points in the feature space which correspond to reasonable images of faces (right colors and dominant pixel values for face features) is certainly small compared to the extension of the original feature space. Therefore: If you pick a random point in latent space – even within a confined (but multidimensional) volume around the origin – the chance that this point lies outside the particular volume of points which make sense regarding face reproduction is big. I guess that for z_dim > 200 the probability is pretty close to 1.
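A simple calculation illustrates how fast a confined, ball-shaped region shrinks relative to its enclosing cube when the number of dimensions grows – a sketch of the general effect, not tied to a particular AE:

import numpy as np
from scipy.special import gammaln

def ball_to_cube_ratio(d, r=1.0):
    # volume of a d-dimensional ball of radius r divided by the volume of the
    # enclosing cube with side length 2r, computed in log-space for stability
    log_ball = (d / 2.0) * np.log(np.pi) - gammaln(d / 2.0 + 1.0) + d * np.log(r)
    log_cube = d * np.log(2.0 * r)
    return np.exp(log_ball - log_cube)

for d in (2, 10, 50, 200):
    print(d, ball_to_cube_ratio(d))
# the ratio collapses towards zero extremely fast with growing dimension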

In addition: As the mapping of a neural Encoder network such as a CNN is highly non-linear, it is difficult to say what the boundary hypersurfaces of the mapping regions for faces look like. They may be complicated – but due to the enormous number of original images with arbitrary pixel values we can safely guess that they enclose a rather small volume.

The manifold of data points in the z-space giving us recognizable faces in front of a reasonably separated background may follow a curved and wiggly "path" through the latent space. In principle there could even be isolated, unconnected regions separated by areas of "chaotic reconstructions".

I think this line of argument holds for standard Autoencoders and for Variational Autoencoders with a very small KL loss in comparison to the reconstruction loss (BCE (binary cross-entropy) or MSE).

Why do Variational Autoencoders [VAEs] help?

The first point is: VAEs reduce the total occupied volume of the latent space. Due to the mu-related term in the Kullback-Leibler loss the whole distribution of z-points gets condensed into a limited volume around the origin of the latent space.

The second reason is that the distributions of meaningful points are smeared out by the logvar-related term of the Kullback-Leibler loss.

Both effects enforce overlapping regions of meaningful, standard Gaussian-like z-point distributions in the latent space. So VAEs significantly increase the probability to hit a meaningful z-point in the latent space – if you choose points around the origin within a distance of roughly "1" per coordinate (or vector component).

The total distance of a point and its vector in z-space has to be measured with some norm, e.g. the Euclidean one. Actually we should get meaningful reconstructions around a multidimensional sphere of radius "16". Why this is reasonable will be discussed in forthcoming posts.
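For reference, the KL term for the mu and logvar outputs of a VAE Encoder can be sketched as follows (standard formulation; the weighting against the reconstruction loss is the tiny factor mentioned above):

import tensorflow as tf

def kl_loss(mu, logvar):
    # KL divergence of N(mu, exp(logvar)) against the standard normal N(0, 1),
    # summed over the latent dimensions and averaged over the batch
    kl_per_point = -0.5 * tf.reduce_sum(1.0 + logvar - tf.square(mu) - tf.exp(logvar), axis=1)
    return tf.reduce_mean(kl_per_point)

# total VAE loss (schematically):
# loss = bce_reconstruction_loss + kl_factor * kl_loss(mu, logvar)

Note that for z-points drawn from a standard multivariate Gaussian the expected distance from the origin is close to sqrt(z_dim).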

Please, also look at the series on the technical realization of VAEs in this blog. The last posts there prove the effects of the KL-loss experimentally for Celeb A data. Below you find a selection of images created from randomly chosen points in the latent space of a Variational Autoencoder with z_dim=200 after 10 epochs.

Conclusion

Enough for today. Whilst standard Autoencoders solve certain tasks very well, they seem to produce very specific data distributions in the latent space for CelebA images: Only certain regions seem to be suitable for the reconstruction of “meaningful” images with human faces.

This problem may have its origin already in the feature space of the original images. There, too, only a small minority of points represents humanly interpretable face images. This becomes obvious when you look at the vast number of possible pixel value combinations in a feature space of, let's say, 96x96x3 = 27,648 dimensions. Each of these dimensions can get a value between 0 and 255, which gives us 256^27,648 possible combinations – an astronomically large number. Only a tiny fraction of these possible images will show reasonable faces in the center with a reasonably structured background around them.
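A one-line check shows just how large this number actually is:

import math
# number of decimal digits of 256**27648, i.e. of all possible 96x96x3 images
print(int(27648 * math.log10(256)) + 1)   # about 66,600 digits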

From a first experiment the chance of hitting a data point in latent space which gives you a meaningful image seems to be small. This result appears to be a variant of the curse of high dimensionality – this time including the latent space.

In a forthcoming post
Autoencoders, latent space and the curse of high dimensionality – II – a view on fragments and filaments of the latent space for CelebA images
we will investigate the z-point distribution in latent space with a variety of tools. And find that this distribution is fragmented and that the z-points for CelebA images are arranged in certain regions of the latent space. In addition we will get indications that the distribution contains filament-like structures.

Links

https://towardsdatascience.com/exploring-the-latent-space-of-your-convnet-classifier-b6eb862e9e55

Felix Leeb, Stefan Bauer, Michel Besserve, Bernhard Schölkopf, "Exploring the Latent Space of Autoencoders with Interventional Assays", 2022, https://arxiv.org/abs/2106.16091v2 // https://arxiv.org/pdf/2106.16091.pdf

https://wiredspace.wits.ac.za/handle/10539/33094?show=full

https://www.elucidate.ai/post/exploring-deep-latent-spaces

Books:
T. Rashid, “GANs mit PyTorch selbst programmieren”, 2020, O’Reilly, dpunkt.verlag, Heidelberg, ISBN 978-3-96009-147-9
D. Foster, “Generatives Deep Learning”, 2019, O’Reilly, dpunkt.verlag, Heidelberg, ISBN 978-3-96009-128-8

 

Blender – complexity inside spherical and concave cylindrical mirrors – II – a step towards the S-curve

In my last post

Blender – complexity inside spherical and concave cylindrical mirrors – I – some impressions

I briefly discussed some interesting sculptures and optical experiments in reality. The basic ideas are worth some optical experiments in the virtual ray-tracing world of Blender. In this post I start with my attempt to reconstruct something like the so-called "S-curve" of the artist Anish Kapoor with Blender meshes.

If you looked at the link I gave in my last article or googled for other pictures of the S-curve, you certainly saw that the metallic surface the artist placed at the Kistefos museum is not just a simple combination of mirrored cylindrical surfaces. It is much more elegant:

The first point is that it consists of one continuous, coherent piece of metal. The surface is deformed and changes its curvature continuously. It shows symmetry and rotational axes. When my wife and I first saw it we stood at a rather orthogonal position opposite of it. We only became aware of the different cylindrical deformations on the left and right side. We wondered what Kapoor had done at the middle vertical axis, as we expected a gap there. Later we went to another position – and there was no gap at all, but a smooth variation of curvatures along the main axes of the object.

The second point is the combination of different curvatures: a cylindrical curvature in vertical direction (mirrored in left/right direction) plus the elegant S-like curvature in horizontal direction. The curvature in vertical direction grows with horizontal distance from the center – it is zero at the central vertical axis. The left and right parts of the object are identical – they reflect a 180° rotation (not a mirroring process) around the central vertical axis. Actually, the gradient disappears at the central rotational axis and along the horizontal symmetry axis. And there is no curvature at all at the central vertical axis.

All in all a lot of different symmetries and smooth curvature transitions! The artist plays with the appeal of symmetries to the human brain. But, at the same time, he breaks symmetry strongly in the visual impression of the viewer with the help of the rules of optics. Wonderful!

In this article I want to tackle the problem of a smooth transition between two cylindrically deformed surfaces in Blender first. The S-curvature is the topic of the next post.

The result first

I first show you what we want to achieve:

We get an impression of the mirroring effects in “viewport shading mode” by adding a sky texture to the world background and a simple textured plane:

The reader may have noticed small dips (indentations) at the centers of the upper and lower edges. I will come back to this point later on. Compared to the real S-curve, a major difference in vertical direction is that Kapoor did not use the full curvature of a half-cylinder at the outermost left and right ends. He may have used only a cut-off part of a half-circle there. But which part of a half-circle you use in the end is a minor point regarding the general construction of such a surface in Blender.

How to get there?

As I use Blender only seldom, I really wondered how to create a surface like the one shown above. Mesh-based or NURBS-based? And how to get a really smooth surface? Regarding the latter point you may think of subdivisions, but this is a wrong approach, as a plain subdivision of a mesh only splits the linear connections between existing vertices. Therefore, if you applied simple subdivision to the object, you would create points not residing on a circle/cylinder/curved surface – which in the end would disturb the optics by visible lines and flat planes, even if you added a smoothing modifier afterward.

The solution in the end was simple and mesh based. There is one important point to note which has to do with rules for object creation in Blender:

You define the resolution of the mesh(es) you are going to construct right at the beginning!

As we need to edit some vertex positions manually, the resolution in first experiments should rather be limited. For a continuous surface we shall apply a surface-smoothing modifier anyway. This modifier rounds off edges a bit – which leads to the "dips" I mentioned. They will be smaller the higher you choose the mesh's resolution – but this is something for a final, polished version.

Constructional steps

All in all there are many steps to follow. I only give a basic outline. Read the Blender manual for details.
Note: I added the application of a modifier in the middle of the steps for illustration purposes. You should skip this step and apply the modifier only in the end. I sometimes experienced strange effects when applying and deleting the modifier during work with vertices.

Step 1: You first create a mesh-based circle. You now decide which number of mesh nodes and which basic resolution of the later surface you want to have. This is done via the tool menu that opens (in Blender version 2.82) in the lower left of the viewport. Let's keep to the standard value of 32 mesh points (vertices). This obviously means that a half-circle later on will contain 17 vertices. All vertices of our first mesh reside exactly on the circle line. The circle's center resides at the global world center. You also see that 4 points of the circle sit on the world axes X and Y. Leave the circle exactly where it is. Do not apply any translation. (It would be hard to realign it with the world axes later on.)
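If you prefer scripting over clicking, the same circle can be created via Blender's Python console (Blender 2.8x API; parameters as assumed above):

import bpy

# mesh circle with 32 vertices at the world origin; fill_type='NOTHING' keeps it
# as an open ring of vertices without any faces
bpy.ops.mesh.primitive_circle_add(vertices=32, radius=1.0,
                                  fill_type='NOTHING',
                                  location=(0.0, 0.0, 0.0))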

Step 2: Change to Edit mode and remove one side of the circle (left of the X-axis) by eliminating the superfluous vertices. Do it such that the end points of the remaining half-circle reside exactly on the X-axis of the world coordinate system. Keep the origin of the mesh where it is. Do NOT close the circle mesh on the X-axis, i.e. do not create a closed loop of vertices!

Step 3: You then add a line mesh in Object mode. This can e.g. be achieved by first creating a path. Move it along the world Y-axis to get some Y-distance from the half-circle (-3m). Select the path by right-clicking and convert the path to a mesh with the help of a menu point. Go to Edit mode again and eliminate vertices (or add some by subdividing) – until the resulting line mesh has exactly the same number of vertices (17) as your half-circle (including the end points). In Object mode set the origin to the mesh's geometry, i.e. its center. Move the line mesh to X = 0. Change its X-dimension to the same value the half-circle has (2m).

Step 4: Rotate the half-circle by 90 degrees around the X-axis to get a basic scene like in the picture below. Join the two meshes to one object.

Step 5: Go to Edit mode and provide missing edges to connect the line segment with the half-circle.

Step 6: Add faces by selecting all vertices and choosing menu point “Face > Grid Fill”.

Hey, this was a major step. Save your results – and make a backup copy for later experiments.

Step 7: Add a Sky Texture to the world. Activate the Cycles renderer. Rotate the object by 90 degrees around the Y-axis. Choose viewport shading mode.

Step 8: Move the object to Z = 1m. Right-click on your object in Object mode; choose "Shade Smooth".

Just to find that you still see the edges of the faces. Smooth is not really smooth, unfortunately.

Step 9: Skip this step in your own experiment and perform it only at the end of the construction. Just to illustrate that the flat surfaces can be eliminated later on, I add a modifier to our object – namely the modifier "Subdivision Surface" – which offers a more intelligent algorithm than "Shade Smooth". Just for testing I give it the following parameters:
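The modifier can also be added with a short script. The subdivision levels below are illustrative assumptions, not necessarily the exact values I used for the renders in this post:

import bpy

obj = bpy.context.active_object
mod = obj.modifiers.new(name="Smoothing", type='SUBSURF')
mod.subdivision_type = 'CATMULL_CLARK'
mod.levels = 2          # subdivisions shown in the viewport
mod.render_levels = 3   # subdivisions used for rendering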

We get:

Much more convincing! You see e.g. at the left side that the corners have been rounded – this will later lead to the dips I mentioned.

Intermediate consideration
We could now duplicate our object, rotate the duplicate and join it with the original. But before we do this, we change the height values of the vertices along the left edge (actually a line segment). From our construction it is clear that corresponding vertices on the half-circle and on the left edge cannot have the same Z-coordinate values – they reside at different heights above the ground. The "Catmull-Clark" algorithm of our modifier therefore creates a surface with gradients and curvature varying in all coordinate directions. There is no real problem with this. However, we would reduce the chance for certain caustics and cascades of multiple reflections on the concave side of the final surface. Cylindrical surfaces (i.e. surfaces with constant curvature) give rise to sharp reflective caustics. To retain a bit of this and to keep the curvature rather constant in Z-direction (whilst varying in X-direction), we are going to adjust the heights of the vertices along the straight left edge to the heights of the vertices along the half-circle.

Step 10: Go to Edit mode. Do NOT move the vertices of the half-circle! Check the Z-value of each of the vertices of the half-circle by selecting them one by one and looking at the information in the sidebar of the Blender interface (View > Sidebar). Change the Z-coordinate of the half-circle vertex's counterpart on the left straight edge to the very same value. Repeat this process for all vertices of the half-circle and the corresponding ones of the straight edge.
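This manual procedure could also be scripted. The following sketch rests on assumptions about the scene – the joined object is the active one, the two boundary rows are the vertices with the extreme world Y-values (separated by 3 m), and both rows run monotonically in Z – so treat it as a starting point rather than a drop-in solution:

import bpy

obj = bpy.context.active_object
bpy.ops.object.mode_set(mode='OBJECT')   # vertex coordinates are easiest to edit in Object mode
mw = obj.matrix_world
mw_inv = mw.inverted()

# world positions of all vertices
wverts = [(v, mw @ v.co) for v in obj.data.vertices]
y_min = min(w.y for _, w in wverts)
y_max = max(w.y for _, w in wverts)
eps = 1.e-4

# boundary rows, each sorted along Z; here row_hi is assumed to be the half-circle
# and row_lo the straight edge – swap them if your arrangement differs
row_hi = sorted([vw for vw in wverts if abs(vw[1].y - y_max) < eps], key=lambda vw: vw[1].z)
row_lo = sorted([vw for vw in wverts if abs(vw[1].y - y_min) < eps], key=lambda vw: vw[1].z)

# copy the Z-heights of the half-circle vertices onto the corresponding edge vertices
for (_, cw), (ev, ew) in zip(row_hi, row_lo):
    new_world = ew.copy()
    new_world.z = cw.z
    ev.co = mw_inv @ new_world
obj.data.update()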

You see that the vertices are now non-equidistantly distributed along the Z-axis on the left side!
This already gives us a slightly different shading in the lower part.

Step 11: Important! Remove the modifier if you applied it. Then: Move the object such that all vertices on the left edge are at Y = 0 and X = 0. For Y = 0 you can just adjust the median of the vertices. Check also that the corners of the half-circle have X = 0 and Y = 3. All vertices of the half-circle should have Y = 3.

Then snap the cursor to the grid at X=0, Y=0, Z=1. Afterward snap the origin of the object to the cursor. The object’s coordinates should now be X=0, Y=0, Z=1.

Step 12: In Object mode: Duplicate the object by SHIFT D + Enter. Do not move the mouse in between; don't touch it. Rotate the active duplicate around the Z-axis by 180 degrees.

Check the coordinates of the vertices of the rotated duplicate. If its right vertices reside at Y = 0 and its left ones at Y = -3, then join the two objects into one. Note: At the middle there are still two rows of vertices. But corresponding vertices of these rows should coincide exactly in their X- (= 0), Y- (= 0) and Z-values. If not, you will see it later by some distortions in the optics.
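The duplicate-rotate-join sequence can likewise be scripted – again a sketch which assumes that the half-surface object is selected and active and that its origin lies on the intended rotation axis (X = 0, Y = 0):

import bpy, math

orig = bpy.context.active_object

bpy.ops.object.duplicate()                # the duplicate becomes the new active object
dup = bpy.context.active_object
dup.rotation_euler[2] += math.pi          # rotate 180 degrees around Z through the object origin

# select both objects and join them into one
orig.select_set(True)
dup.select_set(True)
bpy.context.view_layer.objects.active = dup
bpy.ops.object.join()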

Step 13: Add a metallic material

Place the camera at a suitable position and add the modifier again with the settings given above. Render with the help of the material preview:

Step 14: Add a Sun at almost 180 degrees and play a bit with the sky

We get in full viewport shading:

Watch the sharp edges created by multiple reflections on the left, concave side of the object. This we got due to our laborious adjustment of the Z-coordinates of our central vertices.

Save your result for later purposes!

Adding some elements to the scene

After having created such an object we can move and rotate it as we like. In the following images I mirrored it (2 rotations!). The concave curvature is now at the right side. Then I added a plane with some minimum texture with disturbances. Eventually, I added some objects and extended light sources, plus a change of the sun's color to the red side of the spectrum. (Hint: When moving spacious light sources around relatively close to the object, the reflections should not show any straight-line disturbances. It's a way to test the smoothness of the surface created by the modifier.)

Yeah, one piece of metal with growing cylindrical concave and convex curvatures to the left and the right. We are getting closer to a reconstruction of the S-curve. And have a look at the nice deformations of the reflected images of a red cylinder, a green cone and a blue sphere, which I placed relatively close to the concave surface on the right side. Physics and Blender are fun! But all respect and tribute again to Anish Kapoor for his original idea!

In the next post

Blender – complexity inside spherical and concave cylindrical mirrors – III – a second step towards the S-curve

I will have a look at an additional S-curvature in horizontal direction. Stay tuned …