Autoencoders and latent space fragmentation – II – number distributions of latent vector components

This post series studies the (in-) ability of a trained Autoencoder [AE] to create reasonable human face images from statistical vectors placed in its latent space. In my last post

Autoencoders and latent space fragmentation – I – Encoder, Decoder, latent space

I have described the purposes of the two sub-networks of an AE, i.e. of the Encoder and the Decoder. We saw that the so called latent space plays an important role for the interplay of these sub-networks: Vectors in the latent space – z-points – encode properties of objects presented to the Autoencoder, more precisely to its Encoder. The bunch of objects used during training thus gives us a distribution of vectors and respective z-points within certain regions of the latent space. The Decoder reconstructs objects from latent vectors.

One of my eventual objectives in this series is the creation of new objects of the same class as those presented to the AE during its training. I focus on the special case of images displaying human faces. Therefore, I have trained a convolutional AE with the so called CelebA dataset. After training one may hope that an AE will be able to produce images with new faces, which are not present in CelebA, when we feed the Decoder with suitable z-points. The question is what “suitable z-points” are and where they are located in the latent space.

Of course, I want to use the Decoder’s reconstruction abilities to achieve my goal. To get new faces, a statistical element is a must. The basic idea is to use statistically created z-points as input for the Decoder.

Objective of this post

In the first post of this series I have already indicated that not all vectors in the AE’s latent space may lead to the production of reasonable images. It might well be that we must hit certain confined regions of the latent space. An interesting question, therefore, is the following:

Are all generators of statistical vectors suitable to hit the regions of a latent space which an Autoencoder will fill for certain training objects?

In this and further posts I want to show you that this is not the case. A bad choice of a statistical generating method in combination with the high number of latent space dimensions may lead to a complete failure.

To achieve this insight we must study both the real z-point distribution which an AE creates for CelebA images and the artificial vector or z-point distribution coming from a specific statistical generator. In this post I study a generator which assigns each vector component a value taken from a real number interval with a constant probability density. The number of dimensions N of the latent space shall be N=256.

We shall see that we indeed get confronted with a special side of the curse of high dimensionality and that our artificial z-point distribution does not match the real one for CelebA at all if we do not restrict parameters in a somewhat counter-intuitive way. As a side effect we will learn that our AE organizes the latent vector distributions for CelebA images via functions very similar to Gaussians. Furthermore, we shall see that we speak of just one coherent and confined z-point region with very small extensions. The center of this region sits close to a hypervolume spanned by only a few (out of 256) coordinate axes.

Methods to create statistical z-points

To create statistical z-points in a latent space we have to employ some generating mechanism for respective statistical vectors. See my first post for the correspondence of z-points to latent vectors. There are multiple ways available to create latent vectors. I just name 3 popular ones:

  1. Create a dense homogeneous distribution by filling some volume around the origin with a grid of points.
  2. Use a constant probability density for values in a real number interval [-b, b], pick such values statistically and assign them to individual vector components.
  3. Use one or multiple Gaussian distributions to define the vector component values.

Note that when applying the second and the third method the statistics works on the level of the vector components. These components are handled as independent variables, each obeying a certain probability distribution of assignable real values.

A small calculation shows that method 1 will not work in practice, as such a distribution requires an enormous number of points for high dimensions and a decent resolution. Method 2 looks simple and works well in 2D- and 3D-spaces. The third method requires assumptions about the mean values and standard deviations to use – per vector coordinate.

Therefore it is tempting to use method 2 to create one or more artificial statistical distributions of random vectors in the latent space. This is exactly what we will try in this post – in the hope that a significant part of the resulting points will hit regions which give us reasonable images.
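To make this concrete: a minimal Numpy sketch of method 2 could look like the following. The values of N, b and the number of vectors are, of course, free parameters which you would adapt to your own experiments.

        import numpy as np

        N        = 256       # number of latent space dimensions
        num_vecs = 200000    # number of statistical vectors to create
        b        = 5.0       # half-width of the value interval [-b, b]

        # Method 2: constant probability density per component value in [-b, b]
        rng    = np.random.default_rng(42)
        z_stat = rng.uniform(low=-b, high=b, size=(num_vecs, N))

        print(z_stat.shape)                    # (200000, 256)
        print(z_stat.min(), z_stat.max())      # values close to -b and +b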

If you are interested in mathematical properties of vector distributions created by method 2 in multi-dimensional spaces, you will find them in the following posts of this blog:

Latent spaces – pitfalls of distributing points in multi dimensions – I – constant probability density per dimension
Latent spaces – pitfalls of distributing points in multi dimensions – II – missing specific regions

Why must we hit specific regions of the latent space?

Experience tells us that we will not get reasonable images from arbitrarily and randomly placed z-points in the latent space. (Concrete examples will be given in the next post.) What could a plausible reason be?

Convolutional networks [CNNs] extract patterns and save them in parameters of their layer filters (or neural maps). In another post series I have shown such elementary patterns, which the innermost convolutional layers react sensitively to, for the simple case of MNIST. Patterns correspond to correlations between constituting elements of the objects we present to the neural networks. Such patterns reflect certain features of the objects. The number of elementary patterns a CNN can handle depends on the number of available kernel filters – which is fixed by the network structure. For a trained convolutional Autoencoder it is therefore reasonable to assume that a latent space vector encodes a prescription for the superposition of certain elementary patterns by which the Decoder eventually creates an image. The information for the pattern mixture is encoded both in the length and in the angles of a latent vector, the angles being measured with respect to the many coordinate axes; this multitude of angles describes the orientation of the latent vector in its multi-dimensional space.

It is clear that not all prescriptions for a mixture of elementary patterns will reflect the real pixel correlations of a human face in front of some background. We must therefore assume that the latent space regions filled by a trained AE-Encoder for the training objects are the ones which give us reasonable Decoder results. In principle we could find multiple such regions in different parts of the latent space. They could have particular locations and could be confined to a relatively small volume. Therefore, we should really check that at least a part of our statistical vectors points to those regions.

Number distributions for vector components

A priori we do not know anything about the shape of the z-point distribution which an Autoencoder will create in its latent space for CelebA data. Therefore, we need to get an overview over some properties of such a z-point distribution. As multidimensional spaces with a high number of dimensions like N ≥ 256 cannot be displayed in 3D, we need some other kind of visualization. What we shall use is a kind of spectral display for the coordinate values of our vectors:

We are going to analyze the number distribution for the values of each of the vector components.

I.e., we count the numbers of latent vectors which have values for a certain component in a series of sampling intervals for real values. We do this for all components and for a reasonable total range of values. Note that a distribution for a specific component is a one-dimensional function over ℜ.

Below we will first derive the component related distributions from real Autoencoder vectors for CelebA images. This will tell us already a lot about the orientation, the off-center location and the extensions of the real z-point distribution for CelebA.

Afterward, we will compare the CelebA specific distributions to the number distributions for artificial statistical vector distributions created by method 2.

Number distributions for vector lengths

Another nice and simple method to analyze the compatibility of vector distributions is to compare the number distributions for the vector lengths. For an orthogonal coordinate system of the latent space we can compute the length of a multi-dimensional vector by the Euclidean L2-norm. We will compare the number distribution for the lengths of latent CelebA vectors with the distribution for the lengths of vectors created by method 2. We will call the length of a vector also its “radius”.
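If the latent vectors are available in a Numpy array, the length distribution requires only a few lines. The following sketch assumes that the Encoder output for the CelebA images has been saved beforehand; the file name and the array name z_points are my assumptions, not something fixed by the code of this series.

        import numpy as np

        # Assumption: the Encoder output for all CelebA images was saved beforehand
        z_points = np.load("celeba_latent_vectors.npy")      # shape (num_images, 256)

        radii = np.linalg.norm(z_points, axis=1)              # Euclidean L2-norm per vector
        r_counts, r_bins = np.histogram(radii, bins=100)      # number distribution of the radii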

Network setup

The basic layer structure of my AE was described already in the last post. It is relatively simple. I employ only a few Encoder and Decoder layers. I have ensured the Encoder and Decoder are actually able to solve the basic task of encoding and decoding data of real CelebA images.

We look at the results of two AE networks which differ in the number of convolutional kernel filters used:

  • Test case I: We use 4 Conv2D layers in the Encoder with 32, 64, 128, 256 filters and 4 TransposeConv2D layers in the Decoder with 256, 128, 64, 32 filters, respectively.
  • Test case II: We use 4 Conv2D layers in the Encoder with 64, 64, 128, 128 filters and 4 TransposeConv2D layers in the Decoder with 128, 128, 64, 64 filters, respectively.

This is reflected in the following code snippet. There you also get information on the kernel sizes, strides and padding-methods. The number of dimensions of the latent space is N = z_dim = 256. The activation function is chosen to be Leaky ReLU.

        # Note: The Autoencoder class, INPUT_DIM and n_ch (number of color channels)
        # are assumed to be defined as in the previous posts of this series.
        # Test case I
        AE1 = Autoencoder(
            input_dim                  = INPUT_DIM
            , encoder_conv_filters     = [32,64,128,256]       # We take a bit bigger than D. Foster 
            , encoder_conv_kernel_size = [3,3,3,3]
            , encoder_conv_strides     = [2,2,2,2]
            , encoder_conv_padding     = ['same','same','same','same']

            , decoder_conv_t_filters     = [128,64,32,n_ch]    # !!! n_ch = 1 or 3 
            , decoder_conv_t_kernel_size = [3,3,3,3]
            , decoder_conv_t_strides     = [2,2,2,2]
            , decoder_conv_t_padding     = ['same','same','same','same']
            , z_dim = 256
            , act   = 0                  # activation 0:Leaky ReLU (standard), 1: ReLU, 2: SELU    
        )
        # test case II
        AE2 = Autoencoder(
            input_dim                  = INPUT_DIM
            , encoder_conv_filters     = [64,64,128,128]       # We take a bit bigger than D. Foster 
            , encoder_conv_kernel_size = [5,5,3,3]
            , encoder_conv_strides     = [2,2,2,2]
            , encoder_conv_padding     = ['same','same','same','same']

            , decoder_conv_t_filters     = [128,64,64,n_ch]    # !!! n_ch = 1 or 3 
            , decoder_conv_t_kernel_size = [3,3,5,5]
            , decoder_conv_t_strides     = [2,2,2,2]
            , decoder_conv_t_padding     = ['same','same','same','same']
            , z_dim = 256
            , act   = 0                  # activation 0:Leaky ReLU (standard), 1: ReLU, 2: SELU    
        )

A method to analyze the vector distribution in a high-dimensional vector space

The components of our latent vectors determine their angle and length. We base our analysis of the corresponding z-points on the number distribution per component value. To do this we select a suitable real value interval covering all the values which the AE actually uses for the vector components. We divide this interval into a sufficient number of sub-intervals for data sampling, in our case around 100.

After training of our AE we once again feed all training objects (in our case > 170,000 CelebA images) into the Encoder and keep the vectors in some Numpy arrays. Then we look at a specific component and a sampling interval and count the number of vectors for which the component value resides inside the sampling interval. Repeating this for all components and intervals we get a number distribution which can be plotted. If we are lucky the resulting shapes of the number distributions will give us information about the corresponding shape of the multi-dimensional regions which the AE fills for CelebA images.
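A minimal sketch of this counting procedure with Numpy could look like the following; the sampling range and the array name z_points are assumptions which you would adapt to your own data.

        import numpy as np

        # z_points: latent vectors of all CelebA training images, shape (num_images, 256)
        z_points = np.load("celeba_latent_vectors.npy")      # assumed file, see above

        val_min, val_max, n_bins = -12.0, 12.0, 100
        bin_edges = np.linspace(val_min, val_max, n_bins + 1)

        # one number distribution (histogram) per vector component
        n_comp = z_points.shape[1]
        num_distributions = np.zeros((n_comp, n_bins), dtype=np.int64)
        for j in range(n_comp):
            num_distributions[j], _ = np.histogram(z_points[:, j], bins=bin_edges)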

CelebA images: Number distribution for component values of latent vectors

The following plot shows the number distributions for all of the 256 components of vectors for CelebA images in our trained AE’s latent space:

Case I: Number distribution after 24 epochs

Case I: Selected components

Case II: Number distribution after 30 epochs

Case II: Selected components

We see that the individual number distributions are very similar to Gaussian distributions. For test case II I also give you the values for the central average value μ (named mu in the list below) and the half-width (named hw below) of the most interesting components. The half-width is the difference between those coordinate values where the distribution function achieves a value of half of the maximum number value at μ:

 15 mu : -0.25 :: hw:  1.5
 16 mu :  0.5  :: hw:  1.125
 56 mu :  0.0  :: hw:  1.625
 58 mu :  0.25 :: hw:  2.125
 66 mu :  0.25 :: hw:  1.5
 68 mu :  0.0  :: hw:  2.0
110 mu :  0.5  :: hw:  1.875
118 mu :  2.25 :: hw:  2.25
151 mu :  1.5  :: hw:  4.125
177 mu : -1.0  :: hw:  2.25
178 mu :  0.5  :: hw:  1.875
180 mu : -0.25 :: hw:  1.5
188 mu :  0.25 :: hw:  1.75
195 mu : -1.5  :: hw:  2.0
202 mu : -0.5  :: hw:  2.25
204 mu : -0.5  :: hw:  1.25
210 mu :  0.0  :: hw:  1.75
230 mu :  0.25 :: hw:  1.5
242 mu : -0.25 :: hw:  2.375
253 mu : -0.5  :: hw:  1.0

These components obviously have either a relatively large absolute μ-value or a relatively large half-width. I call these components the dominant ones.
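For completeness: values for mu and hw can be estimated directly from the per-component histograms. The rough sketch below builds on the arrays bin_edges and num_distributions of the previous snippet and uses the position of the maximum and the full width at half maximum as simple estimators.

        import numpy as np

        bin_centers = 0.5 * (bin_edges[:-1] + bin_edges[1:])

        def mu_and_halfwidth(counts, centers):
            # position of the maximum as a simple estimate for mu
            i_max = np.argmax(counts)
            mu = centers[i_max]
            # half-width: distance between the outermost bins with counts >= half the maximum
            above = np.where(counts >= 0.5 * counts[i_max])[0]
            hw = centers[above[-1]] - centers[above[0]]
            return mu, hw

        mu_arr = np.zeros(num_distributions.shape[0])
        hw_arr = np.zeros(num_distributions.shape[0])
        for j in range(num_distributions.shape[0]):
            mu_arr[j], hw_arr[j] = mu_and_halfwidth(num_distributions[j], bin_centers)

        # list the "dominant" components: relatively large |mu| or relatively large half-width
        for j in np.where((np.abs(mu_arr) >= 1.0) | (hw_arr >= 1.5))[0]:
            print(f"{j:3d}  mu: {mu_arr[j]: .2f}  hw: {hw_arr[j]:.2f}")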

Interpretation of the number distributions per vector component

The first thing these plots show is that the results for different networks differ. Without proving it, I also say from experience that even two different training runs for one and the same AE network structure may result in somewhat different number distributions. But although there are some differences there are also striking similarities:

  1. Most of the components show a Gaussian like number distribution about some mean value.
  2. The mean coordinate value μ for most of the components (≥ 90%) is zero or close to it.
  3. The component values cover a region of [-12, 12] in our specific case.
  4. There are only a few components ( ≤ 25) with mean values <μ> ≥ 0.25 or <μ> ≤ -0.25.
  5. There are only a few components ( ≤ 10) with mean values <μ> ≥ +1.0 or <μ> ≤ -1.
  6. There are only relatively few components (≤ 40) with a half-width ≥ 1.
  7. There are only a few components (≤ 20) with a half-width ≥ 1.5.
  8. There are only very few components (≤ 5) with a half-width ≥ 3.

A bit of thinking and imagination tells us that the center of the distribution must be located somewhat off the origin, but very close to or within a hypervolume spanned by only a few dominant coordinate axes. The multi-dimensional region filled by the z-points has significant anisotropic extensions or elongations around its center only in a few directions of the multidimensional space.

All in all we speak about a very specific, limited multi-dimensional region, located at some distance from the origin (but not too far) with a center point close to or within a sub-volume of very low dimensions spanned by some coordinate axes. The overall direction of the center with respect to the origin is well defined by a few coordinates. The diameters of the regions are small in most directions. The z-points concentrate strongly towards the center of this region. Significant diameters around the center are only given in some particular directions. The respective axes involved are less than 10% of the total number.

This also means that a principal component analysis of the z-point distribution should give us only a number of dominant main components of the same order (≈ 0.1 * N). See forthcoming posts for such an analysis.
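A minimal sketch of how such a principal component analysis could be run with scikit-learn, again on the assumed array z_points of CelebA latent vectors, might look like this:

        import numpy as np
        from sklearn.decomposition import PCA

        pca = PCA(n_components=40)      # keep somewhat more than 0.1 * N components
        pca.fit(z_points)               # z_points: CelebA latent vectors, shape (num_images, 256)

        # cumulative fraction of the variance explained by the leading components
        print(np.cumsum(pca.explained_variance_ratio_))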

Analogon in 3D: In an analogous case within a 3-dimensional space we would speak of a kind of ellipsoid with a significant diameter beyond 1 only in a specific direction and a center located close to a line in a plane spanned by 2 of the 3 coordinate axes.

Comparison to the number distribution for statistically created vectors with a constant probability for component values in a specific interval

Now we look at the number distributions for all components of 200,000 vectors created by our method 2. We choose the same region for values [-12, 12] as displayed for our test cases I and II above. The following plot confirms what you certainly have already guessed:

This plot just reflects the design of our probability distribution for each of the components. But the plot also indicates clearly that most of our statistically created points will not hit the latent space region which is filled by the Autoencoder for CelebA data.

The statistics obviously plays against us

It is very instructive to write a small program which scans all of the artificially created vectors and checks whether their end points lie within the region defined by the real CelebA points. I leave this to the reader. To get a good guess I personally defined a region of three times the half-width left and right of each component value center of the real distribution. The number of points fulfilling all criteria for our CelebA region came out to be exactly zero. Even with 12 times the half-width you get only very few vectors pointing into the CelebA region (around 20 vectors). Why is this the case?
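A sketch of such a check is given below. It uses per-component centers mu_arr and half-widths hw_arr of the CelebA distribution (as estimated further above) and the artificial vectors z_stat of method 2; these array names are my assumptions.

        import numpy as np

        # mu_arr, hw_arr : per-component centers and half-widths of the CelebA distribution, shape (256,)
        # z_stat         : artificially created vectors (method 2), shape (num_vecs, 256)
        factor = 3.0                               # 3 times the half-width to each side
        lower  = mu_arr - factor * hw_arr
        upper  = mu_arr + factor * hw_arr

        hits = np.all((z_stat >= lower) & (z_stat <= upper), axis=1)
        print("vectors inside the CelebA region:", np.count_nonzero(hits))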

The curse of a high dimensionality and a constant probability distribution

The first point is that for some components of the CelebA vectors the half-width really is rather small. So even an interval of 12 times the half-width does not cover the whole value range [-12, 12] for such a component, but only around 75% of it. Let us assume that this is the case for 7 to 8 components. Let us further assume that for another 60 components the covered fraction is around 90%. Then the probability to get a point inside the region is (0.75)^8 * (0.9)^60 ≈ 0.1 * 0.0018 ≈ 0.00018. This gives us only about 30 out of 170,000 vectors potentially fulfilling our conditions. We are in that range.

But we know already that 90% of all components should hit an interval [-3, 3]. The probability for this in the case of our method 2 is 0.25 per component. The probability for a hit thus is 0.25^220, which is zero for all practical purposes. This is what really kills our efforts to place a point in the most interesting regions of the latent space with the help of method 2 and seemingly fitting values of b.

Now, you may think: A decisive parameter for method 2 is the interval [-b, b] from which I pick my statistical component values. What if we diminish b, e.g. to b=3 or b=2? This is a good idea, as we shall see in the next paragraph. Yet, you still get only a few vectors (< 10) fulfilling all criteria for two times the half-width.

Let us reduce our b-interval to [-3, 3] for vector creation. Then for a typical vector more than 50 components miss the target region. Test runs show that even for [-2, 2] only around 10 out of 170,000 vectors would hit (outer parts of) the target region for CelebA images. In this case the few components which have off-center mu-values seemingly work against us.

Things change dramatically for b=1.5 and 2 times the half-width: now the components of all the artificial vectors fulfill our criteria. If, however, you tighten the target intervals to 1.5 times the half-width, you are back again to only a very few vectors (< 10). For b=1 and 1.5 times the half-width we again get a number of around 170,000 vectors fulfilling our criteria.

Why does this happen? And does it mean that when we restrict our component values to [-1.5, 1.5] we would cover our real CelebA distribution?

Number distributions for the vector lengths

Below you find a plot showing the number distribution with respect to typical vector lengths – both for CelebA (in red) and vectors artificially created with method 2 (other colors).

The parameter “b” of our artificial distributions defines the interval [-b, b] from which we pick component values. The probability density is a constant for this interval.

The first point you may dwell upon is the fact that the radius values get so big – much bigger than b. This is due to the high number of dimensions; see the posts named above for a mathematical treatment of method 2.

The second counter-intuitive point may be the following: One expects that b=12 should be a reasonable parameter value for method 2 to cover the range of component values for CelebA. But already for b=3 or b=4 we get vectors whose lengths lie outside the interval which the CelebA latent vectors fill. The reason for this lies in properties of the artificial radius distributions which can be derived mathematically. A calculation of the expectation value for the mean length (= radius R) of our artificially generated vectors gives us a value of around

<R>    ≈    b * sqrt(1/3 * N) * sqrt( 1 / (1 + 1/(4*N)) )

with a relatively narrow spread. See the derivation in the posts quoted above; sqrt in the formulas stands for the square root. The standard deviation Δstd(R) has a size of approximately

Δstd(R)    ≈    b * sqrt(1/15)

i.e. the absolute spread of the radius values is almost independent of N.

The ratio of this spread to the mean radius thus declines with the square root of the number of dimensions N. As we see, only the artificial distributions for b=1, b=1.5 and b=2 cover parts of the radius distribution for the latent CelebA vectors.

So, if we want to use method 2 then we should work with b-values 1.0 ≤ b ≤ 2.0 to get a probability > 0 for creating reasonable images of human faces.
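You can get a feeling for these numbers by simply evaluating the approximation formulas given above for N=256 and different values of b, and by comparing the results with the radius range you find for your own CelebA latent vectors. A small sketch:

        import numpy as np

        N = 256
        for b in (1.0, 1.5, 2.0, 3.0, 5.0, 12.0):
            R_mean  = b * np.sqrt(N / 3.0) * np.sqrt(1.0 / (1.0 + 1.0 / (4 * N)))
            R_sigma = b * np.sqrt(1.0 / 15.0)      # absolute spread, almost independent of N
            print(f"b = {b:5.1f}:   <R> = {R_mean:7.2f},   std(R) = {R_sigma:5.2f}")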

Will a proper value of b guarantee us reasonable face images?

We have seen that b must be reduced to a range 1.0 ≤ b ≤ 2.0 to get reasonable radius values of our statistical vectors. Unfortunately, this does not guarantee us proper images either. The reason is that there might be correlations between the dominant component values which our simple number distributions do not reveal. We will take care of this in the next post. For now I just show you a plot of the correlation between two specific dominant components for 1000 randomly selected CelebA latent vectors:
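Such a scatter plot can be produced with a few lines of Matplotlib. The sketch below picks two components from the list of dominant ones given above (118 and 151); the choice of this particular pair and the array name z_points are my assumptions.

        import numpy as np
        import matplotlib.pyplot as plt

        idx = np.random.default_rng(0).choice(len(z_points), size=1000, replace=False)
        c1, c2 = 118, 151                    # two dominant components, see the list above
        plt.scatter(z_points[idx, c1], z_points[idx, c2], s=2)
        plt.xlabel(f"component {c1}")
        plt.ylabel(f"component {c2}")
        plt.show()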

Conclusion

In this post we have partially analyzed the distribution of vectors and related z-points which an Autoencoder creates for CelebA images in its latent space. We have found that the number distributions per vector component look like Gaussian distributions. While most of the components have a small spread around the value zero, there are a few dominant components which determine the (off-center) location and orientation of a coherent, confined and ellipsoidally shaped region for CelebA z-points. The center of this region is close to a hypervolume defined by a few axes.

It was a bit counter-intuitive to see that a simple method to create statistical z-points via a constant probability distribution for individual component values would obviously miss the relevant latent region for CelebA images totally. We saw that we would need very special parameter values to limit the component values to get artificial latent vectors with the required length. These findings alone make it very improbable that arbitrary z-points created without some restrictions for their component values would lead to reasonable face image creation by the Decoder.

Something that we have not yet covered is the question of correlations between vector components. This is the topic of the next post:

Autoencoders and latent space fragmentation – III – correlations of latent vector components

 

Latent spaces – pitfalls of distributing points in multi dimensions – II – missing specific regions


In this post series I discuss results of a private study about some simple statistical vector distributions in multi-dimensional latent vector spaces. Latent spaces often appear in Machine Learning contexts and can be represented by the ℜ^N. My main interest is:

What kind of regions of such a space may we miss by choosing a vector distribution based on a simple statistical creation process?

This problem is relevant for statistical surveys of extended regions in latent vector spaces which were filled by encoding or embedding Neural Networks. A particular reason for such a survey could be the study of the reaction of a Decoder to statistical vectors in an Autoencoder’s latent space. E.g. for creative purposes. During such surveys we want to fill extended regions of the latent space with statistical data points. More precisely: With points defined by vectors reaching out from the origin. The resulting point distribution does not need to be a homogeneous one, but it should cover the whole target volume somehow and should not miss certain sub-regions in it.

Theoretically derived results for a uniform probability distribution per vector component

In my last post

Latent spaces – pitfalls of distributing points in multi dimensions – I – constant probability density per dimension

I derived some formulas for central properties of a very simple statistical vector distribution. We assumed that each component of the vectors could be created independently and with the help of a uniform, constant probability distribution: Each vector component was based on a random value taken from a defined real number interval [-b, b] with a constant and normalized probability density. Obviously, this process treats the components as statistically independent variables.

Resulting vector end points fill a quadratic area in a 2D-space or a cubic volume in 3D-space relatively well. See my last post for examples. The formulas revealed, however, that the end points of our vectors lie within a multi-dimensional spherical shell of an average radius <R>. This shell is relatively broad for small dimensions (N=2,3). But it gets narrower and narrower with a growing number of dimensions N ≥ 4.

In this post I will first test my formulas and approximations for a constant probability density in [-b, b] with the help of a numerical experiment. Afterward I discuss what kind of regions in a latent space we may miss even when we fill a sequence of growing cubes around the origin with statistical points based on our special vector distribution.


Latent spaces – pitfalls of distributing points in multi dimensions – I – constant probability density per dimension


Sometimes when we solve some problems with Deep Neural Networks we have no direct clue about how such a network evolves during training and how exactly it encodes knowledge learned from patterns in the input data. The encoding may not only affect weights of neurons in neural filters or maps, but also the functional mapping of input data onto vectors of some intermediate or target vector space. To get a better understanding it may sometimes seem that experiments with statistical points or vectors in such vector spaces could be helpful. One could e.g. suggest to study the reaction of a trained Artificial Neural Network to input vectors distributed over a multidimensional feature space.

Another important case is the test of an Autoencoder’s Decoder network on data points which we distribute statistically over extended regions of the Autoencoder’s latent space.

An Autoencoder [AE] is an example of an Encoder-Decoder pair of networks. The Encoder encodes original complex input data into vectors of the so called latent space – a multi-dimensional vector space. The Decoder instead uses vectors of the latent space to reconstruct or generate output data in the feature space of the original input objects. The dimensionality of the latent space may be much smaller than that of the original feature space, but still much bigger than the 2 or 3 dimensions we have intuitive experience with.

Latent spaces play a major role in Machine Learning [ML] whenever a deep neural algorithm encodes information. Embedding input data (like words or images) in a vector space by some special neural layers (e.g. for text or image classification) is nothing else than filling a kind of latent space with encoded data in an optimized way to help the neural network to solve a special task like e.g. text or image classification. But we can also use statistical data in the latent space of an AE together with its trained Decoder to create or better generate new objects. This leads us into the field of using neural networks for generative, creative tasks.

Unfortunately, we know from experience that not all data regions in a latent space support the production of objects which we would consider as being similar to our training objects or categories of such objects. A priori we actually know very little about the shape of the data distribution an encoding network will create during training. Therefore we may be tempted to scan the latent space in some way, to distribute data points in it in some statistical manner and to test a Decoder’s reaction to such points, in the hope of finding an interesting region by chance.

In this small series of posts I want to point out that such statistical experiments may be futile. On the one hand, trying to fill a multidimensional space with “statistical” data may lead to unexpected and counter-intuitive results. On the other hand, you may easily miss interesting regions of such a vector space, especially confined regions. The whole point is just another interesting variant of the curse of high dimensions.

The theoretical and numerical results which we will derive during the next posts underline the necessity to analyze the concrete data distributions a neural network creates in a latent space very carefully. Both regarding position and extension of the filled region – and maybe also with respect to sub-structures. The results will help us to understand better why an Autoencoder most often fails to produce reasonable objects from data points statistically distributed over larger regions of its latent space.

This post series requires an understanding of some basic math. In the first post I will derive some formulas describing data distributions resulting from a constant probability density for each vector component, i.e. a constant probability density per dimension. We will learn that with a growing number of dimensions the data points concentrate more and more in regions which are parts of a multi-dimensional sphere shell. Which is at least in my opinion a bit counter-intuitive and therefore an interesting finding in itself.

Many dimensions …

An AE compresses data very effectively. So the dimension of a latent space will most often be significantly smaller than the number of dimensions M characterizing the feature space of the original training objects. The space spanned by logically independent variables which describe digitized objects may have a number of dimensions in the range of millions or tens of millions. In the case of latent spaces we instead have to deal with a number of dimensions N in the range of multiples of 10 up to tens of thousands. (For relatively simple objects showing many correlations in their feature space …)

10 ≤ N ≤ 10^5    << M

Still, such numbers N are far beyond standard experience.

Representation of an N-dimensional space by the ℜ^N

We can in general represent a multidimensional feature or latent space in Machine Learning by the ℜ^N. Each point in this space can be defined by a vector whose components correspond to the point’s coordinate values along the N orthogonal axes. A vector is thus represented by an N-tuple of real (coordinate) values:

(x_1, x_2, …, x_k, …, x_N) .

Formally, the set of all tuples together with some addition and multiplication operations forms a vector space over ℜ. Whether vector operations performed on vectors pointing to different data points in a latent space or a feature space really have a solid, interpretable meaning in your ML- or AE-context is not our business in this post. Our objective is to cover such a space with statistically defined points (or vectors).

The curse of high dimensionality

It seems to be easy to fill a N-dimensional cube with data points that form a grid. More precisely, a grid with a constant distance between the points in all orthogonal directions parallel to the axes. However, a simple calculation shows that such an approach is doomed in computational practice:

Let us assume that we deal with a space of N = 256 dimensions. Let us also assume that we have reasons to believe that we should focus on the region of a cube around the origin that has a side length of 10 in each dimension k. Then the coordinate values x_k for data points within the cube fulfill

\[ -b \lt x_{k} \lt +b \mbox{,} \quad \forall k \: \in [1,256] \quad \mbox{and} \quad b=5
\]

Let us further assume that we want to have a grid resolution of 10 points per dimension. Then we would need to create

\[ 10^{256} \: \mbox{points}
\]

to fill our 256-dimensional cube with a homogeneous (!) point distribution. This is already a very clear indication that we can not systematically study a multidimensional cube in this way. Multidimensional spaces are huge by their nature.

We have to get and use more information about interesting regions or we must make assumptions on the distribution of points in each coordinate direction.

Constant probability density per coordinate value and dimension

A most simple case for making an assumption is a constant probability distribution for each coordinate value x_k within a defined interval along the related coordinate axis. Let us again pick a cube around the origin of a space with N=256:

\[-b \lt x_{k} \lt +b, \quad \forall k \: \in [1,256] \quad \mbox{and} \quad b=5 \]

Let us assume that we only want a resolution of 100 equidistant values per side length of the cube. From a statistical point of view we deal with our dimensions as if they were part of a classical experiment in which we pick balls with imprinted values from a (mixing) black box. In our assumed case we have a box with 100 numbers and fetch a ball 256 times to create a 256-tuple for each vector – of course putting the picked ball back each time. We will come back to this interpretation in a second post. For now let us define a constant probability density and see where we get from there mathematically.

The normalized density distribution in the interval [-b, b] simply is

\[ \rho(x_k) \: = \: \frac{1}{2b} \:, \quad \mbox {for} \: -b \le x_k < b . \]

One may “naively” think the following:
We statistically fetch 256 x_k-values, each from our uniform distribution, and fill our vector N-tuples with these values. Repeating this vector generation process e.g. 200,000 times should give us some widespread point distribution covering our cube very well. Let me warn you:

This is a common mistake – although such a procedure seems to work rather well in 2 or 3 dimensions!

I admit: The whole thing is somewhat counter-intuitive. If you create a homogeneous distribution for each of the coordinate values x_k, shouldn’t the resulting points cover the whole space well? The answer unfortunately is: No.

To get a better understanding we need some more math. And we first reduce the problem to statements about a one-dimensional quantity, namely the vector length. But let us first look at results in 3 dimensions.

Point distribution in a 3-dimensional cube for a constant probability density of each coordinate value

Let us create some 200 points in a cube

\[ 0 \lt x_{k} \lt +b, \quad \forall k \: \in [1,3] \quad \mbox{and} \quad b=10 \]

The following plots show the distribution’s projections onto the planes spanned by pairs of the coordinate axes:

This looks quite OK. All areas seem to be covered. I have marked some points to illustrate their positions. We see that filling a corner in 2D does not necessarily mean that the points are close to the origin of the 3D coordinate system. Now, let us look at a 3D-plot:

I have marked the origin in violet. Notably, extended elongated stripes at the edges, with two coordinates close to extreme values, seem strangely depleted. This is no accident. Below you find a similar plot with 400 points and b=5:

What is the reason? Well, it is simply statistics: To fill the special regions named above you have to fix one or two coordinates at the same time. Let us look at a stripe stretching from one corner, parallel to an axis and perpendicular to the plane {x_1, x_2}:

\[ x_1 \in [0,1] \: \mbox{and} \: x_2 \in [9,10], \quad \mbox{for} \:\: b=10 \]

As we deal with a uniform distribution per coordinate value the chances are 1/10 * 1/10 = 1/100 that a randomly constructed point will fulfill these conditions.

Also: For a region with all coordinates in [0, 1] the chance is just 0.1^3 = 0.001.
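A quick Monte Carlo check of these two numbers, as a sketch:

        import numpy as np

        rng = np.random.default_rng(1)
        pts = rng.uniform(0.0, 10.0, size=(1_000_000, 3))       # b = 10, uniform per coordinate

        in_stripe = (pts[:, 0] <= 1.0) & (pts[:, 1] >= 9.0)     # edge stripe defined above
        in_corner = np.all(pts <= 1.0, axis=1)                  # corner cube [0,1]^3
        print(in_stripe.mean(), in_corner.mean())               # ≈ 0.01 and ≈ 0.001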

We see already that the probability to get a point with a radius vector of limited length becomes rather small. Now, what happens in a real multi-dimensional space, e.g. with N=256?

The mean radius of data points created with a constant probability density for each coordinate value

Let us call the length of a vector directed from the origin to some defined point in our cube the “radius” R of the vector. This radius then is defined as:

\[ R \: = \: \sqrt{\,\sum_{k=1}^{N} (x_k)^2 \,} \]

What is the length of a vector with x_k = 1 for all k? For N=256 dimensions the answer is:

\[ R = \sqrt{256} \: = 16 \: \]

So, we should not be surprised when we have to deal with large average numbers for the vector lengths in the course of our analysis below. We again assume a constant probability density ρ along each coordinate axis in the interval [-b, b]:

\[ \rho(x_k) \: = \: \frac{1}{2b} \:, \quad \mbox {for} \: -b \le x_k < b . \]

Now, what will the mean radius of a “random” point distribution in the cube be when we base the distribution on this probability density? How do we get the expectation value <R> for the length of our vectors?

If we wanted to approach the problem directly we would have to solve an integral over the vector length and a (normalized) probability density. This would lead us to integrals of the type

\[ \lt R \gt \: = \: \int_{-b}^{b}\int_{-b}^{b}…\int_{-b}^{b}\, \rho_N(x_1, x_2, …x_{N}) * \sqrt{ \sum_{k=1}^{N} (x_k)^2 \, }\:\: dx_1\,dx_2…\,dx_{N} \]
\[ \rho_N(x_1, x_2, …x_{N}) = \rho(x_1) * \rho(x_2) * … * \rho(x_N) \]

As our normalized probability density per dimension is a constant, the whole thing looks simple. However, the integrals over the square root are, unfortunately, not really trivial to handle. Try it yourself …

Therefore, I choose a different way: Instead of looking for the expectation value of the vector length <R> I look for the square root of the expectation value of the squared vector length R^2 – and assume:

\[ \lt R \gt \: \approx \: \sqrt{\lt R^{\,2} \gt\,} \quad . \]

We take care of this assumption later by giving a reference and an approximation formula.

We also use the fact that the expectation value of a sum of independent quantities is the sum of their individual expectation values. This leads us to

\[ \lt R^{\,2} \gt \: = \: \sum_{k=1}^{N} \lt (x_k)^2 \gt \]

The resulting integral is much easier to solve. It splits up into simple 1-dimensional integrals. Note: Each of the squared coordinate values (x_k)^2 is associated with a simple, un-squared (!) probability density of 1/(2b) for an interval [-b, b] or 1/(b-a) for [a, b].

The result for a general common interval [a, b] for all coordinate values, a ≤ x_k ≤ b, is:

\[ \lt R^{\,2} \gt \: = \: \frac{b^{\,3} \, – \, a^{\,3}}{3\, ( b \, – \, a )} * N \quad .
\]

So, for our special case a = -b we get:

\[ \lt R \gt \: \approx \: \sqrt{\lt R^{\,2} \gt} \: = \: b * \sqrt{ 1/3 * N \,}
\]

This will give us for b = 5:

\[ R_{mean} = \lt R \gt \: \approx \: \sqrt{\lt R^{\,2} \gt} \: \approx \: 46.188 , \quad \mbox{for} \:\: N=256, \: b=5 \]

Note that

\[ \sqrt{1 / 3} \: \approx \: 0.57735 \: \gt \: 0.5 \]

Ok, the mean length value is a bit bigger than half of the length which a diagonal vector from the origin to the outmost edge of our cube (b, b, …, b) would have. This does not say so much about the point and radius distributions themselves. We have to look at higher moments. Let us first look at the variance and standard deviation of R^2.

Variance and standard deviation of R^2

What we actually would like to see is the variance and standard deviation for R. But again it is easier to first solve the problem for the squared quantity. The variance of R^2 is defined as:

\[ \mbox{Variance of } \: R^{\,2} \:\: : \:\: \left< { \, \left( \, R^{\,2} \, – \, \lt R^{\,2} \gt \right)^{2} \, } \right>
\]
\[ \mbox{Standard deviation of } R^{\,2} \:\: : \:\: \sqrt{ \left< {\, \left( \, R^{\,2} \, – \, \lt R^{\,2} \gt \, \right)^{2} \, } \right> \,}
\]

We can solve this step by step. First we use:

\[ \left< { \,\, {\left( \, R^{\,2} \, – \, \lt R^{\,2} \gt \right)}^{2} \,\, } \right> \: = \: \left< { \, \left( R^{\,2} \right)^{2} \, } \right> – \left( \lt { R^{\,2} } \gt \right)^{2}
\]

We evaluate (for our cube):

\[ \left< { \,\, {\left( R^{\,2} \right)^{2}} \,\, } \right> – \left( \lt { R^{\,2} } \gt \right)^{2} \: = \: \left< { \,\, \left( R^{\,2} \right)^{2} \,\, } \right> \: – \: {1 \over 9} N^{\,2} \, b^{\,4}
\]

and

\[ \left< { \,\, \left( R^{\,2} \right)^{2} \,\,} \right> \: = \: \left< { \,\, \left( \sum_{k=1}^N (x_k)^{\,2} \,\right)} * {\left( \sum_{m=1}^N (x_m)^{\,2} \,\right) \,\,} \right>
\]

For the different contributions to the expectation values we get

\[ \left< { \,\, \left( R^{\,2} \right)^{2} \,\,} \right> \: = \: N * \left< { \,\, (x_k)^{\,4} \,\, } \right> + N(N-1) * \left< { \,\, (x_i)^{\,2}\,(x_k)^{\,2} \,\, } \right>
\]

Integrating over our constant probability densities for the independent coordinates gives

\[ \left< { \,\, \left( R^{\,2} \right)^{2} \,\,} \right> \: = \: N {1 \over 5} b^{\,4} \, + \, N^{\,2} {1\over 9} b^{\,4} \, – \, N {1 \over 9} b^{\,4}
\]

Thus

\[ \left< {\left( R^{\,2} \right)^{\,2}} \right> – \left( \lt R^{\,2} \gt \right)^{\,2} \: = \: N \, b^{\,4} * \left( {1 \over 5} – {1\over 9} \right) \: = \: {4 \over 45} N * b^{\,4} \: .
\]

Eventually, we get for the ratio of the standard deviation of R^2 to <R^2>:

\[ {1 \over {\left< R^{\,2} \right>} } * \sqrt{ \left< \, \left( \, R^{\,2} \, – \lt R^{\,2} \gt \, \right)^{2} \, \right> \,} \: = \: { 3 \over {N * b^{\,2} }} * \sqrt{{4 \over 45} N \, b^{\,4} \,} \: = \: 2 * \sqrt{ 1 / ( 5 N) \, }
\]

The ratio of the standard deviation of R to <R>

We are basically interested in the ratio

\[ {1 \over {\left< R \right>} } * \sqrt{ \left< \, \left( { \, R \, – \lt R \gt \, } \right)^{2} \, \right> \,}
\]

Before you start to fight the complex integrals over square roots think of the following:

\[ {1 \over 2} * { { \left( R_0 + \Delta \right)^{\,2} \, – \, \left( R_0 – \Delta \right)^{\,2} } \over {{R_0}^{\,2}} } \: = \: 2 { \Delta \over R_0}
\]

From an adaptation of this argument we conclude

\[ {1 \over {\left< R \right>} } * \sqrt{ \left< \, \left( \, R \, – \lt R \gt \, \right)^{2} \, \right> \,} \: \approx \: \sqrt{ 1 / (5N) \, }
\]

A very simple relation! It shows that on average the interval for the radius spread systematically becomes smaller in comparison to the average radius with a growing number of dimensions. In the case of N=256 the relative radius spread is only of the order of some percent.

\[ { \sqrt{ \left( \Delta R \right)^{2} \, } \over {\left< R \right>} } \: \approx \: 0.02795 \:, \quad \mbox{for} \,\, N=256 \: .
\]

This defines a region or regions within a very narrow multi-dimensional spherical shell. The plot below shows the result of a numerical simulation based on one million data points for dimensions between 3 ≤ N ≤ 2048. You see the radius <R> on the x-axis, the number of points on the y-axis and the spread around the mean radius for the selected numbers of dimensions. Although the spread and the standard deviation remain almost constant, the radius value is steadily rising.
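A stripped-down version of such a simulation for a single number of dimensions, e.g. N=256 and b=5, could look like this:

        import numpy as np

        N, b, num = 256, 5.0, 100_000
        rng   = np.random.default_rng(123)
        radii = np.linalg.norm(rng.uniform(-b, b, size=(num, N)), axis=1)

        print("mean radius           :", radii.mean())                  # close to b * sqrt(N/3) ≈ 46.2
        print("relative radius spread:", radii.std() / radii.mean())    # close to sqrt(1/(5N)) ≈ 0.028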

Note that one should be careful not to assume more than a point concentration in regions of a multi-dimensional spherical shell. Even within such a shell the created data points may avoid certain regions. I will give some arguments that support this point of view in the next post. But already now it has become very clear that a statistical distribution of vectors with constant probabilities of component values per dimension will not at all fill the latent space homogeneously.

Some better approximations for <R> and the relative standard deviation

The above formulas suffer from the fact that

\[ \sqrt{\lt R^{\,2} \gt} \: \gtrsim \: \lt R \gt
\]

is only a rough approximation which overestimates the real <R>-value which comes out of numerical simulations. The deviation rises sharply for a small number of dimensions. For a thorough discussion of how good the approximation is, see the answer and links in the following discussion at stackexchange:
https://stats.stackexchange.com/questions/317095/expectation-of-square-root-of-sum-of-independent-squared-uniform-random-variable

Below I give you a better approximation to < R >, which should be sufficient for most purposes and N ≥ 3:

\[ \lt R \gt \: \approx \: b * \sqrt{ 1 /3 * N \,} * \sqrt{{1 \over {1 + {1 \over 4N} }} }
\]

This approximation can in turn be used to improve the ratio of the standard deviation to <R>:

\[ {1 \over {\left< R \right>} } * \sqrt{ \left< \, \left( \, R \, – \lt R \gt \, \right)^{2} \, \right> \,} \: \approx \: \sqrt{ 1 / (5N) \, } * {1 \over {1 + {1 \over 4N} }}
\]

Conclusion and outlook on the next post

In this post I have tried to fill a cube around the origin of a multi-dimensional orthogonal coordinate system with statistical points. I have assumed a constant probability density for each component of the respective vectors. A simple mathematical analysis leads to the somewhat surprising conclusion that the resulting data distribution concentrates in regions which are located within a multi-dimensional spherical shell. This shell becomes narrower and narrower with a rising number of dimensions N.

In the next post

Latent spaces – pitfalls of distributing points in multi dimensions – II – missing specific regions

I shall first verify the derived results and the given approximations with numerical data. Then I will discuss what kind of regions, which a Neural Network might fill in its latent space, we may miss by such an approach.