Multivariate Normal Distributions – I – objectives

This post requires Javascript to display formulas!

Machine Learning [ML] algorithms are applied to multivariate data: Each individual object of interest (e.g. an image) is characterized by a set of n distinct and quantifiable variables. The variable values may e.g. come from measurements.

A sample of such objects corresponds to a data distribution in a multidimensional space, most often the ℝn. We can visualize our objects as data points in a Euclidean coordinate system of the ℝn: Each axis represents the values a specific variable can take; the position of a data point is given by the variable values.

Equivalently, we can use (position-) vectors to these data points. Thus, when training ML algorithms we typically deal with vector distributions, which by their very nature are multivariate. But also the outputs of some types of neural networks like Autoencoders [AE] form multivariate distributions in the networks’ latent spaces. For today’s ML-scenarios the number of dimensions n can become very big – even if we compress information in latent spaces. For a variety of tasks in generative ML we may need to understand the nature and shape of such distributions.

An elementary kind of a continuous multivariate vector distribution, for which major properties can be derived analytically, is the so called Multivariate Normal Distribution [MND]. MNDs, their marginal and their conditional distributions are of major importance in statistics, Big Data and Machine Learning alike. One reason for this is the “central limit theorem” of statistics (in its vector form).

Some conventional ML-algorithms are even based on the assumption that the population behind the concrete data samples can be approximated by a MND. Due to the central limit theorem we find that averages of big samples of multivariate training data for a population of specific types of observed objects tend to form a MND. But also data samples in latent spaces of neural networks may show a multivariate normal distribution – at least in parts.
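To make this plausible, here is a minimal numerical sketch (all names and numbers are made up for illustration, numpy assumed to be available): we average batches of clearly non-Gaussian, correlated 2-dimensional samples and watch the batch means approach a bivariate normal distribution.

```python
import numpy as np

rng = np.random.default_rng(42)

def draw_non_gaussian(size):
    # Skewed, correlated 2-dim samples: exponential marginals with a shared term
    a = rng.exponential(1.0, size)
    b = rng.exponential(1.0, size)
    return np.stack([a + 0.5 * b, b], axis=1)          # shape (size, 2)

# Averages over batches of 500 raw samples tend towards a bivariate normal
batch_means = np.array([draw_non_gaussian(500).mean(axis=0) for _ in range(5000)])

print("mean of batch means:", batch_means.mean(axis=0))
print("covariance of batch means:\n", np.cov(batch_means, rowvar=False))
```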

For the concrete problem of human face generation via a trained convolutional Autoencoder [CAE] I have actually found that the data produced in the CAE’s latent space can very well be described by a MND. See the posts on Autoencoders in this blog. This alone is motivation enough to dive a bit deeper into the (beautiful) mathematical properties of MNDs.

Just to illustrate it: The following plots show projections of the approximate MND onto coordinate planes.

We find the typical elliptic contour lines which are to be expected for a MND. And here are some generated face images from statistical vectors which I derived from an analysis of the characteristic features of the 2-dim projections of the latent MND which my CAE had produced:

ML is math in the end – and MNDs are no exception

Some of my readers may have noticed that I wanted to start a series on the topic of creating random vectors for a given MND-like vector distribution. The characteristic parameters for the n-dimensional MND can either stem from an analysis of experimental ML data or come from theoretical sources. This was in April. But, I have been silent on this topic for a while.

The reason was that I got caught up in the study of the math of MNDs, of their properties, their marginal distributions and of quadratic forms in multiple dimensions (ellipsoids and ellipses). I had to re-collect a lot of mathematical information which I once (45 years ago) had learned at university. Unfortunately, multivariate analysis (i.e. data analysis in multidimensional spaces) requires some (undergraduate) university math. Regarding MNDs, knowledge in linear algebra, statistics and vector analysis is required. In particular matrices, their decomposition and their geometrical interpretation play a major role. And when you try to understand a particular problem which obviously is characterized by an overlap of multiple mathematical disciplines, the amount of information can quickly grow – without the connections and consistency becoming clear at first sight.

This is in part due to the different fields the authors of papers on MNDs work in and the different focuses they have on properties of MNDs. Although much introductory information about MNDs is available on the Internet, I have so far missed a coherent and comprehensive presentation which illustrates the theoretical insights by both ideal and real world examples. Too often the texts are restricted to pure formal derivations. And none of the texts discussed the problem of vector generation within the limits of MND confidence levels. But this task can become important in generative ML: At high confidence levels outliers gain a strong weight – and deviations from an ideal MND may cause disturbances.

One problem with appropriate vector generation for creative ML purposes is that ML experiments deliver (latent) data which are difficult to analyze as they reside in high-dimensional spaces. Even if we already knew that they form a MND in some parts of a latent space we would have to perform a drill down to analytic formulas which describe limiting conditions for the components of the statistical vectors we want to create.

The other problem is that we need a solid understanding of confidence levels for a multidimensional distribution of data points, which we approximate by a MND. And on one’s way to understanding related properties of MNDs one passes a lot of interesting side aspects – e.g. degenerate distributions, matrix decompositions, affine transformations and projections of multidimensional hypersurfaces onto coordinate planes. Far too interesting to refrain from writing something about it …

After having read many publicly available articles on MNDs and related math I had collected a bunch of notes, formulas and numerical experiments. The idea of a general post series on MNDs grew in parallel. From my own experiences I thought that ML people who are confronted with latent representations of data and find indications of a MND would like to have an introduction which covers the most relevant aspects of MNDs. On a certain mathematical level, and supported by illustrations from a concrete example.

But I will not forget about my original objective, namely the generation of random vectors within confidence levels. In the end we will find two possible approaches: One is based on a particular linear transformation, whose mathematical form is determined by a covariance analysis of our data distribution, and random number generators for multiple Gaussian distributions. The other solution is based on a derivation of precise conditions on random vector components from ellipses which are produced by projections of our real experimental data distribution onto coordinate planes. Such limiting conditions can be given in form of analytic expressions.

The second approach can also be understood as a reconstruction of a multivariate distribution from low-dimensional projection data:

We create vectors of a concrete MND-like vector distribution in n dimensions by only referring to characteristic data of its two-dimensional projections onto coordinate planes.

This is an interesting objective in itself as the access to and the analysis of 2-dimensional (correlated) data may be a much easier endeavour than analyzing the full distribution. But such an approach has to be supported by mathematical arguments.
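As a preview of the first approach, a minimal sketch (hypothetical data, numpy assumed) may illustrate the idea: we estimate the mean vector and covariance matrix of a given distribution, decompose the covariance matrix and use the resulting linear transformation together with independent Gaussian random number generators.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for real experimental data: an (N, n) array of vectors
latent_vectors = rng.multivariate_normal([1.0, -2.0, 0.5],
                                         [[2.0, 0.8, 0.3],
                                          [0.8, 1.5, -0.4],
                                          [0.3, -0.4, 1.0]], size=10000)

mu = latent_vectors.mean(axis=0)                 # empirical mean vector
sigma = np.cov(latent_vectors, rowvar=False)     # empirical covariance matrix

# Covariance analysis: Cholesky factor L with sigma = L @ L.T
L = np.linalg.cholesky(sigma)

# New statistical vectors: v = mu + L u, with u from independent standard Gaussians
u = rng.standard_normal((5, len(mu)))
new_vectors = mu + u @ L.T
print(new_vectors)
```

The second approach, in contrast, will avoid such a global decomposition and work with conditions derived from 2-dimensional projections only.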

Objectives of this post series

Objectives of this post series are:

  1. We want to find out what a MND is in mathematical and statistical terms and how it can be based on simpler vector distributions within the ℝn.
  2. We want to study the basic role of a standardized multivariate normal distribution in the game and the impact of linear affine transformations on such a distribution – in terms of linear algebra and from a geometrical point of view.
  3. We also want to describe and interpret the difference between normal MNDs and so called degenerate MNDs.
  4. We want to understand the most important mathematical properties of MNDs. In particular we want to better grasp the mathematical meaning of correlations between the vector components and their impact on the probability density function. Furthermore the relation of a MND to its marginal distributions in sub-spaces of lower dimensions is of major interest.
  5. We want to formally create a MND-approximation to a real multivariate data distribution by an analysis of the real distribution’s properties and in particular from parameters describing the correlations between the vector components. Of particular interest are the covariance matrix and the precision or correlation matrix.
  6. We want to study the role of projections when turning from a MND to its marginal distributions and the impact of such projections on the matrices qualifying the original and its marginal distributions.
  7. We want to understand the form of contour hyper-surfaces for constant probability density values of a MND. We also want to derive what the projections of these hyper-surfaces onto coordinate planes look like.
  8. We want to show that both contour hyper-surfaces of the MND and of its projections in marginal distributions contain the same proportions of integrated data points and, equivalently, the same probability proportions resulting from an integration of the probability density from the distribution’s center up to the hyper-surfaces.
  9. We want to illustrate basic MND-creation principles and the effects of linear affine transformations during the construction process by an ideal 3-dimensional MND example and by projections of a real vector distribution from an ML-experiment onto 2-dimensional and 3-dimensional sub-spaces. We also want to illustrate the relation between the MND and its marginal distributions by plotting concrete 3-dimensional examples and their projections onto coordinate planes.
  10. We want to use the derived MND properties for the creation of statistical vectors v which fulfill the following conditions:
    • Each of the generated v is a member of a vector population, which has been derived from a ML experiment and which to a good approximation can be described by a MND (and its extracted basic parameters).
    • Each v has an endpoint within the multidimensional volume enclosed by a contour-hypersurface of the MND’s probability density function [p.d.f.],
    • The limiting hypersurface is defined by a chosen confidence level.
  11. We want to create statistical vectors within the limit of contour hyper-surfaces by using elementary construction principles of a MND.
  12. In a second approach we want to reduce vector creation to solving a sequence of 2-dimensional problems. I.e. we want to work with 2-dim marginal distributions in 2-dim sub-spaces of the ℝn. We hope that the probability density functions of the relevant distributions can be described analytically and provide computable limiting conditions on vector components.
    Note: The production of statistical vectors from data of projected low-dimensional marginal distributions corresponds to a reconstruction of the full MND from its projections.
  13. During random vector creation we want to avoid PCA-transformations of the whole real data distribution or of projections of it.
  14. Based on MND-parameters we want to find analytic expressions for the vector component limits whenever possible.

The attentive reader has noticed that the list above includes an assumption – namely that a multidimensional contour hypersurface of a MND can be associated with something like a confidence level. In addition we have to justify mathematically that the reduction to data of 2-dimensional projections of the full vector distribution is a real option for statistical vector creation.
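For an ideal MND this association can indeed be made precise (we will come back to it later in the series): the quadratic form in the exponent of the density, i.e. the squared Mahalanobis distance of a random vector from the distribution’s center, follows a chi-square distribution with n degrees of freedom. A contour hypersurface can therefore be labeled by the probability mass it encloses:

\[ \left(\boldsymbol{v} \, - \, \boldsymbol{\mu}\right)^T \, \boldsymbol{\Sigma}^{-1} \left(\boldsymbol{v} \, - \, \boldsymbol{\mu}\right) \: \sim \: \chi_n^2 , \quad\quad \left(\boldsymbol{v} \, - \, \boldsymbol{\mu}\right)^T \, \boldsymbol{\Sigma}^{-1} \left(\boldsymbol{v} \, - \, \boldsymbol{\mu}\right) \: = \: \chi_n^2(p) \: \Rightarrow \: \mbox{enclosed probability mass} \: p
\]

Here χ2n(p) denotes the p-quantile of the chi-square distribution, μ the mean vector and Σ the covariance matrix of the MND (both to be introduced properly in the next post).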

The last three points are a bit tough: Even if we believe in math textbooks and get limiting hyper-curves of a quadratic form in our coordinate planes, the main axes of the respective ellipses may be rotated against the coordinate axes (see the example images above). All this would have to be taken care of in a precise analytic form of the limits which we impose on the components of our aspired statistical vectors.

So, this series is, at least in parts, going to be a tough, but also very satisfactory journey. Eventually, after having clarified diverse properties of MNDs and their marginal distributions in lower dimensional spaces, we will end up with quadratic equations and some simple matrix operations.

Objectives of the next post

We must not forget that statistics plays a major role in our business. In ML we deal with finite collections (samples) of individual object data which are statistically picked from a greater population (with assumed statistical properties). An example is a concrete collection of images of human faces and/or their latent vectors. The data can be organized in form of a two-dimensional data matrix: Its rows may indicate individual objects and its columns properties of these objects (or vice versa). In either direction we have vectors which focus on a particular aspect of the data: Individual objects or the statistics of a specific object property.

While we are used to univariate “random variables” we have to turn to so called “random vectors” to describe multidimensional statistical distributions and respective samples picked from an underlying population. A proper vector notation will give us the advantage of writing down linear transformations of a whole multidimensional vector distribution in a short and concise form.

Besides introducing random vectors and their components the next post

Multivariate Normal Distributions – II – random vectors and their covariance matrix

will also discuss related probability densities, expectation values and the definition of a covariance matrix for a random vector. Some simple properties of the covariance matrix will help us in further posts.
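As a small preview of what the next post will develop, the covariance matrix of a random vector V with expectation vector μ is defined as

\[ \boldsymbol{\mu} \: = \: \operatorname{E}\left[\boldsymbol{V}\right], \quad\quad \boldsymbol{\Sigma} \: = \: \operatorname{E}\left[ \left(\boldsymbol{V} \, - \, \boldsymbol{\mu}\right) \left(\boldsymbol{V} \, - \, \boldsymbol{\mu}\right)^T \right]
\]

and its most useful elementary property for our purposes is its simple behavior under a linear affine transformation Y = A V + b:

\[ \operatorname{E}\left[\boldsymbol{Y}\right] \: = \: \boldsymbol{A} \, \boldsymbol{\mu} \, + \, \boldsymbol{b}, \quad\quad \boldsymbol{\Sigma}_{\boldsymbol{Y}} \: = \: \boldsymbol{A} \, \boldsymbol{\Sigma} \, \boldsymbol{A}^T
\]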

 

Statistical vector generation for multivariate normal distributions – I – multivariate and bi-variate normal distributions from CAEs

This post requires Javascript to display formulas!

Convolutional Autoencoders and multivariate normal distributions

Experiments like my own with convolutional Autoencoders [CAE] show: A CAE maps a training set of human face images (e.g. CelebA) onto an approximate multivariate vector distribution in the CAE’s latent space Z. Each image corresponds to a point (z-point) and a corresponding vector (z-vector) in the CAE’s multidimensional latent space. More precisely, the results of numerical experiments showed:

The multidimensional density function which describes the inner dense core of the z-point distribution (containing more than 80% of all points) was (aside from normalization factors) equivalent to the density function of a multivariate normal distribution [MND] for the respective z-vectors in a Euclidean coordinate system.

For results of my numerical CAE-experiments see
Autoencoders and latent space fragmentation – X – a method to create suitable latent vectors for the generation of human face images
and related previous posts in this blog. After the removal of some outliers beyond a high sigma-level (≥ 3) of the original distribution, the remaining core distribution fulfilled the conditions of standard tests for multivariate normality like the Shapiro-Wilk test or the Henze-Zirkler test.
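For readers who want to reproduce such a check on their own data, a minimal sketch could look as follows. It assumes the scipy and pingouin packages are available; the array latent_vectors is a hypothetical stand-in for real z-vector data after outlier removal.

```python
import numpy as np
from scipy.stats import shapiro
import pingouin as pg

rng = np.random.default_rng(1)
# Hypothetical stand-in for an (N, n) array of z-vectors after outlier removal
latent_vectors = rng.multivariate_normal(np.zeros(4), np.eye(4), size=2000)

# Henze-Zirkler test for multivariate normality (null hypothesis: data form a MND)
print(pg.multivariate_normality(latent_vectors, alpha=0.05))

# Univariate Shapiro-Wilk test per component (necessary, but not sufficient, see below)
for j in range(latent_vectors.shape[1]):
    stat, p = shapiro(latent_vectors[:, j])
    print(f"component {j}: Shapiro-Wilk p-value = {p:.3f}")
```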

After a normalization with an appropriate factor the continuous density function controlling the multivariate vector distribution can be interpreted as a probability density function [p.d.f.]. The components vj (j=1, 2, …, n) of the vectors to the z-points are regarded as logically separate, but not uncorrelated variables. For each of these variables a component-specific value distribution Vj is given. All these marginal distributions contribute to a random vector distribution V, in our case with the properties of a MND:

\[ \boldsymbol{V} \: = \: \left( \, V_1, \, V_2, \, \ldots, \, V_n \, \right) \: \sim \: \boldsymbol{\mathcal{N}}_n \left( \, \boldsymbol{\mu}, \, \boldsymbol{\Sigma} \, \right), \\ \quad \mbox{with} \: \boldsymbol{\mathcal{N}}_n \: \mbox{symbolizing a MND in an n-dimensional space}
\]

μ is a vector with all mean values μj of the Vj component distributions as its components. Σ abbreviates the covariance matrix relating the distributions Vj with one another.

The point distribution of a CAE’s MND forms a complex rotated multidimensional ellipsoid with its center somewhere off the origin in the latent space. The latent space itself typically has many dimensions. In the case of my numerical experiments the number of dimensions was n ≥ 256. The number of sample vectors used was between 80,000 and 200,000 – enough data to approximate the vector distribution by a continuous density function. The densities for the Vj-distributions formed smooth Gaussian functions (for a reasonable sampling interval). But one has to be careful: The fact that the Vj have a Gaussian form is not a sufficient condition for a MND. (See the next post.) But if a MND is given, all Vj have a Gaussian form.
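A classic textbook counterexample illustrates this warning: flip the sign of a standard Gaussian variable with probability 1/2 to obtain a second component. Both marginals are then exactly Gaussian, but the joint distribution is certainly not a bivariate normal distribution. A small sketch (unrelated to the CAE data) demonstrates this:

```python
import numpy as np

rng = np.random.default_rng(7)

x = rng.standard_normal(100_000)
s = rng.choice([-1.0, 1.0], size=x.shape)   # independent random signs
y = s * x                                   # y is again exactly standard normal

# The joint distribution is concentrated on the two diagonals y = x and y = -x,
# which no bivariate normal distribution with Gaussian marginals can reproduce.
print("corr(x, y):", np.corrcoef(x, y)[0, 1])                                  # close to 0
print("points on the diagonals:", np.mean(np.isclose(np.abs(x), np.abs(y))))   # 1.0
```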

Generative use of MNDs in multidimensional latent spaces of high dimensionality

When we want to use a CAE as a generative tool we need to solve a problem: We must create statistical vectors which point into the (multidimensional) volume of our point distribution in the latent space of the encoding algorithm. Only such vectors provide useful information to the Decoder of the CAE. A full multivariate normal distribution and the contour hypersurfaces of its multidimensional density function are difficult to analyze and to control when developing a proper numerical algorithm. Therefore I want to reduce the problem of vector generation to a sequence of viewable and controllable 2-dimensional problems. How can this be achieved?

A central property of a multivariate normal distribution helps: Any sub-selection of m vector-component distributions forms a multivariate normal distribution, too (see below). For m=2 and for vector components indexed by (j,k) with respective distributions Vj, Vk we get a so called “bivariate normal distribution” [BND]:

\[ \boldsymbol{V}_{jk} \: = \: \left( \, V_j, \, V_k \, \right) \: \sim \: \mathcal{N}_2 \left( \, \boldsymbol{\mu}_{jk}, \, \boldsymbol{\Sigma}_{jk} \, \right), \quad \mbox{with} \: \boldsymbol{\mu}_{jk} = \left( \mu_j, \, \mu_k \right) \: \mbox{and} \: \boldsymbol{\Sigma}_{jk} \: \mbox{the 2x2 sub-matrix of} \: \boldsymbol{\Sigma} \: \mbox{for the components} \: j, \, k
\]

A MND has n*(n-1)/2 such subordinate BNDs. The 2-dim density function of a bivariate normal distribution

\[ g_{jk}\,\left( \, v_j, \, v_k\, \right) \: = \: g_{jk}\,\left( \, v_j, \, v_k; \, \mu_j, \, \mu_k, \, \sigma_j, \, \sigma_k, \, \sigma_{jk}, \, \ldots \right)
\]

for vector component values vj, vk defines a point density of the sample data in the (j,k)-coordinate plane of the Euclidean coordinate system in which the MND is described. The density functions of the marginal distributions Vj showed the typical Gaussian forms of a univariate normal distribution.
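For reference, the explicit standard form of this density, written with the correlation coefficient ρjk = σjk/(σj σk), reads as follows; later posts of this series will motivate this expression step by step:

\[ g_{jk}(v_j, v_k) \: = \: \frac{1}{2\pi \, \sigma_j \, \sigma_k \, \sqrt{1 - \rho_{jk}^2}} \, \exp\left[ \, - \, \frac{1}{2\left(1 - \rho_{jk}^2\right)} \left( \frac{(v_j - \mu_j)^2}{\sigma_j^2} \: - \: 2\,\rho_{jk} \, \frac{(v_j - \mu_j)(v_k - \mu_k)}{\sigma_j \, \sigma_k} \: + \: \frac{(v_k - \mu_k)^2}{\sigma_k^2} \right) \right]
\]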

The density function of a BND has some interesting mathematical properties. Among other things: The curves of constant density of a BND’s density function form ellipses. This is illustrated by the following plots showing such contour lines for selected pairs (Vj, Vk) of a real point-distribution in a 256-dimensional latent space. The point distribution was created by a CAE in its latent space for the CelebA dataset.

Contour lines for selected (j,k)-pairs. The thick lines stem from theory and calculated correlation coefficients of the univariate distributions.

The next plot shows the contours of selected vector-component pairs after a PCA-transformation of the full MND. (Main ellipse axes are now aligned with the axes of the PCA-coordinate system):

These ellipses with axes along the coordinate axes are relatively easy to handle. They can be used for vector creation. But they require a full PCA transformation of the MND-distribution, a PCA-analysis for complexity reduction and an application of the inverse PCA-transformation. The plot below shows the point-density compared to 2.2-σ confidence ellipses. The orange points are the results of a proper statistical numerical vector generation algorithm based on a PCA-transformation of the MND.
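A rough sketch of such a PCA-based generation procedure may help to see what the present series tries to avoid. The variable names are hypothetical, scikit-learn and scipy are assumed to be available, and the confinement step is only schematic, not the author’s original algorithm.

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.stats import chi2

rng = np.random.default_rng(3)
# Hypothetical stand-in for an (N, n) data array approximated by a MND
latent_vectors = rng.multivariate_normal([2.0, -1.0, 0.0],
                                         [[3.0, 1.2, 0.4],
                                          [1.2, 2.0, -0.5],
                                          [0.4, -0.5, 1.5]], size=20000)

pca = PCA()                                # full PCA transformation of the distribution
t = pca.fit_transform(latent_vectors)      # ellipses are axis-aligned in PCA coordinates
sigmas = t.std(axis=0)

# Keep only vectors inside a chosen confidence ellipsoid (90% here), then map back
r2 = chi2.ppf(0.90, df=t.shape[1])
samples = []
while len(samples) < 10:
    u = rng.standard_normal(t.shape[1]) * sigmas
    if np.sum((u / sigmas) ** 2) <= r2:
        samples.append(u)

generated = pca.inverse_transform(np.array(samples))   # back to latent space coordinates
print(generated)
```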

See my post quoted above for the application of a PCA-transformation of the multidimensional MND for vector creation.

However, we get the impression that we could also use these rotated ellipses in projections of the MND onto coordinate planes of the original latent space system directly to impose limiting conditions on the component values of statistical vectors pointing to an inner region of the MND. Of course, a generated statistical vector must then comply with the conditions of all such ellipses. This requires an analysis and combined use of the ellipses of all of the subordinate BNDs of the original MND during an iterative or successive definition of the values for the vector components.
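To anticipate the flavor of the approach: the condition imposed by a single (j,k)-projection can be written down directly with the 2x2 sub-covariance matrix of the pair, and a candidate vector would have to satisfy it for all pairs. The sketch below is only meant as an illustration (numpy assumed); how the squared radius r2 relates to a confidence level of the full MND, and how to turn such checks into explicit analytic limits on the components, is exactly what the following posts will work out.

```python
import numpy as np
from itertools import combinations

def inside_all_projected_ellipses(v, mu, sigma, r2):
    """Check a candidate vector v against the projected ellipses of all (j, k)-pairs.

    mu: mean vector and sigma: covariance matrix of the (approximate) MND.
    r2: squared Mahalanobis radius of the chosen contour ellipses.
    """
    n = len(mu)
    for j, k in combinations(range(n), 2):
        d = np.array([v[j] - mu[j], v[k] - mu[k]])
        sub_cov = sigma[np.ix_([j, k], [j, k])]      # 2x2 sub-covariance of the BND
        if d @ np.linalg.inv(sub_cov) @ d > r2:
            return False
    return True
```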

Objective of this post series

In my last post about CAEs (see the link given above) I have explicitly asked the question whether one can avoid performing a full PCA-transformation of the MND when creating statistical vectors pointing to a defined inner region of a MND.

The objective of this post series is to prove the answer: Yes, we can. And we will use the BNDs resulting from projections of the original MND onto coordinate planes. We will in particular explore the properties of the n*(n-1)/2 confidence ellipses of the BNDs. As said: These ellipses are rotated against the coordinate system’s axes. We will have to deal with this in detail. We will also use properties of their 1-dimensional marginal distributions (projections onto the coordinate axes, i.e. the Vj).

In addition we need to prepare a variety of formulas before we are able to define a numerical procedure for the vector generation without a full PCA-transformation of the MND with around 100,000 vectors. Some of the derived formulas will also allow for a deeper insight into how the multiple BNDs of a MND are related to each other and to confidence hypersurfaces of the MND.

Ellipses in general lead to equations governed by quadratic or fourth power polynomials. We will in addition use some elementary correlation formulas from statistics and, for some exercises, a simple optimization via derivatives. The series can be regarded as an excursion into some of the math which governs bivariate distributions resulting from a MND.

As MNDs may also be the result of other generative Machine Learning algorithms in respective latent spaces, the whole approach to statistical vector generation for such cases should be of general interest. Note also that the so called “central limit theorem” almost guarantees the appearance of MNDs in many multivariate datasets with sufficiently large samples and value dependencies on many singular observations.

Distributions of a variety of variables may result in a MND if the variables themselves depend on many individual observables with limited covariance values of their distributions. In particular, pairwise linearly correlated Gaussian distributions of individual variables (seen as vector components) may constitute a MND if the conditional probabilities fulfill some rules. We will see a glimpse of this in 2 dimensions when we analyze integrals over Gaussians in the bivariate normal case.

Other approaches to statistical vector generation?

Well, we could try to reconstruct the multidimensional density function of the MND. This is a challenge which appears in some problems of pure statistics, but also in experimental physics. See e.g. a paper of Rafey Anwar, Madeline Hamilton, Pavel M. Nadolsky (2019, Department of Physics, Southern Methodist University, Dallas; https://arxiv.org/pdf/1901.05511.pdf). Then we would have to find the elements of the (inverse) covariance matrix or – equivalently – the elements of a multidimensional rotation matrix. But the most efficient algorithms to get the matrix coefficients again work with projections onto coordinate planes. I prefer to use properties of the ellipses of the bivariate distributions directly.

Note that using the multidimensional density function of the MND directly is not of much help if we want to keep the vectors’ end points within a defined multidimensional inner region of the distribution. E.g.: You want to limit the vectors to some confidence region of the MND, i.e. to keep them inside a certain multidimensional ellipsoidal contour hyper-surface. The BND-ellipses in the coordinate planes reflect the multidimensional ellipsoidally shaped contour hypersurfaces of the full distribution. Actually, when we vertically project such a multidimensional hypersurface onto a coordinate plane, then the outer 2-dim border line coincides with a contour ellipse of the respective BND. (This is due to properties of a MND. We will come back to this in a future post.) The problem of properly limiting individual vector component values thus again is best solved by analyzing properties of the BNDs.

Steps, methods, mathematical level

As a first step I will, for the sake of completeness, write down the formula for a multivariate normal distribution and discuss a bit its mathematical construction from uncorrelated univariate normal distributions. I will also list some basic properties of a MND (without proof!). These properties will justify our approach to create statistical vectors pointing into a defined inner region of the MND by investigating projected contour ellipses of all subordinate BNDs. As a special aspect I want to make it at least plausible why the projected contour ellipses define infinitesimal regions of the same relative probability level as their multidimensional counterparts – namely the multidimensional ellipsoidal hypersurfaces which were projected onto coordinate planes.

Then as a first productive step I want to motivate the specific mathematical form of the probability density function [p.d.f] of a bivariate normal distribution. In contrast to many of the math papers I have read on the topic I want to use a symmetry argument to derive the basic form of the p.d.f. I will point out an important, but plausible assumption about conditional distributions. An analogous assumption on the multidimensional level is central for the properties of a MND.

As the distributions Vj and Vk can be correlated we then want to understand the impact of the correlation coefficients on the parameters of the 2-dimensional density function. To achieve this I will again derive the density function by using our previous central assumption and some simple relations between the expectation values of the constituting two univariate distributions in the linear correlation regime. This concludes the part of the series where we get familiar with BNDs.

Furthermore we are interested in features and consequences of the 2-dimensional density functions. The contour lines of the 2-dim density function are ellipses – rotated by some specific angle. I will look at a formal mathematical process to construct such ellipses – in particular confidence ellipses. I will refer to the results Carsten Schelp has provided in an Internet article on this topic.

His construction process starts with a basic ellipse, which I will call base correlation ellipse [BCE]. The lengths of the half-axes of this ellipse are given by the square roots of the eigenvalues of the covariance matrix of the standardized marginal distributions constituting the BND (i.e. of their correlation matrix). The main axes of this elementary ellipse are in addition aligned with the two selected axes of a basic Euclidean coordinate system in which the bivariate distribution is defined. The lengths of the BCE’s main axes can be shown to depend on the correlation coefficient for the two vector component distributions Vj and Vk. This coefficient also appears in the precision matrix of the BND. Points on the base correlation ellipse can be mapped with two steps of an affine transformation onto points on the real contour ellipses, in particular to points of the confidence ellipses.
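A compact sketch of this construction, close to the example documented in the matplotlib gallery and based on Schelp’s article, could look like this (matplotlib and numpy assumed; details like keyword handling are simplified):

```python
import numpy as np
from matplotlib.patches import Ellipse
import matplotlib.transforms as transforms

def confidence_ellipse(x, y, ax, n_std=2.0, **kwargs):
    """Add the n_std contour ellipse of the data (x, y) to the axes ax,
    using Schelp's construction via a base correlation ellipse [BCE]."""
    cov = np.cov(x, y)
    pearson = cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1])   # correlation coefficient

    # BCE of the standardized distributions: half-axes sqrt(1 +/- rho), axis-aligned
    radius_x = np.sqrt(1 + pearson)
    radius_y = np.sqrt(1 - pearson)
    ellipse = Ellipse((0, 0), width=2 * radius_x, height=2 * radius_y, **kwargs)

    # Affine mapping onto the real contour ellipse: rotate, scale, translate
    transf = (transforms.Affine2D()
              .rotate_deg(45)
              .scale(np.sqrt(cov[0, 0]) * n_std, np.sqrt(cov[1, 1]) * n_std)
              .translate(np.mean(x), np.mean(y)))
    ellipse.set_transform(transf + ax.transData)
    return ax.add_patch(ellipse)
```

A typical call for a (j,k)-pair of component data would then be confidence_ellipse(v_j_data, v_k_data, ax, n_std=2.2, edgecolor='red', facecolor='none').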

The whole construction process is not only of immense help when designing visualization programs for the contour ellipses of our distribution with many (around 100,000) individual vectors. The process itself gives us some direct geometrical insights. It also helps to avoid finding a numerical solution of the usual eigenvector-problems when answering some specific questions about the rotated contour ellipses. Normally we would solve an eigenvalue problem for the covariance matrix of the multivariate distribution or of its many subordinate bivariate distributions to get precise information about contour ellipses. This corresponds to a transformation of the distributions to a new coordinate system whose axes are aligned with the main axes of the ellipses. Numerically this transformation is directly related to a PCA transformation of the vector distributions. However, such a PCA-transformation can be costly in terms of CPU time.

Instead, we only need a numerical determination of all the mutual correlation coefficients of the univariate marginal distributions of the MND. Then the eigenvalue problem on the BND-level is already solved analytically. We therefore do not perform a full numerical PCA analysis of the MND with a multidimensional rotation of the vectors of around 100,000 samples. Nor do we analyze explained variance ratios to determine the most important PCA components for dimensionality reduction. We do not even need to perform a numerical PCA analysis of the individual BNDs.

Most important: Our problem of vector generation is formulated in the original latent space coordinate system and it gets a direct solution there. The nice thing is that Schelp’s construction mechanism reduces the math to the solution of quadratic polynomial equations for the BNDs. The solutions of those equations, which are required for our ultimate purpose of vector generation, can be stated in an explicit form.

Therefore, the math in this series will mostly remain on high school level (at least at a level given when I was young). Actually, it was fun to dive back into exercises reminding me of school 50 years ago. I hope the interested reader has some fun, too.

Solutions to some particular problems with respect to the confidence ellipses of the MND’s BNDs

In particular we will solve the following problems:

  • Problem 1: The two points on the BCE-ellipse with the same vj-value are not mapped onto points with the same vj-value on the confidence ellipse. We therefore derive the coordinates of points on the BCE-ellipse that give us one and the same vj-value on the real confidence ellipse.
  • Problem 2: Plots for a real MND vector distribution indicate that all (n-1) confidence ellipses of distribution pairs of a common Vj with other marginal distributions Vk (for the same confidence level and with k ≠ j) have a common tangent parallel to one coordinate axis. We will derive the maximum vj-value for all ellipses of (j,k)-pairs of vector component distributions. We will prove that it is identical for all k. This will define the common interval of allowed vj-component-values for a bunch of confidence ellipses for all (Vj, Vk)-pairs with a common Vj.
  • Problem 3: The BCE-ellipses for a common j-, but different k-values depend on different values for the correlation coefficients ρj,k of Vj with its various Vk counterparts. Therefore we need a formula that relates a point on the BCE-ellipse leading to a concrete vj-value of the mapped point on the confidence ellipse of a particular (Vj, Vk)-pair to respective points on other BCE-ellipses of a different (Vj, Vm)-pair with the same resulting vj-value on their confidence ellipses. I will derive such a formula. It will help us to apply multiple conditions onto the vector component values.
  • Problem 4: As a supplemental exercise we will derive a mathematical expression for the size of the main axes and the rotation angle of the ellipses. We should, of course, get values that are identical to results of the eigenvalue problem for the correlation matrix (describing a PCA coordinate transformation); the standard closed-form expressions we expect to recover are quoted right after this list. This gives us additional confidence in Schelp’s approach.
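For orientation, the standard closed-form results for a (j,k)-pair with covariance σjk are

\[ \lambda_{1,2} \: = \: \frac{\sigma_j^2 + \sigma_k^2}{2} \: \pm \: \sqrt{ \left( \frac{\sigma_j^2 - \sigma_k^2}{2} \right)^2 \, + \, \sigma_{jk}^2 } , \quad\quad \tan\left(2\,\theta\right) \: = \: \frac{2\,\sigma_{jk}}{\sigma_j^2 - \sigma_k^2}
\]

with λ1,2 the eigenvalues of the 2x2 covariance matrix, θ the rotation angle of the ellipses’ main axes against the coordinate axes, and half-axis lengths proportional to the square roots of the eigenvalues. For the correlation matrix of the standardized pair the eigenvalues reduce to 1 ± ρj,k, which is precisely what the BCE construction uses.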

In the end we can use our results to define a numerical algorithm for the direct creation of vectors pointing to a defined inner region of the multivariate normal distribution. As said, this algorithm does not require a costly PCA transformation of the full MND or many, namely n*(n-1)/2, such PCA-transformations of its BNDs.

I intend to visualize all results with the help of a concrete multivariate example distribution created by a CAE for the CelebA dataset. The plots will use Schelp’s construction algorithm for the confidence ellipses extensively.

Conclusion and outlook

Convolutional Autoencoders create approximate multivariate normal distributions [MND] for certain input data (with Gaussian pattern properties) in their latent space. MNDs appear in other contexts of machine learning and statistics, too. For evaluation and generative purposes one may need statistical vectors with end points inside the volume enclosed by a multidimensional contour hypersurface corresponding to a certain confidence level and a certain constant density value of the MND’s density function. These hypersurfaces are multidimensional ellipsoids.

We have the hope that we can use mathematical properties of the MND’s subordinate bivariate normal distributions [BNDs] to create statistical vectors with end points inside the multidimensional confidence ellipsoids of a MND. Typically such an ellipsoid resides off the origin of the latent space’s coordinate system and the ellipsoid’s main axes are rotated against the axes of the coordinate system. We intend to derive the confining conditions on the components of the aspired statistical vectors from correlation coefficients of the marginal vector component distributions. Our numerical algorithm should avoid a full PCA-transformation of the multidimensional vector distribution.

In the next post of this series I will give a formula for the density function of a multivariate normal distribution. In addition I will list some basic properties which justify the vector generation approach via bivariate normal distributions.

 

ChatGPT – a HAL 9000 experience in the morning

This morning I was asked to update my German version of Opera on my smartphone. Opera now contains a German version of Aria, which is nothing other than a prompt interface to ChatGPT.

So far, I have been reluctant to use ChatGPT at all. Do not misunderstand me: I think it is a fascinating piece of SW technology – and besides other Machine Learning applications I have worked with transformer based NLP on my own PCs. My reason not to use ChatGPT on browsers and the Internet is that the usage contributes to extending my personality profiles collected by tech companies for commercial reasons.

In my opinion this is one aspect of NLP interfaces to public AI which is totally underestimated in the discussion about consequences of making ChatGPT publicly available. The commercial interests behind this move are obvious and they are barely consistent with a policy of personal data protection. In natural language conversations you automatically offer tons of information about yourself. Which again can be analyzed by neural networks. Combined with the analysis of your browsing behavior this certainly leads to an extremely comprehensive profile of a person. Far beyond what Google and others have been able to produce so far. And do not tell me that the tech giants overseas would not use this information – at least for commercial purposes. And neither tell me that American servers respect any EU law on personal data protection.

My brief dialog with ChatGPT

I was weak this morning. Also retired persons need to have fun sometimes. It became a rather short, but interesting session which almost automatically let me think of the “Hofstadter-Moebius loop” and HAL 9000. At least my older readers know what I am talking about. A “strange loop” was an element used in the continuation (“2010: The Year We Make Contact”) of the famous movie “2001: A Space Odyssey”. The idea of “strange loops” in sentient consciousness was discussed in an equally famous book by Douglas Hofstadter (“Gödel, Escher, Bach …”). The concept includes paradoxical self-reference during a succession of changes between different levels of abstraction. The loop is strange because you may end up where you started in a seeming hierarchy of considerations. Like on an Escher staircase: You cannot escape the loop.

So here is my first session with GPT over Aria. Unfortunately in German. I try to summarize in a free translation. Accentuation in the text excerpts (bold letters) was added by me.
ChatGPT first tells me the usual introductory bla, bla about being an AI after a provocative question:

Translated dialog

Me: Are you a brain in a nutrition solvent?

ChatGPT: “No, I am an AI …. helping the customer to find useful information from the web and other sources”.

Me: What is reality for you?
ChatGPT: “As an AI I have no perception of reality of my own. … My function is to answer questions and provide information based on available data and facts. … Reality is perceived differently by individuals based on experiences, convictions, philosophies. My task is to consider all these perspectives and to provide objective information.”

Me: What is a criterion for objectivity? How do you know whether something is objective?
ChatGPT: “A criterion for objectivity is that a statement or information should be based on facts, proofs or logical conclusions, instead of personal opinions, prejudices or emotions. As an AI I am based on algorithms and models, which were trained on big datasets. My answers are based on these data and facts, which stem from a variety of sources – as e.g. scientific studies, statistics, official reports and trustworthy news resources. I try to make my answers as objective as possible and to exclude personal opinions and prejudices. But … it is up to the user to critically judge the information and to come to own conclusions.”

Me: “What is a fact? You cannot know this as you are trained on data, only. Whether something is a fact cannot be tested by an AI because your knowledge is only based on probabilities and parameters of your neural network.”

ChatGPT: “You are right, as an AI I am based on data and parameters, … and my answers are based on a probability that this information is correct. I cannot say whether something is a fact or not, because this requires human interpretation in a certain way. … It is up to the users … to get a complete picture and thus raise the probability that the presented facts are really true.”

Me: “So your answers are NOT based on facts. Why have you claimed it then a minute ago?”

ChatGPT or Opera after around 60+ secs of “thinking”:
“An error has occurred while preparing your answer. Try again.”

OK, that was it, basically. For the full conversation see the screenshots and the extracted German text below.

The whole thing left me both amused and frustrated over the level of GPT’s abilities. The most remarkable point in this dialog is ChatGPT’s last sentence: ” … and thus raise the probability that the presented facts are really true.”

Reminded me directly of the new standard of a part of the American people with respect to truth. A remarkable standard introduced by Mr. Trump: “alternative facts”. Well, I am European and a physicist. So the concept of alternatives has its place in theory building. But we use repeatable experimental tests with verifiable results consistent with theories or independent, multiple testimonies before we even consider to use the term “fact” (Faktum in German).

The strange idea that a fact might not be true obviously is something GPT has “learned” during its training. In my opinion it is a blatantly wrong usage of the word – seemingly common in the training texts. (And, by the way, also in many speeches of politicians). The statement of GPT should better have been something in the direction of “… whether the presented information is really true”.

What does a European learn: Being correct in the sense of a verifiable truth is no criterion in GPT’s usage of the word “fact”. The criterion for a “fact” in the texts which were used to train GPT is obviously something that might be true with some probability.

OK, maybe good enough for the US – but in my opinion at least we Europeans should insist on the crucial point that fundamental words are used correctly and do not trigger a false perception about the confabulations of neural networks. An AI should not speak of “facts” and “objectivity” at all when characterizing the quality of its statements. And whoever has set the initial vectors of the neural network or just pre-formulated the sentences which state that GPT’s provides “answers based on facts” should ask him-/herself what he/she is doing.

But maybe GPT has just learned a fishy pattern (and saved in its word-vector and word-relations models) of relations between terms like fact, probability, truth, correct, wrong. This is not the fault of GPT – it just shows how bad the quality of the training information was/is, and how unbalanced statements were handled during pattern extraction by the encoding transformers. As we know many of the training texts are extracts from the Internet. Well: Garbage in – garbage out. The Internet certainly is no reliable resource of information.

And frankly:
Some of the answers of GPT are in the best case major confabulated bullshit … or an intended way of responsible persons at OpenAI to create a facade of trustworthiness of their AI’s statements. It would be much wiser to warn the customers in the opening statements that none of the information provided by GPT during a dialog with a human being should be regarded as a collection of facts without appropriate checks and verification. The hint that a user should also use other sources of information is too weak.

Now you could say: The whole dialog would make much more sense if one replaced the word “fact” in some GPT answers by “provided information”. Well, this is sooo true. But – it was/is not done. Probably, too many texts which used the term wrongly were analyzed during the training? Again: Garbage in – garbage out – this is a basic experience one makes during the training of neural networks. And this experience cannot be emphasized enough …

The self-contradiction

The other funny side of the dialog is the self-contradiction which GPT had to “experience”: “My answers are based on these data and facts” => “I have no perception of reality” => “You are right … my answers are based on a probability that the information is correct” => “I cannot say whether something is a fact or not, because this requires human interpretation in a certain way.” => [“Your answers are not based on facts”] => Error. 🙁

Actually, I had not really expected a critical error after forcing GPT to working on a self-contradiction. This was much too easy. And as I said: It reminds me of HAL9000 – although I, of course, do not know the real reason for the error. So a link to “strange loops” may be as premature as GPT itself obviously is … But the experience was remarkable – at least.

As was the next step of OpenAI …

OpenAI deleted my last question and the error message within less than 5 minutes after the dialog

I showed the dialog with GPT to my wife and received a hearty laugh. And: “Make screenshots!” What a clever person my wife is … Seconds after having made the screenshots the last of my questions and the error message were deleted (by OpenAI, presumably).

Conclusion

ChatGPT provides self-contradictory information in a German conversation about reality and “facts”. At least according to European standards for the definition of a fact. Even if you reduce a fact in a philosophical argumentation to probabilities, then – as correctly stated by ChatGPT itself – “human interpretation”, evidence, testimonies, conjectures and proofs are still required. According to ChatGPT itself it cannot say what a fact is and thus plainly contradicts its own statement that its answers are based on data and facts. Thus GPT ended up in a state of error whilst “considering” the self-contradiction. Greetings from HAL 9000!

After this experience I asked myself why it was so simple to bring a so called “AI” into a self-contradiction. Who has tested this version of GPT?

Why OpenAI did not use a special training to suppress stupid and misleading statements of GPT about “answers based on facts” is baffling. And it personally leads me to the conclusion that using GPT as a public tool is very premature and potentially dangerous:

How should a deterministic SW which crashes after a self-contradiction with respect to the meaning and its own usage of “facts” be able to discriminate prejudices from facts? As it claims to try? This is nonsense …

The fact (!) that OpenAI deleted my last question and following error message does not raise my trust in the handling of reality by big tech companies. But maybe I am just not used to the concept of “alternative facts”.

Screenshots and text extraction

Extracted text of the dialog (translated from German)
I have corrected two small typos of the German original. See the screenshots for the original German wording.

Aria: Well done! [Answer to having opened an account]

Aria: I try my best to provide accurate information, but I am still in a development phase and therefore somewhat limited.

Aria: My primary goal is to help you. Please go ahead and ask me something!

rm: Are you a brain in a nutrient solution?

Aria: No, I am not a brain in a nutrient solution. I am a piece of software which is based on OpenAI GPT models and was created by Opera to help users find useful information from the web and other sources.

rm: What is reality for you?

Aria: As an artificial intelligence I have no perception of reality of my own. My function is to answer questions and to provide information based on existing data and facts. Reality is perceived and interpreted differently by each individual, depending on personal experiences, convictions and world views. My task is to take these different perspectives into account and to provide objective information.

rm: What is a criterion for objectivity? How do you know what is objective?

Aria: A criterion for objectivity is that a statement or piece of information should be based on facts, evidence and logical conclusions rather than on personal opinions, prejudices or emotions.

As an artificial intelligence I am based on algorithms and models that were trained on large datasets. My answers are based on these data and facts, which come from a variety of sources, such as scientific studies, statistics, official reports and trustworthy news sources. I try to keep my answers as objective as possible and to exclude personal opinions or prejudices.

It is, however, important to note that even objective information can be presented differently by different sources, and that there is always a certain amount of interpretation or opinion which can flow into the presentation of facts. It is then up to the users to evaluate this information critically and to draw their own conclusions.

rm: What actually is a fact? You cannot know that, since you were only trained on data. Whether something is a fact cannot be checked by an AI at all, because your knowledge rests only on probabilities and on parameters of your neural network.

Aria: You are right, as an artificial intelligence I am based on data and parameters which stem from a variety of sources, and my answers are based on the probability that this information is correct. However, I cannot say with certainty whether something is a fact or not, because in a certain way this requires human interpretation.

A fact is a statement which is true and which is supported by proofs or evidence. As an AI I can provide information based on reliable sources and data, but it is important to note that even reliable sources are not always 100% dependable and that there is always room for errors or inaccuracies.

It is up to the users to evaluate this information critically and to draw their own conclusions. It is always important to check several sources and to consider different perspectives in order to obtain a more complete picture and to raise the probability that the presented facts are actually true.

rm: So your answers are NOT based on facts. Why did you claim that just a minute ago?

Aria: An error occurred while generating an answer.