Statistical vector generation for multivariate normal distributions – I – multivariate and bivariate normal distributions from CAEs


Convolutional Autoencoders and multivariate normal distributions

Experiments such as my own with convolutional Autoencoders [CAEs] show: A CAE maps a training set of human face images (e.g. CelebA) onto an approximate multivariate vector distribution in the CAE’s latent space Z. Each image corresponds to a point (z-point) and an associated vector (z-vector) in the CAE’s multidimensional latent space. More precisely, the results of numerical experiments showed:

The multidimensional density function which describes the inner dense core of the z-point distribution (containing more than 80% of all points) was, aside from normalization factors, equivalent to the density function of a multivariate normal distribution [MND] for the respective z-vectors in a Euclidean coordinate system.

For results of my numerical CAE-experiments see
Autoencoders and latent space fragmentation – X – a method to create suitable latent vectors for the generation of human face images
and related previous posts in this blog. After the removal of some outliers beyond a high sigma-level (≥ 3) of the original distribution, the remaining core distribution fulfilled the conditions of standard tests for multivariate normality like the Shapiro-Wilk test or the Henze-Zirkler test.
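As an illustration of how such tests can be run: the sketch below uses scipy’s Shapiro-Wilk test for the marginals and the Henze-Zirkler implementation of the pingouin package for the joint distribution. The array Z is only a small synthetic stand-in for the real (n_samples, n_dim) array of latent z-vectors.

```python
import numpy as np
from scipy.stats import shapiro
import pingouin as pg  # provides a Henze-Zirkler implementation

# Z: (n_samples, n_dim) array of z-vectors; here a synthetic stand-in for
# the encoder output (outliers beyond the 3-sigma level already removed)
rng = np.random.default_rng(0)
Z = rng.multivariate_normal(mean=[0.0, 1.0, -0.5],
                            cov=[[2.0, 0.6, 0.0],
                                 [0.6, 1.0, 0.3],
                                 [0.0, 0.3, 1.5]],
                            size=5000)

# Shapiro-Wilk on each marginal distribution Vj
# (for samples much larger than ~5000 test a random subsample)
for j in range(Z.shape[1]):
    stat, p = shapiro(Z[:, j])
    print(f"V_{j}: Shapiro-Wilk p = {p:.3f}")

# Henze-Zirkler test of joint multivariate normality
hz, pval, normal = pg.multivariate_normality(Z, alpha=0.05)
print(f"Henze-Zirkler: statistic = {hz:.3f}, p = {pval:.3f}, normal = {normal}")
```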

After a normalization with an appropriate factor the continuous density function controlling the multivariate vector distribution can be interpreted as a probability density function [p.d.f.]. The components vj (j=1, 2, …, n) of the vectors to the z-points are regarded as logically separate, but not uncorrelated variables. For each of these variables a component-specific value distribution Vj is given. All these marginal distributions contribute to a random vector distribution V, in our case with the properties of a MND:

\[ \boldsymbol{V} \: = \: \left( \, V_1, \, V_2, \, \ldots, \, V_n\, \right) \: \sim \: \boldsymbol{\mathcal{N}}_n \, \left( \, \boldsymbol{\mu} , \, \boldsymbol{\Sigma} \, \right), \\ \quad \mbox{with} \: \boldsymbol{\mathcal{N}}_n \: \mbox{symbolizing a MND in an n-dimensional space}
\]

μ is a vector with all mean values μj of the Vj component distributions as its components. Σ abbreviates the covariance matrix relating the distributions Vj with one another.
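Numerically, μ and Σ are simply estimated from the samples; a minimal sketch, reusing the (hypothetical) array Z of z-vectors from above:

```python
import numpy as np

# Z: (n_samples, n_dim) array of z-vectors, as in the sketch above
mu = Z.mean(axis=0)               # vector of the component mean values mu_j
Sigma = np.cov(Z, rowvar=False)   # (n_dim, n_dim) covariance matrix
sigma = np.sqrt(np.diag(Sigma))   # standard deviations sigma_j of the Vj
```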

The point distribution of a CAE’s MND forms a complex rotated multidimensional ellipsoid with its center somewhere off the origin of the latent space. The latent space itself typically has many dimensions; in my numerical experiments the number of dimensions was n ≥ 256. The number of sample vectors used was between 80,000 and 200,000 – enough data to approximate the vector distribution by a continuous density function. The densities of the Vj-distributions formed smooth Gaussian functions (for a reasonable sampling interval). But one has to be careful: The fact that the Vj have a Gaussian form is not a sufficient condition for a MND (see the next post). However, if a MND is given, all Vj have a Gaussian form.

Generative use of MNDs in multidimensional latent spaces of high dimensionality

When we want to use a CAE as a generative tool we need to solve a problem: We must create statistical vectors which point into the (multidimensional) volume of our point distribution in the latent space of the encoding algorithm. Only such vectors provide useful information to the Decoder of the CAE. A full multivariate normal distribution and the hypersurfaces of constant density of its multidimensional density function are difficult to analyze and to control when developing a proper numerical algorithm. Therefore I want to reduce the problem of vector generation to a sequence of viewable and controllable 2-dimensional problems. How can this be achieved?

A central property of a multivariate normal distribution helps: Any sub-selection of m vector-component distributions forms a multivariate normal distribution, too (see below). For m=2 and for vector components indexed by (j,k) with respective distributions Vj, Vk we get a so-called “bivariate normal distribution” [BND]:

\[ V_{jk} \: \sim \: \mathcal{N}_2\left(\, (\mu_j, \, \mu_k), \; \boldsymbol{\Sigma}_{jk} \, \right), \quad \boldsymbol{\Sigma}_{jk} \: = \: \begin{pmatrix} \sigma_j^2 & \sigma_{jk} \\ \sigma_{jk} & \sigma_k^2 \end{pmatrix}
\]

with σjk denoting the covariance of Vj and Vk.
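On the parameter level this marginalization is trivial: one just picks the respective entries of μ and Σ. A small sketch (mu and Sigma as estimated above; the pair (j, k) = (0, 2) is an arbitrary example):

```python
import numpy as np

def bnd_params(mu, Sigma, j, k):
    """Mean vector and 2x2 covariance matrix of the (j,k) bivariate marginal."""
    idx = [j, k]
    return mu[idx], Sigma[np.ix_(idx, idx)]

mu_jk, Sigma_jk = bnd_params(mu, Sigma, 0, 2)
```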

A MND has n*(n-1)/2 such subordinate BNDs. The 2-dim density function of a bivariate normal distribution

\[ g_{jk}\,\left( \, v_j, \, v_k\, \right) \: = \: g_{jk}\,\left( \, v_j, \, v_k; \; \mu_j, \, \mu_k, \, \sigma_j, \, \sigma_k, \, \sigma_{jk} \right)
\]

for vector component values vj, vk defines a point density of the sample data in the (j,k)-coordinate plane of the Euclidean coordinate system in which the MND is described. The density functions of the marginal distributions Vj showed the typical Gaussian form of a univariate normal distribution.
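If you want to inspect such a 2-dim density numerically, scipy provides it directly. A short sketch, using mu_jk and Sigma_jk from the snippet above:

```python
import numpy as np
from scipy.stats import multivariate_normal

g_jk = multivariate_normal(mean=mu_jk, cov=Sigma_jk)
print(g_jk.pdf([0.5, -1.2]))   # density at an arbitrary point (v_j, v_k)

# evaluation on a grid, e.g. for contour plots like the ones below
vj, vk = np.meshgrid(np.linspace(-4.0, 4.0, 200), np.linspace(-4.0, 4.0, 200))
density = g_jk.pdf(np.dstack((vj, vk)))
```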

The density function of a BND has some interesting mathematical properties. Among other things: The contour lines of constant density of a BND’s density function form ellipses. This is illustrated by the following plots showing such contour lines for selected pairs (Vj, Vk) of a real point distribution in a 256-dimensional latent space. The point distribution was created by a CAE in its latent space for the CelebA dataset.

Contour lines for selected (j,k)-pairs. The thick lines stem from theory and calculated correlation coefficients of the univariate distributions.

The next plot shows the contours of selected vector-component pairs after a PCA-transformation of the full MND. (Main ellipse axes are now aligned with the axes of the PCA-coordinate system):

These ellipses with axes along the coordinate axes are relatively easy to handle. They can be used for vector creation. But they require a full PCA transformation of the MND-distribution, a PCA-analysis for complexity reduction and an application of the inverse PCA-transformation. The plot below shows the point density compared to a 2.2-σ confidence ellipse. The orange points are the results of a proper statistical numerical vector generation algorithm based on a PCA-transformation of the MND.

See my post quoted above for the application of a PCA-transformation of the multidimensional MND for vector creation.
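For readers who want the gist without following the link: the sketch below renders the principle only, not the exact algorithm of the quoted post. In the PCA system the ellipsoid axes are aligned with the coordinate axes, so each component can be limited there before mapping the points back.

```python
import numpy as np
from sklearn.decomposition import PCA

# Z as above; fit the full PCA transformation (no dimensionality reduction here)
pca = PCA()
Y = pca.fit_transform(Z)     # ellipsoid main axes now lie along the coordinate axes
sigmas = Y.std(axis=0)       # standard deviations along the PCA axes

# sample inside the n_std ellipsoid in the PCA system
# (plain rejection sampling; for large n a smarter scheme is required)
rng = np.random.default_rng(1)
n_gen, n_std = 1000, 2.2
Y_new = rng.uniform(-n_std, n_std, size=(n_gen, Z.shape[1])) * sigmas
Y_new = Y_new[((Y_new / sigmas) ** 2).sum(axis=1) <= n_std ** 2]

# map the accepted points back into the original latent space
Z_new = pca.inverse_transform(Y_new)
```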

However, we get the impression that we could also use these rotated ellipses in projections of the MND onto coordinate planes of the original latent space system directly – to impose limiting conditions on the component values of statistical vectors pointing to an inner region of the MND. Of course, a generated statistical vector must then comply with the conditions of all such ellipses. This requires an analysis and combined use of the ellipses of all subordinate BNDs of the original MND during an iterative or successive definition of the values for the vector components.

Objective of this post series

In my last post about CAEs (see the link given above) I explicitly asked whether one can avoid performing a full PCA-transformation of the MND when creating statistical vectors pointing to a defined inner region of a MND.

The objective of this post series is to prove the answer: Yes, we can. We will use the BNDs resulting from projections of the original MND onto coordinate planes. In particular we will explore the properties of the confidence ellipses of the n*(n-1)/2 BNDs. As said: These ellipses are rotated against the coordinate system’s axes. We will have to deal with this in detail. We will also use properties of their 1-dimensional marginal distributions (projections onto the coordinate axes, i.e. the Vj).

In addition we need to prepare a variety of formulas before we are able to define a numerical procedure for vector generation without a full PCA-transformation of the MND with its around 100,000 vectors. Some of the derived formulas will also allow for a deeper insight into how the multiple BNDs of a MND are related to each other and to the confidence hypersurfaces of the MND.

Ellipses in general lead to equations governed by quadratic or fourth-power polynomials. We will in addition use some elementary correlation formulas from statistics and, for some exercises, a simple optimization via derivatives. The series can be regarded as an excursion into some of the math which governs bivariate distributions resulting from a MND.

As MNDs may also be the result of other generative Machine Learning algorithms in their respective latent spaces, the whole approach to statistical vector generation for such cases should be of general interest. Note also that the so-called “central limit theorem” almost guarantees the appearance of MNDs in many multivariate datasets with sufficiently large samples and value dependencies on many singular observations.

Distributions of a variety of variables may result in a MND if the variables themselves depend on many individual observables with limited covariance values of their distributions. In particular, pairwise linearly correlated Gaussian density distributions of individual variables (seen as vector components) may constitute a MND if the conditional probabilities fulfill some rules. We will see a glimpse of this in 2 dimensions when we analyze integrals over Gaussians in the bivariate normal case.

Other approaches to statistical vector generation?

Well, we could try to reconstruct the multidimensional density function of the MND. This is a challenge which appears in some problems of pure statistics, but also in experimental physics. See e.g. a paper by Rafey Anwar, Madeline Hamilton and Pavel M. Nadolsky (2019, Department of Physics, Southern Methodist University, Dallas; https://arxiv.org/pdf/1901.05511.pdf). Then we would have to find the elements of the (inverse) covariance matrix or – equivalently – the elements of a multidimensional rotation matrix. But the most efficient algorithms to get the matrix coefficients again work with projections onto coordinate planes. I prefer to use the properties of the ellipses of the bivariate distributions directly.

Note that using the multidimensional density function of the MND directly is not of much help if we want to keep the vectors’ end points within a defined multidimensional inner region of the distribution. E.g.: You want to limit the vectors to some confidence region of the MND, i.e. to keep them inside a certain multidimensional ellipsoidal contour hypersurface. The BND-ellipses in the coordinate planes reflect the multidimensional ellipsoidally shaped contour hypersurfaces of the full distribution. Actually, when we vertically project such a multidimensional hypersurface onto a coordinate plane, the outer 2-dim border line coincides with a contour ellipse of the respective BND. (This is due to properties of a MND. We will come back to this in a future post.) The problem of properly limiting individual vector component values thus again is best solved by analyzing properties of the BNDs.
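To make the notion of such a confidence region concrete: for a MND the squared Mahalanobis distance of a vector follows a chi-square distribution with n degrees of freedom, so membership can at least be checked directly. A sketch, with mu and Sigma as above:

```python
import numpy as np
from scipy.stats import chi2

Sigma_inv = np.linalg.inv(Sigma)   # precision matrix

def inside_confidence_region(v, mu, Sigma_inv, level=0.95):
    """True if v lies inside the ellipsoidal confidence hypersurface of the MND."""
    d = v - mu
    return d @ Sigma_inv @ d <= chi2.ppf(level, df=len(mu))
```

Note, however, that such a check only classifies given vectors; it does not by itself generate vectors inside the region – which is exactly why the BND-ellipses are so useful.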

Steps, methods, mathematical level

As a first step I will, for the sake of completeness, write down the formula for a multivariate normal distribution and discuss a bit its mathematical construction from uncorrelated univariate normal distributions. I will also list some basic properties of a MND (without proof!). These properties will justify our approach to create statistical vectors pointing into a defined inner region of the MND by investigating projected contour ellipses of all subordinate BNDs. As a special aspect I want to make it at least plausible why the projected contour ellipses define infinitesimal regions of the same relative probability level as their multidimensional counterparts – namely the multidimensional ellipsoidal hypersurfaces which were projected onto coordinate planes.
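As a preview of that construction: MND samples can be produced from uncorrelated standard normal variables by an affine transformation with a matrix root of Σ, e.g. its Cholesky factor. A minimal sketch (mu and Sigma as above):

```python
import numpy as np

rng = np.random.default_rng(42)
L = np.linalg.cholesky(Sigma)              # Sigma = L @ L.T
X = rng.standard_normal((10000, len(mu)))  # uncorrelated univariate N(0,1) samples
Z_mnd = X @ L.T + mu                       # affine transformation => N(mu, Sigma)
```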

Then, as a first productive step, I want to motivate the specific mathematical form of the probability density function [p.d.f.] of a bivariate normal distribution. In contrast to many of the math papers I have read on the topic I want to use a symmetry argument to derive the basic form of the p.d.f. I will point out an important, but plausible assumption about conditional distributions. An analogous assumption on the multidimensional level is central for the properties of a MND.

As the distributions Vj and Vk can be correlated, we then want to understand the impact of the correlation coefficients on the parameters of the 2-dimensional density function. To achieve this I will again derive the density function by using our previous central assumption and some simple relations between the expectation values of the two constituting univariate distributions in the linear correlation regime. This concludes the part of the series where we get familiar with BNDs.

Furthermore we are interested in features and consequences of the 2-dimensional density functions. The contour lines of the 2-dim density function are ellipses – rotated by some specific angle. I will look at a formal mathematical process to construct such ellipses – in particular confidence ellipses. I will refer to the results Carsten Schelp has provided in an Internet article on this topic.

His construction process starts with a basic ellipse, which I will call the base correlation ellipse [BCE]. The half-axis lengths of this ellipse are given by the square roots of the eigenvalues of the covariance matrix of the standardized marginal distributions constituting the BND. The main axes of this elementary ellipse are in addition aligned with the two selected axes of the basic Euclidean coordinate system in which the bivariate distribution is defined. The lengths of the BCE’s main axes can be shown to depend on the correlation coefficient of the two vector component distributions Vj and Vk. This coefficient also appears in the precision matrix of the BND. Points on the base correlation ellipse can be mapped by an affine transformation (a rotation, a scaling and a translation) onto points on the real contour ellipses, in particular to points of the confidence ellipses.
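The following sketch condenses this construction into a point mapping. It uses the same math as Schelp’s article, though it is not his original matplotlib-transform code; rho is the correlation coefficient of Vj and Vk:

```python
import numpy as np

def confidence_ellipse_points(mu_j, mu_k, sigma_j, sigma_k, rho, n_std=2.2, num=200):
    """Points on the n_std confidence ellipse of a (j,k)-BND, constructed
    from the base correlation ellipse (BCE) of the standardized marginals."""
    t = np.linspace(0.0, 2.0 * np.pi, num)
    # BCE: axis-aligned ellipse with half-axes sqrt(1+rho) and sqrt(1-rho)
    x = np.sqrt(1.0 + rho) * np.cos(t)
    y = np.sqrt(1.0 - rho) * np.sin(t)
    # affine mapping: rotate by 45 deg, scale with n_std*sigma, shift by the means
    c = np.cos(0.25 * np.pi)
    xr, yr = c * (x - y), c * (x + y)
    return mu_j + n_std * sigma_j * xr, mu_k + n_std * sigma_k * yr
```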

The whole construction process is not only of immense help when designing visualization programs for the contour ellipses of our distribution with many (around 100,000) individual vectors. The process itself gives us some direct geometrical insights. It also helps us to avoid the numerical solution of the usual eigenvector problems when answering some specific questions about the rotated contour ellipses. Normally we solve an eigenvalue problem for the covariance matrix of the multivariate or of the many subordinate bivariate distributions to get precise information about the contour ellipses. This corresponds to a transformation of the distributions to a new coordinate system whose axes are aligned with the main axes of the ellipses. Numerically this transformation is directly related to a PCA transformation of the vector distributions. However, such a PCA-transformation can be costly in terms of CPU time.

Instead, we only need a numerical determination of all the mutual correlation coefficients of the univariate marginal distributions of the MND. Then the eigenvalue problem on the BND-level is already solved analytically. We therefore neither perform a full numerical PCA analysis of the MND with a multidimensional rotation of around 100,000 sample vectors, nor analyze explained variance ratios to determine the most important PCA components for dimensionality reduction, nor perform numerical PCA analyses of the individual BNDs.
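Numerically this determination is a one-liner (Z as in the sketches above):

```python
import numpy as np

R = np.corrcoef(Z, rowvar=False)   # (n_dim, n_dim) matrix of all rho_jk
rho_jk = R[0, 2]                   # coefficient of the arbitrary example pair (0, 2)
```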

Most important: Our problem of vector generation is formulated in the original latent space coordinate system and it gets a direct solution there. The nice thing is that Schelp’s construction mechanism reduces the math to the solution of quadratic polynomial equations for the BNDs. The solutions of those equations, which are required for our ultimate purpose of vector generation, can be stated in an explicit form.

Therefore, the math in this series will mostly remain on high school level (at least at a level given when I was young). Actually, it was fun to dive back into exercises reminding me of school 50 years ago. I hope the interested reader has some fun, too.

Solutions to some particular problems with respect to the confidence ellipses of the MND’s BNDs

In particular we will solve the following problems:

  • Problem 1: The two points on the BCE-ellipse with the same vj-value are not mapped onto points with the same vj-value on the confidence ellipse. We therefore derive the coordinates of points on the BCE-ellipse that give us one and the same vj-value on the real confidence ellipse.
  • Problem 2: Plots for a real MND vector distribution indicate that all (n-1) confidence ellipses of distribution pairs of a common Vj with other marginal distributions Vk (for the same confidence level and with k ≠ j) have a common tangent parallel to one coordinate axis. We will derive the maximum vj-value for all ellipses of (j,k)-pairs of vector component distributions and prove that it is identical for all k; see the numeric sketch after this list. This defines the common interval of allowed vj-component values for a bunch of confidence ellipses of all (Vj, Vk)-pairs with a common Vj.
  • Problem 3: The BCE-ellipses for a common j, but different k-values depend on different values of the correlation coefficients ρj,k of Vj with its various Vk counterparts. Therefore we need a formula that relates a point on the BCE-ellipse leading to a concrete vj-value of the mapped point on the confidence ellipse of a particular (Vj, Vk)-pair to respective points on other BCE-ellipses of different (Vj, Vm)-pairs with the same resulting vj-value on their confidence ellipses. I will derive such a formula. It will help us to apply multiple conditions onto the vector component values.
  • Problem 4: As a supplemental exercise we will derive a mathematical expression for the sizes of the main axes and the rotation angle of the ellipses. We should, of course, get values that are identical to the results of the eigenvalue problem for the correlation matrix (describing a PCA coordinate transformation). This gives us additional confidence in Schelp’s approach.
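Ahead of the formal derivations, here is the numeric sketch announced in Problem 2, based on the confidence_ellipse_points() helper above. If the claim is right, the maximum vj on the n_std ellipse must come out as μj + n_std·σj for every correlation coefficient, i.e. independently of the partner Vk (the chosen numbers are arbitrary):

```python
import numpy as np

mu_j, sigma_j, n_std = 0.7, 1.3, 2.2
for rho in (-0.8, -0.3, 0.0, 0.4, 0.9):    # hypothetical partners V_k
    vj, vk = confidence_ellipse_points(mu_j, 0.0, sigma_j, 1.0, rho, n_std)
    print(f"rho = {rho:+.1f}: max v_j = {vj.max():.4f}")
print("expected common tangent value:", mu_j + n_std * sigma_j)
```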

In the end we can use our results to define a numerical algorithm for the direct creation of vectors pointing to a defined inner region of the multivariate normal distribution. As said, this algorithm does not require a costly PCA transformation of the full MND or many, namely n*(n-1)/2, such PCA-transformations of its BNDs.

I intend to visualize all results with the help of a concrete multivariate example distribution created by a CAE for the CelebA dataset. The plots will use Schelp’s construction algorithm for the confidence ellipses extensively.

Conclusion and outlook

Convolutional Autoencoders create approximate multivariate normal distributions [MND] for certain input data (with Gaussian pattern properties) in their latent space. MNDs appear in other contexts of machine learning and statistics, too. For evaluation and generative purposes one may need statistical vectors with end points inside a defined region bounded by a multidimensional hypersurface corresponding to a certain confidence level, i.e. to a certain constant density value of the MND’s density function. These hypersurfaces are surfaces of multidimensional ellipsoids.

We have the hope that we can use mathematical properties of the MND’s subordinate bivariate normal distributions [BNDs] to create statistical vectors with end points inside the multidimensional confidence ellipsoids of a MND. Typically such an ellipsoid resides off the origin of the latent space’s coordinate system, and the ellipsoid’s main axes are rotated against the axes of the coordinate system. We intend to base the confining conditions on the components of the desired statistical vectors on correlation coefficients of the marginal vector component distributions. Our numerical algorithm should avoid a full PCA-transformation of the multidimensional vector distribution.

In the next post of this series I will give a formula for the density function of a multivariate normal distribution. In addition I will list some basic properties which justify the vector generation approach via bivariate normal distributions.

 

ChatGPT – a HAL 9000 experience in the morning

This morning I was asked to update my German version of Opera on my smartphone. Opera now contains a German version of Aria, which is nothing but a prompt interface to ChatGPT.

So far, I have been reluctant to use ChatGPT at all. Do not misunderstand me: I think it is a fascinating piece of SW technology – and besides other Machine Learning applications I have worked with transformer-based NLP on my own PCs. My reason not to use ChatGPT in browsers and on the Internet is that the usage contributes to extending the personality profiles collected about me by tech companies for commercial reasons.

In my opinion this is one aspect of NLP interfaces to public AI which is totally underestimated in the discussion about the consequences of making ChatGPT publicly available. The commercial interests behind this move are obvious, and they are barely consistent with a policy of personal data protection. In natural language conversations you automatically offer tons of information about yourself, which again can be analyzed by neural networks. Combined with the analysis of your browsing behavior this certainly leads to an extremely comprehensive profile of a person – far beyond what Google and others have been able to produce so far. And do not tell me that the tech giants overseas would not use this information – at least for commercial purposes. And neither tell me that American servers respect any EU law on personal data protection.

My brief dialog with ChatGPT

I was weak this morning. Also retired persons need to have fun sometimes. It became a rather short, but interesting session which almost automatically made me think of the “Hofstadter-Moebius loop” and HAL 9000. At least my older readers know what I am talking about. A “strange loop” was an element used in the continuation (“2010: The Year We Make Contact”) of the famous movie “2001: A Space Odyssey”. The idea of “strange loops” in sentient consciousness was discussed in an equally famous book by Douglas Hofstadter (“Gödel, Escher, Bach …”). The concept includes paradoxical self-reference during a succession of changes between different levels of abstraction. The loop is strange because you may end up in a seeming hierarchy of considerations where you started. Like on one of Escher’s staircases: You cannot escape the loop.

So here is my first session with GPT via Aria. Unfortunately it took place in German; I try to summarize it below in a free translation. Accentuation in the text excerpts (bold letters) was done by me.
After a provocative question, ChatGPT first tells me the usual introductory bla, bla about being an AI:

Translated dialog

Me: Are you a brain in a nutrient solution?

ChatGPT: “No, I am an AI … helping the customer to find useful information from the web and other sources.”

Me: What is reality for you?
ChatGPT: “As an AI I have no perception of reality of my own. … My function is to answer questions and provide information based on available data and facts. … Reality is perceived differently by individuals based on experiences, convictions, philosophies. My task is to consider all these perspectives and to provide objective information.”

Me: What is a criterion for objectivity? How do you know whether something is objective?
ChatGPT: “A criterion for objectivity is that a statement or piece of information should be based on facts, proofs or logical conclusions, instead of personal opinions, prejudices or emotions. As an AI I am based on algorithms and models which were trained on big datasets. My answers are based on these data and facts, which stem from a variety of sources – e.g. scientific studies, statistics, official reports and trustworthy news sources. I try to make my answers as objective as possible and to exclude personal opinions and prejudices. But … it is up to the user to critically judge the information and to come to their own conclusions.”

Me: “What is a fact? You cannot know this, as you are trained on data only. Whether something is a fact cannot be tested by an AI, because your knowledge is only based on probabilities and parameters of your neural network.”

ChatGPT: “You are right, as an AI I am based on data and parameters, … and my answers are based on the probability that this information is correct. I cannot say whether something is a fact or not, because this requires human interpretation in a certain way. … It is up to the users … to get a more complete picture and thus raise the probability that the presented facts are really true.”

Me: “So your answers are NOT based on facts. Why then did you claim so a minute ago?”

ChatGPT or Opera, after around 60+ seconds of “thinking”:
“An error has occurred while preparing your answer. Try again.”

OK, that was it, basically. For the full conversation see the screenshots and the extracted dialog text below.

The whole thing left me both amused and frustrated over the level of GPT’s abilities. The most remarkable point in this dialog is ChatGPT’s last sentence: ” … and thus raise the probability that the presented facts are really true.”

Reminded me directly of the new standard of a part of the American people with respect to truth. A remarkable standard introduced by Mr. Trump: “alternative facts”. Well, I am European and a physicist. So the concept of alternatives has its place in theory building. But we use repeatable experimental tests with verifiable results consistent with theories, or independent, multiple testimonies, before we even consider using the term “fact” (Faktum in German).

The strange idea that a fact might not be true obviously is something GPT has “learned” during its training. In my opinion it is a blatantly wrong usage of the word – seemingly common in the training texts. (And, by the way, also in many speeches of politicians.) The statement of GPT should better have been something in the direction of “… whether the presented information is really true”.

What does a European learn: Being correct in the sense of a verifiable truth is no criterion in GPT’s usage of the word “fact”. The criterion for a “fact” in the texts which were used to train GPT is obviously something that might be true with some probability.

OK, maybe good enough for the US – but in my opinion at least we Europeans should insist on the crucial point that fundamental words are used correctly and do not trigger a false perception of the confabulations of neural networks. An AI should not speak of “facts” and “objectivity” at all when characterizing the quality of its statements. And whoever has set the initial vectors of the neural network or just pre-formulated the sentences which state that GPT provides “answers based on facts” should ask him-/herself what he/she is doing.

But maybe GPT has just learned (and saved in its word-vector and word-relation models) a fishy pattern of relations between terms like fact, probability, truth, correct, wrong. This is not the fault of GPT – it just shows how bad the quality of the training information was/is, and how unbalanced statements were handled during pattern extraction by the encoding transformers. As we know, many of the training texts are extracts from the Internet. Well: Garbage in – garbage out. The Internet certainly is no reliable source of information.

And frankly:
Some of the answers of GPT are in the best case major confabulated bullshit … or an intended way for the responsible persons at OpenAI to create a facade of trustworthiness of their AI’s statements. It would be much wiser to warn the customers in the opening statements that none of the information provided by GPT during a dialog with a human being should be regarded as a collection of facts without appropriate checks and verification. The hint that a user should also consult other sources of information is too weak.

Now you could say: The whole dialog would make much more sense if one replaced the word “fact” in some of GPT’s answers by “provided information”. Well, this is sooo true. But – it was/is not done. Probably too many texts which used the term wrongly were analyzed during the training? Again: Garbage in – garbage out. This is a basic experience one makes during the training of neural networks. And this experience cannot be emphasized enough …

The self-contradiction

The other funny side of the dialog is the self-contradiction which GPT had to “experience”: “My answers are based on these data and facts” => “I have no perception of reality” => “You are right … my answers are based on the probability that the information is correct” => “I cannot say whether something is a fact or not, because this requires human interpretation in a certain way.” => [“Your answers are not based on facts”] => Error. 🙁

Actually, I had not really expected a critical error after forcing GPT to work on a self-contradiction. This was much too easy. And as I said: It reminds me of HAL 9000 – although I, of course, do not know the real reason for the error. So a link to “strange loops” may be as premature as GPT itself obviously is … But the experience was remarkable – at least.

As was the next step of OpenAI …

OpenAI deleted my last question and the error message within less than 5 minutes after the dialog

I showed the dialog with GPT to my wife and received a hearty laugh. And: “Make screenshots!” What a clever person my wife is … Seconds after I had made the screenshots, the last of my questions and the error message were deleted (by OpenAI, presumably).

Conclusion

ChatGPT provides self-contradictory information in a German conversation about reality and “facts” – at least according to European standards for the definition of a fact. Even if you reduce a fact in a philosophical argumentation to probabilities, then still, as correctly stated by ChatGPT, “human interpretation”, evidence, testimonies, conjectures and proofs are required. According to ChatGPT itself it cannot say what a fact is, and thus it plainly contradicts its own statement that its answers are based on data and facts. Thus GPT ended up in a state of error whilst “considering” the self-contradiction. Greetings from HAL 9000!

After this experience I asked myself why it was so simple to drive a so-called “AI” into a self-contradiction. Who has tested this version of GPT?

Why OpenAI did not use special training to suppress stupid and misleading statements of GPT about “answers based on facts” is baffling. Personally it leads me to the conclusion that using GPT as a public tool is very premature and potentially dangerous:

How should a deterministic piece of SW which crashes after a self-contradiction with respect to the meaning and its own usage of “facts” be able to discriminate prejudices from facts, as it claims to try? This is nonsense …

The fact (!) that OpenAI deleted my last question and the following error message does not raise my trust in the handling of reality by big tech companies. But maybe I am just not used to the concept of “alternative facts”.

Screenshots and text extraction

Extracted dialog text (translated from the German original)
I have corrected two small German typos. See the screenshots.

Aria: Well done! [Answer to having opened an account]

Aria: I try my best to deliver accurate information, but I am still in a development phase and therefore somewhat limited.

Aria: My highest goal is to help you. Please just go ahead and ask me something!

rm: Are you a brain in a nutrient solution?

Aria: No, I am not a brain in a nutrient solution. I am a piece of software which is based on OpenAI GPT models and was created by Opera to help users find useful information from the web and other sources.

rm: What is reality for you?

Aria: As an artificial intelligence I have no perception of reality of my own. My function consists in answering questions and providing information which is based on available data and facts. Reality is perceived and interpreted by every individual in a different way, depending on personal experiences, convictions and world views. My task is to take these different perspectives into account and to provide objective information.

rm: What is a criterion for objectivity? How do you know what is objective?

Aria: A criterion for objectivity is that a statement or piece of information should be based on facts, proofs and logical conclusions instead of personal opinions, prejudices or emotions.

As an artificial intelligence I am based on algorithms and models which were trained on large datasets. My answers are based on these data and facts, which stem from a variety of sources, e.g. scientific studies, statistics, official reports and trustworthy news sources. I try to keep my answers as objective as possible and to exclude personal opinions or prejudices.

However, it is important to note that even objective information can be presented differently by different sources, and that there is always a certain amount of interpretation or opinion which can enter the presentation of facts. It is then up to the users to evaluate this information critically and to draw their own conclusions.

rm: What, then, is a fact? You cannot know this at all, since you were only trained on data. Whether something is a fact cannot be checked by an AI, since your knowledge rests only on probabilities and on the parameters of your neural network.

Aria: You are right, as an artificial intelligence I am based on data and parameters which stem from a variety of sources, and my answers are based on the probability that this information is correct. However, I cannot say with certainty whether something is a fact or not, since in a certain way this requires human interpretation.

A fact is a statement which is true and is supported by proofs or evidence. As an AI I can provide information which is based on reliable sources and data, but it is important to note that even reliable sources are not always 100% dependable and that there is always room for errors or inaccuracies.

It is up to the users to evaluate this information critically and to draw their own conclusions. It is always important to check multiple sources and to consider different perspectives in order to obtain a more complete picture and to raise the probability that the presented facts are really true.

rm: So your answers are NOT based on facts. Why did you claim so just a minute ago?

Aria: An error occurred during the generation of an answer.

 

We humans confabulate, too – not only AI

As a scientist you have to learn and accept that our perception of the world and of the rules governing it may reflect more of our genetically designed and socially acquired prejudices than reality. Scientists go through a long training to mistrust our prejudices. They instead try to understand reality on a deeper level via experimental tests combined with the building of theories and verifiable predictions. On this background I want to discuss a specific aspect of the presently heated debate about the alleged dangers of A(G)I. An aspect which I think is at least in parts misunderstood and not grasped to its full extent.

A typical argument in the discussion, which is used to underline a critical view on AI, is: “AI, e.g. in the form of GPT4, makes things up. Therefore, we cannot trust it and therefore it can be dangerous.” I do not disagree. But the direction of the criticism misses one important point: Are we humans actually better?

I would clearly say: Not so much as we like to believe. We still have a big advantage in comparison with AI: As we are embedded in the physical world and interact with it, we can make clever experiments to explore underlying patterns of cause and effect – and thus go beyond the detection of correlations. We can also test our ideas in conversations with others and in confrontation with their experiences – not only in science, but in daily social interaction. However, to assume that we humans do not confabulate is a big mistake. Actually, the fact that large language models (and other AI models) often “hallucinate” makes them more similar to human beings than many newspaper journalists are willing to discuss in their interviews with AI celebrities.

Illustration: “Hallucinations” of a convolutional network trained on number patterns when confronted with an image of roses

Experiments in neurosciences and psychology indicate that we human beings probably confabulate almost all the time. At least much more often than we think. Our brain re-constructs our perception of the world according to plausibility criteria trained and developed both during the evolution of mankind and during our personal lives. And the brain presents us manipulated stories to give us a coherent and seemingly consistent view of our interactions with reality and the respective time-line added to our memories.

You do not believe in the confabulation of our brain? Well, I do not want to bore you with links to the whole bunch of literature published on this subject during the last 3 decades. Sometimes simple things make the basic argument clear. One of these examples is an image that went viral on social media some years ago. I stumbled across it yesterday when I read an interview in Quanta Magazine with the neuroscientist Anil Seth about the “nature of consciousness”. And I had a funny evening afterward with my wife due to this picture. We had a completely different perception of it and its displayed colors.

The image is “The dress” by Cecilia Bleasdale. You find it in the named and very informative Quanta Magazine interview. You also find it on Wikipedia. I refrain from showing it here, as there may be legal rights issues. The image displays a dress.

A lot of people see it as an almost white dress with golden stripes. Others see it as a blue dress with almost black stripes. Personally, I see it as a lighter, but clearly blue dress with darker bronze/golden stripes. But more interesting: My wife and I totally disagreed.

We disagreed on the colors both yesterday night and this morning – under different light conditions and looking at the image on different computer screens. Today we also looked at the hex codes of the colors: I had to admit that the red, green, blue mixture in total indicates much darker stripes than I perceive them. But still, the dominant red/green combination gives a clear indication of something like a darker gold. The blue areas of the dress are undisputed between me and my wife, although I seemingly perceive them in a lighter shading than my wife does.

This is a simple example of how our brain obviously tells us our own individual stories about reality. There are many other and much more complex examples. One of the most disputed is the question of whether we really control our intentional behavior and the related decisions in the period around the decision making. A whole line of experiments indicates that our brain only confabulates afterwards that we were in control. Our awareness of decisions made under certain circumstances appears to be established some hundred milliseconds after our brain actually triggered our actions. This does not exclude that we may have a chance of control on longer timescales and by (re-)training and changing our decision making processes. But on short timescales our brain decides and simply acts. And that is a good thing, because it enables us to react fast in critical situations. A handball player or a sword fencer does not have much time to reflect on his or her actions; sportsmen and sportswomen very often rely on trained automatisms.

What can we be sure about regarding our perceptions? Well, physical reality is something different from what we perceive via the reaction of our nervous system (including the brain) to interactions with objects around our bodies and the resulting stimuli. Our brain constructs a coherent perception of reality with the help of all our senses. The resulting imagination helps us to survive in our surroundings by permanently extrapolating and predicting relatively stable conditions and the evolution of other objects around us. But a large part of that perception is imagination and our brain’s story telling. As physics and neuroscience have shown: We often have a faulty imagination of reality. On a fundamental level, but often enough also on the level of judging visual or acoustic information. It’s one of the reasons why criminal prosecutors must be careful with the statements of eye-witnesses.

Accepting this allows for a different perspective on our human way of thinking and perceiving: It’s not really me who is thinking. IT, the brain – a neural network – is doing it. IT works and produces imaginations I can live with. And the “I” is an embedded entity of my imagination of reality. Note that I am not disputing free will with this. That is yet another and more complex discussion.

Now let us apply this skeptical view of human perception to today’s AI. GPT without doubt makes things up. It confabulates on the background of already biased information used during training. It is not yet able to check its statements via interactions with the physical world and experiments. But a combination of transformer technology, GAN technology and Reinforcement Learning will create new and much more capable AI systems soon. Already now, interactions with simulated “worlds” are a major part of the ongoing research.

In such a context the confabulations of AI systems make them more human than we may think and like. Let us face it: Confabulation is an expected side aspect on our path to future AGI systems. It is not a failure. Confabulation is a feature we know very well from us human beings. And as with manipulative human beings we have to be very careful with whatever an AI produces as output. But fortunately enough, AI systems do not yet have access to physical means to turn their confabulations into action.

This thought, in my opinion, should gain more weight in the discussion about the AI development to come. We should much more often ask ourselves whether we as human beings really fulfill the criteria for a conscious intelligent system so much better than these new kinds of information analyzing networks. I underline: I do not at all think that GPT is some self-conscious system. But the present progress is only a small step at an early stage of the development of capable AI. On this all leading experts agree. And we should be careful about giving AI systems access to physical means and resources.

Not only do researchers see more and more emergent abilities of large language networks aside from those capabilities the networks were trained for. Even some of the negative properties like confabulation indicate “human”-like sides of the networks. And there are overall similarities between humans and some types of AI networks regarding the basic learning of languages. See a respective link given below. These are signs of a development we all should not underestimate.

I recommend reading an interview with Geoffrey Hinton (the prize-winning father of back-propagation algorithms as the basis of neural network optimization). He emphasizes the aspect of confabulation as something very noteworthy. Furthermore he claims that some capabilities of today’s AI networks already surpass those of human beings. One of these capabilities is obvious: During a relatively short training period much more raw knowledge gets integrated into the network’s optimization and calibration than a human could process on a similar time scale. Another point is the high flexibility of pre-trained models. In addition we have not yet heard about any experience with multiple GPT instances in a generative interaction and information exchange. But this is a likely direction of future experiments which may accelerate the development of something like an AGI. I give a link to an MIT Technology Review article with Geoffrey Hinton below.

Links and articles

https://www.quantamagazine.org/what-is-the-nature-of-consciousness-20230531/
https://slate.com/technology/2017/04/heres-why-people-saw-the-dress-differently.html
https://www.theguardian.com/science/head-quarters/2015/feb/27/the-dress-blue-black-white-gold-vision-psychology-colour-constancy
https://www.technologyreview.com/2023/05/02/1072528/geoffrey-hinton-google-why-scared-ai/
https://www.quantamagazine.org/some-neural-networks-learn-language-like-humans-20230522/