We humans confabulate, too – not only AI

As a scientist you have to learn and accept that our perception of the world and of the rules governing it may reflect more of our genetically designed and socially acquired prejudices than reality. Scientists go through a long training to mistrust such prejudices. Instead they try to understand reality on a deeper level via experimental tests combined with the building of theories and verifiable predictions. Against this background I want to discuss a specific aspect of the presently heated debate about the alleged dangers of A(G)I. An aspect which I think is at least partly misunderstood and not grasped to its full extent.

A typical argument in the discussion, used to underline a critical view of AI, is: “AI, e.g. in the form of GPT-4, makes things up. Therefore, we cannot trust it, and therefore it can be dangerous.” I do not disagree. But the direction of the criticism misses one important point: Are we humans actually better?

I would clearly say: Not as much as we like to believe. We still have a big advantage in comparison with AI: As we are embedded into the physical world and interact with it, we can make clever experiments to explore underlying patterns of cause and effect – and thus go beyond the detection of mere correlations. We can also test our ideas in conversations with others and in confrontation with their experiences. Not only in science, but in daily social interaction. However, to assume that we humans do not confabulate is a big mistake. Actually, the fact that large language models (and other AI models) often “hallucinate” makes them more similar to human beings than many newspaper journalists are willing to admit in their interviews with AI celebrities.

Illustration: “Hallucinations” of a convolutional network trained on number patterns when confronted with an image of roses

Experiments in neuroscience and psychology indicate that we human beings probably confabulate almost all the time. At least much more often than we think. Our brain re-constructs our perception of the world according to plausibility criteria trained and developed both during the evolution of mankind and during our personal lives. And the brain presents us with manipulated stories to give us a coherent and seemingly consistent view of our interactions with reality, with a respective time-line added to our memories.

You do not believe that our brain confabulates? Well, I do not want to bore you with links to the whole body of literature published on this subject during the last three decades. Sometimes simple things make the basic argument clear. One of these examples is an image that went viral on social media some years ago. I stumbled across it yesterday when I read an interview in Quanta Magazine with the neuroscientist Anil Seth about the “nature of consciousness”. And I had a funny evening with my wife afterward due to this picture: We had a completely different perception of it and its displayed colors.

The image is “The dress” by Cecilia Bleasdale. You find it in the named and very informative Quanta Magazine interview. You also find it on Wikipedia. I refrain from showing it here, as there may be legal rights issues. The image displays a dress.

A lot of people see it as an almost white dress with golden stripes. Others see it as a blue dress with almost black stripes. Personally, I see it as a lighter, but clearly blue dress with darker bronze/golden stripes. But more interesting: My wife and I totally disagreed.

We disagreed on the colors both yesterday night and this morning – under different light conditions and looking at the image on different computer screens. Today we also looked at the hex codes of the colors: I had to admit that the red/green/blue mixture in total indicates much darker stripes than I perceive. But still, the dominant red/green combination gives a clear indication of something like a darker gold. The blue areas of the dress are undisputed between my wife and me, although I seemingly perceive them in a lighter shade than she does.

This is a simple example of how our brain obviously tells us our own individual stories about reality. There are many other and much more complex examples. One of the most disputed is the question of whether we really control our intentional behavior and related decisions around the moment of decision-making. A whole line of experiments indicates that our brain only confabulates afterward that we were in control. Our awareness of decisions made under certain circumstances appears to be established some hundred milliseconds after our brain actually triggered our actions. This does not exclude that we may have a chance of control on longer timescales and by (re-)training and changing our decision-making processes. But on short timescales our brain decides and simply acts. And that is a good thing, because it enables us to react fast in critical situations. A handball player or a sword fencer does not have much time to reflect on his or her actions; athletes very often rely on trained automatisms.

What can we be sure about regarding our perceptions? Well, physical reality is something different from what we perceive via the reaction of our nervous system (including the brain) to interactions with objects around our bodies and the resulting stimuli. Our brain constructs a coherent perception of reality with the help of all our senses. The resulting imagination helps us to survive in our surroundings by permanently extrapolating and predicting relatively stable conditions and the evolution of other objects around us. But a large part of that perception is imagination and our brain's story-telling. As physics and neuroscience have shown: We often have a faulty imagination of reality. On a fundamental level, but often enough also on the level of judging visual or acoustic information. It is one of the reasons why criminal prosecutors must be careful with the statements of eye-witnesses.

Accepting this allows for a different perspective on our human way of thinking and perceiving: It is not really me who is thinking. IT, the brain – a neural network – is doing it. IT works and produces imaginations I can live with. And the “I” is an embedded entity of my imagination of reality. Note that I am not disputing free will with this. That is yet another and more complex discussion.

Now let us apply this skeptical view of human perception to today's AI. GPT without doubt makes things up. It confabulates on the basis of already biased information used during training. It is not yet able to check its statements via interactions with the physical world and experiments. But a combination of transformer technology, GAN technology and Reinforcement Learning will create new and much more capable AI systems soon. Already now, interactions with simulated “worlds” are a major part of the ongoing research.

In such a context the confabulations of AI systems make them more human than we may think and like. Let us face it: Confabulation is an expected side effect on our path to future AGI systems. It is not a failure. Confabulation is a feature we know very well from human beings. And as with manipulative human beings, we have to be very careful with whatever an AI produces as output. But fortunately, AI systems do not yet have access to physical means to turn their confabulations into action.

This thought, in my opinion, should gain more weight in the discussion about the AI development to come. We should ask ourselves much more often whether we as human beings really fulfill the criteria for a conscious intelligent system so much better than these new kinds of information-analyzing networks. I underline: I do not at all think that GPT is a self-conscious system. But the present progress is only a small step at an early stage of the development of capable AI. On this, all leading experts agree. And we should be careful about giving AI systems access to physical means and resources.

Not only do researchers see more and more emergent abilities of large language networks beyond those capabilities the networks were trained for. Even some of the negative properties, such as confabulation, indicate “human”-like sides of these networks. And there are overall similarities between humans and some types of AI networks regarding the basic learning of languages. See the respective link given below. These are signs of a development we all should not underestimate.

I recommend reading an interview with Geoffrey Hinton (the prize-winning co-developer of the back-propagation algorithm as a basis of neural network optimization). He emphasizes the aspect of confabulation as something very noteworthy. Furthermore, he claims that some capabilities of today's AI networks already surpass those of human beings. One of these capabilities is obvious: During a relatively short training period, much more raw knowledge gets integrated into the network's optimization and calibration than a human could process on a similar time scale. Another point is the high flexibility of pre-trained models. In addition, we have not yet heard about any experience with multiple GPT instances in generative interaction and information exchange. But this is a likely direction of future experiments, which may accelerate the development of something like an AGI. I give a link to an MIT Technology Review article on Geoffrey Hinton below.

Links and articles

https://www.quantamagazine.org/what-is-the-nature-of-consciousness-20230531/
https://slate.com/technology/2017/04/heres-why-people-saw-the-dress-differently.html
https://www.theguardian.com/science/head-quarters/2015/feb/27/the-dress-blue-black-white-gold-vision-psychology-colour-constancy
https://www.technologyreview.com/2023/05/02/1072528/geoffrey-hinton-google-why-scared-ai/
https://www.quantamagazine.org/some-neural-networks-learn-language-like-humans-20230522/

 

Opensuse Leap 15.4 – Problems with Optimus and prime-select after updates of SW packages

Presently, I work a lot on an old laptop which has a so-called Optimus combination of a dedicated Nvidia GPU and an Intel GPU integrated with the main CPU. “prime-select” is a tool which Opensuse includes with Leap 15.4 to provide an efficient way of controlling which GPU shall be used. As well as prime-select has worked for me on Leap 15.3, and for some time also on Leap 15.4, recent updates of a variety of SW packages led to trouble.

I had the Nvidia card active before the SW updates. After a cold restart, the system no longer started the SDDM display manager on the default systemd target. This happened even when the updates did not directly affect the kernel or the Nvidia kernel modules.

The problem always had to do with bbswitch turning off the Nvidia device when the system switched to the default graphical target. And with a turned-off Nvidia graphics device, the Nvidia drivers cannot be loaded.

So some SW updates led to a change of the configuration prime-select had set up before the updates. The stupid thing is that it is not quite so simple to get things back to work. Trying to use “init 3” to get to a console interface on a non-graphical target and then issuing “prime-select nvidia” plus a subsequent “init 5” on the command line does not work: You do not change the wrong bbswitch actions that way. You can also turn bbswitch off by “tee /proc/acpi/bbswitch <<< OFF” and then load the Nvidia driver successfully. But trying to switch to the standard graphical target afterward invokes bbswitch again in the wrong way. It is a bit of a mess. The following steps seem to work to get back to normal operation again:

  • Step 1: Use “init 3” on a console terminal.
  • Step 2: Use the command “prime-select intel”.
  • Step 3: Restart your system. It should now boot into the graphical target based on Intel's i915 GPU driver.
  • Step 4: Ignore any information from a prime-select icon. It shows you the plainly wrong information that you are using Nvidia.
  • Step 5: Log in as root on a root terminal window. Switch bbswitch off (e.g. by the command given above). Load the Nvidia module by “modprobe nvidia”. Check via lsmod that it is successfully loaded.
  • Step 6: Type in “prime-select nvidia”. (Steps 5 and 6 are sketched as a small script after this list.)
  • Step 7: Log out from your graphical interface.
  • Step 8: Check that SDDM or whatever display manager you use is started without bbswitch shutting down the Nvidia card. Log in with the Nvidia card active.
  • Step 9: Check that the Nvidia driver is still loaded on a root terminal window. Then issue “mkinitrd” and restart your Leap 15.4 system.
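For convenience, steps 5 and 6 can be scripted. The following is a minimal Python sketch of that part only, assuming it runs as root on a Leap 15.4 system with bbswitch and the Nvidia drivers installed; the function name is mine, the commands are exactly the ones quoted above.

```python
import subprocess

def reactivate_nvidia():
    """Steps 5 and 6 from the list above: turn bbswitch off, load the
    Nvidia kernel module, verify it, then switch via prime-select."""
    # Turn the Nvidia card back on (equivalent to: tee /proc/acpi/bbswitch <<< OFF)
    with open("/proc/acpi/bbswitch", "w") as f:
        f.write("OFF")
    # Load the Nvidia kernel module
    subprocess.run(["modprobe", "nvidia"], check=True)
    # Check via lsmod that the module was successfully loaded
    lsmod = subprocess.run(["lsmod"], capture_output=True, text=True).stdout
    if "nvidia" not in lsmod:
        raise RuntimeError("Nvidia kernel module did not load")
    # Tell prime-select to use the Nvidia GPU
    subprocess.run(["prime-select", "nvidia"], check=True)
```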

Afterward, issuing the “prime-select intel” or “prime-select nvidia” commands at the command line of a root terminal window, logging out of the graphical desktop and logging in again via the restarted graphical display manager switches correctly between the cards.

However, the prime-select applet gives you wrong information when the Intel card is active. And it does not give you the chance to switch back to the Nvidia card again. It is stupid, but no major problem as long as the basic prime-select command does its job on the command line.

I hope this helps people who have to work with Opensuse on an Optimus system.

 

Autoencoders and latent space fragmentation – X – a method to create suitable latent vectors for the generation of human face images

My present post series explores options to use a standard convolutional Autoencoder [AE] for the creation of images with human faces. The face generation should be based on random input to the AE's Decoder. On our quest for a suitable method we have meanwhile learned a lot about other aspects of Autoencoders, vector distributions in multi-dimensional latent spaces and generative methods for our special case:

  • Methods to create statistical latent vectors [z-vectors] as input for the AE’s Decoder must be chosen carefully. Among other things: It is difficult to create a bunch of random vectors which cover wider areas in the vastness of a multidimensional space. So the z-vector creation must be adjusted to specific requirements.
  • After having been trained with CelebA images a convolutional AE fills a limited and coherent region in the latent space with z-points for the training images. This latent space region appears to be critical for successful image creation: Statistically generated z-vectors should point to this region. The core of the z-point distribution gets filled relatively densely.
  • A convolutional AE maps human face images onto an approximate multivariate normal distribution. This gives the inner core of the z-point distribution the structure of a multidimensional ellipsoid. The projections of this ellipsoid onto 2-dimensional coordinate planes show characteristic nested elliptic contour lines.
  • As the main axes of these ellipses are inclined at different angles towards the axes of the chosen coordinate planes, we concluded that linear correlations mark average dependencies between the z-vector components. Limiting conditions imposed by these correlations must also be fulfilled by z-vectors used as the Decoder's input.

See previous posts in this series for more details. In particular, the last 2 posts

Autoencoders and latent space fragmentation – IX – PCA transformation of the z-point distribution for CelebA

Autoencoders and latent space fragmentation – VIII – approximation of the latent vector distribution by a multivariate normal distribution and ellipses

have shown that the density distribution for the z-points really exhibits elliptic contour lines in the original coordinate system of the latent space and (!) in the target coordinate system of a PCA transformation.

In this post we use our gathered knowledge: I present a first simple method to generate z-vectors which point to the latent space region filled by z-points for CelebA images. These z-vectors will fulfill the general and limiting elliptic conditions for their components.

Decomposing the full problem of latent vector generation into a sequence of 2-dimensional problems

The nice thing about multivariate Gaussian distributions with linear correlations between the vector components is the following: We can reduce the problem of choosing proper component values to a series of 2-dimensional restrictions. Firstly, we can use characteristic properties of the Gaussian distribution for each component. Secondly, we can use confidence ellipses in 2-dimensional coordinate planes to restrict the component values to allowed intervals.

Ellipses are easiest to handle when their axes are aligned with the axes of the coordinate system in which we describe them. So, let us assume that we know an affine transformation T to a new coordinate system which also has orthogonal axes and supports the following special transformation properties for a multivariate normal density distribution:

  1. T maps nested elliptic contour lines of the multidimensional density distribution and in particular confidence ellipses for component pairs in the original coordinate system to nested elliptic contours and confidence ellipses in the new coordinate system.
  2. T aligns the centers of the transformed ellipses with the origin of the new coordinate system.
  3. T aligns the main axes of the mapped ellipses with the axes of the new coordinate system.
  4. T is reversible.

How could we then use the transformed data for vector-creation?

In the new coordinate system, a contour ellipse in a chosen coordinate plane for the axes-indices (i, j) may have main diameters of size

d1 = 2 * a    and    d2 = 2 * b.

We then can first select a random v_i value from the range [-a * fact, a * fact]:

-fact * a    <    v_i    <    fact * a

Here fact is a suitable factor; it defines a confidence level in the new coordinate system. With the value of v_i fixed, and b being the half-diameter in the orthogonal direction, the correlation condition for the z-point distribution says that the v_j value must fall into an interval [-c, c] defined by:

-c    <    v_j    <    c,
with c = b * fact * sqrt(1 - v_i**2 / (fact * a)**2)

But within these limits we can again choose the v_j value freely. Below I use a simple random function with a constant probability density to pick a value.
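As a minimal Python sketch of these two steps for a single component pair (the names a, b and fact correspond to the half-diameters and the confidence factor introduced above; everything else is an illustrative assumption of mine):

```python
import numpy as np

rng = np.random.default_rng()

def sample_pair_in_ellipse(a, b, fact):
    """Draw (v_i, v_j) inside the confidence ellipse with half-axes
    fact*a and fact*b, using constant probability densities."""
    # Pick v_i freely within its allowed interval
    v_i = rng.uniform(-fact * a, fact * a)
    # The ellipse condition restricts v_j to [-c, c]
    c = fact * b * np.sqrt(1.0 - v_i**2 / (fact * a)**2)
    v_j = rng.uniform(-c, c)
    return v_i, v_j
```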

However: It would not be enough to restrict the coordinates to the conditions of just one ellipse! The components of the created vectors must fulfill the elliptic conditions for all possible pairs of vector components in parallel. I.e. we may need to adapt the v_j values gained from the analysis of a first 2D ellipse to further conditions of other ellipses and component pairs. This can be achieved by an iteration. For z_dim = 256 this involves a total of 32640 checks and possible value adaptations against each and all of the allowed value ranges.

In addition: The order in which the component pairs and their conditions are investigated must be randomized to get truly statistical vector distributions.

Eventually the resulting vector components must be re-transformed into the original coordinate system of the latent space.
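A minimal sketch of this iterative adaptation (applied before the re-transformation), under the assumption that we know a half-axis value ax[i] (fact times the standard deviation) for every transformed component; the function name and structure are mine, not taken from my real code:

```python
import random
import numpy as np

def adapt_to_ellipses(v, ax, max_iter=20):
    """Re-draw vector components until every pair (i, j) fulfills the
    elliptic condition v_i**2/ax_i**2 + v_j**2/ax_j**2 <= 1.
    Pairs are visited in random order (see the remark above)."""
    n = len(v)  # for z_dim = 256 this gives 32640 pairs
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    for _ in range(max_iter):
        random.shuffle(pairs)
        violations = 0
        for i, j in pairs:
            if v[i]**2 / ax[i]**2 + v[j]**2 / ax[j]**2 > 1.0:
                # Shrink v_j into the interval allowed by the current v_i
                c = ax[j] * np.sqrt(max(0.0, 1.0 - v[i]**2 / ax[i]**2))
                v[j] = random.uniform(-c, c)
                violations += 1
        if violations == 0:
            break
    return v
```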

The ellipse for the “core’s boundary” in the original coordinate system will be defined by the chosen confidence level of the ellipsoidal normal distribution. We saw already that a confidence level of σ = 2.0 defines the transition to outer regions of the z-point density distribution quite well.

This all sounds manageable with relatively simple Python programs. But: Do we know a proper transformation T? Yes, we do: A PCA transformation of the z-point density distribution has all the properties discussed above.

Using half maximum values after a PCA transformation of the z-point distribution

The last post proved that a PCA transformation maps ellipses onto ellipses for component pairs in the transformed PCA coordinate system. The advantage of the ellipses there is that their main axes are on average well aligned with the orthogonal PCA coordinate axes. Gaussians for the number density distribution per component are mapped to Gaussians for the new components in the transformed coordinate system. So, the basic idea for a proper z-vector generation is:

  1. Take the multivariate normal z-point distribution for the training images in the AE’s latent space.
  2. Apply a PCA analysis to diagonalize the correlation matrix and transform the z-vector components to the PCA coordinate system.
  3. Use the ellipses in coordinate planes of the PCA coordinate system to create random z-vector components fulfilling all required conditions there.
  4. Re-transform the resulting z-vector components into the original coordinate system of the latent space.
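Condensed into a hedged Python sketch with sklearn's PCA as the transformation T (z_points is assumed to be the (N, 256) array of latent vectors for the CelebA training images; the helper names and default parameters are mine):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng()

def generate_z_vectors(z_points, n_gen=20, n_pca=100, fact=0.7):
    # Steps 1 and 2: PCA transformation of the z-point distribution
    pca = PCA(n_components=n_pca)
    t_points = pca.fit_transform(z_points)   # shape (N, n_pca)

    # Step 3: half-axes from the per-component standard deviations,
    # then random components within the allowed intervals
    ax = fact * t_points.std(axis=0)
    v = rng.uniform(-ax, ax, size=(n_gen, n_pca))
    # ... here the pairwise elliptic conditions would be enforced
    # by the iterative adaptation sketched further above ...

    # Step 4: back-transformation to the original latent space
    return pca.inverse_transform(v)
```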

Point 3 of our method is covered by a numerical analysis of the Gaussians in the PCA coordinate system. We determine the half-width numerically by analyzing the density distribution with the help of sampling intervals. This simple method has resolution limits related to the size of the sampling interval. This has consequences for PCA components with a small standard deviation. We already saw in the last posts that such distributions appear for higher PCA components at the lower end of the explained variance.
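A hedged sketch of such a half-width estimate for one transformed component (t_comp is assumed to be a 1D numpy array of the component's values; the bin width directly sets the resolution limit just mentioned):

```python
import numpy as np

def half_width_at_half_maximum(t_comp, n_bins=200):
    """Estimate the half-width at half maximum of a component's number
    density distribution from a histogram with n_bins sampling intervals."""
    counts, edges = np.histogram(t_comp, bins=n_bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    half_max = counts.max() / 2.0
    above = centers[counts >= half_max]
    # For very narrow Gaussians only one or two bins may lie above the
    # half maximum; this is the resolution problem discussed in the text
    return 0.5 * (above[-1] - above[0])
```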

Does the suggested method work?

The convolutional AE we work with was defined in previous posts, with 4 Conv2D layers in the Encoder and 4 Conv2DTranspose layers in the Decoder. The number of latent space dimensions was z_dim = 256. The AE network was trained on CelebA images. I do not want to bore you with details of the code for the creation of z-vectors consistent with the resulting elliptic conditions. It is all standard. The PCA transformation can e.g. be taken from the sklearn package.

I have applied a constant probability density to choose random values within the allowed ranges for the component values of the aspired z-vectors in the PCA coordinate system. For the plots below I have used between 50 and 105 of the most important PCA components (out of 256). The plots include confidence ellipses at a level of σ = 2.2. I derived the confidence ellipses by directly evaluating the standard deviations of the transformed distribution data in all coordinate directions.

The first plot shows you such an ellipse for the coordinate plane corresponding to the first two, most important PCA components. The orange points mark 20 z-points defined by 20 randomly created z-vectors fulfilling all elliptic conditions. The plot contains 120,000 z-points for images out of the 170,000 CelebA pictures used during training.

Generated statistical vectors in the PCA coordinate system

For elliptic contour lines see the post preceding this one in the series. The next plot shows the same 20 generated z-vectors for other component combinations among the first 20 of the most important PCA components. The plots contain a selection of 60,000 z-points.

The outer z-points do not always indicate that we have elliptic contours in the denser core of the displayed 2-dimensional distributions. But see the last post for proofs that the inner core inside the red ellipse really displays elliptic contours. You see that all random vectors lie within the 2-σ ellipses.

The next plot shows the generated z-vectors in the original coordinate system of the latent space. The component values were back-transformed from the PCA-system to the original coordinate system.

Generated statistical z-vectors after an inverse PCA transformation to the original coordinate system of the latent space

We get similar plots for other component pairs. And of course for other generated vectors.

Generated statistical z-vectors in the PCA coordinate system

Generated statistical z-vectors after an inverse PCA transformation to the original coordinate system of the latent space

Technically we have obviously achieved what we wanted: Our generated statistical vectors are distributed within the core of our multidimensional ellipsoid.

Note that this method fortunately works even when we use only a limited number of the PCA components. This is due to intricate properties of a PCA transformation which guarantee that a back-transformation puts the resulting points close to the original ones even when we omit less important PCA components. I cannot discuss the mathematical details in this blog; see the scientific literature for this. An introduction is e.g. provided by https://arxiv.org/pdf/1404.1100.pdf.
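A quick way to convince yourself of this property is to compare original z-points with their back-transformed images for a reduced number of components; a small sketch (the function name is mine):

```python
import numpy as np
from sklearn.decomposition import PCA

def reconstruction_error(z_points, n_keep):
    """Mean Euclidean distance between the original z-points and their
    reconstructions when only n_keep PCA components are retained."""
    pca = PCA(n_components=n_keep).fit(z_points)
    z_rec = pca.inverse_transform(pca.transform(z_points))
    return np.mean(np.linalg.norm(z_points - z_rec, axis=1))
```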

For me this property of the PCA transformation was helpful when I ran into the resolution problem for a proper half-width of the Gaussians. Taking all 256 components led to errors, as the elliptic conditions for very narrow Gaussians were not properly defined and some of the created vectors left the allowed value ranges.

Resulting face images

Let us look at some results. First I want to remind you from where we started:

Failed trials with improper random z-vectors based on constant probability densities

A simple random generator used in the beginning was totally inapt to feed the AE's Decoder with proper statistical z-vectors. And now – look at the following plots. They were produced for a varying number of PCA components between 50 and 120, with 100,000 statistically selected z-points within a 3-σ level for the PCA transformation, and various factors 0.6 < fact < 0.8 applied to a half-width corresponding to a confidence level of 2.35 σ:

In some cases – for a higher number of PCA components – we even see smaller details of the face images and a reasonable transition to some kind of hairdo. Please remember that z_dim = 256 is a pretty low number of latent space dimensions to cover the encoding of face details. And the celebrities covered by CelebA use make-up …

In case you think the above result is not noteworthy: Please remember that we are talking about a simple standard Autoencoder, not a Variational Autoencoder and not a transformer-based Autoencoder. No fancy additions to cost functions or special layers. And whoever has read the very instructive book by D. Foster on “Generative Deep Learning” (1st edition, O'Reilly) may compare his images with mine. And I have used a lower resolution of the original images than D. Foster. Just to motivate people to look a bit deeper into the properties of data distributions in latent spaces.

Conclusion and outlook

We have come a lot closer to our objective of using a standard minimal Autoencoder for generative purposes. On our way, we got a much deeper understanding of the vector-distribution a trained AE creates in its latent space for human face images.

The method presented in this post to create reasonable statistical z-vectors still has its limits, and there is a lot of open space for improvements. Attentive readers may e.g. ask: Why did I not use confidence ellipses directly? And why not the ellipses found in the original coordinate system of the latent space? And what about micro-correlations? And are there clusters for certain properties like hair color, sex, smiling, etc. in the multivariate z-point distribution in the AE's latent space?

I will discuss these topics in further posts. In the meantime keep in mind that the basic point for turning a standard Autoencoder into a generative tool is to understand how it fills its latent space.

Note also that I myself have speculated in other posts of this blog that failures of using standard AEs for generative purposes may have their ultimate reason in the micro-structure of the z-point distribution. The present results render these previous ideas of mine plainly wrong.

Links to previous posts of this series

Autoencoders and latent space fragmentation – IX – PCA transformation of the z-point distribution for CelebA

Autoencoders and latent space fragmentation – VIII – approximation of the latent vector distribution by a multivariate normal distribution and ellipses

Autoencoders and latent space fragmentation – VII – face images from statistical z-points within the latent space region of CelebA

Autoencoders and latent space fragmentation – VI – image creation from z-points along paths in selected coordinate planes of the latent space

Autoencoders and latent space fragmentation – V – reconstruction of human face images from simple statistical z-point-distributions?

Autoencoders and latent space fragmentation – IV – CelebA and statistical vector distributions in the surroundings of the latent space origin

Autoencoders and latent space fragmentation – III – correlations of latent vector components

Autoencoders and latent space fragmentation – II – number distributions of latent vector components

Autoencoders and latent space fragmentation – I – Encoder, Decoder, latent space

 

And before we forget it: Besides the Putler in the east there is also an extremist right-wing, semi-fascistic party in Germany at a record-high support level of 18% in the population. This is a party which wants to stop all sanctions against the Russian aggressor in the ongoing war in Ukraine. Do you see the pattern behind this? This party is presently gaining more supporters than the Social Democrats, who lead the government. So, there is more at stake in Europe at present than the war in Ukraine. We need to defend our democracies with all the means of democracies. And it is time to ask for more decisive legal action against a party which is already under observation by the German domestic intelligence service.