Orthogonal projections of MVNs and of their ellipsoidal contour surfaces

Some readers may remember a post series I wrote on this blog about the reconstruction of human faces with a CNN-based Autoencoder. There I could show that the data distribution in the latent space of the Autoencoder has the form of the core of a Multivariate Normal Distribution [MVN].

This did not come as much of a surprise, as there are good reasons to assume that facial features, on average and in particular across rather symmetric celebrity faces, follow Gaussian distributions. Hundreds of encoded features together then form an MVN in a latent space of hundreds of dimensions. The Encoder part of a CNN-based Autoencoder is a pattern extraction machine – and there is no simpler pattern in multiple dimensions than an (off-center) MVN! An MVN's multidimensional, concentric contour surfaces are ellipsoids, which have an algebraic description in the form of quadratic forms. In the case of an MVN, the quadratic form is defined by the inverse of the covariance matrix.
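In standard notation (mean vector μ, covariance matrix Σ), the contour surfaces are the level sets of this quadratic form:

```latex
\{\, \mathbf{x} \in \mathbb{R}^{n} \;:\; (\mathbf{x}-\boldsymbol{\mu})^{T}\,\boldsymbol{\Sigma}^{-1}\,(\mathbf{x}-\boldsymbol{\mu}) = c \,\}, \qquad c > 0 .
```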

During that series, I made extensive use of the fact that orthogonal projections of a multidimensional MVN onto coordinate planes result in 2-dimensional, bivariate MVNs. The elements of the (2×2)-covariance matrices of the various projected distributions can simply be picked from the (n×n) covariance matrix of the original MVN – by a simple selection of the respective rows and columns. I had taken this procedure for granted, as it had been claimed in some publications. And it worked very well … See e.g.

and links therein to other posts. The projection of course also affects the (n-1)-dimensional, concentric ellipsoidal contour surfaces of an MVN and maps them onto (p-1)-dimensional contour hyper-surfaces of the projected p-dimensional MVNs – i.e. onto contour ellipses in the case of 2-dimensional target planes. For respective images see this post:

In recent weeks I looked a bit deeper into the mathematics of orthogonal projections of multidimensional ellipsoids onto sub-spaces of the ℝn. It came as a bit of a surprise to me that the math behind the projection of figures controlled by quadratic forms is relatively complicated. In the general case of a projection onto a p-dimensional sub-space, the quadratic form matrix for the ellipsoidal hull of the projection image is a so-called Schur complement of the original ellipsoid's quadratic form matrix.

Fortunately, the relation between the inverse matrices of the quadratic forms for the ellipsoids could be established in a way that is fully consistent with the mapping of covariance matrices of MVNs onto the related matrices of their projection images (see the numerical sketch below). However, and in contrast to other publications, I found that a solid proof requires some Linear Algebra around Schur complements.
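The consistency can easily be checked numerically. Below is a minimal NumPy sketch with an arbitrary, positive definite example matrix: it picks the (p×p) sub-covariance matrix for a projection onto the first p coordinate axes and compares its inverse with the Schur complement taken from the precision matrix, i.e. the quadratic form matrix of the contour ellipsoids.

```python
import numpy as np

rng = np.random.default_rng(42)
n, p = 6, 2                        # original dimension n, projection target dimension p

# An arbitrary symmetric, positive definite covariance matrix of an n-dim. MVN
A = rng.normal(size=(n, n))
Sigma = A @ A.T + n * np.eye(n)

# Quadratic form matrix of the MVN's contour ellipsoids: the precision matrix
Lam = np.linalg.inv(Sigma)

# Covariance picking rule: the projected MVN (first p coordinate axes)
# gets the corresponding (p x p) sub-matrix of Sigma
Sigma_p = Sigma[:p, :p]

# Schur complement of the lower-right block of Lam: the quadratic form matrix
# of the projected contour ellipse
L11, L12 = Lam[:p, :p], Lam[:p, p:]
L21, L22 = Lam[p:, :p], Lam[p:, p:]
Lam_proj = L11 - L12 @ np.linalg.solve(L22, L21)

# Consistency check: both routes must describe the same ellipse
print(np.allclose(Lam_proj, np.linalg.inv(Sigma_p)))   # -> True
```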

Readers interested in MVNs and their mathematical properties for statistical analyses, e.g. in Machine Learning contexts, may find detailed information in the following articles of mine:

Orthogonal projections of multidimensional ellipsoids

However, basic knowledge of Linear Algebra is required! The articles should also be of interest to physicists.
 

Keras 3/TF vs. PyTorch – small model performance tests on an Nvidia 4060 TI

There are many pros and cons regarding the choice of a Machine Learning [ML] framework for private studies on a Linux workstation. Two widely used frameworks are PyTorch and a Keras/Tensorflow combination. One aspect of productive work with ML models certainly is performance. And as I personally do not have TPUs or other advanced chips available, but just a consumer Nvidia 4060 TI graphics card, performance and optimal GPU usage are of major interest – even for the training of relatively small models.

With this post I just want to point out that the question of performance advantages of one framework over the other on a CUDA-controlled graphics card cannot be answered in a unique way. Even for small neural network [NN] models the performance may depend on a variety of relevant settings, among them JIT/XLA compilation and the chosen precision level of your training or inference runs.
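Just to indicate which knobs I mean, here is a minimal sketch of how a precision policy and XLA/JIT compilation are requested on the Keras 3 side; on the PyTorch side, torch.compile() and torch.autocast() play comparable roles. The tiny model and the synthetic data are only placeholders for timing experiments, not part of my actual test setup.

```python
import numpy as np
import keras

# Precision policy: "float32", "mixed_float16", ... - one of the settings that
# strongly influences throughput on a consumer GPU
keras.mixed_precision.set_global_policy("mixed_float16")

# A deliberately small model - only a placeholder for timing experiments
model = keras.Sequential([
    keras.Input(shape=(64,)),
    keras.layers.Dense(256, activation="relu"),
    keras.layers.Dense(10, activation="softmax", dtype="float32"),  # keep outputs in float32
])

# jit_compile=True requests XLA compilation of the train step
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              jit_compile=True)

# Synthetic data, just to have something to fit and time
x = np.random.rand(4096, 64).astype("float32")
y = np.random.randint(0, 10, size=(4096,))
model.fit(x, y, batch_size=256, epochs=3, verbose=1)
```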

Continue reading

Machine Learning on PCs – Use mixed precision and look out for super-convergence to save energy

People doing Machine Learning [ML] experiments on their own Linux PCs or laptops know that the numerical training runs put a heavy load on the graphics card and, as a direct consequence, consume a lot of energy. Especially in a hot summer like the one we are having in Germany right now, cooling your systems may become a problem. And as energy has a high price tag here, any method to reduce the load and/or power consumption is welcome.

But I think that caring about energy consumption is something which we as Linux and ML enthusiasts should keep in mind in general. Some big tech companies will probably not do it – as long as their money machinery works and some heads follow fantasies about building small nuclear power plants for their big AI data centers. But we Opensource people would like to see more AI- and ML-services independent of the monopolists and their infrastructure anyway – and not only for reasons of data and privacy protection.

However, as soon as we proclaim and work for a development that favors local and resource-optimized installations of AI and ML tools, both for private people and for companies, we have to care about a side effect: In parallel, we have to bring the energy consumption of these many local installations down substantially. Otherwise, centralized solutions may end up with a better energy efficiency than decentralized ones.

For me as a retired person in Germany the general financial pressure is high enough to enforce a careful use of my private resources. With this post I want to draw your attention to two points which may help you, too, to save energy during your ML experiments – in addition to standard measures like saving certain model states during training runs to get better starting points for new runs.
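To make the two points concrete, here is a minimal PyTorch sketch that combines both of them: a one-cycle learning rate schedule (the policy behind the "super-convergence" phenomenon, which can cut the number of required epochs considerably) and mixed precision training via autocast and gradient scaling. The model and data are purely synthetic placeholders; Keras offers analogous mechanisms via keras.mixed_precision and custom LR schedules.

```python
import torch
from torch import nn

# A tiny model and purely synthetic data - placeholders for a real training setup
model = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 10)).cuda()
x = torch.randn(4096, 64, device="cuda")
y = torch.randint(0, 10, (4096,), device="cuda")

optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

epochs, batch_size = 5, 256
steps_per_epoch = x.shape[0] // batch_size

# Point 1: one-cycle LR schedule - the policy behind "super-convergence"
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=0.5, epochs=epochs, steps_per_epoch=steps_per_epoch)

# Point 2: mixed precision - autocast for the forward pass,
# a gradient scaler to keep float16 backpropagation numerically stable
scaler = torch.cuda.amp.GradScaler()

for epoch in range(epochs):
    for i in range(steps_per_epoch):
        xb = x[i * batch_size:(i + 1) * batch_size]
        yb = y[i * batch_size:(i + 1) * batch_size]
        optimizer.zero_grad()
        with torch.autocast(device_type="cuda", dtype=torch.float16):
            loss = loss_fn(model(xb), yb)
        scaler.scale(loss).backward()
        scaler.step(optimizer)
        scaler.update()
        scheduler.step()   # the one-cycle schedule is stepped per batch
```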

Continue reading

Opensuse Leap 15.5 – installation of CUDA 12.3 for Machine Learning

Working with Machine Learning and Deep Neural Networks not only requires GPU drivers, but in the case of Nvidia GPUs also the installation of CUDA and cuDNN. This process is always a bit tricky, as additional environment variables have to be set for IPython-based JupyterLab or classic Jupyter Notebooks. On an Opensuse system one must in addition take care of the right settings in /etc/alternatives.

I have described the necessary steps in a post at "machine-learning.anracom.com".

I hope this helps people who want to use Leap 15.5 for Machine Learning with Nvidia GPUs, Keras/Tensorflow 2 and JupyterLab.
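As a quick sanity check after the installation, the following lines in a JupyterLab cell show whether Tensorflow actually sees the GPU and which CUDA/cuDNN versions the installed TF build expects (a minimal sketch, independent of the Leap-specific steps described in the referenced post):

```python
import tensorflow as tf

# Does Tensorflow see the Nvidia GPU at all?
print(tf.config.list_physical_devices("GPU"))

# CUDA and cuDNN versions the installed TF package was built against
build_info = tf.sysconfig.get_build_info()
print("CUDA:", build_info["cuda_version"], " cuDNN:", build_info["cudnn_version"])
```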

Important addendum 01/27/2024:
Although the combination of CUDA 12.3, cuDNN 8.9.7, Tensorflow 2.15 and Nvidia driver 545.29.06 works as far as AI models are concerned, there is another major problem:
Nvidia's driver 545.29.06 is buggy – at least on Leap 15.5 with KDE/Plasma and multiple screens. The bug affects Suspend-to-RAM: the suspend phase itself seems to work, and afterward the system also comes up with a seemingly proper KDE/Plasma interface on your screens.

However, the problems begin when you want to switch to another virtual console via Ctrl-Alt-Fx. You wait and wait and wait … The same happens when changing the run-level or systemd target, or when you want to shut the system down. This makes Suspend-to-RAM with driver 545.29.06 impossible to use.

Recommendation:
If you have a working older Nvidia driver (e.g. a stable 535 version), do not change to 545.29.06. Unfortunately, returning to an older driver version on a multi-screen Leap 15.5 system is a mess. The Nvidia community repository does not offer you a choice of driver versions. (Why, by the way?) Downloading an older proprietary driver from Nvidia and trying to install it on a console terminal (after having stopped X11 or Wayland) did not work in my case – the screens displaying the terminal changed their resolution and froze afterward. So, you may have to uninstall the present 545 driver completely, go back to a standard VGA setup and then try to install an older driver via Nvidia's install mechanism. As I said: It is a mess …

 

Machine Learning – recommendation of a publication about natural statistical patterns in object data

Last week I started preparing posts for my new blog on Machine Learning topics (see the blog-roll). During my studies I came across a scientific publication which covers an interesting topic for ML enthusiasts, namely the question of what kind of statistical distributions we may have to deal with when working with data of natural objects and their properties.

The reference is:
S. A. FRANK, 2009, “The common patterns of nature”, Journal of Evolutionary Biology, Wiley Online Library
Link to published article

I strongly recommend reading this publication.

It explains large-scale statistical patterns in nature as limiting distributions. Limiting distributions result from an aggregation of the results of numerous small-scale processes (neutral processes) which fulfill constraints on the preservation of certain pieces of information. Such processes damp out all other fluctuations during sampling. The general mathematical approach to limiting distributions is based on entropy maximization under constraints; the constraints are included mathematically via Lagrangian multipliers. Both are relatively familiar concepts. The author explains which patterns result from which basic neutral processes.
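To recall the basic mechanism in standard textbook form (the notation is mine, not the article's): if, besides normalization, only a finite variance is preserved as information, one maximizes the entropy functional under exactly these constraints, and the stationary point is the Gaussian.

```latex
\mathcal{L}[p] = -\int p(x)\,\ln p(x)\,dx
  \;+\; \lambda_0\!\left(\int p(x)\,dx - 1\right)
  \;+\; \lambda_1\!\left(\int x^2\,p(x)\,dx - \sigma^2\right)

\frac{\delta \mathcal{L}}{\delta p} = -\ln p(x) - 1 + \lambda_0 + \lambda_1 x^2 = 0
\;\;\Longrightarrow\;\;
p(x) \propto e^{\lambda_1 x^2}
\;\;\Longrightarrow\;\;
p(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-x^2/(2\sigma^2)}
```

Other constraints, e.g. a fixed mean value of x or of ln x, lead in the same way to exponential or power-law patterns.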

However, the article also discusses an intimate relation between aggregation and convolutions. The author furthermore presents a related and interesting analysis based on Fourier components and their respective damping. For me this part was eye-opening.

The central limit theorem is explained for cases in which a finite variance is preserved as the main piece of information. But the author shows that Gaussian patterns are not the only patterns we may directly or indirectly find in the data of natural objects. To get a solid basis from a spectral point of view, he extends his Fourier analysis to the occurrence of infinite variances and the consequences for other spectral moments. Besides explaining (truncated) power-law distributions, he discusses aspects of extreme value distributions.
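The spectral argument can be summarized in a few lines (again in my own, standard notation): the density of a sum of independent variables is the convolution of their densities, so in Fourier space the characteristic functions simply multiply. For an n-fold aggregation with finite mean and variance, only the lowest-order terms of the expansion survive for small k – which is exactly the Gaussian limit.

```latex
p_{X+Y} = p_X * p_Y
\;\;\Longrightarrow\;\;
\hat{p}_{X+Y}(k) = \hat{p}_X(k)\,\hat{p}_Y(k)

\hat{p}_{S_n}(k) = \big[\hat{p}(k)\big]^{n}
= \exp\!\big(n \ln \hat{p}(k)\big)
\approx \exp\!\big(i\,n\mu k - \tfrac{1}{2}\, n \sigma^{2} k^{2}\big)
\quad (k \to 0)
```

The higher Fourier components are damped away, and the aggregate approaches a Gaussian of mean nμ and variance nσ².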

All in all, the article provides very clear ideas and solid arguments as to why certain statistical patterns govern common distributions of natural objects' properties. As ML people we should be aware of such distributions and their mathematical properties.