Machine Learning on PCs – Use mixed precision and look out for super-convergence to save energy

People running Machine Learning [ML] experiments on their own Linux PCs or laptops know that numerical training runs put a heavy load on the graphics card and, as a direct consequence, consume a lot of energy. Especially in a hot summer like the one we are having in Germany right now, cooling your systems may become a problem. And as energy carries a high price tag here, any method that reduces the load and/or the power consumption is welcome.

But I think that caring about energy consumption is a topic which we as Linux and ML enthusiasts should keep in mind in general. Some big tech companies will probably not do it – as long as their money machinery works and as some heads chase fantasies about building small nuclear power plants for their big AI data centers. But we Open Source people would like to see more AI and ML services become independent of the monopolists and their infrastructure anyway – and not only for reasons of data and privacy protection.

However, as soon as we advocate and work for a development that favors local, resource-optimized installations of AI and ML tools for both private people and companies, we have to care about the side effects: in parallel, we have to bring the energy consumption of these many local installations down substantially. Otherwise, centralized solutions may end up with a better energy efficiency than decentralized ones.

For me as a retired person in Germany, the general financial pressure is high enough to enforce a careful use of my private resources. With this post I want to draw your attention to two points which may help you, too, to save energy during your ML experiments – in addition to standard measures like saving certain model states during training runs to get better starting points for new runs.
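Of the two points named in the title, mixed precision is the one that can be demonstrated in a few lines. Below is a minimal sketch of how one would enable it in a Keras/Tensorflow 2 setup (the kind of stack used elsewhere on this blog); the small dense model is purely hypothetical and only serves as an illustration. With the "mixed_float16" policy most layer arithmetic runs in float16 on the GPU, which typically reduces both step times and power draw.

```python
# Hedged sketch: enable Keras mixed precision; the model below is a toy example.
import tensorflow as tf
from tensorflow.keras import layers, mixed_precision

# Compute in float16 where possible, keep variables in float32
mixed_precision.set_global_policy("mixed_float16")

model = tf.keras.Sequential([
    layers.Input(shape=(784,)),
    layers.Dense(256, activation="relu"),
    layers.Dense(256, activation="relu"),
    # keep the final softmax in float32 for numerical stability
    layers.Dense(10, activation="softmax", dtype="float32"),
])

# Under the mixed_float16 policy Keras wraps the optimizer with loss scaling
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```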


Opensuse Leap 15.5 – installation of CUDA 12.3 for Machine Learning

Working with Machine Learning and Deep Neural Networks not only requires GPU drivers, but – in the case of Nvidia GPUs – also the installation of CUDA and cuDNN. This process is always a bit tricky, as additional environment variables have to be set for IPython-based Jupyterlab or the classic Jupyter Notebook. On an Opensuse system one must, in addition, take care of the right settings in /etc/alternatives.
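Once CUDA, cuDNN and the environment variables are in place, it is worth checking from inside Jupyterlab that everything actually reached the kernel. A small diagnostic cell like the following (my own sketch, not taken from the linked post) shows whether Tensorflow sees the expected CUDA/cuDNN versions and the GPU:

```python
# Hedged check cell: verify that the environment and the GPU are visible to TF.
import os
import tensorflow as tf

# The library path set for the Jupyter kernel should point to the CUDA libs
print("LD_LIBRARY_PATH:", os.environ.get("LD_LIBRARY_PATH", "<not set>"))

# Versions Tensorflow was built against (GPU builds only)
info = tf.sysconfig.get_build_info()
print("CUDA version :", info.get("cuda_version"))
print("cuDNN version:", info.get("cudnn_version"))

# Finally, the GPUs Tensorflow can actually use
print("Visible GPUs :", tf.config.list_physical_devices("GPU"))
```

If the GPU list comes back empty, the environment variables most likely did not reach the kernel process.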

I have described the necessary steps in a post at “machine-learning.anracom.com”.

I hope this helps people who want to use Leap 15.5 for Machine Learning with Nvidia GPUs, Keras/Tensorflow 2 and Jupyterlab.

Important addendum 01/27/2024:
Although the combination of CUDA 12.3, cuDNN 8.9.7, Tensorflow 2.15 and the Nvidia driver 545.29.06 works as far as AI models are concerned, there is another major problem:
Nvidia’s driver 545.29.06 is buggy – at least on Leap 15.5 with KDE/Plasma and multiple screens. The bug affects Suspend-to-RAM: the suspend phase itself seems to work, and the system also comes up afterward in a seemingly proper state of your KDE/Plasma interface (on your screens).

However, the problems begin when you want to switch to another virtual terminal via Ctrl-Alt-Fx. You wait and wait and wait … The same happens when changing the run-level or systemd target, or when you want to shut the system down. This makes Suspend-to-RAM with driver 545.29.06 impossible to use.

Recommendation:
If you have a working older Nvidia driver (e.g. a stable 535 version), do not change to 545.29.06. Unfortunately, returning to an older driver version on a multiscreen Leap 15.5 system is a mess. The Nvidia community repository does not offer you a choice of versions. (Why, by the way?) Downloading an older proprietary driver from Nvidia and trying to install it on a console terminal (after having stopped X11 or Wayland) did not work in my case – the screens displaying the terminal changed their resolution and froze afterward. So, you may have to uninstall the present 545 driver completely, go back to standard VGA and then try to install an older driver via Nvidia’s install mechanism. As I said: it is a mess …


Machine Learning – recommendation of a publication about natural statistical patterns in object data

Last week I started preparing posts for my new blog on Machine Learning topics (see the blog-roll). During my studies I came across a scientific publication which covers an interesting topic for ML enthusiasts, namely the question of what kinds of statistical distributions we may have to deal with when working with data on natural objects and their properties.

The reference is:
S. A. Frank, 2009, “The common patterns of nature”, Journal of Evolutionary Biology, Wiley Online Library
Link to published article

I strongly recommend reading this publication.

It explains statistical large-scale patterns in nature as limiting distributions. Limiting distributions result from the aggregation of the outcomes of numerous small-scale processes (neutral processes) which fulfill constraints on the preservation of certain pieces of information; aggregation damps out other fluctuations during sampling. The general mathematical approach to limiting distributions is based on entropy maximization under constraints, with the constraints included via Lagrange multipliers – both relatively familiar concepts. The author explains which patterns result from which basic neutral processes.
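Just to recall the machinery behind this approach: maximizing the entropy of a density under normalization and expectation constraints leads, via Lagrange multipliers, to an exponential-family form. A compact sketch in my own notation (not the paper’s):

```latex
% Maximize the entropy of p(x) subject to normalization and expectation constraints
\max_{p}\; -\int p(x)\,\ln p(x)\,dx
\quad \text{s.t.} \quad
\int p(x)\,dx = 1, \qquad
\int f_k(x)\,p(x)\,dx = \bar{f}_k \quad (k = 1,\dots,K)

% Stationarity of the Lagrangian yields the exponential-family solution
p(x) \;\propto\; \exp\!\Big( -\sum_{k=1}^{K} \lambda_k\, f_k(x) \Big)
```

Constraining, for example, the variance gives a Gaussian, constraining the mean of a positive quantity gives an exponential distribution, and constraining the mean of ln(x) gives a power law.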

However, the article also discusses the intimate relation between aggregation and convolution. The author furthermore presents a related and interesting analysis based on Fourier components and their respective damping. For me this part was eye-opening.
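The core of this relation is easy to check numerically: for independent variables the density of the sum is the convolution of the individual densities, which in Fourier space becomes a product of the characteristic functions. A minimal sketch with numpy (my own illustration, not code from the paper):

```python
# Hedged demo: the empirical characteristic function of a sum of independent
# variables equals (up to sampling noise) the product of the individual ones.
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

x = rng.exponential(scale=1.0, size=n)       # one "neutral" small-scale process
y = rng.uniform(low=0.0, high=2.0, size=n)   # a second, independent process
s = x + y                                    # aggregation = sum of the two

def ecf(samples, t):
    """Empirical characteristic function phi(t) = E[exp(i t X)]."""
    return np.mean(np.exp(1j * np.outer(t, samples)), axis=1)

t = np.linspace(-5.0, 5.0, 11)
diff = ecf(s, t) - ecf(x, t) * ecf(y, t)

print(np.max(np.abs(diff)))   # small, only Monte Carlo noise remains
```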

The central limit theorem is explained for cases where a finite variance is preserved as the main piece of information. But the author shows that Gaussian patterns are not the only patterns we may directly or indirectly find in the data of natural objects. To get a solid basis from a spectral point of view, he extends his Fourier analysis to the occurrence of infinite variances and the consequences for other spectral moments. Besides explaining (truncated) power-law distributions, he discusses aspects of extreme value distributions.
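A quick numerical illustration of the difference (again my own sketch, not from the article): sums of finite-variance variables concentrate towards a Gaussian, whereas aggregates of Cauchy-distributed variables, which have no finite variance, keep their heavy tails no matter how many terms enter the sum.

```python
# Hedged demo: finite vs. infinite variance under aggregation.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_terms = 20_000, 200

# Finite variance: standardized sums of uniforms behave like N(0, 1)
u = rng.uniform(-1.0, 1.0, size=(n_samples, n_terms))
z = u.sum(axis=1) / np.sqrt(n_terms * u.var())

# Infinite variance: the mean of standard Cauchy samples is again standard Cauchy
c = rng.standard_cauchy(size=(n_samples, n_terms))
m = c.mean(axis=1)

print("P(|Z| > 4):", np.mean(np.abs(z) > 4))   # practically zero
print("P(|M| > 4):", np.mean(np.abs(m) > 4))   # stays around 0.16
```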

All in all, the article provides very clear ideas and solid arguments for why certain statistical patterns govern common distributions of natural objects’ properties. As ML people we should be aware of such distributions and their mathematical properties.