KDE Plasma users – reconfigure your desktop from scratch when you experience startup delays after system upgrades

This post is mainly aimed at KDE Plasma users on Linux systems which are not based on rolling releases, but which get upgraded along the defined upgrade cycles of the distribution. It is a kind of reminder of a rather trivial point for situations when you upgrade your Linux system or when someone else has done this for you:

  • It may sometimes be worth the effort to rebuild and reconfigure your KDE Plasma desktop after your Linux OS has been upgraded.

A full upgrade of a modern Linux system comprises upgrades of very different components. In most cases a change to a newer version of your preferred desktop environment is part of the package upgrades. Over the last 20 years of KDE/Plasma's evolution, I have experienced several times that the desktop may exhibit problems after such an upgrade. In a few cases the desktop itself and its configuration struggled with problems; sometimes a certain application or plasmoid had deficits. Most often such deficits affected only the performance or presentation of the respective application on the desktop, but sometimes the overall desktop performance was hampered.

Startup times for standard Linux systems are pretty fast these days, including the start of a graphical desktop session. Even on a 12-year-old laptop a KDE Plasma desktop based on X11 builds up within 3 seconds after entering the login credentials, even when reconstructing the status of multiple applications you left open at the end of your last KDE session. Depending on your configuration, part of the time may go into (WiFi) network access and the establishment of complex firewall rules by NetworkManager during network startup. But under normal circumstances, establishing network connections via NetworkManager should not be an obstacle to a fast general start of the graphical desktop.

Therefore, it should make you nervous when you find that starting a KDE Plasma session on a laptop takes around 10 to 20 seconds. This was an experience I had yesterday on a system which had been upgraded to Opensuse Leap 15.6 some months ago. The user was frustrated about a long delay during KDE startup, which he could substantiate with excerpts from logged system messages. Unfortunately, neither the system logs nor the X11 logs indicated clearly what the reason was. In my opinion, this pointed to some problematic configuration, even if the KDE desktop appeared to work without problems after its long startup phase. Some app or plasmoid could have waited for a rather long timeout during the start of the desktop session.

I was able to help my acquaintance. The question of who caused the glitch, the user or the OS provider, is most often rather unproductive and fruitless, as long as there is no indication of a hacker's manipulation. Leaving the possibility of a hacker's interference aside, we first of all want to reconstruct a working graphical environment. If we find the culprit for the problems along the way, all the better. Nevertheless, for important systems exposed for whatever reason to unfriendly attackers, I would recommend making forensic copies of the system first. Never ever think that Linux is a kind of barrier for experienced hackers …

This post, therefore, is about a possible solution under favorable circumstances: a systematic reconstruction of your KDE desktop, its layout, plasmoids and applications, to avoid configuration “errors” you may have carried around with you since the last or even some older upgrade.
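
Just to illustrate the basic idea of such a reconstruction before you dive into the detailed steps: Plasma recreates its central configuration files with default settings when they are moved out of the way before the next login. The following Python sketch only shows this first step and assumes a typical Plasma 5 installation; the listed file names vary between Plasma versions, so check your own ~/.config first and run something like this from a text console, not from within a running Plasma session.

  # Sketch: move typical Plasma config files aside so that Plasma recreates
  # them with default settings at the next login. The file names below are
  # assumptions for a typical Plasma 5 setup; adapt them to your system.
  import shutil, time
  from pathlib import Path

  home = Path.home()
  backup = home / ("plasma-config-backup-" + time.strftime("%Y%m%d-%H%M%S"))
  backup.mkdir()

  candidates = [
      ".config/plasma-org.kde.plasma.desktop-appletsrc",  # panel and widget layout
      ".config/plasmashellrc",                            # plasmashell settings
      ".config/plasmarc",                                 # theme related settings
      ".config/kwinrc",                                   # window manager settings
  ]

  for rel in candidates:
      src = home / rel
      if src.exists():
          shutil.move(str(src), str(backup / src.name))
          print("moved", src, "->", backup / src.name)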

Continue reading

S3Dlib, Matplotlib, 3D rendering – spheres in front of a surface

Recently, I needed a certain type of 3D-illustration for a post series about cosmology. I wanted to show a 2-dimensional manifold above a mesh grid with respective coordinate lines on the surface. In front of the surface I wanted to place some opaque spheres. Such illustrations are often used in physics to demonstrate the effect of some objects on a physical quantity – e.g. of spherical bodies on the gravitational potential or on a component of the metric tensor of space-time.

The seemingly simple task of rendering objects correctly along a defined line of view onto a 3D scene posed a problem for Matplotlib's 3D renderer when multiple objects are placed in one 3D axis frame (created by ax = plt.axes(projection='3d')). The occlusion of objects was displayed incorrectly for most viewports and viewing angles.
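
To make the issue reproducible, here is a minimal sketch with made-up surface and sphere parameters that triggers the wrong occlusion in plain Matplotlib. Each call to plot_surface() creates a separate polygon collection, and Matplotlib's 3D engine essentially sorts whole collections by a single depth value instead of performing per-polygon occlusion tests across objects.

  import numpy as np
  import matplotlib.pyplot as plt

  # Background surface: a simple 2D manifold over a mesh grid
  X, Y = np.meshgrid(np.linspace(-3, 3, 60), np.linspace(-3, 3, 60))
  Z = -1.5 * np.exp(-(X**2 + Y**2))

  # Parametric sphere; radius and positions are made up for the demo
  u, v = np.meshgrid(np.linspace(0, 2 * np.pi, 40), np.linspace(0, np.pi, 40))
  def sphere(cx, cy, cz, r):
      return (cx + r * np.cos(u) * np.sin(v),
              cy + r * np.sin(u) * np.sin(v),
              cz + r * np.cos(v))

  fig = plt.figure(figsize=(7, 6))
  ax = plt.axes(projection='3d')
  ax.plot_surface(X, Y, Z, color='lightsteelblue', edgecolor='gray', linewidth=0.2)
  # Two opaque spheres placed in front of the surface
  for cx, cy in [(-1.0, 1.0), (1.2, -0.8)]:
      ax.plot_surface(*sphere(cx, cy, 0.8, 0.5), color='firebrick')
  ax.view_init(elev=25, azim=-60)
  plt.show()   # for many viewing angles the spheres get hidden by the surface incorrectly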

In this post, I want to briefly outline how this problem can be solved with the help of S3Dlib. Being a beginner regarding the use of S3Dlib, I had to overcome some problems there, too. So, this small exercise with some options of S3Dlib might be interesting for readers who want to use Python and Matplotlib for rendering simple 3D scenes.

The following plot shows what I wanted to achieve:

Correct rendering of two spheres in front of a surface by S3Dlib
Continue reading

Bye, bye Opera … welcome Vivaldi

These are hard times regarding politics and IT. Present developments, in particular in the USA, in China and in Russia, have an indirect or direct impact on various types of IT components, concerning e.g. production sites, quality, tariffs and prices, data control and digital privacy. We in Europe, who have supported Opensource and Opensource-based applications for decades, cannot, in my opinion, ignore the tendencies of both dictatorial regimes and capitalistic tech giants to control the future of IT in general and the production of IT-related products, and to analyze more and more of user-generated traffic, be it for the surveillance and control of citizens or to earn money by analyzing user profiles and indirectly spamming them with advertisements. Even more concerning is the growing power of a handful of companies and institutions over the development and the ultimate direction of AI. The risks in all of these sectors of harming digital privacy (aside from the un-social media) are growing.

But we Europeans should also keep an eye on who invests in what, and on whether such investments come from countries which support aggressors against European countries or the EU. We sometimes need to take a clear position. Better late than never, as in my case.

Continue reading

Upgrade from Leap 15.3 via 15.4 and 15.5 up to Leap 15.6 – problems with the named service

Sometimes one takes on a challenge with Linux. During my stay in Norway I wanted to find out whether one could bring a really old server system (regarding HW) to the latest Leap version of Opensuse. Such an old system can still serve valuable purposes, e.g. for testing complex configurations of server components or for use as an extended IDS/firewall system. In my case I was fortunate, as the real problems occurred with SW and not with HW.

Regarding HW my concerns related to an old Nvidia GT 710. The advantage of this card was/is that it is passively cooled and provides enough power for running either a present KDE or a Gnome desktop, if necessary or useful. I was lucky to find that the present G05 Nvidia drivers support this card.

Somewhat unexpectedly, real problems occurred with an installed named service during the upgrade from 15.3 to Leap 15.4, and again when upgrading from 15.5 to 15.6. While you can find many complaints on the Internet, I did not find a solution that covered all of my problems. Therefore, I want to give Linux users and administrators who experience similar problems some hints.

Continue reading

Keras 3/TF vs. PyTorch – small model performance tests on an Nvidia 4060 TI

There are many PROs and CONs regarding the choice of a Machine Learning [ML] framework for private studies on a Linux workstation. Two widely used options are PyTorch and a Keras/Tensorflow combination. One aspect of productive work with ML models certainly is performance. As I personally do not have TPUs or other advanced accelerator chips available, but just a consumer Nvidia 4060 TI graphics card, performance and optimal GPU usage are of major interest, even for the training of relatively small models.

With this post I just want to point out that the question of performance advantages of one framework over the other on a CUDA-controlled graphics card cannot be answered in a general way. Even for small neural network [NN] models, the performance may depend on a variety of relevant settings, on JIT-/XLA-compilation and on the chosen precision level of your training or inference runs.
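
For orientation, the following minimal sketch indicates the kind of switches I am referring to; it is not the test setup of this post, and it assumes Keras 3 with a Tensorflow backend on the one side and PyTorch 2.x on the other.

  # Keras 3 (TF backend): request XLA/JIT compilation and mixed precision
  import keras
  keras.mixed_precision.set_global_policy("mixed_float16")   # fp16 compute, fp32 variables
  model = keras.Sequential([
      keras.Input(shape=(784,)),
      keras.layers.Dense(128, activation="relu"),
      keras.layers.Dense(10, activation="softmax"),
  ])
  model.compile(optimizer="adam",
                loss="sparse_categorical_crossentropy",
                jit_compile=True)                             # XLA compilation of train/predict steps

  # PyTorch 2.x: graph compilation and precision-related settings
  import torch
  torch.set_float32_matmul_precision("high")                  # allow TF32 matmuls on Ampere and later GPUs
  net = torch.nn.Sequential(torch.nn.Linear(784, 128), torch.nn.ReLU(),
                            torch.nn.Linear(128, 10)).cuda()
  net = torch.compile(net)                                    # TorchDynamo/Inductor JIT
  with torch.autocast(device_type="cuda", dtype=torch.float16):
      out = net(torch.randn(64, 784, device="cuda"))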

Continue reading