Replacing unstable Blender 2.82 on Leap 15.3 with a flatpak- or snap-based Blender 3.1

I wanted to work a bit more with my S-curve project in Blender. See e.g.

Blender – complexity inside spherical and concave cylindrical mirrors – IV – reflective images of a Blender variant of Mr Kapoor’s S-curve

At my present location I have to use my old but beloved laptop. As a new start in the project I just wanted to make a little movie showing dynamic changes in the reflections of some moving spheres in front of the S-curve's metallic surface. As I had already created a few Blender movies some years ago, I expected this to be an easy job. But I ran into severe trouble. A major reason was the fact that Opensuse does not provide a reasonably usable RPM of a current Blender version for Leap 15.3.

I had upgraded the laptop to Leap 15.3 in December 2022. The laptop has an Optimus system. Of course, I prefer the Nvidia card when creating scenes with Blender. I had had some minor problems with prime-select before. But after recent updates I did not get the Nvidia card running at all – neither with Bumblebee nor with Opensuse's prime-select. I had to invest an hour to solve this problem. After a complete uninstall of Bumblebee and bbswitch and a de- and re-installation of the Nvidia drivers and suse-prime, I got prime-select working in the end.

Just to find that Blender version 2.82 provided by the Leap 15.3 OSS repository was not really stable in a Leap 15.3 environment.

Blender 2.82 unstable on Opensuse Leap 15.3

I tried to create a movie with 100 frames (400×200 px) in Matroska format with H.264 encoding. Rendering the complex S-curve reflections with a strongly reduced number of light ray samples still took considerable time with the Cycles renderer on 2 out of 4 real (and 8 hyperthreaded) CPU cores. It happened that the movie – after an hour of rendering – was not encoded correctly. In some runs Blender crashed after having rendered around 40 frames.

Therefore I changed my policy to rendering just a collection of PNG images. The risk of wasting something like 8 hours of CPU time for a full animation was just too big. But a new disaster happened when I wanted to open a “Video Editing” project in Blender to stitch the images together into the final movie. Blender just crashed. Actually, Blender crashed on Leap 15.3 whenever I tried to open any project type other than “General”. Not funny … I had not experienced anything like this on Leap 15.2.

Unfortunately, and in contrast to Tumbleweed, there is no official and recent version of Blender available for Leap 15.3. So much for the stability and reliability of Leap. What to do?

Canonical’s snap service or Flatpak to the rescue

I had read about Canonical's snap before – but never tried it. Canonical's strange side steps during the evolution of Linux have always been disturbing for me. Regarding “snap” I also had a newspaper article in mind about a controversy with Linux Mint – with claims that snap has closed, proprietary components and transmits telemetry data to Canonical. And it works with root rights …

Therefore, a flatpak installation had to be considered, too. The risks are small on my old laptop. So I just tried both – despite my concerns and doubts regarding snap.

Blender 3.1 via “snap”

I just followed the instructions for Opensuse on https://snapcraft.io/docs/installing-snap-on-opensuse to install “snap” on my Leap 15.3 system. As root:

zypper addrepo --refresh     https://download.opensuse.org/repositories/system:/snappy/openSUSE_Leap_15.3     snappy
zypper install snapd
systemctl enable --now snapd
systemctl enable --now snapd.apparmor
rcsnapd status
# install blender
snap install blender --channel=3.1/stable --classic
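
To verify the installation you can use snap's own query commands (standard snap CLI calls, shown here just as a quick check):

snap version            # shows the snap and snapd versions
snap list blender       # confirms the installed Blender snap and its channel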

Another description can be found here:
https://www.linuxcapable.com/how-to-install-snap-snap-store-snapcraft-on-opensuse-leap-15/

The run-time environment of snap is handled by a daemon “snapd”. The majority of the installed files lives in four directories:

/snap           -- 1.1 GB thereof 686 MB for Blender
/usr/lib/snap   -- 48 MB
/var/snap       -- almost nothing 
/var/lib/snapd  -- 360 MB 

In the first directory, “/snap”, you also find the installed applications themselves – in my case Blender 3.1.2, which consumes 686 MB of the roughly 1.1 GB.
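
If you want to check the space consumption on your own system, a simple du call (as root) delivers the numbers:

du -sh /snap /usr/lib/snap* /var/snap /var/lib/snapd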

The startup of Blender via

/snap/bin/blender &

takes about 2 secs. Loading my S-curve test file took even more time. Otherwise snap's Blender version 3.1.2 seemed to work perfectly: all the bugs I had seen with 2.82 were gone. And I could not really notice a difference in performance when working with 3D objects in the Blender viewport. For a render test case I found 17.72 secs per frame on average. Memory release after leaving Blender seemed to be OK.

Blender 3.1 via Flatpak

Then I tried a Blender installation with flatpak. The installation is almost as simple as for snap. See https://www.flatpak.org/setup/openSUSE. It sums up more or less to issuing the following commands:

zypper install flatpak
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
flatpak install flathub org.blender.Blender
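
Analogously to snap, you can verify the installation with flatpak's query commands:

flatpak list --app                    # lists the installed applications
flatpak info org.blender.Blender      # details of the Blender flatpak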

The installation pulled in a couple of additional packages from the Opensuse repository.

Regarding the installation I was surprised to find that flatpak requires much more space than snap. The majority of the flatpak files is located in the following folder:

/var/lib/flatpak  -- 2.3 GB, thereof around 628 MB for Blender

So, in comparison to snap, this makes a remarkably huge difference in the amount of files required to provide a working runtime environment! I understand that many files have to be provided for a stable run-time environment on KDE, even for just one application – but in comparison to the snap installation it appears bloated.

We start flatpak’s Blender via

flatpak run org.blender.Blender &

The startup time was comparable to the snap installation – no difference felt. The test render time per frame was 18.95 secs. That is 1.2 secs or about 6.5% slower than snap. The reason is unclear. After repeated trials the difference was sometimes a bit smaller (4%), but sometimes also bigger (8.5%) – depending on the start order of the Blender applications. But on average snap's Blender was always a bit faster. I do not regard the difference as really problematic, but it is interesting. Memory release also worked perfectly with flatpak's Blender installation.
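
By the way: if typing the full application ID gets tedious, a simple shell alias (a personal convenience, not part of the flatpak setup) helps:

alias blender-fp='flatpak run org.blender.Blender'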

Conclusion

Before you start struggling with the Blender 2.82 binary Opensuse provides, it is really worth trying a flatpak-based or a snap-based installation of Blender 3.1.2. On my system both worked perfectly.

Before you make a decision whether to go with flatpak or snap, take into account the critical points which were discussed on the Internet regarding proprietary aspects of snap. Personally, I am going to use flatpak for Blender.
Otherwise, my opinion is that a distributor like Opensuse should provide an important application like Blender in its present version as a binary RPM and resolve the dependencies as soon as possible. For me a flatpak installation is just a compromise. And it cost me a lot of valuable SSD space.

Links

https://www.linux-community.de/ausgaben/linuxuser/2018/02/dreikampf/
https://hackaday.com/2020/06/24/whats-the-deal-with-snap-packages/
https://www.makeuseof.com/snap-vs-appimage-vs-flatpak/

Ceterum censeo: The worst living fascist and war criminal today who must be isolated, denazified and imprisoned is the Putler.

 

Switching between Intel and Nvidia graphics cards on an Optimus laptop with Leap 15.3

At my present location an old laptop with an Optimus system still provides me with good services. Recently, I installed Opensuse Leap 15.3 on it. Since then I have tried almost everything I can think of to get Bumblebee running smoothly on it. In the end I gave up – sometimes “primusrun” worked, sometimes not – especially with more complex applications like Blender. And even when “primusrun” worked, “optirun” most often did not.

So, I had a look at “prime-select”, installed by the package “suse-prime” of Opensuse. Again I had mixed experiences: it worked sometimes from the command line and sometimes not. Multiple things were somehow interwoven:

  • bbswitch intervened or was used by prime-select when switching to Intel – and when it turned the Nvidia card off, you could activate the card again, but afterward the Nvidia device was not recognized by the native Nvidia driver or by prime-select. Most often I had to reboot.
  • When I was lucky and, after reboots, got a configuration where the Nvidia driver was loaded in parallel to the i915 driver for Intel, and then used “prime-select” together with an init 3 / init 5 sequence, the display manager sddm did not come up. Others had this problem, too.
  • To switch between the graphics cards it was absolutely necessary to log into my graphical KDE desktop, use a root terminal and switch to the Nvidia card with prime-select from there. Then I had to log out again, and if I was lucky the display manager came up and the Nvidia card was really active.

But do not ask me what I had to do in which order to get the desired result. All in all the behavior of “prime-select” was a bit unpredictable. Then, after updating a lot of packages yesterday, I could not get prime-select to do the right thing any more. Most often I got the message:

ERROR: Unable to query GPU information

Or: “No such device found.”

Solution Part 1: Uninstallation of unrequired RPMs and reinstallation of required RPMs

In the end I uninstalled Bumblebee (as recommended by posts in Suse forums) and also bbswitch. Afterward I reinstalled the packages “suse-prime” and “plasma5-applet-suse-prime” from the Leap 15.3 Update repository (i.e. the Update repo for the Enterprise system) and the Nvidia drivers from Opensuse's Nvidia repository. I rebooted and my “sddm” display manager came up – with the Intel driver active – but the Nvidia driver was nevertheless already loaded. I checked with lsmod.
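
For reference, the core of this procedure boils down to something like the following zypper commands (the exact Nvidia driver package names depend on your card and repo setup, so I only indicate that step in a comment):

zypper remove --clean-deps bumblebee bbswitch
zypper install --force suse-prime plasma5-applet-suse-prime
# plus a forced reinstallation of the Nvidia driver packages
# from Opensuse's Nvidia repository (e.g. the G05 series)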

Later, I found that all files in “/etc/prime” had been rewritten. I suspect that I had something in there which was no longer compatible with the latest drivers and prime versions. In addition, the file “90-nvidia.conf” in “/etc/X11/xorg.conf.d” had been replaced.

Solution Part 2: Use the “Suse Prime Selector” applet!

Then I used the applet in the KDE Plasma desktop to switch to Nvidia. This seemed to work and led to no errors. I just had to log out of my KDE Plasma session and back in again to work with KDE on a running Nvidia card. (Ignore the message that you have to reboot.)
Afterward I also tested switching by using the command “prime-select” as root in a terminal window of the KDE desktop. That worked perfectly, too.
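
For those who have never used it, the relevant suse-prime calls are simply (as root):

prime-select get-current    # show which GPU is currently configured
prime-select nvidia         # switch to the Nvidia card
prime-select intel          # switch back to the Intel GPU

followed by logging out of and back into the Plasma session, as described above.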

Conclusion

I am not sure what ultimately got prime-select running again. The deinstallation of Bumblebee and bbswitch? The new Prime and Nvidia configuration files? Or a new udev version pulled in with other updates?

Anyway – using the Suse Prime Selector applet, or just the command “prime-select” as root in a terminal window inside KDE Plasma, has become a very stable way to switch between the graphics cards on my Leap 15.3 laptop with KDE. I still have to log out of and back into the KDE session – but so far I have never experienced any problems with the display manager again.

Just for information: the packages in use are the Nvidia driver packages from Opensuse's Nvidia repository and the suse-prime packages from the Leap 15.3 Update repository.
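
You can list the exact versions on your own system with rpm:

rpm -qa | grep -Ei 'nvidia|suse-prime'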

Links

https://www.reddit.com/r/openSUSE/comments/ib9buv/opensuse_kde_suse_prime_selector_applet_for/
https://forums.developer.nvidia.com/t/kubuntu-18-04-gives-black-screen-when-sddm-starts-no-login/61034/11
https://wiki.archlinux.org/title/PRIME#Configure_applications_to_render_using_GPU
https://www.opensuse-forum.de/thread/63871-suse-prime-error-unable-to-query-gpu-information/
https://opensuse.github.io/openSUSE-docs-revamped-temp/hybrid_graphics/

Ceterum censeo: The worst living fascist and war criminal, who must be isolated, denazified and imprisoned, is the Putler.

 

KMeans as a classifier for the WIFI and MNIST datasets – IV – KMeans on PCA transformed data

In the last posts of this series

KMeans as a classifier for the WIFI and MNIST datasets – I – Cluster analysis of the WIFI example
KMeans as a classifier for the WIFI and MNIST datasets – II – PCA in combination with KMeans for the WIFI-example
KMeans as a classifier for the WIFI and MNIST datasets – III – KMeans as a classifier for the WIFI-example

we applied the KMeans algorithm to perform a cluster analysis of the WIFI dataset of UCI Irvine. The results gave us insights into the spatial grouping and the separability of the data samples in their 7-dimensional feature space. An additional PCA analysis helped us to understand why projections of the data into some selected 2-dimensional sub-spaces of the feature space revealed the four or five dominant clusters so well. In the third post I discussed a simple method to turn KMeans into a classifier. In the WIFI case a set of 9 to 11 clusters provided a good resolution of the data distribution, and we reached a convincing classifier accuracy.

What we have not done yet is to transform and project the WIFI data into the coordinate system of the most important principal components and apply KMeans clustering afterward. We know already that three principal components fit the data very well and give us around 90% of the “explained variance”. See the second post for these basic PCA results. We therefore expect a cluster classifier on the PCA-transformed data to reach prediction accuracies comparable to the values given in the last post. There, for 500 test samples after a KMeans fit to 1500 training samples in the original feature space, we found a prediction accuracy of around 98%.

In this post we first perform a PCA analysis for three principal components of the WIFI data distribution and transform the vectors of 1500 randomly selected training samples into the 3-dimensional principal component space. Then we apply KMeans to the data in the reduced vector space and establish a classifier predictor based on the methods described in the last article. Finally, we check the accuracy and display the resulting confusion matrix for the 500 test samples.
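
For those who want to reproduce the procedure, here is a minimal sketch with scikit-learn. It is an outline under assumptions: the variable names X and y (the 2000 WIFI samples and their integer room labels), the scaling step and the majority-label mapping of clusters are my choices here and not necessarily identical to the exact code used in the previous posts:

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import confusion_matrix

# X: (2000, 7) numpy array of WIFI signal strengths,
# y: numpy array of integer room labels (assumed loaded already)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=1500, test_size=500, random_state=42)

# scale and project onto the three most important principal components
scaler = StandardScaler().fit(X_train)
pca = PCA(n_components=3).fit(scaler.transform(X_train))
X_train_p = pca.transform(scaler.transform(X_train))
X_test_p = pca.transform(scaler.transform(X_test))
print("explained variance:", pca.explained_variance_ratio_.sum())   # ~0.90

# KMeans with 11 clusters on the PCA-transformed training data
km = KMeans(n_clusters=11, random_state=42).fit(X_train_p)

# turn KMeans into a classifier: assign each cluster the majority label
# of the training samples it contains
cluster_label = {}
for c in range(km.n_clusters):
    cluster_label[c] = np.bincount(y_train[km.labels_ == c]).argmax()

# predict test labels via the cluster each test sample falls into
y_pred = np.array([cluster_label[c] for c in km.predict(X_test_p)])
print(confusion_matrix(y_test, y_pred))
print("avg_err =", np.mean(y_pred != y_test))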

As a side-step for readers who look for real world use cases regarding signals, I want to mention an article in “Nature Communications”, which I found today via a newspaper podcast. There, neural firing rates of a brain region, i.e. some very special signals, were used to enable an ALS patient to select letters from presented sequences and to form statements – by his “thoughts”. This looks like an environment where Machine Learning really could contribute more in the future.

KMeans as a classifier on the PCA transformed WIFI data

Below I give you the results for the WIFI data transformed and projected onto the three most important principal components, with 11 clusters:

Results for 1500 training samples

  
Confusion matrix for the training data – 11 clusters, 3 PCA components:
[[374   0   1   0]
 [  0 362  13   0]
 [  4   7 359   5]
 [  1   0   0 374]]

Number of wrongly predicted train samples:  31  :: avg_err =  0.020

So, just from counting wrongly classified samples, the average error is around 2% and the relative accuracy is around 98%.
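
The numbers follow directly from the matrix above: the diagonal sums to 1469 correct predictions out of 1500 samples, i.e. 31 errors. In numpy terms (cm being the matrix printed above):

import numpy as np
cm = np.array([[374,   0,   1,   0],
               [  0, 362,  13,   0],
               [  4,   7, 359,   5],
               [  1,   0,   0, 374]])
acc = np.trace(cm) / cm.sum()   # 1469 / 1500 ≈ 0.979
err = 1.0 - acc                 # ≈ 0.02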

And for the test data I got:

Number of wrongly predicted test samples:  6  :: avg_err =  0.012

The corresponding confusion matrix for the test samples is almost purely diagonal; only the 6 misclassified samples appear off the diagonal.

This confirms our assumption about combining a PCA transformation with a KMeans classifier. The reduction of the dimensionality of the problem did not affect the prediction accuracy very much.

Just for completeness I also repeated the procedure with only 2 principal components.

The accuracy of around 97% is still convincing. The reason is that the two most important principal components already deliver around 85% of the “explained variance”.
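
If you want to check such numbers yourself: with scikit-learn the cumulative explained variance comes directly from a full PCA fit (reusing the hypothetical scaler and X_train from the sketch above):

import numpy as np
from sklearn.decomposition import PCA

pca_full = PCA().fit(scaler.transform(X_train))   # fit all 7 components
print(np.cumsum(pca_full.explained_variance_ratio_))
# according to post II the cumulative value reaches ~0.85 after 2
# and ~0.90 after 3 components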

Why is the WIFI example not as boring as one may think?

A reader wrote me that he finds the WIFI example too simple and boring. OK, but … The principles and methods remain the same when more complex data are analyzed for clusters. Especially in the case of binary classification. But are there interesting real world use cases for other types of signals? Oh, yes. I just want to refer to an interesting example which I read about this morning.

The WIFI example works with samples which describe 7 signals. Now, imagine that such signals come from a sensor implant measuring electric potentials of a human brain and that we do not analyze for the location of rooms but for the selection of letters by “Yes/No” decision-“imaginations” – made by a human who was trained via frequency based audio-feedback for the brain regions covered by the implants. Science fiction? No, reality. And of huge help for ALS patients. See

Chaudhary, U., Vlachos, I., Zimmermann, J.B. et al. Spelling interface using intracortical signals in a completely locked-in patient enabled via auditory neurofeedback training. Nat Commun 13, 1236 (2022). https://doi.org/10.1038/s41467-022-28859-8

and
https://www.nature.com/articles/s41467-022-28859-8

There, signals were measured from two implant arrays with 64 electrodes. OK, these are somewhat more signals than just 7. But if I understood the text correctly, not all channels were used or useful – just a few. Does this remind us of PCA? In addition, the time structure of the signals (firing rates) is important – but these are just different signal characteristics. And we have different labels. But, at least in principle, we speak of nothing else than pattern detection based on signal values.

I only had a brief look into the supplementary data of the experiment (an Excel file) and I am not at all familiar with the experimental setup – but from my reading my impression was that just threshold values for the firing rates of some channels were used to distinguish “Yes” from “No”. Maybe we could do a bit better with AI (PCA and classification according to multidimensional pattern analysis)? Does this look like an interesting use case?

Conclusion

In the case of the WIFI example KMeans can be used as an efficient classifier for samples in a feature space which describes characteristics of multiple signal sources. We have seen that the basic concept also works when we apply KMeans after a PCA-based transformation to the most important principal components.

The question is: does this work equally well for other data sets? The answer depends on how completely the clusters reside within regions of the feature space filled by samples of a specific label.
A data set whose samples show grouping in a multidimensional feature space and appear relatively well separable by their labels is the MNIST data set. In the next post of this series we shall therefore try to apply a clustering algorithm to the MNIST data ensemble.

Stay tuned …

Ceterum censeo: The worst fascist today who must be isolated and denazified is the Putler.