Flatpak and Blender after system updates – update the Flatpak Nvidia package to match your drivers

Some days ago I wanted to perform some Blender experiments on my laptop. Shortly before, I had upgraded most of the installed Leap 15.3 packages. Afterwards I thought this was a good opportunity to bring my Flatpak-based installation of Blender up to date, too.

A Flatpak installation of Blender had been necessary since I changed the laptop’s OS to Opensuse Leap 15.3. The reason: Opensuse did not and does not offer any current version of Blender in its official repositories for Leap 15.3 (which in itself is a shame). (I have not yet checked whether the situation has changed with Leap 15.4.)

The present Blender version available via Flatpak is 3.3.1; I had version 3.1 installed. So I upgraded to 3.3.1. However, this upgrade step for Blender alone did not give me a working installation (in contrast to a snap upgrade).

The problem

The sequence of commands I used to perform the update was:

flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
flatpak update org.blender.Blender  
flatpak list

The second command brought Blender V3.3.1 to my disk. However, when I tried to start my new Blender with

flatpak run org.blender.Blender &

the system complained about a non-functional GLX and OpenGL setup.

But my KDE desktop was actually running on the laptop’s Nvidia card. The laptop has an Optimus configuration: I use Opensuse’s Prime Select to switch between the Intel i915 driver for the graphics unit integrated in the processor and the Nvidia driver for the dedicated graphics card. And the Nvidia driver was definitely active.
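
For reference: which GPU a session actually uses can be checked quickly on the command line. prime-select comes with the suse-prime package; glxinfo belongs to Mesa’s demo tools and may have to be installed separately.

# which driver suse-prime has configured
prime-select get-current
# which GPU the running X session actually renders on
glxinfo | grep "OpenGL renderer"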

The cause of the failure and its solution

To start OpenGL applications, Flatpak requires the right interface to the Nvidia driver AND to the presently active GLX environment.

A “flatpak list” showed me that the installed Nvidia GL runtime was “nvidia-470-141-01”:

Name                    Application ID                                   Version    Branch   Installation
Blender                 org.blender.Blender                              3.3.1      stable   system
Codecs                  org.blender.Blender.Codecs                                  stable   system
Freedesktop Platform    org.freedesktop.Platform                         21.08.16   21.08    system
Mesa                    org.freedesktop.Platform.GL.default              21.3.9     21.08    system
nvidia-470-141-01       org.freedesktop.Platform.GL.nvidia-470-141-01               1.4      system
Intel                   org.freedesktop.Platform.VAAPI.Intel                        21.08    system
ffmpeg-full             org.freedesktop.Platform.ffmpeg-full                        21.08    system
openh264                org.freedesktop.Platform.openh264                2.1.0      2.0      system

A quick look at nvidia-settings and YaST showed me, however, that the driver and the other Nvidia components installed on Leap 15.3 were of version 470.141.03, i.e. “nvidia-470-141-03” in Flatpak’s naming scheme.
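
If you want to compare both versions directly on the command line, something like the following is sufficient (nvidia-smi comes with the driver packages):

# driver version as seen by the Nvidia tools, e.g. 470.141.03
nvidia-smi --query-gpu=driver_version --format=csv,noheader
# Nvidia GL runtimes known to Flatpak
flatpak list --runtime | grep -i nvidia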

Then I tried

mytux:~ # flatpak install flathub org.freedesktop.Platform.GL.nvidia-470-141
Looking for matches…
Similar refs found for ‘org.freedesktop.Platform.GL.nvidia-470-141’ in remote ‘flathub’ (system):

   1) runtime/org.freedesktop.Platform.GL.nvidia-470-94/x86_64/1.4
   2) runtime/org.freedesktop.Platform.GL.nvidia-470-141-03/x86_64/1.4
   3) runtime/org.freedesktop.Platform.GL.nvidia-470-141-10/x86_64/1.4
   4) runtime/org.freedesktop.Platform.GL.nvidia-470-74/x86_64/1.4
   5) runtime/org.freedesktop.Platform.GL.nvidia-430-14/x86_64/1.4
   6) runtime/org.freedesktop.Platform.GL.nvidia-390-141/x86_64/1.4

Which do you want to use (0 to abort)? [0-6]: 2

This command offered me a list to select the required subversion from. In my case option 2 was the appropriate one.

And indeed: after installing the matching runtime, the new Blender version started without complaints.

By the way: Flatpak allows for multiple versions to be installed at the same time. Like:

Name                 Application ID                                Version  Branch Installation
Blender              org.blender.Blender                           3.3.1    stable system
Codecs               org.blender.Blender.Codecs                             stable system
Freedesktop Platform org.freedesktop.Platform                      21.08.16 21.08  system
Mesa                 org.freedesktop.Platform.GL.default           21.3.9   21.08  system
nvidia-470-141-03    org.freedesktop.Platform.GL.nvidia-470-141-03          1.4    system
nvidia-470-141-10    org.freedesktop.Platform.GL.nvidia-470-141-10          1.4    system
Intel                org.freedesktop.Platform.VAAPI.Intel                   21.08  system
ffmpeg-full          org.freedesktop.Platform.ffmpeg-full                   21.08  system
openh264             org.freedesktop.Platform.openh264             2.1.0    2.0    system
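
Once the matching runtime works, the obsolete one can of course be removed, e.g.:

flatpak uninstall org.freedesktop.Platform.GL.nvidia-470-141-01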

Conclusion

Your Flatpak installation must provide a version of the “org.freedesktop.Platform.GL.nvidia-xxx-nnn-mm” runtime which matches the Nvidia driver presently installed on your Linux operating system.
Do not forget to upgrade the Flatpak Nvidia runtime after having upgraded the Nvidia drivers on your Linux OS!
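
If you want to script this step, the name of the required runtime can be derived from the driver version reported by nvidia-smi; the dots of the version string simply become dashes. A minimal sketch (assuming bash, a single Nvidia GPU and the flathub remote configured as above):

# read the installed driver version, e.g. 470.141.03
DRIVER_VERSION=$(nvidia-smi --query-gpu=driver_version --format=csv,noheader | head -n1)
# install the Flatpak GL runtime of the same version (dots replaced by dashes)
flatpak install -y flathub org.freedesktop.Platform.GL.nvidia-${DRIVER_VERSION//./-}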

Links

https://github.com/flathub/org.blender.Blender/issues/97
Replacing unstable Blender 2.82 on Leap 15.3 with flatpak or snap based Blender 3.1

Blender – even on old laptops a graphics card increases rendering performance

My present experiments with Blender on my old laptop take considerable time to render, especially animations. So I got interested in whether rendering on the laptop’s old Nvidia card, a GT 645M, would make a difference in comparison to rendering on the 8 (hyperthreaded) cores of the CPU available to the OS. The laptop’s CPU is an old one, too, namely an i7-3632QM. The laptop’s operating system is Opensuse Leap 15.3. The system uses Optimus technology; to switch between the Nvidia card and the Intel graphics I invoke Suse’s Prime Select application on KDE.

I got a factor of 2 up to 5.2 faster rendering on the GPU in comparison to the CPU. The difference depends on multiple factors. The number of CPU cores used is an important one.

How to activate GPU rendering in Blender?

Basically three things are required: (1) A working recent Nvidia driver (with compute components) for your graphics card. (2) A certain setting in Blender’s preferences. (3) A setting for the Cycles renderer.

Regarding the CUDA toolkit I quote from Blender’s documentation:

Normally users do not need to install the CUDA toolkit as Blender comes with precompiled kernels.
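
On Opensuse the compute part of the 470 driver is typically provided by the nvidia-computeG05 package of the community repository. Whether these components are present can be checked, for example, like this (paths as on a standard Leap installation; they may differ elsewhere):

# should list the GPU and report a CUDA version in the header line
nvidia-smi
# the driver's CUDA library installed by the compute package
ls -l /usr/lib64/libcuda.so*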

With respect to the required Blender settings, one has to choose a CUDA-capable device via the menu point “Preferences >> System”:

You may also select both the GPU and the CPU; then rendering will be done on both devices. My graphics card unfortunately supports only a rather low CUDA compute capability. The Nvidia driver I used is of version 470.103.01, installed via Opensuse’s Nvidia community repository:

In addition, you must set the render device option of the Cycles renderer to the GPU:
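
For those who prefer not to click through the GUI: both settings can also be applied by a small script, as Blender is fully scriptable. The following is only a sketch for a Flatpak-based Blender 3.x; the attribute names come from the Cycles add-on’s Python API, and the per-scene device setting is normally stored in the .blend file.

flatpak run org.blender.Blender --background --python-expr "
import bpy
prefs = bpy.context.preferences.addons['cycles'].preferences
prefs.compute_device_type = 'CUDA'       # preference: use the CUDA backend
prefs.get_devices()                      # refresh the device list
for dev in prefs.devices:
    dev.use = (dev.type != 'CPU')        # tick the GPU(s); include the CPU too if wanted
bpy.context.scene.cycles.device = 'GPU'  # the Cycles render option (per scene)
bpy.ops.wm.save_userpref()               # make the preference part persistent
"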

With all these settings I got a factor of 2 up to > 6 faster rendering on the GPU in comparison to a CPU with multiple cores.

The difference in performance, of course, depends on

  • the number of threads used on the CPU with 8 (hyperthreaded) cores available to the Linux OS
  • tiling – more precisely the “tile size” – in case of the GPU and the CPU

All other render options, with the exception of “Fast GI”, were kept constant during the experiments.

Scene Setup

To give Blender’s Cycles renderer something to do I set up a scene with the following elements:

  • a mountain-like landscape (via the A.N.T. Landscape add-on) with a subdivision of 256 x 128 – plus a Subdivision Surface modifier (Catmull-Clark, render level 2, limit surface quality 3) – plus a simple procedural texture with some noise and bumps
  • a plane with an “ocean” modifier (no repetition, waves + noisy bump texture for the normal to simulate waves)
  • a world with a sky texture of the Nishita type (blue sky due to a high air density, some dust and a sun just above the horizon)

The scene looked like this:

The central red rectangle marks the camera perspective and the area to be rendered. With 80 samples and a resolution of 1200×600 we get:

The hardest part for the renderer is the reflection on the water (Ocean with wave and texture). Also the “landscape” requires some time. The Nishita world (i.e. the sky with the sun), however, is rendered pretty fast.

Required time for rendering on multiple CPU cores

I used 40 samples to render – no denoising, progressive multi-jitter, 0 minimum bounces.
Other settings can be found here:


The number of threads, the tile size and the use of the Fast GI approximation were varied.
The resolution was chosen to be 1200×600 px.

All data below were measured on a flatpak installation of Blender 3.1.2 on Opensuse Leap 15.3.
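
Such runs can also be reproduced headlessly from the command line. A sketch (the .blend file name is only a placeholder; -t limits the number of CPU threads, and the Cycles device can be chosen after the “--” separator):

# CPU rendering of frame 1 with 4 threads
flatpak run org.blender.Blender --background my_scene.blend -E CYCLES -t 4 -f 1
# GPU rendering of frame 1 via CUDA
flatpak run org.blender.Blender --background my_scene.blend -E CYCLES -f 1 -- --cycles-device CUDA

Blender prints the render time at the end of each such run.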

tile size   threads   Fast GI   time (s)
   64          2         no      82.24
  128          2         no      81.13
  256          2         no      81.01
   32          4         no      45.63
   64          4         no      43.73
  128          4         no      43.47
  256          4         no      43.21
  512          4         no      44.06
  128          8         no      31.25
  256          8         no      31.04
  256          8         yes     26.52
  512          8         no      31.22

A tile size of 256×256 seems to provide an optimum regarding rendering performance. In my experience this depends heavily on the scene and the chosen image resolution.

“Fast GI” gives you a slight, but noticeable improvement. The differences in the rendered picture could only be seen in relatively tiny details of my special test case. It may be different for other scenes and illumination.

Note: With 8 CPU cores activated my laptop was under noticeable thermal stress: the CPU temperature went up to 81° Celsius.

Required time for rendering on the mobile GPU

Below are the time consumption data for rendering on the mobile Nvidia GT 645M GPU:

tile size   Fast GI   time (s)
   64          no      18.3
  128          no      16.47
  256          no      15.56
  512          no      15.41
 1024          no      15.39
 1200          no      15.21
 1200          yes     12.80

Bigger tile sizes improve the GPU rendering performance! This may be different for rendering on a CPU, especially for small scenes. There you have to find an optimum for the tile size. Again, we see an effect of Fast GI.

Note: The temperature of the mobile graphics card never rose above 58° Celsius. I measured this whilst rendering a much bigger image of 4800×2400 px. I therefore think that the thermal stress which Blender rendering puts on the GPU is smaller than the stress it puts on the CPU.
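
How one can keep an eye on both temperatures during rendering (just a sketch; it assumes that the lm_sensors package and the driver’s nvidia-smi tool are installed):

# refresh CPU core temperatures and the GPU temperature every 2 seconds
watch -n 2 'sensors | grep "Core"; nvidia-smi --query-gpu=temperature.gpu --format=csv,noheader'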

Required time for rendering on both the CUDA-capable mobile GPU and the CPU

Blender’s list of Cycles render devices in the preferences also offers the CPU when CUDA is selected, so one can let the CPU render in addition to the GPU. With 4 CPU cores this brings you down to around 11 secs, with 8 cores down to 10 secs.

tile size   threads   Fast GI   time (s)
   64          4         no      11.01
  128          8         no      10.08
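
For headless runs the same hybrid mode can be requested on the command line by appending +CPU to the Cycles device (again only a sketch for Blender 3.x):

# render frame 1 on the GPU and the CPU together
flatpak run org.blender.Blender --background my_scene.blend -E CYCLES -f 1 -- --cycles-device CUDA+CPU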

Conclusion

Even on an old laptop with Optimus technology it is worthwhile to use a CUDA-capable Nvidia graphics card for Cycles-based rendering in Blender experiments. The rise in temperature was relatively low in my case. The gain in performance may range from a factor of 2 to 5, depending on how many CPU cores you can invoke without overheating your laptop.

Ceterum censeo: The worst living fascist and war criminal today, who must be isolated, denazified and imprisoned, is the Putler.