Opensuse Leap 15.4 – Online Upgrade from Leap 15.3 on an Encrypted Laptop

After my retirement I was overwhelmed by a lot of typical German bureaucracy. But last weekend I used some time to start the long overdue upgrade of my old laptop from Opensuse Leap 15.3 to Leap 15.4. (The support for Leap 15.3 ended at the end of 2022.)

I am always a bit afraid of upgrading my old laptop. It has a somewhat complicated configuration:

Its LVM volumes are fully encrypted with LUKS2. It is an Optimus system – and in the past it was not always easy to switch from the integrated Intel graphics card to the dedicated Nvidia card. Instead of Bumblebee I used Opensuse’s Prime-Select with Leap 15.3. I use KDE as my graphical desktop environment. On Leap 15.3 I did not yet use Wayland – but I intend to switch to Wayland with Leap 15.4. For some of my activities I also use Blender with full OpenGL support in the form of a Flatpak installation. Furthermore, the laptop is used both for Machine Learning, i.e. Python development, and for web development based on LAMP. So, it hosts a variety of services you normally only find on servers. In addition we have KVM and VMware WS Pro installations. So, there are a lot of things which can go wrong. The Nvidia card is also an old one – a GT 645M, which cannot be run with the latest generation of Nvidia drivers.

The good news is: The upgrade from Leap 15.3 to 15.4 went very smoothly – at least regarding the things I was interested in. Below I describe the steps I took to upgrade. With some modifications you should be able to adapt them to your situation.

Backup of the encrypted LVM volume mounted on “/”

On my desktop PCs with Opensuse installations, which I use for daily work, I follow a two-fold “backup” policy ahead of upgrades: I copy my root volume/partition to another LVM volume or partition and make it bootable in parallel to the existing installation. Reason: I want to be able to quickly switch back to my present installation in case of trouble. As I keep all of my personal and project data on separate LVM volumes with dedicated backups, the root volume is the only one which I really must take care of. Therefore, I also copy it to a backup file on an external disk. For all data volumes I have a separate backup routine.

On my laptop I am a bit more relaxed: I just copy the volume mounted on “/” to an external disk. I have no second bootable installation on some other encrypted volume on the laptop. This means that I must boot a Live system or a Rescue system to make a backup of the unmounted “/”-volume.

For my purposes the Leap 15.3 “Rescue System”, which you can find on a DVD ISO image for the installation of Leap 15.3, was sufficient. You can get the ISO image for such a DVD from opensuse.org and burn it onto a DVD. The steps afterward were as follows:

  1. Boot your Leap 15.3 system. Check on which partition or LVM volume your (encrypted) root-filesystem resides. Use e.g. YaST’s partitioner or gparted for this purpose. Shut down.
  2. Insert the DVD, select the boot menu, select the DVD and start from it; select “More …” in the GRUB-like menu, then select the “Rescue System” entry and boot it.
  3. Login as root (no password required). Check that a tmpfs is mounted on / – and not some real partition.
    Note: The root-filesystem of our Leap installation is NOT mounted on “/” of the rescue system. When I speak of the “root-filesystem” below I always refer to the filesystem containing the operating system of our current Leap 15.3 installation and not to the root-fs of the rescue system.
  4. Use the command blkid to check the device names of all accessible partitions and LVM volumes. You should see the encrypted and other volumes/partitions of your laptop’s disks/SSDs there.
  5. Plug in an external backup USB disk. blkid should now show the partitions of this disk, too.
  6. Mount the target filesystem of the external disk, where you want to place your backup, onto “/mnt” in your booted rescue system. Check the available space. In my case (with sdc being the external disk):
    tty1:rescue:~ # mount /dev/sdc2 /mnt
    tty1:rescue:~ # df -h 
    ..
    /dev/sdc2     825G   78G   706G     10%   /mnt 
    ...
    
  7. Locate your Leap 15.3 root-filesystem. In my case the root-filesystem of the laptop is a LUKS2-encrypted LVM volume available as “/dev/mapper/vgb-lvb2”. Note: You must know in advance, i.e. from your Leap 15.3 setup, where your root-filesystem resides.
  8. We now use the command “dd” to copy the root-filesystem onto a restorable image file. In my case:
    dd status=progress if=/dev/mapper/vgb-lvb2 of=/mnt/root_lap.img  
    

After the backup of the (encrypted) root-fs of our Leap 15.3 installation we shut down the rescue system, remove the DVD and boot Leap 15.3 again.
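Should you ever need to restore the backup, the dd command is simply reversed. A hedged sketch with the device and image names from above – double-check both paths before running it, as dd overwrites the target without asking:

tty1:rescue:~ # dd status=progress if=/mnt/root_lap.img of=/dev/mapper/vgb-lvb2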

Check your RPM repositories – refresh and update

On the rebooted Leap 15.3 we check which repositories are active. In my case these were quite a few:

(Ignore the double “mozilla” entry.)

Recommendation: You should make a similar screenshot and save it somewhere outside your laptop to later be able to restore all of the different repositories for Leap 15.4.
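If you prefer the command line, you can also dump the repository list into a file with zypper. The command is standard; the target path is of course my choice:

mytuxlap:~ # zypper lr -u > /root/repos_leap153.txt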

However, the most important repositories required to perform the upgrade are three update repositories:

  • One with renewed RPMs for the OSS,
  • one for Backports (backported RPMs, e.g. security fixes backported from newer kernel or glibc versions than those presently shipped with Opensuse Leap/SLES)
  • and one for renewed RPMs for the SLES version corresponding to the current Leap.

Update repositories contain the latest RPMs of an Opensuse distribution. In our upgrade process we still deal with the relevant update repositories for Leap 15.3. But we are soon going to replace them with their Leap 15.4 counterparts.

Look out for the URLs of the current update repositories:

  • https://download.opensuse.org/update/leap/15.3/oss/
  • https://download.opensuse.org/update/leap/15.3/backports/
  • https://download.opensuse.org/update/leap/15.3/sle/ (alias: repo-sle-update)

Leap 15.3 and 15.4 RPMs are binary compatible with those of the related SLES versions. In my case I had switched most of my Leap 15.3 RPMs to those of the SLES update repo a long time ago already. If you have not done this yet, you should do so now with the help of YaST.

I also directly deleted the repository for games, as I regard it as unimportant during an upgrade.

Now, we refresh the lists of available RPMs and update to the latest versions. You can use the graphical YaST2 for this purpose or the command line:

mytuxlap:~ # zypper refresh

Then we perform an update of our Leap 15.3 RPMs to the latest available versions:

mytuxlap:~ # zypper update

In my case some of my Leap 15.3 repositories (for games, graphics, xfce and for snappy) were no longer available and could not be refreshed. I had simply waited too long with my upgrade. But this caused no major problems during the upgrade.

After the update, reboot and verify that your Leap 15.3 system still works.

Change repository URLs to contain ${releasever} instead of an explicit version number

We now change the URLs of our repositories to contain ${releasever} instead of an explicit “15.3”. This is easy to do on the command line:

mytuxlap:~ # sed -i 's/15\.3/${releasever}/g' /etc/zypp/repos.d/*.repo
mytuxlap:~ # sed -i 's/$releasever/${releasever}/g' /etc/zypp/repos.d/*.repo

The second command is just to be on the safe side: I had previously changed some of the repo URLs to include $releasever without braces, but I want everything to consistently use ${releasever}.
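A quick check that the substitution worked is possible with a plain grep over the repo files:

mytuxlap:~ # grep baseurl /etc/zypp/repos.d/*.repo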

Refresh for Leap 15.4 repository content – and eliminate some repositories

Next we start switching to the repositories for Leap 15.4. The first step is a refresh on the command line, but now for the Leap 15.4 repos. We can do this with the help of the variable ${releasever} in the following form:

mytuxlap:~ # zypper --releasever=15.4 refresh

Note that this does not change our repositories themselves yet, but just the local content information: it gets replaced by lists of the contents of the Leap 15.4 repositories.

In my case this refresh process led to errors. The reason was that some of the repositories which I used on Leap 15.3 had received a different path structure below “download.opensuse.org/” for Leap 15.4. You have to ask the Opensuse people why they changed this.

mytuxlap:~ # zypper --releasever=15.4 refresh
Warning: Enforced setting: $releasever=15.4
Retrieving repository 'nVidia Graphics Drivers' metadata ...........................................[done]
Building repository 'nVidia Graphics Drivers' cache ................................................[done]
Retrieving repository 'Packman Repository' metadata ................................................[done]
Building repository 'Packman Repository' cache .....................................................[done]
Retrieving repository 'Update 15.4' metadata .......................................................[done]
Building repository 'Update 15.4' cache..... .......................................................[done]
Retrieving repository 'graphics' metadata .........................................................[error]
Repository 'graphics' is invalid.
[openSUSE_Leap_${releasever}_1|https://download.opensuse.org/repositories/graphics/openSUSE_Leap_15.4/] Valid metadata not found at specified URL
History:
 - [openSUSE_Leap_${releasever}_1|https://download.opensuse.org/repositories/graphics/openSUSE_Leap_15.4/] Repository type can't be determined.

Please check if the URIs defined for this repository are pointing to a valid repository.
Skipping repository 'graphics' because of the above error.
Retrieving repository 'mozilla' metadata ...........................................................[done]
Building repository 'mozilla' cache ................................................................[done]
Retrieving repository 'XFCE' metadata .............................................................[error]
Repository 'XFCE' is invalid.
[openSUSE_Leap_${releasever}_3|https://download.opensuse.org/repositories/X11:/xfce/openSUSE_Leap_15.4/] Valid metadata not found at specified URL
History:
 - [openSUSE_Leap_${releasever}_3|https://download.opensuse.org/repositories/X11:/xfce/openSUSE_Leap_15.4/] Repository type can't be determined.

Please check if the URIs defined for this repository are pointing to a valid repository.
Skipping repository 'XFCE' because of the above error.
Retrieving repository 'Libdvdcss Repository' metadata ..............................................[done]
Building repository 'Libdvdcss Repository' cache ...................................................[done]
Retrieving repository 'Update repository of openSUSE Backports' metadata ...........................[done]
Building repository 'Update repository of openSUSE Backports' cache ................................[done]
Retrieving repository 'Non-OSS Repository' metadata ................................................[done]
Building repository 'Non-OSS Repository' cache .....................................................[done]
Retrieving repository 'openSUSE-Leap-15.4-Oss' metadata ............................................[done]
Building repository 'openSUSE-Leap-15.4-Oss' cache .................................................[done]
Retrieving repository 'Update repository with updates from SUSE Linux Enterprise 15' metadata ......[done]
Building repository 'Update repository with updates from SUSE Linux Enterprise 15' cache ...........[done]
Retrieving repository 'Aktualisierungs-Repository (Nicht-Open-Source-Software)' metadata ...........[done]
Building repository 'Aktualisierungs-Repository (Nicht-Open-Source-Software)' cache ................[done]
Retrieving repository 'snappy' metadata ............................................................[done]
Building repository 'snappy' cache .................................................................[done]
Some of the repositories have not been refreshed because of an error.

Then I switched again to the repository administration of YaST and simply deleted the problematic repos. We will take care of their new URLs later.

Note: The fact that installed RPMs may stem from now-missing repos during the upgrade is compensated later by allowing for a “vendor change” – which here means a repository change. See below.

After having eliminated the problematic repos we get a successful refresh for the contents of the remaining 15.4 repositories on the command line:

mytuxlap:~ # zypper --releasever=15.4 refresh
Warning: Enforced setting: $releasever=15.4
Repository 'nVidia Graphics Drivers' is up to date.                                     
Repository 'Packman Repository' is up to date.                                          
Repository 'mozilla' is up to date.                                                     
Repository 'Libdvdcss Repository' is up to date.                                        
Repository 'Update repository of openSUSE Backports' is up to date.                     
Repository 'Non-OSS Repository' is up to date.                                          
Repository 'openSUSE-Leap-15.4-Oss' is up to date.                                      
Repository 'Update repository with updates from SUSE Linux Enterprise 15' is up to date.
Repository 'Aktualisierungs-Repository (Nicht-Open-Source-Software)' is up to date.     
Repository 'snappy' is up to date.                                                      
All repositories have been refreshed.

Download the RPMs without applying them yet

The next step is to download the RPMs from the Leap 15.4 repos and save them in a cache for the later upgrade process. On a TTY or in a root terminal window:

mytuxlap:~ #  zypper --releasever=15.4 dup --download-only --allow-vendor-change

The option “--download-only” avoids the immediate installation of the new 15.4 RPMs. Also note the option “--allow-vendor-change”: If an RPM cannot be replaced from its original repository, a substitute from one of the other major repositories will be used – if one is found.

Agree to the displayed RPM setup and the license conditions. Some 5 to 10 minutes later, after everything has been downloaded, we must deactivate the graphical desktop.
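The downloaded packages land in zypper’s package cache below “/var/cache/zypp/packages/”. If you want to verify that the download really happened, check the size of the cache:

mytuxlap:~ # du -sh /var/cache/zypp/packages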

Perform the Upgrade on an ASCII terminal (TTY)

On a system with both an integrated Intel card and a dedicated Nvidia card you may first want to decide which card’s driver should be loaded during the upgrade. You may use Opensuse’s Prime-Select applet to switch to Intel on your desktop. Then log out, log in again and check whether the Nvidia driver is no longer active.

Personally, I just kept the Nvidia card and the respective driver running. The resulting small problems were easy to overcome; see below.

mytuxlap:~ # lsmod | grep nvidia
nvidia_drm             69632  5
nvidia_modeset       1204224  6 nvidia_drm
nvidia              35512320  281 nvidia_modeset
drm_kms_helper        303104  2 nvidia_drm,i915
drm                   634880  10 drm_kms_helper,nvidia,nvidia_drm,i915,ttm
mytuxlap:~ #  

Important: Log out of the graphical desktop now to perform the upgrade.

Move to an ASCII terminal (e.g. via Ctrl-Alt-F1). Log in there as root. Type in “init 3” to stop your running X- or Wayland-server. Then start the real upgrade and the respective RPM installation via “zypper --no-refresh --releasever=15.4 dup --allow-vendor-change”:

mytuxlap:~ # init 3 
mytuxlap:~ # zypper --no-refresh --releasever=15.4 dup --allow-vendor-change

You must again confirm the RPM configuration and the license conditions. Depending on your previous configuration, several thousand packages will then be installed over the next 10 minutes or so from the preloaded and cached RPMs.

After all required RPMs have been installed just reboot by typing “init 6” on the command line.

My Leap 15.4 situation after reboot

In my case the system’s behavior after reboot was a bit strange.

The good news is:

I experienced no problems with LUKS 2, grub2, initramfs and the second phase of the startup during which all of my other LUKS2-encrypted LVM volumes were decrypted, checked and mounted.

Off topic: Leap uses initramfs, but stores it at /boot/initrd.

The whole startup process worked like before: I get asked for the LUKS2 decryption key directly after starting the boot process, then the graphical grub2 menu comes up and I can start the primary phase of the boot process based on initramfs. In my installation, due to security precautions, I was asked to provide the decryption key once again before the second boot phase on the real root-filesystem started. (Off topic: There are configuration tricks to circumvent the 2nd request for the LUKS2 key, but my personal opinion is that being asked a second time enhances security a bit. I cannot go into the related details of a LUKS2 configuration here.)

The bad news is:
The behavior of the Optimus environment was not consistent. Although the Nvidia RPMs had been switched to those from the Nvidia community repository for Leap 15.4, after the reboot the Intel i915 driver was loaded – and I did not manage to activate the Nvidia driver. Also, bbswitch interfered with my trials and shut down the Nvidia card:

The warm reboot directly after the upgrade seemed to work without major error messages (with the exception of an expected VMware related error; see below). The startup process eventually led to the graphical login screen of sddm.
After login the applet for Prime-Select told me that Nvidia was active.

However, after shutting the laptop down completely and starting it via a cold boot, I saw that the laptop’s LED signalling the activation of Nvidia was off (more precisely: showing a blue instead of a red color). The Intel driver i915 was loaded with the start of the sddm login screen. Afterward the X11-KDE/Plasma combination actually worked perfectly with it – as did the combination of Wayland and KDE Plasma; see below.

But at least for work with Blender I do need an active Nvidia card on the desktop. So, how to get it running?

Optimus – and a small problem with the Nvidia card

When I switched to a TTY and issued “init 3”, I could actually activate the Nvidia card via

mytuxlap:~ # tee /proc/acpi/bbswitch <<< ON

And I could also load the Nvidia driver by

mytuxlap:~ # modprobe nvidia 

In addition

mytuxlap:~ # prime-select nvidia 

seemed to be accepted by the system.

However, when I afterward wanted to start the graphical desktop again via “init 5”, the Nvidia card was directly deactivated again – and the Nvidia driver, therefore, could not work or be reloaded.

What a stupid situation! Obviously, the configuration of bbswitch had not been aligned correctly with prime-select and the Nvidia driver during the upgrade.

Solution
In the end the solution was simple: I switched to a TTY, issued “init 3”, activated the Nvidia card, loaded the present driver and used the ASCII version of YaST (not the graphical yast2) to reinstall (= update unconditionally) the Nvidia drivers from the Nvidia repository.

I had to pick the G05 drivers as my graphics card is rather old. Note that the driver version 470 is also relatively old and has been reported to have some problems with Wayland.
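If you prefer the command line over YaST, a forced reinstallation might look like the following sketch. The package names correspond to the usual G05 driver series in the Nvidia community repository – please verify them on your system first (e.g. via “zypper se nvidia”):

mytuxlap:~ # zypper install --force x11-video-nvidiaG05 nvidia-gfxG05-kmp-default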

After a reboot everything already worked as expected:
The Nvidia card was activated from the start and used for the graphical desktop afterwards. And I could use the Prime-Select applet to switch to the Intel driver, with a subsequent logout from the KDE desktop and a re-login. With Intel active the Nvidia card got deactivated – which is very reasonable as it reduces the power consumption and heat generation of the laptop.

You may also check whether things are already OK after a re-installation of the Nvidia drivers. The probably decisive point is that during the reinstallation mkinitrd is started in the background and dracut is forced to re-create the initramfs – this time with a loaded Nvidia driver.

If things still do not work in your case: Check that you have blacklisted the Nouveau driver in the file “/etc/modprobe.d/50-blacklist.conf” and/or “/etc/modprobe.d/nvidia-default.conf” with the entries

blacklist nouveau
options nouveau modeset=0

Then stop the graphical target again: Go to a terminal (Ctrl-Alt-F1), use “init 3” and try

mytuxlap:~ # init 3 
mytuxlap:~ # tee /proc/acpi/bbswitch <<< ON
mytuxlap:~ # modprobe nvidia

This should work. Then

mytuxlap:~ # mkinitrd

Then reboot. On the graphical desktop (probably still using the Intel driver) open a root terminal window. Try

mytuxlap:~ # prime-select nvidia

Log out from the graphical desktop, watch the laptop LED indicating the activation of the Nvidia card (should now show that Nvidia is on), log in and check that the Nvidia driver was loaded:

mytuxlap:~ # lsmod | grep nvidia 

This should give you something like:

mytuxlap:~ # lsmod | grep nvidia
nvidia_drm             69632  7
nvidia_modeset       1204224  16 nvidia_drm
nvidia_uvm           1138688  0
nvidia              35512320  980 nvidia_uvm,nvidia_modeset
drm_kms_helper        303104  2 nvidia_drm,i915
drm                   634880  12 drm_kms_helper,nvidia,nvidia_drm,i915,ttm

Then test the reversion to the Intel driver via Opensuse’s prime-select applet. It should work now.

No cube animation for switching virtual desktops on KDE any more!

I had a brief look at other things on my new Leap 15.4 installation. Regarding KDE on Xorg the only thing I could complain about on Leap 15.4 was that the rotating cube animation for switching between virtual desktops was gone. This is due to decisions of the KDE people, so Opensuse is NOT to blame for it. Personally, I think the loss of the animation is a pity, but it does not hinder productivity, either. So, no big thing …

Wayland with KDE 5.24

A switch of the display server from Xorg to Wayland is a major step. I had been reluctant to use Wayland with Leap 15.2 and 15.3. Kernel, KDE and the Nvidia driver – all of these components must support Wayland. Unfortunately, Nvidia has for years been a major hindrance in the support process – in contrast to Intel or AMD. So, I was a bit skeptical regarding Wayland, KDE/Plasma and Nvidia’s 470 driver on my old graphics card.

Positive results: KDE 5 started well. The startup of the desktop took longer than with Xorg but completed successfully. Afterwards: No flickering of KDE, no problems with switching between virtual desktops or with 3D desktop animations. Glxspheres worked. No problems with new windows of browsers like Firefox or Chromium – as had previously been reported by others.

Best of all: My Flatpak installation of Blender 3.3 worked very well.

Negative results: Nvidia-settings 470 did not work. Also, 3D animation effects like wobbly windows appeared to have a slightly better performance on Xorg. After a session break (and the display of a protection screen with the option to re-login) a return to the KDE session led to a strong white flickering of the background. But this could be stopped by a mouse click on the flickering background.

All in all: Even on my relatively old laptop I can productively use Wayland with Opensuse Leap 15.4, KDE/Plasma 5.24 and the Nvidia 470 driver.

Leap 15.4 repositories with different locations than for 15.3

In general we can find available repositories at “https://download.opensuse.org”. The graphics repository has found a new location at

https://download.opensuse.org/repositories/graphics/15.4/,

the XFCE at

https://download.opensuse.org/repositories/X11:/xfce/15.4/.

Use YaST to add these repositories back to your list of active Leap 15.4 repos.
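Alternatively you can add them on the command line with zypper; the aliases at the end of the following commands are my own choice:

mytuxlap:~ # zypper ar -f https://download.opensuse.org/repositories/graphics/15.4/ graphics
mytuxlap:~ # zypper ar -f https://download.opensuse.org/repositories/X11:/xfce/15.4/ xfce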

Still no current Blender version on Leap 15.4

Note: Blender in a version above 2.82 is still not available for Leap 15.4 – which is a major shame. The glibc version is just too old for Blender 3.x. The only way out of this dilemma is a Flatpak or Snap based installation of Blender 3.4.
Such installations work, however, very well on Leap 15.4 – both with Xorg and Wayland.

Multimedia: Change system packages to RPMs of the packman repository

A broad range of multimedia tools and codecs requires the Packman repositories. What I typically do is add a mirror of the Packman repository, e.g.

https://ftp.gwdg.de/pub/linux/misc/packman/suse/openSUSE_Leap_${releasever}/    

to the list of repositories, let YaST2 display the contents of this repository and then click on the link “Switch system packages to the versions in this repository (Packman repository)”.
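The same switch of the system packages can be done with zypper. The repo alias “packman” below is an assumption – use whatever alias you assigned when adding the mirror:

mytuxlap:~ # zypper dup --from packman --allow-vendor-change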

I tested some typical multimedia applications I use: Pulseaudio, PA equalizer, Clementine, VLC and TV channels on browsers. No problems.

What about Python?

My last development work on a desktop machine was done with Python 3.9, Jupyter notebooks and Eclipse. Leap 15.4 offers Python 3.6 as the standard. However, you can install either Python 3.9 OR Python 3.10 in parallel – the OR is unfortunately exclusive. (The current Python version is 3.11.)
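A hedged installation sketch – “python310” is the package name I would expect in the Leap 15.4 OSS repo, but verify it with a search first:

mytuxlap:~ # zypper se python310
mytuxlap:~ # zypper in python310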

I think I can live with Python 3.10 for some time. So, I tested the installation of a virtual Python environment on Leap 15.4. The key to doing so is to move to a directory where you want to implement your virtual environment – and to install the relevant interpreter plus related basic directories there. The following commands show an example:

myself@mytuxlap:~> mkdir /projekte/GIT/ml_5
myself@mytuxlap:~> cd /projekte/GIT/
myself@mytuxlap:/projekte/GIT> virtualenv -p /usr/bin/python3.10 ml_5 
myself@mytuxlap:/projekte/GIT> cd ml_5
myself@mytuxlap:/projekte/GIT/ml_5> source bin/activate
(ml_5) myself@mytuxlap:/projekte/GIT/ml_5> pip install --upgrade pip
Collecting pip
  Using cached pip-23.0.1-py3-none-any.whl (2.1 MB)
Installing collected packages: pip
  Attempting uninstall: pip
    Found existing installation: pip 20.2
    Uninstalling pip-20.2:
      Successfully uninstalled pip-20.2
Successfully installed pip-23.0.1
(ml_5)  myself@mytuxlap:/projekte/GIT/ml_5> pip install jupyter      
Collecting jupyter
  Using cached jupyter-1.0.0-py2.py3-none-any.whl (2.7 kB)
...
...
(ml_5) myself@mytuxlap:/projekte/GIT/ml_5> jupyter-notebook 
...

This all works – but there are some (expected) errors regarding the jupyter_nbextensions_configurator. This is all well known – and so is what has to be done to configure the jupyter_nbextensions correctly. This is not a Leap 15.4 issue.
Anyway, a Jupyter notebook will start in your default browser and you can start working with Python 3.10. I systematically added the needed libs and modules afterward with the help of pip. So, no major problem with Python 3.10 on Leap 15.4!

What about PHP?

Well, Leap 15.4 offers an installation of either PHP7 or PHP8.0. I picked PHP8. But how does PHP 8 work together with a standard Apache2 installation on Leap 15.4?

Answer: It depends!

From the Apache point of view we would like to distribute the web server’s load across multiple Apache processes and threads with a minimum consumption of RAM. Therefore, we would like to run Apache with the event-based or another threaded MPM module. The problem is that this does not work with the standard (non-threadsafe) PHP module. This problem already existed for PHP versions lower than PHP 8.

You run into an error message like:

Apache is running a threaded MPM, but your PHP Module is not compiled to be threadsafe. You need to recompile PHP.

There are two solutions to this problem:

  • Switch to a prefork configuration of Apache 2.4 – and ignore the resulting RAM consumption
  • Use FastCGI and php8-fpm.

You also have to decide which method you want to use for changing the Apache2 configuration on Leap 15.4: You can remove RPMs or use a2enmod and a2dismod, respectively. Relevant commands in our case would be “a2dismod mpm_worker”, “a2dismod mpm_event” and “a2enmod mpm_prefork” – as shown in the sketch below.
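As a sketch, the full module based switch to prefork would then look like this (assuming the RPM apache2-prefork is installed; see below):

mytuxlap:~ # a2dismod mpm_worker
mytuxlap:~ # a2dismod mpm_event
mytuxlap:~ # a2enmod mpm_prefork
mytuxlap:~ # systemctl restart apache2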

The easiest way, however, is to remove the RPMs “apache2-event” and/or “apache2-worker”, depending on what kind of configuration you have installed. I have no time to discuss the specific differences between these types of multi-processing setups of Apache2 here. To be able to activate prefork, the RPM “apache2-prefork” must be installed. A reasonable RPM selection for a prefork variant would then comprise e.g. apache2, apache2-prefork, apache2-utils and apache2-mod_php8.

With this RPM selection you can just start Apache2, enable further modules and list the active ones:

mytuxlap:~ # rcapache2 restart
mytuxlap:~ # a2enmod rewrite 
mytuxlap:~ # a2enmod -l
actions alias auth_basic authn_core authn_file authz_host authz_groupfile authz_core authz_user autoindex cgi dir env expires include log_config mime negotiation setenvif ssl socache_shmcb userdir reqtimeout php8 version mpm_prefork rewrite
mytuxlap:~ # 

I.e.: For the simple prefork solution we can either disable the modules mpm_worker and/or mpm_event and activate “mpm_prefork”, OR remove/install the related RPMs.

But there is also another way to get PHP 8 running – one based on a FastCGI configuration of Apache2 together with the installation of a service for PHP 8, namely php8-fpm. Personally, I have not yet tried a FastCGI / php8-fpm combination on Leap 15.4. But I intend to describe the setup soon in this blog. In the meantime, please check the information at the following links. It is given for other operating systems, but an adaption is straightforward.

Note: php-fpm is a service which must be started on your system via systemd’s command “systemctl”.
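A hedged sketch – as far as I know the php8-fpm RPM installs a unit named php-fpm.service on Leap, but verify the name first (e.g. via “systemctl list-unit-files | grep fpm”):

mytuxlap:~ # systemctl enable --now php-fpm.service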

Digital Ocean on PHP-fpm and Apache2 for Ubuntu 18
Digital Ocean on PHP and BSD
Digital Ocean on PHP-fpm and Apache2 for Ubuntu 20

VMware and KVM

KVM works on Leap 15.4 without problems. I could directly start an existing qemu-virtualized Debian installation.

VMware WS also works on Leap 15.4. But you must have a version > WS 16.2.3 available. I updated to WS 16.2.5 by installing the bundle “VMware-Workstation-Full-16.2.5-20904516.x86_64.bundle”. Afterward I could start both VMware-virtualized Windows 10 and Win 7 installations on a Leap 15.4 KDE desktop without any problems.

Conclusion

The Upgrade from Opensuse Leap 15.3 to Leap 15.4 (with a KDE desktop) works without major problems even on older laptops with old Nvidia mobile graphics cards. It’s a bit irritating that some Leap repositories got a new location with Leap 15.4 – but this can be fixed after the Upgrade.

A big positive surprise was that KDE 5.24 worked with Wayland even on my old Nvidia GT 645M card. A current Blender version MUST, unfortunately, be installed via Flatpak. Python 3.10 and PHP 8.0 are supported. KVM and VMware WS 16.2.5 pose no problems on Leap 15.4.

Happy working with Leap 15.4!

Links

Wayland vs. Xorg
https://linuxiac.com/xorg-x11-wayland-linux-display-servers-and-protocols-explained/

Apache2 and PHP8
https://bbs.archlinux.org/viewtopic.php?id=178124


Ceterum censeo: The worst fascist, war criminal and killer living today is the Putler. He must be isolated at all levels, be denazified and sooner than later be imprisoned. A president who orders the systematic destruction of civilian infrastructure must be fought and defeated because he is a permanent danger to basic principles of humanity. He must be brought to justice in front of an international court. Long live a free and democratic Ukraine!


Variational Autoencoder with Tensorflow – XIV – Change of variational model parameters at inference time

In my last post of this series I compared a Variational Autoencoder [VAE] with only a tiny amount of KL-loss to a standard Autoencoder [AE]. See the links at

Variational Autoencoder with Tensorflow – XIII – Does a VAE with tiny KL-loss behave like an AE? And if so, why?

for more information and preparatory posts.

Both the Keras models, for the VAE and the AE, were trained on the CelebA dataset of images of human heads. We found a tight similarity regarding the clustering of predicted data points for the training and test data in the latent space. In addition, the VAE with tiny KL-loss failed to reconstruct reasonable human face images from arbitrarily chosen points in the latent space – just as a standard AE does. In forthcoming posts we will continue to study the relation between VAEs and AEs.

But in this post I want to briefly point out an interesting technical problem which may arise when you start to test predictions for certain data samples after a training phase. Your Encoder or Decoder models may include parameters which you want to experiment with when predicting results for interesting input data. This raises the question whether we can vary such parameters at inference time. Actually, this was not quite as easy as it seemed when I started with the respective experiments. To perform them I had to learn about two aspects of Keras models I had not been aware of before.

How to switch off the z-point variation at inference time?

In my particular case the starting point was the following consideration:

At inference time there is no real need for using the logvar-based variation around mu-values predicted by the Encoder.

The variation of z-point values in VAEs is done by adding a statistical term to the mu-values. The added term is based on a log_var value multiplied by a statistically fluctuating factor “epsilon”, which comes from a normal Gaussian distribution around zero. mu, therefore, is the center of a distribution which a specific input tensor is mapped to in latent space over consecutive predictions. The mu- and log_var-values depend on the weights of two dense layers of the Encoder and thus indirectly on the optimization during training.

But while the variation is essential during training, one need not regard it as necessary for predictions. During inference we may, in some experiments, have good reasons to refer only to the central mu-value when predicting a z-point in latent space. For test and analysis purposes it could be interesting to omit the log_var contribution.

The question then is: How can we switch off the log_var component for the Encoder’s predictions, i.e. for predictions of our Keras based Encoder model?

One idea is to include variables of a Python class hosting the Keras models for the Encoder, Decoder and the composed VAE in the function for the calculation of z-point vectors.

The mu-, logvar and sampling layers of the VAE’s Encoder model encapsulated in a Python class

During this post series we have encapsulated the code for the Encoder, Decoder and resulting VAE models in a Python class. Remember that the Encoder produced its output, namely z-points in the latent space, via two “dense” Keras layers and a sampling layer (based on a Keras Lambda-layer). The dense layers followed a series of convolutional layers and gave us mu and log_var values. The Lambda-layer produced the eventual z-point-vector including the variation. In my case the code fragments for the layers look similar to the following ones:

# .... Layer model of the Encoder part 
...
...     # Definition of an input layer and multiple Conv2D layers 
...
        # "Variational" part - 2 Dense layers for a statistical distribution of z-points  
        self.mu      = Dense(self.z_dim, name='mu')(x)
        self.log_var = Dense(self.z_dim, name='log_var')(x)
        # Layer to provide a z_point in the Latent Space for each sample of the batch 
        self._encoder_output = Lambda(z_point_sampling, name='encoder_output')([self.mu, self.log_var])
...
        # The Encoder model 
        self.encoder = Model(inputs=self._encoder_input, outputs=[self._encoder_output, self.mu, self.log_var], name="encoder")
...

The “self” refers to a class “MyVariationalAutoencoder” comprising the Encoder’s, Decoder’s and the VAE’s model and layer structures. See for details and explained code fragments of the class e.g. the posts around Variational Autoencoder with Tensorflow 2.8 – X – VAE application to CelebA images.

The sampling is in my case done by a function “z_point_sampling”:

        def z_point_sampling(args):
            '''
            A point in the latent space is calculated statistically 
            around an optimized mu for each sample 
            '''
            mu, log_var = args # Note: These are 1D tensors !
            epsilon = B.random_normal(shape=B.shape(mu), mean=0., stddev=1.)
            return mu + B.exp(log_var / 2.) * epsilon * self.enc_v_eps_factor

You see that this function uses a class member variable “self.enc_v_eps_factor”.

Switch the variation with log_var on and off for predictions?

Our objective is to switch the log_var contribution on or off for certain input images or batches of such images fed into the Encoder. For this purpose we could in principle use the variable “self.enc_v_eps_factor” as a kind of boolean switch with values of either 0.0 or 1.0. To set the variable I had defined two class methods:

    def set_enc_to_predict(self):
        self.enc_v_eps_factor = 0.0 
    
    def set_enc_to_train(self):  
        self.enc_v_eps_factor = 1.0 

The basic idea was that the sampling function would pick up the value of enc_v_eps_factor given at the run-time of a prediction, i.e. at inference time. This assumption was, however, wrong – at least in a certain sense.

Is a class variable change impacting a layer output noted during consecutive predictions of a Keras model?

Let us assume that we have instantiated our class and assigned it to a Python variable MyVae. Let us further assume that the comprised Keras models are referenced by the variables

  • MyVae.encoder (for the Encoder part),
  • MyVae.decoder (for the Decoder part)
  • and MyVae.model (for the full VAE-model).

We do not care about further details of the VAE (consisting of the Encoder, the Decoder and GradientTape based cost control). But we should not forget that all the models’ layers and their weights determine the cost function’s derivatives and are therefore targets of the optimization performed during training. All factors determining gradient and value calculations with given weights are encoded with the compilation of a Keras model – for training purposes [using model.fit() with a Keras model], but also for predictions! Without a compiled Keras model we cannot use model.predict().

This means: As soon as you have a compiled Keras model and load the weight values saved after a sufficient number of training epochs, almost everything is settled for inference and predictions – including the present value of self.enc_v_eps_factor. At compile time?

Well, thinking about it a bit more from a developer perspective tells us:
The compilation would in principle not prevent the use of a changed variable at the run-time of a prediction. But on the other hand we also have the feeling that Keras must do something to make training (which also requires predictions in the sense of a forward pass) and later raw predictions for batches at inference time pretty fast. Intermediate functionality changes would hamper performance – if only for the reason that one would have to watch out for such changes.

So, it is natural to assume that Keras would keep any factors in the Lambda-layer taken from a class variable constant after compilation and during training or inference, i.e. predictions.

If this assumption were true, then a chain of actions AFTER a training of a VAE model (!) like

Define a Keras based VAE-model with a sampling layer and a factor enc_v_eps_factor = 1   =>   compile it (including the sub-models for the Encoder and Decoder)   =>   load weight parameters which had been saved after training   =>   switch the value of the class variable enc_v_eps_factor to 0.0   =>   Load an image or image batch for prediction

would probably NOT work as expected. To be honest: This is “wisdom” derived after experiments which did not give me the naively expected results.

Indeed, some first simple experiments showed: The value of enc_v_eps_factor (e.g. enc_v_eps_factor = 1) which was given at compile-time was used during all following prediction calculations, e.g. for a particular image in tensorial form. So, a command sequence like

MyVae.set_enc_to_train()
MyVae.encoder.compile() 
...
# Load weights from a set path 
MyVae.model.load_weights(path_weights)
...

z_point, mu, log_var = MyVae.encoder.predict(img1)
print(z_point, mu)
MyVae.set_enc_to_predict()
z_point, mu, log_var = MyVae.encoder.predict(img1)

did not show the effect of the switch: the statistical variation remained active in both predictions. Note that I did not change the value of enc_v_eps_factor between compile time and the first call for a prediction.

Let us look at the example in more detail.

A concrete example

After a full training of my VAE on CelebA for 24 epochs I checked for a maximum log_var-value. Despite the tiny KL-loss, non-negligible values may indeed occur for special z-vector components of certain images. And indeed such a value occurred for a very special (singular) image with two diagonal black border stripes to the left and right of the photographed person’s head. I do not show the image due to digital rights concerns. But let us look at the predicted z, mu and log_var values (for a specific z-point vector component) of this image. Before I compiled the VAE model I had set enc_v_eps_factor = 1.0 :

# Set the learning rate and COMPILE the model 
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
learning_rate = 0.0005

# The following is only required for compatibility reasons
b_old_optimizer = True     

# Set enc_v_eps_factor to 1.0
MyVae.set_enc_to_train()

# Separate Encoder compilation
# - does not harm the compilation of the full VAE-model, but is useful to avoid later trace warnings after changes 
MyVae.encoder.compile()

# Compilation of the full VAE model (Encoder, Decoder, cost functions used by GradientTape) 
MyVae.compile_myVAE(learning_rate=learning_rate, b_old_optimizer = b_old_optimizer )
...

# Load weights from a set path 
MyVae.model.load_weights(path_weights)
...
# start predictions
...

with

    def compile_myVAE(self, learning_rate, b_old_optimizer = False):
        # Version 1.1 of 221212
        # Forced to special handling of the optimizer for data resulting from training before TF 11.2 , due to warnings:  
        #      ValueError: You are trying to restore a checkpoint from a legacy Keras optimizer into a v2.11+ Optimizer, 
        #      which can cause errors. Please update the optimizer referenced in your code to be an instance 
        #      of `tf.keras.optimizers.legacy.Optimizer`, e.g.: `tf.keras.optimizers.legacy.Adam`.

        # Optimizer handling
        # ~~~~~~~~~ 
        if b_old_optimizer: 
            optimizer = tf.keras.optimizers.legacy.Adam(learning_rate=learning_rate)
        else:    
            optimizer = Adam(learning_rate=learning_rate)
        ....

        # Solution type with train_step() and GradientTape()  
        if self.solution_type == 3:
            self.model.compile(optimizer=optimizer)

Details of the compilation are not important – though you may be interested in the fact that training data saved with a Keras module version < 2.11 of TF 2 requires a legacy version of the optimizer when later using a module version ≥ 2.11 (corresponding to TF V2.11 and above).

However, the really important point is that the compilation is done given a certain value of enc_v_eps_factor = 1.

Then we load the image in the form of a prepared training batch with just one element, and provide it to the predict() function of the Keras model for the Encoder. We perform two predictions, and ahead of the second one we change the value of enc_v_eps_factor to 0.0:

# Choose an image 
j = 123223
img = x_train[j] # this has already a tensor compatible format 
img_list = []
img_list.append(img)
tf_img = tf.convert_to_tensor(img_list)

# Encoder prediction 
z_points, mu, logvar  = MyVae.encoder.predict(tf_img)

print(z_points[0][230])
print(mu[0][230])
print(logvar[0][230])

# !!!! Set enc_v_eps_factor to 0.0 !!!!
MyVae.set_enc_to_predict()

# New Encoder prediction 
z_points, mu, logvar  = MyVae.encoder.predict(tf_img)

print()
print()

print(z_points[0][230])
print(mu[0][230])
print(logvar[0][230])

The result is:

...
2.637196
-0.141142
3.3761873
...
-0.2085279
-0.141142
3.3761873

The z_point values depend on the statistical variation of the factor epsilon (Gaussian statistics; see the sampling function above) – which is why the two calls produce different z_point values for identical mu and log_var.

But the central point is not the deviation between the two prediction calls. The real point is that we had used MyVae.set_enc_to_predict() ahead of the second prediction and, yet, the predicted z_point value and the mu value for the special z-point-component (230, out of 256 components) were NOT identical. I.e. the variable value enc_v_eps_factor = 1.0, which we had set before the compilation, was used during both of our prediction calculations!

Can we just recompile between different calls to model.predict() ?

The experiment described above seems to indicate that the value of the class variable enc_v_eps_factor given at compile time is used during all consecutive predictions. We could, of course, enforce a zero variation for all predictions by using MyVae.set_enc_to_predict() ahead of the compilation of the Encoder model. But this would give us no flexibility to switch the log_var contribution off ahead of predictions for some special images and then turn it on again for other images.

But the solution is simple – if we do not need to do this switching permanently: We just recompile the Encoder model!

Compilation does not take much time for Encoder models with only a few (convolutional and dense) layers. Let us test this by modifying the code above:

# Choose an image 
j = 123223
img = x_train[j] # this has already a tensor compatible format 
img_list = []
img_list.append(img)
tf_img = tf.convert_to_tensor(img_list)

# Set enc_v_eps_factor to 0.0
MyVae.set_enc_to_predict()
# !!!!
MyVae.encoder.compile() 

# Encoder prediction 
z_points, mu, logvar  = MyVae.encoder.predict(tf_img)
# Decoder prediction - just for fun 
reco_list = MyVae.decoder.predict(z_points) # just for fun 
print(z_points[0][230])
print(mu[0][230])
print(logvar[0][230])

print() 
print()

# Set enc_v_eps_factor back to 1.0
MyVae.set_enc_to_train()
MyVae.encoder.compile()

# New Encoder prediction 
z_points, mu, logvar  = MyVae.encoder.predict(tf_img)

print(z_points[0][230])
print(mu[0][230])
print(logvar[0][230])

We get

Shape of img_list =  (1, 96, 96, 3)
eps_fact =  0.0
1/1 [==============================] - 0s 280ms/step
1/1 [==============================] - 0s 18ms/step
Shape of reco_list =  (1, 96, 96, 3)

-0.141142
-0.141142
3.3761873


eps_fact =  1.0
1/1 [==============================] - 0s 288ms/step

-0.63676023
-0.141142
3.3761873

This is exactly what we want!

The function for the prediction step of a Keras model is cached at inference time …

The example above gave us the impression that it could be the compilation of a model which “settles” all of the functionality used during predictions, i.e. at inference time. Actually, this is not quite true.

The documentation on a Keras model helped me to get a better understanding. Near the section on the method “predict()” we find some other interesting functions. A look at the remarks on “predict_step()” reveals (quotation):

The logic for one inference step.

This method can be overridden to support custom inference logic. This method is called by Model.make_predict_function.

This method should contain the mathematical logic for one step of inference. This typically includes the forward pass.

This leads us to the function “make_predict_function()” for Keras models. And there we find the following interesting remarks – I quote:

This method can be overridden to support custom inference logic. This method is called by Model.predict and Model.predict_on_batch.

Typically, this method directly controls tf.function and tf.distribute.Strategy settings, and delegates the actual evaluation logic to Model.predict_step.

This function is cached the first time Model.predict or Model.predict_on_batch is called. The cache is cleared whenever Model.compile is called. You can skip the cache and generate again the function with force=True.

Ah! The function predict_step() normally covers the forward pass through the network, and “make_predict_function()” caches the resulting (function) object at the first invocation of model.predict(). And the respective cache is not cleared automatically.

So, what really may have hindered my changes of the sampling functionality at inference time is a cache filled at the first call to encoder.predict()!

Let us test this!

Changing the sampling parameters after compilation, but before the first call of encoder.predict()

If our suspicion is right we should be able to set up the model from scratch again, compile it, use MyVae.set_enc_to_predict() and afterward call MyVae.encoder.predict() – and get the same values for mu and z_point.

So we do something like

# Build encoder according to layer parameters 
MyVae._build_enc()
# Build decoder according to layer parameters 
MyVae._build_dec()
# Build the VAE-model 
MyVae._build_VAE()
...

# Set variable to 1.0
MyVae.set_enc_to_train()

# Compile 
learning_rate = 0.0005
b_old_optimizer = True     
MyVae.compile_myVAE(learning_rate=learning_rate, b_old_optimizer = b_old_optimizer )
MyVae.encoder.compile()   # used to prevent retracing delays - when later changing encoder variables 

...
# Load weights from a set path 
MyVae.model.load_weights(path_weights)
...

# preparation of the selected img. 
...
...

MyVae.set_enc_to_predict()
print("eps_fact = ", MyVae.enc_v_eps_factor)
# Note: NO recompilation is done !

# First prediction 
z_points, mu, logvar  = MyVae.encoder.predict(tf_img)
print()
print(z_points[0][230])
print(mu[0][230])
print(logvar[0][230])
..

Note that the change of enc_v_eps_factor happens ahead of the first call of predict(). And, indeed:

Shape of img_list =  (1, 96, 96, 3)
eps_fact =  0.0
...
-0.141142
-0.141142
3.3761873

Use make_predict_function(force=True) to clear and refill the cache for predict_step() and its forward pass functionality

The other option the documentation indicates is to use the function make_predict_function(force=True).
This leads to yet another experiment:

# img preparation 
....

# Set enc_v_eps_factor to 1.0
MyVae.set_enc_to_train()
MyVae.encoder.compile() 
print("eps_fact = ", MyVae.enc_v_eps_factor)

# Encoder prediction 
z_points, mu, logvar  = MyVae.encoder.predict(tf_img)
print(z_points[0][230])
print(mu[0][230])
print(logvar[0][230])

print() 
print()

# Set enc_v_eps_factor to 0.0
MyVae.set_enc_to_predict()
# !!!!
MyVae.encoder.make_predict_function(
    force=True
)
print("eps_fact = ", MyVae.enc_v_eps_factor)

# Encoder prediction 
z_points, mu, logvar  = MyVae.encoder.predict(tf_img)
print(z_points[0][230])
print(mu[0][230])
print(logvar[0][230])

We get

...
eps_fact =  1.0
1/1 [==============================] - 0s 287ms/step

-5.5451365
-0.141142
3.3761873


eps_fact =  0.0
1/1 [==============================] - 0s 271ms/step

-0.141142
-0.141142
3.3761873

Yes, exactly as expected. This again shows us that it is the cache which counts after the first call of model.predict() – and not the compilation of the Keras model (for the Encoder) !

Other approaches?

The general question of changing parameters at inference time also triggers the question whether we might be able to deliver parameters to the function model.predict() and pass them on to customized variants of predict_step(). I found a similar question on Stack Overflow:
Passing parameters to model.predict in tf.keras.Model

However, the example there was rather special – and I did not apply the lines of thought explained there to my own case. But the information given in the answer may still be useful for other readers.
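Another approach – which I have not tested within the class discussed above – would be to store the factor in a tf.Variable instead of a plain Python float. As far as I know, a traced Keras prediction function reads the current value of a tf.Variable at execution time, whereas a plain Python number gets frozen into the trace as a constant. A minimal, hedged sketch along the lines of the sampling function shown earlier:

import tensorflow as tf
from tensorflow.keras import backend as B

# A tf.Variable instead of a plain Python float: the traced 
# prediction function should read its current value at execution 
# time, so no recompilation or cache clearing would be required. 
enc_v_eps_factor = tf.Variable(1.0, trainable=False, dtype=tf.float32)

def z_point_sampling(args):
    '''
    Sample a z-point around mu; the variation can be switched 
    off by assigning 0.0 to enc_v_eps_factor
    '''
    mu, log_var = args
    epsilon = B.random_normal(shape=B.shape(mu), mean=0., stddev=1.)
    return mu + B.exp(log_var / 2.) * epsilon * enc_v_eps_factor

# Switching at inference time would then reduce to:
# enc_v_eps_factor.assign(0.0)   # mu only 
# enc_v_eps_factor.assign(1.0)   # full statistical variation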

Conclusion

In this post we have seen that we can change parameters influencing the forward pass of a Keras model at inference time. We saw, however, that we have to clear and fill a cache to make the changes effective. This can be achieved by

  • either applying a recompilation of the model
  • or enforcing a clearance and refilling of the cache for the model’s function predict_step().

In the special case of a VAE this allows for deactivating and re-activating the logvar-dependent statistical variation of the z-points a specific image is mapped to by the Encoder model during predictions. This gives us the option to focus on the central mu-dependent position of certain images in the latent space during experiments at inference time.

In the next post of this series we shall have a closer look at the filamental structure of the latent space of a VAE with tiny KL loss in comparison to the z-space structure of a VAE with sufficiently high KL loss.


Ceterum censeo: The worst fascist, war criminal and killer living today is the Putler. He must be isolated at all levels, be denazified and sooner than later be imprisoned. Somebody who orders the systematic destruction of civilian infrastructure must be fought and defeated because he is a permanent danger to basic principles of humanity – not only in Europe. Long live a free and democratic Ukraine!


Opensuse Leap, X, xauth, flatpak vs. kate – deviating selection of magic cookie entries in .Xauthority – I

On an average Linux system Xorg’s X services or (X)Wayland services are fundamental building blocks for graphical user interfaces like KDE. We use these services every day without thinking much about security measures and other processes in the background. But sometimes strange events wake you up from the daily routine. This blog post was triggered by such an event – namely an odd and varying reaction of an X-client application to changes of the network name of a Linux host (running Opensuse Leap 15.3).

This post focuses on the standard cookie-based access mechanism of X-clients to local Xorg services on the very same Linux host. The basic idea is that the X-server grants access to its socket and services only if the X-client presents a fitting “secret” cookie. The required magic cookie value is defined by the X-server when it is started. The X-client gets information about X-related cookies from a user-specific file ~/.Xauthority, which may contain multiple entries. I will show that changing the name of the host, i.e. the hostname, may have an unexpected impact on the subsequent start of X-clients and that different X-clients may react in deviating ways to a hostname change.
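You can inspect the cookie entries in ~/.Xauthority with the xauth tool at any time. The command below is standard; the hostname is of course mine and the cookie value is replaced by a placeholder:

myself@mytuxlap:~> xauth list
mytuxlap/unix:0  MIT-MAGIC-COOKIE-1  <32-digit hex value>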

The ambiguity in the startup process of an X-client after a hostname change – even if the correct cookie is in principle available – is demonstrated by different reactions of a standard KDE application, namely kate, and a flatpak-based application, namely Blender. Flatpak is of interest as there are many complaints on the Internet regarding an unexplained denial of X-server access, especially on Opensuse systems.

We first look a bit into the basic settings for a standard MIT-MAGIC-COOKIE-1 based access mechanism. (For an in-depth introduction to X-server security readers will have to consult the specific documentation available from Xorg.) Afterward I will show that the rules for selecting a relevant cookie entry in the file .Xauthority deviate between the applications named above. The analysis will show that there is an intimate relation between these rules, the available sources of the hostname and the .Xauthority entries for various hosts. Hostname changes during running X-sessions may, therefore, have an impact on the start of graphical applications. An unexpected negative impact may even occur after a restart of the DisplayManager.

In this post we look at the effects of hostname changes during a running X-session, e.g. a KDE-session. In further posts I will look a bit deeper into sequences of actions a user may perform following a hostname change – such as a simple logout from and a new login into a graphical session or a full restart of systemd’s graphical target. Even then the consequences of a hostname change may create confusion. If and when I find the time I may also look into aspects of Wayland access in combination with KDE applications and flatpak.

What had happened? Denial and acceptance of Xorg-server access for a flatpak-based application after changing the host’s name …

Some days ago I came to a location where I had to use a friend’s WLAN router to get an Internet connection. On my laptop NetworkManager controls the WLAN access. NetworkManager has an option to submit a hostname to the router. As I do not want my standard hostname to be spread around, I first changed it via YaST during a running KDE session. Then I configured NetworkManager for the local WLAN and restarted the network. Access to the WLAN router and the Internet worked as expected. However, when I tried to start a flatpak based Blender installation I got the message

Invalid MIT-MAGIC-COOKIE-1 key Unable to open a display

This appeared to be strange because the flatpak Blender installation had worked flawlessly before the change of the hostname and before WLAN access. The question that hits you as a normal Linux user is: What do a hostname change and a network restart have to do with the start of a flatpak application in an already running KDE session?

Then I changed the hostname again to a different string – and could afterward start flatpak Blender without any X-authorization problem. Confused? I was.

Then I combined the change of the hostname with an intermediate restart of the display manager and/or a stop and restart of systemd’s “graphical target” – and got more disturbing results. So I thought: This is an interesting area worth looking into a bit more deeply.

The error message, of course, indicated problems with an access to the display offered by the running X-server. Therefore, I wanted answers to the following questions:

  • Why did the magic cookie fail during an already running X-session? Had the X-access conditions not been handled properly when I logged into my KDE session?
  • Why was the magic cookie information invalid the first time, but not during the second trial?
  • What impact does the (changed) hostname have on the cookie-based X-authorization mechanism?
  • Which tools can I use to check for xauth-related problems?
  • What exactly does flatpak send to the X-server when asking for access?
  • Which rules govern the cookie-based X-authorization mechanism for applications other than flatpak-based ones?

I had always assumed that X socket access would follow clear rules – independent of a specific X-client. But after some simple initial tests I started wondering whether the observed variations regarding X access had something to do with entries in the ~/.Xauthority file of the user account I had used to log into my KDE-session. And I also wondered whether a standard KDE application like “kate” would really react to hostname changes in the same way as a flatpak application.

Changes of ~/.Xauthority entries …

During a series of experiments I started to manipulate the contents of the ~/.Xauthority file of the user of a running KDE session. Whilst doing so I compared the reaction of a typical KDE application like kate with the behavior of a (sandboxed) flatpak Blender installation. And guess what: The reactions were indeed different.

You find the details of the experiments below. I have not yet analyzed the results with respect to potential security issues. But the variability in the selection and usage of .Xauthority entries by different applications appears a bit worrisome to me. In any case the deviating reactions have an impact on whether an X-client application starts at all after one or multiple hostname changes. This makes an automatic handling of certain X-client starts by user scripts a bit more difficult than expected.

“Transient” state of a Linux system regarding its hostname?

The basic problem is that a user can create a transient system status regarding a change of the static name of a host: A respective entry in the file /etc/hostname may differ from values which other resources of the running graphical desktop session may provide to the user or programs. Transient is a somewhat problematic term. E.g. the command “hostnamectl” may show a transient hostname for some seconds when you change the contents of /etc/hostname directly – until a (systemd) background process settles the status. So what exactly do I mean by “transient”?

By “transient” I mean that a change of the static hostname in /etc/hostname may not be reflected by environment variables containing hostname information, e.g. by the variables HOST and HOSTNAME. Conflicting hostname information may occur as long as we do not restart the display manager and/or restart the “graphical target” of systemd after a change of the hostname.
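A minimal shell check for such a transient state might look as follows – a sketch, assuming that the variables HOST and HOSTNAME were exported at login as in the transcripts below:

static="$(cat /etc/hostname)"
# Compare the static hostname with what the running session still carries
if [ "$static" != "$HOSTNAME" ] || [ "$static" != "$HOST" ]; then
    echo "Transient state: /etc/hostname says '$static', session says '$HOSTNAME'"
fi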

To my astonishment I had to learn that it is not self-evident what will happen in such transient situations regarding access authorization to an Xorg-server via a so-called “magic MIT-1 cookie”. This is partially due to the fact that the file “~/.Xauthority” may contain multiple entries. See below.

Objectives of this post

In this first post on the topics named above I will have a look at what happens

  • when we change the static hostname during a running X-session (e.g. a KDE session),
  • when we change entries in the ~/.Xauthority file during a running X-session, in particular with respect to hostname(s), and try to start an X-client afterwards.

The second point will help us to identify rules which X-client applications follow when choosing and picking up a cookie value among multiple entries in the file “~/.Xauthority”. We compare the behavior of “kate” as an example of a KDE application with the reaction of a sandboxed flatpak application (Blender). The astonishing result will be that the rules really differ. The rules for flatpak may prevent the start of a flatpak application although a valid entry may be present in .Xauthority.

Our starting point: A consistent system-status regarding the hostname in a LAN or WLAN environment

The whole problem of an invalid cookie started with a hostname change ahead of a WLAN access. Why is a defined hostname important? How do we change it on an Opensuse system? What are potential sources on the system regarding information about the present hostname? And what should a consistent situation at the beginning of experiments with hostname changes look like?

In a typical structured LAN/WLAN there are multiple hosts wanting to interact with servers on the Internet, but also with each other or with servers in (sub-) networks. As an administrator you may define separate sub-nets behind a common gateway. Members of a sub-net may be subject to routing and firewall restrictions regarding the communication with other hosts in the same or other sub-nets or with servers on the Internet. As we humans operate with hostnames rather than IP-addresses we may give hosts unique names in a defined (sub-) network, use DHCP to statically or dynamically assign IP-addresses (and maybe even hostnames) and use DNS-services to translate hostnames and related FQDNs into IP-addresses. In a local LAN/WLAN you may have full control over all these ingredients and design a consistent landscape of named and interacting hosts – plus a pre-designed network segregation with hosts, routers, gateways, gateway-/perimeter-firewalls, etc.

The situation may be different when you put your laptop into a foreign and potentially dangerous WLAN environment. In my case I use NetworkManager on Leap 15.3 (soon 15.4) to configure WLAN access. The WLAN routers in most networks I have to deal with offer both DHCP and DNS services. The IP-address is assigned dynamically. The router’s DNS server works as a forwarder with respect to the Internet. But regarding the local network which the router controls, the DNS-service of the router may or may not respect your wishes regarding a hostname. And you do not know what other hosts creep around in the network you become a member of – and how they see you with respect to your host’s name.

There are three important things I want to achieve in such a situation:

  1. As soon as a WLAN connection to a router gets up I want to establish firewall rules blocking all incoming traffic and limit the outgoing traffic as much as possible.
  2. I do not want the WLAN routers to meddle with the hostname set by me, because this name plays a role in certain scripts and some host-internal virtual network environments.
  3. Other friendly hosts in the network may ping me under a certain defined hostname – which the DNS part of the WLAN router should be informed about.

All these things can be managed directly or indirectly by NetworkManager (and some additional scripts). In particular you can start a script that installs netfilter rules for a certain pre-defined hostname and does further things – e.g. delete or supplement entries in “/etc/hosts”.
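As a rough sketch of such a hook: NetworkManager executes executable scripts placed in /etc/NetworkManager/dispatcher.d/ with the interface and the action as arguments. The firewall helper named below is a hypothetical script of my own:

#!/bin/bash
# /etc/NetworkManager/dispatcher.d/50-wlan-guard  (sketch; names are examples)
IFACE="$1"; ACTION="$2"
if [ "$ACTION" = "up" ]; then
    /usr/local/sbin/strict-fw-rules.sh "$IFACE"   # hypothetical netfilter helper
    # ensure /etc/hosts still contains an entry for our chosen hostname
    HN="$(cat /etc/hostname)"
    grep -q "$HN" /etc/hosts || echo "127.0.1.1 $HN" >> /etc/hosts
fi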

However, the host’s name somehow has to be defined ahead of the network connection. There are multiple options to do this. One is to edit the file “/etc/hostname”. Another is offered by YaST on Opensuse systems. A third is a script of your own, which in addition may manage settings in /etc/hosts, e.g. regarding virtual networks controlled by yourself.
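A script of your own would, on a systemd-based system like Opensuse, typically rely on hostnamectl. A minimal sketch, using the hostnames of the later experiments as examples:

# Set the static hostname (this is what ends up in /etc/hostname)
hostnamectl set-hostname xmux
# Optionally keep /etc/hosts consistent with the new name
sed -i 's/\bxtux\b/xmux/g' /etc/hosts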

Under perfect circumstances you may achieve a status where you have a well-defined static hostname not touched by the WLAN router, a local host-based firewall controlling all required connections, and a chosen hostname that has been accepted by the router and integrated into its own DNS domain.

Let us check this for the WLAN environment at one of my friends’ locations:

myself@xtux:~> hostname; cat /etc/hostname; echo $HOST; echo $HOSTNAME; echo $XAUTHLOCALHOSTNAME; echo $SESSION_MANAGER; hostnamectl
xtux
xtux
xtux
xtux
xtux
local/xtux:@/tmp/.ICE-unix/7289,unix/xtux:/tmp/.ICE-unix/7289
   Static hostname: xtux
         Icon name: computer-laptop
           Chassis: laptop
        Machine ID: ...... (COMMENT: long ID) 
           Boot ID: b5d... (COMMENT: long ID)
  Operating System: openSUSE Leap 15.3
       CPE OS Name: cpe:/o:opensuse:leap:15.3
            Kernel: Linux 5.3.18-150300.59.101-default
      Architecture: x86-64
myself@xtux:~> 

myself@xtux:~ # ping xtux
PING xtux.home (192.168.10.186) 56(84) bytes of data.
64 bytes from xtux.home (192.168.10.196): icmp_seq=1 ttl=64 time=0.030 ms
64 bytes from xtux.home (192.168.10.196): icmp_seq=2 ttl=64 time=0.060 ms
^C
--- xtux.home ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2013ms

(The COMMENTs are not real output, but were added by me.)
You see that there is a whole variety of potential sources regarding information about the host’s name. In particular there are multiple environment variables. In the situation shown above all information sources agree about the host’s name, namely “xtux”. But you also see that the local domain name used by the WLAN router is a default one used by a certain router vendor. You would also get the presently used domain name by issuing the command “dnsdomainname” at a shell prompt.

An interesting question is: Which sources will register a hostname change during a running KDE session? Another interesting question is: Is the hostname used for local X-access authorization? The answers will be given by the experiments described below.

Cookie based X-server access by the MIT-MAGIC-COOKIE-1 mechanism

X-server access can be controlled by a variety of mechanisms. The one we focus on here is cookie-based access. The theory is that an X-server, when it starts up, queries the hostname and creates a “secret” cookie plus a hash defining a file where the server saves the cookie. Afterward any X-client must provide this specific magic cookie when trying to get access to the X-server (more precisely: to its socket). See e.g. a Wikipedia article for these basic principles.

Let us check what happens when we start an X-server. On an Opensuse system a variety of systemd-services is associated with a pseudo-runlevel “3” of the host, which can be set by the command “init 3”. This “state” corresponds, of course, to a systemd target. Network and multiuser operations are provided. A command “init 5” then moves the system to the graphical target. This includes the start of the DisplayManager – in my case sddm. sddm in turn starts Xorg’s X-server. On an Opensuse system see the file /etc/sysconfig/displaymanager for respective settings.
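For reference: the old init numbers are just aliases for systemd targets. You can verify and switch directly (as root):

# The runlevel targets are symlinks to the real targets
ls -l /usr/lib/systemd/system/runlevel3.target /usr/lib/systemd/system/runlevel5.target
# Switch between multi-user and graphical operation
systemctl isolate multi-user.target    # corresponds to "init 3"
systemctl isolate graphical.target     # corresponds to "init 5"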

Now let us get some information from the system about these points. First we look at the command which had been used to start the X-server. As root:

xtux:~ # pgrep -a X
16690 /usr/bin/X -nolisten tcp -auth /run/sddm/{0ca1db02-e253-4f2b-972f-9b124764a65f} -background none -noreset -displayfd 18 -seat seat0 vt7

The path after the -auth option gives us the location of the file containing the “magic cookie”. We can analyze its contents by the xauth command:

xtux:~ # xauth -f /run/sddm/\{0ca1db02-e253-4f2b-972f-9b124764a65f\} list
xtux/unix:0  MIT-MAGIC-COOKIE-1  650ade473bc07c2e981d6174871c2ad0

(Hint: The tab-key will help you to avoid retyping the long hash). Ok, here we have our secret cookie for further authentication of applications wanting to get access to the X-server’s socket.

The other side of the story is the present user who opened an X-session – in my case a KDE session. Where do this session and any X-clients, which the user later starts on the graphical desktop, get their knowledge about the required cookie from?

Answer: Whilst logging in via the DisplayManager an entry is written into or replaced inside the file ~/.Xauthority (by a root-controlled process). If this file does not exist it is created.

The .Xauthority file should normally not be edited directly as it contains binary, non-printable characters. But the command “xauth list” helps again to present the contents in readable form. It automatically picks the file ~/.Xauthority:

myself@xtux:~> xauth list
xtux/unix:0  MIT-MAGIC-COOKIE-1  650ade473bc07c2e981d6174871c2ad0

We see the same cookie here as on the local X-server’s side. Any application with graphical output can and should use this information to get access to the X-server. E.g. “ssh -X” can use this information to map cookie information set during the interaction with a remote system to the present local X-server cookie. And there are of course the sandboxed flatpak applications in their namespaces. Note that the display used on the local host is by definition :0.

Note further that the situation regarding .Xauthority shown above is ideal: it contains only one entry. On systems from which you work a lot on different remote hosts this file normally contains multiple entries, in particular if you have opened X-connections to other hosts in the past. Or if you have changed your hostname before …

Access to the running X-server by a flatpak-based Blender installation

When we start a flatpak Blender installation, it will just open the Blender interface on the host’s graphical desktop screen. We can close Blender directly afterward again.

myself@xtux:~> flatpak run org.blender.Blender &
[1] 23145
myself@xtux:~> Saved session recovery to '/tmp/quit.blend'

Blender quit

[1]+  Done                    flatpak run org.blender.Blender
myself@xtux:~> 

We shall later see that flatpak maps the display number :99, used in the application’s namespace, to :0 when interacting with the locally running X-server. In this respect it seems to work similarly to “ssh -X”.
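If you want to verify this yourself: flatpak allows you to replace the command started inside the sandbox. Assuming the runtime ships a shell, you can inspect the namespace from within – a sketch:

flatpak run --command=sh org.blender.Blender
# Inside the sandboxed shell:
echo $DISPLAY        # should show the sandbox-internal display, e.g. :99
ls /tmp/.X11-unix    # the X socket flatpak has bound into the namespace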

Disturbing the hostname settings

What happens if we disturb the initial consistent setup by changing the host’s name during a running X-session?

We have already identified potential sources of a mismatch regarding the hostname: /etc/hostname vs. a variety of environment variables vs. the DNS system.
The reader has certainly also noticed that the entry in the ~/.Xauthority file starts with the present name of the host. So here we have yet another potential source of a mismatch after a change of the hostname.

There are obvious limitations: The present cookie value stored at the location /run/sddm/\{..} should not be overwritten. This may disturb running X-clients and the start of other application windows in sub-shells of the running KDE-session. Changing environment variables which contain the hostname may be dangerous, too.

On the other hand, the entry in ~/.Xauthority may not fit the new hostname if the entry is not adapted. Actually, changing the hostname via YaST on an Opensuse system leaves .Xauthority entries as they were. So how would a KDE application and a flatpak application react to such discrepancies?

Basic Experiment: Change the hostname during a KDE session and try to run a flatpak-based Blender application afterward

There are multiple options to change the hostname. We could overwrite the contents of /etc/hostname directly as root (and wait for some systemd action to note the difference). But that does not automatically change other network settings. Let us therefore use YaST in the form of the graphical yast2.

We just ignore the warning and define a new hostname “xmux”. Then we let yast2 do its job to reconfigure the network settings. Note that this will not interrupt an already running NetworkManager-controlled WLAN connection. Afterward we check our bash environment:

myself@xtux:~> hostname; cat /etc/hostname; echo $HOST; echo $HOSTNAME; echo $XAUTHLOCALHOSTNAME; echo $SESSION_MANAGER; hostnamectl
xmux
xmux
xtux
xtux
xtux
local/xtux:@/tmp/.ICE-unix/7289,unix/xtux:/tmp/.ICE-unix/7289
   Static hostname: xmux
         Icon name: computer-laptop
           Chassis: laptop
        Machine ID: .....   (COMMENT: unchanged long string) 
           Boot ID: 65c..   (COMMENT: changed long string)
  Operating System: openSUSE Leap 15.3
       CPE OS Name: cpe:/o:opensuse:leap:15.3
            Kernel: Linux 5.3.18-150300.59.101-default
      Architecture: x86-64
myself@xtux:~> xauth list
xtux/unix:0  MIT-MAGIC-COOKIE-1  650ade473bc07c2e981d6174871c2ad0

We got a new Boot ID, but neither the environment variables nor the contents of ~/.Xauthority have been changed. Will our flatpak-based Blender run?
Answer: Yes, it will – without any warnings! Will kate run? Yes, it will, but with a warning:

myself@xtux:~> kate &
[1] 6685
myself@xtux:~> No protocol specified

The reaction of kate is a bit questionable. Obviously, it detects an unexpected discrepancy, but starts nevertheless.

There are two possible explanations regarding flatpak: 1) flatpak ignores the hostname in the .Xauthority entry and just reacts to the display number. 2) flatpak remembers the last successful cookie and uses it.

How can we test this?

Preparation of further experiments: How to change the contents of .Xauthority manually

One way to test the reaction of applications to discrepancies between a changed hostname and cookie entries in .Xauthority is to manually change entries in this file. This is not as easy as it may seem as the file contains weird binary characters. But it is possible with the help of kate or kwrite. You need an editor which can display most of the symbols in a sufficient way; vi is not adequate.

BUT: You have to be very careful when copying entry lines. Identifying the symbol sequence marking the beginning of an entry is a first important step. Note also: When you change the hostname of an entry you must use one of the same length!
AND: Keep a copy of the original .Xauthority-file somewhere safe during your experiments.
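A less error-prone alternative, which I only sketch here: xauth itself can remove and add entries (the cookie value is the one from the listing above). Note, however, that “xauth add” appends at the end of the file, so controlling the order of entries for the experiments below may still require hand-editing:

xauth remove xtux/unix:0
xauth add xmux/unix:0 MIT-MAGIC-COOKIE-1 650ade473bc07c2e981d6174871c2ad0
xauth list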

Multiple entries in ~/.Xauthority

The file ~/.Xauthority can in principle have multiple entries for different hosts and different X-servers. Among other scenarios multiple entries are required to cover situations like

  1. direct access of a local X-client via a network to other hosts’ X-servers,
  2. remote application access based on “ssh -X” and a display of the output on the local host’s X-server.

Furthermore: A restart of an X-server will lead to new additional entries in ~/.Xauthority, whilst existing entries are kept there.

Therefore, it is wise to work with multiple entry lines in ~/.Xauthority for further experiments. However: Multiple entries with a combination of hostnames and cookie values open up a new degree of freedom:

The decision whether an X-client application gets successfully started or not will not only depend on a cookie match at the X-server, but also on the selection of an entry in the file ~/.Xauthority.

Hopefully the results of experiments which mix old, changed and freely chosen hostnames with valid and invalid cookie values will give us answers to the question of how a hostname change affects running X-sessions and freshly started X-clients. We are especially interested in the rules that guide an application when it must select a particular entry in .Xauthority among others. If we are lucky we will also get an idea about how an intermediate restart of the X-server after a hostname change may influence the start of X-clients afterward.

During the following experiments I try to formulate and improve rules regarding kate and flatpak applications with respect to the selection and usage of entries in .Xauthority.

Please note:

In all forthcoming experiments we only consider local applications which try to gain access to a locally running X-server!

The cookie of an entry in ~/.Xauthority is considered to be “correct” if it matches the magic cookie of the running X-server. Otherwise we regard it as “incorrect” with respect to the running X-server.
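Whether a cookie is “correct” in this sense can be verified directly against the server-side auth file identified above via “pgrep -a X” – a sketch, to be run as root with the sddm path of your system:

SRV=$(xauth -f /run/sddm/\{0ca1db02-e253-4f2b-972f-9b124764a65f\} list | awk '{print $3; exit}')
# Print every user-side entry carrying the server's cookie
xauth -f /home/myself/.Xauthority list | awk -v c="$SRV" '$3 == c {print "match:", $0}'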

Experiment 1: Change of the file ~/.Xauthority – leading entry for the original hostname with an incorrect cookie, second entry for new hostname with the correct cookie

I prepared a file /home/myself/.Xauthority (as root) during the running X-session with the following entries:

myself@xtux:~> xauth list
xtux/unix:0  MIT-MAGIC-COOKIE-1  650ade473bc07c2e981d6174871c2a44
xmux/unix:0  MIT-MAGIC-COOKIE-1  650ade473bc07c2e981d6174871c2ad0
myself@xtux:~> kate
myself@xtux:~> flatpak run org.blender.Blender &
[1] 28187
myself@xtux:~> Invalid MIT-MAGIC-COOKIE-1 keyUnable to open a display
^C
[1]+  Exit 134                flatpak run org.blender.Blender
myself@xtux:~>

You see that I have changed the hostname (xtux) of our original entry (with the correct cookie) to the meanwhile changed hostname (xmux). This entry is associated with the presently valid magic cookie. And I have added a leading entry with the original hostname, but a modified and therefore invalid cookie value.

Now, let us check what a KDE application like kate would do afterward: The output above shows that it just started – without any warning. However, flatpak did and does NOT start:

myself@xtux:~> flatpak run org.blender.Blender &
[1] 16686
myself@xtux:~> Invalid MIT-MAGIC-COOKIE-1 keyUnable to open a display

Experiment 2: ~/.Xauthority with a first entry for the new hostname but with an invalid cookie, plus a second entry for the original hostname with the correct cookie

Let us change the entries to:

myself@xtux:~> xauth list
xmux/unix:0  MIT-MAGIC-COOKIE-1  650ade473bc07c2e981d6174871c2a44
xtux/unix:0  MIT-MAGIC-COOKIE-1  650ade473bc07c2e981d6174871c2ad0
myself@xtux:~> kate
Invalid MIT-MAGIC-COOKIE-1 keymyself@xtux:~> flatpak run org.blender.Blender &
[1] 28371
myself@xtux:~> Invalid MIT-MAGIC-COOKIE-1 keyUnable to open a display
^C
[1]+  Exit 134                flatpak run org.blender.Blender
myself@xtux:~> 

It may seem that a line break is missing in the output and that kate did not start. But this is wrong:

  • Kate actually DID start! But it produced an alarming warning!
  • However flatpak did NOT start AND gave us a warning!

Meaning:

We got a clear indication that

  • different entries are used by our two applications and
  • that the potential discrepancies of the hostnames associated with the .Xauthority-entries in comparison to environment variables and /etc/hostname are handled differently by our two applications.

Why kate starts despite the clear warning is a question others have to answer. I see no direct security issue, but I have not really thought it through.

Experiment 3: ~/.Xauthority with a leading entry for the original hostname and the correct cookie, plus an entry for the new hostname with an incorrect cookie

Let us now change the order of the cookie entries:

myself@xtux:~> xauth list
xtux/unix:0  MIT-MAGIC-COOKIE-1  650ade473bc07c2e981d6174871c2ad0
xmux/unix:0  MIT-MAGIC-COOKIE-1  650ade473bc07c2e981d6174871c2a44
myself@xtux:~> kate
Invalid MIT-MAGIC-COOKIE-1 keymyself@xtux:~> flatpak run org.blender.Blender &
[1] 28571
myself@xtux:~> /run/user/1004/gvfs/ non-existent directory
Saved session recovery to '/tmp/quit.blend'

Blender quit

[1]+  Done                    flatpak run org.blender.Blender
myself@xtux:~> 

kate again gives us a warning, but starts.
And, oh wonder, flatpak now does start Blender in its namespace without any warning!

Experiment 4: ~/.Xauthority with a leading entry for the new hostname and the correct cookie, plus an entry for the original hostname but an invalid cookie

Let us switch the hostnames again in the given order of the cookie entries. This gives us the last variation for mixing the new and the old hostnames with valid/invalid cookies:

myself@xtux:~> xauth list
xmux/unix:0  MIT-MAGIC-COOKIE-1  650ade473bc07c2e981d6174871c2ad0
xtux/unix:0  MIT-MAGIC-COOKIE-1  650ade473bc07c2e981d6174871c2a44
myself@xtux:~> kate
myself@xtux:~> flatpak run org.blender.Blender &
[1] 28936
myself@xtux:~> /run/user/1004/gvfs/ non-existent directory
Saved session recovery to '/tmp/quit.blend'

Blender quit

[1]+  Done                    flatpak run org.blender.Blender
myself@xtux:~> 

kate now starts without any warning. And also flatpak starts without warning.

Intermediate, but insufficient interpretation – reason for further experiments

How can we interpret the results above? So far the results are consistent with the following rules:
kate: Whenever the cookie associated with the present static hostname in .Xauthority matches the X-server’s cookie, kate will start without warning. Otherwise it issues a warning, but starts nevertheless.
flatpak: Whenever the first entry in .Xauthority provides a cookie that matches the X-server’s cookie for the session, then flatpak starts an X-client program like Blender.

But things are more complicated than this. We also have to check what happens if we, for some reason, have entries in ~/.Xauthority that reflect a hostname neither present in /etc/hostname nor in environment variables. (Such an entry may have resulted from previous access trials to the X-servers of remote hosts or “ssh -X” connections.)

I will call such a hostname a locally “unknown hostname” below. I admit that this is not the best wording, but it is a short one. A “known hostname”, in contrast, is one either provided by environment variables or present in /etc/hostname.

Experiment 5: ~/.Xauthority with a leading entry for an unknown hostname and the correct cookie, plus an entry for the new hostname but an incorrect cookie

Entries of the form

myself@xtux:~> xauth list
xfux/unix:0  MIT-MAGIC-COOKIE-1  650ade473bc07c2e981d6174871c2ad0
xmux/unix:0  MIT-MAGIC-COOKIE-1  650ade473bc07c2e981d6174871c2a44

reflect such a situation.

The reactions of both kate and flatpak are negative in the given situation:

myself@xtux:~> xauth list
xfux/unix:0  MIT-MAGIC-COOKIE-1  650ade473bc07c2e981d6174871c2ad0
xmux/unix:0  MIT-MAGIC-COOKIE-1  650ade473bc07c2e981d6174871c2a44
myself@xtux:~> kate
Invalid MIT-MAGIC-COOKIE-1 keyNo protocol specified
qt.qpa.xcb: could not connect to display :0
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "" even though it was found.
Failed to create wl_display (No such file or directory)
qt.qpa.plugin: Could not load the Qt platform plugin "wayland" in "" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.

Available platform plugins are: wayland-org.kde.kwin.qpa, eglfs, linuxfb, minimal, minimalegl, offscreen, vnc, wayland-egl, wayland, wayland-xcomposite-egl, wayland-xcomposite-glx, xcb.

Aborted (core dumped)
myself@xtux:~> 
myself@xtux:~> flatpak run org.blender.Blender &
[1] 30565
myself@xtux:~> Invalid MIT-MAGIC-COOKIE-1 keyUnable to open a display
^C
[1]+  Exit 134                flatpak run org.blender.Blender
myself@xtux:~> 

Meaning:

flatpak reacts allergically to entries with unknown hostnames as well as to entries with known hostnames but a wrong cookie.

Experiment 6: ~/.Xauthority with a leading entry for the new hostname and the correct cookie, plus an entry for an unknown hostname with an invalid cookie

However:

myself@xtux:~> xauth list
xmux/unix:0  MIT-MAGIC-COOKIE-1  650ade473bc07c2e981d6174871c2ad0
xfux/unix:0  MIT-MAGIC-COOKIE-1  650ade473bc07c2e981d6174871c2a44
myself@xtux:~> kate
myself@xtux:~> flatpak run org.blender.Blender &
[1] 30710
myself@xtux:~> /run/user/1004/gvfs/ non-existent directory
Saved session recovery to '/tmp/quit.blend'

Blender quit

[1]+  Done                    flatpak run org.blender.Blender
myself@xtux:~> 

Both applications start without warning!

Experiment 7: ~/.Xauthority with a leading entry for the original hostname and the correct cookie, plus an entry for an unknown hostname with an invalid cookie

The reaction of our applications changes again for the following settings:

myself@xtux:~> xauth list
xtux/unix:0  MIT-MAGIC-COOKIE-1  650ade473bc07c2e981d6174871c2ad0
xfux/unix:0  MIT-MAGIC-COOKIE-1  650ade473bc07c2e981d6174871c2a44
myself@xtux:~> kate
No protocol specified
myself@xtux:~> flatpak run org.blender.Blender &
[1] 30859
myself@xtux:~> /run/user/1004/gvfs/ non-existent directory
Saved session recovery to '/tmp/quit.blend'

Blender quit

[1]+  Done                    flatpak run org.blender.Blender
myself@xtux:~> 

Meaning:

If the hostname associated with the right cookie is present in the environment variables, but does not correspond to the contents of /etc/hostname, then kate will start with some warning. Flatpak starts without warning.

Experiment 8: ~/.Xauthority with a leading entry for an unknown hostname and an incorrect cookie, plus an entry for a known hostname with the correct cookie

Switching entries and renaming confirms previous results:

myself@xtux:~> xauth list
xfux/unix:0  MIT-MAGIC-COOKIE-1  650ade473bc07c2e981d6174871c2a44
xtux/unix:0  MIT-MAGIC-COOKIE-1  650ade473bc07c2e981d6174871c2ad0
myself@xtux:~> kate
No protocol specified
myself@xtux:~> flatpak run org.blender.Blender &
[1] 31113
myself@xtux:~> /run/user/1004/gvfs/ non-existent directory
Saved session recovery to '/tmp/quit.blend'

Blender quit

[1]+  Done                    flatpak run org.blender.Blender

And:

myself@xtux:~> xauth list
xfux/unix:0  MIT-MAGIC-COOKIE-1  650ade473bc07c2e981d6174871c2a44
xmux/unix:0  MIT-MAGIC-COOKIE-1  650ade473bc07c2e981d6174871c2ad0
myself@xtux:~> kate
myself@xtux:~> flatpak run org.blender.Blender &
[1] 31348
myself@xtux:~> /run/user/1004/gvfs/ non-existent directory
Saved session recovery to '/tmp/quit.blend'

Blender quit

Experiment 9: ~/.Xauthority with entries for unknown hostnames, only

Let us now use only unknown hostnames:

myself@xtux:~> xauth list
xfuxi/unix:0  MIT-MAGIC-COOKIE-1  650ade473bc07c2e981d6174871c2ad0
xruxi/unix:0  MIT-MAGIC-COOKIE-1  650ade473bc07c2e981d6174871c2ad0
myself@xtux:~> kate
No protocol specified
No protocol specified
qt.qpa.xcb: could not connect to display :0
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "" even though it was found.
Failed to create wl_display (No such file or directory)
qt.qpa.plugin: Could not load the Qt platform plugin "wayland" in "" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.

Available platform plugins are: wayland-org.kde.kwin.qpa, eglfs, linuxfb, minimal, minimalegl, offscreen, vnc, wayland-egl, wayland, wayland-xcomposite-egl, wayland-xcomposite-glx, xcb.

Aborted (core dumped)
myself@xtux:~> flatpak run org.blender.Blender &
[1] 29115
myself@xtux:~> No protocol specified
Unable to open a display
^C
[1]+  Exit 134                flatpak run org.blender.Blender

So, having only unknown hostnames will lead to no X-access – neither for flatpak nor for kate.

Derived rules for the selection of an entry in ~/.Xauthority

So the rules for our two selected applications regarding the selection of an entry in ~/.Xauthority and the resulting X-server access are more precisely described below (a small repair sketch for user scripts follows after the list):

  • kate: If an entry in .Xauthority has a known hostname and fits the X-server’s cookie, kate is started – with a warning if the hostname does not fit the present static hostname (in /etc/hostname). If there is an additional deviating entry for the present static hostname, but with an incorrect cookie, then the warning includes the fact that the cookie is invalid; kate starts nevertheless.
  • flatpak Blender: The first entry which matches a known hostname (among the available ones from environment variables or from /etc/hostname) and which matches display :0 is used to pick the respective cookie for X-server access. The application (X-client) only starts if the resulting cookie matches the X-server’s cookie.
  • Both: If ~/.Xauthority does not contain entries which match any of the known hostnames, then both programs fail regarding X-access. kate does check for other possible sockets (e.g. for (X)Wayland) in this case.
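Given these rules, a user script which must start flatpak applications reliably could, as a sketch, repair the user’s .Xauthority after a hostname change, so that the entry for the present static hostname carries the server’s cookie and no stale entry for a still “known” hostname precedes it. To be run as the session user; reading the sddm auth file requires sudo:

AUTH=$(pgrep -a X | sed -n 's/.* -auth \([^ ]*\) .*/\1/p')
COOKIE=$(sudo xauth -f "$AUTH" list | awk '{print $3; exit}')
HN=$(cat /etc/hostname)
xauth remove "$HOSTNAME/unix:0"   # drop a stale entry for the old, still "known" name
xauth remove "$HN/unix:0"
xauth add "$HN/unix:0" MIT-MAGIC-COOKIE-1 "$COOKIE"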

When evaluating these rules with respect to security issues one should always keep in mind that .Xauthority-entries like the ones we have artificially constructed may have been the result of a sequence of hostname changes followed by restarts of the X-server. This will become clearer in the next post.

Conclusion

By some simple experiments one can show that the access to an Xorg-server requested by an X-client application does not only depend on a cookie match but also on the combination of hostnames and associated magic cookie values offered by multiple entries in the file ~/.Xauthority. The rules by which an X-client application selects a specific entry may depend on the application and may differ from those which other applications follow. We have seen that at least a flatpak-based Blender installation follows other rules than e.g. KDE’s kate. Therefore, changes of the hostname during a running X-session may have an impact on the startup of applications – e.g. if .Xauthority already contains an entry for the new hostname.

The attentive reader has, of course, noticed that the experiments described above alone do not explain the disturbing reaction of flatpak to hostname changes described in the beginning. These reactions had to do with already existing entries for certain hostnames in .Xauthority. Additional entries may in general be the result of previous (successful) accesses to remote hosts’ X-servers or previous local hostname changes followed by X-server restarts. In the next post I will, therefore, extend the experiments to intermediate starts of both the X-server and the graphical target after hostname changes.

Links

Opensuse Leap 15.3 documentation on xauth

Stackoverflow question on “How does X11 authorization work? (MIT Magic Cookie)”

Stackexchange question on Invalid MIT-MAGIC-COOKIE-1 keyxhost: unable to open display “:0”

Opensuse Bug 1137491 – flatpak/snap: “Invalid MIT-MAGIC-COOKIE-1 key” after resume (network/hostname changes?)

Also see a comment of the 25th of January, 2019, in an archived Opensuse.org contribution

 

And before we who love democracy and freedom forget it:
The worst fascist, war criminal and killer living today is the Putler. He must be isolated at all levels, be denazified and sooner than later be imprisoned. Somebody who orders the systematic destruction of civilian infrastructure must be fought and defeated because he is a danger to mankind and principles of humanity. Long live a free and democratic Ukraine!