Replacing unstable Blender 2.82 on Leap 15.3 with a flatpak- or snap-based Blender 3.1

I wanted to work a bit more with my S-curve project in Blender. See e.g.

Blender – complexity inside spherical and concave cylindrical mirrors – IV – reflective images of a Blender variant of Mr Kapoor’s S-curve

At my present location I have to use my old but beloved laptop. As a new start in the project I just wanted to make a little movie showing dynamic changes in the reflections of some moving spheres in front of the S-curve’s metallic surface. As I had already created a few Blender movies some years ago I expected this to be an easy job. But I ran into severe trouble. A major reason was the fact that Opensuse does not provide a reasonably usable RPM of a current Blender version for Leap 15.3.

I had upgraded the laptop to Leap 15.3 in December, 2022. The laptop has an Optimus system. Of course, I prefer the Nvidia card when creating scenes with Blender. I had had some minor problems with prime-select before. But after recent updates I did not get the Nvidia card running at all – neither with Bumblebee nor with Opensuse’s prime-select. I had to invest an hour to solve this problem. After a complete uninstall of Bumblebee and bbswitch and a de- and re-installation of the Nvidia drivers and suse-prime I got prime-select working in the end.

Just to find that Blender version 2.82 provided by the Leap 15.3 OSS repository was not really stable in a Leap 15.3 environment.

Blender 2.82 unstable on Opensuse Leap 15.3

I tried to create a movie with 100 frames (400×200) in Matroska format with H.264 encoding. Rendering of the complex S-curve reflections with a strongly reduced number of light ray samples still took considerable time with the Cycles renderer on 2 out of 4 real (and 8 hyperthreaded) CPU cores. It happened that the movie – after an hour of rendering – was not encoded correctly. In some runs Blender crashed after having rendered around 40 frames.

Therefore I changed my policy to rendering just a collection of PNG images. The risk of wasting something like 8 hours of CPU time on a full animation was just too big. But a new disaster happened when I wanted to open a “Video Editing” project in Blender to stitch the images together into the final movie. Blender just crashed. Actually, Blender crashed on Leap 15.3 whenever I tried to open any project type other than “General”. Not funny … I had not experienced something like this on Leap 15.2.
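Stitching the rendered PNG collection into the final movie does not strictly require Blender’s crashing “Video Editing” project type – ffmpeg can do it from the command line. A minimal sketch, assuming the frames were written via Blender’s output path “//frame_” as zero-padded files frame_0001.png … frame_0100.png and assuming 24 fps (adjust pattern, frame rate and output name to your own render settings):

```shell
# Assumed naming convention: zero-padded 4-digit frame numbers
pattern='frame_%04d.png'

if command -v ffmpeg >/dev/null; then
    # H.264 in a Matroska container, as in the original render attempt
    ffmpeg -framerate 24 -start_number 1 -i "$pattern" \
           -c:v libx264 -pix_fmt yuv420p s_curve.mkv \
        || echo "no frames matching $pattern found - adjust the pattern"
fi
```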

Unfortunately, and in contrast to Tumbleweed, there is no official and recent version of Blender available for Leap 15.3. So much for the stability and reliability of Leap. What to do?

Canonical’s snap service or Flatpak to the rescue

I had read about Canonical’s SNAP before – but never tried it. Canonical’s strange sidesteps during the evolution of Linux have always been disturbing to me. Regarding “snap” I also had a newspaper article in mind about a controversy with Linux Mint – with claims that snap contains closed proprietary components and transmits telemetry data to Canonical. And it works with root rights …

Therefore, a flatpak installation had to be considered, too. The risks are small on my old laptop. So I just tried both – despite my concerns and doubts regarding snap.

Blender 3.1 via “snap”

I just followed the instructions for Opensuse on https://snapcraft.io/docs/installing-snap-on-opensuse to install “snap” on my Leap 15.3 system. As root:

zypper addrepo --refresh     https://download.opensuse.org/repositories/system:/snappy/openSUSE_Leap_15.3     snappy
zypper install snapd
systemctl enable --now snapd
systemctl enable --now snapd.apparmor
rcsnapd status
# install blender
snap install blender --channel=3.1/stable --classic

Another description can be found here:
https://www.linuxcapable.com/how-to-install-snap-snap-store-snapcraft-on-opensuse-leap-15/

The run-time environment of snap is handled by a daemon, “snapd”. We find the majority of the installed files in four directories:

/snap           -- 1.1 GB thereof 686 MB for Blender
/usr/lib/snap   -- 48 MB
/var/snap       -- almost nothing 
/var/lib/snapd  -- 360 MB 

In the first directory you also find the installed applications – in my case “Blender 3.1.2”, which consumes 686 MB of the roughly 1.1 GB under “/snap”.
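You can reproduce these size figures on your own system with a simple du call over the listed directories (the error redirection merely skips directories which do not exist on your setup):

```shell
# Summarize the disk usage of the snap-related directories
du -sh /snap /usr/lib/snap /var/snap /var/lib/snapd 2>/dev/null || true
```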

The startup of Blender via

/snap/bin/blender &

takes about 2 secs. Loading my S-curve test file took noticeably longer. Otherwise snap’s Blender version 3.1.2 seemed to work perfectly. All the bugs I had seen with 2.82 were and are gone. And: I could not really notice a difference in performance when working with 3D objects in the Blender viewport. For a render test case I found 17.72 secs per frame on average. Memory release after leaving Blender seemed to be OK.

Blender 3.1 via Flatpak

Then I tried a Blender installation with flatpak. The installation is almost as simple as for snap. See https://www.flatpak.org/setup/openSUSE. It more or less sums up to issuing the following commands:

zypper install flatpak
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
flatpak install flathub org.blender.Blender


Regarding the installation I was surprised to find that flatpak requires much more space than snap. The majority of the flatpak files is located in the following folder:

/var/lib/flatpak  -- 2.3 GB, thereof around 628 MByte for Blender

So, in comparison to snap, this makes a remarkably big difference regarding the files required to provide a working runtime environment! I understand that even for just one application many files have to be provided for a stable run-time environment on KDE – but in comparison to the snap installation it appears bloated.

We start flatpak’s Blender via

flatpak run org.blender.Blender &

The startup time was comparable to the snap installation – no difference felt. The test render time per frame was 18.95 secs. So flatpak’s Blender was 1.23 secs or around 6.9 % slower than snap’s. The reason is unclear. The difference after repeated trials was sometimes a bit smaller (4%), but sometimes also bigger (8.5%) – depending on the start order of the Blender applications. But on average snap’s Blender was always a bit faster. I do not regard the difference as really problematic. But it is interesting. Memory release also worked perfectly with flatpak’s Blender installation.
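For completeness: the percentage difference follows from the two measured averages, e.g. with a one-liner:

```shell
# (flatpak time - snap time) / snap time, in percent
awk 'BEGIN { snap = 17.72; flat = 18.95; printf "%.1f%%\n", (flat - snap) / snap * 100 }'
```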

Conclusion

Before you start struggling with the Blender 2.82 binary Opensuse provides, it is really worth trying a flatpak- or snap-based installation of Blender 3.1.2. On my system both worked perfectly.

Before you make a decision between flatpak and snap, take into account the critical points which have been discussed on the Internet regarding proprietary aspects of snap. Personally, I am going to use flatpak for Blender.
Otherwise, my opinion is that distributors like Opensuse should provide an important application such as Blender in its present version as a binary and resolve the dependencies as soon as possible. For me a flatpak installation is just a compromise. And it costs me a lot of valuable SSD space.

Links

https://www.linux-community.de/ausgaben/linuxuser/2018/02/dreikampf/
https://hackaday.com/2020/06/24/whats-the-deal-with-snap-packages/
https://www.makeuseof.com/snap-vs-appimage-vs-flatpak/

Ceterum censeo: The worst living fascist and war criminal today who must be isolated, denazified and imprisoned is the Putler.


Nupro X3000 RC – a solid high quality supplement to your Linux Audio

A friend asked me what sound equipment I use on my Linux machine. She wanted to buy some new decent speakers. I had to make a similar decision a year ago. Coming to a conclusion back then became a more difficult process than I had expected.

I admit that I am a total amateur regarding sound equipment. I have not changed my sound cards (Asus Xonar D2X, Creative X-Fi Titanium, Onboard High Definition GM206) for a long, long time. And I do not hear as well as in my younger years. But during Corona and home office times I became really discontent with my old Creative speakers. One cannot wear headphones all the time. So some new speakers for my Linux workstation became a topic on my private agenda.

Questions ahead of a decision for some speakers for your PC

When I seriously started thinking about some investment the following questions came up:

  • A surround system?
  • Active or passive boxes?
  • Suitable for a shelf or standing on the floor?
  • Do you want to use the speakers later also in other contexts than just as background equipment in your working room?
  • What is appropriate for your room size?
  • Connections: cable-based (copper, optical?) or WiFi- or Bluetooth-based?
  • At my age, when hearing capabilities are reduced: Will high-end properties make a difference at all?
  • And the most limiting factor: the budget.

Taking all these factors into account will certainly lead to very personal decisions. So, when I make an explicit recommendation here – take it with caution and a grain of salt.

Guidelines to choosing speakers for a non-professional PC environment

Here are the personal guidelines which I followed – after I had read reviews, listened to Teufel and Edifier speakers at friends’ homes and to a relatively expensive Logitech surround system at my nephew’s. You may have other references, other budgets and may hear much better and in a more differentiated way than I do. So relax if you come to other conclusions.

And do not forget: I am talking about sound equipment on a PC for background music enjoyment in a working room – not for professional objectives and High End specialists.

  • Recommendation 1: If you are interested in sound quality and are a music enthusiast – forget about surround systems. Quantity (many speakers) almost always enforces quality compromises, which you are going to hear in the end. Better invest your money into a 2.0 or 2.1 system which fits the (probably) limited size of your working room.
  • Recommendation 2: If your room size is up to 30 square meters, invest into relatively small speakers – but of studio quality. They will give you a much more pronounced and positioned sound than surround systems. Regarding money think of speakers which you later can supplement with a sub-woofer – e.g. in case you want to move the speakers to a larger room sometime in the future.
  • Recommendation 3: Regarding bass: I am a heavy metal friend – sometimes. I have my phases and periods regarding music … Sometimes I like Jazz, only. Bass in the named two cases has a different meaning to me – but in any case I do not like resonances of my speakers. The stereo speakers alone should already provide a solid, broad and resonance free bass fundament – without a sub-woofer. A sub-woofer can deliver an extra feeling in the case of metal – but for Jazz and classical music I would not consider a sub-woofer as really relevant. So go for some solid speakers with the option of adding a sub-woofer in the future.
  • Recommendation 4: Do not underestimate the effect (or limitations) of the DAC in your sound card! At a certain quality level of your future speakers you are probably going to hear differences. So – if you are lucky and can invest into expensive speakers rethink your sound card equipment, too.
  • Recommendation 5: Do not underestimate the effect of the boxes’ positions in the room. Also in small rooms you will experience bass line effects around 100 Hz or so if you place your boxes in the room’s corners. This leads to the point that you may want some equalizer option to optimize the bass base a bit. Well, Linux or at least most music applications for Linux supply you with equalizers; but it is a nice option to be able to do something at the (active) boxes themselves to get a basic “direction” into your sound environment. And here we would also like to have the option of defining some “presets”.
  • Recommendation 6: Active boxes or amplifier? A very difficult question! In a PC and mobile environment I would tend to active speakers, but … The amplifier technique today is so good that at least in my case my hearing deficits are certainly more important.
  • Recommendation 7: WiFi? Personally, I would say: yes, you should have this option. But if so: go for the 5 GHz band. And check whether your router offers you the option to define the precise band it should work on or whether the router automatically adapts the channel to avoid disturbances from other sources.
  • Personal opinion some people certainly would like to crucify me for: Teufel speakers seem to be a bit overrated. Personally I do not think that the quality-price relation is convincing. After having listened to a pair of their floor-standing speakers I think that the balance between bass and mid-range sound is strange. Very vague in a way.

Nupro X3000 speakers as a solid option for a reasonable price

Taking all these aspects into account I ended up with a decision for (active) Nupro X3000 RC speakers from the producer “Nubert electronic GmbH“.

So far, I have not regretted this decision for a second. These boxes did not disappoint me – neither with Classical music, Jazz nor Heavy Metal.

Though admittedly, if you want to feel bass and drumming, these boxes certainly improve their performance in larger rooms a bit when combined with a sub-woofer (which I personally drive via a second sound card). But this happens only on rare occasions …

Ease of setup?

The setup of the active boxes is very simple; the explanations on the accompanying leaflets are fully sufficient. You define everything by a 4 direction control button on one of the speakers. The button and a small display are hidden behind magnetically attached front panels.

Basically, you just have to define a master and a slave speaker in the first setup round and choose a connection to your sound source – here to the output connectors of a PC sound card. In the end I used the “aux” input and still live with an analog cable-based connection between the sound card and the main box plus a digital coax cable between the boxes. (Due to the speakers’ distance I had to buy an additional coax cable. It disappears behind a shelf.)

But a WiFi connection between the speakers works very well, too. I could see no major conflict with the 5 GHz channels occupied by the WLAN routers in my surroundings.

The basic connection options to your PC and sound card are manifold: The USB interface of the Nupro sound processor appears as a USB sound card on your PC; this “sound card” is well supported on my Opensuse and KDE based Linux systems. You just have to choose the SPDIF stereo variant of the two options offered in the KDE/Phonon sound settings.

Besides a USB cable, the connection cables delivered with the speakers include an optical cable with TOSLink adapters, a SPDIF coax cable and analog cables with cinch connectors. Finally, there is also the option of a Bluetooth connection – if your PC has such a device.

In the end I personally heard no major difference between analog and digital signal handling. Neither with USB, nor with the optical connection to my old ASUS Xonar D2X sound card, nor with the optical connection to the X-Fi Titanium, nor with the onboard GM206 High Definition sound card. The TI Burr-Brown DAC of the Asus card still seems to be relatively good – at least for my ears.

I also have an additional X-FI Titanium card from Creative in my PC. I like the sound of the Asus card better with my Sennheiser headphones. Regarding the Nupro X3000 I was actually in doubt: For some music I find the sound slightly crispier with the X-Fi. However, whether this is a sign of quality is questionable. I change the sound card from time to time, just for fun – and still have no real preference.

Regarding distances the analog cable option for the connection to your PC’s sound card may be the most reasonable solution – as the optical, SPDIF coax and USB cables coming with the speakers are of limited length.

There is even a possibility to realize a pure WiFi connection from your PC to the X3000 RC speakers. Such a solution, however, requires a special transceiver (135 €) from the producer Nubert; see below. I have not tested this type of connection yet.

The speakers offer you some basic options regarding the sound balance. A very positive feature is the integrated 5-band equalizer. As said above, this allows for a basic adjustment of the sound signature – not unimportant at my age. In addition, the handheld remote control allows for a change of the relative basic balance between bass and treble.

You can also define a lower cut-off frequency for the bass and the transition frequency to a sub-woofer. Furthermore, you can set a 6 dB gain for certain analog input channels.

Disappointments ?

Something which disappointed me was the Bluetooth connection of the X3000 RC to my old Samsung smartphone – here I got periodic dropouts. I have not clarified this problem up to now. I do not exclude problems with Bluetooth and the VLC player on my phone. In reviews I have not read about any such dropouts – but you have been warned. I recently tried a Bluetooth connection from my laptop, too. This one worked flawlessly. So, I do not know …

Another major disappointment was and is Nubert’s “X-Remote App”. In my case it simply does not work on my Android 6 device. It gets stopped by Android just after granting permission to determine the geo-location. Which by the way is something I do not like in general. I got in contact with the Nubert company recently. They affirmed that they do not collect data, but that it is Google which enforces the explicit accept for geo-location when building up Wifi connections. Had to be expected, we know this stupid problem already from the mess with the German Corona App on Android. BBG again – Big Brother Google … No further comments required.

I had no real need for the App so far. After the basic setup of all the speaker’s internal settings (e.g. the equalizer) I can control the most needed adjustments via the handheld remote control accompanying the speakers. The “room calibration” feature of the App would have been nice – but it requires buying an additional piece of microphone equipment from Nubert for Android smartphones.

Sound quality

Do not expect a solid sound quality review from me. I have neither equipment nor objective, trained ears for such a review. I can only describe an impression – very much in analogy to wine – a sort of personal sound “taste and feeling”
after having heard a lot of music on the speakers. Do I like them with different kinds of music, vocals and instruments?

In a night-long session I have also compared the Nupro X3000 capabilities with my old Elac 4π (4 Pi) speakers in the living room. They are driven by NAD pre- and power amplifiers plus a NAD CD player. I did the comparison with music pieces of very different styles. I really was astonished how well the small Nupro X3000 speakers could follow the Elac 4π speakers and fill the room with sound and a solid bass fundament! Well, of course the Elacs do a better job with the bass at some point, but no wonder given their dimensions. Still, this first impression of the Nupro speakers was very convincing.

Then I moved the Elac and Nupro boxes into my smaller working room – and the Nupro X3000 at once felt much more adequate there. They positioned different sound origins in the stereo field much more precisely – which is no wonder either. And they filled the whole room with music easily.

A hint: As the speakers work with a bass reflex opening at their backside, you should not position the boxes directly at a wall – but leave some space.

Meanwhile, I have listened to a broad spectrum of music on these speakers – ranging from Eberhard Weber, Jan Garbarek, Kjetil Bjørnstad (with and without vocals), Laurie Anderson to compositions of Steve Reich, Rihm, Arvo Pärt and to recent recordings of classical music as of the Danish String Quartet or Sol Gabetta. Intermixed with stuff from Riverside, Korn, Linkin Park, Amorphis, Insomnium, Dark Tranquillity, In Flames and Rammstein. As well as a lot of classical symphony and opera recordings. And – as a very welcome side effect – I have re-discovered the wonders in the songs of Tom Waits.

You know what: All of it was pure joy – taking into account the sometimes strange, intentionally distorted mix you find in some heavy metal pieces.

In my opinion the balance between bass, mid-range and treble of the X3000 RC speakers is very good. You (almost) never lose the resolution of instruments covering different frequency regions. Some criticism in the audio press was directed at problems in the mid-range frequency area. Personally, I cannot confirm this. If there is some problem, I would bet it appears in larger rooms. But this is not the target environment of these speakers. In my working room the mid-range appears very present – both with vocals and classical instruments. But probably I do not know what high-end sound really is … 🙂

I could not hear any bass resonances so far – with standard settings. But when you place the speakers close to a wall or corner you may want to reduce the low bass (< 100 Hz) a bit.

Summary: I very seldom use my Sennheiser headphones these days. I really do like the sound of these speakers.

Are there weaknesses? Well, in my opinion the X3000 speakers have a little weakness at very low volume – the relative weight between mid-range and bass shifts towards the bass. This may have to do with reflections in the room (or my hearing). But the advantage is that so far I have not felt any need to switch the loudness option on.

Future options?

Now I come to a point which makes the Nupro boxes also an investment into some future wireless audio infrastructure: For 135 € you get the NuConnect trX wireless transceiver (https://www.nubert.de/nuconnect-trx/p4210/). This little brick allows e.g. for multi-room wireless solutions, but also for a transmission of digital signals from your PC or other sources to the active speakers.

Alternatively, you could also think about a combination of the trX transceiver with the “NuControl 2” pre-amplifier or the (cheaper) AmpX amplifier – both interesting products of Nubert. To my understanding, the latter uses the same amplifier modules as the active speakers, combined and supplemented with other electronics and thus turned into a full amplifier. The reviews of this 700 € amplifier are surprisingly good (see: https://www.nubert.de/nuconnect-ampx/p3646/?category=225).

So, the speakers mark an entrance into a much broader eco-system. In my case a completely digitized audio center on a Linux workstation combined with the trX transceiver, the X3000 speakers, the AmpX and other already existing audio equipment in different rooms appears on the horizon.

Sound support on my Linux system

Working with two soundcards
As I have two sound cards available I kept the three front speakers and the subwoofer box of my old Creative speaker set. The front speakers are placed on my working table – the subwoofer on the floor. This allows for astonishing surround feelings even with stereo sound. A little contribution of these desktop speakers to the louder sound coming from the X3000 in the background and you “swim in an extended audio space”. Interesting for some kinds of music. Here the Pulseaudio mixer (pavucontrol) on a Linux system is of advantage to balance the sound contributions between the different channels of the active sound cards accurately and al gusto.

Regarding the Linux sound support in general
As a Linux user I have made my peace with Pulseaudio, pavucontrol, the Ladspa equalizer and KDE’s Phonon over the years. It is sometimes still a mess to reproduce working settings for multiple multi-channel sound cards after system upgrades – but once PA and Phonon do work as expected, they do their work well.

The last time strange things happened was when I upgraded to Opensuse Leap 15.2. Reason: Substantial changes to the Phonon user interface combined with a loss of differentiated setting options. As a result I had to manipulate the directives in the PA configuration files locally in my home directory and below /etc/pulse to get everything right again. The loss or hiding of options is a sickness that has spread over central KDE applications during the last years … Nowadays I always make a backup of my personal PA settings in my home directory and of the central Alsa and PA settings.

A major topic always is to find working settings which direct all sound output of any application through the Ladspa equalizer and then its output to multiple sound cards. On a KDE desktop such settings have to be consistent with Phonon settings – or the system will forget and overwrite your preferences with the next system start. Then you know that you have to manually change entries in the configuration files …
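As a hedged sketch of what such a setting can look like: the following lines for “~/.config/pulse/default.pa” load PA’s Ladspa sink module with the 15-band mbeq equalizer plugin (from the SWH plugin set) and make it the default sink. The master sink name below is an assumption – replace it with a name reported by “pactl list short sinks” on your system:

```
# Load a 15-band equalizer sink in front of the real output device;
# the 15 comma-separated control values are band gains in dB (flat curve here)
load-module module-ladspa-sink sink_name=eq_out master=alsa_output.pci-0000_00_1b.0.analog-stereo plugin=mbeq_1197 label=mbeq control=0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
# Route all application output through the equalizer by default
set-default-sink eq_out
```

Individual applications can still be moved to other sinks afterwards with pavucontrol.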

Be careful with your new speakers when experimenting and switching to new sound configurations – e.g. from analog to digital signals, changes of the sound card or moving from PA to pure Alsa. The resulting sound and, in some cases, also distortions may be louder than you expect! Always turn the volume of your external speakers to a minimum ahead of such experiments – and also reduce the volume of sound sources to a very low level.

During the last three to four years I have used the PA mixer “pavucontrol” to control the relative volumes of sound sources (i.e. applications) and the audio channels of the different sound cards on my system. But be careful with your settings here, too. In the past Pulseaudio did some strange things with audio signals from the system – e.g. turning them suddenly to 100%. I have not experienced such things in the past 3 years, but Nupro X boxes are too expensive to risk any accidental damage.

The 15-band PA Ladspa equalizer helps to define some basic sound presets with very slight adjustments – the Nupro speakers basically do not need any significant changes from a flat frequency curve of the equalizer.

Note that changes of the equalizer’s settings may be accompanied by a general volume reduction in pavucontrol and a loss of the relative channel weights there. Saving (and losing) presets of the equalizer is no fun either. Some mess will probably always remain with PA … You just need to invest some time into balanced presets – and then not touch the central equalizer again.

The good thing is that you can redirect the output of applications to a sound sink directly with pavucontrol. So, you can configure the sound output of music applications to run through an equalizer or not. Again – be careful with the impact of such changes on the volume.

My favorite player is still Clementine, at a 48,000 or 96,000 Hz sampling rate. It offers its own equalizer. If you want to fiddle with an equalizer, then use this one.

Sound extraction from CD recordings I do with K3B – to Flac (lossless) or Ogg Vorbis encoding.

Conclusion

The active Nupro X3000 RC speakers are worth the money you have to pay for them. They suit any Linux workstation well. The connection options to sound sources are manifold. Basic analog cable connections work, of course. A USB connection was directly supported on my Opensuse Linux. Optical and SPDIF coax connections to the respective output connectors of sound cards work well, too. The possibility to create a full WiFi-based solution with some extra (135 €) equipment from Nubert is an additional goody.

The setup and the configuration of a speaker pair were very simple. You get an integrated 5-band equalizer in each speaker, which allows for basic room and position adjustments.

The general sound quality is, in my opinion and for my ears, excellent. The speakers easily fill small rooms and even rooms up to 40 square meters with sound and provide a solid bass. The balance between bass, mid-range and treble fits my ears. Single instruments in complicated arrangements are well distinguishable. The positioning of sources in the stereo field is very good.

Links

https://www.igorslab.de/en/welcher-passt-besser-nubert-nupro-x-3000-rc-oder-nupro-x-4000-rc-und-die-qual-der-wahl-2/4/
https://www.lite-magazin.de/2018/11/aktivlautsprecher-nubert-nupro-x-3000-kompakte-komplettloesung-auf-audiophilem-niveau/
https://www.technic3d.com/article/audio/lautsprecher/2087-test-aktive-kompaktbox-nubert-nupro-x-3000-rc/1.htm

Enforcing specific command arguments for a selected user with sudo

In one of my last articles I needed to enforce the execution of a command with certain arguments – specific to the present user. I.e., I wanted to take away the user’s freedom to set arbitrary command arguments as he/she liked.

This had to be done in addition to another set of rules – namely a bunch of iptables filter rules which also depended on the UID. So the command had to be run with the UID of the user him/herself and not with the UID of root.

The solution came with sudo. This may appear a bit surprising to some readers. The reason is that sudo is normally used to allow selected users to run commands with another UID than the one they have themselves. I call “the other user” the “effective user” below. In a sudo context the effective user corresponds to a SUDO_UID variable in addition to the UID environment variable. The predominant example for invoking the sudo mechanism certainly is to allow users to run a command as root. But it can be extended to any other user (made harmless by taking away his/her login shell, or especially privileged due to membership in a special group).

In my case I needed to enforce command execution with an effective user identical to the original user him/herself – but with special arguments. In such a situation you have to take the permission to execute/read the original command completely away from the user. Otherwise he/she could use it with any argument. But sudo requires that the defined effective user is able to read and execute the command. This seemingly contradictory situation can be solved by invoking a special user group.

Maybe the recipe described below helps some readers to enforce command execution with specific arguments in other contexts.

A simple scenario

Let us assume you have developed a program “myprog” which accesses a special web-service that you have installed on some web-servers in your Intranet. Let us further assume that some specific users shall be restricted to accessing the service on a defined server only – and there only with certain arguments. Such parameters may reduce or grant rights to access certain data the service could in principle provide. All this is regulated by two arguments of the myprog program: a FQDN for the host and a “level”. “Level 0” allows free access to a very basic service version. “Level 1” invokes the program with personalized options and requires a login. But your people have started to play around with the login. So, you want a group of users to be able to issue the command with a certain host and “level 0” only. Let us assume that user “mark” is one of those users who should invoke the command only in the form

mark@mytux:~>myprog -h myserv.anraconc.de -L 0

How can we achieve this with sudo?

Restricting command use to specific arguments

Below I discuss modifications of the “/etc/sudoers” file. This is risky in very many ways – not only regarding security.

Disclaimer: I take no responsibility whatsoever for the consequences of the sudo approach described below and its application to your computers. The sudoers rules have to be tested carefully before they are used in a production environment and their setup should be supervised by an expert.

I assume that you have installed your program at the path “/usr/bin/myprog” with standard rights

-rwxr-xr-x 1 root root 334336 17. Mai 2020  /usr/bin/myprog

Then you can follow this recipe (as root) to get “mark” under control:

  • Step 1: Create a special group for the command in question, e.g. “mygroup”. Ensure that mark does not become a member of this group.
  • Step 2: Change the ownership and access rights of “/usr/bin/myprog” according to
    • chown root.mygroup /usr/bin/myprog
    • chmod 750 /usr/bin/myprog
  • Step 3: Add some lines to “/etc/sudoers” (with visudo):
     
    ....
    Defaults env_reset
    Defaults env_keep = "LANG LC_ADDRESS LC_CTYPE LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE LC_ATIME LC_ALL LANGUAGE LINGUAS XDG_SESSION_COOKIE"
    
    Defaults:mark env_keep += "DISPLAY"
    
    #Defaults targetpw   # ask for the password of the effective target user e.g. root
    #ALL   ALL=(ALL) ALL
    
    mark ALL=(mark:mygroup) /usr/bin/myprog -h myserv.anraconc.de -L 0
    ....
    

Explanation:
Due to step 2, user “mark” can no longer read, change or execute the command directly. The rest depends on ensuring that “mark” never becomes a member of group “mygroup”. (But other users whom you trust may become members.)

Regarding the sudoers rules I assumed that you reset the environment of a sudo user by default. Keeping the “DISPLAY” variable helps to get around some access problems with the current X11 screen of “mark”. I also assumed that you use the sudo mechanism in a way which requires the user to enter his/her own password. The last line allows “mark” on all hosts/terminals to execute “/usr/bin/myprog” as himself, but with the GID of group “mygroup” and exactly with the options “-h myserv.anraconc.de -L 0”.

Note that sudo compares the command including arguments as one string!
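To make this one-string comparison concrete, here is a hypothetical sudoers fragment contrasting the exact rule from above with a wildcard variant you should avoid:

```
# Exact match: only this literal argument string is accepted.
mark ALL=(mark:mygroup) /usr/bin/myprog -h myserv.anraconc.de -L 0

# DANGEROUS counter-example: a wildcard would match ANY argument list
# and defeat the whole purpose of the restriction.
# mark ALL=(mark:mygroup) /usr/bin/myprog *
```

Note that the argument order matters, too: an invocation with “-L 0 -h myserv.anraconc.de” would be rejected, because the concatenated string no longer matches the rule.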

User “mark” must run the command myprog from now on in the form:

sudo -u mark -g mygroup  myprog -h myserv.anraconc.de -L 0

and enter his password. Any deviation will be blocked by the sudoers mechanism.

Some practical hints

After carefully evaluating security implications you can make life easier for “mark” in two ways:

  • Let him execute the command (with the defined arguments) without providing a password. Use the NOPASSWD tag; see the man page for the sudoers file.
  • Write a script which encapsulates the described sudo command with the options.

By such measures, you may save “mark” some typing time.
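As a sketch of the second measure: a small wrapper script hard-codes the only invocation the sudoers rule permits, so “mark” only has to type one word. (The wrapper location under /tmp is for demonstration only; in production you would install it e.g. under /usr/local/bin – that path is my assumption, not part of the recipe above.)

```shell
#!/bin/bash
# Create a wrapper script that encapsulates the restricted sudo invocation.
WRAPPER=/tmp/run-myprog   # demo location; use e.g. /usr/local/bin in production

cat > "$WRAPPER" <<'EOF'
#!/bin/bash
# Run myprog as user mark with group mygroup and the fixed, allowed arguments.
exec sudo -u mark -g mygroup /usr/bin/myprog -h myserv.anraconc.de -L 0
EOF

chmod 755 "$WRAPPER"
```

If you combine this with the NOPASSWD tag, “mark” gets a single, password-less command – at the price you evaluated above.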

Another point is keeping the file permissions intact in the future. This may become a problem if and when you apply the described mechanism to standard Linux commands which are installed and updated by the package administration tools of your distribution. You have to check carefully that the installation routines do not overwrite the permission settings! A hand-written systemd service or a cron job may help you with this task.
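A minimal sketch of such a check-and-repair script, runnable from cron or a systemd timer (all paths are assumptions; the demo works on a file under /tmp, in production you would point PROG at /usr/bin/myprog and, as root, also re-apply the chown):

```shell
#!/bin/bash
# Re-assert restrictive permissions on the guarded command in case a
# package update has reset them.
PROG=/tmp/myprog-demo        # demo path; in production: PROG=/usr/bin/myprog

touch "$PROG"
chmod 755 "$PROG"            # simulate a package update widening the mode

if [ "$(stat -c '%a' "$PROG")" != "750" ]; then
    chmod 750 "$PROG"        # in production also: chown root:mygroup "$PROG"
fi
```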

With some reading or experience it should be easy to extend the described recipe to groups of users and to other commands.

There are multiple ways to allow other users to execute the command freely if this should be required. The sudoers file knows a logical NOT operator (!); this helps to add a sudoers rule for all users but NOT “mark”. Another simple approach would be to add all users except “mark” to the group “mygroup”.
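A hedged sudoers sketch of the negation idea (hypothetical – verify it against the sudoers man page before use, as negated user lists have well-known pitfalls):

```
# Hypothetical: every user EXCEPT "mark" may run myprog with any arguments,
# as any target user combined with the group "mygroup".
ALL, !mark ALL=(ALL:mygroup) /usr/bin/myprog
```

Remember that such users still need read/execute access at the file-system level, which step 2 above reserved for members of “mygroup”.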

Conclusion

The sudoers mechanism is a powerful Linux tool. We can not only allow users to execute commands as another user, but also with the permissions of another group. AND we can enforce the usage of commands with predefined arguments for selected users or user groups.

As fiddling with the sudoers mechanism is always a bit risky: please write me a mail if you find a major mistake or security problem in my approach.

Corona App – determining our position

I am tech-savvy, but I am no friend of data krakens. I consider a Corona app important and right – but if it is finally being introduced, then please with all cards on the table.

I have now had the German Corona app installed on my Android smartphone for a week and am increasingly annoyed. Reading a reader discussion on the topic in “Die Zeit” yesterday added to my irritation; see the debate on the article
https://www.zeit.de/digital/mobil/2020-06/tracing-app-corona-datenschutz-standortdaten-bluetooth-virus-bekaempfung.

There the debaters shower each other with accusations of ignorance [“no clue”, “cluelessness”, …] and are partly proud that they themselves (the smart alecks) know that access rights can be regulated (to a limited degree) per app (at the risk, by the way, that the app in question then no longer works). Sham battles among would-be gurus … and, in my view, missing the point.

The real issue is that activating the Corona app requires the user to enable location determination fundamentally and across the board, although the Corona app itself does not even use the data that thereby becomes obtainable.

The alleged reason for the required release: opaque Android specifics since Android version 6.0, which were softened again from version 8 onward. A blanket release of location determination is of course a feast for many other apps installed on an Android smartphone (prominently among them quite a few of Google's standard apps).

So after installing the Corona app you would have to configure the permissions of all other applications one by one. But the Corona app does not even point you to the necessary procedure! And I doubt that every user would check the permission settings of every installed app.

Activating the Corona app thus brings (economic) advantages for Google and other app makers which reach far beyond the development money for the Corona app. You rub your mask-chafed cheeks in astonishment …

Over the past week I asked several people (8) – among them 4 IT professionals – whether they were aware that activating the Corona app on their phone had led to a release of location determination. Seven knew nothing about it. Three replied that this was precisely not the case with the Corona app. One had read something during installation, but then clicked on …

Truly time for a “Standortbestimmung” – in both senses: determining the location, and taking stock – regarding the Corona app and the advance information provided by government and press. Precisely because I am not an Android expert myself, I want to address a few inconsistencies …

Three perspectives

Before I have to listen – like some commenters on the Zeit article – to the charge that I pass on lots of data to Google and Co anyway by using my smartphone, let me present my view of things:

The Corona perspective: For professional reasons I generally travel a lot by train. Last week, for example, I was on the road all week, right across Germany, in quite well-filled trains which DB itself displayed as more than 50% occupied. Because of my age I unfortunately already belong to the group of people with an elevated Corona risk. The Corona app does not, of course, protect me from these risks, but it would be an important piece of the puzzle in the effort to contact a doctor in time if a certain infection risk is assessed from the recorded data with a high probability. The app would also offer the chance to warn my older partner, who for various reasons belongs to the high-risk group, and to take suitable measures. That is why I installed the German Corona app on my smartphone on Tuesday of last week.

The smartphone perspective: My private Android smartphone is relatively old and runs Android 6. It has a good hardware base, lots of memory, a fast encrypted extra card, and performs better than some Android 8 smartphones of other family members. I use my smartphone primarily for reading newspapers (via the FF browser with an activated VPN solution). VLC lets me listen to music while doing so. Communication with fellow citizens takes place exclusively via Signal. Location determination, GPS and Bluetooth are normally switched off; Internet access via varying VPN servers is regularly activated. Removable apps have been removed. App permissions are reduced to a minimum (as far as possible). I use neither Google Chrome nor Google's search engine on the phone for standard searches in the browser; Noscript blocks Javascript until I allow it. I do not, of course, use Facebook and Whatsapp (after a thorough four-month study), and Youtube is uninteresting terrain for me on the smartphone and deactivated.
I therefore believe I have done what can be done without rooting the smartphone to at least not be directly trackable for Samsung and Google. Yes, I can already hear it: that is an illusion. True, but I reduce direct access as well as I can. My wife therefore considers my phone unusable – a good sign. My basic motto is: I have no friends on the Internet. And Google has so far certainly not met me as a friend, but as a commercial enterprise with a capitalistically driven interest in data about my person.

The perspective on our state in the current crisis: I harbor no fundamental distrust of our state and its institutions; rather of individual politicians and certain parties. I am very glad to live in Germany and appreciate our constitution more with every year of my life. I considered the measures of our federal government and most state governments regarding the Corona pandemic to be right. (By the way, it does no harm, when judging these measures, to do a little calculating yourself on the basis of the known numbers. But the ability to “calculate”, let alone a command of a little statistics, no longer seems to be part of the basic equipment of many fellow citizens, TV hosts and politicians … except for Dr. Merkel.) I welcomed the federal government's efforts to get an app off the ground. Also the – unfortunately late – step towards decentralized data storage and open source.

Location determination – ?

It was clear that the Corona app would use Bluetooth (Low Energy). The general implementation of Bluetooth on smartphones by the respective manufacturers has unfortunately so far stood out for its many security holes rather than as a solid communication technology. Normally no sensible person activates Bluetooth in a densely filled train compartment. But in Corona times one unfortunately has to compromise here. That was – as said – clear from the beginning and has, in my view, been sufficiently presented and debated.

But that you have to activate location determination (incl. GPS) across the board on Android smartphones – that was not part of the deal. In the public presentation beforehand, rather the opposite was emphasized. So I could not believe my eyes when, in the course of installing and activating the Corona app, I had to read on page 2 of a popup message:

“The following permissions are required for these functions [this was about Bluetooth]:
Device location
This setting is required so that Bluetooth devices near you can be found. However, the device location is not used for notifications about possible encounters with Covid-19-infected persons. Other apps with the permission for location determination have access to the device location.”

I almost skipped past it. The words “device location” did catch my eye; but because of the reports on television and in the press my expectation was that I would merely be informed that location determination was not necessary. Only when I was supposed to press “Activate” again did I become hesitant – the app or something else? And began to read …

I told an IT colleague about this on the train on Wednesday of last week. He had installed the app himself, but had quickly clicked through the two explanatory popup pages at the beginning. At first he did not want to believe my account. Until I showed him a warning message – at that time still in English (!) – which came up when you explicitly deactivated the location determination (incl. GPS) that the Corona app had activated. Meanwhile (app version 1.0.4) the message appears in German, but at least on my device it cannot be read in full directly in the notification area. You then have to look it up on the smartphone under your own “Google” settings:

“Notifications about possible encounters with Covid-19-infected persons. This feature is deactivated.”

Well, that is pretty unambiguous. No activation of location determination => no activation of notifications by the Corona app either. It was the same for my colleague (despite a much more recent phone). But the colleague only really believed it when I showed him the description of the technical requirements under the app's “privacy information”.
Interestingly, you find nothing about location determination under “Frequently asked questions”, but precisely under “privacy information” and there, far down, under point 7 “Technical requirements” – point b) Android smartphones:

“Location determination on your smartphone must be activated so that your device searches for Bluetooth signals of other smartphones. However, no location data is collected in the process.”

You get an almost identical piece of information, by the way, if you deactivate the Corona app via its own switch and then reactivate it.

Does activating location determination affect other apps? Oh yes! The easiest way to test this is via Google Maps or compass applications. They no longer ask whether e.g. GPS should be activated; you have indeed released that across the board in the course of activating the Corona app (unless you have imposed other restrictions elsewhere). Which does not mean that the Corona app itself uses location data. More on that below.

So much for the facts and for what the Corona app itself tells you.

What does research on the Internet show?

You can now do a little research on the Internet. In doing so you will find that you were not mistaken. On Android devices the blanket location release is in fact necessary. Even though the Corona app itself uses no location data … And what else do you quickly learn?

Point 1: Google couples “location determination” (incl. the general release of GPS for other apps) and the use of “Bluetooth Low Energy” factually and technically to each other. That was and is intended! At least in Android 6.x and 7.x. See:

stackoverflow.com/questions/33045581/location-needs-to-be-enabled-for-bluetooth-low-energy-scanning-on-android-6-0

https://www.mobiflip.de/shortnews/corona-warn-app-geraetestandort-unter-android/

You can activate Bluetooth separately (incl. the Low Energy functionality [LE]), but it does you no good if you do not simultaneously allow location determination. Reason: Bluetooth LE would otherwise allegedly not even start searching for other Bluetooth devices. Unfortunately, as a Corona app user you are not shown in the status line of the Android screen that location determination has been activated. You have to discover that yourself …

Interesting in this context is the realization that these two technically distinct functionalities – Bluetooth LE activation and the de facto release of location determination – have not been tied together in Apple's iOS. The Bluetooth chip is a separate unit on Android phones as well and has, technically, nothing directly to do with e.g. a rough additional location determination via Wifi or GPS or via other localized Bluetooth receiver devices. Nevertheless Google established the coupling at the operating system level (!). Shame on anyone who thinks ill of it ….

Point 2: The almost sheepish answer peddled in various places on the Internet is that this is handled this way because a location determination could also be performed via Bluetooth. I quote from https://developer.android.com/guide/topics/connectivity/bluetooth-le:

“Because discoverable devices might reveal information about the user’s location, the device discovery process requires location access. If your app is being used on a device that runs Android 8.0 (API level 26) or higher, use the Companion Device Manager API. This API performs device discovery on your app’s behalf, so your app doesn’t need to request location permissions.”

That is strange logic indeed. Because activating a technical transmitter could – indirectly and in combination with other technical functionalities or other devices – enable a location determination, you have to release location determination on your own device across the board for all other apps? But from Android 8 onward not any more? You can really fool yourself better on your own ….

See also:
https://android.stackexchange.com/questions/160479/why-do-i-need-to-turn-on-location-services-to-pair-with-a-bluetooth-device
https://www.t-online.de/digital/id_88069644/corona-warn-app-android-standort-freigeben-so-verhindern-sie-google-tracking.html
https://www.smartdroid.de/corona-warn-app-standortermittlung-kommt-von-google-ein-erklaerungsversuch/

Well, Google, why was I never made aware of this before when using Bluetooth LE? However you twist and turn it: there is no logical justification for mandatorily releasing location determination (incl. GPS use) for other apps, just because Bluetooth may make a location determination possible.

Point 3: Nevertheless it must be noted: yes, it is true, one (another Bluetooth user or another Bluetooth device) can indirectly determine the location of your phone via Bluetooth and e.g. Wifi. Among other things via the beacon functionality. Google, too, could (with Bluetooth activated on your device) probably always determine your location via third parties who were near you with activated Bluetooth, GPS or WiFi.

Point 4: The question remains why Google has coupled two actually completely independent functions (namely Bluetooth LE and location determination) so tightly together technically since Android 6. It could have been done differently: you activate Bluetooth and receive a warning that this also makes location determination possible – plus a description of the possible mechanisms. But that is not how it was set up … If, however, you consider other than purely technical reasons for the coupling, the whole thing suddenly makes a lot of sense; not technical sense, though, but economic sense. Coincidence?

Point 5: Bluetooth and Bluetooth LE … Do I as a user actually have control over which app uses what exactly? And whether an app does not, if needed, activate the location release in the background upon some query? If I switch on (normal) Bluetooth without explicitly activating location release and GPS, I currently see 10 devices with activated Bluetooth in our apartment building. For the Android layman there are thus four ways to assess the situation:
(a) Either activating location determination is not technically required at all for Bluetooth and the detection of other devices (contrary to Google's own statement for Bluetooth LE …). Fitting this variant: under the Bluetooth settings there is the option to actively make yourself visible.
(b) Or the location release is activated in the background – without me as the user being informed.
(c) A third variant is that the location release is used for the active search for other Bluetooth devices only in the case of “Bluetooth Low Energy” [BLE], as used for the Corona app – and in this case is technically unavoidable.
(d) I prefer not to think further about a distinctly worse variant.

Dear reader: can we agree that, without deeper technical insight into Android, this is at the very least an extremely dubious state of affairs? One which now, however, indirectly benefits Google considerably?

Point 6: Now, the Corona app itself does not use the location data that could in principle be determined. Here I trust those who have (hopefully) already combed through the app's code in detail. But what does Google possibly do, by other means, with the activated “rough” location determination? From discussions in newspapers (e.g. SZ, Die Zeit or the FAZ) you get no more than Google's assurance that it does not collect the data in the context of the Corona app. Should I believe that? Hmm, it is at least subject to criminal penalties ….

Point 7: But wait: there was still that little sentence at the end of the Corona app's own notes: “Other apps with the permission for location determination have access to the device location.” Apparently the other apps may then also use the location data. After all, the user has explicitly released location determination. To protect himself against Corona …

Aha! Who would be pleased? Google! 10 million downloads => 10 million devices with location determination released across the board for apps, free of charge – through activating the Corona app! No, the location determination is not performed by the Corona app itself – but (potentially) by other apps, which have now had the use of location determination unlocked via the activation of the Corona app. That is so ingenious you almost get envious.

Point 8: Now some smart alecks will say you can restrict the rights of the (many) other apps via the “application manager”. True. You can. But unfortunately the Corona app does not point this out at all. And have you, smart aleck, actually ever done that? Opened every application in the application manager and restricted its rights? It is, by the way, a very instructive exercise that I urgently recommend to every Android user … For the Corona app itself, by the way, the only permission you can switch off this way is the use of the camera (used when submitting a test result) …

Point 10: A little digging shows: a few settings for the Corona app can also be made in the context of the Google settings (open the Android application “Settings” and there “Google”!). There, by the way, you also learn definitively that the notification regarding contact with infected persons has been deactivated if you have in the meantime – for whatever reason – switched off the location release. And there my brow furrows again, deeply. The Corona app itself does not show you this; it simply keeps working away. An indication that the developers assumed the release of location determination would never be deactivated again?

The app behaves strangely when location determination is explicitly switched off

I switched off location determination on Wednesday afternoon of last week. Back then an English notice about the deactivation of the infection risk notification still came up. The app itself at first reported nothing at all about its unfortunate state. It kept running for at least three days, told me every day that it was active (contrary to the statement under my Google settings) and that my risk was still low. Today – Monday morning – it then told me that no risk could be determined because the app had not been able to “update” for 3 days. ????
Test: release location determination again. Corona notification re-activated in the app. App runs => low risk. Location determination explicitly deactivated => Corona app keeps running as if nothing had happened => “Low risk. Active on 6 of 14 days!”. But message under the Google settings: this feature (meaning the risk notification) is deactivated!

I would call that an inconsistency at the very least … actually it is a massive bug.

When asking other users of the Corona app, it turned out, by the way, that some of those questioned had indeed simply deactivated location determination again: namely as soon as they discovered by chance that it was active. Two thought they had probably forgotten to switch location determination off again after using Google Maps. They had ignored the message, which was not fully readable anyway. Nor could they directly attribute it to the Corona app. After all, the app was still “running” and showing a low risk …. Oh dear …

Assessment

I find the whole thing a difficult, even ugly, tangle. That is, to begin with, not the fault of the Corona app itself, and certainly not of our government. It lies solely with Google – and its coupling of the use of Bluetooth LE with the release of location determination – for all other apps. This coupling clearly plays into Google's hands economically, and not just now; rather, it massively supports Google's business model.

Yes, you can configure the rights of all other apps after installing the Corona app. But honestly: who does that? The average user does not even know where and how to manage access rights. Unfortunately, the Corona app does not draw your attention to this either.

Add to this that the Corona app displays or reports nothing when the supposedly mandatory release of location determination is switched off afterwards. That is either negligent – or the release of location determination is, contrary to all assurances, not needed after all. Neither alternative is good.

What would I have expected?

  • First of all, clear and unambiguous information about the necessary activation of location determination ahead of the app's release – from the government and also from Google. Later, clear information from the app itself. Plus information for those interested as to why two technically different functions were coupled to each other at all from Android 6 onward.
  • I would have expected the blanket activation of location determination to be indicated to me by an unmistakable symbol in the upper Android status bar in the course of activating the Corona app. Preferably in a signal color.
  • I would have expected detailed information that after installing and activating the Corona app you have to readjust the access rights of other apps – and, for the average consumer, information on how to do that.
  • I would fundamentally have expected Google to offer a technical solution that allows the blanket release of location determination to be bypassed. That was long overdue. Why did the customer (= the government) not exert more pressure there?
  • I would at least have expected the Corona app to indicate that it no longer works properly when I switch off location determination for whatever reason.
  • I would at least have expected an assessment from the CCC (Chaos Computer Club) of the blanket release of location determination for other apps under Android. Instead: https://www.zdf.de/nachrichten/politik/corona-app-launch-100.html. Surely much was exemplary … but open source alone is not sufficient as a seal of quality …

Conclusion: I am not satisfied with the app. The stale feeling remains that Google has once again taken us all for a ride, even if the Corona app itself is sensible. My lesson from all this: it is high time to develop an alternative to Android at the European level. But I will probably not live to see it …

Have I switched the Corona app off now? No. But I have further reduced the rights of other apps – with interesting consequences. Now Google has sent me a friendly mail asking me to please complete my Google account settings for the use of personalized advertising … Shame on anyone who thinks ill of it ….

Upgrading Win 7 to Win 10 guests on Opensuse/Linux based VMware hosts – I – some experiences

As my readers know I am not a fan of MS or any “Windows N” operating system – whatever the version number N. But some of you may be facing the same situation as me:

A customer or an employer enforces the use of MS products – such as MS Office, clients for MS Exchange, Skype for Business, Sharepoint, components for effort booking and so on. For the fulfillment of most of your customer's demands you can use browser-based interfaces or Linux clients.

However, something that regularly leads to problems is the heavy use of MS Office programs or graphics tools in their latest versions. Despite claims to the contrary: a frictionless back and forth between Libreoffice and MS Office is still a dream. Crossover Office is nice – but the latest MS Office versions are often not yet covered when you need them. Another very reasonable use for MS Windows guests on Linux is, by the way, training for pen-testing and security measures.

So, even Linux enthusiasts are sometimes forced to work with or within a native Windows environment. We would then use a virtualized Windows guest machine – on a Linux host with the help of VMware, KVM or Virtualbox. Regarding graphical performance, support for basic 3D features, DirectX and the latest USB versions in the emulated system environment, I tend to use VMware Workstation, despite its high price. Get me right: I practically never use VMware to virtualize Linux systems – for this purpose I use LXC containers or KVM. But for “Win 7” or “Win 10” VMware seemed to be a good choice – so far.

Upgrade to Win 10

During the last days of orchestrated panic regarding the transition from Windows 7 to Windows 10 I eventually gave in and upgraded some of my VMware-virtualized Windows 7 systems to Windows 10. More because I had some free time to get into this process than because I assumed a sudden drop in security. (As if we ever trusted in the security of Windows systems … I come back to security and privacy aspects in a second article.) However, over a horizon of some weeks or months the transition from Win 7 to Win 10 is probably unavoidable – if you cannot isolate your Windows machine completely from the Internet and/or from other external servers which bring a potential attack risk with them. The latter may even hold for servers of your clients.

I was a bit skeptical about the outcome of the upgrade procedure and the effort it would require on my side. A good friend of mine, who sells and administers Windows systems professionally, had told me that he had experienced a whole variety of different problems – depending on the Win 7 setup, the amount and character of application SW installed, hardware drivers and the validity of licenses.

Well, my Windows 7 Pro clients were equipped with rather elementary SW: MS Office in different versions, MS Project, Lexware, Adobe Creative Suite in an old version, some mind-mapping SW, Adobe Reader, anti-malware SW. The “hardware” of the virtual machines is standard, partially emulated by VMware with appropriate drivers. So, no need to be especially nervous.

To be on the safe side I also ordered a VMware WS Pro upgrade to version 15.X. (I own WS 12.5.9 and WS 14 licenses.) Reason: I had read that only WS 15.5 Pro fully supports the latest Win 10 versions. Well, reading without thinking may lead to a waste of resources – see below.

Another rumor you often hear is that Windows 10 requires rather new hardware and is quite resource-demanding. MS itself recommends buying a new PC or laptop on its web sites – of course often followed by advertisements for MS notebook models on the very same web page. Yeah, money makes the world go round. Well, regarding resources for my Windows guest systems I was/am rather restrictive:

Virtual machines for MS Win never get a lot of RAM from me – 4 GB at most. This is enough for office purposes. (All really resource-craving things I do on Linux 🙂 ). Neither do my virtualized Win systems get a lot of disk space – typically < 60 GB. I mostly use vmdk-files to provide virtual hard disks – without full space allocation at startup, but with dynamically added 4GB extents. vmdk-files allow for an easy movement of virtual machines and simple backup procedures. And I usually give my virtual Win machines a maximum of 2 processor cores. So, these limitations contributed a bit to my skepticism. In addition, I have 3D support enabled for my Win 7 guests in the virtual machine setup.

Meanwhile, I have successfully performed multiple upgrades on a rather old Linux host with an i7 950 CPU and on newer hosts with i7 6700K and modern i9 9900 processors. All hosts run Opensuse Leap 15.1; I have not yet found the time to test my Debian hosts.

I had some nice and some annoying experiences. I also found some aspects which you should take care of ahead of the Win 7 to Win 10 upgrade.

Make a backup!

As always with critical operations: make a backup first! This is quite easy with a VMware virtual machine based on “vmdk”-files: just copy the machine's directory with all its files to some Linux-formatted backup medium and preserve all access rights during copying (=> cp -dpRv). In case of partition-based virtual machines, make a copy of the partition with “dd”.
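As a sketch, such a backup could look like the following – note that the source and target paths are just examples for illustration and must be adjusted to your own layout:

```shell
# Back up a vmdk-based VMware guest before the upgrade.
# SRC and DST are example paths - adjust them to your setup.
SRC="$HOME/vmware/win7-pro"          # directory of the virtual machine
DST="/mnt/backup/vmware"             # Linux-formatted backup medium
if [ -d "$SRC" ]; then
    mkdir -p "$DST"
    # -d: preserve symlinks, -p: preserve owner/mode/timestamps, -R: recurse
    cp -dpRv "$SRC" "$DST"
else
    echo "Guest directory $SRC not found - set SRC first"
fi
# For a partition-based guest, an image copy with dd is the analogue:
#   dd if=/dev/sdXn of=/mnt/backup/win7-partition.img bs=64K status=progress
```

The guest should of course be shut down (not merely suspended) before you copy its files.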

If you should ever need to restore the virtual machine in its old state and copy your backup files back to their old places: VMware will notice this and ask whether you moved or copied the guest. Answer “moved” (!) – which appears a bit paradoxical. Otherwise there is a very high probability of trouble with your Windows license. VMware interprets a “copy”-operation as a duplication of a virtual machine and stores related information somewhere (?) which Windows evaluates. Windows will almost certainly ask for a reactivation of your installation if your Win license is an individual one – e.g. an OEM license.

Good news and potentially bad news regarding the upgrade to Win 10

The good news is:

  • Provided that you have valid licenses for your Win 7 and for all installed SW components, and provided that there is enough real and virtual disk space available, the Win 7 to Win 10 upgrade works smoothly. However, it takes a considerable amount of time.
  • I did not experience any performance problems after the upgrades – not even regarding transparency effects and other gimmicks in comparison to Windows 7. VMware’s 3D support for Win works – in WS 15 even for DirectX 10.

The time required depends partially on the bandwidth of your Internet connection and partially on the performance of your disk access as well as your CPU and the available RAM. In my case I had to invest around 1 hr – in those cases when everything went straight through.

The potentially bad news comprises the following points:

  • The upgrade requires a considerable amount of free space on your virtual machine’s hard disk, which will be used temporarily. So, you should carefully check the available disk space – inside the virtual machine and – a bit surprisingly – also on the Linux filesystem keeping the vmdk-files. I ran into problems with limited space for multiple upgrades on both sides; see below. Whether you will experience something similar depends on your safety-margin policies with respect to disk space in the guest and on the host.
  • A really annoying aspect of the upgrade had to do with VMware’s development and market strategy. From advertisements you may conclude that it would be best to use VMware WS 14 or 15 to handle Windows 10. However, on older Intel-based systems you should absolutely check whether the CPU is compatible with VMware WS 14 and 15 – before you think about upgrading a VMware WS 12 license to anything higher. On my Intel i7 950 neither WS 14 nor WS 15 worked at all. Even if you get these WS versions working by a trick (see below), they perform badly.
  • Then there is a certain privacy aspect. As said, the upgrade takes a lot of time during which you are connected to the Internet and to Microsoft servers. This is only partially due to the fact that Win 10 SW has to be downloaded during the upgrade process; there are more phases of information exchange. It is also quite understandable that MS has to analyze and check your system on a full scale. But do we know what Big Brother [BB] MS is doing during this time and what information/data they transfer to their own systems? No, we do not. So, if you have any sensitive data files on your system – how to protect them? You cannot isolate your Windows 10 during the upgrade. And even worse: Later on you will be more or less forced to perform updates within certain periods. So, how to keep sensitive data inaccessible for BB during the upgrade and beyond?

I address the first two aspects below. The last point of privacy is an interesting but complicated one. I shall discuss it in a separate article.

Which VMware workstation version should I use?

Do not get misguided by reports or advertisements on the Internet claiming that certain MS Win 10 versions require the latest version of VMware Workstation! WS 12 Pro was the first version which supported Win 10, back in late 2015. Now VMware 15.X has arrived. And yes, there are articles that claim incompatibility of VMware WS 12, WS 14 and early subversions of WS 15 with some of the latest Win 10 builds and updates. See the following links and the discussions therein:
https://communities.vmware.com/thread/608589
https://www.borncity.com/blog/2019/10/03/windows-10-update-kb4522015-breaks-vmware-workstation/
https://www.askwoody.com/forums/topic/vmware-12-and-newer-incompatible-with-windows-10-1903/

But read carefully: The statements on incompatibility refer mostly (if not only) to using a MS Win 10 system as a host for VMware! But we guys are using Linux systems as hosts.

Therefore the good message is:

Windows 10 as a VMware guest is already supported by VMware WS 12.5.9 Pro, which also runs on older CPUs. For all practical purposes and 2D graphics a Win 10 guest installation works quite well on a Linux host with VMware 12.5.9.

At least, I have not yet noticed anything wrong with Win 10 guests on my hosts with Opensuse Leap 15.1 and VMware WS 12.5.9 Pro. (Neither did I see problems with WS 14 or WS 15 on those hosts where I could use these versions.)

The compatibility of WS 12.5 with Win 10 guest on Linux is more important than you may think if your host has an older CPU. If you really want to spend money and use WS 14 or WS 15 please note:

WS 14 Pro and WS 15 Pro require that your CPU provides Intel VT-x virtualization technology and EPT abilities.

So, the potentially bad message for you as the still proud owner of an older but capable CPU is:

The present VMware WS versions 14 and 15 which support Win 10 fully (as guest and host system) may not be compatible with your CPU!

Check compatibility twice BEFORE you upgrade VMware Workstation ahead of a “Win 7 to Win 10”-upgrade. It would be a major waste of money if your CPU is not supported. And as stated: WS 12.5 does a good job with Win 10 guests.

VMware deserves a lot of criticism for their decision to ignore older processors with WS Pro versions > 14. See
https://communities.vmware.com/thread/572931
https://vinfrastructure.it/2018/07/vmware-workstation-pro-14-issues-with-old-cpu/
https://www.heise.de/newsticker/meldung/VMware-Workstation-14-braucht-juengere-Prozessoren-3847372.html
For me this is a good reason to try a bit harder with KVM for the virtualization of Windows – and drop VMware wherever possible.

There is a small trick, though, to get WS 14 Pro running on an i7 950 and other older processors: In the file “/etc/vmware/config” you can add the setting

monitor.allowLegacyCPU = "true"

See https://communities.vmware.com/thread/572804.
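Assuming root access on the Linux host, the setting can be appended like this (a sketch; the guard merely avoids duplicate entries):

```shell
# Add the legacy-CPU override to VMware's global config (run as root).
CONF=/etc/vmware/config
if [ -w "$CONF" ]; then
    # append the line only if it is not already present
    grep -q '^monitor.allowLegacyCPU' "$CONF" || \
        echo 'monitor.allowLegacyCPU = "true"' >> "$CONF"
else
    echo "$CONF not writable - run this as root on a VMware host"
fi
```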

But: I have tested this and found that a Win 7 start then takes around 3 minutes! You really have to be very patient … This is crazy – and for me unacceptable. Once you are logged in, the performance of Win 7 seems to be OK – maybe a bit sluggish. Still, I could not bear the waiting at boot time. So, I went back to WS 12 Pro on the machine with the i7 950.

Another problem for you may be that the installation of WS 12.5.9 on both Opensuse Leap 15.0 and 15.1 requires some special settings and tricks which I have written about in this blog. See:
Upgrade auf Opensuse Leap 15.0 – Probleme mit Nvidia-Treiber aus dem Repository und mit VMware WS 12.5.9
Upgrade Laptop to Opensuse 42.3, Probleme mit Bumblebee und VMware WS 12.5, Workarounds
The first article is relevant also for Opensuse 15.1.

Use the Windows Upgrade site and the Media Creation Tool page to save money

If you have a valid Win 7 license for all of your virtualized Win 7 installations, it is not necessary to spend money on a new Win 10 license. Microsoft’s offer of a cost-free upgrade to Win 10 still works. See e.g.:
https://www.cnet.com/how-to/windows-10-dont-wait-on-free-upgrade-because-windows-7-officially-done/
https://www.techbook.de/apps/kostenloses-update-windows-10
Follow the steps there – as I have done successfully myself.

Problems with disk space within the VMware Windows 7 guest during upgrade

My first Win 7 to Win 10 upgrade trial ran into trouble twice. The first problem occurred during the upgrade process and within the virtual machine:
I got a warning from the upgrade program at its start that I should free at least some 8.5 GByte.

Not so funny – as said, I am a bit picky about resources. The virtual guest machine had only a 60 GB C-disk. Fortunately, there were a lot of temporary files which could be deleted – actually Gigabytes, and partially years old; makes you wonder why Win 7 kept those files piled up. I could also move a bunch of data files to a D-disk. And I deinstalled some programs. All in all, it just worked out. The upgrade itself afterwards went friction-free and without further complaints.

So one message is:

Ensure that you have around 15 GB free on your virtual C-disk.

It is better to solve the problem of freeing C-disk space inside Win 7 without pressure – meaning: ahead of the upgrade to Win 10. If you run into the described problem it may be better to abort the Win 10 upgrade. I have tested this – and the Win 7 system was restored, apparently in good health. I got a strange message during reboot that the system was being prepared for first use – but afterwards everything was as before.

On another system I got a warning during the upgrade, when the “search for updates” began, that I should clear some 10 GByte of temporarily required disk space or attach an external drive (USB) to be used for temporary operations. The latter went OK in this case. But be careful: the USB disk must stay attached to the virtual machine across some reboots. Do not touch it until the upgrade has finished.

So, a second message is:

Be prepared to have some external device with around 20 GB free if you have a complex installation with a lot of application SW and/or a complex virtual HW configuration.

I advise you to check your external USB drive, USB stick or whatever you use for filesystem errors before attaching it. And have your VMware window active whilst attaching the device! VMware will then warn you that the Linux host may claim access to the device and you just have to click the buttons in the dialog boxes to give the VMware guest full control instead of the host OS.

If you are now thinking about a general enlargement of the virtual disk(s) of your existing Win 7 installation, please take the following into account:

On the one hand, an enlargement is of course possible and relatively easy to handle if you use vmdk-files for disk virtualization and have free space on the Linux partition which hosts them. VMware supports the resizing process in the disk section of the virtual machine “settings”. On Win 7 you can afterwards use the Win admin tools to extend the NTFS filesystem to the full extent of the newly configured disk.
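Instead of the GUI you can also grow a growable vmdk from the command line with VMware's vmware-vdiskmanager tool. A sketch – the path and the target size are examples; the guest must be powered off and all snapshots deleted first:

```shell
# Grow an existing growable vmdk to 80 GB (guest powered off, no snapshots).
VMDK="$HOME/vmware/win7-pro/win7-pro.vmdk"   # example path - adjust it
if command -v vmware-vdiskmanager >/dev/null 2>&1 && [ -f "$VMDK" ]; then
    vmware-vdiskmanager -x 80GB "$VMDK"      # -x: expand to the given size
else
    echo "vmware-vdiskmanager or $VMDK not found - adjust VMDK first"
fi
# Afterwards extend the NTFS filesystem inside Windows with the disk
# management tools, as described above.
```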

But, on the other hand, please consider that Windows may react allergically to a change of the main C-disk and request a new activation due to major hardware changes. 🙁

This is one of the points why we do not like Windows ….
So, how you solve a potential free-disk-space problem depends a bit on what you think is the bigger problem – reactivation, or freeing disk space by deletions, movement of files or deinstallations.

Addendum: Also check old restore points Win 7 may have created over time! After a successful upgrade to Win 10 I stumbled across an option to release all restore information for old installations (in this case for Win 7 and its kept restore points). This can give you back many Gigabytes if you have not deleted “restore point” data in your Win 7 for a long time. In my case I gained a remarkable 17 GB! => I should have deleted some of the old restore point data already before the upgrade.

Problems with disk space on the Linux host

The second problem with disk space occurred during or after some upgrades to Win 10: I ran out of space in the Linux filesystem containing the vmdk-files of my virtual machine. In one case the upgrade simply stopped. In another case the problem occurred a while after the upgrade – without me actually doing much on the new Win 10 installation. VMware suddenly issued a warning regarding the Linux filesystem and paused the virtual machine. At first I was a bit surprised, as I had not experienced this lack of space during normal usage of the previous Win 7 installation.

The explanation was simple: as said, I had set up the virtual disk such that the required space was not allocated at once, but as required. Due to the upgrade VMware had created all 4GB extents to provide the full disk space the guest needed. In addition I had activated “AutoProtect” snapshots in VMware (3 per day) – and the first automatically created snapshot after the upgrade required a lot of additional space on the Linux filesystem, due to heavy changes on the hard disk.

My virtualized machines most often reside on specific (encrypted) LVM-based Linux partitions. And there it just got tight – when VMware stopped the virtual machine, only 3.5 GB were left free. Not funny: you cannot kill snapshots on a paused virtual guest – the guest must be running or shut down. And if you want to enlarge a Linux partition – which is possible if there is (neighboring) space free on your hard disk – then the filesystem should best be unmounted. Well, you can enlarge a GPT-partition with the ext4-filesystem in operation (e.g. with YaST) – but it gives you an uncomfortable feeling.

In my case I decided to brutally power down the virtual machines. In one case where this problem occurred I could at least eliminate one snapshot. I could start the virtual machine then again and let Windows check the NTFS filesystems for errors. Then I shut down the virtual machine again, deleted another snapshot and used the tools of VMware to defragment and compact the virtual disks. This gave me a considerable amount of free GBs. Good!
Afterwards I additionally reduced the number of protection snapshots – where this seemed necessary.

On another system with a more important Win 7/10 installation I really extended the Linux partition and its ext4 filesystem by 20 GB – I had some spare space, fortunately – and then followed the steps just described.
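For the LVM case, extending the logical volume and growing the ext4 filesystem can be sketched as follows. The volume name and size are examples – check your actual setup with lvs and df first, and make sure the volume group has free extents:

```shell
# Extend the LVM volume holding the vmdk-files by 20 GB, then grow ext4.
LV=/dev/vg_vms/lv_vmware     # example logical volume - check with lvs
if [ -e "$LV" ] && command -v lvextend >/dev/null 2>&1; then
    lvextend -L +20G "$LV"   # requires 20 GB of free extents in the VG
    resize2fs "$LV"          # ext4 can be grown while mounted
else
    echo "Logical volume $LV not present - adjust LV first"
fi
```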

So, there is a whole spectrum of options to regain disk space after the upgrade. See also:
thebackroomtech.com : reduce-size-virtual-machine-disk-vmware-workstation/

My third message is:

Ensure a reasonable amount of free space in the Linux filesystem – for required extents and snapshots!
After the backup of your old Win 7 installation, eliminate all VMware snapshots which you do not absolutely need – in the snapshot manager, from left to right. Also use the VMware tools to defragment and compact your virtual disks ahead of the upgrade.
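The defragment and compact steps can also be done from the Linux command line with vmware-vdiskmanager (a sketch with an example path; shut the guest down and remove snapshots first):

```shell
# Defragment and shrink a vmdk to reclaim space (guest shut down, no snapshots).
VMDK="$HOME/vmware/win7-pro/win7-pro.vmdk"   # example path - adjust it
if command -v vmware-vdiskmanager >/dev/null 2>&1 && [ -f "$VMDK" ]; then
    vmware-vdiskmanager -d "$VMDK"   # -d: defragment the vmdk
    vmware-vdiskmanager -k "$VMDK"   # -k: shrink/compact the vmdk
else
    echo "vmware-vdiskmanager or $VMDK not found - nothing to do"
fi
```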

By the way: I hope it is clear that snapshots do NOT replace backups. You should make a backup of your successfully upgraded Win 10 installation after you have tested the functionality of your applications and before you start working seriously with your new Win 10. You do not want to go through the upgrade procedure again …

Addendum: Circumvent the enforcement of Windows 10 updates after your upgrade

Updates on Windows 7 have often led to trouble in the past – and as an administrator you were happy to have some control over the points in time when updates were downloaded and installed. After reading a bit, I got the impression that the situation has not changed much: there have been several major problems related to Win 10 updates since 2016. Yet Windows 10 enforces updates more rigidly than Win 7.

I, therefore, generally recommend the following:

Delay or stop automatic updates on Win 10. Then use VMware’s snapshot mechanism before manual updates, to be able to revert to a running Win 10 guest version. In this order.
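From the Linux host, such a pre-update snapshot can be taken with VMware's vmrun tool (a sketch; the vmx path and the snapshot name are examples):

```shell
# Take a named snapshot of the Win 10 guest before a manual update;
# "vmrun -T ws revertToSnapshot ..." rolls back if the update breaks things.
VMX="$HOME/vmware/win10-pro/win10-pro.vmx"   # example path - adjust it
if command -v vmrun >/dev/null 2>&1 && [ -f "$VMX" ]; then
    vmrun -T ws snapshot "$VMX" "before-win10-update"
else
    echo "vmrun or $VMX not found - adjust VMX first"
fi
```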

The first point is not as easy as it may seem – there are no basic, directly accessible options to only get informed about available updates as on Win 7. Win 10 enforces updates if you have enabled “Windows Update”; there is no “inform only” or “download only”. You either have to disable updates totally or delay them. The latter only works for a maximum period of 35 days. How to deactivate updates completely is described here:

https://www.easeus.com/todo-backup-resource/how-to-stop-windows-10-from-automatically-update.html
https://www.t-online.de/digital/software/id_77429674/windows-10-automatische-updates-deaktivieren-so-geht-s.html

There is also a description on “Upgrade” values for a related registry entry:
www.deskmodder.de/wiki/index.php/Automatische-Updates-deaktivieren-oder-auf-manuell-setzen-Windows-10#Windows_10_1607.2C-1703-Pro-Updates-auf-manuell-setzen-oder-deaktivieren

I am not sure whether this works on Win 10 Pro build 1909 – we shall see.

Conclusion

Win 7 and Win 10 can be run on VMware WS Pro versions 12.5 up to 15.5 on Linux hosts. Before you upgrade VMware WS, check for compatibility with your CPU! An upgrade of a Win 7 Pro installation on a VMware virtual machine to Win 10 Pro basically works smoothly – but you should take care to provide enough disk space within the virtual machine and also on the host’s filesystem containing the vmdk-files for the virtual disks.

It is not necessary to change the quality of the virtualized hardware configuration. Win 10 appears to be running with at least the same performance as the old Win 7 on a given virtual machine.

In the next article I will discuss some privacy aspects during the upgrade and after. The main question there will be: What can we do to prevent the transfer of sensitive data files from a Win 10 installation?