Samba 4, shares, wsdd and Windows 10 – how to list Linux Samba servers in the Win 10 Explorer

These days I relatively often need to work with Windows 10 at home (home office, corona virus, …). Normally, I isolate my own Win 10 instance in a VMware virtual machine on my Linux PC – and reduce any network connections of this VM to selected external servers. Under normal conditions all ports on the Linux host are closed to the virtual machine [VM]. But on a few temporary occasions I want the Win 10 system to access a specific Samba exchange directory on a KVM-virtualized Linux instance on the same host.

Off topic: You see that I never present directories of my Linux host directly to a Win 10 guest via Samba. Instead I transfer files via an exchange directory on an intermediate VM whose Samba service is configured to deny the Win system access to the shares presented to the host. A primitive, but effective form of separation. The only inconvenient consequence is that synchronization becomes a twofold process on the host and the Linux VM. But we have Linux tools for this, so the effort is limited.

Of course we want to use the SMB protocol in a modern version, i.e. version 3.x (SMB3), over TCP/IP for this purpose (port 445). In addition we need some mechanism to detect and browse SMB servers on the Windows system. In the old days NetBIOS was used for the latter. On the Linux side we had the nmbd-daemon for it – and we could set up a special Samba server as a WINS server.

During the last year Microsoft has – via updates and new builds of Windows 10 – followed a consistent policy of systematically deactivating the use of SMB V1.0. This, however, led to problems – not only between Windows PCs, but also between Win 10 instances and Samba 4 servers. This article addresses one of these problems: the missing list of available Samba servers in the Windows Explorer.

There are many contributions on the Internet describing this problem, and some even say that you can only solve it by restoring SMB V1 capabilities in Win 10. In this article I want to recommend two different solutions:

  • Ignore the problem of Samba server detection and use your Samba shares on Win 10 with the SMB3 protocol as network drives.
  • If you absolutely want to see and list your Samba servers in the Windows Explorer of a Win 10 client, use the “Web-Service-Discovery” service via a WSDD-daemon provided by a Python script of Steffen Christgau.

I myself got on the right track to solving this problem through an article by an author called “Stilez”. His article is the first one listed under the section “Links” below. I strongly recommend reading it; it is Stilez who deserves all credit for pointing out both the problem and the solution. I just applied his insights to my own situation with virtualized Samba servers based on Opensuse Leap 15.1.

SMB V1.0 should be avoided – but NetBIOS needs it to exchange information about SMB servers

SMB, especially version SMB V1.0, is well known for security problems. Even MS has understood this – especially after the Wannacry disaster. See e.g. the links in the section “Links” => “Warnings of SMBV1” at the end of this article. MS has deactivated SMB V1 in the background via some updates of Win 8 and Win 10.

One of the resulting problems is that we no longer see Samba servers in the Windows Explorer of a Win 10 system. In the section “Network” of the Windows Explorer you normally should see a list of servers which are members of a workgroup and offer shares.

Two years ago we would have used NetBIOS’s discovery protocol and a WINS server to get this information. Unfortunately, the NetBIOS service detection ability depends on SMB1 features. The stupid thing is that we have had a relatively secure SMB2/3 for a long while now, but NetBIOS discovery only worked with SMB V1 enabled on the Windows client. Deactivating SMB V1 means deactivating NetBIOS at the same time – and if you watch your firewall logs for incoming packets from your Win 10 clients you will notice that exactly this has happened there.

This actually means that you can have a full featured Samba/NetBIOS setup on the Linux side, that you may have opened the right ports on the firewalls for your Samba/WINS server and client systems, but that you will nevertheless not get any list of available Samba servers in Win 10’s Explorer. 🙁

Having understood this leads to the key question for our problem:

By what did MS replace the detection features of NetBIOS in combination with SMB-services?

Settings on the MS Win side – which alone will not help

When you google a bit you may find many hints regarding settings by which you activate network “discovery” functionalities via two Windows services. See

https://www.wintips.org/fix-windows-10-network-computers-not-showing/
https://winaero.com/blog/network-computers-not-visible-windows-10-version-1803/

You can follow these recommendations. If you want to see your own PC and other Windows systems in the Explorer’s list of network resources you must have activated them (see below). However, on my Win 10 client the recommended settings were already active – with the exception of SMB V1, which I did not and do not wish to reactivate. The “discovery” settings may help you with other, older Windows systems, but they do not enable a listing of Samba 4 servers without additional measures on Win 10.

There is another category of hints which in my opinion are counterproductive regarding security. See https://devanswers.co/network-error-problem-windows-cannot-access-hostname-samba/
Why activate an insecure setting? Especially as such a setting does not really help us with our specific problem? 🙁

A last set of hints concerns the settings on the Samba server itself. I find it especially nice when such recommendations come from Microsoft :-). See: http://woshub.com/cannot-access-smb-network-shares-windows-10-1709/

[global]
server min protocol = SMB2_10
client max protocol = SMB3
client min protocol = SMB2_10
encrypt passwords = true
restrict anonymous = 2
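
Whether such settings are actually in effect can be checked quickly on the server with the standard Samba tool testparm – a small sketch:

testparm -s | grep -i protocol

(testparm parses /etc/samba/smb.conf and prints the explicitly set parameters, so the protocol lines should show up if your configuration was loaded.)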

Thanks to MS we now understand that we should not use SMB V1 …. But, actually, these hints are again insufficient regarding the Explorer problem …

What you could do – but should NOT do

Once you have understood that NetBIOS and SMB V1 still have an intimate relation (at least on Windows systems) you may get the idea that there might be some option to reactivate SMB V1 on the Win 10 system. This is indeed possible. See here:
https://community.nethserver.org/t/windows-10-not-showing-servers-shares-in-network-browser/14263/4
https://www.wintips.org/fix-windows-10-network-computers-not-showing/

If you follow the advice of the authors and in addition re-open the standard NetBIOS ports (UDP) 137, 138 and (TCP) 139 on your firewalls between the Win 10 machine and your Samba servers you will – almost at once – get the list of your accessible Samba servers in the Network section of the Win 10 Explorer. (Maybe you have to restart the smb and nmb services on your Linux machines.)

But: You should not do this! SMB V1 should definitely become history!

Fortunately, a re-activation of SMB V1 on a Win 10 system is NOT required to mount Samba shares, nor is it required to get a list of available Samba servers in the Win 10 Explorer.

What you should do: Win 10 service settings

There are two service settings which are required to see other servers (and your own Win 10 PC itself) in the list of network hosts presented by the Windows Explorer:
Start services.msc (press the Windows key + R => enter “services.msc” in the dialog. Or: start it via the Control Panel => System and Security => Services).

  • Look for “Function Discovery Provider Host” => Set : Startup Type => Automatic
  • Look for “Function Discovery Resource Publication” => Set : Startup Type => Automatic (Delayed Start) !!
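
If you prefer the command line, the same two settings can be made with “sc” from an elevated prompt – a sketch, assuming the usual short service names fdPHost and FDResPub:

sc config fdPHost start= auto
sc config FDResPub start= delayed-auto
sc start fdPHost
sc start FDResPub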

I noticed that on my VMware Win 10 guests the second setting appeared to be crucial to get the Win 10 PC itself listed among the network servers.

What you should do: Use the SMBV3 protocol!

As a Linux user you have meanwhile probably replaced all your virtualized Win 7 guests, so you should use the following setting in the [global] section of the configuration file “/etc/samba/smb.conf” of your Samba servers:

[global]
protocol = SMB3

This is what Win 10 supports; you need SMB2_10 only with some builds of Win 8 (???). Remember also that port 445 must be open on any firewall between the Win 10 client and your Samba server.
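
From a Linux client you can quickly verify that the server works with the protocol capped at SMB3 – a sketch; server name and user are placeholders:

smbclient -m SMB3 -L //mysambaserver -U smbuser

If the share listing works with this maximum protocol setting, the SMB3 configuration does what it should.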

For Linux requirements to use SMB3 see
https://wiki.samba.org: SMB3 kernel status
For “SMB Direct” (RDMA) you normally need a kernel version > 4.16. On Opensuse Leap 15.1 most of the required kernel features have been backported. In Win 10 SMB Direct is normally activated; you find it in the “Windows Features” settings (https://www.windowscentral.com/how-manage-optional-features-windows-10).

Not seeing Samba servers in the Explorer does not mean that mounting a Samba share as a network drive does not work

Not seeing the Samba servers in the Win 10 Explorer – because the NetBIOS detection is defunct – does not mean that you cannot work with a Samba share on a Win 10 system. You can just “mount” it on Windows as a “network drive“:

Open a Windows Explorer, choose “This PC” on the left side, then click “Map network drive” in the upper area of the window and follow the instructions:
You choose a free drive letter and provide the Samba server name and its share in the usual MS form as “\\SERVERNAME\SHARE”.
Afterwards, you must activate the option “Connect using different credentials” in the dialog on the Win 10 side if your Win 10 user, for security reasons, has a different UID and password on the Samba server than on Win 10. Needless to say, this is a setting I strongly recommend – and of course we do not allow any direct anonymous or guest access to our Samba server without credentials delivered from a Windows machine (at least not without a central authentication system).
So, you eventually must provide a valid Samba user name on your Samba server and the password – and there you happily go and use your resources on the Samba share from your Win 10 client.
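
The same mapping can be done from a Windows command prompt – a sketch with placeholder names; the “*” makes Windows prompt for the Samba password:

net use S: \\SAMBA1\exchange * /user:smbuser /persistent:no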

I assumed of course that you have allowed access for the Win 10 host and the user via respective settings of “hosts allow” and “valid users” for the share in your Samba configuration.
Note: You need not mark the option for reconnecting the share in the Windows dialog for network drives if you only use the Samba exchange shares temporarily.
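
For illustration, a minimal share section with such “hosts allow” and “valid users” settings could look as follows – a sketch; path, user and address are placeholders:

[exchange]
    path = /srv/samba/exchange
    valid users = smbuser
    hosts allow = 192.168.100.10
    read only = no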

On an Opensuse system this works perfectly with the protocol settings for SMB3 on the server. So, you can use your shares even without seeing the Samba server in the Explorer: you just have to know what your shares are named and on which Samba servers they are located. No problem for a Linux admin.

In my opinion this approach is the most secure one among all “peer to peer” approaches which have to work without a central network-wide authentication service. It only requires opening port 445 for the duration of a Samba session to a specific Samba server. Otherwise you do not provide any information for free to the Win 10 system and its “users”. (Well, an open question is what MS really does with the provided Samba credentials. But that is another story ….)

What you should do: Use the WSDD service on your Samba server

If you allow for some information sharing between your virtualized Win 10 and other KVM-based virtual Samba machines in your LAN – and are not afraid that Microsoft or antivirus companies collect respective information on the Windows system – then there is a working option to get a stable list of the available Samba servers in the Windows Explorer – without the use of SMB V1.0.

Windows 10 implements web service detection via multiple mechanisms, among them multicast messages over port 3702 (UDP) and traffic over ports 5357 (TCP) and 1900 (UDP). For the detection of Samba services you “only” need ports 3702 (UDP) and 5357 (TCP). The general service detection port 1900 can remain closed in the firewalls between your Win 10 instances and your Linux world for our specific purpose. See
https://www.speedguide.net/port.php?port=5357
https://www.speedguide.net/port.php?port=3702
https://techcommunity.microsoft.com/t5/ask-the-performance-team/ws2008-the-wsd-port-monitor/ba-p/372760
https://en.wikipedia.org/wiki/Simple_Service_Discovery_Protocol

The mechanism using ports 3702 and 5357 is called “Web Service Discovery” [WSD] and was introduced by MS to cover the detection of printers and other devices in networks. In combination with SMB2 and SMB3 it is the preferred way to detect Samba services.

OK, do we have something like a counterpart available on a Linux system? Obviously, such a service is not (yet?) included in Samba 4 – at least not in the 4.9 version on my system with Opensuse Leap 15.1. The fact that WSD is not (yet?) a part of Samba may have good reasons; see the links below.
One can understand the reservations and the hesitation to include it, as WSD also serves other purposes than just the detection of SMB services.

Fortunately, a guy named Steffen Christgau has written an (interesting) Python 3 script which offers you the basic WSD functionality. See https://github.com/christgau/wsdd.

You can use the script in the form of a daemon process on a Linux system – hence we speak of WSDD.

Using YaST I quickly found out that a WSDD RPM package is actually included in my “Opensuse Leap 15.1 Update” repository. People with other Linux distros may download the current WSDD version from GitHub.
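
On Opensuse the installation is then a one-liner – assuming the package is simply named “wsdd”, as YaST showed it in my case:

zypper install wsdd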

On Opensuse it comes with an associated systemd service-file which you find in the directory “/usr/lib/systemd/system”.

[Unit]
Description=Web Services Dynamic Discovery host daemon
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
AmbientCapabilities=CAP_SYS_CHROOT
PermissionsStartOnly=true
Environment=WSDD_ARGS=-p
ExecStartPre=/usr/lib/wsdd/wsdd-init.sh
EnvironmentFile=-/run/sysconfig/wsdd
ExecStart=/usr/sbin/wsdd --shortlog -c /run/wsdd $WSDD_ARGS
ExecStartPost=/usr/bin/rm /run/sysconfig/wsdd
User=wsdd
Group=wsdd

[Install]
WantedBy=multi-user.target

Reading the documentation you find out that the daemon runs chrooted – which is a reasonable security measure.
Opensuse even provides an elementary configuration file in “/etc/sysconfig/wsdd“.

I used the parameter

WSDD_WORKGROUP="MYWORKGROUP"

there to announce the right Workgroup for my (virtualized) Samba server.

So, I had everything ready to start WSDD by “rcwsdd start” (or by “systemctl start wsdd.service”) on my Samba server.
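
If the daemon shall come up at every boot of the server, you can enable it permanently, e.g.:

systemctl enable --now wsdd.service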

On the local firewall of the SMB server I opened

  • port 445 (TCP) for SMB(3) In/Out for the server and from/to the Win-10-Client,
  • port 3702 (UDP) for incoming packets to the server and outgoing packets from the server to the Multicast address 239.255.255.250,
  • port 5357 (TCP) In/Out for the server and from/to the Win 10 client.

And: I closed all NetBIOS ports (UDP 137, 138 / TCP 139) and eventually stopped the “nmbd” service on the Samba server!
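
With firewalld (the default on Leap 15.1) the corresponding rules can be sketched as follows – assuming the relevant interface of the server lives in the zone “internal”:

firewall-cmd --permanent --zone=internal --add-port=445/tcp --add-port=5357/tcp --add-port=3702/udp
firewall-cmd --permanent --zone=internal --remove-port=137/udp --remove-port=138/udp --remove-port=139/tcp
firewall-cmd --reload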

Within a second or so, my Samba 4 server appeared in the Windows 10 Explorer!

Further hints:
As port 3702 is used with the UDP protocol it should be regarded as potentially dangerous. See: https://blogs.akamai.com/sitr/2019/09/new-ddos-vector-observed-in-the-wild-wsd-attacks-hitting-35gbps.html
The port 1900 which appeared in the firewall logs does not seem to be important. I blocked it.

So far, so good. However, when I refreshed the list in the Win 10 Explorer my SAMBA server disappeared again. 🙁

What you should do: Take special care about the network interface to which the WSDD service gets attached

It took me a while to find out that the origin of the last problem had to do with the fact that my virtualized server and my Win 10 client both had multiple network interfaces on virtualized bridges. There are no loops in the configuration, but multiple discovery packets arrived at the Samba server via different paths and were answered – and thus multiple return messages appeared at the Win 10 client during a refresh, which Win 10 did not like (see the discussion behind the following link):
https://github.com/christgau/wsdd/issues/8

As soon as I restricted the answer of the Samba server to exactly one of the interfaces on my virtual bridge via the parameter “WSDD_INTERFACES” in the “/etc/sysconfig/wsdd” configuration file everything went fine. Refreshes now lead to an immediate update of the list, including the Samba server.
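
For reference, the relevant lines in “/etc/sysconfig/wsdd” then look similar to the following sketch – workgroup and interface name are of course specific to your setup:

WSDD_WORKGROUP="MYWORKGROUP"
WSDD_INTERFACES="eth0"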

So, be a little careful, when you have some complicated bridge structures associated with your virtualized VMware or KVM guests. The WSDD service should be limited to exactly one interface of the Samba server.

Note: As we do not need NetBIOS any longer – block ports 137, 138 (UDP) and 139 (TCP) in your firewalls! It will make you feel better instantaneously.

Conclusion

The “end” of SMB V1 on Win 10 is a reasonable step. However, it undermines the visibility of Samba servers in the Windows Explorer. The reason is that NetBIOS requires SMB V1 features on Windows. NetBIOS was therefore consistently deactivated on Win 10, too. The service detection on the network is replaced by the WSD service, which was originally introduced for the detection of printers (and possibly other devices). Activating it on the Win 10 system may help with the detection of other Windows (8 and 10) systems on the network, but not with Samba 4 servers. Samba servers presently only serve NetBIOS requests of Windows clients to allow for server and share detection. Therefore, without additional measures, they are not displayed in the Windows Explorer of a regular Win 10 client.

This does, however, not restrict the usage of Samba shares on the Win 10 client via the SMB3 protocol. They can be used as “network drives” – just as before. Not distributing name and device information on a network has its advantages regarding security.

If you absolutely must see your Samba servers in the Win 10 Explorer, install and configure the WSDD package of Steffen Christgau. You can use it as a systemd service. You should restrict the interfaces WSDD gets attached to – especially if your Samba servers are attached to virtual network bridges (Linux bridges or VMware bridges).

So:

  • Disable SMBV1 in Windows 10 if an update has not yet done it for you!
  • Set the protocol in the Samba servers to SMBV3!
  • Try to work with “network drives” on your Win 10 guests, only!
  • Install, configure and use WSDD, if you really need to see your Samba servers in the Windows Explorer.
  • Open port 445 (TCP, In/Out between the Win 10 client and the server), port 3702 (UDP, Out from the server and the Win 10 client to 239.255.255.250, In to the server from the Win 10 client / In to the Win 10 client from the server; rule details depend on the firewall location) and port 5357 (TCP, In/Out between the Samba server and the Win 10 client) on your firewalls between the Samba server and the Win 10 system.
  • Close the NetBIOS ports in your firewalls!
  • You should also take care of stopping multicast messages from leaving perimeter firewalls; normally packets to multicast addresses should not be routed anyway, but blocking them explicitly for certain interfaces does no harm, either.

Of course you must repeat the WSDD and firewall setup for all your Samba servers. But as a Linux admin you have your tools for distributing common configuration files or copying virtualization setups.

Links

The real story
!!!! https://www.ixsystems.com/community/resources/how-to-kill-off-smb1-netbios-wins-and-still-have-windows-network-neighbourhood-better-than-ever.106/ !!!

https://forums.linuxmint.com/viewtopic.php?p=1799875

https://devanswers.co/discover-ubuntu-machines-samba-shares-windows-10-network/

https://bugs.launchpad.net/ubuntu/+source/samba/+bug/1831441

https://forums.opensuse.org/showthread.php/540083-Samba-Network-Device-Type-for-Windows-10

https://kofler.info/zugriff-auf-netzwerkverzeichnisse-mit-nautilus/

WSDD and its problems
https://github.com/christgau/wsdd
https://github.com/christgau/wsdd/issues/8
https://forums.opensuse.org/showthread.php/540083-Samba-Network-Device-Type-for-Windows-10

Warnings of SMB V1
https://docs.microsoft.com/de-de/windows-server/storage/file-server/troubleshoot/detect-enable-and-disable-smbv1-v2-v3
https://blog.malwarebytes.com/101/2018/12/how-threat-actors-are-using-smb-vulnerabilities/
https://securityboulevard.com/2018/12/whats-the-problem-with-smb-1-and-should-you-worry-about-smb-2-and-3/
https://techcommunity.microsoft.com/t5/storage-at-microsoft/stop-using-smb1/ba-p/425858
https://www.cubespotter.de/cubespotter/wannacry-nsa-exploits-und-das-maerchen-von-smbv1/

Problems with Win 10 and shares
https://social.technet.microsoft.com/Forums/en-US: cannot-connect-to-cifs-smb-samba-network-shares-amp-shared-folders-in-windows-10-after-update?forum=win10itpronetworking

RDMA and SMB Direct
https://searchstorage.techtarget.com/definition/Remote-Direct-Memory-Access

Other settings in the SMB/Samba environment of minor relevance
http://woshub.com/cannot-access-smb-network-shares-windows-10-1709/
https://superuser.com/questions/1466968/unable-to-connect-to-a-linux-samba-server-via-hostname-on-windows-10
https://superuser.com/questions/1522896/windows-10-cannot-connect-to-linux-samba-shares-except-from-smb1-cifs
https://www.reddit.com/r/techsupport/comments/3yevip/windows_10_cant_see_samba_shares/
https://devanswers.co/network-error-problem-windows-cannot-access-hostname-samba/

 

VMware WS 12.5.9 on Opensuse Leap 15.1

A blog reader asked me how to get VMware Workstation WS 12.5.9 running again after he upgraded Leap to version 15.1 on some older hardware. The reader referred to my article on running Win 10 as a VMware guest on an Opensuse Leap system.

Upgrading Win 7 to Win 10 guests on Opensuse/Linux based VMware hosts – I – some experiences

There I mentioned that the latest VMware WS version does not run on older hardware … Then you have to use VMware WS 12.5.9. But not only during your initial VMware installation, but also after kernel updates on a Leap 15.1 system, you are confronted with an error message when the VMware WS modules must be (re-)compiled.

I have actually written about this problem in 2018. But I admit that the relevant article is somewhat difficult to find as it has no direct reference to Leap 15.1. See:

Upgrade auf Opensuse Leap 15.0 – Probleme mit Nvidia-Treiber aus dem Repository und mit VMware WS 12.5.9

This article has a link to a VMware community article with a remedy to the mentioned problem:
https://communities.vmware.com/message/2777306#2777306

The necessary steps are the following:

# fetch the matching host-modules sources for WS 12.5.9
wget https://github.com/mkubecek/vmware-host-modules/archive/workstation-12.5.9.tar.gz
tar -xzf workstation-12.5.9.tar.gz
cd vmware-host-modules-workstation-12.5.9
# build and install the kernel modules (the install step requires root)
make
make install

and

# VMware keeps its bundled libfontconfig in a directory of the same name
cd /usr/lib/vmware/lib/libfontconfig.so.1
mv libfontconfig.so.1 libfontconfig.so.1.old
# link in the system library instead
ln -s /usr/lib64/libfontconfig.so.1

All credit is due to the guys “Kubecek” and “portsample”. Thx!

Upgrading Win 7 to Win 10 guests on Opensuse/Linux based VMware hosts – I – some experiences

As my readers know I am not a fan of MS or any “Windows N” operating system – whatever the version number N. But some of you may be facing the same situation as me:

A customer or an employer enforces the use of MS products – as e.g. MS Office, clients for MS Exchange, Skype for Business, Sharepoint, components for effort booking and so on. For the fulfillment of most of your customer’s demands you can use browser based interfaces or Linux clients.

However, something that regularly leads to problems is the heavy use of MS Office programs or graphics tools in their latest versions. Despite other claims: a frictionless back and forth between LibreOffice and MS Office is still a dream. Crossover Office is nice – but the latest MS Office versions are often not yet covered when you need them. Another very reasonable field of using MS Windows guests on Linux is, by the way, training for pen-testing and security measures.

So, even Linux enthusiasts are sometimes forced to work with or within a native Windows environment. We would then use a virtualized Windows guest machine – on a Linux host with the help of VMware, KVM or VirtualBox. Regarding graphical performance, support of basic 3D features, DirectX and the latest USB versions in the emulated system environment I have a tendency to use VMware Workstation, despite its high price. Get me right: I practically never use VMware to virtualize Linux systems – for this purpose I use LXC containers or KVM. But for Win 7 or Win 10 VMware seemed to be a good choice – so far.

Upgrade to Win 10

During the last days of orchestrated panic regarding the transition from Windows 7 to Windows 10 I eventually gave in and upgraded some of my VMware-virtualized Windows 7 systems to Windows 10. More because I had some free time to get into this process than because I assumed a sudden drop in security. (As if we ever trusted in the security of Windows systems … I come back to security and privacy aspects in a second article.) However, on a perspective of some weeks or months the transition from Win 7 to Win 10 is probably unavoidable – if you cannot isolate your Windows machine completely from the Internet and/or from other external servers which bring a potential attack risk with them. The latter may even hold for servers of your clients.

I was a bit skeptical about the outcome of the upgrade procedure and the effort it would require on my side. A good friend of mine, who sells and administers Windows systems professionally, had told me that he had experienced a whole variety of different problems – depending on the Win 7 setup, the amount and character of application SW installed, hardware drivers and the validity of licenses.

Well, my Windows 7 Pro clients were equipped with rather elementary SW: MS Office in different versions, MS Project, Lexware, Adobe Creative suite in an old version, some mind mapping SW, Adobe Reader, Anti malware SW. The “hardware” of the virtual machines is standard, partially emulated by VMware with appropriate drivers. So, no need to be especially nervous.

To be on the safe side I also ordered a VMware WS Pro upgrade to version 15.X. (I own WS 12.5.9 and WS 14 licenses.) Reason: I had read that only WS 15.5 Pro supports the latest Win 10 versions fully. Well, reading without thinking may lead to a waste of resources – see below.

Another rumor you often hear is that Windows 10 requires rather new hardware and is quite resource-demanding. MS itself recommends buying a new PC or laptop on its websites – of course often followed by advertisements for MS notebook models on the very same web page. Yeah, money makes the world go round. Well, regarding resources for my Windows guest systems I was/am rather restrictive:

Virtual machines for MS Win never get a lot of RAM from me – a maximum of 4 GB at most. This is enough for office purposes. (All really resource-craving things I do on Linux 🙂 ). Neither do my virtualized Win systems get a lot of disk space – typically < 60 GB. I mostly use vmdk files to provide virtual hard disks – without full space allocation at startup, but with dynamically added 4 GB extents. vmdk files allow for an easy movement of virtual machines and simple backup procedures. And I usually give my virtual Win machines a maximum of 2 processor cores. So, these limitations contributed a bit to my skepticism. In addition I have 3D support on for my Win 7 guests in the virtual machine setup.

Meanwhile, I have successfully performed multiple upgrades on a rather old Linux host with an i7 950 CPU and on newer hosts with i7 6700K and modern i9 9900 processors. All hosts run Opensuse Leap 15.1; I have not found the time to test my Debian hosts yet.

I had some nice and some annoying experiences. I also found some aspects which you should take care of ahead of the Win 7 to Win 10 upgrade.

Make a backup!

As always with critical operations: make a backup first! This is quite easy with a VMware virtual machine based on vmdk files: just copy the machine’s directory with all its files to some Linux-formatted backup medium and keep all the access rights during copying (=> cp -dpRv). In case of partition-based virtual machines, make a copy of the partition with “dd”.
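
A sketch of the latter – the source device and target path are of course placeholders, and the target filesystem must be big enough:

dd if=/dev/sdb5 of=/backup/win7-part.img bs=64M status=progress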

If you should need to restore the virtual machine in its old state and copy your backup files to their old places: VMware will notice this and ask you whether you moved or copied the guest. Then answer “moved” (!) – which appears a bit paradoxical. But otherwise there is a very high probability that trouble with your Windows license will follow. VMware interprets a “copy” operation as a duplication of a virtual machine and stores related information somewhere (?) which Windows evaluates. Windows will almost certainly ask for a reactivation of your installation in case your Win license was/is an individual one – e.g. an OEM license.

Good news and potentially bad news regarding the upgrade to Win 10

The good news is:

  • Provided that you have valid licences for your Win 7 and for all SW components installed and provided that there is enough real and virtual disk space available, the Win 7 to Win 10 upgrade works smoothly. However, it takes a considerable amount of time.
  • I did not experience any performance problems after the upgrades – not even regarding transparency effects and other gimmicks in comparison to Windows 7. VMware’s 3D support for Win works – in WS 15 even for DirectX 10.

The time required depends partially on the bandwidth of your Internet connection and partially on the performance of your disk access as well as your CPU and the available RAM. In my case I had to invest around 1 hr – in those cases where everything went straight through.

The potentially bad news comprises the following points:

  • The upgrade requires a considerable amount of free space on your virtual machine’s hard disk, which will be used temporarily. So, you should carefully check the available disk space – inside the virtual machine and – a bit surprising – also on the Linux filesystem keeping the vmdk-files. I ran into problems with limited space for multiple upgrades on both sides; see below. Whether you will experience something similar depends on your safety margin policies with respect to disk space in the guest and on the host.
  • A really annoying aspect of the upgrade had to do with VMware’s development and market strategy. From advertisements you may conclude that it would be best to use VMware WS 14 or 15 to handle Windows 10. However, on older Intel-based systems you should absolutely check whether the CPU is compatible with VMware WS 14 and 15. Check it before you think about upgrading a VMware WS 12 license to anything higher. On my Intel i7 950 neither WS 14 nor WS 15 worked at all. Even if you get these WS versions working by a trick (see below) they perform badly.
  • Then there is a certain privacy aspect. As said, the upgrade takes a lot of time during which you are connected to the Internet and to Microsoft servers. This is only partially due to the fact that Win 10 SW has to be downloaded during the upgrade process; there are more phases of information exchange. It is also quite understandable that MS has to analyze and check your system on a full scale. But do we know what Big Brother [BB] MS is doing during this time and what information/data they transfer to their own systems? No, we do not. So, if you have any sensitive data files on your system – how to protect them? You cannot isolate your Windows 10 during the upgrade. And even worse: Later on you will be more or less forced to perform updates within certain periods. So, how to keep sensitive data inaccessible for BB during the upgrade and beyond?

I address the first two aspects below. The last point of privacy is an interesting but complicated one. I shall discuss it in a separate article.

Which VMware workstation version should I use?

Do not get misguided by reports or advertisements on the Internet claiming that certain MS Win 10 builds require the latest version of VMware Workstation! WS 12 Pro was the first version which supported Win 10, back in late 2015. Now VMware WS 15.X has arrived. And yes, there are articles that claim incompatibility of VMware WS 12, WS 14 and early subversions of WS 15 with some of the latest Win 10 builds and updates. See the following links and discussions therein:
https://communities.vmware.com/thread/608589
https://www.borncity.com/blog/2019/10/03/windows-10-update-kb4522015-breaks-vmware-workstation/
https://www.askwoody.com/forums/topic/vmware-12-and-newer-incompatible-with-windows-10-1903/

But read carefully: The statements on incompatibility refer mostly (if not only) to using a MS Win 10 system as a host for VMware! But we guys are using Linux systems as hosts.

Therefore the good message is:

Windows 10 as a VMware guest is already supported by WS 12.5.9 Pro, which also runs on older CPUs. For all practical purposes and 2D graphics a Win 10 guest installation works quite well on a Linux host with VMware WS 12.5.9.

At least, I have not yet noticed anything wrong on my hosts with Opensuse Leap 15.1 and VMware WS 12.5.9 PRO for a Win 10 guests. (Neither did I see problems with WS 14 or WS 15 on those hosts where I could use these versions).

The compatibility of WS 12.5 with Win 10 guests on Linux is more important than you may think if your host has an older CPU. If you really want to spend money and use WS 14 or WS 15 please note:

WS 14 Pro and WS 15 Pro require that your CPU provides Intel VT-x virtualization technology and EPT abilities.

So, the potentially bad message for you as the still proud owner of an older but capable CPU is:

The present VMware WS versions 14 and 15 which support Win 10 fully (as guest and host system) may not be compatible with your CPU!

Check compatibility twice BEFORE you upgrade VMware Workstation ahead of a “Win 7 to Win 10” upgrade. It would be a major waste of money if your CPU is not supported. And as stated: WS 12.5 does a good job with Win 10 guests.

VMware has earned a lot of criticism with their decision to ignore older processors with WS Pro versions 14 and higher. See
https://communities.vmware.com/thread/572931
https://vinfrastructure.it/2018/07/vmware-workstation-pro-14-issues-with-old-cpu/
https://www.heise.de/newsticker/meldung/VMware-Workstation-14-braucht-juengere-Prozessoren-3847372.html
For me this is a good reason to try a bit harder with KVM for the virtualization of Windows – and drop VMware wherever possible.

There is a small trick, though, to get WS 14 Pro running on an i7 950 and other older processors: In the file “/etc/vmware/config” you can add the setting

monitor.allowLegacyCPU = "true"

See https://communities.vmware.com/thread/572804.

But: I have tested this and found that a Win 7 start takes around 3 minutes! You really have to be very patient … This is crazy – and for me unacceptable. Once you are logged in, the performance of Win 7 seems to be OK – maybe a bit sluggish. Still, I cannot bear the waiting at boot time. So, I went back to WS 12 Pro on the machine with the i7 950.

Another problem for you may be that the installation of WS 12.5.9 on both Opensuse Leap 15.0 and 15.1 requires some special settings and tricks which I have written about in this blog. See:
Upgrade auf Opensuse Leap 15.0 – Probleme mit Nvidia-Treiber aus dem Repository und mit VMware WS 12.5.9
Upgrade Laptop to Opensuse 42.3, Probleme mit Bumblebee und VMware WS 12.5, Workarounds
The first article is relevant also for Opensuse 15.1.

Use the Windows Upgrade site and the Media Creation Tool page to save money

If you have a valid Win 7 license for all of your virtualized Win 7 installations it is not required to spend money on a new Win 10 license. Microsoft’s offer for a cost free upgrade to Win 10 still works. See e.g.:
https://www.cnet.com/how-to/windows-10-dont-wait-on-free-upgrade-because-windows-7-officially-done/
https://www.techbook.de/apps/kostenloses-update-windows-10
Follow the steps there – as I have done successfully myself.

Problems with disk space within the VMware Windows 7 guest during upgrade

My first Win7 to Win10 upgrade trial ran into trouble twice. The first problem occurred during the upgrade process and within the virtual machine:
I got a warning from the upgrade program at its start that I should free at least some 8.5 GByte.

Not so funny – as said, I am a bit picky about resources. The virtual guest machine had only a 60 GB C-disk. Fortunately, there were a lot of temporary files which could be deleted. Actually Gigabytes, and partially years old – makes you wonder why Win 7 kept those files piled up. I also could move a bunch of data files to a D-disk. And I deinstalled some programs. All in all it just worked out. The upgrade itself afterwards went friction-free and without further problems.

So one message is:

Ensure that you have around 15 GB free on your virtual C-disk.

It is better to solve the problems with freeing C-disk space inside Win 7 without pressure – meaning: ahead of the upgrade to Win 10. If you run into the described problem it may be better to abort the Win 10 upgrade. I have tested this – and the Win 7 system was restored, apparently in good health. I got a strange message during reboot that the system was being prepared for first use – but afterwards everything was as before.

On another system I got a warning during the upgrade, when the “search for updates” began, that I should clear some 10 GByte of temporarily required disk space or attach an external drive (USB) to be used for temporary operations. The latter went OK in this case. But be careful: the USB disk must be kept attached to the virtual machine over some reboots. Do not touch it until the upgrade has finished.

So, a second message is:

Be prepared to have some external device with some free 20 GB ready if you have a complex installation with a lot of application SW and/or a complex virtual HW configuration.

I advise you to check your external USB drive, USB stick or whatever you use for filesystem errors before attaching it. And have your VMware window active whilst attaching the device! VMware will then warn you that the Linux host may claim access to the device and you just have to click the buttons in the dialog boxes to give the VMware guest full control instead of the host OS.

If you now should think about a general enlargement of the virtual disk(s) of your existing Win 7 installation please take into account the following:

On the one hand, an enlargement is of course possible and relatively easy to handle if you use vmdk files for disk virtualization and have free space on the Linux partition which hosts the vmdks. VMware supports the resizing process in the disk section of the virtual machine “settings”. Afterwards, on Win 7, you can use the Windows admin tools to extend the NTFS filesystem to the full extent of the newly configured disk.
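
Alternatively, the enlargement can be sketched on the command line with VMware’s disk tool – the path and the new size are examples, and the guest should be powered off and free of snapshots:

vmware-vdiskmanager -x 80GB /vms/win7/Win7-disk.vmdk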

But, on the other hand, please consider that Windows may react allergically to a change of the main C-disk and request a new activation due to major hardware changes. 🙁

This is one of the points why we do not like Windows ….
So, how you solve a potential disk space problem depends a bit on what you think is the bigger problem – reactivation, or freeing disk space by deletions, movement of files or deinstallations.

Addendum: Also check old restore points Win 7 may have created over time! After a successful upgrade to Win 10 I stumbled across an option to release all restore information for old installations (in this case for Win 7 and its kept restore points). This will give you again many Gigabytes if you had not deleted “restore point” data for a long time in your Win 7. In my case I gained remarkable 17 GB! => Should have deleted some old restore points data already before the upgrade.

Problems with disk space on the Linux host

The second problem with disk space occurred after or during some upgrades to Win 10: I ran out of space in the Linux filesystem containing the vmdk files of my virtual machine. In one case the upgrade simply stopped. In another case the problem occurred a while after the upgrade – without me actually doing much on the new Win 10 installation. VMware suddenly issued a warning regarding the Linux file system and paused the virtual machine. I was first a bit surprised as I had not experienced this lack of space during normal usage of the previous Win 7 installation.

The explanation was simple: as said, I had set up the virtual disk such that the required space was not allocated at once, but as required. Due to the upgrade VMware had created all 4 GB extents to provide the full disk space the guest needed. In addition I had activated “Autoprotect Snapshots” in VMware (3 per day) – the first automatically created snapshot after the upgrade required a lot of additional space on the Linux file system – due to heavy changes on the hard disk.

My virtualized machines most often reside on specific (encrypted) LVM-based Linux partitions. And there it just got tight – when VMware stopped the virtual machine only 3.5 GB were left free. Not funny: you cannot kill snapshots on a paused virtual guest – the guest must be running or shut down. And if you want to enlarge a Linux partition – which is possible if there is (neighboring) space free on your hard disk – then the filesystem should best be unmounted. Well, you can enlarge a GPT partition with the ext4 filesystem in operation (e.g. with YaST) – but it gives you an uncomfortable feeling.

In my case I decided to brutally power down the virtual machines. In one case where this problem occurred I could at least eliminate one snapshot. I could start the virtual machine then again and let Windows check the NTFS filesystems for errors. Then I shut down the virtual machine again, deleted another snapshot and used the tools of VMware to defragment and compact the virtual disks. This gave me a considerable amount of free GBs. Good!
Afterwards I additionally reduced the number of protection snapshots – if this still seemed to be necessary.

On another system with a more important Win 7/10 installation I really extended the Linux partition and its ext4 filesystem by 20 GB – I had some spare space, fortunately – and then followed the steps just described.

So, there is a whole spectrum of options to regain disk space after the upgrade. See also:
thebackroomtech.com: reduce-size-virtual-machine-disk-vmware-workstation/

My third message is:

Ensure a reasonable amount of free space in the Linux filesystem – for required extents and snapshots!
After the backup of your old Win 7 installation, eliminate all VMware snapshots which you do not absolutely need – in the snapshot manager from the left to the right. Also use the VMware tools to defragment and compact your virtual disks ahead of the upgrade.
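
The defragment and compact steps can also be sketched on the command line – again with a placeholder path and a powered-off guest:

vmware-vdiskmanager -d /vms/win7/Win7-disk.vmdk
vmware-vdiskmanager -k /vms/win7/Win7-disk.vmdk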

By the way: I hope that it is clear that snapshots do NOT replace backups. You should make a backup of your successfully upgraded Win 10 installation after you have tested the functionality of your applications and before you start working seriously with your new Win 10. You do not want to go through the upgrade procedure again ..

Addendum: Circumvent the enforcement of Windows 10 updates after your upgrade

Updates on Windows 7 have often led to trouble in the past – and as an administrator you were happy to have some control over the points in time for downloading and installing updates. After reading a bit, I got the impression that the situation has not changed much: some major problems related to Win 10 updates have occurred since 2016. Yet, Windows 10 enforces updates more rigidly than Win 7.

I, therefore, generally recommend the following:

Delay or stop automatic updates on Win 10. Then use VMware’s snapshot mechanism before manual updates to be able to turn back to a running Win 10 guest version. In this order.

The first point is not as easy as it may seem – there are no basic and directly accessible options to only get informed about available updates as on Win 7. Win 10 enforces updates if you have enabled “Windows Update”; there is no “inform only” or “download only”. You have to either disable updates totally or delay them. The latter only works for a maximum period of 35 days. How to deactivate updates completely is described here:

https://www.easeus.com/todo-backup-resource/how-to-stop-windows-10-from-automatically-update.html
https://www.t-online.de/digital/software/id_77429674/windows-10-automatische-updates-deaktivieren-so-geht-s.html

There is also a description on “Upgrade” values for a related registry entry:
www.deskmodder.de/wiki/index.php/Automatische-Updates-deaktivieren-oder-auf-manuell-setzen-Windows-10#Windows_10_1607.2C-1703-Pro-Updates-auf-manuell-setzen-oder-deaktivieren

I am not sure whether this works on Win 10 Pro build 1909 – we shall see.

Conclusion

Win 7 and Win 10 can be run on VMware WS Pro versions 12.5 up to 15.5 on Linux hosts. Before you upgrade VMware WS check for compatibility with your CPU! An upgrade of a Win 7 Pro installation on a VMware virtual machine to Win 10 Pro basically works smoothly – but you should take care to provide enough disk space within the virtual machine and also in the host’s filesystem containing the vmdk files for the virtual disks.

It is not necessary to change the quality of the virtualized hardware configuration. Win 10 appears to be running with at least the same performance as the old Win 7 on a given virtual machine.

In the next article I will discuss some privacy aspects during the upgrade and after. The main question there will be: What can we do to prevent the transfer of sensitive data files from a Win 10 installation?

 

VMware Workstation – Virtual Ethernet failed – sometimes for good!

I use VMware Workstation as a hypervisor for hosting a few MS Windows guests, which I need in customer projects. As I do not trust Windows systems I sometimes place such guests in different virtual and isolated host-only networks on my workstation or a dedicated server. On other systems I run virtualized Linux server systems for production and tests with the help of KVM/QEMU, LXC and libvirt. Some time ago I learned the hard way that I have to keep track of the different “local” virtual networks more thoroughly. As the VMware side was affected by a misconfiguration in a rather peculiar way I thought this experience might be interesting for others, too.

Host interface to VMware virtual network sometimes not available?

The whole thing coincided with an upgrade of one of my Opensuse workstations. I am always a bit nervous about whether and how such an upgrade may impact the VMware WS installation. Quite often I experienced problems with the automatic compilation in the past. In addition the start of some modules led to secondary errors. However, at a first test VMware WS 14 compiled without problems on a system with Opensuse Leap 15. No wonder, I thought, as the kernel version 4.12 of Leap 15 is relatively old.

Then I had to set up a Windows guest. I gave it an IP address within a virtual VMware host-only network, which covered an IPv4 address range of a class C network. VMware uses a kind of virtual bridge to support such a network; normally one associates a virtual network device on the host with it, to which you can assign a certain IP address in the net’s address range. For a C-net VMware uses yyy.yyy.yyy.1 as a default on the host’s interface (with yyy.yyy.yyy defining the C-network). In my case the device on the host was named “vmnet6”. In the beginning this virtual interface also worked as expected.

However, during the following days I noticed trouble with “vmnet6”. In a normal host configuration the VMware WS initialization script (“/etc/init.d/vmware”) is started as a LSB service by systemd. The script loads VMware modules and configures the defined virtual networks. Unfortunately, relatively often, my “vmnet6” interface did not become directly available after the start of the Linux host. Some experiments showed however that I could circumvent this problem by some “dubious” action:

I had to enter the “Virtual Network Editor” with root rights and save the already existing virtual network again. Then ifconfig or the ip command listed the interface “vmnet6” – which also seemed to work normally.

You get a strange feeling when you are forced to remedy something as root …. However, I ignored this problem, which I could not solve directly, for a while. I also noticed that the problem did not occur all the time. This should have rung a bell … but I was too stupid. Until yesterday, when I needed to set up another special MS Win guest in the same address range.

Virtual Ethernet failed

In addition I upgraded to VMware WS Version 14.1.3. The (automatic) compilation of the VMware WS modules on Opensuse Leap 15 again worked without major problems. But, when I manually (re-) started the VMware modules via “/etc/init.d/vmware restart” I saw an error message:

“Virtual Ethernet failed”.

Such a message makes you nervous because you assume some real big problem with the VMware modules. However, starting some of my VMware guests (not the ones linked to vmnet6) proved that these guests operated flawlessly and could communicate via their own and the host’s network devices. But my “vmnet6” did not appear in the output of “ip a s”.

The problem “Virtual Ethernet failed” is reported in some Internet articles; however without a real solution. See e.g. https://ubuntuforums.org/showthread.php?t=1592977; https://www.linuxquestions.org/questions/slackware-14/vmware-workstation-unable-to-start-services-4175547448/; https://communities.vmware.com/thread/264235.

Now, I seriously began to wonder whether and how this problem was related to the already known problem with my virtual device “vmnet6”. Some tests quickly showed that the error message disappeared when I eliminated the host-only network related to “vmnet6”. Then I tried a new host-only test network with a device “vmnet7” and a different IP range. Also then the “virtual Ethernet” started flawlessly. So, what was wrong with “vmnet6” and/or its IP range? Some residual garbage from old installations? A search for configuration files gave me no better ideas.

vnetlib-log

After a minute I thought: Maybe VMware is right. A look into the log-file “/var/log/vnetlib” revealed:

Oct 14 14:22:49 VNL_Load - LOG_ERR logged
Oct 14 14:22:49 VNL_Load - LOG_WRN logged
Oct 14 14:22:49 VNL_Load - LOG_OK logged
Oct 14 14:22:49 VNL_Load - Successfully initialized Vnetlib
...
Oct 14 14:22:49 VNL_StartService - Started "Bridge" service for vnet: vmnet0
Oct 14 14:22:49 VNLPingAndCheckSubnet - Return value of vmware-ping: 0
...
Oct 14 14:22:50 VNL_CheckSubnetAvailability - Subnet: xxx.xxx.xxx.xxx on vnet: vmnet2 is available
...
Oct 14 14:22:49 VNL_CheckSubnetAvailability - Subnet: yyy.yyy.yyy.yyy on vnet: vmnet6 is not available
Subnet on vmnet6 is no longer available for usage, please run the network editor to reconfigure different subnet
Oct 14 14:22:50 VNL_CheckSubnetAvailability - Subnet: xxx.xxx.xxx.xxx on vnet: vmnet7 is available
Oct 14 14:22:50 VNL_CheckSubnetAvailability - Subnet: xxx.xxx.xxx.xxx on vnet: vmnet8 is available
.....

(I have replaced real addresses by xxx and yyy.) So, obviously VMware at startup performs a kind of ping check beyond the locally defined virtual devices and networks. Interesting! Why do they ping?

The answer is trivial: When you set up the (virtual) internal bridge for the host-only network you may want to guarantee that the IP-address for the respective host device (“vmnet6”) does not exist anywhere else in the reachable network – especially as the host’s interface of a host-only network may be used for a host controlled routing (and NAT) of the otherwise isolated VMware guests to the outside world.

Address overlap

And this resolved my problem: A manual ping for the address of the planned host’s interface address yyy.yyy.yyy.1 indeed gave me a result. I located the source on another server, where I sometimes perform tests of different virtual network configurations with KVM guests, LXC containers and libvirt. Unfortunately, I had not disabled all of my test networks after my last tests. Due to default DHCP-settings a virtual interface with address yyy.yyy.yyy.1 got established on the test server whenever it was booted. This also explained the finding that the VMware WS problem did not occur always – it only happened when the test server was active in my physical LAN!

Then I actually remembered that I had had a similar problem once in 2016, whilst experimenting with QEMU/VMware-bridge-coupling. (See: Opensuse/Linux – KVM, VMware WS – virtuelle Brücken zwischen den Welten). So, I should have known better and planned my “local” virtual experiments a bit more carefully to avoid address overlaps with other potential virtual networks on different hosts in the LAN.

Stupid me … The VMware WS startup script was absolutely right not to establish a second address of that kind on my workstation! This led to the error message “Virtual Ethernet failed” – which in my opinion should include a bit more detailed information or a hint to a log file. But the fault clearly was on my side: one should not think in terms of a local host environment when using VMware WS for virtual networks, but consider the global network configuration.

Still, the whole VMware handling of a possible IP address overlap left me a bit puzzled :

Why can you, by saving the questionable virtual network again, establish a host interface despite the fact that another interface with the same address is running somewhere in the network? OK, you are root when you do it – but why is no warning given at this point? (VMware could do a ping check there, too ….)

A second question also worried me: why did the existence of two devices with the same IP address not lead to more chaos in the network? One reason probably was that I had allowed pinging but no general TCP transport to the virtual device on the test server. And explicit routes were otherwise defined properly on the different hosts. However, I could bet on some problems on the ARP level when the test server was up and running. Anyway – such a basic misconfiguration in a network may lead to security holes, too, and should, of course, be avoided.

A third question that came up for a second was: How do you avoid overlaps in case one wants to assign KVM/QEMU-guests and VMware guests IP addresses within the same network address space on one and the same virtualization host? Such a scenario is not at all as far fetched as it may seem at first sight. One reason for such a configuration could be the EU GDPR (DSGVO):

If you need to guarantee a customer confidentiality and are nevertheless forced to use a standard MS Windows client, you have a problem as MS may in an uncontrollable way transfer data to their own servers (outside the EU). Just read the license and maintenance agreements you sign with the operation of a standard Win 10 client! Therefore, you may want to isolate such clients drastically and only allow for communication with certain IP-addresses on the Internet (and NOT with MS servers). You may allow communication with some (virtualized) Linux machines in the same sub-network and one of them may serve as a gateway and perimeter firewall with strict filters. You can build up such scenarios by coupling a QEMU-virtual bridge to a VMware virtual bridge. At the same time, however, you need full control over the DHCP-systems on both sides (besides a bit of scripting) to avoid address overlaps. But this is actually easy: the DHCP-control files on the VMware side are found under the directory “/etc/vmware/vmnetX/dhcpd”, with “X” standing for a virtual VMware interface number. On the QEMU side you find the files at “/etc/libvirt/qemu/networks”. There you can control your IP assignments (or even no assignments for some special interfaces). Ok, but this is the beginning of another story.
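
A sketch of the idea with example addresses: if both hypervisors serve the same subnet, give each DHCP server a disjoint range. On the VMware side (e.g. “/etc/vmware/vmnet6/dhcpd/dhcpd.conf”, in ISC dhcpd syntax) this could look like:

subnet 192.168.76.0 netmask 255.255.255.0 {
    # leave 192.168.76.100 - 192.168.76.149 to the libvirt/QEMU DHCP instance
    range 192.168.76.150 192.168.76.199;
}

with the libvirt network definition (under “/etc/libvirt/qemu/networks”) restricted to the complementary range.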

Conclusion

Never think “local” or host-based when working with virtual networks! Always include local virtual test networks in the documentation of your global network landscape! And do not just accept the standard address assignment for host interfaces of virtual networks without thinking.

Mounting a vmdk drive on the Linux host – IV – guestmount, virt-filesystems, qemu-img

In the previous posts of this series

Mounting a vmdk drive on the Linux host – I – vmware-mount
Mounting a vmdk drive on the Linux host – II – interlude, specification, terminology
Mounting a vmdk drive on the Linux host – III – qemu-nbd, loop devices, kpartx

we had seen that access to complex vmdk images with snapshots, extents and base-disk files requires control over the vmdk format and the associated block addressing. The necessary information may be spread over several files in different branches of a host's filesystem; using such images under Linux therefore requires an intermediate layer:

  • The tool “vmware-mount” first produces a contiguous “flat” or “raw” file from the scattered files of the vmdk image before automatically mounting it onto a target directory. If need be, we can also identify the partitions/filesystems contained in it ourselves with fdisk/parted and mount them as loop devices; offsets have to be taken into account.
  • The tool “qemu-nbd” also interprets complex vmdk images correctly and offers the user a block layer via a kernel module: the disk image as a whole is presented as a block device. In addition, contained filesystems are detected automatically and presented as block devices as well, if the module was parameterized accordingly. Alternatively, the filesystems contained in the block device of the whole disk can be accessed via loop devices.

We had also realized that one has to watch the resulting access rights when using the tools discussed so far:

Automatic mounting with vmware-mount handles access rights far too permissively. Your own mount commands, in contrast, should be given appropriate options restricting authorized users and access masks – as far as possible.

We now introduce a tool that by design takes the security aspects of mounting disk images on hosts somewhat more seriously and that uses FUSE to make filesystems of disk-image files available under Linux in a handy way: guestmount.

In addition, this article looks at two tools that allow you to determine the snapshot and extent distribution of a vmdk image, as well as the filesystems it contains, in advance and without any mount process: qemu-img and virt-filesystems. For starting forensic work on unknown, complex vmdk images these commands – together with timestamp information – are quite valuable.

Be careful with experiments

For “guestmount”, too, the rule is: concurrent write access to the disk image must be avoided. This means that the virtual (VMware, QEMU, VirtualBox) machine that normally uses the vmdk image as a disk must be switched off. To be safe, create copies of the disk images and, if in doubt, mount “read only” only.

Required packages

“guestmount” uses the libguestfs library. The command belongs to a whole set of tools that have been developed and optimized for virtualization purposes (in particular with QEMU) since 2009; see the links given at the very bottom. On Opensuse, “libguestfs” requires the packages libguestfs0, guestfs-tools, guestfs-data and possibly perl-Sys-Guestfs. In any case you should also install “qemu-tools”.

On Debian/Kali there is likewise a whole series of packages (see the link below); the essentials, however, are provided in the course of installing “libguestfs-tools”. The equally useful package “qemu-utils” was already mentioned in the last article.
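For convenience, the installations might look like this (a sketch; package names as listed above, standard repositories assumed):

mytux:~ # zypper in libguestfs0 guestfs-tools guestfs-data perl-Sys-Guestfs qemu-tools    # Opensuse
root@kali:~# apt install libguestfs-tools qemu-utils                                      # Debian/Kali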

Of particular interest (on Opensuse) is the package guestfs-data: the description of this RPM package shows that it contains a minimal virtual machine (of the supermin kind) which is obviously used by libguestfs applications – as we will see, also by “guestmount”.

Before using “guestmount”, it is worthwhile to first gather some information about the vmdk image and the filesystems contained in it.

Gathering information on snapshots and extents in a vmdk disk image

An important tool for obtaining information about the file structure (snapshot and extent files) of disk images for virtual machines is “qemu-img”. An excerpt from its man page says:

qemu-img allows you to create, convert and modify images offline. It can handle all image formats supported by QEMU. .... The following commands are supported: ....
...
info [--object objectdef] [--image-opts] [-f fmt] [--output=ofmt] [--backing-chain] filename
....
snapshot [--object objectdef] [--image-opts] [-q] [-l | -a snapshot | -c snapshot | -d snapshot] filename
...
	snapshot is the name of the snapshot to create, apply or delete
...
    	-l  lists all snapshots in the given image
...
info [-f fmt] [--output=ofmt] [--backing-chain] filename
           Give information about the disk image filename. Use it in particular to know the size reserved on disk which can be different from the displayed size. If VM snapshots are stored in the disk image, they are displayed too. The command can output in the format ofmt which is either "human" or "json".

           If a disk image has a backing file chain, information about each disk image in the chain can be recursively enumerated by using the option "--backing-chain".
....

 
Similar information is provided by “qemu-img -h”. The wiki link on “Qemu/Images” below confirms that “qemu-img” supports the vmdk formats 3, 4 and 6.

The term “backing chain” will, of course, immediately remind the attentive reader of the “link chain” for vmdk snapshots that we discussed in the second article of this series.
Anyone who has read my last article knows that I have a vmdk test image with three snapshots, whose base-disk file resides in a different directory than the snapshot files. Let us test “qemu-img” with it. (I do this below as user root; with sufficient access rights to the image files, however, that is by no means required.)

mytux:/vmw/Win7Test # qemu-img info Win7_x64_ssdx-000003.vmdk
image: Win7_x64_ssdx-000003.vmdk
file format: vmdk
virtual size: 4.0G (4294967296 bytes)
disk size: 24M
cluster_size: 65536
backing file: Win7_x64_ssdx-000002.vmdk
Format specific information:
    cid: 4245824591
    parent cid: 1112365808
    create type: twoGbMaxExtentSparse
    extents:
        [0]:
            virtual size: 4261412864
            filename: Win7_x64_ssdx-000003-s001.vmdk
            cluster size: 65536
            format: SPARSE
        [1]:
            virtual size: 33554432
            filename: Win7_x64_ssdx-000003-s002.vmdk
            cluster size: 65536
            format: SPARSE

Here we obviously see the beginning of the snapshot history: the third VMware snapshot “Win7_x64_ssdx-000003.vmdk”, which so far apparently only had to absorb minor changes, sits on top of a “backing file”: Win7_x64_ssdx-000002.vmdk. However, this does not yet show us the complete snapshot history (see below).

We can see, though, that the content of each snapshot is distributed over 2 sparse extents. The corresponding CID information is – as we know from the 2nd article of the series – contained in the snapshot's descriptor file. In general, for vmdk images “qemu-img” essentially renders the content of the descriptor files.

Overview of the complete backing chain
If we want to see the complete backing chain – i.e., for vmdk images, the complete snapshot history – we have to add the sub-option “--backing-chain” to the command:

mytux:/vmw/Win7Test # qemu-img info --backing-chain Win7_x64_ssdx-000003.vmdk
image: Win7_x64_ssdx-000003.vmdk
file format: vmdk
virtual size: 4.0G (4294967296 bytes)
disk size: 24M
cluster_size: 65536
backing file: Win7_x64_ssdx-000002.vmdk
Format specific information:
    cid: 4245824591
    parent cid: 1112365808
    create type: twoGbMaxExtentSparse
    extents:
        [0]:
            virtual size: 4261412864
            filename: Win7_x64_ssdx-000003-s001.vmdk
            cluster size: 65536
            format: SPARSE
        [1]:
            virtual size: 33554432
            filename: Win7_x64_ssdx-000003-s002.vmdk
            cluster size: 65536
            format: SPARSE

image: Win7_x64_ssdx-000002.vmdk
file format: vmdk
virtual size: 4.0G (4294967296 bytes)
disk size: 1.1M
cluster_size: 65536
backing file: Win7_x64_ssdx-000001.vmdk
Format specific information:
    cid: 1112365808
    parent cid: 3509963510
    create type: twoGbMaxExtentSparse
    extents:
        [0]:
            virtual size: 4261412864
            filename: Win7_x64_ssdx-000002-s001.vmdk
            cluster size: 65536
            format: SPARSE
        [1]:
            virtual size: 33554432
            filename: Win7_x64_ssdx-000002-s002.vmdk
            cluster size: 65536
            format: SPARSE

image: Win7_x64_ssdx-000001.vmdk
file format: vmdk
virtual size: 4.0G (4294967296 bytes)
disk size: 1.0M
cluster_size: 65536
backing file: /vmw/Win7/Win7_x64_ssdx.vmdk
Format specific information:
    cid: 3509963510
    parent cid: 3060125035
    create type: twoGbMaxExtentSparse
    extents:
        [0]:
            virtual size: 4261412864
            filename: Win7_x64_ssdx-000001-s001.vmdk
            cluster size: 65536
            format: SPARSE
        [1]:
            virtual size: 33554432
            filename: Win7_x64_ssdx-000001-s002.vmdk
            cluster size: 65536
            format: SPARSE

image: /vmw/Win7/Win7_x64_ssdx.vmdk
file format: vmdk
virtual size: 4.0G (4294967296 bytes)
disk size: 2.2G
cluster_size: 65536
Format specific information:
    cid: 3060125035
    parent cid: 4294967295
    create type: twoGbMaxExtentSparse
    extents:
        [0]:
            virtual size: 4261412864
            filename: /vmw_win7/Win7_x64_ssdx-s001.vmdk
            cluster size: 65536
            format: SPARSE
        [1]:
            virtual size: 33554432
            filename: /vmw_win7/Win7_x64_ssdx-s002.vmdk
            cluster size: 65536
            format: SPARSE
mytux:/vmw/Win7Test #   

 
“qemu-img” presents the snapshot and extent composition of a vmdk image, distributed over several filesystem locations, in a consolidated form. In other words: with the option “--backing-chain”, all descriptor files of all snapshots are read out and printed in the correct order. This is quite useful, at least in the case of several disks and many snapshots, especially if different disks of a VMware guest have very different snapshot histories – something you do encounter in practice.

Unfortunately, however, the Linux user has to know beforehand in which directory the youngest snapshot (the one with the highest number in its name suffix) and its descriptor file are located. The linking of the vmdk snapshot chain only points downwards towards the base file – i.e., it is one-sided.
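If the usual VMware naming scheme is in place, a small shell one-liner can at least locate the youngest descriptor file within a given directory (a sketch; it assumes the “-0000NN.vmdk” suffix convention from above and filters out the “-sNNN” extent files):

mytux:/vmw/Win7Test # ls -1 *-0000*.vmdk | grep -v -- '-s0' | sort | tail -1
Win7_x64_ssdx-000003.vmdk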

The 2 GB indicated by the “type twoGbMaxExtentSparse” should not be taken too literally on current Linux systems; in practice, VMware WS 14, for example, operates with growing 4 GB extents.

By the way:

Since snapshots in vmdk images are realized via a backing chain (in VMware speak: a link chain), the command “qemu-img snapshot -l” does not deliver any suitable information; unlike with qcow2 disks, the snapshots are not an intrinsic part of the primary image file(s).

mytux:/vmw/Win7Test # qemu-img snapshot -l  Win7_x64_ssdx-000003.vmdk
mytux:/vmw/Win7Test # 

Gathering information on the filesystems in a vmdk disk image

The command above has not yet shown us which partitions/filesystems currently reside in the disk. We need this information, however, for applying “guestmount”. Here “libguestfs” helps with the tool “virt-filesystems”:

mytux:/vmw/Win7Test # virt-filesystems -a Win7_x64_ssdx-000003.vmdk -l
Name       Type        VFS   Label   Size        Parent
/dev/sda1  filesystem  ntfs  Volume  2718957568  -
/dev/sda2  filesystem  ntfs  Volume  1572864000  -

or – for the additional output of partitions:

mytux:/vmw/Win7Test # virt-filesystems -a Win7_x64_ssdx-000003.vmdk --all --long --uuid -h
Name      Type       VFS  Label  MBR Size Parent   UUID
/dev/sda1 filesystem ntfs Volume -   2.5G -        00E60BCAE60BBF42
/dev/sda2 filesystem ntfs Volume -   1.5G -        42A048ACA048A7ED
/dev/sda1 partition  -    -      07  2.5G /dev/sda -
/dev/sda2 partition  -    -      07  1.5G /dev/sda -
/dev/sda  device     -    -      -   4.0G -        -

Nice, isn't it?
Note:

The names “/dev/sda1” and “/dev/sda2” have nothing to do with possibly existing devices of the same name on the current host. They refer exclusively to internal partitions of the disk image.

For further options, such as “--extra” for displaying additional, non-mountable filesystems, see the man page of the command.

guestmount – convenient mounting of existing vmdk filesystems

Equipped with the knowledge above, we can finally use guestmount. For the normal case the command structure is quite simple:

guestmount -a img-file -m /dev/sdax [--ro] mount-point-on-the-Linux-host

So if, in our example, we want to mount the second filesystem, the following has to be specified:

mytux:/vmw/Win7Test # guestmount -a Win7_x64_ssdx-000003.vmdk  -m /dev/sda2  --ro /mnt3 
mytux:/vmw/Win7Test # la /mnt3/
$RECYCLE.BIN/                           System Volume Information/
Cosmological_Voids/                     mysql-installer-community-5.6.14.0.msi
Muflons/                                ufos/
mytux:/vmw/Win7Test # la /mnt3/ufos/
total 5
drwxrwxrwx 1 root root    0 Mar 31 15:26 .
drwxrwxrwx 1 root root 4096 Mar 28 20:25 ..
-rwxrwxrwx 1 root root   18 Mar 31 15:26 ufo.txt
-rwxrwxrwx 1 root root    0 Mar 31 14:02 xufos
mytux:/vmw/Win7Test #
mytux:/vmw/Win7Test # guestunmount  /mnt3 
mytux:/vmw/Win7Test # 

Notes:

  • To remove a mount, one has to use the command “guestunmount”.
  • The option “--ro” ensures a “read-only” mount.

guestmount – an option for staged mounts

There are further options and use cases for “guestmount”, which can be found on its man page. Most of them, however, are better suited to Linux/Unix filesystems inside a vmdk image. Still, I would like to point out one possibility – that of a nested mount in a single step.

One of my VMware-virtualized Windows 7 test systems has a (free) directory “mounts/mnt2”. The current snapshot of the main system carrying the Win7 OS is called “Win7_x64-cl1-000011.vmdk”. Then one can do something like the following:

mytux:/vmw/Win7Test # guestmount -a Win7_x64-cl1-000005.vmdk -m /dev/sda1 -a Win7_x64_ssdx-000003.vmdk  -m /dev/sdb2:/mounts/mnt2  --ro /mnt2 
mytux:/vmw/Win7Test # la /mnt2/
total 4194272
drwxrwxrwx  1 root root          0 Jan 25  2014 $Recycle.Bin
drwxrwxrwx  1 root root      12288 Mar 20 18:05 .
drwxr-xr-x 39 root root       4096 Mar 28 17:02 ..
....
-rwxrwxrwx  1 root root       8192 Aug 23  2012 BOOTSECT.BAK
drwxrwxrwx  1 root root       4096 Sep 23  2016 Boot
lrwxrwxrwx  2 root root         60 Jul 14  2009 Documents and Settings -> /sysroot/Users
lrwxrwxrwx  2 root root         60 Aug 23  2012 Dokumente und Einstellungen -> /sysroot/Users
....
drwxrwxrwx  1 root root       4096 Jan 25  2014 Users
drwxrwxrwx  1 root root      24576 Apr  8 11:43 Windows
...
-rwxrwxrwx  1 root root 4294434816 Apr  8 11:41 pagefile.sys
drwxrwxrwx  1 root root       4096 Mar 27 19:37 mounts
...

mytux:/vmw/Win7Test # la /mnt2/mounts/mnt2/ufos/
total 5
drwxrwxrwx 1 root root    0 Mar 31 15:26 .
drwxrwxrwx 1 root root 4096 Mar 28 20:25 ..
-rwxrwxrwx 1 root root   18 Mar 31 15:26 ufo.txt
-rwxrwxrwx 1 root root    0 Mar 31 14:02 xufos

mytux:/vmw/Win7Test # la /mnt2/mounts
total 28
drwxrwxrwx 1 root root  4096 Mar 27 19:37 .
drwxrwxrwx 1 root root 12288 Mar 20 18:05 ..
drwxrwxrwx 1 root root     0 Mar 22 10:30 mnt
drwxrwxrwx 1 root root  4096 Mar 28 20:25 mnt2
lrwxrwxrwx 1 root root   216 Mar 27 19:21 mnt3 -> /sysroot/.NTFS-3G/Volume{c5......}
lrwxrwxrwx 1 root root   216 Mar 27 19:37 mnt4 -> /sysroot/.NTFS-3G/Volume{a4......}

Neat, isn't it? Note that the partitions of the second image, provided with the option “-a”, are referred to as “/dev/sdbx”. Some information in the excerpt above was replaced by “....”.

The question remains to what extent such a nested mount makes sense for Windows systems. One could object that disks can be mounted into the directory tree under Windows, too. True – but such a mount is realized in a much more cumbersome way than under Linux. In the example above, the last lines show exactly such occupied mount points. Without a running system, however, the Windows links found there lead to nirvana. So, unfortunately, the following does not work:

            
mytux:/vmw/Win7Test #  guestmount -a Win7_x64-cl1-000005.vmdk -m /dev/sda1 -a Win7_x64_ssdx-000003.vmdk  -m /dev/sdb2:/mounts/mnt4  --ro /mnt2 
libguestfs: error: mount_options: mount: /mounts/mnt4: No such file or directory
guestmount: '/dev/sdb2' could not be mounted.
guestmount: Did you mean to mount one of these filesystems?
guestmount:     /dev/sda1 (ntfs)
guestmount:     /dev/sdb1 (ntfs)
guestmount:     /dev/sdb2 (ntfs)

Therefore – even if one already knows how certain NTFS partitions are mounted into the main Windows drive – one cannot directly reproduce these relations under Linux.

One runs into similar limitations with the guestmount option “-i” for vmdk images. An automatic unravelling of mount points does not work with NTFS filesystems either. This is not too bad, though; forensic analysts can read out the true structure of a Windows machine via the registry (in subdirectories of Windows/System32/config) and via information in further directories.

guestmount – security via the use of a minimal virtual QEMU machine

The reader surely remembers the cautionary words of D.P. Berrange in
https://www.berrange.com/tags/libguestfs/
regarding the risks of mounting disk images of virtualized guest systems on a Linux host. Further hints at such risks are given by the libguestfs developers under the following link:
http://libguestfs.org/guestfs-security.1.html.

To guard against possible problems with manipulated disk images and their filesystems, “libguestfs” operates with a virtual QEMU machine in the background. I quote from the text behind the link just mentioned:

“We run a Linux kernel inside a qemu virtual machine, usually running as a non-root user. The attacker would need to write a filesystem which first exploited the kernel, and then exploited either qemu virtualization (eg. a faulty qemu driver) or the libguestfs protocol, and finally to be as serious as the host kernel exploit it would need to escalate its privileges to root. Additionally if you use the libvirt back end and SELinux, sVirt is used to confine the qemu process. This multi-step escalation, performed by a static piece of data, is thought to be extremely hard to do, although we never say ‘never’ about security issues.”

Emphasis mine. Folks, I like this approach! Unfortunately, on an Opensuse system without SELinux, but with AppArmor, it does not (yet) work completely (see below).

Can we see this virtual QEMU machine (which, by the way, is of the supermin kind)? Yes, we can; e.g. as the unprivileged user “myself”:

myself:/vmw/ssdx> ps aux | grep qemu
myself      18986  0.0  0.0  10564  1640 pts/8    S+   12:22   0:00 grep --color=auto qemu
myself:/vmw/ssdx> guestmount -a Win7_x64_ssdx.vmdk  -m /dev/sda2  --ro /mnt/vmdk                                                        
myself:/vmw/ssdx> ps aux | grep qemu
myself      19002 19.8  0.3 1534676 247408 pts/8  Sl   12:22   0:00 /usr/bin/qemu-system-x86_64 -global virtio-blk-pci.scsi=off -nodefconfig -enable-fips -nodefaults -display none -machine accel=kvm:tcg -cpu host -m 500 -no-reboot -rtc driftfix=slew -no-hpet -global kvm-pit.lost_tick_policy=discard -kernel /var/tmp/.guestfs-1553/appliance.d/kernel -initrd /var/tmp/.guestfs-1553/appliance.d/initrd -device virtio-scsi-pci,id=scsi -drive file=/tmp/libguestfsCrlFc9/overlay1,cache=unsafe,format=qcow2,id=hd0,if=none -device scsi-hd,drive=hd0 -drive file=/var/tmp/.guestfs-1553/appliance.d/root,snapshot=on,id=appliance,cache=unsafe,if=none -device scsi-hd,drive=appliance -device virtio-serial-pci -serial stdio -device sga -chardev socket,path=/tmp/libguestfsCrlFc9/guestfsd.sock,id=channel0 -device virtserialport,chardev=channel0,name=org.libguestfs.channel.0 -append panic=1 console=ttyS0 udevtimeout=6000 udev.event-timeout=6000 no_timer_check acpi=off printk.time=1 cgroup_disable=memory root=/dev/sdb selinux=0 TERM=xterm-256color                                                                                                                                      
myself      19021  0.0  0.0  10564  1648 pts/8    S+   12:22   0:00 grep --color=auto qemu
myself:/vmw/ssdx> guestunmount /mnt/vmdk
myself:/vmw/ssdx> ps aux | grep qemu
myself      19047  0.0  0.0  10564  1652 pts/8    S+   12:22   0:00 grep --color=auto qemu
myself:/vmw/ssdx> 

Note the unfortunate deactivation of the security model via the parameter “selinux=0”; my Opensuse system runs AppArmor.

Still, it is good that a virtual machine is interposed when guestmount – or libguestfs commands in general – are executed. How exactly the transitions work between FUSE on the Linux host, the start of a QEMU machine, the subsequent mount process inside the QEMU machine (again involving FUSE for mounting the detected filesystem there, e.g. NTFS) and the re-export of the finally emulated block device to the host – more precisely: to the mount point there – is, thank God, nothing we as users have to care about. From the year 2009 there is the following depiction of the process chain by its inventor:

            
Linux VFS (in the host) -> fuse -> guestmount process -> libguestfs -> XDR protocol over a TCP socket -> host kernel -> QEMU’s SLIRP stack -> guest kernel -> guestfsd -> Linux VFS (in the guest) -> fuse -> mount-ntfs-3g -> Windows block device -> QEMU (emulating the block device) -> host kernel -> real block device

See: https://rwmj.wordpress.com/2009/10/30/fuse-support-for-libguestfs/

Parts of it should still be true today. But I have no precise idea … 🙂 .

A few bits of information on the virtual libguestfs QEMU machine

One can also delegate the start of the virtual machine to “libvirtd”. This has the advantage that some information about the virtual QEMU machine can be determined via virsh commands. To involve “libvirt”, an environment variable has to be set in the shell of the unprivileged user:

export LIBGUESTFS_ATTACH_METHOD=libvirt

If we then execute “guestmount” again, we see that the virtual machine is invoked with different parameter settings:

myself:/vmw/ssdx> export LIBGUESTFS_ATTACH_METHOD=libvirt
myself:/vmw/ssdx> guestmount -a Win7_x64_ssdx.vmdk  -m /dev/sda2  --ro /mnt/vmdk
myself:/vmw/ssdx> ps aux | grep qemu
myself      13201  5.2  0.3 1927592 261572 ?      Sl   17:20   0:01 /usr/bin/qemu-system-x86_64 -machine accel=kvm -name guest=guestfs-7pup8sjqcmvakaoz,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/home/myself/.config/libvirt/qemu/lib/domain-1-guestfs-7pup8sjqcmva/master-key.aes -machine pc-i440fx-2.9,accel=kvm,usb=off,dump-guest-core=off -cpu host -m 500 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid 61273dde-6a1c-4c0a-be8d-ae06494041aa -display none -no-user-config -nodefaults -device sga -chardev socket,id=charmonitor,path=/home/rmo/.config/libvirt/qemu/lib/domain-1-guestfs-7pup8sjqcmva/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=delay -no-hpet -no-reboot -no-acpi -boot strict=on -kernel /var/tmp/.guestfs-1553/appliance.d/kernel -initrd /var/tmp/.guestfs-1553/appliance.d/initrd -append panic=1 console=ttyS0 udevtimeout=6000 udev.event-timeout=6000 no_timer_check acpi=off printk.time=1 cgroup_disable=memory root=/dev/sdb selinux=0 TERM=xterm-256color -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x2 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x3 -drive file=/tmp/libguestfsi5ApZX/overlay1,format=qcow2,if=none,id=drive-scsi0-0-0-0,cache=unsafe -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0,bootindex=1 -drive file=/tmp/libguestfsi5ApZX/overlay2,format=qcow2,if=none,id=drive-scsi0-0-1-0,cache=unsafe -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=1,lun=0,drive=drive-scsi0-0-1-0,id=scsi0-0-1-0 -chardev socket,id=charserial0,path=/tmp/libguestfsi5ApZX/console.sock -device isa-serial,chardev=charserial0,id=serial0 -chardev socket,id=charchannel0,path=/tmp/libguestfsi5ApZX/guestfsd.sock -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.libguestfs.channel.0 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4 -msg timestamp=on
myself      13256  0.0  0.0  10564  1612 pts/5    S+   17:20   0:00 grep --color=auto qemu
myself:/vmw/ssdx> 

virsh also shows us devices of the virtual machine; unfortunately, however, no details:

myself:/vmw/ssdx> virsh
Willkommen bei virsh, dem interaktiven Virtualisierungsterminal.

Tippen Sie:  'help' für eine Hilfe zu den Befehlen
       'quit' zum Beenden

virsh # list
 Id    Name                           Status
----------------------------------------------------
 1     guestfs-7pup8sjqcmvakaoz       laufend

virsh # domblklist guestfs-7pup8sjqcmvakaoz
Ziel       Quelle
------------------------------------------------
sda        /tmp/libguestfsi5ApZX/overlay1
sdb        /tmp/libguestfsi5ApZX/overlay2

virsh # domblkinfo guestfs-7pup8sjqcmvakaoz sda
Kapazität:     4294967296
Zuordnung:      200704
Physisch:       196672

virsh # domblkinfo guestfs-7pup8sjqcmvakaoz sdb
Kapazität:     4294967296
Zuordnung:      1904640
Physisch:       1966080

One can also find out that our vmdk file is associated with /dev/sda inside the virtual machine:

virsh # dumpxml guestfs-7pup8sjqcmvakaoz
<domain type='kvm' id='1' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <name>guestfs-7pup8sjqcmvakaoz</name>
  <uuid>61273dde-6a1c-4c0a-be8d-ae06494041aa</uuid>
  <memory unit='KiB'>512000</memory>
  <currentMemory unit='KiB'>512000</currentMemory>
  <vcpu placement='static'>1</vcpu>
  <os>
    <type arch='x86_64' machine='pc-i440fx-2.9'>hvm</type>
    <kernel>/var/tmp/.guestfs-1004/appliance.d/kernel</kernel>
    <initrd>/var/tmp/.guestfs-1004/appliance.d/initrd</initrd>
    <cmdline>panic=1 console=ttyS0 udevtimeout=6000 udev.event-timeout=6000 no_timer_check acpi=off printk.time=1 cgroup_disable=memory root=/dev/sdb selinux=0 TERM=xterm-256color</cmdline>
    <boot dev='hd'/>
    <bios useserial='yes'/>
  </os>
  <cpu mode='host-passthrough' check='none'/>
  <clock offset='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>destroy</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-kvm</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='unsafe'/>
      <source file='/tmp/libguestfsi5ApZX/overlay1'/>
      <backingStore type='file' index='1'>
        <format type='raw'/>
        <source file='/vmw_win7/ssdx/Win7_x64_ssdx.vmdk'/>
        <backingStore/>
      </backingStore>
      <target dev='sda' bus='scsi'/>
      <alias name='scsi0-0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='unsafe'/>
      <source file='/tmp/libguestfsi5ApZX/overlay2'/>
      <backingStore type='file' index='1'>
        <format type='raw'/>
        <source file='/var/tmp/.guestfs-1004/appliance.d/root'/>
        <backingStore/>
      </backingStore>
      <target dev='sdb' bus='scsi'/>
      <shareable/>
      <alias name='scsi0-0-1-0'/>
      <address type='drive' controller='0' bus='0' target='1' unit='0'/>
    </disk>
    <controller type='scsi' index='0' model='virtio-scsi'>
      <alias name='scsi0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </controller>
    <controller type='usb' index='0' model='piix3-uhci'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </controller>
    <serial type='unix'>
      <source mode='connect' path='/tmp/libguestfsi5ApZX/console.sock'/>
      <target port='0'/>
      <alias name='serial0'/>
    </serial>
    <console type='unix'>
      <source mode='connect' path='/tmp/libguestfsi5ApZX/console.sock'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <channel type='unix'>
      <source mode='connect' path='/tmp/libguestfsi5ApZX/guestfsd.sock'/>
      <target type='virtio' name='org.libguestfs.channel.0' state='connected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='mouse' bus='ps2'>
      <alias name='input0'/>
    </input>
    <input type='keyboard' bus='ps2'>
      <alias name='input1'/>
    </input>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </memballoon>
  </devices>
  <qemu:commandline>
    <qemu:env name='TMPDIR' value='/var/tmp'/>
  </qemu:commandline>
</domain>

 
Note the second entry under <devices>!

But that is about all we get;

virsh # domfsinfo
Fehler: Befehl 'domfsinfo' erfordert <domain> Option
virsh # domfsinfo guestfs-7pup8sjqcmvakaoz
Fehler: Unable to get filesystem information
Fehler: argument unsupported: QEMU guest agent is not configured

How the programmed FUSE process on the host brings the QEMU guest system and the /dev/sda used there to the mount point on the host cannot be analyzed any further with simple system tools. That /dev/fuse is mounted on /mnt/vmdk is a rather unenlightening truism in this respect:

myself:/vmw/ssdx> mount | grep fuse
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
vmware-vmblock on /run/vmblock-fuse type fuse.vmware-vmblock (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)
gvfsd-fuse on /run/user/1553/gvfs type fuse.gvfsd-fuse (rw,nosuid,nodev,relatime,user_id=1553,group_id=100)
/dev/fuse on /mnt/vmdk type fuse (rw,nosuid,nodev,relatime,user_id=1553,group_id=100)
myself:/vmw/ssdx> la /sys/fs/fuse/connections/47
insgesamt 0
dr-x------ 2 myself  users 0 13. Apr 17:20 .
drwxr-xr-x 5 root root  0 13. Apr 09:15 ..
--w------- 1 myself  users 0 13. Apr 17:20 abort
-rw------- 1 myself  users 0 13. Apr 17:20 congestion_threshold
-rw------- 1 myself  users 0 13. Apr 17:20 max_background
-r-------- 1 myself  users 0 13. Apr 17:20 waiting

Security – access by other users?

The information pages on “guestmount” promise the following:

Other users cannot see the filesystem by default. .. If you mount a filesystem as one user (eg. root), then other users will not be able to see it by default. The fix is to add the FUSE allow_other option when mounting:
sudo guestmount […] -o allow_other /mnt
and to enable this option in /etc/fuse.conf.
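For an unprivileged user this requires the “user_allow_other” switch in /etc/fuse.conf to be active; the mount could then look as follows (a sketch, reusing the image and mount point from the examples above):

mytux:~ # grep allow_other /etc/fuse.conf
user_allow_other
myself:/vmw/ssdx> guestmount -a Win7_x64_ssdx.vmdk -m /dev/sda2 --ro -o allow_other /mnt/vmdk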

A nice question therefore is: how secure is access to the mounted filesystem really? The displayed permissions actually suggest that anyone may access it:

myself:/vmw/ssdx> la /mnt/vmdk
insgesamt 196128
drwxrwxrwx 1 root root      4096 28. Mär 19:53 .
drwxr-xr-x 5 root root      4096  8. Apr 11:14 ..
drwxrwxrwx 1 root root      4096 28. Mär 09:53 Cosmological_Voids
-rwxrwxrwx 2 root root 200822784  4. Nov 2013  mysql-installer-community-5.6.14.0.msi
drwxrwxrwx 1 root root         0 28. Mär 09:52 $RECYCLE.BIN
drwxrwxrwx 1 root root         0 28. Mär 09:51 System Volume Information
drwxrwxrwx 1 root root         0 28. Mär 19:53 ufos

But this is deceptive; even as user “root” one experiences the following:

mytux:~ # la /mnt/vmdk
ls: cannot access '/mnt/vmdk': Permission denied
mytux:~ # la /mnt/
ls: cannot access '/mnt/vmdk': Permission denied
total 16
drwxr-xr-x  5 root root 4096 Apr  8 11:14 .
drwxr-xr-x 39 root root 4096 Mar 28 17:02 ..
drwxr-xr-x  3 root root 4096 Mar 20  2017 bups
drwxr-xr-x  2 root root 4096 Apr  4  2017 tux
d?????????  ? ?    ?       ?            ? vmdk

With these permission settings not even the user “root” can do anything!

So:

Always execute guestmount as an unprivileged user!

(Even if I did not do so myself in the first examples of this article.)

AppArmor as a security model for the libguestfs QEMU machine is unfortunately (still) ignored!

The fact that the libguestfs developers themselves point out that sVirt and the security rules of SELinux/AppArmor should apply to the virtual libguestfs machine tells us two things:

  • We should not run “guestmount” as root unless absolutely necessary. If need be, we change the access rights to the vmdk files accordingly (see the sketch after this list).
  • On the Linux system we use for this kind of work, we should check the security settings for SELinux or AppArmor!
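Adjusting the rights might, for instance, look like this (a sketch; the user and group names are examples):

mytux:/vmw/Win7Test # chown myself:users *.vmdk    # hand descriptor and extent files to the unprivileged user
mytux:/vmw/Win7Test # chmod 640 *.vmdk             # and keep other users out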

On an Opensuse system that employs AppArmor rather than SELinux, “libvirt” can be configured as follows: in the file “/etc/libvirt/qemu.conf” one sets:

security_driver = "apparmor"
security_default_confined = 1
security_require_confined = 1

For the meaning of these parameters, see the explanations in the file itself. For normal virtual machines this works fine; a virtual Debian machine may serve as an example:

mytux:~ # ps aux | grep debian9
qemu     15096 14.2  0.2 9410692 193460 ?      Dl   17:51   0:08 /usr/bin/qemu-system-x86_64 -machine accel=kvm -name guest=debian9,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-debian9/master-key.aes ...
....
mytux~ # virsh 
Welcome to virsh, the virtualization interactive terminal.

Type:  'help' for help with commands
       'quit' to quit

virsh # list
 Id    Name                           State
----------------------------------------------------
 1     debian9                        running

virsh # dominfo debian9
Id:             1
Name:           debian9
UUID:           789498d2-e025-4f8e-b255-5a3ac0f9c965
OS Type:        hvm
State:          running
CPU(s):         3
CPU time:       16.4s
Max memory:     8388608 KiB
Used memory:    8388608 KiB
Persistent:     yes
Autostart:      disable
Managed save:   no
Security model: apparmor
Security DOI:   0
Security label: libvirt-789498d2-e025-4f8e-b255-5a3ac0f9c965 (enforcing)
virsh # 

Note the correct value “apparmor” for the “Security model”.

[Off topic: One can see, by the way, that libvirt assigns the QEMU machine to the user qemu and itself operates with root rights, even if the virtual machine is started as an unprivileged user via virt-manager.]

But for the machine started via guestmount by the user “myself”, virsh delivers:

myself:/vmw/ssdx> virsh
virsh # dominfo guestfs-7pup8sjqcmvakaoz
Id:             1
Name:           guestfs-7pup8sjqcmvakaoz
UUID:           61273dde-6a1c-4c0a-be8d-ae06494041aa
OS Typ:         hvm
Status:         laufend
CPU(s):         1
CPU-Zeit:       1,3s
Max Speicher:   512000 KiB
Verwendeter Speicher: 512000 KiB
Bleibend:       nein
Automatischer Start: deaktiviert
Verwaltete Sicherung: nein
Sicherheits-Modell: none
Sicherheits-DOI: 0

Too bad – no security model! AppArmor is ignored here, although we had delegated the start of the libguestfs QEMU machine to libvirtd! So libguestfs/guestmount is not (yet!) perfect at this point …

By the way, I have filed a corresponding request with the libguestfs developers. See: https://bugzilla.redhat.com/show_bug.cgi?id=1564885

Converting vmdk disk images, including all snapshots and extents, into a “raw” file with the help of qemu-img

At the end of this article we return once more to the tool “qemu-img” considered at the beginning: one can also use “qemu-img” to convert a vmdk disk image into exactly one contiguous “raw” file. This file, however, then occupies the full disk space that the extents already take up.

mytux:/vmw/Win7Test # qemu-img convert -f vmdk -O raw -S 4k Win7_x64_ssdx-000003.vmdk image_ssdx.raw
mytux:/vmw/Win7Test # la -h | grep image
-rw-r--r-- 1 root root  4.0G Apr  7 14:46 image_ssdx.raw
mytux:/vmw/Win7Test # du -h image_ssdx.raw 
2.2G    image_ssdx.raw

mytux:/vmw/Win7Test # fdisk -l image_ssdx.raw
Disk image_ssdx.raw: 4 GiB, 4294967296 bytes, 8388608 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x0efb9e3e

Device          Boot   Start     End Sectors  Size Id Type
image_ssdx.raw1         2048 5312511 5310464  2.5G  7 HPFS/NTFS/exFAT
image_ssdx.raw2      5312512 8384511 3072000  1.5G  7 HPFS/NTFS/exFAT
mytux:/vmw/Win7Test # 
mytux:/vmw/Win7Test # losetup -o 2720006144 /dev/loop2 image_ssdx.raw 
mytux:/vmw/Win7Test # mount /dev/loop2 /mnt2
mytux:/vmw/Win7Test # la /mnt2
total 196128
drwxrwxrwx  1 root root         0 Mar 28 09:52 $RECYCLE.BIN
drwxrwxrwx  1 root root      4096 Mar 28 20:25 .
drwxr-xr-x 39 root root      4096 Mar 28 17:02 ..
drwxrwxrwx  1 root root      4096 Mar 28 09:53 Cosmological_Voids
drwxrwxrwx  1 root root         0 Mar 28 20:26 Muflons
drwxrwxrwx  1 root root         0 Mar 28 09:51 System Volume Information
-rwxrwxrwx  2 root root 200822784 Nov  4  2013 mysql-installer-community-5.6.14.0.msi
drwxrwxrwx  1 root root         0 Mar 31 15:26 ufos
mytux:/vmw/Win7Test # umount /mnt2
mytux:/vmw/Win7Test # losetup -d /dev/loop2

 
The calculation of the offset for the “losetup” command was already discussed in the previous articles; a short recap follows below. The last two commands revert everything again. If required, one will of course further restrict the access rights when mounting; on this, too, see the last article.
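For reference: the offset passed to “losetup -o” follows directly from the fdisk output above – the second partition starts at sector 5312512, and with 512-byte sectors this gives

5312512 * 512 = 2720006144

bytes, exactly the value used in the command.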

So if someone interested in the content of vmdk files – a forensic analyst, say – does not trust “guestmount” or “vmware-mount”, he can always first create a raw file with the help of “qemu-img” and then tinker with it to his heart's content.

Outlook

In the next article of this series

Mounting a vmdk drive in the Linux host – V – affuse, vdfuse

we will take a brief (somewhat disappointing) look at “affuse” and “vdfuse”.

Links

Information on libguestfs
http://libguestfs.org/
Manuals
http://libguestfs.org/guestfs.3.html
http://libguestfs.org/guestfs-security.1.html
https://rwmj.wordpress.com/2012/07/23/new-in-libguestfs-use-libvirt-to-launch-the-appliance/

guestmount
http://libguestfs.org/guestmount.1.html

Debian packages
Deb packages for libguestfs

qemu-img
managing-disk-images-with-qemu-img
https://en.wikibooks.org/wiki/QEMU/Images
sles-12-book_virt-data-cha-qemu-guest-inst-qemu-img.html
https://jmutai.com/2017/03/06/qemu-img-cheatsheet-for-working-with-qemu-img/

General
stackoverflow.com questions 22327728 mounting-vmdk-disk-image