KVM/Qemu VMs with a multi-screen Spice console – IV – remote access via SSH, remote-viewer and a Unix socket

I continue with my series on methods to access the graphical Spice console of virtual machines [VM] based on the KVM/Qemu-hypervisor combination on a Linux host.

KVM/Qemu VMs with a multi-screen Spice console – III – local access with remote-viewer via a Unix socket
KVM/Qemu VMs with a multi-screen Spice console – II – local access with remote-viewer via a network port
KVM/Qemu VMs with a multi-screen Spice console – I – Overview over local and remote access methods

In the last article we saw that “remote-viewer” can be used locally on the KVM-host to directly access the Qemu-emulator of a specific VM via a Unix socket instead of a network port. It's a simple and fairly effective method – though not well documented. We confined the right to access the socket for a VM to a specific group of users.

Actually, the socket based access method also provides the basis for a simple remote scenario in an Intranet – namely via ssh -X. This is the topic of this article.

Such a method requires relatively high data transfer rates across the network – but in a switched Gigabit LAN the rates are within reasonable limits. And despite a lack of OpenGL HW acceleration Spice reacts very responsively to mouse operations and window movements. In the course of our experiments I will also introduce another virtual “video” device model which can be used together with our VM – namely a “virtio” device with multiple heads. Like the QXL device, it corresponds to a kind of virtual graphics card.

I assume that the reader is familiar with SSH and the setup of the SSH-service on a Linux system. Some knowledge about Pulseaudio is helpful, too.

Why do we care about remote Spice scenarios in an Intranet?

Why do I discuss remote scenarios for a “one seat” console of a VM in an Intranet at all? One answer is:

Any free-lance consultant or developer must think about a systematic way of organizing data and work for customers in accordance with security requirements like the EU-GDPR or the German DSGVO. Personally, I strongly recommend confining the work and all data exchange processes for a selected customer to a specific VM on a well managed Linux server host. You can then encrypt the virtual disks and isolate the VM(s) pretty well by configuring firewalls in the virtual network, on each VM, on the KVM-host and on routers in your LAN. Backup, recovery and machine extensions are easy to manage, too.
But you may need to access a VM’s desktop graphically from a client system (PC, laptop). This is where Spice comes into the game – at least in a Linux environment. Being able to work with a full-fledged graphical desktop of a VM from different clients and locations in your LAN might be a basic requirement for preparing presentations, documents and maybe some development work in parallel on the VM.

I myself, for instance, often access the full desktop of server-based VMs from my Linux workstation or from a Linux laptop – via SSH and the Spice console. We shall see below that the network data transfer rates for applications such as Libreoffice Draw via SSH to the KVM host and using the Spice console can become smaller than in a situation where we open Libreoffice remotely by a direct “ssh -X” call to the VM itself. And the situation is even better in other scenarios we shall study in forthcoming articles.

In general it will be interesting to watch the objective data transfer rates plus the felt responsiveness of Spice clients in remote scenarios throughout all our coming experiments.

Encryption requirements – the advantages of SSH

Even in the LAN/Intranet of a free-lancer or in a home-office with multiple users encryption for remote interactions with VMs may be required. We have two main options to achieve this for remote-viewer:

  • We use SSH on the remote system, connect to the KVM-host and start remote-viewer there.
  • We start remote-viewer on the remote system and encrypt the connection to the VM on the KVM host with TLS.

Both methods have their advantages and disadvantages. In the end usability on the remote system is an important criterion. A TLS setup will be discussed in a forthcoming post. Note that we can also use remote-viewer's sister application “virt-viewer” in an SSH-based scenario – but this is a different story, too.

It is clear that using “ssh -X” is a simple approach which just uses the X11-protocol capabilities to realize a remote scenario. But it has some major advantages over other scenarios:

  • We get encryption almost for free. Most SSH implementations on Linux systems work out of the box.
  • We can enforce the use of secure Opensource encryption algorithms – both for the asymmetric KEX and authentication mechanisms and for the symmetric encryption parts of the data exchange. (See https://stribika.github.io/2015/01/04/secure-secure-shell.html)
  • We get user authentication based on a public key algorithm almost for free.
  • We can use a “ssh-agent” on the remote client to control the different authentication keys for different users allowed to access different VMs.
  • It is sufficient to open a SSH-port on the server. We do not need to open extra network ports for the Spice protocol.
  • We can get encrypted audio data transfer with some simple tricks in combination with Pulseaudio.

Therefore, it is really worthwhile to test a combination of “ssh -X” with starting remote-viewer on the KVM host. I shall, however, not discuss basics of SSH server and client configurations in this article. The preferred or enforced use of certain encryption algorithms for specific SSH connections is something a Linux user should be or become familiar with.
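As an illustration only – a minimal sketch of a per-host SSH client configuration on the remote system; the host name, key file and the chosen algorithms are assumptions of mine, not recommendations from this article (see the linked page above for a discussion of algorithm choices):

# Excerpt from ~/.ssh/config on the client – a sketch with assumed names
Host mysrv
    HostName mysrv
    User uvmb
    IdentityFile ~/.ssh/id_rsa_x
    KexAlgorithms curve25519-sha256@libssh.org
    Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com
    MACs hmac-sha2-512-etm@openssh.com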

Regarding authentication I assume a standard configuration where private and public authentication keys are organized in the folder “~/.ssh/” both for the involved user on the remote client system and the invoked user on the KVM/Qemu server host, respectively.

Schematic drawing

I have not yet depicted the SSH scenario with remote-viewer in any of my schematic drawings so far. The combination of remote-viewer with SSH is a variant of a local scenario as we open the “remote-viewer”-application on the KVM host [MySRV] and just transfer its graphical output via SSH to the X-Server of a remote Linux workstation [MyWS].

We do not care about the transfer of audio data during our first steps. We shall cover this topic a bit further below.

On the left side we see a Linux workstation from which a user logs into our KVM host as user “uvmb”. I assume that user “uvmb” has become a member of the special group “spicex” on the KVM host, to which we gave read/write access to the Spice UNIX socket created by Qemu (see my last post). On the right side we have our KVM/Qemu server host. The user starts the remote-viewer application there (i.e. on the KVM host), but gets its graphical output on his desktop on the remote workstation. On the KVM/Qemu host we, of course, use the fastest method for the remote-viewer application to exchange data with the Qemu-emulator process – namely via a Unix socket. See the definitions for the VM’s XML file (for libvirt applications) discussed in the last post:

    
    <graphics type='spice' autoport='no' keymap='de' defaultMode='insecure'>
      <listen type='socket' socket='/var/spicex/spice.socket'/>
      <image compression='off'/>
      <gl enable='no'/>
    </graphics>

This scenario may appear a bit strange for those among my readers who know that remote-viewer is a network client application: Remote-viewer is normally used on a remote client system to connect to the Qemu process for a VM on a server host via TCP over a LAN. In our present scenario, however, we start remote-viewer on the server host itself and achieve network capabilities only by making use of SSH. But such a scenario sets comparison standards regarding data transfer rates. Any real client/server solution should provide advantages over such a simple approach. We come back to such comparisons in the forthcoming articles of this series.

An interesting question, for example, is whether the whole data exchange will resemble more a transfer of image data in the case of a full desktop presentation by remote-viewer or a transfer of X commands for constructing individual desktop contents. We should not forget that Spice and remote-viewer have their own handling of graphical data and a special client-server model for it.

A first disadvantage of our simple SSH-based scenario could result from the following fact:

Spice does not accept an activation of image data compression for a local socket-based configuration. As we start remote-viewer in our scenario on the KVM host we, therefore, cannot use the image-compression option of the Spice configuration. If a reduction of data transfer rates is required due to a limited LAN bandwidth our only chance is to use data compression for SSH. SSH uses gzip; due to extra CPU activities on both sides using compression may reduce the performance of applications which exchange a lot of data between the SSH client and server during user interactions.

In my test setup the KVM-host is controlled by an Opensuse Leap 15.2 OS, whereas the remote client system – a laptop (MyLAP) – runs Opensuse Leap 15.1. (Yes, I should have upgraded it already …).

Requirements for a reasonable performance of remote scenarios with SSH, remote-viewer and Spice

“ssh -X” is not the most efficient way of transferring graphical data. The performance experience depends a bit on the symmetric encryption algorithm and very much on the bandwidth of your network. To make a long story short:

For the QXL device temporary peaks of the data transfer rate can reach 60 MiB/s to 90 MiB/s for some window operations on a Spice console. Such rates may occur, e.g., when you move complex and large windows quickly around with the mouse on the displayed VM’s desktop – with transparency effects of an XRender compositor being active. With the “virtio” graphics device we reach rates around or below 40 MiB/s.

Such rates may seem quite high – and they indeed are. But a quick test shows that you reach 25 – 45 MiB/sec already when quickly moving around a complex transparent pattern within a “Libreoffice Draw” sketch remotely over a network connection with SSH. The presentation of transparent windows within a KDE desktop with compositor dependent effects is far more complex. So Gigabit NICs are required.

If your network becomes a limiting factor you can use the “-C”-option of SSH to enable data compression. This may give you a factor between 8 and 10 for the reduction of transfer data rates. In a test case with remote-viewer I could reduce the data transfer rate to below 8 MiB/s from around 80 MiB/s without compression. This is an impressive reduction of data.

But there is a caveat to compression, too. The compression has to happen very (!) quickly for fast user interactions with the displayed VM-desktop in the Spice windows. So, you may now get a delayed response for some fast actions on the displayed desktop due to the compression overhead. Now you need pretty fast CPU cores on the KVM/Qemu host and the remote client system! Depending on your system and your LAN I would experiment a bit with and without compression.

A first test

I use a laptop with the hostname “MyLAP” with an Opensuse Leap 15.1 installation for a quick test. The VM (with a KALI 2020.4 OS) is located on a server host “MySRV” with Opensuse Leap 15.2 (see the last articles of this series for its configuration).

On the laptop I start a KDE session as user “myself”. We have an SSH authentication key pair prepared. Our (private) key resides in “~/.ssh/id_rsa_x”. We have exported the public key to the KVM host into the “~/.ssh/”-directory of the user “uvmb” there (probably “/home/uvmb/.ssh/”). User “uvmb” is a member of the group which got “rw” access via ACL rules on the KVM-server to the specific UNIX socket used by our test VM “debianx” (see the previous articles).
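In case you still have to prepare such a key pair and distribute the public key, the usual steps look roughly like this (the key file name is just the one used in this article's commands; key type and size are my assumptions):

myself@mylap:~> ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa_x -C "access to VMs on mysrv"
myself@mylap:~> ssh-copy-id -i ~/.ssh/id_rsa_x.pub uvmb@mysrv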

On the KVM host a privileged user “uvma” has already started the VM “debianx” (with a local socket configuration) for us. Just to be on the safe side we open a desktop session for user “uvmb” on the KVM/Qemu server and test remote-viewer there:

All Ok here.

Now, we move to the laptop. There we open a KDE session, too, as user “myself”. In a terminal we start the ssh-session:

myself@mylap:~/.ssh> ssh -X -i ~/.ssh/id_rsa_x uvmb@mysrv
Enter passphrase for key '/home/myself/.ssh/id_rsa_x': 
Last login: Thu Mar 25 09:54:53 2021 from 192.168.2.22
Have a lot of fun...
uvmb@mysrv:~> 
uvmb@mysrv:~> remote-viewer spice+unix:///var/spicex/spice.socket &
[1] 5041
uvmb@mysrv:~> 
(remote-viewer:5041): GStreamer-WARNING **: 12:37:49.271: External plugin loader failed. This most likely means that the plugin loader helper binary was not found or could not be run. You might need to set the GST_PLUGIN_SCANNER environment variable if your setup is unusual. This should normally not be required though.

(remote-viewer:5041): GSpice-WARNING **: 12:37:49.409: Warning no automount-inhibiting implementation available

We ignore the warnings – and get our two Spice windows (on the KDE desktop of the laptop).

So far so good.

Let us move a complexly structured window (Firefox or the KDE settings window, with a significant size of 800×800 px) around on the VM’s desktop in the Spice window of the laptop, with the help of fast mouse movements. Whilst we do this we measure the data transfer rates over the relevant NIC on the KVM server:

If you enlarge the picture you see peak rates of 85 MiB/s for data sent to the SSH-client.
In my network this has, fortunately, no major effect on the interaction between laptop and the VM – no major delay or lagging behind. And due to a fast switch my wife can nevertheless stream videos over a gateway system from the Internet. 🙂
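In case you want to reproduce such measurements: simple standard tools on the KVM host are sufficient. A sketch (the device name "eth0" is a placeholder for your real NIC; the first variant assumes the sysstat package is installed):

sar -n DEV 1 | grep eth0
# alternative without sysstat: watch the cumulative byte counters once per second
watch -n1 'ip -s link show eth0'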

How can we explain such transfer rates? Well, the window within the Spice screen I moved around had a size of around 800×800 px. Assume a 32 Bit color depth and a refresh rate of the pixel information on the virtual screen of around 30 times a second. You can do the calculation by yourself. The data fit well to the observations. Thus, we probably transfer changed image data of the window area on the VM’s desktop.
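For those who do not want to do the calculation in their head – a rough estimate under the assumptions just named (4 bytes per pixel, roughly 30 redraws of the moved window area per second):

# 800 px * 800 px * 4 byte/px * 30 /s = 76,800,000 byte/s  ≈ 73 MiB/s
echo "scale=1; 800*800*4*30/1024/1024" | bc     # -> 73.2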

Reducing data transfer rates by SSH integrated (gzip) compression

We end the Spice session now on the laptop (by closing the Spice windows) and log out of the SSH session. Then we restart a new SSH-session with

myself@mylap:~/.ssh> ssh -XC -i ~/.ssh/id_rsa_x uvmb@mysrv
Enter passphrase for key '/home/myself/.ssh/id_rsa_x': 
Last login: Thu Mar 25 09:54:53 2021 from 192.168.2.22
Have a lot of fun...
uvmb@mysrv:~> 
uvmb@mysrv:~> remote-viewer spice+unix:///var/spicex/spice.socket &
[1] 5041
uvmb@mysrv:~> 

Note the “C“-option for the ssh-command!
Now the measured transfer rates on the KVM-server are less than 9 MiB/s.

However, I notice some lagging of the moved window's reaction to quick mouse cursor changes on the remote client. Not that it affects normal working – but it is palpable. I cross-checked by working with complex figures within Libreoffice Draw – absolutely no problems with the performance there. So, the reduced responsiveness is mainly due to operations which trigger the VM’s window manager and the re-drawing of the windows as well as the desktop within the Spice induced X-window on the client-system. In our case: fast mouse movements which change the position of some application windows on the displayed VM desktop quickly and erratically.

I see the lagging also with the Gnome desktop of the Kali guest – especially when moving transparent terminal windows. In my opinion the lagging is even more pronounced there. So, KDE 5 is not so bad after all 🙂 . And then it's time for optimizing via desktop settings. Remember that you can switch off the compositor completely for the KDE desktop.
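For reference – a sketch of how compositing can be toggled on a KDE Plasma desktop from a terminal; I assume the default DBus interface here, and the default keyboard shortcut Alt+Shift+F12 does the same thing interactively:

# suspend / resume KWin compositing for the running KDE session
qdbus org.kde.KWin /Compositor suspend
qdbus org.kde.KWin /Compositor resume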

I also found that the decline in responsiveness with SSH data compression depended somewhat on the number of opened Spice “displays” or “screens” and their sizes. Responsiveness is better with just one Spice window open on the remote system. In our SSH-based scenario responsiveness depends

  • on the number of virtual Spice displays,
  • on the size of the moved window,
  • on the complexity and to a minor degree also on transparency effects.

I could also see these dependencies for “ssh -XC” when I replaced the QXL device with a so-called “virtio” video device.

Using a “virtio” video device

So far we have worked with the QXL device for a virtual graphics card device in the VM’s configuration. Let us try an alternative – namely a so called “virtio”-video-device. “virtio”-devices for virtual NICs and virtual storage devices enhance performance due to special interaction concepts with the real hardware; see the links at the bottom of this post for more information on the ideas behind virtio-drivers. Can we get a performance improvement in our scenario by a “virtio” device for the virtual graphics card?

Our configuration for the VM then, for example, looks like

    <graphics type='spice' keymap='de' defaultMode='insecure'>
      <listen type='socket' socket='/var/spicex/spice.socket'/>
      <image compression='off'/>
      <gl enable='no'/>
    </graphics>
    ...
    ...
    <video>
      <model type='virtio' heads='2' primary='yes'>
        <acceleration accel3d='yes'/>
      </model>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    ...

You see that we can set multiple heads for a virtio video device, too. A big advantage is that we do not need any special memory settings as for the QXL device.
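A quick way to check inside the guest that the virtio GPU is actually in use – a sketch; module names and kernel log wording may differ per distribution:

lsmod | grep virtio_gpu
sudo dmesg | grep -i -E "drm|virtio"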

When you try this setting, you will find out that it works quite well, too. And there is a remarkable difference regarding data transfer rates:

The maximum rates are now well below 48 MiB/s – for the same kind of fast movements of complex windows across the desktop surface in the Spice window.

Now, if you in addition use SSH compression (ssh -XC) you get the rates down to 8.2 MiB/s – but with only a slightly better responsiveness of windows to mouse movements in the remote Spice window than for a QXL setup.

In my opinion a virtio display device is something worth experimenting with (even without 3D acceleration).

Libreoffice Draw as a real world test case

Let us briefly compare data rates for something more realistic in daily work.

In the tests described below I firstly open Libreoffice [LO] Draw by a direct “ssh -X” call to the VM itself. Then I open LO Draw within the remotely displayed desktop of the VM based on a “SSH -X” connection to the KVM server. This means a major difference regarding the SSH connection and the data transfer requests!

Within LO Draw I use a sketch like the following

and, later on, move the green or violet figures very fast around in the LO frame with the mouse. Transparency included.

So, for a first test, let us open the VM’s LO Draw on my laptop MyLAP via a direct “ssh -X” command (without data compression!) directed to the VM:

myself@mylap:~/.ssh> ssh -X -i ~/.ssh/id_rsa_x myself@debianx
Enter passphrase for key '/home/myself/.ssh/id_rsa_x': 
Linux debianx 5.10.0-kali3-amd64 ......
......
Last login: Fri Mar 26 17:38:16 2021 from 192.168.2.22
...
myself@debianx:~> 
myself@debianx:~$ libreoffice --draw 

Note that “debianx” is now used as a host name! (The host name was chosen to be the same as the name of the VM in virt-manager; but the meaning in a network context, where “debianx” must be resolved to an IP, is now different. Note that the VM communicates to the outside world via a virtual network of the KVM host and routes defined on the VM and the KVM host directing data from the VM in the end over some NIC of the KVM host).

When moving the drawing’s figures around I measure data transfer rates on the relevant Ethernet device of the KVM-server:

Taking sent and received data together we have total rates around 25 MiB/s.

Now, in a second test, let us do something very different: We open Libreoffice Draw on the VM’s KDE desktop displayed in a Spice window, which in turn got transferred via SSH to the X11-service on my laptop:

And, again, we move the figures around very fast. The measured rates then are significantly smaller – below 4.4 MiB/s.

This proves the following statement which in turn justifies the whole Spice approach:

It may be more efficient to work remotely with a VM application on the VM’s desktop via Spice and a “SSH -X” connection to the KVM-server than requesting the graphical output of the VM’s application directly via “SSH -X” from the VM itself!

And what about sound?

We now turn to a topic which also deserves more documentation – namely the handling of sound with a remote solution like ours. We need Pulseaudio for a transfer of sound data from a VM (on the KVM/Qemu server) to the remote client system. Well, the very same Pulseaudio [PA] which often enough has ruined some of my nerves as a Linux user in the past 12 years or so. In combination with remote-viewer we simply cannot avoid it.

To be able to understand its configuration in a network with Opensuse Leap systems we must deal with the server properties of PA. See e.g. the following links for some explanation:
Free desktop Org documentation on Pulseaudio in a network
Archlinux documentation on PulseAudio

A Pulseaudio installation can work as a daemon-based service for other client-applications than the ones started locally during the desktop session of a user. Such clients can be applications started on other computers. PA’s client/server structure has network capabilities! To use PA in a network context some requirements must be fulfilled:

  • The module “module-native-protocol-tcp” must be loaded. On a standard Opensuse Leap system this is the case; see the settings in the files “/etc/pulse/default.pa” and for a specific user in “~/.config/pulse/default.pa“. (A quick check follows below this list.)
  • For a direct connection between two PCs, as we need it for our present purpose, we can use a special TCP-port. The standard port is “4713“. For some first tests we will open this port for a directed transfer from the server to the client on local firewalls of the systems as well as on firewalls in between. But later we will rather integrate the port handling into our SSH tunnel.
  • The PA service must be told to accept remote connections over TCP. We can use the “paprefs” application for it.
  • We may require some form of authentication to grant access. We move this point to SSH by opening a remote tunnel – so we can forget about this point.
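Regarding the first point of the list: a quick way to verify that the TCP module is actually loaded in the running PA instance of the system that shall act as the sound server (in our case the laptop) is, as a sketch:

myself@mylap:~> pactl list short modules | grep native-protocol-tcp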

To get some information about what is playing where during the next steps it is useful to have the applications “pavucontrol” (pulseaudio volume control) and “paman” (pulseaudio manager) running. You find the relevant packages in the standard update repository of your Leap distribution. The package “qemu-audio-pa” should be installed, too, if it is not yet present on your system.

Where is sound of the VM played if we do nothing?

The funny thing about sound in our remote scenario with SSH and a Unix socket on the KVM host is the following:

When we start remote-viewer on the KVM-server in our scenario without any special measures for audio, we will see the graphical output on the remote client, but hear the sound on the speaker system of the server (if it has any). Well, in my test scenario the “server” has such an equipment.

Well, let us start some sounds via the Spice windows on the client “MyLAP”. In the above images you saw that I had opened a directory with KDE sound files. Whilst we play them (e.g. with the parole player) we can at the same time have a look at the pavucontrol window on the KDE desktop of user “uvmb” on the server “MySRV”:

If you enlarge the image you see a PA-client there with the name “Remote Viewer”. This is not too astonishing as we had started remote-viewer on the KVM server and not on the remote laptop. And by default remote-viewer interacts with the active PA on the system where remote-viewer itself is running:

Well this might be okay if the client and the server are in the same room. But if you had moved with your laptop into another room you, of course, would like to hear the sound on your laptop’s speakers. To achieve this we have to redirect the audio data stream of an application of the VM to a remote PA service.

How do we transfer sound from a SSH-server to the PA-system on the client during a SSH session?

I assume now that port 4713 is open on all firewalls. Still we have to prepare the PA-service on the remote client system – here on our laptop “MyLAP”.
For this purpose we open “paprefs” on MyLAP (NOT in the VM displayed in the Spice windows there, but in a standard terminal window of MyLAP’s desktop):

myself@mylap:~> paprefs

We turn to the tab “network server” and activate the following options:

Your laptop (the SSH- and Spice-client) is then able to work as a sound-server within the LAN.

Do not worry too much about the deactivated authentication. You can control in your firewall settings which system gets access – and later on we shall close port 4713 completely again for remote access by restrictive firewall rules. (If you really need authentication you must copy the cookie under “~/.config/pulse/cookie” from your laptop onto the server and uvmb’s folder structure.)

But now: how do we tell an application started on the KVM-server to direct its audio output to the PA server on the laptop? Well, this is controlled by the environment variable “PULSE_SERVER”; see the documentation mentioned above about this.

You can easily test this by opening a “ssh -X” connection from your remote system to an SSH server and redirecting the audio output of an application like smplayer to the PA on your remote system. In my case:

myself@mylap:~/.ssh> ssh -X -i ~/.ssh/id_rsa_x uvmb@mysrv
Enter passphrase for key '/home/myself/.ssh/id_rsa_x': 
Last login: Thu Mar 25 09:54:53 2021 from 192.168.2.22
Have a lot of fun...
uvmb@mysrv:~> 
uvmb@mysrv:~> env PULSE_SERVER=192.168.2.22 smplayer & 
[1] 5041
uvmb@mysrv:~> 

Any sound played with smplayer is now handled by the PA on the laptop. See the screenshot from the laptop:

Now, we can of course do the same with our remote-viewer:

myself@mylap:~/.ssh> ssh -X -i ~/.ssh/id_rsa_x uvmb@mysrv
Enter passphrase for key '/home/myself/.ssh/id_rsa_x': 
Last login: Thu Mar 25 09:54:53 2021 from 192.168.2.22
Have a lot of fun...
uvmb@mysrv:~> 
uvmb@mysrv:~> env PULSE_SERVER=192.168.2.22 remote-viewer spice+unix:///var/spicex/spice.socket &
[1] 5041
uvmb@mysrv:~> 

You should hear any sound played with an audio application within the VM on the remote system (in my case on the laptop MyLAP):

Isn’t it fun?

Remote SSH tunnel and port forwarding for the transfer of audio data

Ok, we have a sound transfer – but not encrypted. This can – depending on your audio applications – be a security hole. In addition we lack control over the users who may access our PA-server on the remote system. To cover both problems we are now going to make use of the full power of SSH. We open a reverse SSH tunnel with port forwarding from some arbitrarily chosen port on the KVM/Qemu server to port 4713 on the laptop:

myself@mylap:~/.ssh> ssh -X -R 44713:localhost:4713 -i ~/.ssh/id_rsa_x uvmb@mysrv
Enter passphrase for key '/home/myself/.ssh/id_rsa_x': 
Last login: .... from 192.168.2.22
Have a lot of fun...
uvmb@mysrv:~> 
uvmb@mysrv:~> env PULSE_SERVER=tcp:localhost:44713 remote-viewer spice+unix:///var/spicex/spice.socket &
[1] 5041
uvmb@mysrv:~> 

You see the difference? We direct the audio output of remote-viewer on the KVM-host to port 44713 – and SSH does the rest for us via port-forwarding (plus encryption). (Control question: Which system does “localhost” in the SSH statement refer to? The laptop or the KVM/Qemu server?)

The result of this sound redirection looks, of course, the same on pavucontrol on our remote client system as before.

We now can close the port 4713 by some suitable firewall rule on our client system for any external access. Due to SSH port forwarding we only access the port locally there. You can even pin it on the “lo”-device with the SSH command. Read about it on the Internet.
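As an illustration of my own (an assumption, not part of the setup described above): since the forwarded connection reaches PA from localhost anyway, you could also restrict the PA TCP module on the client to the loopback interface instead of relying on firewall rules alone:

myself@mylap:~> pactl list short modules | grep native-protocol-tcp    # note the module index
myself@mylap:~> pactl unload-module <index>                            # <index> is a placeholder
myself@mylap:~> pactl load-module module-native-protocol-tcp listen=127.0.0.1 auth-anonymous=1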

The additional overhead of the audio data transfer is minimal in comparison to the video data transfer triggered by window manager operations:

We speak about some 600 KiB/s for some stereo sound.

To make things complete – here are the data transfer rates for high resolution Live TV video streaming from the VM on the KVM-server over “SSH -X” to the remote client (without data compression):

You see: It's Easter time! Old Hollywood movies are running on German TV …

Conclusion

The method to access the Spice console of a VM with remote-viewer and via a Unix socket locally on the KVM host enabled a first secure remote scenario by simply redirecting the graphical data stream from the KVM-server to a remote X-window service with “SSH -X”.
The combination with a virtio-video device proved to deliver a relatively small peak data transfer rate around 45 MiB/s for complex window operations requiring a fast redraw of major parts of the desktop in the remote Spice windows. Without SSH data compression we got a very good responsiveness of complex windows to fast movements induced by the mouse cursor on the remotely displayed desktop of the VM. We saw that we could reduce the resulting data transfer rates below 9 MiB/s by using SSH data compression. However, this had some negative impact on the felt responsiveness of operations triggering the window manager of the VM’s graphical desktop.
However, working with graphical applications like Libreoffice Draw on the remotely displayed desktop of the VM via Spice and SSH required substantially smaller transfer rates than in a scenario where we requested a display of the application by a direct “ssh -X” connection to the VM itself.
I have shown in addition that we can easily transfer the sound created by audio applications within the VM via a remote SSH tunnel and port forwarding to the Pulseaudio server on the remote client system.

In the next article of this series we are preparing a TLS based remote solution for accessing the Spice console of a VM.

Links

SSH with compression
https://www.xmodulo.com/how-to-speed-up-x11-forwarding-in-ssh.html?format=pdf

SSH with Pulseaudio
https://askubuntu.com/questions/371687/how-to-carry-audio-over-ssh

Linux bridges – can iptables be used against MiM attacks based on ARP spoofing ? – II

In the last post
Linux bridges – can iptables be used against MiM attacks based on ARP spoofing ? – I
of this series we saw that iptables rules with options like

-m physdev --physdev-in/out device

may help in addition to other netfilter tools (for lower layers) to block redirected traffic to a “man in the middle system” on a Linux bridge.
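To make the rule format concrete – a sketch of how such physdev-based FORWARD rules (with logging, as used in the tests below) might look on the command line; the interface names and addresses follow the test setup of this series, while the exact chain layout generated by a tool like FWbuilder may differ:

iptables -A FORWARD -p icmp -s 192.168.50.14 -d 192.168.50.1 \
         -m physdev --physdev-out vethb2 \
         -j LOG --log-prefix "RULE 16 -- ACCEPT "
iptables -A FORWARD -p icmp -s 192.168.50.14 -d 192.168.50.1 \
         -m physdev --physdev-out vethb2 -j ACCEPT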

Tools like FWbuilder support the creation of such “physdev”-related rules as soon as bridge devices are marked as bridged in the interface definition process for the firewall host. However, we have also seen that we need to bind IP addresses to certain bridge ports. This in turn requires knowledge about a predictable IP-to-port-configuration.

Such a requirement may be an obstacle for using iptables in scenarios with many virtual guests on one or several Linux bridges of a virtualization host as it reduces flexibility for automated IP address assignment.

Before we discuss administrative aspects in a further post, let us expand our iptables rules to a more complex situation:

In this post we discuss a scenario with 2 linked Linux bridges “virbr4” and “virbr6” plus the host attached to “virbr4”. This provides us with a virtual infrastructure for which we need to construct a more complex, but more general set of rules in comparison to what we discussed in the last article. We will look at the required rules and their order. Testing of the rules will be done in a forthcoming post.

Two coupled bridges and the host attached via veth devices

You see my virtual bridge setup in the following drawing.

(Note for those who read the article before: I have exchanged the picture a bit to make it consistent with a forthcoming post. The port for kali2 has been renamed to “vk42”).

bridge3

The small blue rectangles inside the bridges symbolize standard Linux tap devices – whereas the RJ45-like rectangles symbolize veth devices. veth pairs deliver a convenient way on a Linux system to link bridges and to attach the host to them in a controllable way. As a side effect one can avoid assigning the bridge itself an IP address. See:
Fun with veth devices, Linux virtual bridges, KVM, VMware – attach the host and connect bridges via veth

In the drawing you recognize our bridge “virbr6” and its guests from the 1st post of this series. The new bridge “virbr4” is only equipped with one guest (kali2); this is sufficient for our test case purposes. Of course, you could have many more guests there in more realistic scenarios. Note that attaching certain groups of guests to distinct bridges also occurs in physical reality for a variety of reasons.

Two types of ports

For the rest of this post we call ports such as vethb2 on virbr6 as well as vethb1 and vmh1 on virbr4 “border ports” of their respective bridges. Such border ports

  • connect a bridge to another bridge,
  • connect a bridge to the virtualization host
  • or connect the bridge to hosts on external, physically real Ethernet segments.

We remind the reader that it always is the perspective of the bridge that decides about the INcoming or OUTgoing direction of an Ethernet packet via a specific port when we define respective IN/OUT iptables rules.

Therefore, packets crossing a border port in the IN direction always come from outside the bridge. Packets leaving the port OUTwards may however come from guests of the bridge itself AND from guests outside other border ports of the very same bridge.

In contrast to border ports we shall call a port of a bridge with just one defined guest behind it a “guest port”. [In our test case the bridge connection of guests is realized with tap devices because this is convenient with KVM. In the case of LXC and docker containers we would rather see veth-pairs.]

Multiple bridges on one host – how are the iptables rules probed by the kernel?

Just from looking at the sketch above we see a logical conundrum, which has a significant impact on the setup of iptables rules on a host with multiple bridges in place:

A packet created at one of the ports may leave the bridge where it has been created and travel into a neighboring bridge via border ports. But when and how are port-related iptables-rules tested by the kernel as the packet travels – let's say, e.g., from kali5 to the guest at “vnet0” or to the host at “vmh2”?

  • Bridge for bridge – IN-Port-rules, then OUT-port-rules on the same first bridge => afterwards IN-port-rules/OUT-port-rules again – but this time for the ports of the next entered bridge?
  • Or: iptables rules are checked only once, but globally and for all bridges – with some knowledge of port-MAC-relations of the different bridges included?

If the latter were true just one passed ACCEPT rule on a single bridge port would lead to an overall acceptance of a packet despite the fact that the packet possibly will cross further bridges afterward. Such a behavior would be unreasonable – but who knows …

So the basic question is:

After having been checked on a first bridge, having been accepted for leaving one border port of this first bridge and then having entered a second linked bridge via a corresponding border port – will the packet be checked again against all denial and acceptance rules of the second bridge? Will the packet with its transportation attributes be injected again into the whole set of iptables rules?

It is obvious that the answer would have an impact on how we need to define our rules – especially during port flooding, which we already observed in the tests described in our first article.

Tests of the order of iptables rules probing for ports of multiple bridges on a packet’s path

As a first test we do something very simple: we define some iptables rules for ICMP pings formally in the following logical order: We first deny a passage through vethb1 on virbr4 before we allow the packet to pass vethb2 on virbr6:

bridge vibr4 rule 15:  src 192.168.50.14, dest 192.168.50.1 - ICMP IN vethb1 => DENY   
bridge vibr6 rule 16:  src 192.168.50.14, dest 192.168.50.1 - ICMP OUT vethb2 => ALLOW    

and then we test the order of how these rules are passed by logging them.

To avoid any wrong or missing ARP information on the involved guest/host systems and missing MAC-port-relations in the “forward databases” [FWB] of the bridges we first clear any iptables rules and try some pings. Then we activate the rules and get the following results for ping packets sent from kali4 to the host:

2016-02-27T12:09:33.295145+01:00 mytux kernel: [ 5127.067043] RULE 16 -- ACCEPT IN=virbr6 OUT=virbr6 PHYSIN=vk64 PHYSOUT=vethb2 MAC=96:b0:a9:7c:73:7d:52:54:00:74:60:4a:08:00 SRC=192.168.50.14 DST=192.168.50.1 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=22031 DF PROTO=ICMP TYPE=8 CODE=0 ID=1711 SEQ=1     
2016-02-27T12:09:33.295158+01:00 mytux kernel: [ 5127.067062] RULE 15 -- DENY IN=virbr4 OUT=virbr4 PHYSIN=vethb1 PHYSOUT=vmh1 MAC=96:b0:a9:7c:73:7d:52:54:00:74:60:4a:08:00 SRC=192.168.50.14 DST=192.168.50.1 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=22031 DF PROTO=ICMP TYPE=8 CODE=0 ID=1711 SEQ=1    
2016-02-27T12:09:34.302140+01:00 mytux kernel: [ 5128.075040] RULE 16 -- ACCEPT IN=virbr6 OUT=virbr6 PHYSIN=vk64 PHYSOUT=vethb2 MAC=96:b0:a9:7c:73:7d:52:54:00:74:60:4a:08:00 SRC=192.168.50.14 DST=192.168.50.1 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=22131 DF PROTO=ICMP TYPE=8 CODE=0 ID=1711 SEQ=2 
2016-02-27T12:09:34.302153+01:00 mytux kernel: [ 5128.075056] RULE 15 -- DENY IN=virbr4 OUT=virbr4 PHYSIN=vethb1 PHYSOUT=vmh1 MAC=96:b0:a9:7c:73:7d:52:54:00:74:60:4a:08:00 SRC=192.168.50.14 DST=192.168.50.1 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=22131 DF PROTO=ICMP TYPE=8 CODE=0 ID=1711 SEQ=2   
 

Now we do a reverse test: We allow the incoming direction over port vk64 of virbr6 before we deny the incoming packet over vethb1 on virbr4:

bridge vibr6 rule :  src 192.168.50.14, dest 192.168.50.1 - IN vk64 => ALLOW   
bridge vibr4 rule :  src 192.168.50.14, dest 192.168.50.1 - IN vethb1 => DENY     
 

We get

2016-02-27T14:02:32.821286+01:00 mytux kernel: [11913.962828] RULE 15 -- ACCEPT IN=virbr6 OUT=virbr6 PHYSIN=vk64 PHYSOUT=vethb2 MAC=96:b0:a9:7c:73:7d:52:54:00:74:60:4a:08:00 SRC=192.168.50.14 DST=192.168.50.1 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=21400 DF PROTO=ICMP TYPE=8 CODE=0 ID=2104 SEQ=1     
2016-02-27T14:02:32.821307+01:00 mytux kernel: [11913.962869] RULE 16 -- DENY IN=virbr4 OUT=virbr4 PHYSIN=vethb1 PHYSOUT=vmh1 MAC=96:b0:a9:7c:73:7d:52:54:00:74:60:4a:08:00 SRC=192.168.50.14 DST=192.168.50.1 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=21400 DF PROTO=ICMP TYPE=8 CODE=0 ID=2104 SEQ=1 
2016-02-27T14:02:33.820257+01:00 mytux kernel: [11914.962965] RULE 15 -- ACCEPT IN=virbr6 OUT=virbr6 PHYSIN=vk64 PHYSOUT=vethb2 MAC=96:b0:a9:7c:73:7d:52:54:00:74:60:4a:08:00 SRC=192.168.50.14 DST=192.168.50.1 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=21494 DF PROTO=ICMP TYPE=8 CODE=0 ID=2104 SEQ=2    
2016-02-27T14:02:33.820275+01:00 mytux kernel: [11914.962987] RULE 16 -- DENY IN=virbr4 OUT=virbr4 PHYSIN=vethb1 PHYSOUT=vmh1 MAC=96:b0:a9:7c:73:7d:52:54:00:74:60:4a:08:00 SRC=192.168.50.14 DST=192.168.50.1 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=21494 DF PROTO=ICMP TYPE=8 CODE=0 ID=2104 SEQ=2   
  

So, on to our last test:

bridge vibr6 rule :  src 192.168.50.14, dest 192.168.50.1 - IN vk64 => ALLOW    
bridge vibr6 rule :  src 192.168.50.14, dest 192.168.50.1 - IN vethb2 => DENY   
bridge vibr4 rule :  src 192.168.50.14, dest 192.168.50.1 - IN vethb1 => DENY  
  

We get:

2016-02-27T14:26:08.964616+01:00 mytux kernel: [13331.634200] RULE 15 -- ACCEPT IN=virbr6 OUT=virbr6 PHYSIN=vk64 PHYSOUT=vethb2 MAC=96:b0:a9:7c:73:7d:52:54:00:74:60:4a:08:00 SRC=192.168.50.14 DST=192.168.50.1 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=27122 DF PROTO=ICMP TYPE=8 CODE=0 ID=2218 SEQ=1   
2016-02-27T14:26:08.964633+01:00 mytux kernel: [13331.634232] RULE 17 -- DENY IN=virbr4 OUT=virbr4 PHYSIN=vethb1 PHYSOUT=vmh1 MAC=96:b0:a9:7c:73:7d:52:54:00:74:60:4a:08:00 SRC=192.168.50.14 DST=192.168.50.1 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=27122 DF PROTO=ICMP TYPE=8 CODE=0 ID=2218 SEQ=1  
2016-02-27T14:26:09.972621+01:00 mytux kernel: [13332.643587] RULE 15 -- ACCEPT IN=virbr6 OUT=virbr6 PHYSIN=vk64 PHYSOUT=vethb2 MAC=96:b0:a9:7c:73:7d:52:54:00:74:60:4a:08:00 SRC=192.168.50.14 DST=192.168.50.1 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=27347 DF PROTO=ICMP TYPE=8 CODE=0 ID=2218 SEQ=2 
2016-02-27T14:26:09.972637+01:00 mytux kernel: [13332.643605] RULE 17 -- DENY IN=virbr4 OUT=virbr4 PHYSIN=vethb1 PHYSOUT=vmh1 MAC=96:b0:a9:7c:73:7d:52:54:00:74:60:4a:08:00 SRC=192.168.50.14 DST=192.168.50.1 LEN=84 TOS=0x00 PREC=0x00 TTL=64 ID=27347 DF PROTO=ICMP TYPE=8 CODE=0 ID=2218 SEQ=2   
 

Intermediate conclusions

We can conclude the following points:

  • A packet is probed per bridge – in the order of how multiple bridges of the host are passed by the packet.
  • An ALLOW rule for a port on one bridge does not overrule a DENY rule for a port on a second bridge which the packet may pass on its way.
  • A packet is tested both for IN/OUT conditions of a FORWARD rule for each bridge it passes.
  • If we split IN and OUT rules on a bridge (as we need to do within some tools such as FWbuilder) then we must probe the OUT rules first to guarantee the prevention of illegal packet transport.

For the rest of the post we shall follow the same rule we already used as a guideline in the previous post:
Our general iptables policy is that a packet will be denied if it is not explicitly accepted by one of the tested rules.
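Expressed directly in iptables terms this policy boils down to something like the following (a sketch; a tool like FWbuilder may instead append an explicit catch-all DENY rule at the end of the chain):

iptables -P FORWARD DROP     # forwarded (bridged) packets are dropped unless an ACCEPT rule matches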

Blocking of border ports in port flooding situations

During our tests in the last post we have seen that port flooding situations may occur – depending among other things on the “setageing” parameter of the bridge and the resulting deletion of stale entries in the “Forward Database” [FWD] of a bridge. Flooding of veth based border ports may be critical for packet transmission and may have to be blocked in some cases.

E.g., it would be unreasonable to transfer packets logically meant for hosts beyond port vmh1 of virbr4 over vethb1/2 to virbr6. We would stop such packets already via OUT DENY rules for vethb1:

bridge vibr4 rule :  src "guest of virbr4", dest "no guest of virbr6" - OUT vethb1 => DENY  

Rules regarding packets just crossing and passing a bridge

Think about a bridge “virbrx” linked on both of its sides to two other bridges “virbr_left” and “virbr_right”. In such a scenario packets could arrive at virbrx from bridge virbr_right, enter the intermediate bridge virbrx and leave it at once again for the third bridge virbr_left – because they never were destined to any guest of bridge virbrx.

For such packets we need at least one ACCEPT rule on virbrx – either in the IN direction of the border port of virbrx against virbr_right or in the OUT direction at the border port to virbr_left.

Again, we cling to our policy of the last article:
We define DENY rules for outgoing packets at all ports – also for border ports – and put these DENY rules at the top of the iptables list; then we define DENY rules for ports which are passed in the IN direction; only after that do we define ACCEPT rules for incoming packets for all ports of a bridge – including border ports – and place these rules below/after the DENY rules. This should provide us with a consistent handling also of packets crossing and passing bridges.

Grouping of guests/hosts

From looking at the drawing above we also understand the following point:
In order to handle packets at border ports connecting two bridges we have the choice to block packets at either border port – i.e. before the OUTgoing port passage on the first bridge OR before the INcoming port passage on the second. We shall do the blocking at the port in the packet's OUTgoing direction. Actually, there would be no harm in setting up reasonable DENY rules for both ports. Then we would safely cover all types of situations.

Anyway – we also find that the rules for border ports require a certain grouping of the guests and hosts:

  • Group 1: Guests attached to the bridge that has a border port.
  • Group 2: Guests on the IN side of the border port of a bridge – i.e. the internal side of the bridge. This group includes Group 1 plus external guests of further bridges beyond other border ports of the very same bridge.
  • Group 3: Guests on the outgoing side of a border port – i.e. the side to the next connected bridge. This group contains hosts of Group 1 for the next connected bridge and/or groups of external hosts on the OUT side of all other border ports of the connected bridge.

These groups can easily be formed per bridge by tools like FWbuilder. Without going into details: Note that FWbuilder handles the overall logical OR/AND switching during a negation of multiple groups of hosts correctly when compiling iptables rules.

Overall rules order in case of multiple and connected bridges

Taking into account the results of the first post in this series I suggest the following order of iptables rules:

  • We first define OUT DENY rules for all guest ports of all bridges – with ports grouped by bridges just to keep the overview. These rules are the most important ones to prevent ARP spoofing and a resulting packet redirection.
  • We then define all OUT DENY rules for border ports of all bridges – first grouped by bridges and then per bridge and ports grouped by hosts for the OUTgoing direction. These rules cover also port flooding situations with respect to neighboring linked bridges.
  • We then define IN DENY rules for incoming packets over border ports. These rules may in addition to the previous rules prevent implausible packet transport.
  • Now we apply OUT DENY and IN DENY rules for Ethernet devices on the virtualization host. Such rules must not be forgotten and can be placed here in the rules’ sequence.
  • We then define IN ACCEPT rules on individual guest ports – ports again grouped by bridges.
  • We eventually define IN ACCEPT rules on bridge border ports – note that such rules are required for packets just passing an intermediate bridge without being destined to a guest of the bridge.
  • IN ACCEPT rules for the virtualization hosts’s Ethernet interfaces must not be forgotten and can be placed at the end.

How does that look in FWbuilder?

Before looking at the pics note that we have defined the host groups

  • br6_grp to contain kali3, kali4, kali5,
  • br4_grp to contain only kali2,
  • ext_grp to contain the host and some external web server “lamp“.

With this we get the following 7 groups of rules:

full_1

full_2

full_3

full_4

full_5

full_6

full_7

Despite the host grouping: this makes quite a bunch of rules! But not uncontrollable …

Enough for today. I hope that the tests to be performed in a third post of this series will not prove me wrong. I am confident …. See
Linux bridges – can iptables be used against MiM attacks based on ARP spoofing ? – III

 

Fun with veth devices, Linux virtual bridges, KVM, VMware – attach the host and connect bridges via veth

Typically, virtual “veth” Ethernet devices are used for connecting virtual containers (as LXC) to virtual bridges like an OpenVswitch. But, due to their pair nature, “veth” devices promise flexibility also in other, much simpler contexts of virtual network construction. Therefore, the objective of this article is to experiment a bit with “veth” devices as tools to attach the virtualization host itself or other (virtual) devices like a secondary Linux bridge or a VMware bridge to a standard Linux bridge – and thus enable communication with and between virtualized guest systems.

Motivation

I got interested in “veth”-devices when trying to gain flexibility for quickly rebuilding and rearranging different virtual network configurations in a pen-testing lab on Linux laptops. For example:

  • Sometimes you strongly wish to avoid giving a Linux bridge itself an IP. Assigning an IP to a Linux bridge normally enables host communication with KVM guests attached to the bridge. However, during attack simulations across the bridge the host gets very exposed. In my opinion the host can better and more efficiently be protected by packet filters if it communicates with the bridge guests over a special “veth” interface pair which is attached to the bridge. In other test or simulation scenarios one may rather wish to connect the host like an external physical system to the bridge – i.e. via a kind of uplink port.
  • There are scenarios for which you would like to couple two bridges, each with virtual guests, to each other – and make all guests communicate with each other and the host. Or establish communication from a guest of one Linux bridge to VMware guests of a VMware bridge attached to yet another Linux bridge. In all these situations all guests and the host itself may reside in the same logical IP network segment, but in segregated parts. In the physical reality admins may have used such a segregation for improving performance and avoiding an overload of switches.
  • In addition one can solve some problems with “veth” pairs which otherwise would get complicated. One example is avoiding the assignment of an IP address to a special enslaved ethernet device representing the bridge for the Linux system. Both libvirt’s virt-manager and VMware WS’s “network editor” automatically perform such an IP assignment when creating virtual host-only-networks. We shall come back to this point below.

As a preparation let us first briefly compare “veth” with “tap” devices and summarize some basic aspects of Linux bridges – all according to my yet limited understanding. Afterwards, we shall realize a simple network scenario for training purposes.

vtap vs veth

A virtual “tap” device is a single point-to-point device which can be used by a program in user-space or a virtual machine to send Ethernet packets on layer 2 directly to the kernel or receive packets from it. A file descriptor (fd) is read/written during such a transmission. KVM/qemu virtualization uses “tap” devices to equip virtualized guest systems with a virtual and configurable Ethernet interface – which then interacts with the fd. A tap device can on the other side be attached to a virtual Linux bridge; the kernel handles the packet transfer as if it occurred over a virtual bridge port.

“veth” devices are instead created as pairs of connected virtual Ethernet interfaces. These 2 devices can be imagined as being connected by a network cable; each veth-device of a pair can be attached to different virtual entities as OpenVswitch bridges, LXC containers or Linux standard bridges. veth pairs are ideal to connect virtual devices to each other.

While not supporting veth directly, a KVM guest can bridge a veth device via macVtap/macVlan (see https://seravo.fi/2012/virtualized-bridged-networking-with-macvtap).

In addition, VMware’s virtual networks can be bridged to a veth device – as we shall show below.

Aspects and properties of Linux bridges

Several basic aspects and limitations of standard Linux bridges are noteworthy:

  • A “tap” device attached to one Linux bridge cannot be attached to another Linux bridge.
  • All attached devices are switched into the promiscuous mode.
  • The bridge itself (not a tap device at a port!) can get an IP address and may work as a standard Ethernet device. The host can communicate via this address with other guests attached to the bridge.
  • You may attach several physical Ethernet devices (without IP !) of the host to a bridge – each as a kind of “uplink” to other physical switches/hubs and connected systems. With the spanning tree protocol activated all physical systems attached to the network behind each physical interface may communicate with physical or virtual guests linked to the bridge by other physical interfaces or virtual ports.
  • Properly configured the bridge transfers packets directly between two specific bridge ports related to the communication stream of 2 attached guests – without exposing the communication to other ports and other guests. The bridge may learn and update the relevant association of MAC addresses to bridge ports.
  • The virtual bridge device itself – in its role as an Ethernet device – does not work in promiscuous mode. However, packets arriving through one of its ports for (yet) unknown addresses may be flooded to all ports.
  • You cannot bridge a Linux bridge directly by or with another Linux bridge (no Linux bridge cascading). Nor can you connect a Linux bridge to another Linux bridge via a “tap” device.

In combination with VMware (on a Linux host) some additional aspects are interesting:

  • A virtual Linux bridge in its role as an Ethernet device can be bridged by non-native Linux bridges – e.g. by VMware bridges – and thereby be switched into promiscuous mode. The VMware (master) bridge then uses a Linux bridge as an attached (slave) device. This type of bridge cascading may have security impacts: packets arriving via a physical port at the Linux bridge and being destined to VMware guests connected to their VMware master bridge may become visible at the Linux bridge ports. See:
    VMware WS – bridging of Linux bridges and security implications
  • The “vmnet”-Ethernet device related to a VMware bridge on a Linux host can be attached (without an IP-address) to a Linux bridge thus enabling communication between VMware guests attached to a VMware bridge and KVM guests connected to the Linux bridge. However, as this is an uplink like situation we must get rid of any IP address assigned to the “vmnet”-Ethernet device.
A test scenario

I want to realize the following test scenario with the help of veth-pairs:

Our virtual network shall contain two coupled Linux bridges, each with a KVM guest. The host "mytux" shall be attached via a regular bridge port to only one of the bridges. In addition we want to connect a VMware bridge to one of the Linux bridges. All KVM/VMware guests shall belong to the same logical layer 3 network segment and be able to communicate with each other and the host (plus external systems via routing).

veth6

The RJ45-like connectors in the picture above represent veth devices – which occur in pairs. The blue small rectangles on the Linux bridges instead represent ports associated with virtual tap devices. I admit: this scenario of a virtual network inside a host is a bit academic. But it allows us to test what is possible with "veth" pairs.

Building the bridges

On our Linux host we use virt-manager's "connection details >> virtual networks" to define 2 virtual host-only networks with bridges "virbr4" and "virbr6".

veth7

Note: We do not allow for bridge specific "dhcp-services" and do not assign network addresses. We shall later configure addresses of the guests manually; you will find some remarks on a specific, network wide DHCP service at the end of the article.

Then we implement and configure 2 KVM Linux guests (here Kali systems) – one with an Ethernet interface attached to "virbr4"; the other guest will be connected to "virbr6". The next picture shows the network settings for guest "kali3" which gets attached to "virbr6".

veth8

We activate the networks and boot our guests. Then on the guests (activate the right interface and deactivate other interfaces, if necessary) we need to set IP-addresses: The interfaces on kali2, kali3 must be configured manually – as we had not activated DHCP. kali2 gets the address "192.168.50.12", kali3 the address "192.168.50.13".
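On the command line such a manual configuration amounts to something like the following sketch; the interface name "eth0" is an assumption and may differ on your Kali guests:

    root@kali2:~# ip addr add 192.168.50.12/24 broadcast 192.168.50.255 dev eth0
    root@kali2:~# ip link set eth0 up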

veth9

If we had defined several tap interfaces on our guest system kali3 we might have had a problem identifying the right interface associated with the bridge. It can however be identified by its MAC and a comparison to the MACs of "vnet" devices in the output of the commands "ip link show" and "brctl show virbr6".

Now let us look at what information we get about the bridges on the host:

    mytux:~ # brctl show virbr4
    bridge name     bridge id               STP enabled     interfaces
    virbr4          8000.5254007e553d       yes             virbr4-nic
                                                            vnet6
    mytux:~ # ifconfig virbr4 
    virbr4    Link encap:Ethernet  HWaddr 52:54:00:7E:55:3D  
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
    mytux:~ # ifconfig vnet6
    vnet6     Link encap:Ethernet  HWaddr FE:54:00:F2:A4:8D  
              inet6 addr: fe80::fc54:ff:fef2:a48d/64 Scope:Link
    ....
    mytux:~ # brctl show virbr6
    bridge name     bridge id               STP enabled     interfaces
    virbr6          8000.525400c0b06f       yes             virbr6-nic
                                                            vnet2
    mytux:~ # ip addr show virbr6 
    22: virbr6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
        link/ether 52:54:00:c0:b0:6f brd ff:ff:ff:ff:ff:ff
    mytux:~ # ifconfig vnet2
    vnet2     Link encap:Ethernet  HWaddr FE:54:00:B1:5D:1F  
              inet6 addr: fe80::fc54:ff:feb1:5d1f/64 Scope:Link
    .....
    mytux:~ # 
    

     
    Note that we do not see any IPv4 information on the “tap” devices vnet6 and vnet2 here. But note, too, that no IP address has been assigned by the host to the bridges themselves.

    Ok, we have bridge virbr4 with guest “kali2” and a separate bridge virbr6 with KVM guest “kali3”. The host has no role in this game, yet. We are going to change this in the next step.

    Note that virt-manager automatically started the bridges when we started the KVM guests. Alternatively, we could have brought them up manually:

    mytux:~ # ip link set virbr4 up
    mytux:~ # ip link set virbr6 up

    We may also configure the bridges with “virt-manager” to be started automatically at boot time.
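
    If you prefer the command line, “virsh” can do the same; the network names “net4” and “net6” below are assumptions – use the names which “virsh net-list --all” shows for your network definitions:

    mytux:~ # virsh net-start net4
    mytux:~ # virsh net-autostart net4
    mytux:~ # virsh net-start net6
    mytux:~ # virsh net-autostart net6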

    Attaching the host to a bridge via veth

    According to our scenario we now attach the host to virbr4 by means of a veth-pair. We create such a pair and connect one of its two Ethernet interfaces to “virbr4”:

    mytux:~ # ip link add dev vmh1 type veth peer name vmh2       
    mytux:~ # brctl addif virbr4 vmh1
    mytux:~ # brctl show virbr4 
    bridge name     bridge id               STP enabled     interfaces
    virbr4          8000.5254007e553d       yes             virbr4-nic
                                                            vmh1
                                                            vnet6
    

     
    Now, we assign an IP address to interface vmh2 – which is not enslaved by any bridge:

    mytux:~ # ip addr add 192.168.50.1/24 broadcast 192.168.50.255 dev vmh2
    mytux:~ # ip addr show vmh2
    6: vmh2@vmh1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN group default qlen 1000
        link/ether 42:79:e6:a7:fb:09 brd ff:ff:ff:ff:ff:ff
        inet 192.168.50.1/24 brd 192.168.50.255 scope global vmh2
           valid_lft forever preferred_lft forever
    

     
    We then activate vmh1 and vmh2. Next we need a route on the host to the bridge (and to the guests at its ports) via vmh2 (!!):

    mytux:~ # ip  link set vmh1 up
    mytux:~ # ip  link set vmh2 up
    mytux:~ # route add -net 192.168.50.0/24 dev vmh2
    mytux:~ # route
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
    default         ufo             0.0.0.0         UG    0      0        0 br0
    192.168.10.0    *               255.255.255.0   U     0      0        0 br0
    ...
    192.168.50.0    *               255.255.255.0   U     0      0        0 vmh2
    ...
    

     
    Now we check whether we can reach guest “kali2” from the host and vice versa:

    mytux:~ # ping 192.168.50.12
    PING 192.168.50.12 (192.168.50.12) 56(84) bytes of data.
    64 bytes from 192.168.50.12: icmp_seq=1 ttl=64 time=0.291 ms
    64 bytes from 192.168.50.12: icmp_seq=2 ttl=64 time=0.316 ms
    64 bytes from 192.168.50.12: icmp_seq=3 ttl=64 time=0.322 ms
    ^C
    --- 192.168.50.12 ping statistics ---
    3 packets transmitted, 3 received, 0% packet loss, time 1999ms
    rtt min/avg/max/mdev = 0.291/0.309/0.322/0.024 ms
    
    root@kali2:~ # ping 192.168.50.1
    PING 192.168.50.1 (192.168.50.1) 56(84) bytes of data.
    64 bytes from 192.168.50.1: icmp_seq=1 ttl=64 time=0.196 ms
    64 bytes from 192.168.50.1: icmp_seq=2 ttl=64 time=0.340 ms
    64 bytes from 192.168.50.1: icmp_seq=3 ttl=64 time=0.255 ms
    ^C
    --- 192.168.50.1 ping statistics ---
    3 packets transmitted, 3 received, 0% packet loss, time 1998ms
    rtt min/avg/max/mdev = 0.196/0.263/0.340/0.062 ms
    

     
    So, we have learned that the host can easily be connected to a Linux bridge via a veth-pair – and that we do not need to assign an IP address to the bridge itself. Regarding the connection link, the resulting situation is very similar to that of a bridge where a physical “eth0” NIC serves as an uplink to external systems of a physical network.

    All in all I like this situation much better than having a bridge with an IP address. During critical penetration tests we can now simply pull vmh1 out of the bridge. And regarding packet filters: we do not need to establish firewall rules on the bridge itself – which has security implications if done only on layer 3 – but on an “external” Ethernet device (see the command sketch below). Note also that the interface “vmh2” could directly be bridged by VMware (if you have more trust in VMware bridges) without producing the guest isolation problems described in a previous article (quoted above).
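
    A small sketch of both measures – temporarily detaching the host link and filtering on the veth endpoint instead of on the bridge; the iptables rules are only illustrative examples:

    mytux:~ # brctl delif virbr4 vmh1    # "pull the plug" during critical tests ...
    mytux:~ # brctl addif virbr4 vmh1    # ... and re-attach the host link afterwards

    mytux:~ # iptables -A INPUT -i vmh2 -s 192.168.50.0/24 -p tcp --dport 22 -j ACCEPT
    mytux:~ # iptables -A INPUT -i vmh2 -j DROP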

    Linking two Linux bridges with each other

    Now we try to create a link between our two Linux bridges. As Linux bridge cascading is forbidden, it is interesting to find out whether at least bridge linking is allowed. We use an additional veth-pair for this purpose:

    mytux:~ # ip link add dev vethb1 type veth peer name vethb2       
    mytux:~ # brctl addif virbr4 vethb1
    mytux:~ # brctl addif virbr6 vethb2
    mytux:~ # brctl show virbr4
    bridge name     bridge id               STP enabled     interfaces
    virbr4          8000.5254007e553d       yes             vethb1
                                                            virbr4-nic
                                                            vmh1
                                                            vnet6
    mytux:~ # brctl show virbr6
    bridge name     bridge id               STP enabled     interfaces
    virbr6          8000.2e424b32cb7d       yes             vethb2
                                                            virbr6-nic
                                                            vnet2
    
    
    mytux:~ # ip link set vethb1 up 
    mytux:~ # ip link set vethb2 up 
    

     
    Note that the STP protocol is enabled on both bridges! (If you see something different, you can manually activate STP via options of the brctl command – see the example below.)
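
    For instance, should STP appear disabled on one of the bridges:

    mytux:~ # brctl stp virbr4 on
    mytux:~ # brctl stp virbr6 on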

    Now, can we communicate from “kali3” at “virbr6” over the veth-pair and “virbr4” with the host?
    [Please check the routes on all involved machines for reasonable entries first and correct them if necessary; one never knows …]

    [Image: veth10]
    and
    [Image: veth11]

    Yes, obviously we can – and also the host can reach the virtual guest kali3.

    mytux:~ # ping -c4 192.168.50.13
    PING 192.168.50.13 (192.168.50.13) 56(84) bytes of data.
    64 bytes from 192.168.50.13: icmp_seq=1 ttl=64 time=0.259 ms
    64 bytes from 192.168.50.13: icmp_seq=2 ttl=64 time=0.327 ms
    64 bytes from 192.168.50.13: icmp_seq=3 ttl=64 time=0.191 ms
    64 bytes from 192.168.50.13: icmp_seq=4 ttl=64 time=0.287 ms
    
    --- 192.168.50.13 ping statistics ---
    4 packets transmitted, 4 received, 0% packet loss, time 2998ms
    rtt min/avg/max/mdev = 0.191/0.266/0.327/0.049 ms
    

     
    and of course:
    [Image: veth12]

    This was just another example of how we can use veth-pairs: we can link Linux bridges together – and all guests at both bridges are able to communicate with each other and with the host. Good!

    Connecting a virtual VMware bridge to a Linux bridge via a veth-pair

    Our last experiment involves a VMware WS bridge. We could use the VMware Network Editor to define a regular “VMware Host Only Network”. However, the bridge for such a network is automatically created together with an associated, enslaved Ethernet device for and on the host. And the bridge itself would automatically get an IP address – namely 192.168.50.1. There is no way known to me to avoid this – we would need to eliminate this address manually afterward.
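
    Removing such an automatically assigned address afterward would be a one-liner; the device name “vmnet2” used here is just an assumption – it depends on which vmnet number VMware assigns:

    mytux:~ # ip addr del 192.168.50.1/24 dev vmnet2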

    So, we take a different road:
    We first create a pair of veth devices – and then bridge (!) one of these veth devices by VMware:

    mytux:~ # ip link add dev vmw1 type veth peer name vmw2
    mytux:~ # brctl addif virbr4 vmw1
    mytux:~ # ip link set vmw1 up
    mytux:~ # ip link set vmw2 up
    mytux:~ # /etc/init.d/vmware restart
    

     
    To create the required VMware bridge to vmw2 we use VMware’s “Virtual Network Editor”:

    [Image: veth13]

    Note that by creating a specific bridge to one of the veth devices we have avoided the automatic assignment of an IP address (192.168.50.1) to the Ethernet device which VMware would normally create together with a host-only bridge. Thus we avoid any conflict with the address assignment to “vmh2” already performed above.

    In our VMware guest (here a Windows system) we configure the network device – e.g. with the address 192.168.50.21 – and then try our luck:

    [Image: veth14]

    Great – just what we expected! Of course the other virtual guests and the host can also send packets to the VMware guest. I need not show this here explicitly.

    Summary

    veth-pairs are easy to create and to use. They are ideal tools to connect the host as well as other Linux or VMware bridges to a Linux bridge in a well-defined way.

    A remark on DHCP

    Reasonable and precisely defined address assignment to bridges and/or virtual interfaces can become a problem with VMware as well as with KVM/virt-manager or virsh – especially when you want to avoid address assignment to the bridges themselves. Typically, when you define virtual networks in your virtualization environment, a bridge is created together with an attached Ethernet interface for the host – which you may not really need. If you in addition enable DHCP functionality for the bridge/network, the bridge itself (or the related device) will inevitably (!) and automatically get an address like 192.168.50.1. Furthermore, related host routes are set automatically. This may lead to conflicts with what you really want to achieve.

    Therefore: If you want to work with DHCP I advise you to do this with a central DHCP service on the Linux host and not to use the DHCP services of the various virtualization environments. If you in addition want to avoid assigning IP addresses to the bridges themselves, you may need to work with DHCP pools and groups. This is beyond the scope of this article – though interesting in itself. An alternative would, of course, be to set up the whole virtual network with the help of a script, which may (with a little configuration work) be included as a unit into systemd.
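
    Just to give an impression: a central ISC dhcpd on the host could serve our segment along the following lines. The address range and option values are pure examples; on an Opensuse host you would in addition restrict the daemon to “vmh2” via DHCPD_INTERFACE in “/etc/sysconfig/dhcpd”:

    # /etc/dhcpd.conf - minimal sketch for the 192.168.50.0/24 segment
    subnet 192.168.50.0 netmask 255.255.255.0 {
        range 192.168.50.100 192.168.50.200;
        option routers 192.168.50.1;               # the host's address on vmh2
        option domain-name-servers 192.168.10.1;   # example DNS server in the LAN
    }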

    Make veth settings persistent

    Here we have a bit of a problem with Opensuse 13.2/Leap 42.1! The reason is that systemd in Leap and OS 13.2 is of version 210 and does not yet contain the service “systemd-networkd.service” – which would support the creation of virtual devices like “veth”-pairs during system startup. To my knowledge, neither the “wicked” service used by Opensuse nor the “ifcfg-…” files allow for the definition of veth-pairs yet. Bridge creation and address assignment to existing Ethernet devices are, however, supported. So, what can we do to make things persistent?

    Of course, you can write a script that creates and configures all of your required veth-pairs. This script could be integrated into the boot process as a systemd service to be started before the “wicked.service”. In addition you may configure the then existing Ethernet devices with “ifcfg-…”-files. Such files can also be used to guarantee an automatic setup of Linux bridges and their enslavement of defined Ethernet devices.
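
    A minimal sketch of such a setup – the script path, the unit name and the ifcfg values below are assumptions which have to be adapted to your system:

    #!/bin/bash
    # /usr/local/bin/create-veth-pairs.sh - (re)creates the veth pairs used in this article
    ip link add dev vmh1 type veth peer name vmh2
    ip link add dev vethb1 type veth peer name vethb2
    ip link add dev vmw1 type veth peer name vmw2

    # /etc/systemd/system/veth-pairs.service - run the script before wicked configures the network
    [Unit]
    Description=Create veth pairs for virtual bridges
    Before=wicked.service

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=/usr/local/bin/create-veth-pairs.sh

    [Install]
    WantedBy=multi-user.target

    # /etc/sysconfig/network/ifcfg-vmh2 - let wicked assign the host address to the free veth end
    STARTMODE='auto'
    BOOTPROTO='static'
    IPADDR='192.168.50.1/24'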

    Another option – if you dare to take some risks – is to fetch systemd version 224 from Opensuse’s Tumbleweed repository. Then you may create a directory “/etc/systemd/network” and configure the creation of veth-pairs via corresponding “….netdev”-files in that directory. E.g.:

    mytux # cat veth1.netdev 
    [NetDev]
    Name=vmh1
    Kind=veth
    [Peer]
    Name=vmh2
    

     
    I tried it – it works. However, systemd version 224 has trouble with the rearrangement of Leap’s apparmor startup. I have not looked into this in detail yet.

    Nevertheless, have fun with veth devices in your virtual networks!