More fun with veth, network namespaces, VLANs – VI – veth subdevices for private 802.1q VLANs

My present post series on veth devices and Linux network namespaces continues articles I wrote in 2017 about virtual networking and veth-based connections. The last two posts of the present series

were a kind of excursion: We studied the effects of routes on ARP-requests in network namespaces where two or more L2-segments of a LAN terminated – with all IPs of these segments belonging to one and the same IP-subnet. So far, the segments considered in this series were standard L2-segments, not VLAN segments.

Despite being instructive, scenarios with multiple standard L2-segments attached to a namespace (or virtualized host) are a bit academic. However, when we replace the virtual L2-segments by private virtual VLAN segments, we arrive at more relevant network configurations. A namespace or a virtualized host with multiple attached VLAN segments is not academic – even without forwarding and even if the namespace has just one IP. We then speak of a virtual “multi-homed” namespace (or virtualized host) connected to multiple VLAN segments.

What could be relevant examples?

  • Think of a virtual host that administers other guest systems in various private VLANs of a virtualization server! To achieve this we need routes in the multi-homed host, but no forwarding.
  • On the other hand, think of a namespace/host where multiple VLANs terminate and where groups of LAN segments must be connected to each other or to the outer world. In this case we would combine private VLANs with forwarding; the namespace would become a classic router for virtual hosts located in different VLANs.

The configuration of virtual private VLAN scenarios based on Linux veth and bridge devices is our eventual objective. In forthcoming posts we will consider virtual VLANs realized with the help of veth subdevices. Such VLANs follow the IEEE 802.1q standard on the Link layer. Regarding VLAN-aware bridges we may refer to RFC 5517; a Linux (virtual) bridge fulfills almost all of the requirements of this RFC.

This very limited set of tools gives us a choice between many basic options for setting up a complex VLAN landscape. Different options to configure Linux bridge ports and/or VLAN-aware veth endpoints in namespaces extend the spectrum of decisions to be made. An arrangement of multiple cascading bridges may define a complex hierarchical topology. But there are other factors, too. Understanding the influence of routes and of Linux kernel parameters on the path and propagation of ARP- and ICMP-requests in virtual LANs with VLANs and multi-homed namespaces (or hosts) can become crucial regarding both the functionality and the weak points of different solutions.

In this post we start with a brief discussion of different basic options for VLAN creation. We then look closer at configuration variants for VLAN-aware veth interfaces inside a namespace. The results will prepare us for two very simple VLAN scenarios that will help us investigate the impact of routes and kernel parameters in the next two posts. In later posts I will turn to the port configuration of Linux bridges for complex VLAN solutions.

Private VLANs and veth endpoints

We speak of private VLANs because we want to separate packet traffic between different groups of hosts within a LAN-segment – on the Link layer. This requires that the packets get marked and are processed according to these marks. In an 802.1q-compliant VLAN the packets get VLAN tags, which carry a VLAN ID [VID] in a special data section of the Ethernet frame. By analyzing these tags in VLAN-aware devices we can filter packets and thus subdivide the original LAN segment into different VLAN-segments. Private VLAN segments form a Link layer broadcast domain (here: an Ethernet broadcast domain) of their own.

The drawing below shows tagged packets (diamond-like shapes) moving along one and the same veth connection line. Each color represents a certain VID.

We see some VLAN-aware network devices on both sides. They send and receive packets with certain tags only. A veth endpoint can be supplemented with such VLAN-aware subdevices. The device in the middle may instead handle untagged packets only. But we could also have something like this:

Here on the right side a typical “trunk interface” transports all kinds of tagged and untagged packets into a target space – which could be a namespace or the inner part of a bridge. We will see that such a trunk interface is realized by a standard veth endpoint.
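
As a first, hedged illustration of how such VLAN-aware subdevices and a trunk endpoint can be created with iproute2 – the namespace name netns1, the device names vethA/vethB and the VIDs 10 and 20 are merely example values, not a configuration from this series:

    # Create a veth pair; vethB will later serve as the plain "trunk" endpoint
    ip link add vethA type veth peer name vethB

    # Move vethA into a namespace and equip it with 802.1q subdevices
    ip netns add netns1
    ip link set vethA netns netns1
    ip netns exec netns1 ip link add link vethA name vethA.10 type vlan id 10
    ip netns exec netns1 ip link add link vethA name vethA.20 type vlan id 20

    # Bring the devices up; vethB transports tagged and untagged packets alike
    ip netns exec netns1 ip link set vethA up
    ip netns exec netns1 ip link set vethA.10 up
    ip netns exec netns1 ip link set vethA.20 up
    ip link set vethB up

Packets sent through vethA.10 leave the trunk tagged with VID 10, while packets sent via vethA itself remain untagged.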

Basic VLAN variants based on veths and Linux bridges

Linux bridge ports for attached VLAN-aware or unaware devices can be configured in various ways to tag or de-tag packets. Via VID-related filters the bridge can establish and control allowed or disallowed connections between its ports, all on the Link layer. Based on veths and Linux bridges there are four basic ways to set up (virtual) VLAN solutions:

  1. We can completely rely on the tagging and traffic separation abilities of Linux bridges. All namespaces and hosts would be connected to the bridge via veth lines and devices that transport untagged packets only. The bridge then defines the VLANs of a LAN-segment (see the command sketch after this list).
  2. We can start with VLAN separation already in the multi-homed namespace/host via VLAN-aware subdevices of veth endpoints and connect the respective VLAN-aware devices or a trunk device of the veth peer to the bridge. These options require different settings on the bridge ports.
  3. We can combine approaches (1) and (2).
  4. We can connect different namespaces directly via VLAN aware veth peer devices.
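
To make option (1) a bit more tangible, here is a minimal command sketch for a VLAN-aware Linux bridge; the bridge name br1, the port name vethB and the VID 10 are again just illustrative assumptions:

    # Create a bridge and enable VLAN filtering on it (option 1)
    ip link add br1 type bridge
    ip link set br1 type bridge vlan_filtering 1
    ip link set br1 up

    # Attach a veth endpoint (leading to some namespace or host) as a port
    ip link set vethB master br1
    ip link set vethB up

    # Untagged packets entering this port get VID 10; egress is untagged again
    bridge vlan add dev vethB vid 10 pvid untagged

    # Inspect the per-port VLAN filter table
    bridge vlan show

In option (2), by contrast, the tagging already happens at the VLAN subdevices inside the namespace, and the bridge port would have to accept the corresponding tagged packets (e.g. via "bridge vlan add dev vethB vid 10" without the pvid/untagged flags).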

Continue reading

More fun with veth, network namespaces, VLANs – V – link two L2-segments of the same IP-subnet by a routing network namespace

In the last two posts of this series

More fun with veth, network namespaces, VLANs – IV – L2-segments, same IP-subnet, ARP and routes

More fun with veth, network namespaces, VLANs – III – L2-segments of the same IP-subnet and routes in coupling network namespaces

we have studied a Linux network namespace with two attached L2-segments. All IPs were members of one and the same IP-subnet. Forwarding and Proxy ARP had been deactivated in this namespace.

So far, we have understood that routes have a decisive impact on the choice of the destination segment when ICMP- and ARP-requests are sent from a network namespace with multiple NICs – independent of whether forwarding is enabled or not. Insufficiently detailed routes can lead to problems and to an asymmetric arrival of replies from the segments – already on the ARP-level!

The obvious impact of routes on ARP-requests in our special scenario has surprised at least some readers, but I think the remaining open questions have been answered in detail by the experiments discussed in the preceding post. We can now move on on sufficiently solid ground.

We have also seen that even with detailed routes ARP- and ICMP-traffic paths to and from the L2-segments remain separated in our scenario (see the graphics below). The reason, of course, was that we had deactivated forwarding in the coupling namespace.

In this post we will study what happens when we activate forwarding. We will watch results of experiments both on the ICMP- and the ARP-level. Our objective is to link our otherwise separate L2-segments (with all their IPs in the same IP-subnet) seamlessly by a forwarding network namespace – and thus form some kind of larger segment. And we will test in what way Proxy ARP will help us to achieve this objective.

Not just fun …

Now, you could argue that no reasonable admin would link two virtual segments with IPs in the same IP-subnet by a routing namespace. One would use a virtual bridge. First answer: We perform virtual network experiments here for fun … Second answer: It’s not just fun …

Our eventual objective is the setup of virtual VLAN configurations and related security measures. Of particular interest are routing namespaces where two tagging VLANs terminate and communicate with a third LAN-segment, the latter leading to an Internet connection. The present experiments with standard segments are only a first step in this direction.

When we imagine a replacement of the standard segments by tagged VLAN segments, we already get the impression that we could use a common namespace for the administration of VLANs without accidentally mixing or transferring ICMP- and ARP-traffic between the VLANs. But the results of the last two posts also gave us a clear warning to distinguish carefully between routing and forwarding in namespaces.

The modified scenario – linking two L2-segments by a forwarding namespace

Let us have a look at a sketch of our scenario first:

We see our segments S1 and S2 again. All IPs are members of 192.168.5.0/24. The segments are attached to a common network namespace netnsR. The difference to previous scenarios in this post series lies in the activated forwarding and in the definition of detailed routes in netnsR for the NICs with IPs of the same C-class IP-subnet.
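
For readers who want to reproduce the basic setting: activating forwarding and defining such detailed routes in netnsR amounts to commands of the following kind. The device names veth1/veth2 and the target IPs are placeholders within 192.168.5.0/24, not necessarily the exact values used in the experiments:

    # Enable IPv4 forwarding inside netnsR
    ip netns exec netnsR sysctl -w net.ipv4.ip_forward=1

    # Detailed host routes: one /32 per target IP, bound to the respective NIC
    ip netns exec netnsR ip route add 192.168.5.2/32 dev veth1
    ip netns exec netnsR ip route add 192.168.5.4/32 dev veth2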

Our experiments below will look at the effect of default gateway definitions and at the requirement of detailed routes in the L2-segments’ namespaces. In addition we will test in what way enabling Proxy ARP in netnsR can help to achieve seamless segment coupling in an efficient, centralized way.
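
The Proxy ARP part of these tests boils down to per-interface kernel parameters and to gateway routes in the segments’ namespaces; again a hedged sketch with placeholder names (netns1 for a segment namespace, 192.168.5.1 for netnsR’s assumed IP):

    # Let netnsR answer ARP-requests on behalf of hosts behind its other NIC
    ip netns exec netnsR sysctl -w net.ipv4.conf.veth1.proxy_arp=1
    ip netns exec netnsR sysctl -w net.ipv4.conf.veth2.proxy_arp=1

    # A default gateway inside a segment's namespace, pointing to netnsR's IP
    ip netns exec netns1 ip route add default via 192.168.5.1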

Continue reading

More fun with veth, network namespaces, VLANs – IV – L2-segments, same IP-subnet, ARP and routes

In the course of my present post series on veths, network namespaces and virtual VLANs

we have studied a somewhat academic network scenario:

Two L2-segments were attached to a common network namespace, with all IPs of both segments belonging to one and the same IP-subnet. The network traffic was analyzed for different route configurations. Only detailed routes ensured a symmetric handling of packets in both segments.

The IP-setup makes the scenario a bit peculiar. It is not a situation an admin would normally create. Instead, the IPs of each segment would be members of one of two distinct IP-subnets.

However, our scenario already taught us some important lessons to keep in mind when we approach our eventual objective:

Sooner or later we want to answer the question of how to set up virtual VLAN configurations in which veth subdevices for tagged VLAN lines terminate in a common, routing namespace.

Our academic scenario taught us that we need to set up much more specific routes than the standard ones which are automatically created in the wake of “ip addr add” commands – despite the fact that forwarding was disabled in the coupling network namespace! Only with clear and fitting routes did we get a reasonable behavior of our artificial network with respect to ICMP- and ARP-requests in the last post (III).
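
As a reminder of what “ip addr add” does automatically, and why that was not enough in our scenario – the device and IP values below are only illustrative:

    # Assigning an IP with a /24 prefix silently installs a broad subnet route
    ip netns exec netnsR ip addr add 192.168.5.1/24 dev veth1
    ip netns exec netnsR ip route show
    # -> roughly: 192.168.5.0/24 dev veth1 proto kernel scope link src 192.168.5.1

    # With a second NIC carrying IPs of the same subnet this route is ambiguous;
    # narrower, per-host routes (as used in post III) resolve the ambiguity:
    ip netns exec netnsR ip route add 192.168.5.4/32 dev veth2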

Well, this is, in a way, a platitude. Whenever you have a special network you must adapt the routes to your network layout. This, of course, includes the resolution of ambiguities which our scenario introduced. The automatically defined routes were insufficient and caused an asymmetry in the ICMP-replies in the otherwise very symmetric configuration (see the drawing below).

Nevertheless, the last post and also older posts on veths and virtual VLANs triggered an intense discussion with two of my readers concerning ARP. In this post I therefore want to look at the impact of routes on ARP-requests and ARP-replies between the namespaces in more detail.

The key question is whether and to what extent the creation and emission of ARP-packets via a particular network interface may become route-dependent in namespaces (or on hosts) with multiple network interfaces.

While we saw a clear and route dependent asymmetry in the reaction of the network to ICMP-requests, we did not fully analyze how this asymmetry showed up on the ARP-level.

In my opinion we have already seen that ARP-requests triggered in the course of ICMP-requests showed some asymmetry regarding the NIC used and the segment addressed – depending on the defined routes. But the experimental data of the last post did not show the flow of ARP-replies in detail. And we only considered ARP-requests triggered by ICMP-requests, i.e. we watched ARP-requests created by the Linux kernel to support the execution of ICMP-requests, but not pure, standalone ARP-requests.

Therefore, it is still unclear

  • whether routes have an impact on pure standalone ARP-requests, i.e. ARP-requests not caused by ICMP-requests (or by other requests of higher layer protocols),
  • whether routes do have an impact on ARP-replies.

The experiments below will deliver detailed information to clarify these points.
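
A simple way to generate such standalone ARP-requests and to watch on which NIC they (and the replies) appear is sketched below; arping and tcpdump are assumed to be installed, and the namespace/device/IP names are placeholders:

    # Watch ARP traffic on the NICs of the multi-homed namespace (background jobs)
    ip netns exec netnsR tcpdump -n -e -i veth1 arp &
    ip netns exec netnsR tcpdump -n -e -i veth2 arp &

    # Emit a pure, standalone ARP-request (no ICMP involved) for a target IP.
    # Note: depending on the arping variant, the interface may have to be given
    # explicitly with "-I <dev>", which of course bypasses a route-based choice.
    ip netns exec netnsR arping -c 1 192.168.5.4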

When I speak of ARP-tables below, I am referring to the ARP-caches of the various network namespaces in our scenario.
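
These caches can be listed and flushed per namespace with iproute2, e.g. for a hypothetical namespace netns1:

    # Show the ARP-cache (neighbor cache) of a namespace
    ip netns exec netns1 ip neigh show

    # Flush it between experiments to force fresh ARP-resolution
    ip netns exec netns1 ip neigh flush all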

Continue reading