More fun with veth, network namespaces, VLANs – V – link two L2-segments of the same IP-subnet by a routing network namespace

In the last two posts of this series

More fun with veth, network namespaces, VLANs – IV – L2-segments, same IP-subnet, ARP and routes

More fun with veth, network namespaces, VLANs – III – L2-segments of the same IP-subnet and routes in coupling network namespaces

we have studied a Linux network namespace with two attached L2-segments. All IPs were members of one and the same IP-subnet. Forwarding and Proxy ARP had been deactivated in this namespace.

So far, we have understood that routes have a decisive impact on the choice of the destination segment when ICMP- and ARP-requests are sent from a network namespace with multiple NICs – regardless of whether forwarding is enabled or not. Insufficiently detailed routes can lead to problems and to an asymmetric arrival of replies from the segments – already on the ARP-level!

The obvious impact of routes on ARP-requests in our special scenario has surprised at least some readers, but I think the remaining open questions have been answered in detail by the experiments discussed in the preceding post. We can now proceed on sufficiently solid ground.

We have also seen that even with detailed routes ARP- and ICMP-traffic paths to and from the L2-segments remain separated in our scenario (see the graphics below). The reason, of course, was that we had deactivated forwarding in the coupling namespace.

In this post we will study what happens when we activate forwarding. We will watch results of experiments both on the ICMP- and the ARP-level. Our objective is to link our otherwise separate L2-segments (with all their IPs in the same IP-subnet) seamlessly by a forwarding network namespace – and thus form some kind of larger segment. And we will test in what way Proxy ARP will help us to achieve this objective.
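
Before we start: activating forwarding and Proxy ARP in a network namespace only requires setting the respective kernel parameters. A minimal sketch, to be executed inside netnsR; the border NIC names vethR1 and vethR2 are assumptions for illustration:

# enable IPv4 forwarding for all interfaces of the namespace
echo 1 > /proc/sys/net/ipv4/conf/all/forwarding
# enable Proxy ARP selectively on the (hypothetical) border NICs
echo 1 > /proc/sys/net/ipv4/conf/vethR1/proxy_arp
echo 1 > /proc/sys/net/ipv4/conf/vethR2/proxy_arp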

Not just fun …

Now, you could argue that no reasonable admin would link two virtual segments with IPs in the same IP-subnet by a routing namespace. One would use a virtual bridge instead. First answer: We perform virtual network experiments here for fun … Second answer: It's not just fun …

Our eventual objective is the setup of virtual VLAN configurations and related security measures. Of particular interest are routing namespaces where two tagging VLANs terminate and communicate with a third LAN-segment, the latter leading to an Internet connection. The present experiments with standard segments are only a first step in this direction.

When we imagine a replacement of the standard segments by tagged VLAN segments we already get the impression that we could use a common namespace for the administration of VLANs without accidentally mixing or transferring ICMP- and ARP-traffic between the VLANs. But the results of the two previous posts also gave us a clear warning to distinguish carefully between routing and forwarding in namespaces.

The modified scenario – linking two L2-segments by a forwarding namespace

Let us have a look at a sketch of our scenario first:

We see our segments S1 and S2 again. All IPs are members of 192.168.5.0/24. The segments are attached to a common network namespace netnsR. The difference from previous scenarios in this post series lies in the activated forwarding and in the definition of detailed routes in netnsR for the NICs with IPs of the same C-class IP-subnet.

Our experiments below will look at the effect of default gateway definitions and at the requirement of detailed routes in the L2-segments’ namespaces. In addition we will also test in what way enabling Proxy ARP in netnsR can help to achieve seamless segment coupling in an efficient centralized way.
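
"Detailed routes" again means host routes instead of the ambiguous net routes. A minimal sketch for netnsR; the border NIC names vethR1 (towards S1) and vethR2 (towards S2) as well as the host IPs are assumptions for illustration:

# host routes towards segment S1 (NIC names and IPs are assumptions)
ip route add 192.168.5.1/32 dev vethR1
ip route add 192.168.5.2/32 dev vethR1
# host routes towards segment S2
ip route add 192.168.5.4/32 dev vethR2
ip route add 192.168.5.5/32 dev vethR2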

Continue reading

More fun with veth, network namespaces, VLANs – IV – L2-segments, same IP-subnet, ARP and routes

In the course of my present post series on veths, network namespaces and virtual VLANs we have studied a somewhat academic network scenario:

Two L2-segments were attached to a common network namespace, with all IPs of both segments belonging to one and the same (class C) IP-subnet. The network traffic was analyzed for different route configurations. Only detailed routes ensured a symmetric handling of packets in both segments.

The IP-setup makes the scenario a bit peculiar. It is not a situation an admin would normally create. Normally, the IPs of each segment would be members of one of two distinct IP-subnets.

However, our scenario already taught us some important lessons to keep in mind when we approach our eventual objective:

Sooner or later we want to answer the question of how to set up virtual VLAN configurations in which veth subdevices for tagged VLAN lines terminate in a common and routing namespace.

Our academic scenario showed that we need to set up much more specific routes than the standard ones which are automatically created in the wake of "ip addr add" commands – despite the fact that forwarding was disabled in the coupling network namespace. Only with clear and fitting routes did we get a reasonable behavior of our artificial network with respect to ICMP- and ARP-requests in the last post (III).

Well, this is, in a way, a platitude. Whenever you have a special network you must adapt the routes to your network layout. This, of course, includes the resolution of ambiguities which our scenario introduced. The automatically defined routes were insufficient and caused an asymmetry in the ICMP-replies in the otherwise very symmetric configuration (see the drawing below).

Nevertheless, the last post and also older posts on veths and virtual VLANs triggered an intense discussion about ARP with two of my readers. In this post I, therefore, want to look at the impact of routes on ARP-requests and ARP-replies between the namespaces in more detail.

The key question is whether and to what extent the creation and emission of ARP-packets via a particular network interface may become route-dependent in namespaces (or on hosts) with multiple network interfaces.

While we saw a clear and route dependent asymmetry in the reaction of the network to ICMP-requests, we did not fully analyze how this asymmetry showed up on the ARP-level.

In my opinion we have already seen that ARP-requests triggered by ICMP-requests showed some asymmetry regarding the NIC used and the segment addressed – depending on the defined routes. But the experimental data of the last post did not show the flow of ARP-replies in detail. And we only regarded ARP-requests triggered by ICMP-requests. I.e. we watched ARP-requests created by the Linux-kernel to support the execution of ICMP-requests, but not pure and standalone ARP-requests.

Therefore, it is still unclear

  • whether routes have an impact on pure standalone ARP-requests, i.e. ARP-requests not caused by ICMP-requests (or by other requests of higher layer protocols),
  • whether routes do have an impact on ARP-replies.

The experiments below will deliver detailed information to clarify these points.
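
A hint for reproducing this: pure standalone ARP-requests can be generated independently of ICMP with the arping tool (from iputils), and the resulting traffic can be watched with tcpdump. A minimal sketch; the NIC name vethR1 and the target IP are assumptions for illustration:

# send two pure ARP-requests for a segment host via a chosen interface
arping -c 2 -I vethR1 192.168.5.1
# in a second terminal: watch ARP packets (including MACs) on that interface
tcpdump -n -e -i vethR1 arp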

When I speak of ARP-tables below, I am referring to the ARP-caches of the various network namespaces in our scenario.

Continue reading

More fun with veth, network namespaces, VLANs – III – L2-segments of the same IP-subnet and routes in coupling network namespaces

Linux network namespaces allow for experiments in virtual networks without setting up (virtual) hosts. This includes the study of packet propagation through virtual devices such as bridges/switches and between network namespaces. With or without packet filters or firewall rules in place. In the first posts of this series

More fun with veth, network namespaces, VLANs – I – open questions

More fun with veth, network namespaces, VLANs – II – two L2-segments attached to a common network namespace

I have posed a scenario with two L2-segments connected to a common network namespace. Such a scenario appears to be nothing special; we are all used to situations that require routing and forwarding because LAN-segments belong to different IP-subnets.

However, in my scenario all IPs are members of one and the same IP-subnet. Another interesting point is that the assignment of IPs to (virtual) NICs and bridges by the command "ip addr add" causes an automatic setup of elementary routes. We will see that such routes are insufficient to resolve the ambiguities of packet transport from the segment-coupling namespace to the attached segments.
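
To recall this automatism: the kernel derives a "proto kernel" link route from the prefix length given to "ip addr add". A minimal sketch; the NIC name vethR1 and the IP are assumptions:

# assigning an address with a /24 prefix ...
ip addr add 192.168.5.100/24 dev vethR1
ip link set vethR1 up
# ... automatically creates a link route of the form
# "192.168.5.0/24 dev vethR1 proto kernel scope link src 192.168.5.100"
ip route show dev vethR1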

To keep the conditions as plain and simple as possible in our scenario we do not yet mix in VLANs or packet filters or firewalls.

In the 2nd post named above we have already come to some conclusions about the posed scenario and its elements. However, these conclusions were based on theoretical considerations. In this post and in two additional ones we will verify or falsify our preliminary conclusions by performing concrete experiments on a Linux host.

The host I used for the experiments below was a laptop with an Opensuse Leap 15.5 OS, based on packages of SuSE's enterprise server SLES. The namespaces are set up as unnamed network namespaces. This requires some special commands.

The detailed analysis of the scenario will help us to better understand more complex configurations including virtual VLANs in forthcoming posts. In particular the impact of routes on ARP traffic.

If you notice varying MACs in some images below: You may safely ignore this. It is due to repeating the setup in the course of the experiments. If you have any doubts just repeat the experiments with the help of the commands provided in the text and in an attachment.

The scenario

The scenario is visualized by the following drawing:

I have already described details in the 2nd post listed above.

How to setup the network namespaces and required devices?

I have discussed all commands required to configure such scenarios with unnamed Linux network namespaces in the first post of another older post series. See

Fun with veth-devices, Linux bridges and VLANs in unnamed Linux network namespaces – I

If you are new to experiments with unnamed Linux network namespaces please read this post first. Of particular importance are the commands (a minimal usage sketch follows the list)

  • unshare …
  • nsenter …
  • ip link …
  • ip address …
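
As a minimal usage sketch (not a full setup): an unnamed network namespace lives as long as some process keeps it open; we therefore start a background shell with unshare and address the namespace by the shell's PID:

# create an unnamed network (and UTS) namespace via a background shell
unshare --net --uts /bin/bash &
export pid_test=$!
# enter the namespace by PID and activate its loopback device
nsenter -t $pid_test -u -n ip link set lo up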

I will not discuss details of the chain of commands required for setting up the elements of our scenario. At the end of this post you will find three PDFs with contents you can copy to bash shell scripts. All commands in their given form are meant to be executed by the user root in a safe test environment. I am very confident that readers interested in this post will understand the structure of the commands without further explanation.

Due to the fact that the scripts set and export intermediate variables as environment variables for sub-shells you should run the scripts via the “source“-command if you want these variables to be available in your original shell (probably in a terminal window), too.
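
I.e., something like the following (the exact script and variable names are assumptions; use whatever names you give the copied file contents):

# run the script in the current shell so that exported variables survive
source ./create_netns
# afterwards the PID variables can be used directly in this shell
echo $pid_netns11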

The first script (create_netns) will then execute the following commands (you may need to scroll to see all):

Continue reading

More fun with veth, network namespaces, VLANs – II – two L2-segments attached to a common network namespace

In the first post of this series about virtual networking

More fun with veth, network namespaces, VLANs – I – open questions

I have collected some questions which had remained open in an older post series of 2017 about veths, unnamed network namespaces and virtual VLANs. In the course of the present series I will try to answer at least some of these questions.

The topic of the present and the next two posts will be a special network namespace which we create with an artificial ambiguity regarding the path that ICMP and even ARP packets could take. We will study a scenario with the following basic properties:

We set up two L2-segments, each based on a Linux bridge to which we attach two separate network namespaces by veth devices. These L2-segments will be connected (by further veths) to yet another common, but not forwarding network namespace “netnsR”. The IPs of all veth end-points will be members of one and the same IP-subnet (a class C net).

No VLANs or firewalls will be set up. So, this is a very plain and seemingly simple scenario: Two otherwise separate L2-segments terminate with border NICs in a common network namespace.

Note, however, that our scenario is different from the typical situation of a router or routing namespace: One reason is that our L2-segments and their respective NICs do not belong to different logical IP networks with different IP-broadcast regions. We have just one common C-class IP-subnet and not two different ones. The other reason is that we will not enable "forwarding" in the coupling namespace "netnsR".

In this post we will first try to find out by theoretical reasoning what pitfalls may await us and what may be required to enable a symmetric communication between the common namespace netnsR and each of the L2-segments. We will try to identify critical issues which we have to check out in detail by experiments.

One interesting aspect is that the setup basically is totally symmetric. But it is therefore also somewhat ambiguous regarding the position of a given IP in one of the segments. Naively set routes in netnsR may break or reflect this symmetry on the IP layer. But we shall also consider ARP requests and replies on the Link layer under the conditions of our scenario.

In the forthcoming post we will verify our ideas and clarify open points by concrete experiments. As long as forwarding is disabled in the coupling namespace netnsR we do not expect any cross-segment transfer of packets. In yet another post we will use our gathered results to establish a symmetric cross-segment communication and study how we must set up routes to achieve this. All these experiments will prepare us for a later investigation of virtual VLANs.

I will use the abbreviation “netns” for network namespaces throughout this post.

Scenario SC-1: Two L2-segments coupled by a common and routing namespace

Let us first look at a graphical drawing showing our scenario:

A simple way to build such a virtual L2-segment is the following:

We set up a Linux bridge (e.g. brB1) in a dedicated network namespace (e.g. netnsB1). Via veth devices we attach two further network namespaces (netns11 and netns12) to the bridge. You may associate the latter namespaces with virtual hosts reduced to elementary networking abilities. As I have shown in my previous post series of the year 2017 we can enter such a network namespace and execute shell commands there; see here. The veth endpoints in netns11 and netns12 get IP addresses.
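
Expressed in commands, one such segment can be sketched as follows; the PID variables and device names are illustrative assumptions following the conventions of this series:

# three background shells provide the namespaces netns11, netns12, netnsB1
unshare --net --uts /bin/bash &
export pid_netns11=$!
unshare --net --uts /bin/bash &
export pid_netns12=$!
unshare --net --uts /bin/bash &
export pid_netnsB1=$!
# veth pairs from the "host" namespaces into the bridge namespace
ip link add veth11 netns $pid_netns11 type veth peer name veth1b netns $pid_netnsB1
ip link add veth12 netns $pid_netns12 type veth peer name veth2b netns $pid_netnsB1
# bridge brB1 in netnsB1 with the two veth peers as ports
nsenter -t $pid_netnsB1 -u -n /bin/bash
brctl addbr brB1
ip link set brB1 up
ip link set veth1b up
ip link set veth2b up
brctl addif brB1 veth1b
brctl addif brB1 veth2b
exit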

We build two such segments, S1 and S2, with each of the respective bridges located in its own namespace (netnsB1 and netnsB2). Then we use further veth devices to connect the two bridges (= segments) to a common namespace (netnsR). Despite setting default routes we do not enable forwarding in netnsR.

The graphics shows that, all in all, we have 7 network namespaces:

  • netns11, netns12, netns21, netns22 represent hosts with NICs and IPs that want to communicate with other hosts.
  • netnsB1 and netnsB2 host the Linux bridges.
  • netnsR is a non-forwarding namespace where both segments, S1 and S2, terminate – each via a border NIC (veth-endpoint).

netnsR is the namespace which is most interesting in our scenario: Without special measures packets from netns11 will not reach netns21 or netns22. So, we have indeed realized two separated L2-segments S1 and S2 attached to a common network namespace.

The sketch makes it clear that communication within each of the segments is possible: netns11 will certainly be able to communicate with netns12. The same holds for netns21 and netns22.
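
Once the devices are up, such an intra-segment check is a one-liner; the PID variable and the IP of netns12 are assumptions for illustration:

# ping the segment neighbor netns12 from within netns11
nsenter -t $pid_netns11 -u -n ping -c 2 192.168.5.2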

But we cannot be so sure what will happen with e.g. ARP and ICMP request and/or answering packets sent from netnsR to one of the four namespaces netns11, netns12, netns21, netns22. You may guess that this might depend on route definitions. I come back to this point in a minute.

Regarding IP addresses: Outside the bridges we must assign IP addresses to the respective veth endpoints. As said: During the setup of the devices I use IP-addresses of one and the same class C network: 192.168.5.0/24. The bridges themselves do not need assigned IPs. In our scenario the bridges could, at least in principle, have been replaced by a hub or even by an Ethernet bus cable with taps.

Side remark: Do not forget that a Linux bridge can in principle get an IP address itself and work as a special NIC connected to the bridge ports. We do, however, not need or use this capability in our scenario.

Regarding routes: Of course we need to define routes in all of the namespaces. Otherwise the NICs with IPs would not become operative. In a first naive approach we will just rely on the routes which are automatically generated when we create the NICs. We will see that this leads to a somewhat artificial situation in netnsR. Both routes will point to the same IP-subnet 192.168.5.0/24 – but via different NICs.
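
In netnsR this naive approach leaves us with two competing link routes of the following form (NIC names and src-IPs are assumptions; compare the output of "ip route" in your own setup):

192.168.5.0/24 dev vethR1 proto kernel scope link src 192.168.5.100
192.168.5.0/24 dev vethR2 proto kernel scope link src 192.168.5.200

Which of the two routes is chosen for an outgoing packet is exactly the ambiguity we have to analyze.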

Theoretical analysis of the situation of and within netnsR

Both L2-segments terminate in netnsR. The role of netnsR basically is very similar to that of a router, but with more ambiguity and uncertainty because both border NICs belong to the same IP-subnet.

Continue reading

Fun with veth-devices, Linux bridges and VLANs in unnamed Linux network namespaces – VIII

In the last post of this series

Fun with veth-devices, Linux bridges and VLANs in unnamed Linux network namespaces – VII [Theoretical considerations regarding the connection of a network namespace or container to two separated VLANs]

we discussed two different approaches to connect a network namespace (or container) “netns9” to two (or more) separated VLANs. Such a network namespace could e.g. represent an administrative system (for example in form of a LXC container) for both VLANs. It has its own connection to the virtual Linux bridge which technically defines the VLANs by special port configurations. See the picture below, where we represented a VLAN1 by a member network namespace netns1 and a VLAN2 by a member netns4:

The solution on the left side is based on a bridge in an intermediate network namespace and on packet tagging along the connection up to the VLANs' common member system netns9. The approach on the right side of the graphics uses a bridge, too, but without packet tagging along the connection to the common VLAN member system. In our analysis in the last post we assumed that we would have to compensate for this difference by special PVID/VID settings.

The previous articles of this series already introduced general Linux commands for network namespace creation and the setup of VLANs via Linux bridge configurations. See e.g.: Fun with … – IV [Virtual VLANs for network namespaces (or containers) and rules for VLAN tagging at Linux bridge ports]. We shall use these methods in the present and a coming post to test configurations for a common member of two VLANs. We want to find out whether the theoretically derived measures regarding route definitions in netns9 and special PVID/VID-settings at the bridge work as expected. A test of packet filtering at bridge ports which we regarded as important for security is, however, postponed to later posts.

Extension of our test environment

First, we extend our previous test scenario by yet another network namespace “netns9“.

Our 2 VLANs in the test environment are graphically distinguished by "green" and "pink" tags (corresponding to different VLAN ID numbers). netns9 must be able to communicate with systems in both VLANs. netns9 shall, however, not become a packet forwarder between the VLANs; the VLANs shall remain separated despite the fact that they have a common member. We expect that a clear separation of communication paths to the VLANs requires a distinction between network targets already inside netns9.

Bridge based solutions with packet tagging and veth sub-interfaces

There are two rather equivalent solutions for the connection of netns9 to brx in netns3; see the schematic graphics below:

Both solutions are based on veth sub-interfaces inside netns9. Thus, both VLAN connections are properly terminated in netns9. The approach depicted on the right side of the graphics uses a pure trunk port at the bridge; but this solution, too, makes use of packet tagging between brx and netns9.

Note that we do not need to use tagged packets along the connections from bridge brx to netns1, netns2, netns4, netns5. The VLANs are established by the PVID/VID settings at the bridge ports and forwarding rules inside a VLAN-aware bridge. Note also that our test environment contains an additional bridge bry and additional network namespaces.

We first concentrate on the solution on the left side with veth sub-interfaces at the bridge. It is easy to switch to a trunk port afterwards.

The required commands for the setup of the test environment are given below; you may scroll and copy the commands to the prompt of a terminal window for a root shell:

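# create 9 unnamed network namespaces (one background shell each) and save their PIDs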
unshare --net --uts /bin/bash &
export pid_netns1=$!
unshare --net --uts /bin/bash &
export pid_netns2=$!
unshare --net --uts /bin/bash &
export pid_netns3=$!
unshare --net --uts /bin/bash &
export pid_netns4=$!
unshare --net --uts /bin/bash &
export pid_netns5=$!
unshare --net --uts /bin/bash &
export pid_netns6=$!
unshare --net --uts /bin/bash &
export pid_netns7=$!
unshare --net --uts /bin/bash &
export pid_netns8=$!
unshare --net --uts /bin/bash &
export pid_netns9=$!

# assign different hostnames  
nsenter -t $pid_netns1 -u hostname netns1
nsenter -t $pid_netns2 -u hostname netns2
nsenter -t $pid_netns3 -u hostname netns3
nsenter -t $pid_netns4 -u hostname netns4
nsenter -t $pid_netns5 -u hostname netns5
nsenter -t $pid_netns6 -u hostname netns6
nsenter -t $pid_netns7 -u hostname netns7
nsenter -t $pid_netns8 -u hostname netns8
nsenter -t $pid_netns9 -u hostname netns9
     
# set up veth devices in netns1 to netns4 and in netns9 with connections to netns3  
ip link add veth11 netns $pid_netns1 type veth peer name veth13 netns $pid_netns3
ip link add veth22 netns $pid_netns2 type veth peer name veth23 netns $pid_netns3
ip link add veth44 netns $pid_netns4 type veth peer name veth43 netns $pid_netns3
ip link add veth55 netns $pid_netns5 type veth peer name veth53 netns $pid_netns3
ip link add veth99 netns $pid_netns9 type veth peer name veth93 netns $pid_netns3

# set up veth devices in netns6 and netns7 with connection to netns8   
ip link add veth66 netns $pid_netns6 type veth peer name veth68 netns $pid_netns8
ip link add veth77 netns $pid_netns7 type veth peer name veth78 netns $pid_netns8

# Assign IP addresses and set the devices up 
nsenter -t $pid_netns1 -u -n /bin/bash
ip addr add 192.168.5.1/24 brd 192.168.5.255 dev veth11
ip link set veth11 up
ip link set lo up
exit
nsenter -t $pid_netns2 -u -n /bin/bash
ip addr add 192.168.5.2/24 brd 192.168.5.255 dev veth22
ip link set veth22 up
ip link set lo up
exit
nsenter -t $pid_netns4 -u -n /bin/bash
ip addr add 192.168.5.4/24 brd 192.168.5.255 dev veth44
ip link set veth44 up
ip link set lo up
exit
nsenter -t $pid_netns5 -u -n /bin/bash
ip addr add 192.168.5.5/24 brd 192.168.5.255 dev veth55
ip link set veth55 up
ip link set lo up
exit
nsenter -t $pid_netns6 -u -n /bin/bash
ip addr add 192.168.5.6/24 brd 192.168.5.255 dev veth66
ip link set veth66 up
ip link set lo up
exit
nsenter -t $pid_netns7 -u -n /bin/bash
ip addr add 192.168.5.7/24 brd 192.168.5.255 dev veth77
ip link set veth77 up
ip link set lo up
exit
nsenter -t $pid_netns9 -u -n /bin/bash
ip addr add 192.168.5.9/24 brd 192.168.5.255 dev veth99
ip link set veth99 up
ip link set lo up
exit

# set up bridge brx and its ports 
nsenter -t $pid_netns3 -u -n /bin/bash
brctl addbr brx  
ip link set brx up
ip link set veth13 up
ip link set veth23 up
ip link set veth43 up
ip link set veth53 up
brctl addif brx veth13
brctl addif brx veth23
brctl addif brx veth43
brctl addif brx veth53
exit

# set up bridge bry and its ports 
nsenter -t $pid_netns8 -u -n /bin/bash
brctl addbr bry  
ip link set bry up
ip link set veth68 up
ip link set veth78 up
brctl addif bry veth68
brctl addif bry veth78
exit

# set up 2 VLANs on each bridge 
nsenter -t $pid_netns3 -u -n /bin/bash
ip link set dev brx type bridge vlan_filtering 1
bridge vlan add vid 10 pvid untagged dev veth13
bridge vlan add vid 10 pvid untagged dev veth23
bridge vlan add vid 20 pvid untagged dev veth43
bridge vlan add vid 20 pvid untagged dev veth53
bridge vlan del vid 1 dev brx self
bridge vlan del vid 1 dev veth13
bridge vlan del vid 1 dev veth23
bridge vlan del vid 1 dev veth43
bridge vlan del vid 1 dev veth53
bridge vlan show
exit
nsenter -t $pid_netns8 -u -n /bin/bash
ip link set dev bry type bridge vlan_filtering 1
bridge vlan add vid 10 pvid untagged dev veth68
bridge vlan add vid 20 pvid untagged dev veth78
bridge vlan del vid 1 dev bry self
bridge vlan del vid 1 dev veth68
bridge vlan del vid 1 dev veth78
bridge vlan show
exit

# Create a veth device to connect the two bridges 
ip link add vethx netns $pid_netns3 type veth peer name vethy netns $pid_netns8
nsenter -t $pid_netns3 -u -n /bin/bash
ip link add link vethx name vethx.50 type vlan id 50
ip link add link vethx name vethx.60 type vlan id 60
brctl addif brx vethx.50
brctl addif brx vethx.60
ip link set vethx up
ip link set vethx.50 up
ip link set vethx.60 up
bridge vlan add vid 10 pvid untagged dev vethx.50
bridge vlan add vid 20 pvid untagged dev vethx.60
bridge vlan del vid 1 dev vethx.50
bridge vlan del vid 1 dev vethx.60
bridge vlan show
exit

nsenter -t $pid_netns8 -u -n /bin/bash
ip link add link vethy name vethy.50 type vlan id 50
ip link add link vethy name vethy.60 type vlan id 60
brctl addif bry vethy.50
brctl addif bry vethy.60
ip link set vethy up
ip link set vethy.50 up
ip link set vethy.60 up
bridge vlan add vid 10 pvid untagged dev vethy.50
bridge vlan add vid 20 pvid untagged dev vethy.60
bridge vlan del vid 1 dev vethy.50
bridge vlan del vid 1 dev vethy.60
bridge vlan show
exit

# Add subinterfaces in netns9
nsenter -t $pid_netns9 -u -n /bin/bash
ip link add link veth99 name veth99.10 type vlan id 10
ip link add link veth99 name veth99.20 type vlan id 20
ip link set veth99 up
ip link set veth99.10 up
ip link set veth99.20 up
exit

# Add subinterfaces in netns3
nsenter -t $pid_netns3 -u -n /bin/bash
ip link add link veth93 name veth93.10 type vlan id 10
ip link add link veth93 name veth93.20 type vlan id 20
ip link set veth93 up
ip link set veth93.10 up
ip link set veth93.20 up
brctl addif brx veth93.10
brctl addif brx veth93.20
bridge vlan add vid 10 pvid untagged dev veth93.10
bridge vlan add vid 20 pvid untagged dev veth93.20
bridge vlan del vid 1 dev veth93.10
bridge vlan del vid 1 dev veth93.20
exit

We just have to extend the command list of the experiment already conducted in the second to last post by some more lines which account for the setup of netns9 and its connection to the bridge "brx" in netns3.

Now, we open a separate terminal, which inherits the defined environment variables (e.g. on KDE by "konsole &>/dev/null &"), and try pings from netns9 to netns1 and netns7:

mytux:~ # nsenter -t $pid_netns9 -u -n /bin/bash
netns9:~ # ping 192.168.5.1
PING 192.168.5.1 (192.168.5.1) 56(84) bytes of data.
^C
--- 192.168.5.1 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1008ms

netns9:~ # ping 192.168.5.7
PING 192.168.5.7 (192.168.5.7) 56(84) bytes of data.
^C
--- 192.168.5.7 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1006ms

netns9:~ # 

Obviously, the pings failed! The reason is that we forgot to set routes in netns9! Such routes are, however, vital for the transport of e.g. ICMP answering and request packets from netns9 to members of the two VLANs. See the last post for details. We add the required routes:

#Set routes in netns9 
nsenter -t $pid_netns9 -u -n /bin/bash
route add 192.168.5.1 veth99.10                                                     
route add 192.168.5.2 veth99.10                                                    
route add 192.168.5.4 veth99.20
route add 192.168.5.5 veth99.20                                                    
route add 192.168.5.6 veth99.10
route add 192.168.5.7 veth99.20
exit

By these routes we, obviously, distinguish different paths: Packets heading for e.g. netns1 and netns2 go through a different interface than packets sent e.g. to netns4 and netns5. Now, again, in our second terminal window:

mytux:~ # nsenter -t $pid_netns9 -u -n /bin/bash 
netns9:~ # ping 192.168.5.1 -c2
PING 192.168.5.1 (192.168.5.1) 56(84) bytes of data.
64 bytes from 192.168.5.1: icmp_seq=1 ttl=64 time=0.067 ms
64 bytes from 192.168.5.1: icmp_seq=2 ttl=64 time=0.083 ms

--- 192.168.5.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.067/0.075/0.083/0.008 ms
netns9:~ # ping 192.168.5.7 -c2
PING 192.168.5.7 (192.168.5.7) 56(84) bytes of data.
64 bytes from 192.168.5.7: icmp_seq=1 ttl=64 time=0.079 ms
64 bytes from 192.168.5.7: icmp_seq=2 ttl=64 time=0.078 ms

--- 192.168.5.7 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.078/0.078/0.079/0.008 ms
netns9:~ # ping 192.168.5.4 -c2
PING 192.168.5.4 (192.168.5.4) 56(84) bytes of data.
64 bytes from 192.168.5.4: icmp_seq=1 ttl=64 time=0.151 ms
64 bytes from 192.168.5.4: icmp_seq=2 ttl=64 time=0.076 ms

--- 192.168.5.4 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.076/0.113/0.151/0.038 ms

Thus, we have confirmed our conclusion from the last article that we need route definitions in a common member of two VLANs if and when we terminate tagged connection lines by veth sub-interfaces inside such a network namespace or container.

But are our VLANs still isolated from each other?
We open another terminal and try pinging from netns1 to netns4, netns7 and netns2:

mytux:~ # nsenter -t $pid_netns1 -u -n /bin/bash
netns1:~ # ping 192.168.5.4
PING 192.168.5.4 (192.168.5.4) 56(84) bytes of data.
^C
--- 192.168.5.4 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2015ms

netns1:~ # ping 192.168.5.7
PING 192.168.5.7 (192.168.5.7) 56(84) bytes of data.
^C
--- 192.168.5.7 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1007ms

netns1:~ # ping 192.168.5.2
PING 192.168.5.2 (192.168.5.2) 56(84) bytes of data.
64 bytes from 192.168.5.2: icmp_seq=1 ttl=64 time=0.195 ms
64 bytes from 192.168.5.2: icmp_seq=2 ttl=64 time=0.102 ms
^C
--- 192.168.5.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.102/0.148/0.195/0.048 ms
netns1:~ # 

And in the reverse direction:

mytux:~ # nsenter -t $pid_netns5 -u -n /bin/bash                                               
netns5:~ # ping 192.168.5.4
PING 192.168.5.4 (192.168.5.4) 56(84) bytes of data.                                           
64 bytes from 192.168.5.4: icmp_seq=1 ttl=64 time=0.209 ms                                     
64 bytes from 192.168.5.4: icmp_seq=2 ttl=64 time=0.071 ms                                     
^C                                                                                             
--- 192.168.5.4 ping statistics ---                                                            
2 packets transmitted, 2 received, 0% packet loss, time 999ms                                  
rtt min/avg/max/mdev = 0.071/0.140/0.209/0.069 ms                                              
netns5:~ # ping 192.168.5.1
PING 192.168.5.1 (192.168.5.1) 56(84) bytes of data.                                           
^C                                                                                             
--- 192.168.5.1 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1008ms

netns5:~ # 

Good! As expected!

Forwarding between two VLANs?

We have stressed in the last post that setting routes should clearly be distinguished from “forwarding” if we want to keep our VLANs separated:

We have NOT enabled forwarding in netns9. If we had done so we would have lost the separation of the VLANs and opened a direct communication line between the VLANs.

Let us – just for fun – test the effect of forwarding in netns9:

netns9:~ # echo 1 > /proc/sys/net/ipv4/conf/all/forwarding
netns9:~ # 

But still:

netns5:~ # ping 192.168.5.1
PING 192.168.5.1 (192.168.5.1) 56(84) bytes of data.
^C
--- 192.168.5.1 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 1999ms

Enabling forwarding in netns9 alone is obviously not enough to enable a packet flow in both directions! A little thinking, however, shows:

If we e.g. want ARP resolution and pinging from netns5 to netns1 to work via netns9 we must establish further routes both in netns1 and netns5. Reason: Both network namespaces must be informed that netns9 now works as a gateway for both request and answering packets:

netns1:~ # route add 192.168.5.5 gw 192.168.5.9
netns5:~ # route add 192.168.5.1 gw 192.168.5.9

Eventually:

netns5:~ # ping 192.168.5.1
PING 192.168.5.1 (192.168.5.1) 56(84) bytes of data.
64 bytes from 192.168.5.1: icmp_seq=1 ttl=63 time=0.186 ms
64 bytes from 192.168.5.1: icmp_seq=2 ttl=63 time=0.134 ms
^C
--- 192.168.5.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.134/0.160/0.186/0.026 ms
netns5:~ # 

So, yes, forwarding outside the bridge builds a connection between otherwise separated VLANs. In connection with a packet filter this could be used to allow some hosts of a VLAN1 to reach e.g. some servers in a VLAN2 (see the sketch below). But this is not the topic of this post. So, do not forget to disable the forwarding in netns9 again for further experiments:

netns9:~ # echo 0 > /proc/sys/net/ipv4/conf/all/forwarding
netns9:~ # 
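
Regarding the packet-filter remark above: a minimal, hypothetical iptables sketch, to be applied inside netns9 while forwarding is enabled there, which would restrict the cross-VLAN traffic to ICMP between netns5 and netns1:

# inside netns9: default-deny forwarding, allow one host pair only
iptables -P FORWARD DROP
iptables -A FORWARD -p icmp -s 192.168.5.5 -d 192.168.5.1 -j ACCEPT
iptables -A FORWARD -p icmp -s 192.168.5.1 -d 192.168.5.5 -j ACCEPT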

Bridge based solutions with packet tagging and a trunk port at the Linux bridge

The following commands replace the sub-interface ports veth93.10 and veth93.20 at the bridge by a single trunk port:

# Change veth93 to trunk like interface in brx 
nsenter -t $pid_netns3 -u -n /bin/bash
brctl delif brx veth93.10
brctl delif brx veth93.20
ip link del dev veth93.10
ip link del dev veth93.20
brctl addif brx veth93
bridge vlan add vid 10 tagged dev veth93
bridge vlan add vid 20 tagged dev veth93
bridge vlan del vid 1 dev veth93
bridge vlan show
exit 

Such a solution works equally well:

netns9:~ # ping 192.168.5.4 -c2
PING 192.168.5.4 (192.168.5.4) 56(84) bytes of data.
64 bytes from 192.168.5.4: icmp_seq=1 ttl=64 time=0.145 ms
64 bytes from 192.168.5.4: icmp_seq=2 ttl=64 time=0.094 ms

--- 192.168.5.4 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.094/0.119/0.145/0.027 ms
netns9:~ # ping 192.168.5.6 -c2
PING 192.168.5.6 (192.168.5.6) 56(84) bytes of data.
64 bytes from 192.168.5.6: icmp_seq=1 ttl=64 time=0.177 ms
64 bytes from 192.168.5.6: icmp_seq=2 ttl=64 time=0.084 ms

--- 192.168.5.6 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.084/0.130/0.177/0.047 ms
netns9:~ # 

Summary and outlook

It is easy to make a network namespace or container a common member of two separate VLANs realized by a Linux bridge. You have to terminate virtual veth connections, which transport tagged packets from both VLANs, properly inside the common target namespace by sub-interfaces. As long as we do not enable forwarding in the common namespace the VLANs remain separated. But routes need to be defined to direct packets from the common member to the right VLAN.

In the next post we will look at commands to realize a connection of bridge-based VLANs to a common network namespace with untagged packets. Such solutions are interesting for connecting multiple virtual VLANs to routers leading to external networks.