K8s L2 CNI for Containers and VMs

Brändli, Michael and Ceriani, Leandro (2023) K8s L2 CNI for Containers and VMs. Other thesis, OST Ostschweizer Fachhochschule.

Full text: FS 2023-SA-EP-Brändli-Ceriani-K8s L2 CNI for Containers and VMs.pdf (Supplemental Material, 443 kB)

Abstract

This paper investigates how layer 2 networking can be realized inside Kubernetes. Layer 2 networking is vital for some applications and especially useful for simulating real-world networking scenarios in Kubernetes. The Institute for Network and Security (INS) provides networking laboratory exercises to the students of the Eastern Switzerland University of Applied Sciences. The underlying platform should be migrated from bare-metal Docker to Kubernetes for performance and scalability improvements, which is the main reason the INS is looking into this technology. The INS needs layer 2 point-to-point connections, meaning that protocols such as the Link Layer Discovery Protocol (LLDP) and layer 3 multicast applications such as the Hot Standby Router Protocol (HSRP) must work inside Kubernetes without being filtered or blocked. The matter is further complicated by the fact that not all workloads can be containerized; some need to run as virtual machines. These virtual machines can be managed and run inside Kubernetes using KubeVirt.

Pods in Kubernetes use Container Network Interface (CNI) plugins to communicate with each other and with external networks. Currently, no single CNI plugin can handle unfiltered layer 2 point-to-point connections between pods and KubeVirt virtual machines. Additionally, such connections must be possible across multiple physical Kubernetes nodes.

This paper tests and analyzes possible methods of realizing layer 2 connectivity. There are two main challenges in establishing it. The first is getting layer 2 inter-node traffic inside Kubernetes working with a CNI; this involved testing multiple open-source CNIs that claim to provide layer 2 connectivity and breaking down how they work. The second is connecting the KubeVirt VMs to the Kubernetes CNI without losing packets.
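
For context, a KubeVirt virtual machine reaches the cluster network through an interface binding declared in its manifest. The following is a minimal sketch, assuming a hypothetical VM name, memory size, container disk image, and the default pod network (none of these are taken from the thesis), of how the bridge binding attaches a VM NIC to the network provided by the CNI:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: lab-vm                      # hypothetical name, for illustration only
spec:
  running: true
  template:
    spec:
      domain:
        resources:
          requests:
            memory: 1Gi             # assumed sizing
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
          interfaces:
            - name: default
              bridge: {}            # bridge binding: the VM NIC is bridged into the pod's network namespace
      networks:
        - name: default
          pod: {}                   # attach to the cluster's primary CNI network
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/containerdisks/fedora:latest   # assumed example image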

There are several usable approaches to getting pod-to-pod layer 2 connections working. The most successful was a CNI called Kube-OVN. Kube-OVN leverages Open Virtual Network (OVN), which is built on Open vSwitch (OVS), to allow for proper network configuration inside Kubernetes. With many workarounds, getting all layer 2 traffic from a KubeVirt VM to a pod is possible. However, it is not flawless: in proof of concept 1, each interface can handle only one static MAC address, a limitation that makes using HSRP impossible. There are community efforts to make virtual machines integrate better with an existing CNI, but the macvtap binding mode, which the KubeVirt community is working on, did not succeed during testing.
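
As a rough illustration of the Kube-OVN approach (a sketch with assumed names and addresses, not the manifests used in the thesis), a Subnet resource places the pods of selected namespaces onto one OVN logical switch, which is what gives them a shared layer 2 segment even across nodes:

apiVersion: kubeovn.io/v1
kind: Subnet
metadata:
  name: lab-l2                      # hypothetical subnet name
spec:
  protocol: IPv4
  cidrBlock: 10.66.0.0/24           # illustrative address range
  gateway: 10.66.0.1
  namespaces:
    - netlab                        # hypothetical namespace; its pods land on this logical switch

OVN realizes the logical switch on top of OVS and tunnels traffic between nodes, so pods scheduled on different nodes still appear to share one broadcast domain.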

It was shown that pod-to-pod connectivity could be realized on layer 2, but additional research is required to allow VM-to-pod connectivity in Kubernetes.

Item Type: Thesis (Other)
Subjects: Technologies > Virtualization
Technologies > Network
Metatags > INS (Institute for Networked Solutions)
Divisions: Bachelor of Science FHO in Informatik > Student Research Project
Depositing User: OST Deposit User
Contributors: Baumann, Urs (Thesis advisor)
Date Deposited: 21 Oct 2023 11:56
Last Modified: 21 Oct 2023 11:56
URI: https://eprints.ost.ch/id/eprint/1133
