
LoxiLB Cluster Networking: Elevating Kubernetes Networking capabilities

Updated: Feb 8

Since the rise of microservices and distributed applications, Kubernetes has reigned supreme, providing a robust platform for deploying, managing, and scaling containerized applications. At the core of Kubernetes lies cluster networking, a sophisticated web of connectivity that ensures seamless communication between containers, pods, and services within the cluster.

In simple words - Kubernetes abstracts away the underlying infrastructure, allowing developers to focus on defining the desired state of their applications. Cluster networking is the backbone that facilitates this abstraction, ensuring that containers can communicate with each other regardless of the nodes they reside on.

Cluster networking covers pod-to-pod and pod-to-service communication, service abstraction, network policies, and more. In modern networking and service orchestration, however, cluster networking is not just about communication: resilience, enhanced scalability, reliability, and observability, layered on top of basic connectivity and policy, have become baseline expectations. Traditional cluster networking solutions have gained considerable attention, but they present challenges of their own. A cluster-wide networking solution must handle the fundamental aspects of data transfer while staying lightweight and efficient. Network policy enforcement can add complexity and performance overhead when handling high-volume traffic. In applications where low-latency communication is critical, such as financial trading platforms or real-time systems, a proxy-based architecture adds overhead of its own. Further challenges include scaling as services and end-users grow, and integrating easily with existing, sometimes resource-constrained, infrastructure.

LoxiLB Cluster Networking

LoxiLB Cluster Networking aims to reshape this landscape, offering a streamlined approach to network optimization and management built on eBPF, the Linux kernel's revolutionary in-kernel programmability technology. Unlike traditional cluster-wide networking solutions, where proxies (such as kube-proxy in Kubernetes) handle network traffic and can become a bottleneck, LoxiLB provides a simple and straightforward alternative. LoxiLB cluster networking offers load balancing, service discovery, security, and endpoint health monitoring, and it performs much faster and more efficiently thanks to its eBPF-based data-path core engine, which runs in the Linux kernel and focuses on enforcing network policies, tracking and tracing network connections, and transmitting data efficiently between services.

LoxiLB's cloud-native nature allows it to run as a container in cloud-native environments such as Kubernetes. And because of its lightweight design, LoxiLB cluster networking is well suited to scenarios where simplicity, efficiency, and low-latency communication are paramount.
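As a quick illustration of this container-first design, LoxiLB can also be launched standalone with Docker. The image path and flags below follow the LoxiLB getting-started docs as a sketch; treat them as assumptions and check the docs for your release:

```shell
# Run LoxiLB as a standalone privileged container
# (ghcr.io/loxilb-io/loxilb image path and flags assumed from the docs)
docker run -u root --cap-add SYS_ADMIN --restart unless-stopped \
    --privileged -dit -v /dev/log:/dev/log \
    --name loxilb ghcr.io/loxilb-io/loxilb:latest
```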

Let's discuss how LoxiLB cluster networking checks all the boxes to cater to those challenges:

  1. Scalability: LoxiLB is capable of handling a massive number of connections with optimized packet processing, offering a foundation for high scalability. Additionally, LoxiLB offers load balancing and traffic management capabilities, allowing for efficient distribution of network traffic.

  2. Traffic Management: LoxiLB offers rich traffic management capabilities, including load balancing, traffic routing, and traffic shaping.

  3. Observability: LoxiLB has its own built-in conntrack feature using eBPF. LoxiLB leverages eBPF technology to collect other detailed network metrics as well, allowing for deeper visibility into network traffic.

  4. Security: LoxiLB uses eBPF-based core engine to enforce fine-grained network security policies, preventing network attacks and protecting the workload applications. In addition to that, LoxiLB offers a comprehensive security framework that includes firewalls, traffic shaping and secure service communication via IPSec.

  5. API Support: LoxiLB provides programmatic control through a set of REST APIs for managing networking and fine-grained security rules.
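As a rough sketch of this programmatic control, the rules can be inspected either through the REST API directly or through loxicmd, LoxiLB's CLI. The API port (11111, matching the loxilb-lb-service shown later) and the /netlox/v1 path are assumptions based on LoxiLB's defaults; consult the API docs for the exact schema:

```shell
# List configured load-balancer rules via LoxiLB's REST API
# (port 11111 and the /netlox/v1 path are assumed defaults)
curl -s -X GET http://${LOXILB_IP}:11111/netlox/v1/config/loadbalancer/all

# loxicmd wraps the same API:
loxicmd get lb          # load-balancer rules
loxicmd get conntrack   # eBPF-based connection-tracking table
```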

Let's delve into LoxiLB cluster networking architecture and explore how it's revolutionizing the way we handle network traffic and ensure optimal performance.

Let's look at the left side of the diagram. Following the flow of a packet as it enters the Kubernetes cluster, it has to pass through a series of iptables chains. Kube-proxy, the de-facto networking agent in Kubernetes, runs on each node of the cluster, watches the services, and translates them into tangible iptables or IPVS rules. For basic functionality, or for a cluster with low-volume traffic, kube-proxy is fine; but when it comes to scalability or high-volume traffic, it becomes a bottleneck. This is what motivated us to solve the problem and provide an end-to-end fast-lane solution for service-to-service communication, along with features such as security, observability, and transparency.

The LoxiLB cluster networking solution works with Flannel and kube-proxy in IPVS mode. It simplifies all the IPVS rules and injects them into its in-kernel eBPF data-path. Traffic arriving at the interface is processed by eBPF and sent directly to the pod, or on to another node, bypassing the layers of Linux networking in between. This way, all services, be they External, NodePort, or ClusterIP, can be managed through LoxiLB.
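To see the two layers side by side, you can compare what kube-proxy programs in IPVS with what LoxiLB has absorbed into its eBPF data-path. This sketch assumes ipvsadm and loxicmd are available on the node:

```shell
# kube-proxy's IPVS view of the cluster services
sudo ipvsadm -ln

# LoxiLB's view of the same services after injection
# into the in-kernel eBPF data-path
sudo loxicmd get lb -o wide
```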

How is LoxiLB cluster networking different from traditional cluster networking?

Like other cluster-wide networking solutions, LoxiLB is a tool for managing traffic and optimizing the performance of the network and its applications. However, it differs in technology, design principles, and functionality. As discussed at the start of this blog, traditional cluster networking solutions offer a rich bouquet of features, but they bring challenges that can open a new battleground rather than solving the original problem. The LoxiLB cluster networking solution addresses those challenges by implementing the networking data-path directly in the kernel using eBPF. It gives users a lightweight option, with no complex application-level inspection, no extra layers of networking, and a sidecar-free design, for those who prefer fast, reliable, low-latency service-to-service communication.

How to Get Started

We will set up a 4-node K3s cluster running Flannel with kube-proxy in IPVS mode.

Test Setup :

  • 1x Master: 4 vCPU, 4 GB RAM

  • 3x Worker: 4 vCPU, 4 GB RAM

  • 1x Client: 4 vCPU, 4 GB RAM

Configure Master Node
$ curl -sfL | INSTALL_K3S_EXEC="--disable traefik \
--disable servicelb \
--disable-cloud-controller \
--kube-proxy-arg proxy-mode=ipvs \
--flannel-iface=eth1 \
--disable-network-policy \
--node-ip=${MASTER_IP} \
--node-external-ip=${MASTER_IP} \
--bind-address=${MASTER_IP}" sh -
Configure Worker Nodes
$ curl -sfL | K3S_URL="https://${MASTER_IP}:6443" \
K3S_TOKEN="${NODE_TOKEN}" \
INSTALL_K3S_EXEC="--node-external-ip=${WORKER_IP} \
--kube-proxy-arg proxy-mode=ipvs \
--flannel-iface=eth1" sh -
Install kube-loxilb
$ sudo kubectl apply -f
serviceaccount/kube-loxilb created
clusterrole.rbac.authorization.k8s.io/kube-loxilb created
clusterrolebinding.rbac.authorization.k8s.io/kube-loxilb created
deployment.apps/kube-loxilb created
Install LoxiLB
$ sudo kubectl apply -f
serviceaccount/loxilb-lb created
clusterrolebinding.rbac.authorization.k8s.io/loxilb-lb created
daemonset.apps/loxilb-lb created
service/loxilb-lb-service created
Verify the status
$ sudo kubectl get all -n kube-system
NAME                                          READY   STATUS    RESTARTS   AGE
pod/local-path-provisioner-84db5d44d9-zv4x5   1/1     Running   0          29h
pod/coredns-6799fbcd5-qq9dv                   1/1     Running   0          29h
pod/metrics-server-67c658944b-sm9wv           1/1     Running   0          29h
pod/kube-loxilb-5fb5566999-vv7sm              1/1     Running   0          5m28s
pod/loxilb-lb-7rqnd                           1/1     Running   0          3m44s
pod/loxilb-lb-zvj7j                           1/1     Running   0          3m44s
pod/loxilb-lb-sj7z9                           1/1     Running   0          3m44s
pod/loxilb-lb-wx2c7                           1/1     Running   0          3m44s

NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                       AGE
service/kube-dns            ClusterIP      <none>        53/UDP,53/TCP,9153/TCP        29h
service/metrics-server      ClusterIP   <none>        443/TCP                       29h
service/loxilb-lb-service   ClusterIP   None            <none>        11111/TCP,179/TCP,50051/TCP   3m44s

NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/loxilb-lb   4         4         4       4            4           <none>          3m44s

NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/local-path-provisioner   1/1     1            1           29h
deployment.apps/coredns                  1/1     1            1           29h
deployment.apps/metrics-server           1/1     1            1           29h
deployment.apps/kube-loxilb              1/1     1            1           5m28s

NAME                                                DESIRED   CURRENT   READY   AGE
replicaset.apps/local-path-provisioner-84db5d44d9   1         1         1       29h
replicaset.apps/coredns-6799fbcd5                   1         1         1       29h
replicaset.apps/metrics-server-67c658944b           1         1         1       29h
replicaset.apps/kube-loxilb-5fb5566999              1         1         1       5m28s
Create an external service
$ sudo kubectl create -f iperf-service.yml 
service/iperf-service created
pod/iperf1 created
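The iperf-service.yml used above is not reproduced in this post; a minimal manifest along these lines would yield the two objects created. The image, labels, and port mapping are illustrative assumptions (iperf's default server port is 5001):

```shell
# Illustrative recreation of iperf-service.yml (image/labels assumed)
cat <<'EOF' | sudo kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: iperf-service
spec:
  type: LoadBalancer
  selector:
    app: iperf
  ports:
    - port: 55001
      targetPort: 5001
---
apiVersion: v1
kind: Pod
metadata:
  name: iperf1
  labels:
    app: iperf
spec:
  containers:
    - name: iperf
      image: networkstatic/iperf3   # illustrative; any iperf server image works
      command: ["iperf", "-s"]
EOF
```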

$ sudo kubectl get pods 
iperf1   1/1     Running   0          26s

$ sudo kubectl get svc
NAME            TYPE           CLUSTER-IP     EXTERNAL-IP         PORT(S)           AGE
kubernetes      ClusterIP      <none>              443/TCP           29h
iperf-service   LoadBalancer   llb-   55001:30336/TCP   68s

Performance Comparison

We benchmarked our implementation with two popular solutions: MetalLB with flannel and Cilium. Let's have a look at the comparison charts for throughput:

We created an iperf service and used an iperf client in a separate VM outside the cluster. Traffic originated from the client, hit the load balancer, reached the NodePort, and was then redirected to the workload. The result depends on which cluster node hosts the service and where the selected workload is scheduled: the same node or a different one. Throughput is naturally higher when the service and the workload are hosted on the same node, but in both cases LoxiLB delivered the better throughput.
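The client-side run looked roughly like the following; the external IP, duration, and stream count are placeholders, not the exact benchmark parameters:

```shell
# From the client VM outside the cluster: drive traffic at the
# LoadBalancer external IP on the service port (values are placeholders)
iperf -c ${EXTERNAL_IP} -p 55001 -t 60 -P 4
```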

Then, we benchmarked these solutions with RabbitMQ load test as well. Here are the results:

RabbitMQ TPS

RabbitMQ Median Latency

Each test was run for a minimum of 100 seconds with 10 producer and 10 consumer threads each. Across all our tests and variations, LoxiLB edged out the others on every parameter.
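For reference, a RabbitMQ PerfTest invocation matching these parameters (10 producers, 10 consumers, 100-second runs) would look roughly like this; the broker URI and credentials are placeholders:

```shell
# RabbitMQ PerfTest: 10 producers (-x), 10 consumers (-y),
# 100-second time limit (-z); AMQP URI points at the service VIP
java -jar perf-test.jar -x 10 -y 10 -z 100 \
    -h amqp://user:pass@${SERVICE_VIP}:5672
```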

Future Work

Currently, LoxiLB cluster networking supports protocols such as TCP, UDP, SCTP, PFCP, and NGAP. There is a long road ahead: in the future, support for L7 protocols will be added to widen the coverage.


It's important to note that the choice between traditional and modern solutions depends on the specific requirements of the application or system. As the Kubernetes ecosystem continues to evolve, so too will the strategies and technologies employed to navigate the intricacies of cluster networking. Innovations like eBPF-based networking solutions, improved multi-cluster communication, and enhanced service mesh integrations are shaping the future of Kubernetes networking. Many solutions provide a richer set of application-layer features that may be essential in certain use cases, while LoxiLB offers advantages in simplicity, performance, and efficiency. By streamlining load balancing, enhancing fault tolerance, and improving overall network performance, LoxiLB is proving to be a valuable addition to the toolkit of modern IT architects. In the end, it is up to users to choose whichever fits their deployment architecture.



Learn, Contribute & Share


Get started with deploying LoxiLB in your cluster


Check LoxiLB Documentation for more information.

Join the LoxiLB slack channel to chat with the developers community.
