Getting started with loxilb on Amazon EKS
Updated: Feb 17
Kubernetes can be installed on top of various cloud infrastructures, whether EKS or GCE. However, each cloud has its own way of managing the network, and it is virtually impossible for Kubernetes itself to support all of them. Kubernetes solves this problem with interface modules called cloud providers. Each cloud provider implements the cloud-provider interface for its environment, and Kubernetes uses this interface to configure load balancer rules, nodes, and network routes.
However, there may be situations where you use EKS but need the features of another load balancer instead of those provided by ELB. To cater to this case, Kubernetes provides a LoadBalancerClass field in the service spec starting from version 1.24. By setting the loadBalancerClass field, you can use a load balancer other than the default one set by the cloud provider.
For services with loadBalancerClass set, the default cloud load balancer does nothing. Instead, something must watch for that LoadBalancerClass and do the corresponding work, e.g. an application that can connect to the other load balancer and configure the service's rules on it.
LoxiLB provides the kube-loxilb application to support the LoadBalancerClass. In this post, we will see how to deploy kube-loxilb to EKS and set up LoxiLB.
kube-loxilb
kube-loxilb is an application deployed in Kubernetes as a Deployment. It monitors k8s service creation events and checks whether a loadBalancerClass is specified in the LoadBalancer service spec. If the loadBalancerClass value is "loxilb.io/loxilb", kube-loxilb allocates an external IP and configures the service in the LoxiLB load balancer.
Topology
This is the topology used for the setup:

We created an EKS cluster consisting of 3 nodes and a LoxiLB node to act as the load balancer node. To make the topology reachable from the outside, AWS allocates a public IP, which is associated with the LoxiLB node's eth0 interface. (Kubernetes version 1.24 or higher is required for LoadBalancerClass support.) Each node runs Ubuntu 20.04 LTS.
Install eksctl
We will install eksctl locally to provision EKS. First, install the AWS CLI.
$ sudo apt install unzip
$ curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
$ unzip awscliv2.zip
$ sudo ./aws/install
Then add the user's access information to the AWS CLI with the command below. For region, we will use ap-northeast-2 in this post.
$ aws configure
AWS Access Key ID [None]: ~~~
AWS Secret Access Key [None]: ~~~
Default region name [None]: ap-northeast-2
Default output format [None]: json
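If the credentials were registered correctly, the AWS CLI should now be able to identify your account. As an optional sanity check, you can run:
$ aws sts get-caller-identity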
Now install eksctl:
$ curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
$ sudo mv /tmp/eksctl /usr/local/bin
$ eksctl version
Create EKS cluster
Let's create the EKS cluster using eksctl with the command below (a quick verification sketch follows the option descriptions):
$ eksctl create cluster \
--version 1.24 \
--name loxilb-demo \
--vpc-nat-mode Single \
--region ap-northeast-2 \
--node-type t3.medium \
--nodes 3 \
--with-oidc \
--ssh-access \
--ssh-public-key aws-netlox \
--managed
--version : Specifies the Kubernetes version.
--name : Specifies the EKS cluster name.
--vpc-nat-mode : Specifies the VPC NAT gateway mode; we use Single here.
--ssh-public-key : Specifies the key for SSH access.
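Cluster creation takes a while. Once eksctl finishes, a quick way to verify the cluster and its node group (assuming the cluster name and region used above) is:
$ eksctl get cluster --region ap-northeast-2
$ eksctl get nodegroup --cluster loxilb-demo --region ap-northeast-2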
Create LoxiLB node
Once the EKS cluster is deployed, we can create the LoxiLB node. We will spawn a new EC2 instance of type t2.large running Ubuntu 20.04. Since traffic coming from outside must pass through the LoxiLB node and be load balanced before reaching EKS, place the instance in the same VPC as the EKS cluster. The subnet should be one of the EKS cluster's public subnets.

Also create a security group (loxilb-node-sg) for external access. In this post, we open the SSH port and port 8765 for external connections.

For communication between the LoxiLB node and the EKS nodes, add inbound rules allowing all traffic from loxilb-node-sg to the EKS security groups, as sketched below.
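If you prefer the CLI over the console, a rule like this can also be added with the AWS CLI. This is only a sketch; the security group IDs are placeholders for the EKS node security group and loxilb-node-sg:
$ aws ec2 authorize-security-group-ingress \
    --group-id <eks-node-sg-id> \
    --protocol=-1 \
    --source-group <loxilb-node-sg-id>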

For external access, an Elastic IP has been allocated and associated with the LoxiLB instance.

You can now SSH into the LoxiLB node using the AWS key file.
$ ssh -i aws-netlox.pem ubuntu@15.164.9.61
Pre-requisites
kubectl is used to create services and inspect resources on k8s. This post assumes that you can manage the cluster using kubectl from the LoxiLB node. Also, LoxiLB is deployed as a docker container, so docker must be installed on the LoxiLB node.
Install kubectl on LoxiLB nodes
Installing kubectl on the LoxiLB node lets you manage Kubernetes directly from it. Since we deployed EKS with version 1.24 above, we also install kubectl version 1.24. You can refer to this link to install the desired version.
$ curl -O https://s3.us-west-2.amazonaws.com/amazon-eks/1.24.9/2023-01-11/bin/linux/amd64/kubectl
$ chmod +x ./kubectl
$ mkdir -p $HOME/bin && cp ./kubectl $HOME/bin/kubectl && export PATH=$PATH:$HOME/bin
$ echo 'export PATH=$PATH:$HOME/bin' >> ~/.bashrc
$ kubectl version --short --client
Accessing EKS through kubectl requires AWS authentication and a kubeconfig file. Refer to the eksctl installation section above to register the AWS user key information on the LoxiLB node as well. The kubeconfig file is created at ~/.kube/config on the node where the eksctl create cluster command was executed; copy that file to the LoxiLB node's ~/.kube directory. After authentication, you can get node information with the kubectl get nodes command.
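Instead of copying the file manually, you can also regenerate the kubeconfig on the LoxiLB node with the AWS CLI (assuming the cluster name and region used earlier):
$ aws eks update-kubeconfig --region ap-northeast-2 --name loxilb-demo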
$ kubectl get nodes
NAME                                                STATUS   ROLES    AGE   VERSION
ip-192-168-29-103.ap-northeast-2.compute.internal   Ready    <none>   46m   v1.24.9-eks-49d8fe8
ip-192-168-54-159.ap-northeast-2.compute.internal   Ready    <none>   46m   v1.24.9-eks-49d8fe8
ip-192-168-91-227.ap-northeast-2.compute.internal   Ready    <none>   46m   v1.24.9-eks-49d8fe8
Install docker on LoxiLB nodes
Refer to this link to install docker on LoxiLB node.
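For reference, one common way to install docker on Ubuntu 20.04 is Docker's convenience script (a sketch; follow the linked guide for the complete instructions):
$ curl -fsSL https://get.docker.com -o get-docker.sh
$ sudo sh get-docker.sh
$ sudo usermod -aG docker ubuntu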
Running LoxiLB on LoxiLB Nodes
The LoxiLB node can be accessed via the public IP provided by AWS.
Run the LoxiLB docker container on the LoxiLB node with the following command:
$ docker run -u root --cap-add SYS_ADMIN --net=host --restart unless-stopped --privileged -dit -v /dev/log:/dev/log --name loxilb ghcr.io/loxilb-io/loxilb:latest --host=0.0.0.0
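You can check that the container is up and that LoxiLB responds to its CLI (loxicmd is bundled inside the loxilb container):
$ docker ps --filter name=loxilb
$ docker exec -ti loxilb loxicmd get lb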
Deploy kube-loxilb on k8s
Next download kube-loxilb from github.
Please make sure that kubectl is available on the node where you are downloading kube-loxilb.
ubuntu@loxilb:~$ git clone https://github.com/loxilb-io/kube-loxilb.git
Cloning into 'kube-loxilb'...
remote: Enumerating objects: 68, done.
remote: Counting objects: 100% (68/68), done.
remote: Compressing objects: 100% (56/56), done.
remote: Total 68 (delta 10), reused 50 (delta 3), pack-reused 0
Unpacking objects: 100% (68/68), 57.74 KiB | 3.40 MiB/s, done.
Go to the kube-loxilb/manifest directory and open the kube-loxilb.yaml file.
ubuntu@loxilb:~$ cd kube-loxilb/manifest/
ubuntu@loxilb:~/kube-loxilb/manifest$ vi kube-loxilb.yaml
Locate the kube-loxilb Deployment entry in the kube-loxilb.yaml file. Under the container's args you will find the loxiURL, externalCIDR, and setLBMode settings. You must modify these options before deploying kube-loxilb.
      terminationGracePeriodSeconds: 0
      containers:
      - name: kube-loxilb
        image: ghcr.io/loxilb-io/kube-loxilb:latest
        imagePullPolicy: Always
        command:
        - /bin/kube-loxilb
        args:
        - --loxiURL=http://12.12.12.1:11111,http://14.14.14.1:11111
        - --externalCIDR=123.123.123.1/24
        #- --setBGP=true
        #- --setLBMode=1
        #- --config=/opt/loxilb/agent/kube-loxilb.conf
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
A little explanation about the three settings:
loxiURL : Specifies the LoxiLB API server address, in the form http://{LoxiLB node IP}:11111. Multiple comma-separated entries can be given at the same time (used when configuring multiple LoxiLB instances for HA clustering).
externalCIDR : Specifies the externally reachable IP range that can be assigned to services by the load balancer. IPs from this range are assigned as the ExternalIP of LoadBalancer services handled by LoxiLB.
setLBMode : Specifies the NAT mode of the load balancer. Three modes are currently supported. Read more here.
In the topology for this post, the LoxiLB node has the IP 192.168.3.68. We will use this IP address as the externalCIDR and modify the kube-loxilb.yaml file accordingly.
        args:
        - --loxiURL=http://192.168.3.68:11111
        - --externalCIDR=192.168.3.68/32
        #- --setBGP=true
        - --setLBMode=2
        #- --config=/opt/loxilb/agent/kube-loxilb.conf
Modify the options and then deploy kube-loxilb using kubectl.
root@loxilb:/home/ubuntu/kube-loxilb/manifest# kubectl apply -f kube-loxilb.yaml
serviceaccount/kube-loxilb created
clusterrole.rbac.authorization.k8s.io/kube-loxilb created
clusterrolebinding.rbac.authorization.k8s.io/kube-loxilb created
deployment.apps/kube-loxilb created
You can verify that the Deployment has been created in the kube-system namespace of k8s with the following command:
root@loxilb:/home/ubuntu/kube-loxilb/manifest# kubectl -n kube-system get deployment
NAME          READY   UP-TO-DATE   AVAILABLE   AGE
coredns       2/2     2            2           4d2h
kube-loxilb   1/1     1            1           113s
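If the deployment does not become ready, the kube-loxilb logs usually show whether it can reach the configured loxiURL:
$ kubectl -n kube-system logs deployment/kube-loxilb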
Create a service using LoadBalancerClass
You can now use LoxiLB as the load balancer by specifying a loadBalancerClass. For testing, let's create an nginx.yaml file as follows:
root@loxilb:/home/ubuntu# vi nginx.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service1
  labels:
    app: loxilb
spec:
  selector:
    app: loxilb
  ports:
  - port: 8765
    targetPort: 80
  type: LoadBalancer
  loadBalancerClass: "loxilb.io/loxilb"
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: loxilb
spec:
  containers:
  - name: nginx
    image: nginx:stable
    ports:
    - containerPort: 80
      name: http-web-svc
The nginx.yaml file creates the nginx-service1 LoadBalancer service and one pod associated with it. The service specifies loxilb.io/loxilb as its loadBalancerClass. With that class set, kube-loxilb detects the creation of the service and configures it in the LoxiLB load balancer.
Create a service with the following command:
root@loxilb:/home/ubuntu# kubectl apply -f nginx.yaml
service/nginx-service1 created
pod/nginx created
We can see that the IP from externalCIDR has been assigned as ExternalIP:
root@loxilb:/home/ubuntu# kubectl get svc
NAME             TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)          AGE
kubernetes       ClusterIP      10.100.0.1       <none>         443/TCP          4d2h
nginx-service1   LoadBalancer   10.100.124.111   192.168.3.68   8765:32617/TCP   4m37s
On the LoxiLB node, the following command confirms that the load balancer rule has been created in LoxiLB as well:
root@loxilb:/home/ubuntu# docker exec -ti loxilb loxicmd get lb -o wide
Check service access from outside
Now you can access the k8s service using the LoxiLB node's external IP (15.164.9.61, the public IP assigned by AWS in this example). When a packet enters AWS, it is NATed to the LoxiLB node's local IP / service IP (192.168.3.68).
Let's test the external connection using curl as follows:
MacBookAir ~ % curl http://15.164.9.61:8765
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
We can see that nginx pods can be accessed successfully from outside.
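When you are done testing, the resources created in this post can be cleaned up, e.g.:
$ kubectl delete -f nginx.yaml
$ eksctl delete cluster --name loxilb-demo --region ap-northeast-2
Also remember to terminate the LoxiLB EC2 instance and release its Elastic IP in the AWS console.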
We hope you liked this blog. For more information, please visit our GitHub page.