
Oracle Cloud: Bring your own LB for self-managed clusters

Updated: Jan 15

Today, almost all cloud providers offer managed Kubernetes clusters (e.g., OKE) as well as managed load-balancer services (e.g., ELB) to simplify the deployment, management, and scaling of containerized applications and networking resources. But some users prefer self-managed load balancing in the public cloud for various reasons: customization and control, reduced cost, security and compliance, no vendor lock-in, specialized use cases, etc.


In this blog, we will discuss how LoxiLB can be deployed in Oracle Cloud Infrastructure (OCI) to provide HA load balancing for self-managed clusters. Before continuing, we encourage readers to go through the previous blog to get acquainted with the basics of LoxiLB High Availability and its other components. It will be particularly helpful for those interested in deploying self-managed K8s clusters with their own LB implementation, given how difficult it is to find open-source cloud-native K8s LB implementations that run seamlessly in public clouds.

Deployment scheme

We will install a Kubernetes cluster comprising four nodes. To provide hitless failover, we will deploy LoxiLB as a cluster of at least two nodes in MASTER-BACKUP mode, and kube-loxilb will monitor the health state of each LoxiLB instance. The idea behind LoxiLB HA clustering is to create each K8s LB service with one of the IP addresses from the externalCIDR and use that service IP address as its reachable point.


Install LoxiLB


For deploying a Linux-based virtual machine in OCI, you must follow a few steps, including creating a compartment, a virtual cloud network, and a subnet. These steps are nicely documented here. For this use case, we will deploy LoxiLB in external mode (more details here).


We assume the user is already running a K8s cluster in OCI. The next step is to install the LoxiLB instances and kube-loxilb to deploy services in HA mode. The LoxiLB instances will run as Docker containers in separate VMs.


System requirements for LoxiLB VM in OCI

  • Supported Operating System : Ubuntu 20.04, Ubuntu 22.04, Oracle Linux 8/9

  • Linux Kernel version : > 5.15.x

  • CPUs : >= 3

  • Memory : >= 8GB


Create VMs for LoxiLB

Users can create a separate private subnet or use the same subnet as the K8s cluster and attach it to the LoxiLB VM instances. For external services, we need an externalCIDR, which should be a subset of the private subnet attached to the LoxiLB VMs.


For the LoxiLB VMs, we created a private subnet with CIDR 10.0.10.0/24, and we will use 10.0.10.143/32 as the external service IP.
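If you prefer the CLI over the console, the subnet can be created with the OCI CLI roughly as below. This is only a sketch: the compartment and VCN OCIDs are placeholders for your tenancy's own values.

```shell
# Sketch: create the 10.0.10.0/24 private subnet for the LoxiLB VMs.
# <compartment-ocid> and <vcn-ocid> are placeholders for your own values.
oci network subnet create \
  --compartment-id <compartment-ocid> \
  --vcn-id <vcn-ocid> \
  --cidr-block 10.0.10.0/24 \
  --display-name loxilb-subnet \
  --prohibit-public-ip-on-vnic true
```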


After the VMs are ready, we have to prepare them to call OCI APIs. Why? We will come to that later. We followed this document to generate API keys on the VM.


The config file (in ~/.oci/config) would look similar to:

[DEFAULT]
user=*******
fingerprint=********
key_file=/home/opc/.oci/oci_api_key.pem
tenancy=*******
region=******

For the loxilb Docker container, we need this (in /root/.oci/config):

[DEFAULT]
user=********
fingerprint=*******
key_file=/root/.oci/oci_api_key.pem
tenancy=********

Make a copy of the keys at /home/opc/loxilb-oci-config and run the LoxiLB Docker containers as below:

#llb1
 docker run -u root --cap-add SYS_ADMIN   --restart unless-stopped --privileged -dit -v /dev/log:/dev/log -v /home/opc/loxilb-oci-config:/root/.oci --name loxilb ghcr.io/loxilb-io/loxilb:latest  --cluster=$llb2IP --self=0 --cloud=oci

#llb2
 docker run -u root --cap-add SYS_ADMIN   --restart unless-stopped --privileged -dit -v /dev/log:/dev/log -v /home/opc/loxilb-oci-config:/root/.oci --name loxilb ghcr.io/loxilb-io/loxilb:latest --cluster=$llb1IP --self=1 --cloud=oci
Note: Make sure to review all the ingress and egress traffic policies in the security list for the VCN and allow the necessary traffic profiles. Also, check whether the source/destination check flag on the VNICs needs to be disabled.
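Once both containers are up, a quick sanity check on each LoxiLB VM confirms the container is running and responsive (a sketch; the LB rule table will be empty until kube-loxilb creates services):

```shell
# Confirm the loxilb container is running
docker ps --filter name=loxilb

# Inspect recent container logs for startup or clustering errors
docker logs loxilb | tail -n 20

# List configured LB rules (empty until services are created)
docker exec -it loxilb loxicmd get lb
```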

Install kube-loxilb


LoxiLB provides kube-loxilb, an implementation of the K8s load-balancer spec that brings LoxiLB into action. You can download kube-loxilb from GitHub, modify it if needed, and apply it on one of the K8s nodes.

$ git clone https://github.com/loxilb-io/kube-loxilb.git
$ cd kube-loxilb/manifest/ext-cluster
$ vi kube-loxilb.yaml

You may need to make some changes: find loxiURL and replace the IP addresses with the loxilb Docker IPs (facing the Kubernetes network):

containers:
     - name: kube-loxilb
       image: ghcr.io/loxilb-io/kube-loxilb:latest
       imagePullPolicy: Always
       command:
       - /bin/kube-loxilb
       args:
       - --setRoles=0.0.0.0
       - --loxiURL=http://10.0.10.141:11111,http://10.0.10.88:11111
       - --externalCIDR=10.0.10.143/32
       - --setLBMode=2

Now, simply apply it :

$ sudo kubectl apply -f kube-loxilb.yaml 
serviceaccount/kube-loxilb created
clusterrole.rbac.authorization.k8s.io/kube-loxilb created
clusterrolebinding.rbac.authorization.k8s.io/kube-loxilb created
deployment.apps/kube-loxilb created

kube-loxilb with the "--setRoles=0.0.0.0" argument assumes the responsibility of detecting the health of the LoxiLB instances and electing MASTER and BACKUP accordingly.

The "externalCIDR=10.0.10.143/32" address will be used as a floating IP: it is attached to the current MASTER LoxiLB instance through OCI APIs, making that instance the reachable point for external services. This is why we set up the API keys in the LoxiLB VMs.
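Under the hood, LoxiLB does this via the OCI SDK. Conceptually, the operation is equivalent to moving a secondary private IP between VNICs, which you could also do manually with the OCI CLI (a sketch; the VNIC OCID is a placeholder):

```shell
# Attach the floating service IP to the MASTER VM's VNIC; the
# --unassign-if-already-assigned flag moves it off the previous holder.
oci network vnic assign-private-ip \
  --vnic-id <master-vnic-ocid> \
  --ip-address 10.0.10.143 \
  --unassign-if-already-assigned
```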


LoxiLB can deploy services in three different NAT modes:

  1. default mode

  2. onearm mode

  3. fullnat mode

Please read here to learn more about the different NAT modes and how they play a part in deploying LoxiLB with HA.


Create services using LoadBalancerClass


You can now use LoxiLB as a load balancer by specifying a LoadBalancerClass. For testing, we will create a TCP service in fullnat mode. Sample yaml files for creating services can be found here.


The yaml file creates the load balancer service and one pod associated with it. The service specifies loxilb.io/loxilb as the loadBalancerClass. By specifying a loadBalancerClass with that name, kube-loxilb detects the creation of the service and associates it with the LoxiLB load balancer.
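For reference, a minimal approximation of such a yaml could look like the following. The names, labels, and nginx image are illustrative; the loadBalancerClass and the loxilb.io/lbmode annotation are the parts that matter:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: tcp-lb-fullnat
  annotations:
    loxilb.io/lbmode: "fullnat"        # select the fullnat NAT mode
spec:
  type: LoadBalancer
  loadBalancerClass: loxilb.io/loxilb  # hand the service to kube-loxilb
  selector:
    what: tcp-fullnat-test
  ports:
    - port: 56002
      targetPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: tcp-fullnat-test
  labels:
    what: tcp-fullnat-test
spec:
  containers:
    - name: tcp-fullnat-test
      image: nginx:stable              # illustrative endpoint serving on port 80
      ports:
        - containerPort: 80
```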


Create services with the following command:

vagrant@master~$: sudo kubectl apply -f tcp_fullnat.yml
service/tcp-lb-fullnat created
pod/tcp-fullnat-test created

Check that the external LB service has been created:

$ sudo kubectl get svc
NAME            TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)           AGE
tcp-lb-fullnat    LoadBalancer   172.17.23.14    llb-10.0.10.143   56002:35201/TCP    2h

This output confirms that a service of type LoadBalancer with external IP “10.0.10.143" has been created. We can verify the created service on any loxilb node:

$ sudo docker exec -it loxilb loxicmd get lb
|   EXT IP   | PORT  | PROTO |          NAME           | MARK | SEL |  MODE   | # OF ENDPOINTS | MONITOR |
|------------|-------|-------|-------------------------|------|-----|---------|----------------|---------|
| 10.0.10.143 | 56002 | tcp   | default_tcp-lb-fullnat  |    0 | rr  | fullnat |              2 | On      |

In OCI, the external service IP has to be attached to the MASTER LoxiLB VM to make it a reachable point.




Traffic Flow



Check service access from client

Let's test the service connection using curl as follows:


$ curl http://10.0.10.143:56002
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Let's reboot the MASTER LoxiLB and check whether the floating IP is re-assigned to the other LoxiLB VM.
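To observe the failover from the client side, you can poll the service in a loop while rebooting the MASTER (a sketch; stop it with Ctrl-C):

```shell
# Poll the service VIP once per second; a short run of "timeout" lines
# (or none at all) during the reboot shows the floating IP moving to
# the BACKUP LoxiLB instance.
while true; do
  curl -s -m 2 -o /dev/null -w "%{http_code}\n" http://10.0.10.143:56002 || echo "timeout"
  sleep 1
done
```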



Now the traffic flow will be like this:



Let's test the service connection again:


$ curl http://10.0.10.143:56002
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

So far, we have seen how LoxiLB provides high availability for new connections. Check out this video to see how long-lived connections are maintained and services remain intact even if the MASTER LoxiLB goes down.



Conclusion

This blog demonstrated high-availability load balancing of services in OCI using LoxiLB. We hope you liked it. Soon, we will publish a similar blog about hitless HA load balancing in AWS. For more interesting content, please visit our GitHub and website.



