
NGAP Load Balancing with LoxiLB

Updated: Oct 23


In this blog, we are going to discuss NGAP-based L7 load balancing and why it is necessary, especially in cloud-native architectures. Before we start, let’s revisit the basics of the NGAP protocol. The NG Application Protocol (NGAP) is a key control-plane protocol in 5G networks, specifically within the 5G Core (5GC) architecture. It operates over the N2 interface and serves as the essential medium of communication between the core network and the radio access network (RAN). In other words, it carries signaling between the Access and Mobility Management Function (AMF) and the gNodeB (gNB), the 5G equivalent of the base station in previous generations of mobile networks.


NGAP sits at the application layer of the OSI model, operating over SCTP. Furthermore, NAS messages carried over NGAP are used to uniquely identify and manage each UE throughout its lifecycle within the 5G network.


NGAP includes a variety of message types for different purposes, such as:

  • Initial UE Message: Sent from the gNB to the AMF when a new UE attaches to the network.

  • UE Context Release: Manages the release of the UE context when a session ends.

  • Handover Request: Facilitates the handover process from one gNB to another.

  • Paging: Used for locating a UE within the network.


Services for N2 interface

3GPP introduced the concept of the Service Communication Proxy (SCP). It is not a new concept; rather, it is very similar to the Diameter Signaling Router in 4G. The idea of the SCP is essentially to expose all the components through services. You can read more about it here.


Services can be L4 or L7 based. In an L4 service, load balancing is done on L4 parameters; in an L7 service, it is done at the application level. When AMFs are deployed in a Kubernetes cluster and a service is created for gNBs, load balancing for the N2 interface can be done at Layer 4 using SCTP parameters when a gNB connects to the AMF. In this case, the gNB establishes a direct association with one AMF instance, which means all the UEs belonging to that particular gNB go to that particular AMF: NAS signaling messages ride on NGAP, an application-level protocol, while the traffic is distributed using only L4 protocol information.
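For contrast, a plain L4 service for the N2 interface could look like the minimal sketch below (the name and selector are illustrative, not taken from the charts used later in this post). With such a service, each gNB's SCTP association is simply pinned to one AMF endpoint:

apiVersion: v1
kind: Service
metadata:
  name: amf-n2-l4            # hypothetical name, for illustration only
spec:
  type: LoadBalancer
  selector:
    epc-mode: amf
  ports:
    - name: n2
      protocol: SCTP         # NGAP rides on SCTP
      port: 38412            # standard NGAP port
      targetPort: 38412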


Now, let’s discuss the scenario where the 5G core components are hosted as services in a cloud environment and different UEs and gNBs connect to the 5G core. Suppose the AMF is hosted behind an L4 load balancer service. The gNB connections will then be distributed evenly by the load balancer.


Depending on the implementation, an AMF can be deployed as stateful or stateless. Stateful means the AMF maintains UE state locally; stateless means UE state is not kept locally but shared, for example through a DB. For this blog, we are going to use Open5gs, which is stateful.


NGAP load-balancing problems

Although NGAP load balancing can be done at Layer 4, that may not be optimal depending on the use case. There are two main problems: first, an AMF may get overloaded when its gNB serves more UEs than others; second, handover.


Let’s discuss the first one. With L4 load balancing, each gNB gets assigned to one AMF instance, so all UEs associated with that gNB must also be handled by the same AMF. If the number of UEs served by a gNB increases for any reason, it can result in sub-optimal response times for UE calls and sessions. Load balancing then fails to achieve its goal.




The other issue arises during handover, when a UE moves from one gNB to a different gNB that may be connected to a different AMF instance. That AMF instance may not have the UE's context information, which is the usual case when AMFs are stateful. The UE then has to re-initiate its connection with the gNB and then with a new AMF, making a seamless handover practically impossible. When AMFs are stateless, however, it does not really matter which AMF handles which UE.



Solution 

To solve the above-mentioned pain points, we need an L7 load balancer (from an NGAP perspective) that comprehends not only the NGAP protocol but also the NAS messages transported over it.



This diagram shows the UEs being distributed across AMFs; after handover, a UE remains connected to the same AMF:




LoxiLB is a load balancer that not only implements L4 services but is also able to overcome challenges like NGAP L7 load balancing. Now, let’s walk step by step through how we deployed this scenario and tested it with a completely open-source stack based on Open5gs, LoxiLB and UERANSIM. All components are deployed in a cloud-native fashion using Kubernetes as the base platform.


Topology


The topology has 6 nodes: 1 node runs the UERAN simulator, which simulates two UEs; 2 nodes run the UPFs; LoxiLB runs on one node; and the 2 Open5gs cores run in single-node k3s clusters (1 node each).


Prepare the Kubernetes cluster

We are assuming that the user has already set up a Kubernetes cluster. If not, then there are plenty of Quick start guides to refer to. 
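For example, a single-node k3s cluster (which is what we use here) can be brought up roughly as follows; the built-in service load balancer is disabled so that LoxiLB can serve LoadBalancer-type services instead (flags may vary across k3s versions):

$ curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable traefik --disable servicelb" sh -
$ sudo kubectl get nodes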


Prepare LoxiLB Instances

Once the Kubernetes cluster is ready, we can deploy LoxiLB. To avoid a single point of failure, there are several ways to deploy LoxiLB with high availability; please refer to this to learn a few of them. For this blog, we will keep things simple and use a single LoxiLB instance. Follow these steps to get LoxiLB up and running:

$ apt-get update
$ apt-get install -y software-properties-common
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
$ add-apt-repository -y "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
$ apt-get update
$ apt-get install -y docker-ce
$ docker run -u root --cap-add SYS_ADMIN --restart unless-stopped --privileged -dit -v /dev/log:/dev/log --entrypoint=/root/loxilb-io/loxilb/loxilb --net=host --name loxilb ghcr.io/loxilb-io/loxilb:scp
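As a quick sanity check, confirm the container is up and responding (loxicmd ships inside the LoxiLB container):

$ docker ps | grep loxilb
$ docker exec -it loxilb loxicmd get lb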

Deploy kube-loxilb

kube-loxilb is used to integrate LoxiLB with Kubernetes: it watches LoadBalancer services and programs LoxiLB accordingly.
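The manifest can usually be fetched straight from the kube-loxilb repository (the path below is its typical location at the time of writing):

$ wget https://raw.githubusercontent.com/loxilb-io/kube-loxilb/main/manifest/ext-cluster/kube-loxilb.yaml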

kube-loxilb.yaml

        args:
            - --loxiURL=http://172.17.0.2:11111
            - --cidrPools=defaultPool=17.17.10.0/24
            - --setLBMode=2

A description of these options follows:

loxiURL: LoxiLB API server address. kube-loxilb uses this URL to communicate with LoxiLB, so the IP must be reachable from kube-loxilb (e.g. the private IP of the LoxiLB node).

cidrPools: Named VIP CIDR pools. When a LoadBalancer service is created, LoxiLB allocates the VIP for the LB rule from the named pool (here, defaultPool). In this document, we will specify a private IP range.

setLBMode: Specifies the NAT mode of the load balancer. Three modes are currently supported (0=default, 1=oneArm, 2=fullNAT); we will use mode 2 (fullNAT) for this deployment.

 

In the topology, the LoxiLB node's private IP is 192.168.80.9, so we changed the values to:

        args:
        - --loxiURL=http://192.168.80.9:11111
        - --cidrPools=defaultPool=192.168.80.9/32
        - --setLBMode=2
        - --appendEPs # Required when the AMFs run in separate clusters and you want a single LoxiLB instance for multi-cluster.

After modifying the options, use kubectl to deploy kube-loxilb.

$ kubectl apply -f kube-loxilb.yaml
serviceaccount/kube-loxilb created
clusterrole.rbac.authorization.k8s.io/kube-loxilb created
clusterrolebinding.rbac.authorization.k8s.io/kube-loxilb created
deployment.apps/kube-loxilb created

When the deployment is complete, you can verify that the Deployment has been created in the kube-system namespace of k8s with the following command:

$ kubectl -n kube-system get deployment
NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
calico-kube-controllers   1/1     1            1           18d
coredns                   2/2     2            2           18d
kube-loxilb               1/1     1            1           18d
metrics-server            1/1     1            1           18d

Deploy UPF

Now, let's install Open5gs UPF on the UPF node.

Log in to the UPF node and install mongodb first. Import the key for installation.

$ sudo apt update
$ sudo apt install gnupg
$ curl -fsSL https://pgp.mongodb.com/server-6.0.asc | sudo gpg --dearmor -o /usr/share/keyrings/mongodb-server-6.0.gpg
$ echo "deb [ arch=amd64,arm64 signed-by=/usr/share/keyrings/mongodb-server-6.0.gpg ] https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/6.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-6.0.list

Install mongodb with the following command.

$ sudo apt update
$ sudo apt install -y mongodb-org
$ sudo systemctl start mongod  # (if '/usr/bin/mongod' is not running)
$ sudo systemctl enable mongod # (ensure to automatically start it on system boot)
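You can confirm that mongod is active before moving on:

$ sudo systemctl status mongod --no-pager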

After mongodb installation is complete, install open5gs with the following command.

$ sudo add-apt-repository ppa:open5gs/latest
$ sudo apt update
$ sudo apt install open5gs

When Open5gs is installed, all of its processes start running initially, but we only need the UPF on this node. So, stop everything else with the following command.

$ sudo systemctl stop open5gs*

If you don't want the processes to start again when the node restarts, you can use the following commands. However, since * does not apply to the commands below, you must apply them to each process individually (or use the loop shown after the commands below).

$ sudo systemctl disable open5gs-amfd
$ sudo systemctl disable open5gs-smfd
...
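If you prefer not to type out every unit, a small loop like this should disable all Open5gs services except the UPF (assuming the stock unit names):

$ systemctl list-unit-files 'open5gs-*' --type=service --no-legend | awk '{print $1}' | grep -v open5gs-upfd | xargs -r -n1 sudo systemctl disable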

Open the /etc/open5gs/upf.yaml file. Change the addr of the pfcp and gtpu objects under upf to the private IP of the UPF node.

upf:
    pfcp:
      - addr: 192.168.80.5
    gtpu:
      - addr: 192.168.80.5
    subnet:
      - addr: 10.45.0.1/16
      - addr: 2001:db8:cafe::1/48
    metrics:
      - addr: 127.0.0.7
        port: 9090

Restart UPF with the following command.

$ sudo systemctl start open5gs-upfd
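As a quick check, the UPF should now be listening on its PFCP (UDP/8805) and GTP-U (UDP/2152) ports on the node's private IP:

$ sudo ss -lunp | grep -E '8805|2152'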

Repeat the procedure for another UPF node with IP address 192.168.80.6.


Install UERAN simulator

Follow the steps below to install the UERAN simulator:

$ git clone https://github.com/my5G/my5G-RANTester.git 
$ cd my5G-RANTester 
$ go mod download
$ cd cmd 
$ go build app.go

Deploy the Open5gs core using helm

For deployment, you need helm installed locally, on a machine where you can use kubectl.

Before deploying, check the open5gs-helm-charts/values.yaml file.

$ cd open5gs-helm-repo
$ vim open5gs-helm-charts/values.yaml

Modify the upfPublicIP value to the IP of the UPF-1 node:

smf:
  N4Int: eth0
  upfPublicIP: 192.168.80.5

Check the template file for AMF and make the changes:

$ vim open5gs-helm-charts/templates/amf-1-deploy.yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}-amf
  annotations:
    loxilb.io/probetype : "none"
    loxilb.io/lbmode : "fullproxy"
    loxilb.io/epselect: "n2"
  labels:
    epc-mode: amf
spec:
  type: LoadBalancer
  loadBalancerClass: loxilb.io/loxilb

Please note these two annotations:

    loxilb.io/lbmode : "fullproxy"
    loxilb.io/epselect: "n2"

The "fullproxy" lbmode combined with the "n2" epselect creates an L7 (NGAP-aware) load-balancing rule: endpoints are selected per UE rather than per SCTP association.


After that, you can deploy open5gs with the following command.

$ kubectl create ns open5gs
$ helm -n open5gs upgrade --install core5g ./open5gs-helm-charts/
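The pods take a little while to settle; you can watch them come up with:

$ sudo kubectl -n open5gs get pods -w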

When the deployment is complete, you can check the open5gs pod with the following command.

$ sudo kubectl get pods -A
NAMESPACE     NAME                                      READY   STATUS      RESTARTS   AGE
kube-system   coredns-6799fbcd5-99klv                   1/1     Running     0          6h29m
kube-system   local-path-provisioner-6c86858495-92spb   1/1     Running     0          6h29m
kube-system   metrics-server-54fd9b65b-sm6z8            1/1     Running     0          6h29m
open5gs       core5g-mongodb-5c5d64455c-4nvcf           1/1     Running     0          6h3m
open5gs       core5g-mongo-ue-import-kc52h              0/1     Completed   0          6h3m
open5gs       core5g-nrf-deployment-b4d796466-hl989     1/1     Running     0          6h3m
open5gs       core5g-udm-deployment-54bfd97d56-jbw9r    1/1     Running     0          6h3m
open5gs       core5g-nssf-deployment-5df4d988fd-6lg45   1/1     Running     0          6h3m
open5gs       core5g-ausf-deployment-684b4bb9f-bp6bm    1/1     Running     0          6h3m
open5gs       core5g-amf-deployment-cd7bb7dd6-2v6lx     1/1     Running     0          6h3m
open5gs       core5g-smf-deployment-67f9f4bcd-4mfql     1/1     Running     0          6h3m
open5gs       core5g-bsf-deployment-8f6dbd599-j5ntf     1/1     Running     0          6h3m
open5gs       core5g-webui-7d69d8fd46-vgthx             1/1     Running     0          6h3m
open5gs       core5g-udr-deployment-7656cbbd7b-mxrph    1/1     Running     0          6h3m
open5gs       core5g-pcf-deployment-7b87484dcf-drgc6    1/1     Running     0          6h3m
kube-system   kube-loxilb-6dbb4d7776-lfqhs              1/1     Running     0          5h58m

All the pods must be in a “Running” state except the “core5g-mongo-ue-import” job pod. As soon as it becomes “Completed”, the deployment can be considered complete.


Verify the services

$  sudo kubectl get svc -n open5gs
NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP        PORT(S)              AGE
core5g-bsf           ClusterIP      10.43.130.128   <none>             80/TCP               6h4m
core5g-ausf          ClusterIP      10.43.25.111    <none>             80/TCP               6h4m
core5g-udr           ClusterIP      10.43.108.130   <none>             80/TCP,7777/TCP      6h4m
core5g-smf           ClusterIP      10.43.179.12    <none>             2123/UDP,8805/UDP,3868/TCP,3868/SCTP,7777/TCP,2152/UDP,9090/TCP,80/TCP   6h4m
core5g-webui         ClusterIP      10.43.151.179   <none>             80/TCP               6h4m
core5g-nssf          ClusterIP      10.43.116.217   <none>             80/TCP               6h4m
core5g-pcf           ClusterIP      10.43.118.48    <none>             80/TCP               6h4m
core5g-mongodb-svc   ClusterIP      10.43.86.217    <none>             27017/TCP            6h4m
core5g-udm           ClusterIP      10.43.204.100   <none>             80/TCP               6h4m
core5g-nrf           ClusterIP      10.43.240.128   <none>             80/TCP,7777/TCP      6h4m
core5g-amf           LoadBalancer   10.43.77.17     llb-192.168.80.9   38412:30803/SCTP,7777:30358/TCP,80:31004/TCP                             6h4m

Set up the second cluster similarly, with just one small change.


Before deploying, check the open5gs-helm-charts/values.yaml file.

$ cd open5gs-helm-repo
$ vim open5gs-helm-charts/values.yaml

For this core, we are going to use the second UPF node, so we have to set the correct UPF IP in the values.yaml file.


Modify the upfPublicIP value to the IP of the UPF-2 node:

smf:
  N4Int: eth0
  upfPublicIP: 192.168.80.6

Install it the same way as the first cluster and verify the results.


Verify the services at LoxiLB:

$ loxicmd get lb -o wide
|    EXT IP     | SEC IPS | PORT  | PROTO |        NAME        | MARK | SEL |   MODE    |   ENDPOINT    | EPORT | WEIGHT | STATE  | COUNTERS |
|---------------|---------|-------|-------|--------------------|------|-----|-----------|---------------|-------|--------|--------|----------|
| 192.168.80.9  |         |    80 | tcp   | open5gs_core5g-amf |    0 | n2  | fullproxy | 192.168.80.10 | 31004 |      1 | active | 0:0      |
|               |         |       |       |                    |      |     |           | 192.168.80.20 | 30160 |      1 | active | 0:0      |
| 192.168.80.9  |         |  7777 | tcp   | open5gs_core5g-amf |    0 | n2  | fullproxy | 192.168.80.10 | 30358 |      1 | active | 0:0      |
|               |         |       |       |                    |      |     |           | 192.168.80.20 | 31624 |      1 | active | 0:0      |
| 192.168.80.9  |         | 38412 | sctp  | open5gs_core5g-amf |    0 | n2  | fullproxy | 192.168.80.10 | 30803 |      1 | active | 0:0      |
|               |         |       |       |                    |      |     |           | 192.168.80.20 | 32751 |      1 | active | 0:0      |

Now, check the logs on both UPF nodes to confirm that the N4 interface (PFCP) association is established.

$ tail -f /var/log/open5gs/upf.log 
Open5GS daemon v2.7.1

06/18 12:46:19.510: [app] INFO: Configuration: '/etc/open5gs/upf.yaml' (../lib/app/ogs-init.c:133)
06/18 12:46:19.510: [app] INFO: File Logging: '/var/log/open5gs/upf.log' (../lib/app/ogs-init.c:136)
06/18 12:46:19.577: [metrics] INFO: metrics_server() [http://127.0.0.7]:9090 (../lib/metrics/prometheus/context.c:299)
06/18 12:46:19.577: [pfcp] INFO: pfcp_server() [192.168.80.5]:8805 (../lib/pfcp/path.c:30)
06/18 12:46:19.577: [gtp] INFO: gtp_server() [192.168.80.5]:2152 (../lib/gtp/path.c:30)
06/18 12:46:19.579: [app] INFO: UPF initialize...done (../src/upf/app.c:31)
06/27 11:15:32.206: [pfcp] INFO: ogs_pfcp_connect() [192.168.80.101]:3443 (../lib/pfcp/path.c:61)
06/27 11:15:32.207: [upf] INFO: PFCP associated [192.168.80.101]:3443 (../src/upf/pfcp-sm.c:184)

Check the status of all the currently active connections at LoxiLB:



Configure UERAN Simulator

You have to change the UE’s configuration file to connect the UE to the core. The path of the configuration file is ~/my5G-RANTester/config/config.yml.

gnodeb:
  controlif:
    ip: "172.0.14.27"
    port: 9487
  dataif:
    ip: "172.0.14.27"
    port: 2152
  plmnlist:
    mcc: "208"
    mnc: "93"
    tac: "000007"
    gnbid: "000001"
  slicesupportlist:
    sst: "01"
    sd: "000001"

ue:
  msin: "0000000031"
  key: "0C0A34601D4F07677303652C0462535B"
  opc: "63bfa50ee6523365ff14c1f45f88737d"
  amf: "8000"
  sqn: "0000000"
  dnn: "internet"
  hplmn:
    mcc: "208"
    mnc: "93"
  snssai:
    sst: 01
    sd: "000001"

amfif:
  ip: "43.201.17.32"
  port: 38412

logs:
  level: 4

First, register the private IP of the UE node in the ip fields of the gnodeb controlif and dataif objects.

gnodeb:
  controlif:
    ip: "172.0.14.27"
    port: 9487
  dataif:
    ip: "172.0.14.27"
    port: 2152

Next, modify the values of mcc, mnc, and tac in the plmnlist object. These values should match the AMF settings of the Open5gs core deployed with helm; you can check them in the ./open5gs-helm-charts/values.yaml file. Here are the AMF settings in the values.yaml file used in this post.


amf:
  mcc: 208
  mnc: 93
  tac: 7
  networkName: Open5GS
  ngapInt: eth0

nssf:
  sst: "1"
  sd: "1"

The values of mcc, mnc, and tac in the UE settings must match the values above.

plmnlist:
    mcc: "208"
    mnc: "93"
    tac: "000007"
    gnbid: "000001"

The sst, sd values of the slicesupportlist object in the UE settings must match the values of the nssf object in ./open5gs-helm-charts/values.yaml.


slicesupportlist:
    sst: "01"
    sd: "000001"

The msin, key, and opc values of the ue object in the UE settings must match the simulator ue1 object in ./open5gs-helm-charts/values.yaml. Here is the relevant content of that file.


simulator:
   ue1:
     imsi: "208930000000031"
     imei: "356938035643803"
     imeiSv: "4370816125816151"
     op: "8e27b6af0e692e750f32667a3b14605d"
     secKey: "8baf473f2f8fd09487cccbd7097c6862"
     sst: "1"
     sd: "1"

If you modify the ue settings according to the contents of the values.yaml file, they look like this:

  • msin: the last 10 digits of the imsi value excluding mcc(208) and mnc(93)

  • key: secKey

  • opc: op

  • mcc, mnc, sst, sd: Enter the values described above

Other values are left as default.

ue:
  msin: "0000000031"
  key: "8baf473f2f8fd09487cccbd7097c6862"
  opc: "8e27b6af0e692e750f32667a3b14605d"
  amf: "8000"
  sqn: "0000000"
  dnn: "internet"
  hplmn:
    mcc: "208"
    mnc: "93"
  snssai:
    sst: 01
    sd: "000001"

Finally, you have to modify the amfif ip value. Since the gNB needs to connect to the AMF through the LoxiLB load balancer, this must be set to the service IP of the N2 interface, which in the current topology is 192.168.80.9.

amfif:
  ip: "192.168.80.9"
  port: 38412

After editing the configuration file, the UE can be connected to the AMF with the following commands.


$ cd ~/my5G-RANTester/cmd

$ sudo ./app load-test -n 2
INFO[0000] my5G-RANTester version 1.0.1                 
INFO[0000] ---------------------------------------      
INFO[0000] [TESTER] Starting test function: Testing registration of multiple UEs 
INFO[0000] [TESTER][UE] Number of UEs: 2                
INFO[0000] [TESTE0R][GNB] gNodeB control interface IP/Port: 192.168.80.4/9487 
INFO[0000] [TESTER][GNB] gNodeB data interface IP/Port: 192.168.80.4/2152 
INFO[0000] [TESTER][AMF] AMF IP/Port: 192.168.80.9/38412 
INFO[0000] ---------------------------------------      
INFO[0000] [GNB] SCTP/NGAP service is running           
INFO[0000] [GNB] UNIX/NAS service is running            
INFO[0000] [GNB][SCTP] Receive message in 0 stream      
INFO[0000] [GNB][NGAP] Receive Ng Setup Response        
INFO[0000] [GNB][AMF] AMF Name: open5gs-amf             
INFO[0000] [GNB][AMF] State of AMF: Active              
INFO[0000] [GNB][AMF] Capacity of AMF: 255              
INFO[0000] [GNB][AMF] PLMNs Identities Supported by AMF -- mcc: 208 mnc:93 
INFO[0000] [GNB][AMF] List of AMF slices Supported by AMF -- sst:01 sd:000001 
INFO[0001] [TESTER] TESTING REGISTRATION USING IMSI 0000000031 UE 
INFO[0001] [UE] UNIX/NAS service is running             
INFO[0001] [GNB][SCTP] Receive message in 0 stream      
INFO[0001] [GNB][NGAP] Receive Downlink NAS Transport   
INFO[0001] [UE][NAS] Message without security header    
INFO[0001] [UE][NAS] Receive Authentication Request     
INFO[0001] [UE][NAS][MAC] Authenticity of the authentication request message: OK 
INFO[0001] [UE][NAS][SQN] SQN of the authentication request message: VALID 
INFO[0001] [UE][NAS] Send authentication response       
INFO[0001] [GNB][SCTP] Receive message in 0 stream      
INFO[0001] [GNB][NGAP] Receive Downlink NAS Transport   
INFO[0001] [UE][NAS] Message with security header       
INFO[0001] [UE][NAS] Message with integrity and with NEW 5G NAS SECURITY CONTEXT 
INFO[0001] [UE][NAS] successful NAS MAC verification    
INFO[0001] [UE][NAS] Receive Security Mode Command      
INFO[0001] [UE][NAS] Type of ciphering algorithm is 5G-EA0 
INFO[0001] [UE][NAS] Type of integrity protection algorithm is 128-5G-IA2 
INFO[0002] [GNB][SCTP] Receive message in 0 stream      
INFO[0002] [GNB][NGAP] Receive Initial Context Setup Request 
INFO[0002] [GNB][UE] UE Context was created with successful 
INFO[0002] [GNB][UE] UE RAN ID 1                        
INFO[0002] [GNB][UE] UE AMF ID 3                        
INFO[0002] [GNB][UE] UE Mobility Restrict --Plmn-- Mcc: not informed Mnc: not informed 
INFO[0002] [GNB][UE] UE Masked Imeisv: 1110000000ffff00 
INFO[0002] [GNB][UE] Allowed Nssai-- Sst: 01 Sd: 000001 
INFO[0002] [GNB][NAS][UE] Send Registration Accept.     
INFO[0002] [UE][NAS] Message with security header       
INFO[0002] [UE][NAS] Message with integrity and ciphered 
INFO[0002] [UE][NAS] successful NAS MAC verification    
INFO[0002] [UE][NAS] successful NAS CIPHERING           
INFO[0002] [GNB][NGAP][AMF] Send Initial Context Setup Response. 
INFO[0002] [UE][NAS] Receive Registration Accept        
INFO[0002] [UE][NAS] UE 5G GUTI: [230 0 183 93]         
INFO[0002] [GNB][SCTP] Receive message in 0 stream      
INFO[0002] [GNB][NGAP] Receive Downlink NAS Transport   
INFO[0002] [UE][NAS] Message with security header       
INFO[0002] [UE][NAS] Message with integrity and ciphered 
INFO[0002] [UE][NAS] successful NAS MAC verification    
INFO[0002] [UE][NAS] successful NAS CIPHERING           
INFO[0002] [UE][NAS] Receive Configuration Update Command 
INFO[0003] [GNB][SCTP] Receive message in 0 stream      
INFO[0003] [GNB][NGAP] Receive PDU Session Resource Setup Request 
INFO[0003] [GNB][NGAP][UE] PDU Session was created with successful. 
INFO[0003] [GNB][NGAP][UE] PDU Session Id: 1            
INFO[0003] [GNB][NGAP][UE] NSSAI Selected --- sst: 01 sd: 000001 
INFO[0003] [GNB][NGAP][UE] PDU Session Type: ipv4       
INFO[0003] [GNB][NGAP][UE] QOS Flow Identifier: 1       
INFO[0003] [GNB][NGAP][UE] Uplink Teid: 44772           
INFO[0003] [GNB][NGAP][UE] Downlink Teid: 1             
INFO[0003] [GNB][NGAP][UE] Non-Dynamic-5QI: 9           
INFO[0003] [GNB][NGAP][UE] Priority Level ARP: 8        
INFO[0003] [GNB][NGAP][UE] UPF Address: 192.168.80.5 :2152 
INFO[0003] [UE][NAS] Message with security header       
INFO[0003] [UE][NAS] Message with integrity and ciphered 
INFO[0003] [UE][NAS] successful NAS MAC verification    
INFO[0003] [UE][NAS] successful NAS CIPHERING           
INFO[0003] [UE][NAS] Receive DL NAS Transport           
INFO[0003] [UE][NAS] Receiving PDU Session Establishment Accept 
INFO[0003] [UE][DATA] UE is ready for using data plane  
INFO[0011] [TESTER] TESTING REGISTRATION USING IMSI 0000000032 UE 
INFO[0011] [UE] UNIX/NAS service is running             
INFO[0011] [GNB][SCTP] Receive message in 0 stream      
INFO[0011] [GNB][NGAP] Receive Downlink NAS Transport   
INFO[0011] [UE][NAS] Message without security header    
INFO[0011] [UE][NAS] Receive Authentication Request     
INFO[0011] [UE][NAS][MAC] Authenticity of the authentication request message: OK 
INFO[0011] [UE][NAS][SQN] SQN of the authentication request message: VALID 
INFO[0011] [UE][NAS] Send authentication response       
INFO[0011] [GNB][SCTP] Receive message in 0 stream      
INFO[0011] [GNB][NGAP] Receive Downlink NAS Transport   
INFO[0011] [UE][NAS] Message with security header       
INFO[0011] [UE][NAS] Message with integrity and with NEW 5G NAS SECURITY CONTEXT 
INFO[0011] [UE][NAS] successful NAS MAC verification    
INFO[0011] [UE][NAS] Receive Security Mode Command      
INFO[0011] [UE][NAS] Type of ciphering algorithm is 5G-EA0 
INFO[0011] [UE][NAS] Type of integrity protection algorithm is 128-5G-IA2 
INFO[0012] [GNB][SCTP] Receive message in 0 stream      
INFO[0012] [GNB][NGAP] Receive Initial Context Setup Request 
INFO[0012] [GNB][UE] UE Context was created with successful 
INFO[0012] [GNB][UE] UE RAN ID 2                        
INFO[0012] [GNB][UE] UE AMF ID 8                        
INFO[0012] [GNB][UE] UE Mobility Restrict --Plmn-- Mcc: not informed Mnc: not informed 
INFO[0012] [GNB][UE] UE Masked Imeisv: 1110000000ffff00 
INFO[0012] [GNB][UE] Allowed Nssai-- Sst: 01 Sd: 000001 
INFO[0012] [GNB][NAS][UE] Send Registration Accept.     
INFO[0012] [GNB][NGAP][AMF] Send Initial Context Setup Response. 
INFO[0012] [UE][NAS] Message with security header       
INFO[0012] [UE][NAS] Message with integrity and ciphered 
INFO[0012] [UE][NAS] successful NAS MAC verification    
INFO[0012] [UE][NAS] successful NAS CIPHERING           
INFO[0012] [UE][NAS] Receive Registration Accept        
INFO[0012] [UE][NAS] UE 5G GUTI: [203 0 3 135]          
INFO[0012] [GNB][SCTP] Receive message in 0 stream      
INFO[0012] [GNB][NGAP] Receive Downlink NAS Transport   
INFO[0012] [UE][NAS] Message with security header       
INFO[0012] [UE][NAS] Message with integrity and ciphered 
INFO[0012] [UE][NAS] successful NAS MAC verification    
INFO[0012] [UE][NAS] successful NAS CIPHERING           
INFO[0012] [UE][NAS] Receive Configuration Update Command 
INFO[0012] [GNB][SCTP] Receive message in 0 stream      
INFO[0012] [GNB][NGAP] Receive PDU Session Resource Setup Request 
INFO[0012] [GNB][NGAP][UE] PDU Session was created with successful. 
INFO[0012] [GNB][NGAP][UE] PDU Session Id: 2            
INFO[0012] [GNB][NGAP][UE] NSSAI Selected --- sst: 01 sd: 000001 
INFO[0012] [GNB][NGAP][UE] PDU Session Type: ipv4       
INFO[0012] [GNB][NGAP][UE] QOS Flow Identifier: 1       
INFO[0012] [GNB][NGAP][UE] Uplink Teid: 41726           
INFO[0012] [GNB][NGAP][UE] Downlink Teid: 2             
INFO[0012] [GNB][NGAP][UE] Non-Dynamic-5QI: 9           
INFO[0012] [GNB][NGAP][UE] Priority Level ARP: 8        
INFO[0012] [GNB][NGAP][UE] UPF Address: 192.168.80.6 :2152 
INFO[0012] [UE][NAS] Message with security header       
INFO[0012] [UE][NAS] Message with integrity and ciphered 
INFO[0012] [UE][NAS] successful NAS MAC verification    
INFO[0012] [UE][NAS] successful NAS CIPHERING           
INFO[0012] [UE][NAS] Receive DL NAS Transport           
INFO[0012] [UE][NAS] Receiving PDU Session Establishment Accept 
INFO[0013] [UE][DATA] UE is ready for using data plane

Looking carefully at the logs, one UE got "UPF Address: 192.168.80.5 :2152" and the second UE got "UPF Address: 192.168.80.6 :2152".

This confirms that the two UEs were distributed to different AMFs running in different clusters.


Verify the result at LoxiLB:

$ loxicmd get ct -o wide
|    SERVICE NAME    |    DESTIP    |    SRCIP     | DPORT | SPORT | PROTO | STATE |                       ACT                       | PACKETS | BYTES |
|--------------------|--------------|--------------|-------|-------|-------|-------|-------------------------------------------------|---------|-------|
| open5gs_core5g-amf | 192.168.80.9 | 192.168.80.4 | 38412 |  9487 | sctp  | est   |                                                 |      80 |  6242 |
|                    |              |              |       |       |       |       | fp|192.168.80.9:53322->192.168.80.10:30803|sctp |         |       |
|                    |              |              |       |       |       |       | fp|192.168.80.9:38169->192.168.80.20:32751|sctp |         |       |

Small exercise for the readers

Just to validate that the load balancing is indeed happening on L7 (NGAP) parameters, let's reinstall both Open5gs cores with one small change: instead of fullproxy mode, use fullnat mode.

$ vim open5gs-helm-charts/templates/amf-1-deploy.yaml


apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}-amf
  annotations:
    loxilb.io/probetype : "none"
    loxilb.io/lbmode : "fullnat"
  labels:
    epc-mode: amf
spec:
  type: LoadBalancer
  loadBalancerClass: loxilb.io/loxilb

You will observe that the load balancing now happens per L4 (SCTP) connection. Each time you run the UERAN simulator, both UEs will connect to the same cluster and receive the same UPF IP address. Check "loxicmd get ct -o wide", the UPF logs and the AMF pod logs to verify.
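For example, after reinstalling with fullnat, re-run the simulator and inspect the connection table; this time both SCTP associations should land on the same endpoint:

$ sudo ./app load-test -n 2
$ loxicmd get ct -o wide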


Conclusion

We observed that L7 load balancing provides a more sophisticated, application-aware approach to traffic management than L4 load balancing. For the N2 interface, an L4 load balancer service would also do just fine, but it is evidently beneficial to have L7 (NGAP-based) load balancing, especially when the number of UEs under each gNB can be skewed, or for handover.






