Cloud Container Service Engine (CCSE)

Load Balancing Best Practices

2025-07-11 06:12:30

1. Access Inside the Cluster

Assume we have launched two applications in a K8S cluster: a front-end application "frontend" and a back-end application "backend". If the frontend needs to access the backend, there are two methods.

1.1 Access by Application Name

Suppose the back-end application deployed in the K8S cluster is named "backend" and listens on port 8080. In the containers of the front-end application, you can access it directly by the back-end application's name plus the port, i.e. http://backend:8080. K8S automatically forwards the request to one of the back-end container instances. As shown below:

 

[Figure: the frontend accessing the backend by application name inside the cluster]

 

Note that the forwarding above is actually Layer 4 forwarding. If the back-end application is a TCP service such as MySQL, you can access it as tcp://backend:8080; if it is a web application, you can access it as http://backend:8080.

Also note that this access method requires the front-end and back-end applications to be in the same namespace of the K8S cluster. If they are in different namespaces, for example if the back-end application is in the namespace "ns2", the frontend should use http://backend.ns2:8080 (that is, append the namespace after the application name).
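For illustration, a minimal Service definition for the back-end application might look like the following sketch (the selector labels are assumptions and must match the labels on the back-end Pods):

# Sketch: a Service that makes the backend reachable as http://backend:8080
# (or http://backend.ns2:8080 from another namespace).
apiVersion: v1
kind: Service
metadata:
  name: backend          # the name used in http://backend:8080
  namespace: ns2         # adjust or omit depending on where the backend runs
spec:
  selector:
    app: backend         # assumed label on the backend Pods
  ports:
  - protocol: TCP
    port: 8080           # port that callers use
    targetPort: 8080     # port the backend container listens on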

 

1.2 Access via the Microservice Framework's Registration Mechanism

If the microservice framework used by the business developers provides a service-registration mechanism, that mechanism can be used directly for load balancing. As shown below:

[Figure: load balancing via a microservice registration center]

 

The back-end application container instances register their IP addresses and ports with a registration center such as ZooKeeper or Eureka. The front-end application container instances retrieve the list of back-end instances from the registration center and then decide which back-end instance to access.

The Registration Center can be deployed either inside or outside the K8S cluster, depending on the business requirements.
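As a sketch of how a back-end instance can learn the address it registers, the Pod IP can be injected into the container through the Kubernetes downward API; the environment-variable name and the registration logic inside the application are assumptions:

# Sketch: expose each Pod's own IP to the backend process so that it can
# register ip:port with ZooKeeper or Eureka at startup.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: backend
        image: backend:1.0           # hypothetical image
        ports:
        - containerPort: 8080
        env:
        - name: POD_IP               # read by the application when it registers itself
          valueFrom:
            fieldRef:
              fieldPath: status.podIP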

 

2. Access from Outside the Cluster

Assume we want to access the front-end application from a browser. There are several methods.

 

2.1 NodePort

The simplest method is to use NodePort. We only need to create a Service object for the front-end application in the K8S cluster and specify a host port (for example, 30000) in this object. kube-proxy on each host (deployed together with the K8S cluster) then listens on port 30000, and when port 30000 of a host is accessed, kube-proxy forwards the request to a container instance of the front-end application. As shown below:

[Figure: NodePort access through kube-proxy on each host]

 

Since CCSE has already installed keepalived on the Master host of the K8S cluster and bound a VIP, the application can be accessed through http://vip:30000, which keeps the access entry highly available.
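A NodePort Service for the front-end application might look like the following sketch (the in-cluster port numbers and selector labels are assumptions):

# Sketch: kube-proxy on every host listens on 30000 and forwards to the
# frontend Pods selected by this Service.
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: NodePort
  selector:
    app: frontend        # assumed label on the frontend Pods
  ports:
  - protocol: TCP
    port: 80             # Service port inside the cluster (assumed)
    targetPort: 8080     # port the frontend container listens on (assumed)
    nodePort: 30000      # host port opened on every node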

 

2.2 NodePort+Nginx

In the NodePort solution above, kube-proxy cannot be configured with HTTPS certificates, so that solution supports only HTTP, not HTTPS.

If HTTPS is needed, you have to manually set up an Nginx cluster in front of kube-proxy and manually configure the HTTPS certificates and forwarding rules on Nginx. You also need to install keepalived on the two Nginx hosts and bind a VIP to ensure high availability. As shown below:

[Figure: an Nginx cluster with a VIP in front of the NodePorts, terminating HTTPS]

 

The Nginx instances can be set up outside the K8S cluster or on the hosts of the K8S cluster (deployed directly on the host, not in a container). However, they should not be placed on the Master hosts of the K8S cluster, because keepalived is already deployed there and may conflict with the keepalived used for Nginx.
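A minimal Nginx configuration for this setup might look like the following sketch; the node IPs, domain name, and certificate paths are all assumptions:

# Sketch: terminate HTTPS on Nginx and forward to the NodePort (30000)
# opened by kube-proxy on the K8S hosts.
upstream frontend_nodeport {
    server 192.168.0.11:30000;    # K8S host 1 (assumed IP)
    server 192.168.0.12:30000;    # K8S host 2 (assumed IP)
}

server {
    listen 443 ssl;
    server_name frontend.example.com;                    # assumed domain
    ssl_certificate     /etc/nginx/certs/frontend.crt;   # assumed path
    ssl_certificate_key /etc/nginx/certs/frontend.key;   # assumed path

    location / {
        proxy_pass http://frontend_nodeport;
    }
}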

 

2.3 NodePort+LVS

In the NodePort solution, the VIP is bound to only one host at a time, which may not withstand heavy traffic. In that case, we can manually build LVS in front of NodePort, as follows:

[Figure: LVS in front of the NodePorts]

1. To ensure high availability of the entry, you need to manually set up LVS and install keepalived to bind a VIP.
2. Configure LVS in DR mode to forward to the multiple K8S hosts (see the configuration sketch after this list).
3. For each host port opened by kube-proxy, a corresponding port must be configured in LVS.
4. This solution does not support HTTPS, because LVS performs Layer 4 forwarding and therefore cannot be configured with HTTPS certificates.
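A keepalived-managed LVS configuration in DR mode might look like the following sketch; the VIP, host IPs, and port are assumptions. Note that in DR mode each K8S host must also carry the VIP on a loopback interface with ARP responses suppressed.

# Sketch (keepalived.conf fragment): forward VIP:30000 to the NodePort
# on the K8S hosts in DR mode.
virtual_server 192.168.0.100 30000 {    # VIP and NodePort (assumed)
    delay_loop 6
    lb_algo rr                          # round-robin scheduling
    lb_kind DR                          # direct-routing mode
    protocol TCP

    real_server 192.168.0.11 30000 {    # K8S host 1 (assumed IP)
        TCP_CHECK {
            connect_timeout 3
        }
    }
    real_server 192.168.0.12 30000 {    # K8S host 2 (assumed IP)
        TCP_CHECK {
            connect_timeout 3
        }
    }
}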

 

2.4 NodePort+Nginx+LVS

For environments experiencing high traffic and requiring HTTPS support, this solution is recommended.

Manually set up Nginx in front of kube-proxy and then LVS in front of Nginx as follows:

[Figure: LVS in front of Nginx, Nginx in front of the NodePorts]

 

1. To ensure high availability of the entry, you need to manually set up LVS and install keepalived to bind a VIP.
2. Manually install Nginx and configure the HTTPS certificate on it.
3. In LVS, only one port needs to be configured, forwarding to the corresponding port on Nginx. Nginx also listens on only one port and forwards to the respective kube-proxy ports based on the domain name (see the Nginx sketch after this list). kube-proxy then forwards to the corresponding application in the cluster based on the port.
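The domain-based forwarding in step 3 might look like the following Nginx sketch; the domain names, host IPs, and NodePorts are assumptions:

# Sketch: Nginx listens on one port and routes to different NodePorts
# according to the requested domain name.
upstream frontend_nodeport { server 192.168.0.11:30000; server 192.168.0.12:30000; }
upstream admin_nodeport    { server 192.168.0.11:30001; server 192.168.0.12:30001; }

server {
    listen 443 ssl;
    server_name frontend.example.com;                   # assumed domain
    ssl_certificate     /etc/nginx/certs/example.crt;   # assumed path
    ssl_certificate_key /etc/nginx/certs/example.key;
    location / { proxy_pass http://frontend_nodeport; }
}

server {
    listen 443 ssl;
    server_name admin.example.com;                      # assumed domain
    ssl_certificate     /etc/nginx/certs/example.crt;
    ssl_certificate_key /etc/nginx/certs/example.key;
    location / { proxy_pass http://admin_nodeport; }
}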

 

2.5 Ingress

Ingress is a different solution from NodePort. It works as follows:

[Figure: Ingress-based access through the IngressController on the Master host]

 

1. The IngressController listens on only one port and forwards to different applications in the K8S cluster based on the domain name. Therefore, this solution supports only domain-name access, not IP access.
2. Only the Master host of K8S runs the IngressController. Since there is already a VIP on the Master host, the DNS server can resolve the domain names to the VIP, which keeps the access entry highly available.
3. On the CCSE management console, you can create a different Ingress object for each application, so that different domain names are assigned to different applications (see the example Ingress object after this list).
4. The IngressController currently does not support configuring HTTPS certificates; this capability is planned for a future update. So this solution currently supports only HTTP, not HTTPS.
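An Ingress object for the front-end application might look like the following sketch; the domain name is an assumption, and the exact apiVersion depends on the Kubernetes version of the cluster:

# Sketch: route requests for frontend.example.com to the frontend Service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend
spec:
  rules:
  - host: frontend.example.com      # assumed domain, resolved by DNS to the VIP
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend          # the frontend Service
            port:
              number: 80            # Service port (assumed)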

 

2.6 Ingress+LVS

In the Ingress solution above, the VIP is on only one host, so it is not suitable for scenarios with heavy traffic. If there is a large amount of traffic, Ingress+LVS can be used, as follows:

[Figure: LVS in front of the IngressControllers]

 

1. To ensure high availability of the entry, you need to manually set up LVS and install keepalived to bind a VIP.
2. LVS needs to listen on only one port, the port that the IngressController listens on (see the sketch after this list).
3. When a browser accesses the system through a domain name, the DNS server resolves the domain name to the VIP of LVS. LVS forwards the request to a specific IngressController, which in turn forwards the request to different applications in the cluster based on the domain name.
4. As with the previous solution, this solution does not support HTTPS yet; this capability is planned for a future update.
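The LVS configuration here follows the same idea as in section 2.3, except that only the IngressController port is forwarded. A keepalived sketch, with the VIP, host IPs, and port as assumptions:

# Sketch (keepalived.conf fragment): forward VIP:80 to the hosts running
# the IngressController, in DR mode.
virtual_server 192.168.0.100 80 {       # VIP and IngressController port (assumed)
    delay_loop 6
    lb_algo rr
    lb_kind DR
    protocol TCP

    real_server 192.168.0.21 80 {       # IngressController host 1 (assumed IP)
        TCP_CHECK {
            connect_timeout 3
        }
    }
    real_server 192.168.0.22 80 {       # IngressController host 2 (assumed IP)
        TCP_CHECK {
            connect_timeout 3
        }
    }
}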

 

2.7 Comparison

Solution | Layer 4 or Layer 7 Forwarding | Supports High Availability | Supports HTTPS | Advantages and Disadvantages | Application Scenario
NodePort | Layer 4 | Yes | No | Advantage: easy to use, no additional configuration required. | TCP or HTTP services, scenarios with low traffic.
NodePort+Nginx | Layer 7 | Yes | Yes | Disadvantage: requires additional manual deployment and configuration of Nginx. | HTTPS services, scenarios with low traffic.
NodePort+LVS | Layer 4 | Yes | No | Disadvantage: requires additional manual deployment and configuration of LVS. | TCP or HTTP services, scenarios with high traffic.
NodePort+Nginx+LVS | Layer 7 | Yes | Yes | Disadvantage: requires additional manual deployment and configuration of Nginx and LVS. | HTTPS services, scenarios with high traffic.
Ingress | Layer 7 | Yes | Not yet; planned for a future update. | Advantage: easy to use. Disadvantage: supports only domain-name access, not IP access, and requires manual DNS server configuration. | HTTP services (HTTPS in the future), scenarios with low traffic.
Ingress+LVS | Layer 7 | Yes | Not yet; planned for a future update. | Disadvantage: requires additional deployment of LVS, supports only domain-name access, not IP access, and requires manual DNS server configuration. | HTTP services (HTTPS in the future), scenarios with high traffic.

