Cloud Container Service Engine (CCSE)

Using Cubecni Network Plug-in


Cubecni is an eSurfing Cloud open-source Container Network Interface (CNI) plug-in based on the VPC network. It supports defining inter-container access policies based on the Kubernetes standard Network Policy. You can use the Cubecni network plug-in to achieve internal network communication within a Kubernetes cluster. This section describes how to use the Cubecni network plug-in in eSurfing Cloud CCSE Kubernetes clusters.

  • Background

The Cubecni network plug-in is a network plug-in developed by CCSE. It assigns native elastic NICs and subnet IP addresses to Pods to implement Pod networking, and supports defining container access policies based on the Kubernetes standard Network Policy.

In the Cubecni network plug-in, each Pod has its own network stack and IP address. Traffic between Pods on the same ECS instance is forwarded directly within the instance. Traffic between Pods on different ECS instances is forwarded directly through the elastic NICs of the VPC. Because no tunneling technology such as VXLAN is needed for packet encapsulation, the Cubecni network delivers superior communication performance.
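
Because Cubecni implements the standard Kubernetes Network Policy, inter-container access rules are written as ordinary NetworkPolicy objects, with no plug-in-specific configuration. The following is a minimal sketch using the official Python Kubernetes client; the namespace, labels, and port are illustrative assumptions, not values from this guide:

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig (assumes kubectl access to the cluster).
config.load_kube_config()
networking = client.NetworkingV1Api()

# Allow ingress to Pods labelled app=web only from Pods labelled app=frontend, on TCP 80.
policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="allow-frontend-to-web"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(match_labels={"app": "web"}),
        policy_types=["Ingress"],
        ingress=[
            client.V1NetworkPolicyIngressRule(
                _from=[
                    client.V1NetworkPolicyPeer(
                        pod_selector=client.V1LabelSelector(match_labels={"app": "frontend"})
                    )
                ],
                ports=[client.V1NetworkPolicyPort(protocol="TCP", port=80)],
            )
        ],
    ),
)

networking.create_namespaced_network_policy(namespace="default", body=policy)
```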

Comparison Between Cubecni and Calico

When creating a cluster, CCSE provides two network plug-ins, Cubecni and Calico:

Cubecni: A network plug-in developed by eSurfing Cloud CCSE. CCSE assigns eSurfing Cloud elastic NICs and subnet IP addresses to containers. It supports defining inter-container access policies based on the Kubernetes standard Network Policy. Cubecni has a more flexible IPAM (IP Address Management) strategy, avoiding address waste. If intercommunication between containers and virtual machines is not required, you can choose Calico; in other cases, Cubecni is recommended.

Calico: Uses the IPIP mode of the community Calico CNI plug-in.

  • Comparison Between Cubecni and Calico

| Comparison Items | Cubecni | Calico |
| --- | --- | --- |
| Performance | Performance close to that of the elastic NIC | IPIP encapsulation and decapsulation overhead |
| Security | Supports the Network Policy | Supports the Network Policy |
| Address Management | No need to allocate address segments by node; address segments are allocated on demand, avoiding address waste. Supports configuration of security groups. | Each node is allocated a virtual address segment in advance. |
| ELB | The ELB backend cannot directly connect to the Pod; traffic must be forwarded through NodePort. | The ELB backend cannot directly connect to the Pod; traffic must be forwarded through NodePort. |

  • Configuration

Step 1: Plan and Prepare Cluster Network

When creating a CCSE Kubernetes cluster, you need to specify the VPC, the Pod subnet (address segment), and the Service CIDR (address segment).

You can use the following CIDR block configuration to quickly build a Cubecni network.

| VPC CIDR Block | Pod Subnet | Service CIDR |
| --- | --- | --- |
| 192.168.0.0/16 | 192.168.0.0/20 | 172.21.0.0/20 |

This section uses the recommended CIDR blocks above to illustrate how to create a private network and Pod subnet.

Log in to the private network console.

In the top navigation bar, select the region of the private network, then click Create Private Network.

On the private network creation page, set the name to vpc_192_168_0_0_16, then enter 192.168.0.0/16 in the IPv4 CIDR block text box.

Click + to add a second switch, set its name to pod_switch_192_168_32_0_19, select an availability zone, and set its IPv4 CIDR block to 192.168.32.0/20.

Click Confirm.
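
Before creating the cluster, you can sanity-check the CIDR plan: the Pod subnet must fall within the VPC CIDR block, and the Service CIDR must not overlap it. The following is a minimal sketch using Python's ipaddress module with the recommended CIDR blocks above:

```python
import ipaddress

# Recommended CIDR plan from the table above.
vpc_cidr = ipaddress.ip_network("192.168.0.0/16")
pod_subnet = ipaddress.ip_network("192.168.0.0/20")
service_cidr = ipaddress.ip_network("172.21.0.0/20")

# The Pod subnet must be carved out of the VPC CIDR block.
assert pod_subnet.subnet_of(vpc_cidr), "Pod subnet must be inside the VPC CIDR block"

# The Service CIDR must not overlap the VPC CIDR block (and therefore the Pod subnet).
assert not service_cidr.overlaps(vpc_cidr), "Service CIDR must not overlap the VPC CIDR block"

print("CIDR plan is consistent")
```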

Step 2: Configure Cubecni Network

The procedure for configuring the key parameters of the cluster network for the Cubecni network plug-in is as follows:

VPC: Select the private network created in Step 1: Plan and Prepare Cluster Network.

Network Plug-in: Select Cubecni.

Pod Subnet: Select the Pod subnet created in Step 1: Plan and Prepare Cluster Network.

Service CIDR: Keep the default value.
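
After the cluster is created with these settings, you can confirm that Pods receive addresses from the Pod subnet rather than from an overlay range. The following is a minimal sketch using the official Python Kubernetes client; it assumes kubectl access to the new cluster and uses the Pod subnet from Step 1 (hostNetwork Pods will report node addresses instead):

```python
import ipaddress
from kubernetes import client, config

POD_SUBNET = ipaddress.ip_network("192.168.0.0/20")  # Pod subnet selected in Step 1

config.load_kube_config()  # assumes kubectl access to the new cluster
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    ip = pod.status.pod_ip
    if ip is None:  # Pod not yet scheduled or still starting
        continue
    location = "in" if ipaddress.ip_address(ip) in POD_SUBNET else "NOT in"
    print(f"{pod.metadata.namespace}/{pod.metadata.name}: {ip} ({location} Pod subnet)")
```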

  • Cubecni IPvlan Mode

When creating a cluster, if you choose the Cubecni network plug-in, you can use the Cubecni IPvlan mode. Cubecni IPvlan mode adopts IPvlan virtualization to provide high-performance Pod and Service networks.

Different from the default Cubecni network mode, the IPvlan mode mainly optimizes the performance of the Pod network, Service, and Network Policy:

  • The Pod network is implemented directly on IPvlan L2 sub-interfaces of the elastic NIC (ENI). This significantly simplifies the network forwarding path on the host, so Pod network performance closely mirrors that of the host, with latency reduced by 30% compared to traditional modes.

  • The Pod Network Policy uses eBPF to replace the original iptables implementation. This eliminates the need for a large number of iptables rules on the host, thereby reducing the impact of Network Policy on network performance.

