EKS-A container networking interfaces (CNIs) – Architecting for Disconnected Edge Computing Scenarios

Because disconnected environments do not participate in the network virtualization stack backing in-region VPC services, the VPC-CNI plugin is not available. Instead, EKS-A offers a choice of Cilium or Kindnet for container networking. The CNI is selected once, at deployment time, and cannot be changed afterward:

Figure 9.16 – EKS-A encapsulating IP flows between worker nodes using GENEVE
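As a sketch of where that one-time choice lives, the CNI is declared in the `clusterNetwork` block of the EKS-A cluster specification. The CIDR values below are placeholders, not recommendations; consult the EKS-A cluster spec reference for the full schema:

```yaml
# Fragment of an EKS-A Cluster spec (hypothetical values).
# The CNI is fixed at creation time via clusterNetwork.cniConfig.
clusterNetwork:
  pods:
    cidrBlocks: ["192.168.0.0/16"]
  services:
    cidrBlocks: ["10.96.0.0/12"]
  cniConfig:
    cilium: {}       # or: kindnetd: {}
```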

EKS-A providers

Similar to how EKS handles the deployment and management of EC2 instance-based worker nodes in-region, EKS-A supports several providers that automate the same lifecycle tasks for non-EC2 compute resources.

The supported providers include the following:

VMware vSphere: Via a vCenter central management server

Nutanix AHV: Via a Prism central management server

CloudStack: Via a cluster endpoint (similar to the Kubernetes API)

Bare metal: Via BMC/IPMI interfaces to orchestrate network builds

Snow: Via AWS Snow APIs on each Snow family device

Docker: For unsupported development environments only

This section will cover scenarios involving the bare metal and Snow family providers only. Detailed information about all providers can be found in the installation section of the EKS-A documentation.

The EKS-A provider for Snow family devices

The EKS-A provider for Snow family devices allows customers to deploy and manage EKS-A clusters on AWS Snowball Edge devices. An entire EKS-A cluster can be deployed to a single device for maximum portability and simplicity, or it can target three or more devices to provide higher availability and scaling of compute resources:

Figure 9.17 – Amazon EKS-A Snow family provider communication channels

Regardless of the chosen topology, the deployment process is essentially the same for a fully disconnected environment:

Order AWS Snowball Edge device(s): When selecting the type of device, choose one of the three Snowball Edge compute-optimized variants during Step 2. After doing so, the Step 3 page will show a tab that lets you choose options for EKS-A on Snow, including whether to deploy everything on a single device or across a cluster of up to 10 devices. This is also where you can select the AMI and Kubernetes versions to install on the node images.

Unlock and set up the devices: When the devices arrive, connect to and unlock them with AWS OpsHub or the Snow CLI as usual, then follow the manual setup procedure in the AWS Snow Family documentation. Alternatively, tools are offered that automate the setup process; see the setup-tools section of the EKS-A documentation for more details.

Set up a local Harbor registry instance: This is needed in DDIL scenarios to eliminate the need for EKS-A to reach back to Amazon Elastic Container Registry (ECR) to obtain images. See the container-registry-ami-builder section of the EKS-A documentation for more information.

Instantiate the EKS-A admin server and deploy EKS-A clusters: The admin server is a small standalone EKS cluster that is used to bootstrap the environment and deploy/manage other clusters that contain your workloads. See the Install on Snow section of the EKS-A documentation for more specifics.
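The steps above converge on a single cluster specification that the admin server applies. The sketch below is a hypothetical minimal spec for the Snow provider: the resource kinds follow the `anywhere.eks.amazonaws.com/v1alpha1` API, but the names, counts, version, and registry endpoint are placeholder assumptions; the Install on Snow documentation has the authoritative schema:

```yaml
# Hypothetical minimal EKS-A cluster spec for the Snow provider.
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: edge-cluster            # placeholder name
spec:
  kubernetesVersion: "1.27"     # placeholder version
  controlPlaneConfiguration:
    count: 3
    machineGroupRef:
      kind: SnowMachineConfig
      name: edge-cluster-cp
  workerNodeGroupConfigurations:
    - count: 3
      machineGroupRef:
        kind: SnowMachineConfig
        name: edge-cluster-worker
  datacenterRef:
    kind: SnowDatacenterConfig
    name: edge-cluster
  clusterNetwork:
    cniConfig:
      cilium: {}
  registryMirrorConfiguration:
    endpoint: "192.168.1.100"   # local Harbor instance (placeholder address)
    port: "443"
```

Pointing `registryMirrorConfiguration` at the local Harbor instance is what keeps image pulls on-premises in a fully disconnected deployment.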

Direct Network Interfaces (DNIs) and EKS-A

When EKS-A deploys cluster nodes to AWS Snowball Edge devices, it creates and associates a DNI with each node. This allows pods within the nodes to use GENEVE encapsulation with the Cilium CNI, avoiding the need for NAT. While this is desirable, it limits the number of nodes per device due to the constraint of seven DNIs per physical interface (see Chapter 4).
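The resulting node ceiling is simple arithmetic. The sketch below assumes each node consumes exactly one DNI, as described above; the function name and the per-interface constant's application to a specific device model are illustrative, not an AWS-published sizing formula:

```python
DNIS_PER_PHYSICAL_INTERFACE = 7  # Snowball Edge limit noted above

def max_eksa_nodes(physical_interfaces: int, dnis_per_node: int = 1) -> int:
    """Upper bound on EKS-A nodes one device can host, given that each
    node draws DNIs from a fixed per-interface pool."""
    return (physical_interfaces * DNIS_PER_PHYSICAL_INTERFACE) // dnis_per_node

# A device with a single usable physical interface caps out at 7 nodes:
print(max_eksa_nodes(1))  # 7
```

If a workload needed two DNIs per node, the same pool would support only three nodes per interface, which is why DNI budgeting matters when sizing a single-device cluster.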
