While learning the Kubernetes Gateway API, I wanted to explore two of the options Cilium has for assigning an external IP to a service: Node IPAM, which lets you use the IP addresses of your nodes, and the more general L2 announcements option, where you specify which IP addresses Cilium should respond to on your local network.
I don’t intend to run things this way for long, but it gives me the flexibility to present services in a way I know will work while I sort out some larger IPv6 routing decisions with my ISP (namely, figuring out how to convince my router to handle multiple /64s, since I have a /56 delegated to me).
The example uses echoserver as the application and service, exposed through the Gateway API and HTTPRoutes. Each manifest is shown in full and assumes you save the contents to a file and apply it with kubectl apply -f <file>.
Without further ado, let’s walk through the configuration and what the results look like.
Let’s set up a basic echo app with Cilium’s Gateway API using both Node IPAM and L2!
1. Configure Cilium for L2
There are two items we need to set up with Cilium:
- Configure Cilium to do L2 announcements by setting l2announcements.enabled=true in Cilium’s config (see the upstream docs for full instructions on customizing Cilium); a hedged Helm example follows this list.
- Create a CiliumLoadBalancerIPPool and a CiliumL2AnnouncementPolicy object.
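For the first item, if you installed Cilium with Helm, a minimal sketch looks something like this (the release name, chart reference, and namespace are assumptions; match them to your own install):

# Enable L2 announcements on an existing Helm-managed Cilium install
helm upgrade cilium cilium/cilium \
  --namespace kube-system \
  --reuse-values \
  --set l2announcements.enabled=true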
The manifests below define which IPs are available and which interfaces on the nodes Cilium should respond to those IPs on.
---
apiVersion: "cilium.io/v2"
kind: CiliumLoadBalancerIPPool
metadata:
name: "example-l2"
spec:
blocks:
- cidr: "192.168.123.0/24"
---
apiVersion: "cilium.io/v2alpha1"
kind: CiliumL2AnnouncementPolicy
metadata:
name: example-l2-policy
spec:
interfaces:
# Advertise on enp* named interfaces
# Change this if your interfaces are named differently
- ^enp[0-9]s[0-9]f[0-9]+
externalIPs: true
loadBalancerIPs: true
Once that is applied, we have a pool of IP addresses associated with our L2 setup for the load balancer (Envoy, in our case) to use. Because we haven’t set any selectors (the upstream docs show examples), the pool and policy greedily apply to every LoadBalancer service by default.
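Before moving on, you can confirm both objects were accepted by listing them:

kubectl get ciliumloadbalancerippools
kubectl get ciliuml2announcementpolicies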
2. Create your Node IPAM configuration
To do this, we need to configure Cilium to allow Node IPAM by setting nodeIPAM.enabled=true in Cilium’s config (see the upstream docs for full instructions on customizing Cilium).
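If you manage Cilium with Helm, the equivalent sketch (same assumptions about release name and namespace as before) is:

# Enable Node IPAM for LoadBalancer services
helm upgrade cilium cilium/cilium \
  --namespace kube-system \
  --reuse-values \
  --set nodeIPAM.enabled=true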
To leverage this, we will need to make sure our LoadBalancer service is using the loadBalancerClass of io.cilium/node.
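For reference, this is what that field looks like on a standalone LoadBalancer Service (the name and selector here are hypothetical, purely to illustrate the field). In this walkthrough we never write such a Service by hand, because the CiliumGatewayClassConfig in step 4 sets the class on the Service that Cilium generates for the Gateway.

apiVersion: v1
kind: Service
metadata:
  name: example-node-lb          # hypothetical name, for illustration only
spec:
  type: LoadBalancer
  loadBalancerClass: io.cilium/node   # tells Cilium to use Node IPAM for this service
  selector:
    app: example                 # hypothetical selector
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP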
3. Create our deployments
Next, we create two Deployments of echoserver, one named for each gateway so they are easy to tell apart. The two manifests are identical apart from the names and labels.
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: echoserver-l2
namespace: default
spec:
replicas: 2
selector:
matchLabels:
app: echoserver-l2
template:
metadata:
labels:
app: echoserver-l2
spec:
containers:
- image: ealen/echo-server:latest
imagePullPolicy: IfNotPresent
name: echoserver
ports:
- containerPort: 80
env:
- name: PORT
value: "80"
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: echoserver-node
namespace: default
spec:
replicas: 2
selector:
matchLabels:
app: echoserver-node
template:
metadata:
labels:
app: echoserver-node
spec:
containers:
- image: ealen/echo-server:latest
imagePullPolicy: IfNotPresent
name: echoserver
ports:
- containerPort: 80
env:
- name: PORT
value: "80"
You can check out the pods with kubectl get pods at this point to ensure they are running.
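Output along these lines is what you want to see (the ReplicaSet hashes and ages here are made up):

kubectl get pods
NAME                              READY   STATUS    RESTARTS   AGE
echoserver-l2-7c9f6d4b8-abcde     1/1     Running   0          40s
echoserver-l2-7c9f6d4b8-fghij     1/1     Running   0          40s
echoserver-node-5d8c7b9f6-klmno   1/1     Running   0          40s
echoserver-node-5d8c7b9f6-pqrst   1/1     Running   0          40s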
4. Deploy our gateways
Our goal here is to create a gateway for each IPAM approach (Node vs. L2). Note that the Node IPAM approach requires creating a second GatewayClass and an associated CiliumGatewayClassConfig that sets the loadBalancerClass mentioned in step 2.
---
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
name: node-gateway-class
spec:
controllerName: io.cilium/gateway-controller
description: Non default gateway class to use node ipam.
parametersRef:
group: cilium.io
kind: CiliumGatewayClassConfig
name: node-gateway-config
namespace: default
---
apiVersion: cilium.io/v2alpha1
kind: CiliumGatewayClassConfig
metadata:
name: node-gateway-config
namespace: default
spec:
service:
loadBalancerClass: io.cilium/node
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
name: my-l2-gateway
spec:
# Use default gatewayClass
gatewayClassName: cilium
listeners:
- protocol: HTTP
port: 80
name: l2-gw
allowedRoutes:
namespaces:
from: Same
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
name: my-node-gateway
spec:
gatewayClassName: node-gateway-class
listeners:
- protocol: HTTP
port: 80
name: node-gw
allowedRoutes:
namespaces:
from: Same
To confirm they were created, you can see them with kubectl get gateway. If you have errors, you’ll want to start with the cilium-operator logs.
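Assuming a standard install where the operator runs as the cilium-operator Deployment in kube-system, something like this is a reasonable first stop:

kubectl -n kube-system logs deploy/cilium-operator | grep -i gateway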
5. Deploy the HTTPRoutes
Next, we add an HTTPRoute for each example deployment that ties the service defined in the next step to the gateway we just created.
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: http-l2-route
spec:
parentRefs:
- name: my-l2-gateway
namespace: default
rules:
- matches:
- path:
type: PathPrefix
value: /
backendRefs:
- name: l2-service
port: 80
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: http-node-route
spec:
parentRefs:
- name: my-node-gateway
namespace: default
rules:
- matches:
- path:
type: PathPrefix
value: /
backendRefs:
- name: node-service
port: 80
To confirm they were created, you can see them with kubectl get httproute. Again, if you have errors, start with the cilium-operator logs.
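Both routes should show up; the HOSTNAMES column is empty since we didn’t set any (ages here are illustrative):

kubectl get httproute
NAME              HOSTNAMES   AGE
http-l2-route                 1m
http-node-route               1m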
We should see one attached route on each gateway, and each gateway should have an external address. With Node IPAM, the gateway address lists the IPs of the available nodes.
kubectl get gateway -o yaml | grep ttach
- attachedRoutes: 1
- attachedRoutes: 1
kubectl get gateway
NAME              CLASS                ADDRESS         PROGRAMMED   AGE
my-l2-gateway     cilium               192.168.123.2   True         3m
my-node-gateway   node-gateway-class   192.168.124.3   True         3m
6. Deploy the services
Next, we attach our deployments to the routes (and through them, the gateways) using services; the labels and selectors are what tie the components together.
---
apiVersion: v1
kind: Service
metadata:
name: l2-service
labels:
app: echoserver-l2
service: l2-service
spec:
ports:
- port: 80
targetPort: 80
protocol: TCP
name: http
selector:
app: echoserver-l2
---
apiVersion: v1
kind: Service
metadata:
name: node-service
labels:
app: echoserver-node
service: node-service
spec:
ports:
- port: 80
targetPort: 80
protocol: TCP
name: http
selector:
app: echoserver-node
And as always you can double check they exist using kubectl get svc.
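Keep in mind that l2-service and node-service are plain ClusterIP services; the LoadBalancer services that actually hold the external IPs are the ones Cilium creates for each Gateway (typically named cilium-gateway-<gateway name>). Roughly, with made-up cluster IPs and node ports:

kubectl get svc
NAME                             TYPE           CLUSTER-IP   EXTERNAL-IP     PORT(S)        AGE
cilium-gateway-my-l2-gateway     LoadBalancer   10.96.x.x    192.168.123.2   80:31234/TCP   10m
cilium-gateway-my-node-gateway   LoadBalancer   10.96.x.x    192.168.124.3   80:31235/TCP   10m
l2-service                       ClusterIP      10.96.x.x    <none>          80/TCP         1m
node-service                     ClusterIP      10.96.x.x    <none>          80/TCP         1m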
7. Confirm connectivity
The last part uses jq for convenience: we grab each gateway’s external address from its status, then curl that IP to find out which deployment we’re talking to.
L2GATEWAY=$(kubectl get gateway my-l2-gateway -o json | jq -r '.status.addresses[0].value')
NODEGATEWAY=$(kubectl get gateway my-node-gateway -o json | jq -r '.status.addresses[0].value')
curl -v http://${L2GATEWAY}/ | jq '.environment.HOSTNAME'
curl -v http://${NODEGATEWAY}/ | jq '.environment.HOSTNAME'
If it’s working correctly, you’ll see the normal curl output followed by "echoserver-l2-<rest of the pod name>" or "echoserver-node-<rest of the pod name>".
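For example (using -s instead of -v to keep the output short, with the same made-up pod name suffixes as earlier):

curl -s http://${L2GATEWAY}/ | jq '.environment.HOSTNAME'
"echoserver-l2-7c9f6d4b8-abcde"
curl -s http://${NODEGATEWAY}/ | jq '.environment.HOSTNAME'
"echoserver-node-5d8c7b9f6-klmno"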
NOTE: As of Cilium 1.18, it’s not possible to enable access logs in Envoy, which makes it a little harder to notice that your HTTPRoute points at a service that doesn’t exist.