
Using IPv6 with Microk8s

Configuring dual-stack Microk8s is much easier than expected.

3 December 2024 Updated 3 December 2024

For a part of my application, which consists of several Docker-Compose projects, I needed access to external IPv6-only services. In previous posts I wrote that I had already moved part of my application from Docker to Microk8s (Kubernetes). This is the part that requires access to the external IPv6 services. The Docker application communicates with the Microk8s application using a NodePort or Ingress Controller.

My development machine is on an IPv4-only local network, and my internet connection is also IPv4-only. I certainly do not want to change my ISP connection from IPv4 to IPv6.

In this post we create an IPv6, in fact dual-stack, test environment in which an application running in Microk8s can access an IPv6 dummy service on a virtual machine created with VirtualBox.

As always I am doing this on Ubuntu 22.04 Desktop.

About IPv6 addresses

IPv6 is not very complex, and a good explanation can be found in 'IPv6 Explained for Beginners', see links below; very helpful.

In short, an IPv6 address consists of 128 bits. These are split into a network part and a node part, each 64 bits. Of the network part, the upper 48 bits are used for routing over the internet.

Here, we use the IPv6 documentation prefix, 2001:db8::/32. From the document "Understanding the IPv6 Documentation Prefix: 2001:db8::/32", see links below: This is a reserved range of IPv6 addresses set aside for use in documentation, examples, and educational materials. This prefix was chosen to ensure that any addresses using it are not accidentally routed on the internet, thus preventing conflicts with real-world IPv6 addresses.

This means the documentation prefix fixes the upper 32 bits of the network part. For our network, we append 'abcd:0012' to complete the 64-bit network part.

Example address of a node on our network:

2001:db8:abcd:12::1

Which is short for:

2001:0db8:abcd:0012:0000:0000:0000:0001

The prefix length serves the same purpose as the subnet mask in IPv4.
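
To verify such expansions yourself, Python's ipaddress module can print the full form (a quick sketch, assuming python3 is installed):

python3 -c "import ipaddress; print(ipaddress.ip_address('2001:db8:abcd:12::1').exploded)"

Result:

2001:0db8:abcd:0012:0000:0000:0000:0001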

Enabling and disabling IPv6 on Ubuntu

In Ubuntu we can enable and disable IPv6 settings using the command sysctl. To view the settings:

sysctl -a 2>/dev/null | grep disable_ipv6

A long time ago I disabled IPv6 on my development machine. I added the following lines:

net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
net.ipv6.conf.tun0.disable_ipv6 = 1

To the file:

/etc/sysctl.conf

Now I must comment these lines out again.
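
In /etc/sysctl.conf the lines then look like this:

#net.ipv6.conf.all.disable_ipv6 = 1
#net.ipv6.conf.default.disable_ipv6 = 1
#net.ipv6.conf.lo.disable_ipv6 = 1
#net.ipv6.conf.tun0.disable_ipv6 = 1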

We can also set these values temporarily, for example:

sudo sysctl -w net.ipv6.conf.all.disable_ipv6=0

To reload all settings again, we issue the command:

sudo sysctl --system

All 'disable_ipv6' values should be '0'.

Also make sure the file:

/etc/hosts

contains the following lines, where ::1 is the loopback address:

::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
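
A quick check that the loopback entry resolves:

getent hosts ip6-localhost

Result:

::1             ip6-localhost ip6-loopback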

Development machine and VirtualBox configuration

The development machine, Ubuntu Desktop, was already using IPv4; I just needed to add IPv6. We don't have an IPv6 gateway or DNS server, so we just put placeholder addresses there.

Using VirtualBox, we create an Ubuntu 22.04 server. We assign the following addresses to the development machine and the new virtual machine:

Ubuntu 22.04 Desktop

Address:  2001:db8:abcd:12::1
Prefix:   64
Gateway:  2001:db8:abcd:12::2
DNS:      Automatic
Routes:   Automatic

Ubuntu 22.04 Server (VirtualBox)

Address:  2001:db8:abcd:12::1:1
Prefix:   64
Gateway:  2001:db8:abcd:12::2
DNS:      Automatic
Routes:   Automatic
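
These values match the fields of the desktop network settings dialog. If you prefer the command line, NetworkManager's nmcli can set the same values (a sketch; the connection name 'Wired connection 1' is an assumption, list yours with 'nmcli con show'):

nmcli con mod "Wired connection 1" ipv6.method manual ipv6.addresses "2001:db8:abcd:12::1/64" ipv6.gateway "2001:db8:abcd:12::2"
nmcli con up "Wired connection 1"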

On our VirtualBox Ubuntu 22.04 server we use the 'Bridged Adapter' for networking. In Ubuntu 22.04 Server, we edit the network configuration in:

/etc/netplan/00-installer-config.yaml

Contents:

network:
  ethernets:
    enp0s3:
      dhcp4: no
      dhcp6: no
      addresses:
        - 192.168.1.26/24
        - 2001:db8:abcd:12::1:1/64
      nameservers:
        addresses:
        - 192.168.1.2
        - 2606:4700:4700::1111
      routes:
        - to: default
          via: 192.168.1.1
        - to: default
          via: "2001:db8:abcd:12::2"
  version: 2

Note that I included IPv4 settings here because I do not know whether the default SSH server settings include IPv6; this way we can at least log in via IPv4. When done editing, we apply the new configuration:

sudo netplan apply
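
Netplan can also test a configuration first and roll it back automatically if you lose connectivity and do not confirm:

sudo netplan try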

On both machines we can check the IPv6 addresses.

On the development machine:

ip a | grep inet6

Result:

    inet6 ::1/128 scope host 
    inet6 2001:db8:abcd:12::1/64 scope global noprefixroute 
    inet6 fe80::fe11:314e:9bb4:73c1/64 scope link noprefixroute 
    ...

On the virtual machine:

ip a | grep inet6

Result:

    inet6 ::1/128 scope host 
    inet6 2001:db8:abcd:12::1:1/64 scope global 
    inet6 fe80::a00:27ff:feda:7026/64 scope link 
    ...

Checking IPv6 on both machines and whether they can communicate

On both machines we can run:

ping6 ::1

Result:

64 bytes from ip6-localhost: icmp_seq=0 ttl=64 time=0,078 ms
64 bytes from ip6-localhost: icmp_seq=1 ttl=64 time=0,048 ms
64 bytes from ip6-localhost: icmp_seq=2 ttl=64 time=0,048 ms

On the development machine, we ping the virtual machine:

ping6 2001:db8:abcd:12::1:1

On the virtual machine, we ping the development machine:

ping6 2001:db8:abcd:12::1

Finally, we check if we can run a service on the virtual machine and see if we can access this service.

On the virtual machine, we start 'netcat', and bind it to the IPv6 address:

nc -l 2001:db8:abcd:12::1:1 3492

On the development machine, we start 'telnet', type 'hello', and check that it appears in the netcat listener:

telnet -6 2001:db8:abcd:12::1:1 3492

Result:

Trying 2001:db8:abcd:12::1:1...
Connected to 2001:db8:abcd:12::1:1.
Escape character is '^]'.
hello
^]
telnet> quit
Connection closed.

Working, nice.

Installing Microk8s with dual-stack networking

This turned out to be much easier than expected. I followed the instructions from the post 'How to configure network Dual-stack', see links below.

On the development machine, we first create a file:

/var/snap/microk8s/common/.microk8s.yaml

with the following contents:

---
version: 0.1.0
extraCNIEnv:
  IPv4_SUPPORT: true
  IPv4_CLUSTER_CIDR: 10.3.0.0/16
  IPv4_SERVICE_CIDR: 10.153.183.0/24
  IPv6_SUPPORT: true
  IPv6_CLUSTER_CIDR: fd02::/64
  IPv6_SERVICE_CIDR: fd99::/108
extraSANs:
  - 10.153.183.1
addons:
  - name: dns

Note(s):

  • We enable 'dns' here. If you do not enable it here, you must do it after installing Microk8s!
  • We assign Unique Local Addresses (ULAs) here; they cannot be routed on the public internet.

Then we install Microk8s:

sudo snap install microk8s --classic
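
Wait until the cluster reports that it is ready:

microk8s status --wait-ready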

Check if the pods get an IPv4 and an IPv6 address assigned, for example by running:

microk8s kubectl -n kube-system describe pod

It should show something like this for each pod there:

...
IPs:
  IP:           10.3.105.121
  IP:           fd02::332e:7684:ac2:6979
...
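
To list both IPs for all pods at once, a jsonpath query can help (a sketch):

microk8s kubectl get pods -A -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.podIPs[*].ip}{"\n"}{end}'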

Now we run BusyBox in Microk8s:

kubectl run -i -t busybox --image=busybox --restart=Never

Once it is running, we can check whether an IPv6 address was assigned:

cat /etc/hosts

Result:

# Kubernetes-managed hosts file.
127.0.0.1	localhost
::1	localhost ip6-localhost ip6-loopback
fe00::0	ip6-localnet
fe00::0	ip6-mcastprefix
fe00::1	ip6-allnodes
fe00::2	ip6-allrouters
10.3.105.122	busybox
fd02::332e:7684:ac2:697a	busybox
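
We can also inspect the pod's network interface directly from the BusyBox shell; 'eth0' is the usual interface name in Calico pods:

ip addr show dev eth0 | grep inet6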

IPv6 tests with Microk8s

On the development machine, start a 'netcat' listener:

nc -l 2001:db8:abcd:12::1 3491

Then in the BusyBox terminal, we try to connect, and type a message:

telnet 2001:db8:abcd:12::1 3491

Result:

Connected to 2001:db8:abcd:12::1
hello
^]

Console escape. Commands are:

 l	go to line mode
 c	go to character mode
 z	suspend telnet
 e	exit telnet
e

Now start the 'netcat' listener on the virtual machine:

nc -l 2001:db8:abcd:12::1:1 3492

Again, in BusyBox on the development machine we try to connect:

telnet 2001:db8:abcd:12::1:1 3492

It does not connect! This was to be expected: outgoing IPv6 traffic from the pods is not NATed by default.

From the post 'Enabling IPv6 for SC4SNMP', see links below: The default CNI used for Microk8s is Calico. For pods to be able to reach the internet over IPv6, you need to enable the natOutgoing parameter in the IPv6 IP pool configuration of Calico. To set it, create a yaml file with the following content:

# calico-ippool.yaml
---
apiVersion: crd.projectcalico.org/v1
kind: IPPool
metadata:
  name: default-ipv6-ippool
spec:
  natOutgoing: true

To activate:

microk8s kubectl apply -f calico-ippool.yaml

Stop and start Microk8s:

microk8s stop
microk8s start
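
To confirm that the new setting was picked up, we can read it back (a quick jsonpath check):

microk8s kubectl get ippool default-ipv6-ippool -o jsonpath='{.spec.natOutgoing}'

Result:

true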

Now remove and restart the BusyBox pod:

kubectl delete pod busybox
kubectl run -i -t busybox --image=busybox --restart=Never

Again, in BusyBox on the development machine, we try to connect to the listener on the virtual machine and type a message:

telnet 2001:db8:abcd:12::1:1 3492

Result:

Connected to 2001:db8:abcd:12::1:1
hello
^]

Console escape. Commands are:

 l	go to line mode
 c	go to character mode
 z	suspend telnet
 e	exit telnet
e

Great, now it's working!

More about Calico and IP pools

Calico (Project Calico Documentation) is an open source networking and network security solution for containers, virtual machines, and native host-based workloads.

From 'Microk8s - Configure Calico', see links below: For most of the cases [...], the way to change Calico configuration is to patch the deployed cni.yaml and then re-apply it to the cluster.

To check the Calico API resources:

kubectl api-resources | grep 'projectcalico.org'

Result:

NAME                                SHORTNAMES   APIVERSION                        NAMESPACED   KIND
bgpconfigurations                                crd.projectcalico.org/v1          false        BGPConfiguration
bgppeers                                         crd.projectcalico.org/v1          false        BGPPeer
blockaffinities                                  crd.projectcalico.org/v1          false        BlockAffinity
caliconodestatuses                               crd.projectcalico.org/v1          false        CalicoNodeStatus
clusterinformations                              crd.projectcalico.org/v1          false        ClusterInformation
felixconfigurations                              crd.projectcalico.org/v1          false        FelixConfiguration
globalnetworkpolicies                            crd.projectcalico.org/v1          false        GlobalNetworkPolicy
globalnetworksets                                crd.projectcalico.org/v1          false        GlobalNetworkSet
hostendpoints                                    crd.projectcalico.org/v1          false        HostEndpoint
ipamblocks                                       crd.projectcalico.org/v1          false        IPAMBlock
ipamconfigs                                      crd.projectcalico.org/v1          false        IPAMConfig
ipamhandles                                      crd.projectcalico.org/v1          false        IPAMHandle
ippools                                          crd.projectcalico.org/v1          false        IPPool
ipreservations                                   crd.projectcalico.org/v1          false        IPReservation
kubecontrollersconfigurations                    crd.projectcalico.org/v1          false        KubeControllersConfiguration
networkpolicies                                  crd.projectcalico.org/v1          true         NetworkPolicy
networksets                                      crd.projectcalico.org/v1          true         NetworkSet

To get the IP pools:

kubectl get ippools

Result:

NAME                  AGE
default-ipv4-ippool   11d
default-ipv6-ippool   11d

To describe the IP pools:

kubectl describe IPPool

Result:

Name:         default-ipv4-ippool
Namespace:    
Labels:       <none>
Annotations:  projectcalico.org/metadata: {"uid":"421fa69c-6657-4bd3-8bba-a06681c33c82","creationTimestamp":"2024-11-21T14:11:25Z"}
API Version:  crd.projectcalico.org/v1
Kind:         IPPool
Metadata:
  Creation Timestamp:  2024-11-21T14:11:25Z
  Generation:          1
  Resource Version:    108882
  UID:                 50e3ccfc-627d-42f8-aaa3-a61e6553f8ed
Spec:
  Allowed Uses:
    Workload
    Tunnel
  Block Size:     26
  Cidr:           10.3.0.0/16
  Ipip Mode:      Never
  Nat Outgoing:   true
  Node Selector:  all()
  Vxlan Mode:     Always
Events:          <none>

Name:         default-ipv6-ippool
Namespace:    
Labels:       <none>
Annotations:  projectcalico.org/metadata: {"uid":"8377dd46-681d-4dbd-88a4-86e04a1d52c3","creationTimestamp":"2024-11-21T14:11:25Z"}
API Version:  crd.projectcalico.org/v1
Kind:         IPPool
Metadata:
  Creation Timestamp:  2024-11-21T14:11:25Z
  Generation:          2
  Resource Version:    1506748
  UID:                 49501699-831f-4317-9c9f-2d2b15168547
Spec:
  Allowed Uses:
    Workload
    Tunnel
  Block Size:     122
  Cidr:           fd02::/64
  Ipip Mode:      Never
  Nat Outgoing:   true
  Node Selector:  all()
  Vxlan Mode:     Always
Events:          <none>

Note that 'Nat Outgoing' is 'true' for both the 'default-ipv4-ippool' and the 'default-ipv6-ippool'.
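
As an alternative to applying a YAML file, the same setting can be toggled with a one-line merge patch (a sketch):

microk8s kubectl patch ippool default-ipv6-ippool --type=merge -p '{"spec":{"natOutgoing":true}}'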

We can also use calicoctl. But first we must install it.

To get the current version of Calico installed in Microk8s:

kubectl -n kube-system describe pod calico-kube-controllers-759cd8b574-wscp9 | grep Image

Result:

    Image:          docker.io/calico/kube-controllers:v3.25.1
    Image ID:       docker.io/calico/kube-controllers@sha256:02c1232ee4b8c5a145c401ac1adb34a63ee7fc46b70b6ad0a4e068a774f25f8a

This means the current version of Calico is:

v3.25.1
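
The pod name used above contains a random suffix. To avoid looking it up, the version can also be read from the deployment (a sketch, assuming the deployment is named 'calico-kube-controllers'):

microk8s kubectl -n kube-system get deploy calico-kube-controllers -o jsonpath='{.spec.template.spec.containers[0].image}' | awk -F: '{print $NF}'

Result:

v3.25.1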

To get the calicoctl manifest:

wget https://raw.githubusercontent.com/projectcalico/calico/v3.25.1/manifests/calicoctl.yaml

To activate:

kubectl apply -f calicoctl.yaml

To check that it is running:

kubectl get pods -A

Result:

NAMESPACE     NAME                                       READY   STATUS    RESTARTS        AGE
...
kube-system   calicoctl                                  1/1     Running   0               2m17s
...

To get the Calico profiles:

kubectl exec -ti -n kube-system calicoctl -- /calicoctl get profiles -o wide

Result:

NAME                                                          LABELS                                                                                                                                             
projectcalico-default-allow                                                                                                                                                                                      
kns.default                                                   pcns.kubernetes.io/metadata.name=default,pcns.projectcalico.org/name=default                                                                       
kns.ingress                                                   pcns.kubernetes.io/metadata.name=ingress,pcns.projectcalico.org/name=ingress                                                                       
kns.kube-node-lease                                           pcns.kubernetes.io/metadata.name=kube-node-lease,pcns.projectcalico.org/name=kube-node-lease     
...

To create a calicoctl alias:

alias calicoctl="kubectl exec -i -n kube-system calicoctl -- /calicoctl"

Now we can run the above command as follows:

calicoctl get profiles -o wide

We can also use calicoctl to get the IP pools:

calicoctl get ippools

Result:

NAME                  CIDR          SELECTOR   
default-ipv4-ippool   10.3.0.0/16   all()      
default-ipv6-ippool   fd02::/64     all()      

To show more:

calicoctl get IPPool default-ipv6-ippool -o wide

Result:

NAME                  CIDR        NAT    IPIPMODE   VXLANMODE   DISABLED   DISABLEBGPEXPORT   SELECTOR   
default-ipv6-ippool   fd02::/64   true   Never      Always      false      false              all()      

We can delete and create an IP pool. Example (the pool name here is arbitrary):

calicoctl delete ippool <ippool-to-delete>

cat <<EOF | calicoctl create -f -
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: my-new-ipv6-ippool  # arbitrary pool name
spec:
  cidr: fd0e:c226:9228:fd1a::/64
EOF

Or, replace an existing IP pool. Example:

calicoctl replace -f - << EOF
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-ipv6-ippool
spec:
  blockSize: 122
  cidr: fd01::/64
  ipipMode: Never
  nodeSelector: all()
  vxlanMode: Never
  natOutgoing: true
EOF

Summary

We created an IPv6 testing environment for Microk8s applications that must access external IPv6 services. Using VirtualBox we created an Ubuntu server virtual machine and enabled IPv6 on both the development machine and the virtual machine. Then we installed dual-stack Microk8s on the development machine and enabled outgoing IPv6 traffic. Now we can 'telnet' from inside Microk8s on the development machine to an IPv6 service running on the virtual machine.

Links / credits

Building a multi-node IPv6-enabled bare-metal kubernetes cluster with microk8s, metallb, longhorn and VyOS
https://blog.mowoe.com/building-a-multi-node-ipv6-enabled-bare-metal-kubernetes-cluster-with-microk8s-metallb-longhorn-and-vyos.html

calicoctl user reference
https://docs.tigera.io/calico/latest/reference/calicoctl/overview

How to configure network Dual-stack
https://discuss.kubernetes.io/t/how-to-configure-network-dual-stack/24784

IPv6 Explained for Beginners
http://www.steves-internet-guide.com/ipv6-guide

IPv6 masquerading for egress on microk8s on EC2
https://www.checklyhq.com/blog/ipv6-masquerading-for-egress-on-microk8s-on-ec2

Kubernetes - IPv4/IPv6 dual-stack
https://kubernetes.io/docs/concepts/services-networking/dual-stack

Microk8s - Configure Calico
https://microk8s.io/docs/change-cidr

Understanding the IPv6 Documentation Prefix: 2001:db8::/32
https://ipv6.net/blog/understanding-the-ipv6-documentation-prefix-2001db8-32
