mirror of https://github.com/k3s-io/k3s
With IPv4, the node CIDR prefix is set to /24, which gives 256 pods per node and 256 nodes, when assuming a /16 is used for the pod subnet.

For IPv6, the node CIDR prefix is hard-coded to /64. This does not work, because currently the pod subnet prefix must be /66 or higher, and must be a larger subnet (lower value) than the node CIDR prefix. In addition, the bit mask used to track the subnets (and hence the number of nodes) can only handle 64K entries, so the difference between the pod subnet prefix and the node CIDR prefix cannot be more than 16 bits. The node CIDR value needs to respect this restriction.

To address this, the following algorithm is proposed...

For pod subnet prefixes of /111 or smaller, the remaining bits will be used for the node CIDR, with a multiple of 8 bits allocated to pods per node and the remaining 9-16 bits reserved for the nodes, so that there are 512-64K nodes and 256, 64K, ... pods/node. For example, with a pod network of /111, there will be 17 bits available. This gives 8 bits for pods per node and 9 bits for nodes, so the node CIDR will be /120. For a pod network of /104, there will be 24 bits available. This gives 8 bits for pods per node and 16 bits for nodes, using a /120 node CIDR.

If the pod subnet prefix is /112, then the node CIDR will be set to /120, and 256 nodes and 256 pods/node will be available.

If the pod subnet prefix is /113 to /128, we don't have enough bits, and the node CIDR prefix will be set to the same value as the pod subnet prefix. This will cause a failure later, when validation checks that the pod subnet prefix is a larger subnet (lower value) than the node CIDR prefix.
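As a concrete illustration, here is a minimal Go sketch of the calculation described above. The function name `calcNodeCidrSize`, its string return type, and the example subnets in `main` are assumptions for illustration; only the prefix arithmetic follows the algorithm as stated.

```go
package main

import (
	"fmt"
	"net"
	"strconv"
)

// calcNodeCidrSize returns the node CIDR prefix length for a given pod
// subnet. IPv4 keeps the existing behavior (/24, assuming a /16 pod subnet);
// IPv6 follows the algorithm described above.
func calcNodeCidrSize(podSubnet string) string {
	maskSize := "24" // IPv4 default
	if ip, podCidr, err := net.ParseCIDR(podSubnet); err == nil {
		if ip.To4() == nil { // IPv6
			var nodeCidrSize int
			podNetSize, totalBits := podCidr.Mask.Size() // totalBits is 128
			switch {
			case podNetSize == 112:
				// Special case: 256 nodes and 256 pods/node.
				nodeCidrSize = 120
			case podNetSize < 112:
				// Give pods per node a multiple of 8 bits, leaving
				// 9-16 bits (512-64K) for the nodes.
				nodeCidrSize = totalBits - ((totalBits-podNetSize-1)/8-1)*8
			default:
				// /113-/128: not enough bits. Echo the pod subnet
				// prefix so that validation fails later.
				nodeCidrSize = podNetSize
			}
			maskSize = strconv.Itoa(nodeCidrSize)
		}
	}
	return maskSize
}

func main() {
	subnets := []string{"10.96.0.0/16", "fd00::/111", "fd00::/104", "fd00::/112", "fd00::/116"}
	for _, subnet := range subnets {
		fmt.Printf("%-14s -> node CIDR /%s\n", subnet, calcNodeCidrSize(subnet))
	}
}
```

Running the sketch prints /120 for the /111, /104, and /112 pod subnets (9, 16, and 8 node bits respectively), and echoes /116 for the /116 pod subnet, which the later prefix-size validation would reject.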