Kubernetes
Node

Updated at 2021-09-14 17:43

A "node" is a worker machine in Kubernetes.

Each node runs the services needed to host "pods": a container runtime such as Docker, the kubelet and kube-proxy.

Nodes are not created by Kubernetes, unlike pods and services. Kubernetes only maintains a representation of each node and performs health checks on it, identifying the node by metadata.name.

{
  "kind": "Node",
  "apiVersion": "v1",
  "metadata": {
    "name": "10.240.79.157",
    "labels": {
      "name": "my-first-k8s-node"
    }
  }
}

Nodes can register themselves automatically (the kubelet self-registers with the API server) or be created manually. To disable self-registration, start the kubelet with the flag --register-node=false.
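
If you manage nodes manually, you create the node object yourself. A minimal sketch, assuming the JSON representation above is saved as node.json:

kubectl create -f node.json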

You can mark a node as unschedulable: new pods won't be scheduled onto it, but existing pods stay.

# mark the node as unschedulable
kubectl cordon $NODENAME
# mark the node as unschedulable and evict current pods
kubectl drain $NODENAME
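
To make the node schedulable again:

kubectl uncordon $NODENAME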

Conditions

Conditions describe the status of a running node.

  • ConfigOK: kubelet is correctly configured
  • Ready: ready to accept pods
  • OutOfDisk: out of free space for adding new pods
  • MemoryPressure: node memory is low
  • DiskPressure: node disk capacity is low
  • NetworkUnavailable: network is not correctly configured
"conditions": [
  {
  "type": "Ready",
  "status": "True"
  }
]
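
You can inspect a node's conditions directly; a quick sketch, assuming $NODENAME is set to an existing node name:

# full human-readable status, including conditions
kubectl describe node $NODENAME
# just the Ready condition status
kubectl get node $NODENAME -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'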

The Ready condition can also be Unknown if the node controller hasn't heard from the node in 40 seconds, configurable with --node-monitor-grace-period.

A Ready condition with Unknown or False status triggers the pod eviction timeout: pods on the node are scheduled for deletion. The default timeout is 5 minutes, configurable with --pod-eviction-timeout.

Sometimes you need to delete nodes manually. If Kubernetes cannot deduce whether a node has permanently left the cluster, the cluster administrator may need to delete the node object by hand. This frees up all pod names bound to the node.
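
Deleting a node is a single command:

kubectl delete node $NODENAME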

Kubernetes 1.8 introduced taints to replace conditions. If the feature is enabled, the scheduler ignores conditions and instead compares node taints to pod tolerations.
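
When conditions are represented as taints, a pod can tolerate them instead of being evicted immediately. A minimal sketch of a toleration in a pod spec, using the not-ready taint added by the node controller:

tolerations:
  - key: "node.kubernetes.io/not-ready"
    operator: "Exists"
    effect: "NoExecute"
    # without this, the pod is evicted as soon as the taint appears;
    # with it, the pod tolerates the taint for 300 seconds
    tolerationSeconds: 300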

Capacity

Node capacity is the amount of CPU and memory available on the node. Nodes normally report their capacity when self-registering, but capacity can also be set manually.
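
A quick way to check what a node reports, assuming $NODENAME is set to an existing node name:

kubectl get node $NODENAME -o jsonpath='{.status.capacity}'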

The Kubernetes scheduler ensures there are enough resources for all pods on a node: the sum of the requested container resources never exceeds the node capacity.

The whole node can go out-of-memory, causing it to become unresponsive. This can happen if workload pods have a memory limit larger than their memory request, or no limit at all. The kubelet checks for OOM only every few seconds, so a sudden surge in memory usage can starve the node of memory if the workloads lack proper memory limits.
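
One way to guard against this is to set the memory limit equal to the memory request, so a container can never surge past what the scheduler reserved for it. A minimal sketch, with hypothetical pod and image names:

apiVersion: v1
kind: Pod
metadata:
  name: memory-bounded  # hypothetical name
spec:
  containers:
    - name: app
      image: nginx  # placeholder image
      resources:
        requests:
          memory: 256Mi
        limits:
          memory: 256Mi  # limit == request, no overcommit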

You can use placeholder pods to reserve resources for non-pod processes.

apiVersion: v1
kind: Pod
metadata:
  name: resource-reserver
spec:
  containers:
    - name: sleep-forever
      image: k8s.gcr.io/pause:0.8.0
      resources:
        requests:
          cpu: 100m
          memory: 100Mi
