Updated at 2018-12-31 00:56

Kubernetes networking in a nutshell:

  1. Every Kubernetes pod receives an IP address.
  2. kube-dns resolves Kubernetes service DNS names to IP addresses.
  3. kube-proxy sets up iptables rules in order to do random load balancing.
For example, when you make a request to a service:

  1. you make a request to the service's DNS name
  2. kube-dns resolves it to the service's cluster IP
  3. iptables rules on your local host redirect it to one of the backing pod IPs at random
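The iptables rules kube-proxy installs look roughly like this sketch (the service IP, pod IPs, and chain names here are made up for illustration; real kube-proxy generates hashed chain names):

```shell
# hypothetical service: cluster IP 10.96.0.10:80 with two pod backends
iptables -t nat -A KUBE-SERVICES -d 10.96.0.10/32 -p tcp --dport 80 -j KUBE-SVC-EXAMPLE
# pick a backend at random: 50% chance of the first, otherwise fall through
iptables -t nat -A KUBE-SVC-EXAMPLE -m statistic --mode random --probability 0.5 -j KUBE-SEP-1
iptables -t nat -A KUBE-SVC-EXAMPLE -j KUBE-SEP-2
# each endpoint chain DNATs the connection to one pod IP
iptables -t nat -A KUBE-SEP-1 -p tcp -j DNAT --to-destination 10.4.4.5:8080
iptables -t nat -A KUBE-SEP-2 -p tcp -j DNAT --to-destination 10.4.5.6:8080
```

The `statistic` match is what makes the load balancing random: each new connection rolls the dice at rule-evaluation time.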

Kubernetes requires that each pod has its own IP address. So contrary to the usual "one IP, multiple ports" model, Kubernetes takes a "multiple IP addresses, one port" approach.

An AWS instance has its own IP address.
You want a container on that instance to have an IP in, say, 10.4.4.*.
You add a VPC route table rule saying that 10.4.4.* packets go to that instance.
(This works for up to 50 instances, as the VPC route table rule limit is 50.)
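With the AWS CLI, such a rule could be added like this (the route table ID and instance ID are placeholders):

```shell
# send all traffic for the container subnet 10.4.4.0/24 to one instance
# rtb-XXXX and i-XXXX are placeholder IDs
aws ec2 create-route \
  --route-table-id rtb-XXXX \
  --destination-cidr-block 10.4.4.0/24 \
  --instance-id i-XXXX
```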

Prefer to automate the network setup. There are a couple of tools that help automate container networking, but their plain route-table modes are limited to communication between instances in the same availability zone.

  1. Flannel: vxlan encapsulation or host-gw (just set route tables)
  2. Calico: ip-in-ip encapsulation or regular mode (just set route tables)
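For example, Flannel reads its configuration from etcd under a well-known key; a minimal sketch of selecting the vxlan backend (the 10.4.0.0/16 range is an arbitrary choice for illustration):

```shell
# store flannel's network config in etcd (v2 API);
# flannel carves per-host subnets out of 10.4.0.0/16
etcdctl set /coreos.com/network/config \
  '{ "Network": "10.4.0.0/16", "Backend": { "Type": "vxlan" } }'
```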

Networking software often relies heavily on the Linux kernel. It's worth your while to get familiar with low-level networking on Linux, especially sysctl configuration, so you can efficiently debug and fix problems.
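Two sysctl settings that commonly matter for container networking (a sketch; the exact set depends on your setup):

```shell
# allow the host to forward packets between interfaces (pods <-> world)
sysctl -w net.ipv4.ip_forward=1
# make bridged traffic traverse iptables, so kube-proxy rules apply to it
sysctl -w net.bridge.bridge-nf-call-iptables=1
```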

Use MAC addresses if the source is in the same network as the destination. LANs and AWS availability zones deliver traffic by MAC address, not IP address, so you can "ignore" the actual IP and just use the MAC address to send packets.

ip route add 10.4.4.0/24 via <instance-ip> dev eth0
# now all traffic to 10.4.4.* is sent to the MAC address of <instance-ip>

Use the destination host's instance IP if the source and destination are in separate AZs. Here you have to encapsulate the network packet inside another network packet, e.g. with vxlan or ip-in-ip.

vxlan: encapsulates the packet, including MAC address, inside a UDP packet
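A minimal vxlan interface can be created with iproute2 (the VNI 42 and the address are arbitrary choices for illustration):

```shell
# create a vxlan device that encapsulates over eth0 using UDP port 4789
ip link add vxlan0 type vxlan id 42 dev eth0 dstport 4789
ip addr add 10.4.9.1/24 dev vxlan0
ip link set vxlan0 up
```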

ip-in-ip: adds an extra IP header on the packet, so it won't keep the MAC
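The two encapsulations cost different amounts of per-packet overhead, which matters when choosing an MTU; a quick check of the arithmetic:

```shell
# ip-in-ip only adds one extra IPv4 header
overhead_ipip=20
# vxlan carries the inner Ethernet frame header (14) inside VXLAN (8) + UDP (8) + outer IP (20)
overhead_vxlan=$((14 + 8 + 8 + 20))
echo "$overhead_ipip $overhead_vxlan"                          # 20 50
# usable inner MTU on a standard 1500-byte link
echo "$((1500 - overhead_ipip)) $((1500 - overhead_vxlan))"   # 1480 1450
```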

# set up a new network interface
# (<remote-ip>, <local-ip> and <tunnel-ip> are placeholders for real addresses)
ip tunnel add mytun mode ipip remote <remote-ip> local <local-ip> ttl 255
ifconfig mytun <tunnel-ip>

# route packets to the magic interface
route add -net 10.4.4.0/24 dev mytun
route -n   # list the routes

You usually configure the Linux route tables in one of the following ways:

  1. a program reads the routes from the Kubernetes cluster's etcd and adds them.
  2. a program learns the routes via BGP peering between nodes.

Major changes to production networking infrastructure are dangerous. Configure it the right way from the start. If you ever need to make major changes, be sure to have a couple of people who know the infrastructure inside and out.