Google’s internal experience goes open source
In June 2014 Google announced the release of Kubernetes, a container orchestration system inspired by Borg, the internal infrastructure Google has used for over a decade to manage its workloads at planetary scale. The name comes from Greek and means “helmsman” — a nautical reference reflected in the logo, a seven-spoked ship’s wheel.
The goal is ambitious: to provide an abstraction layer above containers that allows operators to declare the desired state of a distributed application and let the system reach and maintain that state over time.
Kubernetes primitives
Kubernetes organises infrastructure around a set of declarative primitives:
- Pod: the smallest deployable unit, composed of one or more containers sharing network and storage. A pod represents a single logical process of the application
- Service: a stable network abstraction that exposes a group of pods behind a stable virtual IP address (ClusterIP) and DNS name, regardless of which individual pods are running at any given moment
- ReplicaSet: ensures that a specified number of pod replicas are always running. If a pod is terminated, the ReplicaSet creates a new one
- Deployment: manages the lifecycle of ReplicaSets, enabling gradual updates (rolling updates) and rollbacks to previous versions
These primitives compose together: a Deployment creates a ReplicaSet, which maintains Pods, which are exposed through a Service. The desired state is declared in YAML files and submitted to the cluster’s API server.
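As a sketch, a minimal pair of manifests might look like this (the names `web` and `web-svc`, and the `nginx` image, are illustrative choices, not from the original text):

```yaml
# Deployment: declares the desired state — three replicas of a web pod.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        ports:
        - containerPort: 80
---
# Service: exposes the pods selected by the label app=web on a stable port.
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 80
```

Submitting both objects with `kubectl apply -f manifest.yaml` lets the Deployment create a ReplicaSet that maintains three pods, while the Service routes traffic to whichever of them are currently healthy.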
Declarative model and self-healing
The fundamental difference from traditional deployment scripts is the shift from an imperative model to a declarative one. The operator does not tell the cluster “start three instances”; they declare “I want three instances running”. Kubernetes’ controller manager continuously observes the current state of the cluster, compares it with the desired state and takes action to bridge the gap.
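The control loop behind this model can be sketched in a few lines of Python. This is a toy illustration of the reconciliation idea, not Kubernetes' actual implementation; the function and pod names are invented for the example:

```python
def reconcile(desired_replicas, current_pods):
    """One pass of a toy reconciliation loop: compare the desired state
    with the observed state and return the actions needed to converge."""
    actions = []
    diff = desired_replicas - len(current_pods)
    if diff > 0:
        # Too few pods running: schedule new ones.
        actions = [("create", f"pod-{i}") for i in range(diff)]
    elif diff < 0:
        # Too many pods running: terminate the surplus.
        actions = [("delete", name) for name in current_pods[diff:]]
    return actions

# The operator declares "I want three instances"; the loop works out how.
print(reconcile(3, ["pod-a"]))             # two missing -> two create actions
print(reconcile(3, ["a", "b", "c", "d"]))  # one surplus -> one delete action
```

In the real system this comparison runs continuously, so the cluster converges back to the declared state after any disturbance rather than only at deploy time.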
If a cluster node becomes unreachable, Kubernetes reschedules its pods onto healthy nodes. If a container terminates unexpectedly, it is restarted. This self-healing mechanism reduces the need for manual intervention and makes the system resilient to partial failures.
Built-in service discovery allows pods to find each other via internal DNS, eliminating static endpoint configuration. The scheduler assigns pods to nodes based on available resources, affinity constraints and distribution policies.
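Concretely, a client pod can address a Service by name alone. Assuming a Service called `web-svc` in the `default` namespace (an illustrative name, not from the original text), a sketch of such a client looks like:

```yaml
# Illustrative client pod: reaches the service by its DNS name,
# with no hard-coded IP addresses anywhere in the spec.
apiVersion: v1
kind: Pod
metadata:
  name: client
spec:
  containers:
  - name: curl
    image: curlimages/curl:8.5.0
    # The short name resolves within the same namespace; the fully
    # qualified form is web-svc.default.svc.cluster.local.
    command: ["curl", "http://web-svc:80"]
```

Because resolution happens through the cluster's internal DNS, the Service can be backed by entirely different pods over time without any change to its clients.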
Link: kubernetes.io
