Certificates from First Principles
Before we talk about certificates, we need to understand what world we live in without them. The internet is, by default, a postcard system. Every packet you send travels through dozens of routers, switches, and cables owned by people you don’t know and have no reason to trust. Your ISP can read your traffic. The coffee-shop WiFi operator can read your traffic. A government with a fiber tap can read your traffic. This isn’t paranoia — it’s physics. Data moves through shared infrastructure in plaintext unless you explicitly make it not.
In 2013, the Snowden disclosures revealed that the NSA’s MUSCULAR program was tapping the unencrypted links between Google’s data centers. Google engineers were furious — and immediately encrypted all inter-datacenter traffic. The threat model isn’t hypothetical.

This gives us two problems that certificates ultimately exist to solve:
Confidentiality. How do you prevent eavesdroppers from reading your data? You need encryption — a way to scramble the message so only the intended recipient can unscramble it.
Authentication. Even if you encrypt, how do you know you’re talking to the right server? An attacker could intercept your connection, present themselves as your bank, and you’d happily encrypt your password and send it straight to them. This is a man-in-the-middle attack, and encryption alone doesn’t prevent it. You need a way for the server to prove its identity.
Authentication is the harder problem. Encryption is “just math” — well-understood algorithms. But authentication requires trust infrastructure: someone, somewhere, needs to vouch for identities. That infrastructure is what we call PKI (Public Key Infrastructure), and certificates are its documents.

Certificates don’t do the encryption themselves. They solve the authentication problem — they let your browser verify that the public key it just received actually belongs to google.com and not to someone pretending to be Google. But to understand certificates, you need to understand the cryptographic primitives they’re built on. So let’s build up from nothing.
In 2010, a Firefox extension called Firesheep made the lack of HTTPS viscerally real. It let anyone on a coffee-shop WiFi click a button and hijack other people’s Facebook and Twitter sessions — no hacking skills required. Over a million people downloaded it in its first week. The resulting panic was one of the catalysts that pushed major sites to adopt HTTPS by default.
The oldest and most intuitive form of encryption: both parties share the same secret key. The sender uses the key to encrypt, the receiver uses the same key to decrypt. A lockbox with two identical keys.
Modern symmetric ciphers like AES-256-GCM are extraordinarily fast — billions of operations per second on CPUs with hardware AES-NI instructions. They’re also, as far as we know, unbreakable when used correctly. AES-256 has a keyspace of 2^256 possible keys. If every atom in the observable universe were a computer trying a billion keys per second, it would take longer than the age of the universe to try them all.
Why GCM? AES is a block cipher — it encrypts 16 bytes at a time. You need a mode of operation to handle messages longer than 16 bytes. GCM (Galois/Counter Mode) is an AEAD (Authenticated Encryption with Associated Data): it encrypts and authenticates in one pass. If anyone tampers with even a single bit of the ciphertext, decryption fails entirely rather than producing corrupted plaintext. Older modes like CBC don’t authenticate, which led to real-world attacks like BEAST and Lucky Thirteen.

But symmetric encryption has a devastating bootstrapping problem: how do you get the shared key to both parties in the first place? If Alice wants to talk securely to Bob, she needs to somehow transmit the key to him. But if she sends it over the network, an eavesdropper captures it. If she could already communicate securely with Bob, she wouldn’t need the key. It’s circular.
For centuries, this meant encryption required physical key exchange — diplomatic couriers, codebooks distributed in person, sealed envelopes. That’s fine for embassies. It doesn’t work when you want to buy something from a website you’ve never visited before.
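The sidebar above describes authenticated encryption abstractly; here is a deliberately toy sketch of the encrypt-then-authenticate idea in stdlib Python. The XOR keystream built from SHA-256 is an illustration only, not a real cipher; production code should use a vetted AEAD such as AES-GCM.

```python
import hashlib
import hmac
import os

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy keystream: SHA-256 in counter mode. Illustration only; real
    # systems use AES-GCM or ChaCha20-Poly1305, never a homemade cipher.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()  # authenticate
    return nonce + ct + tag

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    expected = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("ciphertext tampered with")  # AEAD-style hard failure
    return bytes(a ^ b for a, b in zip(ct, keystream(key, nonce, len(ct))))

key = os.urandom(32)
blob = encrypt(key, b"attack at dawn")
assert decrypt(key, blob) == b"attack at dawn"
```

Flipping a single ciphertext bit makes the HMAC check fail, so `decrypt` raises instead of returning garbage: exactly the property GCM gives you in one pass.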
AES wasn’t designed in a back room. In 1997, NIST held a public competition to replace the aging DES cipher. Fifteen algorithms from teams worldwide were submitted. After three years of public cryptanalysis, Rijndael (by Belgian cryptographers Joan Daemen and Vincent Rijmen) won. The open process was deliberate — a cipher hiding a backdoor would be caught by the global community. This is why the crypto community trusts AES in a way it never trusted, say, the NSA-designed Skipjack cipher inside the Clipper Chip.
In 1976, Whitfield Diffie and Martin Hellman published “New Directions in Cryptography,” one of the most important papers in the history of computer science. They proposed something that sounded impossible: two strangers could agree on a shared secret over a public channel, even if an eavesdropper heard every word.
The paint-mixing analogy: Alice and Bob publicly agree on a base color (yellow). Each secretly picks a private color. They each mix their private color with the public one and exchange the results. Both arrive at the same final color — but an eavesdropper who only saw the intermediate mixtures can’t reverse the mixing to find the private colors. The mathematical equivalent is modular exponentiation or elliptic curve point multiplication — easy forward, computationally infeasible to reverse.

This is the Diffie-Hellman key exchange. It relies on the mathematical asymmetry of certain operations: easy to compute in one direction, effectively impossible to reverse. Modern TLS uses ECDHE — Elliptic Curve Diffie-Hellman Ephemeral — which achieves the same thing with smaller numbers and faster computation.
The “ephemeral” part is critical. Both sides generate new, temporary key pairs for every session. Even if a server’s long-term private key is compromised years later, an attacker who recorded past traffic can’t decrypt it, because the ephemeral keys are long gone. This property is called forward secrecy, and it’s why TLS 1.3 mandates ECDHE and dropped support for static RSA key exchange.
Why forward secrecy matters politically: Without it, a state actor can record all encrypted traffic today, steal a server’s private key next year (via warrant, hack, or coercion), and decrypt everything retroactively. With forward secrecy, recorded ciphertext is permanently useless. This is why the IETF pushed so hard for it in TLS 1.3.

But Diffie-Hellman solves only half the problem. It gives you a shared secret — but it doesn’t tell you who you derived that secret with. A man-in-the-middle could perform DH with Alice, separately perform DH with Bob, and relay traffic between them. To prevent this, you need authentication — and that requires asymmetric cryptography used in a different way.
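The paint-mixing story maps directly onto modular exponentiation. A toy sketch with a deliberately small prime; real deployments use 2048-bit groups or elliptic curves:

```python
import secrets

# Textbook Diffie-Hellman over a toy prime field. The sizes here are far
# too small for real use; the structure is identical at real sizes.
p = 2**89 - 1          # a Mersenne prime, small enough to read
g = 3                  # public base ("yellow paint")

a = secrets.randbelow(p - 2) + 1   # Alice's private value
b = secrets.randbelow(p - 2) + 1   # Bob's private value

A = pow(g, a, p)       # Alice sends A over the public channel
B = pow(g, b, p)       # Bob sends B over the public channel

# Each side combines its own secret with the other's public value.
alice_secret = pow(B, a, p)        # (g^b)^a = g^(ab)
bob_secret = pow(A, b, p)          # (g^a)^b = g^(ab)
assert alice_secret == bob_secret  # same shared secret, never transmitted
```

An eavesdropper sees p, g, A, and B, but recovering a from A = g^a mod p is the discrete logarithm problem: easy forward, infeasible backward at real key sizes.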
Diffie and Hellman weren’t actually first. In 1970 — six years earlier — James Ellis at Britain’s GCHQ independently discovered public-key cryptography. His colleague Clifford Cocks then invented what we now call RSA, also years before Rivest, Shamir, and Adleman. But it was all classified. The GCHQ work wasn’t declassified until 1997. Ellis died a month before the public announcement and never received the recognition.
Asymmetric (public-key) cryptography uses a key pair: a public key you give to the world, and a private key you guard with your life. The two are mathematically linked by a trapdoor function — a computation that’s efficient in one direction and infeasible in the other.
RSA
The classic. Named after Rivest, Shamir, and Adleman (1977). The trapdoor is integer factorization:
- Pick two large random primes, p and q (each 1024+ bits).
- Compute n = p × q. This is your modulus — it’s public.
- Compute φ(n) = (p−1)(q−1). This requires knowing p and q.
- Choose a public exponent e (commonly 65537). Compute the private exponent d such that ed ≡ 1 (mod φ(n)).
- Public key: (n, e). Private key: (n, d).
The security rests on one fact: given n (a 2048-bit number), nobody knows how to efficiently find p and q. Multiplying two 1024-bit primes takes microseconds. Factoring their product takes longer than the universe will exist.
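The five steps above fit in a few lines with textbook-sized numbers. A toy sketch using the classic tiny primes 61 and 53, far too small for any real security:

```python
# Toy RSA mirroring the steps above. Real keys use 1024-bit-plus primes;
# these fit in your head.
p, q = 61, 53
n = p * q                      # public modulus: 3233
phi = (p - 1) * (q - 1)        # 3120 -- computable only if you know p and q
e = 17                         # public exponent (real systems use 65537)
d = pow(e, -1, phi)            # private exponent: e*d = 1 (mod phi)

msg = 65
ciphertext = pow(msg, e, n)           # encrypt with the public key (n, e)
assert pow(ciphertext, d, n) == msg   # decrypt with the private key (n, d)
assert (e * d) % phi == 1
```

Anyone who can factor n = 3233 back into 61 × 53 can recompute phi and then d. At 2048 bits, nobody can.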
Why RSA is fading: RSA keys need to be huge (2048–4096 bits). ECDSA and Ed25519 achieve equivalent security with 256-bit keys — smaller certificates, faster handshakes, less bandwidth. Most new systems prefer EC keys. RSA remains for backwards compatibility.

Elliptic Curves (ECDSA, Ed25519)
Instead of factoring, EC crypto relies on the Elliptic Curve Discrete Logarithm Problem. You have a curve, a base point G, and a random integer k (your private key). Your public key is Q = kG. Given G and Q, recovering k is computationally infeasible. The result: 256-bit keys as strong as 3072-bit RSA keys.
What can you do with these keys?
- Encrypt: Anyone can encrypt with your public key. Only your private key can decrypt.
- Sign: You can create a digital signature with your private key. Anyone with your public key can verify it.
Certificates care about the second operation — signing — far more than the first.
Everything above has a ticking clock. Shor’s algorithm, run on a sufficiently powerful quantum computer, can factor large integers and solve discrete logarithms in polynomial time — breaking both RSA and elliptic curve crypto. No such computer exists yet, but “harvest now, decrypt later” attacks are already a concern: adversaries record encrypted traffic today, expecting to break it when quantum computers mature. NIST finalized its first post-quantum cryptography standards in 2024 (ML-KEM, ML-DSA, SLH-DSA), and the migration is underway. The next generation of certificates will use lattice-based math instead of factoring.
A digital signature proves two things simultaneously: who produced a piece of data, and that the data hasn’t been altered since it was signed.
Why hash first? Asymmetric operations are ~1000× slower than symmetric crypto. You don’t want to sign a 500MB file directly. Instead, you hash it to a 32-byte digest (SHA-256) and sign that. The hash is a faithful fingerprint — change one bit of the file, and the hash changes unpredictably. Signing the hash is as good as signing the whole file.

- Run the data through SHA-256. This produces a fixed-size 32-byte digest — a deterministic fingerprint. It’s preimage-resistant (can’t reverse it) and collision-resistant (can’t find two inputs with the same hash).
- Sign the digest with your private key (for RSA, this is often described as “encrypting” the hash). The result is the signature — a blob that could only have been produced by someone possessing that private key.
- Send the original message, the signature, and your public key (or a certificate containing it).
- The receiver independently hashes the message, recovers the signed digest with your public key, and compares. If the hashes match, the signature is valid: authentic and unmodified.
This mechanism underpins everything: TLS, code signing, JWTs, git commits, package managers, and certificates themselves.
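The sign-then-verify flow can be sketched end-to-end with the same toy RSA numbers as before (p=61, q=53). Because the toy modulus is tiny, the digest is reduced mod n before signing; real RSA moduli dwarf a 256-bit digest, so this step is purely an artifact of the toy sizes.

```python
import hashlib

# Hash-then-sign with toy RSA numbers. Illustrative only.
p, q = 61, 53
n, phi = p * q, (p - 1) * (q - 1)
e = 17
d = pow(e, -1, phi)

def sign(message: bytes) -> int:
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(digest, d, n)           # apply the private key to the digest

def verify(message: bytes, signature: int) -> bool:
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == digest   # recover with the public key, compare

sig = sign(b"deploy v2.1")
assert verify(b"deploy v2.1", sig)     # authentic and unmodified
```

Changing even one byte of the message changes the SHA-256 digest, so verification fails (with overwhelming probability at real digest and key sizes).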
When hash functions break: MD5 was standard until 2004, when researchers found practical collision attacks. SHA-1 fell in 2017 — Google produced two different PDFs with the same SHA-1 hash (the “SHAttered” attack). Both are now broken for signatures. This is why SHA-256 is everywhere — and why cryptographic agility is built into certificate standards.

Now we can define what a certificate actually is, and it’s simpler than you might expect.
A certificate is a document that says: “I, the issuer, certify that this public key belongs to this identity.” And then the issuer signs that statement with their own private key.
That’s it. A certificate binds a public key to an identity, and a trusted third party’s signature makes that binding believable. The standard format is X.509v3:
| Field | Purpose | Why it matters |
|---|---|---|
| Subject | Who this cert identifies | For web: the domain. For K8s: the component identity. The verifier checks this matches who they expect. |
| SANs | Additional identities | Modern TLS uses SANs, not Subject CN, for hostname verification. A cert can cover multiple domains or IPs. |
| Issuer | Who signed this cert | Points up the chain of trust. If the issuer is trusted and the signature is valid, the cert is trusted. |
| Subject Public Key | The key being certified | The actual payload. The whole point of the cert is to vouch for this key. |
| Validity Period | Not Before / Not After | Limits exposure if a key is compromised. Let’s Encrypt: 90 days. K8s: 1 year. |
| Key Usage | Allowed operations | Digital Signature, Key Encipherment, Cert Sign. A leaf cert must NOT have Cert Sign. |
| Extended Key Usage | Application-level purpose | Server Auth vs Client Auth. Prevents a server cert from being misused as a client cert. |
| Basic Constraints | Is this a CA? | CA:TRUE = can sign other certs. CA:FALSE = leaf. Critical security boundary. |
| Signature | Issuer’s digital signature | The proof. Hash all fields, sign with issuer’s private key. Verifier re-hashes and compares. |
Anyone can create a certificate claiming anything. I can generate a cert right now saying “this key belongs to google.com.” What makes it trustworthy isn’t its content — it’s the signature. And the signature is only meaningful if you trust the entity that signed it.
How do you catch a CA that issues a certificate it shouldn’t? Since 2018, Chrome requires all publicly-trusted certificates to be logged in Certificate Transparency (CT) logs — public, append-only, cryptographically auditable ledgers. Anyone can monitor them. If a CA issues a cert for google.com that Google didn’t request, Google’s monitoring catches it within minutes. CT has already exposed mis-issuances by Symantec, WoSign, and others. It turns the CA system from “trust by policy” into “trust but verify.”
If you need a trusted third party to sign your cert, who signs their cert? And who signs that entity’s cert? This regression stops at root Certificate Authorities.
A root CA is a certificate that signs itself. Its issuer is itself. This is obviously circular — so why trust it? Because your operating system’s vendor has pre-installed it into your trust store — a curated list of roughly 150 root certificates that your machine trusts implicitly.
Root store governance is serious. Getting accepted into Apple’s or Mozilla’s root program requires annual WebTrust audits, compliance with CA/Browser Forum Baseline Requirements, and financial stability assessments. Violate the rules: distrusted. See Symantec (2017, distrusted by Chrome) and CNNIC (2015, distrusted after issuing an unauthorized intermediate used for MITM).

In practice, root CAs don’t directly sign your server’s certificate. The chain has three levels:

- Root CA: self-signed, CA:TRUE, pre-installed in trust stores, kept offline.
- Intermediate CA: signed by the root, CA:TRUE — does the day-to-day signing.
- Leaf certificate: CA:FALSE — cannot sign other certs. Short-lived (90 days with Let’s Encrypt). This is what your server presents during the TLS handshake.

Why intermediates? The root CA’s private key is the single point of trust. If compromised, every certificate in the chain becomes untrustworthy, and there’s no recovery short of replacing the root in every device’s trust store worldwide. So root keys are kept offline, accessed only during formal key ceremonies. The intermediates do the daily work. If one is compromised, the root signs a new one, the old gets revoked, damage contained. Defense in depth applied to trust infrastructure.
Revocation is messy. Three mechanisms: CRLs (lists of revoked serials — get huge, stale). OCSP (real-time check — adds latency, fails open or breaks). OCSP Stapling (server pre-fetches its status and attaches it to the handshake — best of both worlds, but incomplete adoption). Let’s Encrypt’s strategy: make certs so short-lived that revocation matters less.

Verification in practice
- Server sends its leaf cert + intermediate cert (root is omitted — you already have it locally).
- Check the leaf’s signature using the intermediate’s public key. ✓
- Check the intermediate’s signature using the root’s public key (from your trust store). ✓
- Root is trusted. Chain complete. Connection trusted.
- Also: validity dates, SANs match hostname, key usage is appropriate, not revoked.
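The chain walk above can be sketched with toy certificates: plain dicts signed with tiny RSA keys. All numbers and names here are illustrative only.

```python
import hashlib
import json

# Toy chain-of-trust walk. Tiny RSA keys, illustration only.
def make_key(p, q, e=17):
    n, phi = p * q, (p - 1) * (q - 1)
    return {"n": n, "e": e, "d": pow(e, -1, phi)}

def digest_mod(data: dict, n: int) -> int:
    blob = json.dumps(data, sort_keys=True).encode()
    return int.from_bytes(hashlib.sha256(blob).digest(), "big") % n

def issue(issuer_name, issuer_key, subject, subject_key, is_ca):
    body = {"subject": subject, "issuer": issuer_name,
            "public_key": {"n": subject_key["n"], "e": subject_key["e"]},
            "ca": is_ca}  # Basic Constraints, toy version
    sig = pow(digest_mod(body, issuer_key["n"]), issuer_key["d"], issuer_key["n"])
    return {"body": body, "sig": sig}

def check_sig(cert, issuer_pub):
    expected = digest_mod(cert["body"], issuer_pub["n"])
    return pow(cert["sig"], issuer_pub["e"], issuer_pub["n"]) == expected

root_key, int_key, leaf_key = make_key(61, 53), make_key(67, 71), make_key(89, 97)
root = issue("Toy Root", root_key, "Toy Root", root_key, True)   # self-signed
inter = issue("Toy Root", root_key, "Toy Intermediate", int_key, True)
leaf = issue("Toy Intermediate", int_key, "example.com", leaf_key, False)

# Walk the chain exactly as a TLS client would:
assert check_sig(leaf, inter["body"]["public_key"])    # leaf signed by intermediate
assert check_sig(inter, root["body"]["public_key"])    # intermediate signed by root
assert root["body"]["subject"] == root["body"]["issuer"]  # root is self-signed, in our store
assert inter["body"]["ca"] and not leaf["body"]["ca"]     # Basic Constraints respected
```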
In 2011, attackers compromised DigiNotar, a Dutch CA, and issued fraudulent certificates for over 500 domains including *.google.com. The fake certs were used to intercept Gmail traffic of Iranian dissidents. When discovered, every browser vendor revoked DigiNotar’s root. The company filed for bankruptcy within a month. It remains the definitive cautionary tale of CA compromise — and the event that accelerated Certificate Transparency development.
This is where every concept snaps together into a single coherent protocol. TLS 1.3 is the current version — faster and more secure than its predecessors.
TLS 1.3 removed a graveyard: RSA key exchange (no forward secrecy), CBC ciphers (BEAST, Lucky 13), compression (CRIME), renegotiation. TLS 1.2 needed 2 round-trips; 1.3 does it in 1. Every removed feature had a CVE history.

| Client | Server | Why | |
|---|---|---|---|
| Key Exchange | |||
| ClientHello: cipher suites, random nonce, ECDHE key share | → | Client sends DH public value upfront — saves a round trip vs TLS 1.2 | |
| ← | ServerHello: chosen cipher, random, ECDHE key share | Both sides can now derive the shared session key | |
| Authentication (encrypted from here) | |||
| ← | Certificate: leaf + intermediate chain | Server proves identity. Client walks the chain to a trusted root. | |
| ← | CertificateVerify: signature over handshake transcript | Proves the server has the private key. Without this, a stolen cert could be replayed. | |
| ← | Finished: HMAC of entire handshake | Integrity check. MITM tampering breaks this MAC. | |
| Finished: HMAC of entire handshake | → | Client confirms its view matches the server’s. | |
| Application Data (AES-256-GCM) | |||
| Encrypted request | ↔ | Encrypted response | Symmetric encryption. Fast. |
Notice the layering: ECDHE for key exchange (forward secrecy), certificates + signatures for authentication, AES-GCM for bulk encryption. Each primitive does what it’s best at.
CertificateVerify deserves special attention: the server signs the handshake transcript with its private key. This proves the server possesses the private key, not just the (public) certificate. Without it, an attacker who obtained the cert file (but not the key) could impersonate the server.
TLS 1.3 has a trick up its sleeve: 0-RTT. If a client has connected to a server before, it can send application data in the very first message — zero round trips before data flows. The catch: 0-RTT data is replayable. An attacker can capture and resend that first message. So 0-RTT should only be used for idempotent requests (GET, not POST). It’s a deliberate security/performance tradeoff, and servers must opt in.
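As a practical aside, Python’s stdlib ssl module exposes these handshake policies directly. A minimal sketch of a client context pinned to TLS 1.3 with full verification; no network connection is made here:

```python
import ssl

# Client-side TLS context the way the handshake above assumes:
# TLS 1.3 minimum, chain validation on, hostname checking on.
ctx = ssl.create_default_context()            # loads the OS trust store
ctx.minimum_version = ssl.TLSVersion.TLSv1_3  # refuse older protocol versions

assert ctx.verify_mode == ssl.CERT_REQUIRED   # chain must validate to a trusted root
assert ctx.check_hostname                     # SANs must match the hostname

# To use it: ctx.wrap_socket(sock, server_hostname="example.com")
```

`create_default_context()` turns on certificate and hostname verification by default; the dangerous mode (no verification) requires explicitly opting out.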
Before November 2015, getting a TLS certificate meant: paying $50–$300/year, generating a CSR manually, emailing it, waiting days, receiving the cert via email, installing it, and setting a calendar reminder. Let’s Encrypt changed everything by making certificates free, automated, and open.
Why 90-day certs? Three reasons. (1) Limiting damage: compromised key exposure is at most 90 days. (2) Forcing automation: you can’t manually manage 90-day certs, so you must automate, and automated systems are more reliable than humans with calendars. (3) Agility: if a vulnerability is found, the fleet rotates within 90 days with zero human intervention.

How ACME works
The core idea: prove you control the domain before the CA signs. Let’s Encrypt only does Domain Validation (DV).
1. Register. Your client (certbot, caddy) generates an account key pair and registers with acme-v02.api.letsencrypt.org.
2. Order. “I want a cert for example.com.” The server responds with challenges.
3. Prove control, via one of:
   - HTTP-01: Place a token at http://example.com/.well-known/acme-challenge/<token>. LE fetches it from port 80. Limitation: no wildcards.
   - DNS-01: Create a TXT record at _acme-challenge.example.com. Advantage: works for wildcard certs, no ports needed.
4. Request. Generate a fresh key pair locally, create a CSR with your public key and domain info, submit it. The CA never sees your private key.
5. Issue. Let’s Encrypt signs it with the intermediate’s key. The whole process takes seconds.
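Under the hood, both challenge types are answered with a key authorization derived from the account key (per RFC 8555, using the JWK thumbprint of RFC 7638). A sketch with a made-up placeholder account key; the token and JWK values are illustrative, not real:

```python
import base64
import hashlib
import json

def b64url(data: bytes) -> str:
    # ACME uses unpadded base64url throughout.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def jwk_thumbprint(jwk: dict) -> str:
    # RFC 7638: required members only, lexicographic order, no whitespace.
    canon = json.dumps(jwk, sort_keys=True, separators=(",", ":")).encode()
    return b64url(hashlib.sha256(canon).digest())

account_jwk = {"e": "AQAB", "kty": "RSA", "n": "placeholder-modulus"}  # hypothetical
token = "evaGxfADs6pSRb2LAv9IZ"                                       # from the CA

key_auth = f"{token}.{jwk_thumbprint(account_jwk)}"

# HTTP-01: serve key_auth at this path on port 80.
http01_path = f"/.well-known/acme-challenge/{token}"
# DNS-01: publish this TXT value at _acme-challenge.<domain>.
dns01_txt = b64url(hashlib.sha256(key_auth.encode()).digest())
```

Binding the challenge response to the account key is what stops another ACME account from claiming your validation.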
That is the complete web PKI story. From trapdoor functions, through chains of trust, through TLS handshakes, to ACME automation. Now let’s see how Kubernetes takes these same primitives and uses them internally.
On the web, TLS is usually one-directional: the server proves its identity, but the client stays anonymous. Kubernetes is different. Every component must authenticate to every other component, using mutual TLS (mTLS) — both sides present certificates.
Why mTLS, not tokens? Defense in depth. A bearer token can be stolen and replayed from anywhere. A client certificate proves the holder possesses a private key that never left the node — you can’t replay it without the key. K8s also supports token auth, but cert-based auth is primary for infrastructure components because it provides identity at the transport layer, before any application code runs.

The kubelet on worker-3 reports pod status. The API server checks: is this a legitimate kubelet? The kubelet presents a client cert with CN=system:node:worker-3, O=system:nodes. The API server verifies it against the cluster CA, extracts the identity, applies RBAC. Meanwhile, the kubelet verifies the API server’s cert too. Mutual verification on every connection.
Every arrow requires at least one, usually two, certificates. This is why a cluster has so many — not overengineering, but the minimum machinery for authenticated communication.
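A minimal sketch of what “both sides present certificates” means in code, using Python’s stdlib ssl module. The commented file paths are illustrative of a kubeadm-style layout; nothing is loaded here:

```python
import ssl

# Server side of mTLS: unlike a typical web server, this context
# *requires* a valid client certificate before the handshake completes.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.verify_mode = ssl.CERT_REQUIRED       # reject clients without a valid cert
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# In a real deployment you would then load the trust anchor and key pair
# (paths illustrative, kubeadm-style):
# ctx.load_verify_locations("/etc/kubernetes/pki/ca.crt")
# ctx.load_cert_chain("/path/to/server.crt", "/path/to/server.key")

assert ctx.verify_mode == ssl.CERT_REQUIRED
```

The client side mirrors this: it verifies the server against the cluster CA and presents its own cert via `load_cert_chain`. Both verifications happen in the same handshake.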
When you initialize a cluster with kubeadm init, the first thing generated is the cluster CA:
/etc/kubernetes/pki/ca.crt # Root certificate (public, distributed everywhere)
/etc/kubernetes/pki/ca.key # Root private key (the crown jewel)
This CA is the root of trust for the entire cluster. Every other K8s certificate is either signed directly by this CA or by a subordinate signed by it. The ca.crt is embedded in every kubeconfig.
If ca.key is stolen: the attacker can sign a cert with any subject — for example, one with O=system:masters, which grants unrestricted cluster-admin access. In production, consider an external CA (HashiCorp Vault), where the root key lives in a hardware-backed secret store and signing happens through an auditable API. Some teams generate the CA offline, never storing the root key on any cluster node.
The API server is the nexus — every component talks to it. It needs certificates for both directions:
Server certificate (incoming connections)
/etc/kubernetes/pki/apiserver.crt
/etc/kubernetes/pki/apiserver.key
Presented to anything connecting to the API server. Its SANs must include every reachable name/IP: kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster.local, the node’s hostname and IP, the cluster IP (10.96.0.1), and any load-balancer addresses.
If a client connects to kubernetes.default.svc but the cert only lists 10.96.0.1, the connection fails. Every possible addressing method must be covered. This is the most common source of “x509: certificate is valid for X, not Y” errors.
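Hostname verification is essentially a comparison against the SAN list. A simplified toy matcher; real verifiers also treat IP SANs specially and enforce more wildcard rules:

```python
# Toy SAN matcher illustrating why "certificate is valid for X, not Y"
# happens: if the name you dialed isn't in the list, verification fails.
def hostname_matches(hostname, sans):
    labels = hostname.lower().split(".")
    for san in sans:
        san_labels = san.lower().split(".")
        if san_labels == labels:
            return True          # exact match
        # Wildcard: only the leftmost label, only one level deep.
        if (san_labels and san_labels[0] == "*"
                and len(san_labels) == len(labels)
                and san_labels[1:] == labels[1:]):
            return True
    return False

sans = ["kubernetes", "kubernetes.default", "kubernetes.default.svc",
        "kubernetes.default.svc.cluster.local", "10.96.0.1"]
assert hostname_matches("kubernetes.default.svc", sans)
assert not hostname_matches("kubernetes.default.svc.cluster", sans)  # the error case
```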
Client certificate (outgoing to kubelets)
/etc/kubernetes/pki/apiserver-kubelet-client.crt / .key
Used when the API server connects to kubelets (kubectl logs, kubectl exec). Subject: O=system:masters.
Each kubelet — one per node — needs its own pair of certificates.
Client certificate (kubelet → API server)
The subject IS the kubelet’s RBAC identity:
Subject: CN = system:node:worker-3
O = system:nodes
CN identifies the node. O maps to a K8s group. The system:nodes group is bound to the system:node ClusterRole, which scopes kubelet permissions to pods on its own node.
Kubernetes derives identity directly from the certificate subject: CN = username, O = groups. Consequence: you can’t revoke access without revoking the cert (K8s doesn’t support CRL/OCSP) or waiting for expiry. This is why cert lifetimes and rotation matter so much.
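A sketch of that mapping as a function. Real Kubernetes parses the subject from ASN.1 rather than splitting a string, so this is illustrative only:

```python
# Toy mapping from a certificate subject string to a Kubernetes identity:
# CN becomes the username, each O becomes a group.
def identity_from_subject(subject):
    username, groups = None, []
    for part in subject.split(","):
        key, _, value = part.strip().partition("=")
        if key == "CN":
            username = value
        elif key == "O":
            groups.append(value)
    return username, groups

user, groups = identity_from_subject("CN=system:node:worker-3,O=system:nodes")
assert user == "system:node:worker-3"
assert groups == ["system:nodes"]   # bound to the system:node ClusterRole via RBAC
```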
Server certificate (API server → kubelet)
When the API server initiates connections to the kubelet (logs, exec, port-forward), the kubelet presents its server cert with the node’s IP and hostname as SANs.
Here is the chicken-and-egg problem that makes K8s cert management genuinely interesting: a new node needs a client cert to talk to the API server. To get one signed, it must submit a CSR to the API server. To talk to the API server, it needs a cert.
The solution: TLS Bootstrap — a protocol using a short-lived, low-privilege bootstrap token to break the circularity.
Why not pre-generate certs? You could, for small static clusters. But in a cloud-native world with autoscaling, spot instances, and thousands of nodes churning, you need nodes to self-provision certificates. TLS Bootstrap makes nodes self-provisioning.

| Actor | Action | Why |
|---|---|---|
| Phase 1 — Preparation | ||
| admin | kubeadm token create — generates abcdef.0123456789abcdef. Creates a Secret in kube-system. | The token is deliberately limited: 24h expiry, only permission to create CSRs. Minimal privilege for the most vulnerable moment. |
| admin | Provides the node a bootstrap kubeconfig: API server address, cluster CA cert, bootstrap token. | The CA cert lets the node verify the API server on first contact. --discovery-token-ca-cert-hash adds a second layer: verifies the CA itself by hash, preventing MITM during bootstrap. |
| Phase 2 — Initial Contact | | |
| kubelet | Connects to API server using the bootstrap token. Authenticated as system:bootstrappers. | Almost no permissions — only enough to submit a CSR. Limits damage if a bootstrap token leaks. |
| Phase 3 — Certificate Request | | |
| kubelet | Generates a fresh key pair locally. Private key never leaves the node. | The API server never sees or touches the private key. It only receives the public key inside the CSR. |
| kubelet | Submits a CertificateSigningRequest: public key, subject CN=system:node:<name>, O=system:nodes, usage: client auth. | The CSR is a standard K8s resource. Visible with kubectl get csr. |
| Phase 4 — Approval & Signing | | |
| csrapproving | Checks: submitted by system:bootstrappers? Subject matches system:node:*? Only client auth usage? → auto-approve. | Policy-driven, not blind. A request for O=system:masters from a bootstrap token would be rejected. |
| csrsigning | Signs the CSR with the cluster CA key. Signed cert attached to the CSR object. | This is where ca.key is used. The controller-manager needs access to it. |
| Phase 5 — Normal Operation | | |
| kubelet | Downloads signed cert, writes it + private key to disk, creates new kubeconfig, reconnects as system:node:<name>. | Bootstrap token discarded. Full node-level permissions via RBAC. |
| kubelet | With --rotate-certificates=true (default since 1.19): auto-submits new CSR before expiry. Hot-swaps cert with zero downtime. | Continuous rotation for the node’s entire lifetime. No human intervention after initial join. |
kubeadm join wraps all of this. One command: (1) connect with token, (2) verify CA cert by hash, (3) execute the full TLS bootstrap flow, (4) start the kubelet. The ceremony, automated.
The beauty of TLS Bootstrap is how it decomposes a chicken-and-egg problem into minimal privilege escalations: start with a low-value token, use it to obtain a proper certificate, discard the token. Each step gives only what’s needed for the next.
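The Phase 4 approval policy can be caricatured as a pure function. This is a simplified sketch of the checks described above, not the actual controller source:

```python
# Toy version of the csrapproving check: a bootstrap CSR is auto-approved
# only if requester, subject, and usages all fit the narrow node-client
# profile. Anything broader must be approved by a human.
def auto_approve(requestor_groups, subject_cn, subject_orgs, usages):
    if "system:bootstrappers" not in requestor_groups:
        return False                               # wrong requester
    if not subject_cn.startswith("system:node:"):
        return False                               # wrong identity shape
    if subject_orgs != ["system:nodes"]:
        return False                               # wrong group
    # Only client-auth-flavored usages are acceptable.
    return set(usages) <= {"digital signature", "key encipherment", "client auth"}

assert auto_approve(["system:bootstrappers"], "system:node:worker-3",
                    ["system:nodes"], ["client auth"])
# A bootstrap token trying to mint an admin cert is rejected:
assert not auto_approve(["system:bootstrappers"], "admin",
                        ["system:masters"], ["client auth"])
```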
Beyond the cluster CA, Kubernetes maintains two additional CAs and a signing key pair, each for a distinct trust domain.
The etcd CA
/etc/kubernetes/pki/etcd/ca.crt # etcd root CA
/etc/kubernetes/pki/etcd/server.crt # etcd server cert
/etc/kubernetes/pki/etcd/peer.crt # etcd-to-etcd replication
etcd stores every piece of cluster state. Its CA is deliberately separate.
Why separate? Blast radius reduction. Compromised cluster CA = impersonate K8s components, but can’t impersonate etcd (different CA). Compromised etcd CA = read/write cluster state, but can’t impersonate K8s components. Two independent breaches required for full compromise.

The API server uses a cert signed by the etcd CA (not the cluster CA) to connect to etcd as a client: apiserver-etcd-client.crt.
The Front Proxy CA
For the API aggregation layer. When the API server proxies to an extension server (metrics-server), it presents the front-proxy-client cert and passes user identity via headers. The extension server trusts only the front-proxy CA. Third trust domain.
Service Account Key Pair
/etc/kubernetes/pki/sa.key # Signs JWTs
/etc/kubernetes/pki/sa.pub # Verifies JWTs
Not X.509. A raw key pair for signing/verifying ServiceAccount tokens (JWTs). Controller-manager signs with sa.key when creating service accounts; API server verifies with sa.pub.
Here is every certificate and key file on a kubeadm-provisioned control-plane node:
| File | Type | Signed By | Used By | Purpose |
|---|---|---|---|---|
| Cluster CA Trust Domain | ||||
| ca.crt / ca.key | Root CA | Self-signed | Everything | Cluster root of trust |
| apiserver.crt | Server | Cluster CA | kube-apiserver | TLS for incoming connections |
| apiserver-kubelet-client.crt | Client | Cluster CA | kube-apiserver | API server → kubelets |
| kubelet client cert | Client | Cluster CA | kubelet | Kubelet → API server |
| kubelet server cert | Server | Cluster CA | kubelet | Kubelet HTTPS (port 10250) |
| scheduler.conf | Client | Cluster CA | kube-scheduler | Scheduler → API server |
| controller-manager.conf | Client | Cluster CA | controller-manager | CM → API server |
| admin.conf | Client | Cluster CA | kubectl | Cluster-admin (O=system:masters) |
| etcd CA Trust Domain |||||
| etcd/ca.crt | Root CA | Self-signed | etcd | Separate root for etcd |
| etcd/server.crt | Server | etcd CA | etcd | Client → etcd TLS |
| etcd/peer.crt | Peer | etcd CA | etcd | etcd ↔ etcd replication |
| apiserver-etcd-client.crt | Client | etcd CA | kube-apiserver | API server → etcd |
| Front Proxy CA Trust Domain |||||
| front-proxy-ca.crt | Root CA | Self-signed | Aggregation | API aggregation trust root |
| front-proxy-client.crt | Client | Front Proxy CA | kube-apiserver | Proxying to extension APIs |
| Service Account Keys (not X.509) |||||
| sa.key / sa.pub | Key pair | N/A | CM / apiserver | Sign & verify SA JWTs |
~14 cert/key pairs + 1 SA key pair on a single control-plane node. In a 3-node HA setup: 30+ certificates.
Lifetimes: CA = 10 years. Components = 1 year. Kubelet auto-rotates (default since 1.19). Monitor with kubeadm certs check-expiration or a Prometheus alert on apiserver_client_certificate_expiration_seconds.
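A small sketch of the monitoring logic: converting a notAfter timestamp (the format printed by openssl x509 -enddate) into days remaining, the number you would alert on. The threshold and sample date are illustrative:

```python
from datetime import datetime, timezone

# Parse an openssl-style "notAfter" timestamp and compute days to expiry.
# openssl prints dates like "Mar 2 12:00:00 2025 GMT" (in UTC).
def days_until_expiry(not_after, now):
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    expires = expires.replace(tzinfo=timezone.utc)
    return (expires - now).days

now = datetime(2025, 1, 1, tzinfo=timezone.utc)
remaining = days_until_expiry("Mar 2 12:00:00 2025 GMT", now)
assert remaining == 60
alert = remaining < 30   # e.g. page the on-call below a 30-day threshold
```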
Three CAs, by design
- Cluster CA compromised: Impersonate any K8s component. Devastating — but can’t directly access etcd (different CA).
- etcd CA compromised: Read/write all cluster state. Can’t impersonate K8s components.
- Front Proxy CA compromised: Spoof identity to extension APIs. Narrowest blast radius.
Three independent breaches required for full compromise. Each key should be stored independently.
Inspecting your cluster’s certificates
# View a cert
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -text -noout
# Check expiry
kubeadm certs check-expiration
# Renew all
kubeadm certs renew all
# See what the API server presents live
openssl s_client -connect <api-server>:6443 -showcerts 2>/dev/null | \
openssl x509 -text -noout
# Verify cert against CA
openssl verify -CAfile /etc/kubernetes/pki/ca.crt \
/etc/kubernetes/pki/apiserver.crt
# View pending CSRs
kubectl get csr
A surprising number of Kubernetes outages are caused by expired certificates. The control-plane certs generated by kubeadm expire after 1 year. If you don’t run kubeadm certs renew all (or upgrade the cluster, which auto-renews) before the anniversary, the API server stops accepting connections. The cluster is up but unreachable. The fix is straightforward but the outage is embarrassing — and in production, entirely preventable with monitoring.
That’s the complete picture. From trapdoor functions, through chains of trust, through TLS handshakes, through ACME automation, to a new Kubernetes node bootstrapping itself with nothing but a temporary token and a CA hash. Every layer built on the one below it. Every design decision motivated by a specific threat or operational reality. Certificates aren’t magic — they’re signed documents, verified by math, organized by trust hierarchies, and automated by protocols. The complexity is the minimum machinery required to establish trust between strangers over hostile networks.
A treatise on trust.