Security Engineering
Threat Modeling Modern Cloud-Native Architectures
Microservices, service meshes, and ephemeral infrastructure have fundamentally changed the threat model for cloud applications. Here's how to adapt your threat modeling practice.

What Changed with Cloud-Native
Traditional threat modeling assumed relatively stable infrastructure: servers with known IP addresses, clear network perimeters, and long-lived processes. Cloud-native architectures throw most of these assumptions out.
Containers spin up and disappear in seconds. Service instances are ephemeral and their network addresses change constantly. The “perimeter” is an API gateway that’s exposed to the internet by design. Secrets that used to live in a config file now need to be injected at runtime into processes that may run for 30 seconds before being replaced.
The threat model has to catch up.
The New Attack Surface
The container runtime is a shared resource. A container escape — exploiting a kernel vulnerability or misconfiguration in the container runtime — can give an attacker access to the host and all other containers running on it. In a dense Kubernetes node, that could be dozens of services.
The Kubernetes API server is the crown jewel of any cluster deployment. Any workload that can reach the API server with cluster-admin credentials (a shockingly common misconfiguration) can read all secrets, modify deployments, and exec into any running container.
Service account tokens are mounted into every pod by default, even pods that never need to interact with the Kubernetes API. An attacker who compromises any pod immediately has a valid JWT for API authentication.
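One mitigation is to opt workloads out of automatic token mounting entirely. A minimal sketch (the pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: billing-worker        # illustrative name
spec:
  automountServiceAccountToken: false   # no API token is mounted into the pod
  containers:
    - name: worker
      image: registry.example.com/billing-worker:1.4   # illustrative image
```

The same field can be set on the ServiceAccount itself, which disables automounting for every pod using that account unless a pod explicitly overrides it.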
The IMDS (Instance Metadata Service) on cloud providers exposes IAM credentials to any process running on the instance or in a pod that can reach 169.254.169.254. SSRF vulnerabilities in any service running in the cluster can exfiltrate cloud credentials with whatever permissions the instance role carries.
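One layer of defense is blocking pod egress to the metadata endpoint with a standard NetworkPolicy. A sketch, assuming a CNI that enforces NetworkPolicy (e.g. Cilium or Calico) and an illustrative namespace:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-imds
  namespace: payments          # illustrative namespace
spec:
  podSelector: {}              # applies to every pod in the namespace
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 169.254.169.254/32   # block the instance metadata service
```

On AWS, enforcing IMDSv2 with a hop limit of 1 on the underlying instances adds a second layer, since token requests from inside a pod's network namespace won't make it back.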
Applying STRIDE to Microservices
Spoofing in a microservices context means impersonating services. Without mutual TLS and workload identity, any process that can reach the internal network can claim to be any service. Service meshes like Istio and Linkerd solve this at the infrastructure layer.
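In Istio, for example, mesh-wide mutual TLS can be enforced with a single PeerAuthentication resource. A minimal sketch, assuming Istio is installed in the conventional istio-system root namespace:

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # root namespace makes this policy mesh-wide
spec:
  mtls:
    mode: STRICT            # reject plaintext service-to-service traffic
```

With STRICT mode, a process that merely has network reachability can no longer impersonate a service: it would also need a valid workload certificate issued by the mesh.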
Tampering risk increases dramatically with shared storage and message queues. If multiple services write to the same database or consume from the same Kafka topic, a compromise of any one service can corrupt shared data.
Repudiation is harder to address in distributed systems. Centralized, immutable logging with correlation IDs that trace requests across service boundaries is essential for forensic reconstruction after an incident.
Information Disclosure from over-permissive RBAC is the most common cloud-native finding in our assessments. Services routinely have read access to secrets they never use, simply because it was easier to give broad access than to scope it precisely.
Denial of Service at the API level — targeting the Kubernetes API server or a critical gateway service — can take down an entire cluster. Rate limiting and resource quotas at the Kubernetes level are often overlooked.
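Resource quotas per namespace are the standard guardrail against one tenant exhausting the cluster. A sketch with illustrative limits:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota
  namespace: payments        # illustrative namespace
spec:
  hard:
    requests.cpu: "10"       # aggregate CPU requests across the namespace
    requests.memory: 20Gi
    limits.cpu: "20"
    limits.memory: 40Gi
    pods: "50"               # cap on pod count
```

Quotas address resource exhaustion; protecting the API server itself additionally relies on its built-in request throttling (API Priority and Fairness in recent Kubernetes versions).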
Elevation of Privilege through container escape or RBAC abuse is the most impactful attack path. Dangerous securityContext settings on pods — privileged: true, allowPrivilegeEscalation: true, hostPID: true — are still commonly seen in production manifests.
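The inverse of those dangerous settings is a hardened pod spec. A sketch of a restrictive baseline (pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app          # illustrative name
spec:
  hostPID: false              # no access to the host's process namespace
  hostNetwork: false
  containers:
    - name: app
      image: registry.example.com/app:1.0   # illustrative image
      securityContext:
        privileged: false
        allowPrivilegeEscalation: false     # blocks setuid/setcap escalation
        readOnlyRootFilesystem: true
        runAsNonRoot: true
        capabilities:
          drop: ["ALL"]                     # drop every Linux capability
```

A workload that genuinely needs one of these privileges should add back only that single setting, with the justification recorded alongside the manifest.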
Practical Recommendations
Start with an accurate map of your service dependencies and data flows. You cannot threat model what you cannot see. Tools like Cilium's Hubble and Linkerd's tap, or simply your existing service-mesh telemetry, provide the ground truth.
Treat your Kubernetes RBAC configuration as a security-critical artifact that deserves the same review rigor as your application code. Run kubectl auth can-i --list for every service account in your cluster and be prepared to be surprised.
Enable Pod Security Admission (PSA) with the restricted profile as your default, and explicitly allow exceptions only for workloads that genuinely need elevated privileges — with documentation explaining why.
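Pod Security Admission is configured with namespace labels. A minimal sketch (namespace name is illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: payments              # illustrative namespace
  labels:
    pod-security.kubernetes.io/enforce: restricted   # reject non-compliant pods
    pod-security.kubernetes.io/audit: restricted     # record violations in audit logs
    pod-security.kubernetes.io/warn: restricted      # warn clients at admission time
```

Rolling out with audit and warn first, then flipping enforce once violations are cleaned up, avoids breaking existing workloads in one step.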
Finally, ensure your threat models get updated when the architecture changes. A diagram that reflects the state of the system two quarters ago is not a threat model — it’s a historical document.
- Service-to-service authentication. mTLS and workload identity (SPIFFE/SPIRE) establish cryptographic trust between services without relying on network topology or static credentials.
- Secrets management. Environment variables are not secrets stores. Use Vault, AWS Secrets Manager, or Kubernetes Secrets with envelope encryption — and rotate them automatically.
- RBAC over-provisioning. Workloads that run under the default service account inherit whatever permissions have been bound to it — and in many clusters those bindings are far broader than any single service needs. Scope every identity to the minimum privilege needed for its function.
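A minimally scoped RBAC grant looks like this — a Role limited to a single named secret, bound to one service account (all names are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: billing-secrets-reader
  namespace: payments
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["billing-db-creds"]  # only the one secret this service uses
    verbs: ["get"]                       # read a specific object; no list/watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: billing-secrets-reader
  namespace: payments
subjects:
  - kind: ServiceAccount
    name: billing-worker                 # illustrative service account
    namespace: payments
roleRef:
  kind: Role
  name: billing-secrets-reader
  apiGroup: rbac.authorization.k8s.io
```

Note the deliberate omission of the list verb: a role that can list secrets can enumerate every secret in the namespace, which defeats the point of naming one.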
"In a containerized environment, lateral movement is no longer about pivoting between servers — it's about escaping the pod, abusing RBAC, and reaching the control plane."