Studying for KCNA by Actually Building Things
How I built a local Kubernetes lab on an M-series Mac to study for the KCNA exam, and why building beats reading every time.
April 5, 2026
I've been down this road before. Buy the course, read the docs, watch the videos, take the practice exams. Pass the test. Forget most of it in three months.
For KCNA I decided to do it differently. Instead of studying Kubernetes, I'd build with it.
What is KCNA?
The Kubernetes and Cloud Native Associate (KCNA) is a foundational certification from the CNCF. It covers Kubernetes fundamentals, container orchestration, cloud native application delivery (GitOps, CI/CD, Helm), and cloud native architecture including observability and service meshes. It's not a hands-on exam, but hands-on is still the best way to learn it.
The lab
I built an 8-phase lab on my M-series Mac using kind (Kubernetes in Docker) and Colima as a lightweight Docker runtime. No Docker Desktop, no cloud costs, no waiting for cluster provisioning.
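A setup like this boils down to one small config file. Here's a minimal sketch of a kind cluster config (the repo's actual config may differ; the port mapping is an assumption so a local Ingress controller can be reached from the host):

```yaml
# kind-config.yaml -- a minimal sketch, not the repo's actual config.
# Assumes Colima is already running as the Docker runtime:
#   colima start --cpu 4 --memory 8
# Create the cluster with:
#   kind create cluster --name kcna-lab --config kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    # Map a host port so an Ingress controller is reachable at localhost:8080
    extraPortMappings:
      - containerPort: 80
        hostPort: 8080
        protocol: TCP
  - role: worker
```

Because kind nodes are just containers, tearing the whole thing down is `kind delete cluster --name kcna-lab`, which makes it cheap to break things on purpose.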
Each phase produced something that actually ran. That was the rule: no phase was complete until I could verify it in the terminal. I started with a basic kind cluster and kubectl, then built up through the core objects, ConfigMaps and Secrets, Ingress, namespaces and RBAC, a full end-to-end application, a Helm chart deployed with Argo CD, and finally Prometheus and Grafana for cluster observability.
The full lab is in the GitHub repo with manifests for every phase and a Kubernetes cheatsheet that maps everything back to AWS equivalents.
The AWS brain
I've been working in AWS for years. ECS, Lambda, CloudFormation. That's my daily context. Kubernetes has its own vocabulary and I found the fastest way to learn it was to map everything to something I already knew.
The core objects map cleanly: a Pod is an ECS Task, a Deployment is an ECS Service, a Service is an ALB Target Group. ConfigMaps are Parameter Store, Secrets are Secrets Manager, ServiceAccounts with RBAC are IAM roles for workloads. Helm charts are CloudFormation templates and Argo CD is CodePipeline. Once I had that mental model, Kubernetes clicked. The concepts aren't foreign. The names just changed.
What surprised me
Labels are everything. In AWS, things connect via ARNs and resource IDs. In Kubernetes, things connect via labels. A Deployment finds its Pods because the selector.matchLabels matches template.metadata.labels. A Service finds its Pods the same way. Get the labels wrong and nothing connects, with no useful error message. This tripped me up early and often.
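The matching lives in three places, and all three have to agree. A minimal sketch:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web        # must match the Pod template labels below
  template:
    metadata:
      labels:
        app: web      # a typo here and the Deployment manages nothing
    spec:
      containers:
        - name: web
          image: nginx:1.27
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web          # the Service finds the same Pods by the same label
  ports:
    - port: 80
      targetPort: 80
```

When traffic mysteriously goes nowhere, `kubectl get endpoints web` showing an empty list is usually the tell: the Service's selector matched zero Pods.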
RBAC transfers directly. If you understand IAM policies, Kubernetes RBAC is immediately readable. Role equals policy, RoleBinding equals attaching a policy to a principal, ServiceAccount equals an IAM role for a workload. The mental model is the same. The syntax is different.
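Side by side, the IAM analogy is easy to see. A sketch of the three pieces (names and namespace are mine, for illustration):

```yaml
# ServiceAccount ~ IAM role for a workload
apiVersion: v1
kind: ServiceAccount
metadata:
  name: pod-reader
  namespace: dev
---
# Role ~ IAM policy: which verbs on which resources
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-pods
  namespace: dev
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
# RoleBinding ~ attaching the policy to the principal
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods-binding
  namespace: dev
subjects:
  - kind: ServiceAccount
    name: pod-reader
    namespace: dev
roleRef:
  kind: Role
  name: read-pods
  apiGroup: rbac.authorization.k8s.io
```

And `kubectl auth can-i list pods -n dev --as=system:serviceaccount:dev:pod-reader` is the equivalent of the IAM policy simulator: a one-liner to check what a principal can actually do.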
GitOps is satisfying. Pushing a change to GitHub and watching Argo CD reconcile it without touching kubectl feels like the right way to run things. Phase 7 was my favourite.
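The whole loop hangs off a single Argo CD Application resource that points at the Git repo. A sketch, with a hypothetical repo URL and chart path standing in for the real ones:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: demo-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/kcna-lab.git  # placeholder URL
    targetRevision: main
    path: charts/demo-app                             # hypothetical chart path
  destination:
    server: https://kubernetes.default.svc
    namespace: demo
  syncPolicy:
    automated:
      prune: true      # delete cluster resources removed from Git
      selfHeal: true   # revert manual drift back to the Git state
```

With `selfHeal` on, even a hand-edited Deployment snaps back to whatever Git says, which is the moment GitOps stops being a slogan and starts being a demo.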
Observability is one command. The kube-prometheus-stack installs Prometheus, Grafana, Alertmanager, and pre-built dashboards in one helm install. You go from zero to full cluster metrics in minutes. It's resource-heavy on a local cluster, but it works.
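The one command, plus a small values file to trim the footprint on a laptop. The install commands use the real prometheus-community chart; the specific values below are my own trimming choices, not the stack's defaults:

```yaml
# Install:
#   helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
#   helm install monitoring prometheus-community/kube-prometheus-stack \
#     -n monitoring --create-namespace -f values.yaml
#
# values.yaml -- a minimal sketch for a local lab cluster:
grafana:
  adminPassword: change-me      # assumption: pick your own
alertmanager:
  enabled: false                # optional: drop Alertmanager to save resources
prometheus:
  prometheusSpec:
    retention: 2d               # short retention; a lab doesn't need weeks of metrics
```

Then port-forward to the Grafana Service (its name depends on the Helm release name, `monitoring` here) and the pre-built dashboards are already wired up:
`kubectl port-forward svc/monitoring-grafana -n monitoring 3000:80`.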
The exam is coming up. I'll post the result.