KCNA: P1 & P2 Cloud Arch. Fundyzz & Containers w/Docker

This blog post covers Cloud Architecture Fundamentals in preparation for the KCNA.

  • Autoscaling
    • Reactive
    • Predictive
    • Vertical
    • Horizontal
    • Cluster Autoscaler
      • Scales the # of nodes in the cluster
    • HPAs
      • Scale the # of replicas in an app (see the HPA sketch after this list)
    • VPAs
      • Scale the resource requests & limits of a pod
    • KEDA
      • A ScaledObject defines what to scale & which triggers to use, including scaling to 0
  • Serverless
    • Event driven & billed accordingly upon execution
    • Knative & OpenFaaS & CloudEvents
  • Cloud Native Personas
    • DevOps Engineer
    • Site Reliability Engineer
    • CloudOps Engineer
    • Security Engineer
    • DevSecOps Engineer
    • Full Stack Developer
    • Data Engineer
  • Open Standards
    • Docker, OCI, runc
    • Podman – image-spec
    • Firecracker – runtime-spec
    • Container Network Interface (CNI)
      • Calico
    • Container Storage Interface (CSI)
      • Rook
    • Container Runtime Interface (CRI)
      • Goes to containerd, kata, firecracker, etc.
    • Service Mesh Interface (SMI)
      • Istio!
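The HPA bullet is easiest to see w/a quick example. This isn't from the original notes, just a minimal sketch that assumes an nginx Deployment w/CPU requests set & metrics-server running:

# kubectl autoscale deployment nginx --cpu-percent=50 --min=1 --max=5
# kubectl get hpa

KEDA's ScaledObject plays a similar role, but swaps the CPU metric for event-source triggers & allows scaling to zero.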

This blog post covers Containers w/Docker in preparation for the KCNA.

  • Docker Desktop
    • docker vs docker desktop
    • k8s w/docker desktop
  • Containers:
    • History womp womp
    • Linux
      • user, pid, network, mount, uts, & ipc namespaces, & cgroups
  • Images
    • container vs container image
    • registry
    • tag
    • layers
    • union filesystem
    • digest vs ids
  • Running Containers
    • docker run -it --rm…
  • Container Networking Services/Volumes
    • docker run --rm nginx
    • docker run -d --rm nginx
    • docker ps
    • docker run -d --rm -P nginx
    • curl
    • docker run -d --rm -p 12345:80 nginx
    • docker exec -it bash
  • Building Containers
    • https://github.com/abishekvashok/cmatrix
    • docker pull, images, build . -t,
    • vim
      • FROM
      • #maintainer
      • LABEL
    • docker run --rm -it sh
      • git clone
        • apk update, add git
      • history
    • vim
      • history
    • docker buildx create, use, build --no-cache --platform linux/amd64, . -t, --push (see the Dockerfile sketch after this list)
    • docker system prune
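Not the exact Dockerfile from the lesson, but a minimal sketch of the pieces called out above (FROM, LABEL, apk add git, & the cmatrix clone); the actual cmatrix compile steps are left out:

FROM alpine
LABEL org.opencontainers.image.authors="you@example.com"
RUN apk update && apk add git
RUN git clone https://github.com/abishekvashok/cmatrix /cmatrix

For the multi-arch piece, the buildx command ends up looking roughly like docker buildx build --platform linux/amd64,linux/arm64 -t <registry>/<image> --push . where the registry/image placeholders are yours to fill in.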

KCNA: P6 Delivery Boy!

This blog post covers Cloud Application Delivery in preparation for the KCNA.

  • ArgoCD

ArgoCD: https://github.com/spurin/argo-f-yourself

# kubectl create namespace argocd
# kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
# watch --differences kubectl get all -n argocd
# kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d
# kubectl -n argocd get svc
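Not part of the original command list, but the usual way to reach the UI w/o exposing the service is a quick port-forward; then browse to https://localhost:8080 & log in as admin w/the password decoded above:

# kubectl -n argocd port-forward svc/argocd-server 8080:443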

KCNA: P5 Automate Em’ All

This blog post covers K8s Automation, Telemetry, & Observability in preparation for the KCNA.

  • Helm Charts
  • Prometheus
  • Grafana
  • Probes & Kubelet
  • When Nodes Fail

Helm Charts: they're magic, simply put.. conduct your standard Linux practices & you can navigate thru your Helm chart install

# apt update && apt install -y git tree
# curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
# cd flappy-app
# vim Chart.yaml
# vim values.yaml
# helm install flappy-app ./flappy-app-0.1.0.tgz
# export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=flappy-app,app.kubernetes.io/instance=flappy-app" -o jsonpath="{.items[0].metadata.name}"); export CONTAINER_PORT=$(kubectl get pod --namespace default $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}"); echo "Visit http://127.0.0.1:8080 to use your application"; kubectl --namespace default port-forward $POD_NAME 8080:$CONTAINER_PORT
# kubectl get deployment; echo; kubectl get pods; echo; kubectl get svc
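One step the sequence above skips: the .tgz being installed has to be packaged first. A minimal sketch, run from inside the chart directory (the name & version in Chart.yaml produce the flappy-app-0.1.0.tgz file name):

# helm package .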

Prometheus & Grafana: 1st – install a specific Helm chart version of the kube-prometheus-stack. 2nd – add an nginx pod every 30 seconds. 3rd – use the cluster IPs to watch the pods show up in Prometheus & Grafana.

# apt update && apt install -y git
# curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
# helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
# helm search repo prometheus-community/kube-prometheus-stack -l
# helm install my-observability prometheus-community/kube-prometheus-stack --version 55.5.0
# kubectl get all -A
# kubectl get svc
# for i in {1..10}; do kubectl run nginx-${i} --image=nginx; sleep 30; done
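The notes use the cluster IPs directly; port-forwarding is another option. A sketch only; the service names follow the release name, so confirm them w/kubectl get svc first, & the default Grafana login is admin / prom-operator unless the chart values were changed:

# kubectl port-forward svc/my-observability-kube-prom-prometheus 9090:9090
# kubectl port-forward svc/my-observability-grafana 3000:80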
# helm uninstall my-observability
# kubectl -n kube-system delete service/my-observability-kube-prom-kubelet --now

When Nodes Fail:

  • Start as Healthy Nodes
  • Deployment
  • Stop kubelet & fail
    • Documentation informs us that the node controller waits ~5 minutes (the default eviction timeout) before pods are marked Unknown & evicted
  • Grep/watch to see pods moving from node to node (see the sketch after this list)
  • If a node stops reporting & accepting pods it becomes NotReady; existing workloads continue if permitted; after ~5 minutes the node controller evicts the pods onto healthy nodes, & kubectl describe shows their status as Unknown
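A sketch of the watching/grepping mentioned above (worker-2 is just an example node name):

# watch --differences "kubectl get pods -o wide"
# kubectl get nodes
# kubectl describe node worker-2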

Probes & The Kubelet:

  • Health Checks tell k8s what to do w/a container..
    • Liveness Probe
      • ARE YOU ALIVE!? if fails, kubelet restarts container
    • Readiness Probe
      • Ready for traffic? if fails, kubelet tells API to remove pod from svc endpt
        • Does NOT restart
    • Startup Probe
      • Kubelet checks whether the application inside the container has started
        • While the startup probe is running, liveness & readiness checks are paused.. once it succeeds, the other probes take over
    • Probes don't act on their own; the kubelet does the acting (see the sketch after this list)
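Not from the original post; a minimal sketch of all three probes on an nginx pod, just to make the wording above concrete:

# cat <<EOF > nginx_probes.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-probes
spec:
  containers:
  - name: nginx
    image: nginx
    startupProbe:
      httpGet:
        path: /
        port: 80
      failureThreshold: 30
      periodSeconds: 2
    readinessProbe:
      httpGet:
        path: /
        port: 80
      periodSeconds: 5
    livenessProbe:
      httpGet:
        path: /
        port: 80
      periodSeconds: 10
EOF
# kubectl apply -f nginx_probes.yaml
# kubectl describe pod nginx-probes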

KCNA: P4 K8s Going Deep Homeboy.. part 2

This blog post covers Kubernetes Deep Dive in preparation for the KCNA.

  • RBAC

Security: 1st – create a yaml file for an ubuntu pod & confirm you can shell in as root. 2nd – update the spec for a non-privileged user that can still escalate privileges. 3rd – add a spec setting so privilege escalation is not allowed.

# kubectl run ubuntu --image=spurin/rootshell:latest -o yaml --dry-run=client -- sleep infinity | tee ubuntu_secure.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: ubuntu
  name: ubuntu
spec:
  containers:
  - args:
    - sleep
    - infinity
    image: spurin/rootshell:latest
    name: ubuntu
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
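The verification in between isn't captured here; roughly (a sketch), you apply the file, confirm the container runs as root, then delete the pod before re-creating it w/the securityContext below:

# kubectl apply -f ubuntu_secure.yaml
# kubectl exec -it ubuntu -- whoami
# kubectl delete pod ubuntu --now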
cat <<EOF > ubuntu_secure.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: ubuntu
  name: ubuntu
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 1000
  containers:
  - args:
    - sleep
    - infinity
    image: spurin/rootshell:latest
    name: ubuntu
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
EOF
cat <<EOF > ubuntu_secure.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: ubuntu
  name: ubuntu
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 1000
  containers:
  - args:
    - sleep
    - infinity
    image: spurin/rootshell:latest
    name: ubuntu
    resources: {}
    securityContext:
      allowPrivilegeEscalation: false
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
EOF

Stateful Sets: 1st – create yaml w/3 replicas, delete a pod, & watch the magic as it re-appears! 2nd – create a ClusterIP service. 3rd – curl from a shell & apply a rolling update.
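The commands below assume the nginx StatefulSet & its Service already exist; a minimal sketch of that setup (it mirrors the manifest shown under Persistent Storage further down, minus the volume bits):

cat <<EOF > statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  serviceName: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
EOF
# kubectl apply -f statefulset.yaml
# kubectl create service clusterip nginx --tcp=80:80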

# kubectl delete pod/nginx-2 --now
# kubectl get pods -o wide
# kubectl patch statefulset/nginx -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":2}}}}'
# kubectl set image statefulset/nginx nginx=nginx:alpine && kubectl rollout status statefulset/nginx

Persistent Storage: 1st – update the yaml spec w/a volume mount. 2nd – shell into the pod & create a note. 3rd – delete the pod, watch it spin back up, & shell back in to see the note.

cat <<EOF > statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  serviceName: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        volumeMounts:
        - name: nginx
          mountPath: /data
  volumeClaimTemplates:
  - metadata:
      name: nginx
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "local-path"
      resources:
        requests:
          storage: 1Gi
EOF
# kubectl delete -f statefulset.yaml && kubectl apply -f statefulset.yaml
# watch --differences kubectl get pods -o wide
# kubectl get pvc
# kubectl get pv
# kubectl exec -it nginx-0 -- bash
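Or non-interactively, a sketch of the note-survives-deletion check:

# kubectl exec nginx-0 -- bash -c "echo still-here > /data/note.txt"
# kubectl delete pod/nginx-0 --now
# kubectl exec nginx-0 -- cat /data/note.txt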
# kubectl delete statefulset/nginx --now; for i in 0 1 2; do kubectl delete pvc/nginx-nginx-$i --now; done; kubectl delete service/nginx; rm statefulset.yaml

RBAC: 1st – echo & base64 -d the CLIENT_KEY_DATA or CERTIFICATE_AUTHORITY_DATA from the kubeconfig to inspect the certificates.

RBAC – Cluster Role Binding: 1st – create your own super-user group by creating a ClusterRole & then binding the two together.

# kubectl get clusterrolebindings -o wide
# kubectl describe ClusterRole/cluster-admin
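The create commands themselves aren't captured above; a sketch of building a super-user group (the names are illustrative):

# kubectl create clusterrole super-admin --verb='*' --resource='*'
# kubectl create clusterrolebinding super-admins --clusterrole=super-admin --group=super-admins
# kubectl auth can-i '*' '*' --as-group="super-admins" --as="anyone"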

Manual approach – genrsa for the key & CSR, set CSR_DATA & CSR_USER, build the CSR yaml, apply it, then get csr & base64 -d the issued cert.
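A sketch of that manual flow, w/a made-up user named batman:

# openssl genrsa -out batman.key 4096
# openssl req -new -key batman.key -out batman.csr -subj "/CN=batman/O=developers"
# CSR_DATA=$(base64 batman.csr | tr -d '\n')
# CSR_USER=batman
# cat <<EOF > csr.yaml
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: ${CSR_USER}
spec:
  request: ${CSR_DATA}
  signerName: kubernetes.io/kube-apiserver-client
  usages:
  - client auth
EOF
# kubectl apply -f csr.yaml
# kubectl certificate approve batman
# kubectl get csr batman -o jsonpath='{.status.certificate}' | base64 -d > batman.crt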

Automate RBAC Kubeconfig file: 1st – configure key & CSR, apply CSR, capture k8s info, create kubeconfig file, & cleanup k8s CSRs

# apt update && apt install -y git jq openssl
# git clone https://github.com/spurin/kubeconfig-creator.git
# cd kubeconfig-creator

Watch-Only RBAC Group: 1st – create a ClusterRole & ClusterRoleBinding. 2nd – see if the user can access w/the specific role. 3rd – run the shell script & get nodes.

# ./kubeconfig_creator.sh -u uatu -g cluster-watchers
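The script run above assumes the watch-only ClusterRole & binding already exist; a sketch (the group name cluster-watchers comes from the command above, the verbs are my guess at "watch-only"):

# kubectl create clusterrole cluster-watcher --verb=list,watch --resource='*'
# kubectl create clusterrolebinding cluster-watchers --clusterrole=cluster-watcher --group=cluster-watchers
# kubectl auth can-i list nodes --as-group="cluster-watchers" --as="uatu"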

Roles & RoleBindings w/Namespaces: 1st – create a NS w/a Role & RoleBinding. 2nd – see if the user can access the NS. 3rd – run the shell script to add more users.

# kubectl create namespace gryffindor
# kubectl -n gryffindor create role gryffindor-admin --verb='*' --resource='*'
# kubectl -n gryffindor create rolebinding gryffindor-admin --role=gryffindor-admin --group=gryffindor-admins
# kubectl -n gryffindor auth can-i '*' '*' --as-group="gryffindor-admins" --as="harry"
# ./kubeconfig_creator.sh -u harry -g gryffindor-admins -n gryffindor

KCNA: P4 K8s Going Deep Homeboy.. part 1

This blog post covers Kubernetes Deep Dive in preparation for the KCNA.

API: run kubectl proxy in the background to interact w/the API, curl the localhost proxy address, POST a pod spec, and delete/rm pods.

kubectl proxy & echo $! > /var/run/kubectl-proxy.pid
curl --location 'http://localhost:8001/api/v1/namespaces/default/pods?pretty=true' \
--header 'Content-Type: application/json' \
--header 'Accept: application/json' \
--data '{
    "kind": "Pod",
    "apiVersion": "v1",
    "metadata": {
        "name": "nginx",
        "creationTimestamp": null,
        "labels": {
            "run": "nginx"
        }
    },
    "spec": {
        "containers": [
            {
                "name": "nginx",
                "image": "nginx",
                "resources": {}
            }
        ],
        "restartPolicy": "Always",
        "dnsPolicy": "ClusterFirst"
    },
    "status": {}
}'
kubectl get pods
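The pod can also be deleted straight through the proxy before shutting it down; a sketch:

curl -X DELETE 'http://localhost:8001/api/v1/namespaces/default/pods/nginx'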
kill $(cat /var/run/kubectl-proxy.pid)
rm /var/run/kubectl-proxy.pid
kubectl get pods

Scheduling: create a yaml file naming a custom scheduler; the pod sits Pending cuz no such scheduler exists yet; git clone & run the example scheduler, & view the pod

kubectl run nginx --image=nginx -o yaml --dry-run=client | tee nginx_scheduler.yaml
cat <<EOF > nginx_scheduler.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  schedulerName: my-scheduler
  containers:
  - image: nginx
    name: nginx
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
EOF
kubectl apply -f nginx_scheduler.yaml
apt update && apt install -y git jq
git clone https://github.com/spurin/simple-kubernetes-scheduler-example.git
cd simple-kubernetes-scheduler-example; more my-scheduler.sh
# ./my_scheduler.sh 
🚀 Starting the custom scheduler...
🎯 Attempting to bind the pod nginx in namespace default to node worker-2
🎉 Successfully bound the pod nginx to node worker-2
# kubectl get pods -o wide
NAME    READY   STATUS    RESTARTS   AGE   IP          NODE       NOMINATED NODE   READINESS GATES
nginx   1/1     Running   0          28s   10.42.2.3   worker-2   <none>           <none>

Node Name: change the spec to use nodeName for a specific node & notice the variance in spec usage

# cat <<EOF > nginx_scheduler.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  nodeName: worker-2
  containers:
  - image: nginx
    name: nginx
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
EOF
# kubectl apply -f nginx_scheduler.yaml
# kubectl get pods -o wide

Node Selector: now change the spec from nodeName to a nodeSelector matching a node label

# cat <<EOF > nginx_scheduler.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  nodeSelector:
    kubernetes.io/hostname: worker-1
  containers:
  - image: nginx
    name: nginx
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
EOF
# kubectl apply -f nginx_scheduler.yaml
# kubectl get pods -o wide

Storage: create an ubuntu yaml, add a volume mount to the file, & shell into the ubuntu pod to see the storage mount

# kubectl run --image=ubuntu ubuntu -o yaml --dry-run=client --command sleep infinity | tee ubuntu_emptydir.yaml

# cat <<EOF > ubuntu_emptydir.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: ubuntu
  name: ubuntu
spec:
  containers:
  - command:
    - sleep
    - infinity
    image: ubuntu
    name: ubuntu
    resources: {}
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  volumes:
  - name: cache-volume
    emptyDir:
      medium: Memory
status: {}
EOF
# kubectl apply -f ubuntu_emptydir.yaml
# kubectl get pods -o wide
# kubectl exec -it ubuntu -- bash

Persistent Storage: create yaml for a PV & PVC (mention the PV name in the PVC spec).

# cat <<EOF > manual_pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: manual-pv001
spec:
  storageClassName: local-path
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/var/lib/rancher/k3s/storage/manual-pv001"
    type: DirectoryOrCreate
EOF
------------------------------------
# cat <<EOF > manual_pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: manual-claim
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 10Gi
  storageClassName: local-path
  volumeName: manual-pv001
EOF
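The apply & a quick check aren't shown above; a sketch:

# kubectl apply -f manual_pv.yaml -f manual_pvc.yaml
# kubectl get pv,pvc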

Dynamic PVC: create yaml for the PVC, edit the pod yaml to mount both the manual & dynamic claims, then add a nodeSelector for the specific node desired.

# cat <<EOF > dynamic_pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dynamic-claim
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 10Gi
  storageClassName: local-path
EOF
# kubectl run --image=ubuntu ubuntu -o yaml --dry-run=client --command sleep infinity | tee ubuntu_with_volumes.yaml
# cat <<EOF > ubuntu_with_volumes.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: ubuntu
  name: ubuntu
spec:
  containers:
  - command:
    - sleep
    - infinity
    image: ubuntu
    name: ubuntu
    resources: {}
    volumeMounts:
    - mountPath: /manual
      name: manual-volume
    - mountPath: /dynamic
      name: dynamic-volume
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  volumes:
    - name: manual-volume
      persistentVolumeClaim:
        claimName: manual-claim
    - name: dynamic-volume
      persistentVolumeClaim:
        claimName: dynamic-claim
status: {}
EOF
# cat <<EOF > ubuntu_with_volumes.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: ubuntu
  name: ubuntu
spec:
  nodeSelector:
    kubernetes.io/hostname: worker-1
  containers:
  - command:
    - sleep
    - infinity
    image: ubuntu
    name: ubuntu
    resources: {}
    volumeMounts:
    - mountPath: /manual
      name: manual-volume
    - mountPath: /dynamic
      name: dynamic-volume
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  volumes:
    - name: manual-volume
      persistentVolumeClaim:
        claimName: manual-claim
    - name: dynamic-volume
      persistentVolumeClaim:
        claimName: dynamic-claim
status: {}
EOF
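The apply steps aren't captured either; a sketch, noting the pod has to be re-created rather than patched once the nodeSelector changes:

# kubectl apply -f dynamic_pvc.yaml
# kubectl delete pod ubuntu --now; kubectl apply -f ubuntu_with_volumes.yaml
# kubectl get pvc,pv
# kubectl exec -it ubuntu -- df -h /manual /dynamic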

Network Policies: 1st – create a pod, expose the port, & curl to see access. 2nd – apply a policy restricting access by label… can't access now..

# kubectl run nginx --image=nginx
# kubectl expose pod/nginx --port=80
# kubectl run --rm -i --tty curl --image=curlimages/curl:8.4.0 --restart=Never -- sh
# curl nginx.default.svc.cluster.local
If you don't see a command prompt, try pressing enter.
~ $ curl nginx.default.svc.cluster.local
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
~ $ exit
pod "curl" deleted
# cat <<EOF > networkpolicy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-nginx-access
  namespace: default
spec:
  podSelector:
    matchLabels:
      run: nginx
  policyTypes:
    - Ingress
  ingress:
    - from:
      - podSelector:
          matchLabels:
            run: curl
EOF
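Applying it & re-testing is the payoff (a sketch; this assumes the cluster's CNI actually enforces NetworkPolicy, e.g. Calico or Cilium). From a pod whose label doesn't match run: curl, the same request now times out, while the curl pod above still gets through:

# kubectl apply -f networkpolicy.yaml
# kubectl run --rm -i --tty other --image=curlimages/curl:8.4.0 --restart=Never -- sh
~ $ curl --max-time 5 nginx.default.svc.cluster.local
~ $ exit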

Pod Disruption Budgets: 1st – create a deployment w/replicas & cordon the control-plane node. 2nd – drain worker-2 & notice the pods shuffle onto the remaining schedulable nodes, leaving worker-2 empty. 3rd – uncordon everything. 4th – create a PDB & notice you can't cordon/drain past what the PDB allows.

# kubectl create deployment nginx --image=nginx --replicas=5
# kubectl cordon control-plane && kubectl delete pods -l app=nginx --field-selector=spec.nodeName=control-plane --now
# kubectl drain worker-2 --delete-emptydir-data=true --ignore-daemonsets=true
# kubectl uncordon control-plane worker-1 worker-2
# kubectl create pdb nginx --selector=app=nginx --min-available=2
# kubectl cordon control-plane worker-1 worker-2; kubectl drain control-plane worker-1 worker-2 --delete-emptydir-data=true --ignore-daemonsets=true
# kubectl uncordon worker-1 worker-2

Security:

Wanna secure EKS w/CA & TLS?

Goal:

DO YOU HAVE A KUBERNETES CLUSTER! IS IT INSECURE!? …. I'm out of breath & getting dizzy, Idk how those commercials bring that outside voice & energy – – it's exhausting!

Alright, I'm back – all this will show you is how to secure your cluster. Below you can see how components authenticate w/one another in Kubernetes once you have a certificate authority (CA) & create the certificates needed to bootstrap your Kubernetes cluster.

  • Please note – there are two (2) controllers, two (2) workers, & a Kubernetes API Load Balancer

Lessons Learned:

  • Permit/Provision CA
  • Create Kubernetes client certs & kubelet client certs for two (2) nodes:
    • Admin Client Certificate
    • Kubelet Client Certificate
    • Manager Client Cert
    • Kube-Proxy Client Certificate
    • Kube-Scheduler Client Certificate
  • Kubernetes API server certificate
  • Kubernetes service account key pair
  • If you follow these lessons learned, you will not let this happen to you – don’t be Karen.
  • The CA is created to sign the other certificates, & those certs can then use the CA to prove legitness (it's a word, look it up in the dictionary.. urban dictionary..) so no fakers get through

Admin Client Certificate:
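(The screenshot that lived here is gone; as a stand-in, a minimal openssl sketch of an admin client cert signed by the CA. File names & the subject are illustrative, & tooling such as cfssl is often used for this instead:)

# openssl genrsa -out admin.key 2048
# openssl req -new -key admin.key -out admin.csr -subj "/CN=admin/O=system:masters"
# openssl x509 -req -in admin.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out admin.crt -days 365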

Kubelet Client Certificate:

Manager Client Cert:

Kube-Proxy Client Certificate:

Kube-Scheduler Client Certificate:

  • These gifs are TOOOOO good; straight out of the infomercials of the late 90s/early 2000s

Create Kubernetes API server certificate:

Create Kubernetes service account key pair:

When you see Smoke – – – there is a Kubernetes Cluster being Tested..

Goal:

Stuff happens, so when it does, it is good to know what to do w/your Kubernetes cluster. The answer is – drum roll please… smoke testing, tahhh-dahhh! This is useful not just when stuff hits the fan, but also to check that the key features are working properly, because the goal is to verify the health of the cluster.

Example of smoke tests of the Kubernetes cluster conducted will contain:

  • Data Encryption
  • Deployment
  • Port Forwarding
  • Logs
  • Exec
  • Services

Lessons Learned:

  • Cluster Data Encryption
  • Deployments Work
  • Remote Access works w/Port Forwarding
  • Access Container Logs w/Kubectl Logs
  • Execute Commands inside the Container
  • Services Work
  • Create test data for secret key
  • Ensure secret key is stored
  • Create & verify deployment
  • Snag that pod name & store in variable
  • Forward port to nginx pod
  • Open new terminal – – – & curl IP address/port
  • Get logs from nginx pod
  • Confirm you can run “exec” command & will see the version
  • Test to see if service can be deployed
  • Get node port from variable
  • Curl the IP address/port (see the command sketch after this list)
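Roughly, the commands behind those bullets look like this (a sketch; the secret-in-etcd check also needs etcdctl & the etcd certs on a controller, the port-forward & the curl against it run in separate terminals, & the worker IP placeholder is yours to fill in):

# kubectl create secret generic kubernetes-the-hard-way --from-literal="mykey=mydata"
# kubectl create deployment nginx --image=nginx
# POD_NAME=$(kubectl get pods -l app=nginx -o jsonpath="{.items[0].metadata.name}")
# kubectl port-forward $POD_NAME 8080:80
# curl --head http://127.0.0.1:8080
# kubectl logs $POD_NAME
# kubectl exec -ti $POD_NAME -- nginx -v
# kubectl expose deployment nginx --port 80 --type NodePort
# NODE_PORT=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
# curl -I http://<worker-external-ip>:${NODE_PORT}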

Let's save Martha aka Minikube..

Goal:

The Bat Signal has been lit in the sky, it's time to suit up, & don't let the kryptonite divide us. Fix the broken Minikube cluster.

Lessons Learned:

  • Start up the Bat Mobile (Minikube)
    • See screenshot for a whole slew of commands
  • Create Object in YAML files to Confirm Cluster is up
    • Kubectl apply -f
    • Kubectl get po/pv/pvc

Start up the Bat Mobile (Minikube):

See screenshot for a whole slew of commands (sketched out after this list):

  • Minikube start
  • sudo chown -R
    • Change directory owner
      • .kube
      • .minikube
  • Minikube config set
    • Update the version
  • Sudo apt install -y docker.io
    • Get docker
  • Kubectl apply -f
  • Kubectl get
    • po
    • pv
    • pvc
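Sketched out (the Kubernetes version & yaml file name are illustrative):

# minikube start
# sudo chown -R $USER:$USER ~/.kube ~/.minikube
# minikube config set kubernetes-version v1.28.0
# sudo apt install -y docker.io
# kubectl apply -f some-object.yaml
# kubectl get po,pv,pvc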

Create Object in YAML files to Confirm Cluster is up:

  • Kubectl apply -f
  • Kubectl get po/pv/pvc

Blueprint to Build & Use a K3s Cluster

Goal:

Wanna see how the sausage is made – – – a K3s cluster. We'll bootstrap a K3s cluster, install K3s on multiple servers, & have it Frankenstein together into a multi-server cluster. Let's get cookin'

Lessons Learned:

  • Build that K3s server
    • Install the K3s server
    • List nodes
    • Get node token
  • Build two (2) K3s worker nodes
    • Install K3s on each worker node w/the private IP address & node token
  • Run on New Cluster
    • Create pod yaml file
    • Create, check, & view pod

Build that K3s server:

  • Install the K3s server
  • List nodes
  • Get node token

Build K3s worker nodes:

Install K3s on each worker node w/the private IP address & node token:

Run on New Cluster:

Create pod yaml file:

Create, check, & view pod:
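Pulling the whole flow together, a sketch (the private IP & token are placeholders, grabbed from the server as shown):

# curl -sfL https://get.k3s.io | sh -
# kubectl get nodes
# cat /var/lib/rancher/k3s/server/node-token

Then on each worker, join using the server's private IP & that token:

# curl -sfL https://get.k3s.io | K3S_URL=https://<server-private-ip>:6443 K3S_TOKEN=<node-token> sh -

Back on the server, run something on the new cluster:

# kubectl run nginx --image=nginx -o yaml --dry-run=client | tee pod.yaml
# kubectl apply -f pod.yaml
# kubectl get pods -o wide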

Come on, let's Explore Terraform State w/Kubernetes Containers

Let's blend some pimp tools together & launch something into space – cyber space that is. Below is an example showing how useful it is to understand Terraform state, deploy resources w/Kubernetes, & see how Terraform maintains the state file to track all your changes along w/deploying containers!

  • Check Terraform & Minikube Status
  • Clone Terraform Code & Switch Proper Directory
    • Switch directories
  • Deploy Terraform code & Observe State File
    • Terraform Init
    • Terraform Plan
    • Terraform Apply
  • Terraform State File Tracks Resources
    • Terraform State
    • Terraform Destroy
  • terraform version (a command sketch follows this list)
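The bullets above map to a short command flow; a sketch (the repo path is a placeholder, not the one used in the lab):

# terraform version
# git clone <terraform-code-repo> && cd <terraform-code-repo>
# terraform init
# terraform plan
# terraform apply
# terraform state list
# terraform destroy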

Switch directories:

  • Terraform –
    • Init
    • Plan
    • Apply

Terraform State File Tracks Resources:

Terraform Plan:

Terraform Apply:

Terraform Destroy: