KCNA: P3 Kubernetes Fundyzzz..part 1

This blog post covers Kubernetes fundamentals in preparation for the KCNA.

  • Init-Containers
  • Pods
  • Namespaces
  • Labels

K8s Pods – Init Containers: create a yaml file w/an init container that runs before the main container, apply it, & then watch the logs

cat <<EOF > countdown-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: countdown-pod
spec:
  initContainers:
  - name: init-countdown
    image: busybox
    command: ['sh', '-c', 'for i in \$(seq 120 -1 0); do echo init-countdown: \$i; sleep 1; done']

  containers:
  - name: main-container
    image: busybox
    command: ['sh', '-c', 'while true; do count=\$((count + 1)); echo main-container: sleeping for 30 seconds - iteration \$count; sleep 30; done']
EOF
kubectl apply -f countdown-pod.yaml
kubectl get pods -o wide
until kubectl logs pod/countdown-pod -c init-countdown --follow --pod-running-timeout=5m; do sleep 1; done; until kubectl logs pod/countdown-pod -c main-container --follow --pod-running-timeout=5m; do sleep 1; done
kubectl get pods -o wide

K8s Pods: create a pod, ping & curl it from the worker nodes & another pod, generate yaml w/a dry run, create another yaml file w/a sidecar container added, & output the sidecar's logs

kubectl run nginx --image=nginx
kubectl get pods
kubectl logs pod/nginx
kubectl get pods -o wide
NGINX_IP=$(kubectl get pods -o wide | awk '/nginx/ { print $6 }'); echo $NGINX_IP
ping -c 3 $NGINX_IP
ssh worker-1 ping -c 3 $NGINX_IP
ssh worker-2 ping -c 3 $NGINX_IP
echo $NGINX_IP
kubectl run -it --rm curl --image=curlimages/curl:8.4.0 --restart=Never -- http://$NGINX_IP
kubectl run ubuntu --image=ubuntu -- sleep infinity
kubectl exec -it ubuntu -- bash
apt update && apt install -y curl
kubectl run nginx --image=nginx --dry-run=client -o yaml | tee nginx.yaml
cat <<EOF > combined.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: mypod
  name: mypod
spec:
  containers:
  - image: nginx
    name: webserver
    resources: {}
  - image: ubuntu
    name: sidecar
    args:
    - /bin/sh
    - -c
    - while true; do echo "\$(date +'%T') - Hello from the sidecar"; sleep 5; if [ -f /tmp/crash ]; then exit 1; fi; done
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
EOF
kubectl apply -f combined.yaml
MYPOD_IP=$(kubectl get pods -o wide | awk '/mypod/ { print $6 }'); echo $MYPOD_IP
kubectl logs pod/mypod -c sidecar
kubectl delete pod/mypod --now

    Namespaces: create a namespace, run a pod in it, change the config context's default namespace to the new one, & switch back & forth to notice pods only show up under their own namespace

    kubectl get ns
    kubectl create ns thissuxns
    kubectl -n thissuxns run nginx --image=nginx
    kubectl get pods -o wide
    kubectl -n thissuxns get pods
    kubectl config view
    kubectl config set-context --current --namespace=thissuxns
    kubectl get pods -o wide
    kubectl config set-context --current --namespace=default
    kubectl get pods -o wide

    Labels: start a pod on port 80, expose it using its label as the selector, apply a new yaml file w/three colour-labelled pods, & then get pods for just one particular label selector

    kubectl run nginx --image nginx --port 80 -o yaml --dry-run=client
    kubectl run nginx --image nginx --port 80
    kubectl expose pod/nginx --dry-run=client -o yaml
    kubectl expose pod/nginx
    cat <<EOF > coloured_pods.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      creationTimestamp: null
      labels:
        run: ubuntu
        colour: red
      name: ubuntu-red
    spec:
      containers:
      - command:
        - sleep
        - infinity
        image: ubuntu
        name: ubuntu
        resources: {}
      dnsPolicy: ClusterFirst
      restartPolicy: Always
    status: {}
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      creationTimestamp: null
      labels:
        run: ubuntu
        colour: green
      name: ubuntu-green
    spec:
      containers:
      - command:
        - sleep
        - infinity
        image: ubuntu
        name: ubuntu
        resources: {}
      dnsPolicy: ClusterFirst
      restartPolicy: Always
    status: {}
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      creationTimestamp: null
      labels:
        run: ubuntu
        colour: pink
      name: ubuntu-pink
    spec:
      containers:
      - command:
        - sleep
        - infinity
        image: ubuntu
        name: ubuntu
        resources: {}
      dnsPolicy: ClusterFirst
      restartPolicy: Always
    status: {}
    EOF
    kubectl apply -f coloured_pods.yaml
    kubectl get pods -o wide
    kubectl get all --selector colour=green

    Wanna secure EKS w/CA & TLS?

    Goal:

    DO YOU HAVE A KUBERNETES CLUSTER?! IS IT INSECURE?! …. I'm out of breath & getting dizzy, Idk how those commercials bring that outside voice & energy – – it's exhausting!

    Alright, I'm back – all this will show you is how to secure your cluster. Below you can see how the pieces of a Kubernetes cluster authenticate w/one another once you provision a certificate authority (CA) & create the certificates needed to bootstrap the cluster.

    • Please note – there are two (2) controllers, two (2) workers, & a Kubernetes API Load Balancer

    Lessons Learned:

    • Provision the CA
    • Create Kubernetes client certs & kubelet client certs for two (2) nodes:
      • Admin Client Certificate
      • Kubelet Client Certificate
      • Controller Manager Client Certificate
      • Kube-Proxy Client Certificate
      • Kube-Scheduler Client Certificate
    • Kubernetes API server certificate
    • Kubernetes service account key pair
    • If you follow these lessons learned, you will not let this happen to you – don’t be Karen.
    • The CA is created to sign the other certificates, so every cert it issues can prove its legitness (it's a word, look it up in the dictionary.. urban, dictionary..) & no fakers get into the cluster – see the sketch below
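
    The original screenshots aren't reproduced here, so here's a minimal sketch of provisioning the CA, assuming cfssl/cfssljson tooling & these file names (compact JSON just to keep it short):

    echo '{"signing":{"default":{"expiry":"8760h"},"profiles":{"kubernetes":{"usages":["signing","key encipherment","server auth","client auth"],"expiry":"8760h"}}}}' > ca-config.json
    echo '{"CN":"Kubernetes","key":{"algo":"rsa","size":2048},"names":[{"O":"Kubernetes","OU":"CA"}]}' > ca-csr.json
    # generate the self-signed CA -> ca.pem & ca-key.pem
    cfssl gencert -initca ca-csr.json | cfssljson -bare ca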

    Admin Client Certificate:
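
    A hedged sketch of the admin client cert, assuming the cfssl flow & CA files from the sketch above; the admin cert lands in the system:masters group:

    echo '{"CN":"admin","key":{"algo":"rsa","size":2048},"names":[{"O":"system:masters"}]}' > admin-csr.json
    # sign the admin client cert w/the CA -> admin.pem & admin-key.pem
    cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin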

    Kubelet Client Certificate:
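
    A hedged sketch for the kubelet client certs – one per worker, CN of system:node:<hostname> in the system:nodes group, w/the node's hostname & private IP passed to -hostname (the IP is a placeholder):

    echo '{"CN":"system:node:worker-1","key":{"algo":"rsa","size":2048},"names":[{"O":"system:nodes"}]}' > worker-1-csr.json
    # swap in worker-1's private IP, then repeat this pair of commands for worker-2
    cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes -hostname=worker-1,<worker-1-private-ip> worker-1-csr.json | cfssljson -bare worker-1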

    Controller Manager Client Certificate:
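
    Same hedged cfssl pattern – CN & group are system:kube-controller-manager:

    echo '{"CN":"system:kube-controller-manager","key":{"algo":"rsa","size":2048},"names":[{"O":"system:kube-controller-manager"}]}' > kube-controller-manager-csr.json
    cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager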

    Kube-Proxy Client Certificate:
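
    Same hedged pattern – kube-proxy's CN is system:kube-proxy & its group is system:node-proxier:

    echo '{"CN":"system:kube-proxy","key":{"algo":"rsa","size":2048},"names":[{"O":"system:node-proxier"}]}' > kube-proxy-csr.json
    cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy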

    Kube-Scheduler Client Certificate:
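
    Same hedged pattern – CN & group are system:kube-scheduler:

    echo '{"CN":"system:kube-scheduler","key":{"algo":"rsa","size":2048},"names":[{"O":"system:kube-scheduler"}]}' > kube-scheduler-csr.json
    cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler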

    • These gifs are TOOOOO good for infomercials from the late 90s/early 2000s

    Create Kubernetes API server certificate:
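
    A hedged sketch for the API server cert – the -hostname list has to include every name/IP clients will use: the controllers, the API load balancer, localhost, the first IP of the service CIDR, & the in-cluster DNS names (everything in <...> is a placeholder for your environment):

    echo '{"CN":"kubernetes","key":{"algo":"rsa","size":2048},"names":[{"O":"Kubernetes"}]}' > kubernetes-csr.json
    cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes -hostname=<controller-ips>,<api-lb-ip>,<service-cidr-first-ip>,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster.local kubernetes-csr.json | cfssljson -bare kubernetes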

    Create Kubernetes service account key pair:
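
    A hedged sketch for the service account key pair – this is what the controller manager uses to sign service account tokens:

    echo '{"CN":"service-accounts","key":{"algo":"rsa","size":2048}}' > service-account-csr.json
    cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes service-account-csr.json | cfssljson -bare service-account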

    When you see Smoke – – – there is a Kubernetes Cluster being Tested..

    Goal:

    Stuff happens, so when it does – it is good to know what to do w/your Kubernetes cluster. The answer is – drum roll please… smoke testing, tahhh-dahhh! This is useful not just when stuff hits the fan, but to check that the features you know can break are working properly, because the goal is to verify the health of the cluster.

    Examples of smoke tests conducted on the Kubernetes cluster will cover:

    • Data Encryption
    • Deployment
    • Port Forwarding
    • Logs
    • Exec
    • Services

    Lessons Learned:

    • Cluster Data Encryption
    • Deployments Work
    • Remote Access works w/Port Forwarding
    • Access Container Logs w/Kubectl Logs
    • Execute Commands inside the Container
    • Services Work
    • Create test data for secret key
    • Ensure secret key is stored
    • Create & verify deployment
    • Snag that pod name & store in variable
    • Forward port to nginx pod
    • Open new terminal – – – & curl IP address/port
    • Get logs from nginx pod
    • Confirm you can run the “exec” command & see the version
    • Test to see if the service can be deployed
    • Get the node port from a variable
    • Curl the node IP address/port (a hedged command sketch follows this list)
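
    A hedged command sketch of the smoke tests above – the secret name, deployment, & ports are just examples, & the etcdctl check assumes you're on a controller w/encryption at rest configured (add whatever --endpoints/cert flags your etcd needs):

    kubectl create secret generic smoke-test --from-literal="mykey=mydata"
    sudo ETCDCTL_API=3 etcdctl get /registry/secrets/default/smoke-test | hexdump -C   # value should be prefixed k8s:enc:, not plaintext
    kubectl create deployment nginx --image=nginx
    kubectl get deployments
    POD_NAME=$(kubectl get pods -l app=nginx -o jsonpath="{.items[0].metadata.name}"); echo $POD_NAME
    kubectl port-forward $POD_NAME 8080:80   # in a second terminal: curl -I http://127.0.0.1:8080
    kubectl logs $POD_NAME
    kubectl exec -it $POD_NAME -- nginx -v
    kubectl expose deployment nginx --port 80 --type NodePort
    NODE_PORT=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}'); echo $NODE_PORT
    curl -I http://<worker-node-ip>:$NODE_PORT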

    Let's save Martha aka Minikube..

    Goal:

    The Bat Signal has been lit in the sky, it's time to suit up, & don't let the kryptonite divide us. Fix the broken Minikube cluster.

    Lessons Learned:

    • Start up the Bat Mobile (Minikube)
      • See screenshot for a whole slew of commands
    • Create Object in YAML files to Confirm Cluster is up
      • kubectl apply -f
      • kubectl get po/pv/pvc

    Start up the Bat Mobile (Minikube):

    See screenshot for a whole slew of commands (a hedged sketch follows this list):

    • minikube start
    • sudo chown -R
      • Change directory owner
        • .kube
        • .minikube
    • minikube config set
      • Update the Kubernetes version
    • sudo apt install -y docker.io
      • Get docker
    • kubectl apply -f
    • kubectl get
      • po
      • pv
      • pvc
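
    A hedged sketch of those commands, assuming the broken pieces are file ownership, the pinned Kubernetes version, & a missing container runtime (the version # is hypothetical):

    minikube start || true                            # first attempt – note what it complains about
    sudo chown -R $USER:$USER ~/.kube ~/.minikube     # fix directory ownership
    minikube config set kubernetes-version v1.27.4    # hypothetical target version
    sudo apt install -y docker.io
    minikube start
    kubectl get nodes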

    Create Object in YAML files to Confirm Cluster is up:

    • kubectl apply -f
    • kubectl get po/pv/pvc (see the sketch below)
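
    A minimal sketch of a PV, PVC, & pod to confirm the cluster is actually scheduling & binding (all names & paths are hypothetical):

    cat <<EOF > confirm.yaml
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: test-pv
    spec:
      storageClassName: manual
      capacity:
        storage: 1Gi
      accessModes:
      - ReadWriteOnce
      hostPath:
        path: /tmp/test-pv
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: test-pvc
    spec:
      storageClassName: manual
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: test-pod
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - mountPath: /data
          name: data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: test-pvc
    EOF
    kubectl apply -f confirm.yaml
    kubectl get po,pv,pvc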

    Blueprint to Build & Use a K3s Cluster

    Goal:

    Wanna see how the sausage is made – – – a K3s cluster. We'll bootstrap a K3s cluster, install K3s on multiple servers, & Frankenstein it together to form a multi-server cluster. Let's get cookin'

    Lessons Learned:

    • Build that K3s server
      • Install the K3s server
      • List nodes
      • Get the node token
    • Build two (2) K3s worker nodes
      • Install K3s on each worker node w/the server's private IP address & node token
    • Run on the New Cluster
      • Create a pod yaml file
      • Create, check, & view the pod

    Build that K3s server:

    • Install the K3s server
    • List nodes
    • Get the node token (a hedged sketch follows)
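
    A hedged sketch using the official K3s install script:

    # on the server node
    curl -sfL https://get.k3s.io | sh -
    sudo kubectl get nodes
    sudo cat /var/lib/rancher/k3s/server/node-token   # copy this token for the workers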

    Build K3s worker nodes:

    Install K3s on each worker node w/the server's private IP address & node token:
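
    A hedged sketch for the workers – the server IP & token are placeholders you grab from the step above:

    # on each worker node
    curl -sfL https://get.k3s.io | K3S_URL=https://<server-private-ip>:6443 K3S_TOKEN=<node-token> sh -
    # back on the server, both workers should register
    sudo kubectl get nodes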

    Run on New Cluster:

    Create pod yaml file:
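
    A minimal pod manifest to run on the new cluster (the name & image are just examples):

    cat <<EOF > nginx-pod.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
    EOF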

    Create, check, & view pod:
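
    Then apply & inspect it w/kubectl (via sudo, since K3s writes a root-owned kubeconfig by default):

    sudo kubectl apply -f nginx-pod.yaml
    sudo kubectl get pods -o wide
    sudo kubectl describe pod/nginx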

    Come on, let's Explore Terraform State w/Kubernetes Containers

    Let's blend some pimp tools together & launch something into space – cyber space that is. Below is an example to show how useful it is to understand Terraform state, deploy resources w/Kubernetes, & see how Terraform maintains the state file to track all your changes along w/deploying containers!

    • Check Terraform & Minikube Status
    • Clone Terraform Code & Switch to the Proper Directory
      • Switch directories
    • Deploy Terraform code & Observe State File
      • Terraform Init
      • Terraform Plan
      • Terraform Apply
    • Terraform State File Tracks Resources
      • Terraform State
      • Terraform Destroy
    • terraform version (a hedged command sketch of the full workflow follows)
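
    A hedged command sketch of the workflow – the repo URL & directory are placeholders since the original doesn't list them:

    terraform version
    minikube status
    git clone <terraform-code-repo-url>
    cd <terraform-code-directory>
    terraform init
    terraform plan
    terraform apply
    terraform state list      # the state file now tracks every deployed resource
    terraform destroy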

    Switch directories:

    • Terraform –
      • Init
      • Plan
      • Apply

    Terraform State File Tracks Resources:

    Terraform Plan:

    Terraform Apply:

    Terraform Destroy:

    A sprinkle of Minikube & a pinch of Helm

    Goal:

    So you got a Minikube cluster, right? Now let's use Helm to deploy a microservice stack!

    Lessons Learned:

    • Start Minikube Cluster
    • Unpack Helm, Move It, Install, & Init (a hedged sketch follows this list)
      • tar -xvzf ~/helm.tar.gz
      • sudo mv
      • sudo helm init
    • Install Namespace w/Helm
      • sudo kubectl
      • sudo helm install
      • sudo kubectl
    • Edit to use NodePort & Configure Nginx to Proxy
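
    A hedged, Helm v2-era sketch of the unpack/install/release steps (chart, release, & namespace names are placeholders):

    minikube start
    tar -xvzf ~/helm.tar.gz
    sudo mv linux-amd64/helm /usr/local/bin/helm
    sudo helm init --wait
    sudo kubectl create namespace <namespace>
    sudo helm install --name <release-name> --namespace <namespace> <chart>
    sudo kubectl -n <namespace> get pods,svc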

    Start Minikube Cluster:

    tar -xvzf ~/helm.tar.gz:

    sudo mv:

    sudo helm init:

    Install Namespace w/Helm:

    sudo kubectl:

    sudo helm install:

    sudo kubectl:

    Edit to use NodePort & Configure Nginx to Proxy:

    Release the Helm Kraken!

    Goal:

    Humans aren't constant, but Helm versions are! So this is an efficient way to release & clarify your versions of charts in Helm. Then for giggles we'll roll back to the original state, cuz – why not?

    Lessons Learned:

    • Update index & version #
      • Update values.yaml
      • Update chart.yaml
    • Initialize
      • Helm install
    • Release the chart & confirm version #
      • Check the node port & see it launched!
    • Update index data & version #
      • Update the files again
    • Rollback it on back now! – – – to Previous Version #:

    Update index & version #:

    • Update the index & the type of service as well as the nodePort #

    Update values.yaml:

    Update Chart.yaml:

    • Update the version # (a hypothetical sketch follows)
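
    A hypothetical example of the edits – the exact keys & values depend on your chart, so treat these as placeholders:

    sed -i 's/^version:.*/version: 0.2.0/' Chart.yaml          # bump the chart version
    sed -i 's/type: ClusterIP/type: NodePort/' values.yaml      # switch the service type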

    Initialize & Patch Helm:

    Helm install:

    Release the chart & confirm version #:

    Check the node port & see it launched!

    Update Index Data & Version #:

    Update the files again:

    helm ls --short & upgrade the release:

    • Just go to the values & Chart yaml files – – just update something! (see the sketch below)
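
    A hedged sketch – the release name & chart path are placeholders:

    helm ls --short
    helm upgrade <release-name> <chart-path>    # re-release w/the bumped version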

    Rollback it on back now! – – – to Previous Version #:
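
    A hedged sketch of rolling back to the first revision (release name is a placeholder):

    helm history <release-name>
    helm rollback <release-name> 1
    helm history <release-name>    # the rollback shows up as a new revision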

    Advance your Helm Charts!

    Goal:

    Hmmm I wish there was a way to validate the resources deployed in Kubernetes.. wait, I just had an epiphany, or was it a download from the universe? Either way, Helm can help w/a special test hook to validate what you deploy & operate.

    Lessons Learned:

    • Create a Test Manifest in the Helm Chart's Location
    • Validate, Release, & Test the App

    Create a Test Manifest in the Helm Chart's Location:

    Create the directory along w/the new manifest:
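
    A minimal sketch of a Helm test-hook manifest, assuming your chart's templates/ directory & a service named after the release (all names are hypothetical; test-success is the Helm v2-era hook spelling, plain "test" in Helm 3):

    mkdir -p templates/tests
    cat <<EOF > templates/tests/test-connection.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: "{{ .Release.Name }}-test-connection"
      annotations:
        "helm.sh/hook": test-success
    spec:
      containers:
      - name: test
        image: curlimages/curl
        command: ['curl', '{{ .Release.Name }}:80']
      restartPolicy: Never
    EOF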

    Validate, Release, & Test the App:

    cd into the top directory & run helm install & kubectl:
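
    A hedged sketch (Helm v2 syntax; drop --name for Helm 3) – the chart path & release name are placeholders:

    cd <chart-top-directory>
    helm lint .
    helm install --name <release-name> .
    helm test <release-name>
    kubectl get pods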

    Lemme teach you to – – … Install Helm

    Goal:

    Everyone likes bread-n-butter, unless you physically can't cuz of some gluten thing or cuz you're lactose intolerant.. BUT IF YOU'RE NOT, check this basic bread-n-butter stuff out homie..

    First you're gonna install Helm, k? Next configure the repository, yah? Following that we'll release the chart to see what we're rollin' with, mmkay? Lastly we'll clean up our messy cluster w/, you guessed it – HELMMMMMMMMMMMMM.

    Lessons Learned:

    • Install & Configure Helm
    • Create a Helm Release
    • Verify the Release & Clean

    Install & Configure Helm:
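
    A hedged sketch assuming Helm 3 & the bitnami repo as the example repository:

    curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
    helm version
    helm repo add bitnami https://charts.bitnami.com/bitnami
    helm repo update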

    Create a Helm Release:
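
    The release name & chart below are just examples:

    helm install my-release bitnami/nginx
    helm status my-release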

    Verify the Release & Clean:
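
    And to verify then clean up (same example release name):

    helm ls
    kubectl get pods,svc
    helm uninstall my-release
    helm ls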