microk8s or mini-me?

Pre-Reqx:

# sudo snap install kubectl --classic
# kubectl version --client
# sudo snap install microk8s --classic
# sudo usermod -a -G microk8s <username>
# newgrp microk8s
# microk8s start
# microk8s kubectl get nodes
# cd $HOME
# mkdir .kube
# sudo chown -R <username> ~/.kube
# cd .kube
# microk8s config > config

K8s Cluster:

  • Might have to add SSH keys – so go to your GitHub account: Settings → SSH and GPG keys → New SSH key
# git clone git@github.com:<docker_hub_name>/react-article-display.git
# cd react-article-display
# docker build -t <docker_hub_name>/react-article-display:demo .
# docker run -d -p 3000:80 <docker_hub_name>/react-article-display:demo
localhost:3000
# docker stop <see string above from previous command>
# docker login
# docker push <docker_hub_name>/react-article-display:demo
# kubectl run my-app-image --image=<docker_hub_name>/react-article-display:demo
# kubectl get pods
# kubectl port-forward my-app-image 3000:80

KCNA: P3 Kubernetes Fundyzzz..part 2

This blog post covers Kubernetes fundamentals in preparation for the KCNA.

  • Deployments & ReplicaSets
  • Services
  • Jobs
  • ConfigMaps
  • Secrets

Deployments & ReplicaSets: create a deployment from an image (which creates a ReplicaSet), version the YAML alteration by changing the scale or image name, view the rollout history, & undo/revert the deployment back to a specific revision/annotation.

kubectl create deployment nginx --image=nginx --dry-run=client -o yaml | tee nginx-deployment.yaml | kubectl apply -f -
kubectl scale deployment/nginx --replicas=4; watch kubectl get pods -o wide
kubectl rollout history deployment/nginx
kubectl get pods -o wide
kubectl rollout undo deployment/nginx --to-revision=1 && kubectl rollout status deployment/nginx
kubectl delete deployment/nginx --now

Jobs/CronJobs: create a job & watch its pod roll out, alter the YAML file to change the pod count, grep to capture the pod name & see the log of the answer, then also set a CronJob schedule for when to launch a pod.

kubectl create job calculatepi --image=perl:5.34.0 -- "perl" "-Mbignum=bpi" "-wle" "print bpi(2000)"
watch kubectl get jobs
kubectl apply -f calculatepi.yaml && sleep 1 && watch kubectl get pods -o wide
PI_POD=$(kubectl get pods | grep calculatepi | awk '{print $1}'); echo $PI_POD
kubectl logs $PI_POD
kubectl create cronjob calculatepi --image=perl:5.34.0 --schedule="* * * * *" -- "perl" "-Mbignum=bpi" "-wle" "print bpi(2000)"
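The grep/awk pipeline above just pulls the generated pod name out of the listing. Here's the same extraction run against a canned line, so you can see what it captures (the pod-name suffix is made up for illustration):

```shell
# illustrative `kubectl get pods` line for the job's pod (suffix is made up)
pods='calculatepi-x7kkq   0/1   Completed   0   30s'
PI_POD=$(echo "$pods" | grep calculatepi | awk '{print $1}')
echo "$PI_POD"   # calculatepi-x7kkq
```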

ConfigMaps: create configmap, edit, run, logs, delete…rinse & repeat.

kubectl create configmap colour-configmap --from-literal=COLOUR=red --from-literal=KEY=value
kubectl describe configmap/colour-configmap
cat configmap-colour.properties
kubectl delete configmap colour-configmap
kubectl create configmap colour-configmap --from-env-file=configmap-colour.properties
kubectl run --image=ubuntu --dry-run=client --restart=Never -o yaml ubuntu --command bash -- -c 'env; sleep infinity' | tee env-dump-pod.yaml
kubectl delete -f env-dump-pod.yaml --now; kubectl apply -f env-dump-pod.yaml
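For the ConfigMap keys to actually show up in that env dump, the pod has to reference it. A minimal sketch of the tweak to env-dump-pod.yaml (the container name matches what the generator produced; `envFrom` is the added piece):

```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: ubuntu
  name: ubuntu
spec:
  containers:
  - name: ubuntu
    image: ubuntu
    command: ["bash", "-c", "env; sleep infinity"]
    envFrom:                      # added: pull all ConfigMap keys in as env vars
    - configMapRef:
        name: colour-configmap
  restartPolicy: Never
```

After re-applying, `kubectl logs ubuntu` should show COLOUR & KEY among the environment variables.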

Secrets: create a colour secret, echo to encode/decode values w/base64, then apply the pod & cat the logs

kubectl create secret generic colour-secret --from-literal=COLOUR=red --from-literal=KEY=value --dry-run=client -o yaml
kubectl create secret generic colour-secret --from-literal=COLOUR=red --from-literal=KEY=value
echo -n value | base64
echo dmFsdWU= | base64 -d
kubectl get secret/colour-secret -o yaml
kubectl apply -f env-dump-pod.yaml
kubectl logs ubuntu
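For reference, the stored Secret comes back from `kubectl get secret/colour-secret -o yaml` looking roughly like this – note the values are only base64-encoded, not encrypted:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: colour-secret
type: Opaque
data:
  COLOUR: cmVk      # base64 of "red"
  KEY: dmFsdWU=     # base64 of "value"
```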

Services:

  • Can cover multiple types –
    • ClusterIP
    • NodePort
    • LoadBalancer
    • ExternalName
    • Headless

Service – ClusterIP: create a deployment on port 80 w/3 replicas, expose it, get the service IP, & shell into a curl pod to curl it

kubectl create deployment nginx --image=spurin/nginx-debug --port=80 --replicas=3 -o yaml --dry-run=client
kubectl create deployment nginx --image=spurin/nginx-debug --port=80 --replicas=3
kubectl expose deployment/nginx --dry-run=client -o yaml
kubectl expose deployment/nginx
kubectl run --rm -it curl --image=curlimages/curl:8.4.0 --restart=Never -- sh

Service – NodePort: expose the deployment as a NodePort, grep to get the control-plane IP & the NodePort port, then curl the pod information through the node

kubectl expose deployment/nginx --type=NodePort
CONTROL_PLANE_IP=$(kubectl get nodes -o wide | grep control-plane | awk '{print $6}'); echo $CONTROL_PLANE_IP
NODEPORT_PORT=$(kubectl get services | grep NodePort | grep nginx | awk -F'[:/]' '{print $2}'); echo $NODEPORT_PORT
curl ${CONTROL_PLANE_IP}:${NODEPORT_PORT}
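The NodePort extraction relies on awk treating both `:` and `/` as field separators, so the port falls out as field 2. The same parse against a canned service line (values are made up for illustration):

```shell
# illustrative `kubectl get services` line for the NodePort service (values made up)
svc='nginx   NodePort   10.152.183.42   <none>   80:31234/TCP   5m'
NODEPORT_PORT=$(echo "$svc" | awk -F'[:/]' '{print $2}')
echo "$NODEPORT_PORT"   # 31234
```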

Service – LoadBalancer: expose the deployment via a LoadBalancer mapping port 8080 to 80, grep to get the LB IP & port, then scale the deployment & watch the responding pod change across the scaled replicas

kubectl expose deployment/nginx --type=LoadBalancer --port 8080 --target-port 80
LOADBALANCER_IP=$(kubectl get service | grep LoadBalancer | grep nginx | awk '{split($0,a," "); split(a[4],b,","); print b[1]}'); echo $LOADBALANCER_IP
LOADBALANCER_PORT=$(kubectl get service | grep LoadBalancer | grep nginx | awk -F'[:/]' '{print $2}'); echo $LOADBALANCER_PORT
kubectl scale deployment/nginx --replicas=1; watch --differences "curl ${LOADBALANCER_IP}:${LOADBALANCER_PORT} 2>/dev/null"
watch --differences "curl ${LOADBALANCER_IP}:${LOADBALANCER_PORT} 2>/dev/null"

Service – ExternalName: create a deployment on port 80, expose it, create an ExternalName service that aliases a service DNS name, & finally shell into a curl pod & curl the deployment by name

kubectl create deployment nginx-blue --image=spurin/nginx-blue --port=80
kubectl expose deployment/nginx-blue
kubectl create service externalname my-service --external-name nginx-red.default.svc.cluster.local
kubectl run --rm -it curl --image=curlimages/curl:8.4.0 --restart=Never -- sh
curl nginx-blue
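The `kubectl create service externalname` command above produces a Service that is just a DNS alias – no selector, no endpoints of its own – roughly:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: ExternalName
  externalName: nginx-red.default.svc.cluster.local
```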

KCNA: P3 Kubernetes Fundyzzz..part 1

This blog post covers Kubernetes fundamentals in preparation for the KCNA.

  • Init-Containers
  • Pods
  • Namespaces
  • Labels

K8s Pods – Init Containers: create a YAML file w/an init container that runs before the main container, apply it, & then watch the logs

cat <<EOF > countdown-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: countdown-pod
spec:
  initContainers:
  - name: init-countdown
    image: busybox
    command: ['sh', '-c', 'for i in \$(seq 120 -1 0); do echo init-countdown: \$i; sleep 1; done']

  containers:
  - name: main-container
    image: busybox
    command: ['sh', '-c', 'while true; do count=\$((count + 1)); echo main-container: sleeping for 30 seconds - iteration \$count; sleep 30; done']
EOF
kubectl apply -f countdown-pod.yaml
kubectl get pods -o wide
until kubectl logs pod/countdown-pod -c init-countdown --follow --pod-running-timeout=5m; do sleep 1; done; until kubectl logs pod/countdown-pod -c main-container --follow --pod-running-timeout=5m; do sleep 1; done
kubectl get pods -o wide

K8s Pods: create a pod from an image, check its logs & IP, curl/shell into the pod, create another YAML file w/an image combined w/a sidecar, & output the sidecar's responses from the pod's containers

kubectl run nginx --image=nginx
kubectl get pods
kubectl logs pod/nginx
kubectl get pods -o wide
NGINX_IP=$(kubectl get pods -o wide | awk '/nginx/ { print $6 }'); echo $NGINX_IP
ping -c 3 $NGINX_IP
ssh worker-1 ping -c 3 $NGINX_IP
ssh worker-2 ping -c 3 $NGINX_IP
echo $NGINX_IP
kubectl run -it --rm curl --image=curlimages/curl:8.4.0 --restart=Never -- http://$NGINX_IP
kubectl run ubuntu --image=ubuntu -- sleep infinity
kubectl exec -it ubuntu -- bash
apt update && apt install -y curl
kubectl run nginx --image=nginx --dry-run=client -o yaml | tee nginx.yaml
cat <<EOF > combined.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: mypod
  name: mypod
spec:
  containers:
  - image: nginx
    name: webserver
    resources: {}
  - image: ubuntu
    name: sidecar
    args:
    - /bin/sh
    - -c
    - while true; do echo "\$(date +'%T') - Hello from the sidecar"; sleep 5; if [ -f /tmp/crash ]; then exit 1; fi; done
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
EOF
kubectl apply -f combined.yaml
MYPOD_IP=$(kubectl get pods -o wide | awk '/mypod/ { print $6 }'); echo $MYPOD_IP
kubectl logs pod/mypod -c sidecar
kubectl delete pod/mypod --now

Namespaces: make an ns, run an image in it, change the config context default to the new ns, & switch back & forth to notice that not all pods were created under that ns

kubectl get ns
kubectl create ns thissuxns
kubectl -n thissuxns run nginx --image=nginx
kubectl get pods -o wide
kubectl -n thissuxns get pods
kubectl config view
kubectl config set-context --current --namespace=thissuxns
kubectl get pods -o wide
kubectl config set-context --current --namespace=default
kubectl get pods -o wide

Labels: start a pod on port 80, expose it, apply a new YAML file of 3 pods w/different colour label options, & then get pods for just one particular label selector

kubectl run nginx --image nginx --port 80 -o yaml --dry-run=client
kubectl run nginx --image nginx --port 80
kubectl expose pod/nginx --dry-run=client -o yaml
kubectl expose pod/nginx
cat <<EOF > coloured_pods.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: ubuntu
    colour: red
  name: ubuntu-red
spec:
  containers:
  - command:
    - sleep
    - infinity
    image: ubuntu
    name: ubuntu
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
---
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: ubuntu
    colour: green
  name: ubuntu-green
spec:
  containers:
  - command:
    - sleep
    - infinity
    image: ubuntu
    name: ubuntu
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
---
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: ubuntu
    colour: pink
  name: ubuntu-pink
spec:
  containers:
  - command:
    - sleep
    - infinity
    image: ubuntu
    name: ubuntu
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
EOF
kubectl apply -f coloured_pods.yaml
kubectl get pods -o wide
kubectl get all --selector colour=green

Come on, let's Explore Terraform State w/Kubernetes Containers

Let's blend some pimp tools together & launch something into space – cyber space, that is. Below is an example to show how useful it is to understand Terraform state: deploy resources w/Kubernetes & see how Terraform maintains the state file to track all your changes, along w/deploying containers!

  • Check Terraform & Minikube Status
    • terraform version
  • Clone Terraform Code & Switch to the Proper Directory
    • Switch directories
  • Deploy Terraform Code & Observe the State File
    • Terraform Init
    • Terraform Plan
    • Terraform Apply
  • Terraform State File Tracks Resources
    • Terraform State
    • Terraform Destroy

Switch directories:

  • Terraform –
    • Init
    • Plan
    • Apply

Terraform State File Tracks Resources:

Terraform Plan:

Terraform Apply:

Terraform Destroy:
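The headings above map onto the standard Terraform workflow. As a sketch (run from the cloned code directory):

```shell
terraform version      # confirm the binary works
terraform init         # download providers & set up the backend
terraform plan         # preview changes against the current state
terraform apply        # create/update resources & write terraform.tfstate
terraform state list   # list the resources tracked in the state file
terraform destroy      # tear down everything the state file tracks
```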

Deep Pass of Secrets to a Kubernetes Container

Kubernetes is dope for data, bro! Watch how we send configuration data that's stored in Secrets & ConfigMaps into containerized applications.

  • Create a password file & store it in ….. Secrets..
  • Create the Nginx Pod

Generate a file for the secret password file & data:

vi pod.yml:
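A sketch of what pod.yml might look like here – the secret name & mount path are assumptions, since the original file isn't shown:

```yaml
# pod.yml (sketch) -- secret name & mount path are assumptions
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: passwd-volume
      mountPath: /etc/nginx/passwd   # hypothetical mount path
      readOnly: true
  volumes:
  - name: passwd-volume
    secret:
      secretName: nginx-htpasswd     # hypothetical secret name
```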

kubectl exec <POD> -- curl -u user:<PASSWORD> <IP_ADDRESS>:

Falco to Detect Threats on Containers in Kubernetes!

Falco Lombardi is… ahem.. Falco is able to detect any shady stuff going on in your Kubernetes environment in no time.

  • Create a Falco Rules File to Scan the Container
  • Run Falco to Obtain a Report of ALL the Activity
  • Create a rule to scan the container – basically, this script's rule will:
  • Run Falco for up to a minute & see if anything is detected
    • -r = the rules file to load
    • -M = max duration to run, in seconds
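As a sketch, a minimal local rules file for a run like `falco -r falco_rules.local.yaml -M 60` could look like this – the rule name & condition are illustrative, not the exact rule from the post:

```yaml
# falco_rules.local.yaml (sketch) -- rule name & condition are illustrative
- rule: shell_in_container
  desc: Detect a shell being spawned inside a container
  condition: container.id != host and proc.name = bash
  output: "Shell opened in a container (user=%user.name container=%container.id)"
  priority: WARNING
```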