microk8s or mini-me?

Pre-Reqs:

# sudo snap install kubectl --classic
# kubectl version --client
# sudo snap install microk8s --classic
# sudo usermod -a -G microk8s <username>
# newgrp microk8s
# cd $HOME
# mkdir -p .kube
# microk8s config > .kube/config
# sudo chown -R <username> ~/.kube
# microk8s start
# microk8s kubectl get nodes

K8s Cluster:

  • Might have to add SSH keys – so go to your GitHub account, Settings, SSH and GPG keys, & add a new SSH key
# git clone git@github.com:<docker_hub_name>/react-article-display.git
# cd react-article-display
# docker build -t <docker_hub_name>/react-article-display:demo .
# docker run -d -p 3000:80 <docker_hub_name>/react-article-display:demo
Browse to http://localhost:3000
# docker stop <container ID printed by the previous command>
# docker login
# docker push <docker_hub_name>/react-article-display:demo
# kubectl run my-app-image --image=<docker_hub_name>/react-article-display:demo
# kubectl get pods
# kubectl port-forward my-app-image 3000:80

KCNA: P1 & P2 Cloud Arch. Fundyzz & Containers w/Docker

This blog post covers Cloud Architecture Fundamentals in preparation for the KCNA.

  • Autoscaling
    • Reactive
    • Predictive
    • Vertical
    • Horizontal
    • Cluster Autoscaler
      • HPAs (see the sketch after this list)
        • Scale the # of replicas in an app
      • VPAs
        • Scale the resource requests & limits of a pod
    • KEDA
      • A ScaledObject defines what should scale & what the triggers are, including scaling to 0
  • Serverless
    • Event-driven & billed per execution
    • Knative & OpenFaaS & CloudEvents
  • Cloud Native Personas
    • DevOps Engineer
    • Site Reliability Engineer
    • CloudOps Engineer
    • Security Engineer
    • DevSecOps Engineer
    • Full Stack Developer
    • Data Engineer
  • Open Standards
    • Docker, OCI, runc
    • Podman – image-spec
    • Firecracker – runtime-spec
    • Container Network Interface (CNI)
      • Calico
    • Container Storage Interface (CSI)
      • Rook
    • Container Runtime Interface (CRI)
      • Lets the kubelet talk to runtimes like containerd, which can back onto runc, Kata, Firecracker, etc.
    • Service Mesh Interface (SMI)
      • Istio!
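
To make the HPA vs. VPA distinction above concrete, here's a minimal HPA sketch; the target Deployment name my-app & the thresholds are hypothetical, not from the labs:

cat <<EOF > hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app            # hypothetical Deployment to scale
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80   # add replicas when avg CPU tops 80%
EOF
kubectl apply -f hpa.yaml
kubectl get hpa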

This blog post covers Containers w/Docker in preparation for the KCNA.

  • Docker Desktop
    • docker vs docker desktop
    • k8s w/docker desktop
  • Containers:
    • History womp womp
    • Linux
      • user, pid, network, mount, uts, & ipc namespaces, plus cgroups
  • Images
    • container vs container image
    • registry
    • tag
    • layers
    • union
    • digest vs ids
  • Running Containers
    • docker run -it --rm …
  • Container Networking Services/Volumes
    • docker run --rm nginx
    • docker run -d --rm nginx
    • docker ps
    • docker run -d --rm -P nginx
    • curl
    • docker run -d --rm -p 12345:80 nginx
    • docker exec -it <container> bash
  • Building Containers
    • https://github.com/abishekvashok/cmatrix
    • docker pull, images, build . -t
    • vim (see the Dockerfile sketch below)
      • FROM
      • # maintainer comment
      • LABEL
    • docker run --rm -it sh
      • git clone
        • apk update, apk add git
      • history
    • vim
      • history
    • docker buildx create, use, build --no-cache --platform linux/amd64 . -t <tag> --push
    • docker system prune
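
Putting those bullets together, a Dockerfile for cmatrix might look like this sketch (a two-stage alpine build; the package list & layout are assumptions, not the exact lab file):

cat <<EOF > Dockerfile
FROM alpine AS builder
RUN apk update && apk add git autoconf automake alpine-sdk ncurses-dev ncurses-static
RUN git clone https://github.com/abishekvashok/cmatrix.git /cmatrix
WORKDIR /cmatrix
# configure expects these font dirs to exist before the static build
RUN autoreconf -i && mkdir -p /usr/lib/kbd/consolefonts /usr/share/consolefonts && ./configure LDFLAGS="-static" && make

FROM alpine
LABEL org.opencontainers.image.authors="you@example.com"
RUN apk update && apk add ncurses-terminfo-base
COPY --from=builder /cmatrix/cmatrix /usr/local/bin/cmatrix
ENTRYPOINT ["cmatrix"]
EOF
docker build . -t <docker_hub_name>/cmatrix:v1
docker run --rm -it <docker_hub_name>/cmatrix:v1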

KCNA: P3 Kubernetes Fundyzzz… part 1

This blog post covers Kubernetes Fundamentals in preparation for the KCNA.

  • Init-Containers
  • Pods
  • Namespaces
  • Labels

K8s Pods – Init Containers: create a YAML file w/an init container ahead of the main container, apply it, & then watch the logs

cat <<EOF > countdown-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: countdown-pod
spec:
  initContainers:
  - name: init-countdown
    image: busybox
    command: ['sh', '-c', 'for i in \$(seq 120 -1 0); do echo init-countdown: \$i; sleep 1; done']

  containers:
  - name: main-container
    image: busybox
    command: ['sh', '-c', 'while true; do count=\$((count + 1)); echo main-container: sleeping for 30 seconds - iteration \$count; sleep 30; done']
EOF
kubectl apply -f countdown-pod.yaml
kubectl get pods -o wide
until kubectl logs pod/countdown-pod -c init-countdown --follow --pod-running-timeout=5m; do sleep 1; done; until kubectl logs pod/countdown-pod -c main-container --follow --pod-running-timeout=5m; do sleep 1; done
kubectl get pods -o wide

K8s Pods: create a pod from an image, port forward, curl/shell into the pod, create another YAML file w/the image combined w/a sidecar, & output the sidecar response of the pod's containers

kubectl run nginx --image=nginx
kubectl get pods
kubectl logs pod/nginx
kubectl get pods -o wide
NGINX_IP=$(kubectl get pods -o wide | awk '/nginx/ { print $6 }'); echo $NGINX_IP
ping -c 3 $NGINX_IP
ssh worker-1 ping -c 3 $NGINX_IP
ssh worker-2 ping -c 3 $NGINX_IP
echo $NGINX_IP
kubectl run -it --rm curl --image=curlimages/curl:8.4.0 --restart=Never -- http://$NGINX_IP
kubectl run ubuntu --image=ubuntu -- sleep infinity
kubectl exec -it ubuntu -- bash
apt update && apt install -y curl
kubectl run nginx --image=nginx --dry-run=client -o yaml | tee nginx.yaml
cat <<EOF > combined.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: mypod
  name: mypod
spec:
  containers:
  - image: nginx
    name: webserver
    resources: {}
  - image: ubuntu
    name: sidecar
    args:
    - /bin/sh
    - -c
    - while true; do echo "\$(date +'%T') - Hello from the sidecar"; sleep 5; if [ -f /tmp/crash ]; then exit 1; fi; done
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
EOF
kubectl apply -f combined.yaml
MYPOD_IP=$(kubectl get pods -o wide | awk '/mypod/ { print $6 }'); echo $MYPOD_IP
kubectl logs pod/mypod -c sidecar
kubectl delete pod/mypod --now

Namespaces: create a namespace, run an image in it, change the config context's default to the new ns, & switch back & forth to notice that not all pods are created under the same ns

kubectl get ns
kubectl create ns thissuxns
kubectl -n thissuxns run nginx --image=nginx
kubectl get pods -o wide
kubectl -n thissuxns get pods
kubectl config view
kubectl config set-context --current --namespace=thissuxns
kubectl get pods -o wide
kubectl config set-context --current --namespace=default
kubectl get pods -o wide

Labels: start a pod on port 80, utilize a selector label, apply a new YAML file w/3 options for the selector label, & then get pods for just that particular label selector

kubectl run nginx --image nginx --port 80 -o yaml --dry-run=client
kubectl run nginx --image nginx --port 80
kubectl expose pod/nginx --dry-run=client -o yaml
kubectl expose pod/nginx
cat <<EOF > coloured_pods.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: ubuntu
    colour: red
  name: ubuntu-red
spec:
  containers:
  - command:
    - sleep
    - infinity
    image: ubuntu
    name: ubuntu
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
---
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: ubuntu
    colour: green
  name: ubuntu-green
spec:
  containers:
  - command:
    - sleep
    - infinity
    image: ubuntu
    name: ubuntu
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
---
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: ubuntu
    colour: pink
  name: ubuntu-pink
spec:
  containers:
  - command:
    - sleep
    - infinity
    image: ubuntu
    name: ubuntu
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
EOF
kubectl apply -f coloured_pods.yaml
kubectl get pods -o wide
kubectl get all --selector colour=green

Let's save Martha aka Minikube…

Goal:

The Bat Signal has been lit in the sky, it's time to suit up, & don't let the kryptonite divide us. Fix the broken Minikube cluster.

Lessons Learned:

  • Start up the Bat Mobile (Minikube)
    • See screenshot for a whole slew of commands
  • Create objects in YAML files to confirm the cluster is up
    • kubectl apply -f
    • kubectl get po/pv/pvc

Start up the Bat Mobile (Minikube):

See screenshot for a whole slew of commands:

  • minikube start
  • sudo chown -R
    • Change directory owner
      • .kube
      • .minikube
  • minikube config set
    • Update the version
  • sudo apt install -y docker.io
    • Get docker
  • kubectl apply -f
  • kubectl get
    • po
    • pv
    • pvc

Create objects in YAML files to confirm the cluster is up (sketch below):

  • kubectl apply -f
  • kubectl get po/pv/pvc
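
Here's a minimal sketch of the kind of objects used to prove the cluster works; the hostPath location & the names are hypothetical:

cat <<EOF > test-objects.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /tmp/test-pv
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  storageClassName: ""
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: busybox
    image: busybox
    command: ['sh', '-c', 'sleep 3600']
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: test-pvc
EOF
kubectl apply -f test-objects.yaml
kubectl get po,pv,pvc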

Come on, let's Explore Terraform State w/Kubernetes Containers

Let's blend some pimp tools together & launch something into space – cyber space, that is. Below is an example to show how useful it is to understand Terraform state: deploy resources w/Kubernetes, & see how Terraform maintains the state file to track all your changes along w/deploying containers!

  • Check Terraform & Minikube status
  • Clone the Terraform code & switch to the proper directory
    • Switch directories
  • Deploy the Terraform code & observe the state file
    • terraform init
    • terraform plan
    • terraform apply
  • The Terraform state file tracks resources
    • terraform state
    • terraform destroy
  • terraform version

Switch directories:

  • terraform
    • init
    • plan
    • apply

Terraform State File Tracks Resources:

Terraform Plan:

Terraform Apply:

Terraform Destroy:
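
For flavor, a minimal sketch of the kind of Terraform code this lab applies; it assumes the hashicorp/kubernetes provider pointed at your kubeconfig, & the resource names are hypothetical:

cat <<EOF > main.tf
provider "kubernetes" {
  config_path = "~/.kube/config"
}

resource "kubernetes_pod" "nginx" {
  metadata {
    name = "tf-nginx"
  }
  spec {
    container {
      name  = "nginx"
      image = "nginx"
    }
  }
}
EOF
terraform init
terraform plan
terraform apply
terraform state list
terraform destroy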

Part 2: Monitoring Containers w/Prometheus

Goal:

Context:

Let's show how you can help a team migrate their infrastructure to Docker containers…

Part 2 Activities:

Monitoring the new environment w/Docker (stats) & Prometheus, you can see how to utilize cool features like Docker Compose & cAdvisor.

Lessons Learned:

  • Create a Prometheus YAML File
    • vi prometheus.yml
  • Create a Prometheus Service
    • vi docker-compose.yml
    • docker-compose up -d
    • docker ps
  • Create Stats Shell
    • Investigate cAdvisor
    • Stats in Docker
    • Doper Stats

Create a Prometheus YAML File:

  • Collect metrics & monitor w/Prometheus & cAdvisor, deploying the containers w/docker-compose

vi prometheus.yml:
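
A minimal sketch of what prometheus.yml can look like here, assuming cAdvisor will be reachable as cadvisor:8080 on the compose network:

cat <<EOF > prometheus.yml
global:
  scrape_interval: 15s
scrape_configs:
- job_name: prometheus
  static_configs:
  - targets: ['localhost:9090']
- job_name: cadvisor
  static_configs:
  - targets: ['cadvisor:8080']
EOF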

Create a Prometheus Service:

vi docker-compose.yml:
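
And a docker-compose.yml sketch that wires Prometheus & cAdvisor together; the mounts are the usual cAdvisor ones, but treat the exact tags & paths as assumptions:

cat <<EOF > docker-compose.yml
version: '3'
services:
  prometheus:
    image: prom/prometheus:latest
    ports:
    - "9090:9090"
    volumes:
    - ./prometheus.yml:/etc/prometheus/prometheus.yml
  cadvisor:
    image: gcr.io/cadvisor/cadvisor:latest
    ports:
    - "8080:8080"
    volumes:
    - /:/rootfs:ro
    - /var/run:/var/run:rw
    - /sys:/sys:ro
    - /var/lib/docker/:/var/lib/docker:ro
EOF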

docker-compose up -d:

docker ps:

Create Stats Shell:

Investigate cAdvisor:

Stats in Docker:

  • docker stats

Doper Stats:

  • vi stats.sh
  • chmod a+x stats.sh
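
stats.sh is just a small wrapper; a sketch of what it might contain:

cat <<EOF > stats.sh
#!/bin/bash
# one-shot, trimmed view of docker stats for all running containers
docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}"
EOF
chmod a+x stats.sh
./stats.sh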

Part 1: Use Grafana w/Prometheus for Alerting & Monitoring

Goal:

Context:

Let's show how you can help a team migrate their infrastructure to Docker containers…

Part 1 Activities:

See how to utilize Prometheus to monitor your toys (containers). Then you can use the gangster tool that is Grafana to visualize & alert!

Lessons Learned:

  • Pre-Req
    • SSH & elevate to sudo su - !!
  • Configure Docker
    • Open port
    • Create daemon.json file
    • Restart Docker
    • curl to test Docker
  • Update the Prometheus YAML File
  • Update the Docker-Compose YAML File
    • docker-compose.yml
    • Apply changes & rebuild
    • Ensure stuff is runnin’!
    • Open port 9090
  • Install the Docker & Monitoring DB
    • Create Grafana Data Source
    • Add Docker Dashboard
    • Add email notification
    • Alert for CPU Usage

Pre-Req:

SSH & elevate to sudo su - !!:

Configure Docker:

Open Port (in the FW for Docker reporting to Prometheus):

Create daemon.json file:
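
The daemon.json that exposes Docker's built-in Prometheus metrics endpoint usually looks like this; 0.0.0.0:9323 is the common lab value, & older engines also want experimental mode on:

cat <<EOF > /etc/docker/daemon.json
{
  "metrics-addr": "0.0.0.0:9323",
  "experimental": true
}
EOF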

Restart Docker:

curl to test Docker:
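
Assuming systemd & the 9323 port from the daemon.json above:

systemctl restart docker
curl -s localhost:9323/metrics | head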

Update the Prometheus YAML File:

  • Add the gateway & Grafana to get visualization/reporting for Docker metrics
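
The scrape job added to prometheus.yml might look like this, where <host_ip> is the Docker host's address as seen from the Prometheus container:

- job_name: docker
  static_configs:
  - targets: ['<host_ip>:9323']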

Update the Docker-Compose YAML File:

docker-compose.yml:
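
The Grafana service appended under services: in docker-compose.yml can be as small as:

  grafana:
    image: grafana/grafana:latest
    ports:
    - "3000:3000"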

Apply changes & rebuild (docker-compose up -d):

Ensure stuff is runnin’ (docker ps) & open port 9090:

Install the Docker & Monitoring DB:

Create Grafana Data Source:

Add Docker Dashboard:

Add email notification:

Alert for CPU Usage:

Deep Pass of Secrets to a Kubernetes Container

Kubernetes is dope for data, bro! Watch how we send configuration data stored in Secrets & ConfigMaps from containers to applications.

  • Create a password file & store it in ….. Secrets…
  • Create the Nginx Pod

Generate a file for the secret password file & data:
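
A sketch, assuming a simple password file; the file & Secret names are hypothetical:

echo 'S3cr3t!' > password.txt
kubectl create secret generic nginx-password --from-file=password.txt
kubectl get secret nginx-password -o yaml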

vi pod.yml:
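
pod.yml then mounts that Secret into the Nginx pod; the mount path & names here are assumptions, not the exact lab file:

cat <<EOF > pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: password
      mountPath: /etc/nginx/password
      readOnly: true
  volumes:
  - name: password
    secret:
      secretName: nginx-password
EOF
kubectl apply -f pod.yml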

kubectl exec nginx -- curl -u user:<PASSWORD> <IP_ADDRESS>:

Falco to Detect Threats on Containers in Kubernetes!

Falco Lombardi is… ahem… Falco is able to detect any shady stuff going on in your Kubernetes environment in no time.

  • Create a Falco rules file to scan the container
  • Run Falco to obtain a report of ALL the activity
  • Create the rule to scan the container; see the sketch below for what the rule boils down to
  • Run Falco for up to a minute & see if anything is detected
    • -r = rules file
    • -M = stop after this many seconds
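
A sketch of the rules file & the run; the rule here is a generic shell-in-container detector, not the exact lab rule:

cat <<EOF > falco-rules.yaml
- rule: Shell Spawned in Container
  desc: Detect a shell started inside a container
  condition: evt.type = execve and container.id != host and proc.name in (bash, sh)
  output: "Shell in container (user=%user.name container=%container.name cmd=%proc.cmdline)"
  priority: WARNING
EOF
falco -r falco-rules.yaml -M 60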

Canary in a Coal Mine to find Kubernetes & Jenkins

Goal:

Our coal mine (CICD pipeline) is struggling, so let's use canary deployments to monitor a Kubernetes cluster under a Jenkins pipeline. Alright, let's level set here…

  • You got a Kubernetes cluster, mmmmkay?
  • A pipeline from Jenkins leads to CICD deployments, yeah?
  • Now we must add the deetz (details) to get canary to deploy

Lessons Learned:

  • Run Deployment in Jenkins
  • Add Canary to Pipeline to run Deployment

Run Deployment in Jenkins:

Source Code:

  • Create a fork & update the username

Setup Jenkins (GitHub access token, Docker Hub, & KubeConfig):

Jenkins:

  • Credz
    • GitHub username & password (access token)

GitHub:

  • Generate an access token

Docker Hub:

  • Docker Hub does not generate access tokens

Kubernetes:

Add Canary to Pipeline to run Deployment:

Create Jenkins Project:

  • Multi-Branch Pipeline
  • GitHub username
  • Owner & forked repository
    • If given an option for the URL, select the deprecated visualization
  • Check it out homie!

Canary Template:

  • We have prod, but need Canary features for the stages in our deployment! (see the YAML sketch below)
  • Pay attention to:
    • track
    • spec
    • selector
    • port
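
A sketch of the canary template; the names & image are hypothetical, the point is the track: canary label that the prod Service's selector can include or exclude:

cat <<EOF > canary.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
      track: canary
  template:
    metadata:
      labels:
        app: my-app
        track: canary
    spec:
      containers:
      - name: my-app
        image: <docker_hub_name>/my-app:canary
        ports:
        - containerPort: 80
EOF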

Add the Canary Stage to the Jenkinsfile:

  • Between Docker Push & DeployToProduction
    • We add the CanaryDeployment stage! (sketch below)
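
A declarative-pipeline sketch of that stage; the stage name, file name, & shell step are assumptions about how the lab wires it up:

stage('CanaryDeployment') {
    steps {
        sh 'kubectl apply -f canary.yaml'
    }
}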

Modify the Production Deployment Stage:

    EXECUTE!!