KCNA: P3 Kubernetes Fundyzzz..part 2

This blog post covers Kubernetes fundamentals in preparation for the KCNA:

  • Deployments & ReplicaSets
  • Services
  • Jobs
  • ConfigMaps
  • Secrets

Deployments & ReplicaSets: create a deployment (& its ReplicaSet) from an image, version the change by altering the scale or image name & annotating it, view the rollout history, & roll the deployment back to a specific revision.

kubectl create deployment nginx --image=nginx --dry-run=client -o yaml | tee nginx-deployment.yaml | kubectl apply -f -
kubectl scale deployment/nginx --replicas=4; watch kubectl get pods -o wide
kubectl rollout history deployment/nginx
kubectl get pods -o wide
kubectl rollout undo deployment/nginx --to-revision=1 && kubectl rollout status deployment/nginx
kubectl delete deployment/nginx --now
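
The rollout history & undo above imply a template change (e.g. a new image) that created revision 2; a sketch of that step, which would sit before the rollout commands (the 1.25 tag & change-cause text are my examples, not from the original listing):

kubectl set image deployment/nginx nginx=nginx:1.25 && kubectl rollout status deployment/nginx
kubectl annotate deployment/nginx kubernetes.io/change-cause="bump nginx image to 1.25"
kubectl rollout history deployment/nginx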

Jobs/CronJobs: create a job & watch its pod run to completion, alter the YAML file to change how many pods run, grep the pod logs to see the answer, & then create a CronJob to schedule when a pod launches.

kubectl create job calculatepi --image=perl:5.34.0 -- "perl" "-Mbignum=bpi" "-wle" "print bpi(2000)"
watch kubectl get jobs
kubectl apply -f calculatepi.yaml && sleep 1 && watch kubectl get pods -o wide
PI_POD=$(kubectl get pods | grep calculatepi | awk '{print $1}'); echo $PI_POD
kubectl create cronjob calculatepi --image=perl:5.34.0 --schedule="* * * * *" -- "perl" "-Mbignum=bpi" "-wle" "print bpi(2000)"
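
The $PI_POD variable captured above isn't used in the listing (& calculatepi.yaml presumably came from a dry-run of the create job command, not shown); a small sketch of checking the result once the job completes:

kubectl logs $PI_POD | cut -c 1-50
kubectl get cronjob calculatepi
kubectl delete job/calculatepi cronjob/calculatepi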

ConfigMaps: create configmap, edit, run, logs, delete…rinse & repeat.

kubectl create configmap colour-configmap --from-literal=COLOUR=red --from-literal=KEY=value
kubectl describe configmap/colour-configmap
cat configmap-colour.properties
kubectl create configmap colour-configmap --from-env-file=configmap-colour.properties
kubectl run --image=ubuntu --dry-run=client --restart=Never -o yaml ubuntu --command bash -- -c 'env; sleep infinity' | tee env-dump-pod.yaml
kubectl delete -f env-dump-pod.yaml --now; kubectl apply -f env-dump-pod.yaml
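
Nothing in the listing actually wires the ConfigMap into the pod – presumably that was an edit to env-dump-pod.yaml. One way to do it (my assumption, not necessarily the original edit) is an envFrom block under the container entry:

    envFrom:
    - configMapRef:
        name: colour-configmap

After re-applying, kubectl logs ubuntu should show the COLOUR/KEY values from the ConfigMap in the env dump.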

Secrets: create a colour secret, encode/decode values w/base64, & then apply the env-dump pod & check its logs.

kubectl create secret generic colour-secret --from-literal=COLOUR=red --from-literal=KEY=value --dry-run=client -o yaml
echo -n value | base64
echo dmFsdWU= | base64 -d
kubectl get secret/colour-secret -o yaml
kubectl apply -f env-dump-pod.yaml
kubectl logs ubuntu
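
Note the create at the top of this block is only a dry-run, so the Secret still has to be created for real (re-run it without --dry-run/-o yaml). As with the ConfigMap, env-dump-pod.yaml presumably gets an envFrom block (my assumption) so the logs show the decoded values:

    envFrom:
    - secretRef:
        name: colour-secret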

Services:

  • Services come in multiple types –
    • ClusterIP
    • NodePort
    • LoadBalancer
    • ExternalName
    • Headless

Service – ClusterIP: create a deployment on port 80 w/3 replicas, expose it as a ClusterIP service, get the service IP, & shell into a curl pod to hit it.

kubectl create deployment nginx --image=spurin/nginx-debug --port=80 --replicas=3 -o yaml --dry-run=client
kubectl create deployment nginx --image=spurin/nginx-debug --port=80 --replicas=3
kubectl expose deployment/nginx --dry-run=client -o yaml
kubectl expose deployment/nginx
kubectl run --rm -it curl --image=curlimages/curl:8.4.0 --restart=Never -- sh
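
Once inside the curl pod's shell, the service is reachable by its DNS name; roughly:

curl http://nginx
curl http://nginx.default.svc.cluster.local
exit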

Service – NodePort: expose the deployment as a NodePort, grep to get the control-plane IP & the assigned NodePort, then curl that IP:port to reach the pods.

kubectl expose deployment/nginx --type=NodePort
CONTROL_PLANE_IP=$(kubectl get nodes -o wide | grep control-plane | awk '{print $6}'); echo $CONTROL_PLANE_IP
NODEPORT_PORT=$(kubectl get services | grep NodePort | grep nginx | awk -F'[:/]' '{print $2}'); echo $NODEPORT_PORT
curl ${CONTROL_PLANE_IP}:${NODEPORT_PORT}

Service – LoadBalancer: expose the deployment as a LoadBalancer on port 8080 targeting 80, grep to get the external IP & port, then scale the deployment & watch the responding pod change.

kubectl expose deployment/nginx --type=LoadBalancer --port 8080 --target-port 80
LOADBALANCER_IP=$(kubectl get service | grep LoadBalancer | grep nginx | awk '{split($0,a," "); split(a[4],b,","); print b[1]}'); echo $LOADBALANCER_IP
LOADBALANCER_PORT=$(kubectl get service | grep LoadBalancer | grep nginx | awk -F'[:/]' '{print $2}'); echo $LOADBALANCER_PORT
kubectl scale deployment/nginx --replicas=1; watch --differences "curl ${LOADBALANCER_IP}:${LOADBALANCER_PORT} 2>/dev/null"
watch --differences "curl ${LOADBALANCER_IP}:${LOADBALANCER_PORT} 2>/dev/null"

Service – ExternalName: create another deployment on port 80, expose it, create an ExternalName service that aliases a different service's cluster DNS name, & curl it from the shell pod.

kubectl create deployment nginx-blue --image=spurin/nginx-blue --port=80
kubectl expose deployment/nginx-blue
kubectl create service externalname my-service --external-name nginx-red.default.svc.cluster.local
kubectl run --rm -it curl --image=curlimages/curl:8.4.0 --restart=Never -- sh
curl nginx-blue
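
The ExternalName service above aliases nginx-red, which isn't created in this listing; assuming a red deployment exists alongside the blue one (the image name below is my guess, mirroring the blue image), it can be tested from the curl pod:

kubectl create deployment nginx-red --image=spurin/nginx-red --port=80
kubectl expose deployment/nginx-red
curl my-service

From inside the curl pod, my-service resolves as a CNAME to nginx-red.default.svc.cluster.local.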

KCNA: P3 Kubernetes Fundyzzz..part 1

This blog post covers Kubernetes fundamentals in preparation for the KCNA:

  • Init-Containers
  • Pods
  • Namespaces
  • Labels

K8s Pods – Init Containers: create a YAML file w/an init container that runs before the main container, apply it, & then watch the logs of each.

cat <<EOF > countdown-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: countdown-pod
spec:
  initContainers:
  - name: init-countdown
    image: busybox
    command: ['sh', '-c', 'for i in \$(seq 120 -1 0); do echo init-countdown: \$i; sleep 1; done']

  containers:
  - name: main-container
    image: busybox
    command: ['sh', '-c', 'while true; do count=\$((count + 1)); echo main-container: sleeping for 30 seconds - iteration \$count; sleep 30; done']
EOF
kubectl apply -f countdown-pod.yaml
kubectl get pods -o wide
until kubectl logs pod/countdown-pod -c init-countdown --follow --pod-running-timeout=5m; do sleep 1; done; until kubectl logs pod/countdown-pod -c main-container --follow --pod-running-timeout=5m; do sleep 1; done
kubectl get pods -o wide

K8s Pods: run an nginx image, curl/shell into the pod, create another YAML file combining a main container w/a sidecar, & check the sidecar's output.

kubectl run nginx --image=nginx
kubectl get pods
kubectl logs pod/nginx
kubectl get pods -o wide
NGINX_IP=$(kubectl get pods -o wide | awk '/nginx/ { print $6 }'); echo $NGINX_IP
ping -c 3 $NGINX_IP
ssh worker-1 ping -c 3 $NGINX_IP
ssh worker-2 ping -c 3 $NGINX_IP
echo $NGINX_IP
kubectl run -it --rm curl --image=curlimages/curl:8.4.0 --restart=Never -- http://$NGINX_IP
kubectl run ubuntu --image=ubuntu -- sleep infinity
kubectl exec -it ubuntu -- bash
apt update && apt install -y curl
kubectl run nginx --image=nginx --dry-run=client -o yaml | tee nginx.yaml
cat <<EOF > combined.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: mypod
  name: mypod
spec:
  containers:
  - image: nginx
    name: webserver
    resources: {}
  - image: ubuntu
    name: sidecar
    args:
    - /bin/sh
    - -c
    - while true; do echo "\$(date +'%T') - Hello from the sidecar"; sleep 5; if [ -f /tmp/crash ]; then exit 1; fi; done
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
EOF
kubectl apply -f combined.yaml
MYPOD_IP=$(kubectl get pods -o wide | awk '/mypod/ { print $6 }'); echo $MYPOD_IP
kubectl logs pod/mypod -c sidecar
kubectl delete pod/mypod --now
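
The sidecar's loop exits as soon as /tmp/crash exists, which is a handy way to watch restartPolicy: Always in action – a quick sketch (run before the delete above, or after re-applying combined.yaml):

kubectl exec mypod -c sidecar -- touch /tmp/crash
kubectl get pods -w      # the sidecar exits within ~5s & the pod's RESTARTS count ticks up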

    Namespaces: create a namespace, run an image in it, switch the config context's default namespace to the new one, & flip back & forth to notice that pods only show up in the namespace they were created in.

    kubectl get ns
    kubectl create ns thissuxns
    kubectl -n thissuxns run nginx --image=nginx
    kubectl get pods -o wide
    kubectl -n thissuxns get pods
    kubectl config view
    kubectl config set-context --current --namespace=thissuxns
    kubectl get pods -o wide
    kubectl config set-context --current --namespace=default
    kubectl get pods -o wide

    Labels: start a pod on port 80, use a selector label to expose it, apply a new YAML file w/three pods each carrying a different colour label, & then get pods for just one particular label selector.

    kubectl run nginx --image nginx --port 80 -o yaml --dry-run=client
    kubectl run nginx --image nginx --port 80
    kubectl expose pod/nginx --dry-run=client -o yaml
    kubectl expose pod/nginx
    cat <<EOF > coloured_pods.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      creationTimestamp: null
      labels:
        run: ubuntu
        colour: red
      name: ubuntu-red
    spec:
      containers:
      - command:
        - sleep
        - infinity
        image: ubuntu
        name: ubuntu
        resources: {}
      dnsPolicy: ClusterFirst
      restartPolicy: Always
    status: {}
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      creationTimestamp: null
      labels:
        run: ubuntu
        colour: green
      name: ubuntu-green
    spec:
      containers:
      - command:
        - sleep
        - infinity
        image: ubuntu
        name: ubuntu
        resources: {}
      dnsPolicy: ClusterFirst
      restartPolicy: Always
    status: {}
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      creationTimestamp: null
      labels:
        run: ubuntu
        colour: pink
      name: ubuntu-pink
    spec:
      containers:
      - command:
        - sleep
        - infinity
        image: ubuntu
        name: ubuntu
        resources: {}
      dnsPolicy: ClusterFirst
      restartPolicy: Always
    status: {}
    EOF
    kubectl apply -f coloured_pods.yaml
    kubectl get pods -o wide
    kubectl get all --selector colour=green
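
    Selectors aren't limited to a single equality match; a few extra examples that work against the pods above:

    kubectl get pods -l 'colour in (red,pink)'
    kubectl get pods -l colour=green,run=ubuntu
    kubectl label pod/ubuntu-pink colour=purple --overwrite
    kubectl get pods --show-labels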

    Lambda Magic for RDS

    Steps to create:

    To stop an RDS instance every 7 days using AWS Lambda and Terraform, the following concepts are used:

    Explanation:

    • Step 1 – IAM Role & Policy: an IAM role the Lambda function assumes, with a policy allowing the RDS stop/describe actions & CloudWatch Logs.
    • Step 2 – Lambda Function: a Python-based Lambda function that uses the AWS SDK (boto3) to stop the specified RDS instance(s).
    • Step 3 – EventBridge Schedule: an EventBridge (CloudWatch Events) rule that triggers the Lambda on a weekly cron schedule, plus permission for EventBridge to invoke it.
    • Step 4 – To deploy:
      • Save the Terraform code in .tf files and the Python code as lambda_function.py.
      • Zip the Python file into lambda_function.zip.
      • Initialize Terraform: terraform init
      • Plan the deployment: terraform plan
      • Apply the changes: terraform apply
    • Main.tf
    # Define an IAM role for the Lambda function
    resource "aws_iam_role" "rds_stop_lambda_role" {
      name = "rds-stop-lambda-role"
    
      assume_role_policy = jsonencode({
        Version = "2012-10-17",
        Statement = [
          {
            Action = "sts:AssumeRole",
            Effect = "Allow",
            Principal = {
              Service = "lambda.amazonaws.com"
            }
          }
        ]
      })
    }
    
    # Attach a policy to the role allowing RDS stop actions and CloudWatch Logs
    resource "aws_iam_role_policy" "rds_stop_lambda_policy" {
      name = "rds-stop-lambda-policy"
      role = aws_iam_role.rds_stop_lambda_role.id
    
      policy = jsonencode({
        Version = "2012-10-17",
        Statement = [
          {
            Effect = "Allow",
            Action = [
              "rds:StopDBInstance",
              "rds:DescribeDBInstances"
            ],
            Resource = "*" # Restrict this to specific RDS instances if needed
          },
          {
            Effect = "Allow",
            Action = [
              "logs:CreateLogGroup",
              "logs:CreateLogStream",
              "logs:PutLogEvents"
            ],
            Resource = "arn:aws:logs:*:*:*"
          }
        ]
      })
    }
    
    # Create the Lambda function
    resource "aws_lambda_function" "rds_stop_lambda" {
      function_name = "rds-stop-every-7-days"
      handler       = "lambda_function.lambda_handler"
      runtime       = "python3.9"
      role          = aws_iam_role.rds_stop_lambda_role.arn
      timeout       = 60
    
      # Replace with the path to your zipped Lambda code
      filename         = "lambda_function.zip"
      source_code_hash = filebase64sha256("lambda_function.zip")
    
      environment {
        variables = {
          RDS_INSTANCE_IDENTIFIER = "my-rds-instance" # Replace with your RDS instance identifier
          REGION                  = "us-east-1"       # Replace with your AWS region
        }
      }
    }
    
    # Create an EventBridge (CloudWatch Event) rule to trigger the Lambda
    resource "aws_cloudwatch_event_rule" "rds_stop_schedule" {
      name                = "rds-stop-every-7-days-schedule"
      schedule_expression = "cron(0 0 ? * SUN *)" # Every Sunday at 00:00 UTC
    }
    
    # Add the Lambda function as a target for the EventBridge rule
    resource "aws_cloudwatch_event_target" "rds_stop_target" {
      rule      = aws_cloudwatch_event_rule.rds_stop_schedule.name
      target_id = "rds-stop-lambda-target"
      arn       = aws_lambda_function.rds_stop_lambda.arn
    }
    
    # Grant EventBridge permission to invoke the Lambda function
    resource "aws_lambda_permission" "allow_cloudwatch_to_call_lambda" {
      statement_id  = "AllowExecutionFromCloudWatch"
      action        = "lambda:InvokeFunction"
      function_name = aws_lambda_function.rds_stop_lambda.function_name
      principal     = "events.amazonaws.com"
      source_arn    = aws_cloudwatch_event_rule.rds_stop_schedule.arn
    }
    • lambda_function.py (Python code for the Lambda function):
    import boto3
    import os
    
    def lambda_handler(event, context):
        rds_instance_identifier = os.environ.get('RDS_INSTANCE_IDENTIFIER')
        region = os.environ.get('REGION')
    
        if not rds_instance_identifier or not region:
            print("Error: RDS_INSTANCE_IDENTIFIER or REGION environment variables are not set.")
            return {
                'statusCode': 400,
                'body': 'Missing environment variables.'
            }
    
        rds_client = boto3.client('rds', region_name=region)
    
        try:
            response = rds_client.stop_db_instance(
                DBInstanceIdentifier=rds_instance_identifier
            )
            print(f"Successfully initiated stop for RDS instance: {rds_instance_identifier}")
            return {
                'statusCode': 200,
                'body': f"Stopping RDS instance: {rds_instance_identifier}"
            }
        except Exception as e:
            print(f"Error stopping RDS instance {rds_instance_identifier}: {e}")
            return {
                'statusCode': 500,
                'body': f"Error stopping RDS instance: {e}"
            }
    • Zipping the Lambda Code:
    zip lambda_function.zip lambda_function.py
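
    Once terraform apply finishes, the function can be smoke-tested without waiting for Sunday (assuming the AWS CLI is configured for the same account/region):

    aws lambda invoke --function-name rds-stop-every-7-days response.json
    cat response.json
    aws rds describe-db-instances --db-instance-identifier my-rds-instance --query 'DBInstances[0].DBInstanceStatus' --output text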

    Wanna secure EKS w/CA & TLS?

    Goal:

    DO YOU HAVE A KUBERNETES CLUSTER! IS IT INSECURE!? …. I'm out of breath & getting dizzy, Idk how those commercials bring that outside voice & energy – – it's exhausting!

    Alright, I'm back – all this will show you is how to secure your cluster. Below you can see how cluster components authenticate w/one another once you provision a certificate authority (CA) & create the certificates needed to bootstrap your Kubernetes cluster.

    • Please note – there are two (2) controllers, two (2) workers, & a Kubernetes API load balancer

    Lessons Learned:

    • Permit/Provision CA
    • Create Kubernetes client certs & kubelet client certs for two (2) nodes:
      • Admin Client Certificate
      • Kubelet Client Certificate
      • Controller Manager Client Certificate
      • Kube-Proxy Client Certificate
      • Kube-Scheduler Client Certificate
    • Kubernetes API server certificate
    • Kubernetes service account key pair
    • If you follow these lessons learned, you will not let this happen to you – don’t be Karen.
    • The CA is created to sign other certificates, & those certs can then use the CA to prove legitness (it's a word, look it up in the dictionary.. urban dictionary..) so no fakers sneak in – a rough CA sketch follows this list.
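
    The screenshots hold the real steps; as a rough stand-in (plain openssl rather than whatever tooling is pictured), provisioning the CA looks something like:

    openssl genrsa -out ca.key 2048
    openssl req -x509 -new -key ca.key -subj "/CN=kubernetes-ca" -days 1000 -out ca.crt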

    Admin Client Certificate:
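
    Roughly, with the same openssl stand-in – the admin identity is issued w/O=system:masters so the CA-signed cert maps to cluster-admin:

    openssl genrsa -out admin.key 2048
    openssl req -new -key admin.key -subj "/CN=admin/O=system:masters" -out admin.csr
    openssl x509 -req -in admin.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 1000 -out admin.crt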

    Kubelet Client Certificate:

    Controller Manager Client Certificate:

    Kube-Proxy Client Certificate:

    Kube-Scheduler Client Certificate:

    • These gifs are TOOOOO good – right out of the infomercials of the late '90s/early 2000s

    Create Kubernetes API server certificate:
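
    A sketch of the API server cert – the key detail is the SAN list covering the cluster DNS names, the service cluster IP, & the API load balancer (10.32.0.1 & 203.0.113.10 below are placeholders):

    openssl genrsa -out kube-apiserver.key 2048
    openssl req -new -key kube-apiserver.key -subj "/CN=kube-apiserver" -out kube-apiserver.csr
    openssl x509 -req -in kube-apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 1000 \
      -extfile <(printf "subjectAltName=DNS:kubernetes,DNS:kubernetes.default.svc.cluster.local,IP:10.32.0.1,IP:203.0.113.10") \
      -out kube-apiserver.crt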

    Create Kubernetes service account key pair:
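
    The service account "certificate" is really a signing key pair shared by the API server & the controller manager; a sketch:

    openssl genrsa -out service-account.key 2048
    openssl req -new -key service-account.key -subj "/CN=service-accounts" -out service-account.csr
    openssl x509 -req -in service-account.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 1000 -out service-account.crt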

    When you see Smoke – – – there is a Kubernetes Cluster being Tested..

    Goal:

    Stuff happens, so when it does – it is good to know what to do w/your Kubernetes cluster. The answer is – drum roll please… smoke testing, tahhh-dahhh! This is useful not just when stuff hits the fan, but also for checking that the known-sensitive features are working properly, because the goal is to verify the health of the cluster.

    Example smoke tests of the Kubernetes cluster will cover the following (a rough command sketch follows the Lessons Learned list below):

    • Data Encryption
    • Deployment
    • Port Forwarding
    • Logs
    • Exec
    • Services

    Lessons Learned:

    • Cluster Data Encryption
    • Deployments Work
    • Remote Access works w/Port Forwarding
    • Access Container Logs w/Kubectl Logs
    • Execute Commands inside the Container
    • Services Work
    • Create test data for secret key
    • Ensure secret key is stored
    • Create & verify deployment
    • Snag that pod name & store in variable
    • Forward port to nginx pod
    • Open new terminal – – – & curl IP address/port
    • Get logs from nginx pod
    • Confirm you can run “exec” command & will see the version
    • Test to see if service can be deployed
    • Get node port from variable
    • Curl IP address/port
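
    The screenshots carry the detail; a rough sketch of the command shapes those bullets describe (names, ports, & IPs are placeholders):

    kubectl create secret generic smoke-test-secret --from-literal=mykey=mydata
    sudo ETCDCTL_API=3 etcdctl get /registry/secrets/default/smoke-test-secret | hexdump -C
    kubectl create deployment nginx --image=nginx && kubectl get deployments
    POD_NAME=$(kubectl get pods -l app=nginx -o jsonpath="{.items[0].metadata.name}")
    kubectl port-forward $POD_NAME 8080:80 &
    curl --head http://127.0.0.1:8080
    kubectl logs $POD_NAME
    kubectl exec -it $POD_NAME -- nginx -v
    kubectl expose deployment nginx --port 80 --type NodePort
    NODE_PORT=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
    curl -I http://<worker-external-ip>:$NODE_PORT

    (The etcdctl check runs on a controller & will also need the usual --cacert/--cert/--key flags on a TLS-secured etcd; look for the k8s:enc: prefix in the output.)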

    Let's save Martha aka Minikube..

    Goal:

    The Bat signal has been lit in the sky, it's time to suit up, & don't let the kryptonite divide us. Fix the broken Minikube cluster.

    Lessons Learned:

    • Start up the Bat Mobile (Minikube)
      • See screenshot for a whole slew of commands
    • Create Object in YAML files to Confirm Cluster is up
      • kubectl apply -f
      • kubectl get po/pv/pvc

    Start up the Bat Mobile (Minikube):

    See screenshot for a whole slew of commands (a rough sketch follows the list):

    • minikube start
    • sudo chown -R
      • Change directory owner
        • .kube
        • .minikube
    • minikube config set
      • Update the version
    • sudo apt install -y docker.io
      • Get docker
    • kubectl apply -f
    • kubectl get
      • po
      • pv
      • pvc
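
    A rough reconstruction of what that screenshot boils down to (the Kubernetes version is just an example):

    minikube start
    sudo chown -R $USER:$USER ~/.kube ~/.minikube
    minikube config set kubernetes-version v1.27.0
    sudo apt install -y docker.io
    kubectl apply -f <object>.yaml
    kubectl get po,pv,pvc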

    Create Object in YAML files to Confirm Cluster is up:

    • kubectl apply -f
    • kubectl get po/pv/pvc

    Blueprint to Build & Use a K3s Cluster

    Goal:

    Wanna see how the sausage is made – – – a K3s cluster. We'll bootstrap a K3s cluster, install K3s on multiple servers, & have it Frankenstein together into a multi-server cluster. Let's get cookin'

    Lessons Learned:

    • Build that K3s server
      • Install the K3s server
      • List nodes
      • Get the node token
    • Build two (2) K3s worker nodes
      • Install K3s on each worker node w/the server's private IP address & node token
    • Run on the New Cluster
      • Create a pod YAML file
      • Create, check, & view the pod

    Build that K3s server (a rough sketch follows the list):

    • Install the K3s server
    • List nodes
    • Get the node token
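
    A minimal sketch of those three steps with the stock K3s installer:

    curl -sfL https://get.k3s.io | sh -
    sudo k3s kubectl get nodes
    sudo cat /var/lib/rancher/k3s/server/node-token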

    Build the K3s worker nodes:

    Install K3s on each worker node w/the server's private IP address & node token:
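
    Roughly, on each worker (substituting the server's private IP & the token from above):

    curl -sfL https://get.k3s.io | K3S_URL=https://<server-private-ip>:6443 K3S_TOKEN=<node-token> sh -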

    Run on the New Cluster:

    Create a pod YAML file:

    Create, check, & view the pod:
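
    And a tiny sketch of that last check, back on the server node:

    sudo k3s kubectl apply -f pod.yaml
    sudo k3s kubectl get pods -o wide
    sudo k3s kubectl describe pod <pod-name>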

    Come on, let's Explore Terraform State w/Kubernetes Containers

    Let's blend some pimp tools together & launch something into space – cyber space that is. Below is an example to show how useful it is to understand Terraform state, deploy resources w/Kubernetes, & see how Terraform maintains the state file to track all your changes along w/deploying containers! (A rough command sketch follows the list below.)

    • Check Terraform & Minikube Status
    • Clone the Terraform Code & Switch to the Proper Directory
      • Switch directories
    • Deploy Terraform code & Observe State File
      • Terraform Init
      • Terraform Plan
      • Terraform Apply
    • Terraform State File Tracks Resources
      • Terraform State
      • Terraform Destroy
    • terraform version
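
    The screenshots carry the detail, but the command flow those bullets describe is roughly (the repo path is a placeholder):

    terraform version && minikube status
    git clone <repo-with-terraform-code> && cd <repo-with-terraform-code>
    terraform init
    terraform plan
    terraform apply
    terraform state list      # the deployed resources are now tracked in terraform.tfstate
    terraform destroy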

    Switch directories:

    • Terraform –
      • Init
      • Plan
      • Apply

    Terraform State File Tracks Resources:

    Terraform Plan:

    Terraform Apply:

    Terraform Destroy:

    A sprinkle of MiniKube & a pinch of Helm

    Goal:

    So you got a Minikube cluster, right? Now let's use Helm to deploy a microservice stack! (A rough command sketch follows the Lessons Learned list.)

    Lessons Learned:

    • Start Minikube Cluster
    • Unpack Helm, move it, install, & init
      • tar -xvzf ~/helm.tar.gz
      • sudo mv
      • sudo helm init
    • Install Namespace w/Helm
      • sudo kubectl
      • sudo helm install
      • sudo kubectl
    • Edit to use NodePort & Configure Nginx to Proxy
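
    A rough sketch of those steps (this lab is Helm v2, hence helm init; the chart & namespace names are placeholders):

    minikube start
    tar -xvzf ~/helm.tar.gz
    sudo mv linux-amd64/helm /usr/local/bin/helm
    sudo helm init --wait
    sudo kubectl create namespace <namespace>
    sudo helm install <chart-directory> --name <release> --namespace <namespace>
    sudo kubectl get pods -n <namespace>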

    Start Minikube Cluster:

    tar -xvzf ~/helm.tar.gz:

    sudo mv:

    sudo helm init:

    Install Namespace w/Helm:

    sudo kubectl:

    sudo helm install:

    sudo kubectl:

    Edit to use NodePort & Configure Nginx to Proxy:

    Part 2: Monitoring Containers w/Prometheus

    Goal:

    Context:

    Let's show how you can help a team migrate their infrastructure to Docker containers..

    Part 2 Activities:

    Monitoring the new environment w/Docker (stats) & Prometheus, you can see how to utilize cool features like Docker Compose & cAdvisor.

    Lessons Learned:

    • Create a Prometheus YAML File
      • vi prometheus.yml
    • Create a Prometheus Service
      • vi docker-compose.yml
      • docker-compose up -d
      • docker ps
    • Create Stats Shell
      • Investigate cAdvisor
      • Stats in Docker
      • Doper Stats

    Create a Prometheus YAML File:

    • Collect metrics & monitor containers using Prometheus/cAdvisor, deployed w/docker-compose

    vi prometheus.yml:
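
    The screenshot has the real file; a minimal prometheus.yml along these lines scrapes both Prometheus itself & cAdvisor (the cadvisor:8080 target assumes the compose service name used below):

    global:
      scrape_interval: 15s

    scrape_configs:
      - job_name: 'prometheus'
        static_configs:
          - targets: ['localhost:9090']
      - job_name: 'cadvisor'
        static_configs:
          - targets: ['cadvisor:8080']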

    Create a Prometheus Service:

    vi docker-compose.yml:
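
    And a rough docker-compose.yml to go with it – Prometheus plus cAdvisor, with the usual default mounts (not necessarily what the screenshot shows):

    version: '3'
    services:
      prometheus:
        image: prom/prometheus:latest
        ports:
          - "9090:9090"
        volumes:
          - ./prometheus.yml:/etc/prometheus/prometheus.yml
      cadvisor:
        image: gcr.io/cadvisor/cadvisor:latest
        ports:
          - "8080:8080"
        volumes:
          - /:/rootfs:ro
          - /var/run:/var/run:rw
          - /sys:/sys:ro
          - /var/lib/docker/:/var/lib/docker:ro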

    docker-compose up -d:

    docker ps:

    Create Stats Shell:

    Investigate cAdvisor:

    Stats in Docker:

    • docker stats

    Doper Stats:

    • vi stats.sh
    • chmod a+x stats.sh
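
    The stats.sh script itself isn't shown above; a guess at its spirit – a small wrapper that prints a formatted one-shot of docker stats:

    #!/bin/bash
    # stats.sh – one-shot, formatted container stats
    docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}\t{{.NetIO}}"

    Run it as ./stats.sh after the chmod above.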