AWS – VCS – HCP… it's all connected

Summary of Steps Below:

  • Create Github repo
    • git init
    • git remote add origin
  • Git Commands
    • git add .
    • git commit -m
    • git push
  • HCP Migrate VCS Workflow
    • dev
  • In Github Add development branch
  • Use VCS to deploy DEVELOPMENT
    • git branch -f
    • git checkout development
    • git branch
    • git status
    • terraform init -backend-config=dev.hcl -reconfigure
    • terraform validate
    • terraform plan
  • CANNOT do locally (applies happen in HCP)…
    • terraform apply
  • Git Commands
    • git status
    • git add .
    • git commit -m "remove extra server & refactor outputs"
    • git push
  • HCP
    • Approve
  • Github
    • Review development branch to main
  • Use VCS to deploy PRODUCTION
    • Github
      • Merge pull request
    • HCP
      • Automatically kick off pipeline & approve
    • Github & HCP
      • See the PR merged/approved
  • AWS Console
    • Review new resources added or destroyed

Create New GitHub Repo:

git init
git remote add origin https://github.com/<YOUR_GIT_HUB_ACCOUNT>/my-app.git

Commit changes to github:

git add .
git commit -m "terraform code update for my app"
git push --set-upstream origin master

Migrate VCS Workflow:

add development github branch:

Use VCS to deploy development:

git branch -f development origin/development
git checkout development
git branch
terraform init -backend-config=dev.hcl -reconfigure
terraform validate
terraform plan
  • Remember, can't run terraform apply locally in the VCS workflow…
git status
git add .
git commit -m "remove extra server & refactor outputs"
git push
  • Approve the run in HCP, then review the Github pull request from development to main
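The `-backend-config=dev.hcl` flag above points `terraform init` at a partial backend config file. A minimal sketch of what such a file might hold for the remote backend; the organization and workspace names are placeholders, not from these notes:

```shell
# Hypothetical dev.hcl for `terraform init -backend-config=dev.hcl -reconfigure`.
# Organization/workspace names below are placeholders.
cat > dev.hcl <<'EOF'
organization = "my-hcp-org"

workspaces {
  name = "my-app-dev"
}
EOF
cat dev.hcl
```

Swapping in a prod.hcl with a different workspace name is how the same code can target the production workspace.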

Use VCS to deploy main/production:

  • Github:
    • IF development goes well & checks pass, then you can merge the pull request
  • HCP:
    • Automatically kicks off the pipeline for production & you can approve
  • Github:
    • Can see the PR has merged from development to production
  • AWS Console:
    • Check to see your resources

Terraforming the Cloud Alphabet Soup aka HCP/AWS.

  • In providers.tf, add a remote backend so you operate in enhanced/remote mode against HCP: your state lives in HCP, and runs even stream back to your CLI in VSCode. A standard backend (like an S3 backend) only stores state.
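A sketch of that providers.tf change, written as a `backend "remote"` block since that's what the notes name (newer Terraform versions offer a `cloud {}` block that does the same job); the organization and workspace names are placeholders:

```shell
# Sketch of pointing Terraform state at HCP via the remote backend.
# Organization/workspace names below are placeholders.
cat > providers.tf <<'EOF'
terraform {
  backend "remote" {
    organization = "my-hcp-org"

    workspaces {
      name = "my-app"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}
EOF
cat providers.tf
```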

TF Remote provider magic:

  • Seeing the TF at work locally in the CLI & live in HCP, woah – magic..
  • Then jump to the ole’ AWS Console to check your IaC
  • Alright alright alright, lets destroy in the CLI
  • Annnnnnnnnnd, once again you can see live “streamin” in HCP
  • OMG it's gone!!

S3 -> HCP Enhanced/Remote:

  • Then, if your backend provider is already established, you can watch the new state live before any Terraform is planned or applied
  • WOW, legitness.

Karpor Evolves K8s like Magikarp to Gyarados

Install KusionStack Kusion CLI:

# brew tap KusionStack/tap
# brew install KusionStack/tap/kusion

Helm Repo Add Karpor

# helm repo add kusionstack https://kusionstack.github.io/charts
# helm repo update
# helm install karpor kusionstack/karpor

Port-Forward to 7443 & Review Dashboard:

  • Now you can register a cluster to review w/magic “agentic” ai

ArgoCD, not the Ben Affleck movie, or insurance duck commercial..

Install ArgoCD:

brew install argocd

Code (deploy ArgoCD before the port-forward & login):

kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
kubectl port-forward svc/argocd-server -n argocd 8080:443
argocd login 127.0.0.1:8080

Port-forward:

  • Option 1 from CLI
  • Option 2 from K9s

“Secret” Password:

  • Option 1 from CLI
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d
  • Option 2 from K9s
    • go to secrets
    • hit x on the preferred option you desire
    • initial admin secret – login to argocd
    • secret – to get RSA private keys
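The jsonpath + `base64 -d` pipeline above just decodes what kubectl stored; a toy round-trip with a made-up password (not a real secret) shows the mechanics:

```shell
# Secrets come back base64-encoded from the API; base64 -d recovers plaintext.
encoded=$(printf 'hunter2' | base64)   # stand-in for the jsonpath output
printf '%s' "$encoded" | base64 -d     # prints: hunter2
echo
```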

Helm Chart install:

  • See pods in k9s
  • Port-forward to 8888
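A Helm install like the one above is typically driven in ArgoCD by an Application manifest; this is a hedged sketch with placeholder repoURL, path, and namespace values, not the notes' actual app:

```shell
# Hypothetical ArgoCD Application; repoURL/path/namespace are placeholders.
cat > my-app.yaml <<'EOF'
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/my-app.git
    targetRevision: HEAD
    path: chart
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated: {}
EOF
grep 'kind: Application' my-app.yaml
```

With `syncPolicy.automated` set, the github-commit-then-watch-the-magic flow in the next section happens without a manual sync.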

Scale-Up Replicas to 10:

  • Update your github file & commit
  • Watch MAGIC..or “agentic-ai” – whatever you wanna call it
  • Notice in k9s the pods age

Rollback in ArgoCD

  • For a quick fix when you're not sure what broke, you can quickly roll back instead of pushing more supposed changes/fixes from Github
  • Annnnnd see the termination of pods from the rollback in ArgoCD

AWS IAM +1 to partayyyy

  • logged into AWS
    • aws configure
  • Created 4 tf files
    • main
    • variables
    • output
    • tfvars
main.tf
provider "aws" {
  region = var.aws_region
}

# Create IAM user
resource "aws_iam_user" "example_user" {
  name = var.user_name
}

# Attach policy to the user
resource "aws_iam_user_policy_attachment" "example_user_policy" {
  user       = aws_iam_user.example_user.name
  policy_arn = var.policy_arn
}

# Create access keys for the user
resource "aws_iam_access_key" "example_user_key" {
  user = aws_iam_user.example_user.name
}
output.tf
output "iam_user_name" {
  value = aws_iam_user.example_user.name
}

output "access_key_id" {
  value = aws_iam_access_key.example_user_key.id
}

output "secret_access_key" {
  value     = aws_iam_access_key.example_user_key.secret
  sensitive = true
}
variables.tf
variable "aws_region" {
  description = "AWS region"
  type        = string
  default     = "us-east-1"
}

variable "user_name" {
  description = "IAM username"
  type        = string
  default     = "example-user"
}

variable "policy_arn" {
  description = "IAM policy ARN to attach"
  type        = string
  default     = "arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess"
}
terraform.tfvars
aws_region = "us-east-1"
user_name  = "terraform-user"
policy_arn = "arn:aws:iam::aws:policy/AdministratorAccess"
  • terraform fmt
  • terraform init
  • terraform plan
  • terraform apply

K8s on Roidz aka K8sGPT

Blog post includes installing K8s…GPT, see below for the goodies:

Installszz:

Github
https://github.com/k8sgpt-ai/k8sgpt
k8sgpt Docx:
https://docs.k8sgpt.ai/getting-started/in-cluster-operator/?ref=anaisurl.com
Ubuntu
# curl -LO https://github.com/k8sgpt-ai/k8sgpt/releases/download/v0.4.26/k8sgpt_amd64.deb
# sudo dpkg -i k8sgpt_amd64.deb
# k8sgpt version
# k8sgpt --help (handful of commands & flags available)

Pre-Reqzz:

Minikube
# unset KUBECONFIG
# minikube start
# minikube status
OpenAi
#  https://platform.openai.com/account/api-keys
K8sgpt
# k8sgpt generate
# k8sgpt auth add openai
# k8sgpt auth list

Troubleshoot why deployment is not running:

  • Create yaml file
  • Create namespace
  • Apply file
  • Review K9s
  • Utilize k8sgpt to see what’s going on…

2 Links to leverage:

# deployment2
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        securityContext:
          readOnlyRootFilesystem: true
# kubectl create ns demo
# kubectl apply -f deployment2 -n demo
# k8sgpt analyze
# k8sgpt analyze --explain
See pods, deployments, etc. w/the following commands
# kubectl get pods -n demo
# kubectl get pods -A
# kubectl get deployments -n demo
# kubectl get pods --all-namespaces
# k8sgpt integration list
# k8sgpt filters list
# k8sgpt analyze --filter=VulnerabilityReport
# vi deployment2
# kubectl apply -f deployment2 -n demo
  • port-forward to ensure you can access the pod
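Why the deployment was stuck: nginx needs to write its cache and pid files, so `readOnlyRootFilesystem: true` makes the container crash, and that's the breakage k8sgpt surfaces. One way the `vi deployment2` edit might look (a sketch; alternatively you could keep the read-only root and mount emptyDir volumes at the paths nginx writes to):

```shell
# Hypothetical fixed deployment2: drop the read-only root filesystem so
# nginx can write /var/cache/nginx and its pid file again.
cat > deployment2 <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        securityContext:
          readOnlyRootFilesystem: false
EOF
grep 'readOnlyRootFilesystem' deployment2
```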

K8s Operator:

# brew install helm
# helm repo add k8sgpt https://charts.k8sgpt.ai/
# helm repo update
# helm install release k8sgpt/k8sgpt-operator -n k8sgpt-operator-system --create-namespace --values values.yaml
Commands to see if your new ns installed:
# kubectl get ns
# kubectl get pods -n k8sgpt-operator-system
# k9s

ServiceMonitor to send reports to Prometheus & create DB for K8sgpt:

# helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
# helm install prom prometheus-community/kube-prometheus-stack -n k8sgpt-operator-system
# kube-prometheus-stack has been installed. Check its status by running:
  kubectl --namespace k8sgpt-operator-system get pods -l "release=prom"
Commands to squirrel away:
- Get Grafana 'admin' user password by running:
# kubectl --namespace k8sgpt-operator-system get secrets prom-grafana -o jsonpath="{.data.admin-password}" | base64 -d ; echo
- Access Grafana local instance:
# export POD_NAME=$(kubectl --namespace k8sgpt-operator-system get pod -l "app.kubernetes.io/name=grafana,app.kubernetes.io/instance=prom" -oname)
  kubectl --namespace k8sgpt-operator-system port-forward $POD_NAME 3000
- Get your grafana admin user password by running:
  kubectl get secret --namespace k8sgpt-operator-system -l app.kubernetes.io/component=admin-secret -o jsonpath="{.items[0].data.admin-password}" | base64 --decode ; echo

OpenAi API-Keyz for K8s Secret:

# export OPENAI_TOKEN=<YOUR API KEY HERE>
# kubectl create secret generic k8sgpt-sample-secret --from-literal=openai-api-key=$OPENAI_TOKEN -n k8sgpt-operator-system
k8sgpt-resource.yaml:
apiVersion: core.k8sgpt.ai/v1alpha1
kind: K8sGPT
metadata:
  name: k8sgpt-sample
  namespace: k8sgpt-operator-system
spec:
  ai:
    enabled: true
    model: gpt-4o-mini
    backend: openai
    secret:
      name: k8sgpt-sample-secret
      key: openai-api-key
  noCache: false
  version: v0.4.26
# kubectl apply -f k8sgpt-resource.yaml -n k8sgpt-operator-system
k9s
- services, shift-f, port-forward prometheus-operated:9090
# kubectl get results -n k8sgpt-operator-system
# kubectl port-forward service/prom-grafana -n k8sgpt-operator-system 3000:80
Finding grafana password
- secrets & press-x

Help I am stuck – Namespace!

https://www.redhat.com/en/blog/troubleshooting-terminating-namespaces
Open 2 terminals:
- Terminal 1
# minikube start
# minikube dashboard --url
- Terminal 2
# kubectl get namespace k8sgpt-operator-system -o json > tmp.json
# vi tmp.json
# curl -k -H "Content-Type: application/json" -X PUT --data-binary @tmp.json http://127.0.0.1:38717/api/v1/namespaces/k8sgpt-operator-system/finalize
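What the `vi tmp.json` edit needs to accomplish: empty the `spec.finalizers` list so the API server can finish deleting the namespace when the finalize call above is PUT. A simplified sketch of the resulting file (the real dump has much more metadata):

```shell
# Simplified tmp.json after editing: spec.finalizers emptied so the
# PUT .../finalize call can complete the namespace deletion.
cat > tmp.json <<'EOF'
{
  "apiVersion": "v1",
  "kind": "Namespace",
  "metadata": { "name": "k8sgpt-operator-system" },
  "spec": { "finalizers": [] }
}
EOF
cat tmp.json
```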

microk8s or mini-me?

Pre-Reqx:

# sudo snap install kubectl --classic
# kubectl version --client
# sudo snap install microk8s --classic
# sudo usermod -a -G microk8s <username>
# sudo chown -R <username> ~/.kube
# newgrp microk8s
# microk8s kubectl get nodes
# cd $HOME
# mkdir .kube
# cd .kube
# microk8s config > config
# microk8s start

K8s Cluster:

  • Might have to add SSH keys – so go to your Github account, settings, SSH keys, & add a new SSH key
# git clone git@github.com:<docker_hub_name>/react-article-display.git
# cd react-article-display
# docker build -t <docker_hub_name>/react-article-display:demo .
# docker run -d -p 3000:80 <docker_hub_name>/react-article-display:demo
localhost:3000
# docker stop <see string above from previous command>
# docker login
# docker push <image name>
# kubectl run my-app-image --image <above>
# kubectl get pods
# kubectl port-forward my-app-image 3000:80