AWS Transcribe Medical & Comprehend Medical w/a nibble of HIPAAhh!!

Below are details on using Terraform to create a tool that uploads a medical professional's notes into AWS & summarizes them auto-magically, w/the help of HIPAA the Hippo!

  • General flow of steps
  • Step 1 – upload audio file w/nifty commands
  • Step 2 – check your AWS console that the infra is ALIVVVVVE
  • Step 3 – run the transcribe.py script
  • Step 4 – confirm AWS Transcribe Medical & the S3 buckets have the goodiezzzz
  • Step 5.1 – AWS Comprehend Medical Create Job
  • Step 5.2 – AWS Comprehend Medical Real-Time Analysis
  • Step 6 – the sausage aka code

General flow of steps:

User uploads audio → S3

        ↓

Transcribe Medical job

        ↓

Transcript saved to S3

        ↓

Lambda calls Comprehend Medical

        ↓

Extracted entities saved to S3
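That flow can be sketched in Python w/boto3. This is a minimal sketch, not the actual code from the repo – the job name is a placeholder, the bucket names come from the buckets Terraform creates below, & the dict mirrors the parameters of Transcribe's start_medical_transcription_job API:

```python
# Minimal sketch of the pipeline's first hop: kick off a Transcribe Medical job
# for the uploaded audio. Job name is a placeholder; buckets match this post.
def transcribe_job_params(job_name, input_bucket, audio_key, output_bucket):
    """Build the request dict for transcribe.start_medical_transcription_job(**params)."""
    return {
        "MedicalTranscriptionJobName": job_name,
        "LanguageCode": "en-US",  # Transcribe Medical supports en-US only
        "MediaFormat": "wav",     # matches the ffmpeg conversion below
        "Media": {"MediaFileUri": f"s3://{input_bucket}/{audio_key}"},
        "OutputBucketName": output_bucket,
        "Specialty": "PRIMARYCARE",
        "Type": "DICTATION",      # single speaker dictating notes
    }

params = transcribe_job_params(
    "demo-job", "audio-input", "S3-AWS-Medical.wav", "medical-output"
)
# With AWS credentials configured, you'd then call:
#   import boto3
#   boto3.client("transcribe").start_medical_transcription_job(**params)
```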

  • Record audio on your phone or laptop & place it in Downloads or your desired folder
sudo apt update
sudo apt install ffmpeg
ffmpeg -version
ffmpeg -i "S3-AWS-Medical.m4a" -ar 16000 -ac 1 S3-AWS-Medical.wav
terraform init
terraform fmt
terraform validate
terraform plan
terraform apply
aws s3 cp S3-AWS-Medical.wav s3://your-input-bucket-name/
  • Check the various AWS consoles where you should see resources – S3, Lambda, IAM policies/roles, Transcribe, etc.
  • You should see 3 new buckets
    • audio-input
    • medical-output
      • job.json
    • results-bucket
python3 transcribe.py
  • This recording discusses multiple topics involving AWS Transcribe, Comprehend, Terraform, S3, & Lambda, to better understand the most efficient way to make an individual health professional’s job easier: they listen to their patients & make an audio recording covering the situation, the details, the diagnosis, the meds, & the recommendations they should be on. At which point Transcribe turns that audio recording from speech to text & Comprehend summarizes it. Hopefully, LOL. YOLO, we’ll see if this works.
  • To confirm, hit download & view the JSON in VS Code – it’ll prolly be on 1 line; use Shift-Alt-F to format it for a quick review
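Rather than eyeballing the one-line JSON, you can also pull the transcript out w/a few lines of Python. This assumes the standard Transcribe job.json layout (results.transcripts[0].transcript); the sample payload below is illustrative, not real patient data:

```python
import json

def transcript_text(job_json: str) -> str:
    """Extract the plain-text transcript from a Transcribe (Medical) job.json payload."""
    doc = json.loads(job_json)
    # Transcribe writes the full text under results.transcripts[0].transcript
    return doc["results"]["transcripts"][0]["transcript"]

# Tiny illustrative payload in the same shape as job.json
sample = (
    '{"jobName": "demo-job", "results": '
    '{"transcripts": [{"transcript": "Patient reports mild headache."}], "items": []}}'
)
print(transcript_text(sample))  # Patient reports mild headache.
```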
  • Important Note:
    • input bucket
      • this is your Transcribe output bucket…I know, confusing – don’t point it at the audio file; remember what Comprehend reads (text, not audio)…
    • output bucket
      • your results bucket
    • iam role
      • should pop up in the dropdown if the code in the policy is correct
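For Step 5.2’s real-time path, Comprehend Medical’s detect_entities_v2 returns an Entities list; here’s a small sketch of grouping those entities by category. The sample response is illustrative (just in the documented shape), not real output:

```python
from collections import defaultdict

def entities_by_category(response: dict) -> dict:
    """Group Comprehend Medical entities (MEDICATION, MEDICAL_CONDITION, ...) by category."""
    grouped = defaultdict(list)
    for ent in response.get("Entities", []):
        grouped[ent["Category"]].append(ent["Text"])
    return dict(grouped)

# Illustrative response in the detect_entities_v2 shape (not real output)
sample_response = {
    "Entities": [
        {"Id": 0, "Text": "ibuprofen", "Category": "MEDICATION",
         "Type": "GENERIC_NAME", "Score": 0.99},
        {"Id": 1, "Text": "headache", "Category": "MEDICAL_CONDITION",
         "Type": "DX_NAME", "Score": 0.97},
    ]
}
print(entities_by_category(sample_response))
```

In the Lambda you’d feed it the response from boto3.client("comprehendmedical").detect_entities_v2(Text=transcript) & drop the grouped result into the results bucket.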
git init
git add .
git status
git commit -m "First commit for AWS Transcribe + Comprehend Medical w/Terraform."
git remote add origin https://github.com/earpjennings37/aws-medical-tf.git
git branch -M main
git push -u origin main
  • to ensure I was connected to the repo, I ran a quick PR cycle
    • commands
    • open a pull request – create a pull request
    • merge pull request
    • merged & closed
git checkout -b update-readme
git branch
git status
git add .
git commit -m "updated-readme"
git push -u origin update-readme

kubectl, but gooder w/”agentic”-ai.

You like kubectl ya? Well how about slappin some “agentic-ai” on that CLI & see what happens. Let’s run it – see below for 4 examples.

Pre-Reqxzz aka 3 stepzz:

Commands to get goin:

curl -sSL https://raw.githubusercontent.com/GoogleCloudPlatform/kubectl-ai/main/install.sh | bash
export GEMINI_API_KEY=your_api_key_here
kubectl-ai --help
kubectl-ai models
kubectl-ai --model gemini-2.5-flash

Example 1:

  • informally talk w/it to get namespaces & create a namespace
  • then can check k9s to see if it worked
    • trust but verify type of thing ya know?

Example 2:

  • shows you what command it’s running based on your informal dialogue

Example 3:

Example 4:

alrite peace

AWS – VCS – HCP.. it’s all connected

Summary of Steps Below:

  • Create Github repo
    • git add
  • Git Commands
    • git add .
    • git commit -m
    • git push
  • HCP Migrate VCS Workflow
    • dev
  • In Github Add development branch
  • Use VCS to deploy DEVELOPMENT
    • git branch -f
    • git checkout development
    • git branch
    • git status
    • terraform init -backend-config=dev.hcl -reconfigure
    • terraform validate
    • terraform plan
  • CAN NOT do (w/the VCS workflow, applies kick off from HCP)..
    • terraform apply
  • Git Commands
    • git status
    • git add .
    • git commit -m "remove extra server & refactor outputs"
    • git push
  • HCP
    • Approve
  • Github
    • Review development branch to main
  • Use VCS to deploy PRODUCTION
    • Github
      • Merge pull request
    • HCP
      • Automatically kick off pipeline & approve
    • Github & HCP
      • See the PR merged/approved
  • AWS Console
    • Review new resources added or destroyed

Create New GitHub Repo:

git init
git remote add origin https://github.com/<YOUR_GIT_HUB_ACCOUNT>/my-app.git

Commit changes to github:

git add .
git commit -m "terraform code update for my app"
git push --set-upstream origin master

Migrate VCS Workflow:

add development github branch:

Use VCS to deploy development:

git branch -f development origin/development
git checkout development
git branch
terraform init -backend-config=dev.hcl -reconfigure
terraform validate
terraform plan
  • Remember, you can’t run terraform apply locally – in the VCS workflow, applies are triggered from HCP…..
git status
git add .
git commit -m "remove extra server & refactor outputs"
git push
  • Approve in HCP & then you can review the GitHub pull request from development to main

Use VCS to deploy main/production:

  • Github:
    • IF development goes well & passes, then can merge pull request
  • HCP:
    • Automatically kicks off the pipeline for production & you can approve
  • Github:
    • Can see the PR has merged from development to production
  • AWS Console:
    • Check to see your resources

Terraforming the Cloud Alphabet Soup aka HCP/AWS.

  • In providers.tf add a remote backend so you can run operations in enhanced/remote/HCP, keep your state in enhanced/remote/HCP, & even stream run output to your CLI in VS Code..compared to a standard backend (like S3) that just stores state
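A minimal sketch of that remote backend block (the org & workspace names here are placeholders, not from the actual repo):

```hcl
terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "your-org-name" # placeholder HCP org

    workspaces {
      name = "my-app-dev" # placeholder workspace
    }
  }
}
```

After adding it, terraform init reconfigures the backend, & subsequent plans/applies run remotely in HCP while streaming output to your CLI.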

TF Remote provider magic:

  • Seeing the TF at work locally in the CLI & live in HCP, woah – magic..
  • Then jump to the ole’ AWS Console to check your IaC
  • Alright alright alright, lets destroy in the CLI
  • Annnnnnnnnnd, once again you can see live “streamin” in HCP
  • OMG its gone!!

S3 → HCP Enhanced/Remote:

  • Then, if your backend provider is already established, you can see the new state live in HCP before any Terraform is planned or applied (after switching the backend, terraform init -migrate-state moves the S3 state up)
  • WOW, legitness.

Karpor Evolves K8s like Magikarp to Gyarados

Install the KusionStack Kusion CLI:

# brew tap KusionStack/tap
# brew install KusionStack/tap/kusion

Helm Repo Add Karpor

# helm repo add kusionstack https://kusionstack.github.io/charts
# helm repo update
# helm install karpor kusionstack/karpor

Port-Forward to 7443 & Review Dashboard:

  • Now you can register a cluster to review w/magic “agentic” ai

ArgoCD, not the Ben Affleck movie, or insurance duck commercial..

Install ArgoCD:

brew install argocd
kubectl port-forward svc/argocd-server -n argocd 8080:443
argocd login 127.0.0.1:8080

Code:

kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

Port-forward:

  • Option 1 from CLI
  • Option 2 from K9s

“Secret” Password:

  • Option 1 from CLI
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d
  • Option 2 from K9s
    • go to secrets
    • press x on the secret you want to decode
    • initial admin secret – login to argocd
    • secret – to get RSA private keys
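For the curious, the base64 -d at the end of that kubectl pipeline is all the “decoding” there is – Kubernetes stores Secret data base64-encoded, not encrypted. A quick Python sketch (the encoded value below is illustrative, not a real ArgoCD password):

```python
import base64

def decode_secret_field(encoded: str) -> str:
    """Decode one base64-encoded value from a Kubernetes Secret's .data map."""
    return base64.b64decode(encoded).decode("utf-8")

print(decode_secret_field("aHVudGVyMg=="))  # hunter2
```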

Helm Chart install:

  • See pods in k9s
  • Port-forward to 8888

Scale-Up Replicas to 10:

  • Update your github file & commit
  • Watch the MAGIC..or GitOps – whatever you wanna call it
  • Notice in k9s the pods age

Rollback in ArgoCD

  • For a quick fix when you’re not sure what broke, you can quickly roll back instead of pushing more supposed changes/fixes from GitHub
  • Annnnnd see the termination of pods from the rollback in ArgoCD

AWS IAM +1 to partayyyy

  • logged into AWS
    • aws configure
  • Created 4 tf files
    • main
    • variables
    • output
    • tfvars
main.tf
provider "aws" {
  region = var.aws_region
}

# Create IAM user
resource "aws_iam_user" "example_user" {
  name = var.user_name
}

# Attach policy to the user
resource "aws_iam_user_policy_attachment" "example_user_policy" {
  user       = aws_iam_user.example_user.name
  policy_arn = var.policy_arn
}

# Create access keys for the user
resource "aws_iam_access_key" "example_user_key" {
  user = aws_iam_user.example_user.name
}
output.tf
output "iam_user_name" {
  value = aws_iam_user.example_user.name
}

output "access_key_id" {
  value = aws_iam_access_key.example_user_key.id
}

output "secret_access_key" {
  value     = aws_iam_access_key.example_user_key.secret
  sensitive = true
}
variables.tf
variable "aws_region" {
  description = "AWS region"
  type        = string
  default     = "us-east-1"
}

variable "user_name" {
  description = "IAM username"
  type        = string
  default     = "example-user"
}

variable "policy_arn" {
  description = "IAM policy ARN to attach"
  type        = string
  default     = "arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess"
}
terraform.tfvars
aws_region = "us-east-1"
user_name  = "terraform-user"
policy_arn = "arn:aws:iam::aws:policy/AdministratorAccess"
  • terraform fmt
  • terraform init
  • terraform plan
  • terraform apply

K8s on Roidz aka K8sGPT

Blog post includes installing K8sGPT, see below for the goodies:

Installszz:

Github
https://github.com/k8sgpt-ai/k8sgpt
k8sgpt Docs:
https://docs.k8sgpt.ai/getting-started/in-cluster-operator/?ref=anaisurl.com
Ubuntu
# curl -LO https://github.com/k8sgpt-ai/k8sgpt/releases/download/v0.4.26/k8sgpt_amd64.deb
# sudo dpkg -i k8sgpt_amd64.deb
# k8sgpt version
# k8sgpt --help (handful of commands & flags available)

Pre-Reqzz:

Minikube
# unset KUBECONFIG
# minikube start
# minikube status
OpenAI
#  https://platform.openai.com/account/api-keys
K8sgpt
# k8sgpt generate
# k8sgpt auth add openai
# k8sgpt auth list

Troubleshoot why a deployment is not running:

  • Create the yaml file (the readOnlyRootFilesystem: true below intentionally breaks nginx, which needs to write its cache & pid files)
  • Create the namespace
  • Apply the file
  • Review in K9s
  • Utilize k8sgpt to see what’s going on…

2 Links to leverage:

# deployment2
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        securityContext:
          readOnlyRootFilesystem: true
# kubectl create ns demo
# kubectl apply -f deployment2 -n demo
# k8sgpt analyse
# k8sgpt analyse --explain
See pods, deployments, etc. w/the following commands
# kubectl get pods -n demo
# kubectl get pods -A
# kubectl get deployments -n demo
# kubectl get pods --all-namespaces
# k8sgpt integration list
# k8sgpt filters list
# k8sgpt analyse --filter=VulnerabilityReport
# vi deployment2
# kubectl apply -f deployment2 -n demo
  • port-forward to ensure can access pod

K8s Operator:

# brew install helm
# helm repo add k8sgpt https://charts.k8sgpt.ai/
# helm repo update
# helm install release k8sgpt/k8sgpt-operator -n k8sgpt-operator-system --create-namespace --values values.yaml
Commands to see if your new namespace installed:
# kubectl get ns
# kubectl get pods -n k8sgpt-operator-system
# k9s

ServiceMonitor to send reports to Prometheus & create DB for K8sgpt:

# helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
# helm install prom prometheus-community/kube-prometheus-stack -n k8sgpt-operator-system
Once kube-prometheus-stack has been installed, check its status by running:
  kubectl --namespace k8sgpt-operator-system get pods -l "release=prom"
Commands to squirrel away:
- Get Grafana 'admin' user password by running:
# kubectl --namespace k8sgpt-operator-system get secrets prom-grafana -o jsonpath="{.data.admin-password}" | base64 -d ; echo
- Access Grafana local instance:
# export POD_NAME=$(kubectl --namespace k8sgpt-operator-system get pod -l "app.kubernetes.io/name=grafana,app.kubernetes.io/instance=prom" -oname)
  kubectl --namespace k8sgpt-operator-system port-forward $POD_NAME 3000
- Get your grafana admin user password by running:
  kubectl get secret --namespace k8sgpt-operator-system -l app.kubernetes.io/component=admin-secret -o jsonpath="{.items[0].data.admin-password}" | base64 --decode ; echo

OpenAI API-Keyz for K8s Secret:

# export OPENAI_TOKEN=<YOUR API KEY HERE>
# kubectl create secret generic k8sgpt-sample-secret --from-literal=openai-api-key=$OPENAI_TOKEN -n k8sgpt-operator-system
# k8sgpt-resource.yaml:
apiVersion: core.k8sgpt.ai/v1alpha1
kind: K8sGPT
metadata:
  name: k8sgpt-sample
  namespace: k8sgpt-operator-system
spec:
  ai:
    enabled: true
    model: gpt-4o-mini
    backend: openai
    secret:
      name: k8sgpt-sample-secret
      key: openai-api-key
  noCache: false
  version: v0.4.26
# kubectl apply -f k8sgpt-resource.yaml -n k8sgpt-operator-system
k9s
- services, shift-f, port-forward prometheus-operated:9090
# kubectl get results -n k8sgpt-operator-system
# kubectl port-forward service/prom-grafana -n k8sgpt-operator-system 3000:80
Finding the grafana password in k9s:
- go to secrets & press x

Help I am stuck – Namespace!

https://www.redhat.com/en/blog/troubleshooting-terminating-namespaces
Open 2 terminals:
- Terminal 1
# minikube start
# minikube dashboard --url
- Terminal 2
# kubectl get namespace k8sgpt-operator-system -o json > tmp.json
# vi tmp.json   (remove "kubernetes" from the spec.finalizers list)
# curl -k -H "Content-Type: application/json" -X PUT --data-binary @tmp.json http://127.0.0.1:38717/api/v1/namespaces/k8sgpt-operator-system/finalize