EKS Cluster: Part 1 – ArgoCD

This series of blog posts tracks the progress of updating & adding to the EKS cluster.

Below are links for details:

Shhh, it's an AWS Secret…

View the code for details w/this dope link:

Below is a summary of steps:

  • KMS key for Secrets Manager
  • Created the secret container & a secret version
  • Data source reads the secret version, w/a depends_on block to ensure the version is created first
  • DynamoDB table
  • IAM role, policy, & attachment
  • Locals block builds a map (DynamoDB name & app) by JSON-decoding the secret version
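The steps above can be sketched in Terraform – a minimal sketch w/hypothetical names (not the actual repo code; `aws_kms_key.secrets` is presumed defined elsewhere):

```hcl
# Hypothetical names – not the actual repo code
resource "aws_secretsmanager_secret" "app" {
  name       = "app-config"
  kms_key_id = aws_kms_key.secrets.arn # KMS key defined elsewhere
}

resource "aws_secretsmanager_secret_version" "app" {
  secret_id     = aws_secretsmanager_secret.app.id
  secret_string = jsonencode({ dynamodb_name = "my-table", app = "my-app" })
}

# depends_on ensures the version exists before it is read
data "aws_secretsmanager_secret_version" "app" {
  secret_id  = aws_secretsmanager_secret.app.id
  depends_on = [aws_secretsmanager_secret_version.app]
}

locals {
  secret = jsondecode(data.aws_secretsmanager_secret_version.app.secret_string)
  # local.secret.dynamodb_name & local.secret.app are now usable elsewhere
}
```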

AWS Medical Transcribe & Comprehend w/nibble of HIPAAhh!!

View the code for details w/this dope link:

Below are details on how to use Terraform to build a tool that uploads a medical professional's notes into AWS & summarizes them auto-magically, w/the help of HIPPA the Hippo!

  • General flow of steps
  • Step 1 – upload audio file w/nifty commands
  • Step 2 – check your AWS console that the infra is ALIVVVVVE
  • Step 3 – run lambda.py script
  • Step 4 – confirm AWS Transcribe Medical & S3 bucket has goodiezzzz
  • Step 5.1 – AWS Comprehend Medical Create Job
  • Step 5.2 – AWS Comprehend Medical Real-Time Analysis
  • Step 6 – the sausage aka code

User uploads audio → S3
        ↓
Transcribe Medical job
        ↓
Transcript saved to S3
        ↓
Lambda calls Comprehend Medical
        ↓
Extracted entities saved to S3
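The "Lambda calls Comprehend Medical" hop can be sketched in Python – a minimal stand-in for the real handler (the function name & entity handling are my assumptions; the client is injectable so you can dry-run it w/o AWS):

```python
def analyze_transcript(text, client=None):
    """Hand a transcript to Comprehend Medical & return the entity texts.

    `client` is injectable so the function can be exercised without AWS;
    by default it builds the real boto3 Comprehend Medical client.
    """
    if client is None:
        import boto3  # only needed on the real AWS path
        client = boto3.client("comprehendmedical")
    resp = client.detect_entities_v2(Text=text)
    return [entity["Text"] for entity in resp["Entities"]]
```

`detect_entities_v2` is the real-time call used in Step 5.2; the batch "Create Job" path in Step 5.1 uses `start_entities_detection_v2_job` instead.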

  • Record audio on your phone or laptop & place it in Downloads or your desired folder
sudo apt update
sudo apt install ffmpeg
ffmpeg -version   # confirm the install
# convert to 16 kHz mono WAV (a format Transcribe handles well)
ffmpeg -i "S3-AWS-Medical.m4a" -ar 16000 -ac 1 S3-AWS-Medical.wav
terraform init
terraform fmt
terraform validate
terraform plan
terraform apply
aws s3 cp S3-AWS-Medical.wav s3://your-input-bucket-name/
  • Check the various AWS consoles where you should see resources – S3, Lambda, IAM policies/roles, Transcribe, etc.
  • You should see 3 new buckets
    • audio-input
    • medical-output
      • job.json
    • results-bucket
python3 transcribe.py
  • This recording will consist of Discussing multiple topics that have to do with some form of AWS transform, comprehend. Terraform S3. Lambda To better understand what is the most efficient way. To actually Make an individual health professional’s job easier so that they can listen to their patients. Do an audio recording. Depending the Situation, the details. The diagnosis. The meds The recommendations they should be on. At which point they can then use the AWS. Comprehend To take that transcribe. Audio recording. From speech to text and put it in. Comprehend that’ll summarize it. Hopefully, LOL. YOLO, we’ll see if this works.
  • To confirm, hit download & view the JSON in VS Code – it'll prolly be on 1 line, so use Shift+Alt+F to format it for a quick review
  • Important Note:
    • input bucket
      • the Transcribe output bucket… I know, confusing – don't point it at the audio file; remember what Comprehend reads…
    • output bucket
      • your results bucket
    • iam role
      • should pop-up in dropdown if code is correct in policy
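Once job.json is downloaded, pulling the plain-text transcript out is one dict walk – `results.transcripts[0].transcript` is the standard Transcribe output shape (the file & variable names here are made up):

```python
import json

def extract_transcript(job_json: str) -> str:
    """Pull the plain-text transcript out of a Transcribe (Medical) job.json."""
    doc = json.loads(job_json)
    return doc["results"]["transcripts"][0]["transcript"]

# tiny stand-in for a downloaded job.json
sample = json.dumps({
    "jobName": "demo-job",
    "results": {"transcripts": [{"transcript": "Patient reports mild headache."}]},
})
print(extract_transcript(sample))  # Patient reports mild headache.
```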
git init
git add .
git status
git commit -m "First commit for AWS Transcribe + Comprehend Medical w/Terraform."
git remote add origin https://github.com/earpjennings37/aws-medical-tf.git
git branch -M main
git push -u origin main
  • to confirm the connection to the repo, I did a thing
    • commands
    • open a pull request – create a pull request
    • merge pull request
    • merged & closed
git checkout -b update-readme
git branch
git status
git add .
git commit -m "updated-readme"
git push -u origin update-readme

AWS – VCS – HCP… it's all connected

Summary of Steps Below:

  • Create Github repo
    • git init & git remote add origin
  • Git Commands
    • git add .
    • git commit -m
    • git push
  • HCP Migrate VCS Workflow
    • dev
  • In Github Add development branch
  • Use VCS to deploy DEVELOPMENT
    • git branch -f
    • git checkout development
    • git branch
    • git status
    • terraform init -backend-config=dev.hcl -reconfigure
    • terraform validate
    • terraform plan
  • CANNOT do…
    • terraform apply
  • Git Commands
    • git status
    • git add .
    • git commit -m "remove extra server & refactor outputs"
    • git push
  • HCP
    • Approve
  • Github
    • Review development branch to main
  • Use VCS to deploy PRODUCTION
    • Github
      • Merge pull request
    • HCP
      • Automatically kicks off (KO) the pipeline & you approve
    • Github & HCP
      • See the PR merged/approved
  • AWS Console
    • Review new resources added or destroyed

Create New GitHub Repo:

git init
git remote add origin https://github.com/<YOUR_GIT_HUB_ACCOUNT>/my-app.git

Commit changes to github:

git add .
git commit -m "terraform code update for my app"
git push --set-upstream origin main

Migrate VCS Workflow:

add development github branch:

Use VCS to deploy development:

git branch -f development origin/development
git checkout development
git branch
terraform init -backend-config=dev.hcl -reconfigure
terraform validate
terraform plan
  • Remember, can't do terraform apply here….. HCP's VCS workflow runs the applies
git status
git add .
git commit -m "remove extra server & refactor outputs"
git push
  • Approve the run in HCP, then you can review the GitHub pull request from development to main

Use VCS to deploy main/production:

  • Github:
    • IF development goes well & passes, then can merge pull request
  • HCP:
    • Automatically kicks off (KO) the pipeline for production & you can approve
  • Github:
    • Can see the PR has merged from development to production
  • AWS Console:
    • Check to see your resources

Terraforming the Cloud Alphabet Soup aka HCP/AWS.

  • In providers.tf, add a remote backend block so you can operate against HCP (an enhanced/remote backend): your state lives in HCP & runs even stream output to your CLI in VS Code – compared to a standard backend (like S3) that just stores state.
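A minimal providers.tf sketch of that remote backend (the org & workspace names are placeholders, not my real ones):

```hcl
terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "my-hcp-org" # placeholder

    workspaces {
      name = "my-workspace" # placeholder
    }
  }
}
```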

TF Remote provider magic:

  • Seeing the TF at work locally in the CLI & live in HCP, woah – magic..
  • Then jump to the ole’ AWS Console to check your IaC
  • Alright alright alright, lets destroy in the CLI
  • Annnnnnnnnnd, once again you can see live “streamin” in HCP
  • OMG its gone!!

S3 → HCP Enhanced/Remote:

  • Then, if your backend provider is already established, you can watch the new state live before any Terraform is planned or applied
  • WOW, legitness.

AWS IAM +1 to partayyyy

  • logged into AWS
    • aws configure
  • Created 4 tf files
    • main
    • variables
    • output
    • tfvars
main.tf
provider "aws" {
  region = var.aws_region
}

# Create IAM user
resource "aws_iam_user" "example_user" {
  name = var.user_name
}

# Attach policy to the user
resource "aws_iam_user_policy_attachment" "example_user_policy" {
  user       = aws_iam_user.example_user.name
  policy_arn = var.policy_arn
}

# Create access keys for the user
resource "aws_iam_access_key" "example_user_key" {
  user = aws_iam_user.example_user.name
}
output.tf
output "iam_user_name" {
  value = aws_iam_user.example_user.name
}

output "access_key_id" {
  value = aws_iam_access_key.example_user_key.id
}

output "secret_access_key" {
  value     = aws_iam_access_key.example_user_key.secret
  sensitive = true
}
variables.tf
variable "aws_region" {
  description = "AWS region"
  type        = string
  default     = "us-east-1"
}

variable "user_name" {
  description = "IAM username"
  type        = string
  default     = "example-user"
}

variable "policy_arn" {
  description = "IAM policy ARN to attach"
  type        = string
  default     = "arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess"
}
terraform.tfvars
aws_region = "us-east-1"
user_name  = "terraform-user"
policy_arn = "arn:aws:iam::aws:policy/AdministratorAccess"
  • terraform fmt
  • terraform init
  • terraform plan
  • terraform apply

Lambda Magic for RDS

Steps below to create:

To stop an RDS instance every 7 days using AWS Lambda and Terraform, the following concepts are used:

Explanation:

  • Step 1
  • Step 2
    • Lambda Function – a Python-based Lambda function that uses the AWS SDK (boto3) to stop the specified RDS instance(s).
  • Step 3
  • Step 4
    • To deploy:
      • Save the Terraform code in .tf files and the Python code as lambda_function.py.
      • Zip the Python file into lambda_function.zip.
      • Initialize Terraform: terraform init
      • Plan the deployment: terraform plan
        • Apply the changes: terraform apply
  • main.tf
# Define an IAM role for the Lambda function
resource "aws_iam_role" "rds_stop_lambda_role" {
  name = "rds-stop-lambda-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17",
    Statement = [
      {
        Action = "sts:AssumeRole",
        Effect = "Allow",
        Principal = {
          Service = "lambda.amazonaws.com"
        }
      }
    ]
  })
}

# Attach a policy to the role allowing RDS stop actions and CloudWatch Logs
resource "aws_iam_role_policy" "rds_stop_lambda_policy" {
  name = "rds-stop-lambda-policy"
  role = aws_iam_role.rds_stop_lambda_role.id

  policy = jsonencode({
    Version = "2012-10-17",
    Statement = [
      {
        Effect = "Allow",
        Action = [
          "rds:StopDBInstance",
          "rds:DescribeDBInstances"
        ],
        Resource = "*" # Restrict this to specific RDS instances if needed
      },
      {
        Effect = "Allow",
        Action = [
          "logs:CreateLogGroup",
          "logs:CreateLogStream",
          "logs:PutLogEvents"
        ],
        Resource = "arn:aws:logs:*:*:*"
      }
    ]
  })
}

# Create the Lambda function
resource "aws_lambda_function" "rds_stop_lambda" {
  function_name = "rds-stop-every-7-days"
  handler       = "lambda_function.lambda_handler"
  runtime       = "python3.9"
  role          = aws_iam_role.rds_stop_lambda_role.arn
  timeout       = 60

  # Replace with the path to your zipped Lambda code
  filename         = "lambda_function.zip"
  source_code_hash = filebase64sha256("lambda_function.zip")

  environment {
    variables = {
      RDS_INSTANCE_IDENTIFIER = "my-rds-instance" # Replace with your RDS instance identifier
      REGION                  = "us-east-1"       # Replace with your AWS region
    }
  }
}

# Create an EventBridge (CloudWatch Event) rule to trigger the Lambda
resource "aws_cloudwatch_event_rule" "rds_stop_schedule" {
  name                = "rds-stop-every-7-days-schedule"
  schedule_expression = "cron(0 0 ? * SUN *)" # Every Sunday at 00:00 UTC
}

# Add the Lambda function as a target for the EventBridge rule
resource "aws_cloudwatch_event_target" "rds_stop_target" {
  rule      = aws_cloudwatch_event_rule.rds_stop_schedule.name
  target_id = "rds-stop-lambda-target"
  arn       = aws_lambda_function.rds_stop_lambda.arn
}

# Grant EventBridge permission to invoke the Lambda function
resource "aws_lambda_permission" "allow_cloudwatch_to_call_lambda" {
  statement_id  = "AllowExecutionFromCloudWatch"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.rds_stop_lambda.function_name
  principal     = "events.amazonaws.com"
  source_arn    = aws_cloudwatch_event_rule.rds_stop_schedule.arn
}
  • lambda_function.py (Python code for the Lambda function):
import boto3
import os

def lambda_handler(event, context):
    rds_instance_identifier = os.environ.get('RDS_INSTANCE_IDENTIFIER')
    region = os.environ.get('REGION')

    if not rds_instance_identifier or not region:
        print("Error: RDS_INSTANCE_IDENTIFIER or REGION environment variables are not set.")
        return {
            'statusCode': 400,
            'body': 'Missing environment variables.'
        }

    rds_client = boto3.client('rds', region_name=region)

    try:
        response = rds_client.stop_db_instance(
            DBInstanceIdentifier=rds_instance_identifier
        )
        print(f"Successfully initiated stop for RDS instance: {rds_instance_identifier}")
        return {
            'statusCode': 200,
            'body': f"Stopping RDS instance: {rds_instance_identifier}"
        }
    except Exception as e:
        print(f"Error stopping RDS instance {rds_instance_identifier}: {e}")
        return {
            'statusCode': 500,
            'body': f"Error stopping RDS instance: {e}"
        }
  • Zipping the Lambda Code:
zip lambda_function.zip lambda_function.py
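If zip isn't installed, a tiny Python stand-in does the same packaging (the file names match the Terraform above; the function name is made up):

```python
import pathlib
import zipfile

def package_lambda(src: str = "lambda_function.py",
                   dest: str = "lambda_function.zip") -> str:
    """Zip the handler so Terraform's filename/source_code_hash can read it."""
    with zipfile.ZipFile(dest, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.write(src, arcname=pathlib.Path(src).name)
    return dest
```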

TF + EKS = Deployed Yo!

Goal:

Look man, I just wanna set up a tiny EKS cluster w/a couple nodes using Terraform.

Lessons Learned:

  • Configure AWS CLI
  • Deploy EKS Cluster
  • Deploy NGINX Pods
  • Destroy!!!
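A rough shape of the cluster config, using the community terraform-aws-modules/eks module – every name, version, & size below is a placeholder, not the repo's actual code (and it assumes a VPC module exists):

```hcl
module "eks" {
  source = "terraform-aws-modules/eks/aws"

  cluster_name    = "tiny-eks" # placeholder
  cluster_version = "1.31"     # placeholder

  vpc_id     = module.vpc.vpc_id # assumes a VPC module exists
  subnet_ids = module.vpc.private_subnets

  eks_managed_node_groups = {
    default = {
      instance_types = ["t3.medium"]
      min_size       = 1
      max_size       = 3
      desired_size   = 2
    }
  }
}
```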

Configure AWS CLI:

Use Access & Secret Access Key:

Change Directory:

Review TF Configuration Files:

Deploy EKS Cluster:

Terraform init, plan, & apply:

Kubectl to chat w/yo EKS cluster:

Check to see your cluster is up & moving:

Deploy NGINX Pods:
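A minimal NGINX Deployment manifest to sketch this step (names, labels, & replica count are my assumptions), applied w/kubectl apply -f nginx.yaml:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.27
          ports:
            - containerPort: 80
```

Then kubectl get pods to watch them come up.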

Deploy to EKS Cluster:

Check again if your cluster is up… & MOVINGG!:

Destroy!!

Kubernetes Clusters w/EKS is Kewl as (S)hell!

Shells are da bomb, right? Just like in Mario Kart! Cloud Shell can be dope too for creating a Kubernetes cluster using EKS, so let's party, Mario.

  • Create an EKS cluster in a Region
  • Deploy a sample application
  • Use DNS name of Load Balancer to Test the Cluster

AWS Stuff:

Create EC2:

Download AWS CLI v2, kubectl, & eksctl, then move the directory files:

Create the cluster w/eksctl, connect, & verify it's running:

Run some kubectl apply commands against the YAML files & check that those pods are running:

  • Now curl the load balancer DNS name…walllll-ahhhhh