AWS – VCS – HCP.. it's all connected

Summary of Steps Below:

  • Create GitHub repo
    • git init
    • git remote add origin
  • Git Commands
    • git add .
    • git commit -m
    • git push
  • HCP Migrate VCS Workflow
    • dev
  • In GitHub, add development branch
  • Use VCS to deploy DEVELOPMENT
    • git branch -f
    • git checkout development
    • git branch
    • git status
    • terraform init -backend-config=dev.hcl -reconfigure
    • terraform validate
    • terraform plan
  • CAN NOT do..
    • terraform apply
  • Git Commands
    • git status
    • git add .
    • git commit -m "remove extra server & refactor outputs"
    • git push
  • HCP
    • Approve
  • Github
    • Review development branch to main
  • Use VCS to deploy PRODUCTION
    • Github
      • Merge pull request
    • HCP
      • Automatically kick off (KO) pipeline & approve
    • GitHub & HCP
      • See the PR merged/approved
  • AWS Console
    • Review new resources added or destroyed

Create New GitHub Repo:

git init
git remote add origin https://github.com/<YOUR_GIT_HUB_ACCOUNT>/my-app.git

Commit changes to GitHub:

git add .
git commit -m "terraform code update for my app"
git push --set-upstream origin master

Migrate VCS Workflow:

Add a development GitHub branch:

Use VCS to deploy development:

git branch -f development origin/development
git checkout development
git branch
terraform init -backend-config=dev.hcl -reconfigure
terraform validate
terraform plan
  • Remember, can't run terraform apply locally here….. HCP handles the apply through the VCS workflow (a sample dev.hcl is sketched at the end of this section)
git status
git add .
git commit -m "remove extra server & refactor outputs"
git push
  • Approve in HCP & then review the GitHub pull request from development to main
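  • FYI, the dev.hcl passed to terraform init above is just a partial backend config file. A minimal sketch, assuming the "remote" backend with hypothetical org/workspace names (swap in your own):
# dev.hcl -- partial backend config (hypothetical values)
hostname     = "app.terraform.io"
organization = "my-org"                  # your HCP Terraform org
workspaces { name = "my-app-dev" }       # the dev workspace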

Use VCS to deploy main/production:

  • GitHub:
    • IF development goes well & passes, then you can merge the pull request
  • HCP:
    • Automatically kicks off (KO) the pipeline for production & you can approve
  • GitHub:
    • Can see the PR has merged from development to production
  • AWS Console:
    • Check to see your resources

Terraforming the Cloud Alphabet Soup aka HCP/AWS.

  • In providers.tf, add a remote backend so you can run operations in HCP (enhanced/remote) and keep your state there, even streaming run output back to your CLI in VS Code.. compared to a standard backend (like S3) that just stores state
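  • A minimal sketch of what that backend block in providers.tf might look like (hypothetical org/workspace names; the same settings can instead live in partial-config files like the dev.hcl above and be passed via -backend-config):
terraform {
  # Enhanced/remote backend: state lives in HCP Terraform & runs can stream
  # output back to your local CLI, vs. a standard backend (like S3) that
  # only stores state.
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "my-org"            # hypothetical org name

    workspaces {
      name = "my-app-dev"              # hypothetical workspace name
    }
  }
}

provider "aws" {
  region = "us-east-1"
}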

TF Remote provider magic:

  • Seeing the TF at work locally in the CLI & live in HCP, woah – magic..
  • Then jump to the ole’ AWS Console to check your IaC
  • Alright alright alright, let's destroy in the CLI
  • Annnnnnnnnnd, once again you can see live “streamin” in HCP
  • OMG it's gone!!

S3 -> HCP Enhanced/Remote:

  • Then, if you already have your backend established, you can see the migrated state live in HCP before any Terraform is planned or applied
  • WOW, legitness.
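  • Worth noting: the actual migration happens at init time.. change the backend block from s3 to remote, run terraform init (or terraform init -migrate-state to be explicit), & Terraform asks whether to copy the existing state into the new backend.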

AWS IAM +1 to partayyyy

  • logged into AWS
    • aws configure
  • Created 4 tf files
    • main
    • variables
    • output
    • tfvars
main.tf
provider "aws" {
  region = var.aws_region
}

# Create IAM user
resource "aws_iam_user" "example_user" {
  name = var.user_name
}

# Attach policy to the user
resource "aws_iam_user_policy_attachment" "example_user_policy" {
  user       = aws_iam_user.example_user.name
  policy_arn = var.policy_arn
}

# Create access keys for the user
resource "aws_iam_access_key" "example_user_key" {
  user = aws_iam_user.example_user.name
}
output.tf
output "iam_user_name" {
  value = aws_iam_user.example_user.name
}

output "access_key_id" {
  value = aws_iam_access_key.example_user_key.id
}

output "secret_access_key" {
  value     = aws_iam_access_key.example_user_key.secret
  sensitive = true
}
variables.tf
variable "aws_region" {
  description = "AWS region"
  type        = string
  default     = "us-east-1"
}

variable "user_name" {
  description = "IAM username"
  type        = string
  default     = "example-user"
}

variable "policy_arn" {
  description = "IAM policy ARN to attach"
  type        = string
  default     = "arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess"
}
terraform.tfvars
aws_region = "us-east-1"
user_name  = "terraform-user"
policy_arn = "arn:aws:iam::aws:policy/AdministratorAccess"
  • terraform fmt
  • terraform init
  • terraform plan
  • terraform apply
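  • Heads up: because secret_access_key is marked sensitive, Terraform redacts it in the normal apply/output listing.. terraform output -raw secret_access_key will print it if you actually need it.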

Lambda Magic for RDS

Steps below to create:

To stop an RDS instance every 7 days using AWS Lambda and Terraform, these are the concepts & steps followed:

Explanation:

  • Step 1
    • IAM Role & Policy: An IAM role the Lambda assumes, with permissions to stop/describe RDS instances & write to CloudWatch Logs.
  • Step 2
    • Lambda Function: A Python-based Lambda function that uses the AWS SDK (boto3) to stop the specified RDS instance(s).
  • Step 3
    • EventBridge Rule: A scheduled (cron) rule that triggers the Lambda every Sunday (every 7 days), plus the permission that lets EventBridge invoke it.
  • Step 4
    • To deploy:
      • Save the Terraform code in .tf files and the Python code as lambda_function.py.
      • Zip the Python file into lambda_function.zip.
      • Initialize Terraform: terraform init
      • Plan the deployment: terraform plan
        • Apply the changes: terraform apply
  • main.tf
# Define an IAM role for the Lambda function
resource "aws_iam_role" "rds_stop_lambda_role" {
  name = "rds-stop-lambda-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17",
    Statement = [
      {
        Action = "sts:AssumeRole",
        Effect = "Allow",
        Principal = {
          Service = "lambda.amazonaws.com"
        }
      }
    ]
  })
}

# Attach a policy to the role allowing RDS stop actions and CloudWatch Logs
resource "aws_iam_role_policy" "rds_stop_lambda_policy" {
  name = "rds-stop-lambda-policy"
  role = aws_iam_role.rds_stop_lambda_role.id

  policy = jsonencode({
    Version = "2012-10-17",
    Statement = [
      {
        Effect = "Allow",
        Action = [
          "rds:StopDBInstance",
          "rds:DescribeDBInstances"
        ],
        Resource = "*" # Restrict this to specific RDS instances if needed
      },
      {
        Effect = "Allow",
        Action = [
          "logs:CreateLogGroup",
          "logs:CreateLogStream",
          "logs:PutLogEvents"
        ],
        Resource = "arn:aws:logs:*:*:*"
      }
    ]
  })
}

# Create the Lambda function
resource "aws_lambda_function" "rds_stop_lambda" {
  function_name = "rds-stop-every-7-days"
  handler       = "lambda_function.lambda_handler"
  runtime       = "python3.9"
  role          = aws_iam_role.rds_stop_lambda_role.arn
  timeout       = 60

  # Replace with the path to your zipped Lambda code
  filename         = "lambda_function.zip"
  source_code_hash = filebase64sha256("lambda_function.zip")

  environment {
    variables = {
      RDS_INSTANCE_IDENTIFIER = "my-rds-instance" # Replace with your RDS instance identifier
      REGION                  = "us-east-1"       # Replace with your AWS region
    }
  }
}

# Create an EventBridge (CloudWatch Event) rule to trigger the Lambda
resource "aws_cloudwatch_event_rule" "rds_stop_schedule" {
  name                = "rds-stop-every-7-days-schedule"
  schedule_expression = "cron(0 0 ? * SUN *)" # Every Sunday at 00:00 UTC
}

# Add the Lambda function as a target for the EventBridge rule
resource "aws_cloudwatch_event_target" "rds_stop_target" {
  rule      = aws_cloudwatch_event_rule.rds_stop_schedule.name
  target_id = "rds-stop-lambda-target"
  arn       = aws_lambda_function.rds_stop_lambda.arn
}

# Grant EventBridge permission to invoke the Lambda function
resource "aws_lambda_permission" "allow_cloudwatch_to_call_lambda" {
  statement_id  = "AllowExecutionFromCloudWatch"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.rds_stop_lambda.function_name
  principal     = "events.amazonaws.com"
  source_arn    = aws_cloudwatch_event_rule.rds_stop_schedule.arn
}
  • lambda_function.py (Python code for the Lambda function):
import boto3
import os

def lambda_handler(event, context):
    rds_instance_identifier = os.environ.get('RDS_INSTANCE_IDENTIFIER')
    region = os.environ.get('REGION')

    if not rds_instance_identifier or not region:
        print("Error: RDS_INSTANCE_IDENTIFIER or REGION environment variables are not set.")
        return {
            'statusCode': 400,
            'body': 'Missing environment variables.'
        }

    rds_client = boto3.client('rds', region_name=region)

    try:
        response = rds_client.stop_db_instance(
            DBInstanceIdentifier=rds_instance_identifier
        )
        print(f"Successfully initiated stop for RDS instance: {rds_instance_identifier}")
        return {
            'statusCode': 200,
            'body': f"Stopping RDS instance: {rds_instance_identifier}"
        }
    except Exception as e:
        print(f"Error stopping RDS instance {rds_instance_identifier}: {e}")
        return {
            'statusCode': 500,
            'body': f"Error stopping RDS instance: {e}"
        }
  • Zipping the Lambda Code:
zip lambda_function.zip lambda_function.py

TF + EKS = Deployed Yo!

Goal:

Look man, I just wanna set up a tiny EKS cluster w/a couple nodes using Terraform.

Lessons Learned:

  • Configure AWS CLI
  • Deploy EKS Cluster
  • Deploy NGINX Pods
  • Destroy!!!

Configure AWS CLI:

Use Access & Secret Access Key:

Change Directory:

Review TF Configuration Files:
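  • The lab's actual files aren't reproduced here, but a minimal sketch of the kind of config you'd be reviewing could look like this (hypothetical names; assumes the VPC subnets & IAM roles already exist & get passed in as variables):
variable "subnet_ids"       { type = list(string) } # existing subnets for the cluster
variable "cluster_role_arn" { type = string }       # IAM role for the EKS control plane
variable "node_role_arn"    { type = string }       # IAM role for the worker nodes

# Control plane
resource "aws_eks_cluster" "lab" {
  name     = "lab-eks" # hypothetical cluster name
  role_arn = var.cluster_role_arn

  vpc_config {
    subnet_ids = var.subnet_ids
  }
}

# A couple of managed worker nodes
resource "aws_eks_node_group" "lab_nodes" {
  cluster_name    = aws_eks_cluster.lab.name
  node_group_name = "lab-nodes"
  node_role_arn   = var.node_role_arn
  subnet_ids      = var.subnet_ids
  instance_types  = ["t3.medium"]

  scaling_config {
    desired_size = 2
    min_size     = 2
    max_size     = 3
  }
}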

Deploy EKS Cluster:

Terraform init, plan, & apply:

Kubectl to chat w/yo EKS cluster:
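  • Most likely something like aws eks update-kubeconfig --region us-east-1 --name lab-eks (hypothetical region/name).. it writes the cluster endpoint & auth into your kubeconfig so kubectl can talk to it, then kubectl get nodes confirms the nodes joined.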

Check to see your cluster is up & moving:

Deploy NGINX Pods:

Deploy to EKS Cluster:

Check again if your cluster is up… & MOVINGG!:

Destroy!!

Kubernetes Clusters w/EKS is Kewl as (S)hell!

Shells are da bomb right? Just like in Mario Kart! Cloud Shell can be dope too for creating a Kubernetes cluster using EKS, let's party Mario.

  • Create an EKS cluster in a Region
  • Deploy an Application to Mimic a Real Application
  • Use DNS name of Load Balancer to Test the Cluster

AWS Stuff:

Create EC2:

Download AWS CLI v2, kubectl, eksctl, & move directory files:

Create the cluster, connect, & verify running eksctl:
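  • Roughly eksctl create cluster --name dev-cluster --region us-east-1 --nodes 2 territory (hypothetical names).. eksctl stands up the control plane & a node group & wires up your kubeconfig, then kubectl get nodes verifies it's up.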

Run thru some kubectl applys to yaml files & test to see those pods running:

  • Now curl the load balancer DNS name…walllll-ahhhhh

Deploy Nodes w/Terraform in Kubernetes

Kubernetes is up & running!? Sick! Buuuuuuuuuuuuuuuuuuut, I wanna make some changes – so Imma use Terraform. W/out further ado… let's get these nodes deployed!

  • Initially set up a cluster using kubectl
  • Deployed NGINX nodes using Terraform
  • As an admin I deployed a NodePort to the Kubernetes cluster w/NGINX Nodes
  • Used Terraform to deploy NodePort & scale NGINX nodes
  • ….DESTROY video boy (…..what is Benchwarmers..)

Set up the goodies:

Check to see cluster is created & get SSL info for server IP address:

Edit Variables file:

Terraform init & apply:

Get the TF config file:

Vim lab_kubernetes_service.tf:
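  • The exact lab file isn't shown here, but the NodePort service side would look roughly like this (a sketch with hypothetical names/ports):
resource "kubernetes_service" "nginx" {
  metadata {
    name = "nginx" # hypothetical service name
  }

  spec {
    selector = {
      app = "nginx" # matches the deployment's pod labels
    }
    type = "NodePort"

    port {
      port        = 80
      target_port = 80
      node_port   = 30080 # hypothetical NodePort
    }
  }
}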

vim lab_kubernetes_resources.tf:
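  • And the NGINX deployment side, again just a rough sketch (assumes the kubernetes provider is already pointed at your kubeconfig):
resource "kubernetes_deployment" "nginx" {
  metadata {
    name = "nginx"
    labels = {
      app = "nginx"
    }
  }

  spec {
    replicas = 2 # bump this & re-apply to scale the NGINX nodes

    selector {
      match_labels = {
        app = "nginx"
      }
    }

    template {
      metadata {
        labels = {
          app = "nginx"
        }
      }

      spec {
        container {
          name  = "nginx"
          image = "nginx:latest"

          port {
            container_port = 80
          }
        }
      }
    }
  }
}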

  • Terraform Destroy
  • kind delete cluster --name lab-terraform-kubernetes

Deep Pass of Secrets to Kubernetes Container

Kubernetes is dope for data bro! Watch how we send configuration data stored in Secrets & ConfigMaps to the applications running in containers.

  • Create a password file & store it in ….. secrets..
  • Create the Nginx Pod

Generate a file for the secret password file & data:
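  • That's typically kubectl create secret generic territory, using --from-file (or --from-literal) to stuff the password file into a Secret the pod can later mount or read as env vars.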

Vi pod.yml:

Kubectl exec -- curl -u user:<PASSWORD> <IP_ADDRESS>:

Be Like 2 Kubernetes in a Pod

Alright alright alright…. let's create a lil baby pod & eventually create an entire Kubernetes application!!

  • Create YAML file w/the pod details for the nginx pod
  • Create the pod…just do it!
  • SSH!!

Vi Nginx.yaml:

Kubectl create -f ~/nginx.yml:

  • Create the pod bro

kubectl get pods -n web:

  • Double check the pod is created dude

kubectl describe pod nginx -n web:

  • Looooook at daaa deeeeetaillllllllzzzuhhh