AWS IAM +1 to partayyyy

  • logged into AWS
    • aws configure
  • Created 4 tf files
    • main
    • variables
    • output
    • tfvars
main.tf
provider "aws" {
  region = var.aws_region
}

# Create IAM user
resource "aws_iam_user" "example_user" {
  name = var.user_name
}

# Attach policy to the user
resource "aws_iam_user_policy_attachment" "example_user_policy" {
  user       = aws_iam_user.example_user.name
  policy_arn = var.policy_arn
}

# Create access keys for the user
resource "aws_iam_access_key" "example_user_key" {
  user = aws_iam_user.example_user.name
}
output.tf
output "iam_user_name" {
  value = aws_iam_user.example_user.name
}

output "access_key_id" {
  value = aws_iam_access_key.example_user_key.id
}

output "secret_access_key" {
  value     = aws_iam_access_key.example_user_key.secret
  sensitive = true
}
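Heads up: because secret_access_key is marked sensitive, `terraform apply` redacts it in the console. To actually read it after apply (these are real Terraform CLI flags):

```shell
# Sensitive outputs are hidden by default; print this one explicitly
terraform output -raw secret_access_key
```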
variables.tf
variable "aws_region" {
  description = "AWS region"
  type        = string
  default     = "us-east-1"
}

variable "user_name" {
  description = "IAM username"
  type        = string
  default     = "example-user"
}

variable "policy_arn" {
  description = "IAM policy ARN to attach"
  type        = string
  default     = "arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess"
}
terraform.tfvars
aws_region = "us-east-1"
user_name  = "terraform-user"
policy_arn = "arn:aws:iam::aws:policy/AdministratorAccess"
  • terraform fmt
  • terraform init
  • terraform plan
  • terraform apply

Lambda Magic for RDS

Steps below to create:

To stop an RDS instance every 7 days using AWS Lambda and Terraform, here are the pieces involved:

Explanation:

  • Step 1
  • Step 2
    • Lambda Function: A Python-based Lambda function that uses the AWS SDK (boto3) to stop the specified RDS instance(s).
  • Step 3
  • Step 4
    • To deploy:
      • Save the Terraform code in .tf files and the Python code as lambda_function.py.
      • Zip the Python file into lambda_function.zip.
      • Initialize Terraform: terraform init
      • Plan the deployment: terraform plan
      • Apply the changes: terraform apply
  • main.tf
# Define an IAM role for the Lambda function
resource "aws_iam_role" "rds_stop_lambda_role" {
  name = "rds-stop-lambda-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17",
    Statement = [
      {
        Action = "sts:AssumeRole",
        Effect = "Allow",
        Principal = {
          Service = "lambda.amazonaws.com"
        }
      }
    ]
  })
}

# Attach a policy to the role allowing RDS stop actions and CloudWatch Logs
resource "aws_iam_role_policy" "rds_stop_lambda_policy" {
  name = "rds-stop-lambda-policy"
  role = aws_iam_role.rds_stop_lambda_role.id

  policy = jsonencode({
    Version = "2012-10-17",
    Statement = [
      {
        Effect = "Allow",
        Action = [
          "rds:StopDBInstance",
          "rds:DescribeDBInstances"
        ],
        Resource = "*" # Restrict this to specific RDS instances if needed
      },
      {
        Effect = "Allow",
        Action = [
          "logs:CreateLogGroup",
          "logs:CreateLogStream",
          "logs:PutLogEvents"
        ],
        Resource = "arn:aws:logs:*:*:*"
      }
    ]
  })
}

# Create the Lambda function
resource "aws_lambda_function" "rds_stop_lambda" {
  function_name = "rds-stop-every-7-days"
  handler       = "lambda_function.lambda_handler"
  runtime       = "python3.9"
  role          = aws_iam_role.rds_stop_lambda_role.arn
  timeout       = 60

  # Replace with the path to your zipped Lambda code
  filename         = "lambda_function.zip"
  source_code_hash = filebase64sha256("lambda_function.zip")

  environment {
    variables = {
      RDS_INSTANCE_IDENTIFIER = "my-rds-instance" # Replace with your RDS instance identifier
      REGION                  = "us-east-1"       # Replace with your AWS region
    }
  }
}

# Create an EventBridge (CloudWatch Event) rule to trigger the Lambda
resource "aws_cloudwatch_event_rule" "rds_stop_schedule" {
  name                = "rds-stop-every-7-days-schedule"
  schedule_expression = "cron(0 0 ? * SUN *)" # Every Sunday at 00:00 UTC
}

# Add the Lambda function as a target for the EventBridge rule
resource "aws_cloudwatch_event_target" "rds_stop_target" {
  rule      = aws_cloudwatch_event_rule.rds_stop_schedule.name
  target_id = "rds-stop-lambda-target"
  arn       = aws_lambda_function.rds_stop_lambda.arn
}

# Grant EventBridge permission to invoke the Lambda function
resource "aws_lambda_permission" "allow_cloudwatch_to_call_lambda" {
  statement_id  = "AllowExecutionFromCloudWatch"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.rds_stop_lambda.function_name
  principal     = "events.amazonaws.com"
  source_arn    = aws_cloudwatch_event_rule.rds_stop_schedule.arn
}
  • lambda_function.py (Python code for the Lambda function):
import boto3
import os

def lambda_handler(event, context):
    rds_instance_identifier = os.environ.get('RDS_INSTANCE_IDENTIFIER')
    region = os.environ.get('REGION')

    if not rds_instance_identifier or not region:
        print("Error: RDS_INSTANCE_IDENTIFIER or REGION environment variables are not set.")
        return {
            'statusCode': 400,
            'body': 'Missing environment variables.'
        }

    rds_client = boto3.client('rds', region_name=region)

    try:
        response = rds_client.stop_db_instance(
            DBInstanceIdentifier=rds_instance_identifier
        )
        print(f"Successfully initiated stop for RDS instance: {rds_instance_identifier}")
        return {
            'statusCode': 200,
            'body': f"Stopping RDS instance: {rds_instance_identifier}"
        }
    except Exception as e:
        print(f"Error stopping RDS instance {rds_instance_identifier}: {e}")
        return {
            'statusCode': 500,
            'body': f"Error stopping RDS instance: {e}"
        }
  • Zipping the Lambda Code:
zip lambda_function.zip lambda_function.py
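Before zipping, you can smoke-test the handler's core logic locally without AWS or boto3. This sketch swaps in a fake client; FakeRDSClient and stop_instance are my illustrative stand-ins, not lab code:

```python
# Local smoke test for the handler's core logic -- no AWS, no boto3.
# FakeRDSClient and stop_instance are illustrative stand-ins, not lab code.

class FakeRDSClient:
    """Mimics the one boto3 RDS call the handler makes."""
    def __init__(self):
        self.stopped = []

    def stop_db_instance(self, DBInstanceIdentifier):
        self.stopped.append(DBInstanceIdentifier)
        return {"DBInstance": {"DBInstanceIdentifier": DBInstanceIdentifier}}

def stop_instance(client, identifier):
    """Same try/except shape as lambda_handler, with the client injected."""
    try:
        client.stop_db_instance(DBInstanceIdentifier=identifier)
        return {"statusCode": 200, "body": f"Stopping RDS instance: {identifier}"}
    except Exception as e:
        return {"statusCode": 500, "body": f"Error stopping RDS instance: {e}"}

fake = FakeRDSClient()
result = stop_instance(fake, "my-rds-instance")
print(result["statusCode"])  # 200
```

Injecting the client like this is what makes the try/except path testable without touching a real RDS instance.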

TF + EKS = Deployed Yo!

Goal:

Look man, I just wanna set up a tiny EKS cluster w/a couple nodes using Terraform.

Lessons Learned:

  • Configure AWS CLI
  • Deploy EKS Cluster
  • Deploy NGINX Pods
  • Destroy!!!

Configure AWS CLI:

Use Access & Secret Access Key:

Change Directory:

Review TF Configuration Files:

Deploy EKS Cluster:

Terraform init, plan, & apply:

Kubectl to chat w/yo EKS cluster:
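"Chatting" w/the cluster just means pointing kubectl at it. A sketch, assuming the region from this lab; the cluster name is illustrative:

```shell
# Write kubeconfig credentials for the new cluster (name is a placeholder)
aws eks update-kubeconfig --region us-east-1 --name my-eks-cluster

# Sanity check: worker nodes should report Ready
kubectl get nodes
```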

Check to see your cluster is up & moving:

Deploy NGINX Pods:

Deploy to EKS Cluster:
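One way to get NGINX pods onto the cluster, assuming a plain manifest rather than the lab's TF files (names and replica count are illustrative):

```yaml
# nginx-deployment.yml (sketch) -- two NGINX replicas
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
```

Apply it with `kubectl apply -f nginx-deployment.yml`, then `kubectl get pods` to watch them come up.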

Check again if your cluster is up… & MOVINGG!:

Destroy!!

Kubernetes Clusters w/EKS is Kewl as (S)hell!

Shells are da bomb right? Just like in Mario Kart! Cloud Shell can be dope too for creating a Kubernetes cluster using EKS, let's party Mario.

  • Create an EKS cluster in a Region
  • Deploy an Application to Mimic a Real Workload
  • Use DNS name of Load Balancer to Test the Cluster

AWS Stuff:

Create EC2:

Download AWS CLI v2, kubectl, eksctl, & move directory files:

Create the cluster, connect, & verify running eksctl:

Run thru some kubectl applys to yaml files & test to see those pods running:

  • Now curl the load balancer DNS name…walllll-ahhhhh
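The apply-and-curl loop, sketched out (file names are placeholders for the lab's YAML; the DNS name comes from the service output):

```shell
# Apply the lab's manifests (file names are placeholders)
kubectl apply -f deployment.yml
kubectl apply -f service.yml

# Grab the load balancer's DNS name from the service, then hit it
kubectl get svc
curl http://<LOAD_BALANCER_DNS_NAME>
```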

Deploy Nodes w/Terraform in Kubernetes

Kubernetes is up & running!? Sick! Buuuuuuuuuuuuuuuuuuut, I wanna make some changes – so Imma use Terraform. W/out further ado… let's get these nodes deployed!

  • Initially set up a cluster using kubectl
  • Deployed NGINX nodes using Terraform
  • As an admin I deployed a NodePort to the Kubernetes cluster w/NGINX nodes
  • Used Terraform to deploy NodePort & scale NGINX nodes
  • ….DESTROY video boy (…..what is Benchwarmers..)

Set up the goodies:

Check to see cluster is created & get SSL info for server IP address:

Edit Variables file:

Terraform init & apply:

Get the TF config file:

Vim lab_kubernetes_service.tf:

vim lab_kubernetes_resources.tf:
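I don't have the lab files handy, but a minimal sketch of what lab_kubernetes_resources.tf and lab_kubernetes_service.tf likely hold: a scalable NGINX Deployment plus a NodePort Service. Replica count and node port are my assumptions:

```hcl
# lab_kubernetes_resources.tf (sketch) -- NGINX deployment, scale via replicas
resource "kubernetes_deployment" "nginx" {
  metadata {
    name   = "lab-nginx"
    labels = { app = "nginx" }
  }
  spec {
    replicas = 2 # bump this to scale the NGINX nodes
    selector {
      match_labels = { app = "nginx" }
    }
    template {
      metadata {
        labels = { app = "nginx" }
      }
      spec {
        container {
          name  = "nginx"
          image = "nginx:latest"
        }
      }
    }
  }
}

# lab_kubernetes_service.tf (sketch) -- NodePort to reach the pods
resource "kubernetes_service" "nginx" {
  metadata {
    name = "lab-nginx-svc"
  }
  spec {
    selector = { app = "nginx" }
    type     = "NodePort"
    port {
      port      = 80
      node_port = 30201 # assumption; any free port in 30000-32767
    }
  }
}
```

After `terraform apply`, changing `replicas` and re-applying is the whole scaling story.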

  • Terraform Destroy
  • kind delete cluster --name lab-terraform-kubernetes

Deep Pass of Secrets to a Kubernetes Container

Kubernetes is dope for data bro! Watch how we send configuration data stored in Secrets & ConfigMaps to the applications running in containers.

  • Create a password file & store it in ….. Secrets..
  • Create the Nginx Pod

Generate a file for the secret password file & data:
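A hedged sketch of the secret step (the file name, secret name, and password are mine, not the lab's):

```shell
# Put the password in a file, then stash it in a Kubernetes Secret
echo 'S3cr3t!' > password.txt
kubectl create secret generic nginx-password --from-file=password.txt

# Values stay hidden -- describe only shows key names & sizes
kubectl describe secret nginx-password
```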

Vi pod.yml:
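A minimal pod.yml shape for this kind of lab, assuming the secret gets mounted as a volume (secret and mount names are illustrative):

```yaml
# pod.yml (sketch) -- NGINX pod that mounts the password secret
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - name: nginx
      image: nginx:latest
      volumeMounts:
        - name: password-vol
          mountPath: /etc/nginx/passwords
          readOnly: true
  volumes:
    - name: password-vol
      secret:
        secretName: nginx-password
```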

Kubectl exec -- curl -u user:<PASSWORD> <IP_ADDRESS>:

Be Like 2 Kubernetes in a Pod

Alright alright alright…. lets create a lil baby pod & eventually create an entire Kubernetes application!!

  • Create YAML file w/the pod details for the nginx pod
  • Create the pod…just do it!
  • SSH!!

Vi nginx.yml:

Kubectl create -f ~/nginx.yml:

  • Create the pod bro

kubectl get pods -n web:

  • Double check the pod is created dude

kubectl describe pod nginx -n web:

  • Looooook at daaa deeeeetaillllllllzzzuhhh

Falco to Detect Threats on Containers in Kubernetes!

Falco Lombardi is… ahem.. Falco is able to detect any shady stuff going on in your Kubernetes environment in no time.

  • Create a Falco Rules File to Scan the Container
  • Run Falco to Obtain a Report of ALL the Activity
  • Create a rule to scan the container (basically this script's rule flags shady activity)
  • Run Falco for up to a minute & see if anything is detected
    • -r = rules file to load
    • -M = max run time (seconds)
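Falco rules are plain YAML. A hedged sketch of the kind of rule this lab means (the rule text is illustrative, but -r and -M are real Falco flags, and shell-in-container is a classic Falco detection):

```yaml
# falco_rules.yaml (sketch) -- flag shells spawned inside containers
- rule: Shell Spawned in Container
  desc: Detect a shell process starting inside any container
  condition: container.id != host and proc.name in (bash, sh, zsh)
  output: "Shell spawned in container (user=%user.name container=%container.name cmd=%proc.cmdline)"
  priority: WARNING
```

Then run it for up to a minute: `falco -r falco_rules.yaml -M 60`.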

Kubernetes Cluster & Terraform

Goal:

Let's see if I can deploy a web app to my EKS cluster w/Terraform. After the EKS cluster is deployed w/Terraform I'll provision the cluster, run Node.js, & use MongoDB as the backend DB.

Basically it goes like this:

  • Web Browser – – – EKS Cluster – – – Public Endpoint
  • Namespace – – – Node.js – – – Docker Image Repository – – – MongoDB

Lessons Learned:

  • Deploy EKS Cluster w/Terraform:
  • Complete Terraform Configuration:
  • Deploy Web App w/Terraform:
  • Scale Kubernetes Web App:

Deploy EKS Cluster w/Terraform:

  • Cloud User – – – Security Credz – – – Access Keys
  • Add key details in CLI

Couple Commands to Leverage for Sanity Check:

  • LS files
  • Unzip
  • LS
  • CD
  • LS
    • Now can see all TF files

Terraform – init, fmt, apply:

Complete Terraform Configuration:

Double Check its Running:

Couple Commands:

Vim modules/pac-man/pac-man-deployment.tf:

Vim pac-man.tf:

Terraform – Fmt, Init, & Apply:

Deploy Web App w/Terraform:

Scale Kubernetes Web App:

Change Deployment Files

  • MongoDB = 2
  • Pacman Pods = 3
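The scaling change is just a replica bump in the deployment files. Roughly (resource name and surrounding structure assumed from the module layout above):

```hcl
# modules/pac-man/pac-man-deployment.tf (sketch) -- only the spec block changes
resource "kubernetes_deployment" "pacman" {
  # ...metadata unchanged...
  spec {
    replicas = 3 # Pac-Man pods; set MongoDB's deployment to replicas = 2 the same way
    # ...selector/template unchanged...
  }
}
```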

Double Check Working:

Prometheus 2 the movie, Featuring Kubernetes & Grafana

Goal:

Imma monitor a CI/CD pipeline w/3 tools; wanna see how we use Prometheus to collect the data & Grafana to display it? Our goal is to get some insight on performance dawg!

Lessons Learned:

  • Use Helm to install Grafana
  • Install Prometheus in Kubernetes Cluster
  • Install Grafana in Kubernetes Cluster

Use Helm to install Grafana

SSH into Master Public IP:

Initiate Helm:

Install Prometheus in Kubernetes Cluster

Create Prometheus YAML File:

Install Prometheus:
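The Prometheus install via Helm, roughly. The chart repo is the real community one; the values file name is a placeholder for the YAML created above:

```shell
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

# Values file is the one created in the previous step (name is a placeholder)
helm install prometheus prometheus-community/prometheus -f prometheus-values.yml

kubectl get pods   # prometheus-server & friends should come up
```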

Install Grafana in Kubernetes Cluster

Create Grafana YAML File:

Install Grafana:
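Grafana's install follows the same Helm pattern (real grafana chart repo; the values file name and release name are placeholders):

```shell
helm repo add grafana https://grafana.github.io/helm-charts
helm install grafana grafana/grafana -f grafana-values.yml

# The admin password for the login step lands in a secret
kubectl get secret grafana -o jsonpath='{.data.admin-password}' | base64 --decode
```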

Create Grafana-Extension YAML File:

Log-in to Grafana: