main.tf
provider "aws" {
  region = var.aws_region
}

# Create IAM user
resource "aws_iam_user" "example_user" {
  name = var.user_name
}

# Attach policy to the user
resource "aws_iam_user_policy_attachment" "example_user_policy" {
  user       = aws_iam_user.example_user.name
  policy_arn = var.policy_arn
}

# Create access keys for the user
resource "aws_iam_access_key" "example_user_key" {
  user = aws_iam_user.example_user.name
}

output.tf
output "iam_user_name" {
  value = aws_iam_user.example_user.name
}

output "access_key_id" {
  value = aws_iam_access_key.example_user_key.id
}

output "secret_access_key" {
  value     = aws_iam_access_key.example_user_key.secret
  sensitive = true
}
aws_lambda_function.rds_stop_lambda: This resource defines the Lambda function itself, including its runtime, handler, associated IAM role, and the zipped code. It also passes the RDS_INSTANCE_IDENTIFIER and REGION as environment variables for the Python script.
aws_cloudwatch_event_rule.rds_stop_schedule: This creates a scheduled EventBridge rule using a cron expression. cron(0 0 ? * SUN *) schedules the execution for every Sunday at 00:00 UTC. Adjust this cron expression as needed for your desired 7-day interval and time.
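For reference, here is a minimal sketch of what those two resources could look like in HCL; the function name, zip filename, handler, runtime, IAM role reference, and variable names are assumptions, so swap in your own:

resource "aws_lambda_function" "rds_stop_lambda" {
  function_name = "rds-stop-lambda"            # assumed name
  filename      = "rds_stop.zip"               # assumed path to the zipped code
  handler       = "rds_stop.lambda_handler"    # assumed handler
  runtime       = "python3.9"                  # assumed runtime version
  role          = aws_iam_role.lambda_role.arn # assumed IAM role resource

  environment {
    variables = {
      RDS_INSTANCE_IDENTIFIER = var.rds_instance_identifier
      REGION                  = var.aws_region
    }
  }
}

# Runs every Sunday at 00:00 UTC
resource "aws_cloudwatch_event_rule" "rds_stop_schedule" {
  name                = "rds-stop-schedule"
  schedule_expression = "cron(0 0 ? * SUN *)"
}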
Let’s blend some pimp tools together & launch something into space – cyber space, that is. Below is an example to show how useful it is to understand Terraform state, deploy resources w/Kubernetes, & see how Terraform maintains the state file to track all your changes along w/deploying containers!
Kubernetes is up & running!? Sick! Buuuuuuuuuuuuuuuuuuut, I wanna make some changes – so Imma use Terraform. W/out further ado… let’s get these nodes deployed!
Lessons Learned:
Initially set up a cluster using kubectl
Deployed NGINX nodes using Terraform
As an admin I deployed a NodePort to Kubernetes cluster w/NGINX Nodes
Used Terraform to deploy NodePort & scale NGINX nodes
….DESTROY video boy (…..what is Benchwarmers..)
Initially set up a cluster using kubectl:
Set up the goodies:
Check to see cluster is created & get SSL info for server IP address:
Edit Variables file:
Deployed NGINX nodes using Terraform:
Terraform init & apply:
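Here is a bare-bones sketch of the kind of config that apply runs against; the kubeconfig path, names, & replica count are assumptions:

provider "kubernetes" {
  config_path = "~/.kube/config"   # assumed kubeconfig location
}

resource "kubernetes_deployment" "nginx" {
  metadata {
    name = "nginx-deployment"      # assumed name
  }

  spec {
    replicas = 2                   # assumed starting replica count

    selector {
      match_labels = {
        app = "nginx"
      }
    }

    template {
      metadata {
        labels = {
          app = "nginx"
        }
      }

      spec {
        container {
          name  = "nginx"
          image = "nginx:latest"
        }
      }
    }
  }
}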
As an admin I deployed a NodePort to Kubernetes cluster w/NGINX Nodes:
Get the TF config file:
Used Terraform to deploy NodePort & scale NGINX nodes:
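And a rough sketch of the NodePort piece; the port numbers & names are assumptions, and scaling the NGINX nodes is just bumping replicas in the deployment above & re-running terraform apply:

resource "kubernetes_service" "nginx_nodeport" {
  metadata {
    name = "nginx-nodeport"        # assumed name
  }

  spec {
    type = "NodePort"

    selector = {
      app = "nginx"
    }

    port {
      port        = 80
      target_port = 80
      node_port   = 30080          # assumed NodePort in the 30000-32767 range
    }
  }
}

After an apply, terraform state list shows the deployment & service being tracked in the state file, which is the whole point of watching the state as you change things.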
Let’s see if I can deploy a web app to my EKS cluster w/Terraform. After the EKS cluster is deployed w/Terraform I’ll provision the cluster, run Node.js, & use MongoDB as the backend DB.
Basically it goes like this:
Web Browser – – – EKS Cluster – – – Public Endpoint
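For a rough idea of the cluster piece, here is a minimal aws_eks_cluster sketch; the cluster name, IAM role, & subnet variable are assumptions, and the Node.js/MongoDB pieces get deployed onto the cluster afterwards:

resource "aws_eks_cluster" "web_app" {
  name     = "web-app-cluster"                  # assumed name
  role_arn = aws_iam_role.eks_cluster_role.arn  # assumed IAM role resource

  vpc_config {
    subnet_ids = var.subnet_ids                 # assumed variable w/existing subnet IDs
  }
}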
Okay, we’re not using Xbox controllers… but PS5 controllers! JK.. but what we will mess w/is deploying an EKS cluster to create admission controllers from a Terraform configuration file.
DevSecOps IaC tooling resembles my favorite anime/cartoons –
Dragon Ball Z
Pokemon
X-Men
Avengers
Justice League
& now this is your cue to think of your bestest squaaaaad.
My Goal:
W/that said, why not look at how these dope tools can integrate together!? This post is dedicated to showing how AWS, Ansible, Jenkins, & Terraform can work together.
Lessons Learned (so what had happened was…):
Deploy a distributed multi-region Jenkins CI/CD Pipeline
Include VPC (& of course peering!) along w/gateways, public subnets & security groups
In addition, EC2 instances running Jenkins w/main & worker nodes
Place the Jenkins main node behind an ALB that allows HTTPS traffic w/an SSL certificate from AWS Certificate Manager in a Route 53 public zone
Create Ansible playbooks to install software for Jenkins & apply configurations
So w/out further ado, provide me an applause (I know, so humble) for the next 7-minute read!
Below is a table of contents so you can jump around to the key places you fancy.
S3 bucket names are global, so don’t copy-pasta my bucket or you will get an error
The bucket name can be between 3 and 63 characters long, and can contain only lower-case characters, numbers, periods, and dashes.
Vim Backend.tf
This step showed how to tie AWS & Terraform together in a quick script; screenshots below
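Here is roughly what Backend.tf holds; the bucket name, key, & region are placeholders (remember, bucket names are global):

terraform {
  backend "s3" {
    bucket = "my-unique-terraform-state-bucket"  # placeholder, must be globally unique
    key    = "jenkins/terraform.tfstate"         # assumed state file path
    region = "us-east-1"                         # assumed region
  }
}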
Vim Providers.tf & Variables.tf in Terraform:
Created 2 files that will be the key/foundation to the rest of the infrastructure built & referenced. This is the source code used to manage Terraform resources:
The first file/variable defines the regions the EC2 instances are deployed in
The second file sets each provider’s region.
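Roughly, the two files look something like this; the variable names & default regions are assumptions, and the provider aliases line up w/what the instance data sources use later:

Providers.tf
provider "aws" {
  alias  = "region-master"
  region = var.region_master
}

provider "aws" {
  alias  = "region-worker"
  region = var.region_worker
}

Variables.tf
variable "region_master" {
  type    = string
  default = "us-east-1"    # assumed master region
}

variable "region_worker" {
  type    = string
  default = "us-west-2"    # assumed worker region
}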
Network Deployment – VPC, Subnets, Security Groups, & Internet Gateways:
Goal is to create:
Environment w/VPC, internet gateway, & 2 public subnets
Environment w/VPC, internet gateway, & 1 public subnet
Lessons Learned:
vim networks.tf
terraform fmt
terraform validate
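For context, a trimmed sketch of what networks.tf is building; the CIDR blocks, AZs, & resource names are assumptions:

#Master region - VPC, IGW, & 2 public subnets
resource "aws_vpc" "vpc_master" {
  provider             = aws.region-master
  cidr_block           = "10.0.0.0/16"   # assumed CIDR
  enable_dns_support   = true
  enable_dns_hostnames = true
}

resource "aws_internet_gateway" "igw_master" {
  provider = aws.region-master
  vpc_id   = aws_vpc.vpc_master.id
}

resource "aws_subnet" "subnet_1" {
  provider          = aws.region-master
  vpc_id            = aws_vpc.vpc_master.id
  cidr_block        = "10.0.1.0/24"      # assumed CIDR
  availability_zone = "us-east-1a"       # assumed AZ
}

resource "aws_subnet" "subnet_2" {
  provider          = aws.region-master
  vpc_id            = aws_vpc.vpc_master.id
  cidr_block        = "10.0.2.0/24"      # assumed CIDR
  availability_zone = "us-east-1b"       # assumed AZ
}

#Worker region - VPC, IGW, & 1 public subnet
resource "aws_vpc" "vpc_worker" {
  provider   = aws.region-worker
  cidr_block = "192.168.0.0/16"          # assumed CIDR
}

resource "aws_internet_gateway" "igw_worker" {
  provider = aws.region-worker
  vpc_id   = aws_vpc.vpc_worker.id
}

resource "aws_subnet" "subnet_1_worker" {
  provider   = aws.region-worker
  vpc_id     = aws_vpc.vpc_worker.id
  cidr_block = "192.168.1.0/24"          # assumed CIDR
}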
Goal is to create:
VPC Peering connection between 2 regions
As well as route tables for each VPC
View the magic in AWS!!
Lessons Learned:
Vim networks.tf
terraform fmt
terraform validate
terraform plan
Terraform Fmt & Validate:
Terraform Plan:
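A trimmed sketch of the peering & routing additions to networks.tf; the CIDR blocks & resource names are assumptions:

#Peering connection request from the master VPC to the worker VPC
resource "aws_vpc_peering_connection" "useast1_uswest2" {
  provider    = aws.region-master
  vpc_id      = aws_vpc.vpc_master.id
  peer_vpc_id = aws_vpc.vpc_worker.id
  peer_region = var.region_worker
}

#Accept the peering request in the worker region
resource "aws_vpc_peering_connection_accepter" "accept_peering" {
  provider                  = aws.region-worker
  vpc_peering_connection_id = aws_vpc_peering_connection.useast1_uswest2.id
  auto_accept               = true
}

#Route from the master VPC to the worker VPC CIDR
resource "aws_route" "master_to_worker" {
  provider                  = aws.region-master
  route_table_id            = aws_vpc.vpc_master.main_route_table_id
  destination_cidr_block    = "192.168.0.0/16"   # assumed worker CIDR
  vpc_peering_connection_id = aws_vpc_peering_connection.useast1_uswest2.id
}

#Route from the worker VPC back to the master VPC CIDR
resource "aws_route" "worker_to_master" {
  provider                  = aws.region-worker
  route_table_id            = aws_vpc.vpc_worker.main_route_table_id
  destination_cidr_block    = "10.0.0.0/16"      # assumed master CIDR
  vpc_peering_connection_id = aws_vpc_peering_connection.useast1_uswest2.id
}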
Check the AWS account to see Terraform communicating w/AWS #maaaaaaagic
Goal is to create:
Deploy Security Groups w/ALB communicating w/Jenkins Master & Worker
Lessons Learned:
Vim Security_groups.tf
Vim variables.tf
Terraform plan
Terraform apply
Vim security_groups.tf:
Vim Variables.tf:
Added Jenkins worker variable
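A rough sketch of what security_groups.tf & the variables addition could look like; the ports, names, & worker-count variable are assumptions (egress rules trimmed for brevity):

security_groups.tf
resource "aws_security_group" "lb_sg" {
  provider    = aws.region-master
  name        = "lb-sg"
  description = "Allow 443 into the ALB"          # assumed rule set
  vpc_id      = aws_vpc.vpc_master.id

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_security_group" "jenkins_sg" {
  provider    = aws.region-master
  name        = "jenkins-sg"
  description = "Allow the ALB to reach Jenkins"  # assumed rule set
  vpc_id      = aws_vpc.vpc_master.id

  #Jenkins web traffic only from the ALB
  ingress {
    from_port       = 8080
    to_port         = 8080
    protocol        = "tcp"
    security_groups = [aws_security_group.lb_sg.id]
  }
}

variables.tf addition
variable "workers_count" {
  type    = number
  default = 1    # assumed Jenkins worker count
}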
Terraform Plan:
Terraform Apply:
VM Deployment – AMIs, Key Pairs, & Jenkins:
Goal is to create:
Deploy the application nodes for the Jenkins application by fetching AMI IDs
Data source (SSM Parameter Store) to fetch the AMI IDs
Lessons Learned:
Terraform Data Source for SSM Parameter
SSM Parameter Store – Parameter for Public AMI IDs
Terraform SSM Data Source Returns AMI ID
Vim Instances.tf
#Get Linux AMI ID using SSM Parameter endpoint in us-east-1
data "aws_ssm_parameter" "linuxAmi" {
  provider = aws.region-master
  name     = "/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2"
}

#Get Linux AMI ID using SSM Parameter endpoint in us-west-2
data "aws_ssm_parameter" "linuxAmiOregon" {
  provider = aws.region-worker
  name     = "/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2"
}
Terraform Init & fmt & validate:
Terraform Plan:
Vim Backend.tf:
Goal is to create:
Deploying EC2 key pairs into Jenkins EC2 instance to permit SSH access
Lessons Learned:
Create SSH-key gen private/public key
Edit script to incorporate key-pairs for both regions
SSH:
Vim instances.tf
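A sketch of the key-pair additions for both regions; the key name & public-key path are assumptions based on the ssh-keygen step:

resource "aws_key_pair" "master_key" {
  provider   = aws.region-master
  key_name   = "jenkins_key"              # assumed key name
  public_key = file("~/.ssh/id_rsa.pub")  # assumed path from ssh-keygen
}

resource "aws_key_pair" "worker_key" {
  provider   = aws.region-worker
  key_name   = "jenkins_key"
  public_key = file("~/.ssh/id_rsa.pub")
}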
Terraform fmt, validate, plan, & apply:
Goal is to create:
Deploy Jenkins Master & Worker Instances
Lessons Learned:
Created 1 new script (outputs) & edited 2 scripts (instances & variables)
Can connect instances over SSH & IP addresses
Vim instances, variables, & outputs:
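A trimmed sketch of the instance & output additions; the instance type, tags, & output name are assumptions:

#instances.tf - Jenkins master in the master region
resource "aws_instance" "jenkins_master" {
  provider                    = aws.region-master
  ami                         = data.aws_ssm_parameter.linuxAmi.value
  instance_type               = "t3.micro"                         # assumed size
  key_name                    = aws_key_pair.master_key.key_name
  associate_public_ip_address = true
  vpc_security_group_ids      = [aws_security_group.jenkins_sg.id]
  subnet_id                   = aws_subnet.subnet_1.id

  tags = {
    Name = "jenkins_master_tf"
  }
}

#Jenkins worker(s) in the worker region
resource "aws_instance" "jenkins_worker" {
  provider                    = aws.region-worker
  count                       = var.workers_count                  # worker-count variable
  ami                         = data.aws_ssm_parameter.linuxAmiOregon.value
  instance_type               = "t3.micro"                         # assumed size
  key_name                    = aws_key_pair.worker_key.key_name
  associate_public_ip_address = true
  subnet_id                   = aws_subnet.subnet_1_worker.id

  tags = {
    Name = "jenkins_worker_tf"
  }
}

#outputs.tf - handy for grabbing the public IP to SSH into
output "jenkins_master_public_ip" {
  value = aws_instance.jenkins_master.public_ip
}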
Terraform fmt, validate, plan, & apply:
SSH into EC2 Jenkins Master & Worker Nodes:
Terraform Configuration Management w/Ansible:
Goal is to create:
Configure TF provisioners for config mgmt via Ansible
Lessons Learned:
Created new directory to hold 2 new scripts for Jenkins regions
Update script to call Ansible playbook
Mkdir ansible templates:
Vim ansible.cfg:
Mkdir inventory_aws:
wget -c: (might have to re-do)
Vim tf_aws_ec2.yml: (created from above)
pip3 install boto3 --user:
Vim instances.tf:
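A sketch of the kind of provisioner hook added to instances.tf; the extra-vars key & playbook path are assumptions, w/the playbook living in the ansible_templates directory from earlier:

resource "aws_instance" "jenkins_master" {
  # ...arguments from the earlier instances.tf step stay as-is...

  #Call the Ansible playbook against this node once the instance is up
  provisioner "local-exec" {
    command = "ansible-playbook --extra-vars 'passed_in_hosts=tag_Name_${self.tags.Name}' ansible_templates/jenkins-master-sample.yml"
  }
}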
Terraform fmt, validate, plan, & apply:
JQ:
sudo yum install jq
jq
Routing Traffic via ALB to EC2:
Goal is to create:
Create ALB to route traffic to EC2 node
Via Terraform run a web server behind ALB on EC2
Lessons Learned:
Use Ansible playbook on EC2 nodes to run Jenkins application
Create new playbook for ALB
Edit the variables file for port information as well as the security groups file’s ingress rule
Vim alb.tf:
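A trimmed sketch of alb.tf; the names & ports are assumptions (Jenkins assumed on 8080 behind the ALB):

resource "aws_lb" "lb" {
  provider           = aws.region-master
  name               = "jenkins-lb"            # assumed name
  internal           = false
  load_balancer_type = "application"
  security_groups    = [aws_security_group.lb_sg.id]
  subnets            = [aws_subnet.subnet_1.id, aws_subnet.subnet_2.id]
}

resource "aws_lb_target_group" "app_lb_tg" {
  provider = aws.region-master
  name     = "app-lb-tg"
  port     = 8080                               # assumed Jenkins port
  protocol = "HTTP"
  vpc_id   = aws_vpc.vpc_master.id
}

resource "aws_lb_listener" "jenkins_listener" {
  provider          = aws.region-master
  load_balancer_arn = aws_lb.lb.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.app_lb_tg.arn
  }
}

resource "aws_lb_target_group_attachment" "jenkins_master_attach" {
  provider         = aws.region-master
  target_group_arn = aws_lb_target_group.app_lb_tg.arn
  target_id        = aws_instance.jenkins_master.id
  port             = 8080
}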
Vim variables.tf:
Vim security_groups.tf:
Vim outputs.tf:
Vim jenkins-master-sample.yml:
Terraform fmt, validate, plan, & apply:
Route 53 & HTTPs:
Goal is to create:
Create path for user to connect to Jenkins application from Route 53, ALB, & ACM
Lessons Learned:
Create AWS Route 53 & generate SSL certificate
Connect a public hosted zone pointing to the ALB’s DNS name
Traffic routed to Jenkins EC2 application
Vim variables.tf:
Vim acm.tf:
Vim dns.tf:
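A trimmed sketch of acm.tf & dns.tf; the domain variable, subdomain, & resource names are assumptions:

#acm.tf - request an ACM certificate for the Jenkins hostname
resource "aws_acm_certificate" "jenkins_cert" {
  provider          = aws.region-master
  domain_name       = join(".", ["jenkins", var.dns_name])   # assumed subdomain & variable
  validation_method = "DNS"
}

#dns.tf - look up the existing public hosted zone (assumed variable name)
data "aws_route53_zone" "dns" {
  provider = aws.region-master
  name     = var.dns_name
}

#Alias record pointing the Jenkins hostname at the ALB's DNS name
resource "aws_route53_record" "jenkins" {
  provider = aws.region-master
  zone_id  = data.aws_route53_zone.dns.zone_id
  name     = join(".", ["jenkins", data.aws_route53_zone.dns.name])
  type     = "A"

  alias {
    name                   = aws_lb.lb.dns_name
    zone_id                = aws_lb.lb.zone_id
    evaluate_target_health = true
  }
}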
Vim alb.tf:
Terraform fmt, validate, plan, & apply:
Ansible Playbooks:
Goal is to create:
Build the Ansible playbooks w/tasks for installing Jenkins Master/Worker
Inspiration is clutch & I received it for starting this bad boy, so why not dedicate the first post to how I Frankensteined (woah – I created a blog, a blog post, & a past-tense verb all in one) it together?
My Goal:
Was to create a blog & WordPress site – I then had a brain blast (Cue Jimmy Neutron): what if I did this through some form of IaC? So I tried the basic goodies, you know:
Terraform
Ansible
Docker
AWS
ChatGPT
WUT!?
Click-Opps
Back-pocketed that for last on the learning journey
All were fun to mess w/& see where I got stuck quicker than others while debugging some of the code. However, this post follows the AWS option & I see joy in posting the other journeys later, but for now let’s not see double & jerk that pistol & go to work (name that movie).
Lessons Learned:
New ways to spend my Bennies ($$$) w/an AWS account, ayyyy
Create an RDS instance for the MySQL database
Create an EC2 instance for the WordPress application
Install and configure WordPress on EC2
Upload and download files to and from S3
Access your WordPress site from the internet
Step 1: Create an RDS instance for the MySQL Database
Prolly important to have something to store “my precious” (another movie quote) data aka goodiezzzz
Step 2: Create EC2 Instance
I wanted to get virtual & had a plethora of options to configure w/AMI, instance type, storage, tags, key names, security groups, etc.
Oh yeah, I overlooked the key pair part…I didn’t save/remember that information – so I had to re-do this. #DOAHHHHH
Step 3: SSH into EC2
Here was a quick double-check of my work that helped me re-navigate the console to find key information to plug in to my SSH command (yeah, I used PowerShell. Why? Cuz it’s the most powerfullest, duh)
Then after some yum & systemctl – I had an Apache test page… Woah, I know, fancy.
Really had to pay attention to the next handful of commands to download the latest WordPress package, extract it, change ownership w/some chown, & then nano/vi into the configuration file.
Couple Example Below (sparing you all the commands):
Then after copy-pasting the public IP address from AWS I started to click more stuff…
Conclusion:
Just like that it was done & I could check into the blog & AWS to see the specimen…. ANNNNND then I tore it down. Why? Cuz I was intrigued by the other options available & wanted to see the other avenues to create a blog. I don’t have a favorite, but as mentioned above I’ll have posts about how to create a WordPress blog w/the handful of options above. Yeah, even some ChatGPT action, stay tuned.