In providers.tf, add a "remote" backend so you operate against an enhanced/remote backend in HCP Terraform: your state lives in HCP, and run output even streams live to your CLI in VS Code – compared to a standard backend, which just stores state (like an S3 backend).
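A minimal sketch of that providers.tf change, assuming a hypothetical HCP Terraform org and workspace name (swap in your own):

terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "my-hcp-org"        # assumption – your HCP Terraform organization

    workspaces {
      name = "iam-user-demo"           # assumption – your workspace name
    }
  }
}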
TF remote backend magic:
Seeing the TF at work locally in the CLI & live in HCP, woah – magic..
Then jump to the ole’ AWS Console to check your IaC
Alright alright alright, let's destroy in the CLI
Annnnnnnnnnd, once again you can see live “streamin” in HCP
OMG it's gone!!
S3 -> HCP Enhanced/Remote:
Then, once your remote backend is established, you can see the migrated state live in HCP before any Terraform is planned or applied.
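Concretely (bucket name and key below are assumptions), the move is: replace the standard S3 backend block with the remote backend block from providers.tf above, then run terraform init -migrate-state so the existing state lands in HCP before you plan or apply anything new.

# Old standard backend – stores state, nothing more
terraform {
  backend "s3" {
    bucket = "my-tf-state-bucket"               # assumption – your existing state bucket
    key    = "iam-user-demo/terraform.tfstate"  # assumption – your state key
    region = "us-east-1"
  }
}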
main.tf
provider "aws" {
region = var.aws_region
}
# Create IAM user
resource "aws_iam_user" "example_user" {
name = var.user_name
}
# Attach policy to the user
resource "aws_iam_user_policy_attachment" "example_user_policy" {
user = aws_iam_user.example_user.name
policy_arn = var.policy_arn
}
# Create access keys for the user
resource "aws_iam_access_key" "example_user_key" {
user = aws_iam_user.example_user.name
}
output.tf
output "iam_user_name" {
value = aws_iam_user.example_user.name
}
output "access_key_id" {
value = aws_iam_access_key.example_user_key.id
}
output "secret_access_key" {
value = aws_iam_access_key.example_user_key.secret
sensitive = true
}
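Quick note: because secret_access_key is marked sensitive, terraform apply only shows it as (sensitive value); pull it out with terraform output -raw secret_access_key when you actually need it.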
aws_lambda_function.rds_stop_lambda: This resource defines the Lambda function itself, including its runtime, handler, associated IAM role, and the zipped code. It also passes the RDS_INSTANCE_IDENTIFIER and REGION as environment variables for the Python script.
aws_cloudwatch_event_rule.rds_stop_schedule: This creates a scheduled EventBridge rule using a cron expression. cron(0 0 ? * SUN *) schedules the execution for every Sunday at 00:00 UTC. Adjust this cron expression as needed for your desired 7-day interval and time.
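A minimal sketch of those two resources (plus the EventBridge target and Lambda permission they need to actually fire). The IAM role, zip file name, handler, and runtime are assumptions – match them to your own setup:

resource "aws_lambda_function" "rds_stop_lambda" {
  function_name = "rds-stop-lambda"                # assumption – pick your own name
  role          = aws_iam_role.rds_stop_role.arn   # assumption – IAM role defined elsewhere
  runtime       = "python3.12"                     # assumption – your Python runtime
  handler       = "rds_stop.lambda_handler"        # assumption – your handler
  filename      = "rds_stop.zip"                   # assumption – the zipped code

  environment {
    variables = {
      RDS_INSTANCE_IDENTIFIER = var.rds_instance_identifier
      REGION                  = var.aws_region
    }
  }
}

# Every Sunday at 00:00 UTC
resource "aws_cloudwatch_event_rule" "rds_stop_schedule" {
  name                = "rds-stop-schedule"
  schedule_expression = "cron(0 0 ? * SUN *)"
}

# Point the rule at the Lambda...
resource "aws_cloudwatch_event_target" "rds_stop_target" {
  rule = aws_cloudwatch_event_rule.rds_stop_schedule.name
  arn  = aws_lambda_function.rds_stop_lambda.arn
}

# ...and let EventBridge invoke it
resource "aws_lambda_permission" "allow_eventbridge" {
  statement_id  = "AllowExecutionFromEventBridge"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.rds_stop_lambda.function_name
  principal     = "events.amazonaws.com"
  source_arn    = aws_cloudwatch_event_rule.rds_stop_schedule.arn
}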
Kubernetes is up & running!? Sick! Buuuuuuuuuuuuuuuuuuut, I wanna make some changes – so Imma use Terraform. W/out further ado… let's get these nodes deployed!
Lessons Learned:
Initially set up a cluster using kubectl
Deployed NGINX nodes using Terraform
As an admin I deployed a NodePort to the Kubernetes cluster w/NGINX Nodes
Used Terraform to deploy NodePort & scale NGINX nodes
…DESTROY video boy (…what is Benchwarmers..)
Initially set up a cluster using kubectl:
Set up the goodies:
Check that the cluster is created & get the SSL info and server IP address:
Edit Variables file:
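Roughly, the variables file just holds the cluster connection bits grabbed in the previous step, and the kubernetes provider consumes them. A sketch with assumed variable names (match them to your own variables.tf):

variable "host" {
  description = "Kubernetes API server endpoint (https://<server-ip>:6443)"
  type        = string
}

variable "cluster_ca_certificate" {
  description = "Base64-encoded cluster CA cert from the kubeconfig"
  type        = string
}

variable "client_certificate" {
  description = "Base64-encoded client cert for the admin user"
  type        = string
}

variable "client_key" {
  description = "Base64-encoded client key for the admin user"
  type        = string
}

provider "kubernetes" {
  host                   = var.host
  cluster_ca_certificate = base64decode(var.cluster_ca_certificate)
  client_certificate     = base64decode(var.client_certificate)
  client_key             = base64decode(var.client_key)
}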
Deployed NGINX nodes using Terraform:
Terraform init & apply:
As an admin I deployed a NodePort to the Kubernetes cluster w/NGINX Nodes:
Get the TF config file:
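The TF config for this step boils down to a kubernetes_service of type NodePort in front of the NGINX pods – a minimal sketch (port numbers are assumptions):

resource "kubernetes_service" "nginx_nodeport" {
  metadata {
    name = "nginx-nodeport"
  }

  spec {
    type     = "NodePort"
    selector = { app = "nginx" }

    port {
      port        = 80
      target_port = 80
      node_port   = 30080   # assumption – any port in the 30000-32767 range
    }
  }
}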
Used Terraform to deploy NodePort & scale NGINX nodes:
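Scaling is then just a variable change – a sketch with an assumed variable name:

variable "nginx_replicas" {
  description = "How many NGINX pods to run"
  type        = number
  default     = 2
}

# ...and in the deployment spec:
#   replicas = var.nginx_replicas
# Bump the number (or pass -var="nginx_replicas=5") and terraform apply handles the scaling.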