Dude, where is my Helm Chart?

Goal:

Scenario:

  • Uhhhhh dude, where's my car? REMIX!
  • Uhhhhh dude, where's my chart? But I have a Kubernetes deployment & I just want to convert it to a Helm chart! Wait, you can do that? TEACH ME!

You right now:

Golly, it'd be nice to have a chart right now… It'd also be really nice to know how to convert a Kubernetes deployment into a Helm chart… Sooooooo, let's use what we've got & convert this bad boiiiiii into a ….. HELM CHART (mic drop).

TLDR:

  • Basically, your app is already in prod w/a manifest; convert it to a Helm chart so the Kubernetes resources are released from templates driven by a values file

Lessons Learned:

  • Convert Service Manifest into a Service Template in a New Helm Chart
  • Convert Application Manifest into a Deployment Template in a New Helm Chart
  • Check the Manifests & Deploy NodePort Application

Convert Service Manifest into a Service Template in a New Helm Chart:

Make directories & YAML files:
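
For reference, here's a minimal sketch of that scaffolding (the chart name pacman-chart & the layout are assumptions, not the exact lab values):

    mkdir -p pacman-chart/templates
    cd pacman-chart

    # Chart.yaml describes the chart itself
    cat > Chart.yaml <<'EOF'
    apiVersion: v2
    name: pacman-chart
    description: Helm chart converted from existing Kubernetes manifests
    version: 0.1.0
    EOF

    # values.yaml will hold the knobs the templates reference
    touch values.yaml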

Copy yaml file, update service file, & run Helm:
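
Roughly what that looks like, assuming the existing manifest is service.yaml & the app is a NodePort service named pacman (all hypothetical names):

    # Copy the existing service manifest into the chart, then swap the
    # hard-coded values for references into values.yaml
    cp ../service.yaml templates/service.yaml

    cat > templates/service.yaml <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: {{ .Values.service.name }}
    spec:
      type: {{ .Values.service.type }}
      selector:
        app: {{ .Values.app.name }}
      ports:
        - port: {{ .Values.service.port }}
          nodePort: {{ .Values.service.nodePort }}
    EOF

    # Render locally to confirm the template expands cleanly
    helm template .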

Convert Application Manifest into a Deployment Template in a New Helm Chart:

Edit values.yaml & copy application.yaml to edit:
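
A hedged sketch of the values file & the deployment template edit (names, image, & counts are placeholders):

    cat > values.yaml <<'EOF'
    app:
      name: pacman
      image: example/pacman:latest   # placeholder image
      replicas: 1
    service:
      name: pacman
      type: NodePort
      port: 80
      nodePort: 30080
    EOF

    cp ../application.yaml templates/deployment.yaml
    # ...then swap the hard-coded fields for {{ .Values.app.* }} references,
    # e.g. replicas: {{ .Values.app.replicas }} & image: {{ .Values.app.image }}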

Check the Manifests & Deploy NodePort Application:

Run helm install & deploy, get pod/svc details:
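
Something along these lines (the release name pacman is assumed):

    helm lint .                       # sanity-check the chart first
    helm install pacman . --dry-run   # render the manifests w/o deploying
    helm install pacman .             # the real deal
    kubectl get pods                  # pod details
    kubectl get svc                   # service details, incl. the NodePort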

Kubernetes Cluster & Terraform

Goal:

Let's see if I can deploy a web app to my EKS cluster w/Terraform. After the EKS cluster is deployed w/Terraform, I'll provision the cluster, run Node.js, & use MongoDB as the backend DB.

Basically it goes like this:

  • Web Browser – – – EKS Cluster – – – Public Endpoint
  • Namespace – – – Node.js – – – Docker Image Repository – – – MongoDB

Lessons Learned:

  • Deploy EKS Cluster w/Terraform
  • Complete Terraform Configuration
  • Deploy Web App w/Terraform
  • Scale Kubernetes Web App

Deploy EKS Cluster w/Terraform:

  • Cloud User – – – Security Credz – – – Access Keys
  • Add key details in CLI
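
i.e., feed those access keys to the AWS CLI (values are obviously placeholders):

    aws configure
    # AWS Access Key ID [None]: <your-access-key-id>
    # AWS Secret Access Key [None]: <your-secret-access-key>
    # Default region name [None]: us-east-1   # whichever region the lab uses
    # Default output format [None]: json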

Couple Commands to Leverage for Sanity Check:

  • ls
  • unzip
  • ls
  • cd
  • ls
    • Now you can see all the TF files

Terraform – init, fmt, apply:
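
The usual incantation:

    terraform init    # download providers & set up state
    terraform fmt     # normalize formatting across the .tf files
    terraform apply   # review the plan, then type 'yes' to build the cluster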

Complete Terraform Configuration:

Double Check It's Running:

Couple Commands:

Vim modules/pac-man/pac-man-deployment.tf:
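
A minimal sketch of what a deployment resource in that module might look like w/the Terraform Kubernetes provider (resource names, variables, & labels are assumptions):

    cat > modules/pac-man/pac-man-deployment.tf <<'EOF'
    resource "kubernetes_deployment" "pacman" {
      metadata {
        name      = "pacman"
        namespace = var.namespace # hypothetical module variable
      }
      spec {
        replicas = 3
        selector {
          match_labels = { app = "pacman" }
        }
        template {
          metadata {
            labels = { app = "pacman" }
          }
          spec {
            container {
              name  = "pacman"
              image = var.pacman_image # hypothetical, set in pac-man.tf
            }
          }
        }
      }
    }
    EOF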

Vim pac-man.tf:

Terraform – Fmt, Init, & Apply:

Deploy Web App w/Terraform:

Scale Kubernetes Web App:

Change Deployment Files:

  • MongoDB = 2
  • Pacman Pods = 3
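
In Terraform terms that's just a replica bump & a re-apply, roughly:

    # e.g. in the pac-man deployment file:  replicas = 3
    # & in the MongoDB deployment file:     replicas = 2
    terraform fmt
    terraform apply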

Double Check Working:
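
Quick check (the namespace is an assumption):

    kubectl get pods -n pacman          # expect 3 pacman pods & 2 mongodb pods
    kubectl get deployments -n pacman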

Canary in Coal Mine to find Kubernetes & Jenkins

Goal:

Our coal mine (CI/CD pipeline) is struggling, so let's use canary deployments to monitor a Kubernetes cluster under a Jenkins pipeline. Alright, let's level-set here…

  • You got a Kubernetes cluster, mmmmkay?
  • A pipeline from Jenkins leads to CI/CD deployments, yeah?
  • Now we must add the deetz (details) to get canary to deploy

Lessons Learned:

  • Run Deployment in Jenkins
  • Add Canary to Pipeline to run Deployment

Run Deployment in Jenkins:

Source Code:

  • Create fork & update username

Set up Jenkins (GitHub access token, Docker Hub, & KubeConfig):

Jenkins:

  • Credz
    • GitHub username & password (access token)

GitHub:

  • Generate access token

DockerHub:

  • DockerHub does not generate access tokens

Kubernetes:

Add Canary to Pipeline to run Deployment:

Create Jenkins Project:

  • Multibranch Pipeline
  • GitHub username
  • Owner & forked repository
    • Provided an option for URL; select deprecated visualization
  • Check it out homie!

Canary Template:

  • We have prod, but need Canary features for stages in our deployment! (see the sketch after this list)
  • Pay Attention:
    • track
    • spec
    • selector
    • port
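
A hedged sketch of a canary manifest sitting next to prod – the whole trick is the extra track label the service selector can key on (file name, app name, image, & port are placeholders, not the lab's files):

    cat > canary-deployment.yml <<'EOF'
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp-canary
    spec:
      replicas: 1                 # keep the canary footprint small
      selector:
        matchLabels:
          app: myapp
          track: canary           # <-- the "track" to pay attention to
      template:
        metadata:
          labels:
            app: myapp
            track: canary
        spec:
          containers:
            - name: myapp
              image: example/myapp:latest   # placeholder image
              ports:
                - containerPort: 8080       # the port the service targets
    EOF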

Add Jenkinsfile to Canary Stage:

  • Between Docker Push & DeployToProduction
    • We add the CanaryDeployment stage! (sketch below)
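
A sketch of roughly what that stage could look like, shown here as a snippet to splice into the Jenkinsfile (stage name, credential ID, & manifest file are assumptions; kubernetesDeploy comes from the Kubernetes Continuous Deploy plugin):

    # not a standalone script – just printing the Groovy stage to splice in
    cat <<'EOF'
    stage('CanaryDeploy') {
        when { branch 'master' }
        steps {
            kubernetesDeploy(
                kubeconfigId: 'kubeconfig',        // assumed credential ID
                configs: 'canary-deployment.yml'   // the canary manifest above
            )
        }
    }
    EOF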

Modify Production's Deployment Stage:

EXECUTE!!

Grab the Network wheel, our SGs & NACLs are 2-trackin!

Goal:

Uhh-ohh, we let the newbie drive & we're off the road… let's take a peek under the hood & see why we can't connect to the internet. This post shares an order of operations to follow when you don't know why an instance isn't connecting to the internet.

Lessons Learned:

  • Determine why an instance can't connect to the internet
  • ID issues preventing instances from connecting to the internet
  • Important Notes:
    • We have 3 VPCs w/SSH connections & NACLs configured through route tables
    • Instances 1 & 2 have a connection to the internet & are a-okay…
    • Instance 3 is not connected to the internet, so we outtah' figure out the problem.

Order of Operations:

  • Instance
  • Security Group
  • Subnet
  • NACL
  • Route table
  • Internet gateway

Solution:

  • Instance
    • No public IP address
  • NACL
    • Deny rules for inbound & outbound that prevent all pinging & traffic to the instance
  • Route Table
    • Did not have route to internet gateway

Determine why the instance can't connect to the internet:

Instance:

  • Start w/Networking & Manage IP addresses
    • See: no public IP address below in the screenshot
  • Wham bam thank ya mam! Fixed!… Wait, it isn’t?

Security Group:

  • Can we ping the instance?
  • Remember when looking at rules: just cuz it says private doesn't mean it is! So check the inbound/outbound rule details

PING!

  • Nothing. Okay, I reckon we keep lookin'..

Subnet:

  • Look at the private IP address & then the VPC
    • Specifically, under subnets pay attention to the VPC ID
  • Looks okay so far, keep on keepin' on!

NACLs:

  • We found the issue!! The NACL rules deny all inbound/outbound traffic into the instance!
    • Even tho the security group does allow traffic, remember the order of operations from in-to-out..

PING!!

  • Still nothing, hmm..

Route Table:

  • Ah-ha! We found the issue…again!
    • There is no route to the internet gateway

ID issues preventing instances from connecting to the internet:

Instance:

  • Allocate an Elastic IP Address, not a public one!!
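
Via the CLI that'd be roughly (the instance ID is a placeholder):

    aws ec2 allocate-address --domain vpc
    aws ec2 associate-address --instance-id i-0123456789abcdef0 \
        --allocation-id eipalloc-0123456789abcdef0   # from the allocate output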

NACLs:

  • The options we have are:
    • Change the NACL security rules
    • Get a different NACL w/proper rules in it
      • In prod… don't do this cuz it can affect all the subnets inside of it.
  • Under public-subnet4 (which was the original VPC ID we had for instance 3), select edit network ACL association, & change the NACL to the public-subnet3 one

Route Tables:

  • The options we have are:
    • Add a route to the table that allows traffic to flow from the subnet to the internet gateway (see the sketch after this list)
      • Remember, in other environments there may be others using this route table to permit only private access, so do not modify it.
    • Select a route table that has the appropriate entries
  • Here we edit the route table association & then notice the difference in the route table permitting connection/traffic
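
The missing route itself is a one-liner, roughly (IDs are placeholders):

    aws ec2 create-route --route-table-id rtb-0123456789abcdef0 \
        --destination-cidr-block 0.0.0.0/0 \
        --gateway-id igw-0123456789abcdef0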

Ping!

  • YEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEET!!
  • Now if you desired you can SSH into the instance

Lamb<da>s in the AMAZON!? (..SQS)

Goal:

W/magic I will make this message appear!… or just use a Lambda function that is triggered by SQS & inputs the data into a DB.

Lessons Learned:

  • Create Lambda function
  • Create SQS trigger
  • Copy source code into Lambda function
  • Go to console for the EC2 & test the script
  • Double check messages were placed into the DB

Create Lambda function:

  • 3 minor details to utilize:
    • Name = SQS DynamoDB
    • Use = Python 3.x
    • Role = lambda-execution-role
  • Alright, whew – now that's over w/it….

Create SQS trigger:

  • Are you triggered bro? Hopefully “SQS” & “Messages” trigger you…
    • Important note – create an SQS message first, so when creating the trigger you can snag that message created in SQS
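
e.g., drop a test message on the queue first (the queue URL is a placeholder):

    aws sqs send-message \
        --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/my-queue \
        --message-body '{"hello": "from SQS"}'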

Copy source code into Lambda function:

  • Copy-n-pasta into the lambda_function.py…. now destroy .. ahem, DEPLOY HIM!!
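
The gist of the handler, as a hedged sketch rather than the lab's exact source – read each SQS record & put it into DynamoDB (the table name is an assumption):

    cat > lambda_function.py <<'EOF'
    import boto3

    dynamodb = boto3.resource("dynamodb")
    table = dynamodb.Table("Messages")  # assumed table name

    def lambda_handler(event, context):
        # SQS triggers deliver a batch of records; write each one to the table
        for record in event["Records"]:
            table.put_item(Item={
                "id": record["messageId"],
                "body": record["body"],
            })
        return {"statusCode": 200}
    EOF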

Go to console for the EC2 & test the script:

  • Sign your life away & see what the damage is! (aka: go to your EC2 instance)

Double check messages were placed into the DB:

  • After you checked EC2, let's double… quadruple-check? You checked it 1x, so you're checking 2x? Or is it multiples of 4?.. idk; regardless, you can look at your DB to see if you have a message from Lambda. Have at it.
    • Below is what SQS & DynamoDB prolly look like

Wanna Monitor a CloudFormation stack w/AWS Config?

Goal:

Let's see how to use AWS Config to monitor whether launched EC2 instances comply w/the instance types specified in AWS Config

Lessons Learned:

  • Create AWS Config Rule
  • Make EC2 instance compliant w/config rule

Create AWS Config Rule:

  • You will see a couple json files, grab the 2nd one “badSG”
  • Create a key-pair
  • Example of the issue in the CloudFormation Stack
  • Here you can see we only say "SecurityGroups" – – – not "SecurityGroupIds".
    • Easy fix, once you find it in the documentation.
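
For reference, a hedged sketch of the corrected property on AWS::EC2::Instance (the rest of the template trimmed; the SG ID is a placeholder):

    cat <<'EOF'
    MyInstance:
      Type: AWS::EC2::Instance
      Properties:
        InstanceType: t3.micro          # placeholder
        SecurityGroupIds:               # in a VPC, IDs go here, not SecurityGroups
          - sg-0123456789abcdef0
    EOF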

Create new stack for updated SG:

  • Go ahead & paste the 3rd json file in the "Infrastructure Composer" under CloudFormation
  • Like before, go get your subnet, SG, & VPC IDs

Make EC2 instance compliant w/config rule:

  • Snag the 1st json file in the CloudFormation GitHub link
  • Go to AWS Config
  • Now create a new stack for config record
  • Now your stack is created – wow.
  • Jump back to AWS Config to see your rules – are you compliant?
    • If not, re-upload your CloudFormation template depending on what AWS Config found
      • Example
        • EC2 instance non-compliant
  • Now what? Well, delete whatever is not in use. OR don't & watch your bills pile up!

Play Detective w/CloudFormation

Goal:

Configuration drift is like poetry, & everyone hates poetry… CloudFormation can assist in bringing the stack back in sync w/the original template after IDing the drift.

Lessons Learned:

  • Create CloudFormation Stack
  • Terminate an EC2 instance for stack drift
  • Eliminate drift from stack

Create Key Pair:

  • Before you get into the house, gotta have keys right?!

Create CloudFormation Stack:

  • I think what AWS has in the "Infrastructure Composer" is sick; both the "canvas" & "template" options are so slick, & toggling between "YAML" & "JSON" is epic!
  • After the template is created, go ahead & select your VPC as well as subnet of choice
  • Tahhhhh DAhhhhhhhhhhhhhhhhhh!!!!

Terminate an EC2 instance for stack drift:

  • Annnnnd now it's time to run some EVILLL experiments, muuhh-hahahaha… ahemm..
    • Go to your EC2 instances
  • Change instance 3 security groups
  • Delete/Terminate instance 1!!
  • Now edit your security group inbound rules
    • Add HTTP & HTTPs
  • Go to S3
  • Detect drift on CloudFormation stack
  • You can see the details of your drift detection & compare the before/after
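
The CLI route, if you'd rather not click around (the stack name is a placeholder):

    aws cloudformation detect-stack-drift --stack-name my-stack
    # poll w/the detection ID the call above returns
    aws cloudformation describe-stack-drift-detection-status \
        --stack-drift-detection-id <detection-id-from-above>
    aws cloudformation describe-stack-resource-drifts --stack-name my-stack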

Terminate Drift on Individual Resource:

  • Put the “afterdriftdetection” file in & prepare for re-upload

Update Stack to Eliminate Drift:

  • For giggles, you can manually re-add the security group & re-enable the S3 static web hosting… OR just upload the other file & see the magic happen.
    • Cuz as seen above, AWS tells you the difference for the drift, & w/that code you can re-update the file for re-upload. #ohhhyeaaaaah
  • Don't forget to delete your stack if you're done, orrrr it will stay there – – – … 4Evahhhh

Gettin’ cheaper infrastructure w/CloudFormation

Goal:

GREAT-SCOTT! One just realized our EC2 instance has more compute power than required, & that's not all! Plus we're spending wayyy too much chedahhhhhhhh (we want to save for other goodies – like Pokemon cards & new fancy coffee mugs.. just a thought)

Lessons Learned:

  • Configure InstanceType Parameter to “t3.micro”
  • Launch Updated stack & ensure EC2 can connect

The Appetizer before configuring “t3.micro” & Updating the stack:

Configure InstanceType Parameter to “t3.micro”:

  • After maneuvering to your CloudFormation stack & selecting update, take a peek at the template as seen below.
    • Don't fret, all these lines can be leveraged from the link above in the GitHub repository.
  • Screenshot below shows the "Default: t3.small" that requires an update (sketch after this list)
  • This is a perty-neat feature I thunk you would find dope: instead of lines of code, you can mold your own visual CloudFormation by selections on the side.
    • OR you can just see how each AWS service connects to one another.
  • After you make the minor edit for the EC2 size, select validate
  • Once that is complete, your screen will look like this below
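
The edit itself is one line in the Parameters section, roughly (AllowedValues is an assumed guard-rail, not necessarily in the lab template):

    cat <<'EOF'
    Parameters:
      InstanceType:
        Type: String
        Default: t3.micro   # was t3.small – this is the whole money-saving edit
        AllowedValues:
          - t3.micro
          - t3.small
    EOF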

Launch Updated stack & ensure EC2 can connect:

  • Cue Jeopardy theme song…
    • After a couple minutes you will see updates to your template
  • Scroll down to find your instance ID to see if your instance update is complete
  • SEEE!??!
  • Wanna double check? Go to outputs & lastly snag your Public IP address

Stacks on Stacks of Docker Swarmzzz

Goal:

  • Migrate my plethora of Docker Containers w/Docker SWARRRRRRM

Lessons Learned:

  • Set up Swarm cluster w/manager & worker nodes
  • Test cluster

Initialize the SWARRRM:

  • Connect w/command:
    • SSH into the public IP address
  • Begin to conduct the swarm w/command:
    • docker swarm init \
  • Establish the private IP address w/the flag:
    • --advertise-addr
  • BOOOOM, now you're an assistant-to-the-regional-manager!
  • Now you receive a command to place in your worker node – you did create a worker node…right?
  • Once your worker node is connected, quickly see your list of nodes w/command:
    • docker node ls
  • Now create the Nginx service for the swarm w/the command in the sketch after this list
  • To quickly see your list of services, use the command:
    • docker service ls
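
Pulling those pieces together, roughly (IPs, user, & service details are placeholders):

    ssh cloud_user@<manager-public-ip>   # placeholder user/host

    # initialize the swarm, advertising the manager's private IP
    docker swarm init \
        --advertise-addr <manager-private-ip>

    docker node ls   # manager + any joined workers

    # create an nginx service for the swarm (name/replicas/ports are placeholders)
    docker service create \
        --name nginx \
        --replicas 3 \
        -p 8080:80 \
        nginx

    docker service ls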

Add Worker to Cluster:

  • Connect w/command:
    • SSH into public IP address
  • Add the worker node to the manager node w/the command seen below
    • (see the sketch below for the lengthy command)
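
The join command the manager prints out looks roughly like this (token & IPs are placeholders):

    ssh cloud_user@<worker-public-ip>
    docker swarm join \
        --token <worker-join-token> \
        <manager-private-ip>:2377   # 2377 is the swarm management port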

AWS, Terraform, Ansible & a lil Jenkins – oh my!

  • Dragon Ball Z
  • Pokemon
  • X-Men
  • Avengers
  • Justice League
  • & now this is your cue to think of your bestest squaaaaad.

My Goal:

W/that said, why not look at how these dope tools can integrate together!? This post is dedicated to showing how AWS, Ansible, Jenkins, & Terraform can work together.

Lessons Learned (so what had happened was…):

  • Deploy a distributed multi-region Jenkins CI/CD Pipeline
  • Include VPC (& of course peering!) along w/gateways, public subnets & security groups
  • In addition are EC2 instances that have Jenkins running w/main & worker nodes
    • Place the Jenkins main node behind an ALB that allows HTTPS traffic w/an SSL certificate from AWS Certificate Manager in a Route 53 public zone
  • Create Ansible playbooks to install software for Jenkins & apply configurations
Below is a table of contents so you can jump around to the key places you fancy:
  1. Pre-requisites:
    • Install Terraform, IAM Permissions, Ansible, & AWS CLI
    • Create S3 Bucket, Vim backend.tf, Vim Providers.tf & Variables.tf
  2. Network Deployment – VPC, Subnets, Security Groups, & Internet Gateways:
    • Create environment w/networks.tf file
      • Includes route table, VPC peering, etc
    • Quick view into AWS console to see Terraform magic
    • Create ALB.tf w/Jenkins Master & Worker
    • Created security_groups.tf
    • Created variables.tf w/Jenkins variables
  3. VM Deployment – AMIs, Key Pairs, & Jenkins:
    • Deploy Jenkins to snag AMI IDs from the SSM parameter store
    • Create instances.tf
    • Deploy key pairs into Jenkins to permit SSH access
    • Deploy Jenkins master & worker instances
      • Update instances.tf, variables.tf, & outputs.tf w/IP addresses
    • SSH into EC2 Jenkins Master/Worker nodes
  4. Terraform Configuration Management w/Ansible:
    • Create new directory for Jenkins regions to hold ansible_templates
    • Update ansible.cfg backend file
    • Create inventory_aws directory
    • Update instances.tf
  5. Routing Traffic via ALB to EC2:
    • Update ALB.tf w/a new playbook for ingress rules into the security group & port information
    • Update instances.tf & security_groups.tf w/port information
    • Update output.tf w/DNS
    • Create jenkins-master/worker-sample.yml
  6. Route 53 & HTTPS:
    • Create path for user to connect to application from Route 53, ALB, & ACM
    • Create acm.tf for certification requests to be validated via DNS route 53
  7. Ansible Playbooks:
    • Create playbook w/7 tasks to install Jenkins master/worker
    • Generate SSH key-pair
  8. Jinja2:
    • Build Jinja2 template for Ansible playbook for tasks
  9. Verifying IaC Code & Terraform Apply:
    • Do the thing, terraform fmt, validate, plan, & apply
  10. Conclusion – Summary:

Here are some housekeeping items I addressed before I stood up this environment:

Installed Terraform:

IAM Permissions for Terraform:

  • sudo apt-get -y install python-pip
  • pip3 install awscli --user

Connect Ansible:

AWS CLI:

  • An extensive policy was created & seen here – copy & prepare to pasta!
  • Log in to your AWS Console & either:
    • Create a separate IAM user w/required permissions
    • Create an EC2 (IAM Role) instance profile w/required permissions & attach it to EC2

Create S3 Bucket:

  • ls
  • cd deploy_iac_tf_ansible
  • aws s3api create-bucket --bucket terraformstatebucketwp
  • Important Notes:
    • Remember the region you are in
    • S3 bucket names are global, so don’t copy-pasta my bucket or you will get an error
    • The bucket name can be between 3 and 63 characters long, and can contain only lower-case characters, numbers, periods, and dashes.

Vim Backend.tf:

  • Step showed how to tie AWS & Terraform together in a quick script, screenshots below
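
A hedged sketch of that backend.tf, wired to the bucket created above (the key & region are assumptions):

    cat > backend.tf <<'EOF'
    terraform {
      backend "s3" {
        bucket = "terraformstatebucketwp"   # the bucket from the step above
        key    = "terraform.tfstate"        # assumed state key
        region = "us-east-1"                # whichever region you noted
      }
    }
    EOF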

Vim Providers.tf & Variables.tf in Terraform:

  • Created 2 files that will be the key/foundation to the rest of the infrastructure built & referenced. This is the source code used to manage Terraform resources:
    • The first file holds the variables, e.g. where the EC2 instances are deployed
    • The second file declares the providers & their regions.
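
Roughly, w/the two aliased providers that later files reference as aws.region-master & aws.region-worker (the default regions here are assumptions):

    cat > variables.tf <<'EOF'
    variable "profile" {
      type    = string
      default = "default"
    }
    variable "region-master" {
      type    = string
      default = "us-east-1"
    }
    variable "region-worker" {
      type    = string
      default = "us-west-2"
    }
    EOF

    cat > providers.tf <<'EOF'
    provider "aws" {
      profile = var.profile
      region  = var.region-master
      alias   = "region-master"
    }
    provider "aws" {
      profile = var.profile
      region  = var.region-worker
      alias   = "region-worker"
    }
    EOF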

Goal is to create:

  • Environment w/VPC, internet gateway, & 2 public subnets
  • Environment w/VPC, internet gateway, & 1 public subnet

Lessons Learned:

  • vim networks.tf
  • terraform fmt
  • terraform validate

Goal is to create:

  • VPC Peering connection between 2 regions
  • As well as route tables for each VPC
  • View the magic in AWS!!

Lessons Learned:

  • Vim networks.tf
  • terraform fmt
  • terraform validate
  • terraform plan

Terraform Fmt & Validate:

Terraform Plan:

  • AWS account to see Terraform communicating w/AWS #maaaaaaagic

Goal is to create:

  • Deploy Security Groups w/ALB communicating w/Jenkins Master & Worker

Lessons Learned:

  • Vim Security_groups.tf
  • Vim variables.tf
  • Terraform plan
  • Terraform apply

Vim security_groups.tf:

Vim Variables.tf:

  • Added Jenkins worker variable

Terraform Plan:

Terraform Apply:

Goal is to create:

  • Deploy the Jenkins application nodes, fetching AMI IDs
    • Data source (SSM Parameter Store) returns the AMI IDs

Lessons Learned:

  • Terraform Data Source for SSM Parameter
  • SSM Parameter Store – Parameter for Public AMI IDs
  • Terraform SSM Data Source Returns AMI ID

Vim instances.tf:

    # Get Linux AMI ID using SSM Parameter endpoint in us-east-1
    data "aws_ssm_parameter" "linuxAmi" {
      provider = aws.region-master
      name     = "/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2"
    }

    # Get Linux AMI ID using SSM Parameter endpoint in us-west-2
    data "aws_ssm_parameter" "linuxAmiOregon" {
      provider = aws.region-worker
      name     = "/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2"
    }

Terraform init, fmt, & validate:

Terraform Plan:

Vim Backend.tf:

Goal is to create:

  • Deploy EC2 key pairs into the Jenkins EC2 instances to permit SSH access

Lessons Learned:

  • Create a private/public key pair w/ssh-keygen (see the sketch after this list)
  • Edit the script to incorporate key pairs for both regions
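
The key-gen itself (default path, empty passphrase for the lab):

    ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa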

SSH:

Vim instances.tf

Terraform fmt, validate, plan, & apply:

Goal is to create:

  • Deploy Jenkins Master & Worker Instances

Lessons Learned:

  • Created 1 new script (outputs) & edited 2 scripts (instances & variables)
  • Can connect instances over SSH & IP addresses

Vim instances, variables, & outputs:

Terraform fmt, validate, plan, & apply:

SSH into EC2 Jenkins Master & Worker Nodes:

Goal is to create:

  • Configure TF Provision’s for Config Mgmt via Ansible

Lessons Learned:

  • Created new directory to hold 2 new scripts for Jenkins regions
  • Update script to call Ansible playbook

Mkdir ansible_templates:

Vim ansible.cfg:

Mkdir inventory_aws:

wget -c: (might have to re-do)

Vim tf_aws_ec2.yml: (created from above)

pip3 install boto3 --user:

Vim instances.tf:

Terraform fmt, validate, plan, & apply:

JQ:

  • sudo yum install jq
  • jq

Goal is to create:

  • Create ALB to route traffic to EC2 node
  • Via Terraform run a web server behind ALB on EC2

Lessons Learned:

  • Use Ansible playbook on EC2 nodes to run Jenkins application
    • Create new playbook for ALB
    • Edit the variables file for port information, as well as the security groups ingress rule

Vim alb.tf:

Vim variables.tf:

Vim security_groups.tf:

Vim outputs.tf:

Vim jenkins-master-sample.yml:

Terraform fmt, validate, plan, & apply:

Goal is to create:

  • Create path for user to connect to Jenkins application from Route 53, ALB, & ACM

Lessons Learned:

  • Create AWS Route 53 & generate SSL certificate
  • Connect a public hosted zone pointing to the ALB's DNS
  • Traffic routed to Jenkins EC2 application

Vim variables.tf:

Vim acm.tf:
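
A minimal sketch of what acm.tf might hold – request a cert & validate it over DNS (the domain is a placeholder):

    cat > acm.tf <<'EOF'
    # Request a certificate for the Jenkins endpoint (domain is a placeholder)
    resource "aws_acm_certificate" "jenkins-lb-https" {
      provider          = aws.region-master
      domain_name       = "jenkins.example.com"
      validation_method = "DNS"
    }
    EOF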

Vim dns.tf:

Vim alb.tf:

Terraform fmt, validate, plan, & apply:

Goal is to create:

Build Ansible playbooks w/tasks that install the Jenkins Master/Worker

Lessons Learned:

  • Install dependencies
  • Clone Jenkins files
  • Set up Jenkins repo & GPG key
  • Install Jenkins & ensure it's stopped
  • Delete the default installation & copy the cloned Jenkins files
  • Restore Jenkins files & restart Jenkins
  • Wait for Jenkins to start up before ending the playbook

Vim install_jenkins_master.yml:

ansible-playbook --syntax-check -e "passed_in_hosts=localhost" install_jenkins_master.yml

Lessons Learned:

  • Generate SSH key-pair & add own public key to file
  • Copy Jenkins worker agent XML config file
    • Jinja Template
  • Read the SSH private key & embed it into the Jenkins worker credz XML Jinja template copied over
  • Install dependencies
    • yum
  • Download Jenkins API from Jenkins Master
  • Copy Jenkins auth file
  • Use Jenkins API client to create credz for Jenkins worker & connect to Jenkins Master

Vim install_jenkins_worker.yml (under ansible_templates):

ansible-playbook --syntax-check -e "passed_in_hosts=localhost" install_jenkins_worker.yml

Goal is to create:

  • Build Jinja2 Templates for Ansible Playbooks

Lessons Learned:

  • Leverage Jinja2 from Ansible playbook tasks created

Vim Node.j2:

Vim cred-privkey.j2:

Goal is to create:

  • Go-live & hope it doesn’t break…

Lessons Learned:

  • Ensure all dependencies such as Ansible, Terraform, AWS CLI, boto3, & SSH work!
  • Run that Terraform fmt, validate, plan, & apply!

Vim instances.tf:

Vim variables.tf:

Terraform fmt, validate, plan, & apply:

  • Annnnnnnnnd time. Done. Now we can connect CI/CD pipelines w/distributed jobs.
