Kubernetes Clusters w/EKS are Kewl as (S)hell!

Shells are da bomb, right? Just like in Mario Kart! CloudShell can be dope too for creating a Kubernetes cluster using EKS, so let's party, Mario.

  • Create an EKS cluster in a Region
  • Deploy an Application to Mimic a Real Application
  • Use DNS name of Load Balancer to Test the Cluster

AWS Stuff:

Create EC2:

Download AWS CLI v2, kubectl, & eksctl, & move the files into the right directories:
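If you wanna go terminal-only, a minimal sketch of the installs looks like this (these are the usual published release URLs, double-check the docs for the latest):

    # AWS CLI v2
    curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
    unzip awscliv2.zip && sudo ./aws/install

    # kubectl (latest stable)
    curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
    sudo install -m 0755 kubectl /usr/local/bin/kubectl

    # eksctl
    curl -sL "https://github.com/eksctl-io/eksctl/releases/latest/download/eksctl_Linux_amd64.tar.gz" | tar xz -C /tmp
    sudo mv /tmp/eksctl /usr/local/bin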

Create the cluster, connect, & verify it's running w/eksctl:
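A hedged sketch of those steps (cluster name, region, & node count here are assumptions, not the lab's exact values):

    # create the cluster (grab a coffee, this takes a while)
    eksctl create cluster --name dev --region us-east-1 --nodes 2 --managed

    # point kubectl at it & verify
    aws eks update-kubeconfig --name dev --region us-east-1
    kubectl get nodes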

Run thru some kubectl applys on the YAML files & test to see those pods running:
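Roughly like so (the manifest filenames are placeholders for whatever the lab hands you):

    kubectl apply -f deployment.yml
    kubectl apply -f service.yml
    kubectl get pods -o wide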

  • Now curl the load balancer DNS name…walllll-ahhhhh
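Something like this, assuming the service is of type LoadBalancer (the service name is a placeholder):

    # grab the ELB hostname, then hit it
    kubectl get service my-service \
      -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
    curl http://<LOAD_BALANCER_DNS_NAME>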

Deploy Nodes w/Terraform in Kubernetes

Kubernetes is up & running!? Sick! Buuuuuuuuuuuuuuuuuuut, I wanna make some changes – so Imma use Terraform. W/out further ado… let's get these nodes deployed!

  • Initially set up a cluster using kubectl
  • Deployed NGINX nodes using Terraform
  • As an admin I deployed a NodePort to the Kubernetes cluster w/NGINX Nodes
  • Used Terraform to deploy NodePort & scale NGINX nodes
  • ….DESTROY video boy (…..what is Benchwarmers..)

Set up the goodies:

Check to see the cluster is created & get the SSL cert info & server IP address:
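From the terminal, that check can look like this (this lab's cluster runs on kind per the teardown step below, so the cert data & server address live in the kubeconfig):

    kind get clusters
    kubectl cluster-info
    # dump the unredacted kubeconfig to grab the server address & cert data
    kubectl config view --minify --raw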

Edit Variables file:

Terraform init & apply:
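The standard routine, assuming you're sitting in the directory w/the .tf files:

    terraform init     # download the kubernetes provider
    terraform plan     # sanity-check what's about to change
    terraform apply    # type 'yes' & let it rip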

Get the TF config file:

vim lab_kubernetes_service.tf:
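A minimal sketch of the NodePort service file (service name & node port are assumptions):

    cat > lab_kubernetes_service.tf <<'EOF'
    resource "kubernetes_service" "nginx" {
      metadata {
        name = "nginx"
      }
      spec {
        type     = "NodePort"
        selector = { App = "nginx" }
        port {
          port        = 80
          target_port = 80
          node_port   = 30201
        }
      }
    }
    EOF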

vim lab_kubernetes_resources.tf:
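And a sketch of the resources file w/the NGINX Deployment (image tag & replica count are assumptions):

    cat > lab_kubernetes_resources.tf <<'EOF'
    resource "kubernetes_deployment" "nginx" {
      metadata {
        name   = "nginx"
        labels = { App = "nginx" }
      }
      spec {
        replicas = 2
        selector {
          match_labels = { App = "nginx" }
        }
        template {
          metadata {
            labels = { App = "nginx" }
          }
          spec {
            container {
              name  = "nginx"
              image = "nginx:latest"
            }
          }
        }
      }
    }
    EOF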

  • Terraform Destroy
  • kind delete cluster --name lab-terraform-kubernetes

Deep Pass of Secrets to a Kubernetes Container

Kubernetes is dope for data, bro! Watch how we pass configuration data stored in Secrets & ConfigMaps to the applications running in containers.

  • Create a password file & store it in ….. Secrets..
  • Create the Nginx Pod

Generate a file for the secret password file & data:
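A hedged sketch (the secret & file names are assumptions):

    # the password file
    echo -n 'supersecretpassword' > password.txt

    # stash it in a Secret
    kubectl create secret generic nginx-password --from-file=password=./password.txt
    kubectl get secret nginx-password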

vi pod.yml:
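Roughly what the pod manifest can look like, mounting that Secret as a volume (names & mount path are assumptions, shown just to demo the Secret-volume plumbing):

    cat > pod.yml <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: password-volume
          mountPath: /etc/nginx/passwords
          readOnly: true
      volumes:
      - name: password-volume
        secret:
          secretName: nginx-password
    EOF
    kubectl apply -f pod.yml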

kubectl exec <POD_NAME> -- curl -u user:<PASSWORD> <IP_ADDRESS>:

Be Like 2 Kubernetes in a Pod

Alright alright alright…. let's create a lil baby pod & eventually create an entire Kubernetes application!!

  • Create YAML file w/the pod details for the nginx pod
  • Create the pod…just do it!
  • SSH!!

vi nginx.yml:
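A minimal sketch of the manifest (the web namespace comes from the kubectl commands below; make sure it exists first):

    kubectl create namespace web

    cat > ~/nginx.yml <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx
      namespace: web
    spec:
      containers:
      - name: nginx
        image: nginx
    EOF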

kubectl create -f ~/nginx.yml:

  • Create the pod bro

kubectl get pods -n web:

  • Double check the pod is created dude

kubectl describe pod nginx -n web:

  • Looooook at daaa deeeeetaillllllllzzzuhhh

Falco to Detect Threats on Containers in Kubernetes!

Falco Lombardi is… ahem.. Falco is able to detect any shady stuff going on in your Kubernetes environment in no time.

  • Create a Falco Rules File to Scan the Container
  • Run Falco to Obtain a Report of ALL the Activity
  • Create a rule to scan the container; this script's rule is sketched after this list
  • Run Falco for up to a minute & see if anything is detected
    • -r = the rules file to load
    • -M = max seconds to run
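A minimal sketch in the style of the classic Falco-docs example rule (the lab's exact rule may differ):

    cat > rule.yaml <<'EOF'
    - rule: shell_in_container
      desc: notice shell activity within a container
      condition: >
        evt.type = execve and evt.dir = < and
        container.id != host and proc.name = bash
      output: "shell in a container (user=%user.name container=%container.id cmdline=%proc.cmdline)"
      priority: WARNING
    EOF

    # -r = load this rules file, -M = stop after 60 seconds
    sudo falco -r rule.yaml -M 60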

Canary in a Coal Mine to find Kubernetes & Jenkins

Goal:

Our coal mine (CICD pipeline) is struggling, so let's use canary deployments to monitor a Kubernetes cluster under a Jenkins pipeline. Alright, let's level-set here…

  • You got a Kubernetes cluster, mmmmkay?
  • A pipeline from Jenkins leads to CICD deployments, yeah?
  • Now we must add the deetz (details) to get canary to deploy

Lessons Learned:

  • Run Deployment in Jenkins
  • Add Canary to Pipeline to run Deployment

Run Deployment in Jenkins:

Source Code:

  • Create fork & update username

Set up Jenkins (GitHub access token, Docker Hub, & KubeConfig):

Jenkins:

  • Credz
    • GitHub username & password (access token)

Github:

  • Generate access token

DockerHub:

  • DockerHub: no separate access token generated here, the creds are just your username & password

Kubernetes:

Add Canary to Pipeline to run Deployment:

Create Jenkins Project:

  • Multi-Branch Pipeline
  • Github username
  • Owner & forked repository
    • When provided an option for the URL, select the deprecated visualization
  • Check it out homie!

Canary Template:

  • We have prod, but need Canary features for stages in our deployment!
  • Pay attention to these fields (sketched in the YAML after this list):
    • track
    • spec
    • selector
    • port
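A hedged sketch of the canary flavor of the template; all names & ports here are placeholders, the point is the extra track: canary label that splits canary pods from prod:

    cat > kube-canary.yml <<'EOF'
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp-canary
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: myapp
          track: canary
      template:
        metadata:
          labels:
            app: myapp
            track: canary
        spec:
          containers:
          - name: myapp
            image: myuser/myapp:latest
            ports:
            - containerPort: 8080
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: myapp-canary
    spec:
      type: NodePort
      selector:
        app: myapp
        track: canary
      ports:
      - port: 8080
        nodePort: 30002
    EOF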

Add Jenkinsfile to Canary Stage:

  • Between Docker Push & DeployToProduction
    • We add CanaryDeployment stage!

Modify the Production Deployment Stage:

EXECUTE!!

Grab the Network wheel, our SGs & NACLs are 2-trackin!

Goal:

Uhh-ohh, we let the newbie drive & we're off the road… let's take a peek under the hood & see why we can't connect to the internet. This post shares an order of operations to follow when you don't know why an instance isn't connecting to the internet.

Lessons Learned:

  • Determine why the instance can't connect to the internet
  • ID issues preventing instances from connecting to the internet
  • Important Notes:
    • We have 3 VPCs w/SSH connections & NACLs configured through route tables
    • Instances 1 & 2 have connection to the internet & are a-okay…
    • Instance 3 is not connected to the internet, so we oughta figure out the problem.

Order of Operations (a CLI sketch follows this list):

  • Instance
  • Security Group
  • Subnet
  • NACL
  • Route table
  • Internet gateway
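If you'd rather drive from the CLI than click around, the same order of operations looks roughly like this (all IDs are placeholders):

    # 1. instance: public IP? which subnet/VPC?
    aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
      --query 'Reservations[].Instances[].[PublicIpAddress,SubnetId,VpcId]'

    # 2. security group rules
    aws ec2 describe-security-groups --group-ids sg-0123456789abcdef0

    # 3/4. the NACL attached to the subnet & its allow/deny entries
    aws ec2 describe-network-acls \
      --filters Name=association.subnet-id,Values=subnet-0123456789abcdef0

    # 5/6. route table: is there a 0.0.0.0/0 route to an igw-?
    aws ec2 describe-route-tables \
      --filters Name=association.subnet-id,Values=subnet-0123456789abcdef0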

Solution:

  • Instance
    • No public IP address
  • NACL
    • Deny rules for inbound & outbound that prevent all pinging & traffic to the instance
  • Route Table
    • Did not have route to internet gateway

Determine why the instance can't connect to the internet:

Instance:

  • Start w/networking & manage IP address
    • See no public IP address below in screenshot
  • Wham bam thank ya mam! Fixed!… Wait, it isn’t?

Security Group:

  • Can we ping the instance?
  • Remember when looking at the rules, just cuz it says private, that doesn't mean it is! So check the inbound/outbound rule details

PING!

  • Nothing. Okay, I reckon we keep lookin'..

Subnet:

  • Look at private IP address & then VPC
    • Specifically under subnets pay attention to the VPC ID
  • Looks okay so far, keep on keepin on!

NACLs:

  • We found the issue!! The NACL rules deny all inbound/outbound traffic into the instance!
    • Even tho the security group does allow traffic, remember the order of operations from in-to-out..

PING!!

  • Still nothing, hmm..

Route Table:

  • Ah-ha! We found the issue…again!
    • There is no route to the internet gateway

ID issues preventing instances from connecting to the internet:

Instance:

  • Allocate an Elastic IP Address, not a public one!!
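Via the CLI that's two calls (IDs are placeholders):

    aws ec2 allocate-address --domain vpc
    aws ec2 associate-address --instance-id i-0123456789abcdef0 \
      --allocation-id eipalloc-0123456789abcdef0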

NACLs:

  • The options we have are:
    • Change the NACL security rules
    • Get a different NACL w/proper rules in it
      • In prod… don't do this cuz it can affect all the subnets associated w/it.
  • Under public-subnet4 (which was the original VPC ID we had for instance 3), select edit network ACL association, & change the NACL to public-subnet3's

Route Tables:

  • The options we have are:
    • Add a route to the table that allows traffic to flow from subnet to internet gateway
      • Remember, in other environments there may be others using this route table to permit only private access, so don't modify it.
    • Select route table that has appropriate entries
  • Here we edit the route table association & then notice the difference in the route table permitting connection/traffic

Ping!

  • YEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEET!!
  • Now, if you desired, you can SSH into the instance

Lamb<da>s in the AMAZON!? (..SQS)

Goal:

W/magic I will make this message appear!…. or just use a Lambda function that is triggered by SQS & inputs the data into a DB.

Lessons Learned:

  • Create Lambda function
  • Create SQS trigger
  • Copy source code into Lambda function
  • Go to the EC2 console & test the script
  • Double check messages were placed into the DB

Create Lambda function:

  • 3 minor details to utilize:
    • Name = SQS DynamoDB
    • Runtime = Python 3.x
    • Role = lambda-execution-role
  • Alright, whew, now that's over with…. (a CLI version is sketched below if you'd rather script it)
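The CLI version of those 3 details might look like so (account ID, zip file, & runtime version are assumptions; note a function name can't actually contain a space):

    aws lambda create-function \
      --function-name SQSDynamoDB \
      --runtime python3.12 \
      --role arn:aws:iam::123456789012:role/lambda-execution-role \
      --handler lambda_function.lambda_handler \
      --zip-file fileb://function.zip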

Create SQS trigger:

  • Are you triggered bro? Hopefully “SQS” & “Messages” trigger you…
    • Important note: create the SQS queue & a message first, so when creating the trigger you can snag that message created in SQS (a CLI sketch of the trigger wiring is below)
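A hedged CLI equivalent of wiring up the trigger (queue name, region, & account ID are assumptions):

    aws lambda create-event-source-mapping \
      --function-name SQSDynamoDB \
      --event-source-arn arn:aws:sqs:us-east-1:123456789012:Messages \
      --batch-size 1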

Copy source code into Lambda function:

  • Copy-n-pasta into the lambda_function.py…. now destroy .. ahem, DEPLOY HIM!!

Go to the EC2 console & test the script:

  • Sign your life away & see what the damage is! (aka: go to your EC2 instance)
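The lab's script does the sending for you, but by hand it's roughly this (queue URL & body are placeholders):

    aws sqs send-message \
      --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/Messages \
      --message-body '{"id": "1", "body": "hello from SQS"}'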

Double check messages were placed into the DB:

  • After you checked EC2, let's double-check… quadruple-check? You checked it 1x, so you're checking 2x? Or is it multiples of 4?.. idk, regardless, you can look at your DB to see if you have a message from Lambda (or scan the table from the CLI, sketched below). Have at it.
    • Below is what SQS & DynamoDB prolly look like
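The CLI scan (table name is an assumption):

    aws dynamodb scan --table-name Messages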

Wanna Monitor a CloudFormation stack w/AWS Config?

Goal:

Let's see how to use AWS Config to monitor whether EC2 instances that are launched comply w/the instance types specified in AWS Config.

Lessons Learned:

  • Create AWS Config Rule
  • Make EC2 instance compliant w/config rule

Create AWS Config Rule:

  • You will see a couple json files, grab the 2nd one “badSG”
  • Create a key-pair
  • Example of the issue in the CloudFormation Stack
  • Here you can see the template only says "SecurityGroups", not "SecurityGroupIds" (the property a VPC security group ID actually needs).
    • Easy fix once you find it in the documentation; the corrected shape is sketched below.
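For the record, the corrected shape is roughly this (all IDs are placeholders):

    cat > fixed-sg.json <<'EOF'
    {
      "Resources": {
        "WebInstance": {
          "Type": "AWS::EC2::Instance",
          "Properties": {
            "ImageId": "ami-0123456789abcdef0",
            "InstanceType": "t3.micro",
            "SubnetId": "subnet-0123456789abcdef0",
            "SecurityGroupIds": ["sg-0123456789abcdef0"]
          }
        }
      }
    }
    EOF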

Create new stack for updated SG:

  • Go ahead & post the 3rd json file in the “infrastructure composer” under CloudFormation
  • Like before go get your subnet, SG, & VPC IDs

Make EC2 instance compliant w/config rule:

  • Snag the 1st json file in the CloudFormation github link
  • Go to AWS Config
  • Now create a new stack for config record
  • Now your stack is created – wow.
  • Jump back to AWS Config to see your rules, are you compliant?
    • If not, re-upload your CloudFormation template depending what your AWS Config found
      • Example
        • EC2 instance non-compliant
  • Now what? Well, delete whatever is not in use. OR don't & see your bills pile up!

Gettin’ cheaper infrastructure w/CloudFormation

Goal:

GREAT SCOTT! One just realized our EC2 instance has more compute power than required, & that's not all! Plus we're spending wayyy too much chedahhhhhhhh (we want to save for other goodies, like Pokemon cards & new fancy coffee mugs.. just a thought)

Lessons Learned:

  • Configure InstanceType Parameter to “t3.micro”
  • Launch Updated stack & ensure EC2 can connect

The Appetizer before configuring “t3.micro” & Updating the stack:

Configure InstanceType Parameter to “t3.micro”:

  • After maneuvering to your CloudFormation stack & selecting update, take a peek at the template as seen below.
    • Don't fret, all these lines can be leveraged from the link above in the github repository.
  • Screenshot below shows the "Default: t3.small" that requires an update (a CLI version of the change is sketched after this list)
  • This is a perty-neat feature I thunk you would find dope: instead of lines of code, you can mold your own visual CloudFormation by making selections on the side.
    • OR you can just see how each AWS service connects to one another.
  • After you make the minor edit for the EC2 size, select validate
  • Once that is complete, your screen will look like this below
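And the CLI version of the same edit, if clicking isn't your thing (stack name is a placeholder):

    aws cloudformation update-stack \
      --stack-name lab-stack \
      --use-previous-template \
      --parameters ParameterKey=InstanceType,ParameterValue=t3.micro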

Launch Updated stack & ensure EC2 can connect:

  • Cue Jeopardy theme song…
    • After a couple minutes you will see updates to your template
  • Scroll down to find your instance ID to see if your instance update is complete
  • SEEE!??!
  • Wanna double check? Go to outputs & lastly snag your Public IP address