AWS: Creating a VPC With an Auto-scaling Group Using T2.micro Instances | by Jennelle Cosby | Apr, 2022

Credit: AWS Auto Scaling [*note: although the diagram depicts the use of an Application Load Balancer, this exercise will not use one and will focus instead on stressing the CPU of the instances.]

In this walkthrough, we'll discuss the architecture of VPCs within AWS (Amazon Web Services) and the use of Auto Scaling groups in EC2 to help maintain a self-healing architecture.

Note: For the purposes of this exercise, I'm also using the new AWS Management Console. I will be configuring a custom VPC rather than using the default created by AWS. My region is set to us-east-1.
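For readers who prefer the command line, a rough CLI equivalent of the custom VPC setup is sketched below. The 10.0.0.0/16 and 10.0.1.0/24 CIDR blocks are illustrative assumptions only and are not values taken from the console walkthrough:

$ aws ec2 create-vpc --cidr-block 10.0.0.0/16 --region us-east-1
$ aws ec2 create-subnet --vpc-id <vpc_id> --cidr-block 10.0.1.0/24 --region us-east-1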

Confirming the success of the Auto Scaling group and using a stress tool

1. Confirm Apache has been installed on each instance: A few minutes after launching the Auto Scaling group, the instances will begin to initialize and launch. Once an instance is running and has passed both status checks, select the instance ID to view the details and copy the public IP. In the address bar of the browser, type http://<public_ip> and hit "Enter". If you reach the Apache test page, the installation from the bootstrap was a success (a sketch of such a bootstrap script follows below).
Apache test page
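The bootstrap referenced above is the user data script supplied when the instances launch. The script itself isn't reproduced in this section, but on Amazon Linux 2 it might look roughly like this assumed sketch, which installs both Apache and the stress tool used later:

#!/bin/bash
# Assumed bootstrap (user data) sketch: install Apache and the stress tool on Amazon Linux 2
yum update -y
amazon-linux-extras install epel -y   # the stress package comes from the EPEL repository
yum install -y httpd stress
systemctl enable --now httpd          # start Apache now and on every boot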

2. From the AWS Management Console, use the search bar to navigate to CloudWatch. Because we selected the options earlier in the process to track the CPU Utilization metric in CloudWatch, there should already be alarms created for the Auto Scaling group in the CloudWatch dashboard.
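If you'd rather check from the terminal, the same alarms can be listed with the AWS CLI (assuming the CLI is installed and configured; the --query expression below is just one way to trim the output):

$ aws cloudwatch describe-alarms \
    --query "MetricAlarms[].{Name:AlarmName,Metric:MetricName,State:StateValue}" \
    --output table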

3. To run the stress tool we bootstrapped into the Auto Scaling group instances, open the terminal and change directory using $ cd <name_of_directory> to make sure you are in the same directory as the keypair .pem file. If necessary, use the command $ chmod 400 <nameofkeypair>.pem to restrict its permissions so the key is not publicly viewable. Then, log into one of the running EC2 instances and start the stress test using the following commands:

$ ssh -i "<nameofkeypair>.pem" ec2-user@<public_ip_of_instance>
$ sudo stress --cpu 1 --timeout 300
  • Since my instances require 60 seconds to warm up (as configured in the Auto Scaling group), the stress tool needs to put enough load on the CPU to max it out and trigger the CloudWatch alarm configured in the Target Tracking Policy. The --timeout value represents the length of the stress test. The stress test will finish and produce the following result:
In the local terminal, the stress tool will stress the instance I've chosen for a period of 300 seconds (5 minutes), allowing CloudWatch to track the specified metric (CPU Utilization).
CloudWatch widget with the CPU Utilization metric, monitoring the maximum over a 5-minute time period. The stress test yielded 100% CPU Utilization, which means the stress test was successful. (This can be viewed in CloudWatch or when you select a running instance in EC2.)
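The same data points can also be pulled from the terminal with the AWS CLI, for example (assuming a configured CLI; replace the instance ID and the time window placeholders with your own values):

$ aws cloudwatch get-metric-statistics \
    --namespace AWS/EC2 \
    --metric-name CPUUtilization \
    --dimensions Name=InstanceId,Value=<instance_id> \
    --statistics Maximum \
    --period 60 \
    --start-time <start_time> \
    --end-time <end_time>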

** I spent a lot of time adjusting the stress test with different values to see how CloudWatch adjusted the graphs. I also experimented with the monitoring values for the same purpose. During this stage of the exercise, it was helpful to see how these adjustments affected the outcome.
