We continue with the series of posts dedicated to the AWS SAA certification.
Topics covered:
EC2
Classes of EC2
- On-Demand: pay a fixed rate per hour (or per second). Linux is billed by the second, Windows by the hour.
- Reserved: 1-year or 3-year terms with a larger discount.
- Spot: cheaper than On-Demand; you set the maximum price you are willing to pay for spare instance capacity.
- Dedicated Hosts: a physical EC2 server dedicated to your use.
OnDemand
- Users who want low cost and flexibility without a long-term commitment
- Applications with short-term, spiky, or unpredictable workloads that cannot be interrupted
- Applications being developed
Reserved
- Predictable usage
- Applications which require reserved capacity
- Upfront payments to reduce total computing costs
- Standard RIs: up to 75% off On-Demand
- Convertible RIs: up to 54% off On-Demand
- Scheduled RIs launch within the time window you reserve, matching your capacity to a predictable recurring schedule
- Reserved Instances are not interrupted, unlike Spot Instances, which can be reclaimed when there are not enough unused EC2 instances to meet demand
- You can have capacity reservations that recur on a daily, weekly, or monthly basis, with a specified start time and duration, for a one-year term through Scheduled Reserved Instances
Spot
- You can prevent Spot Instances from being terminated by using Spot Block
- Flexible start and end times
- Applications only feasible at very low compute prices
- if you terminate the instance, you pay for the full hour
- if AWS terminates the instance, the hour in which it was terminated is free
- Spot Fleet
- attempts to launch the number of Spot and On-Demand instances required to meet the target capacity you specify
- Fleet needs
- available capacity
- the maximum price you specified exceeds the current Spot price (if the maximum price of your request is higher than the current Spot price, Amazon EC2 fulfills it immediately as long as capacity is available)
- Fleet will try to maintain your capacity when instances are interrupted
- Set up different launch pools, defining things like the EC2 instance type, operating system, and Availability Zone.
- You can have multiple pools, and the fleet will draw from them depending on the allocation strategy you choose (see the request sketch after the strategy list below)
- Spot Fleets stop launching EC2 instances once you reach your price threshold or desired capacity
- Spot Fleet strategies
- capacityOptimized
- diversified
- lowestPrice
- InstancePoolsToUseCount (used together with lowestPrice)
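A minimal sketch of a Spot Fleet request from the CLI. All values (the fleet role ARN, AMI, subnet, prices and target capacity) are placeholders, not real resources:

# Two launch pools (m5.large and c5.large) with the lowestPrice strategy
cat > spot-fleet-config.json <<'EOF'
{
  "IamFleetRole": "arn:aws:iam::123456789012:role/aws-ec2-spot-fleet-tagging-role",
  "AllocationStrategy": "lowestPrice",
  "TargetCapacity": 4,
  "SpotPrice": "0.05",
  "LaunchSpecifications": [
    { "ImageId": "ami-0abcdef1234567890", "InstanceType": "m5.large", "SubnetId": "subnet-0abc1234" },
    { "ImageId": "ami-0abcdef1234567890", "InstanceType": "c5.large", "SubnetId": "subnet-0abc1234" }
  ]
}
EOF
aws ec2 request-spot-fleet --spot-fleet-request-config file://spot-fleet-config.json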
Dedicated Hosts
Types of EC2
Graviton 2 Based C6GN (Network optimized)
BareMetal Instances
As AWS explains, bare metal instances allow EC2 customers to run applications that benefit from deep performance analysis tools, specialized workloads that require direct access to bare metal infrastructure, legacy workloads not supported in virtual environments, and licensing-restricted Tier 1 business-critical applications.
ARM-Based EC2
EC2 Status Checks
Status checks are performed every minute and each returns a pass or a fail status. If all checks pass, the overall status of the instance is OK. If one or more checks fail, the overall status is impaired.
System Status Checks
Monitor the AWS systems on which your instance runs. These checks detect underlying problems with your instance that require AWS involvement to repair. When a system status check fails, you can choose to wait for AWS to fix the issue, or you can resolve it yourself. For instances backed by Amazon EBS, you can stop and start the instance yourself, which in most cases migrates it to a new host. For instances backed by instance store, you can terminate and replace the instance.
The following are examples of problems that can cause system status checks to fail:
- Loss of network connectivity
- Loss of system power
- Software issues on the physical host
- Hardware issues on the physical host that impact network reachability
Instance Status Checks
The following are examples of problems that can cause instance status checks to fail:
- Failed system status checks
- Incorrect networking or startup configuration
- Exhausted memory
- Corrupted file system
- Incompatible kernel
EC2 Hibernate
- Preserves the in-memory state (RAM) on persistent storage (EBS)
- Much faster boot because the operating system does not need to be reloaded
- RAM must be less than 150 GB
- Supported families: C3, C4, M3/M4/M5, R3/R4/R5
- Windows, Amazon Linux 2 AMI, and Ubuntu
- Can’t be hibernated for more than 60 days
- Available for On-Demand and Reserved
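A hedged sketch of using hibernation from the CLI (the AMI, device name and instance ID are placeholders; hibernation requires an encrypted root EBS volume large enough to hold the instance's RAM):

# Launch with hibernation enabled and an encrypted root volume
aws ec2 run-instances \
  --image-id ami-0abcdef1234567890 \
  --instance-type m5.large \
  --hibernation-options Configured=true \
  --block-device-mappings '[{"DeviceName":"/dev/xvda","Ebs":{"Encrypted":true,"VolumeSize":50}}]'

# Later, hibernate instead of a normal stop
aws ec2 stop-instances --instance-ids i-0123456789abcdef0 --hibernate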
Alarm Actions in case of System Status Check Failed
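A common pattern is a CloudWatch alarm on the StatusCheckFailed_System metric whose alarm action is the EC2 recover action. A minimal sketch (the instance ID and region are placeholders):

aws cloudwatch put-metric-alarm \
  --alarm-name recover-web-server \
  --namespace AWS/EC2 \
  --metric-name StatusCheckFailed_System \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --statistic Maximum \
  --period 60 \
  --evaluation-periods 2 \
  --threshold 1 \
  --comparison-operator GreaterThanOrEqualToThreshold \
  --alarm-actions arn:aws:automate:us-east-1:ec2:recover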
Update
- 2019
- New EC2 M5 and R5 instances:
- M5: general-purpose workloads, 64 vCPUs, 256 GB RAM, 20 Gb/s network bandwidth, EBS or SSD storage
- R5: memory-intensive workloads, up to half a terabyte of RAM
- On-Demand Capacity Reservations can now be shared: you can share a reservation with another AWS account or AWS Organization. Because you can share across multiple accounts, organizations can plan capacity needs at an aggregate level and optimize costs.
- New Bare Metal instances with extreme memory: up to 24 TB of RAM, available as Dedicated Hosts with a three-year reservation (U family)
- New Bare Metal ARM-based EC2 instances: powered by Arm-based AWS Graviton processors.
- 2020
- Attach multiple Elastic Inference accelerators: you can use a single EC2 instance in an Auto Scaling group when you are running inference for multiple models. By attaching multiple accelerators to a single instance, you avoid deploying multiple Auto Scaling groups of CPU or GPU instances for your inference and lower your operating costs.
- Now you can stop and start the EC2 Spot Instances.
- Graviton 2 Based C6GN (Network optimized)
- Habana Gaudi Based Instance Types
AMI
EBS
- storage volumes
- automatically replicated within their Availability Zone to protect you from component failure
- you cannot attach one EBS volume to multiple instances; use EFS instead
- you cannot encrypt the EBS root volumes of the default AMIs
Instance Stores vs EBS Backed instances
- EC2 instances with instance store come in fewer instance families.
- The critical difference: you cannot stop and start an EC2 instance backed by instance store. If there is a hypervisor issue, an EBS-backed instance can be stopped and started (which in most cases moves it to a new host); an instance-store-backed instance cannot, and that instance is lost. This is why it is called ephemeral storage.
- Instance store volumes are not shown in the Volumes section of the console; you cannot manage them there.
Backup EBS tips
- snapshots are located in S3.
- snapshot, point in time copies of volumes
- snapshot, are incremental
- it is recommended to stop an EC2 instance before taking a snapshot of its EBS root volume
- you can now change EBS volumes on the fly, including size and storage type
- volumes are always in the same Availability Zone as the EC2 instance
- to move a volume to another AZ or Region, take a snapshot or image and restore or copy it in the new location
- snapshots of encrypted volumes are encrypted automatically
- you can share snapshots with others only if they are not encrypted
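A minimal sketch of the snapshot workflow from the CLI (volume and snapshot IDs and regions are placeholders):

# Take a point-in-time, incremental snapshot of a volume
aws ec2 create-snapshot \
  --volume-id vol-0123456789abcdef0 \
  --description "Backup before upgrade"

# Copy the snapshot to another region (e.g. for DR or to move a volume)
aws ec2 copy-snapshot \
  --source-region us-east-1 \
  --source-snapshot-id snap-0123456789abcdef0 \
  --region eu-west-1 \
  --description "DR copy"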
Snapshots
- Snapshots of encrypted volumes are encrypted automatically
- Volumes restored from encrypted volumes are encrypted automatically
- You can share snapshots, but they must be unencrypted
- You can now encrypt root device volumes when you create the EC2 instance
EBS RAID TIPS
- RAID 5 is not recommended by AWS
- Better to use Stripe Volume = RAID0
- How do you take a snapshot of multiple EBS volumes in a RAID array? Take an application-consistent snapshot by doing one of the following:
- freeze the file system, or
- unmount the array, or
- shut down the EC2 instance
DataLifeCycle (DLM) Manager
With Amazon Data Lifecycle Manager, you can manage the lifecycle of your AWS resources. You create lifecycle policies, which are used to automate operations on the specified resources.
Amazon DLM supports:
- Amazon EBS volumes
- snapshots
- AMIs
- For AMIs, you can select target instances, set schedules for creation and retention, copy newly created AMIs to other regions, share them with other AWS accounts, and even add tags to the AMI and the snapshots. On the deletion side, you can choose to simply deregister the AMI or to delete the associated snapshots as well.
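A hedged sketch of a DLM policy that snapshots tagged volumes daily and keeps seven copies (the role ARN, tag and schedule values are assumptions for illustration):

cat > policy-details.json <<'EOF'
{
  "ResourceTypes": ["VOLUME"],
  "TargetTags": [{"Key": "Backup", "Value": "true"}],
  "Schedules": [{
    "Name": "DailySnapshots",
    "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS", "Times": ["03:00"]},
    "RetainRule": {"Count": 7},
    "CopyTags": true
  }]
}
EOF
aws dlm create-lifecycle-policy \
  --execution-role-arn arn:aws:iam::123456789012:role/AWSDataLifecycleManagerDefaultRole \
  --description "Daily snapshots of tagged volumes" \
  --state ENABLED \
  --policy-details file://policy-details.json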
Termination protection is turned off by default
DeleteOnTermination
When an instance terminates, the value of the DeleteOnTermination attribute for each attached Amazon EBS volume determines whether to preserve or delete the volume. By default, the DeleteOnTermination attribute for the root volume of an instance is set to true, so the default is to delete the root volume when the instance terminates.
By default, when you attach a non-root EBS volume to an instance, its DeleteOnTermination attribute is set to false, so the default is to preserve these volumes. You must delete a volume to avoid incurring further charges (see Deleting an Amazon EBS Volume). After the instance terminates, you can take a snapshot of the preserved volume or attach it to another instance.
To verify the value of the DeleteOnTermination attribute for an EBS volume that is in use, look at the instance's block device mapping (see Viewing the EBS Volumes in an Instance Block Device Mapping).
You can change the value of the DeleteOnTermination attribute for a volume when you launch the instance or while the instance is running, as in the sketch below.
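A minimal sketch of changing DeleteOnTermination for an attached data volume on a running instance (the instance ID and device name are placeholders):

aws ec2 modify-instance-attribute \
  --instance-id i-0123456789abcdef0 \
  --block-device-mappings '[{"DeviceName":"/dev/sdf","Ebs":{"DeleteOnTermination":true}}]'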
EBS types
GP2 General Purpose SSD (bootable), up to 10K IOPS
- 1 GiB – 16 TiB
- 3 IOPS per GiB
- up to 10K IOPS
- burst up to 3K IOPS
IO1 Provisioned IOPS SSD (bootable), more than 10K IOPS
- 4 GiB – 16 TiB
- I/O-intensive applications
- use it if you need more than 10K IOPS
Magnetic Standard (bootable)
- Lowest cost per gigabyte of all EBS volumes that is bootable
HDD Throughput Optimized ST1
- 500 GiB – 16 TiB
- big data, data warehouses, log processing
- Since files are read in whole, HDD based storage would offer very high sequential read throughput
- cannot be a boot volume
HDD Cold SC1
- 500 GiB – 16 TiB
- Lowest-cost storage for infrequently accessed workloads
- cannot be a boot volume
- Cold HDD volumes provide low-cost magnetic storage that defines performance in terms of throughput rather than IOPS. With a lower throughput limit than Throughput Optimized HDD, this is a good fit for large, sequential cold-data workloads. If you require infrequent access to your data and are looking to save costs, Cold HDD provides inexpensive block storage. Take note that bootable Cold HDD volumes are not supported.
General Purpose Base Ratio IOPS
GP2 has a base of 3 IOPS per GiB of volume size.
- Maximum volume size: 16 TiB
- Maximum of 10,000 IOPS
For example, a 100 GiB volume gets a baseline of 300 IOPS (100 × 3). It can also burst up to 3,000 IOPS using I/O credits, so the extra burst capacity in this scenario is 2,700 IOPS (3,000 − 300).
Every volume receives an initial I/O credit balance of 5,400,000 credits, enough to sustain a burst of 3,000 IOPS for 30 minutes (3,000 IOPS × 1,800 seconds = 5,400,000 credits).
When you are below your performance baseline, you earn new credits.
Maximum Ratio Size: IOPS
Resize EBS volumes
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-modify-volume.html?icmpid=docs_ec2_console
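A minimal sketch of resizing a volume on the fly and tracking the modification (the volume ID and target values are placeholders). Note that after growing the volume you still have to extend the partition and file system from the OS (e.g. growpart/resize2fs):

aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 200 --volume-type gp2
aws ec2 describe-volumes-modifications --volume-ids vol-0123456789abcdef0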
Updates
- 2019
- Amazon EBS Fast Snapshots Restore (FSR)
- Ensures that EBS volumes restored from FSR-enabled snapshots instantly receive their full provisioned performance, letting you restore multiple volumes from a snapshot without initializing them yourself. Data from an EBS snapshot is lazily loaded into an EBS volume; if a part of the volume that has not yet been loaded is accessed, the application sees higher latency than normal while the data is loaded. To avoid this impact for latency-sensitive applications, customers used to pre-warm their data from a snapshot into an EBS volume.
- https://aws.amazon.com/about-aws/whats-new/2019/11/amazon-ebs-fast-snapshot-restore-eliminates-need-for-prewarming-data-into-volumes-created-snapshots/?nc1=h_ls
- 2020
- io2 Block Express: AWS announced the availability, in preview, of io2 Block Express volumes that are designed to deliver up to 4x higher throughput, IOPS, and capacity than io2 volumes, while also delivering sub-millisecond latency and 99.999% durability. (https://aws.amazon.com/es/about-aws/whats-new/2020/12/aws-quadruples-per-volume-maximum-capacity-and-performance-on-io2-volumes-in-preview/)
Security Groups
- all inbound traffic is blocked by default
- all outbound traffic is allowed
- changes take effect immediately
- Stateful
- If you allow traffic in, that traffic is automatically allowed back out again.
- Cannot block specific IP address with security groups, instead use NACL
- you can specify allow rules, but not deny rules.
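A minimal sketch of adding an allow rule to a security group (the group ID is a placeholder); remember that only allow rules exist, there are no deny rules:

aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 443 \
  --cidr 0.0.0.0/0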
LOAD BALANCERS
- ALB (Application Layer, HTTP/HTTPS) – 2016
- Classic Load Balancer
- NLB
Classic Load Balancer
- Cross Zone Load Balancing is disabled by default
- Each load balancer node for your Classic Load Balancer distributes requests evenly across the registered instances in all enabled Availability Zones. If cross-zone load balancing is disabled, each load balancer node distributes requests evenly across the registered instances in its Availability Zone only.
- Recommended for the EC2-Classic network
- TLS termination is supported only by Classic and Application Load balancers
- Targets: EC2 instances
- Works on EC2-Classic and VPC
- HTTP
- HTTPS
- TCP
- SSL
- SSL offloading (TLS termination)
- Sticky sessions
- OSI layers 4 and 7
The Classic Load Balancer does not support Server Name Indication (SNI). You have to use an Application Load Balancer or a CloudFront web distribution instead to get the SNI feature.
Application Load Balancer
- Cross Zone LB enabled by default
- Targets: EC2 instances, containers, and private IP addresses
- Content-based routing
- path
- host
- Load balancing across different ports on a single EC2 instance
- Sticky sessions
- Supports HTTP, HTTPS, HTTP/2, and WebSocket
- OSI layer 7
- Flexible application management and TLS termination
TLS termination is supported by Classic and Application Load Balancers, and it is now also available on the NLB, which is great to offload the TLS overhead from the EC2 instances.
Updates
- 2018
- Slow Start Algorithm: targets can warm up before they start receiving fresh traffic
- Application Load Balancers now support two new security policies: ELBSecurityPolicy-FS-2018-06 and ELBSecurityPolicy-TLS-1-2-Ext-2018-06
- ELBSecurityPolicy-FS-2018-06 implements ciphers that ensure Forward Secrecy. Customers now have a policy that prevents out-of-band decryption if someone records the traffic and later compromises the server's private key.
- ELBSecurityPolicy-TLS-1-2-Ext-2018-06 gives customers the option of only using the latest TLS 1.2 protocol with the same set of ciphers as available with the default ELBSecurityPolicy-2016-08. With cipher parity, this new policy also provides an easy migration path to TLS 1.2-only from TLS 1.1 or TLS 1.0.
- 2019
- ALB now supports advanced request routing based on HTTP headers and methods, query parameters, and source IP addresses. You can now route your traffic based on multiple conditions, and each condition can match multiple values (see the rule sketch below).
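A minimal sketch of an ALB listener rule that forwards requests whose path starts with /api to a specific target group (the listener and target group ARNs are placeholders):

aws elbv2 create-rule \
  --listener-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/1234567890abcdef/1234567890abcdef \
  --priority 10 \
  --conditions Field=path-pattern,Values='/api/*' \
  --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/api-tg/1234567890abcdef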
NLB
- Extreme performance and static IP
- Targets: EC2 instances, containers, and private IP addresses
- Very high performance
- Optimized for volatile traffic patterns
- Long-lived TCP connections (WebSocket)
- One static IP per AZ
- Preserves the source IP address
- OSI layer 4
- The Network Load Balancer is the only product that assigns a static IP address per Availability Zone where it is deployed. You can use this static IP address to configure your client application. A DNS lookup of an Application or Classic Load Balancer name returns a list of load balancer nodes that are valid at that time; however, this list can change depending on the load on the system, so for Application and Classic Load Balancers you should always refer to them by name.
- The Network Load Balancer currently does not support Security Groups. You can enforce filtering in the Security Group of the EC2 instances. The Network Load Balancer forwards requests to EC2 instances with the source IP indicating the caller's source IP (the source IP is automatically provided by the NLB when EC2 instances are registered using the instance target type; if you register instances by IP address as the target type, you need to enable the proxy protocol to forward the source IP to the EC2 instances).
Both Application and Network Load Balancers allow you to add targets by IP address. You can use this capability to register instances located on-premises and in the VPC to the same load balancer. Do note that instances can be added only by private IP address, and the on-premises data center should have a VPN connection to the AWS VPC or a Direct Connect link to your AWS infrastructure.
Updates
- 2018
- Support for Inter-Region VPC Peering: resources located in different regions can communicate without going over the Internet.
- Network Load Balancers now support connections from clients to IP-based targets in peered VPCs across different AWS Regions. Previously, access to Network Load Balancers from an inter-region peered VPC was not possible. With this launch, you can now have clients access Network Load Balancers over an inter-region peered VPC. Network Load Balancers can also load balance to IP-based targets that are deployed in an inter-region peered VPC.
- https://aws.amazon.com/es/about-aws/whats-new/2018/10/network-load-balancer-now-supports-inter-region-vpc-peering/
- 2019
- TLS termination is now available on the NLB. This is great to offload the TLS overhead from the EC2 instances (see the listener sketch below).
- https://aws.amazon.com/blogs/aws/new-tls-termination-for-network-load-balancers/
- Network Load Balancer now supports UDP protocol.
- NLB now supports SNI. Multiple TLS certificates over the same NLB listener.
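A minimal sketch of an NLB TLS listener that terminates TLS with an ACM certificate and forwards to a TCP target group (all ARNs are placeholders):

aws elbv2 create-listener \
  --load-balancer-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/my-nlb/1234567890abcdef \
  --protocol TLS \
  --port 443 \
  --certificates CertificateArn=arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012 \
  --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/tcp-tg/1234567890abcdef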
Observations
Sticky Sessions: to implement the sticky session feature, you need two things: an HTTP/HTTPS load balancer and at least one healthy instance in each Availability Zone.
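A minimal sketch of enabling load-balancer-generated cookie stickiness on an ALB target group (the target group ARN and duration are placeholders):

aws elbv2 modify-target-group-attributes \
  --target-group-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web-tg/1234567890abcdef \
  --attributes Key=stickiness.enabled,Value=true Key=stickiness.type,Value=lb_cookie Key=stickiness.lb_cookie.duration_seconds,Value=86400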
TargetGroup HealthChecks
Metadata EC2
$ curl http://169.254.169.254/latest/meta-data/
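Appending a specific key returns just that attribute; for example, the instance ID:

$ curl http://169.254.169.254/latest/meta-data/instance-id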
Placement Group
- Recommended for applications that benefit from low network latency, high network throughput or both.
- A clustered placement group can't span multiple Availability Zones, so it is a single point of failure.
- The name you specify for a placement group must be unique within your AWS account.
- Spread Placement Groups can be deployed across Availability Zones since they spread the instances further apart.
- Spread placement groups have a specific limitation that you can only have a maximum of 7 running instances per Availability Zone.
- Clustered Placement Groups can only exist in one Availability Zone since they are focused on keeping instances together, which you cannot do across Availability Zones.
- A clustered placement group is a logical group of instances within a single Availability Zone with a 10 Gbps network between instances.
- Partition Placement Group
- Only certain types of instances can be launched in a placement group.
- AWS recommends homogeneous instances within the placement group.
- You can't merge placement groups.
- You can't move an existing instance into a placement group. You can create an AMI from your existing instance and then launch a new instance from that AMI into the placement group (see the sketch below).
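A minimal sketch of creating a cluster placement group and launching instances into it (AMI, group name and instance type are placeholders; instances must be launched into the group, they cannot be moved in afterwards):

aws ec2 create-placement-group --group-name hpc-cluster --strategy cluster
aws ec2 run-instances \
  --image-id ami-0abcdef1234567890 \
  --instance-type c5n.18xlarge \
  --count 4 \
  --placement GroupName=hpc-cluster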
EFS
Update
- 2018
- Amazon EFS now allows you to instantly provision the throughput required for your applications independent of the amount of data stored in your file system, allowing you to optimize throughput for your application's performance needs. You can use Amazon EFS for applications with a wide range of performance requirements. Until today, the amount of throughput an application could demand from Amazon EFS was based on the amount of data stored in the file system. This default Amazon EFS throughput bursting mode offers a simple experience that is suitable for a majority of applications. Now with Provisioned Throughput, applications with throughput requirements greater than those allowed by Amazon EFS's default throughput bursting mode can achieve the throughput levels required immediately and consistently, independent of the amount of data. You can quickly configure the throughput of your file system with a few simple steps using the AWS Console, AWS CLI, or AWS API.
- With Provisioned Throughput, you are billed separately for the amount of storage used and for the throughput provisioned beyond what you are entitled to through the default Amazon EFS Bursting Throughput mode.
- Provisioned Throughput up to 1 GB/s, even for small file systems
- Currently, the only instances that support EFS mounting across VPC peering are in the following families: T3, C5, C5d, I3.metal, M5, M5d, R5, R5d, z1d
- 2019
- Price reduction for Infrequent Access Storage: 44% reduction in storage prices for EFS IA
- Now available in all commercial AWS REGIONS
- 2020
- Now supports IAM to manage NFS access for EFS. You can use IAM roles to identify NFS clients with cryptographic security and use IAM policies to manage client-specific permissions.
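A hedged sketch of creating an EFS file system in Provisioned Throughput mode (the creation token and the 512 MiB/s figure are arbitrary example values):

aws efs create-file-system \
  --creation-token my-provisioned-fs \
  --throughput-mode provisioned \
  --provisioned-throughput-in-mibps 512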
FSx for Lustre
FSx for Lustre is a high-performance file system optimized for workloads such as machine learning, high-performance computing, video processing, financial modeling, and analytics.
Update
- 2020
- Moving data between FSx for Lustre and S3 enhancements: FSx has quadrupled the speed of launching FSx file systems that are linked to s3 buckets
Autoscaling
Type of requests
- A one-time Spot Instance request remains active until Amazon EC2 launches the Spot Instance, the request expires, or you cancel the request. If the Spot price exceeds your maximum price or capacity is not available, your Spot Instance is terminated and the Spot Instance request is closed.
- A persistent Spot Instance request remains active until it expires or you cancel it, even if the request is fulfilled. If the Spot price exceeds your maximum price or capacity is not available, your Spot Instance is interrupted. After your instance is interrupted, when your maximum price exceeds the Spot price or capacity becomes available again, the Spot Instance is started if stopped or resumed if hibernated. You can stop a Spot Instance and start it again if capacity is available and your maximum price exceeds the current Spot price. If the Spot Instance is terminated (irrespective of whether the Spot Instance is in a stopped or running state), the Spot Instance request is opened again and Amazon EC2 launches a new Spot Instance. For more information, see Stopping a Spot Instance, Starting a Spot Instance, and Terminating a Spot Instance.
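A minimal sketch of a persistent Spot request that stops (rather than terminates) the instance on interruption. The launch specification values are placeholders:

cat > spot-spec.json <<'EOF'
{ "ImageId": "ami-0abcdef1234567890", "InstanceType": "t3.micro", "SubnetId": "subnet-0abc1234" }
EOF
aws ec2 request-spot-instances \
  --instance-count 1 \
  --type persistent \
  --instance-interruption-behavior stop \
  --launch-specification file://spot-spec.json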
Spot Termination Cases
Example of Spot billing
- Price – The Spot price is greater than your maximum price.
- Capacity – If there are not enough unused EC2 instances to meet the demand for Spot Instances, Amazon EC2 interrupts Spot Instances. The order in which the instances are interrupted is determined by Amazon EC2.
- Constraints – If your request includes a constraint such as a launch group or an Availability Zone group, these Spot Instances are terminated as a group when the constraint can no longer be met.
The default cooldown for the Auto Scaling group is 300 seconds (5 minutes), so it takes about 5 minutes until you see the scaling activity.
New instances are launched before old ones are terminated; the group may momentarily exceed its maximum size by the greater of 10% or 1 instance.
Scaling Options
- Maintain current instance levels at all times
- You can configure your Auto Scaling group to maintain a specified number of running instances at all times.
- Manual scaling
- Scale based on a schedule
- Scaling by schedule means that scaling actions are performed automatically as a function of time and date.
- Scale based on demand
- A more advanced way to scale your resources, using scaling policies, lets you define parameters that control the scaling process.
- ChangeInCapacity: increase or decrease the current capacity of the group by the specified number of instances. A positive value increases the capacity and a negative adjustment value decreases the capacity. Example: if the current capacity of the group is 3 instances and the adjustment is 5, then when this policy is performed, 5 instances are added to the group for a total of 8 instances.
- ExactCapacity: change the current capacity of the group to the specified number of instances. Specify a positive value with this adjustment type. Example: if the current capacity of the group is 3 instances and the adjustment is 5, then when this policy is performed, the capacity is set to 5 instances.
- PercentChangeInCapacity: increment or decrement the current capacity of the group by the specified percentage. A positive value increases the capacity and a negative value decreases the capacity. If the resulting value is not an integer, it is rounded as follows:
- Values greater than 1 are rounded down. For example, 12.7 is rounded to 12.
- Values between 0 and 1 are rounded to 1. For example, .67 is rounded to 1.
- Values between 0 and -1 are rounded to -1. For example, -.58 is rounded to -1.
- Values less than -1 are rounded up. For example, -6.67 is rounded to -6.
- Example: if the current capacity is 10 instances and the adjustment is 10 percent, then when this policy is performed, 1 instance is added to the group for a total of 11 instances.
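A minimal sketch of a simple scaling policy using ChangeInCapacity (the group and policy names are placeholders). The command returns a policy ARN, which you would then reference as the alarm action of a CloudWatch alarm:

aws autoscaling put-scaling-policy \
  --auto-scaling-group-name web-asg \
  --policy-name add-two-on-high-cpu \
  --adjustment-type ChangeInCapacity \
  --scaling-adjustment 2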
Suspending autoscaling processes
If you suspend AZRebalance and a scale-out or scale-in event occurs, the scaling process still tries to balance the Availability Zones. For example, during scale-out, it launches the instance in the Availability Zone with the fewest instances.
If you suspend the Launch process, AZRebalance neither launches new instances nor terminates existing instances. This is because AZRebalance terminates instances only after launching the replacement instances.
If you suspend the Terminate process, your Auto Scaling group can grow up to ten percent larger than its maximum size, because this is allowed temporarily during rebalancing activities. If the scaling process cannot terminate instances, your Auto Scaling group could remain above its maximum size until you resume the Terminate process.
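A minimal sketch of suspending and later resuming one of these processes (the group name is a placeholder):

aws autoscaling suspend-processes --auto-scaling-group-name web-asg --scaling-processes AZRebalance
aws autoscaling resume-processes --auto-scaling-group-name web-asg --scaling-processes AZRebalance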
Default Termination policy
It selects the Availability Zone with the most instances and terminates the instance launched from the oldest launch configuration. If the instances were launched from the same launch configuration, the Auto Scaling group selects the instance that is closest to the next billing hour and terminates it.
EC2 Instance Connect
EC2 Instance Connect is a new service (2019) that lets you connect to your EC2 instances using SSH and centralize access control to your instances with AWS IAM policies. Connections can be recorded and audited with CloudTrail, it uses temporary SSH keys, and it is compatible with standard SSH clients and PuTTY.
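A minimal sketch of pushing a temporary public key with EC2 Instance Connect and then opening a normal SSH session (the instance ID, zone, key paths and hostname are placeholders; the pushed key is only valid for about 60 seconds):

aws ec2-instance-connect send-ssh-public-key \
  --instance-id i-0123456789abcdef0 \
  --availability-zone us-east-1a \
  --instance-os-user ec2-user \
  --ssh-public-key file://my_key.pub
ssh -i my_key ec2-user@ec2-203-0-113-25.compute-1.amazonaws.com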