AWS Solutions Architect Associate (SAA) 2018 – III

Continuing the series of posts dedicated to the AWS SAA certification. Topics covered:

EC2

Classes of EC2

  • OnDemand: pay a fixed rate per hour (or per second) with no commitment. Linux is billed by the second, Windows per hour.
  • Reserved: 1-year or 3-year terms. Bigger discounts.
  • Spot: cheaper than OnDemand; you bid the price you want for spare instance capacity.
  • Dedicated Hosts: physical EC2 servers dedicated to your use.

OnDemand

  • Users who want low cost and flexibility without a long-term commitment
  • Applications with short-term workloads that cannot be interrupted
  • Applications being developed or tested for the first time

Reserved

  • Predictable usage
  • Applications which require reserved capacity
  • Upfront payments to reduce total computing costs
  • Standard RIs: up to 75% off compared to OnDemand
  • Convertible RIs: up to 54% off compared to OnDemand
  • Scheduled RIs launch within the time window you reserve, matching your capacity to a predictable recurring schedule
  • Reserved Instances don’t get interrupted, unlike Spot Instances, in the event that there are not enough unused EC2 instances to meet demand
  • Through Scheduled Reserved Instances you can have capacity reservations that recur on a daily, weekly, or monthly basis, with a specified start time and duration, for a one-year term

Spot

  • You can prevent Spot Instances from being terminated by using Spot Block
  • Flexible start and end times
  • Applications only feasible at very low compute prices
  • If you terminate the instance, you pay for the hour
  • If AWS terminates the instance, you get the hour in which it was terminated for free
  • Spot Fleet
    • attempts to launch the number of Spot Instances and On-Demand Instances needed to meet the target capacity you specified
    • The fleet needs
      • available capacity
      • the maximum price you specified to exceed the current Spot price (if your request’s maximum price is higher than the current Spot price, Amazon EC2 fulfills it immediately, provided capacity is available)
    • The fleet will try to maintain your target capacity when instances are interrupted
      • Set up different launch pools. Define things like EC2 instance type, operating system, and Availability Zone.
      • You can have multiple pools, and the fleet will try to draw from them depending on the strategy you decide.
      • Spot Fleets stop launching EC2 instances once you reach your price threshold or desired capacity.

Spot Fleet strategies

  • capacity optimized
    • The Spot Instances come from the pools with optimal capacity for the number of instances that are launching. You can optionally set a priority for each instance type in your fleet using capacityOptimizedPrioritized. Spot Fleet optimizes for capacity first but honors instance type priorities on a best-effort basis.
  • diversified
    • The Spot Instances are distributed across all pools.
  • lowestPrice
    • The Spot Instances come from the pool with the lowest price. This is the default strategy.
  • InstancePoolsToUseCount
    • The Spot Instances are distributed across the number of Spot pools that you specify. This parameter is valid only when used in combination with lowestPrice.
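To make the strategies concrete, here is a minimal sketch (not the AWS implementation; the pool data and function name are hypothetical) of how lowestPrice combined with InstancePoolsToUseCount, or diversified, might choose pools:

```python
# Hypothetical sketch of Spot Fleet pool selection. Pool names and prices
# are made up for illustration.
def pick_pools(pools, strategy="lowestPrice", pools_to_use=1):
    """pools: list of dicts with 'name' and 'spot_price'."""
    if strategy == "lowestPrice":
        # Distribute across the N cheapest pools (InstancePoolsToUseCount).
        ranked = sorted(pools, key=lambda p: p["spot_price"])
        return [p["name"] for p in ranked[:pools_to_use]]
    if strategy == "diversified":
        # Spread across all pools.
        return [p["name"] for p in pools]
    raise ValueError("unknown strategy")

pools = [
    {"name": "us-east-1a/m5.large", "spot_price": 0.035},
    {"name": "us-east-1b/m5.large", "spot_price": 0.031},
    {"name": "us-east-1c/m5.large", "spot_price": 0.040},
]
print(pick_pools(pools))                  # cheapest pool only
print(pick_pools(pools, pools_to_use=2))  # two cheapest pools
```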

Dedicated Hosts

For workloads whose licensing or compliance needs do not support virtualization.

Types of EC2

  • D Density Storage
  • R Memory Optimized (Memory Intensive applications)
  • T  CHEAP (Webservers, small databases)
  • M Main choice for general purposes
  • C Compute Optimized
  • G Graphics (Video Encoding)
  • F FPGA
  • I High-Speed Storage (No SQL, DataWarehouse, etc)
  • P Graphics/General Purpose
  • X Extreme memory (SAP Hana, Apache Spark, etc)
  • Z1d High compute and high memory footprint (EDA/database workloads with high per-core licensing costs)
  • Habana Gaudi-based instance types (training deep learning models)
  • H High disk throughput (MapReduce-based workloads)
  • u-6tb1 Bare metal (eliminates virtualization overhead)
  • A1 ARM workloads

BareMetal Instances

Bare metal instances allow EC2 customers to run applications that benefit from deep performance analysis tools, specialized workloads that require direct access to bare metal infrastructure, legacy workloads not supported in virtual environments, and licensing-restricted Tier 1 business-critical applications.

ARM-Based Ec2

https://aws.amazon.com/es/blogs/aws/new-ec2-instances-a1-powered-by-arm-based-aws-graviton-processors/

EC2 Status Checks

Status checks are performed every minute and each returns a pass or a fail status. If all checks pass, the overall status of the instance is OK. If one or more checks fail, the overall status is impaired.
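The pass/fail aggregation described above is simple enough to sketch (an illustrative helper, not an AWS API):

```python
def overall_status(checks):
    """Each status check returns 'passed' or 'failed'.
    The overall status is 'ok' only if every check passes;
    if one or more checks fail, the instance is 'impaired'."""
    return "ok" if all(c == "passed" for c in checks) else "impaired"

print(overall_status(["passed", "passed"]))  # ok
print(overall_status(["passed", "failed"]))  # impaired
```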

System Status Checks

Monitor the AWS systems on which your instance runs. These checks detect underlying problems with your instance that require AWS involvement to repair. When a system status check fails, you can choose to wait for AWS to fix the issue, or you can resolve it yourself. For instances backed by Amazon EBS, you can stop and start the instance yourself, which in most cases migrates it to a new host. For instances backed by instance store, you can terminate and replace the instance. The following are examples of problems that can cause system status checks to fail:

  • Loss of network connectivity
  • Loss of system power
  • Software issues on the physical host
  • Hardware issues on the physical host that impact network reachability

Instance Status Checks

Monitor the software and network configuration of your individual instance. Amazon EC2 checks the health of the instance by sending an address resolution protocol (ARP) request to the ENI. These checks detect problems that require your involvement to repair. When an instance status check fails, typically you will need to address the problem yourself (for example, by rebooting the instance or by making instance configuration changes).

The following are examples of problems that can cause instance status checks to fail:

  • Failed system status checks
  • Incorrect networking or startup configuration
  • Exhausted memory
  • Corrupted file system
  • Incompatible kernel

EC2 Hibernate

  • Preserves the in-memory state (RAM) on persistent storage (EBS)
  • Much faster boot, because the OS does not need to be reloaded
  • RAM must be less than 150 GB
  • Supported families: C3, C4, M3–M5, R3–R5
  • Windows, Amazon Linux 2 AMI, and Ubuntu
  • Can’t be hibernated for more than 60 days
  • Available for On-Demand and Reserved instances

Alarm Actions in case of System Status Check Failed

When the StatusCheckFailed_System alarm is triggered and the recovery action is initiated, you are notified by the Amazon SNS topic that you chose when you created the alarm and associated the recovery action. During instance recovery, the instance is migrated during an instance reboot, and any data that is in memory is lost. When the process is complete, information is published to the SNS topic you’ve configured for the alarm. Anyone who is subscribed to this SNS topic receives an email notification that includes the status of the recovery attempt and any further instructions. You will notice a reboot on the recovered instance.

The recovery action can be used only with StatusCheckFailed_System, not with StatusCheckFailed_Instance.

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/UsingAlarmActions.html#AddingRecoverActions

AMI

Copy AMI image to regions

Imagine the EC2 instances you are currently using depend on a pre-built AMI. This AMI is not accessible from another region; hence, you have to copy it to the us-west-2 region to properly establish your disaster-recovery instance.

You can copy an Amazon Machine Image (AMI) within or across an AWS region using the AWS Management Console, the AWS command-line tools or SDKs, or the Amazon EC2 API, all of which support the CopyImage action. You can copy both Amazon EBS-backed AMIs and instance store-backed AMIs. You can copy encrypted AMIs and AMIs with encrypted snapshots.

 Remember that an AMI is a regional resource and if you need to use this in another region, you have to make a copy of it.

EBS

  • storage volumes
  • automatically replicated within their AZ to protect you from component failure
  • you cannot attach one EBS volume to multiple instances; use EFS instead
  • you cannot encrypt the EBS root volumes of your default AMIs

Instance Store vs EBS-backed instances

  • Fewer instance families offer Instance Store.
  • The critical difference: you cannot stop and start an EC2 instance with an Instance Store. If there is a hypervisor issue, with EBS we can stop and start the instance; with Instance Store we cannot, and that instance is lost.
  • This is why it is called Ephemeral Storage.
  • Instance Store volumes are not shown in the Volumes section; you cannot manage them there.

Backup EBS tips

  • Snapshots are stored in S3.
  • Snapshots are point-in-time copies of volumes.
  • Snapshots are incremental.
  • It is recommended to stop the EC2 instance before taking a snapshot of its EBS root volume.
  • You can NOW change EBS volumes on the fly, including size and storage type.
  • Volumes are always in the same Availability Zone as the EC2 instance.
  • To move a volume to another AZ or region, take a snapshot or image and restore it in the new location.
  • Snapshots of encrypted volumes are encrypted automatically.
  • You can share snapshots with others only if they are unencrypted.

Snapshots

  • Snapshots of encrypted volumes are encrypted automatically
  • Volumes restored from encrypted volumes are encrypted automatically
  • You can share snapshots, but they must be unencrypted
  • You can now encrypt root device volumes upon creation of the EC2 instance

EBS RAID TIPS

  • RAID 5 is not recommended by AWS
  • Better to use Stripe Volume = RAID0
  • How do you take an application-consistent snapshot of a multi-volume EBS array?
    • freeze the file system, or
    • unmount the array, or
    • shut down the EC2 instance

Data Lifecycle Manager (DLM)

With Amazon Data Lifecycle Manager, you can manage the lifecycle of your AWS resources. You create lifecycle policies, which are used to automate operations on the specified resources. Amazon DLM supports:

  • Amazon EBS volumes
  • snapshots
  • AMIs
    •   select target instances for AMI creation, set schedules for creation and retention, copy newly created AMIs to other regions, share them with other AWS accounts, and even add tags to the AMI and the snapshots. On the deletion side, you can choose to simply deregister the AMI or to delete the associated snapshots as well.

Termination protection is turned off by default

When an instance terminates, Amazon EC2 uses the value of the DeleteOnTermination attribute for each attached Amazon EBS volume to determine whether to preserve or delete the volume.

By default, when you attach an EBS volume to an instance, its DeleteOnTermination attribute is set to false. Therefore, the default is to preserve these volumes. You must delete a volume to avoid incurring further charges. For more information, see Deleting an Amazon EBS Volume. After the instance terminates, you can take a snapshot of the preserved volume or attach it to another instance.

To verify the value of the DeleteOnTermination attribute for an EBS volume that is in use, look at the instance’s block device mapping. For more information, see Viewing the EBS Volumes in an Instance Block Device Mapping.

You can change the value of the DeleteOnTermination attribute for a volume when you launch the instance or while the instance is running.

By default, the DeleteOnTermination attribute for the root volume of an instance is set to true. Therefore, the default is to delete the root volume of an instance when the instance terminates.
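As an illustrative sketch of the default behavior above (the field names here are hypothetical, not the EC2 API), only volumes whose DeleteOnTermination flag is false survive instance termination:

```python
def surviving_volumes(mappings):
    """Given simplified block device mappings, return the devices that are
    preserved when the instance terminates (DeleteOnTermination == False)."""
    return [m["device"] for m in mappings if not m["delete_on_termination"]]

mappings = [
    {"device": "/dev/xvda", "delete_on_termination": True},   # root: deleted
    {"device": "/dev/sdf",  "delete_on_termination": False},  # data: preserved
]
print(surviving_volumes(mappings))  # ['/dev/sdf']
```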

EBS types

GP2 General Purpose SSD (bootable), up to 10K IOPS

  • 1 GiB – 16 TiB
  • 3 IOPS per GiB
  • up to 10K IOPS
  • bursts up to 3K IOPS

IO1 Provisioned IOPS SSD (bootable), more than 10K IOPS

  • 4 GiB – 16 TiB
  • I/O-intensive applications
  • use it if you need more than 10K IOPS

Magnetic Standard (bootable)

  • Lowest cost per gigabyte of all EBS volume types that is bootable

Throughput Optimized HDD (ST1)

  • 500 GiB – 16 TiB
  • big data, data warehouses, log processing
  • since files are read in whole, HDD-based storage offers very high sequential read throughput
  • cannot be a boot volume

Cold HDD (SC1)

  • 500 GiB – 16 TiB
  • lowest-cost storage for infrequent-access workloads
  • cannot be a boot volume
    • Cold HDD volumes provide low-cost magnetic storage that defines performance in terms of throughput rather than IOPS. With a lower throughput limit than Throughput Optimized HDD, this is a good fit for large, sequential cold-data workloads. If you require infrequent access to your data and want to save costs, Cold HDD provides inexpensive block storage. Take note that bootable Cold HDD volumes are not supported.

General Purpose Base Ratio IOPS

GP2 has a baseline of 3 IOPS per GiB of volume size.

  • Maximum volume size: 16 TiB
  • Maximum of 10,000 IOPS

For example, if we have a volume of 100 GiB we will obtain 300 IOPS (100 × 3). We could also burst up to 3,000 IOPS using I/O credits; in this scenario the burst headroom is 2,700 IOPS (3,000 − 300). Every volume receives an initial I/O credit balance of 5,400,000 credits, which is equal to sustaining a burst of 3,000 IOPS for 30 minutes. Whenever you are below your performance baseline, you earn new credits.
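The arithmetic above can be checked with a short sketch (the function names are mine, not an AWS API; limits are the 2018-era values used in this post):

```python
def gp2_baseline_iops(size_gib):
    """GP2 baseline: 3 IOPS per GiB, capped at 10,000 (2018-era limit)."""
    return min(size_gib * 3, 10_000)

def burst_headroom(size_gib, burst_iops=3_000):
    """Extra IOPS available when bursting above the baseline."""
    return max(burst_iops - gp2_baseline_iops(size_gib), 0)

CREDITS = 5_400_000  # initial I/O credit balance of every volume

print(gp2_baseline_iops(100))  # 300 IOPS for a 100 GiB volume
print(burst_headroom(100))     # 2700 IOPS of burst headroom
print(CREDITS / 3_000 / 60)    # 30.0 minutes of full 3000-IOPS burst
```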

Maximum Ratio Size: IOPS

50:1 is the maximum ratio of provisioned IOPS to requested volume size in gibibytes (GiB).

So, for instance, a 10 GiB volume can be provisioned with up to 500 IOPS. Any volume 640 GiB in size or greater allows provisioning up to the 32,000 IOPS maximum (50 × 640 = 32,000).
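The same ratio as a one-liner sketch (function name is mine; the 32,000 cap is the limit quoted above):

```python
def max_provisioned_iops(size_gib, ratio=50, cap=32_000):
    """Provisioned IOPS: up to 50 IOPS per GiB, capped at 32,000."""
    return min(size_gib * ratio, cap)

print(max_provisioned_iops(10))    # 500
print(max_provisioned_iops(640))   # 32000 (hits the cap exactly)
print(max_provisioned_iops(1000))  # still 32000 (capped)
```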

Resize EBS volumes

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-modify-volume.html?icmpid=docs_ec2_console


Security Groups

  • all inbound traffic is blocked by default
  • all outbound traffic is allowed
  • changes apply immediately
  • Stateful
    • If you allow traffic in, that traffic is automatically allowed back out again.
    • For VPC security groups, this means that responses to allowed inbound traffic are allowed to flow out, regardless of outbound rules.
  • Cannot block specific IP address with security groups, instead use NACL
  • you can specify allow rules, but not deny rules.

LOAD BALANCERS

  • ALB (Application Layer, HTTP/HTTPS) – 2016
  • Classic Load Balancer
  • NLB

Classic Load Balancer

  • Cross Zone Load Balancing disabled by default
    • Each load balancer node for your Classic Load Balancer distributes requests evenly across the registered instances in all enabled Availability Zones. If cross-zone load balancing is disabled, each load balancer node distributes requests evenly across the registered instances in its Availability Zone only.
  • Recommended for the EC2-Classic network
  • TLS termination is supported only by Classic and Application Load Balancers
  • Targets EC2 instances
  • Works on EC2-Classic and VPC
  • HTTP
  • HTTPS
  • TCP
  • SSL
  • SSL offloading (TLS termination)
  • Sticky sessions
  • OSI layers 4 and 7

Classic Load Balancer does not support Server Name Indication (SNI). You have to use an Application Load Balancer or a CloudFront web distribution instead to get the SNI feature.

Application Load Balancer

  • Cross-zone load balancing enabled by default
  • Targets EC2 instances, containers, and private IP addresses
  • Content-based routing
    • path
    • host
  • Load balances across different ports on an EC2 instance
  • Sticky sessions
  • Supports HTTP, HTTPS, HTTP/2, and WebSocket
  • OSI layer 7
  • Flexible application management and TLS termination
  • TLS termination was originally supported only by Classic and Application Load Balancers
    • Since 2019, TLS termination is also available on NLB. This is great to offload the TLS overhead from the EC2 instances.

NLB

  • Extreme performance and static IP
  • Targets EC2 instances, containers, and private IP addresses
  • Very high performance
  • Optimized for volatile traffic patterns
  • Long-lived TCP connections (WebSocket)
  • One static IP per AZ
  • Preserves the source IP address
  • OSI layer 4
    • Network Load Balancer is the only product that assigns a static IP address per Availability Zone where it is deployed. You can use this static IP address to configure your client application. A DNS lookup of an Application or Classic Load Balancer name returns a list of load balancer nodes that are valid at that time; however, this list can change depending on the load on the system. So, for Application and Classic Load Balancers, you should always refer to them by name.
  • Network Load Balancer does not currently support security groups. You can enforce filtering in the security groups of the EC2 instances. Network Load Balancer forwards requests to EC2 instances with the source IP indicating the caller’s source IP. (The source IP is automatically provided by NLB when EC2 instances are registered using the instance target type. If you register instances using IP address as the target type, you need to enable the proxy protocol to forward the source IP to the EC2 instances.)

Both Application and Network Load Balancers allow you to add targets by IP address. You can use this capability to register instances located on-premises and VPC to the same load balancer. Do note that instances can be added only using private IP address and on-premises data center should have a VPN connection to AWS VPC or a Direct Connect link to your AWS infrastructure

Updates

  • 2018
    • Support for inter-region VPC peering: resources located in different regions can communicate without the traffic going over the Internet.
      • Network Load Balancers now support connections from clients to IP-based targets in peered VPCs across different AWS Regions. Previously, access to Network Load Balancers from an inter-region peered VPC was not possible. With this launch, you can now have clients access Network Load Balancers over an inter-region peered VPC. Network Load Balancers can also load balance to IP-based targets that are deployed in an inter-region peered VPC.
      • https://aws.amazon.com/es/about-aws/whats-new/2018/10/network-load-balancer-now-supports-inter-region-vpc-peering/
  • 2019
    • Now termination TLS is enabled in NLB. This is great to offload the TLS overhead from the EC2.
      • https://aws.amazon.com/blogs/aws/new-tls-termination-for-network-load-balancers/
    • Network Load Balancer now supports UDP protocol.
    • NLB now supports SNI. Multiple TLS certificates over the same NLB listener.

TargetGroup HealthChecks

The approximate amount of time, in seconds, between the health checks of an individual target. For HTTP and HTTPS health checks, the range is 5–300 seconds. For TCP health checks, the supported values are 10 and 30 seconds. If the target type is instance or IP, the default is 30 seconds. If the target type is lambda, the default is 35 seconds.

Metadata EC2

$ curl http://169.254.169.254/latest/meta-data/

Placement Group

  • Recommended for applications that benefit from low network latency, high network throughput or both.
  • A clustered placement group can’t span multiple Availability Zones, making it a single point of failure. The name you specify for a PG must be unique within your AWS account.
    • Spread Placement Groups can be deployed across Availability Zones, since they spread the instances further apart.
      • Spread placement groups have a specific limitation: a maximum of 7 running instances per Availability Zone.
    • Clustered Placement Groups can only exist in one Availability Zone, since they are focused on keeping instances together, which you cannot do across Availability Zones.
      • A logical group of instances within a single Availability Zone. 10 Gbps network.
    • Partition Placement Group
  • Only certain instance types can be launched in a placement group.
  • AWS recommends homogeneous instances within the placement group.
  • You can’t merge PGs.
  • You can’t move an existing instance into a PG. You can create an AMI from your existing instance and then launch a new instance from the AMI into a PG.

EFS

  • NFSv4 protocol
  • Only pay for the storage you use
  • Scales up to petabytes
  • Thousands of concurrent NFS connections
  • Stored across multiple AZs within a region
  • Read-after-write consistency (EFS is file-level storage accessed over NFS)

Update

  • 2018
    • Amazon EFS now allows you to instantly provision the throughput required for your applications independent of the amount of data stored in your file system, allowing you to optimize throughput for your application’s performance needs. You can use Amazon EFS for applications with a wide range of performance requirements. Until today, the amount of throughput an application could demand from Amazon EFS was based on the amount of data stored in the file system. This default Amazon EFS throughput bursting mode offers a simple experience that is suitable for a majority of applications. Now with Provisioned Throughput, applications with throughput requirements greater than those allowed by Amazon EFS’s default throughput bursting mode can achieve the throughput levels required immediately and consistently independent of the amount of data. You can quickly configure throughput of your file system with a few simple steps using the AWS Console, AWS CLI or AWS API.
      • With Provisioned Throughput, you will be billed separately for the amount of storage used and for the throughput provisioned beyond what you are entitled to through the default Amazon EFS Bursting Throughput mode.
    • Provisioned Throughput up to 1 GB/s, even for small filesystems
    • Currently, the only instance families that support EFS mounting across VPC peering are: T3, C5, C5d, I3.metal, M5, M5d, R5, R5d, and z1d.
  • 2019
    • Price reduction for Infrequent Access Storage: 44% reduction in storage prices for EFS IA
    • Now available in all commercial AWS REGIONS
  • 2020
    • Now supports IAM to manage NFS access for EFS. You can use IAM roles to identify NFS clients with cryptographic security and IAM policies to manage client-specific permissions.

FSx for Lustre

FSx for Lustre is a high-performance file system optimized for workloads such as machine learning, high-performance computing, video processing, financial modeling, and analytics.

Update

  • 2020
    • Enhancements to moving data between FSx for Lustre and S3: FSx has quadrupled the speed of launching FSx file systems that are linked to S3 buckets.

Autoscaling

Type of requests

  • A one-time Spot Instance request remains active until Amazon EC2 launches the Spot Instance, the request expires, or you cancel the request. If the Spot price exceeds your maximum price or capacity is not available, your Spot Instance is terminated and the Spot Instance request is closed.
  • A persistent Spot Instance request remains active until it expires or you cancel it, even if the request is fulfilled. If the Spot price exceeds your maximum price or capacity is not available, your Spot Instance is interrupted. After your instance is interrupted, when your maximum price exceeds the Spot price or capacity becomes available again, the Spot Instance is started if stopped or resumed if hibernated. You can stop a Spot Instance and start it again if capacity is available and your maximum price exceeds the current Spot price. If the Spot Instance is terminated (irrespective of whether the Spot Instance is in a stopped or running state), the Spot Instance request is opened again and Amazon EC2 launches a new Spot Instance. For more information, see Stopping a Spot Instance, Starting a Spot Instance, and Terminating a Spot Instance.

Spots Termination CASE

If your Spot instance is terminated or stopped by Amazon EC2 in the first instance hour, you will not be charged for that usage. However, if you terminate the instance yourself, you will be charged to the nearest second.
If the Spot instance is terminated or stopped by Amazon EC2 in any subsequent hour, you will be charged for your usage to the nearest second. If you are running on Windows and you terminate the instance yourself, you will be charged for an entire hour.

Example of Spot billing

If a Spot Instance has been running for more than an hour, which is past the first instance hour, you will be charged from the time it was launched until the time it was terminated by AWS. The computation for a 90-minute usage at $0.04/hour would be $0.04 (60 minutes) + $0.02 (30 minutes) = $0.06.
Remember that AWS automatically terminates the instance when the Spot price exceeds your maximum price. In a different scenario, if the price increases after 40 minutes (within the first instance hour), the EC2 instance is terminated by AWS and you are not charged. The following are the possible reasons why Amazon EC2 will interrupt your Spot Instances:
  • Price – The Spot price is greater than your maximum price.
  • Capacity – If there are not enough unused EC2 instances to meet the demand for Spot Instances, Amazon EC2 interrupts Spot Instances. The order in which the instances are interrupted is determined by Amazon EC2.
  • Constraints – If your request includes a constraint such as a launch group or an Availability Zone group, these Spot Instances are terminated as a group when the constraint can no longer be met.
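The billing rules above can be sketched in a few lines (a simplified model of the 2018-era Linux per-second Spot billing described in this post, not official AWS pricing code):

```python
def spot_charge(minutes_run, price_per_hour, terminated_by="aws"):
    """Simplified Spot billing sketch (Linux, per-second billing).
    If AWS reclaims the instance within the first hour, usage is free;
    otherwise you pay for usage to the nearest second."""
    if terminated_by == "aws" and minutes_run < 60:
        return 0.0
    return round(price_per_hour * minutes_run / 60, 4)

# 90 minutes at $0.04/hour, interrupted by AWS after the first hour:
print(spot_charge(90, 0.04))  # 0.06
# AWS reclaims the instance after 40 minutes (inside the first hour):
print(spot_charge(40, 0.04))  # 0.0
```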
The default cooldown for the Auto Scaling group is 300 seconds (5 minutes), so it takes about 5 minutes until you see the scaling activity.
New instances are launched before terminating old ones. The group may momentarily exceed its maximum size by the greater of 10% or 1 instance.

Scaling Options

  • Maintain current instance levels at all times
    • You can configure your Auto Scaling group to maintain a specified number of running instances at all times.
  • Manual scaling
  • Scale based on a schedule
    • Scaling by schedule means that scaling actions are performed automatically as a function of time and date.
  • Scale based on demand
    • A more advanced way to scale your resources, using scaling policies, lets you define parameters that control the scaling process.
Amazon EC2 Auto Scaling supports the following adjustment types for step scaling and simple scaling:
  • ChangeInCapacity—Increase or decrease the current capacity of the group by the specified number of instances. A positive value increases the capacity and a negative adjustment value decreases the capacity.
    Example: If the current capacity of the group is 3 instances and the adjustment is 5, then when this policy is performed, there are 5 instances added to the group for a total of 8 instances.
  • ExactCapacity—Change the current capacity of the group to the specified number of instances. Specify a positive value with this adjustment type.
    Example: If the current capacity of the group is 3 instances and the adjustment is 5, then when this policy is performed, the capacity is set to 5 instances.
  • PercentChangeInCapacity—Increment or decrement the current capacity of the group by the specified percentage. A positive value increases the capacity and a negative value decreases the capacity. If the resulting value is not an integer, it is rounded as follows:
    • Values greater than 1 are rounded down. For example, 12.7 is rounded to 12.
    • Values between 0 and 1 are rounded to 1. For example, .67 is rounded to 1.
    • Values between 0 and -1 are rounded to -1. For example, -.58 is rounded to -1.
    • Values less than -1 are rounded up. For example, -6.67 is rounded to -6.
    Example: If the current capacity is 10 instances and the adjustment is 10 percent, then when this policy is performed, 1 instance is added to the group for a total of 11 instances.
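The PercentChangeInCapacity rounding rules listed above translate directly into code (an illustrative sketch; the function name is mine):

```python
import math

def percent_change(current, percent):
    """Apply a PercentChangeInCapacity adjustment with the rounding
    rules described above."""
    delta = current * percent / 100
    if delta > 1:
        delta = math.floor(delta)   # e.g. 12.7 -> 12
    elif 0 < delta < 1:
        delta = 1                   # e.g. 0.67 -> 1
    elif -1 < delta < 0:
        delta = -1                  # e.g. -0.58 -> -1
    elif delta < -1:
        delta = math.ceil(delta)    # e.g. -6.67 -> -6
    return current + int(delta)

print(percent_change(10, 10))    # 11 (10% of 10 is exactly 1 instance)
print(percent_change(100, 12.7)) # 112 (12.7 rounded down to 12)
```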

Suspending autoscaling processes

If you suspend AZRebalance and a scale-out or scale-in event occurs, the scaling process still tries to balance the Availability Zones. For example, during scale-out, it launches the instance in the Availability Zone with the fewest instances. If you suspend the Launch process, AZRebalance neither launches new instances nor terminates existing instances. This is because AZRebalance terminates instances only after launching the replacement instances. If you suspend the Terminate process, your Auto Scaling group can grow up to ten percent larger than its maximum size, because this is allowed temporarily during rebalancing activities. If the scaling process cannot terminate instances, your Auto Scaling group could remain above its maximum size until you resume the Terminate process.

Default Termination policy

It selects the Availability Zone with the most instances and terminates the instance launched from the oldest launch configuration. If the instances were launched from the same launch configuration, the Auto Scaling group selects the instance that is closest to the next billing hour and terminates it.
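Within the selected Availability Zone, the tie-breaking can be sketched like this (an illustrative model only; the field names are hypothetical, and smaller `mins_to_next_billing_hour` means closer to the next billing hour):

```python
def choose_instance_to_terminate(instances):
    """Sketch of the default termination policy within one AZ:
    prefer instances from the oldest launch configuration, then the
    instance closest to its next billing hour."""
    oldest = min(i["lc_created"] for i in instances)  # oldest launch config
    candidates = [i for i in instances if i["lc_created"] == oldest]
    return min(candidates, key=lambda i: i["mins_to_next_billing_hour"])["id"]

instances = [
    {"id": "i-a", "lc_created": 2017, "mins_to_next_billing_hour": 40},
    {"id": "i-b", "lc_created": 2016, "mins_to_next_billing_hour": 25},
    {"id": "i-c", "lc_created": 2016, "mins_to_next_billing_hour": 50},
]
print(choose_instance_to_terminate(instances))  # i-b
```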

EC2 Instance Connect

EC2 Instance Connect is a brand-new service (2019) that enables you to connect to EC2 instances using SSH and to centralize access control to your instances using AWS IAM policies. Connections can be recorded and audited with CloudTrail, temporary SSH keys are supported, and it is compatible with SSH and PuTTY.
