AWS Solutions Architect Associate (SAA) 2018 – II

We continue with the series of posts dedicated to the AWS SAA certification.

CloudFront

  • Edge Location: the location where content is cached; it can also be written to. It is separate from an AWS Region/AZ — in fact, there are more Edge Locations than AWS Regions. Edge Locations cache files from the origin, which speeds up delivery of videos, images, etc. Also used for S3 Transfer Acceleration.
  • Origin: can be an S3 bucket, an EC2 instance, an ELB, or Route 53
  • Distribution: the name given to our CDN
  • Web Distribution: typically used for websites
  • RTMP: used for media streaming
Objects are cached for the life of the TTL (in seconds; 86,400 seconds, i.e. 24 hours, by default).
You can clear cached objects (invalidation), but you will be charged.
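The TTL behavior described above can be illustrated with a minimal cache sketch — a simplified model of an edge cache, not CloudFront's actual implementation:

```python
import time

DEFAULT_TTL = 86400  # CloudFront's default TTL: 24 hours, in seconds


class TTLCache:
    """Toy edge cache: objects expire once their TTL elapses."""

    def __init__(self, ttl=DEFAULT_TTL, clock=time.monotonic):
        self.ttl = ttl
        self.clock = clock          # injectable clock, handy for testing
        self._store = {}            # key -> (value, cached_at)

    def put(self, key, value):
        self._store[key] = (value, self.clock())

    def get(self, key):
        """Return the cached object, or None on a miss or expiry."""
        entry = self._store.get(key)
        if entry is None:
            return None
        value, cached_at = entry
        if self.clock() - cached_at > self.ttl:
            del self._store[key]    # expired: next request goes to the origin
            return None
        return value

    def invalidate(self, key):
        """Explicitly clear a cached object (CloudFront charges for this)."""
        self._store.pop(key, None)
```

Serving `get()` hits locally while falling back to the origin on expiry is the essence of why Edge Locations speed up delivery.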

Signed URLs vs Signed Cookies

  • A signed url is for individual files
    • 1 url = 1 file
  • A signed cookie is for multiple files
    • 1 cookie = multiple files

We will need a policy that can contain:

  • URL expiration
  • IP ranges
  • Trusted signers

CloudFront Signed URL features

  • Can have different origins.
  • Can utilize caching features.
  • Can have filters: IP, path, address, etc
  • The key pair used for signing is created and managed by the root user

S3 Signed URL features

  • Issues a request as the IAM user who created the presigned URL
  • Limited lifetime
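The idea behind a presigned URL — an expiry timestamp plus a signature over the resource — can be sketched with Python's standard library. This is a simplified illustration of the concept, not AWS's actual Signature Version 4 algorithm (in practice you would call boto3's `generate_presigned_url`); the bucket name and secret key are hypothetical:

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

SECRET_KEY = b"hypothetical-secret-key"  # stands in for AWS credentials


def presign(resource, expires_in, now=None):
    """Build a URL that is only valid until its expiry (simplified sketch)."""
    now = time.time() if now is None else now
    expires = int(now + expires_in)
    payload = f"{resource}:{expires}".encode()
    signature = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    query = urlencode({"Expires": expires, "Signature": signature})
    return f"https://example-bucket.s3.amazonaws.com/{resource}?{query}"


def verify(resource, expires, signature, now=None):
    """Server side: recompute the HMAC and check the expiry."""
    now = time.time() if now is None else now
    if now > expires:
        return False  # link has expired
    payload = f"{resource}:{expires}".encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```

Because the expiry is part of the signed payload, tampering with either the filename or the `Expires` parameter invalidates the signature — which is why one signed URL maps to exactly one file.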


Storage Gateway

  • Supports Microsoft Hyper-V or VMware ESXi
  • All data transferred between any type of gateway appliance and AWS storage is encrypted using SSL.
  • By default, all data stored by AWS Storage Gateway in S3 is encrypted server-side with Amazon S3-Managed Encryption Keys (SSE-S3).
  • Also when using the file gateway, you can optionally configure each file share to have your objects encrypted with AWS KMS-Managed Keys using SSE-KMS.


Modes available

  • File Gateway (NFS): flat files stored in S3 — PDFs, pictures, videos, etc.
  • Volume Gateway (iSCSI): virtual hard disks
    • Stored Volumes: an entire copy of the dataset is stored on-site and is asynchronously backed up to S3.
    • Cached Volumes: the entire dataset is stored on S3 and the most frequently accessed data is cached on-site.
  • Gateway Virtual Tape Library (VTL): backup solution — create virtual tapes and send them to S3.

File Gateway (NFS)

Files are stored as objects in S3 buckets and accessed through an NFS mount point.
Ownership, permissions, and timestamps are durably stored in S3.
Once files are transferred to S3, they can be managed as native S3 objects, with features such as versioning, lifecycle management, cross-region replication, etc.

Volume Gateway (iSCSI)

Virtual hard disks are presented on-premises via iSCSI and backed up to S3. Data written to these volumes can be asynchronously backed up as point-in-time snapshots of your volumes and stored as EBS snapshots. Snapshots are incremental.
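Incremental snapshots can be illustrated with a small sketch: after the first full snapshot, each subsequent snapshot records only the blocks that changed. This is a conceptual model, not how EBS is actually implemented:

```python
def take_snapshot(volume, previous=None):
    """Record only blocks that differ from the previous snapshot.

    `volume` is a dict of block_id -> block contents; `previous` is the
    last snapshot's full view of the volume (None for the first snapshot).
    Returns (changed_blocks, full_view).
    """
    if previous is None:
        changed = dict(volume)           # first snapshot: full copy
    else:
        changed = {blk: data for blk, data in volume.items()
                   if previous.get(blk) != data}
    full_view = dict(volume)             # what a restore would reproduce
    return changed, full_view
```

Only the `changed` portion needs to cross the network each time, which is why incremental snapshots keep backup traffic small even for large volumes.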

Volume Gateway: Stored Volumes

Let you store your primary data locally while asynchronously backing up that data to AWS.

Low-latency access to the entire dataset, with durable backups. Data written to your stored volumes is kept on your on-premises storage hardware.

This data is backed up to AWS S3 in the form of EBS Snapshots.

The entire dataset is stored on-site (on-premises) and is asynchronously backed up to S3 in the form of EBS snapshots.

Volume Gateway: Cached Volumes

Lets you use S3 as your primary data storage while retaining frequently accessed data locally in your storage gateway.
The most frequently accessed data is kept on your on-premises storage hardware.
Old data is in S3.
Cached volumes offer substantial cost savings on primary storage and minimize the need to scale your storage on-premises.

The entire dataset is stored on S3 and the most frequently accessed data is cached on-site.
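This "full dataset in S3, hot data on-site" behavior resembles a small read cache. Below is a minimal sketch using an LRU eviction policy — the real gateway's caching strategy is more involved, so treat this purely as an illustration of the idea:

```python
from collections import OrderedDict


class CachedVolume:
    """Full dataset lives in `backing_store` (standing in for S3);
    a bounded on-site cache keeps the most recently accessed blocks."""

    def __init__(self, backing_store, cache_size):
        self.backing_store = backing_store   # dict: block_id -> data
        self.cache_size = cache_size
        self.cache = OrderedDict()           # on-premises cache (LRU order)

    def read(self, block_id):
        if block_id in self.cache:           # cache hit: served locally
            self.cache.move_to_end(block_id)
            return self.cache[block_id]
        data = self.backing_store[block_id]  # cache miss: fetch from "S3"
        self.cache[block_id] = data
        if len(self.cache) > self.cache_size:
            self.cache.popitem(last=False)   # evict least recently used
        return data
```

The cost savings come from `cache_size` being much smaller than the backing store: you only provision on-premises hardware for the working set, not the whole dataset.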

Tape Gateway

Supported by NetBackup, Backup Exec, Veeam, etc. Instead of having physical tapes, you have virtual tapes.
Update 2020
  • You can now create Tape Gateway and Volume Gateway local storage caches as large as 64 terabytes, four times as large as before, for gateways that are running on a virtual machine.
  • You can now create a schedule to control the maximum network bandwidth consumed by your Tape and Volume Gateways.


Snowball

The predecessor service was AWS Import/Export, where you sent your own disks to AWS. Snowball was announced at re:Invent 2015.

Snowball Standard

Snowball is petabyte-scale data transport that uses secure appliances to transfer large amounts of data into AWS.
AWS Snowball device costs less than AWS Snowball Edge
Up to 80 TB
Available in all regions.
256-bit encryption.
Industry-standard TPM (Trusted Platform Module).
Transferring data is simple, fast, secure, and inexpensive. Once the data transfer job has been processed and verified, AWS performs a software erasure of the Snowball appliance.

Snowball Edge

A 100 TB transfer device with on-board storage and compute capabilities.
Used to move data into and out of AWS, or as a temporary storage tier for large local datasets.
Snowball Edge is like an AWS datacenter in a box.


Snowmobile

A shipping container pulled by a truck that holds up to 100 petabytes, used to move massive volumes of data to the cloud — for example, an entire datacenter migration to AWS. Available only in the US.

Case scenario: moving large-scale data out of on-premises storage.

To sum up:

  • Use AWS SCT to process data locally and move that data onto the AWS Snowball Edge device.
  • You send the device to AWS.
  • AWS loads the data into S3.
  • Use AWS SCT to migrate the data to Redshift, for example.

You can use an AWS SCT agent to extract data from your on-premises data warehouse and migrate it to Amazon Redshift. The agent extracts your data and uploads the data to either Amazon S3 or, for large-scale migrations, an AWS Snowball Edge device. You can then use AWS SCT to copy the data to Amazon Redshift.

Large-scale data migrations can include many terabytes of information and can be slowed by network performance and by the sheer amount of data that has to be moved. AWS Snowball Edge is an AWS service you can use to transfer data to the cloud at faster-than-network speeds using an AWS-owned appliance. An AWS Snowball Edge device can hold up to 100 TB of data. It uses 256-bit encryption and an industry-standard Trusted Platform Module (TPM) to ensure both security and full chain-of-custody for your data. AWS SCT works with AWS Snowball Edge devices.

When you use AWS SCT and an AWS Snowball Edge device, you migrate your data in two stages. First, you use the AWS SCT to process the data locally and then move that data to the AWS Snowball Edge device. You then send the device to AWS using the AWS Snowball Edge process, and then AWS automatically loads the data into an Amazon S3 bucket. Next, when the data is available on Amazon S3, you use AWS SCT to migrate the data to Amazon Redshift. Data extraction agents can work in the background while AWS SCT is closed. You manage your extraction agents by using AWS SCT. The extraction agents act as listeners. When they receive instructions from AWS SCT, they extract data from your data warehouse.



CloudWatch

Host-level metrics

  • 5 minutes is the standard interval
  • default metrics are:
    • cpu
    • network
    • disk
    • status check
  • RAM is a custom metric; there is no default RAM metric
  • For custom metrics, the default granularity is 1 minute
  • CloudWatch can also be used on-premises — you only need to install the agent.
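Since RAM is not a default metric, publishing it means sending a custom metric. Below is a sketch of building the kind of payload you would pass to CloudWatch's PutMetricData call; the namespace and metric name are hypothetical, and the actual boto3 publish step is shown only as a comment so the sketch stays self-contained:

```python
from datetime import datetime, timezone


def memory_metric_payload(used_bytes, total_bytes, instance_id):
    """Build a PutMetricData-style payload for a custom RAM metric."""
    percent_used = 100.0 * used_bytes / total_bytes
    return {
        "Namespace": "Custom/System",        # custom namespace (hypothetical)
        "MetricData": [{
            "MetricName": "MemoryUtilization",
            "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
            "Timestamp": datetime.now(timezone.utc),
            "Value": percent_used,
            "Unit": "Percent",
        }],
    }

# With boto3, this payload would be published roughly as:
#   boto3.client("cloudwatch").put_metric_data(**payload)
```

The CloudWatch agent automates exactly this kind of collection and publishing, on EC2 or on-premises alike.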

How long is data stored?

  • You can retrieve data by calling the GetMetricStatistics API.
  • By default, logs are stored indefinitely.
  • You can retrieve data from terminated EC2 instances or ELBs.
