AWS Certified Solutions Architect – Associate – Dump 01

You lead a team to develop a new online game application in AWS EC2. The application will have a large number of users globally. For a great user experience, this application requires very low network latency and jitter. If the network speed is not fast enough, you will lose customers. Which tool would you choose to improve the application performance? (Select TWO.)

A. AWS VPN

B. AWS Global Accelerator

C. Direct Connect

D. API Gateway

E. CloudFront

Correct Answer – B, E

This online game application has global users and needs low latency. Both CloudFront and Global Accelerator can speed up the distribution of content over the AWS global network.

  • Option A is incorrect: AWS VPN links an on-premises network to the AWS network. However, no on-premises resources are mentioned in this question.
  • Option B is CORRECT: AWS Global Accelerator works at the network layer and is able to direct traffic to optimal endpoints. Check https://docs.aws.amazon.com/global-accelerator/latest/dg/what-is-global-accelerator.html for reference.
  • Option C is incorrect: Direct Connect links an on-premises network to the AWS network. However, no on-premises resources are mentioned in this question.
  • Option D is incorrect: API Gateway is a regional service for publishing APIs (often in front of serverless backends such as Lambda); it does not reduce network latency for global users.
  • Option E is CORRECT: CloudFront delivers content through edge locations, and users are routed to the edge location with the lowest latency.
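A minimal sketch of setting up Global Accelerator for this scenario (the accelerator name, protocol and port are assumptions for illustration): create an accelerator, then add a listener for the game traffic so players enter the AWS global network at the nearest edge.

```python
import boto3

# Assumed names/ports; the Global Accelerator API is served from us-west-2.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

accelerator = ga.create_accelerator(
    Name="game-accelerator",
    IpAddressType="IPV4",
    Enabled=True,
)

ga.create_listener(
    AcceleratorArn=accelerator["Accelerator"]["AcceleratorArn"],
    Protocol="UDP",                                  # typical for low-latency game traffic
    PortRanges=[{"FromPort": 3000, "ToPort": 3000}],
    ClientAffinity="SOURCE_IP",                      # keep a player pinned to one endpoint
)
```

Endpoint groups pointing at the EC2 fleet in each Region would then be attached to this listener.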

A company has a media processing application deployed in a local data center. Its file storage is built on a Microsoft Windows file server. The application and file server need to be migrated to AWS. You want to quickly set up the file server in AWS and the application code should continue working to access the file systems. Which method should you choose to create the file server?

A. Create a Windows File Server from Amazon WorkSpaces.

B. Configure a high performance Windows File System in Amazon EFS.

C. Create a Windows File Server in Amazon FSx.

D. Configure a secure enterprise storage through Amazon WorkDocs.

Correct Answer – C

In this question, a Windows file server is required in AWS and the application should continue to work unchanged. Amazon FSx for Windows File Server is the correct answer as it is backed by a fully native Windows file system.

  • Option A is incorrect: Amazon WorkSpaces provides managed virtual desktops, which are not required in this question. Only a Windows file server is needed.
  • Option B is incorrect: Amazon EFS provides NFS file systems for Linux workloads and cannot be used to configure a Windows file server.
  • Option C is CORRECT: Amazon FSx provides fully managed Microsoft Windows file servers. Check https://docs.aws.amazon.com/fsx/latest/WindowsGuide/what-is.html.
  • Option D is incorrect: Amazon WorkDocs is a file collaboration and sharing service in AWS. It cannot provide a native Windows file system.
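A minimal sketch of creating the FSx for Windows File Server file system (subnet, security group and directory IDs are placeholders): the migrated application can then mount the SMB share exactly as it did the on-premises Windows file server.

```python
import boto3

fsx = boto3.client("fsx", region_name="us-east-1")

response = fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=300,                       # GiB
    StorageType="SSD",
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    WindowsConfiguration={
        "ActiveDirectoryId": "d-1234567890",   # AWS Managed Microsoft AD the file system joins
        "DeploymentType": "SINGLE_AZ_2",
        "ThroughputCapacity": 32,              # MB/s
    },
)
print(response["FileSystem"]["DNSName"])       # SMB endpoint the application maps as a drive
```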

You are developing an application using the AWS SDK to get objects from AWS S3. The objects are large, and requests sometimes fail, especially when network connectivity is poor. You want to get a specific range of bytes in a single GET request and retrieve the whole object in parts. Which method can achieve this?

A. Enable multipart upload in the AWS SDK.

B. Use the “Range” HTTP header in a GET request to download the specified byte range of an object.

C. Reduce the retry requests and enlarge the retry timeouts through AWS SDK when fetching S3 objects.

D. Retrieve the whole S3 object through a single GET operation.

Correct Answer – B

Through the “Range” header in the HTTP GET request, a specified portion of an object can be downloaded instead of the whole object. Check the explanations in https://docs.aws.amazon.com/AmazonS3/latest/dev/GettingObjectsUsingAPIs.html.

  • Option A is incorrect: The question asks for multipart download rather than multipart upload.
  • Option B is CORRECT: With byte-range fetches, users can establish concurrent connections to Amazon S3 to fetch different parts from within the same object.
  • Option C is incorrect: Adjusting retry requests and timeouts cannot download specific parts of an object.
  • Option D is incorrect: Retrieving the entire object in a single GET does not meet the requirement to fetch the object in parts.
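A minimal sketch of byte-range fetches with boto3 (bucket, key and part size are assumptions): the object is downloaded in fixed-size ranges so a failed part can be retried without re-downloading the whole object.

```python
import boto3

s3 = boto3.client("s3")
bucket, key = "my-example-bucket", "videos/raw-footage.mp4"   # placeholder names
part_size = 8 * 1024 * 1024                                   # 8 MiB per ranged GET

total_size = s3.head_object(Bucket=bucket, Key=key)["ContentLength"]

with open("raw-footage.mp4", "wb") as f:
    for start in range(0, total_size, part_size):
        end = min(start + part_size - 1, total_size - 1)
        resp = s3.get_object(
            Bucket=bucket,
            Key=key,
            Range=f"bytes={start}-{end}",   # the "Range" HTTP header for this GET
        )
        f.write(resp["Body"].read())
```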

You have an application hosted in an Auto Scaling group, and an Application Load Balancer distributes traffic to the ASG. You want to add a scaling policy that keeps the average aggregate CPU utilization of the Auto Scaling group at 60 percent. The capacity of the Auto Scaling group should increase or decrease based on this target value. Which type of scaling policy is this?

A. Target tracking scaling policy.

B. Step scaling policy.

C. Simple scaling policy.

D. Scheduled scaling policy.

Correct Answer – A

In an ASG, you can add a target tracking scaling policy that adjusts capacity to keep a metric at a target value. Check https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scale-based-on-demand.html for the different scaling policies.

  • Option A is CORRECT: A target tracking scaling policy can be applied to track the ASGAverageCPUUtilization metric according to https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scaling-target-tracking.html.
  • Option B is incorrect: Step scaling adjusts the capacity based on step adjustments instead of a target.
  • Option C is incorrect: Simple scaling changes the capacity based on a single adjustment.
  • Option D is incorrect: With Scheduled scaling, the capacity is adjusted based on a schedule rather than a target.
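A minimal sketch of attaching such a policy (the ASG and policy names are assumptions): the policy keeps the group's average CPU utilization at the 60 percent target.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",           # placeholder ASG name
    PolicyName="keep-cpu-at-60",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 60.0,                  # the target the group scales in/out around
    },
)
```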

You need to launch a number of EC2 instances to run Cassandra. There are large distributed and replicated workloads in Cassandra and you plan to launch instances using EC2 placement groups. The traffic should be distributed evenly across several partitions and each partition should contain multiple instances. Which strategy would you use when launching the placement groups?

A. Cluster placement strategy.

B. Spread placement strategy.

C. Partition placement strategy.

D. Network placement strategy.

Correct Answer – C

Placement groups have the placement strategies of Cluster, Partition and Spread. With the Partition placement strategy, instances in one partition do not share the underlying hardware with other partitions. This strategy is suitable for distributed and replicated workloads such as Cassandra. For details, refer to https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html#placement-groups-limitations-partition.

  • Option A is incorrect: The Cluster placement strategy packs instances close together inside a single Availability Zone for low latency. It does not spread instances across partitions as required.
  • Option B is incorrect: The Spread placement strategy places individual instances on distinct racks; it does not group multiple instances into partitions.
  • Option C is CORRECT: With the Partition placement strategy, instances in each partition have their own set of racks.
  • Option D is incorrect: There is no Network placement strategy.
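A minimal sketch of launching into a partition placement group (the AMI, instance type and counts are placeholders): the group is created with the partition strategy, and EC2 distributes the Cassandra nodes across its partitions.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.create_placement_group(
    GroupName="cassandra-pg",
    Strategy="partition",
    PartitionCount=3,                         # up to 7 partitions per Availability Zone
)

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",          # placeholder AMI
    InstanceType="i3.2xlarge",
    MinCount=6,
    MaxCount=6,
    Placement={"GroupName": "cassandra-pg"},  # instances are spread across the partitions
)
```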

To improve the network performance, you launch a C5 EC2 Amazon Linux instance and enable enhanced networking by modifying the instance attribute with “aws ec2 modify-instance-attribute --instance-id instance_id --ena-support”. Which mechanism does the EC2 instance use to enhance the networking capabilities?

A. Intel 82599 Virtual Function (VF) interface.

B. Elastic Fabric Adapter (EFA).

C. Elastic Network Adapter (ENA).

D. Elastic Network Interface (ENI).

Correct Answer – C

Enhanced networking has two mechanisms: Elastic Network Adapter (ENA) and the Intel 82599 Virtual Function (VF) interface. For ENA, users can enable it with --ena-support. References can be found in https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking-ena.html.

  • Option A is incorrect: The “--ena-support” option is not used by the Intel 82599 Virtual Function (VF) interface; that mechanism is enabled through the sriovNetSupport attribute instead.
  • Option B is incorrect: Elastic Fabric Adapter (EFA) is designed for HPC and tightly coupled workloads; it is not the mechanism enabled by the --ena-support attribute.
  • Option C is CORRECT: In Amazon Linux, users can enable the enhanced networking attribute with the AWS CLI command mentioned in the question.
  • Option D is incorrect: In this scenario, the mechanism used for enhanced networking should be the Elastic Network Adapter (ENA), not an Elastic Network Interface (ENI).
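A minimal boto3 sketch of the same operation (the instance ID is a placeholder); the instance must be stopped before the enaSupport attribute is modified.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
instance_id = "i-0123456789abcdef0"   # placeholder

# Equivalent to the CLI command in the question.
ec2.modify_instance_attribute(InstanceId=instance_id, EnaSupport={"Value": True})

# Verify that ENA-based enhanced networking is now enabled on the instance.
attr = ec2.describe_instance_attribute(InstanceId=instance_id, Attribute="enaSupport")
print(attr["EnaSupport"]["Value"])
```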

You have an application that has been dockerized. You plan to deploy the application in an AWS ECS cluster. As the application gets configuration files from an S3 bucket, the ECS containers should have the AmazonS3ReadOnlyAccess permission. What is the correct method to configure the IAM permission?

A. Add an environment variable to the ECS cluster configuration to allow the S3 read only access.

B. Add the AmazonS3ReadOnlyAccess permission to the IAM entity that creates the ECS cluster.

C. Modify the user data of ECS instances to assume an IAM role that has the AmazonS3ReadOnlyAccess permission.

D. Attach the AmazonS3ReadOnlyAccess policy to the ECS container instance IAM role. Use this role when creating the ECS cluster.

Correct Answer – D

ECS containers have access to permissions that are supplied to the container instance role. For details, check the ECS documentation at https://docs.aws.amazon.com/AmazonECS/latest/developerguide/instance_IAM_role.html.

  • Option A is incorrect: An ECS cluster uses the container instance IAM role, not environment variables, to control its permissions.
  • Option B is incorrect: The IAM entity that creates the ECS cluster does not pass its permissions to the ECS cluster. You need to configure an IAM role and attach it to the container instances.
  • Option C is incorrect: An EC2 instance assumes its role through the attached instance profile, not through user data scripts, so this is not the correct way to grant permissions to an ECS cluster.
  • Option D is CORRECT: After the AmazonS3ReadOnlyAccess policy is attached to the IAM role, the ECS instances can use the role to get objects from S3. When launching an ECS cluster, associate this role as the container instance IAM role (see the sketch below).
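A minimal sketch of attaching the managed policy to the container instance role (the role name "ecsInstanceRole" is an assumption; any role used as the cluster's container instance IAM role works the same way):

```python
import boto3

iam = boto3.client("iam")

# Grant read-only S3 access to the role assumed by the ECS container instances.
iam.attach_role_policy(
    RoleName="ecsInstanceRole",   # assumed container instance IAM role name
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
)

# The role is exposed to EC2 through an instance profile, which is selected as the
# container instance IAM role when the ECS cluster's instances are launched.
```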

You have an EC2 instance in the AWS us-east-1 region. The application in the instance needs to access a DynamoDB table that is located in the AWS us-east-2 region. The connection must be private without leaving the Amazon network and the instance should not use any public IP for communication. How would you configure this?

A. Configure an inter-region VPC endpoint for the DynamoDB service.

B. Configure inter-region VPC peering and create a VPC endpoint for DynamoDB in us-east-2.

C. Create an inter-region VPC peering connection between us-east-1 and us-east-2.

D. There is no way to setup the private inter-region connections.

Correct Answer – B

For private connections between regions, VPC peering should be used. A VPC endpoint then allows the instance to access the DynamoDB service privately. Please check the reference in https://aws.amazon.com/about-aws/whats-new/2018/10/aws-privatelink-now-supports-access-over-inter-region-vpc-peering/.

  • Option A is incorrect: You cannot configure an inter-region VPC endpoint directly.
  • Option B is CORRECT: With inter-region VPC peering and a VPC endpoint (PrivateLink), the EC2 instance can communicate with the DynamoDB table privately even though they are in different regions.
  • Option C is incorrect: This option does not mention the use of a VPC endpoint.
  • Option D is incorrect: VPC peering supports inter-region connections.
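A rough sketch of the two API calls the chosen answer describes (all VPC and route table IDs are placeholders): peer the us-east-1 VPC with a VPC in us-east-2, then create a VPC endpoint for DynamoDB in the us-east-2 VPC.

```python
import boto3

ec2_use1 = boto3.client("ec2", region_name="us-east-1")
ec2_use2 = boto3.client("ec2", region_name="us-east-2")

# Inter-region VPC peering between the instance's VPC and a VPC in us-east-2.
peering = ec2_use1.create_vpc_peering_connection(
    VpcId="vpc-11111111",          # requester VPC in us-east-1 (where the EC2 instance runs)
    PeerVpcId="vpc-22222222",      # accepter VPC in us-east-2
    PeerRegion="us-east-2",
)
ec2_use2.accept_vpc_peering_connection(
    VpcPeeringConnectionId=peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]
)

# Gateway VPC endpoint for DynamoDB in the us-east-2 VPC.
ec2_use2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-22222222",
    ServiceName="com.amazonaws.us-east-2.dynamodb",
    RouteTableIds=["rtb-33333333"],
)
```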

You configure an Amazon S3 bucket as the origin for a new CloudFront distribution. You need to restrict access so that users cannot view the files by directly using the S3 URLs. The files should only be fetched through the CloudFront URL. Which method is the most appropriate?

A. Configure Signed URLs to serve private content by using CloudFront.

B. Configure Signed Cookies to restrict access to S3 files.

C. Create the origin access identity (OAI) and associate it with the distribution.

D. Configure the CloudFront web distribution to ask viewers to use HTTPS to request S3 objects.

Correct Answer – C

In this scenario, users should only access the S3 files through CloudFront instead of S3 URLs. Option C is the correct option. For how to work with origin access identities, please check https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html.

  • Option A is incorrect: Signed URLs are used to restrict access to files in CloudFront edge caches. They cannot prevent users from fetching files directly through S3 URLs. Check https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/PrivateContent.html.
  • Option B is incorrect: Same reason as option A.
  • Option C is CORRECT: You can configure the CloudFront origin to restrict bucket access through an OAI (see the sketch below).
  • Option D is incorrect: With HTTPS, connections are encrypted between CloudFront and viewers. However, it does not restrict access to the S3 content.
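A minimal sketch of creating an OAI and granting it read access in the bucket policy (the bucket name and comment are assumptions); the OAI is then associated with the S3 origin of the distribution.

```python
import json
import boto3

cloudfront = boto3.client("cloudfront")
s3 = boto3.client("s3")
bucket = "my-static-content-bucket"   # placeholder bucket name

oai = cloudfront.create_cloud_front_origin_access_identity(
    CloudFrontOriginAccessIdentityConfig={
        "CallerReference": "oai-for-my-static-content",
        "Comment": "Restrict S3 access to CloudFront only",
    }
)
oai_id = oai["CloudFrontOriginAccessIdentity"]["Id"]

# Bucket policy that allows only the OAI to read objects, so direct S3 URLs stop working
# once public access is removed.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {
            "AWS": f"arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity {oai_id}"
        },
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{bucket}/*",
    }],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```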

You own a MySQL RDS instance in the AWS Region us-east-1. The instance has a Multi-AZ standby instance in another Availability Zone for high availability. As the business grows, there are more and more clients coming from Europe (eu-west-2), and most of the database workload is read-only. What is the proper way to reduce the load on the source RDS instance?

A. Create a snapshot of the instance and launch a new instance in eu-west-2.

B. Promote the Multi-AZ instance to be a Read Replica and move the instance to eu-west-2 region.

C. Configure a read-only Multi-AZ instance in eu-west-2 as Read Replicas cannot span across regions.

D. Create a Read Replica in the AWS Region eu-west-2.

Correct Answer – D

A Read Replica should be used to offload the read workload from the source DB instance. A Read Replica can also be created in a different AWS region. Refer to https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html.

  • Option A is incorrect: A Read Replica should be configured to serve the read traffic; restoring a snapshot launches a standalone instance that does not stay in sync with the source.
  • Option B is incorrect: A Multi-AZ standby instance cannot be promoted to a Read Replica; it exists only for failover.
  • Option C is incorrect: A Read Replica can be launched in another region for RDS MySQL.
  • Option D is CORRECT: Users can quickly create a Read Replica in another region (see the sketch below).
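A minimal sketch of the cross-region Read Replica (instance identifiers and class are placeholders): the call is made in the destination Region and references the source instance by its ARN.

```python
import boto3

rds_eu = boto3.client("rds", region_name="eu-west-2")   # destination Region

rds_eu.create_db_instance_read_replica(
    DBInstanceIdentifier="mydb-replica-eu",
    SourceDBInstanceIdentifier="arn:aws:rds:us-east-1:123456789012:db:mydb",  # placeholder ARN
    DBInstanceClass="db.r5.large",
    SourceRegion="us-east-1",   # lets boto3 handle the cross-region copy request signing
)
```

European clients then direct their read-only queries to the replica's endpoint, reducing the load on the source instance.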

A media firm has a global presence for its sports programming & broadcasting network, which runs on AWS infrastructure. They have multiple AWS accounts created based upon business verticals, and to manage these accounts they have created an AWS Organization. Recently, this firm was acquired by another media firm that also uses AWS infrastructure for its media streaming services. The two firms need to merge their AWS Organizations so that new policies can be created & enforced in all member AWS accounts of the merged entity.

As an AWS Consultant, which of the following steps would you suggest to the client to move the master account of the original media firm to the AWS Organization used by the merged entity? (Select THREE.)

A. Remove all member accounts from the organization.

B. Make another member account the master account.

C. Delete the old organization.

D. Invite the old master account to join the new organization as a member account.

E. Invite the old master account to join the new organization as a master account.

Correct Answer – A, C, D

To move the master account from one AWS Organization to another, the following steps need to be implemented (see the sketch below):

  • Remove all member accounts from the old AWS Organization.
  • Delete the old organization.
  • Invite the master account of the old AWS Organization to join the new AWS Organization as a member account.
  • Option B is incorrect as the master account of an AWS Organization cannot be replaced by another member account.
  • Option E is incorrect as the old master account joins the new organization as a member account, not as a master account.

For more information on migrating accounts between AWS organizations, refer to the following URL,
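A minimal sketch of the three steps with the Organizations API (account IDs are placeholders; steps 1-2 run with the old master account's credentials, step 3 with the new master account's credentials):

```python
import boto3

old_org = boto3.client("organizations")   # credentials of the OLD organization's master account

# 1. Remove every member account from the old organization.
for account in old_org.list_accounts()["Accounts"]:
    if account["Id"] != "111111111111":   # skip the old master account itself (placeholder ID)
        old_org.remove_account_from_organization(AccountId=account["Id"])

# 2. Delete the old organization (only possible once all member accounts are gone).
old_org.delete_organization()

# 3. From the new organization's master account, invite the old master account as a member.
new_org = boto3.client("organizations")   # credentials of the NEW organization's master account
new_org.invite_account_to_organization(
    Target={"Id": "111111111111", "Type": "ACCOUNT"}
)
```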


A start-up firm is using AWS Organizations to manage policies across its Development and Production accounts. The Development account needs an EC2 Dedicated Host that provides visibility into the number of sockets used. The Production account has subscribed to an EC2 Dedicated Host for its application, but it is currently not in use. Which of the following can be done to share the Amazon EC2 Dedicated Host from the Production account with the Development account?

A. Remove both Development & Production accounts from the Organization & then share resources between them.

B. You can share resources without enabling sharing within an Organization.

C. Share resources as an individual account in an Organization.

D. Remove the destination Development account from the Organization, then share resources with it.

Correct Answer – C

For accounts that are part of an AWS Organization, resource sharing can be done on an individual account basis when resource sharing is not enabled at the AWS Organization level. In this mode, the accounts are treated as external accounts, and an invitation must be exchanged and accepted between them before resource sharing starts.

  • Option A is incorrect as removing both accounts from the AWS Organization is not required for resource sharing.
  • Option B is incorrect because sharing resources at the organization level requires “sharing with AWS Organizations” to be enabled first. Without it, resource sharing is still possible, but only on an individual account basis, which is what Option C describes.
  • Option D is incorrect as removing the destination account from the AWS Organization is not required for resource sharing.

For more information on using AWS Resource Access Manager, refer to the following URL,
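A minimal AWS RAM sketch run from the Production account (the host ARN and account IDs are placeholders): the Dedicated Host is shared with the Development account as an individual principal.

```python
import boto3

ram = boto3.client("ram", region_name="us-east-1")

share = ram.create_resource_share(
    name="dedicated-host-share",
    resourceArns=[
        "arn:aws:ec2:us-east-1:111122223333:dedicated-host/h-0123456789abcdef0"  # placeholder
    ],
    principals=["444455556666"],      # the Development account ID (placeholder)
    allowExternalPrincipals=True,     # share on an individual account basis
)

# Because sharing with AWS Organizations is not enabled, the Development account receives an
# invitation and accepts it with ram.accept_resource_share_invitation(...) before it can use
# the Dedicated Host.
```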


A large medical institute is using a legacy database to store all its patient details. Due to compatibility issues with the latest software, they are planning to migrate this database to AWS cloud infrastructure. This large database will be moved to a NoSQL database, Amazon DynamoDB, in AWS. As an AWS Consultant, you need to ensure that all tables of the current legacy database are migrated without a glitch to Amazon DynamoDB. Which of the following is the most cost-effective way of transferring the legacy database to Amazon DynamoDB?

A. Use AWS DMS with AWS Schema Conversion Tool to save data to Amazon S3 bucket & then upload all data to Amazon DynamoDB.

B. Use AWS DMS with engine conversion tool to save data to Amazon S3 bucket & then upload all data to Amazon DynamoDB.

C. Use AWS DMS with engine conversion tool to save data to Amazon EC2 & then upload all data to Amazon DynamoDB.

D. Use AWS DMS with AWS Schema Conversion Tool to save data to Amazon EC2 instance & then upload all data to Amazon DynamoDB.

Correct Answer – A

In this case, the legacy database will be converted to Amazon DynamoDB, which is a heterogeneous conversion. The AWS Schema Conversion Tool (SCT) is best suited for such a conversion, used together with AWS DMS to transfer the data from on-premises to AWS. Using an Amazon S3 bucket is the most cost-effective way to stage any amount of data before it is loaded into Amazon DynamoDB.

  • Option B is incorrect as the engine conversion tool is suited for homogeneous database migrations; this is a heterogeneous migration, so using AWS SCT along with AWS DMS is the better option.
  • Options C & D are incorrect as using an Amazon S3 bucket is more cost-effective than staging the data on an Amazon EC2 instance.

For more information on using AWS Database Migration Service with AWS SCT, refer to the following URL,
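One possible starting point for the DMS side of this migration (endpoint identifiers, role ARNs and bucket names are placeholders): define the target endpoints for the S3 staging bucket and for DynamoDB.

```python
import boto3

dms = boto3.client("dms", region_name="us-east-1")

# Staging target: the S3 bucket that holds the data extracted from the legacy database.
dms.create_endpoint(
    EndpointIdentifier="legacy-staging-s3",
    EndpointType="target",
    EngineName="s3",
    S3Settings={
        "ServiceAccessRoleArn": "arn:aws:iam::111122223333:role/dms-s3-access",   # placeholder
        "BucketName": "legacy-db-staging",
    },
)

# Final target: Amazon DynamoDB.
dms.create_endpoint(
    EndpointIdentifier="patients-dynamodb",
    EndpointType="target",
    EngineName="dynamodb",
    DynamoDbSettings={
        "ServiceAccessRoleArn": "arn:aws:iam::111122223333:role/dms-dynamodb-access"  # placeholder
    },
)
```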


A popular blogging site is planning to save all its data to EFS as a redundancy plan. This data is constantly fetched & updated by clients. You need to ensure that all files saved to EFS using AWS DataSync are validated for data integrity. Which of the following will ensure fast transfer of data between on-premises servers & EFS, with data-integrity checks done as per security guidelines?

A. Enable Verification & perform all data transfer.

B. Enable verification during initial file transfers & disable it post last data transfer.

C. Disable verification during initial file transfers & enable it post last data transfer.

D. Disable Verification & perform all data transfer.

Correct Answer – C

While transferring constantly changing data between on-premises servers & EFS using AWS DataSync, verification can be disabled during the data transfers & enabled after the final transfer to perform data-integrity checks & ensure that all data was properly copied between the on-premises servers & EFS.

  • Option A is incorrect as enabling data verification for every transfer of a constantly changing data set will slow down the transfers.
  • Option B is incorrect as verification needs to be performed after the final data transfer to ensure all data was properly copied to EFS.
  • Option D is incorrect as disabling verification entirely means no data-integrity check is performed on the data transferred between the on-premises servers & EFS.

For more information on using AWS DataSync, refer to the following URL,
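A minimal sketch of this pattern (location and task ARNs are placeholders): the task runs incremental transfers with verification disabled, then the final execution overrides the options to verify the destination.

```python
import boto3

datasync = boto3.client("datasync", region_name="us-east-1")

task = datasync.create_task(
    SourceLocationArn="arn:aws:datasync:us-east-1:111122223333:location/loc-onprem",  # placeholder
    DestinationLocationArn="arn:aws:datasync:us-east-1:111122223333:location/loc-efs",  # placeholder
    Options={"VerifyMode": "NONE"},   # no per-transfer verification while data keeps changing
)

# ... run start_task_execution() as often as needed while the source is still changing ...

# Final transfer: override the task options so DataSync verifies the destination afterwards.
datasync.start_task_execution(
    TaskArn=task["TaskArn"],
    OverrideOptions={"VerifyMode": "POINT_IN_TIME_CONSISTENT"},
)
```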


The development team is working on a new RTMP-based Flash application. They want to test this application with a few users spread across multiple in-house locations before making it live. For this, they have created an RTMP distribution in Amazon CloudFront. The IT head has asked you to control access to the application so that only specific users from these locations can access it during a specific time. Which of the following can meet this requirement?

A. Create Signed cookies specifying start date, time & IP address range from which users can access this content.

B. Create Signed cookies specifying end date, time & IP address range from which users can access this content.

C. Create Signed URLs specifying only start date, time & IP address range from which users can access this content.

D. Create Signed URLs specifying only end date, time & IP address range from which users can access this content.

Correct Answer – D

For an RTMP distribution, Signed URLs can be used to control access to private content. When specifying a validity period in a Signed URL, the start date & time are optional, while the end date & time are required. We can also specify the IP address range of users who should have access to this RTMP application.

  • Options A & B are incorrect as Signed Cookies do not support RTMP distributions.
  • Option C is incorrect as specifying the start date & time is optional, but the end date & time are required for each Signed URL.

For more information on using restricting access using Amazon CloudFront, refer to the following URL,
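A minimal sketch of the custom policy such a Signed URL carries (the stream path and CIDR range are hypothetical): it sets the required end time and limits access to the testers' IP address range; the policy is then signed with the CloudFront key pair when the Signed URL is generated.

```python
import json
import time

expires = int(time.time()) + 4 * 3600   # access allowed for the next four hours

custom_policy = {
    "Statement": [{
        "Resource": "videos/launch-demo",                        # hypothetical RTMP stream name
        "Condition": {
            "DateLessThan": {"AWS:EpochTime": expires},          # required end date & time
            "IpAddress": {"AWS:SourceIp": "203.0.113.0/24"},     # in-house testers' IP range
            # A DateGreaterThan (start date & time) condition could be added, but it is optional.
        },
    }]
}

print(json.dumps(custom_policy))
```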


Your development team has just finished developing an application on AWS. This application is created in .NET and is hosted on an EC2 instance. The application currently accesses a DynamoDB table and is now going to be deployed to production. 

Which of the following is the ideal and most secure way for the application to access the DynamoDB table?

A. Pass API credentials to the instance using instance user data.

B. Store API credentials as an object in Amazon S3.

C. Embed the API credentials into your JAR files.

D. Assign an IAM role to the EC2 Instances.

Answer – D

The AWS Documentation mentions the following:

  • You can use roles to delegate access to users, applications, or services that don’t normally have access to your AWS resources.
  • It is not a best practice to embed long-term IAM credentials (access keys) in any production application. It is always a good practice to use IAM Roles instead (see the sketch below).

For more information on IAM Roles, please visit the following URL:
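The application in the question is .NET, but the credential flow is identical across the AWS SDKs; a minimal boto3 sketch (table and key names assumed) of code running on an instance with an IAM role attached:

```python
import boto3

# No access keys appear in code, user data, S3 or config files: with an IAM role attached to
# the EC2 instance, the SDK obtains temporary credentials from the instance metadata service
# automatically.
dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
table = dynamodb.Table("orders")            # placeholder table name

response = table.get_item(Key={"order_id": "1234"})   # placeholder key
print(response.get("Item"))
```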


A company is planning to replicate its infrastructure in multiple environments, and the priority from senior management is to achieve as much automation as possible.

Which of the following components would help them achieve this purpose?

A. AWS Elastic Beanstalk

B. AWS CloudFormation

C. AWS CodeBuild

D. AWS CodeDeploy

Answer – B

The AWS Documentation mentions the following about AWS CloudFormation. This addresses the requirement in the question because the same CloudFormation templates can be reused to provision identical infrastructure across multiple environments.

AWS CloudFormation is a service that helps you model and set up your Amazon Web Service resources so that you can spend less time managing those resources and more time focusing on your applications that run on AWS. All you have to do is create a template that describes all the AWS resources that you want (Amazon EC2 instances or Amazon RDS DB instances), and AWS CloudFormation takes care of provisioning and configuring those resources for you.

For more information on AWS CloudFormation, please visit the following URL:
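A minimal sketch of that reuse (the template and stack names are illustrative): one template, deployed repeatedly to create an identical stack for each environment.

```python
import boto3

# A deliberately tiny template; real templates would describe EC2, RDS, networking, etc.
template = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  AppBucket:
    Type: AWS::S3::Bucket
"""

cloudformation = boto3.client("cloudformation", region_name="us-east-1")

for environment in ["dev", "staging", "prod"]:
    cloudformation.create_stack(
        StackName=f"app-infrastructure-{environment}",
        TemplateBody=template,
    )
```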


A company has a set of EC2 Instances that store critical data on EBS Volumes. The IT Security team has now mandated that the data on the disk needs to be encrypted. 

Which of the following can be used to achieve this purpose?

A. AWS KMS API

B. AWS Certificate Manager

C. API Gateway with STS

D. IAM Access Key

Answer – A

Option B is incorrect – The AWS Certificate Manager can be used to generate SSL certificates to encrypt traffic in transit, but not at rest.

Option C is incorrect – API Gateway with STS is used to issue temporary security tokens for API traffic in transit; it does not encrypt data at rest on EBS Volumes.

Option D is incorrect – IAM Access Keys are used for programmatic access to AWS APIs, not for encrypting data at rest.

The AWS Documentation mentions the following on AWS KMS:

AWS Key Management Service (AWS KMS) is a managed service that makes it easy to create and control the encryption keys used to encrypt your data. AWS KMS is integrated with other AWS services including Amazon Elastic Block Store (Amazon EBS), Amazon Simple Storage Service (Amazon S3), Amazon Redshift, Amazon Elastic Transcoder, Amazon WorkMail, Amazon Relational Database Service (Amazon RDS), and others, to make it simple to encrypt your data with encryption keys that you manage.

For more information on AWS KMS, please visit the following URL:
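A minimal sketch (the key alias and Availability Zone are assumptions): create an EBS volume encrypted with a KMS key, and optionally turn on encryption by default for all new volumes in the Region.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,                       # GiB
    VolumeType="gp3",
    Encrypted=True,
    KmsKeyId="alias/ebs-data-key",  # placeholder KMS key managed through the AWS KMS API
)

# Optionally make every newly created EBS volume in this Region encrypted automatically.
ec2.enable_ebs_encryption_by_default()
```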


A company has a set of EC2 Instances that store critical data on EBS Volumes. There is a fear from IT Supervisors that if data on the EBS Volumes is lost, then it could result in a lot of effort to recover the data from other sources. Which of the following would help alleviate this concern in an economical way?

A. Take regular EBS Snapshots.

B. Enable EBS Volume encryption

C. Create a script to copy data to an EC2 Instance Store

D. Mirror data across 2 EBS Volumes

Answer – A

Option B is incorrect because encryption protects the confidentiality of the data but does not help with the durability or recoverability of EBS Volumes.

Option C is incorrect since EC2 Instance stores are ephemeral and not durable.

Option D is incorrect since mirroring data across 2 EBS Volumes is less economical when you already have the option of EBS Snapshots.

The AWS Documentation mentions the following on AWS EBS Snapshots:

You can back up the data on your Amazon EBS Volumes to Amazon S3 by taking point-in-time snapshots. Snapshots are incremental backups, which means that only the blocks on the device that have changed after your most recent snapshot are saved. This minimizes the time required to create the snapshot and saves on storage costs by not duplicating data. When you delete a snapshot, only the data unique to that snapshot is removed. Each snapshot contains all of the information needed to restore your data (from the moment when the snapshot was taken) to a new EBS volume.

For more information on AWS EBS Snapshots, please visit the following URL:
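A minimal sketch (the volume ID is a placeholder): take a point-in-time, incremental snapshot of the critical volume; the snapshot is stored durably in Amazon S3.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",                      # placeholder volume ID
    Description="Nightly backup of critical data volume",
)
print(snapshot["SnapshotId"])

# Scheduling these snapshots (for example with Amazon Data Lifecycle Manager) keeps the
# backups regular without manual effort.
```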


A team is planning to host data on the AWS Cloud. Following are the key requirements:

a) Ability to store data as JSON documents

b) High availability and durability

Select the ideal schema-less storage mechanism that should be employed to fit this requirement.

A. Amazon EFS

B. Amazon Redshift

C. DynamoDB

D. AWS CloudFormation

Answer – C

AWS Documentation mentions the following on DynamoDB:

  • Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. 
  • DynamoDB supports the document data model and can store items as JSON documents, which makes it a good fit for the requirement in question.
  • DynamoDB is a NoSQL database and is schemaless.

For more information on AWS DynamoDB, please visit the following URL:

Note:

As per AWS Docs

“DynamoDBMapper has a new feature that allows you to save an object as a JSON document in a DynamoDB attribute. The mapper does the heavy work of converting the object into a JSON document and storing it in DynamoDB. DynamoDBMapper also takes care of loading the Java object from the JSON document when requested by the user.”
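A minimal boto3 sketch of the same idea (table and attribute names are assumptions): a JSON-style document is stored as a single DynamoDB item, with nested maps and lists and no predefined schema.

```python
import boto3

dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
table = dynamodb.Table("documents")          # placeholder table name

table.put_item(
    Item={
        "doc_id": "order-1001",
        "document": {                        # nested JSON document stored as a DynamoDB map
            "customer": {"name": "Jane Doe", "tier": "gold"},
            "items": [
                {"sku": "A-1", "qty": 2},
                {"sku": "B-7", "qty": 1},
            ],
            "total": 42,                     # use int/Decimal, not float, for numbers
        },
    }
)
```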

For more information, please check the below AWS docs: