AWS Solutions Architect Associate (SAA-C03) Exam Questions (Sample):
Information
This sample test contains 10 exam questions. You can view the results at the end of the test. Please purchase to get lifetime access to the full practice tests.
Question 1 of 10
A multimedia company wants to transfer 720 TB of videos from a network-attached file system located at a branch office to Amazon S3 Glacier, without saturating the branch office’s low-bandwidth internet connection.
What is the MOST cost-effective solution?
Exam Tip
The solution must avoid saturating the branch office’s low-bandwidth internet connection = use AWS Snowball.
Ten 80 TB Snowball devices hold 800 TB of raw capacity, but each device exposes only about 72 TB of usable storage, so ten devices give roughly 720 TB usable — just enough for this transfer.
Reference
https://docs.aws.amazon.com/snowball/latest/ug/specifications.html
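To make the capacity arithmetic concrete, here is a minimal Python sketch, assuming the roughly 72 TB of usable space per 80 TB Snowball device stated on the specifications page above; it estimates how many devices a transfer of a given size needs.

```python
import math

# Usable capacity per 80 TB Snowball device (per the AWS Snowball
# specifications page linked above): roughly 72 TB after overhead.
USABLE_TB_PER_DEVICE = 72

def devices_needed(dataset_tb: float) -> int:
    """Number of Snowball devices needed to hold a dataset of dataset_tb TB."""
    return math.ceil(dataset_tb / USABLE_TB_PER_DEVICE)

print(devices_needed(720))  # -> 10 devices for the question's 720 TB
```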
Question 2 of 10
A multinational company is about to launch a multi-tier e-commerce application and, because of big promotions, expects very heavy traffic on launch day.
As a solutions architect, what is the best approach to prevent any potential failure in the database layer?
Exam Tip
To provide high availability in the database tier (the most critical tier) = use Multi-AZ RDS.
Explanation
Amazon RDS Multi-AZ deployments provide enhanced availability and durability for RDS database (DB) instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB instance, Amazon RDS automatically creates a primary DB instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure and is engineered to be highly reliable. In case of an infrastructure failure, Amazon RDS performs an automatic failover to the standby (or to a read replica in the case of Amazon Aurora), so that you can resume database operations as soon as the failover is complete. Since the endpoint for your DB instance remains the same after a failover, your application can resume database operation without the need for manual administrative intervention.
Reference
https://aws.amazon.com/rds/features/multi-az/
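As a hedged illustration of the Multi-AZ answer, the boto3 sketch below provisions a MySQL instance with MultiAZ=True. The identifier, instance class, and password placeholder are hypothetical; this is a minimal sketch, not a production configuration.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# With MultiAZ=True, RDS creates a synchronous standby in another
# Availability Zone and fails over to it automatically on failure.
rds.create_db_instance(
    DBInstanceIdentifier="ecommerce-db",   # hypothetical name
    Engine="mysql",
    DBInstanceClass="db.m5.large",         # illustrative class
    AllocatedStorage=100,                  # GiB
    MasterUsername="admin",
    MasterUserPassword="CHANGE_ME",        # placeholder only
    MultiAZ=True,                          # the key setting
)
```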
Question 3 of 10
A video streaming company is hosting a website behind multiple Application Load Balancers. The company has different distribution rights for its content around the world.
Which configuration should the solutions architect choose to ensure that users are served the correct content without violating distribution rights?
Exam Tip
Route traffic based on the location of your users = use a geolocation routing policy.
Explanation
When you create a record, you choose a routing policy, which determines how Amazon Route 53 responds to queries:
- Simple routing policy – use for a single resource that performs a given function for your domain, for example, a web server that serves content for the example.com website.
- Failover routing policy – use when you want to configure active-passive failover.
- Geolocation routing policy – use when you want to route traffic based on the location of your users.
- Geoproximity routing policy – use when you want to route traffic based on the location of your resources and, optionally, shift traffic from resources in one location to resources in another.
- Latency routing policy – use when you have resources in multiple AWS Regions and you want to route traffic to the Region that provides the best latency.
- Multivalue answer routing policy – use when you want Route 53 to respond to DNS queries with up to eight healthy records selected at random.
- Weighted routing policy – use to route traffic to multiple resources in proportions that you specify.
Reference
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html
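To show what a geolocation record looks like in practice, here is a hedged boto3 sketch. The hosted zone ID, domain, and load balancer DNS name are hypothetical placeholders; a separate default record would catch users outside the listed locations.

```python
import boto3

route53 = boto3.client("route53")

# Route users located in Europe to the EU load balancer. SetIdentifier
# distinguishes geolocation records that share the same name and type.
route53.change_resource_record_sets(
    HostedZoneId="Z1EXAMPLE",  # hypothetical hosted zone ID
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "video.example.com",
                "Type": "CNAME",
                "SetIdentifier": "europe",
                "GeoLocation": {"ContinentCode": "EU"},
                "TTL": 60,
                "ResourceRecords": [{"Value": "eu-alb.example.com"}],
            },
        }]
    },
)
```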
Question 4 of 10
An e-commerce application hosted on AWS uses a distributed database running on multiple Amazon EC2 instances. The database stores all data on multiple instances so that the operation is fault-tolerant up to the loss of an instance. The database requires block storage with the latency and throughput to support several million transactions per second per server.
Which storage solution should the solutions architect use?
Exam Tip
Block storage + fault-tolerant up to the loss of an instance = EBS.
Explanation
Amazon Elastic Block Store (EBS) is an easy-to-use, high-performance block storage service designed for use with Amazon Elastic Compute Cloud (EC2) for both throughput- and transaction-intensive workloads at any scale. A broad range of workloads, such as relational and non-relational databases, enterprise applications, containerized applications, big data analytics engines, file systems, and media workflows, are widely deployed on Amazon EBS.
Reference
https://aws.amazon.com/ebs/?ebs-whats-new.sort-by=item.additionalFields.postDateTime&ebs-whats-new.sort-order=desc
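As a hedged sketch of the EBS answer, the snippet below creates a provisioned-IOPS (io2) volume and attaches it to an instance. The instance ID, Availability Zone, device name, and IOPS figure are illustrative assumptions only.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Provisioned-IOPS SSD for a latency- and throughput-sensitive database.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=500,          # GiB
    VolumeType="io2",
    Iops=32000,        # illustrative provisioned IOPS
)

# Wait until the volume is available, then attach it as a block device.
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",  # hypothetical instance ID
    Device="/dev/sdf",
)
```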
Question 5 of 10
A startup company has an application running on Amazon EC2 instances in a VPC. The application needs to call the Amazon S3 API to store and read objects while restricting any internet-bound traffic from the application.
What is the best solution to handle this requirement?
Exam Tip
To integrate with S3 while restricting any internet-bound traffic = use a gateway endpoint.
Explanation
A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by AWS PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network.
Endpoints are virtual devices. They are horizontally scaled, redundant, and highly available VPC components. They allow communication between instances in your VPC and services without imposing availability risks or bandwidth constraints on your network traffic.
There are two types of VPC endpoints: interface endpoints and gateway endpoints. Create the type of VPC endpoint required by the supported service.
An interface endpoint is an elastic network interface with a private IP address from the IP address range of your subnet that serves as an entry point for traffic destined to a supported service. Interface endpoints are powered by AWS PrivateLink, a technology that enables you to privately access services by using private IP addresses. AWS PrivateLink restricts all network traffic between your VPC and services to the Amazon network. You do not need an internet gateway, a NAT device, or a virtual private gateway. Interface endpoints cover all supported services except Amazon S3 and DynamoDB.
A gateway endpoint is a gateway that you specify as a target for a route in your route table for traffic destined to a supported AWS service. The following AWS services are supported:
- Amazon S3
- DynamoDB
Reference
https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints.html
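A hedged boto3 sketch of the gateway endpoint answer follows. The VPC and route table IDs are hypothetical; the service name follows the com.amazonaws.<region>.s3 pattern described in the VPC endpoints documentation.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# A gateway endpoint adds an S3 route to the listed route tables, so the
# instances reach S3 over the Amazon network with no internet-bound hop.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",            # hypothetical VPC ID
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],  # hypothetical route table
)
```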
Question 6 of 10
While performing a PCI audit on an existing workload deployed on AWS, the review identified a public-facing website running on the same Amazon EC2 instance as a Microsoft Active Directory domain controller that was installed recently to support other AWS services.
What should the solutions architect recommend for the new design that would improve the security of the architecture and minimize the administrative demand on IT staff?
Exam Tip
Migrate AD to AWS Managed AD and leave the web server on its own + improve the security of the architecture and minimize the administrative demand on IT staff = use AWS Directory Service to create a managed Active Directory, then uninstall Active Directory on the current EC2 instance.
You can use the Active Directory Migration Toolkit (ADMT) along with the Password Export Service (PES) to migrate users from your self-managed AD to your AWS Managed Microsoft AD directory. This enables you to migrate AD objects and encrypted passwords for your users more easily.
Reference
https://docs.aws.amazon.com/directoryservice/latest/admin-guide/ms_ad_migrate_users.html
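As a hedged sketch of the managed-directory answer, the snippet below creates an AWS Managed Microsoft AD with boto3. The domain name, password placeholder, VPC, and subnet IDs are all hypothetical.

```python
import boto3

ds = boto3.client("ds", region_name="us-east-1")

# AWS runs and patches the domain controllers of a Managed Microsoft AD,
# taking that administrative burden off the EC2 instance and IT staff.
ds.create_microsoft_ad(
    Name="corp.example.com",               # hypothetical domain
    Password="CHANGE_ME",                  # placeholder admin password
    Edition="Standard",
    VpcSettings={
        "VpcId": "vpc-0123456789abcdef0",  # hypothetical VPC
        "SubnetIds": ["subnet-0aaa1111", "subnet-0bbb2222"],  # two AZs
    },
)
```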
Question 7 of 10
A security team found database credentials stored in the source code of an application that calls AWS Lambda functions. The database credentials need to be removed from the Lambda source code. The credentials must then be securely stored and rotated on an ongoing basis to meet security policy requirements.
What should a solutions architect recommend to meet these requirements?
Exam Tip
You can configure Secrets Manager to automatically rotate your secrets without user intervention and on a specified schedule.
Explanation
AWS Secrets Manager helps you protect secrets needed to access your applications, services, and IT resources. The service enables you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. Users and applications retrieve secrets with a call to Secrets Manager APIs, eliminating the need to hardcode sensitive information in plain text. Secrets Manager offers secret rotation with built-in integration for Amazon RDS, Amazon Redshift, and Amazon DocumentDB. Also, the service is extensible to other types of secrets, including API keys and OAuth tokens. In addition, Secrets Manager enables you to control access to secrets using fine-grained permissions and audit secret rotation centrally for resources in the AWS Cloud, third-party services, and on-premises.
Reference
https://aws.amazon.com/secrets-manager/
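On the retrieval side, here is a hedged sketch of a Lambda handler that fetches the credentials from Secrets Manager at invocation time instead of hardcoding them. The secret name is a hypothetical placeholder, and the secret is assumed to be stored as a JSON string.

```python
import json
import boto3

secrets = boto3.client("secretsmanager")

def handler(event, context):
    # Fetch credentials at runtime; nothing sensitive lives in the source.
    response = secrets.get_secret_value(SecretId="prod/app/db")  # hypothetical
    creds = json.loads(response["SecretString"])
    username, password = creds["username"], creds["password"]
    # ... connect to the database with username/password here ...
    return {"statusCode": 200}
```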
Question 8 of 10
A customer owns a simple API in a VPC behind an internet-facing Application Load Balancer (ALB). A client application that consumes the API is deployed in a second account, in private subnets behind a NAT gateway. When requests to the client application increase, the NAT gateway costs are higher than expected. A solutions architect has configured the ALB to be internal.
Which combination of architectural changes will reduce the NAT gateway costs? (Choose two.)
Exam Tip
PrivateLink makes it easy to connect services across different accounts and VPCs to significantly simplify the network architecture.
Explanation
AWS PrivateLink simplifies the security of data shared with cloud-based applications by eliminating the exposure of data to the public internet. AWS PrivateLink provides private connectivity between VPCs, AWS services, and on-premises applications, securely on the Amazon network. AWS PrivateLink makes it easy to connect services across different accounts and VPCs to significantly simplify the network architecture.
Amazon Virtual Private Cloud (Amazon VPC) enables you to launch AWS resources into a virtual network that you’ve defined.
A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IPv4 addresses or IPv6 addresses. Instances in either VPC can communicate with each other as if they were within the same network. You can create a VPC peering connection between your own VPCs, or with a VPC in another AWS account. The VPCs can be in different Regions (also known as an inter-Region VPC peering connection).
AWS uses the existing infrastructure of a VPC to create a VPC peering connection; it is neither a gateway nor a VPN connection, and does not rely on a separate piece of physical hardware. There is no single point of failure for communication and no bandwidth bottleneck.
A VPC peering connection helps you to facilitate the transfer of data. For example, if you have more than one AWS account, you can peer the VPCs across those accounts to create a file sharing network. You can also use a VPC peering connection to allow other VPCs to access resources you have in one of your VPCs.
You can establish peering relationships between VPCs across different AWS Regions (also called inter-Region VPC peering). This allows VPC resources including EC2 instances, Amazon RDS databases, and Lambda functions that run in different AWS Regions to communicate with each other using private IP addresses, without requiring gateways, VPN connections, or separate network appliances. The traffic remains in the private IP space. All inter-Region traffic is encrypted, with no single point of failure or bandwidth bottleneck. Traffic always stays on the global AWS backbone and never traverses the public internet, which reduces threats such as common exploits and DDoS attacks. Inter-Region VPC peering provides a simple and cost-effective way to share resources between Regions or replicate data for geographic redundancy.
Reference
https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html
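A hedged boto3 sketch of the PrivateLink pattern follows: the API owner publishes an endpoint service, and the client account consumes it through an interface endpoint so traffic bypasses the NAT gateway. Endpoint services are fronted by Network Load Balancers, so this assumes the API has been placed behind an NLB; every ARN and ID is a hypothetical placeholder, and in practice the consumer call runs under the second account’s credentials.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Provider account: expose the API's NLB as a PrivateLink endpoint service.
service = ec2.create_vpc_endpoint_service_configuration(
    NetworkLoadBalancerArns=[
        "arn:aws:elasticloadbalancing:us-east-1:111111111111:"
        "loadbalancer/net/api-nlb/0123456789abcdef"  # hypothetical NLB ARN
    ],
    AcceptanceRequired=False,
)
service_name = service["ServiceConfiguration"]["ServiceName"]

# Consumer account: an interface endpoint in the client VPC keeps API
# calls on the Amazon network, so they never touch the NAT gateway.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0feedfacecafebeef",                     # hypothetical client VPC
    ServiceName=service_name,
    SubnetIds=["subnet-0aaa1111", "subnet-0bbb2222"],  # private subnets
)
```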
Question 9 of 10
A start-up company offers an intuitive financial data analytics service behind Amazon API Gateway. The traffic to this service is unpredictable and varies from 0 requests to over 500 per second. The data that needs to be persisted in a database is currently less than 1 GB, with unpredictable future growth, and can be queried using simple key-value requests.
Which combination of AWS services would meet these requirements? (Choose two.)
Exam Tip
DynamoDB is NoSQL (key-value based) and Lambda handles the requests out of the box; both are serverless.
Lambda auto-scales, can handle 500 requests per second, and integrates with API Gateway and DynamoDB.
Explanation
Amazon DynamoDB is a key-value and document database that delivers single-digit millisecond performance at any scale. It’s a fully managed, multiregion, multimaster, durable database with built-in security, backup and restore, and in-memory caching for internet-scale applications. DynamoDB can handle more than 10 trillion requests per day and can support peaks of more than 20 million requests per second.
Lambda
The first time you invoke your function, AWS Lambda creates an instance of the function and runs its handler method to process the event. When the function returns a response, it stays active and waits to process additional events. If you invoke the function again while the first event is being processed, Lambda initializes another instance, and the function processes the two events concurrently. As more events come in, Lambda routes them to available instances and creates new instances as needed. When the number of requests decreases, Lambda stops unused instances to free up scaling capacity for other functions.
Your function’s concurrency is the number of instances that serve requests at a given time. For an initial burst of traffic, your functions’ cumulative concurrency in a Region can reach an initial level of between 500 and 3000, which varies per Region.
Burst concurrency limits:
- 3000 – US West (Oregon), US East (N. Virginia), Europe (Ireland)
- 1000 – Asia Pacific (Tokyo), Europe (Frankfurt)
- 500 – Other Regions
After the initial burst, your functions’ concurrency can scale by an additional 500 instances each minute. This continues until there are enough instances to serve all requests, or until a concurrency limit is reached. When requests come in faster than your function can scale, or when your function is at maximum concurrency, additional requests fail with a throttling error (429 status code).
The following example shows a function processing a spike in traffic. As invocations increase exponentially, the function scales up. It initializes a new instance for any request that can’t be routed to an available instance. When the burst concurrency limit is reached, the function starts to scale linearly. If this isn’t enough concurrency to serve all requests, additional requests are throttled and should be retried.
[Figure: function instances, open requests, and possible throttling over time]
The function continues to scale until the account’s concurrency limit for the function’s Region is reached. The function catches up to demand, requests subside, and unused instances of the function are stopped after being idle for some time. Unused instances are frozen while they’re waiting for requests and don’t incur any charges.
Reference
https://docs.aws.amazon.com/lambda/latest/dg/invocation-scaling.html
https://aws.amazon.com/dynamodb/
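A hedged sketch of the Lambda + DynamoDB pairing follows: an API Gateway proxy-style handler serving the simple key-value reads and writes the question describes. The table name, key attribute, and event fields are hypothetical placeholders.

```python
import json
import boto3

table = boto3.resource("dynamodb").Table("analytics-kv")  # hypothetical table

def handler(event, context):
    # Key-value access pattern: GET reads one item, anything else writes one.
    if event.get("httpMethod") == "GET":
        key = event["queryStringParameters"]["id"]  # hypothetical query field
        item = table.get_item(Key={"id": key}).get("Item")
        return {"statusCode": 200, "body": json.dumps(item, default=str)}
    body = json.loads(event["body"])
    table.put_item(Item={"id": body["id"], "payload": body["payload"]})
    return {"statusCode": 201, "body": json.dumps({"ok": True})}
```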
Question 10 of 10
An application runs on Amazon EC2 instances in an Auto Scaling group across multiple Availability Zones behind an Application Load Balancer. The application performs best when the CPU utilization of the EC2 instances is at or near 30%.
What should a solutions architect do to maintain the desired performance across all instances in the group?
Exam Tip
A target tracking scaling policy assumes that it should scale out your Auto Scaling group when the specified metric is above the target value. You cannot use a target tracking scaling policy to scale out your Auto Scaling group when the specified metric is below the target value.
Explanation
With target tracking scaling policies, you select a scaling metric and set a target value. Amazon EC2 Auto Scaling creates and manages the CloudWatch alarms that trigger the scaling policy and calculates the scaling adjustment based on the metric and the target value. The scaling policy adds or removes capacity as required to keep the metric at, or close to, the specified target value. In addition to keeping the metric close to the target value, a target tracking scaling policy also adjusts to changes in the metric due to a changing load pattern.
For example, you can use target tracking scaling to:
- Configure a target tracking scaling policy to keep the average aggregate CPU utilization of your Auto Scaling group at 40 percent.
- Configure a target tracking scaling policy to keep the request count per target of your Application Load Balancer target group at 1000 for your Auto Scaling group.
Depending on your application needs, you might find that one of these popular scaling metrics works best for you when using target tracking, or you might find that a combination of these metrics or a different metric meets your needs better.
Reference
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scaling-target-tracking.html
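A hedged boto3 sketch of the target tracking policy for this scenario follows, keeping average CPU near the question’s 30% figure. The Auto Scaling group and policy names are hypothetical.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Keep average CPU across the group at (or near) 30%; Auto Scaling creates
# and manages the CloudWatch alarms behind this policy automatically.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",  # hypothetical ASG name
    PolicyName="keep-cpu-at-30",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        "TargetValue": 30.0,
    },
)
```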
The AWS Certified Solutions Architect – Associate exam validates technical expertise in designing and deploying scalable, highly available, and fault-tolerant systems on AWS. Take this intermediate-level course to learn how to prepare for the exam by exploring the exam’s topic areas and how they map to architecting on AWS and to specific areas to study.
Skillcertpro Offerings (Instructor Note):
- We are offering 1,100 of the latest real AWS Solutions Architect Associate (SAA-C03) exam questions for practice, which will help you score higher on your exam.
- Aim for 85% or above in our mock exams before taking the main exam.
- Review both wrong and right answers, and thoroughly go through the explanation provided for each question; this will help you understand the topic behind it.
- The Master Cheat Sheet was prepared by the instructors and contains their personal notes for all exam objectives, carefully written to help you understand the topics easily.
- We recommend using the Master Cheat Sheet in the 2-3 days just before the main exam to cram the important notes.
The following knowledge is recommended before attempting the AWS Solutions Architect Associate (SAA-C03) exam questions:
AWS Certified Solutions Architect – Associate is intended for anyone with one or more years of hands-on experience designing available, cost-efficient, fault-tolerant, and scalable distributed systems on AWS. Before you take this exam, we recommend you have:
- One year of hands-on experience with AWS technology, including using compute, networking, storage, and database AWS services as well as AWS deployment and management services
- Experience deploying, managing, and operating workloads on AWS as well as implementing security controls and compliance requirements
- Familiarity with using both the AWS Management Console and the AWS Command Line Interface (CLI)
- Understanding of the AWS Well-Architected Framework, AWS networking, security services, and the AWS global infrastructure
- Ability to identify which AWS services meet a given technical requirement and to define technical requirements for an AWS-based application
AWS Solutions Architect Associate Intended Audience
This course is intended for:
- Solutions architects who are preparing to take the AWS Certified Solutions Architect – Associate exam
What does it take to earn this certification?
To earn this certification, you’ll need to take and pass the AWS Certified Solutions Architect – Associate exam (SAA-C03). The exam features a combination of two question formats: multiple choice and multiple response. Additional information, such as the exam content outline and passing score, is in the exam guide.
Level: Associate
Length: 130 minutes to complete the exam
Cost: 150 USD
Visit Exam pricing for additional cost information.
Format: 65 questions, either multiple choice or multiple response
Delivery method: Pearson VUE and PSI; testing center or online proctored exam
Check our other AWS Certification offerings : https://skillcertpro.com/product-category/amazon-aws/
Akanksha Verma –
These exams are very helpful. I took the Solutions Architect course first before attempting these. Every time I scored below 70% on one of these tests, I reviewed my weak points and practiced the test again. When I finally took the real exam, I thankfully passed it with a good score. Thanks Skillcertpro
Maskada J –
These mock tests really test your knowledge for the AWS SAA-C02 exam. I used them as a “Pre-Assessment” to gauge my strengths and weaknesses before I dived into the real exam. I was able to clear my exam.
Yashveer Ramparsad –
Honestly, this team has done an amazing job at putting together the right information for the certification. The questions are on point and will help you get to where you need to go. Whilst everyone’s ability to learn is different, the answers to the questions are explained really well for someone to understand the concepts. In order to pass the exam, I urge anyone to pay for this as it is well worth it. I also used other resources, but this was by far the largest contributor. So thanks team for the stellar work and keep it up!
Vish Shri –
Thank you Skillcertpro. — Just took the exam today and passed. Questions were kind of easy having solved your practice exams 🙂
Adil Warsi –
Provides mostly related and some similar questions to those asked in the real exam.
Sumit Karna –
Fantastic study material. I just passed my exam, and I learned some new concepts; there were a few questions in my real exam from sections that I had never read about before this course. Happy I went through this course at the very last moment.
Bailey G –
Just passed the exam, the questions are well-constructed to make you deeply think and understand the thought-process to solve the exam. Thank you!
Tony –
I passed the exam with a good score.
A W –
Just took the exam on August 14th and thankfully I passed with a score of 854. I bought SkillCertPro practice exam literally 3 days before the test and I was glad that I had a chance to work on 7 practice sets. They really helped me for the actual exam. Thanks!
Sankara Krishnan –
Better quality tests than the actual AWS Solutions Architect exam. Scored 921, primarily because of taking these tests the week before. You must read and understand all the answers to all 1000 questions, and your concepts will get strengthened.
sandip amte –
Dear Skillcertpro Team,
I have passed the AWS SAA-C02 examination on the first attempt.
Thanks for the practice tests, which helped me a lot in passing the exam.
These practice tests helped me find my weak areas, and using the detailed explanations I shored up those areas.
While taking the real exam I found that most of the questions were similar or identical to those from these practice tests.
Once again, Thank You :-).
I will be back soon for the AWS SysOps certification practice tests 🙂
Abhishek Gupta –
PASSED WITH AN AWESOME SCORE IN FIRST-ATTEMPT!
I purchased this question bank one month ago, when it had 800 questions, which seemed like a lot; a few days later, because the exam got updated, the count increased from 800 to 1000.
The pattern changed and the questions got tricky in the first few practice sets. At first I was discouraged at not being able to solve a single question, but friends, after solving the first 4 sets and understanding why each correct answer was right and the other options wrong, the services became clear to me in a way they hadn’t from watching acloudguru lectures.
It gave me hands-on experience of problems that I cannot test in real time and boosted my analytical skills.
Friends, they are committed to their word: the questions are up to date, and it’s not like you are cheating or something like that.
Amazon is well aware of these dumps and changes the question language to stop candidates from putting their strategies in place, but this helps you virtually feel what to do if a situation like that happens.
Trust me, guys, if you approach the questions like that, you will never fail the exam or a sample paper test.
This is worth buying!
Simon de Timary –
This was a great preview, and very helpful for the actual test. I had studied for nearly 2 weeks when a colleague recommended these exams. I passed 9 of 13 practice exams, and just passed the real deal!
Bruno EPIFANIE –
I am very much satisfied with these certification practice exams. I cleared my certification with the help of these exams; the quality of the questions and answers is excellent… Thanks a lot for arranging such a helpful course for us… God bless you and your team.
suparna banerjee –
Very much satisfied with the practice exams. Took the exams for two or three days and passed the real exam with a 760 score. Thanks, team, for curating a great set of questions.
Andreia –
Thank you. I Passed the Solutions Architect. These practice questions helped so much.
Regards,
Vivek Mathew –
Very good collection of high-quality practice tests. The tests really helped me clear my exam on the first attempt itself. I just practiced these mock tests for 2 weeks and covered around 6 tests. It gave me real confidence going into the real test.
Thank you team.
Cindy Garcia –
Yayyyy, passed the Solutions Architect exam today with a score of 934. I am entirely new to IT (from a finance background). These practice questions helped a lot. At first I was only scoring 70%, but towards the end my scores picked up. Then when I did them a second time… I had scores like 89%, 98%, and so on. The questions here IMO are harder than the exam. Like everyone has said, make sure you understand the questions because they can be tricky. Review both right and wrong answers (very important).
kiran Ravikant –
Step 1. Amazon official training Video + Skillcertpro Practice Test
Step 2. Attempt & Learn all answers of this Mock Test by heart
To my surprise, around 80% of the questions in the main exam were from this mock test alone. I finished the exam in 30 mins and scored 910. The best mock test I have ever seen.
charvakudu s –
I took the exam yesterday and I cleared it. These practice papers helped me a lot in understanding the AWS services better; especially, once you are done with a practice exam and review all the questions, there is a good explanation for each answer option of a question. Thanks for these papers
Justin Jacob –
Passed the exam. I just used the questions and the references. A few exact questions came from this, but it was very useful.
Suraj Patil –
PASSED WITH AN AWESOME SCORE IN FIRST-ATTEMPT!
Krishna Prasad sukumaran –
Passed the exam. Questions were similar to the actual exam. Thanks a lot.
Venkatesh Sakthivel –
Passed the exam. Thanks for the well prepared practice exams.
Yogesh More –
After going through AWS web training sessions, I purchased these exams and only scored 72% on the first and second attempts. With time, I got a lot better and passed the real certification exam with a score of 96.5%. This was exactly what I needed to pass my exam! Be sure to read through all answer explanations thoroughly (even the wrong answers).
Patricia Webb –
I took this course quite some time ago, but I found that the questions were very similar to the ones on the actual exam. After taking each exam one to two times and making sure I got at least 90% or higher, I managed to pass the exam on the first attempt with a score of 949.
Kalpana Yahampath –
These are good mock exams for passing the AWS SAA-C02. I followed the questions in these mock exam sets and found similar types of questions in the real exam as well.
Omkar Pukale –
These exams are exactly what you need in order to pass your AWS SA Associate certification. I just cleared my certification exam yesterday (11th Nov 2020). I did a course via the A Cloud Guru platform but was not confident enough for the exam; these tests helped me boost my confidence. The practice tests are a bit more difficult than the actual exam questions, but that just helps you be better prepared. The tests are correctly balanced across the topics asked in the actual exam. The language used for phrasing the questions is also very similar to the real exam, so you get good experience on that front as well. Thank you, SkillCertPro. Will be back soon for DevOps cert questions.
James Le –
After failing my 1st exam, I knew I needed to study. I found SkillCertPro among the many test-prep websites out there. I'm glad I did. I kept doing the practice exams until I got better and better. Took the exam a 2nd time and got my certification!
Kunal Shah –
I finally passed the test after failing twice with scores of 659 and 709. I came to these practice tests and they prepared me for the real thing; these are the best tests you can find if you want to pass the certification. I passed with a score of 796 after completing all 15 tests.
Pankaj Mishra –
First of all, I would like to thank Skillcertpro for such nice practice sets. The questions are similar to those asked in the real exam, but do not expect them to be exactly the same (for me, only about 10-20% of the questions in the real exam were exactly the same). I passed my exam yesterday (Jan 28th) and am still waiting for the result. My recommendation would be to concentrate on the first 5-6 test papers initially and try to go through and understand each and every question with the explanations provided, which will make your understanding better.
Thanks again for the wonderful test papers.
A B –
Passed on the first attempt! WooHoo 🙂
Unbelievable similarity to the questions on the real exam! The questions are actually updated every 15 days. Going to purchase a lot more courses from here. Very satisfied.
Elvin Diaz –
Success! Passed the exam. Thanks!
Ubaid Khan –
Very, very good!
I passed the AWS SAA exam yesterday and I am almost certain I wouldn’t have if it wasn’t for these practice exams.
Thanks and kudos to skillcertpro and the team!
MOHAN SUNDARAM –
I passed my exam on Feb 11. More than 90% of the actual exam questions were similar to the practice exams. I would say the actual exam questions are easier than the ones here. If you are consistently scoring > 80% here, then you can safely schedule your actual AWS exam. Thank you, skillcertpro. Best $20 I ever spent.
Vineet Singh –
This is the best collection of questions one can ask for. After going through the exams, one feels completely prepared. I would just say: if you don't do well in the first few exams, don't get discouraged; instead, go through the sections in which you didn't do well and prepare. Good luck!!
Paul David Gelario –
I recently passed the SAA-C02 exam! Thanks to Skillcertpro, I was able to pass. I will use Skillcertpro again when I take another AWS certification.
Jahed Naghipoor –
Very useful set of tests
Nisarg Mistry –
Great set of tests, on par with the actual exam difficulty. I had taken various tests across many platforms, but these seem to be the best, with appropriate difficulty levels. Do go through all the tests and grasp the explanations and answers.
Fernandes Samira –
The updates to the practice tests demonstrate their commitment to excellence. I'm happy to report that I passed the test with an 861. To be honest, on my first attempts at these courses, I was barely passing or failing. The practice tests are tough, but I'd really rather have a tough practice test and an easy actual test than the other way around. Plus, who knows, maybe I got lucky with the question distribution on the actual exam. Either way, I could not have passed without this course.
Demola Onifade –
Best site for practice exams for any IT certification. I have passed 6 certification exams so far thanks to Skillcertpro.
Oghenekaro Esemitodje –
After buying a couple of courses, I eventually came across this one, read the reviews from others, and decided to give it a try. It first exposed the gaps in the knowledge I had gained from my earlier preparation with other courses, and with time my knowledge improved, leading to my passing the exam. Thanks, guys; I'm certainly trying your other courses.
Vidhan Jain –
Great question bank. Don't get discouraged if you feel the practice exams are difficult. In my experience, the real exam was easier than the practice exams; the higher difficulty just leads to better preparation. These practice exams helped me a lot.
KelEthism –
I took the exam today (3/19/2022), and most of the questions came from this set.
Daniel Baldree –
These practice tests were essential learning for the exam. They exposed one particular area where I was clearly weaker than in the others. I focused more on that area, took a shot at the exam, and passed.
Vijaya Lakshman –
Thanks, Skillcertpro team, for these practice exams. I have passed the exam; it wouldn't have happened without these mock papers. To the people who are about to take this exam: please go over all these practice papers, and as everybody has said, don't get discouraged if you don't get good marks. Initially it will be tough, but try to understand the explanations. CONCEPTS are the key to passing this exam. Make sure you are getting above 85% on all these papers. In the real exam the questions were lengthy/tricky, so take your time and know all the CONCEPTS fully; you are sure to pass when you have the Skillcertpro practice exams in hand :). I highly recommend this to anyone.
Faycal Guermaz –
I just passed the exam, successfully!
These questions are good to practice; in the exam you will find some of them, exactly the same!
I gave only 4 stars because I think a large part of the questions here go too deep into details (like parameters), things you don't find in the actual exam.
Paul Wither –
I have taken practice tests from other sites as well, but none of them match the awesomeness of the skillcertpro tests. I really loved taking these practice tests. Passed my exam. Will come back for SysOps 🙂
Rama Surya D –
Hi All,
I took the AWS SA Associate exam in the last week of Aug 2022, and I passed.
Initially, when I browsed the skillcertpro website, I was doubtful about whether to buy these exams or not, but I bought them and practiced.
I took only 3 of the skillcertpro tests (out of 15) and went through the cheat sheet fully just 3-4 days before the exam. The questions are really good and the cheat sheet is excellent.
I have less than 1 year of hands-on experience.
Both the practice questions and the cheat sheet helped me remember the concepts.
Thanks to the skillcertpro team for helping me get the AWS SA Associate certification. I will come back for any future certifications.
Jason Marina –
Shout-out for the 830 awesome practice exam questions! Most (95%) of the questions are nicely set: the difficulty resembles the real exam, they test a lot of AWS caveats, and they cover topics rarely mentioned in most other tutorials (e.g., Redis Auth, DAX, and installation of the CloudWatch Logs Agent, all of which did appear in my exam!). I couldn't appreciate more the awfully detailed explanations for each question. I wish I had learned about this practice exam series earlier.
Julian He –
Happy that you guys keep this stuff current. I noticed as I went through the practice tests that the versions change and that you note the reason for each change. Now, I can only hope that your instruction and guidance will be the pivotal factor in my being able to pass the AWS Certified Solutions Architect Associate exam! Thank you!
Elsa Jackob –
These are excellent practice exam tests, which helped me pass the final exam on my first attempt with a score of 900. I would recommend them to anyone trying to pass the exam on the first attempt, as each answer has a detailed description from which you'll learn the concept in depth. I saw almost 80-85% overlap with this practice exam. Very helpful.
Nasia Hussain –
I recently passed the AWS SA exam on my first attempt a couple of days ago with a score of 902/1000, largely due to this course! I rarely write in-depth reviews, but this course really deserves it.
I am not a big fan of reading long technical whitepapers, and I believe that if you take all 6 practice tests and retake them until you score at least 85-90%, you are good to take the actual exam. I scored between 70-85% on my first attempt at the practice tests in this course, but I thoroughly read through all the answer explanations for BOTH correct and incorrect answers, and jotted down notes for concepts that were tricky or confusing. Do not skip the answer explanations, even if you answered correctly!
Biswajit Pattanayak –
I passed the SAA-C02 AWS Associate exam today, and I can say Skillcert Pro really helped with this.
Do not expect exact questions in the exam, but similar ones. Make sure to practice ALL the test papers at least twice and clear up the concepts by reading the explanations. That way you can actually crack the scenario-based questions.
Heng Teo –
Thought I should give credit where credit is due. Took my AWS SAA-C03 on 28 Dec, and I'm glad that most of the questions could be found in the practice exams. Definitely recommended before going for the SAA.
A B –
I started the practice and got 50% on the first 5 tests, and I was discouraged, but I kept going, and eventually, by the 8th or 9th practice test, I started hitting 75%; by the 13th I hit 90%. I must say the questions are harder than the actual exam, but don't get discouraged; keep pushing ahead. I PASSED on the first attempt with a score of 926/1000.
Ash –
The practice tests by SkillCertPro are great preparation for the SAA-C03 exam. The questions are very similar to the real test, so they help you get comfortable with what to expect. Well done, team; passed on my 1st attempt. This was just one of the resources I used; along with training on learning platforms, it helped me be well prepared for the actual test.
Aleks Suma –
Passed my certification on 2 March 2023.
The real deal was really close to skillcertpro's exams.
Thank you a lot, guys!
Regards,
Alex
Swaminathan Vaidyanathan –
I passed the exam on Apr 09, 2023, and my preparation was based entirely on questions from Skill cert pro. The questions were in a similar context, and I was able to judge the answers properly in the real exam.
Really thankful to the Skill cert pro team; looking forward to more certifications and assistance from here.
Manoj Kumar –
I passed the exam on 5/9/2023. I prepared with SkillCertPro, and it helped me a lot in understanding the concepts.
I am happy that I was able to pass the exam.
Christain Johnson-Hillard –
I passed after using this course! Happy!
minzu –
I recently passed the AWS SAA exam, and I am almost certain I wouldn't have if it weren't for these practice exams. I have 2 years of hands-on practice in AWS, but for the exam I needed help pinning down the main points, and Skillcertpro really helped me. Thanks to the skillcertpro team!
Ravi S –
Hello, I cleared the AWS SAA-C02 exam yesterday, thanks to these practice tests. I would recommend that you do some or all of them.
A few points to note, and some encouragement:
1) The marks you score on these tests are never a replica of your actual exam score. The actual exam uses scaled scoring, and there are 15 unscored questions, but you need to attempt all of them. I would say that if you are scoring near 70% and above on these tests, you will clear the main exam. I always scored between 68-83% on these tests.
2) The questions, for me, were quite lengthy, and sometimes you have to read them twice or three times.
3) There will be 10-20 direct questions that you will recognize from these practice tests. Questions related to EBS IOPS, EFS, FSx for Lustre/Windows, S3 Glacier, NFS File Gateway, Inspector, Macie, GuardDuty, Config, Directory Service, NAT gateways, VPC peering, WAF, and Shield are mostly direct, and you can score them easily.
4) Heavy scenario-based questions will be on SQS and SNS with Lambda, API Gateway, Storage Gateway, S3 Lifecycle, VPC endpoints, PrivateLink, Transit Gateways, Fargate, EKS, and Kinesis (please also make sure you understand these 4 use cases pretty well: KMS, HSM, ACM, SSM). There will be questions on the differences between Redshift Spectrum, S3 Select, and Athena (which one to use for SQL queries), as well as S3 Cross-Region Replication, Redis vs. Memcached, and AWS RAM. Understand the minute differences between Aurora, DynamoDB, and RDS.
All the best, guys!!
Bhaskar –
Hi All,
I cleared the SAA-C03 today. These practice tests definitely helped me. I would not say the questions come directly from this series; it may depend on the set you get.
KelEthism –
Lots of questions to practice; pretty much what I needed.
Hemant Yadav –
Passed my exam in August. The only thing I can recommend: watch out for tweaked wording in the exam.
Nick Schoenbaechler –
Thank you, Skillcertpro, for this course, which really helped me pass my exam. It was really helpful for understanding a lot of additional concepts. I would also like to make a special note of your cheat sheets, which are really helpful for getting to know the majority of the services in a quick turnaround time.