AWS Solutions Architect Professional (SAP-C02) Exam Questions
Question 1 of 10
1. Question
A global investment bank has multiple VPN connections from their on-premises data centers located in New York, Paris and Manila to their Virtual Private Cloud in AWS. The Chief Technology Officer is planning to redesign the current network design to have a more convenient and low-cost hub-and-spoke model for primary or backup connectivity between their remote offices. Which of the following is the most suitable and cost-effective connectivity option to use in this scenario?
If you have multiple VPN connections, you can provide secure communication between sites using the AWS VPN CloudHub. This enables your remote sites to communicate with each other, and not just with the VPC. The VPN CloudHub operates on a simple hub-and-spoke model that you can use with or without a VPC. This design is suitable for customers with multiple branch offices and existing internet connections who’d like to implement a convenient, potentially low-cost hub-and-spoke model for primary or backup connectivity between these remote offices.
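To make this concrete, here is a minimal boto3 sketch of a CloudHub setup: one virtual private gateway acts as the hub, with a customer gateway and VPN connection per remote office. The public IPs, ASNs, and region are placeholder assumptions, not values from the scenario.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# One entry per remote office; each site needs a unique BGP ASN so that
# routes learned from one spoke can be re-advertised to the others.
sites = [
    {"name": "new-york", "public_ip": "203.0.113.10", "asn": 65001},
    {"name": "paris",    "public_ip": "203.0.113.20", "asn": 65002},
    {"name": "manila",   "public_ip": "203.0.113.30", "asn": 65003},
]

# A single virtual private gateway serves as the hub.
vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]

for site in sites:
    cgw = ec2.create_customer_gateway(
        BgpAsn=site["asn"],           # unique ASN per site (placeholder values)
        PublicIp=site["public_ip"],   # placeholder public IP of the office router
        Type="ipsec.1",
    )["CustomerGateway"]
    ec2.create_vpn_connection(
        Type="ipsec.1",
        CustomerGatewayId=cgw["CustomerGatewayId"],
        VpnGatewayId=vgw["VpnGatewayId"],
        Options={"StaticRoutesOnly": False},  # dynamic (BGP) routing for CloudHub
    )
```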
References:
https://docs.aws.amazon.com/vpc/latest/userguide/VPN_CloudHub.html
https://docs.aws.amazon.com/vpc/latest/userguide/vpn-connections.html
Question 2 of 10
2. Question
You are working as a Cloud Security Engineer in your company, and you are asked to ensure that all confidential files shared via S3 cannot be accessed directly, but only through CloudFront. Which of the following options satisfies this requirement?
To restrict access to content that you serve from Amazon S3 buckets, you create CloudFront signed URLs or signed cookies to limit access to files in your Amazon S3 bucket, and then you create a special CloudFront user called an origin access identity (OAI) and associate it with your distribution. Then you configure permissions so that CloudFront can use the OAI to access and serve files to your users, but users can’t use a direct URL to the S3 bucket to access a file there. Taking these steps helps you maintain secure access to the files that you serve through CloudFront.
In general, if you’re using an Amazon S3 bucket as the origin for a CloudFront distribution, you can either allow everyone to have access to the files there, or you can restrict access. If you limit access by using, for example, CloudFront signed URLs or signed cookies, you also won’t want people to be able to view files by simply using the direct URL for the file. Instead, you want them to access the files only by using the CloudFront URL, so your protections work.
Typically, if you’re using an Amazon S3 bucket as the origin for a CloudFront distribution, you grant everyone permission to read the objects in your bucket. This allows anyone to access your objects either through CloudFront or using the Amazon S3 URL. CloudFront doesn’t expose Amazon S3 URLs, but your users might have those URLs if your application serves any objects directly from Amazon S3 or if anyone gives out direct links to specific objects in Amazon S3.
The option that says: Create an Origin Access Identity (OAI) and associate it with your CloudFront distribution. Change the permissions on your Amazon S3 bucket so that only the origin access identity has read permission is correct because it gives CloudFront exclusive access to the S3 bucket and prevents other users from accessing the public content of S3 directly via the S3 URL.
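As an illustration, the following boto3 sketch creates an OAI and replaces the bucket policy so that only the OAI can read objects; the bucket name and caller reference are placeholder assumptions. (For new distributions AWS now recommends origin access control, but the sketch follows the OAI approach described here.)

```python
import json
import boto3

cloudfront = boto3.client("cloudfront")
s3 = boto3.client("s3")

# Create the special CloudFront user (OAI) to associate with the distribution.
oai = cloudfront.create_cloud_front_origin_access_identity(
    CloudFrontOriginAccessIdentityConfig={
        "CallerReference": "confidential-docs-oai",  # must be unique per request
        "Comment": "OAI for the confidential documents bucket",
    }
)["CloudFrontOriginAccessIdentity"]

bucket = "example-confidential-docs"  # assumed bucket name
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowCloudFrontOAIReadOnly",
        "Effect": "Allow",
        # The OAI's canonical user is the only principal allowed to read,
        # so direct S3 URLs no longer work for anonymous users.
        "Principal": {"CanonicalUser": oai["S3CanonicalUserId"]},
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{bucket}/*",
    }],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```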
Writing an S3 bucket policy that assigns the CloudFront distribution ID as the Principal and the target bucket as the Amazon Resource Name (ARN) is incorrect because creating such a bucket policy is unnecessary and it does not prevent other users from accessing the public content of S3 directly via the S3 URL.
Assigning an IAM user that is granted access to objects in the S3 bucket to CloudFront is incorrect because it does not give CloudFront exclusive access to the S3 bucket.
Writing individual policies for each S3 bucket containing the confidential documents that would grant CloudFront access is incorrect because you do not need to create any individual policies for each bucket.
Question 3 of 10
3. Question
A clothing company is using a proprietary e-commerce platform as their online shopping website. The e-commerce platform is hosted on a fleet of on-demand EC2 instances which are launched in a public subnet. Aside from acting as web servers, these EC2 instances also fetch updates and critical security patches from the Internet. The Solutions Architect was tasked to ensure that the instances can only initiate outbound requests to specific URLs provided by the proprietary e-commerce platform while accepting all inbound requests from the online shoppers.
Which of the following is the BEST solution that the Architect should implement in this scenario?
Proxy servers usually act as a relay between internal resources (servers, workstations, etc.) and the Internet, and to filter, accelerate and log network activities leaving the private network. One must not confuse proxy servers (also called forwarding proxy servers) with reverse proxy servers, which are used to control and sometimes load-balance network activities entering the private network.
Launching a new web proxy server that only allows outbound access to the URLs provided by the proprietary e-commerce platform in your VPC is correct because the proxy server filters requests from the clients and only permits traffic to the specific URLs provided by the proprietary e-commerce platform.
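For illustration only, the following boto3 sketch launches a hypothetical Squid-based proxy instance whose user data restricts outbound access to an allowlist of domains; the AMI ID, subnet ID, and domain names are placeholder assumptions.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# User data installs Squid and configures a domain allowlist: only the
# platform's update/patch domains may be reached through this proxy.
user_data = """#!/bin/bash
yum install -y squid
cat > /etc/squid/squid.conf <<'EOF'
acl allowed_sites dstdomain .updates.example-platform.com .patches.example-platform.com
http_access allow allowed_sites
http_access deny all
http_port 3128
EOF
systemctl enable --now squid
"""

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # assumed Amazon Linux AMI
    InstanceType="t3.small",
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-0123456789abcdef0",  # assumed subnet for the proxy
    UserData=user_data,
)
```

The web servers would then be configured to send their outbound HTTP/HTTPS traffic through this proxy on port 3128, while still accepting inbound traffic directly from shoppers.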
The option that says: Create a new NAT Instance in your VPC. Place the EC2 instances to the private subnet and connect it to a NAT Instance which will handle the outbound URL restriction is incorrect because the EC2 instances are used as public-facing web servers and thus must be deployed in the public subnet. An instance in a private subnet behind a NAT instance will not be able to accept inbound connections to the online shopping website.
The option that says: Create a new NAT Gateway in your VPC. Place the EC2 instances to the private subnet and connect it to a NAT Gateway which will handle the outbound URL restriction is incorrect for the same reason as the option above. In addition, neither a NAT instance nor a NAT gateway filters outbound traffic by URL.
Implementing a Network ACL for the specific URLs allowed by the e-commerce platform with an implicit deny rule is incorrect because a network access control list (Network ACL) operates on IP addresses, protocols, and ports at the subnet level; it cannot filter requests based on URLs.
References:
https://aws.amazon.com/articles/using-squid-proxy-instances-for-web-service-access-in-amazon-vpc-another-example-with-aws-codedeploy-and-amazon-cloudwatch/
https://aws.amazon.com/blogs/security/how-to-set-up-an-outbound-vpc-proxy-with-domain-whitelisting-and-content-filtering/
Question 4 of 10
4. Question
An online gambling site is hosted in two Elastic Compute Cloud (EC2) instances inside a Virtual Private Cloud (VPC) in the same Availability Zone (AZ) but in different subnets. The first EC2 instance is running a database and the other EC2 instance is a web application that fetches data from the database. You are required to ensure that the two EC2 instances can connect with each other in order for your application to work properly. You also need to track historical changes to the security configurations associated with your instances.
Which of the following options can meet this requirement? (Choose 2)
AWS provides two features that you can use to increase security in your VPC: security groups and network ACLs. Security groups control inbound and outbound traffic for your instances, and network ACLs control inbound and outbound traffic for your subnets. In most cases, security groups can meet your needs; however, you can also use network ACLs if you want an additional layer of security for your VPC.
AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. Config continuously monitors and records your AWS resource configurations and allows you to automate the evaluation of recorded configurations against desired configurations. With Config, you can review changes in configurations and relationships between AWS resources, dive into detailed resource configuration histories, and determine your overall compliance against the configurations specified in your internal guidelines. This enables you to simplify compliance auditing, security analysis, change management, and operational troubleshooting.
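As a rough sketch of both requirements, the boto3 snippet below allows the web tier's security group to reach the database on the MySQL port and turns on an AWS Config recorder for security-group changes; all IDs and the role ARN are placeholder assumptions, and it assumes a Config delivery channel already exists.

```python
import boto3

ec2 = boto3.client("ec2")
config = boto3.client("config")

# Reference the web tier's security group as the traffic source instead of
# an IP range: any instance in that group may reach the database on 3306.
ec2.authorize_security_group_ingress(
    GroupId="sg-0db11111111111111",           # assumed database-tier SG
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": "sg-0web2222222222222"}],  # assumed web-tier SG
    }],
)

# Record configuration history for security groups and network ACLs.
config.put_configuration_recorder(
    ConfigurationRecorder={
        "name": "default",
        "roleARN": "arn:aws:iam::111122223333:role/aws-config-role",  # assumed role
        "recordingGroup": {
            "allSupported": False,
            "resourceTypes": ["AWS::EC2::SecurityGroup", "AWS::EC2::NetworkAcl"],
        },
    }
)
config.start_configuration_recorder(ConfigurationRecorderName="default")
```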
Using Route 53 to ensure that there is proper routing between the two subnets is incorrect because Route 53 is a DNS service; it can’t be used to connect two subnets. You should use Network ACLs and Security Groups instead.
Ensuring that the default route is set to a NAT instance or Internet Gateway (IGW) is incorrect because neither a NAT instance nor an Internet gateway is needed for the two EC2 instances to communicate.
Using AWS Systems Manager to track historical changes to the security configurations associated with your instances is incorrect because Systems Manager is focused on operations management and automation, not on recording resource configuration history. You have to use AWS Config instead.
References:
https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Security.html
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Subnets.html
Question 5 of 10
5. Question
A leading mobile game company is planning to host their GraphQL API in AWS which will be heavily used for their massively multiplayer online role-playing games (MMORPGs) for 3 years or more. You are assigned to prepare the architecture of the entire system and to ensure consistent connection and faster loading times for their players across the globe. Which of the following is the most cost-effective solution that you can implement in this scenario?
Reserved Instances are best used in these scenarios:
- Applications that have been in use for years and that you plan to continue to use.
- Applications with steady-state or predictable usage.
- Applications that require reserved capacity.
- Users who want to make upfront payments to further reduce their total computing costs.
Since the game company is planning to use this application for 3 years or more, the best and most cost-effective type of EC2 instance to use is a Reserved Instance. Hence, using Reserved EC2 Instances to host the GraphQL API and CloudFront for web distribution of the static assets is correct. You cannot use a Spot Instance here because you need to provide a consistent service to your users without any interruption. An On-Demand Instance is a valid option, but it costs more than a Reserved Instance, which is why it is incorrect.
Reference:
https://aws.amazon.com/ec2/pricing/reserved-instances/
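To make the cost reasoning concrete, here is a hypothetical back-of-the-envelope comparison; the hourly rates are made-up placeholders, since actual prices vary by instance type, region, and payment option.

```python
# Compare a 3-year on-demand run against a 3-year Reserved Instance using
# assumed rates (NOT real AWS prices).
HOURS_PER_YEAR = 8760
YEARS = 3

on_demand_rate = 0.192           # assumed $/hour, on-demand
reserved_effective_rate = 0.075  # assumed effective $/hour, 3-year RI

on_demand_cost = on_demand_rate * HOURS_PER_YEAR * YEARS
reserved_cost = reserved_effective_rate * HOURS_PER_YEAR * YEARS

print(f"On-demand, 3 years: ${on_demand_cost:,.0f}")   # ~$5,046
print(f"Reserved, 3 years:  ${reserved_cost:,.0f}")    # ~$1,971
print(f"Savings:            {1 - reserved_cost / on_demand_cost:.0%}")  # ~61%
```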
Question 6 of 10
6. Question
You are a Software Engineer for a leading call center company in Seattle. Their corporate web portal is deployed to AWS and is linked to their corporate data center via a link aggregation group (LAG) which terminates at the same AWS Direct Connect endpoint and is connected to a private virtual interface (VIF) in your VPC. The portal must authenticate against their on-premises LDAP server. Each Amazon S3 bucket can only be accessed by a logged-in user if it belongs to that user.
How will you implement this architecture in AWS? (Choose 2)
Lightweight Directory Access Protocol (LDAP) is a standard communications protocol used to read and write data to and from Active Directory. You can manage your user identities in an external system outside of AWS and grant users who sign in from those systems access to perform AWS tasks and access your AWS resources. The distinction is where the external system resides—in your data center or an external third party on the web.
For enterprise identity federation, you can authenticate users in your organization’s network, and then provide those users access to AWS without creating new AWS identities for them and requiring them to sign in with a separate user name and password. This is known as the single sign-on (SSO) approach to temporary access. AWS STS supports open standards like Security Assertion Markup Language (SAML) 2.0, with which you can use Microsoft AD FS to leverage your Microsoft Active Directory.
The option that says: Authenticate against LDAP using an identity broker you created, and have it call IAM Security Token Service (STS) to retrieve IAM federated user credentials. The application then gets the IAM federated user credentials of the identity broker to access the appropriate S3 bucket is correct because it follows the correct sequence. It develops an identity broker that authenticates users against LDAP, gets the security token from STS, and then accesses the S3 bucket using the IAM federated user credentials.
The option that says: The application first authenticates against LDAP to retrieve the name of an IAM role associated with the user. It then assumes that role via call to IAM Security Token Service (STS). Afterwards, the application can now use the temporary credentials from the role to access the appropriate S3 bucket is correct because it follows the correct sequence. It authenticates users using LDAP, gets the security token from STS, and then accesses the S3 bucket using the temporary credentials.
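As a sketch of the role-based flow described above, the snippet below assumes a hypothetical ldap_authenticate helper that validates the user against the on-premises directory and returns the mapped IAM role ARN; the role lookup and per-user bucket naming scheme are illustrative assumptions.

```python
import boto3

def ldap_authenticate(username: str, password: str) -> str:
    """Placeholder: validate credentials against the corporate LDAP server
    over the Direct Connect link and return the user's mapped IAM role ARN."""
    raise NotImplementedError

def open_user_bucket(username: str, password: str):
    # Step 1: authenticate against LDAP and retrieve the associated role.
    role_arn = ldap_authenticate(username, password)

    # Step 2: assume that role via STS to obtain temporary credentials.
    sts = boto3.client("sts")
    creds = sts.assume_role(
        RoleArn=role_arn,
        RoleSessionName=f"portal-{username}",
    )["Credentials"]

    # Step 3: access S3 with the temporary credentials; the role's policy
    # should grant access only to this user's own bucket.
    s3 = boto3.client(
        "s3",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    return s3.list_objects_v2(Bucket=f"corp-portal-{username}")  # assumed naming
```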
The option that says: Create an identity broker that assumes an IAM role, and retrieve temporary AWS security credentials via IAM Security Token Service (STS). The application gets the AWS temporary security credentials from the identity broker to gain access to the appropriate S3 bucket is incorrect because the users need to be authenticated using LDAP first, not STS. Also, the temporary credentials to log into AWS are provided by STS, not by the identity broker.
The option that says: The application first authenticates against LDAP, and then uses the LDAP credentials to log in to IAM service. Finally, it can now use the IAM temporary credentials to access the appropriate S3 bucket is incorrect because you cannot use the LDAP credentials to log into IAM.
The option that says: Use a Direct Connect Gateway instead of a single Direct Connect connection. Set up a Transit VPC which will authenticate against their on-premises LDAP server is incorrect because a Direct Connect Gateway only improves the availability of your on-premises network connection, and a transit VPC is simply a common strategy for connecting multiple, geographically dispersed VPCs and remote networks in order to create a global network transit center. Neither of these addresses the authentication requirement.
Reference:
http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_common-scenarios_federated-users.html
Question 7 of 10
7. Question
A government organization is currently developing a multi-tiered web application prototype which consists of various components for registration, transaction processing, and reporting. All of the components use different IP addresses, and they are all hosted on a single extra-large EC2 instance as the main server. They will be using S3 as a durable and scalable storage service. For security purposes, the IT manager wants to implement 2 separate SSL certificates for the separate components. How can the government organization achieve this with a single EC2 instance?
You can create a network interface, attach it to an instance, detach it from an instance, and attach it to another instance. The attributes of a network interface follow it as it’s attached or detached from an instance and reattached to another instance.
When you move a network interface from one instance to another, network traffic is redirected to the new instance. You can also modify the attributes of your network interface, including changing its security groups and managing its IP addresses. Every instance in a VPC has a default network interface, called the primary network interface (eth0). You cannot detach a primary network interface from an instance. You can create and attach additional network interfaces. The maximum number of network interfaces that you can use varies by instance type.
In this scenario, you basically need to provide multiple IP addresses to a single EC2 instance. This can be easily achieved by using an Elastic Network Interface (ENI). An elastic network interface is a logical networking component in a VPC that represents a virtual network card.
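For illustration, this boto3 sketch attaches a second ENI to the instance so that each component gets its own private IP address (and therefore its own SSL endpoint); the subnet, security group, and instance IDs are placeholder assumptions.

```python
import boto3

ec2 = boto3.client("ec2")

# Create an additional network interface in the instance's subnet.
eni = ec2.create_network_interface(
    SubnetId="subnet-0123456789abcdef0",   # assumed subnet
    Description="Second IP for the reporting component",
    Groups=["sg-0123456789abcdef0"],       # assumed security group
)["NetworkInterface"]

# Attach it as eth1; eth0 is the primary interface and cannot be detached.
ec2.attach_network_interface(
    NetworkInterfaceId=eni["NetworkInterfaceId"],
    InstanceId="i-0123456789abcdef0",      # assumed instance
    DeviceIndex=1,
)
```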
Creating an EC2 instance with multiple security groups attached to it which contain separate rules for each IP address, including custom rules in the Network ACL, is incorrect because a security group is mainly used to control the incoming or outgoing traffic to the instance and doesn’t provide multiple IP addresses to an EC2 instance.
Creating an EC2 instance that has multiple subnets in two separate Availability Zones attached to it and each will have a separate IP address is incorrect because you cannot place the same EC2 instance in two separate Availability Zones.
Creating an EC2 instance with a NAT address is incorrect because a NAT address doesn’t provide multiple IP addresses to an EC2 instance.
References:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-instance-addressing.html
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/MultipleIP.html
Question 8 of 10
8. Question
S.I.G.A Hackers United, a new international hacktivist group, has announced that they will launch wide-scale cyber attacks, such as SQL injection, cross-site scripting (XSS), and DDoS attacks, against multiple government websites which are hosted in AWS. You are hired as an IT consultant to reinforce the security of these government websites. Which of the following approaches provides a cost-effective and scalable mitigation against these cyber attacks?
In this scenario, the best option is to use a WAF instead of an IDS/IPS, as it provides more protection against common cyber attack patterns. An IDS is mostly concerned with detecting unauthorized access, while an IPS is similar to an IDS but also provides a prevention mechanism for your network.
AWS WAF is a web application firewall that helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources. AWS WAF gives you control over which traffic to allow or block to your web applications by defining customizable web security rules. You can use AWS WAF to create custom rules that block common attack patterns, such as SQL injection or cross-site scripting, and rules that are designed for your specific application. New rules can be deployed within minutes, letting you respond quickly to changing traffic patterns.
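As a rough example using the current WAFv2 API, the sketch below creates a web ACL with the AWS-managed SQL-injection rule set and a rate-based rule as a basic layer of DDoS mitigation; the names, priorities, and rate limit are placeholder assumptions.

```python
import boto3

# CLOUDFRONT-scoped web ACLs must be created in us-east-1.
wafv2 = boto3.client("wafv2", region_name="us-east-1")

wafv2.create_web_acl(
    Name="gov-sites-web-acl",
    Scope="CLOUDFRONT",
    DefaultAction={"Allow": {}},
    Rules=[
        {
            # AWS-managed rule group that blocks common SQL-injection patterns.
            "Name": "aws-managed-sqli",
            "Priority": 0,
            "Statement": {
                "ManagedRuleGroupStatement": {
                    "VendorName": "AWS",
                    "Name": "AWSManagedRulesSQLiRuleSet",
                }
            },
            "OverrideAction": {"None": {}},  # required for managed rule groups
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "sqli",
            },
        },
        {
            # Rate-based rule: block IPs exceeding the request threshold.
            "Name": "rate-limit-per-ip",
            "Priority": 1,
            "Statement": {
                "RateBasedStatement": {"Limit": 2000, "AggregateKeyType": "IP"}
            },
            "Action": {"Block": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "rate-limit",
            },
        },
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "gov-sites-web-acl",
    },
)
```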
Intrusion Detection Systems (IDS) monitor networks and/or systems for malicious activity or policy violation, and report them to systems administrators or to a security information and event management (SIEM) system. Intrusion Prevention Systems (IPS) are positioned behind firewalls and provide an additional layer of security by scanning and analyzing suspicious content for potential threats. Placed in the direct communication path, an IPS will take automatic action on suspicious traffic within the network.
Implementing an AWS WAF (Web Application Firewall) is correct as AWS WAF provides protection against common attack patterns, such as SQL injection, DDoS, or cross-site scripting.
Implementing an Intrusion Detection System (IDS) is incorrect as an IDS alone is not enough to provide protection against cyber attacks. It only detects malicious activities or policy violations in your network but doesn’t do anything about those security flaws. WAF is a better option for this scenario.
Implementing an Intrusion Prevention System (IPS) is incorrect as an IPS is also not enough to provide complete protection against cyber attacks. It only prevents malicious activities or policy violations in your network, and you still need a WAF to fully secure your web applications. WAF is a better option for this scenario.
Implementing both Intrusion Detection Systems (IDS) and Intrusion Prevention Systems (IPS) is incorrect as even though you have both IDS and IPS, you still need a WAF for better security against cyber attacks.
References:
https://aws.amazon.com/waf/
https://d1.awsstatic.com/Marketplace/scenarios/security/SEC_01_TSB_Final.pdf
Question 9 of 10
9. Question
A leading telecommunications company has many on-premises data centers scattered across the United States and they want to implement a hybrid network architecture to integrate their VPCs located in AWS US East (N. Virginia) and US West (Oregon).
In this scenario, how can you allow VPC resources like EC2 instances, RDS databases, and Lambda functions running in different AWS regions to communicate with each other using private IP addresses?
Amazon EC2 now allows peering relationships to be established between Virtual Private Clouds (VPCs) across different AWS regions. Inter-Region VPC Peering allows VPC resources like EC2 instances, RDS databases, and Lambda functions running in different AWS regions to communicate with each other using private IP addresses, without requiring gateways, VPN connections or separate network appliances.
Inter-Region VPC Peering provides a simple and cost-effective way to share resources between regions or replicate data for geographic redundancy. Built on the same horizontally scaled, redundant, and highly available technology that powers VPC today, Inter-Region VPC Peering encrypts inter-region traffic with no single point of failure or bandwidth bottleneck. Traffic using Inter-Region VPC Peering always stays on the global AWS backbone and never traverses the public Internet, thereby reducing threat vectors, such as common exploits and DDoS attacks.
Hence, the correct answer is setting up an Inter-Region VPC Peering.
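To illustrate, the boto3 sketch below peers a VPC in us-east-1 with one in us-west-2, accepts the request from the peer region, and routes each VPC's traffic for the other's CIDR over the peering connection; all VPC IDs, route table IDs, and CIDR blocks are placeholder assumptions.

```python
import boto3

use1 = boto3.client("ec2", region_name="us-east-1")
usw2 = boto3.client("ec2", region_name="us-west-2")

# Request the inter-region peering from the us-east-1 side.
pcx = use1.create_vpc_peering_connection(
    VpcId="vpc-0east11111111111",        # assumed N. Virginia VPC
    PeerVpcId="vpc-0west22222222222",    # assumed Oregon VPC
    PeerRegion="us-west-2",
)["VpcPeeringConnection"]
pcx_id = pcx["VpcPeeringConnectionId"]

# The accepter side confirms the request in its own region.
usw2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Route private traffic for the peer CIDR through the peering connection.
use1.create_route(RouteTableId="rtb-0east11111111111",
                  DestinationCidrBlock="10.1.0.0/16",   # assumed Oregon CIDR
                  VpcPeeringConnectionId=pcx_id)
usw2.create_route(RouteTableId="rtb-0west22222222222",
                  DestinationCidrBlock="10.0.0.0/16",   # assumed Virginia CIDR
                  VpcPeeringConnectionId=pcx_id)
```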
Setting up an SSL VPN Connection to all VPCs is incorrect because an SSL VPN Connection only provides a secure VPN connection but doesn’t establish a VPC to VPC peering connection.
Setting up an IPSec VPN Connection to all VPCs is incorrect because an IPSec VPN Connection only provides a secure VPN connection but doesn’t establish a VPC to VPC peering connection.
The option that says: This is currently not possible in AWS is incorrect because this statement is false. It can be done in AWS by using Inter-Region VPC Peering.
Reference:
https://docs.aws.amazon.com/AmazonVPC/latest/PeeringGuide/Welcome.html
Question 10 of 10
10. Question
You are working for a hospital chain in London that uses an online central hub for doctors and nurses. The application handles millions of requests per day, fetching various medical data for their patients. The system is composed of a web tier, an application tier, and a database tier that receives large and unpredictable traffic demands. Your responsibility as a Solutions Architect is to ensure that the architecture is scalable enough to handle web traffic fluctuations automatically. Which of the following AWS architectures should you use to meet the above requirements?
Correct
When users or services interact with an application, they will often perform a series of interactions that form a session. A session is unique data for users that persists between requests while they use the application. A stateless application is an application that does not need knowledge of previous interactions and does not store session information.
For example, an application that, given the same input, provides the same response to any end user, is a stateless application. Stateless applications can scale horizontally because any of the available compute resources (such as EC2 instances and AWS Lambda functions) can service any request. Without stored session data, you can simply add more compute resources as needed. When that capacity is no longer required, you can safely terminate those individual resources, after running tasks have been drained. Those resources do not need to be aware of the presence of their peers—all that is required is a way to distribute the workload to them.
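To make the scaling mechanics concrete, here is an illustrative sketch, not from the original explanation, of a target-tracking scaling policy, one common way to let CloudWatch metrics drive an Auto Scaling group automatically. The boto3 call is real, but the group name and target value are hypothetical.

# Minimal sketch: a target-tracking scaling policy that lets CloudWatch
# drive scale-in/scale-out of the stateless web tier automatically.
# The Auto Scaling group name and target value are hypothetical.
import boto3

autoscaling = boto3.client("autoscaling", region_name="eu-west-2")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-tier-asg",   # hypothetical ASG name
    PolicyName="keep-cpu-near-50-percent",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        "TargetValue": 50.0,  # add instances above ~50% CPU, remove below
    },
)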
In this scenario, the best option is to use a combination of ElastiCache, CloudWatch, and RDS read replicas.
The option that says: Run your web and application tiers in stateless instances in an autoscaling group, using Elasticache Memcached for tier synchronization and CloudWatch for monitoring and running your database tier using RDS with read replicas is correct because it uses stateless instances. The web server uses ElastiCache for read operations and CloudWatch, which monitors fluctuations in traffic and notifies the autoscaling group to scale in/scale out accordingly. In addition, it uses read replicas for RDS to handle the read-heavy workload.
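For illustration only, not part of the original answer: keeping session and frequently read data in ElastiCache Memcached is what lets the web and application instances stay stateless, and a cache-aside pattern offloads repeated reads from the database tier. The sketch below uses the pymemcache client; the cluster endpoint, key format, and the fetch_patient_from_replica helper are hypothetical.

# Minimal cache-aside sketch with ElastiCache Memcached (pymemcache client).
# The cluster endpoint and the fetch_patient_from_replica() helper, which
# would query an RDS read replica, are hypothetical placeholders.
import json
from pymemcache.client.base import Client

# ElastiCache Memcached endpoint (hypothetical)
cache = Client(("my-cluster.abc123.cfg.euw2.cache.amazonaws.com", 11211))

def get_patient_record(patient_id: str) -> dict:
    key = f"patient:{patient_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)  # cache hit: no database round trip
    record = fetch_patient_from_replica(patient_id)  # hypothetical read from an RDS read replica
    cache.set(key, json.dumps(record), expire=300)   # cache for 5 minutes
    return record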
The option that says: Run your web and application tiers in stateful instances in an autoscaling group, using CloudWatch for monitoring and running your database tier using RDS with Multi-AZ enabled is incorrect because it uses stateful instances. It also does not use any caching mechanism for the web and application tiers, and Multi-AZ RDS does not improve read performance.
The option that says: Run your web and application tiers in stateful instances in an autoscaling group, using CloudWatch for monitoring and running your database tier using RDS with read replicas is incorrect because it uses stateful instances and does not use any caching mechanism for the web and application tiers.
The option that says: Run your web and application tiers in stateless instances in an autoscaling group, using Elasticache Memcached for tier synchronization and CloudWatch for monitoring and running your database tier using RDS with Multi-AZ enabled is incorrect because Multi-AZ RDS does not improve read performance.
References:
https://aws.amazon.com/elasticache/
https://aws.amazon.com/rds/details/read-replicas/
https://d1.awsstatic.com/whitepapers/AWS_Cloud_Best_Practices.pdf
- We offer 960 of the latest real AWS Solutions Architect Professional (SAP-C02) exam questions for practice, which will help you score higher on your exam.
- Aim for 85% or above in our mock exams before taking the main exam.
- Review both the right and wrong answers and thoroughly go through the explanation provided for each question; this will help you understand the underlying concepts.
- The Master Cheat Sheet was prepared by our instructors and contains their personal notes on all exam objectives, carefully written to help you understand the topics easily. We recommend using it as the final step of preparation to review the important topics before the exam.
- Weekly updates: a dedicated team updates our question bank regularly, based on student feedback about what appeared on the actual exam as well as external benchmarking.
AWS Solutions Architect Professional is an AWS certification for professionals who evaluate an organization's requirements and make architectural recommendations for implementing and deploying applications on AWS.
AWS Solutions Architect Professional Details:
- Prerequisites: You should hold the AWS Certified Solutions Architect – Associate certification before attempting this exam.
- At least two years of hands-on experience designing and deploying cloud architecture on AWS and best practice knowledge of multi-application architectural design is strongly recommended.
- Format: Multiple-choice, multiple-answer
- Time: 180 minutes
- Cost: 300 USD
The AWS Solutions Architect Professional (SAP-C02) Exam Questions 2024 cover a broad range of topics that assess your ability to:
- Design for the AWS Well-Architected Framework: This framework helps you build secure, high-performing, cost-optimized, reliable, and scalable cloud architectures. You’ll need to demonstrate proficiency in designing for each of these pillars.
- Implement Security Best Practices: Security is a top priority in the cloud. The exam will test your knowledge of best practices for securing data, managing user access, and implementing strong security policies.
- Optimize for Performance, Cost, and Scalability: As an architect, you need to design solutions that meet performance requirements, stay within budget constraints, and scale to accommodate changing demands. The exam will assess your ability to leverage AWS services and features to achieve these goals.
- Migrate Workloads to AWS: Many organizations are migrating applications and data to the cloud. The exam will test your understanding of different migration strategies and your ability to plan and execute successful AWS migrations.
- Integrate DevOps Practices: DevOps is a critical approach for continuous delivery and infrastructure automation in the cloud. You’ll need to demonstrate your knowledge of using AWS services to implement DevOps practices.
How To Start With AWS Certification?
While there are no standard, defined steps for starting an AWS certification, the steps below are the most straightforward.
Step 1)
- First, enroll yourself in an AWS training class.
- Select the module that you want to take.
Step 2) Review all the available study materials and exam guides related to the selected AWS module.
Step 3) Read multiple AWS whitepapers. They offer plenty of crucial information on the exam topics and may answer many of your questions.
Step 4) Practice regularly. Practice tests will help you stay free of stress about the AWS certification exams.
Step 5) Schedule the final AWS certification exam once you are ready. It generally takes around 80-120 hours of practice and study to be prepared for the exam, though this depends on your experience and the certification course you have selected.
Exam Preparation
These training courses and materials will help with exam preparation:
- Architecting on AWS instructor-led, live or virtual 3-day course
- AWS Whitepapers https://aws.amazon.com/certification/certification-prep/
- Identify AWS services which help you to automate, monitor, and manage security operations on AWS.
- AWS Well-Architected web page (various whitepapers linked)
Exam Content
There are two main types of questions on the examination:
- Multiple-choice: has one correct response and three incorrect responses.
- Multiple-response: has two correct responses out of five options.
Any questions? Check out the FAQ.
Know more about the exam at https://aws.amazon.com/certification/certified-solutions-architect-professional/
Debasish –
This is the best mock test. I passed my exam with 82%. I tried WHIZLABS as well, but those were a bit easier and not updated enough. Keep taking the mock tests as much as you can, and read the reasons/descriptions.
Rayanne Johnson Vaughn –
These practice exams are very comprehensive. All of the questions look like they come directly from the AWS team. The best thing about this practice exam is that all the options are explained pretty well; even the wrong answers are explained pretty well. So at the end of the exam, you can review your test and see where you went wrong. Thank you, Skillcertpro, for such a good set of practice exams. Passed my exam.
CHIDAMBARAM ARUNACHALAM –
I have already followed the main course, and these practice tests are perfect to fill in the gaps, identify areas that need to be studied in more depth, or realise you didn't pay enough attention to the course. There are some really tricky questions about real-world scenarios, so it's good to have very well documented explanations. Money well spent, and it will give you the confidence to sit for the real exam.
Passed the exam.
Venkata G –
These tests are superb practice grounds with appropriate explanations; perfect training for passing the exam along with gaining knowledge.
B Evans –
These practice exams were very good and they were very instrumental in helping me to pass the certification test. The variety of questions was excellent. When the answers are revealed after taking the test, you receive a very detailed explanation of all of your answer selections, correct and incorrect. This proved very helpful to me as it allowed me to focus on the things that I still needed to improve. I highly recommend this course if you want to create a good foundation for your AWS certification journey.
David Kenny –
Great course. Though I found some questions should have more detailed explanations, overall the mock tests are good for your preparation and helped me pass the exam with a score of 910. Thank you.
Colby Marques –
Perfect course to study for the AWS Solutions Architect Professional exam and see a preview of what the actual exam looks like. There were even a lot of the same questions on my exam. I passed, mostly thanks to this course.
Swastik Acharya –
Excellent questions, very close to the actual exam, so a valuable aid in providing a candidate with the necessary knowledge. I passed, so I am very happy.
Kofoed Clausen –
The difficulty and concepts of the questions in the main exam are reflected in this practice exam. It helped me a lot to polish my concepts and clear the exam.
cuppa –
Excellent
Aryan Singh –
I passed my exam today on the first try 🙂 This course definitely helped me prepare for the AWS architect professional exam. The whole testing experience was basically the same as with these practice exams. Note: I did each test a few times, that is all. I did not use other vendors' practice tests while preparing. Thanks for the useful content.
Nidhi Chugh –
Definitely worth going through this question bank; it helps you bridge the gaps that you would have had while studying. The to-the-point answers help instantly. Thanks a ton… Cleared the exam… Thanks again for the wonderful tests…