AWS Certified Developer Associate Practice Test 3
Question 1 of 65
1. Question
A developer is developing a web application for AWS Lambda. Users will be able to log in and view private documents using the application. All pages in the application must adhere to the company's branding guidelines. How can the developer host the sign-in pages with the LEAST amount of custom code possible?
Question 2 of 65
2. Question
A business is developing a Java application that will be deployed to AWS. The business creates a pipeline for the project using AWS CodePipeline. CodePipeline must build and deploy the application on the AWS Cloud whenever a team member makes changes to the source code. Which combination of AWS services does the business need to utilize to achieve these requirements?
Question 3 of 65
3. Question
A developer is combining Amazon API Gateway with an AWS Lambda function in order to create an application. When the developer attempts to use the API, he or she gets the following error: Wed Nov 08 01:13:00 UTC 2017 : Method completed with status: 502. What should the developer do to resolve the error?
Explanation
https://aws.amazon.com/premiumsupport/knowledge-center/malformed-502-api-gateway/ "The format of the Lambda function's response is often the source of these errors. If the format is the problem, then you see a message that looks like this in the logs: Thu Dec 08 01:13:00 UTC 2016 : Execution failed due to configuration error: Malformed Lambda proxy response Thu Dec 08 01:13:00 UTC 2016 : Method completed with status: 502"
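For illustration, a minimal Python sketch of a well-formed Lambda proxy integration response; the handler name and body content are hypothetical, but the response shape is what API Gateway expects. A handler that returns a bare string or omits these fields can surface as the 502 "Malformed Lambda proxy response" error described above.

    import json

    def handler(event, context):
        # Lambda proxy integration requires this exact response shape.
        return {
            "isBase64Encoded": False,
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"message": "ok"}),
        }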
Question 4 of 65
4. Question
A business needs a fully managed source control solution that is compatible with AWS. By sharing sets of changes peer-to-peer, the service must guarantee that revision control synchronizes multiple distributed repositories. All users must be productive regardless of whether they are connected to a network. Which version control system should the business use?
Question 5 of 65
5. Question
A developer wishes to get a list of items from an Amazon DynamoDB table's global secondary index. Which DynamoDB API call should the developer use in order to consume the fewest read capacity units?
Question 6 of 65
6. Question
A developer is troubleshooting an AWS Lambda function that is being used in conjunction with Amazon API Gateway. HTTP status code 200 is returned whenever the API Gateway endpoint is called, despite the fact that AWS Lambda is logging a 4xx error. What modification is required to deliver an appropriate error code through the API Gateway?
Explanation
With the Lambda proxy integration, Lambda is required to return an output of the following format ... In this output, statusCode is typically 4XX for a client error and 5XX for a server error. API Gateway handles these errors by mapping the Lambda error to an HTTP error response, according to the specified statusCode. For API Gateway to pass the error type (for example, InvalidParameterException) as part of the response to the client, the Lambda function must include a header (for example, "X-Amzn-ErrorType":"InvalidParameterException") in the headers property.
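A hedged Python sketch of the error path described above: the handler returns the 4xx status in statusCode (so API Gateway relays it instead of 200) and surfaces the error type via the headers property. The error type shown is taken from the explanation; everything else is illustrative.

    import json

    def handler(event, context):
        # Returning statusCode 400 makes API Gateway respond 400, not 200.
        return {
            "statusCode": 400,
            "headers": {"X-Amzn-ErrorType": "InvalidParameterException"},
            "body": json.dumps({"message": "Invalid request parameter"}),
        }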
Question 7 of 65
7. Question
An application under development requires the storage of hundreds of video files. Prior to storage, the data must be encrypted within the application using a unique key for each video file. How should the developer code the application?
Question 8 of 65
8. Question
A developer created an application that runs on AWS Lambda and makes use of the AWS Serverless Application Model (AWS SAM). What is the proper sequence of actions for a successful application deployment?
Question 9 of 65
9. Question
Where in the application source bundle should an Elastic Beanstalk configuration file called healthcheckurl.config be placed?
Explanation
You can add AWS Elastic Beanstalk configuration files (.ebextensions) to your web application's source code to configure your environment and customize the AWS resources that it contains. Configuration files are YAML- or JSON-formatted documents with a .config file extension that you place in a folder named .ebextensions and deploy in your application source bundle. https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/ebextensions.html
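For illustration, a typical source bundle layout under the rule quoted above; the application file names are hypothetical, only the .ebextensions folder placement at the bundle root matters:

    my-application.zip
    |-- .ebextensions/
    |   `-- healthcheckurl.config    (YAML- or JSON-formatted)
    |-- index.php
    `-- ... other application files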
Question 10 of 65
10. Question
Lambda functions are packaged for deployment to a variety of environments, such as development, test, and production. Each environment has its own collection of resources, such as databases. How can the Lambda function make use of the current environment's resources?
Explanation
Use environment variables for the Lambda functions.
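A minimal sketch of this pattern in Python; DB_HOST and TABLE_NAME are hypothetical variable names that would be set to different values on each environment's function configuration:

    import os

    # Each environment (dev/test/prod) sets its own values for these.
    DB_HOST = os.environ["DB_HOST"]
    TABLE_NAME = os.environ.get("TABLE_NAME", "app-table-dev")

    def handler(event, context):
        return {"db": DB_HOST, "table": TABLE_NAME}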
Question 11 of 65
11. Question
A business is deploying AWS resources using AWS CloudFormation templates. The organization requires an upgrade to one of its AWS CloudFormation stacks. What can the business do to determine the effect of the changes on the running resources?
Question 12 of 65
12. Question
To increase read speed, an application makes use of a single-node Amazon ElastiCache for Redis instance. Demand for the application has grown significantly over time, putting an increasing strain on the ElastiCache instance. It is vital that this cache layer is capable of handling the load and being resilient in the event of a node failure. What can the developer do to meet the load and resilience requirements?
Question 13 of 65
13. Question
A developer has created a serverless application that makes use of a variety of AWS services. The business logic is implemented using Lambda functions that rely on third-party libraries. Amazon API Gateway will be used to provide the Lambda function endpoints. The Lambda functions will store data in Amazon DynamoDB. The developer is prepared to launch the application but must retain the ability to roll back. Given these criteria, how can this deployment be automated?
Question 14 of 65
14. Question
A developer chooses to use Amazon S3 to store highly secure data and wants to implement server-side encryption (SSE) with granular control over who may access the master key. For security reasons, company policy demands that the master key be created, rotated, and disabled easily as necessary. Which option is most appropriate for meeting these requirements?
Explanation
From the exam point of view:
SSE-S3: AWS manages both the data key and the master key.
SSE-KMS: AWS manages the data key and you manage the master key.
SSE-C: You manage both the data key and the master key.
See this doc for more details: http://amzn.to/2iVsGvM
A) Server-Side Encryption. SSE-S3 (AWS-managed keys) => when the requirement is to keep the encryption work simple and minimise the maintenance overhead. SSE-KMS (AWS KMS keys) => when the requirement is to maintain a security audit trail. SSE-C (customer-provided keys) => when end-to-end encryption is not required and the client wants full control of the security keys.
B) Client-Side Encryption. AWS KMS-managed customer master key => when the requirement is to maintain end-to-end encryption plus a security audit trail. Client-managed master key => when the requirement is to maintain end-to-end encryption but the client wants full control of the security keys.
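A hedged boto3 sketch of the SSE-KMS upload that this scenario points to: the object is protected by a customer-managed KMS key, which can be created, rotated, and disabled independently of S3. Bucket name and key ARN are placeholders.

    import boto3

    s3 = boto3.client("s3")
    s3.put_object(
        Bucket="example-secure-bucket",           # placeholder bucket
        Key="videos/video-001.enc",
        Body=b"...",
        ServerSideEncryption="aws:kms",            # SSE-KMS
        SSEKMSKeyId="arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID",
    )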
Question 15 of 65
15. Question
When new objects are created in a bucket, a developer uses Amazon S3 as the event source to run a Lambda function. The bucket notification configuration stores the information about the event source mapping. The developer is experimenting with various Lambda function versions and constantly has to alter notification settings to ensure that Amazon S3 invokes the proper version. What is the MOST efficient and effective method for mapping the S3 event to the Lambda function?
Explanation
You can assign an alias to a specific version and link the S3 trigger to that alias. If you want to change the version of the Lambda function triggered by S3, you just need to edit the alias.
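A minimal boto3 sketch of repointing such an alias; the function name, alias name, and version number are hypothetical. The S3 trigger stays bound to the alias and never needs to change.

    import boto3

    lam = boto3.client("lambda")
    lam.update_alias(
        FunctionName="process-uploads",   # hypothetical function name
        Name="live",                      # alias referenced by the S3 trigger
        FunctionVersion="7",              # newly published version to receive events
    )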
Question 16 of 65
16. Question
An Amazon EC2 instance is configured with an IAM role that expressly forbids access to all Amazon S3 API actions. The EC2 instance's credentials file contains an IAM access key and secret access key that provide full administrative access. Given that this EC2 instance supports several types of IAM access, which of the following statements is accurate?
Explanation
The EC2 instance profile has the lowest priority in the credential provider chain, so the AWS SDK will use the credentials file instead.
Question 17 of 65
17. Question
A business utilizes AWS CodeBuild and AWS CodeCommit to implement a continuous build process. Developers routinely submit code throughout the development period, resulting in frequent build failures. The business is looking for a solution that builds the code before developers push it to the main branch. Which option best fits these criteria in terms of cost-effectiveness?
Question 18 of 65
18. Question
A Linux EC2 instance running on AWS needs to manage the AWS infrastructure. How can the Amazon EC2 instance be configured to make secure AWS API calls?
Question 19 of 65
19. Question
Multiple EC2 instances are used to run an application behind an ELB. Where is the ideal place to store session data so that it can be served consistently across numerous requests?
Question 20 of 65
20. Question
A developer is in the process of transferring legacy applications to AWS. These applications will run on Amazon EC2 instances and will use MongoDB as their main data store. Management expects developers to make minimal modifications to the applications while using AWS services. Which option should the developer use to host MongoDB on AWS?
Explanation
"Amazon DocumentDB (with MongoDB compatibility) is a fast, scalable, highly available, and fully managed document database service that supports MongoDB workloads. As a document database, Amazon DocumentDB makes it easy to store, query, and index JSON data. Amazon DocumentDB is a non-relational database service designed from the ground up to give you the performance, scalability, and availability you need when operating mission-critical MongoDB workloads at scale."
Question 21 of 65
21. Question
A developer's code is stored in an Amazon S3 bucket. The code must be deployed as an AWS Lambda function to multiple AWS accounts in the same Region as the S3 bucket. The Lambda function will be launched using a custom AWS CloudFormation template in each account. What is the MOST secure method for granting access to the Lambda code stored in the S3 bucket?
Question 22 of 65
22. Question
A business requires security for its current website, which is hosted behind an Elastic Load Balancer. The Amazon EC2 instances hosting the website are CPU constrained. How can the website be secured without increasing the CPU load on the Amazon EC2 web servers? (Select two.)
Explanation
https://aws.amazon.com/blogs/aws/elastic-load-balancer-support-for-ssl-termination/ You can now create a highly scalable, load-balanced web site using multiple Amazon EC2 instances, and you can easily arrange for the entire HTTPS encryption and decryption process (generally known as SSL termination) to be handled by an Elastic Load Balancer. Your users can benefit from encrypted communication with very little operational overhead or administrative complexity. Until now, you had to handle the termination process within each EC2 instance. This added to the load on the instance and also required you to install an X.509 certificate on each instance. With this new release, you can simply upload the certificates to your AWS account and we'll take care of getting them distributed to the load balancers.
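The blog quoted above describes the Classic Load Balancer; as a hedged modern equivalent, a boto3 sketch of an HTTPS listener on an Application Load Balancer, so TLS terminates at the load balancer rather than on the CPU-constrained instances. All ARNs are placeholders.

    import boto3

    elbv2 = boto3.client("elbv2")
    elbv2.create_listener(
        LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/web/EXAMPLE",
        Protocol="HTTPS",               # TLS terminates here, not on EC2
        Port=443,
        Certificates=[{"CertificateArn": "arn:aws:acm:us-east-1:111122223333:certificate/EXAMPLE"}],
        DefaultActions=[{
            "Type": "forward",
            "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web/EXAMPLE",
        }],
    )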
Question 23 of 65
23. Question
A corporation uses Amazon API Gateway and the API Gateway native API key validation to maintain a REST service. Users can now sign up for the service through a new registration website that was recently developed by the corporation. The registration page uses CreateApiKey to generate a new API key and sends it to the user. The user receives a 403 Forbidden error when attempting to call the API with this key. Existing API users are unaffected and can continue to use the API. What change to the code will allow these additional users to access the API?
Explanation
Do you have a usage plan? If not, you need to create one and link your API to the usage plan by adding a stage. Do you have an API key? If not, you need to create an API key and enable it, then add the usage plan that is linked with your API to this API key. See also: https://stackoverflow.com/questions/39061041/using-an-api-key-in-amazon-api-gateway
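A boto3 sketch of the missing step: after CreateApiKey, the key must also be associated with a usage plan covering the API stage, or calls return 403. The key name and usage plan ID are placeholders.

    import boto3

    apigw = boto3.client("apigateway")
    key = apigw.create_api_key(name="customer-123", enabled=True)

    # The step the registration page is missing: attach the key to a
    # usage plan that is linked to the API's deployed stage.
    apigw.create_usage_plan_key(
        usagePlanId="u9example",     # plan already linked to the API stage
        keyId=key["id"],
        keyType="API_KEY",
    )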
Question 24 of 65
24. Question
Before data is sent to a downstream service, it is processed by a Lambda function. Each data payload is around 1 MB in size. Following a security assessment, the function must now encrypt the data prior to transmitting it downstream. Which API call is necessary to encrypt the data?
Explanation
https://docs.aws.amazon.com/kms/latest/developerguide/programming-encryption.html The examples in this topic use the Encrypt, Decrypt, and ReEncrypt operations in the AWS KMS API. These operations are designed to encrypt and decrypt data keys. They use an AWS KMS key in the encryption operations, and they cannot accept more than 4 KB (4096 bytes) of data. Although you might use them to encrypt small amounts of data, such as a password or RSA key, they are not designed to encrypt application data.
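Since KMS Encrypt is capped at 4 KB, the ~1 MB payload calls for envelope encryption via GenerateDataKey. A hedged Python sketch, assuming the cryptography package is available; the key alias is a placeholder and Fernet is just one illustrative choice of local cipher.

    import base64
    import boto3
    from cryptography.fernet import Fernet

    kms = boto3.client("kms")
    # GenerateDataKey returns a plaintext data key plus an encrypted copy.
    resp = kms.generate_data_key(KeyId="alias/app-data-key", KeySpec="AES_256")

    # Encrypt the large payload locally with the plaintext data key.
    fernet = Fernet(base64.urlsafe_b64encode(resp["Plaintext"]))
    ciphertext = fernet.encrypt(b"...payload up to ~1 MB...")

    # Ship ciphertext together with resp["CiphertextBlob"]; the consumer
    # recovers the data key later with kms.decrypt().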
Question 25 of 65
25. Question
A developer enhanced an application that runs on an Amazon EC2 instance and makes use of Amazon SQS. Upon deployment, the developer observed a large spike in Amazon SQS costs. When monitoring the Amazon SQS metrics in Amazon CloudWatch, the developer saw that the queue receives an average of one message per minute. What can be done to lower this application's Amazon SQS costs?
Explanation
Increasing the polling wait time (long polling) will reduce the number of empty receives and therefore reduce costs.
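A boto3 sketch of enabling long polling, either as a queue default or per receive call; the queue URL is a placeholder.

    import boto3

    sqs = boto3.client("sqs")
    queue_url = "https://sqs.us-east-1.amazonaws.com/111122223333/example-queue"

    # Queue-level default: every ReceiveMessage waits up to 20 seconds.
    sqs.set_queue_attributes(
        QueueUrl=queue_url,
        Attributes={"ReceiveMessageWaitTimeSeconds": "20"},
    )

    # Or per call:
    sqs.receive_message(QueueUrl=queue_url, WaitTimeSeconds=20)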
Question 26 of 65
26. Question
A developer is constructing a template for the AWS Serverless Application Model (AWS SAM). Multiple AWS Lambda functions, an Amazon S3 bucket, and an Amazon CloudFront distribution are defined in the AWS SAM template. One of the Lambda functions runs on the CloudFront distribution as a Lambda@Edge function. The S3 bucket is specified as the CloudFront distribution's origin. When the developer deploys the AWS SAM template in the eu-west-1 Region, the developer encounters an error when attempting to create the stack. What may have caused this problem?
Question 27 of 65
27. Question
A Linux, Apache, MySQL, and PHP (LAMP) stack is used to build an on-premises application. The developer wants to host this application on AWS. Which of the following sets of AWS services is appropriate for running this stack?
Explanation
A/E: do not have a DB component. B: DynamoDB is NoSQL and not a replacement for MySQL. D: Cognito is not a compute platform.
Question 28 of 65
28. Question
An Amazon S3 key is structured as follows: s3://BUCKET/FOLDERNAME/FILENAME.zip. Which S3 best practice would enhance performance when a single bucket receives thousands of PUT requests per second?
Explanation
The practice of improving S3 performance by using different folder names as prefixes has not changed. If you had a bucket with 10 folders numbered 1-10, in which contents are placed, that is 10 parallel streams of data. Hashing used to be the way to achieve this, but now sequential naming/numbering achieves the same result. You can still use hashes. Note: prefixes are still used for performance. https://docs.aws.amazon.com/AmazonS3/latest/dev/optimizing-performance.html "You can increase your read or write performance by parallelizing reads. For example, if you create 10 prefixes in an Amazon S3 bucket to parallelize reads, you could scale your read performance to 55,000 read requests per second." "For example, previously Amazon S3 performance guidelines recommended randomizing prefix naming with hashed characters to optimize performance for frequent data retrievals. You no longer have to randomize prefix naming for performance, and can use sequential date-based naming for your prefixes."
Question 29 of 65
29. Question
A developer is building a website that will be hosted on Amazon S3. The website must support secure browser connections. Which steps must the developer perform in combination to satisfy this requirement? (Select two.)
Question 30 of 65
30. Question
Through an API, a company's fleet of Amazon EC2 instances collects data from millions of consumers. To guarantee high access rates, the servers batch the data, create an object for each user, and upload the objects to an S3 bucket. The object's properties are Customer ID, Server ID, TS-Server (timestamp and Server ID), the object's size, and a timestamp. A developer wishes to locate all objects gathered for a particular user during a certain time period. After establishing an S3 object-created event, how can the developer accomplish this requirement?
Explanation
Redshift can't be used for storing the object-creation and customer details; it is mainly used for data warehouse requirements, so any option with Redshift is ruled out. For DynamoDB, the user wants to retrieve all records for a given customer in a specified time range, so Customer ID has to be the partition key and TS-Server the sort key to specify the time range. Hence option C is correct.
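A boto3 sketch of the resulting query pattern: with CustomerID as partition key and TS-Server as sort key, one Query call returns a customer's objects in a time range. The table name and timestamp format are hypothetical; the attribute names follow the question.

    import boto3
    from boto3.dynamodb.conditions import Key

    table = boto3.resource("dynamodb").Table("s3-object-index")
    resp = table.query(
        KeyConditionExpression=Key("CustomerID").eq("customer-42")
        & Key("TS-Server").between("2023-01-01T00:00:00", "2023-01-02T00:00:00")
    )
    items = resp["Items"]  # all of this customer's objects in the range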
Question 31 of 65
31. Question
A business in the us-east-1 Region has installed web servers on Amazon EC2 instances running Amazon Linux. The EC2 instances are backed by Amazon Elastic Block Store (Amazon EBS). A developer wants to guarantee that all of these instances use an AWS Key Management Service (AWS KMS) key to offer encryption at rest. How can the developer use an AWS KMS key to enable encryption at rest on existing and new instances?
Question 32 of 65
32. Question
A developer is deploying an application to Amazon EC2 using AWS CodeDeploy. The developer wishes to modify the permissions on a particular deployment file. Which lifecycle event should the developer utilize to do this task?
Explanation
The permissions can be changed after the files have been deployed.
Question 33 of 65
33. Question
A developer wishes to search and filter log data in order to troubleshoot an application. Amazon CloudWatch Logs stores the application logs. To count exceptions in the application logs, the developer creates a new metric filter. However, the filter returns no results. What is the reason for the absence of filtered results?
Explanation
Filters only publish metric data points for events that happen after the filter was created.
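For illustration, a boto3 sketch of creating such a metric filter; log group, namespace, and pattern are placeholders. Per the note above, it will only match events ingested after creation, which is why pre-existing log entries yield no data points.

    import boto3

    logs = boto3.client("logs")
    logs.put_metric_filter(
        logGroupName="/aws/lambda/example-app",   # hypothetical log group
        filterName="ExceptionCount",
        filterPattern="Exception",                # matches future events only
        metricTransformations=[{
            "metricName": "Exceptions",
            "metricNamespace": "ExampleApp",
            "metricValue": "1",
        }],
    )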
Question 34 of 65
34. Question
A business has transferred a classic application to a fleet of Amazon EC2 instances. The application stores data in a MySQL database that is deployed on a single EC2 instance. The organization has chosen to migrate the database from the EC2 instance to Amazon RDS MySQL. What steps should the developer take to adapt the application to store data in Amazon RDS?
Explanation
This question looks incomplete, as such a migration requires several steps: create a role for EC2 to access RDS, configure the security group to allow access to the right RDS port, and update the endpoint in the application.
Question 35 of 65
35. Question
A developer is configuring Amazon API Gateway to support their business's products. Registered developers may use the API to query and change their environments. For financial and security reasons, the organization wants to restrict the number of requests that end users may submit. Management wants to provide registered developers with the option of purchasing bigger packages that support a greater number of requests. How can the developer do this with the LEAST amount of management overhead?
Explanation
After you create, test, and deploy your APIs, you can use API Gateway usage plans to make them available as product offerings for your customers. You can configure usage plans and API keys to allow customers to access selected APIs at agreed-upon request rates and quotas that meet their business requirements and budget constraints. If desired, you can set default method-level throttling limits for an API or set throttling limits for individual API methods.
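A boto3 sketch of one purchasable tier expressed as a usage plan with a throttle and quota; larger packages would simply be additional plans. The plan name, limits, API ID, and stage are placeholders.

    import boto3

    apigw = boto3.client("apigateway")
    apigw.create_usage_plan(
        name="basic-tier",                              # hypothetical tier
        throttle={"rateLimit": 10.0, "burstLimit": 20}, # requests/second
        quota={"limit": 10000, "period": "MONTH"},      # requests/month
        apiStages=[{"apiId": "a1b2c3", "stage": "prod"}],
    )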
Question 36 of 65
36. Question
A developer is using Amazon S3 to store sensitive data created by an application. The developer wants to encrypt the data at rest. A corporate policy demands an audit trail detailing when and by whom the master key was used. Which encryption method will satisfy these criteria?
Explanation
https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingKMSEncryption.html SSE-KMS is similar to SSE-S3, but with some additional benefits (along with some additional charges). It provides you with an audit trail of when your key was used and by whom. Additionally, you have the option to create and manage encryption keys yourself, or use a default key that is unique to you.
Question 37 of 65
37. Question
A developer is running an application on an Amazon EC2 instance. When the application attempts to read from an Amazon S3 bucket, it fails. The developer discovers that the S3 read permission is missing from the associated IAM role. The developer must enable the application to read from the S3 bucket. Which solution satisfies this need with the MINIMUM amount of application downtime?
Explanation
After an additional policy is attached to the IAM role, it takes effect immediately.
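A boto3 sketch of attaching an S3 read policy to the instance's existing role; the role name is a placeholder. The change takes effect without restarting or replacing the instance.

    import boto3

    iam = boto3.client("iam")
    iam.attach_role_policy(
        RoleName="app-instance-role",   # hypothetical role on the instance
        PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
    )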
Question 38 of 65
38. Question
A developer must verify that an application's IAM credentials are not abused or exploited when running on Amazon EC2. What should the developer do to ensure the security of the credentials?
Question 39 of 65
39. Question
A business is processing records from an Amazon Kinesis data stream using an AWS Lambda function. The business recently noticed that records were being processed slowly. A developer finds that the function's iterator age metric is growing and the Lambda run time is consistently longer than expected. Which actions should the developer take to boost the performance of the processing? (Select two.)
Question 40 of 65
40. Question
Using Amazon API Gateway, a developer has established a REST API. The developer wants to track who is calling the API and how they use it. Additionally, the developer wants to control how long the logs are retained. What should the developer do to meet these requirements?
Explanation
To track who is accessing the API and how, enable API Gateway access logs. To control how long the logs live, set the CloudWatch Logs retention setting.
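A boto3 sketch of the retention half of that answer; the log group name is a placeholder for whichever group the stage's access logging targets.

    import boto3

    logs = boto3.client("logs")
    logs.put_retention_policy(
        logGroupName="apigw-access-logs",   # group used by the stage's access logging
        retentionInDays=90,                 # logs older than this are expired
    )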
Question 41 of 65
41. Question
A corporation employs 25,000 people and is expanding. The business is developing an application that will be exclusive to its workers. A developer is storing photos in Amazon S3 and application data in Amazon RDS. The organization demands that all employee data remain in the existing Security Assertion Markup Language (SAML) employee directory and is not interested in replicating employee data on AWS. How can the developer ensure that the workers who will be using this application are allowed access to their own application data?
Explanation
An identity pool is needed to authorize users.
Question 42 of 65
42. Question
A corporation is deploying one of its applications using AWS CodePipeline. The delivery pipeline is triggered by modifications to the master branch of an AWS CodeCommit repository and uses AWS CodeBuild for the test and build phases, as well as AWS CodeDeploy for application deployment. For many months, the pipeline has operated effectively with no adjustments. After a recent modification to the application's source code, AWS CodeDeploy failed to deploy the updated application as planned. What might be the underlying causes? (Select two.)
Explanation
Since the pipeline has been running for months, this is unlikely to be a permission or configuration issue. CodePipeline services are managed, so they should never go inactive. Human error (option A) would result in the pipeline not starting, and a failure in the earlier stages would also mean CodeDeploy has nothing to do.
Question 43 of 65
43. Question
A business requires that all data stored in Amazon DynamoDB tables be encrypted at rest using keys owned by the business. How can a developer satisfy these requirements WITHOUT making changes to the application?
Explanation
https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#enveloping "Envelope encryption: When you encrypt your data, your data is protected, but you have to protect your encryption key. One strategy is to encrypt it. Envelope encryption is the practice of encrypting plaintext data with a data key, and then encrypting the data key under another key."
Question 45 of 65
45. Question
When a developer calls the Amazon CloudWatch API, he receives HTTP 400: ThrottlingException errors sporadically. When a call is unsuccessful, no data is retrieved. Which best practice should be implemented first in order to remedy this issue?
Explanation
https://aws.amazon.com/premiumsupport/knowledge-center/cloudwatch-400-error-throttling/ It's a best practice to use the following methods to reduce your call rate and avoid API throttling: Distribute your API calls evenly over time rather than making several API calls in a short time span. If you require data to be available with a one-minute resolution, you have an entire minute to emit that metric. Use jitter (randomized delay) to send data points at various times. Combine as many metrics as possible into a single API call. For example, a single PutMetricData call can include 20 metrics and 150 data points. You can also use pre-aggregated data sets, such as StatisticSet, to publish aggregated data points, thus reducing the number of PutMetricData calls per second. Retry your call with exponential backoff and jitter.
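A minimal Python sketch of the retry-with-backoff-and-jitter advice above; the namespace, retry count, and error-code check are illustrative assumptions rather than prescribed values.

    import random
    import time
    import boto3
    from botocore.exceptions import ClientError

    cw = boto3.client("cloudwatch")

    def put_with_backoff(metric_data, retries=5):
        for attempt in range(retries):
            try:
                return cw.put_metric_data(Namespace="ExampleApp",
                                          MetricData=metric_data)
            except ClientError as err:
                if err.response["Error"]["Code"] != "Throttling":
                    raise
                # Exponential backoff with full jitter: 0..2**attempt seconds.
                time.sleep(random.uniform(0, 2 ** attempt))
        raise RuntimeError("still throttled after retries")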
Question 46 of 65
46. Question
A business operates an e-commerce website that makes use of Amazon DynamoDB to dynamically adjust the price of products in real time. At any one moment, numerous changes to the price information for a specific product may occur concurrently. This results in the original editor's modifications being overwritten without a thorough review procedure. Which DynamoDB write option should be used to avoid this overwriting?
Explanation
DynamoDB optionally supports conditional writes for these operations. A conditional write succeeds only if the item attributes meet one or more expected conditions. Otherwise, it returns an error. Conditional writes are helpful in many situations. For example, you might want a PutItem operation to succeed only if there is not already an item with the same primary key. Or you could prevent an UpdateItem operation from modifying an item if one of its attributes has a certain value. Conditional writes are helpful in cases where multiple users attempt to modify the same item.
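A boto3 sketch of such a conditional write for the pricing scenario; the table name, key, and prices are hypothetical. The update succeeds only if the price still matches the value the editor read, which is the optimistic-concurrency behavior described above.

    import boto3
    from botocore.exceptions import ClientError

    table = boto3.resource("dynamodb").Table("product-prices")
    try:
        table.update_item(
            Key={"ProductId": "sku-123"},
            UpdateExpression="SET Price = :new",
            ConditionExpression="Price = :expected",   # fail if changed meanwhile
            ExpressionAttributeValues={":new": 1999, ":expected": 1899},
        )
    except ClientError as err:
        if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
            pass  # someone changed the price first; re-read and retry
        else:
            raise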
Question 47 of 65
47. Question
A business demands that AWS Lambda functions built by developers log errors so that system administrators can resolve issues more efficiently. What should the developers do to address this need?
Explanation
Insert logging statements in the Lambda function; the output is available through CloudWatch Logs.
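A minimal Python sketch of this: anything written via the standard logging module (or stdout/stderr) in a Lambda function lands in CloudWatch Logs. The handler body is illustrative.

    import logging

    logger = logging.getLogger()
    logger.setLevel(logging.INFO)

    def handler(event, context):
        try:
            ...  # business logic (placeholder)
        except Exception:
            # Logs the stack trace to CloudWatch Logs for administrators.
            logger.exception("processing failed for event %s", event)
            raise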
Question 48 of 65
48. Question
A business is developing a stock trading application. The application requires sub-millisecond latency to handle trading requests. The business stores all trade data in Amazon DynamoDB, which is used to perform each trading request. A development team conducts load testing on the application and discovers that the time required to retrieve data is longer than intended. The development team needs a solution that significantly improves data retrieval time with the least amount of effort. Which solution satisfies these criteria?
Explanation
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DAX.html Amazon DynamoDB is designed for scale and performance. In most cases, the DynamoDB response times can be measured in single-digit milliseconds. However, there are certain use cases that require response times in microseconds. For these use cases, DynamoDB Accelerator (DAX) delivers fast response times for accessing eventually consistent data. Keywords are "time required to get data", "significantly improves data retrieval time", and "least amount of work".
Question 49 of 65
49. Question
A developer is trying to upload an object to an S3 bucket that has default encryption enabled, using the Amazon S3 PutObject API action. A 400 Bad Request error is returned to the developer. Which of the following is the most probable source of this error?
Explanation
A missing Content-Length header would return 411, not 400: Code: MissingContentLength. Description: You must provide the Content-Length HTTP header. HTTP Status Code: 411 Length Required.
Question 50 of 65
50. Question
A developer is debugging an application's permissions for making modifications to an Amazon RDS database. The developer has access to the application's IAM role. Which command structure should the developer use to verify the role's permissions?
Explanation
Use sts assume-role to obtain the role's credentials and verify the permissions.
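A hedged boto3 sketch of that verification flow: assume the application's role, then exercise an RDS call with the temporary credentials. The role ARN and the specific call are placeholders.

    import boto3

    sts = boto3.client("sts")
    creds = sts.assume_role(
        RoleArn="arn:aws:iam::111122223333:role/app-role",  # the app's role
        RoleSessionName="permission-check",
    )["Credentials"]

    # Build an RDS client using the role's temporary credentials.
    rds = boto3.client(
        "rds",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    rds.describe_db_instances()  # exercise a call the application would make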
Question 51 of 65
51. Question
A developer wishes to lower the execution time of a full Amazon DynamoDB table scan during off-peak hours without impairing typical workloads. During off-peak hours, workloads consume an average of about half of the strongly consistent read capacity units. How should the developer optimize this scan?
Question 52 of 65
52. Question
A business is in the process of building a serverless e-commerce web application. The application must perform synchronized, all-or-nothing updates to multiple products in the company's Amazon DynamoDB inventory table. Which solution will satisfy these criteria?
Explanation
TransactWriteItems operation differs from a BatchWriteItem operation in that all the actions it contains must be completed successfully, or no changes are made at all. With a BatchWriteItem operation, it is possible that only some of the actions in the batch succeed while the others do not.
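A boto3 sketch of an all-or-nothing inventory update with TransactWriteItems; the table name, keys, and conditions are hypothetical. If any condition fails, nothing is written.

    import boto3

    ddb = boto3.client("dynamodb")
    ddb.transact_write_items(TransactItems=[
        {"Update": {
            "TableName": "inventory",
            "Key": {"ProductId": {"S": "sku-1"}},
            "UpdateExpression": "SET Stock = Stock - :n",
            "ConditionExpression": "Stock >= :n",   # prevents overselling
            "ExpressionAttributeValues": {":n": {"N": "1"}},
        }},
        {"Update": {
            "TableName": "inventory",
            "Key": {"ProductId": {"S": "sku-2"}},
            "UpdateExpression": "SET Stock = Stock - :n",
            "ConditionExpression": "Stock >= :n",
            "ExpressionAttributeValues": {":n": {"N": "1"}},
        }},
    ])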
Question 53 of 65
53. Question
A developer intends to create a REST API using Amazon API Gateway and AWS Lambda. The developer will be responsible for managing three distinct environments: development, test, and production. How should the application be deployed with the least amount of resources possible?
Question 54 of 65
54. Question
A microservices application is deployed across several containers using Amazon Elastic Container Service (Amazon ECS). A developer wants to collect trace information across the microservices and visualize the microservices architecture in order to optimize performance. Which solution will satisfy these criteria?
Question 55 of 65
55. Question
A developer has been asked to write an AWS Lambda function that is invoked whenever items in an Amazon DynamoDB table are updated. The function has been built, and the Lambda execution role has been granted the necessary permissions. Although Amazon DynamoDB Streams has been enabled for the table, the function is still not executing. Which option would allow the Lambda function to be triggered by DynamoDB table updates?
Question 56 of 65
56. Question
An AWS Lambda function needs read/write access to an Amazon S3 bucket and to an Amazon DynamoDB table. The appropriate IAM policy is already in place. How can the Lambda function be granted access to the S3 bucket and the DynamoDB table in the MOST secure manner possible?
Explanation
Creating a role and granting access through it is most secure: create an IAM role for the Lambda function, attach the existing IAM policy to the role, and attach the role to the Lambda function (option B).
Question 57 of 65
57. Question
A nightly batch process populates a DynamoDB table with 1 million new entries. The records are required for one hour, and the table must be completely emptied by the next night's batch operation. Which strategy is the MOST efficient and cost-effective for providing an empty table?
Correct
“Deleting an entire table is significantly more efficient than removing items one-by-one, which essentially doubles the write throughput as you do as many delete operations as put operations“
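A boto3 sketch of the drop-and-recreate approach; the table name, key schema, and billing mode are placeholders for whatever the nightly job actually uses.

    import boto3

    ddb = boto3.client("dynamodb")
    ddb.delete_table(TableName="nightly-batch")
    ddb.get_waiter("table_not_exists").wait(TableName="nightly-batch")

    # Recreate an empty table with the same (hypothetical) schema.
    ddb.create_table(
        TableName="nightly-batch",
        AttributeDefinitions=[{"AttributeName": "Id", "AttributeType": "S"}],
        KeySchema=[{"AttributeName": "Id", "KeyType": "HASH"}],
        BillingMode="PAY_PER_REQUEST",
    )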
Question 58 of 65
58. Question
Two containerized microservices are hosted on Amazon Elastic Container Service (Amazon ECS) with Amazon EC2 instances. The first microservice reads from an Amazon Aurora database instance on Amazon RDS, while the second microservice reads a table from Amazon DynamoDB. How can the minimum necessary permissions be granted to each microservice?
Explanation
ECS_ENABLE_TASK_IAM_ROLE should be set to true in the ECS agent configuration file on the EC2 instances. Read-only IAM database roles can then be assumed by the individual ECS tasks.
Question 59 of 65
59. Question
A developer is using the AWS CLI, but it is timing out when performing list commands on a large number of resources. How can this time-out be avoided?
Question 60 of 65
60. Question
On Amazon EC2, a developer is developing an application. During testing, the developer encountered an "Access Denied" error on many API requests to AWS services. The developer must alter the permissions previously granted to the instance. How can these needs be accomplished with the fewest possible adjustments and the least downtime?
Question 61 of 65
61. Question
A developer is developing a Lambda function to create and export a file. While the function is running, it needs 100 MB of temporary storage for transient files. These files are no longer required after the function has completed. How can the developer manage the temporary files most efficiently?
Question 62 of 65
62. Question
A developer has created a customer-facing web application that runs on an Amazon EC2 instance. Every request made to the application is logged. Normally, the application operates without incident, but a traffic surge creates numerous logs, causing the disk to fill up and eventually run out of space. According to company policy, all historical logs must be consolidated for analysis. Which long-term remedy should the developer use to avoid a recurrence of the issue?
Question 63 of 65
63. Question
An existing serverless application processes uploaded image files. At the moment, the process is implemented using a single Lambda function that accepts an image file, processes it, and saves it in Amazon S3. The application's users now demand thumbnail generation. Users want to minimize the time required to complete image uploads. How can thumbnail generation be integrated into the application while meeting user expectations and requiring minimal modifications to the existing code?
Question 64 of 65
64. Question
A developer is developing an AWS Lambda function to dynamically produce and publish a weekly newsletter to 100,000 subscribers. This mail includes both static and dynamic content. The developer needs a highly scalable and fast storage location for the images that will be hyperlinked throughout the newsletter. Where should the developer store these images?