AWS Certified Data Analytics Specialty - Practice Test 5
Important note: open the reference documentation links in a new tab (right-click and choose Open in New Tab).
Question 1 of 60
1. Question
You need to move a Redshift cluster from one AWS account to another. Which of the following is the first step you would carry out in the entire process?
Correct answer: Option 1
Explanation: The AWS documentation mentions the following on the transfer To manually migrate an Amazon Redshift cluster to another AWS account, follow these steps: 1. Create a manual snapshot of the cluster you want to migrate. 2. Manage snapshot access to authorize another AWS account to view and restore the snapshot. 3. If you need to copy a snapshot to another region, you must first enable cross-region snapshots. 4. In the destination AWS account, restore the shared snapshot from the Snapshots page of the Amazon Redshift console. For more information on the transfer, please refer to the below URL: https://aws.amazon.com/premiumsupport/knowledge-center/account-transfer-redshift/
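As a rough sketch (not part of the official explanation) of the first two steps, the snippet below uses the Python SDK (boto3); the cluster identifier, snapshot name, and destination account ID are placeholders.

import boto3

redshift = boto3.client("redshift")

# Step 1: take a manual snapshot of the source cluster.
redshift.create_cluster_snapshot(
    SnapshotIdentifier="migration-snapshot",   # placeholder snapshot name
    ClusterIdentifier="source-cluster",        # placeholder cluster identifier
)

# Wait until the snapshot is available before sharing it.
redshift.get_waiter("snapshot_available").wait(SnapshotIdentifier="migration-snapshot")

# Step 2: authorize the destination account to view and restore the snapshot.
redshift.authorize_snapshot_access(
    SnapshotIdentifier="migration-snapshot",
    AccountWithRestoreAccess="123456789012",   # placeholder destination account ID
)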
Question 2 of 60
2. Question
Which of the following API commands can be used to put data into a Kinesis stream for synchronous processing?
Correct answer: Option 2
Explanation: The AWS Documentation mentions the following on the Put Record API Writes a single data record into an Amazon Kinesis stream. Call PutRecord to send data into the stream for real-time ingestion and subsequent processing, one record at a time. Each shard can support writes up to 1,000 records per second, up to a maximum data write total of 1 MB per second. For more information on the Put Record API , please visit the below URL: http://docs.aws.amazon.com/kinesis/latest/APIReference/API_PutRecord.html
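For illustration only, a minimal boto3 sketch of PutRecord is shown below; the stream name, payload, and partition key are placeholders.

import json

import boto3

kinesis = boto3.client("kinesis")

# Write a single record synchronously; the response carries the shard and sequence number.
response = kinesis.put_record(
    StreamName="example-stream",
    Data=json.dumps({"event": "click", "page": "/home"}).encode("utf-8"),
    PartitionKey="user-42",   # determines which shard the record is routed to
)
print(response["ShardId"], response["SequenceNumber"])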
Question 3 of 60
3. Question
Your company has a website hosted in AWS. There is a requirement to analyze the clickstream data for the website in real time. Which of the following can be used to fulfil this requirement?
Correct answer: Option 3
Explanation: The AWS documentation mentions the following on the Amazon Kinesis service Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information. Amazon Kinesis offers key capabilities to cost effectively process streaming data at any scale, along with the flexibility to choose the tools that best suit the requirements of your application. With Amazon Kinesis, you can ingest real-time data such as application logs, website clickstreams, IoT telemetry data, and more into your databases, data lakes and data warehouses, or build your own real-time applications using this data. Amazon Kinesis enables you to process and analyze data as it arrives and respond in real-time instead of having to wait until all your data is collected before the processing can begin. For more information on Amazon Kinesis, please refer to the below URL: https://aws.amazon.com/kinesis/
Question 4 of 60
4. Question
Your company has a website hosted in AWS. There is a requirement to analyze the clickstream data for the website in real time. Which of the following can be used to fulfil this requirement?
Correct answer: Option 3
Explanation: The AWS documentation mentions the following on the Amazon Kinesis service Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information. Amazon Kinesis offers key capabilities to cost effectively process streaming data at any scale, along with the flexibility to choose the tools that best suit the requirements of your application. With Amazon Kinesis, you can ingest real-time data such as application logs, website clickstreams, IoT telemetry data, and more into your databases, data lakes and data warehouses, or build your own real-time applications using this data. Amazon Kinesis enables you to process and analyze data as it arrives and respond in real-time instead of having to wait until all your data is collected before the processing can begin. For more information on Amazon Kinesis, please refer to the below URL: https://aws.amazon.com/kinesis/
Question 5 of 60
5. Question
There is a requirement to perform SQL querying along with complex queries on HDFS and S3 file systems. Which of the below tools can fulfil this requirement?
Correct answer: Option 2
Explanation: The AWS documentation mentions the following on AWS Presto Presto is an open-source distributed SQL query engine optimized for low-latency, ad-hoc analysis of data. It supports the ANSI SQL standard, including complex queries, aggregations, joins, and window functions. Presto can process data from multiple data sources including the Hadoop Distributed File System (HDFS) and Amazon S3. For more information on AWS Presto, please refer to the below URL: https://aws.amazon.com/emr/details/presto/
Question 6 of 60
6. Question
There is a requirement to perform SQL querying along with complex queries on HDFS and S3 file systems. Which of the below tools can fulfil this requirement?
Correct answer: Option 2
Explanation: The AWS documentation mentions the following on AWS Presto Presto is an open-source distributed SQL query engine optimized for low-latency, ad-hoc analysis of data. It supports the ANSI SQL standard, including complex queries, aggregations, joins, and window functions. Presto can process data from multiple data sources including the Hadoop Distributed File System (HDFS) and Amazon S3. For more information on AWS Presto, please refer to the below URL: https://aws.amazon.com/emr/details/presto/
Question 7 of 60
7. Question
Which of the following services can be used for monitoring and auditing S3 buckets? Choose 2 answers from the options given below.
Correct answers: Options A and C
The two primary services used for monitoring and auditing S3 buckets are:
A. CloudTrail: This service is specifically designed to log all API calls made to AWS services, including actions taken on S3 buckets. It provides detailed information about who made the request, what action was performed (e.g., PutObject, GetObject, DeleteObject), and the timestamp of the event. CloudTrail logs can be incredibly valuable for monitoring access patterns, identifying unusual activity, and troubleshooting issues.
C. CloudWatch Logs: While not directly an S3-specific service, CloudWatch Logs can be used in conjunction with S3 Server Access Logging to capture detailed information about access requests made directly to S3 buckets. S3 Server Access Logging allows you to configure S3 to store access logs in a CloudWatch Logs log group, where you can then analyze them using CloudWatch insights or integrate them with other security and monitoring tools.
Here’s why the other options are not as suitable:
B. AWS Config: This service primarily focuses on configuration changes made to AWS resources, including S3 buckets, but it doesn’t provide comprehensive access logging or monitoring of actions performed on those resources.
D. EMR: This service (Elastic MapReduce) is primarily used for running large-scale data processing pipelines on Amazon Web Services and isn’t designed for monitoring or auditing S3 buckets specifically.
By combining CloudTrail’s API call logging with S3 Server Access Logging and CloudWatch Logs for detailed access information, you can achieve comprehensive monitoring and auditing of your S3 buckets, ensuring optimal security and compliance posture.
Explanation: The AWS Documentation mentions the following for the monitoring and auditing services available in AWS
Amazon CloudWatch is a monitoring service for AWS cloud resources and the applications you run on AWS. You can use Amazon CloudWatch to collect and track metrics, collect and monitor log files, set alarms, and automatically react to changes in your AWS resources.
For more information on AWS Cloudwatch please refer to the below URL: https://aws.amazon.com/cloudwatch/
AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudTrail provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. This event history simplifies security analysis, resource change tracking, and troubleshooting.
For more information on AWS Cloudtrail please refer to the below URL: https://aws.amazon.com/cloudtrail/
Question 8 of 60
8. Question
You have to do a security audit on an EMR cluster. Which of the following options need to be incorporated in an EMR cluster for better security and should be enabled? Choose 3 answers from the options given below.
Correct answers: Options 1, 2, and 3
Explanation: The AWS Documentation mentions the following for encryption of data at rest and in transit when using the EMR service:
Data at rest
Data residing on Amazon S3: S3 client-side encryption with EMR
Data residing on disk: the Amazon EC2 instance store volumes (except boot volumes) and the attached Amazon EBS volumes of cluster instances are encrypted using Linux Unified Key Setup (LUKS)
Data in transit
Data in transit from EMR to S3, or vice versa: S3 client-side encryption with EMR
Data in transit between nodes in a cluster: in-transit encryption via Secure Sockets Layer (SSL) for MapReduce and Simple Authentication and Security Layer (SASL) for Spark shuffle encryption
Data being spilled to disk or cached during a shuffle phase: Spark shuffle encryption or LUKS encryption
For more information on securing EMR, please refer to the below URL: https://aws.amazon.com/blogs/big-data/secure-amazon-emr-with-encryption/
Question 9 of 60
9. Question
Which of the following is not a performance factor when it comes to migrating databases using the AWS DB Migration Service?
Correct answer: Option 3
Explanation: The AWS Documentation mentions the following as performance factors for AWS DB Migration service
A number of factors affect the performance of your AWS DMS migration:
Resource availability on the source
The available network throughput
The resource capacity of the replication server
The ability of the target to ingest changes
The type and distribution of source data
The number of objects to be migrated
For more information on the best practises please refer to the below URL: http://docs.aws.amazon.com/dms/latest/userguide/CHAP_BestPractices.html
Question 10 of 60
10. Question
There is a requirement to load a lot of data from your on-premises network onto AWS Redshift. Which of the following can be used for this data transfer? Choose 2 answers from the options given below.
Correct answers: Options 2 and 3
Hint
Explanation: The AWS documentation mentions the following about the respective services: With a Snowball, you can transfer hundreds of terabytes or petabytes of data between your on-premises data centers and Amazon Simple Storage Service (Amazon S3). AWS Snowball uses Snowball appliances and provides powerful interfaces that you can use to create jobs, transfer data, and track the status of your jobs through to completion. By shipping your data in Snowballs, you can transfer large amounts of data at a significantly faster rate than if you were transferring that data over the Internet, saving you time and money. AWS Direct Connect links your internal network to an AWS Direct Connect location over a standard 1-gigabit or 10-gigabit Ethernet fiber-optic cable. One end of the cable is connected to your router, the other to an AWS Direct Connect router. With this connection in place, you can create virtual interfaces directly to public AWS services (for example, to Amazon S3) or to Amazon VPC, bypassing Internet service providers in your network path. For more information on Direct Connect, please refer to the below URL: http://docs.aws.amazon.com/directconnect/latest/UserGuide/Welcome.html For more information on AWS Snowball, please refer to the below URL: http://docs.aws.amazon.com/snowball/latest/ug/whatissnowball.html
Question 11 of 60
11. Question
Which of the following can be done to ensure the right compression settings are used for a Redshift table, if the compression settings are being entered manually?
Correct answer: Option 2
Explanation: The AWS Documentation mentions the following You might choose to apply compression encodings manually if the new table shares the same data characteristics as another table, or if in testing you discover that the compression encodings that are applied during automatic compression are not the best fit for your data. If you choose to apply compression encodings manually, you can run the ANALYZE COMPRESSION command against an already populated table and use the results to choose compression encodings. For more information on compressing data in Redshift please refer to the below URL: http://docs.aws.amazon.com/redshift/latest/dg/t_Compressing_data_on_disk.html
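As an illustrative sketch only, assuming the Redshift Data API is available for the cluster, ANALYZE COMPRESSION could be run as follows; the cluster, database, user, and table names are placeholders.

import boto3

rsd = boto3.client("redshift-data")

# Ask Redshift to recommend column encodings for an already populated table.
resp = rsd.execute_statement(
    ClusterIdentifier="example-cluster",   # placeholder
    Database="dev",                        # placeholder
    DbUser="awsuser",                      # placeholder
    Sql="ANALYZE COMPRESSION public.sales;",
)
print("statement id:", resp["Id"])  # poll describe_statement / get_statement_result for the report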
Question 12 of 60
12. Question
You currently have a Redshift Cluster defined in AWS. The data is currently unencrypted in nature. You have now decided that the cluster needs to have encrypted data. How can you achieve this? Choose 2 answers from the options given below. Each answer forms part of the solution
Correct answers: Options 1 and 2
Explanation: The AWS Documentation mentions the following You enable encryption when you launch a cluster. To migrate from an unencrypted cluster to an encrypted cluster, you first unload your data from the existing, source cluster. Then you reload the data in a new, target cluster with the chosen encryption setting For more information on migrating to an encrypted cluster, please visit the below URL: https://docs.aws.amazon.com/redshift/latest/mgmt/migrating-to-an-encrypted-cluster.html
Question 13 of 60
13. Question
A third-party auditor is being brought in to review security processes and configurations for all of a company’s AWS accounts. Currently, the company does not use any on-premises identity provider. Instead, they rely on IAM accounts in each of their AWS accounts. The auditor needs read-only access to all AWS resources for each AWS account. Given the requirements, what is the best security method for architecting access for the security auditor? Choose the correct answer from the options below.
Correct answer: Option 3
Explanation: The AWS Documentation mentions the following You can use roles to delegate access to users, applications, or services that don’t normally have access to your AWS resources. For example, you might want to grant users in your AWS account access to resources they don’t usually have, or grant users in one AWS account access to resources in another account. Or you might want to allow a mobile app to use AWS resources, but not want to embed AWS keys within the app (where they can be difficult to rotate and where users can potentially extract them). Sometimes you want to give AWS access to users who already have identities defined outside of AWS, such as in your corporate directory. Or, you might want to grant access to your account to third parties so that they can perform an audit on your resources. For more information on IAM roles please refer to the below URL: http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html
Question 14 of 60
14. Question
Your application currently uses DynamoDB as the data store. You also have a test environment where you perform load tests on your application. There is a constant need to reset the data in the DynamoDB tables. How can this be achieved? Choose 2 answers from the options below. Each answer forms part of the solution.
Correct answers: Options 3 and 4
Explanation: The AWS documentation mentions the following You can use AWS Data Pipeline to export data from a DynamoDB table to a file in an Amazon S3 bucket. You can also use the console to import data from Amazon S3 into a DynamoDB table, in the same AWS region or in a different region. For more information on DynamoDB Data Pipeline, please refer to the below URL: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBPipeline.html
Question 15 of 60
15. Question
You have a series of locations in S3 where files need to be copied onto AWS Redshift. Which of the following can be used to specify the location of the files that need to be copied?
Correct answer: Option 3
Explanation: The AWS Documentation mentions the following You can use a manifest to ensure that the COPY command loads all of the required files, and only the required files, for a data load. Instead of supplying an object path for the COPY command, you supply the name of a JSON-formatted text file that explicitly lists the files to be loaded For more information on the manifest file, please visit the below URL: https://docs.aws.amazon.com/redshift/latest/dg/loading-data-files-using-manifest.html
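A minimal sketch of the idea is shown below, with placeholder bucket, table, and IAM role names: the manifest explicitly lists the files, and the COPY command references the manifest with the MANIFEST keyword.

import json

# Manifest listing the exact files to load (keys are placeholders).
manifest = {
    "entries": [
        {"url": "s3://example-bucket/data/part-0000.csv", "mandatory": True},
        {"url": "s3://example-bucket/data/part-0001.csv", "mandatory": True},
    ]
}
print(json.dumps(manifest, indent=2))

# COPY points at the manifest object instead of an object prefix.
copy_sql = """
COPY sales
FROM 's3://example-bucket/manifests/load.manifest'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
FORMAT AS CSV
MANIFEST;
"""
print(copy_sql)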
Question 16 of 60
16. Question
In order to efficiently insert or update data into a Redshift table, which of the following must be carried out?
Correct answer: Option 1
Explanation: The AWS Documentation mentions the following You can efficiently add new data to an existing table by using a combination of updates and inserts from a staging table. While Amazon Redshift does not support a single merge, or upsert, command to update a table from a single data source, you can perform a merge operation by creating a staging table and then using one of the methods described in this section to update the target table from the staging table. For more information on inserting or updating data in Redshift please refer to the below URL: http://docs.aws.amazon.com/redshift/latest/dg/t_updating-inserting-using-staging-tables-.html
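The statements below sketch the documented staging-table merge pattern with placeholder table, bucket, and role names; they are illustrative, not a complete solution.

# Upsert via a staging table: load new data, delete matching rows, then insert.
merge_sql = """
BEGIN;

CREATE TEMP TABLE stage (LIKE sales);

COPY stage
FROM 's3://example-bucket/updates/'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
FORMAT AS CSV;

DELETE FROM sales USING stage WHERE sales.sale_id = stage.sale_id;
INSERT INTO sales SELECT * FROM stage;

END;
"""
print(merge_sql)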
Question 17 of 60
17. Question
You currently have Kinesis streams that are attached to an application. There are separate streams for separate consumers or clients. Each client needs to be billed separately and have a separate invoice. How can this be achieved?
Correct answer: Option 4
Explanation: The AWS Documentation mentions the following You can assign your own metadata to streams you create in Amazon Kinesis Data Streams in the form of tags. A tag is a key-value pair that you define for a stream. Using tags is a simple yet powerful way to manage AWS resources and organize data, including billing data For more information on tagging streams, please visit the below URL: https://docs.aws.amazon.com/streams/latest/dev/tagging.html
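For example (a hedged boto3 sketch with placeholder names), each client's stream could carry a cost-allocation tag like this:

import boto3

kinesis = boto3.client("kinesis")

# Tag the stream so its usage can be broken out per client in billing reports.
kinesis.add_tags_to_stream(
    StreamName="client-a-stream",              # placeholder stream name
    Tags={"Client": "client-a", "CostCenter": "1001"},
)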
Question 18 of 60
18. Question
You have created a DynamoDB table for an application that needs to support thousands of users. You need to ensure that each user can only access their own data in a particular table. Many users already have accounts with a third-party identity provider, such as Facebook, Google, or Login with Amazon. How would you implement this requirement? Choose 2 answers from the options given below.
Correct answers: Options 2 and 3
Explanation: The AWS Documentation mentions the following With web identity federation, you don’t need to create custom sign-in code or manage your own user identities. Instead, users of your app can sign in using a well-known identity provider (IdP) —such as Login with Amazon, Facebook, Google, or any other OpenID Connect (OIDC)-compatible IdP, receive an authentication token, and then exchange that token for temporary security credentials in AWS that map to an IAM role with permissions to use the resources in your AWS account. Using an IdP helps you keep your AWS account secure, because you don’t have to embed and distribute long-term security credentials with your application. For more information on Web Identity federation, please visit the below URL: http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_oidc.html
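A simplified boto3 sketch of the token exchange is shown below; the role ARN and the token returned by the identity provider are placeholders, and the per-user restriction itself is expressed in the role's IAM policy.

import boto3

sts = boto3.client("sts")

# Exchange the IdP token for temporary credentials mapped to an IAM role.
creds = sts.assume_role_with_web_identity(
    RoleArn="arn:aws:iam::123456789012:role/PerUserDynamoDBAccess",  # placeholder
    RoleSessionName="app-user-42",
    WebIdentityToken="<token-from-identity-provider>",               # placeholder
)["Credentials"]

# Use the temporary credentials for DynamoDB calls scoped by the role's policy.
dynamodb = boto3.client(
    "dynamodb",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)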
Question 19 of 60
19. Question
Which of the following features in AWS IoT Core allows you to create a persistent, virtual version, or “shadow,” of each device?
Correct answer: Option 3
Explanation: The AWS Documentation mentions the following With AWS IoT Core, you can create a persistent, virtual version, or “shadow,” of each device that includes the device’s latest state so that applications or other devices can read messages and interact with the device. The Device Shadows persist the last reported state and desired future state of each device even when the device is offline. You can retrieve the last reported state of a device or set a desired future state through the API or using the rules engine. For more information on AWS IoT Core please refer to the below URL: https://aws.amazon.com/iot-core/features/
Question 20 of 60
20. Question
Which of the following statements on resizing Redshift clusters is false?
Correct answer: Option 3
Explanation: The AWS Documentation mentions the following When you start the resize operation, Amazon Redshift puts the existing cluster into read-only mode until the resize finishes. During this time, you can only run queries that read from the database; you cannot run any queries that write to the database, including read-write queries. For more information on resizing clusters please refer to the below URL: http://docs.aws.amazon.com/redshift/latest/mgmt/rs-resize-tutorial.html
Question 21 of 60
21. Question
Which of the following IAM best practices is relevant to the phrase “granting only the permissions required to perform a task”?
Correct answer: Option 2
Explanation: The AWS Documentation mentions the following When you create IAM policies, follow the standard security advice of granting least privilege—that is, granting only the permissions required to perform a task. Determine what users need to do and then craft policies for them that let the users perform only those tasks. For more information on IAM best practises, please visit the below URL: http://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html
Question 22 of 60
22. Question
When configuring AWS Kinesis streams as the source for a Kinesis Data Firehose delivery stream, which of the following API calls can be used to add data to the Kinesis Data Firehose delivery stream? Choose 2 answers from the options given below.
Correct answers: Options 1 and 2
Explanation: The AWS Documentation mentions the following When you configure a Kinesis stream as the source of a Kinesis Data Firehose delivery stream, the Kinesis Data Firehose PutRecord and PutRecordBatch operations are disabled. To add data to your Kinesis Data Firehose delivery stream in this case, use the Kinesis Data Streams PutRecord and PutRecords operations. For more information on using Kinesis streams as the source, please visit the below URL: http://docs.aws.amazon.com/firehose/latest/dev/writing-with-kinesis-streams.html
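As a hedged sketch with a placeholder stream name, writing to the underlying Kinesis data stream (rather than to the Firehose delivery stream) might look like this:

import json

import boto3

kinesis = boto3.client("kinesis")

# PutRecords batches multiple records into a single call to the data stream.
records = [
    {"Data": json.dumps({"n": i}).encode("utf-8"), "PartitionKey": str(i)}
    for i in range(10)
]
resp = kinesis.put_records(StreamName="example-stream", Records=records)
print("failed records:", resp["FailedRecordCount"])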
Question 23 of 60
23. Question
When working with DynamoDB tables, which of the following is a recommended best practice for getting the best throughput for your table?
Correct answer: Option 3
Explanation: The AWS Documentation mentions the following When it stores data, DynamoDB divides a table’s items into multiple partitions, and distributes the data primarily based upon the partition key value. Consequently, to achieve the full amount of request throughput you have provisioned for a table, keep your workload spread evenly across the partition key values. Distributing requests across partition key values distributes the requests across partitions. For more information on DynamoDB Table Guidelines, please visit the below URL: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GuidelinesForTables.html
Question 24 of 60
24. Question
When using IoT-enabled devices to work with AWS IoT, which of the following is a recommendation?
Correct answer: Option 2
Explanation: The AWS Documentation mentions the following: We recommend that all things that connect to AWS IoT have an entry in the thing registry. The thing registry stores information about a thing and the certificates that are used by the thing to secure communication with AWS IoT. Options A, B and D are mandatory requirements rather than recommendations. For more information on how AWS IoT works, please visit the below URL: http://docs.aws.amazon.com/iot/latest/developerguide/aws-iot-how-it-works.html
Question 25 of 60
25. Question
Which of the following can be used along with Amazon EMR to perform SQL-like queries on the data stored in EMR?
Correct answer: Option 1
Explanation: The AWS Documentation mentions the following Using Hive with Amazon EMR, you can implement sophisticated data-processing applications with a familiar SQL-like language and easy to use tools available with Amazon EMR. With Amazon EMR, you can turn your Hive applications into a reliable data warehouse to execute tasks such as data analytics, monitoring, and business intelligence tasks For more information on EMR please see the below link https://aws.amazon.com/emr/faqs/
Question 26 of 60
26. Question
You need to use the graphing tools available in Amazon Quicksight. Which of the following would you use for comparing measure values over time?
Correct answer: Option 4
Explanation: The AWS Documentation mentions the following Use line charts to compare changes in values for one or more measures or dimensions over a period of time. Line charts differ from area line charts in that each value is represented by a line instead of a colored area of the chart. For more information on Quicksight Line charts please see the below link http://docs.aws.amazon.com/quicksight/latest/user/line-chart.html
Question 27 of 60
27. Question
You are using QuickSight to identify demand vs supply trends over multiple months. Which type of visualization do you choose?
Correct answer: Option 4
Explanation: The AWS Documentation mentions the following Use line charts to compare changes in values for one or more measures or dimensions over a period of time. Line charts differ from area line charts in that each value is represented by a line instead of a colored area of the chart. For more information on QuickSight Line charts, please visit the below URL: https://docs.aws.amazon.com/quicksight/latest/user/line-chart.html
Question 28 of 60
28. Question
You need to visualize data from Spark and Hive running on an EMR cluster. Which of the options is best for an interactive and collaborative notebook for data exploration?
Question 29 of 60
29. Question
Which of the following commands can be used to transfer the results of a query in Redshift to Amazon S3?
Correct answer: Option 3
Explanation: The AWS documentation mentions the following Unloads the result of a query to one or more files on Amazon Simple Storage Service (Amazon S3), using Amazon S3 server-side encryption (SSE-S3). You can also specify server-side encryption with an AWS Key Management Service key (SSE-KMS) or client-side encryption with a customer-managed key (CSE-CMK). For more information on the UNLOAD option, please refer to the below URL: http://docs.aws.amazon.com/redshift/latest/dg/r_UNLOAD.html
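An illustrative UNLOAD statement with placeholder table, bucket, and role names is sketched below.

# UNLOAD writes the query result to a set of files under the given S3 prefix.
unload_sql = """
UNLOAD ('SELECT order_id, amount FROM sales')
TO 's3://example-bucket/exports/sales_'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftUnloadRole'
FORMAT AS CSV;
"""
print(unload_sql)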
Question 30 of 60
30. Question
Which of the following is done to create machine learning models in the AWS ML service?
Correct answer: Option 2
Explanation: The AWS Documentation mentions the following Training data is used to create machine learning models. It consists of known data points from the past. You can use Amazon Machine Learning to extract patterns from this data, and use them to build machine learning models. For more information on Amazon Machine Learning, please visit the below URL: https://aws.amazon.com/aml/faqs/
Question 31 of 60
31. Question
You are planning on loading a huge amount of data into a Redshift cluster. You are not sure if the load will succeed or fail. Which of the below options can help you see if an error would occur during the load process?
Correct answer: Option 4
Explanation: The AWS Documentation mentions the following To validate the data in the Amazon S3 input files or Amazon DynamoDB table before you actually load the data, use the NOLOAD option with the COPY command. Use NOLOAD with the same COPY commands and options you would use to actually load the data. NOLOAD checks the integrity of all of the data without loading it into the database. The NOLOAD option displays any errors that would occur if you had attempted to load the data. For more information on validating input files , please visit the below URL: http://docs.aws.amazon.com/redshift/latest/dg/t_Validating_input_files.html
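For illustration, with placeholder names, the same COPY command can be dry-run with NOLOAD:

# NOLOAD validates the input files and reports errors without writing any rows.
validate_sql = """
COPY sales
FROM 's3://example-bucket/data/'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
FORMAT AS CSV
NOLOAD;
"""
print(validate_sql)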
Question 32 of 60
32. Question
Which of the following commands can be used to see the impact of a query on a Redshift Table?
Correct answer: Option 2
Explanation: The AWS Documentation mentions the following To understand the impact of the chosen sort key on query performance, use the EXPLAIN command. For more information on the EXPLAIN command please refer to the below URL: http://docs.aws.amazon.com/redshift/latest/dg/r_EXPLAIN.html
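A small illustrative example (table and column names are placeholders):

# EXPLAIN returns the query plan, including the join and aggregation steps,
# without running the query itself.
explain_sql = """
EXPLAIN
SELECT venue.venue_city, SUM(sales.price_paid)
FROM sales
JOIN venue ON sales.venue_id = venue.venue_id
GROUP BY venue.venue_city;
"""
print(explain_sql)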
Question 33 of 60
33. Question
Your company is planning on creating an application that is going to make use of Amazon Kinesis. The stream records are going to be stored directly in S3. Which of the following is a recommended design approach with regard to this requirement?
Correct answer: Option 4
Explanation: The AWS Documentation mentions the following To save stream records directly to storage services such as Amazon S3, Amazon Redshift, or Amazon Elasticsearch Service, you can use a Kinesis Data Firehose delivery stream instead of creating a consumer application. For more information on Kinesis consumers please refer to the below URL: http://docs.aws.amazon.com/streams/latest/dev/amazon-kinesis-consumers.html
Question 34 of 60
34. Question
In AWS CloudSearch, in order to ensure that data is searchable, in which formats should documents be represented? Choose 2 answers from the options given below.
Correct answers: Options 1 and 4
Explanation: The AWS Documentation mentions the following To make your data searchable, you represent it as a batch of documents in either JSON or XML and upload the batch to your search domain. Amazon CloudSearch then generates a search index from your document data according to your domain’s configuration options. You submit queries against this index to find the documents that meet specific search criteria For more information on how AWS Cloudsearch works, one can refer to the below URL: http://docs.aws.amazon.com/cloudsearch/latest/developerguide/how-search-works.html
Question 35 of 60
35. Question
Which of the following methods can be used to disable automated snapshots in Redshift?
Correct answer: Option 2
Explanation: The AWS Documentation mentions the following To disable automated snapshots, set the retention period to zero. If you disable automated snapshots, Amazon Redshift stops taking snapshots and deletes any existing automated snapshots for the cluster. For more information on working with snapshots, please visit the below URL: http://docs.aws.amazon.com/redshift/latest/mgmt/working-with-snapshots.html
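A minimal boto3 sketch, with a placeholder cluster identifier:

import boto3

redshift = boto3.client("redshift")

# A retention period of 0 disables automated snapshots for the cluster.
redshift.modify_cluster(
    ClusterIdentifier="example-cluster",       # placeholder
    AutomatedSnapshotRetentionPeriod=0,
)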
Question 36 of 60
36. Question
There is a requirement to export data from DynamoDB tables to S3. How can this be achieved in the easiest way possible?
Correct answer: Option B
Explanation:
The easiest way to export data from DynamoDB tables to S3 is:
B. Configure DynamoDB streams to copy data onto S3.
Here’s why:
DynamoDB streams: This is a built-in feature of DynamoDB that continuously captures all changes made to a table, including insertions, modifications, and deletions. It offers a serverless and efficient way to capture data updates and stream them to various destinations, including S3.
Ease of use: Compared to other options, setting up DynamoDB streams is relatively straightforward:
Enable streaming for the desired DynamoDB table.
Specify an S3 bucket as the destination for the stream data.
DynamoDB automatically delivers the change records in JSON format to the chosen S3 bucket.
Minimal configuration: You don’t need to write or manage any code, which simplifies the process and reduces the risk of errors.
While other options can also achieve data export, they might involve additional configuration or coding:
A. AWS Lambda function: While flexible, it requires writing and managing Lambda code to interact with both DynamoDB and S3, making it more complex than DynamoDB streams.
C. AWS Data Pipeline: Offers orchestration for data pipelines but adds another layer of complexity compared to using the built-in DynamoDB streams feature.
D. AWS Import/Export: Primarily designed for large-scale, one-time data transfers, not ideal for ongoing data export from DynamoDB.
Therefore, for the easiest way to export data from DynamoDB tables to S3, configuring DynamoDB streams is the most straightforward and efficient approach.
Question 37 of 60
37. Question
You are currently making use of the AWS Kinesis service for an application and are looking at ways to cut down the cost of the Kinesis-based application if possible. Which of the following can be used to achieve this? Choose 2 answers from the options given below. Each answer forms part of the solution.
Correct answers: Options 2 and 4
Explanation: The AWS Documentation mentions the following The purpose of resharding is to enable your stream to adapt to changes in the rate of data flow. You split shards to increase the capacity (and cost) of your stream. You merge shards to reduce the cost (and capacity) of your stream. You can also use metrics to determine which are your “hot” or “cold” shards, that is, shards that are receiving much more data, or much less data, than expected. You could then selectively split the hot shards to increase capacity for the hash keys that target those shards. Similarly, you could merge cold shards to make better use of their unused capacity. For more information on resharding strategies please refer to the below URL: http://docs.aws.amazon.com/streams/latest/dev/kinesis-using-sdk-java-resharding-strategies.html
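The boto3 sketch below illustrates both operations; the stream name, shard IDs, and hash key are placeholders.

import boto3

kinesis = boto3.client("kinesis")

# Merge two adjacent cold shards to reduce capacity (and cost).
kinesis.merge_shards(
    StreamName="example-stream",
    ShardToMerge="shardId-000000000000",
    AdjacentShardToMerge="shardId-000000000001",
)

# Split a hot shard to add capacity; the new starting hash key must fall
# within the parent shard's hash key range (the value here is illustrative).
kinesis.split_shard(
    StreamName="example-stream",
    ShardToSplit="shardId-000000000002",
    NewStartingHashKey="170141183460469231731687303715884105728",
)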
Question 38 of 60
38. Question
Which of the following is not a node type in Amazon EMR?
Correct answer: Option 2
Explanation: The AWS Documentation mentions the following. The node types in Amazon EMR are as follows:
Master node: A node that manages the cluster by running software components to coordinate the distribution of data and tasks among other nodes (collectively referred to as slave nodes) for processing. The master node tracks the status of tasks and monitors the health of the cluster.
Core node: A slave node with software components that run tasks and store data in the Hadoop Distributed File System (HDFS) on your cluster.
Task node: A slave node with software components that only run tasks. Task nodes are optional.
For more information on AWS EMR, please refer to the below URL: http://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-overview.html
Question 39 of 60
39. Question
You currently have a Kinesis stream configured in AWS. During the monitoring session you can see that certain records are being skipped. Where should you start searching to analyze the underlying issue?
Correct answer: Option 2
Explanation: The AWS Documentation mentions the following The most common cause of skipped records is an unhandled exception thrown from processRecords. The Kinesis Client Library (KCL) relies on your processRecords code to handle any exceptions that arise from processing the data records. Any exception thrown from processRecords is absorbed by the KCL For more information on troubleshooting AWS Kinesis consumers, please visit the below URL: http://docs.aws.amazon.com/streams/latest/dev/troubleshooting-consumers.html
Question 40 of 60
40. Question
Where is the metadata definition stored in the AWS Glue service?
Correct answer: Option 2
Explanation: The AWS Documentation mentions the following The metadata definition that represents your data. Whether your data is in an Amazon Simple Storage Service (Amazon S3) file, an Amazon Relational Database Service (Amazon RDS) table, or another set of data, a table defines the schema of your data. A table in the AWS Glue Data Catalog consists of the names of columns, data type definitions, and other metadata about a base dataset. For more information on the concepts of AWS Glue, one can refer to the below URL: http://docs.aws.amazon.com/glue/latest/dg/components-key-concepts.html
Question 41 of 60
41. Question
In an AWS EMR cluster, which of the following nodes is responsible for running the YARN ResourceManager service?
Correct answer: Option 1
Explanation: The AWS Documentation mentions the following The master node manages the cluster and typically runs master components of distributed applications. For example, the master node runs the YARN ResourceManager service to manage resources for applications, as well as the HDFS NameNode service For more information on planning instances for the EMR Cluster, please visit the below URL: https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-plan-instances.html
Question 42 of 60
42. Question
When using the Kinesis Producer Library, which of the following can be used to increase the throughput of the number of records sent? Choose 2 answers from the options given below.
Correct answers: Options 1 and 2
Explanation: The AWS Documentation mentions the following The KPL supports two types of batching: Aggregation – Storing multiple records within a single Kinesis Data Streams record. Collection – Using the API operation PutRecords to send multiple Kinesis Data Streams records to one or more shards in your Kinesis data stream. For more information on Kinesis KPL concepts please refer to the below URL: https://docs.aws.amazon.com/streams/latest/dev/kinesis-kpl-concepts.html
Question 43 of 60
43. Question
Which of the following can be used to monitor EMR Clusters and give reports of the performance of the cluster as a whole?
Correct answer: Option 1
Explanation: The AWS documentation mentions the following The Ganglia open source project is a scalable, distributed system designed to monitor clusters and grids while minimizing the impact on their performance. When you enable Ganglia on your cluster, you can generate reports and view the performance of the cluster as a whole, as well as inspect the performance of individual node instances. Ganglia is also configured to ingest and visualize Hadoop and Spark metrics For more information on Ganglia, please visit the below URL: http://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-ganglia.html
Question 44 of 60
44. Question
Which of the following mechanisms can be used to protect data at rest in the Simple Storage Service? Choose 2 answers from the options given below
Correct answers: Options 1 and 2
Explanation: The AWS Documentation mentions the following The following encryption methods can be used 1) Use Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3) – Each object is encrypted with a unique key employing strong multi-factor encryption. As an additional safeguard, it encrypts the key itself with a master key that it regularly rotates. 2) Use Server-Side Encryption with AWS KMS-Managed Keys (SSE-KMS) – Similar to SSE-S3, but with some additional benefits along with some additional charges for using this service. For more information on securing data at rest in S3 please refer to the below URL: http://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html
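A short boto3 sketch of both options, with placeholder bucket, key, and KMS key names:

import boto3

s3 = boto3.client("s3")

# SSE-S3: Amazon S3 manages the encryption keys (AES-256).
s3.put_object(
    Bucket="example-bucket",
    Key="reports/report.csv",
    Body=b"col1,col2\n1,2\n",
    ServerSideEncryption="AES256",
)

# SSE-KMS: encrypt with an AWS KMS key instead.
s3.put_object(
    Bucket="example-bucket",
    Key="reports/report-kms.csv",
    Body=b"col1,col2\n1,2\n",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId="alias/example-key",           # placeholder key alias
)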
Question 45 of 60
45. Question
Which of the following is the default input data format for Amazon EMR?
Correct answer: Option 3
Explanation: The AWS Documentation mentions the following The default input format for a cluster is text files with each line separated by a newline (\n) character, which is the input format most commonly used For more information on EMR input data , please visit the below URL: http://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-plan-input-accept.html
Question 46 of 60
46. Question
Which of the following commands in Redshift is efficient in loading large amounts of data?
Correct answer: Option 3
Explanation: The AWS Documentation mentions the following The COPY command loads data in parallel from Amazon S3, Amazon EMR, Amazon DynamoDB, or multiple data sources on remote hosts. COPY loads large amounts of data much more efficiently than using INSERT statements, and stores the data more effectively as well. For more information on the COPY command, please visit the below URL: http://docs.aws.amazon.com/redshift/latest/dg/c_best-practices-use-copy.html
Question 47 of 60
47. Question
Which of the following SQL function statements can be used in Redshift to specify a result when there are multiple conditions?
Correct answer: Option 4
Explanation: The AWS Documentation mentions the following The CASE expression is a conditional expression, similar to if/then/else statements found in other languages. CASE is used to specify a result when there are multiple conditions. For more information on the CASE Expression, please visit the below URL: http://docs.aws.amazon.com/redshift/latest/dg/r_CASE_function.html
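A small illustrative query (table and column names are placeholders):

# CASE returns a different label depending on which condition matches first.
case_sql = """
SELECT order_id,
       CASE
           WHEN amount >= 1000 THEN 'large'
           WHEN amount >= 100  THEN 'medium'
           ELSE 'small'
       END AS order_size
FROM sales;
"""
print(case_sql)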
Question 48 of 60
48. Question
Which of the following facilitates the sending and receiving of messages with AWS IoT devices?
Correct answer: Option 2
Explanation: The AWS documentation mentions the following The AWS IoT message broker is a publish/subscribe broker service that enables the sending and receiving of messages to and from AWS IoT. For more information on the AWS IoT Message broker please refer to the below URL: http://docs.aws.amazon.com/iot/latest/developerguide/iot-message-broker.html
Question 49 of 60
49. Question
The AWS IoT Core Device Gateway has support for which of the below protocols? Choose 2 answers from the options given below.
Correct answers: Options 2 and 3
Explanation: The AWS Documentation mentions the following The AWS IoT Device Gateway enables devices to securely and efficiently communicate with AWS IoT Core. The Device Gateway can exchange messages using a publication/subscription model, which enables one-to-one and one-to-many communications. With this one-to-many communication pattern IoT Core makes it possible for a connected device to broadcast data to multiple subscribers for a given topic. The Device Gateway supports MQTT, WebSockets, and HTTP 1.1 protocols. For more information on AWS IoT Core please refer to the below URL: https://aws.amazon.com/iot-core/features/
Question 50 of 60
50. Question
Which of the following can be used in AWS Redshift to prioritize selected short-running queries ahead of longer-running queries?
Correct answer: Option 2
Explanation: The AWS documentation mentions the following Short query acceleration (SQA) prioritizes selected short-running queries ahead of longer-running queries. SQA executes short-running queries in a dedicated space, so that SQA queries aren’t forced to wait in queues behind longer queries. With SQA, short-running queries begin executing more quickly and users see results sooner. For more information on designing queries in Redshift please refer to the below URL: http://docs.aws.amazon.com/redshift/latest/dg/c_designing-queries-best-practices.html For more information on short queries acceleration, please refer to the below URL: https://docs.aws.amazon.com/redshift/latest/dg/wlm-short-query-acceleration.html
Question 51 of 60
51. Question
You need to filter and transform incoming messages coming from a smart sensor you have connected with AWS. Once messages are received, you need to store them as time series data in DynamoDB. Which AWS service can you use?
Correct answer: Option 3
Explanation: The AWS documentation mentions the following. Rules give your devices the ability to interact with AWS services. Rules are analyzed and actions are performed based on the MQTT topic stream. You can use rules to support tasks like these:
Augment or filter data received from a device.
Write data received from a device to an Amazon DynamoDB database.
Save a file to Amazon S3.
For more information on the IoT Rules engine, please refer to the below URL: http://docs.aws.amazon.com/iot/latest/developerguide/iot-rules.html
Question 52 of 60
52. Question
Which of the following is incorrect when it comes to the Kinesis client library?
Correct answer: Option 2
Explanation: The AWS Documentation mentions the following Note that the KCL is different from the Kinesis Data Streams API that is available in the AWS SDKs. The Kinesis Data Streams API helps you manage many aspects of Kinesis Data Streams (including creating streams, resharding, and putting and getting records), while the KCL provides a layer of abstraction specifically for processing data in a consumer role. For more information on developing consumers, please visit the below URL: http://docs.aws.amazon.com/streams/latest/dev/developing-consumers-with-kcl.html
Question 53 of 60
53. Question
Which of the following is the term given to data in machine learning for which you already know the target answers?
Correct answer: Option 1
Explanation: The AWS Documentation mentions the following ML problems start with data—preferably, lots of data (examples or observations) for which you already know the target answer. Data for which you already know the target answer is called labeled data. In supervised ML, the algorithm teaches itself to learn from the labeled examples that we provide For more information on labelled data, please visit the below URL: http://docs.aws.amazon.com/machine-learning/latest/dg/collecting-labeled-data.html
Question 54 of 60
54. Question
You work for a company that deals with credit card based transactions. You have to identify potential fraudulent credit card transactions using Amazon Machine Learning. You have been given historical labeled data that you can use to create your model. You will also need the ability to tune the model you pick. Which model type should you use for this sort of requirement?
Correct answer: Option 1
Explanation: The AWS Documentation mentions the following ML models for binary classification problems predict a binary outcome (one of two possible classes). To train binary classification models, Amazon ML uses the industry-standard learning algorithm known as logistic regression. For more information on the different types of machine learning models, please visit the below URL: https://docs.aws.amazon.com/machine-learning/latest/dg/types-of-ml-models.html
Question 55 of 60
55. Question
Which of the following services can be used for transformation of incoming source data in Amazon Kinesis Data Firehose?
Correct answer: Option 3
Explanation: The AWS Documentation mentions the following Kinesis Data Firehose can invoke your Lambda function to transform incoming source data and deliver the transformed data to destinations. You can enable Kinesis Data Firehose data transformation when you create your delivery stream. For more information on Amazon Kinesis Firehose transformations, please visit the below URL: http://docs.aws.amazon.com/firehose/latest/dev/data-transformation.html
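A minimal sketch of such a transformation Lambda is shown below; it simply upper-cases each record and follows the documented record contract (recordId, result, base64-encoded data).

import base64

def lambda_handler(event, context):
    """Kinesis Data Firehose transformation: upper-case each incoming record."""
    output = []
    for record in event["records"]:
        payload = base64.b64decode(record["data"]).decode("utf-8")
        transformed = payload.upper()
        output.append({
            "recordId": record["recordId"],   # must echo the incoming recordId
            "result": "Ok",                   # Ok | Dropped | ProcessingFailed
            "data": base64.b64encode(transformed.encode("utf-8")).decode("utf-8"),
        })
    return {"records": output}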
Question 56 of 60
56. Question
Your company has a set of web servers hosted on EC2 instances. There is a requirement to push the logs from these web servers onto a suitable storage device for subsequent analysis. Which of the following can be steps in an implementation process that satisfies these requirements?
Correct answers: Options 1 and 3
Explanation: The AWS Documentation mentions the following Kinesis Agent is a stand-alone Java software application that offers an easy way to collect and send data to Kinesis Data Firehose. The agent continuously monitors a set of files and sends new data to your Kinesis Data Firehose delivery stream. The agent handles file rotation, checkpointing, and retry upon failures. It delivers all of your data in a reliable, timely, and simple manner. It also emits Amazon CloudWatch metrics to help you better monitor and troubleshoot the streaming process. For more information on working with agents, please visit the below URL: http://docs.aws.amazon.com/firehose/latest/dev/writing-with-agents.html
Question 57 of 60
57. Question
Which of the following can be used to manage notebook documents using a web browser?
Correct answer: Option 1
Explanation: The AWS Documentation mentions the following Jupyter Notebook is a web application that allows you to manage notebook documents using a web browser. For more information on setting up jupyter, one can refer to the below URL: http://docs.aws.amazon.com/mxnet/latest/dg/setup-jupyter.html
Question 58 of 60
58. Question
When planning the instance types for EMR nodes, which of the following generally does not need a high configuration in terms of both compute and memory?
Correct answer: Option 1
Explanation: The AWS Documentation mentions the following In general, the master node type, which assigns tasks, doesn’t require an EC2 instance with much processing power; EC2 instances for the core node type, which process tasks and store data in HDFS, need both processing power and storage capacity; EC2 instances for the task node type, which don’t store data, need only processing power For more information on EMR planning guidelines please refer to the below URL: http://docs.aws.amazon.com/emr/latest/DeveloperGuide/emr-plan-instances-guidelines.html
Question 59 of 60
59. Question
You are planning on using AWS Data Pipeline to transfer data from DynamoDB to S3. The DynamoDB tables get populated by an application. The application generates tables based on orders made for particular products. How can you ensure that the Data Pipeline is triggered only when data is actually written to a DynamoDB table by the application?
Correct answer: Option 4
Explanation: The AWS Documentation mentions the following In AWS Data Pipeline, a precondition is a pipeline component containing conditional statements that must be true before an activity can run. For example, a precondition can check whether source data is present before a pipeline activity attempts to copy it. AWS Data Pipeline provides several pre-packaged preconditions that accommodate common scenarios, such as whether a database table exists, whether an Amazon S3 key is present, and so on. For more information on AWS Data Pipeline pre-conditions, please visit the below URL: http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-concepts-preconditions.html
Question 60 of 60
60. Question
You are trying to connect to the master node for your EMR cluster. Which of the following must be checked to ensure that the connection is successful?
Correct answer: Option 1
Explanation: The AWS Documentation mentions the following In an EMR cluster, the master node is an Amazon EC2 instance that coordinates the EC2 instances that are running as task and core nodes. The master node exposes a public DNS name that you can use to connect to it. By default, Amazon EMR creates security group rules for master and slave nodes that determine how you access the nodes For more information on connecting to EMR master node, please visit the below URL: https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-connect-master-node.html