Your results for "AWS Certified Cloud Practitioner Practice Test 14"
0 of 65 questions answered correctly
Your final score is: 0
You have attempted: 0
Number of correct questions: 0 (scored 0)
Number of incorrect questions: 0 (negative marks 0)
You can review your answers by clicking the “View Answers” option. Important note: open reference documentation links in a new tab (right-click and choose Open in New Tab).
Question 1 of 65
Which AWS services are always free? (Select TWO.)
AWS Control Tower – AWS Control Tower is always free. It sets up and governs a new, secure, multi-account AWS environment based on best practices established through AWS' experience working with thousands of enterprises as they move to the cloud. It automates the process of setting up a baseline multi-account AWS environment that is secure, well-architected, and ready to use.
AWS Elastic Beanstalk – Elastic Beanstalk is an easy-to-use service for deploying and running applications written in several languages, including Java, .NET, PHP, Node.js, Python, Ruby, and Go, as well as Docker, on familiar servers such as Apache, Nginx, Passenger, and IIS. AWS Elastic Beanstalk itself is free; you pay only for the underlying AWS resources (e.g., EC2, S3) that your application consumes.
Incorrect Options:
AWS Glue – AWS Glue is a fully managed extract, transform, and load (ETL) service that makes it easy to prepare and load data for analytics. AWS Glue is not always free; it has associated costs based on the resources consumed.
AWS Lambda – AWS Lambda is a serverless computing service that lets you run code without provisioning or managing servers. You can execute code in response to events, such as changes to data in an Amazon S3 bucket or an HTTP request, and you pay for the compute time your code consumes; it is not always free.
Amazon DynamoDB – Amazon DynamoDB is a key-value and document database that delivers single-digit-millisecond performance at any scale. DynamoDB is not entirely free; it offers a free tier, but costs accrue once the free tier is exceeded.
References: https://aws.amazon.com/free https://aws.amazon.com/controltower/pricing https://aws.amazon.com/elasticbeanstalk/pricing
Question 2 of 65
Which of the following are key principles of the Reliability pillar of the AWS Well-Architected Framework? (Select TWO.)
Use distributed architectures to tolerate failures – This principle refers to using distributed systems and architectures to increase the resilience of your applications to failure. By designing your system from multiple components that can fail independently without bringing down the entire system, you increase its resilience and improve its overall reliability.
Implement automated backup and disaster recovery processes – This principle emphasizes implementing automated backup and recovery processes to minimize the impact of outages and ensure that your systems can recover quickly from disruptions. This includes regularly backing up your data and configurations and implementing automated processes for restoring your systems after a disaster. Doing this improves reliability.
Incorrect Options:
Implement automation to reduce manual tasks – This principle aligns with the Operational Excellence pillar, focusing on improving efficiency and reducing human error. It is not a principle of the Reliability pillar.
Use optimization techniques to improve system performance – This principle relates to the Performance Efficiency pillar, which aims to optimize resource utilization and enhance system performance. Optimization techniques are not specific to the Reliability pillar.
Implement monitoring and logging to detect and diagnose problems – This is a principle of the Operational Excellence pillar, emphasizing effective monitoring, logging, and observability practices. It is not a principle of the Reliability pillar.
References: https://aws.amazon.com/architecture/well-architected https://docs.aws.amazon.com/wellarchitected/latest/reliability-pillar/reliability.html
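The "components that fail independently" idea can be sketched in a few lines of Python. The replica functions below are simulated stand-ins, not real AWS endpoints; the point is only that one failing component does not fail the whole request:

```python
# Failover sketch: try each independent replica in turn, so a single
# component failure does not bring down the operation.
def call_with_failover(replicas, request):
    """Try each replica in order; return the first successful response."""
    errors = []
    for replica in replicas:
        try:
            return replica(request)
        except ConnectionError as exc:
            errors.append(exc)          # this component failed; try the next
    raise RuntimeError(f"all {len(replicas)} replicas failed: {errors}")

def healthy(req):
    return f"ok:{req}"                  # simulated working replica

def broken(req):
    raise ConnectionError("replica down")   # simulated failed replica

print(call_with_failover([broken, healthy], "GET /"))   # survives one failure
```

Real systems spread such replicas across Availability Zones so failures are truly independent.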
Question 3 of 65
What is a key principle of the Cost Optimization pillar of the AWS Well-Architected Framework?
Optimize resource usage to reduce costs – This is a key principle of the Cost Optimization pillar of the AWS Well-Architected Framework. It emphasizes efficiently utilizing AWS resources to achieve cost savings without sacrificing performance or reliability, and it encourages organizations to continuously monitor resource usage, identify idle or underutilized resources, and take action to optimize their utilization. By implementing cost optimization practices such as rightsizing instances, leveraging auto scaling, using Reserved Instances or Savings Plans, and adopting serverless architectures, organizations can align their resource consumption with actual needs. This eliminates waste, avoids over-provisioning, and optimizes pricing models, leading to significant cost savings.
Incorrect Options:
Use the right type and size of resources for your workload – Using the right type and size of resources is an important consideration for cost optimization, but it is not the principle of optimizing resource usage to reduce costs. This option focuses on selecting appropriate resources; it does not address the need to continually optimize resource utilization and identify cost-saving opportunities.
Use the most expensive resources to ensure high performance – This is not a key principle of the Cost Optimization pillar. Cost optimization aims for the optimal balance between cost and performance, encouraging organizations to explore different resource options and architectures that deliver the required performance while maximizing cost savings.
Implement automation to reduce manual tasks – Implementing automation to reduce manual tasks is an important principle, but it is not tied to the Cost Optimization pillar. Automation can improve efficiency, reduce errors, and save time, but it is a broader architectural consideration that spans multiple pillars of the Well-Architected Framework, including Operational Excellence and Reliability.
References: https://aws.amazon.com/architecture/well-architected https://docs.aws.amazon.com/wellarchitected/latest/cost-optimization-pillar/cost-optimization.html
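As a rough illustration of the rightsizing practice mentioned above, the sketch below flags underutilized instances and estimates the monthly savings from downsizing them. The instance types, hourly prices, and the 40% utilization threshold are made-up placeholders for illustration, not real AWS rates:

```python
# Toy rightsizing check: flag instances whose average CPU utilization
# suggests a smaller, cheaper size would suffice.
HOURLY_PRICE = {"m5.xlarge": 0.192, "m5.large": 0.096}  # hypothetical USD/hour

def rightsizing_savings(instances, threshold=40.0, hours_per_month=730):
    """Estimate monthly savings from downsizing underutilized m5.xlarge instances."""
    savings = 0.0
    for inst in instances:
        if inst["type"] == "m5.xlarge" and inst["avg_cpu_pct"] < threshold:
            # Downsize one step: m5.xlarge -> m5.large
            savings += (HOURLY_PRICE["m5.xlarge"] - HOURLY_PRICE["m5.large"]) * hours_per_month
    return round(savings, 2)

fleet = [
    {"id": "i-01", "type": "m5.xlarge", "avg_cpu_pct": 12.0},  # underutilized
    {"id": "i-02", "type": "m5.xlarge", "avg_cpu_pct": 85.0},  # busy, leave alone
]
print(rightsizing_savings(fleet))
```

In practice, AWS Cost Explorer and AWS Compute Optimizer perform this kind of analysis against real utilization metrics.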
Question 4 of 65
What are the benefits of creating snapshots of Amazon EBS volumes to back up data? (Select TWO.)
Durability – Creating snapshots of Amazon Elastic Block Store (EBS) volumes provides the benefit of durability. When you create an EBS snapshot, AWS stores it in Amazon S3, which is designed for durability. Amazon S3 replicates data across multiple Availability Zones within a Region, ensuring that your snapshots are highly durable and protected against data loss. In case of failures or errors, you can rely on the durability of snapshots to recover your data and restore your EBS volumes to a previous state.
Cost-Effective – Creating snapshots of EBS volumes also offers the benefit of cost-effectiveness. Instead of creating full backups of your data, EBS snapshots use an incremental backup approach: when you create a snapshot, only the blocks changed since the last snapshot are stored. This reduces storage costs by eliminating the need to duplicate unchanged data. Additionally, snapshots are charged based on the compressed data size, allowing you to optimize costs by efficiently managing your snapshot storage.
Incorrect Options:
Elasticity – Creating snapshots of EBS volumes is not related to elasticity. Elasticity refers to the ability to scale resources up or down based on demand. While snapshots can be used to restore and provision EBS volumes as needed, the act of creating them does not inherently contribute to the elasticity of the system.
Scalability – Scalability is not a benefit of creating snapshots of EBS volumes. Snapshots primarily focus on data backup and recovery, allowing you to capture a point-in-time copy of your EBS volume. Scalability, on the other hand, relates to the ability to adjust the capacity or size of resources dynamically in response to changing demands.
Flexibility – Creating snapshots of EBS volumes does not provide the benefit of flexibility. Flexibility in the context of cloud computing typically refers to the ability to adapt, configure, and customize resources according to specific requirements. Snapshots do not inherently contribute to the flexibility of resource configuration or customization.
References: https://aws.amazon.com/ebs/snapshots
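The incremental-backup behavior described above can be mimicked in a short sketch. The block hashing here is an illustrative assumption made for the demo; real EBS snapshots track changed blocks internally and transparently:

```python
# Incremental-snapshot sketch: only blocks that changed since the previous
# snapshot are "stored", instead of a full copy every time.
import hashlib

def block_hashes(volume, block_size=4):
    """Split the volume into fixed-size blocks and hash each one."""
    blocks = [volume[i:i + block_size] for i in range(0, len(volume), block_size)]
    return [hashlib.sha256(b).hexdigest() for b in blocks]

def incremental_snapshot(volume, previous_hashes):
    """Return (stored_blocks, new_hashes): only changed blocks are kept."""
    current = block_hashes(volume)
    stored = {i: h for i, h in enumerate(current)
              if i >= len(previous_hashes) or previous_hashes[i] != h}
    return stored, current

v1 = b"AAAABBBBCCCC"
snap1, hashes1 = incremental_snapshot(v1, [])          # first snapshot: all 3 blocks
v2 = b"AAAAXXXXCCCC"                                   # only the middle block changed
snap2, hashes2 = incremental_snapshot(v2, hashes1)     # incremental: 1 block stored
print(len(snap1), len(snap2))
```

Because the second snapshot stores one block instead of three, repeated backups of a mostly unchanged volume stay cheap, which is exactly the cost-effectiveness argument above.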
Question 5 of 65
Which AWS service provides recent events to help you manage active events and shows proactive notifications to plan for scheduled activities?
AWS Personal Health Dashboard – The AWS Personal Health Dashboard provides recent events to help you manage active events and shows proactive notifications to plan for scheduled activities. It gives you a personalized view of the performance and availability of the AWS services you are using, with real-time information on service health events that may be impacting your resources. It notifies you of ongoing issues, scheduled maintenance, and other events that might require your attention; lets you view the status of AWS services in your account and access detailed information about events and their impacts; and delivers notifications via email or the AWS Management Console. This helps you stay informed about the health of your AWS services so you can plan and manage your resources effectively.
Incorrect Options:
Amazon Inspector – Amazon Inspector is a security assessment service that helps you analyze the security and compliance of applications deployed on AWS. It does not provide recent events or proactive notifications for managing active events or scheduled activities.
AWS Organizations – AWS Organizations allows you to centrally manage and govern multiple AWS accounts, helping you manage policies, control access, and simplify billing across your accounts. It does not provide recent events or proactive notifications related to active events or scheduled activities.
AWS OpsWorks – AWS OpsWorks is a configuration management service that helps you automate the deployment and management of applications. It provides capabilities for managing infrastructure and application deployments, but it does not offer recent events or proactive notifications for managing active events or scheduled activities.
References: https://aws.amazon.com/premiumsupport/technology/personal-health-dashboard
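The dashboard's split between active events and upcoming scheduled activities can be sketched as a simple triage over event records. The event dictionary shape below is a simplified assumption for illustration, not the actual AWS Health API response format:

```python
# Triage sketch: split health-style events into "active issues" you must
# manage now and "scheduled changes" you can plan for.
from datetime import datetime, timezone

def triage(events, now):
    """Return (active_ids, scheduled_ids) from a list of event dicts."""
    active, scheduled = [], []
    for e in events:
        if e["category"] == "scheduledChange" and e["start"] > now:
            scheduled.append(e["id"])       # future maintenance: plan ahead
        elif e.get("status") == "open":
            active.append(e["id"])          # ongoing issue: act now
    return active, scheduled

now = datetime(2024, 1, 10, tzinfo=timezone.utc)
events = [
    {"id": "ev-1", "category": "issue", "status": "open",
     "start": datetime(2024, 1, 9, tzinfo=timezone.utc)},
    {"id": "ev-2", "category": "scheduledChange", "status": "upcoming",
     "start": datetime(2024, 1, 20, tzinfo=timezone.utc)},
]
active, scheduled = triage(events, now)
print(active, scheduled)
```

Programmatic access to real event data of this kind is available through the AWS Health API for accounts with the appropriate support plan.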
Question 6 of 65
What is the customer's responsibility for security in the AWS cloud?
Correct
Maintaining client-side encryption – When using the AWS cloud, customers are responsible for maintaining client-side encryption. It is the customer's responsibility to ensure that data is encrypted, using their own encryption keys, before it is transmitted to AWS. By encrypting data on the client side, customers add an extra layer of security to sensitive information: even if the data is somehow compromised or accessed without authorization, it remains unreadable. AWS provides various services and tools to help customers implement and manage client-side encryption effectively, giving them more control over the protection of their information.
Incorrect Options:
Maintaining firewall configurations at a hardware level – This is not a customer responsibility in the AWS cloud. AWS manages the network infrastructure and provides security features such as security groups, network ACLs, and AWS WAF that customers can configure to control inbound and outbound traffic.
Securing infrastructure at data centers – This is not a customer responsibility. AWS is responsible for securing its data centers, including physical security, environmental controls, and operational practices.
Maintaining networking among hardware components – This is not a customer responsibility. AWS manages the underlying network infrastructure, including networking between hardware components, to ensure reliable and secure connectivity within its data centers.
References: https://aws.amazon.com/compliance/shared-responsibility-model
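The key idea, encrypt locally with a key the customer holds, then send only ciphertext to AWS, can be sketched as below. This is a toy demonstration cipher (a SHA-256 keystream XOR), not production cryptography; real applications would use a vetted library or the Amazon S3 Encryption Client, and the upload call in the comment is illustrative.

```python
import hashlib
import secrets

# Toy illustration of client-side encryption: data is encrypted locally,
# with a customer-held key, BEFORE anything is sent to AWS. Demonstration
# cipher only (SHA-256 keystream XOR) -- NOT secure for real use.

def xor_keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudorandom keystream of `length` bytes from key+nonce."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out.extend(hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(out[:length])

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = secrets.token_bytes(16)
    stream = xor_keystream(key, nonce, len(plaintext))
    return nonce + bytes(a ^ b for a, b in zip(plaintext, stream))

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ciphertext = blob[:16], blob[16:]
    stream = xor_keystream(key, nonce, len(ciphertext))
    return bytes(a ^ b for a, b in zip(ciphertext, stream))

key = secrets.token_bytes(32)            # key never leaves the customer's side
blob = encrypt(key, b"sensitive record")
# Only `blob` would be uploaded (e.g. s3.put_object(Body=blob, ...));
# AWS then stores ciphertext it cannot read without the customer's key.
assert decrypt(key, blob) == b"sensitive record"
```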
Question 7 of 65
7. Question
Which service should be used to create interactive graph applications using popular open-source APIs such as Gremlin?
Correct
Amazon Neptune – Amazon Neptune is a fully managed graph database service that makes it easy to build and run applications that work with highly connected datasets. It is built for storing billions of relationships and querying the graph with millisecond latency. Neptune supports the popular graph models Property Graph and RDF, and their respective query languages Apache TinkerPop Gremlin and SPARQL, allowing you to create interactive graph applications using familiar open-source APIs like Gremlin. For creating interactive graph applications with the Gremlin API, Amazon Neptune is the right choice.
Incorrect Options:
Amazon Redshift – Amazon Redshift is a fully managed data warehousing service. It offers fast, scalable analytics, allowing you to analyze large datasets using SQL queries. Redshift is optimized for data warehousing workloads and is designed for online analytical processing (OLAP) and business intelligence (BI) applications. It does not support creating interactive graph applications.
Amazon Aurora – Amazon Aurora is a fully managed relational database service compatible with MySQL and PostgreSQL, offering high performance, scalability, and durability, with automatic scaling, continuous backups, and replication. It is not designed for creating interactive graph applications using APIs such as Gremlin.
Amazon ElastiCache – Amazon ElastiCache makes it easy to deploy, operate, and scale an in-memory cache in the cloud, improving the performance of web applications by retrieving data from fast, managed, in-memory caches. It does not support building interactive graph applications using APIs like Gremlin.
References: https://aws.amazon.com/neptune
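As a sketch of what a Gremlin interaction looks like: Neptune exposes an HTTPS Gremlin endpoint that accepts a JSON body containing the traversal (most applications instead use a Gremlin driver such as gremlinpython over WebSocket). The endpoint host, graph labels, and property names below are hypothetical.

```python
import json

# Hypothetical Neptune cluster endpoint (placeholder hostname).
endpoint = "https://my-neptune-cluster.us-east-1.neptune.amazonaws.com:8182/gremlin"

# "Names of the people that 'alice' knows", expressed as a Gremlin traversal.
payload = {"gremlin": "g.V().has('person','name','alice').out('knows').values('name')"}

# This JSON body would be POSTed to the endpoint above.
body = json.dumps(payload)
print(endpoint)
print(body)
```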
Question 8 of 65
8. Question
Which AWS service allows you to store, manage, and deploy container images?
Correct
Amazon Elastic Container Registry (ECR) – Amazon Elastic Container Registry (ECR) is a fully managed container image registry that makes it easy for developers to store, manage, and deploy Docker container images. ECR is integrated with Amazon Elastic Container Service (ECS), simplifying your development-to-production workflow and making it the go-to service for storing, managing, and deploying container images.
Incorrect Options:
Amazon Elastic Kubernetes Service (EKS) – Amazon EKS is a fully managed service that provides Kubernetes, a popular open-source system for automating the deployment, scaling, and management of containerized applications. EKS runs containers from images but is not itself a registry for storing, managing, and deploying container images.
Amazon Elastic Compute Cloud (EC2) – Amazon EC2 provides secure, resizable compute capacity in the cloud. It allows you to run applications on the AWS infrastructure. It does not support storing, managing, and deploying container images.
Amazon Simple Storage Service (S3) – Amazon S3 provides scalable object storage for storing, archiving, backing up, and analyzing data. It is not designed to manage and deploy container images the way Amazon ECR is.
References: https://aws.amazon.com/ecr
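To make the ECR workflow concrete, the standard push procedure is: authenticate Docker against your private registry, tag the image with the registry URI, and push. The account ID, region, repository, and tag below are hypothetical placeholders; the registry URI format and the three CLI commands follow the documented ECR push procedure.

```python
# Sketch of the standard ECR push workflow (hypothetical account/region/repo).
account, region, repo, tag = "123456789012", "us-east-1", "my-app", "v1"

# Private ECR registries use the URI format <account>.dkr.ecr.<region>.amazonaws.com
registry = f"{account}.dkr.ecr.{region}.amazonaws.com"

commands = [
    # 1. Authenticate the Docker CLI against the registry.
    f"aws ecr get-login-password --region {region} "
    f"| docker login --username AWS --password-stdin {registry}",
    # 2. Tag the local image with the registry URI.
    f"docker tag {repo}:{tag} {registry}/{repo}:{tag}",
    # 3. Push the image to ECR.
    f"docker push {registry}/{repo}:{tag}",
]
for cmd in commands:
    print(cmd)
```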
Question 9 of 65
9. Question
A tech firm has developed its AWS Cloud infrastructure to manage its operations efficiently. The firm also has an established practice for ongoing improvement of associated processes. Which principle of the AWS Well-Architected Framework does this case illustrate?
Correct
Operational excellence – The operational excellence pillar provides guidance on running and managing systems to deliver business value. It focuses on automating processes, continuous improvement, and monitoring operations to achieve operational resilience, efficiency, and effectiveness. It includes practices like defining processes, using automation, and implementing metrics and monitoring for operational success. In this case, the tech firm has not only designed its AWS Cloud infrastructure to manage its operations efficiently but also emphasizes continual process improvement, which aligns with the practices and concepts defined in the Operational Excellence pillar.
Incorrect Options:
Cost optimization – The scenario doesn't highlight anything related to cost, such as reducing costs, managing costs, or optimizing resources to improve cost efficiency.
Security – The scenario doesn't mention any measures related to securing data, protecting information, or adhering to compliance requirements, which are critical aspects of the Security pillar.
Performance efficiency – The Performance Efficiency pillar deals with using computing resources efficiently, but this scenario does not discuss efficient resource utilization, choosing the right resource types, or efficient architecture designs.
References: https://aws.amazon.com/architecture/well-architected/
Question 10 of 65
10. Question
Which service is used to analyze data in Amazon S3 using standard SQL?
Correct
Amazon Athena – Amazon Athena is used to analyze data in Amazon S3 using standard SQL. It allows you to run interactive queries on data stored in S3 without infrastructure provisioning or data loading. With Athena, you can directly query structured, semi-structured, and unstructured data in S3 using SQL syntax. Athena returns results quickly, scales automatically to handle large datasets, and charges you only for the amount of data scanned by your queries.
Incorrect Options:
Amazon FinSpace – Amazon FinSpace is a fully managed data management and analytics service designed for the financial industry. It provides tools for data preparation, analytics, and collaboration specific to financial datasets, but it is not used to analyze data in Amazon S3 using standard SQL.
Amazon Redshift – Amazon Redshift is a fully managed data warehousing service that allows you to analyze large datasets using SQL. It is optimized for online analytical processing (OLAP) and provides high-performance querying capabilities, but it is not the service for querying data in place in Amazon S3 using standard SQL.
Amazon CloudWatch – Amazon CloudWatch is a monitoring and observability service that collects and tracks metrics, logs, and events from various AWS resources. It is used for monitoring the operational health of your AWS environment and does not analyze data in Amazon S3 using standard SQL.
References: https://aws.amazon.com/athena
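As a sketch, an Athena query is just standard SQL plus a result location in S3, submitted through the StartQueryExecution API (in boto3, `client("athena").start_query_execution(**params)`). The table, database, and output bucket names below are hypothetical; the parameter names follow the documented Athena API.

```python
import json

# Sketch of StartQueryExecution parameters. Names of the table, database,
# and results bucket are hypothetical examples.
params = {
    "QueryString": (
        "SELECT status, COUNT(*) AS hits "
        "FROM access_logs "            # table defined over files stored in S3
        "GROUP BY status "
        "ORDER BY hits DESC"
    ),
    "QueryExecutionContext": {"Database": "weblogs"},
    "ResultConfiguration": {"OutputLocation": "s3://my-athena-results/"},
}
print(json.dumps(params, indent=2))
```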
Question 11 of 65
11. Question
Which of the following are the most cost-effective options when using EC2 instances? (Select TWO.)
Correct
Spot Instances for stateless and flexible workloads – Amazon EC2 Spot Instances let you use spare Amazon EC2 computing capacity at up to a 90% discount compared to On-Demand prices. Spot Instances are a cost-effective choice if you can be flexible about when your applications run and if your applications can be interrupted. They are ideal for fault-tolerant, flexible workloads and are one of the most cost-effective ways to use EC2.
Reserved Instances for sustained workloads – Amazon EC2 Reserved Instances provide a significant discount (up to 75%) compared to On-Demand instance pricing and offer a capacity reservation when used in a specific Availability Zone. They are a good choice for workloads with steady-state usage, predictable usage patterns, or long-term commitments, making them cost-effective for many applications, especially those with predictable workloads.
Incorrect Options:
Set spending limit using AWS Budgets – AWS Budgets helps manage costs by setting custom cost and usage budgets that alert you when your user-defined thresholds are met. It does not reduce the cost of EC2 instances.
Memory optimized instances for high-compute workloads – Memory-optimized instances are designed to deliver fast performance for workloads that process large data sets in memory. They are not the most cost-effective choice, as they tend to be more expensive than other instance types; cost-effectiveness depends on the specific requirements of your workload.
On-Demand Instances for sustained workloads – On-Demand Instances let you pay for compute capacity by the hour with no long-term commitments. They offer flexibility but don't provide the same cost savings as Spot or Reserved Instances for sustained workloads; On-Demand is better suited to short-term or no-commitment workloads.
References: https://aws.amazon.com/ec2/pricing https://aws.amazon.com/ec2/pricing/reserved-instances/pricing https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-spot-instances.html
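A back-of-the-envelope comparison using the discount ceilings quoted above (up to 90% for Spot, up to 75% for Reserved) shows why the two correct options stand out. The hourly On-Demand rate here is a hypothetical figure, not a published price, and actual discounts vary by instance type and commitment.

```python
# Hypothetical On-Demand rate; Spot/RI figures use the best-case discounts
# quoted in the explanation above (up to 90% and up to 75% respectively).
on_demand_hourly = 0.10     # hypothetical $/hour
hours_per_month = 730

on_demand = on_demand_hourly * hours_per_month
spot = on_demand * (1 - 0.90)       # best-case Spot discount
reserved = on_demand * (1 - 0.75)   # best-case Reserved Instance discount

print(f"On-Demand: ${on_demand:.2f}/mo, Spot: ${spot:.2f}/mo, Reserved: ${reserved:.2f}/mo")
```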
Question 12 of 65
12. Question
Which of the following is a feature of Amazon EC2 that allows users to launch instances in multiple Availability Zones and manage them as a single logical unit?
Correct
Amazon EC2 Fleet – Amazon EC2 Fleet is a feature that simplifies the provisioning of Amazon EC2 capacity across different Amazon EC2 instance types, Availability Zones, and purchase models (On-Demand, Reserved, and Spot Instances) in a single API call. This service is designed to maintain the high availability of applications in the face of unpredictable demand by deploying instances in multiple Availability Zones and managing them as a single logical unit. This way, EC2 Fleet allows users to optimize their cost and performance, while ensuring capacity is balanced across the specified Availability Zones.
Incorrect Options:
Amazon EC2 Placement Groups – Placement Groups in Amazon EC2 are a way of placing instances on the same underlying hardware to achieve low latency or high throughput on those instances. They do not support launching instances across multiple Availability Zones in a single logical unit.
Amazon EC2 Auto Scaling – Amazon EC2 Auto Scaling helps you ensure that you have the correct number of Amazon EC2 instances available to handle the load for your application. Although it can launch instances across multiple Availability Zones, it doesn't manage them as a single logical unit.
Amazon EC2 Spot Instances – Amazon EC2 Spot Instances let you take advantage of unused EC2 capacity in the AWS Cloud at significant discounts compared to On-Demand prices. However, Spot Instances themselves don't have a feature to launch instances in multiple Availability Zones and manage them as a single logical unit.
References: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-fleet.html
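As a rough sketch of what "a single API call" looks like, the function below builds the kind of request body that boto3's `ec2.create_fleet` accepts, mixing Spot and On-Demand capacity across instance types. The launch template name and capacity counts are hypothetical placeholders:

```python
# Sketch of an EC2 Fleet request body (shape as accepted by boto3's
# ec2.create_fleet); the launch template name and counts are placeholders.
def build_fleet_request(template_name, total, on_demand):
    return {
        "LaunchTemplateConfigs": [{
            "LaunchTemplateSpecification": {
                "LaunchTemplateName": template_name,  # hypothetical template
                "Version": "$Latest",
            },
            # Overrides let the fleet span instance types; subnets in
            # different Availability Zones can be listed here as well.
            "Overrides": [
                {"InstanceType": "m5.large"},
                {"InstanceType": "c5.large"},
            ],
        }],
        "TargetCapacitySpecification": {
            "TotalTargetCapacity": total,
            "OnDemandTargetCapacity": on_demand,
            "SpotTargetCapacity": total - on_demand,
            "DefaultTargetCapacityType": "spot",
        },
        "Type": "maintain",  # keep capacity replenished as instances come and go
    }

request = build_fleet_request("my-app-template", total=10, on_demand=2)
# With credentials configured, the whole fleet launches in one call:
# boto3.client("ec2").create_fleet(**request)
```

The `maintain` fleet type is what gives the "single logical unit" behavior: EC2 Fleet replaces interrupted or terminated instances to hold the target capacity.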
Question 13 of 65
13. Question
A business is planning to move its operations to AWS. It wants to ensure that the system has the capacity to recover automatically in the event of a system failure. Which of the AWS Well-Architected Framework principles encompasses this necessity?
Correct
Reliability – This requirement falls under the Reliability pillar of the AWS Well-Architected Framework. The pillar focuses on the ability to recover from infrastructure or service disruptions, dynamically acquire computing resources to meet demand, and mitigate disruptions such as misconfigurations or transient network issues. An AWS system designed with reliability in mind includes automated recovery from failure, ensuring that disruptions are quickly rectified without manual intervention. This allows businesses to provide a consistent level of service, maintaining customer trust and business continuity.
Incorrect Options:
Cost Optimization – The Cost Optimization pillar is concerned with avoiding unnecessary costs and getting the most value out of AWS resources. It's not focused on automatic recovery from failure.
Operational Excellence – This pillar involves running and monitoring systems to deliver business value, and continuously improving processes and procedures. It's not focused on automatic recovery from failure.
Performance Efficiency – Performance Efficiency is about using computing resources efficiently to meet system requirements and maintaining that efficiency as demand changes and technologies evolve. Automatic recovery from failure is not its main focus.
References: https://docs.aws.amazon.com/wellarchitected/latest/reliability-pillar/welcome.html
Question 14 of 65
14. Question
Which AWS service can be used to automate compliance checks and provide remediation for your AWS resources?
Correct
AWS Config – AWS Config can be used to automate compliance checks and provide remediation for your AWS resources. It continuously monitors and records configuration changes to your AWS resources, provides a detailed view of their configuration state, and lets you assess compliance against desired configurations and predefined rules. With AWS Config, you can define custom or pre-configured rules to evaluate resource configurations and check for compliance with industry standards and best practices. AWS Config can also be integrated with AWS Lambda to automate remediation actions when non-compliant resources are detected, enabling you to automatically correct configuration drift and maintain compliance.
Incorrect Options:
AWS CloudFormation – AWS CloudFormation allows you to provision and manage AWS resources using infrastructure as code. It enables you to define and deploy infrastructure using templates, automating the provisioning and configuration process and providing an efficient, consistent way to manage your AWS infrastructure. It is not designed to automate compliance checks or remediate AWS resources.
AWS Systems Manager – AWS Systems Manager is a suite of tools for managing and automating operational tasks in AWS. It offers features such as parameter management, patch management, and automation workflows. Systems Manager includes compliance-related features, such as compliance reporting, but it does not focus on automating compliance checks and providing remediation.
AWS Control Tower – AWS Control Tower helps you set up and govern a secure, compliant multi-account AWS environment. It assists with account provisioning, security baselines, and compliance guardrails, but it does not have the same automation and remediation capabilities as AWS Config. Control Tower focuses on the initial setup and governance of accounts rather than continuous compliance checking and remediation.
References: https://aws.amazon.com/config
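As an illustration of a pre-configured rule, the sketch below builds the argument shape that boto3's `config.put_config_rule` accepts for the AWS-managed ENCRYPTED_VOLUMES check (which flags unencrypted EBS volumes); the rule name is a hypothetical placeholder:

```python
# Sketch of an AWS Config managed-rule definition (shape per boto3's
# config.put_config_rule); the rule name is a placeholder.
def build_managed_rule(rule_name, source_identifier):
    return {
        "ConfigRule": {
            "ConfigRuleName": rule_name,
            "Source": {
                "Owner": "AWS",  # "AWS" = managed rule; "CUSTOM_LAMBDA" for your own
                "SourceIdentifier": source_identifier,
            },
            # Evaluate whenever a matching resource's configuration changes.
            "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Volume"]},
        }
    }

rule = build_managed_rule("ebs-volumes-encrypted", "ENCRYPTED_VOLUMES")
# With credentials configured:
# boto3.client("config").put_config_rule(**rule)
```

Swapping `Owner` to `CUSTOM_LAMBDA` and pointing `SourceIdentifier` at a Lambda function ARN is how the Lambda-based remediation path mentioned above plugs in.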
Question 15 of 65
15. Question
A company wants to develop a book reader app for blind users. What service would you recommend that converts text to voice?
Correct
Amazon Polly – Amazon Polly turns text into lifelike speech, allowing you to create applications that talk and to build entirely new categories of speech-enabled products. Polly's Text-to-Speech (TTS) service uses advanced deep learning technologies to synthesize natural-sounding human speech. It is an ideal choice for a book reader app for blind users, as it can read book text out loud.
Incorrect Options:
Amazon Transcribe – Amazon Transcribe is an automatic speech recognition (ASR) service that makes it easy for developers to add speech-to-text capability to their applications. It's not designed to convert text to speech, which is what's required for a book reader app for blind users.
Amazon Textract – Amazon Textract extracts text and data from documents. It analyzes scanned documents, PDFs, and images to extract structured data like tables and forms, making it easier to process and analyze large amounts of textual information with higher accuracy. While it's useful for extracting information from physical books, it doesn't convert text to voice.
Amazon SageMaker – Amazon SageMaker is a fully managed service that covers the entire machine learning workflow: label and prepare your data, choose an algorithm, train the model, tune and optimize it for deployment, make predictions, and take action. It doesn't provide the functionality to convert text to speech.
References: https://aws.amazon.com/polly
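A minimal sketch of the call such an app would make: the function below builds the request parameters for Polly's SynthesizeSpeech API (parameter names as used by the boto3 Polly client); "Joanna" is one of Polly's stock voices:

```python
# Build request parameters for Amazon Polly's SynthesizeSpeech API
# (parameter names per the boto3 Polly client).
def build_speech_request(text, voice_id="Joanna"):
    return {
        "Text": text,
        "OutputFormat": "mp3",  # container for the returned audio stream
        "VoiceId": voice_id,    # "Joanna" is one of Polly's stock voices
        "Engine": "neural",     # neural engine for more natural speech
    }

params = build_speech_request("It was the best of times, it was the worst of times.")
# With credentials configured, the app would read back playable audio:
# audio = boto3.client("polly").synthesize_speech(**params)["AudioStream"].read()
```

A book reader app would chunk chapter text into such requests and play the resulting MP3 streams in order.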
Question 16 of 65
16. Question
Which AWS service helps you generate a report that lists all users in your account and credentials, including passwords, access keys, etc.?
Correct
IAM Credential Reports – An IAM credential report is a document that lists all of your AWS account's users and the status of their various credentials, including passwords, access keys, MFA devices, and more. It is valuable for auditing the security status of your account and identifying potential vulnerabilities or issues. The report doesn't reveal any sensitive information such as passwords or access keys; rather, it provides an overview of their status, making it a safe and secure tool for account management. Therefore, IAM Credential Reports is the correct answer.
Incorrect Options:
AWS Artifact Reports – AWS Artifact is primarily used to access on-demand AWS compliance reports and select online agreements. It doesn't provide information about user credentials.
Cost and Usage Reports – These reports provide comprehensive data about your AWS costs and usage. They do not contain information related to user credentials.
Cost Allocation Reports – Cost Allocation Reports in AWS help you categorize and track your AWS costs. They do not include user credential information, so this is not the correct answer.
References: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_getting-report.html
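The credential report is delivered as a CSV document, so auditing it is ordinary CSV processing. The sketch below parses a shortened, made-up sample (real reports carry many more columns, such as `access_key_1_last_used_date`) to find console users without MFA:

```python
import csv
import io

# Shortened, made-up sample of an IAM credential report; real reports
# include many more columns (access key ages, rotation dates, ...).
SAMPLE_REPORT = """user,password_enabled,mfa_active,access_key_1_active
alice,true,true,false
bob,true,false,true
"""

def users_without_mfa(report_csv):
    """Return console users who have a password enabled but no MFA device."""
    rows = csv.DictReader(io.StringIO(report_csv))
    return [r["user"] for r in rows
            if r["password_enabled"] == "true" and r["mfa_active"] == "false"]

print(users_without_mfa(SAMPLE_REPORT))  # ['bob']
```

The real report is fetched with the IAM `GenerateCredentialReport` / `GetCredentialReport` API pair (or downloaded from the IAM console), then processed exactly like this sample.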
Question 17 of 65
17. Question
A company has a microservice application in AWS Cloud. Recently, they noticed some performance issues and need to debug them to fix these issues. As a Cloud Practitioner, which AWS service should you recommend?
Correct
AWS X-Ray – AWS X-Ray collects data about requests that your application serves and provides tools you can use to view, filter, and gain insights into that data to identify issues and opportunities for optimization. It's well suited to debugging and diagnosing microservice applications (including those built using AWS Lambda, Amazon EC2, and Amazon ECS) and to tracing requests end to end across all components and services of the application.
Incorrect Options:
AWS CloudTrail – AWS CloudTrail is a service that logs and continuously monitors account activity related to actions across your AWS infrastructure. It's primarily used for auditing and compliance rather than application performance debugging.
Amazon Inspector – Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS. It does not provide application performance debugging features.
Amazon CloudWatch – While Amazon CloudWatch is a monitoring and observability service, it primarily focuses on resource utilization and operational health rather than detailed debugging of microservice applications.
References: https://aws.amazon.com/xray
Question 18 of 65
18. Question
What are the customer's responsibilities according to the shared responsibility model? (Select TWO.)
Correct
Security group and ACL configuration – According to the shared responsibility model, the customer is responsible for security "in" the cloud. This includes configuring security groups and network Access Control Lists (ACLs), which control inbound and outbound traffic to resources such as EC2 instances and VPC subnets.
Patch management of an Amazon EC2 instance operating system – In the shared responsibility model, customers are responsible for managing the guest operating system (including updates and security patches) and other associated application software, as well as the configuration of the AWS-provided security group firewall on their Amazon EC2 instances.
Incorrect Options:
Update firmware of physical storage devices – This is the responsibility of AWS, as it pertains to security "of" the cloud, i.e., the infrastructure the cloud services run on, including hardware, software, networking, and facilities.
Controlling physical access to data centers – Control over physical access to data centers is the responsibility of AWS, as part of security "of" the cloud.
Patch management of an Amazon RDS instance operating system – For managed services like Amazon RDS, AWS is responsible for the underlying infrastructure and operating system patch management, while customers are responsible for the data and the configuration of the managed service.
References: https://aws.amazon.com/compliance/shared-responsibility-model https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-security.html
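As a concrete example of the customer-side security group responsibility, the sketch below builds an ingress rule in the shape accepted by boto3's `ec2.authorize_security_group_ingress`; the group ID and CIDR block are placeholders:

```python
# Sketch of a security group ingress rule (shape per boto3's
# ec2.authorize_security_group_ingress); group ID and CIDR are placeholders.
def build_https_ingress(group_id, cidr):
    return {
        "GroupId": group_id,
        "IpPermissions": [{
            "IpProtocol": "tcp",
            "FromPort": 443,  # allow inbound HTTPS only
            "ToPort": 443,
            "IpRanges": [{"CidrIp": cidr, "Description": "office network"}],
        }],
    }

rule = build_https_ingress("sg-0123456789abcdef0", "203.0.113.0/24")
# With credentials configured:
# boto3.client("ec2").authorize_security_group_ingress(**rule)
```

Deciding which ports and source ranges appear here is exactly the kind of "security in the cloud" choice the model assigns to the customer; AWS only supplies the firewall mechanism.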
Question 19 of 65
19. Question
Which of the following is an example of a serverless architecture in AWS?
Correct
A Lambda function triggered by an API Gateway – In this architecture, AWS Lambda handles the underlying infrastructure and provides a platform for running code without provisioning or managing servers. API Gateway acts as the trigger for Lambda, allowing the function to be invoked in response to an HTTP request. Incorrect Options: An EC2 instance running a web server – An EC2 instance running a web server is not a serverless architecture; you must still manage the underlying server infrastructure, including capacity planning, scaling, and patching. A load-balanced group of instances behind an ELB – In this architecture, you provision and manage a group of EC2 instances behind a load balancer to handle the traffic. Although auto scaling can help with capacity planning and scaling, it still requires management of the underlying server infrastructure. A database instance running in RDS – In this architecture, AWS manages the database infrastructure, but you still have to provision and manage the database instance. This is different from a serverless architecture, where you don't have to manage any underlying infrastructure. References: https://aws.amazon.com/lambda/serverless-architectures-learn-more https://aws.amazon.com/serverless
Question 20 of 65
20. Question
AWS calculates costs based on which of the following? (Select TWO.)
Correct
Data transfer OUT of AWS clouds – AWS charges for data transferred OUT of its services to the internet or between Regions and Availability Zones (AZs). The charges vary depending on the Region and the total volume of data transferred. Compute & storage usages – AWS charges are based on the resources used, including compute instances (such as EC2 and Lambda) and storage services (like S3 and EBS). The cost of these resources depends on their type, size, and duration of usage. Incorrect Options: Data transfer IN of AWS clouds – AWS does not charge for inbound data transfer, i.e., data transferred into AWS services from the internet or from one service to another. Number of the services used – The number of services used does not directly affect AWS costs. Costs are based on the resource usage within each service, not the number of services themselves. Number of users who used AWS – The number of users accessing AWS does not influence costs directly. Costs are determined by the amount of resources consumed, not by how many users are accessing those resources. References: https://d1.awsstatic.com/whitepapers/aws_pricing_overview.pdf (page 6)
Question 21 of 65
21. Question
What is the difference between an AWS IAM user and an AWS IAM role?
Correct
A user is a permanent identity that can access AWS services, while a role is a temporary identity that can be assumed by a user or AWS service. An AWS IAM user is an identity that represents a person or application that interacts with AWS services. Users have their own set of security credentials (access keys and secret access keys) and can be assigned permissions directly to access AWS resources. An IAM role, on the other hand, is an identity within your AWS account that has specific permissions. It is similar to an IAM user but is not associated with a specific person. Roles provide temporary credentials and have a set of policies that determine what actions are allowed or denied. An IAM role should be used when a service makes a request to another AWS service. Incorrect Options: A user is a person or application that uses AWS services, while a role is a set of permissions that determines what an AWS service can do. A user is a set of permissions that determines what an AWS service can do, while a role is a person or application that uses AWS services. A user is a group of permissions that determines what an AWS service can do, while a role is a set of users that can access AWS services. All of the above options are incorrect. References: https://docs.aws.amazon.com/IAM/latest/UserGuide/id.html
Question 22 of 65
22. Question
What is the main purpose of AWS WAF in protecting against SQL injection attacks?
Correct
To filter incoming web traffic based on a set of predefined rules – The primary purpose of AWS WAF (Web Application Firewall) in protecting against SQL injection attacks is to filter incoming web traffic based on a set of predefined rules. AWS WAF allows you to monitor HTTP and HTTPS requests that are forwarded to Amazon CloudFront or an Application Load Balancer. You can create rules to block, allow, or monitor (count) web requests based on conditions that you define. These conditions include SQL injection patterns, making AWS WAF a powerful tool to protect your web application against such attacks. Incorrect Options: To encrypt the incoming traffic to your web application – AWS WAF does not provide encryption services; it is primarily used to monitor and filter web traffic based on predefined conditions. To provide a secure layer between your web application and the internet – While AWS WAF does provide a form of security between your application and the internet, it is not a secure layer or barrier. Instead, it examines the traffic and filters it based on the rules you define. To provide an audit trail of all incoming traffic to your web application – Although AWS WAF does log web requests, its primary purpose is not to provide an audit trail of all incoming traffic. Its main role is to filter and protect your web applications from harmful web requests such as SQL injections. References: https://aws.amazon.com/waf https://docs.aws.amazon.com/waf/latest/developerguide/waf-rules.html
Question 23 of 65
23. Question
What should you do if you discover that your EC2 instance is being used for suspicious activity, such as a DoS attack?
Correct
Contact the AWS Abuse team – If you suspect that your EC2 instance is being used for suspicious activities such as a Denial of Service (DoS) attack, you should report it to the AWS Abuse team. AWS has a specific protocol for reporting abuse that may be a violation of the AWS Acceptable Use Policy. The AWS Abuse team will investigate the report and take the necessary actions based on its findings. The AWS Trust & Safety team can assist you when AWS resources are used to engage in the following types of abusive behavior: Spam: You are receiving unwanted emails from an AWS-owned IP address, or AWS resources are used to spam websites or forums. Port scanning: Your logs show that one or more AWS-owned IP addresses are sending packets to multiple ports on your server, and you believe this is an attempt to discover unsecured ports. Denial-of-service (DoS) attacks: Your logs show that one or more AWS-owned IP addresses are used to flood ports on your resources with packets, and you believe this is an attempt to overwhelm or crash your server or the software running on it. Intrusion attempts: Your logs show that one or more AWS-owned IP addresses are used to attempt to log in to your resources. Hosting prohibited content: You have evidence that AWS resources are used to host or distribute prohibited content, such as illegal content or copyrighted content without the consent of the copyright holder. Distributing malware: You have evidence that AWS resources are used to distribute software that was knowingly created to compromise or cause harm to the computers or machines it is installed on. Incorrect Options: Restart/reboot your EC2 instance – Restarting or rebooting your instance may not stop the suspicious activity if it is due to compromised credentials or malicious software. Update the Operating system of your EC2 – While keeping the operating system of your EC2 instances up to date is generally a good security practice, it may not address the root cause of the suspicious activity. Contact Customer Service for Penetration Testing – Customer Service does not handle security incidents or perform penetration testing. Penetration testing requires explicit authorization from AWS and should not be your immediate response to discovering suspicious activity on your instance. References: https://aws.amazon.com/security/vulnerability-reporting https://aws.amazon.com/premiumsupport/knowledge-center/report-aws-abuse
Question 24 of 65
24. Question
Which AWS Feature should be used to launch Amazon EC2 instances with pre-configured settings?
Correct
Amazon Machine Image (AMI) – To launch Amazon EC2 instances with pre-configured settings, the company should use an Amazon Machine Image (AMI). An AMI is a pre-configured template that contains the information needed to launch an instance, such as the operating system, software applications, libraries, and configurations. It serves as the foundation for creating new EC2 instances with the desired settings. AMIs provide a convenient way to replicate and share instances across different Regions and accounts, allowing for consistent and efficient deployment of pre-configured environments. Incorrect Options: Amazon VPC – Amazon VPC (Virtual Private Cloud) is a service that allows users to create their own isolated virtual network environments within the AWS Cloud. It does not provide pre-configured settings for launching EC2 instances. Security Groups – Security groups are used to control inbound and outbound traffic for EC2 instances. They act as virtual firewalls and provide network-level security, but they do not offer pre-configured settings for launching instances. AWS Identity and Access Management (IAM) – AWS IAM is a service for managing user access and permissions within the AWS environment. It helps control who can access various AWS resources and what actions they can perform. IAM is not related to launching EC2 instances with pre-configured settings and does not provide the functionality of AMIs. References: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html
Question 25 of 65
25. Question
What is the primary difference between a monolithic architecture and a microservices architecture in AWS?
Correct
Microservices architecture allows for greater flexibility and easier maintenance than monolithic architecture. Breaking an application into smaller services makes it easier to update or replace individual components, add new features, or change underlying technologies. Additionally, microservices can be developed and deployed independently, reducing the risk of introducing bugs or downtime. Incorrect Options: Monolithic architecture is more scalable than microservices architecture – Monolithic architecture requires scaling the entire application, which can be difficult and time-consuming. Microservices architecture, on the other hand, is designed to be highly scalable by breaking down large applications into smaller, independently deployable services. Each service can be scaled independently, making it easier to handle traffic spikes or changing demands. Microservices architecture is more tightly coupled than monolithic architecture – Monolithic architecture tends to be tightly coupled, with all components integrated into a single application. Microservices architecture is designed to be loosely coupled: each microservice operates independently and communicates with other services through well-defined APIs, making it easier to replace or update individual services without affecting the rest of the application. Monolithic architecture is more fault-tolerant than microservices architecture – Fault tolerance is not directly related to the architecture type. Both monolithic and microservices architectures can be designed to be fault-tolerant by implementing redundancy, failover mechanisms, and other best practices. References: https://aws.amazon.com/microservices
Question 26 of 65
26. Question
A company uses AWS Organizations to centrally manage its multiple AWS accounts and consolidated billing. Which benefits will the company receive? (Select TWO.)
Correct
Can take advantage of quantity discounts with a single bill – AWS Organizations allows you to consolidate billing across multiple AWS accounts. All of the accounts in the organization contribute to reaching volume discount tiers, potentially leading to a lower overall cost for the resources used across those accounts.
Can share critical resources with other accounts in the Organization – With AWS Organizations, you can group accounts into organizational units (OUs) and manage access to AWS services and resources across those accounts. Certain resources can also be shared across accounts within the organization, for example through AWS Resource Access Manager (RAM) and AWS Service Catalog portfolios.
Incorrect Options:
Will receive a fixed discount for usage across accounts – AWS does not offer a fixed discount for usage across multiple accounts through AWS Organizations. The cost benefits come from consolidated billing, which enables volume-based discounts.
Can use a single enterprise support plan for all accounts – AWS Organizations doesn't inherently provide a single support plan for all accounts. It is possible to share the benefits of Business and Enterprise support with other AWS accounts in your organization, but that is separate from the core AWS Organizations features.
Will get a higher discount for EC2 instances reservation from the regular price – Reserved Instance (RI) discounts apply to usage within a specific account, not across the entire organization. AWS Organizations itself does not provide additional discounts for Reserved Instances.
References: https://aws.amazon.com/organizations https://aws.amazon.com/organizations/features
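The volume-discount effect of consolidated billing can be illustrated with a toy tiered-pricing calculator. The tiers and rates below are made up for illustration, not real AWS prices: the point is that aggregating usage from several accounts onto one bill reaches the cheaper tier sooner than billing each account separately.

```python
# Toy tiered-pricing sketch (hypothetical tiers, NOT real AWS rates) showing why
# consolidated billing across accounts can lower the total cost.
TIERS = [(50_000, 0.023), (float("inf"), 0.021)]  # (GB ceiling, $ per GB)

def bill(gb_used):
    """Price usage against the tier table, filling tiers from the first upward."""
    cost, remaining, floor = 0.0, gb_used, 0
    for ceiling, rate in TIERS:
        take = min(remaining, ceiling - floor)
        cost += take * rate
        remaining -= take
        floor = ceiling
        if remaining <= 0:
            break
    return round(cost, 2)

# Two accounts billed separately never reach the cheaper tier...
separate = bill(30_000) + bill(30_000)      # 1380.0
# ...but one consolidated bill for the same total usage does.
consolidated = bill(60_000)                 # 1360.0
```

With these made-up tiers, the two accounts pay 1380.0 separately but 1360.0 on a consolidated bill for identical total usage.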
Question 27 of 65
27. Question
A newspaper company wants to develop a news app that converts news articles into speech so that visually impaired users can listen to the news. As a Cloud Practitioner, which service would you recommend?
Correct
Amazon Polly – Amazon Polly is a service that turns text into lifelike speech, allowing you to create applications that talk and build entirely new categories of speech-enabled products. Amazon Polly fits the proposed use case well: it supports multiple languages, offers a variety of voices to choose from, and uses advanced deep learning technologies to synthesize speech that sounds like a human voice. Using Amazon Polly, the newspaper company can convert news articles into audible speech, which would be highly beneficial for visually impaired readers.
Incorrect Options:
Amazon Transcribe – Amazon Transcribe is an automatic speech recognition (ASR) service that makes it easy for developers to add speech-to-text capability to their applications. It doesn't fit this requirement because we need to convert text into speech, not the other way around.
AWS Elemental MediaConvert – AWS Elemental MediaConvert is a video transcoding service with packaging and encrypting capabilities. It is used for video processing, not text-to-speech conversion.
AWS WAF – AWS WAF is a web application firewall that helps protect your web applications or APIs against common web exploits. It is a security service and doesn't provide text-to-speech capabilities.
References: https://aws.amazon.com/polly
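As a sketch of how such an app might call Polly, the boto3 snippet below builds a SynthesizeSpeech request. The voice choice, file name, and helper split are illustrative assumptions; the request dict is assembled in a separate function so it can be inspected without AWS credentials.

```python
# Hypothetical sketch: turning a news article into speech with Amazon Polly (boto3).
def build_polly_request(text, voice_id="Joanna"):
    # Parameters for Polly's SynthesizeSpeech API; "Joanna" is one of its English voices.
    return {"Text": text, "OutputFormat": "mp3", "VoiceId": voice_id}

def synthesize_news(text, output_path="news.mp3"):
    import boto3  # deferred import so build_polly_request stays testable offline
    polly = boto3.client("polly")
    response = polly.synthesize_speech(**build_polly_request(text))
    with open(output_path, "wb") as f:
        f.write(response["AudioStream"].read())  # write the returned MP3 audio stream
    return output_path
```

Running `synthesize_news("Top story today…")` with valid AWS credentials would save the spoken article as an MP3 the app can play back.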
Question 28 of 65
28. Question
According to AWS cloud design principles, which principle reduces system interdependence?
Correct
Loosely Coupled – The principle of designing systems to be loosely coupled reduces system interdependence in AWS cloud design. In a loosely coupled architecture, components interact with each other without being strongly dependent or intertwined. This enables individual components to remain functional and operate independently, even if another part of the system fails or changes. By reducing the dependence of components on each other, you increase the system's resilience, flexibility, and scalability, which is a key tenet of robust cloud architecture design.
Incorrect Options:
Automation – Automation is not specifically aimed at reducing system interdependence. Automation helps with tasks such as deploying updates, managing resources, and responding to changes, but it doesn't directly influence how tightly the components of a system are linked.
Removing Single Points of Failure – Removing single points of failure is a key principle for improving system reliability and fault tolerance, but it doesn't specifically address system interdependence. This practice involves ensuring that no single component can bring down the entire system if it becomes non-operational.
Security – Security is a core aspect of any system design, including cloud systems. While essential, security doesn't reduce system interdependence; it focuses on protecting data, maintaining privacy, and ensuring compliance with relevant regulations and standards.
References: https://aws.amazon.com/architecture/well-architected https://docs.aws.amazon.com/wellarchitected/latest/high-performance-computing-lens/loosely-coupled-scenarios.html
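The idea can be shown with a minimal in-process sketch, using Python's standard `queue.Queue` to stand in for an intermediary such as Amazon SQS: the producer and consumer share only the queue, never a direct reference to each other, so either side can change or fail without breaking the other.

```python
# Minimal loose-coupling sketch: components interact only through a queue
# (standing in here for a managed intermediary like Amazon SQS).
from queue import Queue

def publish_orders(q, orders):
    for order in orders:
        q.put(order)  # the producer knows nothing about who will consume

def process_orders(q):
    processed = []
    while not q.empty():
        processed.append(q.get().upper())  # the consumer can be changed or replaced freely
    return processed

q = Queue()
publish_orders(q, ["book", "lamp"])
results = process_orders(q)  # ["BOOK", "LAMP"]
```

Because the only contract is the message format on the queue, swapping in a different consumer (or running several in parallel) requires no change to the producer.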
Question 29 of 65
29. Question
Which AWS service supports graph query languages for performing complex queries?
Correct
Amazon Neptune – Amazon Neptune is a fully managed graph database service that is optimized for storing and querying highly connected data. Neptune supports popular graph query languages, such as Apache TinkerPop Gremlin and the W3C's SPARQL for RDF data, allowing users to perform sophisticated and expressive graph queries. With Neptune, you can model and query relationships between entities in a graph structure, making it suitable for applications that require rich and complex data relationships, such as social networks, recommendation engines, and fraud detection systems.
Incorrect Options:
Amazon Redshift – Amazon Redshift is a fully managed data warehousing service that is optimized for online analytical processing (OLAP). Redshift can perform complex analytics queries on large datasets, but it does not natively support graph query languages.
Amazon Aurora – Amazon Aurora is a relational database service that is compatible with MySQL and PostgreSQL. Aurora provides a high-performance, scalable relational database, but it does not support graph query languages.
Amazon DynamoDB – Amazon DynamoDB is a fully managed NoSQL database service. It is optimized for high scalability and performance but does not have native support for graph query languages. DynamoDB is designed for key-value and document data models, not graph data structures.
References: https://aws.amazon.com/neptune
Question 30 of 65
30. Question
Which statement is true about software licensing costs in the cloud?
Correct
Costs depend on the software and deployment model – The cost of software licensing in the cloud can vary depending on the software being used and the deployment model. Some cloud providers offer pay-as-you-go models, while others require upfront or subscription fees, and some software may be licensed differently in the cloud than on-premises. It's important to consider these factors when evaluating the impact of software licensing costs when moving to the cloud.
Incorrect Options:
Costs are always lower than on-premises software licensing costs – While this may be true for some software and deployment models, there are cases where software licensing costs in the cloud are higher than on-premises.
Costs are always higher than on-premises software licensing costs – Likewise, there are cases where software licensing costs in the cloud are lower than on-premises.
Costs are not affected by the deployment model – The deployment model can significantly impact software licensing costs in the cloud. Cloud providers have different licensing models, and the software licensing cost can vary depending on the deployment model used.
References: https://aws.amazon.com/license-manager/faqs
Question 31 of 65
31. Question
Which AWS service provides an event log for all AWS resources?
Correct
AWS CloudTrail – AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. It provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. This event history simplifies security analysis, resource change tracking, and troubleshooting. Thus, CloudTrail can be seen as an event log for all AWS resources.
Incorrect Options:
AWS Config – AWS Config enables you to assess, audit, and evaluate the configurations of your AWS resources. While it provides detailed configuration history, it doesn‘t serve as an event log for all AWS resources like CloudTrail does.
AWS CloudFormation – AWS CloudFormation helps you model and provision AWS and third-party application resources in your AWS environment. It is more focused on managing infrastructure as code and not designed to provide an event log for all AWS resources.
Amazon CloudWatch – Amazon CloudWatch is a monitoring and observability service that provides metrics, logs, and alarms for AWS resources and applications. It is not the primary service for providing a comprehensive event log for all AWS resources like CloudTrail.
References: https://aws.amazon.com/cloudtrail
Question 32 of 65
32. Question
Which statement is true regarding the AWS Command Line Interface (AWS CLI)?
Correct
Access key ID and secret access key are both required to access AWS CLI – To access AWS services using the AWS Command Line Interface (AWS CLI), you need both an access key ID and a secret access key. These are provided when you create an IAM user or role. They are part of the security credentials that allow the AWS CLI to authenticate your requests to AWS services, and both are necessary.
Incorrect Options:
To access AWS CLI you must be provided a username and password – The CLI authenticates with access keys, not a console username and password.
You can access CLI with only a secret access key – The secret access key alone is not sufficient; the matching access key ID is also required.
You can access CLI with AWS Personal Token Key – AWS does not provide any "Personal Token Key".
References: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html
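The pairing requirement can be sketched as a small validator. The function name is illustrative, not part of any AWS SDK; in practice, `aws configure` prompts for both values and stores them under `~/.aws/credentials`.

```python
# Sketch: the AWS CLI and SDKs authenticate with BOTH halves of an access key pair.
def build_credentials(access_key_id, secret_access_key):
    # Neither half works alone -- requests are signed with the secret access key
    # and identified by the access key ID.
    if not access_key_id or not secret_access_key:
        raise ValueError("both an access key ID and a secret access key are required")
    return {
        "aws_access_key_id": access_key_id,
        "aws_secret_access_key": secret_access_key,
    }
```

The key names mirror the fields the CLI writes to its credentials file, so the paired requirement is visible at a glance.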
Question 33 of 65
33. Question
A startup intends to run a new business on the AWS cloud, and the CTO wants to understand the advantages of cloud computing. Which of the following advantages does AWS offer? (Select THREE.)
Correct
Stop spending money running and maintaining data centers; Stop guessing data center infrastructure capacity; Pay only when you consume computing resources. Cloud computing is the on-demand delivery of compute power, database, storage, applications, and other IT resources through a cloud services platform via the internet with pay-as-you-go pricing. The six advantages of cloud computing are as follows: Trade fixed expense for variable expense – Instead of having to invest heavily in data centers and servers before you know how you're going to use them, you can pay only when you consume computing resources, and pay only for how much you consume. Benefit from massive economies of scale – By using cloud computing, you can achieve a lower variable cost than you can get on your own. Because usage from hundreds of thousands of customers is aggregated in the cloud, providers such as AWS can achieve higher economies of scale, which translates into lower pay-as-you-go prices. Stop guessing capacity – Eliminate guessing on your infrastructure capacity needs. When you make a capacity decision prior to deploying an application, you often end up either sitting on expensive idle resources or dealing with limited capacity. With cloud computing, these problems go away. You can access as much or as little capacity as you need, and scale up and down as required with only a few minutes' notice. Increase speed and agility – In a cloud computing environment, new IT resources are only a click away, which means that you reduce the time to make those resources available to your developers from weeks to just minutes. This results in a dramatic increase in agility for the organization, since the cost and time it takes to experiment and develop is significantly lower. Stop spending money running and maintaining data centers – Focus on projects that differentiate your business, not the infrastructure.
Cloud computing lets you focus on your own customers, rather than on the heavy lifting of racking, stacking, and powering servers. Go global in minutes – Easily deploy your application in multiple regions around the world with just a few clicks. This means you can provide lower latency and a better experience for your customers at minimal cost. Incorrect Options: Provides high-security systems from on-premises data centers – Although AWS implements strong security standards, under the shared responsibility model you must still apply security best practices to secure your own servers and applications, so this option is incorrect. Capital expense for running and maintaining data centers – Cloud computing lets you focus on your own business rather than on running and maintaining data centers, so this option is incorrect. Provide full control of technology and innovation, increasing a company's ability – Cloud computing does not give you full control of the underlying technology, such as the physical hardware, networking, and facility security, so this option is incorrect. References: https://docs.aws.amazon.com/whitepapers/latest/aws-overview/six-advantages-of-cloud-computing.html
Question 34 of 65
34. Question
Which of the following total cost of ownership (TCO) benefits do customers get when using AWS services? (Select TWO.)
Correct
Cloud services can provide more predictable and stable costs over time – With AWS, customers can pay only for the resources they use and scale up or down based on their needs. AWS pricing is transparent and predictable, with no upfront costs or long-term commitments. This allows customers to plan and budget more accurately, resulting in more predictable and stable costs over time. Cloud services can provide more flexibility to adjust capacity based on demand – AWS allows customers to easily scale their resources up or down in response to changes in demand. Customers can quickly and easily adjust their resources to match their needs without worrying about over-provisioning or under-provisioning their infrastructure. Incorrect Options: Cloud services may have less control over security and compliance measures – AWS provides a wide range of security and compliance features and services that help customers protect their data and applications. AWS compliance programs include SOC, PCI, HIPAA, and other certifications. Cloud services can increase the risk of data breaches – AWS provides a secure platform with many security features and services to protect customer data. Customers also have full control over their security configurations and access to advanced tools for monitoring and managing security. Cloud services can only be used for some specific applications – AWS provides a wide range of services and tools that can be used for various applications, from web applications to big data processing and machine learning. References: https://docs.aws.amazon.com/whitepapers/latest/how-aws-pricing-works/aws-pricingtco-tools.html https://aws.amazon.com/blogs/publicsector/tco-cost-optimization-best-practices-for-managing-usage
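The "pay only for what you use" argument can be made concrete with some toy arithmetic comparing peak-sized fixed provisioning against usage-based billing. The hourly rate below is a made-up number, not an AWS price:

```python
# Illustrative only: the rate is a hypothetical figure, not AWS pricing.
HOURLY_RATE = 0.10  # cost per server-hour (made up)

def fixed_capacity_cost(peak_servers, hours):
    """On-premises style: provision for peak demand around the clock."""
    return peak_servers * hours * HOURLY_RATE

def pay_per_use_cost(demand_per_hour):
    """Cloud style: pay only for the servers actually used each hour."""
    return sum(servers * HOURLY_RATE for servers in demand_per_hour)

# Demand swings between 2 and 10 servers over a 4-hour window.
demand = [2, 10, 3, 2]
print(fixed_capacity_cost(10, len(demand)))      # 4.0 — sized for the peak
print(round(pay_per_use_cost(demand), 2))        # 1.7 — matches actual usage
```

The gap between the two totals is the idle capacity you would otherwise pay for, which is also why usage-based costs track demand and stay easier to forecast.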
Question 35 of 65
35. Question
Which AWS service provides in-memory data storage?
Correct
Amazon ElastiCache – Amazon ElastiCache provides in-memory data storage. It is a fully managed, in-memory data store that can be used to improve the performance and scalability of applications. ElastiCache supports two popular in-memory engines: Redis and Memcached. It enables you to store frequently accessed data in memory, reducing the need for disk-based operations and improving overall application response times. ElastiCache is commonly used for use cases such as caching, session management, real-time analytics, and high-performance database query processing.
Incorrect Options:
Amazon Aurora – Amazon Aurora is a relational database service that is compatible with MySQL and PostgreSQL. Aurora offers high performance and scalability, but it is not designed for in-memory data storage.
Amazon EBS – Amazon EBS (Elastic Block Store) provides block-level storage volumes for EC2 instances. It is used for persistent storage and is not focused on in-memory data storage. EBS volumes are attached to EC2 instances as block devices and offer durability and persistence but do not provide in-memory capabilities.
Amazon Redshift – Amazon Redshift is a fully managed data warehousing service. It is optimized for online analytical processing (OLAP) and provides fast query performance for large datasets. Redshift is not designed for in-memory data storage.
References: https://aws.amazon.com/elasticache
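The caching idea ElastiCache implements can be sketched with the classic cache-aside pattern. Here a plain Python dict stands in for a Redis/Memcached endpoint and the database call is simulated; none of these names are AWS APIs:

```python
import time

# A plain dict stands in for an ElastiCache Redis/Memcached endpoint.
cache = {}

def slow_db_query(key):
    """Simulated disk-based database round trip (deliberately slow)."""
    time.sleep(0.01)
    return f"value-for-{key}"

def get_with_cache(key):
    """Cache-aside read: serve from memory when possible, else fill the cache."""
    if key in cache:
        return cache[key]          # cache hit: no database round trip
    value = slow_db_query(key)     # cache miss: fetch from the database
    cache[key] = value             # remember it for next time
    return value

get_with_cache("user:42")   # miss: pays the database latency once
get_with_cache("user:42")   # hit: served from memory
```

This is exactly the "store frequently accessed data in memory to avoid disk-based operations" behavior described above; with ElastiCache, the dict would be replaced by a Redis or Memcached client pointed at the cluster endpoint.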
Question 36 of 65
36. Question
Which of the following statements is true regarding elasticity in cloud computing?
Correct
The ability to acquire resources as needed and release resources when no longer needed – Elasticity in cloud computing refers to the ability of a system to adapt to workload changes by provisioning and deprovisioning resources in an autonomic manner, enabling the system to seamlessly scale up or down based on demand. This means that resources can be acquired as needed (scaling up when demand is high) and released when they're no longer needed (scaling down when demand is low), making this statement accurate. Elasticity is one of the core advantages of cloud computing, as it allows businesses to only pay for the resources they use and enables systems to handle peak loads effectively. Incorrect Options: Automatically patching to resolve functionality issues, improve security or add new features – This statement describes patch management, not elasticity. While it's an important aspect of maintaining cloud-based systems, it doesn't relate directly to the concept of elasticity. Provisions a synchronous standby replica in a different Availability Zone – This refers to disaster recovery strategies and high availability, not elasticity. Keeping a standby replica in a different Availability Zone is a way to ensure that the system remains operational if one Availability Zone experiences an outage. Monitoring continuous failure detection for the disaster recovery strategy – This pertains to system monitoring and disaster recovery, not elasticity. Continuous failure detection is a part of maintaining the reliability and stability of cloud-based systems, but it doesn't relate to the ability to scale resources up or down based on demand. References: https://wa.aws.amazon.com/wat.concepts.wa-concepts.en.html
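Elastic sizing (acquire units as demand rises, release them as it falls) reduces to simple arithmetic. This is an illustrative sketch, not an Auto Scaling API call, and the capacity figure is made up:

```python
import math

def units_needed(demand, capacity_per_unit=100):
    """Resource units to run for the current demand (always at least 1).

    capacity_per_unit is a hypothetical requests-per-unit figure.
    """
    return max(1, math.ceil(demand / capacity_per_unit))

# Demand spikes, then drops: resources are acquired and then released.
for demand in [50, 450, 120, 30]:
    print(demand, "->", units_needed(demand))
# 50 -> 1, 450 -> 5, 120 -> 2, 30 -> 1
```

The fleet grows to 5 units for the spike and shrinks back to 1 afterwards, which is the "acquire as needed, release when no longer needed" behavior the correct answer describes.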
Question 37 of 65
37. Question
A startup recently deployed an application in the AWS Cloud. They are concerned about security risks and need automation that provides security checks and centralized security alerts. As a Cloud Practitioner, which AWS service provides automated security assessments of AWS resources?
Correct
AWS Security Hub – AWS Security Hub provides a comprehensive view of your high-priority security alerts and compliance status across AWS accounts. It collects and aggregates findings from AWS services and integrated third-party products, offering automated security checks and centralized management of security alerts. It helps to identify potential security issues and assists in maintaining a strong security posture. Incorrect Options: AWS CloudTrail – AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. It provides visibility into user activity by recording actions taken in your account, but it does not provide automated security assessments. AWS Config – AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. However, it does not provide automated security assessments like AWS Security Hub does. AWS Identity and Access Management (IAM) – IAM is used for controlling access to AWS services and resources. It helps manage permissions and ensures that only authorized users and services can access your resources. However, IAM does not provide automated security assessments. References: https://aws.amazon.com/security-hub
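The "centralized alerts" idea can be sketched as a simple aggregation of findings from multiple sources. The finding dictionaries below are made up for illustration and do not follow the actual AWS Security Finding Format:

```python
from collections import Counter

# Made-up findings in the spirit of Security Hub's aggregated view; the
# field names are illustrative, not the AWS Security Finding Format.
findings = [
    {"service": "GuardDuty", "severity": "HIGH"},
    {"service": "Inspector", "severity": "MEDIUM"},
    {"service": "GuardDuty", "severity": "HIGH"},
    {"service": "Macie",     "severity": "LOW"},
]

def summarize(findings):
    """Collapse alerts from many sources into one severity summary."""
    return Counter(f["severity"] for f in findings)

print(summarize(findings))  # Counter({'HIGH': 2, 'MEDIUM': 1, 'LOW': 1})
```

The value of the centralized view is exactly this: one prioritized summary instead of per-service consoles to check one by one.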
Question 38 of 65
38. Question
A web hosting company uses Amazon RDS Multi-AZ deployments for its database needs. Which core advantage of the AWS Cloud does this instance demonstrate?
Correct
High availability – Amazon RDS Multi-AZ deployments are primarily used to provide high availability and failover support for DB instances. In a Multi-AZ deployment, Amazon RDS automatically provisions and maintains a standby in a different Availability Zone for failover support in case of a planned or unplanned outage. This architecture offers enhanced availability and durability of the database instances, making them a natural fit for mission-critical, production database workloads. Using RDS Multi-AZ deployments as a high-availability solution enables the hosting company to maintain continuous service even if one instance fails, thereby demonstrating the high availability characteristic of the AWS Cloud. Incorrect Options: Durability – Amazon RDS Multi-AZ deployments primarily ensure high availability rather than durability. Durability, in terms of data storage, refers to the long-term integrity and preservation of data, which is not the primary purpose of RDS Multi-AZ deployments. Elasticity – Amazon RDS Multi-AZ deployments do not provide elasticity. Elasticity in the AWS Cloud refers to the ability to quickly scale resources up or down to meet demand, which is not a feature of RDS Multi-AZ deployments. Scalability – Scalability refers to the ability to increase or decrease resources or services based on demand. While RDS Multi-AZ does allow for some level of scalability, its primary purpose is to provide high availability and failover support, not to scale resources. References: https://aws.amazon.com/rds/features/multi-az/
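A toy model can make the failover behavior concrete. This class only simulates the idea of a standby in another Availability Zone being promoted when the primary fails; it is not the RDS API, and the AZ names are just examples:

```python
# Toy model of Multi-AZ failover: one primary in one Availability Zone,
# a standby in another, and automatic promotion on failure.
class MultiAZDatabase:
    def __init__(self):
        self.primary_az = "us-east-1a"
        self.standby_az = "us-east-1b"
        self.primary_healthy = True

    def endpoint_az(self):
        """Clients keep one endpoint; failover happens behind it."""
        if not self.primary_healthy:
            # Promote the standby: service continues from the other AZ.
            self.primary_az, self.standby_az = self.standby_az, self.primary_az
            self.primary_healthy = True
        return self.primary_az

db = MultiAZDatabase()
db.endpoint_az()            # 'us-east-1a': normal operation
db.primary_healthy = False  # simulate an outage of the primary's AZ
db.endpoint_az()            # 'us-east-1b': automatic failover
```

The key property, mirrored in the real service, is that the application keeps using the same endpoint while RDS redirects it to the promoted standby.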
Question 39 of 65
39. Question
Suppose you have an EC2 instance with 1TB of data. Now you want to move this data to an S3 bucket in the same region. How much will AWS charge you?
Correct
You will not be charged for this data transfer – AWS does not charge for data transfer between an EC2 instance and an S3 bucket within the same region. In this case, where data moves from the EC2 instance to the S3 bucket, the transfer is considered "inbound" for S3, which is free of charge.

Incorrect Options:
The inbound charge will be applicable for data transfer – In AWS, all inbound data transfer is free, including transfer from an EC2 instance to an S3 bucket within the same region.
The inbound and outbound data transfer charge will be applicable for the S3 bucket – This is incorrect, as AWS does not charge for data transfer between an EC2 instance and an S3 bucket in the same region. Furthermore, inbound data transfers to S3 are always free.
The outbound charge will be applicable for data transfer – Outbound data transfers from S3 are indeed charged, but in this scenario we are moving data into S3, which is an inbound data transfer and thus free of charge.

References:
https://aws.amazon.com/s3/pricing
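The pricing rule in this explanation can be sketched as a tiny helper. The function names are made up for illustration, and the egress rate is an assumed placeholder, not a current AWS price; check the S3 pricing page for real tiered rates.

```python
def s3_ingest_cost_usd(gb: float) -> float:
    """Data moved *into* S3 is an inbound transfer, which AWS does not bill.
    This also covers EC2 -> S3 within the same region, as in the question."""
    return 0.0

def s3_internet_egress_cost_usd(gb: float, rate_per_gb: float = 0.09) -> float:
    """Data moved *out* of S3 to the internet is billed per GB.
    The default rate here is an assumed flat placeholder; real S3 pricing
    is tiered and varies by region."""
    return gb * rate_per_gb

# Moving 1 TB (1024 GB) from an EC2 instance into S3 in the same region:
print(s3_ingest_cost_usd(1024))   # → 0.0
```

The asymmetry (free in, billed out) is the core fact the question tests.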
Question 40 of 65
40. Question
Which of the following statements are true for Amazon Route 53? (Select TWO.)
Correct
Can configure DNS settings for health checks and use routing policies for load balancing – Amazon Route 53 is a scalable and highly available Domain Name System (DNS) web service. You can configure DNS health checks to route traffic to healthy endpoints or independently monitor the health of your application and its endpoints. Route 53 also supports a variety of DNS routing policies that can help you configure load-balancing behavior.
Can configure DNS failover so that it will route your traffic to a healthy resource – Amazon Route 53 enables you to set up DNS failover in active-passive and active-active configurations. If one resource becomes unavailable, Amazon Route 53 can automatically route your traffic to a backup resource.

Incorrect Options:
Continually scans AWS workloads for software vulnerabilities and unintended network exposure – This feature is provided by Amazon Inspector, not Amazon Route 53. Amazon Inspector is a service that helps improve the security and compliance of applications deployed on AWS.
Can provide a direct connection between AWS cloud and on-premises data center – This feature is supported by AWS Direct Connect, not Amazon Route 53. AWS Direct Connect is a network service that provides an alternative to using the internet to reach AWS cloud services.
Provides protection against Distributed Denial of Service (DDoS) attacks for applications running on AWS – This is the function of AWS Shield, a managed DDoS protection service. Amazon Route 53 doesn't provide this service directly.

References:
https://aws.amazon.com/route53
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover.html
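The active-passive failover described above can be sketched as a Route 53 record change batch. Every concrete value below (domain, health check ID, IP addresses) is an illustrative placeholder.

```json
{
  "Comment": "Active-passive DNS failover (illustrative values only)",
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "www.example.com",
        "Type": "A",
        "SetIdentifier": "primary",
        "Failover": "PRIMARY",
        "TTL": 60,
        "HealthCheckId": "11111111-2222-3333-4444-555555555555",
        "ResourceRecords": [{ "Value": "203.0.113.10" }]
      }
    },
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "www.example.com",
        "Type": "A",
        "SetIdentifier": "secondary",
        "Failover": "SECONDARY",
        "TTL": 60,
        "ResourceRecords": [{ "Value": "203.0.113.20" }]
      }
    }
  ]
}
```

While the primary's health check passes, Route 53 answers queries with the primary record; when it fails, Route 53 starts answering with the secondary.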
Question 41 of 65
41. Question
What is the purpose of the AWS Well-Architected Tool?
Correct
To simulate and validate cloud architectures for reliability and cost optimization – The AWS Well-Architected Tool helps customers review and improve their workloads based on the best practices defined in the AWS Well-Architected Framework. It provides guidance on designing and operating reliable, secure, efficient, and cost-effective systems in the AWS Cloud. The tool offers a set of questions and best practices that customers can use to assess their architectures, identify potential issues, and improve the reliability and cost efficiency of their AWS workloads, along with recommendations and resources to help optimize them.

Incorrect Options:
To automatically generate code for deploying infrastructure in AWS – The AWS Well-Architected Tool provides guidance for cloud architectures and does not generate code automatically. AWS offers services such as AWS CloudFormation and the AWS CDK that allow users to define infrastructure as code.
To provide a comprehensive checklist of AWS best practices – While the AWS Well-Architected Tool does provide a set of questions and best practices to help customers assess their architectures, its primary purpose is to review and validate cloud architectures, not to serve as a comprehensive checklist.
To monitor and optimize AWS resources in real-time – The AWS Well-Architected Tool provides guidance for cloud architectures; it is not designed for real-time monitoring and optimization. AWS offers services such as Amazon CloudWatch and AWS Trusted Advisor that allow users to monitor and optimize their AWS resources in real time.

References:
https://aws.amazon.com/well-architected-tool
https://docs.aws.amazon.com/wellarchitected/latest/userguide/intro.html
Question 42 of 65
42. Question
Which AWS service should be used to establish a dedicated, private network connection between AWS and your on-premises data server?
Correct
AWS Direct Connect – AWS Direct Connect is the correct service for establishing a dedicated, private network connection between AWS and your on-premises data server. It allows you to establish a direct physical connection with AWS, bypassing the public internet. With AWS Direct Connect, you get a private, high-bandwidth, low-latency connection to AWS, which provides more reliable and consistent network performance than internet-based connections. This dedicated connection can be used for purposes such as transferring large data sets, running latency-sensitive applications, or extending your on-premises network into the AWS Cloud, ensuring secure and efficient communication between your on-premises infrastructure and AWS services.

Incorrect Options:
Amazon Route 53 – Amazon Route 53 is a scalable Domain Name System (DNS) web service that routes traffic to resources in AWS or on premises based on DNS queries. It cannot establish a dedicated, private network connection between AWS and your on-premises data server.
Amazon CloudFront – Amazon CloudFront is a content delivery network (CDN) service that caches and delivers content from edge locations to improve the performance and scalability of web applications. It is not designed for establishing a private network connection between AWS and on-premises data servers.
Amazon API Gateway – Amazon API Gateway is a fully managed service that makes it easy to create, publish, and manage APIs for your applications. While it can integrate with AWS services and provide access to them through APIs, it does not establish a dedicated, private network connection between AWS and on-premises data servers.

References:
https://aws.amazon.com/directconnect
Question 43 of 65
43. Question
How does Amazon RDS help with elasticity?
Correct
It allows you to scale compute and storage independently – Amazon RDS (Relational Database Service) allows you to quickly scale compute and storage resources independently, which makes it easier to optimize the performance of your database without overprovisioning. You can increase or decrease the database's compute capacity (CPU and memory) or its storage capacity (disk space) as needed without having to change anything else. This is important for elasticity because it enables you to scale your database up or down based on demand while optimizing your costs.

Incorrect Options:
It automatically creates read replicas for improved scalability – Read replicas relate to scalability, not elasticity. They can be used to distribute read traffic across multiple instances, but they do not affect the ability to scale compute or storage resources.
It provides automatic backups for disaster recovery – Automatic backups relate to disaster recovery; they cannot scale your database up or down based on demand.
It supports multiple database engines for flexibility – Supporting multiple database engines gives you the flexibility to choose an engine, but it cannot scale resources up or down based on demand, so it is not related to elasticity.

References:
https://aws.amazon.com/rds
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PIOPS.StorageTypes.html
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.DBInstance.Modifying.html
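The independence of the two scaling dimensions can be sketched in a few lines: a CPU-bound workload changes only the instance class, a storage-bound one changes only the allocated storage, and neither forces the other. The instance classes, thresholds, and growth factor below are illustrative assumptions, not AWS recommendations.

```python
# Illustrative sketch only: tiers and thresholds are made-up placeholders,
# mirroring how RDS lets you modify the instance class (compute) and the
# allocated storage independently of each other.
COMPUTE_TIERS = ["db.m5.large", "db.m5.xlarge", "db.m5.2xlarge"]

def plan_scaling(cpu_pct: float, storage_used_pct: float,
                 tier: int, storage_gb: int) -> dict:
    """Return the modifications to request, deciding each dimension on its own."""
    actions = {}
    if cpu_pct > 80 and tier < len(COMPUTE_TIERS) - 1:
        actions["instance_class"] = COMPUTE_TIERS[tier + 1]       # scale compute up
    if storage_used_pct > 85:
        actions["allocated_storage_gb"] = int(storage_gb * 1.25)  # grow storage 25%
    return actions

print(plan_scaling(90, 50, tier=0, storage_gb=100))  # compute only
print(plan_scaling(40, 90, tier=0, storage_gb=100))  # storage only
```

In practice each returned change would map to a single modify-db-instance request; the point is that neither dimension drags the other along.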
Question 44 of 65
44. Question
According to the AWS cloud concept, which statement is true for achieving high availability?
Correct
Launch the EC2 instances across multiple Availability Zones in a single AWS Region – High availability in AWS is achieved by distributing resources across multiple, isolated Availability Zones within an AWS Region. Availability Zones are distinct locations engineered to be insulated from failures in other Availability Zones. By launching EC2 instances across multiple Availability Zones, you ensure that a failure in one zone does not affect the overall availability of your application.

Incorrect Options:
Launch the instances as EC2 Reserved Instances in the same AWS Region and the same Availability Zone – Reserved Instances provide a capacity reservation, but placing all instances in the same Availability Zone does not promote high availability because it does not protect against a single point of failure.
Launch the instances in multiple AWS Regions but in the same Availability Zone – This option is not technically possible because Availability Zones are distinct to each Region; the phrasing misunderstands the relationship between Regions and Availability Zones in AWS.
Launch the instances as EC2 Spot Instances in the same AWS Region but in different Availability Zones – While distributing instances across different Availability Zones promotes high availability, Spot Instances may not, because they can be interrupted with two minutes' notice when AWS needs the capacity back.

References:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-add-availability-zone.html
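A common way to express this pattern is an Auto Scaling group spanning subnets in different Availability Zones. The fragment below is a hedged sketch: the resource names and subnet IDs are placeholders, and the referenced `WebLaunchTemplate` is assumed to be defined elsewhere in the template.

```yaml
# Illustrative CloudFormation fragment (IDs and names are placeholders).
# Listing subnets from two different AZs removes the single point of
# failure discussed above: instances are spread across both zones.
Resources:
  WebFleet:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      MinSize: "2"
      MaxSize: "6"
      VPCZoneIdentifier:        # one subnet per Availability Zone
        - subnet-aaaa1111       # e.g. a subnet in us-east-1a
        - subnet-bbbb2222       # e.g. a subnet in us-east-1b
      LaunchTemplate:
        LaunchTemplateId: !Ref WebLaunchTemplate
        Version: !GetAtt WebLaunchTemplate.LatestVersionNumber
```

If one Availability Zone becomes unavailable, the group launches replacement capacity in the surviving zone.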
Question 45 of 65
45. Question
Your company has a large number of accounts in the AWS Cloud, and the CEO wants to manage billing and security policies centrally. Which service allows you to do this?
Correct
AWS Organizations – AWS Organizations is the service that allows you to manage billing and security policies centrally across your AWS Cloud accounts. It provides a way to centrally manage and govern multiple AWS accounts within your organization. With AWS Organizations, you can create groups of accounts, called organizational units (OUs), and apply policies to those OUs. This enables you to manage permissions, apply security policies, and control costs across all your accounts from a single, centralized location. By using AWS Organizations, the CEO can gain a consolidated view of billing across all accounts, set up security policies that are enforced across the organization, and simplify the overall management of AWS resources.

Incorrect Options:
AWS IAM – AWS IAM (Identity and Access Management) is used for managing user access and permissions within individual AWS accounts; it does not provide centralized management of billing and security policies across multiple accounts.
AWS Config – AWS Config is a service that allows you to assess, audit, and evaluate the configurations of your AWS resources. It helps you maintain compliance, monitor resource changes, and troubleshoot operational issues, but it does not centrally manage billing and security policies across multiple accounts.
AWS Billing – AWS Billing provides information about your AWS usage and charges and enables you to view, analyze, and manage your billing and cost data. While it helps with understanding and controlling costs, it does not provide the capability to centrally manage security policies across multiple accounts.

References:
https://aws.amazon.com/organizations
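Centralized policy enforcement in Organizations is done with service control policies (SCPs) attached to OUs or accounts. The example below is a sketch of a common pattern, denying activity outside an approved set of Regions; the Region list is illustrative, and the excluded global services are a typical but not exhaustive selection.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyOutsideApprovedRegions",
      "Effect": "Deny",
      "NotAction": ["iam:*", "organizations:*", "sts:*", "support:*"],
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
          "aws:RequestedRegion": ["us-east-1", "eu-west-1"]
        }
      }
    }
  ]
}
```

Attached at the OU level, this single document constrains every member account under that OU, which is exactly the kind of central governance the question describes.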
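To make the OU-and-policy idea concrete, here is a minimal Python sketch of the kind of service control policy (SCP) document you might attach to an organizational unit; the policy content and names are illustrative assumptions, not part of the question.

```python
import json

# Illustrative service control policy (SCP). SCPs attached to an
# organizational unit (OU) apply to every account inside that OU.
# This hypothetical policy denies member accounts the ability to
# leave the organization.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyLeaveOrganization",
            "Effect": "Deny",
            "Action": "organizations:LeaveOrganization",
            "Resource": "*",
        }
    ],
}

policy_document = json.dumps(scp)
print(policy_document)
```

With boto3, a document like this could be registered through the Organizations `create_policy` call (with `Type="SERVICE_CONTROL_POLICY"`) and bound to an OU with `attach_policy`.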
Question 46 of 65
46. Question
An eCommerce company wants to show product recommendations to users based on their history. Which AWS service should be used to do this?
Correct
Amazon Personalize – Amazon Personalize should be used to provide product recommendations to users based on their history. It leverages machine learning algorithms and data analysis to create personalized recommendations tailored to individual users. With Personalize, eCommerce companies can utilize historical user data, such as browsing behavior, purchase history, and preferences, to generate accurate and relevant recommendations. The service automatically handles data processing, model training, and recommendation generation, making it easier for companies to implement personalized recommendation systems without extensive machine learning expertise.

Incorrect Options:
Amazon SageMaker – Amazon SageMaker is a fully managed service that enables developers and data scientists to build, train, and deploy machine learning models. While SageMaker is a versatile service for building and deploying models, it does not provide ready-made product recommendations based on user history.
Amazon Rekognition – Amazon Rekognition is a service for image and video analysis, primarily used for tasks such as object detection, facial recognition, and content moderation. It cannot generate personalized product recommendations based on user history.
Amazon Polly – Amazon Polly is a service that converts text into lifelike speech. It is used for creating speech-enabled applications and adding voice capabilities to systems. It cannot generate personalized product recommendations based on user history.

References: https://aws.amazon.com/personalize
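As a rough illustration of how an application consumes Personalize, the sketch below builds the request parameters for the runtime `get_recommendations` call; the campaign ARN and user ID are placeholder assumptions.

```python
# Hypothetical request parameters for Amazon Personalize's runtime
# get_recommendations API; the ARN and user ID below are placeholders.
request = {
    "campaignArn": "arn:aws:personalize:us-east-1:123456789012:campaign/product-recs",
    "userId": "user-42",   # the shopper whose history drives the recommendations
    "numResults": 10,      # how many recommended items to return
}

# With boto3 this would be roughly:
#   client = boto3.client("personalize-runtime")
#   response = client.get_recommendations(**request)
#   item_ids = [item["itemId"] for item in response["itemList"]]
print(sorted(request))
```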
Question 47 of 65
47. Question
Which of the following are advantages of AWS managed services? (Select TWO.)
Correct
Enhanced Security aligned with your controls – AWS Managed Services (AMS) builds and maintains a growing repository of compliance, operational, and security guardrails that help keep you aligned with your controls. AMS reduces the burden of meeting compliance program requirements (HIPAA, HITRUST, GDPR, SOC, NIST, ISO, PCI, FedRAMP) through automated detection and remediation.
Reduced operational costs for maintaining – AWS Managed Services (AMS) helps with financial optimization across your AWS estate, and any savings identified reduce your AMS fee without impacting operational outcomes. Customers have seen up to 30% in operational savings and up to 25% in AWS infrastructure savings. You pay for what you use and can take back operational control when you are ready.

Incorrect Options:
Increased high-level control of infrastructure – This is not an advantage of AWS managed services. For a managed service, AWS takes care of the foundational infrastructure, operating system, platform, software, and physical network, so you do not get access to the underlying infrastructure.
Provided free enterprise level support – This is not an advantage of AWS managed services. You have to pay for AWS Enterprise Support; for details, see https://aws.amazon.com/premiumsupport/plans/enterprise.
Automatically encrypted data at rest – This is not an advantage of AWS managed services. AWS does not automatically encrypt customers' data at rest, so you should enable encryption to protect your sensitive data.

References: https://aws.amazon.com/managed-services
Question 48 of 65
48. Question
Your company wants to migrate an application to AWS Cloud from an on-premises data center and needs to move large volumes of data with limited bandwidth. Which service helps you transfer data with high security?
Correct
AWS Snowball – AWS Snowball is a data transport solution that accelerates moving large amounts of data into and out of AWS using storage devices designed to secure and transfer data efficiently. It can move terabytes of data where bandwidth is limited or connectivity is unreliable or unavailable. The devices have robust on-board security, including tamper-resistant enclosures, 256-bit encryption, and industry-standard Trusted Platform Modules (TPMs), ensuring a high level of security during transit.

Incorrect Options:
AWS VPN – AWS Virtual Private Network (VPN) establishes a secure, private, encrypted tunnel from your network or device to AWS. It is used primarily for secure communication between your network and AWS, not for large-scale data transfer.
AWS DataSync – AWS DataSync is used to move large amounts of data online between on-premises storage systems and AWS storage services. It is better suited to continuous data replication needs and may not be efficient for one-time, large-volume transfers over limited bandwidth.
AWS Transfer Family – The AWS Transfer Family provides fully managed support for file transfers directly into and out of Amazon S3 using SFTP, FTPS, and FTP. It is not designed for transferring extremely large volumes of data offline.

References: https://aws.amazon.com/snowball
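A quick back-of-the-envelope calculation shows why a physical device can beat the network for bulk transfers; the figures below (100 TB over a fully dedicated 100 Mbps link) are illustrative assumptions.

```python
# How long would 100 TB take over a dedicated 100 Mbps link?
data_tb = 100       # assumed data volume, in terabytes (10^12 bytes)
link_mbps = 100     # assumed sustained link speed, in megabits per second

bits = data_tb * 1e12 * 8            # total data expressed in bits
seconds = bits / (link_mbps * 1e6)   # transfer time at the given speed
days = seconds / 86400

print(f"about {days:.0f} days of continuous transfer")
```

At that rate the upload takes roughly three months, which is why shipping an encrypted Snowball device is often faster for this scale of migration.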
Question 49 of 65
49. Question
Which of the following are recommended in the Reliability Pillar of AWS Well-Architected? (Select TWO.)
Correct
Enhanced Security aligned with your controls – AWS Managed Services (AMS) provides enhanced security that aligns with your controls. This includes compliance monitoring, patch management, and incident response, among other services. AWS infrastructure is designed to meet the most stringent security requirements, and AMS can help implement best-practice security measures in line with your own policies and controls.
Reduced operational costs for maintaining – One of the primary advantages of using AWS Managed Services is that it can significantly reduce operational costs. By offloading the burden of day-to-day infrastructure management tasks, businesses can focus on their core competencies and strategic initiatives, reducing the costs associated with maintaining infrastructure and systems. AWS Managed Services takes care of routine tasks such as change requests, monitoring, patch management, security, and backup services, helping to lower the total cost of ownership.

Incorrect Options:
Increased high-level control of infrastructure – While AWS Managed Services helps manage your infrastructure, it does not lead to increased high-level control of it. Managed services involve delegating certain responsibilities to the service provider, which can result in a reduced level of control over some aspects of the infrastructure.
Provided free enterprise level supports – Enterprise-level support is not provided for free with AWS Managed Services. While it provides a range of services designed to support enterprise-level needs, these typically come with associated costs.
Automatically encrypted data at rest – While AWS provides many tools and services to help encrypt data at rest, automatic encryption is not a default feature of AWS Managed Services. Encryption practices typically depend on the specific services being used and the customer's particular security requirements.
References: https://aws.amazon.com/blogs/apn/the-6-pillars-of-the-aws-well-architected-framework https://aws.amazon.com/architecture/well-architected
Question 50 of 65
50. Question
The AWS Well-Architected Framework describes key concepts, design principles, and architectural best practices for designing and running workloads in the cloud. Which of the following are pillars of the AWS Well-Architected Framework? (Select TWO.)
Correct
Security Pillar – Security is one of the key pillars of the AWS Well-Architected Framework. It involves understanding and applying best practices around the protection of information and systems. The security pillar provides strategies to help you protect your data, systems, and assets in the cloud.
Sustainability Pillar – The sustainability pillar focuses on minimizing the environmental impacts of running cloud workloads. Key topics include a shared responsibility model for sustainability, understanding impact, and maximizing utilization to minimize required resources and reduce downstream impacts.

Incorrect Options:
Elasticity – While elasticity is a fundamental concept in cloud computing and an aspect addressed within the AWS Well-Architected Framework, it is not officially recognized as one of the six pillars of the framework.
Availability – Availability is a crucial aspect of cloud architectures and is part of the Reliability pillar. It is not a standalone pillar in the AWS Well-Architected Framework.
Scalability – Scalability is a key aspect of Performance Efficiency, but it is not a standalone pillar in the AWS Well-Architected Framework.

References: https://aws.amazon.com/architecture/well-architected
Question 51 of 65
51. Question
What are AWS' responsibilities for managed services like Amazon RDS? (Select TWO.)
Correct
Maintaining Operating System – AWS is responsible for maintaining the underlying infrastructure and the operating system for managed services like Amazon RDS. This responsibility includes ensuring that the operating system is up to date with the latest patches and security updates.
Patching database software – Another responsibility of AWS for managed services like Amazon RDS is patching the database software. AWS takes care of applying necessary patches and updates to the database software to ensure security, stability, and performance improvements. This includes both major and minor updates, bug fixes, and security patches.

Incorrect Options:
Encrypting data at rest – AWS provides options and capabilities to encrypt data at rest in Amazon RDS, but choosing and implementing these options is the responsibility of the customer.
Managing IAM Groups – Management of AWS Identity and Access Management (IAM) groups, including the creation of users, roles, and policies, is the responsibility of the customer.
Managing access policies – Managing access policies is a customer responsibility. Customers define and manage policies to control access to AWS resources.

References: https://aws.amazon.com/compliance/shared-responsibility-model https://aws.amazon.com/rds
Question 52 of 65
52. Question
Which of the following AWS services support Compute Savings Plans? (Select TWO.)
Correct
AWS Fargate – AWS Fargate is a serverless compute engine for containers that works with both Amazon Elastic Container Service (ECS) and Amazon EKS. Compute Savings Plans apply to AWS Fargate usage, which can significantly reduce the cost compared to On-Demand usage.
AWS Lambda – AWS Lambda is a serverless compute service that runs your code in response to events and automatically manages the underlying compute resources. Lambda also supports Compute Savings Plans, so the cost can be optimized based on committed usage.

Incorrect Options:
Amazon RDS – Amazon RDS is a relational database service, not a compute service. Therefore, it doesn't support Compute Savings Plans.
Amazon Lightsail – Amazon Lightsail is a virtual private server (VPS) service. While it does provide compute power, it is not covered under Compute Savings Plans.
Amazon SNS – Amazon SNS (Simple Notification Service) is a fully managed messaging service for both application-to-application and application-to-person communication. It is not a compute service and therefore does not support Compute Savings Plans.

References: https://aws.amazon.com/savingsplans/compute-pricing https://aws.amazon.com/fargate/pricing https://aws.amazon.com/lambda/pricing
Question 53 of 65
53. Question
A company has a MySQL database running on a single Amazon EC2 instance. The database now requires higher availability. As a Cloud Practitioner, which option should you suggest?
Correct
Migrate to Amazon RDS with enabling Multi-AZ DB instance deployments – To achieve higher availability for the MySQL database, it is recommended to migrate to Amazon RDS (Relational Database Service) with Multi-AZ (Availability Zone) DB instance deployments. Amazon RDS is a managed database service that simplifies database administration tasks. Multi-AZ deployment provides automatic synchronous replication of the database to a standby replica in a different Availability Zone. In the event of a failure, Amazon RDS automatically promotes the standby replica to the primary database, minimizing downtime and ensuring high availability. By migrating to Amazon RDS with Multi-AZ, the company can benefit from automated failover, data durability, and reduced operational overhead for managing the database.
Incorrect Options:
Upgrade EC2 instance size (Increase CPU & RAM) – Increasing the size of the EC2 instance, such as upgrading the CPU and RAM, does not address the requirement for higher availability. It only improves the performance and capacity of the database server but does not provide built-in mechanisms for redundancy and failover.
Enable termination protection to avoid outages – Enabling termination protection on an EC2 instance prevents accidental termination, but it does not improve the availability of the database. Termination protection is designed to prevent instances from being terminated through user error or automation, but it does not protect against instance failures or provide automatic failover.
Add an Application Load Balancer to the EC2 instance – While an Application Load Balancer (ALB) can distribute incoming traffic across multiple EC2 instances, it does not inherently provide high availability for a single database instance. ALBs are typically used for distributing traffic to multiple instances of an application to improve scalability and fault tolerance, but they do not address the specific requirement of higher availability for a database running on a single EC2 instance.
References: https://aws.amazon.com/rds https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html
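The Multi-AZ behavior described above can be illustrated with a small toy model. This is a conceptual sketch only, not the Amazon RDS API: the class, method names, and AZ strings are invented for the illustration.

```python
# Toy model of Multi-AZ failover: a primary synchronously replicates to a
# standby in another Availability Zone; on primary failure, the standby is
# promoted automatically. Illustration of the concept only, not real RDS.

class MultiAZDatabase:
    def __init__(self, primary_az: str, standby_az: str):
        self.primary_az = primary_az
        self.standby_az = standby_az

    def write(self, record: str) -> str:
        # Synchronous replication: a write is acknowledged only after it
        # reaches both the primary and the standby.
        return f"committed to {self.primary_az} and {self.standby_az}"

    def fail_primary(self) -> str:
        # Automatic failover: the standby is promoted to primary, so the
        # database keeps serving traffic from the surviving AZ.
        self.primary_az, self.standby_az = self.standby_az, self.primary_az
        return self.primary_az

db = MultiAZDatabase("us-east-1a", "us-east-1b")
new_primary = db.fail_primary()
print(new_primary)  # the former standby now serves as primary
```

In real Amazon RDS the promotion is handled by the service and surfaced through a DNS endpoint that keeps pointing at whichever instance is primary, so applications reconnect without configuration changes.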
Question 54 of 65
54. Question
What is the possible discount for reserved instances compared to the on-demand price?
Correct
Up to 72% – The discount for Reserved Instances compared to On-Demand pricing varies with the instance type, term length, and payment option. The highest achievable discount is up to 72%, which requires committing to a longer term (one or three years) and paying upfront. By opting for Reserved Instances, businesses can save significantly on their compute costs compared to the pay-as-you-go On-Demand model. The exact percentage also varies by Region, so check the Amazon EC2 Reserved Instances pricing page for accurate figures.
References: https://aws.amazon.com/ec2/pricing/reserved-instances/pricing
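The "up to 72%" figure comes from amortizing the upfront payment over the reservation term and comparing the effective hourly rate against On-Demand. A worked example, using hypothetical rates chosen only to land near the maximum discount:

```python
# Worked example of the Reserved Instance discount math. The dollar amounts
# are HYPOTHETICAL; real discounts depend on instance type, Region, term,
# and payment option, and top out around 72% versus On-Demand.

HOURS_PER_YEAR = 8760

def effective_hourly(upfront: float, term_years: int, hourly: float) -> float:
    """Amortize any upfront payment over the term and add the hourly rate."""
    return upfront / (term_years * HOURS_PER_YEAR) + hourly

on_demand = 0.10  # assumed On-Demand rate, $/hr
# 3-year, all-upfront reservation: one payment, no hourly charge thereafter.
ri = effective_hourly(upfront=736.0, term_years=3, hourly=0.0)
discount = 1 - ri / on_demand
print(f"effective RI rate: ${ri:.4f}/hr, discount: {discount:.0%}")
```

Shorter terms, partial-upfront, or no-upfront options follow the same formula with a nonzero `hourly` component, which is why their discounts are smaller.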
Question 55 of 65
55. Question
Which of the following is an example of shared responsibility in the AWS Shared Responsibility Model?
Correct
AWS is responsible for ensuring the availability of EC2 instances, and the customer is responsible for patching the operating system of EC2 instances.
The AWS Shared Responsibility Model is a framework in which security responsibilities are shared between AWS and the customer. AWS manages security of the cloud, including the infrastructure underpinning services like EC2, while the customer is responsible for security in the cloud, including their data and the guest operating system. This option exemplifies the model: AWS ensures the availability of the EC2 infrastructure, and customers handle maintenance inside their instances, such as patching the guest OS.
Incorrect Options:
AWS is responsible for encrypting customer data at rest, and the customer is responsible for encrypting data in transit – Under the Shared Responsibility Model, encrypting customer data, whether at rest or in transit, is the customer's responsibility. AWS provides the capabilities (for example, AWS KMS and TLS endpoints), but the customer must choose to enable and configure them, so neither half of this statement assigns encryption correctly.
AWS is responsible for ensuring the physical security of AWS data centers, and the customer is responsible for configuring physical network security to their AWS resources – AWS is responsible for both the physical security of its data centers and the physical network infrastructure. Customers configure logical network controls such as security groups and network ACLs, not physical network security.
AWS is responsible for monitoring the performance of AWS-managed databases, and the customer is responsible for patching engines – For AWS-managed databases such as Amazon RDS, these roles are reversed: AWS patches the database engine, while customers monitor the performance of their own workloads (for example, with Amazon CloudWatch).
References: https://aws.amazon.com/compliance/shared-responsibility-model
Question 56 of 65
56. Question
Which of the following AWS services can quickly deploy a Node.js application to the AWS Cloud? (Select TWO.)
Correct
Amazon Lightsail – Amazon Lightsail can quickly deploy a Node.js application to the AWS Cloud. It provides a simplified way to launch and manage virtual private servers (VPS) with pre-configured compute, storage, and networking resources. The service supports a variety of applications and platforms, including popular programming languages like Node.js, Python, Java, and more. Lightsail offers a straightforward interface and pre-configured application stacks, including Node.js, making it easy to deploy a Node.js application without the need for extensive infrastructure management or configuration. With Lightsail, developers can quickly get their Node.js application up and running in the AWS Cloud with just a few clicks.
AWS Elastic Beanstalk – AWS Elastic Beanstalk is another service that facilitates the quick deployment of Node.js applications to the AWS Cloud. It is a fully managed service that abstracts away the underlying infrastructure complexities. It allows developers to easily deploy web applications in various programming languages, such as Java, .NET, Python, Node.js, Ruby, and more.
Elastic Beanstalk automatically provisions and manages the necessary resources to run Node.js applications, including EC2 instances, load balancers, and auto-scaling groups. It simplifies the deployment process by providing a platform where developers can easily upload their Node.js application code and let Elastic Beanstalk handle the rest, including environment setup, scaling, and load balancing.
Incorrect Options:
Amazon EC2 – Amazon EC2 (Elastic Compute Cloud) provides virtual servers in the cloud, allowing users to have full control over their computing resources. While it is possible to deploy a Node.js application on EC2 instances, it requires manual setup and configuration of the infrastructure, making it a more involved and complex process compared to the dedicated services like Lightsail and Elastic Beanstalk.
Amazon ECS – Amazon ECS (Elastic Container Service) is a highly scalable container orchestration service. While it is capable of deploying and managing containerized applications, including Node.js applications, it is a more advanced service that requires containerization and additional configuration compared to the simpler and more streamlined options like Lightsail and Elastic Beanstalk.
AWS CloudFormation – AWS CloudFormation is an infrastructure-as-code service that provides developers and businesses an easy way to create a collection of related AWS resources and provision them in an orderly and predictable manner. However, it's not specifically designed for deploying Node.js applications quickly, as it focuses more on resource provisioning and management.
References: https://aws.amazon.com/lightsail https://aws.amazon.com/elasticbeanstalk
Question 57 of 65
57. Question
What is the purpose of the AWS Savings Plans service?
Correct
To provide discounted pricing for AWS resources – AWS Savings Plans provide discounted pricing for AWS resources by allowing customers to commit to a certain amount of usage in exchange for lower prices. Customers can choose from different types of Savings Plans depending on their usage patterns and preferences. The service is designed to help customers save money on their AWS bills by providing predictable and flexible pricing options.
Incorrect Options:
To provide detailed cost allocation reports for AWS resources – This is the purpose of the AWS Cost Explorer and AWS Cost and Usage Reports services. These services provide detailed reports and analysis of AWS usage and costs, including cost allocation by service, region, and tag. It is not the purpose of AWS Savings Plans.
To provide cost optimization recommendations for AWS resources – This is the purpose of the AWS Trusted Advisor service, which provides recommendations for optimizing AWS resources based on best practices and usage patterns. It is not the purpose of AWS Savings Plans.
To set cost and usage budgets for AWS resources – This is the purpose of the AWS Budgets service, which allows customers to set custom cost and usage budgets for their AWS resources and receive alerts when their usage exceeds those budgets. It is not the purpose of AWS Savings Plans.
References: https://aws.amazon.com/savingsplans
Question 58 of 65
58. Question
Which AWS service allows you to perform face detection and analysis from millions of images and videos in minutes?
Correct
Amazon Rekognition – Amazon Rekognition is the AWS service that allows you to perform face detection and analysis from millions of images and videos in just minutes. It provides powerful computer vision capabilities for analyzing visual content. With Rekognition, you can detect and analyze faces, identify facial attributes such as emotions and age range, perform face comparison and recognition, and even track faces in videos. Rekognition utilizes deep learning algorithms to deliver accurate and fast results. It is highly scalable, allowing you to process vast amounts of visual data quickly and efficiently. Amazon Rekognition is widely used in various applications, including security systems, content moderation, personalized user experiences, and social media analytics.
Incorrect Options:
Amazon Polly – Amazon Polly is a text-to-speech (TTS) service that converts text into lifelike speech. It does not provide face detection and analysis from images and videos.
Amazon Transcribe – Amazon Transcribe is an automatic speech recognition (ASR) service that converts spoken language into written text. It is not intended for face detection and analysis tasks.
Amazon Kendra – Amazon Kendra is an intelligent search service powered by machine learning. It is designed to provide highly accurate search results by understanding natural language queries and retrieving information from various data sources. It does not specialize in face detection and analysis from images and videos.
References: https://aws.amazon.com/rekognition
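The facial attributes mentioned above (age range, emotions) come back as structured JSON. The sketch below parses a simplified, assumed response shape; a real call would go through boto3's `detect_faces`, and the hardcoded dict here only mimics a few of the fields from an actual Rekognition response.

```python
# Parsing a SIMPLIFIED, assumed DetectFaces-style response. A real call
# would use boto3 and AWS credentials; the dict below is a stand-in that
# sketches the FaceDetails / AgeRange / Emotions fields.

sample_response = {
    "FaceDetails": [
        {
            "AgeRange": {"Low": 25, "High": 35},
            "Emotions": [
                {"Type": "HAPPY", "Confidence": 97.2},
                {"Type": "CALM", "Confidence": 2.1},
            ],
        }
    ]
}

for face in sample_response["FaceDetails"]:
    age = face["AgeRange"]
    # Pick the emotion Rekognition-style responses rank by confidence.
    top_emotion = max(face["Emotions"], key=lambda e: e["Confidence"])
    print(f"age {age['Low']}-{age['High']}, "
          f"dominant emotion: {top_emotion['Type']}")
```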
59. Question
Which of the following helps you easily categorize and track your AWS costs at a detailed level?
Cost Allocation Tags – Cost Allocation Tags help you organize your AWS resources and can assist with managing costs. They are key-value pairs that you can attach to your AWS resources. Once activated, AWS generates a cost allocation report with usage and costs aggregated by your tags, helping you track and categorize your AWS costs at a detailed level.
Incorrect Options:
AWS Budgets – AWS Budgets gives you the ability to set custom cost and usage budgets that alert you when your costs or usage exceed (or are forecasted to exceed) your budgeted amount. However, it doesn't categorize or track your costs at a detailed level.
CloudWatch Logs – CloudWatch Logs helps you monitor, store, and access your log files from EC2 instances and other sources, but it does not assist in tracking or categorizing costs.
AWS Service Catalog – AWS Service Catalog allows organizations to create and manage catalogs of IT services that are approved for use on AWS, but it does not track or categorize costs.
References: https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html
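Conceptually, a cost allocation report groups billing line items by the values of an activated tag key. The sketch below shows that grouping with made-up records (the costs and tag names are illustrative, not a real AWS report):

```python
# Conceptual sketch of cost allocation by tag: aggregate cost per value of
# an activated tag key. The records below are made-up illustrative data.
from collections import defaultdict

line_items = [
    {"cost": 12.50, "tags": {"Project": "web", "Env": "prod"}},
    {"cost": 3.25,  "tags": {"Project": "web", "Env": "dev"}},
    {"cost": 8.00,  "tags": {"Project": "etl"}},
    {"cost": 1.10,  "tags": {}},  # untagged usage still appears in reports
]

def costs_by_tag(items, tag_key):
    totals = defaultdict(float)
    for item in items:
        # AWS groups untagged usage separately; mirror that with a bucket.
        value = item["tags"].get(tag_key, "(untagged)")
        totals[value] += item["cost"]
    return dict(totals)

print(costs_by_tag(line_items, "Project"))
# {'web': 15.75, 'etl': 8.0, '(untagged)': 1.1}
```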
60. Question
Which AWS support plan offers a Technical Account Manager (TAM)?
Enterprise – The AWS Enterprise Support plan offers a Technical Account Manager (TAM) as part of its support offerings. A TAM is a designated technical expert who works closely with enterprise customers to provide guidance, technical assistance, and strategic advice on their AWS environments. The TAM acts as a trusted advisor, helping organizations optimize their AWS infrastructure, resolve issues, and achieve their business goals. They assist with architectural guidance, proactive planning, and ongoing support to ensure smooth operations and successful implementation of AWS services. The Enterprise Support plan, with the inclusion of a dedicated TAM, is tailored to meet the needs of large-scale enterprise customers with mission-critical workloads and complex environments.
Incorrect Options:
Basic – The Basic support plan is the default level of AWS support that is available to all AWS customers. It provides access to documentation, community forums, and basic customer support through email.
Developer – The Developer support plan is a paid support plan that offers faster response times and additional support channels compared to the Basic plan. However, it does not include a dedicated TAM.
Business – The Business support plan is a paid support plan designed for businesses with production workloads. It provides faster response times, additional support channels, and enhanced support for AWS architecture and best practices. However, it does not include a dedicated TAM.
References: https://aws.amazon.com/premiumsupport/plans
61. Question
Which AWS service can be used to perform vulnerability assessments and security audits of AWS resources?
Amazon Inspector – Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS. It analyzes the behavior of the AWS resources and helps identify potential security issues, vulnerabilities, or deviations from best practices. By running Amazon Inspector, AWS customers can receive a detailed report on the security state of their AWS resources and actionable recommendations to mitigate any identified risks. Therefore, Amazon Inspector is the correct choice.
Incorrect Options:
Amazon GuardDuty – GuardDuty is a threat detection service that continuously monitors for malicious or unauthorized activity. It doesn't perform vulnerability assessments or security audits, making this option incorrect.
AWS WAF – AWS WAF (Web Application Firewall) protects web applications from common web exploits. It is used for protection rather than performing vulnerability assessments or audits, so this is not the correct answer.
AWS KMS – AWS Key Management Service (KMS) is used to create and manage cryptographic keys and control their use across a wide range of AWS services and in applications. It doesn't perform vulnerability assessments or security audits, making this option incorrect.
References: https://aws.amazon.com/inspector
62. Question
A company wants to move 8 terabytes of data from an on-premises data center to the AWS cloud. Which of the following should be used to do this in a cost-effective way?
AWS Snowcone – AWS Snowcone is a portable, rugged, and secure device for edge computing and data transfer. It is the smallest member of the AWS Snow Family of devices, capable of storing up to 8 terabytes of data. The company can use Snowcone to move its data to AWS, which is practical and cost-effective for the amount of data it wants to transfer.
Incorrect Options:
AWS Snowball – AWS Snowball is a data transport solution that accelerates moving terabytes to petabytes of data into and out of AWS using storage devices designed to be secure for physical transport. However, for an 8 terabyte transfer it would be oversized and less cost-effective.
AWS Snowmobile – AWS Snowmobile is an exabyte-scale data transfer service used to move extremely large amounts of data to AWS, on the order of petabytes or exabytes, so it is not the most cost-effective solution for this use case.
AWS Storage Gateway – AWS Storage Gateway is a hybrid cloud storage service that connects an on-premises software appliance with cloud-based storage. For a large one-time data transfer of 8 terabytes, Snowcone would be a more cost-effective solution.
References: https://aws.amazon.com/snowcone
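A quick back-of-envelope calculation shows why shipping 8 TB on a physical device can beat uploading it. The 100 Mbps uplink below is an assumed figure for illustration, not from the question:

```python
# Back-of-envelope check: days to upload a given number of terabytes over a
# network link running flat out. The 100 Mbps uplink is an assumed figure.

def transfer_days(terabytes, mbps):
    """Days to push `terabytes` (decimal TB) of data over a `mbps` link."""
    bits = terabytes * 8 * 10**12   # decimal terabytes -> bits
    seconds = bits / (mbps * 10**6)
    return seconds / 86400

days = transfer_days(8, 100)
print(f"{days:.1f} days over a 100 Mbps link")  # 7.4 days
```

At roughly a week of saturated uplink, a shipped Snowcone is often both faster in practice and cheaper than paying for that bandwidth.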
63. Question
What is the purpose of AWS KMS?
To encrypt and decrypt data stored in AWS services – AWS Key Management Service (KMS) is designed to create and control cryptographic keys for encrypting and decrypting data across AWS services. It supports centralized control over cryptographic keys and provides an auditable solution to satisfy compliance requirements. KMS is integrated with other AWS services to help protect the data you store in those services and to control access to that data using encryption.
Incorrect Options:
To control access to AWS services and resources – This is the purpose of AWS Identity and Access Management (IAM), not AWS KMS. IAM is used to manage access to AWS services and resources securely.
To manage user authentication and authorization – This is also handled by AWS IAM, which allows you to create users, groups, and roles and define their permissions. KMS does not serve this purpose.
To manage compliance with regulatory standards – While AWS KMS helps meet compliance requirements through key encryption, comprehensive management of regulatory compliance is more accurately aligned with AWS Compliance programs, not AWS KMS.
References: https://aws.amazon.com/kms
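The defining property of the KMS model is that callers reference keys by ID and never see the key material; the service performs the encryption and decryption. The toy sketch below illustrates only that control-plane idea. The XOR keystream is deliberately trivial and NOT secure, and the class name is made up; a real workload would call `kms.encrypt(...)` / `kms.decrypt(...)` via boto3.

```python
# Toy illustration of the KMS model (NOT real cryptography): key material
# stays inside the service object; callers only ever hold key IDs.
import hashlib
import secrets

class ToyKMS:  # hypothetical stand-in for the managed service
    def __init__(self):
        self._keys = {}          # key material never leaves this object

    def create_key(self):
        key_id = f"key-{len(self._keys)}"
        self._keys[key_id] = secrets.token_bytes(32)
        return key_id            # caller receives only the key ID

    def _stream(self, key_id, n):
        # Insecure demo keystream derived from the stored key material.
        out, counter = b"", 0
        while len(out) < n:
            out += hashlib.sha256(
                self._keys[key_id] + counter.to_bytes(8, "big")
            ).digest()
            counter += 1
        return out[:n]

    def encrypt(self, key_id, plaintext: bytes) -> bytes:
        ks = self._stream(key_id, len(plaintext))
        return bytes(a ^ b for a, b in zip(plaintext, ks))

    def decrypt(self, key_id, ciphertext: bytes) -> bytes:
        return self.encrypt(key_id, ciphertext)  # XOR is its own inverse

kms = ToyKMS()
kid = kms.create_key()
ct = kms.encrypt(kid, b"order-4321")
print(kms.decrypt(kid, ct))  # b'order-4321'
```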
64. Question
Which AWS service provides a cloud-based virtual desktop that must be persistent and a replacement for traditional desktops?
Amazon WorkSpaces – Amazon WorkSpaces provides a cloud-based virtual desktop infrastructure (VDI) solution. It offers a persistent and fully managed desktop experience in the cloud, acting as a replacement for traditional desktops. With WorkSpaces, users can access their desktops from anywhere using a supported device, including laptops, tablets, and thin clients. WorkSpaces allows organizations to provision and manage virtual desktops for their users, providing a consistent and secure computing environment. It supports various operating systems and offers flexibility in terms of hardware configurations and software applications. Amazon WorkSpaces simplifies desktop management, reduces hardware dependencies, and enables remote access and mobility.
Incorrect Options:
Amazon AppStream 2.0 – Amazon AppStream 2.0 is a service that enables users to stream desktop applications to their devices, rather than providing a full virtual desktop experience. It is designed for application streaming and does not serve as a direct replacement for traditional desktops.
Amazon WorkLink – Amazon WorkLink is a service that allows users to securely access internal websites and web applications from mobile devices. It focuses on providing secure and simplified access to web content, rather than offering a complete virtual desktop replacement.
AWS Cloud9 – AWS Cloud9 is a cloud-based integrated development environment (IDE) that allows developers to write, run, and debug code from their browsers. While Cloud9 provides a development environment in the cloud, it is not intended as a replacement for traditional desktops, as it primarily serves as a coding and collaboration platform for developers.
References: https://aws.amazon.com/workspaces
65. Question
Which AWS service provides recommendations to help you follow best practices for improving security and performance, fault tolerance, reducing costs, and monitoring service quotas?
AWS Trusted Advisor – AWS Trusted Advisor provides recommendations to help you adhere to best practices across your AWS environment. It provides real-time guidance to help you provision your resources following AWS best practices for security, performance, fault tolerance, and cost efficiency. Additionally, it monitors your service quotas to ensure you don't hit limits. For a holistic view of your AWS services and optimal use of resources, AWS Trusted Advisor is the go-to service.
Incorrect Options:
Amazon Inspector – Amazon Inspector is a security vulnerability management service. It helps improve the security and compliance of applications deployed on AWS but does not provide overall recommendations covering performance, fault tolerance, cost efficiency, and service quotas.
Amazon CloudWatch – Amazon CloudWatch is a monitoring and observability service but does not provide recommendations for best practices, cost reduction, or fault tolerance.
AWS IAM – AWS Identity and Access Management (IAM) allows you to manage access to AWS services and resources securely. It does not provide recommendations for improving performance or fault tolerance, or for reducing costs.
References: https://aws.amazon.com/premiumsupport/technology/trusted-advisor