Your results for "AWS Cloud Practitioner - Practice Test 2"
Question 1 of 65
An organization is planning to move its infrastructure from its on-premises data center to AWS Cloud. As a Cloud Practitioner, which options would you recommend so that the organization can identify the right AWS services to build solutions on AWS Cloud? (Select two)
Correct options:
AWS Service Catalog – AWS Service Catalog allows organizations to create and manage catalogs of IT services that are approved for use on AWS. These IT services can include everything from virtual machine images, servers, software, and databases to complete multi-tier application architectures.
AWS Partner Network – Organizations can take help from the AWS Partner Network (APN) to identify the right AWS services to build solutions on AWS Cloud. APN is the global partner program for technology and consulting businesses that leverage Amazon Web Services to build solutions and services for customers.
Incorrect options:
AWS Organizations – AWS Organizations helps you centrally govern your environment as you grow and scale your workloads on AWS. Organizations help you to centrally manage billing; control access, compliance, and security; and share resources across your AWS accounts. AWS Organizations cannot help in identifying the right AWS services to build solutions on AWS Cloud.
Amazon CloudWatch – Amazon CloudWatch is a monitoring and observability service built for DevOps engineers, developers, site reliability engineers (SREs), and IT managers. CloudWatch provides data and actionable insights to monitor your applications, respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health. Think resource performance monitoring, events, and alerts; think CloudWatch. CloudWatch cannot help in identifying the right AWS services to build solutions on AWS Cloud.
AWS CloudTrail – AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. Think account-specific activity and audit; think CloudTrail. CloudTrail cannot help in identifying the right AWS services to build solutions on AWS Cloud.
References: https://aws.amazon.com/servicecatalog/ https://aws.amazon.com/partners/
Question 2 of 65
Which of the following AWS services are global in scope? (Select two)
Correct options:
AWS Identity and Access Management (IAM)
Amazon CloudFront
Most AWS services are Region-specific, but a few need to be global in scope because of the nature of the service they offer. AWS IAM, Amazon CloudFront, Amazon Route 53, and AWS WAF are some of the global services.
AWS Identity and Access Management (IAM) enables you to manage access to AWS services and resources securely. Using IAM, you can create and manage AWS users and groups, and use permissions to allow and deny their access to AWS resources.
Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency, high transfer speeds, all within a developer-friendly environment.
Incorrect options:
Amazon Relational Database Service (Amazon RDS) – Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. This is a regional service.
Amazon Elastic Compute Cloud (Amazon EC2) – Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. It comes under Infrastructure as a Service type of Cloud Computing. This is a regional service.
Exam Alert:
Amazon S3 – Amazon S3 is a unique service in the sense that it follows a global namespace but the buckets are regional. You specify an AWS Region when you create your Amazon S3 bucket. This is a regional service.
References: https://aws.amazon.com/iam/faqs/ https://aws.amazon.com/cloudfront/faqs/
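The global-versus-regional distinction above can be sketched as a simple lookup. The classification below is an illustrative subset chosen from the services named in this explanation, not an authoritative or exhaustive list.

```python
# Illustrative sketch: classifying a few AWS services by scope.
# GLOBAL_SERVICES is an assumption-based subset for demonstration only.
GLOBAL_SERVICES = {"AWS IAM", "Amazon CloudFront", "Amazon Route 53", "AWS WAF"}

def service_scope(service: str) -> str:
    """Return 'global' for known global services, 'regional' otherwise."""
    return "global" if service in GLOBAL_SERVICES else "regional"

print(service_scope("Amazon CloudFront"))  # global
print(service_scope("Amazon EC2"))         # regional
```

A lookup like this mirrors how the exam expects you to reason: default to "regional" unless the service is one of the few well-known global ones.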
Question 3 of 65
Which of the following is the correct statement regarding the AWS Storage services?
Correct option:
S3 is object-based storage, EBS is block-based storage, and EFS is file-based storage
Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed, elastic NFS file system.
Amazon Elastic Block Store (EBS) is an easy to use, high-performance block storage service designed for use with Amazon Elastic Compute Cloud (EC2) for both throughput and transaction-intensive workloads at any scale.
Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance.
Incorrect options:
S3 is block-based storage, EBS is object-based storage, and EFS is file-based storage
S3 is object-based storage, EBS is file-based storage, and EFS is block-based storage
S3 is file-based storage, EBS is block-based storage, and EFS is object-based storage
These three options contradict the correct classification given above, so they are incorrect.
References: https://aws.amazon.com/s3/ https://aws.amazon.com/ebs/ https://aws.amazon.com/efs/
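The three storage paradigms above can be captured as a small mapping; the comments summarize the distinction each service is built around.

```python
# Illustrative mapping of the three AWS storage services discussed above
# to their storage paradigms.
STORAGE_PARADIGM = {
    "Amazon S3": "object",   # objects in buckets, addressed by key
    "Amazon EBS": "block",   # block volumes attached to EC2 instances
    "Amazon EFS": "file",    # NFS file system shareable across instances
}

for service, paradigm in STORAGE_PARADIGM.items():
    print(f"{service}: {paradigm}-based storage")
```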
Question 4 of 65
Which of the following options can be used to access AWS services (Select three)?
Correct options:
AWS services can be accessed in three different ways:
AWS Management Console – This is a simple web interface for accessing AWS services.
AWS Command Line Interface (CLI) – You can access AWS services from the command line and automate service management with scripts.
AWS Software Development Kit (SDK) – You can also access AWS services via an SDK, which provides language-specific abstracted APIs for AWS services.
Incorrect options:
AWS Systems Manager – AWS Systems Manager gives you visibility and control of your infrastructure on AWS. Systems Manager provides a unified user interface so you can view operational data from multiple AWS services and allows you to automate operational tasks across your AWS resources. With Systems Manager, you can group resources, like Amazon EC2 instances, Amazon S3 buckets, or Amazon RDS instances, by application, view operational data for monitoring and troubleshooting, and take action on your groups of resources.
AWS Secrets Manager – AWS Secrets Manager helps you protect secrets needed to access your applications, services, and IT resources. The service enables you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. Users and applications retrieve secrets with a call to Secrets Manager APIs, eliminating the need to hardcode sensitive information in plain text.
Amazon API Gateway – Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale.
Question 5 of 65
What is the primary benefit of deploying an RDS database in a Multi-AZ configuration?
Correct option:
Multi-AZ enhances database availability
Amazon RDS Multi-AZ deployments provide enhanced availability and durability for RDS database (DB) instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ).
In case of an infrastructure failure, Amazon RDS performs an automatic failover to the standby so that you can resume database operations as soon as the failover is complete.
How Multi-AZ works: https://aws.amazon.com/rds/features/multi-az/
Exam Alert:
Review the differences between Multi-AZ, Multi-Region, and Read Replica deployments for RDS: https://aws.amazon.com/rds/features/multi-az/
Incorrect options:
Multi-AZ improves database performance for read-heavy workloads – Read Replicas allow you to create read-only copies that are synchronized with your master database. Read Replicas are used for improved read performance. Therefore, this option is incorrect.
Multi-AZ protects the database from a regional failure – You need to use RDS in Multi-Region deployment configuration to protect from a regional failure. Multi-AZ cannot protect from a regional failure.
Multi-AZ reduces database usage costs – Multi-AZ RDS increases the database costs compared to the standard deployment. So this option is incorrect.
Reference: https://aws.amazon.com/rds/features/multi-az/
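The failover behavior described above can be sketched as a toy model. This is not the RDS API — the class, method names, and AZ strings are all illustrative assumptions — but it shows the key property: after failover, the standby AZ becomes primary and no committed data is lost.

```python
# Toy model of an RDS Multi-AZ deployment: a primary DB instance with a
# synchronously replicated standby in a different Availability Zone.
# All names here are illustrative, not actual AWS APIs.
class MultiAZDatabase:
    def __init__(self, primary_az: str, standby_az: str):
        assert primary_az != standby_az, "standby must be in a different AZ"
        self.primary_az = primary_az
        self.standby_az = standby_az
        # Synchronous replication means both copies are identical at commit
        # time, so a single dict stands in for both in this toy model.
        self.data = {}

    def write(self, key, value):
        # Real Multi-AZ replicates each write to the standby synchronously
        # before acknowledging it to the client.
        self.data[key] = value

    def failover(self):
        # On infrastructure failure, RDS promotes the standby to primary.
        self.primary_az, self.standby_az = self.standby_az, self.primary_az

db = MultiAZDatabase("us-east-1a", "us-east-1b")
db.write("order", 42)
db.failover()
print(db.primary_az)     # us-east-1b
print(db.data["order"])  # 42 -- no data loss across failover
```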
Question 6 of 65
A retail company has multiple AWS accounts for each of its departments. Which of the following AWS services can be used to set up consolidated billing and a single payment method for these AWS accounts?
Correct option:
AWS Organizations
AWS Organizations helps you to centrally manage billing; control access, compliance, and security; and share resources across your AWS accounts. Using AWS Organizations, you can automate account creation, create groups of accounts to reflect your business needs, and apply policies for these groups for governance. You can also simplify billing by setting up a single payment method for all of your AWS accounts. AWS Organizations is available to all AWS customers at no additional charge.
Key features of AWS Organizations: https://aws.amazon.com/organizations/
Incorrect options:
AWS Cost Explorer – AWS Cost Explorer has an easy-to-use interface that lets you visualize, understand, and manage your AWS costs and usage over time. AWS Cost Explorer includes a default report that helps you visualize the costs and usage associated with your top five cost-accruing AWS services, and gives you a detailed breakdown of all services in the table view. The reports let you adjust the time range to view historical data going back up to twelve months to gain an understanding of your cost trends. You cannot use Cost Explorer to set up consolidated billing and a single payment method for multiple AWS accounts.
AWS Budgets – AWS Budgets gives the ability to set custom budgets that alert you when your costs or usage exceed (or are forecasted to exceed) your budgeted amount. You can also use AWS Budgets to set reservation utilization or coverage targets and receive alerts when your utilization drops below the threshold you define. Budgets can be created at the monthly, quarterly, or yearly level, and you can customize the start and end dates. You can further refine your budget to track costs associated with multiple dimensions, such as AWS service, linked account, tag, and others. You cannot use AWS Budgets to set up consolidated billing and a single payment method for multiple AWS accounts.
AWS Secrets Manager – AWS Secrets Manager helps you protect secrets needed to access your applications, services, and IT resources. The service enables you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. You cannot use Secrets Manager to set up consolidated billing and a single payment method for multiple AWS accounts.
Reference: https://aws.amazon.com/organizations/
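The consolidated-billing idea above reduces to a simple roll-up: charges from each linked account are summed into a single invoice for the management (payer) account. The account IDs and amounts below are made up for demonstration.

```python
# Illustrative sketch of consolidated billing: charges from linked member
# accounts roll up to one payer account. Values are hypothetical.
linked_account_charges = {
    "111111111111": 120.50,  # e.g. engineering department account
    "222222222222": 75.25,   # e.g. marketing department account
    "333333333333": 310.00,  # e.g. data platform account
}

consolidated_total = sum(linked_account_charges.values())
print(f"Single invoice to payer account: ${consolidated_total:.2f}")
```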
Incorrect
Correct option:
AWS Organizations
AWS Organizations helps you to centrally manage billing; control access, compliance, and security; and share resources across your AWS accounts. Using AWS Organizations, you can automate account creation, create groups of accounts to reflect your business needs, and apply policies for these groups for governance. You can also simplify billing by setting up a single payment method for all of your AWS accounts. AWS Organizations is available to all AWS customers at no additional charge.
Key Features of AWS Organizations: via – https://aws.amazon.com/organizations/
Incorrect options:
AWS Cost Explorer – AWS Cost Explorer has an easy-to-use interface that lets you visualize, understand, and manage your AWS costs and usage over time. AWS Cost Explorer includes a default report that helps you visualize the costs and usage associated with your top five cost-accruing AWS services, and gives you a detailed breakdown of all services in the table view. The reports let you adjust the time range to view historical data going back up to twelve months to gain an understanding of your cost trends. You cannot use Cost Explorer to set up consolidated billing and a single payment method for multiple AWS accounts.
AWS Budgets – AWS Budgets gives the ability to set custom budgets that alert you when your costs or usage exceed (or are forecasted to exceed) your budgeted amount. You can also use AWS Budgets to set reservation utilization or coverage targets and receive alerts when your utilization drops below the threshold you define. Budgets can be created at the monthly, quarterly, or yearly level, and you can customize the start and end dates. You can further refine your budget to track costs associated with multiple dimensions, such as AWS service, linked account, tag, and others. You cannot use AWS Budgets to set up consolidated billing and a single payment method for multiple AWS accounts.
AWS Secrets Manager – AWS Secrets Manager helps you protect secrets needed to access your applications, services, and IT resources. The service enables you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. You cannot use Secrets Manager to set up consolidated billing and a single payment method for multiple AWS accounts.
Reference: https://aws.amazon.com/organizations/
Unattempted
Correct option:
AWS Organizations
AWS Organizations helps you to centrally manage billing; control access, compliance, and security; and share resources across your AWS accounts. Using AWS Organizations, you can automate account creation, create groups of accounts to reflect your business needs, and apply policies for these groups for governance. You can also simplify billing by setting up a single payment method for all of your AWS accounts. AWS Organizations is available to all AWS customers at no additional charge.
Key Features of AWS Organizations: via – https://aws.amazon.com/organizations/
Incorrect options:
AWS Cost Explorer – AWS Cost Explorer has an easy-to-use interface that lets you visualize, understand, and manage your AWS costs and usage over time. AWS Cost Explorer includes a default report that helps you visualize the costs and usage associated with your top five cost-accruing AWS services, and gives you a detailed breakdown of all services in the table view. The reports let you adjust the time range to view historical data going back up to twelve months to gain an understanding of your cost trends. You cannot use Cost Explorer to set up consolidated billing and a single payment method for multiple AWS accounts.
AWS Budgets – AWS Budgets gives the ability to set custom budgets that alert you when your costs or usage exceed (or are forecasted to exceed) your budgeted amount. You can also use AWS Budgets to set reservation utilization or coverage targets and receive alerts when your utilization drops below the threshold you define. Budgets can be created at the monthly, quarterly, or yearly level, and you can customize the start and end dates. You can further refine your budget to track costs associated with multiple dimensions, such as AWS service, linked account, tag, and others. You cannot use AWS Budgets to set up consolidated billing and a single payment method for multiple AWS accounts.
AWS Secrets Manager – AWS Secrets Manager helps you protect secrets needed to access your applications, services, and IT resources. The service enables you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. You cannot use Secrets Manager to set up consolidated billing and a single payment method for multiple AWS accounts.
Reference: https://aws.amazon.com/organizations/
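As an illustration of the governance policies mentioned above, AWS Organizations lets you attach service control policies (SCPs) to groups of accounts. A minimal hypothetical SCP that prevents member accounts from leaving the organization might look like this (the `Sid` is a placeholder name):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyLeaveOrganization",
      "Effect": "Deny",
      "Action": "organizations:LeaveOrganization",
      "Resource": "*"
    }
  ]
}
```

This is a sketch of the policy format only; SCPs set permission guardrails for accounts and do not by themselves grant any access.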
Question 7 of 65
7. Question
The AWS Well-Architected Framework provides guidance on building cloud-based applications using AWS best practices. Which of the following options are pillars of the AWS Well-Architected Framework? (Select two)
Correct option:
Reliability
Cost Optimization
The Well-Architected Framework provides guidance on building secure, high-performing, resilient, and efficient infrastructure for cloud-based applications. Based on five pillars (operational excellence, security, reliability, performance efficiency, and cost optimization), the Framework provides a consistent approach for customers and partners to evaluate architectures and implement designs that will scale over time.
Incorrect options:
Elasticity – Elasticity is the ability to acquire resources as you need them and release resources when you no longer need them. In the cloud, you want to do this automatically.
Availability – A system that is available is capable of delivering the designed functionality at a given point in time. Highly available systems are those that can withstand some measure of degradation while still remaining available.
Scalability – A measurement of a system’s ability to grow to accommodate an increase in demand.
These three options are not part of the AWS Well-Architected Framework.
Reference: https://d1.awsstatic.com/whitepapers/architecture/AWS_Well-Architected_Framework.pdf
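As a quick self-check, the pillar list and the distractor terms from the explanation above can be encoded as a small lookup (a study aid only, not an AWS API):

```python
# The five Well-Architected pillars named in the explanation above,
# versus the distractor terms offered in the answer options.
PILLARS = {
    "operational excellence",
    "security",
    "reliability",
    "performance efficiency",
    "cost optimization",
}

DISTRACTORS = {"elasticity", "availability", "scalability"}


def is_pillar(term: str) -> bool:
    """Return True if the term is one of the five framework pillars."""
    return term.strip().lower() in PILLARS
```

The normalization in `is_pillar` means answer options can be checked regardless of capitalization.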
Question 8 of 65
8. Question
According to the AWS Shared Responsibility Model, which of the following are responsibilities of the customer for Amazon RDS?
Correct option:
Security and Compliance is a shared responsibility between AWS and the customer. This shared model can help relieve the customer’s operational burden as AWS operates, manages and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the service operates.
Database encryption – Under the shared model, customers are responsible for managing their data, including data encryption.
Shared Responsibility Model Overview: via – https://aws.amazon.com/compliance/shared-responsibility-model/
Incorrect options:
According to the AWS Shared Responsibility Model, AWS is responsible for “Security of the Cloud”. This includes protecting the infrastructure that runs all of the services offered in the AWS Cloud. This infrastructure is composed of the hardware, software, networking, and facilities that run AWS Cloud services.
Amazon Relational Database Service (Amazon RDS) is a managed service that makes it easy to set up, operate, and scale a relational database in the cloud.
Managing the underlying server hardware on which RDS runs – Since RDS is a managed service, the underlying infrastructure is the responsibility of AWS.
Applying patches to the RDS database – Since RDS is a managed service, the underlying infrastructure is the responsibility of AWS.
Applying patches to the underlying OS – Since RDS is a managed service, the underlying infrastructure is the responsibility of AWS.
Reference: https://aws.amazon.com/compliance/shared-responsibility-model/
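To make the customer-side responsibility concrete: enabling encryption at rest for an RDS instance is a choice the customer makes when creating the database, for example via the `StorageEncrypted` property in a CloudFormation template. The fragment below is a hypothetical sketch, not a complete template; the resource name and credential values are placeholders:

```yaml
# Hypothetical CloudFormation fragment: the customer opts in to encryption
# at rest, while AWS manages the underlying host hardware and patching.
MyDatabase:
  Type: AWS::RDS::DBInstance
  Properties:
    Engine: mysql
    DBInstanceClass: db.t3.micro
    AllocatedStorage: "20"
    MasterUsername: admin
    MasterUserPassword: CHANGE-ME   # placeholder; use AWS Secrets Manager in practice
    StorageEncrypted: true          # customer responsibility under the shared model
```

The `StorageEncrypted: true` line is the customer's decision under the shared responsibility model; patching the database engine and the underlying OS remains with AWS.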
Question 9 of 65
9. Question
A multi-national company wants to migrate its IT infrastructure to AWS Cloud and is looking for a concierge support team as well as a response time of around an hour in case the systems go down. As a Cloud Practitioner, which of the following support plans would you recommend to the company?
Correct option:
Enterprise
The Concierge Support Team is only available with the Enterprise Support plan. The Concierge Team is a group of AWS billing and account experts who specialize in working with enterprise accounts and can quickly and efficiently assist you with billing and account inquiries. The Enterprise Support plan also provides a response time of less than 15 minutes for business-critical system outages and less than an hour for production system outages. So this is the correct option.
Exam Alert:
Please review the differences between the Developer, Business, and Enterprise support plans as you can expect at least a couple of questions on the exam:
via – https://aws.amazon.com/premiumsupport/plans/
Incorrect options:
Developer – Concierge Support Team is only available for Enterprise Support plan so this option is incorrect.
Business – Concierge Support Team is only available for Enterprise Support plan so this option is incorrect.
Individual – This is a made-up option and has been added as a distractor.
Reference: https://aws.amazon.com/premiumsupport/plans/
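The facts stated in the explanation above can be summarized in a small lookup table (a study aid, not an AWS API; only what the explanation states is recorded, and response times are upper bounds in minutes):

```python
# Support-plan facts from the explanation above. Fields are included only
# where the text states them; the Concierge Support Team is exclusive to
# the Enterprise plan.
SUPPORT_PLANS = {
    "Developer": {"concierge": False},
    "Business": {"concierge": False},
    "Enterprise": {
        "concierge": True,
        "business_critical_response_minutes": 15,  # "less than 15 minutes"
        "production_outage_response_minutes": 60,  # "less than an hour"
    },
}


def plans_with_concierge():
    """Names of the plans that include the Concierge Support Team."""
    return sorted(name for name, facts in SUPPORT_PLANS.items()
                  if facts["concierge"])
```

Since the scenario requires both a concierge team and roughly one-hour response on outages, only Enterprise satisfies it.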
Question 10 of 65
10. Question
Which of the following AWS services comes under the Software as a Service (SaaS) Cloud Computing Type?
Correct option: Amazon Rekognition
Cloud Computing can be broadly divided into three types – Infrastructure as a Service (IaaS), Platform as a Service (PaaS), Software as a Service (SaaS).
IaaS contains the basic building blocks for cloud IT. It typically provides access to networking features, computers (virtual or on dedicated hardware), and data storage space. IaaS gives the highest level of flexibility and management control over IT resources. Examples – Amazon EC2 (on AWS), GCP, Azure, Rackspace, Digital Ocean, Linode.
PaaS removes the need to manage underlying infrastructure (usually hardware and operating systems) and allows you to focus on the deployment and management of your applications. You don’t need to worry about resource procurement, capacity planning, software maintenance, patching, or any of the other undifferentiated heavy lifting involved in running your application. Examples – Elastic Beanstalk (on AWS), Heroku, Google App Engine (GCP), Windows Azure (Microsoft).
SaaS provides you with a complete product that is run and managed by the service provider. With a SaaS offering, you don’t have to think about how the service is maintained or how the underlying infrastructure is managed. You only need to think about how you will use that particular software. Examples – Amazon Rekognition, Google Apps (Gmail), Dropbox, Zoom.
Overview of Cloud Computing Types: via – https://aws.amazon.com/types-of-cloud-computing/
You can use Amazon Rekognition to add image and video analysis to your applications using proven, highly scalable, deep learning technology that requires no machine learning expertise. With Amazon Rekognition, you can identify objects, people, text, scenes, and activities in images and videos as well as detect any inappropriate content. Rekognition is an example of Software as a Service model.
Incorrect options:
Amazon EC2 – Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. Hence, it comes under Infrastructure as a Service type of Cloud Computing.
AWS Elastic Beanstalk – AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services. You simply upload your code and Elastic Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, auto-scaling to application health monitoring. Per the definitions above, Elastic Beanstalk falls under the Platform as a Service type.
Elastic Load Balancing – Elastic Load Balancing automatically distributes incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, IP addresses, and Lambda functions. It can handle the varying load of your application traffic in a single Availability Zone or across multiple Availability Zones. This has been added as a distractor.
References: https://aws.amazon.com/elasticbeanstalk/ https://aws.amazon.com/what-is-cloud-computing/
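The service-to-model mapping described above can be captured as a simple dictionary for revision purposes (a study aid, not an AWS API):

```python
# Classification of the services discussed in the explanation above into
# the three cloud computing types.
CLOUD_MODEL = {
    "Amazon EC2": "IaaS",
    "AWS Elastic Beanstalk": "PaaS",
    "Heroku": "PaaS",
    "Amazon Rekognition": "SaaS",
    "Dropbox": "SaaS",
}


def services_of(model: str):
    """Return the known services belonging to the given cloud model."""
    return sorted(s for s, m in CLOUD_MODEL.items() if m == model)
```

The rule of thumb encoded here: the more of the stack the provider manages (from infrastructure up to the finished application), the further the service moves from IaaS toward SaaS.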
Question 11 of 65
11. Question
A fleet of Amazon EC2 instances spread across different Availability Zones needs to access, edit and share file-based data stored centrally on a system. As a Cloud Practitioner, which AWS service would you recommend for this use-case?
Correct option:
Elastic File System (EFS)
Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed, elastic NFS file system. It is built to scale on-demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth. Amazon EFS is designed to provide massively parallel shared access to thousands of Amazon EC2 instances, enabling your applications to achieve high levels of aggregate throughput and IOPS with consistent low latencies.
How EFS Works: via – https://aws.amazon.com/efs/
Incorrect options:
Elastic Block Store (EBS) Volume – Amazon Elastic Block Store (EBS) is an easy-to-use, high-performance block storage service designed for use with Amazon Elastic Compute Cloud (EC2) for both throughput-intensive and transaction-intensive workloads at any scale. A broad range of workloads, such as relational and non-relational databases, enterprise applications, containerized applications, big data analytics engines, file systems, and media workflows are widely deployed on Amazon EBS. An EBS volume can only be attached to one EC2 instance at a time, so this option is not correct for the given use-case.
EC2 Instance Store – An instance store provides temporary block-level storage for your EC2 instance. This storage is located on disks that are physically attached to the host computer. Instance store is ideal for the temporary storage of information that changes frequently, such as buffers, caches, scratch data, and other temporary content, or for data that is replicated across a fleet of instances, such as a load-balanced pool of web servers. Instance storage is temporary: data is lost if the instance fails or is terminated, so an instance store cannot be used for file sharing between instances.
Amazon S3 – Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance. Customers of all sizes and industries can use it to store and protect any amount of data for a range of use cases, such as websites, mobile applications, backup and restore, archive, enterprise applications, IoT devices, and big data analytics. Because S3 is object-based storage, it cannot be used for file sharing between instances.
Reference: https://aws.amazon.com/efs/
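To illustrate the shared-access model, each EC2 instance in the fleet mounts the same EFS file system over NFS. A hypothetical mount sequence (the file-system ID `fs-12345678` and Region `us-east-1` are placeholder values) looks like:

```shell
# Run on each instance in the fleet; all instances then see the same
# file-based data under /mnt/efs.
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 -o nfsvers=4.1 \
    fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs
```

Because EFS is a network file system rather than a block device, the same mount works from instances in different Availability Zones, which is exactly what the scenario requires.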
Question 12 of 65
12. Question
A developer has written a simple web application in PHP and wants to just upload the code to AWS Cloud and have AWS handle the deployment automatically, while still retaining access to the underlying operating system for further enhancements. As a Cloud Practitioner, which of the following AWS services would you recommend for this use-case?
Correct option:
AWS Elastic Beanstalk
AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS. Simply upload your code and Elastic Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, auto-scaling to application health monitoring. At the same time, you retain full control over the AWS resources powering your application and can access the underlying resources at any time. There is no additional charge for Elastic Beanstalk – you pay only for the AWS resources needed to store and run your applications.
Key Benefits of Elastic Beanstalk: via – https://aws.amazon.com/elasticbeanstalk/
Incorrect options:
AWS CloudFormation – AWS CloudFormation allows you to use programming languages or a simple text file (in YAML or JSON format) to model and provision, in an automated and secure manner, all the resources needed for your applications across all Regions and accounts. Think infrastructure as code; think CloudFormation. This is very different from Beanstalk where you just upload your application code and Beanstalk automatically figures out what resources are required to deploy that application. In CloudFormation, you have to explicitly specify which resources you want to provision.
Amazon EC2 – Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud, per-second billing, and access to the underlying OS. It is designed to make web-scale cloud computing easier for developers. Maintaining the server and its software has to be done by the customer. EC2 cannot handle the application deployment automatically, so this option is not correct.
AWS Elastic Container Service (ECS) – Amazon Elastic Container Service (Amazon ECS) is a highly scalable, fast, container management service that makes it easy to run, stop, and manage Docker containers on a cluster. ECS cannot handle the application deployment automatically, so this option is not correct.
Reference: https://aws.amazon.com/elasticbeanstalk/
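As an illustrative workflow (assuming the Elastic Beanstalk CLI is installed; `my-php-app` and `my-php-env` are placeholder names), the developer's scenario maps to a few commands, ending with the underlying-OS access the question calls out:

```shell
# Hypothetical EB CLI session; Beanstalk provisions the resources itself.
eb init my-php-app -p php -r us-east-1   # register the application
eb create my-php-env                     # provision the environment and deploy
eb deploy                                # push subsequent code changes
eb ssh                                   # shell into the underlying EC2 OS
```

This is a sketch of the typical flow, not an exhaustive reference; the key point for the exam is that Beanstalk automates deployment while still exposing the underlying EC2 instances.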
Question 13 of 65
13. Question
Which AWS service would you use to send alerts when the costs for your AWS account exceed your budgeted amount?
Correct
Correct option:
AWS Budgets
AWS Budgets gives you the ability to set custom budgets that alert you when your costs or usage exceed (or are forecasted to exceed) your budgeted amount. You can also use AWS Budgets to set reservation utilization or coverage targets and receive alerts when your utilization drops below the threshold you define. Budgets can be created at the monthly, quarterly, or yearly level, and you can customize the start and end dates. You can further refine your budget to track costs associated with multiple dimensions, such as AWS service, linked account, tag, and others. Budget alerts can be sent via email and/or an Amazon Simple Notification Service (SNS) topic.
AWS Budgets Overview: via – https://aws.amazon.com/aws-cost-management/aws-budgets/
Exam Alert:
It is useful to note the difference between CloudWatch Billing vs Budgets:
CloudWatch Billing Alarms: Sends an alarm when the actual cost exceeds a certain threshold.
Budgets: Sends an alarm when the actual cost exceeds the budgeted amount or even when the cost forecast exceeds the budgeted amount.
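The distinction above can be sketched in a few lines of Python (the budget and cost figures are made-up numbers; the real services evaluate these conditions server-side):

```python
# Hypothetical figures for illustration only.
budget = 100.0           # budgeted amount in dollars
actual_cost = 80.0       # month-to-date spend
forecasted_cost = 120.0  # forecast for the full month

# A CloudWatch billing alarm fires only on actual spend crossing the threshold.
cloudwatch_alarm = actual_cost > budget

# AWS Budgets can alert on actual spend *or* on the forecast.
budgets_alert = actual_cost > budget or forecasted_cost > budget

print(cloudwatch_alarm, budgets_alert)  # → False True
```

Here the actual spend is still under budget, so only AWS Budgets raises an alert, based on the forecast.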
Incorrect options:
AWS Cost Explorer – AWS Cost Explorer has an easy-to-use interface that lets you visualize, understand, and manage your AWS costs and usage over time. AWS Cost Explorer includes a default report that helps you visualize the costs and usage associated with your top five cost-accruing AWS services, and gives you a detailed breakdown on all services in the table view. The reports let you adjust the time range to view historical data going back up to twelve months to gain an understanding of your cost trends.
AWS Cost Explorer Reports: via – https://aws.amazon.com/aws-cost-management/aws-cost-explorer/
Exam Alert:
Watch out for questions on AWS Cost Explorer vs AWS Budgets. AWS Budgets can alert you when your costs exceed your budgeted amount. Cost Explorer helps you visualize and manage your AWS costs and usage over time.
AWS Organizations – AWS Organizations helps you centrally govern your environment as you grow and scale your workloads on AWS. Whether you are a growing startup or a large enterprise, Organizations helps you to centrally manage billing; control access, compliance, and security; and share resources across your AWS accounts.
AWS Total Cost of Ownership (TCO) Calculator – AWS helps reduce Total Cost of Ownership (TCO) by reducing the need to invest in large capital expenditures and providing a pay-as-you-go model that empowers you to invest in the capacity you need and use it only when the business requires it. The TCO Calculator helps you compare the cost of your applications in an on-premises or traditional hosting environment to AWS. Once you describe your on-premises or hosting environment configuration, it produces a detailed cost comparison with AWS.
Reference: https://aws.amazon.com/aws-cost-management/aws-budgets/
Question 14 of 65
14. Question
Access Key ID and Secret Access Key are tied to which of the following AWS Identity and Access Management entities?
Correct
Correct option: IAM User
Access keys are long-term credentials for an IAM user or the AWS account root user. You can use access keys to sign programmatic requests to the AWS CLI or AWS API (directly or using the AWS SDK). Access keys consist of two parts: an access key ID (for example, AKIAIOSFODNN7EXAMPLE) and a secret access key (for example, wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY). Like a user name and password, you must use both the access key ID and secret access key together to authenticate your requests. Access keys are secret, just like a password, and you should never share them.
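As a rough sketch of how the secret access key is used for signing: under Signature Version 4 the key itself never leaves your machine; the SDK derives a scoped signing key from it via chained HMAC-SHA256 operations. The date, region, and service below are illustrative, and the secret key is AWS's published example value, not a real credential:

```python
import hashlib
import hmac

# AWS's documented example secret key (not a real credential).
secret_key = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"

def derive_signing_key(key: str, date: str, region: str, service: str) -> bytes:
    """Derive the SigV4 signing key: an HMAC chain over date, region, service."""
    def sign(k: bytes, msg: str) -> bytes:
        return hmac.new(k, msg.encode("utf-8"), hashlib.sha256).digest()

    k_date = sign(("AWS4" + key).encode("utf-8"), date)
    k_region = sign(k_date, region)
    k_service = sign(k_region, service)
    return sign(k_service, "aws4_request")

# Illustrative scope values; the derived key then signs the actual request.
signing_key = derive_signing_key(secret_key, "20240101", "us-east-1", "s3")
print(len(signing_key))  # → 32
```

The access key ID travels with each request so AWS knows which secret to verify against; only the derived signature, never the secret key, goes over the wire.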
Incorrect options:
IAM Role – An IAM role is similar to an IAM user, in that it is an AWS identity with permission policies that determine what the identity can and cannot do in AWS. However, instead of being uniquely associated with one person, a role is intended to be assumable by anyone who needs it.
IAM Group – An IAM group is a collection of IAM users. Groups let you specify permissions for multiple users, which can make it easier to manage the permissions for those users.
AWS Policy – You manage access in AWS by creating policies and attaching them to IAM identities (users, groups of users, or roles) or AWS resources. A policy is an object in AWS that, when associated with an identity or resource, defines their permissions.
Access keys are not tied to the IAM role, IAM group, or AWS policy. So all three options are incorrect.
Reference: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html
Question 15 of 65
15. Question
Which AWS service helps you define your infrastructure as code?
Correct
Correct option:
AWS CloudFormation
AWS CloudFormation provides a common language to model and provision AWS and third-party application resources in your cloud environment. AWS CloudFormation allows you to use programming languages or a simple text file to model and provision, in an automated and secure manner, all the resources needed for your applications across all Regions and accounts. You can use AWS CloudFormation’s sample templates or create your own templates to describe the AWS resources, and any associated dependencies or runtime parameters, required to run your application.
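To make "describe the AWS resources" concrete, here is a minimal template sketched as a Python dict and serialized to CloudFormation's JSON format (the logical resource name MyBucket and the choice of an S3 bucket are purely illustrative):

```python
import json

# Minimal illustrative template: declares exactly one resource.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Illustrative template that provisions a single S3 bucket.",
    "Resources": {
        "MyBucket": {  # logical ID, made up for this example
            # No Properties needed here: CloudFormation generates a
            # unique bucket name when one is not specified.
            "Type": "AWS::S3::Bucket",
        }
    },
}

# This JSON text is what you would supply as the template body when
# creating a stack (via the console, CLI, or an SDK).
template_json = json.dumps(template, indent=2)
print(template_json)
```

Everything the stack provisions is declared explicitly in the Resources section, which is exactly the "infrastructure as code" idea the question tests.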
Incorrect options:
AWS Config – AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. Config continuously monitors and records your AWS resource configurations and allows you to automate the evaluation of recorded configurations against desired configurations. Think resource-specific history, audit, and compliance; think Config.
AWS Service Catalog – AWS Service Catalog allows organizations to create and manage catalogs of IT services that are approved for use on AWS. These IT services can include everything from virtual machine images, servers, software, and databases to complete multi-tier application architectures.
AWS CloudTrail – AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudTrail provides the event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command-line tools, and other AWS services. Think account-specific activity and audit; think CloudTrail.
Reference: https://aws.amazon.com/cloudformation/
Question 16 of 65
16. Question
An IT company wants to run a log backup process every Monday at 2 AM. The usual runtime of the process is 5 minutes. As a Cloud Practitioner, which AWS services would you recommend to build a serverless solution for this use-case? (Select two)
Correct
Correct option:
CloudWatch – Amazon CloudWatch is a monitoring and observability service built for DevOps engineers, developers, site reliability engineers (SREs), and IT managers. CloudWatch provides data and actionable insights to monitor applications, respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health.
Lambda – AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume. A Lambda function has a maximum execution time of 15 minutes, so it can comfortably run this 5-minute log backup process.
To build the solution for the given use-case, you can create a CloudWatch Events rule that triggers on a schedule via a cron expression. You can then set the Lambda function as the target for this rule.
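A sketch of what that looks like in practice, assuming Python for the function itself (the handler body is a placeholder; the schedule expression follows CloudWatch Events' six-field cron syntax of minutes, hours, day-of-month, month, day-of-week, year):

```python
# CloudWatch Events rule schedule for "every Monday at 2 AM UTC":
SCHEDULE_EXPRESSION = "cron(0 2 ? * MON *)"

def lambda_handler(event, context):
    # A real implementation would copy the logs to durable storage here.
    # The ~5-minute job fits well within Lambda's 15-minute limit.
    return {"status": "log backup complete"}

# The rule invokes the handler with a scheduled-event payload;
# locally you can exercise it with a dummy event:
print(lambda_handler({}, None))  # → {'status': 'log backup complete'}
```

Note the `?` in the day-of-month field: CloudWatch Events requires `?` in one of the day fields when the other is specified.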
Incorrect options:
Systems Manager – AWS Systems Manager gives you visibility and control of your infrastructure on AWS. Systems Manager provides a unified user interface so you can view operational data from multiple AWS services and allows you to automate operational tasks across your AWS resources. With Systems Manager, you can group resources, like Amazon EC2 instances, Amazon S3 buckets, or Amazon RDS instances, by application, view operational data for monitoring and troubleshooting, and take action on your groups of resources. Systems Manager is not the right fit for running a scheduled serverless process.
EC2 Instance – Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud with support for per-second billing. It is the easiest way to provision servers on AWS Cloud and access the underlying OS. As the company wants a serverless solution, this option is ruled out.
Step Functions – AWS Step Functions lets you coordinate multiple AWS services into serverless workflows. You can design and run workflows that stitch together services such as AWS Lambda, AWS Glue, and Amazon SageMaker. Step Functions cannot be used on its own to run a process on a schedule.
Reference: https://wa.aws.amazon.com/wat.concepts.wa-concepts.en.html
Question 17 of 65
17. Question
What are the fundamental drivers of cost with AWS Cloud?
Correct
Correct option:
“Compute, Storage and Outbound Data Transfer”
There are three fundamental drivers of cost with AWS: compute, storage, and outbound data transfer. In most cases, there is no charge for inbound data transfer or data transfer between other AWS services within the same region. Outbound data transfer is aggregated across services and then charged at the outbound data transfer rate.
AWS Cloud Pricing Fundamentals: via – https://d0.awsstatic.com/whitepapers/aws_pricing_overview.pdf
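Since the three drivers simply add up (with inbound transfer typically free), a toy monthly estimate can illustrate the model; every rate and usage figure below is a made-up placeholder, not an actual AWS price:

```python
# Hypothetical usage and rates for illustration only.
compute_hours, compute_rate = 720, 0.05    # one instance all month, $/hour
storage_gb, storage_rate = 100, 0.023      # $/GB-month
outbound_gb, outbound_rate = 50, 0.09      # $/GB transferred out
inbound_gb = 200                           # inbound transfer: no charge

total = (compute_hours * compute_rate      # compute
         + storage_gb * storage_rate       # storage
         + outbound_gb * outbound_rate     # outbound data transfer
         + inbound_gb * 0.0)               # inbound is free
print(f"${total:.2f}")  # → $42.80
```

The inbound term contributes nothing, which is the point the question tests: only compute, storage, and outbound data transfer drive the bill.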
Incorrect options:
“Compute, Storage and Inbound Data Transfer”
“Compute, Databases and Outbound Data Transfer”
Both of these options contradict the details provided earlier in the explanation, so they are incorrect: inbound data transfer is free in most cases, and databases are not one of the three fundamental cost drivers.
Reference: https://d0.awsstatic.com/whitepapers/aws_pricing_overview.pdf
Question 18 of 65
18. Question
Which of the following is correct about the AWS “Developer” Support plan?
Correct option:
Allows one contact to open unlimited cases
AWS Developer Support plan allows one primary contact to open unlimited cases.
Incorrect options:
Allows one contact to open a limited number of cases per month – As mentioned earlier, the AWS Developer Support plan allows one primary contact to open unlimited cases. So this option is incorrect.
Allows unlimited contacts to open unlimited cases – This is supported by AWS “Business” and “Enterprise” Support plans. So this is incorrect for AWS “Developer” Support plan.
Allows unlimited contacts to open a limited number of cases per month – This is a made-up option and has been added as a distractor.
Reference: https://aws.amazon.com/premiumsupport/plans/
Question 19 of 65
19. Question
Which policy describes prohibited uses of the web services offered by Amazon Web Services?
Correct option:
AWS Acceptable Use Policy
The Acceptable Use Policy describes prohibited uses of the web services offered by Amazon Web Services, Inc. and its affiliates (the “Services”) and the website located at http://aws.amazon.com (the “AWS Site”). This policy is present at https://aws.amazon.com/aup/ and is updated on a need basis by AWS.
Incorrect options:
AWS Trusted Advisor – AWS Trusted Advisor is an online tool that provides you real-time guidance to help you provision your resources following AWS best practices on cost optimization, security, fault tolerance, service limits, and performance improvement. Whether establishing new workflows, developing applications, or as part of ongoing improvement, recommendations provided by Trusted Advisor regularly help keep your solutions provisioned optimally. Trusted Advisor does not describe prohibited uses of the web services offered by Amazon Web Services.
AWS Fair Use Policy – This is a made-up option and has been added as a distractor.
AWS Applicable Use Policy – This is a made-up option and has been added as a distractor.
Reference: https://aws.amazon.com/aup/
Question 20 of 65
20. Question
Which of the following solutions can you use to connect your on-premises network with AWS Cloud (Select two)?
Correct options:
AWS Direct Connect – AWS Direct Connect is a cloud service solution that makes it easy to establish a dedicated network connection from your premises to AWS. Using AWS Direct Connect, you can establish private connectivity between AWS and your datacenter, office, or colocation environment, which in many cases can reduce your network costs, increase bandwidth throughput, and provide a more consistent network experience than Internet-based connections.
How AWS Direct Connect Works: via – https://aws.amazon.com/directconnect/
AWS VPN – AWS Virtual Private Network (VPN) solutions establish secure connections between on-premises networks, remote offices, client devices, and the AWS global network. AWS VPN comprises two services: AWS Site-to-Site VPN and AWS Client VPN. Together, they deliver a highly available, managed, and elastic cloud VPN solution to protect your network traffic.
How AWS Client VPN Works: via – https://aws.amazon.com/vpn/
How AWS Site-to-Site VPN Works: via – https://aws.amazon.com/vpn/
Incorrect options:
Amazon VPC – Amazon Virtual Private Cloud (Amazon VPC) lets you provision a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define. You have complete control over your virtual networking environment, including the selection of your IP address range, creation of subnets, and configuration of route tables and network gateways. You cannot use Amazon VPC to connect your on-premises network with AWS Cloud.
Internet Gateway – An Internet Gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between instances in your VPC and the internet. Therefore, it imposes no availability risks or bandwidth constraints on your network traffic. You cannot use an Internet Gateway to interconnect your on-premises network with AWS Cloud, hence this option is incorrect.
Amazon Route 53 – Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web service. It is designed to give developers and businesses an extremely reliable and cost-effective way to route end users to Internet applications by translating names like http://www.example.com into the numeric IP addresses like 192.0.2.1 that computers use to connect. You cannot use Amazon Route 53 to connect your on-premises network with AWS Cloud.
References: https://aws.amazon.com/vpn/ https://aws.amazon.com/directconnect/
Question 21 of 65
21. Question
Which of the following statements is correct regarding the AWS pricing policy for data transfer charges?
Correct option:
Only outbound data transfer is charged
One of the main benefits of cloud services is the ability it gives you to optimize costs to match your needs, even as those needs change. AWS services do not have complex dependencies or licensing requirements, so you can get exactly what you need to build innovative, cost-effective solutions using the latest technology. There are three fundamental drivers of cost with AWS: compute, storage, and outbound data transfer. These characteristics vary somewhat, depending on the AWS product and pricing model you choose.
Incorrect options:
Only inbound data transfer is charged – There is no charge for inbound data transfer or data transfer between other AWS services within the same Region.
Both inbound data transfer and outbound data transfer are charged – This is an incorrect statement.
Data transfer between AWS services within the same region is charged – Per AWS pricing, data transfer between AWS services within the same region is not charged.
Reference: https://d0.awsstatic.com/whitepapers/aws_pricing_overview.pdf
Question 22 of 65
22. Question
As per the AWS shared responsibility model, which of the following is a responsibility of the customer from a security and compliance point of view?
Correct option:
Managing patches of the guest operating system on Amazon EC2
Security and Compliance is a shared responsibility between AWS and the customer. This shared model can help relieve the customer’s operational burden as AWS operates, manages, and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the service operates.
As per the AWS shared responsibility model, the customer is responsible for security “in” the cloud. Customers that deploy an Amazon EC2 instance are responsible for the management of the guest operating system (including updates and security patches), any application software or utilities installed by the customer on the instances, and the configuration of the AWS-provided firewall (called a security group) on each instance.
Exam Alert:
Please review the Shared Responsibility Model in detail as you can expect multiple questions on the shared responsibility model in the exam: via – https://aws.amazon.com/compliance/shared-responsibility-model/
Incorrect options:
Configuration management for AWS global infrastructure
Availability Zone infrastructure management
Patching/fixing flaws within the AWS infrastructure
AWS is responsible for “Security of the Cloud”. AWS is responsible for protecting the infrastructure that runs all of the services offered in the AWS Cloud. This infrastructure is composed of the hardware, software, networking, and facilities that run AWS Cloud services. Hence, all of the above options – Configuration management for AWS global infrastructure, Availability Zone infrastructure management, and patching/fixing flaws within the AWS infrastructure are responsibilities of AWS.
Reference: https://aws.amazon.com/compliance/shared-responsibility-model/
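The responsibility split described in this explanation can be summarized as a simple lookup table. The Python sketch below encodes which side owns each item, using names taken from this question's options; it is a study aid, not an exhaustive mapping.

```python
# Minimal sketch of the shared responsibility split described above.
# Item names come from this question's options; this is not exhaustive.
AWS, CUSTOMER = "AWS", "Customer"

responsibility = {
    # Security "in" the cloud — customer-owned
    "Managing patches of the guest operating system on Amazon EC2": CUSTOMER,
    "Application software installed on EC2 instances": CUSTOMER,
    "Security group (firewall) configuration": CUSTOMER,
    # Security "of" the cloud — AWS-owned
    "Configuration management for AWS global infrastructure": AWS,
    "Availability Zone infrastructure management": AWS,
    "Patching/fixing flaws within the AWS infrastructure": AWS,
}

assert responsibility["Managing patches of the guest operating system on Amazon EC2"] == CUSTOMER
```

Anything about the guest OS and above falls to the customer; anything about the underlying infrastructure falls to AWS.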
Question 23 of 65
23. Question
Which AWS service enables users to find, buy, and immediately start using software solutions in their AWS environment?
Correct option:
AWS Marketplace
AWS Marketplace is a digital catalog with thousands of software listings from independent software vendors that make it easy to find, test, buy, and deploy software that runs on AWS. AWS Marketplace includes thousands of software listings from popular categories such as security, networking, storage, machine learning, IoT, business intelligence, database, and DevOps. You can use AWS Marketplace as a buyer (subscriber) or as a seller (provider), or both. Anyone with an AWS account can use AWS Marketplace as a consumer and can register to become a seller.
Incorrect options:
AWS Config – AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. Config continuously monitors and records your AWS resource configurations and allows you to automate the evaluation of recorded configurations against desired configurations. Think resource-specific history, audit, and compliance; think Config.
AWS OpsWorks – AWS OpsWorks is a configuration management service that provides managed instances of Chef and Puppet. OpsWorks lets you use Chef and Puppet to automate how servers are configured, deployed and managed across your Amazon EC2 instances or on-premises compute environments.
AWS Systems Manager – AWS Systems Manager gives you visibility and control of your infrastructure on AWS. Systems Manager provides a unified user interface so you can view operational data from multiple AWS services and allows you to automate operational tasks across your AWS resources. With Systems Manager, you can group resources, like Amazon EC2 instances, Amazon S3 buckets, or Amazon RDS instances, by application, view operational data for monitoring and troubleshooting, and take action on your groups of resources.
Reference: https://docs.aws.amazon.com/marketplace/latest/buyerguide/what-is-marketplace.html
Question 24 of 65
24. Question
Which AWS compute service provides the EASIEST way to access resizable compute capacity in the cloud with support for per-second billing and access to the underlying OS?
Correct option:
Amazon Elastic Compute Cloud (EC2)
Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud with support for per-second billing. It is the easiest way to provision servers on AWS Cloud and access the underlying OS. Amazon EC2 reduces the time required to obtain and boot new server instances to minutes, allowing you to quickly scale capacity, both up and down, as your computing requirements change.
Amazon EC2 Overview: via – https://aws.amazon.com/ec2/
Incorrect options:
Amazon Elastic Container Service (ECS) – Amazon Elastic Container Service (ECS) is a highly scalable, high-performance container management service that supports Docker containers and allows you to easily run applications on a managed cluster of Amazon EC2 instances. Technically, you can access the underlying EC2 instances, but the setup is more complex than just using the EC2 service directly, so this option is ruled out.
How ECS Works: via – https://aws.amazon.com/ecs/
AWS Lambda – AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume. AWS Lambda is serverless, so you don’t get access to the underlying OS.
Amazon Lightsail – Lightsail is an easy-to-use cloud platform that offers you everything needed to build an application or website, plus a cost-effective, monthly plan. Lightsail offers several preconfigured, one-click-to-launch operating systems, development stacks, and web applications, including Linux, Windows OS, and WordPress. Lightsail comes with monthly payment plans and does not support per-second billing, so this option is ruled out.
References: https://aws.amazon.com/ec2/ https://aws.amazon.com/ecs/
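The per-second billing mentioned above can be illustrated with a short arithmetic sketch. Linux On-Demand instances are billed per second with a 60-second minimum per run; the hourly rate used below is a hypothetical placeholder, not a real price.

```python
# Sketch of EC2 per-second billing arithmetic: billed per second,
# with a 60-second minimum charge per instance run (Linux On-Demand).
HOURLY_RATE = 0.10  # hypothetical On-Demand rate in USD/hour, not a real price

def run_cost(run_seconds, hourly_rate=HOURLY_RATE):
    """Cost of a single instance run under per-second billing."""
    billed = max(60, run_seconds)  # 60-second minimum charge
    return billed * hourly_rate / 3600

# A 45-second run is billed the same as a 60-second run:
assert run_cost(45) == run_cost(60)
```

Beyond the 60-second minimum, cost scales linearly with runtime, which is what makes EC2 attractive for short-lived, bursty workloads.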
Question 25 of 65
25. Question
Which AWS service can be used to provision resources to run big data workloads on Hadoop clusters?
Correct
Correct option:
Amazon EMR – Amazon EMR is the industry-leading cloud big data platform for processing vast amounts of data using open source tools such as Hadoop, Apache Spark, Apache Hive, Apache HBase, Apache Flink, Apache Hudi, and Presto. Amazon EMR can be used to provision resources to run big data workloads on Hadoop clusters.
Incorrect options:
AWS Step Functions – AWS Step Functions lets you coordinate multiple AWS services into serverless workflows. You can design and run workflows that stitch together services such as AWS Lambda, AWS Glue, and Amazon SageMaker.
AWS Step Functions Overview: https://aws.amazon.com/step-functions/
AWS Batch
You can use AWS Batch to plan, schedule, and execute your batch computing workloads across the full range of AWS compute services. AWS Batch dynamically provisions the optimal quantity and type of compute resources (e.g., CPU- or memory-optimized instances) based on the volume and specific resource requirements of the submitted batch jobs.
Common use cases for AWS Batch: https://aws.amazon.com/batch/
Exam Alert:
Understand the difference between AWS Step Functions and AWS Batch. You may get questions asking you to choose one over the other. AWS Batch runs batch computing workloads by provisioning the compute resources. AWS Step Functions does not provision any resources; it only orchestrates the AWS services required for a given workflow. You cannot use Step Functions to plan, schedule, and execute your batch computing workloads by provisioning underlying resources.
Amazon EC2 – Amazon EC2 is a web service that provides secure, resizable compute capacity in the AWS cloud. You can use EC2 to provision virtual servers on AWS Cloud, but EC2 by itself does not provision and manage Hadoop clusters for big data workloads; that is what Amazon EMR automates.
References: https://aws.amazon.com/emr/ https://aws.amazon.com/batch/ https://aws.amazon.com/step-functions/
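To make the EMR answer concrete, here is a sketch of the request shape used to provision a Hadoop cluster. The cluster name, release label, instance types, and counts are all hypothetical; in practice this dict would be passed to boto3's EMR `run_job_flow` call, which needs AWS credentials and is therefore left commented out:

```python
# Sketch: the shape of an Amazon EMR cluster request for a Hadoop workload.
# All names and sizes below are hypothetical example values.

cluster_request = {
    "Name": "example-hadoop-cluster",          # hypothetical cluster name
    "ReleaseLabel": "emr-6.10.0",              # an EMR release bundling Hadoop
    "Applications": [{"Name": "Hadoop"}, {"Name": "Spark"}],
    "Instances": {
        "InstanceGroups": [
            {"InstanceRole": "MASTER", "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"InstanceRole": "CORE",   "InstanceType": "m5.xlarge", "InstanceCount": 2},
        ],
        "KeepJobFlowAliveWhenNoSteps": False,  # terminate after steps finish
    },
    "JobFlowRole": "EMR_EC2_DefaultRole",      # default EMR instance profile
    "ServiceRole": "EMR_DefaultRole",
}

# import boto3
# boto3.client("emr").run_job_flow(**cluster_request)  # requires AWS credentials
```

Note how EMR, unlike Step Functions, takes the instance groups in the request and provisions them itself — exactly the distinction the Exam Alert above highlights.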
Question 26 of 65
26. Question
Which of the following AWS services can be used to prevent Distributed Denial-of-Service (DDoS) attacks? (Select three)
Correct
Correct options:
AWS Shield – AWS Shield is a managed Distributed Denial of Service (DDoS) protection service that safeguards applications running on AWS. AWS Shield provides always-on detection and automatic inline mitigations that minimize application downtime and latency, so there is no need to engage AWS Support to benefit from DDoS protection.
AWS WAF – By using AWS WAF, you can configure web access control lists (Web ACLs) on your CloudFront distributions or Application Load Balancers to filter and block requests based on request signatures. Besides, by using AWS WAF’s rate-based rules, you can automatically block the IP addresses of bad actors when requests matching a rule exceed a threshold that you define.
Amazon CloudFront with Route 53 – AWS hosts CloudFront and Route 53 services on a distributed network of proxy servers in data centers throughout the world called edge locations. Using the global Amazon network of edge locations for application delivery and DNS service plays an important part in building a comprehensive defense against DDoS attacks for your dynamic web applications.
How AWS Shield, WAF, and CloudFront with Route 53 help mitigate DDoS attacks: https://aws.amazon.com/blogs/security/how-to-protect-dynamic-web-applications-against-ddos-attacks-by-using-amazon-cloudfront-and-amazon-route-53/
Incorrect options:
AWS CloudHSM – AWS CloudHSM is a cloud-based hardware security module (HSM) that enables you to easily generate and use your encryption keys on the AWS Cloud. With CloudHSM, you can manage your encryption keys using FIPS 140-2 Level 3 validated HSMs. It is a fully-managed service that automates time-consuming administrative tasks for you, such as hardware provisioning, software patching, high-availability, and backups. CloudHSM cannot be used to prevent Distributed Denial-of-Service (DDoS) attacks.
AWS Trusted Advisor – AWS Trusted Advisor is an online tool that provides real-time guidance to help you provision your resources following AWS best practices on cost optimization, security, fault tolerance, service limits, and performance improvement. Whether establishing new workflows, developing applications, or as part of ongoing improvement, recommendations provided by Trusted Advisor regularly help keep your solutions provisioned optimally. Trusted Advisor cannot be used to prevent Distributed Denial-of-Service (DDoS) attacks.
Amazon Inspector – Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS. Amazon Inspector automatically assesses applications for exposure, vulnerabilities, and deviations from best practices. Amazon Inspector cannot be used to prevent Distributed Denial-of-Service (DDoS) attacks.
References: https://docs.aws.amazon.com/waf/latest/developerguide/ddos-overview.html https://aws.amazon.com/shield/ https://d1.awsstatic.com/whitepapers/Security/DDoS_White_Paper.pdf https://aws.amazon.com/blogs/security/how-to-protect-dynamic-web-applications-against-ddos-attacks-by-using-amazon-cloudfront-and-amazon-route-53/
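The rate-based rule described for AWS WAF can be sketched as a WAFv2 rule definition. The rule name, request limit, and metric name below are hypothetical; the structure follows the rule shape used when creating or updating a Web ACL:

```python
# Sketch: a WAFv2 rate-based rule that blocks IPs whose request rate
# exceeds a threshold. All names and the limit are hypothetical examples.

rate_based_rule = {
    "Name": "block-flooding-ips",            # hypothetical rule name
    "Priority": 1,
    "Statement": {
        "RateBasedStatement": {
            "Limit": 2000,                   # requests per evaluation window, per IP
            "AggregateKeyType": "IP",        # count requests per source IP address
        }
    },
    "Action": {"Block": {}},                 # block requests once over the threshold
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "BlockFloodingIPs",    # hypothetical CloudWatch metric name
    },
}
```

This is the mechanism the explanation refers to: once requests from an IP exceed the limit, WAF automatically blocks that source without any manual intervention.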
Question 27 of 65
27. Question
An organization has a complex IT architecture involving many system dependencies, and it wants to track the history of changes to each resource. Which AWS service will help the organization track the history of configuration changes for all the resources?
Correct
Correct option:
AWS Config
AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. Config continuously monitors and records your AWS resource configurations and allows you to automate the evaluation of recorded configurations against desired configurations. Think resource-specific history, audit, and compliance; think Config.
With AWS Config, you can do the following: 1. Evaluate your AWS resource configurations for desired settings. 2. Get a snapshot of the current configurations of the supported resources that are associated with your AWS account. 3. Retrieve configurations of one or more resources that exist in your account. 4. Retrieve historical configurations of one or more resources. 5. Receive a notification whenever a resource is created, modified, or deleted. 6. View relationships between resources. For example, you might want to find all resources that use a particular security group.
Incorrect options:
AWS Service Catalog – AWS Service Catalog allows organizations to create and manage catalogs of IT services that are approved for use on AWS. These IT services can include everything from virtual machine images, servers, software, and databases to complete multi-tier application architectures. You cannot use Service Catalog to track changes to each resource on AWS.
AWS CloudFormation – AWS CloudFormation provides a common language to model and provision AWS and third-party application resources in your cloud environment. AWS CloudFormation allows you to use programming languages or a simple text file to model and provision, in an automated and secure manner, all the resources needed for your applications across all Regions and accounts. Think infrastructure as code; think CloudFormation. You cannot use CloudFormation to track changes to each resource on AWS.
AWS CloudTrail – AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudTrail provides the event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command-line tools, and other AWS services. Think account-specific activity and audit; think CloudTrail. You cannot use CloudTrail to track changes to each resource on AWS.
Reference: https://docs.aws.amazon.com/config/latest/developerguide/WhatIsConfig.html
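Point 4 above (retrieving historical configurations) can be sketched as a call to AWS Config's resource history API. The resource type and id are hypothetical, and the boto3 call itself is left commented out since it needs AWS credentials and an active configuration recorder:

```python
# Sketch: asking AWS Config for a resource's configuration history.
# The resourceType and resourceId are hypothetical example values.

history_request = {
    "resourceType": "AWS::EC2::SecurityGroup",  # hypothetical resource type
    "resourceId": "sg-0123456789abcdef0",       # hypothetical resource id
    "limit": 10,                                # most recent configuration items
}

# import boto3
# config = boto3.client("config")
# resp = config.get_resource_config_history(**history_request)
# for item in resp["configurationItems"]:
#     print(item["configurationItemCaptureTime"], item["configurationItemStatus"])
```

This per-resource change history is exactly what CloudTrail does not give you: CloudTrail records who made an API call, while Config records what the resource's configuration looked like over time.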
Question 28 of 65
28. Question
An e-commerce company wants to assess its applications for vulnerabilities and deviations from AWS best practices. Which AWS service can be used to facilitate this?
Correct
Correct option:
Amazon Inspector
Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS. Amazon Inspector automatically assesses applications for exposure, vulnerabilities, and deviations from best practices.
Overview of Amazon Inspector: https://aws.amazon.com/inspector/
Incorrect options:
AWS Secrets Manager – AWS Secrets Manager helps you protect secrets needed to access your applications, services, and IT resources. The service enables you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. Users and applications retrieve secrets with a call to Secrets Manager APIs, eliminating the need to hardcode sensitive information in plain text. Secrets Manager cannot be used for security assessment of applications deployed on AWS.
AWS CloudHSM – AWS CloudHSM is a cloud-based hardware security module (HSM) that enables you to easily generate and use your encryption keys on the AWS Cloud. With CloudHSM, you can manage your encryption keys using FIPS 140-2 Level 3 validated HSMs. It is a fully-managed service that automates time-consuming administrative tasks for you, such as hardware provisioning, software patching, high-availability, and backups. CloudHSM cannot be used for the security assessment of applications deployed on AWS.
AWS Trusted Advisor – AWS Trusted Advisor is an online tool that provides you real-time guidance to help you provision your resources following AWS best practices on cost optimization, security, fault tolerance, service limits, and performance improvement. Whether establishing new workflows, developing applications, or as part of ongoing improvement, recommendations provided by Trusted Advisor regularly help keep your solutions provisioned optimally. Trusted Advisor cannot be used for security assessment of applications deployed on AWS.
Reference: https://aws.amazon.com/inspector/
Question 29 of 65
29. Question
An organization deploys its IT infrastructure in a combination of its on-premises data center and the AWS Cloud. How would you categorize this deployment model?
Correct
Correct option:
Hybrid deployment
A hybrid deployment is a way to connect your on-premises infrastructure to the cloud. The most common method of hybrid deployment is between the cloud and existing on-premises infrastructure to extend an organization’s infrastructure into the cloud while connecting cloud resources to internal systems.
Overview of Cloud Computing Deployment Models: via – https://aws.amazon.com/types-of-cloud-computing/
Incorrect options:
Cloud deployment – For this type of deployment, a cloud-based application is fully deployed in the cloud, and all parts of the application run in the cloud. Applications in the cloud have either been created in the cloud or have been migrated from an existing infrastructure to take advantage of the benefits of cloud computing.
Private deployment – For this deployment model, resources are deployed on-premises using virtualization technologies. On-premises deployment does not provide many of the benefits of cloud computing but is sometimes sought for its ability to provide dedicated resources.
Mixed deployment – This is a made-up option and has been added as a distractor.
References: https://aws.amazon.com/types-of-cloud-computing/ https://aws.amazon.com/hybrid/
Question 30 of 65
30. Question
As per the AWS shared responsibility model, which of the following is a responsibility of AWS from a security and compliance point of view?
Correct option:
Edge Location Management
Security and Compliance is a shared responsibility between AWS and the customer. This shared model can help relieve the customer’s operational burden as AWS operates, manages, and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the service operates.
AWS is responsible for security “of” the cloud. This covers their global infrastructure elements including Regions, Availability Zones, and Edge Locations.
Incorrect options:
Customer Data
Identity and Access Management
Server-side Encryption
The customer is responsible for security “in” the cloud. Customers are responsible for managing their data including encryption options and using Identity and Access Management tools for implementing appropriate access control policies as per their organization requirements. For abstracted services, such as Amazon S3 and Amazon DynamoDB, AWS operates the infrastructure layer, the operating system, and platforms, and customers access the endpoints to store and retrieve data. Therefore, these three options fall under the responsibility of the customer according to the AWS shared responsibility model.
Exam Alert:
Please review the Shared Responsibility Model in detail as you can expect multiple questions on the shared responsibility model in the exam: via – https://aws.amazon.com/compliance/shared-responsibility-model/
Reference: https://aws.amazon.com/compliance/shared-responsibility-model/
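To make the split concrete: encrypting stored data is a choice the customer makes, per request. The helper below is a hypothetical sketch (the function, bucket name, and file name are illustrative, not from AWS documentation) of the parameter shape a boto3-style S3 `put_object` call would take with server-side encryption requested; it builds the request dict without contacting AWS:

```python
import json

def s3_put_object_params(bucket: str, key: str, body: bytes) -> dict:
    """Hypothetical helper: build the parameters for a boto3-style
    s3.put_object(...) call. Under the shared responsibility model,
    asking for encryption at rest is the customer's control, not AWS's."""
    return {
        "Bucket": bucket,
        "Key": key,
        "Body": body,
        # Customer-managed security "in" the cloud: encrypt this object at rest.
        "ServerSideEncryption": "AES256",
    }

params = s3_put_object_params("my-example-bucket", "report.csv", b"col1,col2\n")
# Show the request (minus the raw body) that the customer is responsible for shaping.
print(json.dumps({k: v for k, v in params.items() if k != "Body"}, indent=2))
```

AWS would still be responsible for the physical infrastructure and the S3 service itself; the customer owns the decision to set `ServerSideEncryption` at all.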
Question 31 of 65
31. Question
Which of the following AWS services are always free to use (Select two)?
Correct options:
Identity and Access Management (IAM) – AWS Identity and Access Management (IAM) enables you to manage access to AWS services and resources securely. Using IAM, you can create and manage AWS users and groups, and use permissions to allow and deny their access to AWS resources. IAM is a feature of your AWS account offered at no additional charge.
AWS Auto Scaling – AWS Auto Scaling monitors your applications and automatically adjusts the capacity to maintain steady, predictable performance at the lowest possible cost. Using AWS Auto Scaling, it’s easy to set up application scaling for multiple resources across multiple services in minutes. AWS Auto Scaling is available at no additional charge. You pay only for the AWS resources needed to run your applications and Amazon CloudWatch monitoring fees.
Incorrect options:
Elastic Compute Cloud (Amazon EC2) – Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers. This is not a free service; you pay for what you use, according to the pricing plan you choose.
Simple Storage Service (Amazon S3) – Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance. S3 is not free; you pay based on the storage class you choose for your data.
DynamoDB – Amazon DynamoDB is a key-value and document database that delivers single-digit millisecond performance at any scale. It’s a fully managed, multi-Region, multi-master, durable database with built-in security, backup and restore, and in-memory caching for internet-scale applications. DynamoDB is not free and you are charged for reading, writing, and storing data in your DynamoDB tables, along with any optional features you choose to enable.
References: https://aws.amazon.com/iam/ https://aws.amazon.com/autoscaling/
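Although IAM itself carries no charge, you still author the policy documents that govern access; you pay only for the resources those policies let you reach. A minimal sketch of an identity-based policy follows (the bucket name is a made-up placeholder, not from the source):

```python
import json

# Illustrative IAM identity-based policy: allow read-only access to the
# objects in one (hypothetical) S3 bucket. Everything not explicitly
# allowed is implicitly denied.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::my-example-bucket/*",
        }
    ],
}

# IAM policies are attached as JSON documents, so serialize before attaching.
policy_json = json.dumps(policy)
```

Attaching such a document to a user or group is free; charges only arise from the S3 usage it permits.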
Question 32 of 65
32. Question
Which AWS support plan provides access to a Technical Account Manager (TAM)?
Correct option:
“Enterprise”
AWS offers three different support plans to cater to its customers – Developer, Business, and Enterprise. A Basic support plan is included for all AWS customers at no additional cost.
AWS Enterprise Support provides customers with concierge-like service where the main focus is helping the customer achieve their outcomes and find success in the cloud. With Enterprise Support, you get 24×7 technical support from high-quality engineers, tools and technology to automatically manage the health of your environment, consultative architectural guidance delivered in the context of your applications and use-cases, and a designated Technical Account Manager (TAM) to coordinate access to proactive/preventative programs and AWS subject matter experts.
Exam Alert:
Please review the differences between the Developer, Business, and Enterprise support plans as you can expect at least a couple of questions on the exam:
via – https://aws.amazon.com/premiumsupport/plans/
Incorrect options:
“Developer” – AWS recommends Developer Support if you are testing or doing early development on AWS and want the ability to get technical support during business hours as well as general architectural guidance as you build and test.
“Business” – AWS recommends Business Support if you have production workloads on AWS and want 24×7 access to technical support and architectural guidance in the context of your specific use-cases.
“Business & Enterprise” – This is an invalid option and has been added as a distractor. The Enterprise plan already includes all the facilities offered by the Developer and Business Support plans.
Reference: https://aws.amazon.com/premiumsupport/plans/enterprise/
Question 33 of 65
33. Question
Which of the following statements is correct for a Security Group and a Network Access Control List?
Correct option:
Security Group acts as a firewall at the instance level whereas Network Access Control List acts as a firewall at the subnet level
A security group acts as a virtual firewall for your instance to control inbound and outbound traffic. When you launch an instance in a VPC, you can assign up to five security groups to the instance. Security groups act at the instance level, not the subnet level. A network access control list (ACL) is an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets (i.e. it works at subnet level).
Security Group Overview: via – https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html
Network Access Control List (NACL) Overview: via – https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html
Incorrect options:
Security Group acts as a firewall at the subnet level whereas Network Access Control List acts as a firewall at the instance level – As explained above, the security group acts at the instance level and ACL is at the subnet level.
Security Group acts as a firewall at the VPC level whereas Network Access Control List acts as a firewall at the AZ level – As explained above, the security group acts at the instance level and ACL is at the subnet level.
Security Group acts as a firewall at the AZ level whereas Network Access Control List acts as a firewall at the VPC level – As explained above, the security group acts at the instance level and ACL is at the subnet level.
References: https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html
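The behavioral difference can be sketched in a few lines of Python. This is a toy model for study purposes, not AWS's actual evaluation logic: security groups hold allow rules only (return traffic is automatically allowed because they are stateful), while network ACLs are numbered allow/deny rules evaluated in ascending order with an implicit final deny:

```python
def sg_allows_inbound(sg_rules, port):
    """Security group (instance level): allow rules only; any match permits
    the traffic, and everything else is denied."""
    return any(rule["port"] == port for rule in sg_rules)

def nacl_allows(nacl_rules, port):
    """Network ACL (subnet level): rules are evaluated in ascending rule
    number and the first match wins; if nothing matches, the implicit
    '*' rule denies the traffic."""
    for rule in sorted(nacl_rules, key=lambda r: r["number"]):
        if rule["port"] == port:
            return rule["action"] == "allow"
    return False  # implicit deny

# Hypothetical rule sets: HTTPS open at the instance, SSH blocked at the subnet.
sg = [{"port": 443}]
nacl = [
    {"number": 100, "port": 443, "action": "allow"},
    {"number": 200, "port": 22, "action": "deny"},
]
```

A packet must pass both layers: the NACL at the subnet boundary first, then the security group at the instance.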
Question 34 of 65
34. Question
Which of the following are examples of Horizontal Scalability (aka Elasticity)? (Select two)
Correct options:
Elastic Load Balancing
Read Replicas in Amazon RDS
A “horizontally scalable” system is one that can increase capacity by adding more computers to the system. This is in contrast to a “vertically scalable” system, which is constrained to running its processes on only one computer; in such systems, the only way to increase performance is to add more resources into one computer in the form of faster (or more) CPUs, memory or storage. Horizontally scalable systems are oftentimes able to outperform vertically scalable systems by enabling parallel execution of workloads and distributing those across many different computers.
Elastic Load Balancing – Elastic Load Balancing automatically distributes incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, IP addresses, and Lambda functions. It can handle the varying load of your application traffic in a single Availability Zone or across multiple Availability Zones. This falls under Horizontal Scaling.
Read Replicas in Amazon RDS – Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. Read replicas allow you to create read-only copies that are synchronized with your master database. You can also place your read replica in a different AWS Region closer to your users for better performance. Read replicas are an example of horizontal scaling of resources.
Incorrect options:
Add a bigger CPU to a computer – As explained above, this comes under vertical scaling since the bigger resource is being added to a single computer or node.
Modify an EC2 instance type from t2.nano to u-12tb1.metal – Enhancing the type of a single Amazon EC2 system is also an example of vertical scaling since the extra capacity is being added to a single instance.
Modify a Database instance to higher CPU and RAM – This is also an example of vertical scaling since the focus is on increasing the capacity of a single machine or instance.
Reference: https://wa.aws.amazon.com/wat.concept.horizontal-scaling.en.html
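As a rough illustration (the throughput figures below are invented for the example), the two styles can be contrasted in a few lines: horizontal scaling multiplies capacity by node count, while vertical scaling grows a single node and is bounded by the largest machine available:

```python
def horizontal_capacity(per_node_rps, node_count):
    """Horizontal scaling: total capacity grows by adding nodes behind a
    load balancer; no single node changes size."""
    return per_node_rps * node_count

def vertical_capacity(base_rps, size_multiplier):
    """Vertical scaling: capacity grows by upgrading the one node (e.g. a
    bigger EC2 instance type), up to the largest machine you can buy."""
    return base_rps * size_multiplier

# Scaling out from 2 to 4 nodes doubles capacity without touching any node.
doubled = horizontal_capacity(1000, 4) == 2 * horizontal_capacity(1000, 2)
```

Elastic Load Balancing and RDS read replicas follow the first function's shape; changing an instance type follows the second's.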
Question 35 of 65
35. Question
Which tool will help you review your workloads against current AWS best practices for cost optimization, security, and performance improvement and then obtain advice to architect them better?
Correct option: AWS Trusted Advisor
AWS Trusted Advisor is an online tool that provides you real-time guidance to help you provision your resources following AWS best practices on cost optimization, security, fault tolerance, service limits, and performance improvement. Whether establishing new workflows, developing applications, or as part of ongoing improvement, recommendations provided by Trusted Advisor regularly help keep your solutions provisioned optimally. All AWS customers get access to the seven core Trusted Advisor checks to help increase the security and performance of the AWS environment.
How Trusted Advisor Works: https://aws.amazon.com/premiumsupport/technology/trusted-advisor/
AWS Trusted Advisor Recommendations: https://aws.amazon.com/premiumsupport/technology/trusted-advisor/
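Trusted Advisor checks can also be listed programmatically through the AWS Support API. The sketch below assumes a Business or Enterprise support plan (the Support API is not available on Basic/Developer plans); the category identifiers match Trusted Advisor's five categories.

```python
# Sketch: listing Trusted Advisor checks via the AWS Support API (boto3).
# Requires a Business or Enterprise support plan and AWS credentials.

# The five Trusted Advisor categories as the Support API reports them:
CATEGORIES = {"cost_optimizing", "security", "fault_tolerance", "performance", "service_limits"}

def security_checks(checks: list) -> list:
    """Filter a list of check descriptions down to the 'security' category."""
    return [c["name"] for c in checks if c["category"] == "security"]

def fetch_checks() -> list:
    """Call this from an environment with credentials and a paid support plan."""
    import boto3
    support = boto3.client("support", region_name="us-east-1")  # Support API endpoint Region
    return support.describe_trusted_advisor_checks(language="en")["checks"]
```

In a suitably entitled account, `security_checks(fetch_checks())` would return the names of the security-category checks, such as the S3 bucket permissions and root-account MFA checks.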
Incorrect options:
AWS Cost Explorer – AWS Cost Explorer lets you explore your AWS costs and usage at both a high level and at a detailed level of analysis, and empowers you to dive deeper using several filtering dimensions (e.g., AWS Service, Region, Linked Account). Cost Explorer does not offer any recommendations vis-a-vis AWS best practices for cost optimization, security, and performance improvement.
Amazon CloudWatch – Amazon CloudWatch is a monitoring and observability service built for DevOps engineers, developers, site reliability engineers (SREs), and IT managers. CloudWatch provides data and actionable insights to monitor applications, respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health. This is an excellent service for building Resilient systems. Think resource performance monitoring, events, and alerts; think CloudWatch. CloudWatch does not offer any recommendations vis-a-vis AWS best practices for cost optimization, security, and performance improvement.
Amazon Inspector – Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS. Amazon Inspector automatically assesses applications for exposure, vulnerabilities, and deviations from best practices. Inspector does not offer any recommendations vis-a-vis AWS best practices for cost optimization, security, and performance improvement.
Reference: https://aws.amazon.com/premiumsupport/technology/trusted-advisor/
Question 36 of 65
36. Question
Which of the following is the MOST cost-effective EC2 instance purchasing option for short-term, spiky and critical workloads on AWS Cloud?
Correct option:
On-Demand Instance
An On-Demand Instance is an instance that you use on demand. You have full control over its lifecycle — you decide when to launch, stop, hibernate, start, reboot, or terminate it. There is no long-term commitment and no upfront payment; you pay only for the seconds that your On-Demand Instances are running, at a fixed price per second. On-Demand Instances cannot be interrupted. Therefore On-Demand Instances are the best fit for short-term, spiky, and critical workloads.
EC2 Pricing Options Overview: https://aws.amazon.com/ec2/pricing/
Incorrect options:
Spot Instance – A Spot Instance is an unused EC2 instance that is available for less than the On-Demand price. Because Spot Instances enable you to request unused EC2 instances at steep discounts (up to 90%), you can lower your Amazon EC2 costs significantly. Spot Instances are well-suited for data analysis, batch jobs, background processing, and other flexible tasks that can be interrupted. These can be terminated at short notice, so these are not suitable for critical workloads that need to run at a specific point in time.
Reserved Instance – Reserved Instances provide you with significant savings (up to 75%) on your Amazon EC2 costs compared to On-Demand Instance pricing. Reserved Instances are not physical instances, but rather a billing discount applied to the use of On-Demand Instances in your account. You can purchase a Reserved Instance for a one-year or three-year commitment, with the three-year commitment offering a bigger discount. Reserved instances cannot be interrupted. Reserved instances are not the right choice for short-term workloads.
Dedicated Host – Amazon EC2 Dedicated Hosts allow you to use your eligible software licenses from vendors such as Microsoft and Oracle on Amazon EC2, so that you get the flexibility and cost-effectiveness of using your own licenses with the resiliency, simplicity, and elasticity of AWS. An Amazon EC2 Dedicated Host is a physical server fully dedicated for your use, which can help address corporate compliance requirements. Dedicated Hosts are not cost-efficient compared to On-Demand Instances, so this option is not correct.
Reference: https://aws.amazon.com/ec2/pricing/
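The trade-off above can be made concrete with a back-of-the-envelope calculation. The hourly rate and discount below are illustrative placeholders, not real AWS prices; the point is only that a Reserved Instance bills the full commitment period, so it loses to On-Demand for a short-term workload even at a steep discount.

```python
# Back-of-the-envelope comparison: On-Demand vs a 1-year Reserved Instance
# for a short-term, spiky workload. All prices are hypothetical.

HOURS_PER_YEAR = 365 * 24  # 8760

def on_demand_cost(hourly_rate: float, hours_used: float) -> float:
    """Pay only for the hours the instance actually runs."""
    return hourly_rate * hours_used

def reserved_cost(hourly_rate: float, discount: float) -> float:
    """A Reserved Instance effectively bills the whole year at a discounted rate."""
    return hourly_rate * (1 - discount) * HOURS_PER_YEAR

# A 2-week (336-hour) spike at a hypothetical $0.10/hour:
od = on_demand_cost(0.10, 336)   # ≈ $33.60
ri = reserved_cost(0.10, 0.40)   # ≈ $525.60 even with a 40% discount
assert od < ri  # On-Demand wins for short-term usage despite having no discount
```

The comparison flips for steady-state workloads: at 8,760 hours of usage, the same discount makes the Reserved Instance roughly 40% cheaper.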
Question 37 of 65
37. Question
A social media company wants to have the MOST cost-optimal strategy for deploying EC2 instances. As a Cloud Practitioner, which of the following options would you recommend? (Select two)
Correct options:
Use Spot Instances for ad-hoc jobs that can be interrupted
A Spot Instance is an unused EC2 instance that is available for less than the On-Demand price. Because Spot Instances enable you to request unused EC2 instances at steep discounts (up to 90%), you can lower your Amazon EC2 costs significantly. Spot Instances are well-suited for data analysis, batch jobs, background processing, and optional tasks. These can be terminated at short notice, so these are not suitable for critical workloads that need to run at a specific point in time.
Use Reserved Instances to run applications with a predictable usage over the next one year
Reserved Instances provide you with significant savings (up to 75%) on your Amazon EC2 costs compared to On-Demand Instance pricing. Reserved Instances are not physical instances, but rather a billing discount applied to the use of On-Demand Instances in your account. You can purchase a Reserved Instance for a one-year or three-year commitment, with the three-year commitment offering a bigger discount. Reserved Instances are a great fit for applications with steady-state usage and cannot be interrupted.
EC2 Pricing Options Overview: https://aws.amazon.com/ec2/pricing/
Incorrect options:
Use On-Demand Instances to run applications with a predictable usage over the next one year
Use On-Demand Instances for ad-hoc jobs that can be interrupted
An On-Demand Instance is an instance that you use on-demand. You have full control over its lifecycle — you decide when to launch, stop, hibernate, start, reboot, or terminate it. There is no long-term commitment required when you purchase On-Demand Instances. There is no upfront payment and you pay only for the seconds that your On-Demand Instances are running. The price per second for running an On-Demand Instance is fixed. On-demand instances cannot be interrupted. However, On-demand instances are not as cost-effective as Spot instances or Reserved instances, so both these options are not correct.
Use Reserved Instances for ad-hoc jobs that can be interrupted – Spot instances are more cost-effective than Reserved instances for running ad-hoc jobs that can be interrupted, so this option is not correct.
Reference: https://aws.amazon.com/ec2/pricing/
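The Spot purchasing option recommended above for interruptible ad-hoc jobs maps to a single parameter on the RunInstances call. The sketch below uses boto3 with placeholder AMI and instance-type values; the real call requires AWS credentials.

```python
# Sketch: requesting a Spot Instance for an interruptible batch job via
# RunInstances. The AMI ID and instance type below are placeholders.

def spot_launch_params(ami_id: str, instance_type: str) -> dict:
    """Build RunInstances parameters that request Spot capacity."""
    return {
        "ImageId": ami_id,
        "InstanceType": instance_type,
        "MinCount": 1,
        "MaxCount": 1,
        "InstanceMarketOptions": {   # this option is what makes it a Spot request
            "MarketType": "spot",
            "SpotOptions": {"SpotInstanceType": "one-time"},  # no re-request after interruption
        },
    }

def launch_spot(ami_id: str, instance_type: str):
    """Call with AWS credentials configured; capacity can be reclaimed at short notice."""
    import boto3
    ec2 = boto3.client("ec2", region_name="us-east-1")
    return ec2.run_instances(**spot_launch_params(ami_id, instance_type))
```

Because the capacity can be reclaimed with only a short interruption notice, this launch style fits the "ad-hoc jobs that can be interrupted" half of the answer, while steady one-year workloads stay on Reserved Instances.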
Question 38 of 65
38. Question
Which pillar of AWS Well-Architected Framework is responsible for making sure that you focus on continually improving your processes and procedures?
Correct option: Operational Excellence
The AWS Well-Architected Framework helps you understand the pros and cons of decisions you make while building systems on AWS. By using the Framework you will learn architectural best practices for designing and operating reliable, secure, efficient, and cost-effective systems in the cloud. It provides a way for you to consistently measure your architectures against best practices and identify areas for improvement.
The AWS Well-Architected Framework is based on five pillars — Operational Excellence, Security, Reliability, Performance Efficiency, and Cost Optimization.
Overview of the five pillars of the Well-Architected Framework: https://aws.amazon.com/architecture/well-architected/
Operational Excellence – The Operational Excellence pillar includes the ability to run and monitor systems to deliver business value and to continually improve supporting processes and procedures. In the cloud, you can apply the same engineering discipline that you use for application code to your entire environment. You can define your entire workload (applications, infrastructure) as code and update it with code. You can implement your operations procedures as code and automate their execution by triggering them in response to events.
Incorrect options:
Cost Optimization – Cost Optimization focuses on avoiding unneeded costs. Key topics include understanding and controlling where money is being spent, selecting the most appropriate and right number of resource types, analyzing spend over time, and scaling to meet business needs without overspending.
Reliability – This refers to the ability of a system to recover from infrastructure or service disruptions, dynamically acquire computing resources to meet demand, and mitigate disruptions such as misconfigurations or transient network issues.
Performance Efficiency – The performance efficiency pillar focuses on using IT and computing resources efficiently. Key topics include selecting the right resource types and sizes based on workload requirements, monitoring performance, and making informed decisions to maintain efficiency as business needs evolve.
Reference: https://aws.amazon.com/architecture/well-architected/
Question 39 of 65
39. Question
Which of the following can you use to run a bootstrap script while launching an EC2 instance?
Correct option:
EC2 instance user data
EC2 instance user data is the data that you specify, in the form of a bootstrap script or configuration parameters, when launching your instance.
EC2 instance metadata and user data:
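A minimal sketch of passing a bootstrap script as user data through boto3 follows. The AMI ID is a placeholder, and the call itself needs AWS credentials; note that boto3 base64-encodes the UserData string for RunInstances automatically.

```python
# Sketch: supplying a bootstrap script as EC2 user data at launch time.
# The AMI ID is a placeholder; boto3 base64-encodes UserData for you.

BOOTSTRAP = """#!/bin/bash
yum update -y
yum install -y httpd
systemctl enable --now httpd
"""

def launch_params(ami_id: str) -> dict:
    """Build RunInstances parameters with a user-data bootstrap script."""
    return {
        "ImageId": ami_id,
        "InstanceType": "t2.micro",
        "MinCount": 1,
        "MaxCount": 1,
        "UserData": BOOTSTRAP,  # runs as root on first boot (by default, only once)
    }

def launch(ami_id: str):
    """Call with AWS credentials configured."""
    import boto3
    ec2 = boto3.client("ec2", region_name="us-east-1")
    return ec2.run_instances(**launch_params(ami_id))
```

Here the script installs and starts a web server, so the instance is serving traffic as soon as it finishes booting, with no manual configuration step.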
Incorrect options:
EC2 instance metadata – EC2 instance metadata is data about your instance that you can use to manage the instance. You can get instance items such as ami-id, public-hostname, local-hostname, hostname, public-ipv4, local-ipv4, public-keys, instance-id by using instance metadata. You cannot use EC2 instance metadata to run a bootstrap script while launching an EC2 instance. So this option is incorrect.
EC2 instance configuration data
EC2 instance AMI data
There is no such thing as EC2 instance configuration data or EC2 instance AMI data. These options have been added as distractors.
Reference: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html
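The instance metadata described above (items such as `instance-id` or `ami-id`) is served from the link-local metadata endpoint and, with IMDSv2, is read with a short-lived session token. This sketch uses only the standard library; the actual fetch works only when run on an EC2 instance.

```python
# Sketch: reading EC2 instance metadata with IMDSv2 (token-based access).
# The fetch itself only succeeds from inside an EC2 instance.
import urllib.request

IMDS = "http://169.254.169.254/latest"  # standard link-local metadata endpoint

def token_request() -> urllib.request.Request:
    """PUT request that obtains a short-lived IMDSv2 session token."""
    return urllib.request.Request(
        f"{IMDS}/api/token", method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"})  # 6-hour TTL

def metadata_request(path: str, token: str) -> urllib.request.Request:
    """GET request for one metadata item, e.g. 'instance-id' or 'ami-id'."""
    return urllib.request.Request(
        f"{IMDS}/meta-data/{path}",
        headers={"X-aws-ec2-metadata-token": token})

def read_metadata(path: str) -> str:
    """Fetch a metadata item (call this from inside an EC2 instance)."""
    with urllib.request.urlopen(token_request(), timeout=2) as resp:
        token = resp.read().decode()
    with urllib.request.urlopen(metadata_request(path, token), timeout=2) as resp:
        return resp.read().decode()
```

Note the distinction the question tests: this endpoint describes a running instance; it cannot run a bootstrap script at launch, which is what user data is for.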
Question 40 of 65
40. Question
AWS Trusted Advisor can provide alerts on which of the following common security misconfigurations? (Select two)?
Correct options:
When you allow public access to Amazon S3 buckets
When you don’t turn on user activity logging (AWS CloudTrail)
AWS Trusted Advisor is an online tool that provides real-time guidance to help provision your resources following AWS best practices. Whether establishing new workflows, developing applications, or as part of ongoing improvement, recommendations provided by Trusted Advisor regularly help keep your solutions provisioned optimally. AWS Trusted Advisor analyzes your AWS environment and provides best practice recommendations in five categories: Cost Optimization, Performance, Security, Fault Tolerance, Service Limits.
Trusted Advisor inspects your AWS environment and makes recommendations when opportunities may exist to save money, improve system performance, or close security gaps. It provides alerts on several of the most common security misconfigurations that can occur, including leaving certain ports open that make you vulnerable to hacking and unauthorized access, neglecting to create IAM accounts for your internal users, allowing public access to Amazon S3 buckets, not turning on user activity logging (AWS CloudTrail), or not using MFA on your root AWS Account.
How Trusted Advisor Works: via – https://aws.amazon.com/premiumsupport/technology/trusted-advisor/
Incorrect options:
When you don’t tag objects in S3 buckets – Tagging objects (or any resource) in S3 is not mandatory and it’s not a security threat.
“When you share IAM user credentials with others” – It is the customer’s responsibility to adhere to the IAM security best practices and never share the IAM user credentials with others. Trusted Advisor cannot send an alert for such use-cases.
When you don’t enable data encryption on S3 Glacier – By default, data on S3 Glacier is encrypted. So, this option has been added as a distractor.
Reference: https://aws.amazon.com/premiumsupport/technology/trusted-advisor/best-practice-checklist/
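The kinds of security checks described above can be sketched as a toy rule set. This is illustrative only: the function name and the account-snapshot format are made up for this example and are not the Trusted Advisor API.

```python
# A toy version of the security checks described above (hypothetical names,
# not the Trusted Advisor API): flag the misconfigurations Trusted Advisor
# alerts on, given a simplified account snapshot.

def security_findings(account: dict) -> list:
    findings = []
    if account.get("s3_public_buckets"):
        findings.append("Public access allowed on S3 buckets")
    if not account.get("cloudtrail_enabled", False):
        findings.append("User activity logging (CloudTrail) not turned on")
    if not account.get("root_mfa_enabled", False):
        findings.append("MFA not enabled on the root account")
    return findings

snapshot = {"s3_public_buckets": ["logs-bucket"],
            "cloudtrail_enabled": False,
            "root_mfa_enabled": True}
print(security_findings(snapshot))
# → ['Public access allowed on S3 buckets',
#    'User activity logging (CloudTrail) not turned on']
```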
Question 41 of 65
41. Question
As per the AWS Shared Responsibility Model, which of the following is a responsibility of AWS from a security and compliance point of view?
Correct
Correct option:
Patching networking infrastructure
According to the AWS Shared Responsibility Model, AWS is responsible for “Security of the Cloud”. This includes protecting the infrastructure that runs all of the services offered in the AWS Cloud. This infrastructure is composed of the hardware, software, networking, and facilities that run AWS Cloud services. Therefore, patching networking infrastructure is the responsibility of AWS.
Incorrect options:
Service and Communications Protection
Identity and Access Management
Patching guest OS and applications
The customer is responsible for security “in” the cloud. This covers things such as services and communications protection; Identity and Access Management; and patching guest OS and applications. Customers are responsible for managing their data including encryption options and using Identity and Access Management tools for implementing appropriate access control policies as per their organization requirements. Therefore, these three options fall under the responsibility of the customer according to the AWS shared responsibility model.
Exam Alert:
Please review the Shared Responsibility Model in detail as you can expect multiple questions on the shared responsibility model in the exam: via – https://aws.amazon.com/compliance/shared-responsibility-model/
Reference: https://aws.amazon.com/compliance/shared-responsibility-model/
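The split described above can be summarized as a simple lookup table (a hypothetical study aid, not an exhaustive or official mapping):

```python
# Hypothetical summary of the Shared Responsibility Model discussed above:
# AWS secures the cloud itself; the customer secures what runs in it.

SHARED_RESPONSIBILITY = {
    "patching networking infrastructure": "AWS",
    "physical security of data centers": "AWS",
    "patching guest os and applications": "customer",
    "identity and access management": "customer",
    "service and communications protection": "customer",
}

def responsible_party(task: str) -> str:
    return SHARED_RESPONSIBILITY.get(task.lower(), "unknown")

print(responsible_party("Patching networking infrastructure"))  # → AWS
```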
Question 42 of 65
42. Question
Which of the following AWS Support plans provide programmatic access to AWS Support Center features to create, manage and close your support cases? (Select two)
Correct
Correct options:
Enterprise – AWS Enterprise Support provides customers with concierge-like service where the main focus is helping the customer achieve their outcomes and find success in the cloud. With Enterprise Support, you get 24×7 technical support from high-quality engineers, tools and technology to automatically manage the health of your environment, consultative architectural guidance delivered in the context of your applications and use-cases, and a designated Technical Account Manager (TAM) to coordinate access to proactive/preventative programs and AWS subject matter experts. You get programmatic access (API Access) to AWS Support Center features to create, manage, and close your support cases, and operationally manage your Trusted Advisor check requests and status.
Business – AWS recommends Business Support if you have production workloads on AWS and want 24×7 phone, email and chat access to technical support and architectural guidance in the context of your specific use-cases. You get full access to AWS Trusted Advisor Best Practice Checks. You get programmatic access (API Access) to AWS Support Center features to create, manage, and close your support cases, and operationally manage your Trusted Advisor check requests and status.
Exam Alert:
Please review the differences between the Developer, Business, and Enterprise support plans as you can expect at least a couple of questions on the exam:
via – https://aws.amazon.com/premiumsupport/plans/
Incorrect options:
Basic – The basic plan only provides access to the following:
Customer Service & Communities – 24×7 access to customer service, documentation, whitepapers, and support forums.
AWS Trusted Advisor – Access to the 7 core Trusted Advisor checks and guidance to provision your resources following best practices to increase performance and improve security.
AWS Personal Health Dashboard – A personalized view of the health of AWS services, and alerts when your resources are impacted.
Developer – AWS recommends the Developer Support plan if you are testing or doing early development on AWS and want the ability to get email-based technical support during business hours. This plan also supports general guidance on how services can be used for various use cases, workloads, or applications. You do not get access to Infrastructure Event Management with this plan.
Both these plans do not support programmatic access (API Access) to AWS Support Center.
Corporate – This is a made-up option and has been added as a distractor.
Reference: https://aws.amazon.com/premiumsupport/plans/
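The plan distinction above can be captured in a short sketch. In a real integration, programmatic case management would go through the AWS Support API (for example, boto3's `support` client), which is only reachable on the qualifying plans; the helper below is illustrative only.

```python
# Illustrative only: which Support plans include programmatic (API) access
# to AWS Support Center, per the explanation above.

API_ACCESS_PLANS = {"Business", "Enterprise"}

def has_support_api_access(plan: str) -> bool:
    return plan in API_ACCESS_PLANS

for plan in ["Basic", "Developer", "Business", "Enterprise"]:
    print(plan, has_support_api_access(plan))
```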
Question 43 of 65
43. Question
A financial services company wants to migrate from its on-premises data center to AWS Cloud. As a Cloud Practitioner, which AWS service would you recommend so that the company can compare the cost of running their IT infrastructure on-premises vs AWS Cloud?
Correct
Correct option:
AWS Total Cost of Ownership (TCO) Calculator
TCO calculator helps to compare the cost of your applications in an on-premises or traditional hosting environment to AWS. AWS helps reduce Total Cost of Ownership (TCO) by reducing the need to invest in large capital expenditures and providing a pay-as-you-go model that empowers you to invest in the capacity you need and use it only when the business requires it. Once you describe your on-premises or hosting environment configuration, it produces a detailed cost comparison with AWS.
Incorrect options:
AWS Simple Monthly Calculator – The Simple Monthly Calculator helps customers and prospects estimate their monthly AWS bill more efficiently. The Simple Monthly Calculator cannot be used to compare the cost of running the IT infrastructure on-premises vs AWS Cloud.
AWS Cost Explorer – AWS Cost Explorer has an easy-to-use interface that lets you visualize, understand, and manage your AWS costs and usage over time. AWS Cost Explorer includes a default report that helps you visualize the costs and usage associated with your top five cost-accruing AWS services, and gives you a detailed breakdown of all services in the table view. The reports let you adjust the time range to view historical data going back up to twelve months to gain an understanding of your cost trends. AWS Cost Explorer cannot be used to compare the cost of running the IT infrastructure on-premises vs AWS Cloud.
AWS Budgets – AWS Budgets gives the ability to set custom budgets that alert you when your costs or usage exceed (or are forecasted to exceed) your budgeted amount. You can also use AWS Budgets to set reservation utilization or coverage targets and receive alerts when your utilization drops below the threshold you define. Budgets can be created at the monthly, quarterly, or yearly level, and you can customize the start and end dates. You can further refine your budget to track costs associated with multiple dimensions, such as AWS service, linked account, tag, and others. AWS Budgets cannot be used to compare the cost of running the IT infrastructure on-premises vs AWS Cloud.
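The comparison the TCO calculator automates boils down to arithmetic like the following. All figures here are made up for illustration: a three-year on-premises total (upfront hardware plus yearly operations) versus AWS pay-as-you-go (monthly usage only).

```python
# A simplified, hypothetical TCO comparison of the kind the calculator
# automates. All figures are invented for illustration.

def on_prem_tco(hardware_capex, yearly_opex, years=3):
    # Upfront capital expenditure plus recurring operational cost
    return hardware_capex + yearly_opex * years

def aws_tco(monthly_cost, years=3):
    # Pay-as-you-go: usage charges only, no upfront investment
    return monthly_cost * 12 * years

on_prem = on_prem_tco(hardware_capex=120_000, yearly_opex=40_000)
aws = aws_tco(monthly_cost=5_000)
print(f"on-prem: {on_prem}, aws: {aws}, savings: {on_prem - aws}")
```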
Question 44 of 65
44. Question
A financial services company wants to ensure that all customer data uploaded on its data lake on Amazon S3 always stays private. Which of the following is the MOST efficient solution to address this compliance requirement?
Correct
Correct option:
Use Amazon S3 Block Public Access to ensure that all S3 resources stay private
The Amazon S3 Block Public Access feature provides settings for access points, buckets, and accounts to help you manage public access to Amazon S3 resources. By default, new buckets, access points, and objects don’t allow public access. However, users can modify bucket policies, access point policies, or object permissions to allow public access. S3 Block Public Access settings override these policies and permissions so that you can limit public access to these resources.
When Amazon S3 receives a request to access a bucket or an object, it determines whether the bucket or the bucket owner’s account has a block public access setting applied. If the request was made through an access point, Amazon S3 also checks for block public access settings for the access point. If there is an existing block public access setting that prohibits the requested access, Amazon S3 rejects the request.
Amazon S3 Block Public Access Overview: via – https://docs.aws.amazon.com/AmazonS3/latest/dev/access-control-block-public-access.html
Incorrect options:
Trigger a lambda function every time an object is uploaded on S3. The lambda function should change the object settings to make sure it stays private – Although it is possible to implement this solution, it is more efficient to use the “Amazon S3 Block Public Access” feature as it’s available off-the-shelf.
Use CloudWatch to ensure that all S3 resources stay private – Amazon CloudWatch is a monitoring and observability service built for DevOps engineers, developers, site reliability engineers (SREs), and IT managers. CloudWatch provides data and actionable insights to monitor applications, respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health. This is an excellent service for building Resilient systems. Think resource performance monitoring, events, and alerts; think CloudWatch. CloudWatch cannot be used to ensure data privacy on S3.
Set up a high-level advisory committee to review the privacy settings of each object uploaded into S3 – This option has been added as a distractor.
Reference: https://docs.aws.amazon.com/AmazonS3/latest/dev/access-control-block-public-access.html
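The evaluation order described above (block settings override bucket policies and object permissions) can be modeled as a toy decision function. This is a simplified sketch, not the real S3 authorization logic; with boto3, the setting itself would be applied via the S3 client's `put_public_access_block` call.

```python
# A toy model of the evaluation described above: Block Public Access
# settings (simplified to one flag) override whatever a bucket policy
# or ACL would otherwise allow.

def is_request_allowed(policy_allows_public: bool,
                       block_public_access: bool,
                       request_is_public: bool) -> bool:
    if request_is_public and block_public_access:
        return False  # block settings win, regardless of policy
    return policy_allows_public or not request_is_public

# A public read on a bucket whose policy allows it is still rejected
# once Block Public Access is turned on:
print(is_request_allowed(policy_allows_public=True,
                         block_public_access=True,
                         request_is_public=True))  # → False
```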
Question 45 of 65
45. Question
Which of the following is available across all AWS Support plans?
Correct
Correct option:
“AWS Personal Health Dashboard”
The full set of AWS Trusted Advisor best practice checks, Enhanced Technical Support with unlimited cases and unlimited contacts, and Third-Party Software Support are available only for the Business and Enterprise Support plans.
AWS Personal Health Dashboard is available for all Support plans.
Exam Alert:
Please review the differences between the Developer, Business, and Enterprise support plans as you can expect at least a couple of questions on the exam:
via – https://aws.amazon.com/premiumsupport/plans/
Incorrect options:
“Full set of AWS Trusted Advisor best practice checks”
“Enhanced Technical Support with unlimited cases and unlimited contacts”
“Third-Party Software Support”
As mentioned in the explanation above, these options are available only for Business and Enterprise Support plans.
Reference: https://aws.amazon.com/premiumsupport/plans/
Question 46 of 65
46. Question
The DevOps team at a Big Data consultancy has set up EC2 instances across two AWS Regions for its flagship application. Which of the following characterizes this application architecture?
Correct
Correct option:
Deploying the application across two AWS Regions improves availability – Highly available systems are those that can withstand some measure of degradation while remaining available. Each AWS Region is fully isolated and composed of multiple Availability Zones (AZs), which are fully isolated partitions of AWS infrastructure. To better isolate any issues and achieve high availability, you can partition applications across multiple AZs in the same AWS Region or even across multiple AWS Regions.
Key Benefits of AWS Global Infrastructure: via – https://aws.amazon.com/about-aws/global-infrastructure/
Incorrect options:
Deploying the application across two AWS Regions improves agility – Agility refers to the ability of the cloud to give you easy access to a broad range of technologies so that you can innovate faster and build nearly anything that you can imagine. You can quickly spin up resources as you need them – from infrastructure services, such as compute, storage, and databases, to Internet of Things, machine learning, data lakes and analytics, and much more. Deploying the application across two AWS Regions does not improve agility.
Deploying the application across two AWS Regions improves security – The application security is dependent on multiple factors such as data encryption, IAM policies, IAM roles, VPC security configurations, Security Groups, NACLs, etc. Deploying the application across two AWS Regions directly impacts availability. So this option is not the best fit for the given use-case.
Deploying the application across two AWS Regions improves scalability – For the given use-case, you can improve the scalability of the application by using an Application Load Balancer with an Auto Scaling group. Deploying the application across two AWS Regions directly impacts availability. So this option is not the best fit for the given use-case.
Reference: https://aws.amazon.com/about-aws/global-infrastructure/
Question 47 of 65
47. Question
Which of the following AWS services can be used to forecast your AWS account usage and costs?
Correct
Correct options:
AWS Cost Explorer
AWS Cost Explorer has an easy-to-use interface that lets you visualize, understand, and manage your AWS costs and usage over time. AWS Cost Explorer includes a default report that helps you visualize the costs and usage associated with your top five cost-accruing AWS services, and gives you a detailed breakdown of all services in the table view. The reports let you adjust the time range to view historical data going back up to twelve months to gain an understanding of your cost trends. AWS Cost Explorer also supports forecasting to get a better idea of what your costs and usage may look like in the future so that you can plan.
AWS Cost Explorer Features: via – https://aws.amazon.com/aws-cost-management/aws-cost-explorer/
Incorrect options:
AWS Cost and Usage Reports – The AWS Cost and Usage Reports (AWS CUR) contains the most comprehensive set of cost and usage data available. You can use Cost and Usage Reports to publish your AWS billing reports to an Amazon Simple Storage Service (Amazon S3) bucket that you own. You can receive reports that break down your costs by the hour or month, by product or product resource, or by tags that you define yourself. AWS updates the report in your bucket once a day in a comma-separated value (CSV) format. AWS Cost and Usage Reports cannot forecast your AWS account cost and usage.
AWS Budgets – AWS Budgets gives the ability to set custom budgets that alert you when your costs or usage exceed (or are forecasted to exceed) your budgeted amount. You can also use AWS Budgets to set reservation utilization or coverage targets and receive alerts when your utilization drops below the threshold you define. Budgets can be created at the monthly, quarterly, or yearly level, and you can customize the start and end dates. You can further refine your budget to track costs associated with multiple dimensions, such as AWS service, linked account, tag, and others. AWS Budgets cannot forecast your AWS account cost and usage.
AWS Simple Monthly Calculator – The Simple Monthly Calculator provides an estimate of usage charges for AWS services based on certain information you provide. It helps customers and prospects estimate their monthly AWS bill more efficiently. Simple Monthly Calculator cannot forecast your AWS account cost and usage.
Reference: https://aws.amazon.com/aws-cost-management/aws-cost-explorer/
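Programmatically, Cost Explorer's forecasting is exposed through the `GetCostForecast` API. The sketch below (pure stdlib, so it stays self-contained) builds the request parameters that boto3's `ce.get_cost_forecast(**params)` would take; the 30-day window is an arbitrary example.

```python
from datetime import date, timedelta

def cost_forecast_params(days_ahead: int = 30) -> dict:
    # GetCostForecast needs a future TimePeriod (Start may be today),
    # a Metric, and a Granularity (DAILY or MONTHLY).
    start = date.today()
    end = start + timedelta(days=days_ahead)
    return {
        "TimePeriod": {"Start": start.isoformat(), "End": end.isoformat()},
        "Metric": "UNBLENDED_COST",
        "Granularity": "MONTHLY",
    }

params = cost_forecast_params()
print(params)
```

The response from the real API would contain a `Total` amount plus per-period `ForecastResultsByTime` entries, which is exactly the forecast view the Cost Explorer console renders.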
Question 48 of 65
48. Question
Which AWS services can be used together to send alerts whenever the AWS account root user signs in? (Select two)
Correct
Correct options:
SNS
CloudWatch
Amazon CloudWatch Events delivers a near real-time stream of system events that describe changes in Amazon Web Services (AWS) resources. Using simple rules that you can quickly set up, you can match events and route them to one or more target functions or streams. CloudWatch Events becomes aware of operational changes as they occur. CloudWatch Events responds to these operational changes and takes corrective action as necessary, by sending messages to respond to the environment, activating functions, making changes, and capturing state information.
Amazon Simple Notification Service (SNS) is a highly available, durable, secure, fully managed pub/sub messaging service that enables you to decouple microservices, distributed systems, and serverless applications. Additionally, SNS can be used to fan out notifications to end users using mobile push, SMS, and email.
How SNS Works: via – https://aws.amazon.com/sns/
To send alerts whenever the AWS account root user signs in, you can create an Amazon Simple Notification Service (Amazon SNS) topic. Then, create an Amazon CloudWatch event rule to monitor userIdentity root logins from the AWS Management Console and send an email via SNS when the event triggers.
Incorrect options:
SQS – Amazon Simple Queue Service (SQS) is a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. Using SQS, you can send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be available.
Lambda – AWS Lambda is a compute service that lets you run code without provisioning or managing servers.
Step Functions – AWS Step Functions lets you coordinate multiple AWS services into serverless workflows. You can design and run workflows that stitch together services such as AWS Lambda, AWS Glue and Amazon SageMaker.
References: https://aws.amazon.com/premiumsupport/knowledge-center/root-user-account-cloudwatch-rule/ https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/WhatIsCloudWatchEvents.html
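The CloudWatch Events rule described above matches console sign-in events recorded by CloudTrail. A minimal sketch of the event pattern is below; with boto3 you would register it via `events.put_rule(Name=..., EventPattern=json.dumps(root_signin_pattern))` and then point the rule's target at the SNS topic with `events.put_targets(...)` (the rule and topic names are up to you).

```python
import json

# Event pattern matching root-user console sign-ins; attached to a
# CloudWatch Events rule whose target is an SNS topic, it triggers an
# email/SMS alert on every root login.
root_signin_pattern = {
    "detail-type": ["AWS Console Sign In via CloudTrail"],
    "detail": {"userIdentity": {"type": ["Root"]}},
}

print(json.dumps(root_signin_pattern, indent=2))
```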
Question 49 of 65
49. Question
Which AWS service would you use to create a logically isolated section of the AWS Cloud where you can launch AWS resources in your virtual network?
Correct
Correct option:
Virtual Private Cloud (VPC)
Amazon Virtual Private Cloud (Amazon VPC) is a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define. You have complete control over your virtual networking environment, including the selection of your IP address range, creation of subnets, and configuration of route tables and network gateways. You can easily customize the network configuration of your Amazon VPC using public and private subnets.
Incorrect options:
Virtual Private Network (VPN) – AWS Virtual Private Network (AWS VPN) lets you establish a secure and private encrypted tunnel from your on-premises network to the AWS global network. AWS VPN is comprised of two services: AWS Site-to-Site VPN and AWS Client VPN. You cannot use VPN to create a logically isolated section of the AWS Cloud.
Subnet – A subnet is a range of IP addresses within your VPC. A subnet is not an AWS service, so this option is ruled out.
Network Access Control List (NACL) – A network access control list (NACL) is an optional layer of security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets. A NACL is not an AWS service, so this option is ruled out.
Reference: https://aws.amazon.com/vpc/
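The IP-range control described above can be illustrated with Python's stdlib `ipaddress` module: every subnet you create must be carved out of the VPC's CIDR block. The CIDR values below are hypothetical examples.

```python
import ipaddress

vpc_cidr = ipaddress.ip_network("10.0.0.0/16")       # hypothetical VPC range
public_subnet = ipaddress.ip_network("10.0.1.0/24")  # hypothetical subnet

# A valid subnet CIDR falls entirely inside the VPC's address range;
# AWS rejects subnet CIDRs outside the VPC block.
assert public_subnet.subnet_of(vpc_cidr)
print(f"{public_subnet} fits inside {vpc_cidr}: "
      f"{public_subnet.num_addresses} addresses")
```

(Note that in a real subnet, AWS reserves the first four and the last IP address of each subnet's range for its own networking use.)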
Question 50 of 65
50. Question
Which of the following AWS storage services can be directly used with on-premises systems?
Correct
Correct option:
Amazon Elastic File System (Amazon EFS)
Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources.
To access EFS file systems from on-premises, you must have an AWS Direct Connect or AWS VPN connection between your on-premises datacenter and your Amazon VPC. You mount an EFS file system on your on-premises Linux server using the standard Linux mount command for mounting a file system.
How EFS Works: via – https://aws.amazon.com/efs/faq/
Incorrect options:
Amazon Elastic Block Store (EBS) – Amazon Elastic Block Store (EBS) is an easy to use, high-performance block storage service designed for use with Amazon Elastic Compute Cloud (EC2) for both throughput and transaction-intensive workloads at any scale. EBS volumes can only be mounted with Amazon EC2.
Amazon EC2 Instance Store – An instance store provides temporary block-level storage for your Amazon EC2 instance. This storage is located on disks that are physically attached to the host computer. It is not possible to use this storage from on-premises systems.
Amazon Simple Storage Service (Amazon S3) – Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance. Amazon S3 can be accessed from on-premises only via AWS Storage Gateway. It is not possible to access S3 directly from on-premises systems.
Reference: https://aws.amazon.com/efs/faq/
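The standard Linux mount described above uses NFSv4.1. The sketch below composes the mount command from a file-system DNS name (the file-system ID and Region are hypothetical), using the NFS options AWS documents for EFS mounts.

```python
def efs_mount_command(fs_dns: str, mount_point: str = "/mnt/efs") -> str:
    # NFSv4.1 mount options recommended for EFS: large read/write sizes,
    # hard mounts, and a 60-second timeout with 2 retransmissions.
    opts = "nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2"
    return f"sudo mount -t nfs4 -o {opts} {fs_dns}:/ {mount_point}"

# Hypothetical file-system ID and Region in the DNS name.
cmd = efs_mount_command("fs-12345678.efs.us-east-1.amazonaws.com")
print(cmd)
```

Over Direct Connect or VPN, the DNS name must resolve to the mount target's IP inside your VPC, which typically means using the mount target's IP address directly or a resolver forwarding to the VPC.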
Question 51 of 65
51. Question
AWS Shield Advanced provides expanded DDoS attack protection for web applications running on which of the following resources? (Select two)
Correct
Correct options:
Amazon CloudFront
Amazon Elastic Compute Cloud
AWS Shield Standard is activated for all AWS customers, by default. For higher levels of protection against attacks, you can subscribe to AWS Shield Advanced. With Shield Advanced, you also have exclusive access to advanced, real-time metrics and reports for extensive visibility into attacks on your AWS resources. With the assistance of the DRT (DDoS response team), AWS Shield Advanced includes intelligent DDoS attack detection and mitigation not only for network layer (layer 3) and transport layer (layer 4) attacks but also for application layer (layer 7) attacks.
AWS Shield Advanced provides expanded DDoS attack protection for web applications running on the following resources: Amazon Elastic Compute Cloud, Elastic Load Balancing (ELB), Amazon CloudFront, Amazon Route 53, AWS Global Accelerator.
Incorrect options:
Amazon Simple Storage Service (Amazon S3)
AWS Elastic Beanstalk
AWS Identity and Access Management (IAM)
These three resource types are not supported by AWS Shield Advanced.
Reference: https://docs.aws.amazon.com/waf/latest/developerguide/ddos-overview.html
Question 52 of 65
52. Question
An e-commerce company has migrated its IT infrastructure from the on-premises data center to AWS Cloud. Which of the following costs is the company responsible for?
Correct
Correct option:
Application software license costs
Cloud computing is the on-demand delivery of compute power, database storage, applications, and other IT resources through a cloud services platform via the Internet with pay-as-you-go pricing. With cloud computing, you don’t need to make large upfront investments in hardware or spend a lot of time on the heavy lifting of managing that hardware. Therefore, all costs for hardware infrastructure, powering servers, and physical security for the data center fall under AWS’s responsibility.
The customer needs to take care of software licensing costs and human resources costs.
Incorrect options:
AWS Data Center physical security costs
Costs for hardware infrastructure on AWS Cloud
Costs for powering servers on AWS Cloud
As per the details mentioned in the explanation above, these three options are not correct for the given use-case.
Reference: https://docs.aws.amazon.com/whitepapers/latest/aws-overview/what-is-cloud-computing.html
Question 53 of 65
53. Question
Which of the following entities are part of a VPC in the AWS Cloud? (Select two)
Correct
Correct options:
Subnet
Internet Gateway
Amazon Virtual Private Cloud (Amazon VPC) enables you to launch AWS resources into a virtual network that you’ve defined.
The following are the key concepts for VPCs:
Virtual private cloud (VPC) — A virtual network dedicated to your AWS account.
Subnet — A range of IP addresses in your VPC.
Route table — A set of rules, called routes, that are used to determine where network traffic is directed.
Internet Gateway — A gateway that you attach to your VPC to enable communication between resources in your VPC and the internet.
VPC endpoint — Enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection.
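The relationship between a VPC's CIDR block and its subnets can be sketched with Python's standard `ipaddress` module; the CIDR values below are illustrative, not tied to any real deployment:

```python
import ipaddress

# A VPC is defined by a CIDR block; each subnet is a smaller range carved out of it.
vpc = ipaddress.ip_network("10.0.0.0/16")  # illustrative VPC range

# Split the VPC range into /24 subnets (256 addresses each) and take the first two,
# e.g. one public subnet (routed via an Internet Gateway) and one private subnet.
subnets = list(vpc.subnets(new_prefix=24))
public_subnet, private_subnet = subnets[0], subnets[1]

print(public_subnet)                 # 10.0.0.0/24
print(private_subnet)                # 10.0.1.0/24
print(public_subnet.subnet_of(vpc))  # True: every subnet lies inside the VPC range
```

This mirrors the definitions above: the VPC owns the whole address range, and subnets partition it.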
Incorrect options:
Storage Gateway – AWS Storage Gateway is a hybrid cloud storage service that gives you on-premises access to virtually unlimited cloud storage. Customers use Storage Gateway to simplify storage management and reduce costs for key hybrid cloud storage use cases. Storage Gateway is not part of VPC.
API Gateway – Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. APIs act as the “front door” for applications to access data, business logic, or functionality from your backend services. API Gateway is not part of a VPC.
Object – Buckets and objects are part of Amazon S3. Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance.
Reference: https://docs.amazonaws.cn/en_us/vpc/latest/userguide/what-is-amazon-vpc.html
Question 54 of 65
54. Question
A multi-national organization has separate VPCs for each of its business units on the AWS Cloud. The organization also wants to connect its on-premises data center with all VPCs for better organization-wide collaboration. Which AWS services can be combined to build the MOST efficient solution for this use-case? (Select two)
Correct
Correct options:
AWS Transit Gateway
AWS Transit Gateway connects VPCs and on-premises networks through a central hub. This simplifies your network and puts an end to complex peering relationships. It acts as a cloud router – each new connection is only made once. As you expand globally, inter-Region peering connects AWS Transit Gateways using the AWS global network. Your data is automatically encrypted and never travels over the public internet.
How Transit Gateway can simplify your network: via – https://aws.amazon.com/transit-gateway/
AWS Direct Connect
AWS Direct Connect is a cloud service solution that makes it easy to establish a dedicated network connection from your premises to AWS. Using AWS Direct Connect, you can establish private connectivity between AWS and your datacenter, office, or colocation environment, which in many cases can reduce your network costs, increase bandwidth throughput, and provide a more consistent network experience than Internet-based connections.
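Why a central hub scales better than pairwise peering can be shown with simple arithmetic; this sketch involves no AWS APIs, just the connection counts:

```python
# Comparing the connections needed to fully interconnect n VPCs with
# pairwise VPC peering versus a central Transit Gateway hub.

def peering_connections(n: int) -> int:
    """Full mesh: every pair of VPCs needs its own peering connection."""
    return n * (n - 1) // 2

def transit_gateway_attachments(n: int) -> int:
    """Hub and spoke: each VPC attaches to the Transit Gateway once."""
    return n

for n in (3, 10, 50):
    print(n, peering_connections(n), transit_gateway_attachments(n))
# 3 VPCs:  3 peerings  vs 3 attachments
# 10 VPCs: 45 peerings vs 10 attachments
# 50 VPCs: 1225 peerings vs 50 attachments
```

The quadratic growth of the full mesh is exactly the "complex peering relationships" the Transit Gateway eliminates.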
Incorrect options:
VPC Peering – A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them privately. VPC peering is not transitive: a separate peering connection has to be made between every two VPCs that need to talk to each other. As the number of VPCs grows, this becomes difficult to manage.
Transitive VPC Peering is not allowed: via – https://docs.aws.amazon.com/vpc/latest/peering/invalid-peering-configurations.html
Internet Gateway – An internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between instances in your VPC and the internet. It, therefore, imposes no availability risks or bandwidth constraints on your network traffic. You cannot use Internet Gateway to connect your on-premises data center with multiple VPCs within your AWS network.
AWS Storage Gateway – AWS Storage Gateway is a hybrid cloud storage service that gives you on-premises access to virtually unlimited cloud storage. All data transferred between the gateway and AWS storage is encrypted using SSL (for all three types of gateways – File, Volume and Tape Gateways). You cannot use Storage Gateway to connect your on-premises data center with multiple VPCs within your AWS network.
Reference: https://aws.amazon.com/transit-gateway/
Question 55 of 65
55. Question
Which of the following is correct regarding the AWS RDS service?
Correct
Correct option:
You can use Read Replicas for improved read performance and Multi-AZ for Disaster Recovery
Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. Read Replicas allow you to create read-only copies that are synchronized with your master database. Read Replicas are used for improved read performance. You can also place your read replica in a different AWS Region closer to your users for better performance. Read Replicas are an example of horizontal scaling of resources.
Read Replica Overview: via – https://aws.amazon.com/rds/features/multi-az/
Amazon RDS Multi-AZ deployments provide enhanced availability and durability for RDS database (DB) instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ).
In case of an infrastructure failure (such as a disaster), Amazon RDS performs an automatic failover to the standby so that you can resume database operations as soon as the failover is complete. Since the endpoint for your DB Instance remains the same after a failover, your application can resume database operation without the need for manual administrative intervention.
How Multi-AZ Works: via – https://aws.amazon.com/rds/features/multi-az/
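The read/write split described above can be sketched as a small endpoint router; the endpoint names are hypothetical, and a real application would pass the chosen endpoint to its database driver:

```python
import itertools

class EndpointRouter:
    """Sends writes to the primary RDS endpoint, spreads reads across replicas."""

    def __init__(self, primary, replicas):
        self.primary = primary
        self._replicas = itertools.cycle(replicas)  # simple round-robin over replicas

    def endpoint_for(self, operation):
        # Writes always use the primary endpoint, which stays the same after a
        # Multi-AZ failover; reads can be offloaded to replicas for performance.
        if operation == "write":
            return self.primary
        return next(self._replicas)

router = EndpointRouter(
    primary="mydb.cluster-xyz.us-east-1.rds.amazonaws.com",  # hypothetical
    replicas=[
        "replica-1.xyz.us-east-1.rds.amazonaws.com",         # hypothetical
        "replica-2.xyz.us-east-1.rds.amazonaws.com",
    ],
)
print(router.endpoint_for("write"))  # primary endpoint
print(router.endpoint_for("read"))   # replica-1
print(router.endpoint_for("read"))   # replica-2
```

Note the division of labor this encodes: replicas improve read performance, while the primary/standby pair (Multi-AZ) handles disaster recovery.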
Incorrect options:
You can use both Read Replicas and Multi-AZ for Disaster Recovery
You can use both Read Replicas and Multi-AZ for improved read performance
You can use Read Replicas for Disaster Recovery and Multi-AZ for improved read performance
These three options contradict the details provided earlier in the explanation, so these options are incorrect.
Reference:
Question 56 of 65
56. Question
Which of the following is the best practice for application architecture on AWS Cloud?
Correct
Correct option:
Build loosely coupled components
AWS Cloud recommends microservices as an architectural and organizational approach to software development where software is composed of small independent services that communicate over well-defined APIs. These services are owned by small, self-contained teams.
Microservices architectures make applications easier to scale and faster to develop, enabling innovation and accelerating time-to-market for new features. Each service can be considered a loosely coupled component of a bigger system. You can use services like SNS or SQS to decouple and scale microservices.
Microservices Overview: via – https://aws.amazon.com/blogs/compute/understanding-asynchronous-messaging-for-microservices/
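The decoupling idea can be sketched with Python's in-process `queue.Queue` standing in for a managed queue like Amazon SQS; the producer and consumer share only the queue, never calling each other directly:

```python
import queue
import threading

orders = queue.Queue()  # stand-in for an SQS queue

def producer():
    for order_id in range(3):
        orders.put({"order_id": order_id})  # fire-and-forget: no knowledge of consumers

processed = []

def consumer():
    while True:
        msg = orders.get()
        if msg is None:  # sentinel value used here to stop the worker
            break
        processed.append(msg["order_id"])

worker = threading.Thread(target=consumer)
worker.start()
producer()
orders.put(None)  # signal shutdown
worker.join()
print(processed)  # [0, 1, 2]
```

Because the components communicate only through the queue, either side can be scaled, restarted, or replaced without changing the other — the core benefit of loose coupling.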
Incorrect options:
Build tightly coupled components
Build monolithic applications
With monolithic architectures, all processes are tightly coupled and run as a single service. This means that if one process of the application experiences a spike in demand, the entire architecture must be scaled. Monolithic architectures add risk for application availability because many dependent and tightly coupled processes increase the impact of a single process failure. So both these options are incorrect.
Use synchronous communication between components – Synchronous communication between application components can be problematic if there are sudden spikes in traffic. You should use SNS or SQS to decouple your application components.
Reference: https://aws.amazon.com/blogs/compute/understanding-asynchronous-messaging-for-microservices/
Question 57 of 65
57. Question
Which AWS service would you choose for a data processing project to store unstructured data?
Correct
Correct option:
Amazon DynamoDB
Amazon DynamoDB is a key-value and document database that delivers single-digit millisecond performance at any scale. It’s a fully managed, multi-Region, multi-master, durable database with built-in security, backup and restore, and in-memory caching for internet-scale applications. DynamoDB offers flexible schema and can easily handle unstructured data.
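The flexible-schema point can be illustrated with plain dictionaries standing in for DynamoDB items; no AWS APIs are called, and the item contents are made up:

```python
# Items in the same DynamoDB-style table share only the key attribute ("pk"
# here) and can otherwise carry different fields, including nested documents —
# which is what makes the service a fit for semi-structured/unstructured data.
table = [
    {"pk": "user#1", "name": "Ada", "email": "ada@example.com"},
    {"pk": "user#2", "name": "Grace", "tags": ["admin", "beta"]},  # extra attribute
    {"pk": "doc#9", "body": {"title": "notes", "words": 1200}},    # nested document
]

# Every item has the key; the remaining attributes vary per item.
assert all("pk" in item for item in table)
all_attributes = {k for item in table for k in item}
print(sorted(all_attributes))  # union of attribute names across items
```

Contrast this with a relational table, where every row must fit one fixed set of columns.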
Incorrect options:
Amazon Redshift – Amazon Redshift is a fully managed, petabyte-scale, cloud-based data warehouse product designed for large-scale data set storage and analysis. Amazon Redshift does not support storing unstructured data.
Amazon Aurora – Amazon Aurora is an AWS service for relational databases. Aurora does not support storing unstructured data.
Amazon RDS – Amazon RDS is an AWS service for relational databases. RDS does not support storing unstructured data.
Reference: https://aws.amazon.com/dynamodb/features/
Question 58 of 65
58. Question
AWS Organizations provides which of the following benefits? (Select two)
Correct
Correct options:
Volume discounts for Amazon EC2 and Amazon S3 aggregated across the member AWS accounts
Share the reserved EC2 instances amongst the member AWS accounts
AWS Organizations helps you to centrally manage billing; control access, compliance, and security; and share resources such as reserved EC2 instances across your AWS accounts.
Using AWS Organizations, you can automate account creation, create groups of accounts to reflect your business needs, and apply policies for these groups for governance. You can also simplify billing by setting up a single payment method for all of your AWS accounts. AWS Organizations is available to all AWS customers at no additional charge.
You can use AWS Organizations to set up a single payment method for all the AWS accounts in your organization through consolidated billing. With consolidated billing, you can see a combined view of charges incurred by all your accounts, as well as take advantage of pricing benefits from aggregated usage, such as volume discounts for Amazon EC2 and Amazon S3.
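The volume-discount benefit is just tiered pricing applied to aggregated usage, which a short sketch makes concrete (the two-tier price card below is hypothetical, chosen only to show the shape of the effect):

```python
def s3_storage_cost(gb: float) -> float:
    # Hypothetical two-tier price card: first 51,200 GB (50 TB) at $0.023/GB,
    # everything beyond at $0.022/GB. Real S3 tiers differ by region.
    tier1 = min(gb, 51_200)
    tier2 = max(gb - 51_200, 0)
    return tier1 * 0.023 + tier2 * 0.022

# Two member accounts each storing 40 TB, billed separately vs. with usage
# aggregated under consolidated billing:
separate = s3_storage_cost(40_000) + s3_storage_cost(40_000)
combined = s3_storage_cost(80_000)
assert combined < separate  # aggregation crosses the tier boundary sooner
```

Neither account alone reaches the cheaper tier, but their combined usage does, so the organization as a whole pays the lower blended rate.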
Key benefits of AWS Organizations: via – https://aws.amazon.com/organizations/
Incorrect options:
Check vulnerabilities on EC2 instances across the member AWS accounts
Deploy patches on EC2 instances across the member AWS accounts
Provision EC2 Spot instances across the member AWS accounts
AWS Organizations does not check for vulnerabilities, deploy patches, or provision Spot instances on EC2 instances in member accounts, so these options are incorrect.
Reference: https://aws.amazon.com/organizations/
Question 59 of 65
59. Question
Threat detection is of paramount importance for security in the Cloud. Which AWS service offers this key feature?
Correct option:
Amazon GuardDuty
Amazon GuardDuty is a threat detection service that monitors malicious activity and unauthorized behavior to protect your AWS account. GuardDuty analyzes billions of events across your AWS accounts from AWS CloudTrail (AWS user and API activity in your accounts), Amazon VPC Flow Logs (network traffic data), and DNS Logs (name query patterns).
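Turning GuardDuty on is a single API call, sketched below. `create_detector` is the real boto3 GuardDuty operation, but running the function requires the boto3 package and AWS credentials, so the import is kept inside the function and the sketch only defines it:

```python
def enable_guardduty() -> str:
    """Sketch: enable GuardDuty in the current account and region.

    Requires boto3 and AWS credentials at call time; a detector is the
    per-account, per-region GuardDuty resource.
    """
    import boto3  # lazy import so the sketch stays self-contained
    gd = boto3.client("guardduty")
    # Enable=True starts continuous analysis of CloudTrail events,
    # VPC Flow Logs, and DNS logs for this account/region.
    resp = gd.create_detector(Enable=True)
    return resp["DetectorId"]
```

Once the detector exists, findings appear in the GuardDuty console with no agents to install or log pipelines to build.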
How GuardDuty Works: via – https://aws.amazon.com/guardduty/
Incorrect options:
Amazon Inspector – Amazon Inspector is an automated, security assessment service that helps you check for unintended network accessibility of your Amazon EC2 instances and for vulnerabilities on those EC2 instances. Amazon Inspector assessments are offered to you as pre-defined rules packages mapped to common security best practices and vulnerability definitions.
AWS Shield – AWS Shield is a managed Distributed Denial of Service (DDoS) protection service that safeguards applications running on AWS. AWS Shield provides always-on detection and automatic inline mitigations that minimize application downtime and latency, so there is no need to engage AWS Support to benefit from DDoS protection.
AWS CloudHSM – AWS CloudHSM is a cloud-based hardware security module (HSM) that enables you to easily generate and use your encryption keys on the AWS Cloud. With CloudHSM, you can manage your encryption keys using FIPS 140-2 Level 3 validated HSMs. It is a fully-managed service that automates time-consuming administrative tasks for you, such as hardware provisioning, software patching, high-availability, and backups.
Reference: https://aws.amazon.com/guardduty/
Question 60 of 65
60. Question
Which AWS entity enables you to privately connect your VPC to an Amazon SQS queue?
Correct option:
VPC Interface Endpoint
An interface endpoint is an elastic network interface with a private IP address from the IP address range of your subnet that serves as an entry point for traffic destined to a supported service. Interface endpoints are powered by AWS PrivateLink, a technology that enables you to privately access services by using private IP addresses. AWS PrivateLink restricts all network traffic between your VPC and services to the Amazon network. You do not need an internet gateway, a NAT device, or a virtual private gateway.
Exam Alert:
You may see a question around this concept in the exam. Just remember that only S3 and DynamoDB support VPC Endpoint Gateway. All other services that support VPC Endpoints use a VPC Endpoint Interface.
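Interface endpoints are identified by a PrivateLink service name of the form `com.amazonaws.<region>.<service>`; SQS supports interface endpoints, so the name you would pass when creating the endpoint can be sketched as:

```python
def interface_endpoint_service_name(region: str, service: str) -> str:
    # PrivateLink interface endpoints are addressed by a service name of the
    # form com.amazonaws.<region>.<service> (e.g. for SQS, SNS, Kinesis).
    return f"com.amazonaws.{region}.{service}"

assert interface_endpoint_service_name("us-east-1", "sqs") == "com.amazonaws.us-east-1.sqs"
```

That service name is what you supply (for example, to `aws ec2 create-vpc-endpoint` with endpoint type Interface) so that traffic to the SQS queue stays on the Amazon network.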
Incorrect options:
VPC Gateway Endpoint – A Gateway Endpoint is a gateway that you specify as a target for a route in your route table for traffic destined to a supported AWS service. The following AWS services are supported: Amazon S3, DynamoDB. You cannot use VPC Gateway Endpoint to privately connect your VPC to an Amazon SQS queue.
AWS Direct Connect – AWS Direct Connect is a cloud service solution that makes it easy to establish a dedicated network connection from your premises to AWS. You can use AWS Direct Connect to establish a private virtual interface from your on-premises network directly to your Amazon VPC. This dedicated connection can take a month or more to provision. You cannot use AWS Direct Connect to privately connect your VPC to an Amazon SQS queue.
Internet Gateway – An Internet Gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between your VPC and the internet. An internet gateway serves two purposes: to provide a target in your VPC route tables for internet-routable traffic, and to perform network address translation (NAT) for instances that have been assigned public IPv4 addresses. You cannot use an Internet Gateway to privately connect your VPC to an Amazon SQS queue.
Reference: https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints.html
Question 61 of 65
61. Question
Which entity ensures that your application on Amazon EC2 always has the right amount of capacity to handle the current traffic demand?
Correct option:
Auto Scaling
Auto Scaling helps you ensure that you have the correct number of Amazon EC2 instances available to handle the load for your application. You create collections of EC2 instances, called Auto Scaling groups. You can specify the minimum number of instances in each Auto Scaling group, and Amazon EC2 Auto Scaling ensures that your group never goes below this size.
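The arithmetic behind right-sizing can be sketched in a few lines. This is a simplified, hypothetical model of target-tracking scaling (the real service reacts to CloudWatch alarms), shown only to illustrate how desired capacity follows load while respecting the group's bounds:

```python
import math

def desired_capacity(current: int, metric: float, target: float,
                     min_size: int, max_size: int) -> int:
    # Simplified target-tracking sketch: scale the fleet in proportion to
    # observed load vs. the target, then clamp to the Auto Scaling group's
    # configured minimum and maximum sizes.
    desired = math.ceil(current * metric / target)
    return max(min_size, min(max_size, desired))

# 4 instances averaging 90% CPU against a 60% target -> scale out to 6.
assert desired_capacity(4, 90, 60, 2, 10) == 6
# Load near zero -> scale in, but never below the group's minimum of 2.
assert desired_capacity(4, 1, 60, 2, 10) == 2
```

The clamping is the point of the question: the group never drops below its minimum size, so capacity always matches current demand within the bounds you set.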
EC2 Auto Scaling Overview: via – https://docs.aws.amazon.com/autoscaling/ec2/userguide/what-is-amazon-ec2-auto-scaling.html
Incorrect options:
Multi AZ deployment – With Availability Zones, you can design and operate applications and databases that automatically fail over between zones without interruption. Multi AZ deployment of EC2 instances provides high availability, but it does not help in scaling resources.
Network Load Balancer – Network Load Balancer is best suited for load balancing of Transmission Control Protocol (TCP), User Datagram Protocol (UDP) and Transport Layer Security (TLS) traffic where extreme performance is required. It distributes traffic; it does not scale resources.
Application Load Balancer – An Application Load Balancer serves as the single point of contact for clients. The load balancer distributes incoming application traffic across multiple targets, such as EC2 instances, in multiple Availability Zones. It distributes traffic; it does not scale resources.
Reference: https://docs.aws.amazon.com/autoscaling/ec2/userguide/what-is-amazon-ec2-auto-scaling.html
Question 62 of 65
62. Question
A streaming media company wants to convert English language subtitles into Spanish language subtitles. As a Cloud Practitioner, which AWS service would you recommend for this use-case?
Correct option:
Amazon Translate
Amazon Translate is a neural machine translation service that delivers fast, high-quality, and affordable language translation. Amazon Translate allows you to localize content – such as websites and applications – for international users, and to easily translate large volumes of text efficiently.
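The subtitle use case maps directly onto Translate's `TranslateText` API, sketched below. `translate_text` and its `Text`/`SourceLanguageCode`/`TargetLanguageCode` parameters are the real boto3 interface, but running the function requires boto3 and AWS credentials, so the sketch only defines it:

```python
def translate_subtitle(text: str) -> str:
    """Sketch: translate one English subtitle line into Spanish.

    Requires boto3 and AWS credentials at call time.
    """
    import boto3  # lazy import so the sketch stays self-contained
    client = boto3.client("translate")
    resp = client.translate_text(
        Text=text,
        SourceLanguageCode="en",  # English source subtitles
        TargetLanguageCode="es",  # Spanish target subtitles
    )
    return resp["TranslatedText"]
```

In practice the company would call this per subtitle cue (or use Translate's batch jobs for large volumes) and write the results back into the subtitle file.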
Incorrect options:
Amazon Polly – You can use Amazon Polly to turn text into lifelike speech thereby allowing you to create applications that talk. Polly’s Text-to-Speech (TTS) service uses advanced deep learning technologies to synthesize natural sounding human speech.
Amazon Transcribe – You can use Amazon Transcribe to add speech-to-text capability to your applications. Amazon Transcribe uses a deep learning process called automatic speech recognition (ASR) to convert speech to text quickly and accurately. Amazon Transcribe can be used to transcribe customer service calls, to automate closed captioning and subtitling, and to generate metadata for media assets.
Amazon Rekognition – With Amazon Rekognition, you can identify objects, people, text, scenes, and activities in images and videos, as well as to detect any inappropriate content. Amazon Rekognition also provides highly accurate facial analysis and facial search capabilities that you can use to detect, analyze, and compare faces for a wide variety of user verification, people counting, and public safety use cases.
Reference: https://aws.amazon.com/translate/
Question 63 of 65
63. Question
Which AWS service can help you analyze your infrastructure to identify unattached or underutilized EBS volumes?
Correct option:
AWS Trusted Advisor
AWS Trusted Advisor is an online tool that provides real-time guidance to help provision your resources following AWS best practices. Whether establishing new workflows, developing applications, or as part of ongoing improvement, recommendations provided by Trusted Advisor regularly help keep your solutions provisioned optimally. AWS Trusted Advisor analyzes your AWS environment and provides best practice recommendations in five categories: Cost Optimization, Performance, Security, Fault Tolerance, Service Limits.
AWS Trusted Advisor can check Amazon Elastic Block Store (Amazon EBS) volume configurations and warns when volumes appear to be underused. Charges begin when a volume is created. If a volume remains unattached or has very low write activity (excluding boot volumes) for a period of time, the volume is probably not being used.
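The check Trusted Advisor automates can be approximated by hand: an EBS volume that is not attached to any instance reports the `available` state. A minimal sketch over hypothetical `DescribeVolumes`-shaped records:

```python
# Sketch: flag unattached EBS volumes the way the Trusted Advisor cost
# check would. The sample records below are hypothetical; real data comes
# from the EC2 DescribeVolumes API, where "available" means unattached.
volumes = [
    {"VolumeId": "vol-1", "State": "in-use"},     # attached to an instance
    {"VolumeId": "vol-2", "State": "available"},  # unattached, still billed
    {"VolumeId": "vol-3", "State": "available"},
]

unattached = [v["VolumeId"] for v in volumes if v["State"] == "available"]
assert unattached == ["vol-2", "vol-3"]
```

Trusted Advisor goes further than this state filter, also flagging attached volumes with very low write activity, but the idea is the same: volumes that incur charges without being used.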
How Trusted Advisor Works: via – https://aws.amazon.com/premiumsupport/technology/trusted-advisor/
Incorrect options:
AWS Config – AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. Config continuously monitors and records your AWS resource configurations and allows you to automate the evaluation of recorded configurations against desired configurations. Think resource-specific change history, audit, and compliance; think Config. It's a configuration tracking service, not an infrastructure utilization tracking service.
Amazon CloudWatch – Amazon CloudWatch is a monitoring and observability service built for DevOps engineers, developers, site reliability engineers (SREs), and IT managers. CloudWatch provides data and actionable insights to monitor applications, respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health. Amazon EBS emits notifications based on Amazon CloudWatch Events for a variety of volume, snapshot, and encryption status changes. With CloudWatch Events, you can establish rules that trigger programmatic actions in response to a change in volume, snapshot, or encryption key state (though not for underutilized volume usage).
Amazon Inspector – Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on your Amazon EC2 instances. Amazon Inspector automatically assesses applications for exposure, vulnerabilities, and deviations from best practices. It's a security assessment service, not an infrastructure utilization tracking service.
References: https://aws.amazon.com/premiumsupport/technology/trusted-advisor/best-practice-checklist/ https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-cloud-watch-events.html
Question 64 of 65
64. Question
An e-commerce company wants to review the Payment Card Industry (PCI) reports on AWS Cloud. Which AWS resource can be used to address this use-case?
Correct
Correct option:
AWS Artifact
AWS Artifact is your go-to, central resource for compliance-related information that matters to your organization. It provides on-demand access to AWS' security and compliance reports and select online agreements. Reports available in AWS Artifact include Service Organization Control (SOC) reports, Payment Card Industry (PCI) reports, and certifications from accreditation bodies across geographies and compliance verticals that validate the implementation and operating effectiveness of AWS security controls. AWS Artifact is not a service in the traditional sense; it is a no-cost, self-service portal for on-demand access to AWS compliance reports.
Incorrect options:
AWS Trusted Advisor – AWS Trusted Advisor is an online tool that provides real-time guidance to help you provision your resources following AWS best practices. Whether establishing new workflows, developing applications, or as part of ongoing improvement, recommendations provided by Trusted Advisor regularly help keep your solutions provisioned optimally.
AWS Secrets Manager – AWS Secrets Manager helps you protect secrets needed to access your applications, services, and IT resources. The service enables you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. Users and applications retrieve secrets with a call to Secrets Manager APIs, eliminating the need to hardcode sensitive information in plain text.
AWS Cost and Usage Reports – The AWS Cost and Usage Reports (AWS CUR) contains the most comprehensive set of cost and usage data available. You can use Cost and Usage Reports to publish your AWS billing reports to an Amazon Simple Storage Service (Amazon S3) bucket that you own. You can receive reports that break down your costs by the hour or month, by product or product resource, or by tags that you define yourself. AWS updates the report in your bucket once a day in comma-separated value (CSV) format.
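Since CUR files are plain CSV, they can be processed with any CSV tooling. The sketch below aggregates unblended cost by product; the column names follow the CUR convention (`lineItem/ProductCode`, `lineItem/UnblendedCost`), but the rows themselves are invented sample data:

```python
# Sketch: summing cost per product from a Cost and Usage Report CSV.
# Column names match the CUR format; the rows are invented sample data.
import csv
import io
from collections import defaultdict

sample_cur = """\
lineItem/UsageStartDate,lineItem/ProductCode,lineItem/UnblendedCost
2023-04-01T00:00:00Z,AmazonEC2,0.416
2023-04-01T00:00:00Z,AmazonS3,0.023
2023-04-01T01:00:00Z,AmazonEC2,0.416
"""

cost_by_product = defaultdict(float)
for row in csv.DictReader(io.StringIO(sample_cur)):
    # Each line item carries its own cost; accumulate per product code.
    cost_by_product[row["lineItem/ProductCode"]] += float(row["lineItem/UnblendedCost"])

print({product: round(cost, 4) for product, cost in cost_by_product.items()})
```

In production you would read the same columns from the CSV objects AWS delivers to your S3 bucket rather than from an inline string.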
Reference: https://aws.amazon.com/artifact/
Question 65 of 65
65. Question
According to the AWS Shared Responsibility Model, which of the following are responsibilities of the customer for IAM? (Select two)
Correct
Correct options:
Security and Compliance is a shared responsibility between AWS and the customer. This shared model can help relieve the customer's operational burden, as AWS operates, manages, and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the service operates.
Enable MFA on all accounts
Analyze user access patterns and review IAM permissions
Under the AWS Shared Responsibility Model, customers are responsible for enabling MFA on all accounts, analyzing user access patterns, and reviewing IAM permissions.
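One concrete way customers review IAM usage is the IAM credential report (`aws iam generate-credential-report` / `aws iam get-credential-report`), which returns a base64-encoded CSV. The sketch below flags console users without MFA; the rows are invented and only a subset of the report's real columns (`user`, `password_enabled`, `mfa_active`) is shown:

```python
# Sketch: flagging console users without MFA from an IAM credential report.
# The rows are invented sample data; the column names match a subset of the
# columns in the real credential report CSV.
import csv
import io

report_csv = """\
user,password_enabled,mfa_active
alice,true,true
bob,true,false
ci-bot,false,false
"""

users_missing_mfa = [
    row["user"]
    for row in csv.DictReader(io.StringIO(report_csv))
    # Only console users (password_enabled) are expected to carry MFA devices;
    # programmatic-only identities like ci-bot are skipped.
    if row["password_enabled"] == "true" and row["mfa_active"] == "false"
]
print(users_missing_mfa)
```

Running such a check periodically is exactly the kind of customer-side responsibility ("security in the cloud") the model describes.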
Shared Responsibility Model Overview: via – https://aws.amazon.com/compliance/shared-responsibility-model/
Incorrect options:
Manage global network security infrastructure
Configuration and vulnerability analysis for the underlying software infrastructure
Compliance validation for the underlying software infrastructure
According to the AWS Shared Responsibility Model, AWS is responsible for “Security of the Cloud”. This includes protecting the infrastructure that runs all of the services offered in the AWS Cloud. This infrastructure is composed of the hardware, software, networking, and facilities that run AWS Cloud services. Therefore these three options fall under the responsibility of AWS.
Reference: https://aws.amazon.com/compliance/shared-responsibility-model/