Microsoft Azure AZ-304 Practice Test 8
Question 1 of 65
1. Question
You have an on-premises file server that stores 2 TB of data files.
You plan to move the data files to Azure Blob storage in the Central Europe region.
You need to recommend a storage account type to store the data files and a replication solution for the storage account. The solution must meet the following requirements:
Be available if a single Azure datacenter fails.
Support storage tiers.
Minimize cost.
What should you recommend as the replication solution?
Correct
Zone-redundant storage (ZRS) replicates your Azure Storage data synchronously across three Azure availability zones in the primary region. Each availability zone is a separate physical location with independent power, cooling, and networking. ZRS offers durability for Azure Storage data objects of at least 99.9999999999% (12 9’s) over a given year.
With ZRS, your data is still accessible for both read and write operations even if a zone becomes unavailable. If a zone becomes unavailable, Azure undertakes networking updates, such as DNS re-pointing.
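As a rough illustration (not part of the original question), the sketch below creates a general-purpose v2 storage account with Standard_ZRS replication using the Azure SDK for Python (azure-mgmt-storage); the subscription, resource group, account name, and region are placeholder assumptions.

```python
# Sketch: create a StorageV2 account with zone-redundant (ZRS) replication.
# Assumes azure-identity and azure-mgmt-storage; all names below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import StorageAccountCreateParameters, Sku

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

poller = client.storage_accounts.begin_create(
    resource_group_name="rg-files",              # placeholder resource group
    account_name="companyfiles01",               # placeholder, must be globally unique
    parameters=StorageAccountCreateParameters(
        location="westeurope",                   # placeholder region
        kind="StorageV2",                        # GPv2 accounts support hot/cool/archive tiers
        sku=Sku(name="Standard_ZRS"),            # zone-redundant replication
        access_tier="Hot",
    ),
)
account = poller.result()
print(account.name, account.sku.name)
```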
You have deployed several business-critical applications in Azure Virtual Machines (VMs) in a single Azure subscription. The applications are hosted across 15 virtual machines. You need to ensure that your operations team receives an email message when any virtual machine is powered off, restarted, or deallocated.
What is the minimum number of rules and action groups that you require?
Correct
Alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues before the users of your system notice them.
Alert rules are separated from alerts and the actions taken when an alert fires. The alert rule captures the target and criteria for alerting. The alert rule can be in an enabled or a disabled state.
You can define only one Activity Log signal per alert rule. Power off, restart, and deallocate are separate Activity Log signals, so we need three rules.
An action group is a collection of notification preferences defined by the owner of an Azure subscription. Azure Monitor and Service Health alerts use action groups to notify users that an alert has been triggered. Various alerts may use the same action group or different action groups depending on the user’s requirements.
So, one action group suffices for this requirement.
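To make the one-action-group point concrete, here is a minimal sketch (assumed names and addresses, not from the question) that creates a single action group with an email receiver using azure-mgmt-monitor; each of the three Activity Log alert rules would then reference this group.

```python
# Sketch: one shared action group that emails the operations team.
# Assumes azure-identity and azure-mgmt-monitor; names and addresses are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient
from azure.mgmt.monitor.models import ActionGroupResource, EmailReceiver

client = MonitorManagementClient(DefaultAzureCredential(), "<subscription-id>")

group = client.action_groups.create_or_update(
    resource_group_name="rg-monitoring",
    action_group_name="ag-ops-email",
    action_group=ActionGroupResource(
        location="Global",
        group_short_name="opsmail",              # short name, max 12 characters
        enabled=True,
        email_receivers=[EmailReceiver(name="OpsTeam", email_address="ops@example.com")],
    ),
)
print(group.id)  # reference this ID from each of the three Activity Log alert rules
```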
You plan to migrate on-premises file data to an Azure Storage account. The on-premises file data is accessed by users through a mapped drive. You need to ensure that users can access the data in the same way after migration. You need to recommend network changes. What should you recommend?
You are working on a solution design for your organization which has developed and deployed several Azure App Service Web and API applications. The applications use Azure SQL Database to store and retrieve data. Several departments have the following requests to support the applications:
Database department wants to store an asymmetric key to allow real-time I/O encryption and decryption of the Azure SQL database data and log files.
Development department wants to enable the applications to retrieve X.509 certificates, stored in an Azure AD-protected resource, by using an access token.
Security department wants to protect Azure SQL database connection strings and only allow access to the connection strings during application runtime.
You need to recommend the appropriate Azure service for Development department request.
What should you recommend?
Correct
Azure Key Vault is a cloud service that provides a secure store for secrets. You can securely store keys, passwords, certificates, and other secrets.
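A minimal sketch of the development department's scenario, assuming the application uses azure-identity to obtain an Azure AD access token and azure-keyvault-certificates to read an X.509 certificate; the vault URL and certificate name are placeholders.

```python
# Sketch: retrieve an X.509 certificate from Key Vault with an Azure AD access token.
# Assumes azure-identity and azure-keyvault-certificates; vault URL and name are placeholders.
from azure.identity import DefaultAzureCredential
from azure.keyvault.certificates import CertificateClient

credential = DefaultAzureCredential()  # obtains an Azure AD access token for Key Vault
client = CertificateClient(vault_url="https://myvault.vault.azure.net", credential=credential)

certificate = client.get_certificate("app-tls-cert")   # placeholder certificate name
print(certificate.name, certificate.properties.expires_on)
```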
An organization named myowncompany has adopted various cloud services such as Azure Active Directory, O365, Azure, and AWS services. The administrators of myowncompany are looking for a service that collects data from various sources to detect threats and automate responses rapidly.
Which solution do you recommend?
Correct
Microsoft Azure Sentinel is a scalable, cloud-native, security information event management (SIEM) and security orchestration automated response (SOAR) solution. Azure Sentinel delivers intelligent security analytics and threat intelligence across the enterprise, providing a single solution for alert detection, threat visibility, proactive hunting, and threat response.
Your company has deployed various applications in Azure. You must proactively notify the Finance and Administrator teams whenever Azure costs reach a threshold, to help them manage costs and monitor how spending progresses over time.
What solution should you propose to automate report generation?
Correct
Budgets in Cost Management help you plan for and drive organizational accountability. With budgets, you can account for the Azure services you consume or subscribe to during a specific period. They help you inform others about their spending to proactively manage costs, and to monitor how spending progresses over time. When the budget thresholds you’ve created are exceeded, only notifications are triggered.
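As an illustrative sketch only (exact model names can vary across SDK versions), the following uses azure-mgmt-consumption to define a monthly budget with a notification at 80% of the amount; the scope, dates, amount, and email addresses are placeholder assumptions.

```python
# Sketch: monthly cost budget that notifies the Finance and Administrator teams at 80%.
# Assumes azure-identity and azure-mgmt-consumption; scope, dates, and emails are placeholders.
import datetime
from azure.identity import DefaultAzureCredential
from azure.mgmt.consumption import ConsumptionManagementClient
from azure.mgmt.consumption.models import Budget, BudgetTimePeriod, Notification

client = ConsumptionManagementClient(DefaultAzureCredential(), "<subscription-id>")
scope = "/subscriptions/<subscription-id>"       # budgets can also target a resource group

budget = client.budgets.create_or_update(
    scope=scope,
    budget_name="monthly-azure-budget",
    parameters=Budget(
        category="Cost",
        amount=10000,
        time_grain="Monthly",
        time_period=BudgetTimePeriod(
            start_date=datetime.datetime(2021, 1, 1),
            end_date=datetime.datetime(2022, 12, 31),
        ),
        notifications={
            "actual80percent": Notification(
                enabled=True,
                operator="GreaterThan",
                threshold=80,
                contact_emails=["finance@example.com", "admin@example.com"],
            )
        },
    ),
)
print(budget.name, budget.amount)
```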
Your customer has an Azure subscription named Subscription1 and a resource group named RG1. Your client has provisioned various Azure resources in RG1, including a storage account named storageaccount1. Your customer has transferred various types of files, about 100 GB of data, from on-premises into three containers named container1, container2, and container3 in storageaccount1. Your customer would like to allow users to access the files copied into container3 without access keys or authentication, while access to the data in container1 and container2 must remain protected.
Which solution should you recommend while keeping administration effort minimal?
Correct
You can configure a container with the following permissions:
No public read access: The container and its blobs can be accessed only by the storage account owner. This is the default for all new containers.
Public read access for blobs only: Blobs within the container can be read by anonymous request, but container data is not available. Anonymous clients cannot enumerate the blobs within the container.
Public read access for container and its blobs: All container and blob data can be read by anonymous request. Clients can enumerate blobs within the container by anonymous request, but cannot enumerate containers within the storage account.
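A minimal sketch of the recommendation using azure-storage-blob: container3 is created with blob-level anonymous read access, while container1 and container2 keep the default private access. The connection string is a placeholder.

```python
# Sketch: anonymous blob-level read on container3, private access on the others.
# Assumes azure-storage-blob; the connection string is a placeholder.
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<storage-connection-string>")

# Default (no public_access) => private: only authorized requests can read the blobs.
service.create_container("container1")
service.create_container("container2")

# "blob" public access: anonymous clients can read blobs but cannot enumerate them.
service.create_container("container3", public_access="blob")
```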
You are planning to deploy several business-critical, internet-facing applications in Azure. Your customer is concerned about protecting these applications from malicious attacks. You are asked to implement a solution as quickly as possible with minimum implementation effort.
What should be your approach?
Correct
Azure Web Application Firewall (WAF) combined with Azure Policy can help enforce organizational standards and assess compliance at-scale for WAF resources. Azure policy is a governance tool that provides an aggregated view to evaluate the overall state of the environment, with the ability to drill down to the per-resource, per-policy granularity. Azure policy also helps to bring your resources to compliance through bulk remediation for existing resources and automatic remediation for new resources.
You are migrating on-premises applications to Microsoft Azure. Your customer's preference is to have the maximum number of applications in the Azure Platform as a Service (PaaS) model rather than re-hosting them as-is in the target environment. You have completed the assessment of your customer's on-premises applications and found that most of the applications are ASP.NET applications and depend on a third-party component that requires customization of host operating system settings.
What should be your approach for migration with minimal effort?
Correct
Azure App Service provides pre-defined application stacks on Windows like ASP.NET or Node.js, running on IIS. The preconfigured Windows environment locks down the operating system from administrative access, software installations, and changes to the global assembly cache, and so on (see Operating system functionality on Azure App Service). However, using a custom Windows container in App Service lets you make OS changes that your app needs, so it’s easy to migrate on-premises app that requires custom OS and software configuration.
You are working on a migration plan for your customer's data center migration to Microsoft Azure. Your customer has several business-critical applications currently running in an on-premises environment. Most of these business-critical applications use a SQL Server database as the backend. Your analysis of the on-premises databases shows that they currently use the Linked Servers and Database Mail features. Your customer wants to migrate as-is to the target environment without any changes to the application architecture.
What should be your migration approach to migrate application database to Azure with minimal effort?
Correct
Part of the Azure SQL product family, Azure SQL Managed Instance is the intelligent, scalable cloud database service that combines the broadest SQL Server database engine compatibility with all the benefits of a fully managed and evergreen platform as a service. SQL Managed Instance has near 100% compatibility with the latest SQL Server (Enterprise Edition) database engine, providing a native virtual network (VNet) implementation that addresses common security concerns, and a business model favorable for existing SQL Server customers. SQL Managed Instance allows existing SQL Server customers to lift and shift their on-premises applications to the cloud with minimal application and database changes. At the same time, SQL Managed Instance preserves all PaaS capabilities (automatic patching and version updates, automated backups, high availability) that drastically reduce management overhead and TCO.
Database Mail is an enterprise solution for sending e-mail messages from the SQL Server Database Engine or Azure SQL Database Managed Instance. Using Database Mail, your database applications can send e-mail messages to users. The messages can contain query results, and can also include files from any resource on your network.
You have designed your Microsoft Azure environment in a hub-spoke model for your customer. The hub is a centralized virtual network, and the spokes are designed to host your customer's applications. You have used network security groups and application security groups as protective measures and to control traffic. Your customer wants to apply Azure Firewall to further increase security. Where should you deploy Azure Firewall to reduce costs?
Correct
You can deploy Azure Firewall on any virtual network, but customers typically deploy it on a central virtual network and peer other virtual networks to it in a hub-and-spoke model. You can then set the default route from the peered virtual networks to point to this central firewall virtual network. Global VNet peering is supported, but it isn’t recommended because of potential performance and latency issues across regions. For best performance, deploy one firewall per region.
The advantage of this model is the ability to centrally exert control on multiple spoke VNETs across different subscriptions. There are also cost savings as you don’t need to deploy a firewall in each VNet separately.
Your customer has deployed various custom applications in Azure virtual machines (VMs) that use an Azure SQL Database instance as the backend. The IT department of your customer recently enabled forced tunneling.
Since the configuration change, developers have noticed degraded performance when they access the database.
You need to recommend a solution to minimize latency when accessing the database. The solution must minimize costs.
What should you include in the recommendation?
Correct
Virtual Network (VNet) service endpoint provides secure and direct connectivity to Azure services over an optimized route over the Azure backbone network. Endpoints allow you to secure your critical Azure service resources to only your virtual networks. Service Endpoints enables private IP addresses in the VNet to reach the endpoint of an Azure service without needing a public IP address on the VNet.
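A rough sketch of enabling a Microsoft.Sql service endpoint on the subnet that hosts the VMs, using azure-mgmt-network; the resource group, virtual network, subnet, and address prefix are placeholder assumptions.

```python
# Sketch: add a Microsoft.Sql service endpoint to the VM subnet.
# Assumes azure-identity and azure-mgmt-network; all names and prefixes are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import Subnet, ServiceEndpointPropertiesFormat

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

poller = client.subnets.begin_create_or_update(
    resource_group_name="rg-apps",
    virtual_network_name="vnet-apps",
    subnet_name="snet-vms",
    subnet_parameters=Subnet(
        address_prefix="10.0.1.0/24",
        service_endpoints=[ServiceEndpointPropertiesFormat(service="Microsoft.Sql")],
    ),
)
print(poller.result().service_endpoints)
```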
You have an Azure subscription that contains an Azure Blob storage account named mystorageaccount1. You have an on-premises file server named myserver1 that runs Windows Server 2016. myserver1 stores 1 TB of company files. You need to store a copy of the company files in mystorageaccount1.
Which two possible Azure services achieve this goal? Each correct answer presents a complete solution.
Correct
Azure Import/Export service is used to securely import large amounts of data to Azure Blob storage and Azure Files by shipping disk drives to an Azure datacenter. This service can also be used to transfer data from Azure Blob storage to disk drives and ship to your on-premises sites. Data from one or more disk drives can be imported either to Azure Blob storage or Azure Files.
With Data Factory, you can use the Copy Activity in a data pipeline to move data from both on-premises and cloud source data stores to a centralized data store in the cloud for further analysis.
Your customer has several web applications that use a MongoDB database. You plan to migrate the web applications to Azure. Your customer does not have the time or budget to make code and configuration changes, so you must migrate to Azure while minimizing code and configuration changes.
You need to design the Cosmos DB configuration.
What should you recommend?
Correct
Azure Cosmos DB is Microsoft’s globally distributed, multi-model database service for mission-critical applications. Azure Cosmos DB provides turn-key global distribution, elastic scaling of throughput and storage worldwide, single-digit millisecond latencies at the 99th percentile, and guaranteed high availability, all backed by industry-leading SLAs. Azure Cosmos DB automatically indexes data without requiring you to deal with schema and index management. It is multi-model and supports document, key-value, graph, and columnar data models. By default, you can interact with Cosmos DB using SQL API. Additionally, the Cosmos DB service implements wire protocols for common NoSQL APIs including Cassandra, MongoDB, Gremlin, and Azure Table Storage. This allows you to use your familiar NoSQL client drivers and tools to interact with your Cosmos database.
In this scenario, we are migrating applications that use a MongoDB database, so we must use the MongoDB API.
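To show why the MongoDB API minimizes code changes, here is a minimal sketch: the existing pymongo-based code keeps working, and only the connection string changes to point at the Cosmos DB account. The connection string, database, and collection names are placeholders.

```python
# Sketch: existing MongoDB driver code pointed at a Cosmos DB account (MongoDB API).
# Assumes pymongo; the connection string (copied from the Azure portal) is a placeholder.
from pymongo import MongoClient

# Only the connection string changes; queries and driver calls stay the same.
client = MongoClient(
    "mongodb://<cosmos-account>:<key>@<cosmos-account>.mongo.cosmos.azure.com:10255/?ssl=true"
)

db = client["storefront"]          # placeholder database name
orders = db["orders"]              # placeholder collection name
orders.insert_one({"orderId": 1, "status": "shipped"})
print(orders.find_one({"orderId": 1}))
```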
You have several Microsoft SQL Server Integration Services (SSIS) packages that are configured to use on-premises SQL Server databases as their destinations. You are planning to migrate the on-premises databases to Azure SQL Database. You need to recommend a solution to host the SSIS packages in Azure. The solution must ensure that the packages can target the SQL Database instances as their destinations.
What should you include in the recommendation?
Correct
Azure Data Factory hosts the runtime engine for SSIS packages on Azure. The runtime engine is called the Azure-SSIS Integration Runtime (Azure-SSIS IR)
You have a hybrid deployment of Azure Active Directory (Azure AD). You need to recommend a solution to ensure that the Azure AD tenant can be managed only from the computers on your on-premises network.
What should you include in the recommendation?
Correct
Conditional Access is the tool used by Azure Active Directory to bring signals together, to make decisions, and enforce organizational policies. By using Conditional Access policies, you can apply the right access controls when needed to keep your organization secure and stay out of your user’s way when not needed.
You are developing an e-commerce application for your customer. The application will contain several Azure cloud services and will handle different components of a transaction. Different cloud services will process customer orders, billing, payment, inventory, and shipping.
You need to recommend a solution to enable the cloud services to asynchronously communicate transaction information by using REST messages.
What would you include in your recommendation?
Correct
Azure Queue Storage is a service for storing large numbers of messages. You access messages from anywhere in the world via authenticated calls using HTTP or HTTPS. A queue message can be up to 64 KB in size. A queue may contain millions of messages, up to the total capacity limit of a storage account. Queues are commonly used to create a backlog of work to process asynchronously.
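A minimal sketch of the asynchronous pattern with azure-storage-queue: one cloud service enqueues the transaction message over HTTPS (REST) and another later dequeues and processes it. The connection string and queue name are placeholders.

```python
# Sketch: asynchronous hand-off of order messages between services via Queue Storage.
# Assumes azure-storage-queue; the connection string is a placeholder.
import json
from azure.storage.queue import QueueClient

queue = QueueClient.from_connection_string("<storage-connection-string>", queue_name="orders")
queue.create_queue()  # raises ResourceExistsError if the queue already exists

# Producer (e.g. the ordering service) posts a message (up to 64 KB).
queue.send_message(json.dumps({"orderId": 42, "amount": 19.99}))

# Consumer (e.g. the billing service) later receives and deletes the message.
for message in queue.receive_messages():
    print("processing", message.content)
    queue.delete_message(message)
```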
You plan to create an Azure Cosmos DB account that uses the SQL API. The account will contain data added by a web application. The web application will send data daily. You need to recommend a notification solution that meets the following requirements:
Sends email notification when data is received.
Minimizes compute cost.
What should you include in the recommendation?
Correct
The Azure Cosmos DB change feed enables efficient processing of large datasets with a high volume of writes. Change feed also offers an alternative to querying an entire dataset to identify what has changed.
Azure Cosmos DB is well-suited for IoT, gaming, retail, and operational logging applications. A common design pattern in these applications is to use changes to the data to trigger additional actions. Examples of additional actions include:
· Triggering a notification or a call to an API, when an item is inserted or updated.
· Real-time stream processing for IoT or real-time analytics processing on operational data.
· Data movement such as synchronizing with a cache, a search engine, a data warehouse, or cold storage.
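One low-cost way to act on the change feed is an Azure Function with a Cosmos DB trigger. The sketch below uses the Python v1 programming model (the cosmosDBTrigger binding itself is declared in function.json, which is not shown); how the email is actually sent (for example through a SendGrid output binding or a Logic App) is left as a separate, assumed step. Database, container, and binding names are assumptions.

```python
# Sketch: Azure Function (Python v1 model) triggered by the Cosmos DB change feed.
# The cosmosDBTrigger binding (database, container, lease settings) lives in function.json.
import logging
import azure.functions as func

def main(documents: func.DocumentList) -> None:
    if documents:
        logging.info("Change feed delivered %d new or updated items", len(documents))
        for doc in documents:
            # Hand off to your chosen email mechanism here, e.g. a SendGrid
            # output binding or a Logic App HTTP call (assumed, not shown).
            logging.info("New item received: %s", doc.get("id"))
```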
You have deployed several Azure Windows virtual machines for your customer. The Azure virtual machines run a custom line-of-business web application. You plan to use a third-party solution to parse the virtual machines' event logs stored in an Azure storage account.
You need to recommend a solution to save the event logs from the virtual machines to the Azure Storage account. The solution must minimize costs and complexity.
What should you include in the recommendation?
Correct
The Azure Diagnostics VM extension enables you to collect monitoring data, such as performance counters and event logs, from your Windows VM. You can granularly specify what data you want to collect and where you want the data to go, such as an Azure Storage account or an Azure Event Hub.
You are planning an Azure solution that will host production databases for a high-performance application. The solution will include the following components:
Two virtual machines that will run Microsoft SQL Server 2016. The virtual machines will be deployed to different data centers in the same Azure region, and will be part of an Always On availability group.
SQL Server data that will be backed up by using the Automated Backup feature of the SQL Server IaaS Agent Extension (SQLIaaSExtension)
You identify the storage priorities for various data types as shown below.
Data Type             Storage Priority
Operating System      Speed and availability
Databases and logs    Speed and availability
Backups               Low cost
Which storage type should you recommend for backups?
Correct
Automated Backup v2 automatically configures Managed Backup to Microsoft Azure for all existing and new databases on an Azure VM running SQL Server 2016/2017 Standard, Enterprise, or Developer editions. This enables you to configure regular database backups that utilize durable Azure blob storage.
Your customer has deployed several Linux and Windows virtual machines in Azure. On-premises connectivity to your customer's network has been enabled by using Azure ExpressRoute. You need to recommend a solution to analyze traffic attempting to access the internet from the virtual machines.
Which solution should you recommend?
Correct
Traffic Analytics is a cloud-based solution that provides visibility into user and application activity in cloud networks. Traffic analytics analyzes Network Watcher network security group (NSG) flow logs to provide insights into traffic flow in your Azure cloud. With traffic analytics, you can:
· Visualize network activity across your Azure subscriptions and identify hot spots.
· Identify security threats to, and secure your network, with information such as open-ports, applications attempting internet access, and virtual machines (VM) connecting to rogue networks.
· Understand traffic flow patterns across Azure regions and the internet to optimize your network deployment for performance and capacity.
· Pinpoint network misconfigurations leading to failed connections in your network.
Your customer has deployed several Linux and Windows virtual machines in Azure. On-premises connectivity to your customer's network has been enabled by using Azure ExpressRoute. You need to recommend a solution to visualize the VMs with their dependencies on other computers and external processes, with minimal configuration changes.
Which solution should you recommend?
Correct
Service Map automatically discovers application components on Windows and Linux systems and maps the communication between services. With Service Map, you can view your servers in the way that you think of them: as interconnected systems that deliver critical services. Service Map shows connections between servers, processes, inbound and outbound connection latency, and ports across any TCP-connected architecture, with no configuration required other than the installation of an agent.
You have several on-premises workloads running business-critical applications. You plan to use Azure Site Recovery to protect the on-premises physical server workloads. The on-premises workloads are independent of each other and stateless.
You need to recommend a failover strategy to ensure that if the on-premises data center fails, the workloads must be available in Azure as quickly as possible.
Which failover strategy should you include in the recommendation?
Correct
"As quickly as possible" means the solution should have a low RTO. Last processed – This option fails over VMs to the latest recovery point processed by Site Recovery. To see the latest recovery point for a specific VM, check Latest Recovery Points in the VM settings. This option provides a low RTO (Recovery Time Objective), because no time is spent processing unprocessed data.
You are migrating your company's legal data to Microsoft Azure. Your company's legal team wants to keep the data for seven years for compliance purposes. It is unlikely that the data will be used in the future. You need to move the data to Azure. The solution must minimize costs.
Where should you store the data?
Correct
The archive access tier has the lowest storage cost. But it has higher data retrieval costs compared to the hot and cool tiers. Data must remain in the archive tier for at least 180 days or be subject to an early deletion charge.
Example usage scenarios for the archive access tier include:
Long-term backup, secondary backup, and archival datasets
Original (raw) data that must be preserved, even after it has been processed into final usable form.
Compliance and archival data that needs to be stored for a long time and is hardly ever accessed.
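For illustration, the sketch below uploads a compliance file and then moves the blob to the archive tier with azure-storage-blob; the connection string, container, blob, and file names are placeholders.

```python
# Sketch: upload legal data and set its access tier to Archive for low-cost retention.
# Assumes azure-storage-blob; the connection string and names are placeholders.
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<storage-connection-string>")
blob = service.get_blob_client(container="legal-archive", blob="contracts-2014.zip")

with open("contracts-2014.zip", "rb") as data:
    blob.upload_blob(data, overwrite=True)

# Archive tier: lowest storage cost, higher retrieval cost, 180-day minimum retention.
blob.set_standard_blob_tier("Archive")
```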
Your company has 300 virtual machines hosted in a VMware environment. The virtual machines vary in size and have various utilization levels.
You plan to move all the virtual machines to Azure.
You need to recommend how many and what size Azure virtual machines will be required to move the current workloads to Azure. The solution must minimize administrative effort.
What should you use to make the recommendation?
Correct
The Azure Migrate: Server Assessment tool discovers and assesses on-premises VMware VMs, Hyper-V VMs, and physical servers for migration to Azure.
Here’s what the tool does:
Azure readiness: Assesses whether on-premises machines are ready for migration to Azure.
Azure sizing: Estimates the size of Azure VMs or number of Azure VMware nodes after migration.
Azure cost estimation: Estimates costs for running on-premises servers in Azure.
Dependency analysis: Identifies cross-server dependencies and optimization strategies for moving interdependent servers to Azure. Learn more about Server Assessment with dependency analysis.
Your customer is planning to migrate an on-premises data center to Microsoft Azure. You manage the on-premises networks and Azure virtual networks.
You need a secure private connection between the on-premises networks and the Azure virtual networks. The connection must offer a redundant pair of cross connections for high availability.
What should you recommend?
Correct
ExpressRoute lets you extend your on-premises networks into the Microsoft cloud over a private connection facilitated by a connectivity provider. With ExpressRoute, you can establish connections to Microsoft cloud services, such as Microsoft Azure and Office 365.
Connectivity can be from any-to-any (IP VPN) network, a point-to-point Ethernet network, or a virtual cross-connection through a connectivity provider at a co-location facility. ExpressRoute connections do not go over the public Internet. This allows ExpressRoute connections to offer more reliability, faster speeds, consistent latencies, and higher security than typical connections over the Internet.
Your company is planning to migrate several line-of-business applications to Microsoft Azure. Your company's main on-premises data center is in New York, USA; however, there are many regional data centers across various countries. Your company has users across the globe. You are currently designing the Microsoft Azure environment for your company. You need to find out the network performance of cloud deployments from various branch offices and data centers.
Which solution should you consider in your design that will help you in finding network performance?
Correct
Performance Monitor: You can monitor network connectivity across cloud deployments and on-premises locations, multiple data centers, and branch offices and mission-critical multitier applications or microservices. With Performance Monitor, you can detect network issues before users complain.
Your customer is planning to migrate an on-premises data center to Microsoft Azure. Your customer wants to ensure that employees can use their current credentials to access both on-premises-hosted applications and cloud-hosted applications. Your security administrator must be able to apply account lockout and account expiration policies to comply with governance policies.
Which authentication should you consider in your design?
Correct
Pass-through Authentication enforces the on-premises account policy at the time of sign-in. For example, access is denied when an on-premises user's account state is disabled, locked out, or their password expires, or the logon attempt falls outside the hours when the user is allowed to sign in.
You are designing an application named myapplication1 for your customer named customer1. Your customer is in the supply-chain business and wants to invite some of its suppliers to access myapplication1. Your business analyst noticed that some of the suppliers do not have social accounts or Microsoft accounts. Your security administrator does not want to create identities in the customer1 Azure AD tenant for suppliers who do not have social accounts or Microsoft accounts.
Which solution should you consider in your design so that suppliers can access myapplication1 without requiring much effort from the customer1 helpdesk?
Correct
The Email one-time passcode feature authenticates B2B guest users when they can’t be authenticated through other means like Azure AD, a Microsoft account (MSA), or Google federation. With one-time passcode authentication, there’s no need to create a Microsoft account. When the guest user redeems an invitation or accesses a shared resource, they can request a temporary code, which is sent to their email address. Then they enter this code to continue signing in.
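As a rough illustration of how an administrator at customer1 might invite such a supplier, the sketch below calls the Microsoft Graph invitations endpoint from Python using the requests library; the supplier address, redirect URL, and access token placeholder are assumptions for this example. When the guest has no Azure AD, Microsoft, or Google identity, redeeming the invitation falls back to the email one-time passcode flow described above.

    import requests

    GRAPH_TOKEN = "<access-token-with-User.Invite.All>"  # hypothetical Microsoft Graph token

    invitation = {
        "invitedUserEmailAddress": "supplier@example.com",          # hypothetical supplier address
        "inviteRedirectUrl": "https://myapplication1.example.com",  # hypothetical application URL
        "sendInvitationMessage": True,
    }

    response = requests.post(
        "https://graph.microsoft.com/v1.0/invitations",
        headers={"Authorization": f"Bearer {GRAPH_TOKEN}"},
        json=invitation,
        timeout=30,
    )
    response.raise_for_status()

    # The supplier redeems the invitation through this link; with no other identity,
    # they receive a one-time passcode at their email address to sign in.
    print(response.json()["inviteRedeemUrl"])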
You have several on-premises workloads running business-critical applications. You have implemented Azure Site Recovery to protect the on-premises workloads. Your key business users are concerned about the implementation. Your manager asked you to validate the business continuity and disaster recovery (BCDR) approach without impacting the application or causing data loss.
What type of failover should you use to validate your BCDR strategy?
Correct
Test failover is used to run a drill that validates your BCDR strategy, without any data loss or downtime.
Question 32 of 65
32. Question
Your customer has several on-premises workloads running business-critical applications in Hyper-V virtual machines (VMs) in a remote data center with poor network connectivity. As part of revised regulations, your customer wants to retain backups of the virtual machines for seven years.
You need to recommend a solution to back up the virtual machines to Azure Backup without impacting network bandwidth.
Which solution should you consider?
Correct
Initial full backups to Azure typically transfer large amounts of data online and require more network bandwidth when compared to subsequent backups that transfer only incremental changes. Remote offices or datacenters in certain geographies don’t always have sufficient network bandwidth. For this reason, these initial backups take several days. During this time, the backups continuously use the same network that was provisioned for applications running in the on-premises datacenter.
Azure Backup supports offline backup, which transfers initial backup data offline, without the use of network bandwidth. It provides a mechanism to copy backup data onto physical storage devices. The devices are then shipped to a nearby Azure datacenter and uploaded onto a Recovery Services vault. This process ensures robust transfer of backup data without using any network bandwidth.
You are designing an application for your customer. Your customer wants at least 99.9% availability for the application, and it should be tolerant of datacenter failures. The application is a typical n-tier architecture application with web, app, and database tiers. The app tier processes a large number of files, and the transaction volume within the application is high. You must keep network latency as low as possible.
Which Azure virtual machine (VM) deployment model should you include in your recommendation?
Correct
Availability zones (AZs) are unique physical locations that span datacenters within an Azure region. Each AZ accesses one or more datacenters that have independent power, cooling, and networking, and each AZ-enabled Azure region has a minimum of three separate AZs. The physical separation of AZs within a region protects deployed VMs from datacenter failure.
If app latency is a primary concern, you should colocate services in a single datacenter by using proximity placement groups (PPGs) with availability zones (AZs) and availability sets (ASs).
You are designing an application named Application1. Application1 will be hosted on two Azure virtual machines named VirtualMachine1 and VirtualMachine2. Application1 is a stateless front-end application.
You plan to load balance connections to VirtualMachine1 and VirtualMachine2 from the Internet by using one Azure load balancer. You need to recommend the minimum number of public IP addresses required.
How many public IP addresses should you recommend for the complete solution?
Correct
A single public IP address on the Azure load balancer is enough to receive traffic coming from the internet. The load balancer can then route traffic to VirtualMachine1 and VirtualMachine2 by using the private IP addresses of the respective virtual machines.
Question 35 of 65
35. Question
You are designing an application named Application1. Application1 will be hosted on two Azure virtual machines named VirtualMachine1 and VirtualMachine2. Application1 is a stateless front-end application.
You plan to load balance connections to VirtualMachine1 and VirtualMachine2 from the Internet by using one Azure load balancer. You need to recommend the minimum number of public IP addresses required.
How many public IP addresses should you recommend for VirtualMachine1 and VirtualMachine2?
Correct
A single public IP address on the Azure load balancer is enough to receive traffic coming from the internet. The load balancer can then route traffic to VirtualMachine1 and VirtualMachine2 by using the private IP addresses of the respective virtual machines, so the VMs themselves need no public IP addresses.
Question 36 of 65
36. Question
You are designing an application named application1 that processes confidential information. You plan to use Azure SQL Database as the backend for application1. There are a couple of applications hosted in your on-premises datacenter that perform CRUD operations on the application1 database. Your security administrator is concerned that the connectivity between your on-premises applications and the application1 database will go over the internet.
You need to recommend a solution to address your security administrator's concern. Which solution should you recommend?
Correct
Azure Private Endpoint is a network interface that connects you privately and securely to a service powered by Azure Private Link. Private Endpoint uses a private IP address from your VNet, effectively bringing the service into your VNet. The service could be an Azure service such as Azure Storage, Azure Cosmos DB, SQL, etc.
Private endpoint enables connectivity between the consumers from the same VNet, regionally peered VNets, globally peered VNets and on premises using VPN or Express Route and services powered by Private Link.
You have an Azure App Service Web App that includes Azure Blob storage and an Azure SQL Database instance. The application is instrumented by using the Application Insights SDK. You need to design a monitoring solution for the web app.
Which Azure monitoring services should you use to visualize the relationships between application components?
Correct
Application Map helps you spot performance bottlenecks or failure hotspots across all components of your distributed application. You can see the full application topology across multiple levels of related application components. Components could be different Application Insights resources, or different roles in a single resource. The app map finds components by following HTTP dependency calls made between servers with the Application Insights SDK installed.
You are designing a web application for your customer. Your customer wants to decouple the application components using asynchronous communication. Per the business analyst, the order of messages is important because it represents the flow of data.
Which Azure service do you include in your design?
Correct
You should consider using Service Bus queues when:
· Your solution must be able to receive messages without having to poll the queue. With Service Bus, this can be achieved through the use of the long-polling receive operation using the TCP-based protocols that Service Bus supports.
· Your solution requires the queue to provide a guaranteed first-in-first-out (FIFO) ordered delivery.
· Your solution must be able to support automatic duplicate detection.
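The sketch below is a minimal example (the connection string and queue name are assumptions, and the queue is assumed to be session-enabled) of how Service Bus sessions provide the guaranteed FIFO ordering described above: messages that share a session_id are delivered in the order they were sent.

    from azure.servicebus import ServiceBusClient, ServiceBusMessage

    CONN_STR = "<service-bus-connection-string>"  # hypothetical connection string
    QUEUE = "orders"                              # hypothetical session-enabled queue

    with ServiceBusClient.from_connection_string(CONN_STR) as client:
        # Send three related messages; the shared session_id keeps them in one FIFO stream.
        with client.get_queue_sender(QUEUE) as sender:
            for step in ("created", "paid", "shipped"):
                sender.send_messages(ServiceBusMessage(step, session_id="order-42"))

        # Receive from that session; messages arrive in the order they were sent.
        with client.get_queue_receiver(QUEUE, session_id="order-42") as receiver:
            for message in receiver.receive_messages(max_message_count=3, max_wait_time=5):
                print(str(message))
                receiver.complete_message(message)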
The application is expected to process NoSQL data and leverage Cosmos DB for the back-end.
To meet compliance requirements of your legal team, you must retain database backups for 1 year.
You need to recommend a solution that requires the least administrative effort. What should you include in your design to meet the requirements of this solution?
Correct
Cosmos DB does not include any native abilities to manage long-term retention. Microsoft recommends the use of Azure Data Factory to manage backups.
Your customer is setting up a hybrid enterprise data warehouse within Azure. You are asked to design a solution to copy data from on-premises to an Azure SQL Database every week.
Which Azure service should you recommend to automate the data copy?
Correct
Azure Data Factory is built for this purpose. It helps with the automated movement of data.
Azure Data Factory is a hybrid data integration service. It helps with extract transform load (ETL) and extract load transform (ELT) processes. Using a code-free GUI to visualize and build workflows, Data Factory helps manage the movement and connectivity of data between data platforms.
You have an Azure App Service Web App that includes Azure Blob storage and an Azure SQL Database instance. The application is instrumented by using the Application Insights SDK. You need to design a monitoring solution for the web app.
Which Azure monitoring services should you use to analyze how many users return to the application and how often they select a particular dropdown value?
Correct
Custom events and metrics in Application Insights are telemetry that you write yourself in the client or server code to track business events, such as items sold or games won.
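A minimal sketch of such a custom event, using the legacy applicationinsights Python package (the instrumentation key, event name, and property are assumptions; newer applications typically use the Azure Monitor OpenTelemetry distro instead):

    from applicationinsights import TelemetryClient

    tc = TelemetryClient("<instrumentation-key>")  # hypothetical instrumentation key

    # Record which dropdown value a user selected; these events can then be analyzed
    # in Application Insights alongside the built-in user and session metrics.
    tc.track_event("DropdownSelected", {"value": "Option A"})
    tc.flush()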
You are planning to migrate your on-premises data center to Azure. You have several servers that run Windows Server 2012 R2 and host Microsoft SQL Server 2012 instances. The stored procedures in the SQL databases are implemented by using CLR. You plan to move all the data from SQL Server to Azure.
You need to recommend an Azure service to host the databases. The solution must meet the following requirements:
Minimize management overhead for the migrated databases.
Minimize the number of database changes required to complete the migration.
Ensure that users can authenticate by using their Azure Active Directory credentials.
What should you include in the recommendation?
Correct
Azure SQL Managed Instance is the intelligent, scalable cloud database service that combines the broadest SQL Server database engine compatibility with all the benefits of a fully managed and evergreen platform as a service. SQL Managed Instance has near 100% compatibility with the latest SQL Server (Enterprise Edition) database engine, providing a native virtual network (VNet) implementation that addresses common security concerns, and a business model favorable for existing SQL Server customers. SQL Managed Instance allows existing SQL Server customers to lift and shift their on-premises applications to the cloud with minimal application and database changes. At the same time, SQL Managed Instance preserves all PaaS capabilities (automatic patching and version updates, automated backups, high availability) that drastically reduce management overhead and TCO.
You have an Azure Storage v2 account named storage1. You plan to archive data to storage1.
You need to ensure that the archived data cannot be deleted for five years. The solution must prevent administrators from deleting the data.
Solution: You create an Azure Blob storage container, and you configure a legal hold access policy.
Does this meet the goal?
Correct
Time-based retention policy support: Users can set policies to store data for a specified interval. When a time-based retention policy is set, blobs can be created and read, but not modified or deleted. After the retention period has expired, blobs can be deleted but not overwritten.
Legal hold policy support: If the retention interval is not known, users can set legal holds to store immutable data until the legal hold is cleared. When a legal hold policy is set, blobs can be created and read, but not modified or deleted. Each legal hold is associated with a user-defined alphanumeric tag (such as a case ID, event name, etc.) that is used as an identifier string.
In this scenario, we know the retention period i.e., 5 years. So, we need to set time-based retention policy.
You have an on-premises Hyper-V cluster. The cluster contains Hyper-V hosts that run Windows Server 2016 Datacenter. The hosts are licensed under a Microsoft Enterprise Agreement that has Software Assurance.
The Hyper-V cluster hosts 3 virtual machines that run Windows Server 2012 R2. Each virtual machine runs a different workload. The workloads have predictable consumption patterns.
You plan to replace the virtual machines with Azure virtual machines that run Windows Server 2016. The virtual machines will be sized according to the consumption pattern of each workload.
You need to recommend a solution to minimize the compute costs of the Azure virtual machines.
Which two recommendations should you include in the solution? Each correct answer presents part of the solution.
Correct
Use Azure Reserved Virtual Machine Instances and Azure Hybrid Benefit to reduce compute costs.
Question 48 of 65
48. Question
You have an on-premises Active Directory forest and an Azure Active Directory (Azure AD) tenant. All Azure AD users are assigned a Premium P1 license. You deploy Azure AD Connect.
Which two features are available in this environment that can reduce operational overhead for your company’s help desk?
Correct
P1 lets your hybrid users access both on-premises and cloud resources. It also supports advanced administration, such as dynamic groups, self-service group management, Microsoft Identity Manager (an on-premises identity and access management suite) and cloud write-back capabilities, which allow self-service password reset for your on-premises users.
You have an Azure Storage account named mystorageaccount, which contains several files of 1GB named File1, File2, and File3… FileN. The files are set to use the archive access tier.
You need to ensure that File1 is accessible immediately when a retrieval request is initiated.
Solution: You set Access tier to Cool for File1
Does this meet the goal?
Correct
You can access files in Cool tier immediately.
Question 50 of 65
50. Question
You have an Azure Storage account named mystorageaccount, which contains several files of 1GB named File1, File2, and File3… FileN. The files are set to use the archive access tier.
You need to ensure that File1 is accessible immediately when a retrieval request is initiated.
Solution: For File1, you set Access tier to Hot
Does this meet the goal?
Correct
Files in Hot access tier are accessible immediately.
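As a rough sketch of what setting the access tier looks like programmatically (the account, container, and blob names are assumptions), the azure-storage-blob SDK exposes this as a single call. Note that when the blob is currently in the Archive tier, the tier change starts a rehydration before the blob becomes readable in the new tier.

    from azure.identity import DefaultAzureCredential
    from azure.storage.blob import BlobClient

    blob = BlobClient(
        account_url="https://mystorageaccount.blob.core.windows.net",  # hypothetical account URL
        container_name="files",                                        # hypothetical container
        blob_name="File1",
        credential=DefaultAzureCredential(),
    )

    # Move File1 to the Hot tier; rehydrate_priority can be "Standard" or "High".
    blob.set_standard_blob_tier("Hot", rehydrate_priority="High")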
Question 51 of 65
51. Question
You have an Azure Storage account named Storageaccount1 that contains several files of 1GB named File1, File2, and File3… FileN. The files are set to use the archive access tier.
You need to ensure that File1 is accessible immediately when a retrieval request is initiated.
Solution: You move File1 to a new storage account. For File1, you set Access tier to Archive.
Does this meet the goal?
Correct
Files in archive tier are not accessible immediately.
Question 52 of 65
52. Question
Your customer is planning to migrate its on-premises data center to Microsoft Azure. Your customer wants to make sure that company employees can use the same username and password that they use in the on-premises environment. Your security administrator wants to automate the detection of identity-based risks.
Which authentication method should you consider in your design?
Correct
Azure AD password hash synchronization is the simplest way to enable authentication for on-premises directory objects in Azure AD. Users can use the same username and password that they use on-premises without having to deploy any additional infrastructure. Some premium features of Azure AD, like Identity Protection and Azure AD Domain Services, require password hash synchronization, no matter which authentication method you choose.
Your customer is planning to migrate its on-premises data center to Microsoft Azure. You have the following requirements:
Your customer wants to ensure that employees can use their current credentials to access both on-premises hosted applications and cloud-hosted applications.
Your customer is concerned about recent cyber-attacks on various organizations and wants to ensure that user authentication remains available even if there is an outage in the on-premises environment.
Which authentication method should you consider in your design?
Correct
Pass-through Authentication and federation rely on on-premises infrastructure. For pass-through authentication, the on-premises footprint includes the server hardware and networking the Pass-through Authentication agents require. For federation, the on-premises footprint is even larger. It requires servers in your perimeter network to proxy authentication requests and the internal federation servers.
To avoid single points of failure, deploy redundant servers. Then authentication requests will always be serviced if any component fails. Both pass-through authentication and federation also rely on domain controllers to respond to authentication requests, which can also fail. Many of these components need maintenance to stay healthy. Outages are more likely when maintenance isn’t planned and implemented correctly. Avoid outages by using password hash synchronization because the Microsoft Azure AD cloud authentication service scales globally and is always available.
Your company has users who work remotely from laptops.
You plan to move some of the applications accessed by the remote users to Azure virtual machines. The users will access the applications in Azure by using a point-to-site VPN connection. You will use certificates generated from an on-premises-based Certification authority (CA).
You need to recommend which certificates are required for the deployment.
What should you include in the recommendation for trusted root certification authorities certificate store on each laptop?
Correct
Certificates are used by Azure to authenticate clients connecting to a VNet over a Point-to-Site VPN connection. Once you obtain a root certificate, you upload the public key information to Azure. The root certificate is then considered ‘trusted’ by Azure for connection over P2S to the virtual network. You also generate client certificates from the trusted root certificate, and then install them on each client computer. The client certificate is used to authenticate the client when it initiates a connection to the VNet.
Your company has users who work remotely from laptops.
You plan to move some of the applications accessed by the remote users to Azure virtual machines. The users will access the applications in Azure by using a point-to-site VPN connection. You will use certificates generated from an on-premises-based Certification authority (CA).
You need to recommend which certificates are required for the deployment.
What should you include in the recommendation for users personal store on each laptop?
Correct
Certificates are used by Azure to authenticate clients connecting to a VNet over a Point-to-Site VPN connection. Once you obtain a root certificate, you upload the public key information to Azure. The root certificate is then considered ‘trusted’ by Azure for connection over P2S to the virtual network. You also generate client certificates from the trusted root certificate, and then install them on each client computer. The client certificate is used to authenticate the client when it initiates a connection to the VNet.
Your company has users who work remotely from laptops.
You plan to move some of the applications accessed by the remote users to Azure virtual machines. The users will access the applications in Azure by using a point-to-site VPN connection. You will use certificates generated from an on-premises-based Certification authority (CA).
You need to recommend which certificates are required for the deployment.
What should you include in the recommendation for Azure VPN Gateway?
Correct
Certificates are used by Azure to authenticate clients connecting to a VNet over a Point-to-Site VPN connection. Once you obtain a root certificate, you upload the public key information to Azure. The root certificate is then considered ‘trusted’ by Azure for connection over P2S to the virtual network. You also generate client certificates from the trusted root certificate, and then install them on each client computer. The client certificate is used to authenticate the client when it initiates a connection to the VNet.
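To make the root/client split concrete, here is a minimal sketch using the Python cryptography package with entirely hypothetical names (in the scenario the certificates would come from the on-premises CA instead of being self-signed): it creates a root certificate and prints the Base64-encoded public certificate data, which is what you upload to the VPN gateway's point-to-site configuration, while client certificates issued from that root are what go into each user's personal store.

    import base64
    import datetime

    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    root_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "P2SRootCert")])  # hypothetical name

    root_cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)
        .public_key(root_key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.datetime.utcnow())
        .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
        .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
        .sign(root_key, hashes.SHA256())
    )

    # This Base64 string is the root certificate data pasted into the gateway's
    # point-to-site "Root certificates" configuration.
    print(base64.b64encode(root_cert.public_bytes(serialization.Encoding.DER)).decode())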
You are building an application that will run in a virtual machine (VM). The application will use Azure Managed Identity.
The application uses Azure Key Vault, Azure SQL Database, and Azure Cosmos DB.
You need to ensure the application can use secure credentials to access these services.
Which authentication method should you recommend for Azure Key Vault?
Correct
Managed identities for Azure resources is a feature of Azure Active Directory. Each of the Azure services that support managed identities for Azure resources are subject to their own timeline.
Windows virtual machine (VM) can use a system-assigned managed identity to access Azure Key Vault.
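A minimal sketch of that pattern (the vault URL and secret name are assumptions): with the azure-identity and azure-keyvault-secrets packages, code running on the VM authenticates with the system-assigned managed identity and reads a secret without any stored credentials.

    from azure.identity import ManagedIdentityCredential
    from azure.keyvault.secrets import SecretClient

    credential = ManagedIdentityCredential()  # uses the VM's system-assigned identity

    client = SecretClient(
        vault_url="https://myvault.vault.azure.net",  # hypothetical vault URL
        credential=credential,
    )

    secret = client.get_secret("sql-connection-string")  # hypothetical secret name
    print(secret.value)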
You are building an application that will run in a virtual machine (VM). The application will use Azure Managed Identity.
The application uses Azure Key Vault, Azure SQL Database, and Azure Cosmos DB.
You need to ensure the application can use secure credentials to access these services.
Which authentication method should you recommend for Cosmos DB?
Correct
Managed identities for Azure resources is a feature of Azure Active Directory. Each of the Azure services that support managed identities for Azure resources are subject to their own timeline.
Windows virtual machine (VM) can use a system-assigned managed identity to access Azure Cosmos DB
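A minimal sketch for Cosmos DB (the account URL, database, and container names are assumptions, and the identity also needs an appropriate Cosmos DB data-plane role assignment): the azure-cosmos SDK accepts the managed identity credential directly.

    from azure.identity import ManagedIdentityCredential
    from azure.cosmos import CosmosClient

    client = CosmosClient(
        url="https://mycosmosaccount.documents.azure.com:443/",  # hypothetical account URL
        credential=ManagedIdentityCredential(),
    )

    container = client.get_database_client("appdb").get_container_client("orders")  # hypothetical names
    for item in container.query_items(
        query="SELECT TOP 5 * FROM c", enable_cross_partition_query=True
    ):
        print(item["id"])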
You have an on-premises network to which you deploy a virtual appliance.
You plan to deploy several Azure virtual machines and connect the on-premises network to Azure by using a Site-to-Site connection.
All network traffic that will be directed from the Azure virtual machines to a specific subnet must flow through the virtual appliance.
You need to recommend solutions to manage network traffic.
Which two options should you recommend?
Correct
Forced tunneling in Azure is configured via virtual network user-defined routes. Redirecting traffic to an on-premises site is expressed as a Default Route to the Azure VPN gateway. This procedure uses user-defined routes (UDR) to create a routing table to add a default route, and then associate the routing table to your VNet subnet(s) to enable forced tunneling on those subnets.
ExpressRoute forced tunneling is not configured via this mechanism, but instead is enabled by advertising a default route via the ExpressRoute BGP peering sessions.
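A minimal sketch of such a user-defined route using the azure-mgmt-network package (the subscription, resource group, resource names, region, address prefix, and appliance IP are all assumptions): it creates a route table whose route sends traffic destined for the specific subnet to the virtual appliance, then associates the table with the subnet hosting the Azure VMs.

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.network import NetworkManagementClient

    SUBSCRIPTION_ID = "<subscription-id>"  # hypothetical subscription
    network = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

    route_table = network.route_tables.begin_create_or_update(
        "rg-network",        # hypothetical resource group
        "rt-via-appliance",  # hypothetical route table name
        {
            "location": "eastus",
            "routes": [
                {
                    "name": "to-protected-subnet",
                    "address_prefix": "10.1.2.0/24",    # destination subnet (assumed)
                    "next_hop_type": "VirtualAppliance",
                    "next_hop_ip_address": "10.0.0.4",  # appliance's private IP (assumed)
                }
            ],
        },
    ).result()

    # Associate the route table with the subnet that hosts the Azure VMs.
    subnet = network.subnets.get("rg-network", "vnet1", "workload-subnet")
    subnet.route_table = route_table
    network.subnets.begin_create_or_update("rg-network", "vnet1", "workload-subnet", subnet).result()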
You have an Azure Active Directory (Azure AD) tenant and Windows 10 devices.
You configure a conditional access policy as shown below.
What is the result of the policy?
Correct
Conditional Access policies at their simplest are if-then statements, if a user wants to access a resource, then they must complete an action. Example: A payroll manager wants to access the payroll application and is required to perform multi-factor authentication to access it.
In this scenario, the enable policy is set to Off in the image. So, this policy will have no impact.
You are building an application that will run in a virtual machine (VM). The application will use Azure Managed Identity.
The application uses Azure Key Vault, Azure SQL Database, and Azure Cosmos DB.
You need to ensure the application can use secure credentials to access these services.
Which authentication method should you recommend for Azure SQL Database?
Correct
Managed identities for Azure resources is a feature of Azure Active Directory. Each of the Azure services that support managed identities for Azure resources are subject to their own timeline.
Windows virtual machine (VM) can use a system-assigned managed identity to access Azure SQL Database.
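A minimal sketch for Azure SQL Database (the server and database names are assumptions, and the managed identity must already exist as a contained database user): the VM's managed identity obtains an Azure AD access token, which is passed to the ODBC driver as a pre-connect attribute.

    import struct

    import pyodbc
    from azure.identity import ManagedIdentityCredential

    token = ManagedIdentityCredential().get_token("https://database.windows.net/.default").token
    token_bytes = token.encode("utf-16-le")
    token_struct = struct.pack(f"<I{len(token_bytes)}s", len(token_bytes), token_bytes)

    SQL_COPT_SS_ACCESS_TOKEN = 1256  # ODBC driver pre-connect attribute for an AAD access token
    conn = pyodbc.connect(
        "Driver={ODBC Driver 18 for SQL Server};"
        "Server=tcp:myserver.database.windows.net,1433;"  # hypothetical server
        "Database=mydb;Encrypt=yes;",                     # hypothetical database
        attrs_before={SQL_COPT_SS_ACCESS_TOKEN: token_struct},
    )
    print(conn.execute("SELECT SUSER_SNAME()").fetchone())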
Case Study –
This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to complete each case. However, there may be additional case studies and sections on this exam. You must manage your time to ensure that you are able to complete all questions included on this exam in the time provided.
To answer the questions included in a case study, you will need to reference information that is provided in the case study. Case studies might contain exhibits and other resources that provide more information about the scenario that is described in the case study. Each question is independent of the other questions in this case study.
Overview –
PreparationLabs, Ltd, is a US-based financial services company that has a main office in New York and a branch office in San Francisco.
Existing Environment – Payment Processing System
PreparationLabs hosts a business-critical payment processing system in its New York data center. The system has three tiers: a front-end web app, a middle-tier web API, and a back-end data store implemented as a Microsoft SQL Server 2014 database. All servers run Windows Server 2012 R2.
The front-end and middle-tier components are hosted by using Microsoft Internet Information Services (IIS). The application code is written in C# and ASP.NET.
The middle-tier API uses the Entity Framework to communicate to the SQL Server database. Maintenance of the database is performed by using SQL Server Agent jobs.
The database is currently 2 TB and is not expected to grow beyond 3 TB.
The payment processing system has the following compliance-related requirements:
Encrypt data in transit and at rest. Only the front-end and middle-tier components must be able to access the encryption keys that protect the data store.
Keep backups of the data in two separate physical locations that are at least 200 miles apart and can be restored for up to seven years.
Support blocking inbound and outbound traffic based on the source IP address, the destination IP address, and the port number.
Collect Windows security logs from all the middle-tier servers and retain the logs for a period of seven years.
Inspect inbound and outbound traffic from the front-end tier by using highly available network appliances.
Only allow all access to all the tiers from the internal network of PreparationLabs.
Tape backups are configured by using an on-premises deployment of Microsoft System Center Data Protection Manager (DPM), and then shipped offsite for long term storage.
Existing Environment – Historical Transaction Query System
PreparationLabs recently migrated a business-critical workload to Azure. The workload contains a .NET web service for querying the historical transaction data residing in Azure Table Storage. The .NET web service is accessible from a client app that was developed in-house and runs on the client computers in the New York office.
The data in the table storage is 50 GB and is not expected to increase.
Existing Environment – Current Issues
The PreparationLabs IT team discovers poor performance of the historical transaction query system, as the queries frequently cause table scans.
Requirements – Planned Changes
PreparationLabs plans to implement the following changes:
Migrate the payment processing system to Azure.
Migrate the historical transaction data to Azure Cosmos DB to address the performance issues.
Requirements – Migration Requirements
PreparationLabs identifies the following general migration requirements:
Infrastructure services must remain available if a region or a data center fails. Failover must occur without any administrative intervention.
Whenever possible, Azure managed services must be used to minimize management overhead.
Whenever possible, costs must be minimized.
PreparationLabs identifies the following requirements for the payment processing system:
If a data center fails, ensure that the payment processing system remains available without any administrative intervention. The middle-tier and the web front end must continue to operate without any additional configurations.
Ensure that the number of compute nodes of the front-end and the middle tiers of the payment processing system can increase or decrease automatically based on CPU utilization.
Ensure that each tier of the payment processing system is subject to a Service Level Agreement (SLA) of 99.99 percent availability.
Minimize the effort required to modify the middle-tier API and the back-end tier of the payment processing system.
Payment processing system must be able to use grouping and joining tables on encrypted columns.
Generate alerts when unauthorized login attempts occur on the middle-tier virtual machines.
Ensure that the payment processing system preserves its current compliance status.
Host the middle tier of the payment processing system on a virtual machine.
PreparationLabs identifies the following requirements for the historical transaction query system:
Minimize the use of on-premises infrastructure services.
Minimize the effort required to modify the .NET web service querying Azure Cosmos DB.
Minimize the frequency of table scans.
If a region fails, ensure that the historical transaction query system remains available without any administrative intervention.
Requirements – Information Security Requirements
The IT security team wants to ensure that identity management is performed by using Active Directory. Password hashes must be stored on-premises only.
Access to all business-critical systems must rely on Active Directory credentials. Any suspicious authentication attempts must trigger a multi-factor authentication prompt automatically.
Question –
You need to recommend a solution for the data store of the historical transaction query system.
What should you include in the recommendation for Sizing requirements?
Correct
Scenario: The data in the table storage is 50 GB and is not expected to increase.
Migrate the historical transaction data to Azure Cosmos DB to address the performance issues.
Minimize the effort required to modify the .NET web service querying Azure Cosmos DB.
If we split the data into multiple tables, it requires changes to the .NET web service. We can move the data as-is to reduce the code changes required.
Question 63 of 65
63. Question
Case Study –
This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to complete each case. However, there may be additional case studies and sections on this exam. You must manage your time to ensure that you are able to complete all questions included on this exam in the time provided.
To answer the questions included in a case study, you will need to reference information that is provided in the case study. Case studies might contain exhibits and other resources that provide more information about the scenario that is described in the case study. Each question is independent of the other questions in this case study.
Overview –
PreparationLabs, Ltd, is a US-based financial services company that has a main office in New York and a branch office in San Francisco.
Existing Environment – Payment Processing System
PreparationLabs hosts a business-critical payment processing system in its New York data center. The system has three tiers: a front-end web app, a middle-tier web API, and a back-end data store implemented as a Microsoft SQL Server 2014 database. All servers run Windows Server 2012 R2.
The front-end and middle-tier components are hosted by using Microsoft Internet Information Services (IIS). The application code is written in C# and ASP.NET.
The middle-tier API uses the Entity Framework to communicate to the SQL Server database. Maintenance of the database is performed by using SQL Server
Agent jobs.
The database is currently 2 TB and is not expected to grow beyond 3 TB.
The payment processing system has the following compliance-related requirements:
Encrypt data in transit and at rest. Only the front-end and middle-tier components must be able to access the encryption keys that protect the data store.
Keep backups of the data in two separate physical locations that are at least 200 miles apart and can be restored for up to seven years.
Support blocking inbound and outbound traffic based on the source IP address, the destination IP address, and the port number.
Collect Windows security logs from all the middle-tier servers and retain the logs for a period of seven years.
Inspect inbound and outbound traffic from the front-end tier by using highly available network appliances.
Only allow all access to all the tiers from the internal network of PreparationLabs.
Tape backups are configured by using an on-premises deployment of Microsoft System Center Data Protection Manager (DPM), and then shipped offsite for long term storage.
Existing Environment – Historical Transaction Query System
PreparationLabs recently migrated a business-critical workload to Azure. The workload contains a .NET web service for querying the historical transaction data residing in Azure Table Storage. The .NET web service is accessible from a client app that was developed in-house and runs on the client computers in the New York office.
The data in the table storage is 50 GB and is not expected to increase.
Existing Environment – Current Issues
The PreparationLabs IT team discovers poor performance of the historical transaction query system, as the queries frequently cause table scans.
Requirements – Planned Changes
PreparationLabs plans to implement the following changes:
Migrate the payment processing system to Azure.
Migrate the historical transaction data to Azure Cosmos DB to address the performance issues.
Requirements – Migration Requirements
PreparationLabs identifies the following general migration requirements:
Infrastructure services must remain available if a region or a data center fails. Failover must occur without any administrative intervention.
Whenever possible, Azure managed services must be used to minimize management overhead.
Whenever possible, costs must be minimized.
PreparationLabs identifies the following requirements for the payment processing system:
If a data center fails, ensure that the payment processing system remains available without any administrative intervention. The middle-tier and the web front end must continue to operate without any additional configurations.
Ensure that the number of compute nodes of the front-end and the middle tiers of the payment processing system can increase or decrease automatically based on CPU utilization.
Ensure that each tier of the payment processing system is subject to a Service Level Agreement (SLA) of 99.99 percent availability.
Minimize the effort required to modify the middle-tier API and the back-end tier of the payment processing system.
The payment processing system must be able to group and join tables on encrypted columns.
Generate alerts when unauthorized login attempts occur on the middle-tier virtual machines.
Ensure that the payment processing system preserves its current compliance status.
Host the middle tier of the payment processing system on a virtual machine.
PreparationLabs identifies the following requirements for the historical transaction query system:
Minimize the use of on-premises infrastructure services.
Minimize the effort required to modify the .NET web service querying Azure Cosmos DB.
Minimize the frequency of table scans.
If a region fails, ensure that the historical transaction query system remains available without any administrative intervention.
Requirements – Information Security Requirements
The IT security team wants to ensure that identity management is performed by using Active Directory. Password hashes must be stored on-premises only.
Access to all business-critical systems must rely on Active Directory credentials. Any suspicious authentication attempts must trigger a multi-factor authentication prompt automatically.
Question –
You need to recommend a solution for the data store of the historical transaction query system.
What should you include in the recommendation for resiliency requirements?
Correct
Scenario: If a region fails, ensure that the historical transaction query system remains available without any administrative intervention.
Replicating the Azure Cosmos DB account to a second region and enabling service-managed (automatic) failover allows the data store to survive a regional outage without administrative intervention.
Question 64 of 65
64. Question
Case Study –
This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to complete each case. However, there may be additional case studies and sections on this exam. You must manage your time to ensure that you are able to complete all questions included on this exam in the time provided.
To answer the questions included in a case study, you will need to reference information that is provided in the case study. Case studies might contain exhibits and other resources that provide more information about the scenario that is described in the case study. Each question is independent of the other questions in this case study.
Overview –
PreparationLabs, Ltd, is a US-based financial services company that has a main office in New York and a branch office in San Francisco.
Existing Environment – Payment Processing System
PreparationLabs hosts a business-critical payment processing system in its New York data center. The system has three tiers: a front-end web app, a middle-tier web API, and a back-end data store implemented as a Microsoft SQL Server 2014 database. All servers run Windows Server 2012 R2.
The front-end and middle-tier components are hosted by using Microsoft Internet Information Services (IIS). The application code is written in C# and ASP.NET.
The middle-tier API uses the Entity Framework to communicate with the SQL Server database.
Maintenance of the database is performed by using SQL Server Agent jobs.
The database is currently 2 TB and is not expected to grow beyond 3 TB.
The payment processing system has the following compliance-related requirements:
Encrypt data in transit and at rest. Only the front-end and middle-tier components must be able to access the encryption keys that protect the data store.
Keep backups of the data in two separate physical locations that are at least 200 miles apart and can be restored for up to seven years.
Support blocking inbound and outbound traffic based on the source IP address, the destination IP address, and the port number.
Collect Windows security logs from all the middle-tier servers and retain the logs for a period of seven years.
Inspect inbound and outbound traffic from the front-end tier by using highly available network appliances.
Allow access to all the tiers only from the internal network of PreparationLabs.
Tape backups are configured by using an on-premises deployment of Microsoft System Center Data Protection Manager (DPM), and then shipped offsite for long-term storage.
Existing Environment – Historical Transaction Query System
PreparationLabs recently migrated a business-critical workload to Azure. The workload contains a .NET web service for querying the historical transaction data residing in Azure Table Storage. The .NET web service is accessible from a client app that was developed in-house and runs on the client computers in the New York office.
The data in the table storage is 50 GB and is not expected to increase.
Existing Environment – Current Issues
The PreparationLabs IT team discovers poor performance of the historical transaction query system, as the queries frequently cause table scans.
Requirements – Planned Changes
PreparationLabs plans to implement the following changes:
Migrate the payment processing system to Azure.
Migrate the historical transaction data to Azure Cosmos DB to address the performance issues.
Requirements – Migration Requirements
PreparationLabs identifies the following general migration requirements:
Infrastructure services must remain available if a region or a data center fails. Failover must occur without any administrative intervention.
Whenever possible, Azure managed services must be used to minimize management overhead.
Whenever possible, costs must be minimized.
PreparationLabs identifies the following requirements for the payment processing system:
If a data center fails, ensure that the payment processing system remains available without any administrative intervention. The middle-tier and the web front end must continue to operate without any additional configurations.
Ensure that the number of compute nodes of the front-end and the middle tiers of the payment processing system can increase or decrease automatically based on CPU utilization.
Ensure that each tier of the payment processing system is subject to a Service Level Agreement (SLA) of 99.99 percent availability.
Minimize the effort required to modify the middle-tier API and the back-end tier of the payment processing system.
The payment processing system must be able to group and join tables on encrypted columns.
Generate alerts when unauthorized login attempts occur on the middle-tier virtual machines.
Ensure that the payment processing system preserves its current compliance status.
Host the middle tier of the payment processing system on a virtual machine.
PreparationLabs identifies the following requirements for the historical transaction query system:
Minimize the use of on-premises infrastructure services.
Minimize the effort required to modify the .NET web service querying Azure Cosmos DB.
Minimize the frequency of table scans.
If a region fails, ensure that the historical transaction query system remains available without any administrative intervention.
Requirements – Information Security Requirements
The IT security team wants to ensure that identity management is performed by using Active Directory. Password hashes must be stored on-premises only.
Access to all business-critical systems must rely on Active Directory credentials. Any suspicious authentication attempts must trigger a multi-factor authentication prompt automatically.
Question –
You need to recommend a solution for the users at PreparationLabs to authenticate to the cloud-based services and the Azure AD-integrated applications.
What should you include in the recommendation?
Correct
Scenario: The IT security team wants to ensure that identity management is performed by using Active Directory. Password hashes must be stored on-premises only. Access to all business-critical systems must rely on Active Directory credentials.
Azure Active Directory (Azure AD) Pass-through Authentication allows your users to sign in to both on-premises and cloud-based applications using the same passwords. Because password hashes must be stored on-premises only, password hash synchronization cannot be used; pass-through authentication validates passwords directly against the on-premises Active Directory, so no password hashes are stored in the cloud.
Question 65 of 65
65. Question
Case Study –
This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to complete each case. However, there may be additional case studies and sections on this exam. You must manage your time to ensure that you are able to complete all questions included on this exam in the time provided.
To answer the questions included in a case study, you will need to reference information that is provided in the case study. Case studies might contain exhibits and other resources that provide more information about the scenario that is described in the case study. Each question is independent of the other questions in this case study.
Overview –
PreparationLabs, Ltd, is a US-based financial services company that has a main office in New York and a branch office in San Francisco.
Existing Environment – Payment Processing System
PreparationLabs hosts a business-critical payment processing system in its New York data center. The system has three tiers: a front-end web app, a middle-tier web API, and a back-end data store implemented as a Microsoft SQL Server 2014 database. All servers run Windows Server 2012 R2.
The front-end and middle-tier components are hosted by using Microsoft Internet Information Services (IIS). The application code is written in C# and ASP.NET.
The middle-tier API uses the Entity Framework to communicate with the SQL Server database.
Maintenance of the database is performed by using SQL Server Agent jobs.
The database is currently 2 TB and is not expected to grow beyond 3 TB.
The payment processing system has the following compliance-related requirements:
Encrypt data in transit and at rest. Only the front-end and middle-tier components must be able to access the encryption keys that protect the data store.
Keep backups of the data in two separate physical locations that are at least 200 miles apart and can be restored for up to seven years.
Support blocking inbound and outbound traffic based on the source IP address, the destination IP address, and the port number.
Collect Windows security logs from all the middle-tier servers and retain the logs for a period of seven years.
Inspect inbound and outbound traffic from the front-end tier by using highly available network appliances.
Allow access to all the tiers only from the internal network of PreparationLabs.
Tape backups are configured by using an on-premises deployment of Microsoft System Center Data Protection Manager (DPM), and then shipped offsite for long-term storage.
Existing Environment – Historical Transaction Query System
PreparationLabs recently migrated a business-critical workload to Azure. The workload contains a .NET web service for querying the historical transaction data residing in Azure Table Storage. The .NET web service is accessible from a client app that was developed in-house and runs on the client computers in the New York office.
The data in the table storage is 50 GB and is not expected to increase.
Existing Environment – Current Issues
The PreparationLabs IT team discovers poor performance of the historical transaction query system, as the queries frequently cause table scans.
Requirements – Planned Changes
PreparationLabs plans to implement the following changes:
Migrate the payment processing system to Azure.
Migrate the historical transaction data to Azure Cosmos DB to address the performance issues.
Requirements – Migration Requirements
PreparationLabs identifies the following general migration requirements:
Infrastructure services must remain available if a region or a data center fails. Failover must occur without any administrative intervention.
Whenever possible, Azure managed services must be used to minimize management overhead.
Whenever possible, costs must be minimized.
PreparationLabs identifies the following requirements for the payment processing system:
If a data center fails, ensure that the payment processing system remains available without any administrative intervention. The middle-tier and the web front end must continue to operate without any additional configurations.
Ensure that the number of compute nodes of the front-end and the middle tiers of the payment processing system can increase or decrease automatically based on CPU utilization.
Ensure that each tier of the payment processing system is subject to a Service Level Agreement (SLA) of 99.99 percent availability.
Minimize the effort required to modify the middle-tier API and the back-end tier of the payment processing system.
The payment processing system must be able to group and join tables on encrypted columns.
Generate alerts when unauthorized login attempts occur on the middle-tier virtual machines.
Ensure that the payment processing system preserves its current compliance status.
Host the middle tier of the payment processing system on a virtual machine.
PreparationLabs identifies the following requirements for the historical transaction query system:
Minimize the use of on-premises infrastructure services.
Minimize the effort required to modify the .NET web service querying Azure Cosmos DB.
Minimize the frequency of table scans.
If a region fails, ensure that the historical transaction query system remains available without any administrative intervention.
Requirements – Information Security Requirements
The IT security team wants to ensure that identity management is performed by using Active Directory. Password hashes must be stored on-premises only.
Access to all business-critical systems must rely on Active Directory credentials. Any suspicious authentication attempts must trigger a multi-factor authentication prompt automatically.
Question –
You need to recommend a solution for configuring the Azure Multi-Factor Authentication (MFA) settings. What should you include in the recommendation?
Correct
Scenario: Any suspicious authentication attempts must trigger a multi-factor authentication prompt automatically.
You need to create a risk-based Conditional Access policy, which requires an Azure AD Premium P2 license.
There is no requirement to block malicious users; users must still be able to access the applications, but any suspicious sign-in must trigger an MFA prompt.
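As a rough illustration only (the display name and the all-users/all-applications scoping below are assumptions, not details from the case study), a sign-in-risk-based Conditional Access policy can be expressed as a policy object submitted to the Microsoft Graph conditional access endpoint. Evaluating sign-in risk relies on Azure AD Identity Protection, which requires the Premium P2 license.

POST https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies
{
  "displayName": "Require MFA for risky sign-ins",
  "state": "enabled",
  "conditions": {
    "users": { "includeUsers": ["All"] },
    "applications": { "includeApplications": ["All"] },
    "clientAppTypes": ["all"],
    "signInRiskLevels": ["medium", "high"]
  },
  "grantControls": {
    "operator": "OR",
    "builtInControls": ["mfa"]
  }
}

With a policy like this, a sign-in that Identity Protection classifies as medium or high risk is challenged for multi-factor authentication rather than blocked, which matches the requirement to allow access while prompting for MFA.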