Azure AI Engineer Associate Practice Test 6 (AI-102)
You can review your answers by clicking View Questions. Important note: open reference documentation links in a new tab (right-click and select Open in New Tab).
Question 1 of 51
1. Question
HOTSPOT –
You need to configure security for an Azure Machine Learning service used by groups of data scientists. The groups must have access to only their own experiments and must be able to grant permissions to the members of their team.
What should you do? To answer, select the appropriate options in the answer area.
Explanation: A subscription is a billing entity that can contain multiple resource groups and resources. It is too broad for the specific requirement of isolating experiments and managing permissions at the team level; subscriptions are not designed for fine-grained access control.
By selecting Workspace (F) and Owner (E), you ensure that each group of data scientists has isolated access to its experiments and the ability to manage permissions within the team.
Question 2 of 51
2. Question
You have several AI applications that use an Azure Kubernetes Service (AKS) cluster. The cluster supports a maximum of 32 nodes. You discover that, occasionally and unpredictably, the applications require more than 32 nodes.
You need to recommend a solution to handle the unpredictable application load.
Which scaling method should you recommend?
Question 3 of 51
3. Question
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You are developing an application that uses an Azure Kubernetes Service (AKS) cluster.
You are troubleshooting a node issue. You need to connect to an AKS node by using SSH.
Solution: You create a managed identity for AKS, and then you create an SSH connection.
Does this meet the goal?
This does not meet the goal. Creating a managed identity for AKS does not provide SSH access to a node; instead, load an SSH public key onto the node and then create the SSH connection.
Question 4 of 51
4. Question
You create an Azure Cognitive Services resource.
A data scientist needs to call the resource from Azure Logic Apps.
Which two values should you provide to the data scientist? Each correct answer presents part of the solution.
All requests to a search service need a read-only api-key that was generated specifically for your service. The api-key is the sole mechanism for authenticating access to your search service endpoint and must be included on every request.
References: https://docs.microsoft.com/en-us/azure/search/search-security-api-keys
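For illustration, a minimal Python sketch of key-based authentication (the service name, key, and API version below are placeholder values, not part of the question) shows the api-key header being sent on a request to an Azure Cognitive Search endpoint:

```python
import requests

# Illustrative placeholders only; use your own search service name and key.
SEARCH_SERVICE = "my-search-service"
API_KEY = "<query-or-admin-key>"

url = f"https://{SEARCH_SERVICE}.search.windows.net/indexes?api-version=2020-06-30"

# Every request must include the api-key header; it is the sole authentication mechanism.
response = requests.get(url, headers={"api-key": API_KEY})
response.raise_for_status()
print([index["name"] for index in response.json()["value"]])
```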
Question 5 of 51
5. Question
HOTSPOT –
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You plan to deploy an application that will perform image recognition. The application will store image data in two Azure Blob storage stores named Blob1 and Blob2.
You need to recommend a security solution that meets the following requirements:
- Access to Blob1 must be controlled by using a role.
- Access to Blob2 must be time-limited and constrained to specific operations.
What should you recommend using to control access to each blob store? To answer, select the appropriate options in the answer area.
Question 6 of 51
6. Question
You are developing a mobile application that will perform optical character recognition (OCR) from photos.
The application will annotate the photos by using metadata, store the photos in Azure Blob storage, and then score the photos by using an Azure Machine Learning model.
What should you use to process the data?
By using Azure services such as the Computer Vision API and Azure Functions, companies can eliminate the need to manage individual servers, while reducing costs and leveraging the expertise that Microsoft has already developed around processing images with Cognitive Services. This example scenario specifically addresses an image-processing use case. If you have different AI needs, consider the full suite of Cognitive Services.
Use cases include:
> Classifying images on a fashion website.
> Classifying telemetry data from screenshots of games.
This scenario covers the back-end components of a web or mobile application. Data flows through the scenario as follows:
The API layer is built using Azure Functions. These APIs enable the application to upload images and retrieve data from Cosmos DB. When an image is uploaded via an API call, it’s stored in Blob storage.
Adding new files to Blob storage triggers an Event Grid notification to be sent to an Azure Function.
Azure Functions sends a link to the newly uploaded file to the Computer Vision API to analyze.
Once the data has been returned from the Computer Vision API, Azure Functions makes an entry in Cosmos DB to persist the results of the analysis along with the image metadata.
References: https://docs.microsoft.com/en-us/azure/architecture/example-scenario/ai/intelligent-apps-image-processing
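As a rough sketch of the analysis step in this flow (the endpoint, key, and image URL below are hypothetical placeholders), a function could submit an uploaded image's URL to the Computer Vision Analyze API like this:

```python
import requests

# Illustrative placeholders; substitute your Computer Vision resource endpoint and key.
ENDPOINT = "https://<resource-name>.cognitiveservices.azure.com"
KEY = "<subscription-key>"

analyze_url = f"{ENDPOINT}/vision/v3.2/analyze"
headers = {"Ocp-Apim-Subscription-Key": KEY, "Content-Type": "application/json"}
params = {"visualFeatures": "Description,Tags"}
body = {"url": "https://example.com/photos/sample.jpg"}  # hypothetical image URL

# Ask the Computer Vision API to describe and tag the image.
response = requests.post(analyze_url, headers=headers, params=params, json=body)
response.raise_for_status()
analysis = response.json()

captions = analysis.get("description", {}).get("captions", [])
if captions:
    print("Caption:", captions[0]["text"])
print("Tags:", [tag["name"] for tag in analysis.get("tags", [])])
```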
Question 7 of 51
7. Question
Your company has 1,000 AI developers who are responsible for provisioning environments in Azure.
You need to control the type, size, and location of the resources that the developers can provision.
What should you use?
The most suitable service to control the type, size, and location of resources provisioned by your AI developers in Azure is Azure Policy.
Here’s why Azure Policy is the best fit for this scenario:
Resource management: Azure Policy allows you to define policies that enforce specific rules on resources deployed in Azure. These policies can restrict the type (e.g., virtual machine types), size (e.g., SKU), and location (e.g., specific regions) of resources that developers can provision.
Centralized control: With Azure Policy, you can define policies centrally and apply them to subscriptions, resource groups, or specific resources. This ensures consistent enforcement across your entire Azure environment.
Granular control: You can create different policies for different developer groups or projects, allowing for granular control over resource provisioning.
Other options and why they are not the best fit:
Azure Key Vault: While useful for storing secrets securely, it doesn’t directly control resource provisioning.
Azure Security Center: Primarily focuses on security posture and threat detection, not resource provisioning control.
Azure Managed Identities: Provide identities for Azure resources without human intervention, but don’t enforce resource provisioning policies.
Azure Service Principals: Similar to managed identities, they provide authentication for Azure resources but lack policy enforcement capabilities.
By implementing Azure Policy, you can ensure that your AI developers have the resources they need while maintaining control over costs and security.
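For illustration only, a minimal policy rule that denies deployments outside approved regions or VM sizes might look like the following; it is shown as a Python dictionary mirroring the JSON policyRule schema, and the regions and SKUs are hypothetical examples:

```python
# Sketch of an Azure Policy rule (policyRule section) restricting location and VM size.
# The regions, SKUs, and alias usage below are illustrative assumptions; adjust to your standards.
policy_rule = {
    "if": {
        "anyOf": [
            # Deny anything deployed outside the approved regions.
            {"not": {"field": "location", "in": ["eastus", "westeurope"]}},
            # Deny virtual machines that are not one of the approved sizes.
            {
                "allOf": [
                    {"field": "type", "equals": "Microsoft.Compute/virtualMachines"},
                    {"not": {"field": "Microsoft.Compute/virtualMachines/sku.name",
                             "in": ["Standard_D2s_v3", "Standard_D4s_v3"]}},
                ]
            },
        ]
    },
    "then": {"effect": "deny"},
}
```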
Question 8 of 51
8. Question
You have a solution that runs on a five-node Azure Kubernetes Service (AKS) cluster. The cluster uses an N-series virtual machine. An Azure Batch AI process runs once a day and rarely on demand.
You need to recommend a solution to maintain the cluster configuration when the cluster is not in use. The solution must not incur any compute costs.
What should you include in the recommendation?
The correct answer is A. Downscale the cluster to zero nodes.
Explanation:
Downscaling to zero nodes: This option allows you to completely stop the cluster, eliminating all compute costs. When you need to run the Azure Batch AI process, you can simply scale the cluster back up to the desired number of nodes. This provides a cost-effective solution for infrequent, scheduled workloads.
Incorrect options:
Downscaling to one node: While this option can reduce costs compared to running five nodes, it still incurs some compute costs. If the workload is very infrequent, it might be more cost-effective to completely stop the cluster.
Deleting the cluster: This option is not ideal because it requires recreating the cluster each time you need to run the Azure Batch AI process. This can be time-consuming and may incur additional costs, especially if you have complex cluster configurations.
Question 9 of 51
9. Question
You plan to deploy an AI solution that tracks the behavior of 10 custom mobile apps. Each mobile app has several thousand users.
You need to recommend a solution for real-time data ingestion for the data originating from the mobile app users.
Which Microsoft Azure service should you include in the recommendation?
Azure Event Hubs is a big data streaming platform and event ingestion service. It can receive and process millions of events per second. Data sent to an event hub can be transformed and stored by using any real-time analytics provider or batching/storage adapters.
The following are some of the scenarios where you can use Event Hubs:
Anomaly detection (fraud/outliers)
Application logging
Analytics pipelines, such as clickstreams
Live dashboarding
Archiving data
Transaction processing
User telemetry processing
Device telemetry streaming
References: https://docs.microsoft.com/en-in/azure/event-hubs/event-hubs-about
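As a minimal sketch of the ingestion side (the connection string, hub name, and event payloads below are hypothetical), a mobile back end could publish telemetry to Event Hubs with the azure-eventhub SDK:

```python
# pip install azure-eventhub
from azure.eventhub import EventHubProducerClient, EventData

# Hypothetical connection string and hub name for illustration.
CONNECTION_STR = "Endpoint=sb://<namespace>.servicebus.windows.net/;SharedAccessKeyName=<policy>;SharedAccessKey=<key>"
EVENTHUB_NAME = "mobile-app-telemetry"

producer = EventHubProducerClient.from_connection_string(
    CONNECTION_STR, eventhub_name=EVENTHUB_NAME
)

# Batch a few telemetry events and send them to the hub.
with producer:
    batch = producer.create_batch()
    batch.add(EventData('{"app": "app1", "user": "u123", "event": "screen_view"}'))
    batch.add(EventData('{"app": "app2", "user": "u456", "event": "purchase"}'))
    producer.send_batch(batch)
```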
Question 10 of 51
10. Question
HOTSPOT –
You need to build a sentiment analysis solution that will use input data from JSON documents and PDF documents. The JSON documents must be processed in batches and aggregated.
Which storage type should you use for each file type? To answer, select the appropriate options in the answer area.
All options with correct answer:
Azure Data Lake: A common big data scenario is batch processing of data at rest. In this scenario, the source data is loaded into data storage, either by the source application itself or by an orchestration workflow. The data is then processed in place by a parallelized job, which can also be initiated by the orchestration workflow. The processing may include multiple iterative steps before the transformed results are loaded into an analytical data store, which can be queried by analytics and reporting components.
Azure Blob storage: If you have unstructured text or images in Azure Blob storage, an AI enrichment pipeline can extract information and create new content that is useful for full-text search or knowledge mining scenarios. Although a pipeline can process images, this REST tutorial focuses on text, applying language detection and natural language processing to create new fields that you can leverage in queries, facets, and filters. The tutorial uses Postman and the Search REST APIs, starting with whole documents (unstructured text) such as PDF, HTML, DOCX, and PPTX in Azure Blob storage.
References: https://docs.microsoft.com/en-us/azure/architecture/data-guide/big-data/batch-processing https://docs.microsoft.com/en-us/azure/architecture/data-guide/technology-choices/batch-processing https://www.sqlchick.com/entries/2017/9/4/querying-multi-structured-json-files-with-u-sql-in-azure-data-lake https://docs.microsoft.com/en-us/azure/search/cognitive-search-tutorial-blob
Question 11 of 51
11. Question
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You are developing an application that uses an Azure Kubernetes Service (AKS) cluster. You are troubleshooting a node issue. You need to connect to an AKS node by using SSH.
Solution: You change the permissions of the AKS resource group, and then you create an SSH connection.
Does this meet the goal?
This does not meet the goal. Changing the permissions of the AKS resource group does not provide SSH access to a node; instead, load an SSH public key onto the node and then create the SSH connection.
Question 12 of 51
12. Question
DRAG DROP –
You are designing an AI solution that will analyze media data. The data will be stored in Azure Blob storage.
You need to ensure that the storage account is encrypted by using a key generated by the hardware security module (HSM) of your company.
Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Select and Place:
Question 13 of 51
13. Question
You plan to deploy Azure IoT Edge devices that will each store more than 10,000 images locally and classify the images by using a Custom Vision Service classifier. Each image is approximately 5 MB.
You need to ensure that the images persist on the devices for 14 days.
What should you use?
Azure Blob Storage on IoT Edge provides a block blob and append blob storage solution at the edge. A blob storage module on your IoT Edge device behaves like an Azure blob service, except the blobs are stored locally on your IoT Edge device. You can access your blobs using the same Azure storage SDK methods or blob API calls that you’re already used to. This article explains the concepts related to the Azure Blob Storage on IoT Edge container that runs a blob service on your IoT Edge device.
References: https://docs.microsoft.com/en-us/azure/iot-edge/how-to-store-data-blob https://youtu.be/xbwgMNGB_3Y
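A rough sketch of how device code might write images to the local blob module follows; the local endpoint, port, account name, and key are assumptions based on how the module is typically configured, so adjust them to your deployment:

```python
# pip install azure-storage-blob
from azure.storage.blob import BlobServiceClient

# Hypothetical local-module endpoint and credentials set when deploying the
# "Azure Blob Storage on IoT Edge" module; replace with your own values.
LOCAL_CONN_STR = (
    "DefaultEndpointsProtocol=http;"
    "BlobEndpoint=http://localhost:11002/localstorageaccount;"
    "AccountName=localstorageaccount;AccountKey=<local-account-key>;"
)

service = BlobServiceClient.from_connection_string(LOCAL_CONN_STR)
container = service.get_container_client("captured-images")
try:
    container.create_container()
except Exception:
    pass  # container already exists

# Upload an image captured on the device; retention policies on the module
# (e.g., a 14-day time-to-live) govern how long blobs persist locally.
with open("frame_0001.jpg", "rb") as data:
    container.upload_blob(name="frame_0001.jpg", data=data, overwrite=True)
```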
Question 14 of 51
14. Question
HOTSPOT –
You are designing a solution that will ingest data from an Azure IoT Edge device, preprocess the data in Azure Machine Learning, and then move the data to Azure HDInsight for further processing.
What should you include in the solution?
To answer, select the appropriate options in the answer area.
Export Data:
The "Export data to Hive" option of the Export Data module in Azure Machine Learning Studio is useful when you are working with very large datasets and want to save your machine learning experiment data to a Hadoop cluster or HDInsight distributed storage.
Apache Hive:
Apache Hive is a data warehouse system for Apache Hadoop. Hive enables data summarization, querying, and analysis of data. Hive queries are written in HiveQL, which is a query language similar to SQL.
Azure Data Lake:
Default storage for the HDFS file system of HDInsight clusters can be associated with either an Azure Storage account or an Azure Data Lake Storage account.
Question 15 of 51
15. Question
You are configuring data persistence for a Microsoft Bot Framework application. The application requires a structured NoSQL cloud data store.
You need to identify a storage solution for the application. The solution must minimize costs.
What should you identify?
Azure Table storage can be used as a data store for structured NoSQL data at a lower cost than Azure Cosmos DB. It is a NoSQL key-value store for rapid development using massive semi-structured datasets. Use Azure Table storage to store petabytes of semi-structured data and keep costs down.
References: https://azure.microsoft.com/en-us/services/storage/tables/
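For illustration, a minimal sketch using the azure-data-tables SDK (the connection string, table name, and entity fields are hypothetical) shows how bot state could be persisted to Table storage:

```python
# pip install azure-data-tables
from azure.data.tables import TableServiceClient

# Hypothetical storage-account connection string for illustration.
CONN_STR = "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>;EndpointSuffix=core.windows.net"

service = TableServiceClient.from_connection_string(CONN_STR)
table = service.create_table_if_not_exists("botstate")

# Store one conversation-state record; PartitionKey and RowKey form the composite key.
table.create_entity({
    "PartitionKey": "user-123",
    "RowKey": "conversation-456",
    "lastTopic": "password-reset",
    "turnCount": 7,
})

entity = table.get_entity(partition_key="user-123", row_key="conversation-456")
print(entity["lastTopic"])
```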
Question 16 of 51
16. Question
DRAG DROP –
You are designing an AI solution that will use IoT devices to gather data from conference attendees, and then later analyze the data. The IoT devices will connect to an Azure IoT hub.
You need to design a solution to anonymize the data before the data is sent to the IoT hub.
Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Select and Place:
Options with correct order:
Step 1: Create a storage container. ASA Edge jobs run in containers deployed to Azure IoT Edge devices.
Step 2: Create an Azure Stream Analytics Edge job. Azure Stream Analytics (ASA) on IoT Edge empowers developers to deploy near-real-time analytical intelligence closer to IoT devices so that they can unlock the full value of device-generated data.
Step 3: Add the job to the IoT devices in IoT Hub.
Question 17 of 51
17. Question
You plan to build an application that will perform predictive analytics. Users will be able to consume the application data by using Microsoft Power BI or a custom website.
You need to ensure that you can audit application usage.
Which auditing solution should you use?
Which features of your web or mobile app are most popular? Do your users achieve their goals with your app? Do they drop out at particular points, and do they return later? Azure Application Insights helps you gain powerful insights into how people use your app. Every time you update your app, you can assess how well it works for users. With this knowledge, you can make data-driven decisions about your next development cycles.
References: https://docs.microsoft.com/en-us/azure/azure-monitor/app/usage-overview
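As one possible sketch of emitting custom usage telemetry from Python with the legacy applicationinsights package (the instrumentation key, event names, and properties below are placeholders):

```python
# pip install applicationinsights
from applicationinsights import TelemetryClient

# Hypothetical instrumentation key; use the key from your Application Insights resource.
tc = TelemetryClient("<instrumentation-key>")

# Record a custom usage event each time a prediction is served, with useful dimensions.
tc.track_event("PredictionServed", {"client": "powerbi", "model": "churn-v2"})
tc.track_metric("PredictionLatencyMs", 42)
tc.flush()  # send buffered telemetry to Application Insights
```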
Question 18 of 51
18. Question
You deploy an infrastructure for a big data workload.
You need to run Azure HDInsight and Microsoft Machine Learning Server. You plan to set the RevoScaleR compute contexts to run rx function calls in parallel.
What are three compute contexts that you can use for Machine Learning Server? Each correct answer presents a complete solution.
A. SQL
Correct. SQL Server is a valid compute context for running rx functions. It allows machine learning tasks to execute directly within a SQL Server environment, leveraging its computational capabilities.
B. Local sequential
Incorrect. This context runs computations sequentially on a single machine, which is not suitable for parallel processing.
C. HBase
Incorrect. HBase is not a compute context for running rx functions in Microsoft Machine Learning Server. It serves as a data storage solution rather than a compute environment.
D. Local parallel
Correct. The local parallel compute context enables parallel execution of rx functions on a single machine using multiple cores, making it suitable for improving performance with larger datasets.
E. Spark
Correct. The Spark compute context allows for distributed computing using Apache Spark, which is ideal for big data workloads and enables parallel execution of rx functions across multiple nodes.
Question 19 of 51
19. Question
You need to control and run parallel data transformation operations across different data sources by using the Azure Data Factory service. Which Data Factory construct should you use to group and control these defined actions performed on your data?
A data factory can have one or more pipelines. A pipeline is a logical grouping of activities that together perform a task. For example, a pipeline could contain a set of activities that ingest and clean log data, and then kick off a mapping data flow to analyze the log data. The pipeline allows you to manage the activities as a set instead of each one individually. You deploy and schedule the pipeline instead of the activities independently.
Data Factory has three groupings of activities: data movement activities, data transformation activities, and control activities. An activity can take zero or more input datasets and produce one or more output datasets.
References: https://docs.microsoft.com/en-us/azure/data-factory/concepts-pipelines-activities
Question 20 of 51
20. Question
Your company is building a cinema chatbot by using the Bot Framework and Language Understanding (LUIS).
You are designing the intents and the entities for LUIS.
The following are utterances that customers might provide:
* Which movies are playing on December 8?
* What time is the performance of Movie1?
* I would like to purchase two adult tickets in the balcony section for Movie2.
You need to identify which entity types to use. The solution must minimize development effort.
Question: For "two adult tickets in the balcony section", what is the entity type?
Question 21 of 51
21. Question
You plan to create an intelligent bot to handle internal user chats to the help desk of your company. The bot has the following requirements:
Must be able to answer questions from an existing knowledge base.
You need to recommend which solutions meet the requirements.
QnA Maker is a cloud-based Natural Language Processing (NLP) service that easily creates a natural conversational layer over your data. It can be used to find the most appropriate answer for any given natural language input, from your custom knowledge base (KB) of information.
References: https://azure.microsoft.com/en-us/services/cognitive-services/qna-maker/
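A minimal sketch of querying a published QnA Maker knowledge base over REST follows; the endpoint, knowledge base ID, and endpoint key are hypothetical placeholders:

```python
import requests

# Hypothetical endpoint, knowledge base ID, and endpoint key for illustration.
QNA_ENDPOINT = "https://<resource-name>.azurewebsites.net"
KB_ID = "<knowledge-base-id>"
ENDPOINT_KEY = "<endpoint-key>"

url = f"{QNA_ENDPOINT}/qnamaker/knowledgebases/{KB_ID}/generateAnswer"
headers = {"Authorization": f"EndpointKey {ENDPOINT_KEY}", "Content-Type": "application/json"}

# Ask the knowledge base a help-desk question and print the top-scoring answer.
response = requests.post(url, headers=headers,
                         json={"question": "How do I reset my VPN password?", "top": 1})
response.raise_for_status()
for answer in response.json()["answers"]:
    print(answer["score"], answer["answer"])
```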
Question 22 of 51
22. Question
You need to create a new app that will consume resources from the following Azure Cognitive Services APIs: Face API, Bing Search, Text Analytics, Translator Text, and Language Understanding (LUIS). The solution must prepare the development environment as quickly as possible.
What should you create first from the Azure portal?
Question 23 of 51
23. Question
You plan to use Azure Cognitive Services to provide the development team at your company with the ability to create intelligent apps without having direct AI or data science skills. The company identifies the following requirements for the planned Cognitive Services deployment:
- Provide support for the following languages: English, Portuguese, and German.
- Perform text analytics to derive a sentiment score.
Which Cognitive Services service should you deploy for each requirement? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Text Analytics – The Language Detection feature of the Azure Text Analytics REST API evaluates text input for each document and returns language identifiers with a score that indicates the strength of the analysis.
Text Analytics uses the Language APIs for sentiment analysis.
References: https://azure.microsoft.com/en-us/services/cognitive-services/text-analytics/
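For illustration, a short sketch with the azure-ai-textanalytics SDK (the endpoint, key, and sample documents are hypothetical) detects each document's language and scores its sentiment:

```python
# pip install azure-ai-textanalytics
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

# Hypothetical endpoint and key for illustration.
client = TextAnalyticsClient(
    endpoint="https://<resource-name>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<key>"),
)

docs = [
    "O serviço foi excelente!",            # Portuguese
    "Der Support war leider sehr langsam.",  # German
    "Great product, fast delivery.",        # English
]

# Detect the language of each document, then score its sentiment.
languages = client.detect_language(documents=docs)
sentiments = client.analyze_sentiment(documents=docs)

for lang, sent in zip(languages, sentiments):
    print(lang.primary_language.iso6391_name, sent.sentiment, sent.confidence_scores.positive)
```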
Question 24 of 51
24. Question
You plan to create an intelligent bot to handle internal user chats to the help desk of your company. The bot has the following requirements:
Question: Must be able to interpret what a user means.
You need to recommend which solutions meet the requirements.
LUIS (Language Understanding):
Build applications capable of understanding natural language. Using machine teaching technology and a visual user interface, developers and subject matter experts can build custom machine-learned language models that interpret user goals and extract key information from conversational phrases, all without any machine learning experience. https://azure.microsoft.com/en-us/services/cognitive-services/language-understanding-intelligent-service/
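As a rough sketch of calling a published LUIS app's v3.0 prediction endpoint (the endpoint, app ID, key, and query are hypothetical placeholders):

```python
import requests

# Hypothetical prediction endpoint, app ID, and key for illustration.
PREDICTION_ENDPOINT = "https://<resource-name>.cognitiveservices.azure.com"
APP_ID = "<luis-app-id>"
PREDICTION_KEY = "<prediction-key>"

url = f"{PREDICTION_ENDPOINT}/luis/prediction/v3.0/apps/{APP_ID}/slots/production/predict"
params = {"subscription-key": PREDICTION_KEY, "query": "My laptop won't connect to the VPN"}

# The response contains the top-scoring intent and any extracted entities.
response = requests.get(url, params=params)
response.raise_for_status()
prediction = response.json()["prediction"]
print(prediction["topIntent"], prediction["entities"])
```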
Question 25 of 51
25. Question
HOTSPOT
You plan to create a bot that will support five languages. The bot will be used by users located in three different countries. The bot will answer common customer questions. The bot will use Language Understanding (LUIS) to identify which skill to use and to detect the language of the customer. You need to identify the minimum number of Azure resources that must be created for the planned bot.
How many QnA Maker, LUIS, and Language Detection instances should you create?
To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
QnA Maker: 5. If the user plans to support multiple languages, they need a new QnA Maker resource for each language.
LUIS: 5. If you need a multi-language LUIS client application such as a chatbot, you have a few options. If LUIS supports all the languages, you develop a LUIS app for each language. Each LUIS app has a unique app ID and endpoint log. If you need to provide language understanding for a language LUIS does not support, you can use the Microsoft Translator API to translate the utterance into a supported language, submit the utterance to the LUIS endpoint, and receive the resulting scores.
Language Detection: 1. The Language Detection feature of the Azure Text Analytics REST API evaluates text input for each document and returns language identifiers with a score that indicates the strength of the analysis. This capability is useful for content stores that collect arbitrary text, where language is unknown. You can parse the results of this analysis to determine which language is used in the input document. The response also returns a score that reflects the confidence of the model. The score value is between 0 and 1. The Language Detection feature can detect a wide range of languages, variants, dialects, and some regional or cultural languages. The exact list of languages for this feature isn’t published.
Question 26 of 51
26. Question
Your company recently deployed several hardware devices that contain sensors.
The sensors generate new data on an hourly basis. The data generated is stored on-premises and retained for several years. During the past two months, the sensors generated 300 GB of data.
You plan to move the data to Azure and then perform advanced analytics on the data.
You need to recommend an Azure storage solution for the data.
Which storage solution should you recommend?
Azure Storage blob is a managed storage service that is highly available, secure, durable, scalable, and redundant. Microsoft takes care of maintenance and handles critical problems for you. Azure Storage is the most ubiquitous storage solution Azure provides, due to the number of services and tools that can be used with it.
References: https://docs.microsoft.com/en-us/azure/architecture/data-guide/technology-choices/data-storage
Question 27 of 51
27. Question
You have an Azure Machine Learning model that is deployed to a web service.
You plan to publish the web service by using the name ml.contoso.com.
You need to recommend a solution to ensure that access to the web service is encrypted.
Which three actions should you recommend? Each correct answer presents part of the solution.
Question 28 of 51
28. Question
HOTSPOT –
You are designing an application to parse images of business forms and upload the data to a database. The upload process will occur once a week.
You need to recommend which services to use for the application. The solution must minimize infrastructure costs.
Which services should you recommend? To answer, select the appropriate options in the answer area.
Correct
Question 29 of 51
29. Question
You have thousands of images that contain text. You need to process the text from the images to a machine-readable character stream. Which Azure Cognitive Services service should you use?
DRAG DROP
You have a real-time scoring pattern that uses deep learning models in Azure. You need to complete the scoring pattern. What should you use? To answer, drag the appropriate Azure services to the correct locations in the scoring pattern. Each service may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.
NOTE: Each correct selection is worth one point.
Select and Place:
Correct
Box 1: Azure Container Registry – Azure Container Registry enables storage of images for all types of Docker container deployments, including DC/OS, Docker Swarm, and Kubernetes. The scoring images are deployed as containers on Azure Kubernetes Service and used to run the scoring script. The image used here is created by Machine Learning from the trained model and scoring script, and is then pushed to the Azure Container Registry.
Box 2: Azure Kubernetes Service – Azure Kubernetes Service (AKS) is used to deploy the application on a Kubernetes cluster. AKS simplifies the deployment and operations of Kubernetes. The cluster can be configured using CPU-only VMs for regular Python models or GPU-enabled VMs for deep learning models. This reference architecture shows how to deploy Python models as web services to make real-time predictions using Azure Machine Learning.
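A hedged sketch of this pattern with the Azure Machine Learning v1 SDK (azureml-core) is shown below; the registered model name, entry script, environment, and AKS cluster name are placeholders, and the image build and push to the workspace's container registry happen behind the scenes.

```python
from azureml.core import Workspace, Model, Environment
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AksWebservice
from azureml.core.compute import AksCompute

ws = Workspace.from_config()                       # reads config.json for the workspace
model = Model(ws, name="scoring-model")            # model previously registered in the workspace

inference_config = InferenceConfig(
    entry_script="score.py",                       # scoring script packaged into the image
    environment=Environment.get(ws, name="AzureML-Minimal"),
)
deployment_config = AksWebservice.deploy_configuration(cpu_cores=1, memory_gb=2)
aks_target = AksCompute(ws, "aks-cluster")         # existing attached AKS cluster

service = Model.deploy(ws, "scoring-service", [model], inference_config,
                       deployment_config, deployment_target=aks_target)
service.wait_for_deployment(show_output=True)
print(service.scoring_uri)                         # real-time scoring endpoint on AKS
```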
Your company plans to monitor Twitter hashtags and then to build a graph of connected people and places that contains the associated sentiment.
The monitored hashtags use several languages, but the graph will be displayed in English.
You need to recommend the required Azure Cognitive Services endpoints for the planned graph.
Which Cognitive Services endpoints should you recommend?
Correct
People and places are identified by Named Entity Recognition; Translator Text is also needed because the hashtags use several languages while the graph is displayed in English.
The Text Analytics API is a cloud-based service that provides advanced natural language processing over raw text, and includes four main functions: sentiment analysis, key phrase extraction, language detection, and named entity recognition.
Named Entity Recognition
Identify and categorize entities in your text as people, places, organizations, date/time, quantities, percentages, currencies, and more. Well-known entities are also recognized and linked to more information on the web.
References: https://docs.microsoft.com/en-us/azure/cognitive-services/text-analytics/overview
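A minimal sketch of the two calls is shown below: translate a non-English tweet to English with Translator Text v3, then run named entity recognition with Text Analytics v3.0. The keys, region, resource name, and sample tweet are placeholders.

```python
import requests

translator_key = "<translator-key>"          # placeholder
translator_region = "<translator-region>"    # placeholder
text_key = "<text-analytics-key>"            # placeholder
text_endpoint = "https://<text-analytics-resource>.cognitiveservices.azure.com"

tweet = "Wir hatten ein großartiges Treffen mit Contoso in Berlin."

# Step 1: translate the tweet to English
translated = requests.post(
    "https://api.cognitive.microsofttranslator.com/translate",
    params={"api-version": "3.0", "to": "en"},
    headers={
        "Ocp-Apim-Subscription-Key": translator_key,
        "Ocp-Apim-Subscription-Region": translator_region,
    },
    json=[{"Text": tweet}],
).json()[0]["translations"][0]["text"]

# Step 2: extract people and places with named entity recognition
entities = requests.post(
    f"{text_endpoint}/text/analytics/v3.0/entities/recognition/general",
    headers={"Ocp-Apim-Subscription-Key": text_key},
    json={"documents": [{"id": "1", "language": "en", "text": translated}]},
).json()["documents"][0]["entities"]

for entity in entities:
    print(entity["text"], entity["category"])   # e.g. Contoso -> Organization, Berlin -> Location
```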
Question 32 of 51
32. Question
HOTSPOT –
You need to build an interactive website that will accept uploaded images, and then ask a series of predefined questions based on each image.
Which services should you use? To answer, select the appropriate options in the answer area.
Correct
Options with correct Answer:
Azure Bot Service –
Helps you enrich the customer experience while maintaining control of your data. Build any type of bot, from a Q&A bot to your own branded virtual assistant, to quickly connect your users to the answers they need.
Computer Vision –
The Computer Vision Analyze Image feature returns information about visual content found in an image. Use tagging, domain-specific models, and descriptions in four languages to identify content and label it with confidence. Use Object Detection to get the location of thousands of objects within an image. Apply the adult/racy settings to help you detect potential adult content. Identify image types and color schemes in pictures.
References: https://azure.microsoft.com/en-us/services/bot-service/ https://azure.microsoft.com/en-us/services/cognitive-services/computer-vision/
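A hedged sketch of the Computer Vision Analyze Image REST call that the bot could make for each uploaded picture follows; the resource name, key, and image URL are placeholders, and v3.2 of the API is assumed.

```python
import requests

endpoint = "https://<your-computer-vision-resource>.cognitiveservices.azure.com"  # placeholder
key = "<your-subscription-key>"                                                   # placeholder

# Analyze an uploaded image for a caption, tags, and detected objects
analysis = requests.post(
    f"{endpoint}/vision/v3.2/analyze",
    params={"visualFeatures": "Description,Tags,Objects"},
    headers={"Ocp-Apim-Subscription-Key": key},
    json={"url": "https://example.com/uploaded-image.jpg"},
).json()

print(analysis["description"]["captions"][0]["text"])   # natural-language caption
for tag in analysis["tags"]:
    print(tag["name"], tag["confidence"])
```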
Question 33 of 51
33. Question
Your company has an on-premises data center.
You plan to publish an app that will recognize a set of individuals by using the Face API. The model is
trained. You need to ensure that all images are processed in the on-premises data center.
What should you deploy to host the Face API?
Correct
A container is a standard unit of software that packages up code and all its dependencies so the
application runs quickly and reliably from one computing environment to another. A Docker
container image is a lightweight, standalone, executable package of software that includes everything
needed to run an application: code, runtime, system tools, system libraries and settings.
Deploy the Cognitive Services Face container to Azure Container Instances. The referenced procedure demonstrates creating an Azure Face resource, pulling the associated container image, and orchestrating the two from a browser. Using containers can shift the developers' attention away from managing infrastructure to instead focusing on application development.
References: https://docs.microsoft.com/en-us/azure/cognitive-services/face/face-api-how-to-topics/deploy-face-container-on-container-instances
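Once the Face container image has been started on a Docker host inside the data center (with its billing settings pointing at the Face resource), the container is assumed to expose the same Face API surface locally, so detection calls never leave the premises. A minimal sketch, with the host name and image URL as placeholders:

```python
import requests

# Local container endpoint, not the cloud Face endpoint (host/port are placeholders)
container_endpoint = "http://localhost:5000"

faces = requests.post(
    f"{container_endpoint}/face/v1.0/detect",
    json={"url": "http://fileserver.contoso.local/images/visitor.jpg"},
).json()

for face in faces:
    print(face["faceRectangle"])   # bounding box of each detected face
```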
Question 34 of 51
34. Question
HOTSPOT
Your company plans to deploy several apps that will use Azure Cognitive Services APIs. You need to recommend which Cognitive Services APIs must be used to meet the following requirements:
-> Must be able to identify inappropriate text and profanities in multiple languages.
-> Must be able to interpret user requests sent by using text input.
-> Must be able to identify named entities in text.
Which API should you recommend for each requirement?
To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Correct
Content Moderator – The Azure Content Moderator API is a cognitive service that checks text, image, and video content for material that is potentially offensive, risky, or otherwise undesirable. When such material is found, the service applies appropriate labels (flags) to the content. Your app can then handle flagged content in order to comply with regulations or maintain the intended environment for users.
Language Understanding (LUIS) – Designed to identify valuable information in conversations, LUIS interprets user goals (intents) and distills valuable information from sentences (entities) for a high-quality, nuanced language model. LUIS integrates seamlessly with the Azure Bot Service, making it easy to create a sophisticated bot.
Text Analytics – The Text Analytics API is a cloud-based service that provides advanced natural language processing over raw text and includes four main functions: sentiment analysis, key phrase extraction, named entity recognition, and language detection.
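For the profanity requirement, a hedged sketch of the Content Moderator text Screen operation is shown below; the region, key, and sample text are placeholders, and the query parameters used are an assumption based on the public operation.

```python
import requests

endpoint = "https://<your-region>.api.cognitive.microsoft.com"   # placeholder region
key = "<your-content-moderator-key>"                              # placeholder

response = requests.post(
    f"{endpoint}/contentmoderator/moderate/v1.0/ProcessText/Screen",
    params={"language": "eng", "classify": "True"},
    headers={
        "Ocp-Apim-Subscription-Key": key,
        "Content-Type": "text/plain",
    },
    data="Sample text that may contain profanity".encode("utf-8"),
)

result = response.json()
print(result.get("Terms"))            # matched profanity terms, if any
print(result.get("Classification"))   # category scores when classify=True
```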
Question 35 of 51
35. Question
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution. After you answer a question, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You have Azure IoT Edge devices that generate streaming data. On the devices, you need to detect anomalies in the data by using Azure Machine Learning models. Once an anomaly is detected, the devices must add information about the anomaly to the Azure IoT Hub stream.
Solution: You deploy an Azure Machine Learning model as an IoT Edge module.
Does this meet the goal?
Correct
You can use IoT Edge modules to deploy code that implements your business logic directly to your IoT Edge devices. For example, you can use Azure Notebooks to develop a machine learning module and deploy it to a Linux device running Azure IoT Edge. The referenced tutorial walks you through deploying an Azure Machine Learning module that predicts when a device fails based on simulated machine temperature data.
References: https://docs.microsoft.com/bs-latn-ba/azure/iot-edge/tutorial-deploy-machine-learning
Question 36 of 51
36. Question
HOTSPOT –
You plan to deploy an Azure Data Factory pipeline that will perform the following:
-> Move data from on-premises to the cloud.
-> Consume Azure Cognitive Services APIs.
You need to recommend which technologies the pipeline should use. The solution must minimize custom code.
What should you include in the recommendation? To answer, select the appropriate options in the answer area.
Correct
Question 37 of 51
37. Question
Your company is building a cinema chatbot by using the Bot Framework and Language Understanding (LUIS).
You are designing the intents and the entities for LUIS.
The following are utterances that customers might provide:
* Which movies are playing on December 8?
* What time is the performance of Movie1?
* I would like to purchase two adult tickets in the balcony section for Movie2.
You need to identify which entity types to use. The solution must minimize development effort.
Question: For December 8, what is the entity type?
Correct
Prebuilt entity: datetimeV2, with the value December 8.
Language Understanding (LUIS) provides prebuilt entities. When a prebuilt entity is included in your application, LUIS includes the corresponding entity prediction in the endpoint response. All example utterances are also labeled with the entity. The behavior of prebuilt entities can't be modified. Unless otherwise noted, prebuilt entities are available in all LUIS application locales (cultures). A few common prebuilt entities are datetimeV2, ordinal, email, and phone number.
References: https://docs.microsoft.com/en-us/azure/cognitive-services/luis/luis-reference-prebuilt-entities https://azure.github.io/LearnAI-DesigningandArchitectingIntelligentAgents/03-luis/1_session.html
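A minimal sketch of querying a published LUIS app with the v3 prediction REST endpoint and reading back the prebuilt datetimeV2 entity is shown below; the prediction resource name, app ID, and key are placeholders.

```python
import requests

prediction_endpoint = "https://<your-luis-prediction-resource>.cognitiveservices.azure.com"  # placeholder
app_id = "<your-app-id>"                                                                     # placeholder
prediction_key = "<your-prediction-key>"                                                     # placeholder

response = requests.get(
    f"{prediction_endpoint}/luis/prediction/v3.0/apps/{app_id}/slots/production/predict",
    params={
        "query": "Which movies are playing on December 8?",
        "subscription-key": prediction_key,
    },
).json()

prediction = response["prediction"]
print(prediction["topIntent"])
# Prebuilt entities appear under their own key in the entities object, e.g. datetimeV2
print(prediction["entities"].get("datetimeV2"))
```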
Question 38 of 51
38. Question
HOTSPOT
Your company plans to build an app that will perform the following tasks: match a user's picture to a picture of a celebrity, and tag a scene from a movie and then search for movie scenes by using the tags. You need to recommend which Azure Cognitive Services APIs must be used to perform the tasks.
Which Cognitive Services API should you recommend for each task?
To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:
Correct
Computer Vision – Azure’s Computer Vision service provides developers with access to advanced algorithms that process images and return information. Computer Vision Detect Faces: detect faces in an image and provide information about each detected face. Computer Vision returns the coordinates, rectangle, gender, and age for each detected face. Computer Vision provides a subset of the Face service functionality. You can use the Face service for more detailed analysis, such as facial identification and pose detection.
Use domain models to detect and identify domain-specific content in an image, such as celebrities and landmarks. For example, if an image contains people, Computer Vision can use a domain model for celebrities to determine if the people detected in the image are known celebrities.
Bing Video Search – Search for videos and get comprehensive results. With Bing Video Search API v7, find videos across the web. Results provide useful metadata, including creator, encoding format, video length, view count, improved and simplified paging, and more.
You need to develop a mobile application that uses a computer vision service to identify fresh produce and vegetables at a grocery store.
Which Custom Vision domain should you use?
Correct
Image Classification Domains
Generic
Optimized for a broad range of image classification tasks. If none of the other domains are appropriate, or you’re unsure of which domain to choose, select the Generic domain.
Food
Optimized for photographs of dishes as you would see them on a restaurant menu. If you want to classify photographs of individual fruits or vegetables, use the Food domain.
Landmarks
Optimized for recognizable landmarks, both natural and artificial. This domain works best when the landmark is clearly visible in the photograph. This domain works even if the landmark is slightly obstructed by people in front of it.
Retail
Optimized for images that are found in a shopping catalog or shopping website. If you want high precision classifying between dresses, pants, and shirts, use this domain.
Compact domains
Optimized for the constraints of real-time classification on edge devices.
Real-time classification of food will therefore use the Food (compact) domain.
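A hedged sketch with the azure-cognitiveservices-vision-customvision SDK follows: it looks up the "Food (compact)" classification domain and creates a project that uses it, so the trained model can later be exported for the mobile app. The endpoint, training key, and project name are placeholders.

```python
from azure.cognitiveservices.vision.customvision.training import CustomVisionTrainingClient
from msrest.authentication import ApiKeyCredentials

endpoint = "https://<your-custom-vision-training-resource>.cognitiveservices.azure.com"  # placeholder
credentials = ApiKeyCredentials(in_headers={"Training-key": "<your-training-key>"})      # placeholder
trainer = CustomVisionTrainingClient(endpoint, credentials)

# Pick the compact Food domain so the trained model can run on-device
food_compact = next(
    d for d in trainer.get_domains()
    if d.type == "Classification" and d.name == "Food (compact)"
)

project = trainer.create_project("grocery-produce", domain_id=food_compact.id)
print(project.id, project.name)
```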
You are designing an AI solution that will use IoT devices to gather data from conference attendees and then analyze the data. The IoT device will connect to an Azure IoT hub. You need to ensure that data contains no personally identifiable information before it is sent to the IoT hub.
Which three actions should you perform in sequence?
To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
Select and Place:
Correct
Azure Stream Analytics (ASA) on IoT Edge empowers developers to deploy near-real-time analytical intelligence closer to IoT devices so that they can unlock the full value of device-generated data. Azure Stream Analytics is designed for low latency, resiliency, efficient use of bandwidth, and compliance. Enterprises can now deploy control logic close to the industrial operations and complement Big Data analytics done in the cloud.
Azure Stream Analytics on IoT Edge runs within the Azure IoT Edge framework. Once the job is created in ASA, you can deploy and manage it using IoT Hub.
Your company is building a cinema chatbot by using the Bot Framework and Language Understanding (LUIS).
You are designing the intents and the entities for LUIS.
The following are utterances that customers might provide:
* Which movies are playing on December 8?
* What time is the performance of Movie1?
* I would like to purchase two adult tickets in the balcony section for Movie2.
You need to identify which entity types to use. The solution must minimize development effort.
Question: For Movie1, what is the entity type?
You plan to create an intelligent bot to handle internal user chats to the help desk of your
company. The bot has the following requirement:
-> Must be able to perform multiple tasks for a user.
You need to recommend which solutions meet the requirements.
Correct
If a bot uses multiple LUIS models and QnA Maker knowledge bases, you can use the Dispatch tool to determine which LUIS model or QnA Maker knowledge base best matches the user input. The Dispatch tool does this by creating a single LUIS app to route user input to the correct model. For more information about Dispatch, including the CLI commands, see the reference below.
References: https://docs.microsoft.com/en-us/azure/bot-service/bot-builder-tutorial-dispatch?view=azure-bot-service-4.0&tabs=cs
Question 43 of 51
43. Question
You need to create a prototype of a bot to demonstrate a user performing a task. The demonstration will use the Bot Framework Emulator.
Which botbuilder CLI tool should you use to create the prototype?
Correct
Use Chatdown to produce prototype mock conversations in markdown and convert the markdown to transcripts you can load and view in the new V4 Bot Framework Emulator.
Question 44 of 51
44. Question
HOTSPOT
You have an app that uses the Language Understanding (LUIS) API as shown in the following exhibit.
Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic.
NOTE: Each correct selection is worth one point.
Hot Area:
Correct
Train – Utterances are input from the user that your app needs to interpret. To train LUIS to extract intents and entities from them, it’s important to capture a variety of different example utterances for each intent. Active learning, or the process of continuing to train on new utterances, is essential to the machine-learned intelligence that LUIS provides.
Creating intents – Each intent needs to have example utterances, at least 15. If you have an intent that does not have any example utterances, you will not be able to train LUIS. If you have an intent with one or very few example utterances, LUIS will not accurately predict the intent.
Never published – In each iteration of the model, do not add a large quantity of utterances. Add utterances in quantities of 15. Train, publish, and test again.
Question 45 of 51
45. Question
You need to develop a mobile application that uses a computer vision service to identify fresh produce and vegetables at a grocery store.
Which Custom Vision project type and compact domain should you use?
Correct
Compact domains
The models generated by compact domains can be exported to run locally. Model performance varies by the selected domain.
Food Classification with Custom Vision Service
In our recent engagement with Vectorform, we built a simple Android application that allows a user to obtain a food's nutritional values based on a photo of that food. To make this scenario simpler, we will assume that the target photo has either a single food item in it or that the user will indicate the food item in question.
The main functionality of the app (image recognition) is powered by Custom Vision, where we will detect what the item is: for example, an apple or a tomato. Once we know what the food is, our goal of finding nutritional info from publicly available services is easy.
To get started, we'll begin by classifying the following foods. For the purposes of this example, we'll select foods that are visually distinct from each other:
Apple
Banana
Cake
Fries
Sandwich
Creating a classification model in the Custom Vision Service.
To demonstrate the full power of the Custom Vision classification we built, we created a mobile application on Android that would capture a food-related image, hit our endpoint to determine what food is pictured in the image, use the Nutritionix service to get nutritional information about that food, then display the results to the user.
You need to develop a mobile application that uses a computer vision service to identify fresh produce and vegetables at a grocery store.
Which Export Capabilities option should you use for a mobile application with the lowest size?
Correct
VAIDK (Vision AI Dev Kit)
When a compact domain is selected, an extra option, “Export Capabilities”, is provided, allowing you to distinguish between “Basic platforms” and “Vision AI Dev Kit”.
Under Export Capabilities the two options are:
1. Basic platforms (Tensorflow, CoreML, ONNX, etc.)
2. Vision AI Dev Kit.
When Vision AI Dev Kit is selected, the Generic, Landmarks, and Retail compact domains, but not the Food compact domain, are available for image classification, while both General (compact) and General (compact) [S1] are available for object detection.
So Vision AI Dev Kit is not available for Food compact image classification, and Basic platforms will be selected.
You plan to deploy the Text Analytics and Computer Vision services. The Azure Cognitive
Services will be deployed to the West US and East Europe Azure regions.
You need to identify the minimum number of service endpoints and API keys required for the
planned deployment.
What should you identify? To answer, select the appropriate options in the answer area.
Correct
After creating a Cognitive Service resource in the Azure portal, you’ll get an endpoint and a key for
authenticating your applications. You can access Azure Cognitive Services through two different
resources: A multi-service resource, or a single-service one.
Multi-service resource: Access multiple Azure Cognitive Services with a single key and endpoint.
Note: You need a key and endpoint for a Text Analytics resource. Azure Cognitive Services are
represented by Azure resources that you subscribe to.
Each request must include your access key and an HTTP endpoint. The endpoint specifies the region you chose during sign up, the service URL, and a resource used on the request.
Box 2: 2 – You need at least one key per region.
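As a rough illustration of the multi-service pattern, the sketch below uses the same endpoint and key to call both Text Analytics and Computer Vision in one region; you would repeat it with a second resource for the second region. The resource name, key, text, and image URL are placeholders.

```python
import requests

endpoint = "https://<multi-service-resource-westus>.cognitiveservices.azure.com"  # placeholder
key = "<multi-service-key-westus>"                                                # placeholder
headers = {"Ocp-Apim-Subscription-Key": key}

# Text Analytics call with the multi-service key
sentiment = requests.post(
    f"{endpoint}/text/analytics/v3.0/sentiment",
    headers=headers,
    json={"documents": [{"id": "1", "language": "en", "text": "The product was great."}]},
).json()

# Computer Vision call with the same key and endpoint
vision = requests.post(
    f"{endpoint}/vision/v3.2/analyze",
    params={"visualFeatures": "Tags"},
    headers=headers,
    json={"url": "https://example.com/sample.jpg"},
).json()

print(sentiment["documents"][0]["sentiment"], [t["name"] for t in vision["tags"]])
```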
You plan to design an application that will use data from Azure Data Lake and perform sentiment analysis by using Azure Machine Learning algorithms.
The developers of the application use a mix of Windows- and Linux-based environments. The developers contribute to shared GitHub repositories.
You need all the developers to use the same tool to develop the application.
What is the best tool to use? More than one answer choice may achieve the goal.
Correct
Mainly Azure Machine Learning Studio, but also Microsoft Visual Studio Code, enables the developers of the application to use a mix of Windows- and Linux-based environments. The developers can contribute to shared GitHub repositories.
Question 49 of 51
49. Question
You are designing an AI solution that will analyze millions of pictures.
You need to recommend a solution for storing the pictures. The solution must minimize costs.
Which storage solution should you recommend?
The development team at your company builds a bot by using C# and .NET. You need to deploy the bot to Azure.
Which tool should you use?
Correct
The deployment process documented here uses one of the ARM templates to provision required resources for the bot in Azure by using the Azure CLI.
Note: When you create a bot using the Visual Studio template, the Yeoman template, or the Cookiecutter template, the generated source code includes a deployment templates folder that contains ARM templates.
Question 51 of 51
51. Question
DRAG DROP
You need to create a bot to meet the following requirements: The bot must support multiple bot channels including Direct Line. Users must be able to sign in to the bot by using a Gmail user account and save activities and preferences.
Which four actions should you perform in sequence?
To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
NOTE: More than one order of answer choices is correct.
You will receive credit for any of the correct orders you select.
Select and Place:
Correct
The Azure Bot Service v4 SDK facilitates the development of bots that can access online resources that require authentication. Your bot does not need to manage authentication tokens. Azure does it for you using OAuth2 to generate a token, based on each user’s credentials. Your bot uses the token generated by Azure to access those resources. In this way, the user does not have to provide ID and password to the bot to access a secured resource but only to a trusted identity provider.
1- From the Azure portal, configure an identity provider. The Azure Bot Service and the v4 SDK include new bot authentication capabilities, providing features that make it easier to develop a bot that authenticates users to various identity providers, such as Azure AD (Azure Active Directory), GitHub, Uber, and so on.
2- From the Azure portal, create an Azure Active Directory (Azure AD) B2C service. Azure Active Directory B2C provides business-to-customer identity as a service. Your customers use their preferred social, enterprise, or local account identities to get single sign-on access to your applications and APIs.
3- From the Azure portal, create a client application. You can enable communication between your bot and your own client application by using the Direct Line API.
4- From the bot code, add the connection settings and an OAuth prompt. Use an OAuth prompt to sign the user in and get a token. Azure AD B2C uses standards-based authentication protocols, including OpenID Connect, OAuth 2.0, and SAML.
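A hedged sketch of step 4 with the Bot Framework SDK v4 for Python (botbuilder-dialogs) follows; the dialog names and the connection name "GmailConnection" are placeholders that would have to match the OAuth connection setting configured in the Azure portal.

```python
from botbuilder.dialogs import ComponentDialog, WaterfallDialog, WaterfallStepContext
from botbuilder.dialogs.prompts import OAuthPrompt, OAuthPromptSettings

class SignInDialog(ComponentDialog):
    def __init__(self):
        super().__init__("SignInDialog")
        # OAuth prompt wired to the connection configured in the Azure portal
        self.add_dialog(
            OAuthPrompt(
                "GmailSignIn",
                OAuthPromptSettings(
                    connection_name="GmailConnection",   # placeholder connection name
                    text="Please sign in with your Gmail account",
                    title="Sign in",
                    timeout=300000,                      # milliseconds to wait for sign-in
                ),
            )
        )
        self.add_dialog(WaterfallDialog("MainFlow", [self.prompt_step, self.token_step]))
        self.initial_dialog_id = "MainFlow"

    async def prompt_step(self, step: WaterfallStepContext):
        # Start the OAuth prompt; the service handles the token exchange
        return await step.begin_dialog("GmailSignIn")

    async def token_step(self, step: WaterfallStepContext):
        token_response = step.result   # token issued via the configured identity provider
        return await step.end_dialog(token_response)
```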