Azure AI Engineer Associate Practice Test 8 (AI-102)
Question 1 of 65
1. Question
Identify the missing word(s) in the following sentence within the context of Microsoft Azure.
To access your Content Moderator resource, a subscription key is required. The key can be found in the left pane, under [?] by selecting Keys and Endpoints.
Correct
The missing word is: Resource Management.
To access your Content Moderator resource, a subscription key is required. The key can be found in the left pane, under Resource Management by selecting Keys and Endpoints.
Identify the missing word(s) in the following sentence within the context of Microsoft Azure.
The [?] is Microsoft’s cross-platform command-line tool for managing Azure resources. It’s available for macOS, Linux, and Windows, or in the browser using Azure Cloud Shell.
Correct
The Azure CLI is Microsoft’s cross-platform command-line tool for managing Azure resources. It’s available for macOS, Linux, and Windows, or in the browser using Azure Cloud Shell. https://docs.microsoft.com/en-us/azure/cloud-shell/overview
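As a brief illustration (the resource and resource group names below are placeholders), the Azure CLI can retrieve the keys and endpoint of a Cognitive Services resource:
Azure CLI
# Hypothetical example: list the subscription keys for a Cognitive Services resource
az cognitiveservices account keys list --name my-cognitive-resource --resource-group my-resource-group
# Show the resource and return only its endpoint URL
az cognitiveservices account show --name my-cognitive-resource --resource-group my-resource-group --query properties.endpoint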
Question 4 of 65
4. Question
Identify the missing word(s) in the following sentence within the context of Microsoft Azure.
A(n) [?] represents a continuous segment of the video. Transitions within the video are detected which determine how the video is split up. There can be gaps in time between the [?]s but together they are representative of the shot.
Correct
The missing word is: Scene.
A Scene represents a continuous segment of the video. Transitions within the video are detected which determine how the video is split up. There can be gaps in time between the Scenes but together they are representative of the shot.
Question 5 of 65
5. Question
You’re integrating the Computer Vision API into your solution. You created a Cognitive Services account for the Computer Vision Service in the Eastern US region. Which of the following is the correct address for you to access the ocr operation?
Correct
The Computer Vision API provides state-of-the-art algorithms to process images and return information. For example, it can be used to determine if an image contains mature content, or it can be used to find all the faces in an image. It also has other features like estimating dominant and accent colors, categorizing the content of images, and describing an image with complete English sentences. Additionally, it can intelligently generate image thumbnails for displaying large images effectively.
The endpoint specifies the region you chose during sign-up, the service URL, and a resource used on the request.
In this case, the correct response is eastus.api.cognitive.microsoft.com/vision/v2.0/ocr
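As a minimal sketch (the subscription key variable and image URL below are placeholders, not from the question), a call to the ocr operation in East US would look like this:
Azure CLI
# Hypothetical example: call the OCR operation of a Computer Vision resource in East US
curl "https://eastus.api.cognitive.microsoft.com/vision/v2.0/ocr" \
-H "Ocp-Apim-Subscription-Key: $key" \
-H "Content-Type: application/json" \
-d '{"url": "https://example.com/sample-image.jpg"}'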
This API is currently available in:
• Australia East – australiaeast.api.cognitive.microsoft.com
• Brazil South – brazilsouth.api.cognitive.microsoft.com
• Canada Central – canadacentral.api.cognitive.microsoft.com
• Central India – centralindia.api.cognitive.microsoft.com
• Central US – centralus.api.cognitive.microsoft.com
• East Asia – eastasia.api.cognitive.microsoft.com
• East US – eastus.api.cognitive.microsoft.com
• East US 2 – eastus2.api.cognitive.microsoft.com
• France Central – francecentral.api.cognitive.microsoft.com
• Japan East – japaneast.api.cognitive.microsoft.com
• Japan West – japanwest.api.cognitive.microsoft.com
• Korea Central – koreacentral.api.cognitive.microsoft.com
• North Central US – northcentralus.api.cognitive.microsoft.com
• North Europe – northeurope.api.cognitive.microsoft.com
• South Africa North – southafricanorth.api.cognitive.microsoft.com
• South Central US – southcentralus.api.cognitive.microsoft.com
• Southeast Asia – southeastasia.api.cognitive.microsoft.com
• UK South – uksouth.api.cognitive.microsoft.com
• West Central US – westcentralus.api.cognitive.microsoft.com
• West Europe – westeurope.api.cognitive.microsoft.com
• West US – westus.api.cognitive.microsoft.com
• West US 2 – westus2.api.cognitive.microsoft.com https://eastus.dev.cognitive.microsoft.com/docs/services/5adf991815e1060e6355ad44/operations/56f91f2e778daf14a499e1fc
Question 6 of 65
6. Question
What score value is returned if the text in a document cannot be analyzed correctly?
Correct
The service will return NaN (Not-a-Number) when it cannot determine the language in the provided text.
The Language Detection feature of the Azure Text Analytics REST API evaluates text input for each document and returns language identifiers with a score that indicates the strength of the analysis.
This capability is useful for content stores that collect arbitrary text, where language is unknown. You can parse the results of this analysis to determine which language is used in the input document. The response also returns a score that reflects the confidence of the model. The score value is between 0 and 1.
The Language Detection feature can detect a wide range of languages, variants, dialects, and some regional or cultural languages. https://docs.microsoft.com/en-us/azure/cognitive-services/text-analytics/how-tos/text-analytics-how-to-language-detection
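As a hedged sketch (the resource hostname and key are placeholders, and the API version may differ from your deployment), a language detection request takes a documents array like this:
Azure CLI
# Hypothetical example: detect the language of a single document
curl "https://<your-resource>.cognitiveservices.azure.com/text/analytics/v3.0/languages" \
-H "Ocp-Apim-Subscription-Key: $key" \
-H "Content-Type: application/json" \
-d '{"documents": [{"id": "1", "text": "Ce document est rédigé en français."}]}'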
Question 7 of 65
7. Question
What additional parameter is required when synthesizing text input to an audio file?
Which of the following provides a user interface for a Conversational AI Agent?
Correct
Conversational AI
Many organizations publish support information and answers to frequently asked questions (FAQs) that can be accessed through a web browser or dedicated app. However, the complexity of the systems and services they offer means that answers to specific questions are hard to find. Often, these organizations find their support personnel overloaded with requests for help through phone calls, email, text messages, social media, and other channels.
Increasingly, organizations are turning to artificial intelligence (AI) solutions that make use of AI agents, commonly known as bots, to provide a first line of automated support through the full range of channels that we use to communicate.
Azure Bot Service provides a user interface for a Conversational AI Agent.
Bots are designed to interact with users in a conversational manner, as shown in this example of a chat interface:
Conversations typically take the form of messages exchanged in turns; and one of the most common kinds of conversational exchange is a question followed by an answer. This pattern forms the basis for many user support bots, and can often be based on existing FAQ documentation. To implement this kind of solution, you need:
• A knowledge base of question and answer pairs – usually with some built-in natural language processing model to enable questions that can be phrased in multiple ways to be understood with the same semantic meaning.
• A bot service that provides an interface to the knowledge base through one or more channels. https://docs.microsoft.com/en-us/learn/modules/build-faq-chatbot-qna-maker-azure-bot-service/1-introduction
The following do not provide a user interface for a Conversational AI Agent:
• Azure Speech helps recognize and synthesize speech, recognize and identify speakers, and translate live or recorded speech. It does not provide a UI for bots.
• Bot Framework provides additional bots’ capabilities, but it relies on Azure Bot Service to provide a user interface for bots.
• QnA Maker service provides KB capabilities for bots, but it relies on Azure Bot Service to provide a user interface for bots.
• Computer Vision Service works with images. It does not provide a UI for bots.
Question 10 of 65
10. Question
Identify the missing word(s) in the following sentence within the context of Microsoft Azure.
The Training API endpoint is available in the [?] of your Custom Vision service project in the web portal.
Correct
The Training API endpoint is available in the Settings pane of your Custom Vision service project in the web portal. You can also find the training key on this page. You need the training key to authorize calls to the Training API services. After you have these two pieces of information, you’re ready to use the CreateImages methods.
After you identify the proper URL, you invoke it with an HTTP PUT request, passing the request in the body and the training key as a request header with the name Training-Key. https://docs.microsoft.com/en-ca/learn/modules/evaluate-requirements-for-custom-computer-vision-api/5-examine-the-custom-vision-training-api
Question 11 of 65
11. Question
Which of these tasks would be a good fit for the Custom Vision APIs?
Identify the missing word(s) in the following sentence within the context of Microsoft Azure.
The [A] service enables you to define and publish a knowledge base of questions and answers with support for natural language querying. When combined with [B], you can use [A] to deliver a bot that responds intelligently to user questions over multiple communication channels.
Correct
The QnA Maker service enables you to define and publish a knowledge base of questions and answers with support for natural language querying. When combined with Azure Bot Service, you can use QnA Maker to deliver a bot that responds intelligently to user questions over multiple communication channels. https://docs.microsoft.com/en-us/learn/modules/build-faq-chatbot-qna-maker-azure-bot-service/4-summary
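As a rough sketch (the hostname, knowledge base ID, and endpoint key below are placeholders), a published QnA Maker knowledge base can be queried over REST like this:
Azure CLI
# Hypothetical example: query a published QnA Maker knowledge base
curl -X POST "https://<your-qna-resource>.azurewebsites.net/qnamaker/knowledgebases/<kb-id>/generateAnswer" \
-H "Authorization: EndpointKey <endpoint-key>" \
-H "Content-Type: application/json" \
-d '{"question": "How do I reset my password?"}'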
Question 13 of 65
13. Question
Identify the missing word(s) in the following sentence within the context of Microsoft Azure.
The Language Understanding (LUIS) container loads your trained or published Language Understanding model. As a LUIS app, the docker container provides access to the query predictions from the container’s API endpoints. You can collect query logs from the container and upload them back to the Language Understanding app to improve the app’s prediction accuracy.
[?] offers many benefits including the ability to run the service closer to your application, reduce network constraints on consumption of the LUIS app, help to reduce the cost associated with testing by taking endpoint hits off the Azure or LUIS platform, and the ability to scale up or scale out the LUIS application using container instances or Azure Kubernetes Services (AKS).
Correct
The Language Understanding (LUIS) container loads your trained or published Language Understanding model. As a LUIS app, the docker container provides access to the query predictions from the container’s API endpoints. You can collect query logs from the container and upload them back to the Language Understanding app to improve the app’s prediction accuracy.
Containerizing LUIS offers many benefits including the ability to run the service closer to your application, reduce network constraints on consumption of the LUIS app, help to reduce the cost associated with testing by taking endpoint hits off the Azure or LUIS platform, and the ability to scale up or scale out the LUIS application using container instances or Azure Kubernetes Services (AKS). https://docs.microsoft.com/en-us/azure/cognitive-services/cognitive-services-container-support
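As an illustrative sketch only (the container image path, resource limits, and mount paths are assumptions that may differ by container version; the billing endpoint and key are placeholders), a LUIS container is typically started with docker run:
Azure CLI
# Hypothetical example: run the LUIS container locally and expose its prediction endpoint on port 5000
docker run --rm -it -p 5000:5000 --memory 4g --cpus 2 \
--mount type=bind,src=/path/to/luis/input,target=/input \
--mount type=bind,src=/path/to/luis/output,target=/output \
mcr.microsoft.com/azure-cognitive-services/language/luis \
Eula=accept \
Billing=<cognitive-services-endpoint-uri> \
ApiKey=<cognitive-services-api-key>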
Question 14 of 65
14. Question
Diana Prince is working in a new role in Wayne Enterprises. Her task is retrieving handwritten text from an image. Using Azure CLI, she has executed the following code:
Azure CLI
curl "https://westus2.api.cognitive.microsoft.com/vision/v2.0/recognizeText?mode=Handwritten" \
-H "Ocp-Apim-Subscription-Key: $key" \
-H "Content-Type: application/json" \
-d "{'url' : 'https://raw.githubusercontent.com/MicrosoftDocs/mslearn-process-images-with-the-computer-vision-service/master/images/handwriting.jpg'}" \
-D -
The above dumps the headers of this operation to the console.
Azure CLI
HTTP/1.1 202 Accepted
Cache-Control: no-cache
Pragma: no-cache
Content-Length: 0
Expires: -1
Operation-Location: https://westus2.api.cognitive.microsoft.com/vision/v2.0/textOperations/d0e9b397-4072-471c-ae61-7490bec8f077
X-AspNet-Version: 4.0.30319
X-Powered-By: ASP.NET
apim-request-id: f5663487-03c6-4760-9be7-c9157fac10a1
Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
x-content-type-options: nosniff
Date: Wed, 12 Sep 2020 19:22:00 GMT
What is the next step in the process?
Correct
The next step will be to copy the Operation-Location header value then execute a new command in Azure Cloud Shell with the value for the Operation-Location header from the preceding step.
In this scenario, the code to enter would be:
Azure CLI
curl -H "Ocp-Apim-Subscription-Key: $key" "" | jq '.' https://docs.microsoft.com/en-us/cli/azure/format-output-azure-cli
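Using the Operation-Location value from the response shown above, the follow-up call would look like this:
Azure CLI
# Poll the operation URL returned in the Operation-Location header to retrieve the recognized text
curl -H "Ocp-Apim-Subscription-Key: $key" \
"https://westus2.api.cognitive.microsoft.com/vision/v2.0/textOperations/d0e9b397-4072-471c-ae61-7490bec8f077" | jq '.'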
Question 15 of 65
15. Question
Identify the missing word(s) in the following sentence within the context of Microsoft Azure.
A face list relates to several concepts:
• [A]: A single face
• [B]: A list or collection of faces
• [C]: A single person
• [D]: A list or collection of persons
Face lists are useful when you’re working on face identification and face similarity.
Correct
Strictly speaking, a face list is a group of faces. Create and manage face lists to find similar faces in a fixed collection of faces.
For example, you could use a face list to find a similar face in a set of pictures of celebrities, friends, or family members.
A face list relates to several concepts:
• Face: A single face
• Face list: A list or collection of faces
• Person: A single person
• Person group: A list or collection of persons
Relationships among these terms can get a little fuzzy, so it’s helpful to visualize them:
Face lists are useful when you’re working on face identification and face similarity.
Face identification
You can use the Face API to identify people by comparing a detected face to a person group. Remember, a person group is like a database of people. For example, you might create a person group named myInnerCircle.
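As a hedged sketch (the resource hostname, key, list ID, and image URL are placeholders), a face list is created and populated through the Face API REST endpoints like this:
Azure CLI
# Hypothetical example: create a face list, then add a face to it from an image URL
curl -X PUT "https://<your-face-resource>.cognitiveservices.azure.com/face/v1.0/facelists/my-face-list" \
-H "Ocp-Apim-Subscription-Key: $key" \
-H "Content-Type: application/json" \
-d '{"name": "My face list"}'
curl -X POST "https://<your-face-resource>.cognitiveservices.azure.com/face/v1.0/facelists/my-face-list/persistedFaces" \
-H "Ocp-Apim-Subscription-Key: $key" \
-H "Content-Type: application/json" \
-d '{"url": "https://example.com/face1.jpg"}'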
Wayne Enterprises is using LUIS in a bot to help a user book a flight.
A user may use the following utterance, “Book 2 tickets on a flight to New York for New Year’s Eve.” If we evaluate this utterance for key aspects, we can determine the user’s intent. The user wants to book a flight.
The [?] can be referred to as BookFlight.
Correct
Wayne Enterprises is using LUIS in a bot to help a user book a flight.
A user may use the following utterance, “Book 2 tickets on a flight to New York for New Year’s Eve.” If we evaluate this utterance for key aspects, we can determine the user’s intent. The user wants to book a flight.
• We can state the Intent as BookFlight.
Entities aren’t only words or phrases; they can also be simple data values. This data helps provide specific context for the utterance and aids the algorithm in more accurately identifying the intent. Not every utterance contains entities, though.
In the utterance above, we can identify entities like:
• New York: We can classify this entity as Location.Destination.
• New Year’s Eve: We can classify this entity as Event.
• The number 2: This number maps to a built-in entity. In LUIS, such an entity is known as a prebuilt entity, specifically a prebuilt number.
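As a rough sketch (the prediction resource hostname, app ID, and key are placeholders; the URL shape follows the LUIS v3 prediction API), the utterance above could be sent to a published LUIS app for prediction like this:
Azure CLI
# Hypothetical example: ask the published LUIS app to predict the intent and entities of the utterance
curl -G "https://<your-luis-resource>.cognitiveservices.azure.com/luis/prediction/v3.0/apps/<app-id>/slots/production/predict" \
--data-urlencode "subscription-key=$key" \
--data-urlencode "query=Book 2 tickets on a flight to New York for New Year's Eve"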
Wade Wilson is working in a group called Mercs for Money which has an existing frequently asked questions (FAQ) document. He needs to create a QnA Maker knowledge base that includes the questions and answers from the FAQ with the least possible effort.
What should he do?
Identify the missing word(s) in the following sentence within the context of Microsoft Azure.
The [?] is used for the following:
• Free authoring service requests to your app. This can be accomplished using the LUIS portal or through the supported SDKs.
• You are permitted 1,000 prediction endpoint requests per month for free. These requests may come in through the browser, API, or SDK.
Correct
The missing word is: Authoring key.
An authoring key is used for:
Free authoring service requests: This allows you to create, train, and publish your LUIS app using the LUIS portal or SDKs.
1,000 free prediction endpoint requests per month: These requests can be made through the browser, API, or SDK to get predictions from your published LUIS app.
What is the maximum character count in each document that you send to the Key Phrase Extraction service?
Correct
The Key Phrase Extraction API expects a well-formed JSON input. You can create a JSON file that contains an array of documents. Each document size must be 5,120 characters or less and you can have up to 1,000 documents per collection (1,000 IDs). https://docs.microsoft.com/en-us/azure/search/cognitive-search-skill-keyphrases
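As a hedged sketch (the resource hostname and key are placeholders, and the API version may differ), a key phrase extraction request sends a documents array like this, where each document text must be 5,120 characters or less:
Azure CLI
# Hypothetical example: extract key phrases from one document
curl "https://<your-resource>.cognitiveservices.azure.com/text/analytics/v3.0/keyPhrases" \
-H "Ocp-Apim-Subscription-Key: $key" \
-H "Content-Type: application/json" \
-d '{"documents": [{"id": "1", "language": "en", "text": "The food was delicious and the staff were wonderful."}]}'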
Question 20 of 65
20. Question
When using Text Moderation, under the classification of the text, what type of value is returned for the category?
Correct
The categories are rated with a value between 0 and 1, with values closer to 1 being more positive for the match.
Here’s a sample JSON response:
JSON
"Classification": {
"ReviewRecommended": true,
"Category1": {
"Score": 0.99756889843889822
},
"Category2": {
"Score": 0.12747249007225037
},
"Category3": {
"Score": 0.98799997568130493
}
} https://docs.microsoft.com/en-us/azure/cognitive-services/content-moderator/try-text-api
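As a rough sketch (the region and key are placeholders), classification scores like the ones above are returned when the text screening operation is called with classification enabled:
Azure CLI
# Hypothetical example: screen a piece of text with classification enabled
curl -X POST "https://<your-region>.api.cognitive.microsoft.com/contentmoderator/moderate/v1.0/ProcessText/Screen?classify=True" \
-H "Ocp-Apim-Subscription-Key: $key" \
-H "Content-Type: text/plain" \
-d "Is this a crude or offensive sentence?"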
Question 21 of 65
21. Question
True or False: Within the Entity Recognition service, you will find Entity Types. All Entity Types have at least one sub-type to clarify or further enhance the core type.
Correct
False.
Entity types
Many of the entity types that you encounter in this service also come with a sub-type, but not all entity types have one; sub-types are typically used to clarify or further enhance the core type. The current supported list of entities and the classes (in version 2.1 of the API) is shown in Table 1. For those items with N/A, the sub-type may be omitted, depending on input and extracted entities.
Which markup language is used to control the Speech Synthesis output for your telephone audio attendant?
Correct
Speech Synthesis Markup Language (SSML) is an XML-based markup language that lets developers specify how input text is converted into synthesized speech using the text-to-speech service. Compared to plain text, SSML allows developers to fine-tune the pitch, pronunciation, speaking rate, volume, and more of the text-to-speech output. Normal punctuation, such as pausing after a period, or using the correct intonation when a sentence ends with a question mark are automatically handled. https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/speech-synthesis-markup?tabs=csharp
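As an illustrative sketch only (the region, key, voice name, and output format are placeholders and may differ for your resource), an SSML document is submitted as the request body to the text-to-speech REST endpoint:
Azure CLI
# Hypothetical example: synthesize an audio attendant prompt from SSML and save it to a file
curl -X POST "https://<your-region>.tts.speech.microsoft.com/cognitiveservices/v1" \
-H "Ocp-Apim-Subscription-Key: $key" \
-H "Content-Type: application/ssml+xml" \
-H "X-Microsoft-OutputFormat: riff-24khz-16bit-mono-pcm" \
-d '<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US"><voice name="en-US-JennyNeural"><prosody rate="-10%">Thank you for calling. Please hold.</prosody></voice></speak>' \
--output welcome.wav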
The following are not markup languages used to control the Speech Synthesis:
SQL – Structured Query Language is a data management language, not a markup language.
JSON – JavaScript Object Notation is a data interchange format, not a markup language.
HTML & TeX – Hypertext Markup Language and TeX are markup languages but the Azure Cognitive services use SSML for the control of Speech Synthesis output.
Question 23 of 65
23. Question
Which Azure Cognitive Services can you use to build conversation AI solutions? (Select two)
Correct
To build conversation AI solutions, you can use the following Azure Cognitive Services:
Azure Bot Service: This service provides a framework for building and deploying bots that can engage in natural conversations with users.
QnA Maker: This service lets you define and publish a knowledge base of question and answer pairs that a bot can use to respond intelligently to user questions.
These two services are essential for building sophisticated conversational AI solutions. They enable you to create bots that can understand and respond to user queries in a natural and informative way.
https://docs.microsoft.com/en-us/learn/modules/build-faq-chatbot-qna-maker-azure-bot-service/1-introduction
The following are not Azure Cognitive Services which can be used to build conversation AI solutions:
• Text Analytics is one of the NLP services that helps analyze text documents. It is not a Conversation AI solution.
• LUIS is one of the NLP services that understands voice or text commands. It is not a Conversation AI solution.
• Object Detection is one of the common tasks of Computer Vision that helps recognize objects on the images. It is not a Conversation AI solution.
• Speech is one of the NLP services that helps recognize and synthesize speech. It is not a Conversation AI solution.
Question 24 of 65
24. Question
Review the following code:
JSON
{
"bindings": [
{
"name": "myQueueItem",
"type": "queueTrigger",
"direction": "in",
"queueName": "new-feedback-q",
"connection": "AzureWebJobsDashboard"
}
],
"disabled": false
}
What purpose does the code in this function.json config file serve?
If possible PII values are found by the Text Moderation API, the JSON response includes which of the following?
Correct
If possible PII values are found by the Text Moderation API, the JSON response includes relevant information about the text and the index location within the text. https://docs.microsoft.com/en-us/azure/cognitive-services/text-analytics/overview?WT.mc.id=aiapril-medium-abornst
A sample JSON response is shown here:
JSON
"PII": {
"Email": [{
"Detected": "[email protected]",
"SubType": "Regular",
"Text": "[email protected]",
"Index": 32
}],
"IPA": [{
"SubType": "IPV4",
"Text": "255.255.255.255",
"Index": 72
}],
"Phone": [{
"CountryCode": "US",
"Text": "5557789887",
"Index": 56
}, {
"CountryCode": "UK",
"Text": "+44 123 456 7890",
"Index": 208
}],
"Address": [{
"Text": "1 Microsoft Way, Redmond, WA 98052",
"Index": 89
}],
"SSN": [{
"Text": "999-99-9999",
"Index": 267
}]
}
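As a hedged sketch (the region and key are placeholders), PII detection like the sample above is requested by calling the text screening operation with the PII flag enabled:
Azure CLI
# Hypothetical example: screen text for personal data (email, phone, IP address, street address, SSN)
curl -X POST "https://<your-region>.api.cognitive.microsoft.com/contentmoderator/moderate/v1.0/ProcessText/Screen?PII=True" \
-H "Ocp-Apim-Subscription-Key: $key" \
-H "Content-Type: text/plain" \
-d "Email me at someone@example.com or call 555 778 9887."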
Question 26 of 65
26. Question
Identify the missing word(s) in the following sentence within the context of Microsoft Azure.
The Classification feature of the Text Moderation API can place text into specific groups based on certain specifications.
• [A]: Potential presence of language that might be considered sexually suggestive or mature in certain situations.
• [B]: Potential presence of language that might be considered offensive in certain situations.
• [C]: Potential presence of language that might be considered sexually explicit or adult in certain situations.
Correct
Classification
This feature of the API can place text into specific categories based on the following specifications:
• Category 1: Potential presence of language that might be considered sexually explicit or adult in certain situations.
• Category 2: Potential presence of language that might be considered sexually suggestive or mature in certain situations.
• Category 3: Potential presence of language that might be considered offensive in certain situations. https://docs.microsoft.com/en-us/azure/cognitive-services/content-moderator/try-text-api
Question 27 of 65
27. Question
When using Azure Speech Translation to translate into multiple languages, this is achieved by …
Correct
The AddTargetLanguage (or add_target_language) method is used to add target languages to the collection of languages on the speech translation configuration object.
The Speech Translation capabilities require working with some key objects:
• A SpeechTranslationConfig object that will accept
   • your subscription key and region information
   • attributes for source and target language
   • a speech output voice name
• A TranslationRecognizer object that will accept
   • the SpeechTranslationConfig object listed above
   • calling the method to start the recognition process
• A TranslationRecognitionResult object is returned for you to evaluate for the result
• A Speech Synthesizer object to play the audio output in the target language https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/speech-translation
Question 28 of 65
28. Question
True or False: Azure requires you to create the LUIS app in the same geographic location where you created the service.
Identify the missing word(s) in the following sentence within the context of Microsoft Azure.
The Emotion API provides advanced Face analysis algorithms to assign a(n) [?] for the emotions detected.
Correct
The Emotion API provides advanced Face analysis algorithms to assign a confidence level for the following emotions: anger, contempt, disgust, fear, happiness, neutral (the absence of emotion), sadness, and surprise.
Scores and locations
As in the Face API, the Emotion API associates detected faces with face locations. But the Emotion API adds a collection of values to the payload describing the likelihood of various emotions. The analysis is based on a variation of confidence levels, referred to as scores.
• Score: The likelihood or level of confidence that a face displays a specific emotion
• Location: The top, left, height, and width of a region in the image that displays a face
• Scores, just like confidence levels, are similar to percentages. Their range is 0.0 to 1.0. The higher the value, the more certain the service is that the emotion is accurate. https://docs.microsoft.com/en-us/learn/modules/identify-faces-with-computer-vision/9-introducing-emotion
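To illustrate how the scores and locations above might be consumed, here is a small Python sketch that parses a response shaped like an Emotion API result; the payload itself is invented for illustration:
Python
# Hypothetical response shaped like an Emotion API result: one entry per detected face.
response = [
    {
        "faceRectangle": {"top": 114, "left": 212, "width": 65, "height": 65},
        "scores": {
            "anger": 0.01, "contempt": 0.02, "disgust": 0.01, "fear": 0.00,
            "happiness": 0.90, "neutral": 0.04, "sadness": 0.01, "surprise": 0.01,
        },
    }
]

for face in response:
    # Scores range from 0.0 to 1.0; the highest score is the most likely emotion.
    emotion, score = max(face["scores"].items(), key=lambda item: item[1])
    rect = face["faceRectangle"]
    print(f"Face at top={rect['top']}, left={rect['left']}: {emotion} ({score:.2f})")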
Question 31 of 65
31. Question
True or False: Microsoft recommends using multiple versions for collaboration among contributors.
Correct
True.
Microsoft recommends using multiple versions for collaboration among contributors to ensure smooth development and avoid conflicts. This allows different team members to work on different parts of the project simultaneously without interfering with each other’s work. By creating and merging different versions, developers can manage changes effectively and resolve conflicts in a controlled manner.
Entity recognition environment using Visual Studio Code for Cognitive Services supports which of the following languages? (Select two)
Correct
Visual Studio Code must be installed on the local machine for your operating system. Depending on the programming language you choose, the setup will differ. The available languages are Python and C#.
Identify the missing word(s) in the following sentence within the context of Microsoft Azure.
To containerize LUIS, you require an existing LUIS application. You will not create a LUIS app inside a Docker container. LUIS offers the ability to export to a container. Before you can export the app as a container, it must be trained so ensure that you have selected the [A] button and its status is showing [B].
Correct
To containerize LUIS, you require an existing LUIS application. You will not create a LUIS app inside a Docker container. LUIS offers the ability to export to a container. Before you can export the app as a container, it must be trained so ensure that you have selected the Train button and its status is showing a green dot.
True or False: The LUIS Portal Dashboard gives a visual display to help with the evaluation and training within the LUIS app. There are status displays and suggestions for fixes.
Using Pattern Matching will help with situations where your intent score is low or if the correct intent is not the top scoring intent.
Correct
True and True.
LUIS Portal Dashboard: The LUIS portal dashboard provides valuable insights into your app’s performance, including visual representations of intent recognition accuracy, entity extraction precision, and other relevant metrics. It also offers suggestions for improvement, such as adding more training examples or refining intent definitions.
Pattern Matching: Pattern matching is a powerful technique for improving intent recognition, especially when dealing with complex or ambiguous queries. By defining specific patterns or regular expressions, you can help LUIS identify intents more accurately, even if the exact phrasing doesn't match any of the training examples. This is particularly useful when the intent score is low or the correct intent isn't the top-scoring one.
Identify the missing word(s) in the following sentence within the context of Microsoft Azure.
When using the Text Moderation, text can be at most [?] long. If the content passed to the text API or the image API exceeds the size limits, the API will return an error code that informs about the issue.
Correct
The Text Moderation API scans content as it is generated. Content Moderator then processes the content and sends the results along with relevant information either back to the user’s systems or to the built-in review tool. This information can be used to make decisions e.g. take it down, send to human judge, etc.
When using the API, images need to have a minimum of 128 pixels and a maximum file size of 4MB. Text can be at most 1024 characters long. If the content passed to the text API or the image API exceeds the size limits, the API will return an error code that informs about the issue.
Identify the missing word(s) in the following sentence within the context of Microsoft Azure.
Improving predictions can be achieved by manipulating the data stored in the LUIS app. The [?] gives a visual display to help with the evaluation and training within the LUIS app. There are status displays and suggestions for fixes.
Correct
You can use the Dashboard in the LUIS portal to evaluate the training results for your LUIS app. Various charts are used to display the status along with problems and suggestions for fixes.
Identify the missing word(s) in the following sentence within the context of Microsoft Azure.
When text is passed to the Text Moderation API, any potentially profane terms in the text are identified and returned in a [A] response. The profane item is returned as a Term in the [A] response, along with a(n) [B] showing where the term is in the supplied text.
Correct
When text is passed to the Text Moderation API, any potentially profane terms in the text are identified and returned in a JSON response. The profane item is returned as a Term in the JSON response, along with an Index value showing where the term is in the supplied text.
You can also use custom term lists with this API. In that case, if a profane term is identified in the text, a ListId is also returned to identify the specific custom word that was identified. A sample JSON response is shown here:
JSON
"Terms": [
{
"Index": 118,
"OriginalIndex": 118,
"ListId": 0,
"Term": "crap"
} https://docs.microsoft.com/en-us/azure/cognitive-services/Content-Moderator/text-moderation-api
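As an illustration, the Index values in such a response can drive simple redaction. The following Python sketch uses a hypothetical response dictionary shaped like the JSON above:
Python
text = "This is crap and should be moderated."

# Hypothetical Text Moderation response fragment shaped like the JSON sample above.
response = {"Terms": [{"Index": 8, "OriginalIndex": 8, "ListId": 0, "Term": "crap"}]}

redacted = text
# Replace each flagged term with asterisks, working from the end so indexes stay valid.
for term in sorted(response["Terms"], key=lambda t: t["Index"], reverse=True):
    start, length = term["Index"], len(term["Term"])
    redacted = redacted[:start] + "*" * length + redacted[start + length:]

print(redacted)  # This is **** and should be moderated.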
Question 39 of 65
39. Question
Identify the missing word(s) in the following sentence within the context of Microsoft Azure.
The Speech-to-Text aspect of [?] transcribes audio streams into text. The application can display this text to the user, or act upon it as command input. This service can be used either with an SDK client library (for supported platforms and languages), or a representational state transfer (REST) API.
Correct
The Speech-to-Text aspect of the Azure Speech Service transcribes audio streams into text. Your application can display this text to the user, or act upon it as command input. You can use this service either with an SDK client library (for supported platforms and languages), or a representational state transfer (REST) API.
With the Speech Service, you can:
• Extend the reach of your applications across mobile, desktop, and web to provide speech-to-text transcription.
• Easily translate to and from multiple, supported languages through the open REST interface of the Speech SDK. This API is a cloud-based, automatic speech-translation service (also known as machine translation).
• Perform Text-to-Speech operations that can accept text input and then output a spoken version of that text, using synthesized speech.
• Perform entity recognition through integration with Language Understanding (LUIS).
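A minimal Python sketch of the speech-to-text path through the SDK client library; the subscription key and region are placeholders:
Python
import azure.cognitiveservices.speech as speechsdk

# Placeholder key and region for a Speech resource.
speech_config = speechsdk.SpeechConfig(subscription="YOUR_SPEECH_KEY", region="westus2")

# Uses the default microphone; an AudioConfig could point at an audio file instead.
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)
result = recognizer.recognize_once()

if result.reason == speechsdk.ResultReason.RecognizedSpeech:
    print("Transcription:", result.text)  # display the text or act on it as command input
elif result.reason == speechsdk.ResultReason.NoMatch:
    print("No speech could be recognized.")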
Identify the missing word(s) in the following sentence within the context of Microsoft Azure.
[?] is a Cognitive Service designed to help you extract information from text. Through the service you can identify language, discover sentiment, extract key phrases, and detect well-known entities from text.
Correct
Note that Text Analytics does not determine the intent of a given group of text; intent detection is the role of Language Understanding (LUIS).
Text Analytics API is a Cognitive Service designed to help you extract information from text. The Text Analytics API is a cloud-based service that provides advanced natural language processing over raw text, and includes four main functions: sentiment analysis, key phrase extraction, named entity recognition, and language detection.
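A brief Python sketch of those four functions using the azure-ai-textanalytics client library, which is one way to call the service; the endpoint and key are placeholders and method names can vary between library versions:
Python
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

# Placeholder endpoint and key for a Text Analytics / Language resource.
client = TextAnalyticsClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("YOUR_KEY"))

documents = ["The new Surface laptop announced in Seattle looks fantastic."]

print(client.detect_language(documents)[0].primary_language.name)          # language detection
print(client.analyze_sentiment(documents)[0].sentiment)                    # sentiment analysis
print(client.extract_key_phrases(documents)[0].key_phrases)                # key phrase extraction
print([e.text for e in client.recognize_entities(documents)[0].entities])  # named entity recognition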
Identify the missing word(s) in the following sentence within the context of Microsoft Azure.
Transitions within a video are detected which determine how it is divided. [?] are consecutive frames taken from the same camera at the same time. The metadata includes a start and end time, as well as the list of keyframes.
Correct
Shot
Transitions within the video are detected which determine how it is split into shots. Video Indexer determines when a shot changes in the video based on visual cues, by tracking both abrupt and gradual transitions in the colour scheme of adjacent frames. The shot’s metadata includes a start and end time, as well as the list of keyframes included in that shot. The shots are consecutive frames taken from the same camera at the same time.
Diana Prince is working in a new role in Wayne Enterprises. Her task is retrieving handwritten text from an image. Using Azure CLI, she has executed the following code:
Azure CLI
curl "https://westus2.api.cognitive.microsoft.com/vision/v2.0/recognizeText?mode=Handwritten" \
-H "Ocp-Apim-Subscription-Key: $key" \
-H "Content-Type: application/json" \
-d "{'url' : 'https://raw.githubusercontent.com/MicrosoftDocs/mslearn-process-images-with-the-computer-vision-service/master/images/handwriting.jpg'}" \
-D -
The above dumps the headers of this operation to the console.
Azure CLI
HTTP/1.1 202 Accepted
Cache-Control: no-cache
Pragma: no-cache
Content-Length: 0
Expires: -1
Operation-Location: https://westus2.api.cognitive.microsoft.com/vision/v2.0/textOperations/d0e9b397-4072-471c-ae61-7490bec8f077
X-AspNet-Version: 4.0.30319
X-Powered-By: ASP.NET
apim-request-id: f5663487-03c6-4760-9be7-c9157fac10a1
Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
x-content-type-options: nosniff
Date: Wed, 12 Sep 2020 19:22:00 GMT
What does Operation-Location refer to?
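For context, the Operation-Location header carries the URL of the asynchronous operation, which the client polls with the same subscription key until the recognition result is ready. A rough Python sketch follows; the key is a placeholder and the exact response fields depend on the API version:
Python
import time
import requests

key = "YOUR_SUBSCRIPTION_KEY"  # placeholder
operation_url = ("https://westus2.api.cognitive.microsoft.com/vision/v2.0/"
                 "textOperations/d0e9b397-4072-471c-ae61-7490bec8f077")

# Poll the operation URL until the asynchronous recognizeText call finishes.
while True:
    response = requests.get(operation_url,
                            headers={"Ocp-Apim-Subscription-Key": key})
    body = response.json()
    if body.get("status") in ("Succeeded", "Failed"):
        break
    time.sleep(1)

print(body)  # on success, contains the recognized text lines and their bounding boxes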
True or False: Thanks to Machine Learning, the LUIS app can make inferences from a variety of labels to determine the intended label, even if there are conflicting label names.
Correct
True.
Thanks to machine learning, the LUIS app can indeed make inferences from a variety of labels to determine the intended label, even if there are conflicting label names. This is due to its ability to learn from the provided training data and identify patterns.
For example, if you have labels like “order pizza” and “cancel pizza order,” and a user says “I don’t want the pizza,” LUIS can use its understanding of language and context to infer that the intent is to “cancel pizza order,” even though the exact phrase doesn’t match.
Identify the missing word(s) in the following sentence within the context of Microsoft Azure.
When you create a LUIS app, you are using the [A] process. You create Intents as part of your core LUIS [B].
Correct
When you create a LUIS app, you are using the Authoring process. You create Intents as part of your core LUIS Configuration. Within those Intents, you create Utterances.
The first storage location for utterance data is within the intents of your LUIS app. As a result, you manage the utterances through the LUIS intents. You can create new utterances and delete existing utterances as part of the LUIS app lifecycle. Retraining of the model is necessary if you add or delete utterances.
When users interact with your LUIS app through the endpoint URL, the phrases that they enter can be stored in the endpoint utterances. For the entries to be stored, the request’s query string parameter for logging must be set to true, log=true. The phrases can be found in the Review endpoint utterances page of the Build tab for a specific LUIS app.
Within the entries that are available on this page, you can review the aligned intent and decide if you need to make changes to your model and retrain, or if you want to add or remove these utterances to the intents that already exist and show as aligned.
For example, if you find that an aligned intent is what you believe is correct, you can select the check mark to Add the utterance to your intent storage in the app, or you can select the X to Delete the utterance from the app.
If you select the Add option, the utterance will be removed from the Review endpoint utterances location and will then be available in the utterances for that Intent in the app.
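A small Python sketch of an endpoint query that sets log=true so the phrase shows up under Review endpoint utterances; the region, app ID, endpoint key, and utterance are placeholders:
Python
import requests

# Placeholder values for a published LUIS app.
region = "westus"
app_id = "YOUR_APP_ID"
endpoint_key = "YOUR_ENDPOINT_KEY"

url = f"https://{region}.api.cognitive.microsoft.com/luis/v2.0/apps/{app_id}"
params = {
    "subscription-key": endpoint_key,
    "q": "I would like to order prints",
    "log": "true",   # log=true stores the utterance for review in the LUIS portal
}

prediction = requests.get(url, params=params).json()
print(prediction.get("topScoringIntent"))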
Identify the missing word(s) in the following sentence within the context of Microsoft Azure.
The Face API tasks fall into five categories. Which two categories are described below?
• [A]: Search and identify faces.
• [B]: Check the likelihood that two faces belong to the same person.
Correct
The Face API tasks fall into five categories:
• Verification: Check the likelihood that two faces belong to the same person.
• Detection: Detect human faces in an image.
• Identification: Search and identify faces.
• Similarity: Find similar faces.
• Grouping: Organize unidentified faces into groups, based on their visual similarity.
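A rough Python sketch of the Detection and Verification categories against the Face REST API; the endpoint, key, and image URLs are placeholders:
Python
import requests

# Placeholder Face resource endpoint and key.
endpoint = "https://westus2.api.cognitive.microsoft.com"
headers = {"Ocp-Apim-Subscription-Key": "YOUR_FACE_KEY"}

def detect_face(image_url):
    # Detection: detect human faces in an image and return the first faceId.
    response = requests.post(f"{endpoint}/face/v1.0/detect",
                             headers=headers,
                             params={"returnFaceId": "true"},
                             json={"url": image_url})
    return response.json()[0]["faceId"]

face1 = detect_face("https://example.com/person-a.jpg")   # placeholder image URLs
face2 = detect_face("https://example.com/person-b.jpg")

# Verification: check the likelihood that the two faces belong to the same person.
verify = requests.post(f"{endpoint}/face/v1.0/verify",
                       headers=headers,
                       json={"faceId1": face1, "faceId2": face2})
print(verify.json())  # e.g. {"isIdentical": false, "confidence": 0.2}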
Determine the order of the following steps for calling the Computer Vision API.
1. Parse the response
2. Get an API access key
3. Make a POST call to the API
Correct
How to call the Computer Vision API
You call Computer Vision in your application using client libraries or the REST API directly. We’ll call the REST API in this module. To make a call:
1. Get an API access key
You are assigned access keys when you sign up for a Computer Vision service account. A key must be passed in the header of every request.
2. Make a POST call to the API
Format the URL as follows: region.api.cognitive.microsoft.com/vision/v2.0/resource/[parameters]
• region – the region where you created the account, for example, westus.
• resource – the Computer Vision resource you are calling such as analyze, describe, generateThumbnail, ocr, models, recognizeText, tag.
You can supply the image to be processed either as a raw image binary or an image URL.
The request header must contain the subscription key, which provides access to this API.
3. Parse the response
The response holds the insight the Computer Vision API has about your image, as a JSON payload. https://us.flow.microsoft.com/en-us/connectors/shared_cognitiveservicescomputervision/computer-vision-api/
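The same three steps in a short Python sketch against the analyze resource; the key, region, and image URL are placeholders:
Python
import requests

# Step 1: the API access key assigned to your Computer Vision account (placeholder).
key = "YOUR_COMPUTER_VISION_KEY"
region = "westus"

# Step 2: make a POST call, passing the key in the request header.
url = f"https://{region}.api.cognitive.microsoft.com/vision/v2.0/analyze"
response = requests.post(
    url,
    headers={"Ocp-Apim-Subscription-Key": key, "Content-Type": "application/json"},
    params={"visualFeatures": "Description,Tags"},
    json={"url": "https://example.com/sample.jpg"})  # or send the raw image binary instead

# Step 3: parse the JSON response.
analysis = response.json()
print(analysis.get("description", {}).get("captions"))
print([tag["name"] for tag in analysis.get("tags", [])])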
Question 47 of 65
47. Question
When you’re satisfied with your trained knowledge base, you can publish it so that client applications can use it over its REST interface. To access the knowledge base, client applications require which of the following? (Select three)
Correct
Publish the knowledge base
When you’re satisfied with your trained knowledge base, you can publish it so that client applications can use it over its REST interface. To access the knowledge base, client applications require:
• The knowledge base ID
• The knowledge base endpoint
• The knowledge base authorization key https://docs.microsoft.com/en-us/azure/cognitive-services/QnAMaker/how-to/test-knowledge-base?tabs=v1
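A minimal Python sketch of a client call once the knowledge base is published, using those three values; the knowledge base ID, endpoint host, and key shown are placeholders:
Python
import requests

# Placeholder values obtained from the QnA Maker "Publish" page.
kb_id = "YOUR_KNOWLEDGE_BASE_ID"
endpoint_host = "https://your-qna-resource.azurewebsites.net"
endpoint_key = "YOUR_ENDPOINT_KEY"

url = f"{endpoint_host}/qnamaker/knowledgebases/{kb_id}/generateAnswer"
headers = {
    "Authorization": f"EndpointKey {endpoint_key}",
    "Content-Type": "application/json",
}

response = requests.post(url, headers=headers, json={"question": "How do I reset my password?"})
for answer in response.json().get("answers", []):
    print(answer["score"], answer["answer"])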
Question 48 of 65
48. Question
The Entity Recognition service supports which two types of recognition? (Select two)
Correct
The Entity Recognition skill extracts entities of different types from text. This skill uses the machine learning models provided by Text Analytics in Cognitive Services.
You can provide the service with unstructured text and it will return a list of entities, or items in the text that it recognizes. The service can also provide links to more information about that entity on the web. An entity is essentially a type or a category that certain text elements can fall under.
The service supports two types of recognition.
Named entity recognition
Named entity recognition provides the ability to recognize and identify items in text that are categorized according to some pre-defined classes. Version 3, which is in preview, will add the ability to identify more items such as personal and/or sensitive information like phone numbers, social security numbers, email addresses, and bank account numbers.
Entity linking
The entity linking feature helps to remove ambiguity that may exist around an identified entity. A document may contain an entity such as ARES, which could mean the Greek god of war or it could be an acronym for Amateur Radio Emergency Services. Text Analytics is not able to make this linking on its own. It requires a knowledge base, in the required language, to provide the necessary recognition. This is a way to customize linked entities to your own organization's list of entity elements. https://docs.microsoft.com/en-us/azure/search/cognitive-search-skill-entity-recognition
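To illustrate the difference, here is a short Python sketch using the azure-ai-textanalytics client library (one way of calling these Text Analytics models) that runs named entity recognition next to entity linking; the endpoint and key are placeholders:
Python
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

# Placeholder endpoint and key for a Text Analytics / Language resource.
client = TextAnalyticsClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("YOUR_KEY"))

documents = ["ARES volunteers assisted during the storm in Seattle."]

# Named entity recognition: categorize items against pre-defined classes.
for entity in client.recognize_entities(documents)[0].entities:
    print(entity.text, entity.category)

# Entity linking: disambiguate entities against a knowledge base and return reference links.
for entity in client.recognize_linked_entities(documents)[0].entities:
    print(entity.name, entity.url)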
Question 49 of 65
49. Question
What is the importance of the --mount type=bind,src=c:\output,target=/output argument in the Docker run command?
Identify the missing word(s) in the following sentence within the context of Microsoft Azure.
The following content has been prepared for utterances in a LUIS model.
• Print this picture
• I would like to order prints
• Can I get an 8×10 of that one?
• Order wallets
Which would be the best Utterance to add this content to?
Correct
The following values are the utterances best suited for training the OrderPic intent, as they are the most relevant to that intent's purpose.
• Print this picture
• I would like to order prints
• Can I get an 8×10 of that one?
• Order wallets
True or False: Using entity recognition it is possible to classify or redact content automatically.
Correct
Using entity recognition it is possible to classify or redact content automatically.
Entity recognition is one part of the Text Analytics set of APIs in Azure Cognitive Services. Using pre-built entity types, such as Person, Location, Organization, and others, the service evaluates the content of documents and returns information related to these entities.
You can use Named Entity Recognition to identify personal and sensitive information in documents. Use the data to classify documents or redact them so they can be shared safely. https://docs.microsoft.com/en-us/azure/cognitive-services/text-analytics/named-entity-types?tabs=general
Question 53 of 65
53. Question
Identify the missing word(s) in the following sentence within the context of Microsoft Azure.
There are various [A] and [B] options that you can use to create and manage LUIS apps. The support varies by programming language.
Correct
There are various APIs and SDK options that you can use to create and manage LUIS apps. The support varies by programming language. Some aspects are available via REST and others via the SDK options. For current support of the SDK and API options, view the sample code and information found in the LUIS samples GitHub repository. https://github.com/Azure-Samples/cognitive-services-language-understanding
Question 54 of 65
54. Question
Identify the missing word(s) in the following sentence within the context of Microsoft Azure.
[?] utilizes Azure Cognitive Services by providing a Speech-to-Text integration, allowing users to interact with the LUIS app through spoken phrases. The Speech-to-Text Service will recognize the speech patterns and output the phrases as text, which is then passed to the LUIS app, as text input utterances.
Correct
Speech priming utilizes Azure Cognitive Services by providing a speech-to-text integration, allowing users to interact with the LUIS app through spoken phrases. The Speech to text service will recognize the speech patterns and output the phrases as text, which is then passed to the LUIS app, as text input utterances.
Identify the missing word(s) in the following sentence within the context of Microsoft Azure.
The Speech Translation API lets you add End-to-End, Real-time, multi-language translation of speech to your applications, tools, and devices. The same API can be used for both [A] and [B] translation.
Correct
The missing words are:
[A] Speech-to-Speech, [B] Speech-to-Text
The same Speech Translation API can be used both for speech-to-speech translation, where the translated speech is synthesized as audio output, and for speech-to-text translation, where the translated speech is returned as text.
When you authorize requests to the Custom Vision Training API, either via the query string or the request header, which of these values do you need to supply?
Correct
Every programmatic call to the Custom Vision Training API requires that you pass a training key (“Training-Key”) to the service. You can pass this key via a value in the query string parameter or specify it in the request header. “Prediction-Key” is used for the Prediction API, and “Ocp-Apim-Subscription-Key” is used for the Computer Vision, Face, and Emotion APIs. A generic “Subscription-Key” value isn’t used in Azure Cognitive Vision Services. https://docs.microsoft.com/en-us/azure/cognitive-services/Custom-Vision-Service/
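A tiny Python sketch showing the Training-Key header on a Training API call; the key and endpoint are placeholders, and the exact route shown assumes the v3.0 Training API:
Python
import requests

# Placeholder training key and regional endpoint for a Custom Vision Training resource.
training_key = "YOUR_TRAINING_KEY"
endpoint = "https://southcentralus.api.cognitive.microsoft.com"

# Every Training API call carries the Training-Key header (or the same value as a query parameter).
response = requests.get(f"{endpoint}/customvision/v3.0/training/projects",
                        headers={"Training-Key": training_key})

for project in response.json():
    print(project["id"], project["name"])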
Question 57 of 65
57. Question
True or False: Once a Custom Vision model has been trained and published, changing the model can be a complicated procedure. It is recommended to create a new model if changes are required to the configuration of a Custom Vision model. If changes are expected, it is advised to create a Computer Vision model.
Correct
The Custom Vision service web portal is an easy way to train a model by uploading tagged images. This approach is the most common way to train, test, and publish a model. But sometimes a business need might require a model to be prepared (or retrained) based on incoming data available to the applications that are using the service. In these cases, the app can use the Training API to simply add and tag new images and publish a new iteration of the Custom Vision service. https://docs.microsoft.com/en-ca/learn/modules/evaluate-requirements-for-custom-computer-vision-api/5-examine-the-custom-vision-training-api
Question 58 of 65
58. Question
True or False: When Bot Training is engaged, it executes as soon as it is engaged.
Correct
Training is not always immediate. Sometimes it gets queued and can take several minutes. You can set up a loop or other configuration of your choosing to periodically check the training status before you place a call to the publish operation. To check the status of the train operation, you could use code such as the following. You should look for a status of Success or UpToDate before publishing. https://docs.microsoft.com/en-us/azure/machine-learning/concept-train-machine-learning-model
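The referenced code is not reproduced in this test, so the following is only a rough Python sketch of such a status-check loop against the LUIS authoring API; the authoring key, region, app ID, and version are placeholders, and the route assumes the v2.0 authoring API:
Python
import time
import requests

# Placeholder values for a LUIS app and its authoring resource.
authoring_key = "YOUR_AUTHORING_KEY"
region = "westus"
app_id = "YOUR_APP_ID"
version_id = "0.1"

status_url = (f"https://{region}.api.cognitive.microsoft.com/"
              f"luis/api/v2.0/apps/{app_id}/versions/{version_id}/train")
headers = {"Ocp-Apim-Subscription-Key": authoring_key}

# Poll until every model reports Success or UpToDate, then it is safe to publish.
while True:
    details = requests.get(status_url, headers=headers).json()
    statuses = {item["details"]["status"] for item in details}
    if statuses <= {"Success", "UpToDate"}:
        break
    time.sleep(2)

print("Training complete; ready to publish.")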
Question 59 of 65
59. Question
Identify the missing word(s) in the following sentence within the context of Microsoft Azure.
[?] identifies temporal segments within the video to improve how you browse and edit indexed videos.
The key aspects are extracted based on changes in colour, contrast, and other semantic properties.
Some of the operations that [?] can perform on media files include:
• Identifying and extracting speech and identifying speakers.
• Identifying and extracting on-screen text in a video.
• Detecting objects in a video file.
• Identify brands (for example: Microsoft) from audio tracks and on-screen text in a video.
• Detecting and recognizing faces from a database of celebrities and a user-defined database of faces.
• Extracting topics discussed but not necessarily mentioned in audio and video content.
• Creating closed captions or subtitles from the audio track.
Correct
Video Indexer identifies temporal segments within the video to improve how you browse and edit indexed videos. The key aspects are extracted based on changes in colour, contrast, and other semantic properties.
The detected segments are organized as a hierarchy. Video Indexer processes the video file into one or more scenes. Each scene is made up of one or more shots. Within the shot, keyframes are determined.
Some of the operations that Video Indexer can perform on media files include:
• Identifying and extracting speech and identifying speakers.
• Identifying and extracting on-screen text in a video.
• Detecting objects in a video file.
• Identify brands (for example: Microsoft) from audio tracks and on-screen text in a video.
• Detecting and recognizing faces from a database of celebrities and a user-defined database of faces.
• Extracting topics discussed but not necessarily mentioned in audio and video content.
• Creating closed captions or subtitles from the audio track. https://docs.microsoft.com/en-us/azure/media-services/video-indexer/faq
Question 60 of 65
60. Question
Identify the missing word(s) in the following sentence within the context of Microsoft Azure.
To access your Content Moderator resource, you’ll need a [?] key.
Correct
To access your Content Moderator resource, you’ll need a subscription key.
Identify the missing word(s) in the following sentence within the context of Microsoft Azure.
LUIS makes use of three key aspects for understanding language:
• [A]: Input from the user that your app needs to interpret.
• [B]: Represents a task or action the user wants to do. It’s a purpose or goal expressed in a user’s input.
• [C]: Represents a word or phrase inside the input that you want to extract.
Correct
LUIS makes use of three key aspects for understanding language:
• Utterances: An utterance is input from the user that your app needs to interpret.
• Intents: An intent represents a task or action the user wants to do. It’s a purpose or goal expressed in a user’s utterance.
• Entities: An entity represents a word or phrase inside the utterance that you want to extract.
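As an illustration only (the endpoint, app ID, prediction key, and sample utterance below are placeholders, and the v3 prediction URL shape should be verified against the LUIS documentation), a prediction call takes an utterance and returns the top intent and any extracted entities:

# Sketch: send an utterance to a published LUIS app and read the top intent and
# entities from the prediction. Endpoint, app ID and key are placeholders.
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
APP_ID = "<your-app-id>"                                          # placeholder
PREDICTION_KEY = "<your-prediction-key>"                          # placeholder

utterance = "Book me a flight to Paris tomorrow"
url = f"{ENDPOINT}/luis/prediction/v3.0/apps/{APP_ID}/slots/production/predict"
params = {"subscription-key": PREDICTION_KEY, "query": utterance, "verbose": "true"}

prediction = requests.get(url, params=params).json()["prediction"]
print("Top intent:", prediction["topIntent"])   # e.g. a hypothetical BookFlight intent
print("Entities:", prediction["entities"])      # words/phrases extracted from the utterance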
Question 62 of 65
62. Question
What is the following code designed to do?
https://<region>.api.cognitive.microsoft.com/vision/v2.0/recognizeText?mode=<...>
Correct
The recognizeText operation detects and extracts handwritten text from notes, letters, essays, whiteboards, forms, and other sources. The request URL has the following format:
https://<region>.api.cognitive.microsoft.com/vision/v2.0/recognizeText[?mode]
All calls must be made to the region where the account was created.
If present, the mode parameter must be set to Handwritten or Printed and is case-sensitive. If the parameter is set to Handwritten or is not specified, handwriting recognition is performed. If the parameter is set to Printed, then printed text recognition is performed. The time it takes to get a result from this call depends on the amount of writing in the image. https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/concept-recognizing-text
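A hedged sketch of how this asynchronous operation is typically called: submit the image, then poll the URL returned in the Operation-Location header until the result is ready. The region, key, and image URL are placeholders.

# Sketch: submit an image to recognizeText, then poll the Operation-Location URL
# returned in the response headers until the text results are available.
import time
import requests

REGION = "westus"                             # must match the resource's region
SUBSCRIPTION_KEY = "<your-subscription-key>"  # placeholder

submit_url = f"https://{REGION}.api.cognitive.microsoft.com/vision/v2.0/recognizeText"
headers = {"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
           "Content-Type": "application/json"}

# mode is case-sensitive: "Handwritten" or "Printed"
response = requests.post(submit_url, headers=headers,
                         params={"mode": "Handwritten"},
                         json={"url": "https://example.com/handwriting.jpg"})  # placeholder image
response.raise_for_status()

# The service accepts the request and returns an Operation-Location header to poll.
operation_url = response.headers["Operation-Location"]
while True:
    result = requests.get(operation_url, headers=headers).json()
    if result.get("status") in ("Succeeded", "Failed"):
        break
    time.sleep(1)
print(result)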
Question 63 of 65
63. Question
Managing keys for your Language Understanding (LUIS) service involves an understanding of the two services and API sets that LUIS has, along with resources included with LUIS. LUIS uses an [A] service/API and a [B] service/API.
Correct
Managing keys for your Language Understanding (LUIS) service involves an understanding of the two services and API sets that LUIS has, along with resources included with LUIS. LUIS uses an authoring service/API and a prediction service/API.
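As a rough sketch of how the two key types differ in practice (the authoring URL shape and all values below are assumptions to check against the LUIS reference), the authoring key authenticates design-time calls, while the prediction key authenticates runtime calls such as the prediction request sketched earlier:

# Sketch: the authoring key authenticates design-time calls (creating apps,
# adding intents/entities, training, publishing); the prediction key
# authenticates runtime prediction calls. All values below are placeholders.
import requests

AUTHORING_ENDPOINT = "https://<authoring-resource>.cognitiveservices.azure.com"  # placeholder
AUTHORING_KEY = "<your-authoring-key>"                                           # placeholder

# Example design-time call: list the LUIS apps owned by this authoring resource
# (URL shape assumed from the v3.0-preview authoring API).
apps = requests.get(
    f"{AUTHORING_ENDPOINT}/luis/authoring/v3.0-preview/apps/",
    headers={"Ocp-Apim-Subscription-Key": AUTHORING_KEY},
).json()
print([app["name"] for app in apps])

# Runtime calls go to the prediction endpoint with the prediction key instead,
# e.g. .../luis/prediction/v3.0/apps/{app_id}/slots/production/predict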
Question 64 of 65
64. Question
This cognitive service enables you to quickly build a knowledge base of questions and answers that can form the basis of a dialog between a human and an AI agent.
Correct
The service described is QnA Maker.
Conversational AI in Microsoft Azure
To create conversational AI solutions on Microsoft Azure, you can use the following services:
• QnA Maker: this cognitive service enables you to quickly build a knowledge base of questions and answers that can form the basis of a dialog between a human and an AI agent (a minimal query sketch follows below).
• Azure Bot Service: this service provides a framework for developing, publishing, and managing bots in Azure.
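A minimal sketch of querying a published QnA Maker knowledge base; the runtime host, knowledge base ID, endpoint key, and question are placeholders, and the generateAnswer route is taken from the QnA Maker runtime documentation:

# Sketch: ask a question against a published QnA Maker knowledge base.
# Runtime host, knowledge base ID and endpoint key are placeholders.
import requests

RUNTIME_HOST = "https://<your-qna-resource>.azurewebsites.net"  # placeholder
KB_ID = "<your-knowledge-base-id>"                              # placeholder
ENDPOINT_KEY = "<your-endpoint-key>"                            # placeholder

url = f"{RUNTIME_HOST}/qnamaker/knowledgebases/{KB_ID}/generateAnswer"
headers = {"Authorization": f"EndpointKey {ENDPOINT_KEY}",
           "Content-Type": "application/json"}

response = requests.post(url, headers=headers,
                         json={"question": "What are your opening hours?"})
for answer in response.json().get("answers", []):
    print(answer["score"], answer["answer"])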
Question 65 of 65
65. Question
Extracting insights from videos starts with uploading and indexing the videos. Azure Video Indexer provides several options for uploading videos: upload from URL, send file as byte array, or reference existing asset ID. Many file formats are supported including which of the following? (Select four)
Correct
Extracting insights from videos starts with uploading and indexing the videos. Azure Video Indexer provides several options for uploading videos: upload from URL, send the file as a byte array, or reference an existing asset ID. Many file formats are supported; see the Media Encoder Standard formats documentation linked below for the full list. (A minimal upload-by-URL sketch follows below.)
The preferred option for uploading your video is to use a URL. Your video URL should point directly to a supported media file. You cannot upload media by referencing a webpage, such as a video page on youtube.com. If the file requires an access token, it should be included in the URI. https://docs.microsoft.com/en-us/azure/media-services/latest/media-encoder-standard-formats
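For illustration only (the location, account ID, access token, and media URL are placeholders, and the Upload Video route should be checked against the Video Indexer API reference), uploading by URL might look like this:

# Sketch: index a video by pointing Video Indexer at a direct media URL.
# Location, account ID, access token and the media URL are placeholders.
import requests

LOCATION = "trial"                    # or an Azure region, e.g. "westeurope"
ACCOUNT_ID = "<your-account-id>"      # placeholder
ACCESS_TOKEN = "<your-access-token>"  # from the Get Account Access Token API
VIDEO_URL = "https://example.com/videos/sample.mp4"  # must point directly at a media file

upload_url = f"https://api.videoindexer.ai/{LOCATION}/Accounts/{ACCOUNT_ID}/Videos"
params = {
    "accessToken": ACCESS_TOKEN,
    "name": "sample-video",
    "videoUrl": VIDEO_URL,      # preferred option: upload from URL
    "privacy": "Private",
}
response = requests.post(upload_url, params=params)
response.raise_for_status()
print("Video ID:", response.json()["id"])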