Azure AI Engineer Associate Practice Test 9 (AI-100)
Question 1 of 65
The Key Phrase Extraction service can be used from either C# or Python. If Python is used to call the extraction service, in which language will the response be returned?
Identify the missing word(s) in the following sentence within the context of Microsoft Azure.
When creating a LUIS application and creating “new intents”, you should always provide at least [?] example utterances for each intent.
When creating a LUIS application and creating “new intents”, each intent needs to have example utterances, at least 15. If you have an intent that does not have any example utterances, you will not be able to train LUIS. If you have an intent with only one or very few example utterances, LUIS may not accurately predict the intent.
Steve Rogers is learning all about text analytics, and he wants to use text analytics to analyze his text. Which services does text analytics provide? (Select four)
Text analytics is part of natural language processing. It includes the following services:
• Sentiment analysis
• Key phrase extraction
• Entity recognition
• Language detection
Azure resources for the Text Analytics service
To use the Text Analytics service in an application, you must provision an appropriate resource in your Azure subscription. You can choose to provision either of the following types of resource:
• A Text Analytics resource – choose this resource type if you only plan to use the Text Analytics service, or if you want to manage access and billing for the resource separately from other services.
• A Cognitive Services resource – choose this resource type if you plan to use the Text Analytics service in combination with other cognitive services, and you want to manage access and billing for these services together.
The following are not services that text analytics provides:
• Authoring is the process of creating entities, intents, and model training for LUIS applications.
• Translator Text is part of natural language processing, but it is a separate service.
• Alternative phrasing is used in creating knowledge bases for the QnA Maker service.
None of these are part of the Text Analytics service. https://docs.microsoft.com/en-us/learn/modules/analyze-text-with-text-analytics-service/2-get-started-azure
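As a rough illustration (not part of the original explanation; the resource endpoint, subscription key, and sample text are placeholder assumptions), the four services map to separate operations of the Text Analytics v3.0 REST API on a single resource:
Python
import requests

# Placeholders: substitute your own resource endpoint and subscription key.
endpoint = "https://<your-resource>.cognitiveservices.azure.com"
headers = {"Ocp-Apim-Subscription-Key": "<your-key>", "Content-Type": "application/json"}
docs = {"documents": [{"id": "1", "language": "en",
                       "text": "Steve Rogers enjoyed visiting the Microsoft office in Redmond."}]}

# Sentiment analysis, key phrase extraction, and entity recognition.
for operation in ("sentiment", "keyPhrases", "entities/recognition/general"):
    result = requests.post(f"{endpoint}/text/analytics/v3.0/{operation}",
                           headers=headers, json=docs)
    print(operation, result.json())

# Language detection: here the document's language is what the call determines.
lang_docs = {"documents": [{"id": "1", "text": "La sede de Microsoft está en Redmond."}]}
print("languages", requests.post(f"{endpoint}/text/analytics/v3.0/languages",
                                 headers=headers, json=lang_docs).json())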
Question 4 of 65
What two values do you need to know in order to make a call to the Prediction API from client code?
Clark Kent works at the Daily Planet, and he needs to make the organization’s automated telephone attendant read voice menus in human-like voices. He has opted to use the Neural voice for the solution.
Does this decision help to achieve the goal?
The Azure Text-to-Speech service provides two options for the voice:
• Standard voices – Created using Statistical Parametric Synthesis and/or Concatenation Synthesis techniques. These voices are highly intelligible and sound natural. You can easily enable your applications to speak in more than 45 languages, with a wide range of voice options. These voices provide high pronunciation accuracy, including support for abbreviations, acronym expansions, date/time interpretations, polyphones, and more. For a full list of standard voices, see supported languages.
• Neural voices – Deep neural networks are used to overcome the limits of traditional speech synthesis with regard to stress and intonation in spoken language. Prosody prediction and voice synthesis are performed simultaneously, which results in more fluid and natural-sounding output. Neural voices can be used to make interactions with chatbots and voice assistants more natural and engaging, convert digital texts such as e-books into audiobooks, and enhance in-car navigation systems. With their human-like natural prosody and clear articulation of words, neural voices significantly reduce listening fatigue when users interact with AI systems. For a full list of neural voices, see supported languages. https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/text-to-speech
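As a minimal sketch of how such a solution might select a neural voice with the Speech SDK (the subscription key, region, voice name, and prompt text below are placeholder assumptions):
Python
import azure.cognitiveservices.speech as speechsdk

# Placeholders: substitute your own key and region.
speech_config = speechsdk.SpeechConfig(subscription="<your-key>", region="<your-region>")
speech_config.speech_synthesis_voice_name = "en-US-AriaNeural"  # a neural voice

synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
result = synthesizer.speak_text_async(
    "Welcome to the Daily Planet. Press 1 for the newsroom.").get()
print(result.reason)  # ResultReason.SynthesizingAudioCompleted on success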
Question 6 of 65
True or False: The recommended method for creating a new LUIS app version is to overwrite the current version, as there is built-in version control that will allow rolling the app back to any point in time if needed.
Each version is a snapshot in time of the LUIS app. Before you make changes to the app, create a new version. It is easier to go back to an older version than to try to remove intents and utterances to restore a previous state.
The recommended method for creating a new LUIS app version is to clone an existing version, then make changes to the cloned app and save it as a new version.
Identify the missing word(s) in the following sentence within the context of Microsoft Azure.
With this Starter key, you are permitted [?] prediction endpoint requests per month for free. These requests may come in through the browser, API, or SDK.
When you first create a LUIS application, a starter key is created for you automatically. You can use this starter key for the following:
• Free authoring service requests to your app. This can be accomplished using the LUIS portal or through the supported SDKs.
• With this starter key, you are permitted 1,000 prediction endpoint requests per month for free. These requests may come in through the browser, API, or SDK. https://docs.microsoft.com/en-us/azure/cognitive-services/luis/luis-how-to-azure-subscription
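As an illustrative sketch (the app ID, starter key, and query below are placeholder assumptions), each request like the following counts against the 1,000 free monthly prediction endpoint requests:
Python
import requests

# Placeholders: substitute your own region, LUIS app ID, and starter key.
endpoint = "https://<your-region>.api.cognitive.microsoft.com"
app_id = "<your-luis-app-id>"
params = {"subscription-key": "<your-starter-key>",
          "query": "book a flight to New York"}

response = requests.get(
    f"{endpoint}/luis/prediction/v3.0/apps/{app_id}/slots/production/predict",
    params=params)
print(response.json())  # includes the top-scoring intent and any entities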
Question 8 of 65
In both the Face API and the Emotion API, which of the following terms describes the rectangular coordinates of a face that’s detected in an image?
Identify the missing word(s) in the following sentence within the context of Microsoft Azure.
When you consider creating a LUIS app, you should ensure that you have a(n) [?] in mind. The [?] is typically focused on a domain, which is the subject or topic the app will focus on.
When you consider creating a LUIS app, you should ensure that you have a schema in mind. The schema is typically focused on a domain, which is the subject or topic the app will focus on. You may decide to create a LUIS app for a travel domain that would focus on trip planning and execution. The schema defines what your users might be asking for (intents). It also identifies which parts of the intent contain the detailed information (entities) that help determine answers to the intents. Your schema will go through iterations as you manage the LUIS app versions.
Identify the missing word(s) in the following sentence within the context of Microsoft Azure.
Each time a Custom Vision Service model is trained, [?].
Each time a Custom Vision Service model is trained, a new iteration is created. The Custom Vision Service maintains several iterations, allowing you to compare progress over time.
The results show two measures of a model’s accuracy: Precision and Recall.
Suppose the model was presented with three Picasso images and three from Rembrandt. Say it correctly identified two of the Picasso samples as “Picasso” images, but also incorrectly identified two of the Rembrandt samples as Picasso. In this case, the Precision would be 50%, because only two of the four images it labeled as Picasso were correct. The Recall would be 67%, because it correctly identified two of the three Picasso images. https://azure.microsoft.com/en-us/services/cognitive-services/custom-vision-service/
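A quick check of that arithmetic (not part of the original explanation):
Python
# Counts from the Picasso/Rembrandt example above.
true_positives = 2   # Picasso images correctly labeled "Picasso"
false_positives = 2  # Rembrandt images incorrectly labeled "Picasso"
false_negatives = 1  # the Picasso image the model missed

precision = true_positives / (true_positives + false_positives)  # 2 / 4 = 0.50
recall = true_positives / (true_positives + false_negatives)     # 2 / 3 ≈ 0.67
print(f"Precision: {precision:.0%}, Recall: {recall:.0%}")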
Question 14 of 65
Identify the missing word(s) in the following sentence within the context of Microsoft Azure.
Face detection vs. face recognition
Facial recognition builds on the face detection API by analyzing the landmarks in two or more pictures to determine whether the same face is present. There are four aspects that can be determined through this analysis:
• Do two images of a face belong to the same person? This defines verification.
• Does this person look like other people? This defines similarity.
• Do all of these faces belong together? This defines grouping.
• Who is this person in this group of people? This defines identification. https://azure.microsoft.com/en-us/services/cognitive-services/face/
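As a hedged sketch of the first aspect, verification (the endpoint, key, and face IDs below are placeholder assumptions; the IDs would come from earlier calls to the face detection operation):
Python
import requests

# Placeholders: substitute your own endpoint, key, and detected face IDs.
endpoint = "https://<your-resource>.cognitiveservices.azure.com"
headers = {"Ocp-Apim-Subscription-Key": "<your-key>", "Content-Type": "application/json"}
body = {"faceId1": "<face-id-from-first-image>",
        "faceId2": "<face-id-from-second-image>"}

response = requests.post(f"{endpoint}/face/v1.0/verify", headers=headers, json=body)
print(response.json())  # e.g. {"isIdentical": true, "confidence": 0.9}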
Question 15 of 65
Which of the following choices describes what smart cropping does when generating a thumbnail with the generateThumbnail operation?
Identify the missing word(s) in the following sentence within the context of Microsoft Azure.
[?] can help you detect if any values in the text might be considered personally identifiable information before you release it publicly. Key aspects that are detected include:
• Name
• Email addresses
• Mailing addresses
• IP addresses
• Phone numbers
• Driver’s license number
• Social Security numbers
• Bank account numbers
• Passport numbers
Personally identifiable information (PII) is of critical importance in many applications. Azure PII detection can help you detect whether any values in the text might be considered PII before you release it publicly. Key aspects that are detected include:
• Name
• Email addresses
• Mailing addresses
• IP addresses
• Phone numbers
• Driver’s license number
• Social Security numbers
• Bank account numbers
• Passport numbers
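As an illustrative sketch (the endpoint and key are placeholders, and the v3.1 operation path is an assumption based on the Text Analytics REST API):
Python
import requests

# Placeholders: substitute your own endpoint and key.
endpoint = "https://<your-resource>.cognitiveservices.azure.com"
headers = {"Ocp-Apim-Subscription-Key": "<your-key>", "Content-Type": "application/json"}
body = {"documents": [{"id": "1", "language": "en",
                       "text": "Call me at 555-0100 or email jane@contoso.com."}]}

response = requests.post(f"{endpoint}/text/analytics/v3.1/entities/recognition/pii",
                         headers=headers, json=body)
print(response.json())  # detected PII entities with categories and offsets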
The Entity Recognition service expects a JSON formatted submission. The following JSON document was submitted to the service for entity recognition.
JSON
{
  "documents": [
    {"id": "1", "language": "en", "text": "Microsoft was founded by Bill Gates and Paul Allen on April 4, 1975, to develop and sell BASIC interpreters for the Altair 8800."},
    {"id": "2", "language": "es", "text": "La sede principal de Microsoft se encuentra en la ciudad de Redmond, a 21 kilómetros de Seattle."}
  ]
}
How many entries are present in this submission?
In the example presented, the JSON document has two entries. One is in English and the other is in Spanish.
Different entities will be extracted from each phrase, and the ID attribute will be key in helping match the extracted entities to each phrase. https://docs.microsoft.com/en-us/azure/search/cognitive-search-skill-entity-recognition
Question 18 of 65
Identify the missing word(s) in the following sentence within the context of Microsoft Azure.
There are specific requirements to use LUIS in a container environment. The first requirement is to have [?] installed on the host computer. [?] must be configured to connect with and send billing information to Azure.
There are specific requirements to use LUIS in a container environment. The first requirement is to have Docker installed on the host computer. Docker must be configured to connect with and send billing information to Azure. If you are running Docker on a Windows host, Docker must be configured to support Linux containers.
What is the following code designed to do?
Azure CLI
curl "https://.api.cognitive.microsoft.com/vision/v2.0/analyze?visualFeatures=Categories,Description&details=Landmarks" \
-H "Ocp-Apim-Subscription-Key: $key" \
-H "Content-Type: application/json" \
-d "{'url' : 'https://raw.githubusercontent.com/MicrosoftDocs/mslearn-process-images-with-the-computer-vision-service/master/images/mountains.jpg'}" \
| jq '.'
This call looks for landmarks in the image specified by the image URL. The call also asks the service to return category information and a description of the image. The description is returned as a complete English sentence.
Every call to the API needs an access key. This is set in the Ocp-Apim-Subscription-Key header of the request.
The JSON response from this call returns the following:
• A categories array listing all image categories that were detected, along with a score between 0 and 1 indicating how confident the service is that the image belongs in the specified category.
• A description entry containing an array of tags or words that are related to the image.
• A captions entry with a text field that describes in English what is in the image. Observe that the text also has a confidence score. This score can help you decide what to do next with this analysis. https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/concept-object-detection
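A Python equivalent of the curl call above (the region and key are placeholder assumptions), showing how the caption and its confidence score can be read from the response:
Python
import requests

# Placeholders: substitute your own region and key.
endpoint = "https://<your-region>.api.cognitive.microsoft.com"
params = {"visualFeatures": "Categories,Description", "details": "Landmarks"}
headers = {"Ocp-Apim-Subscription-Key": "<your-key>", "Content-Type": "application/json"}
image = {"url": "https://raw.githubusercontent.com/MicrosoftDocs/mslearn-process-images-with-the-computer-vision-service/master/images/mountains.jpg"}

analysis = requests.post(f"{endpoint}/vision/v2.0/analyze",
                         params=params, headers=headers, json=image).json()
caption = analysis["description"]["captions"][0]
print(caption["text"], caption["confidence"])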
Question 20 of 65
See the image below.
What does this output tell us in terms of an Azure Function?
The Azure Function trigger executed successfully.
When a Function has been set up with a trigger and the trigger executes properly, a message similar to the one in the image is rendered showing the “Success” message.
The Computer Vision API provides algorithms to process images and return insights. Which of the following are examples of Computer Vision capabilities? (Select all that apply)
The Computer Vision API provides algorithms to process images and return insights. The following are examples of Computer Vision capabilities:
• Determine if an image has mature content
• Find all the faces in an image
• Estimate dominant and accent colors
• Categorize the content of images
• Describe an image with complete English sentences
• Generate image thumbnails for displaying large images effectively
• Extract printed text from images using optical character recognition (OCR)
• Recognize printed and handwritten text in images
• Recognize celebrities and landmarks
• Analyze video
The Computer Vision API is available in many regions across the globe. https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/overview
To find the region nearest you, see Products available by region. https://azure.microsoft.com/global-infrastructure/services/?products=cognitive-services&regions=all
Question 22 of 65
What is the preferred option when you need an entity to represent currency?
Identify the missing word(s) in the following sentence within the context of Microsoft Azure.
LUIS is an [?] that applies custom machine-learning intelligence to a user’s conversational natural-language text. LUIS uses certain aspects of the text to predict the user’s overall meaning and pull out relevant detailed information. Applications can use this information to interact with the user.
LUIS is an Azure Cognitive Services API that applies custom machine-learning intelligence to a user’s conversational natural-language text. LUIS uses certain aspects of the text to predict the user’s overall meaning and pull out relevant detailed information. Applications can use this information to interact with the user.
When analyzing an image using the Computer Vision API, you can specify visual feature types to return. Which of the following is a valid visual feature type when calling the “analyze” operation?
Which Azure resource is common for conversational AI software “agents”?
Conversational AI software “agents” use Azure Bot Service. Conversational AI is an artificial intelligence workload that deals with dialogs between AI agents and human users. https://docs.microsoft.com/en-us/learn/paths/explore-conversational-ai/
• Azure Storage is not part of conversational AI solutions.
• Azure Virtual Network is not part of conversational AI solutions.
• Cognitive Search is not part of conversational AI solutions.
• Azure Custom Vision is not part of conversational AI solutions.
Question 26 of 65
Identify the missing word(s) in the following sentence within the context of Microsoft Azure.
To access your Speech Service from an application, you’ll need to get two pieces of information from the Azure portal:
• A [A] that’s passed with every request to authenticate the call.
• The [B] that exposes your service on the network.
To access your Speech service from an application, you’ll need to get two pieces of information from the Azure portal:
• A Subscription key that’s passed with every request to authenticate the call.
• The Endpoint that exposes your service on the network.
You will need the subscription key when using the Speech SDK or the REST APIs, but the endpoint is only required for REST API access. Using the Speech SDK in an application uses the key but also requires a region.
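As a minimal sketch of where the two values fit when using the Speech SDK (the key and region below are placeholder assumptions):
Python
import azure.cognitiveservices.speech as speechsdk

key = "<your-subscription-key>"  # passed with every request to authenticate the call
region = "<your-region>"         # the SDK derives the service endpoint from the region

speech_config = speechsdk.SpeechConfig(subscription=key, region=region)
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)
result = recognizer.recognize_once()  # listens on the default microphone
print(result.text)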
See the utterance below, created using code:
CreateUtterance("SearchPic", "find outdoor pics", new Dictionary<string, string>() { {"facet", "outdoor"} }),
Which part of the code is the Intent?
What is the following code designed to do?
https://.api.cognitive.microsoft.com/vision/v2.0/ocr?language=<...>&detectOrientation=<...>
Calling the Computer Vision API to extract printed text.
The ocr operation detects text in an image and extracts the recognized characters into a machine-usable character stream. The request URL has the following format:
https://.api.cognitive.microsoft.com/vision/v2.0/ocr?language=<...>&detectOrientation=<...>
As usual, all calls must be made to the region where the account was created. The call accepts two optional parameters:
• language: The language code of the text to be detected in the image. The default value is unk, or unknown. This lets the service auto-detect the language of the text in the image.
• detectOrientation: When true, the service tries to detect the image orientation and correct it before further processing, for example, whether the image is upside-down. https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/concept-recognizing-text
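As an illustrative sketch of the ocr operation (the region, key, and image URL below are placeholder assumptions), including how the recognized characters are nested in the response:
Python
import requests

# Placeholders: substitute your own region, key, and an image containing printed text.
endpoint = "https://<your-region>.api.cognitive.microsoft.com"
params = {"language": "unk", "detectOrientation": "true"}  # both optional; defaults shown
headers = {"Ocp-Apim-Subscription-Key": "<your-key>", "Content-Type": "application/json"}
body = {"url": "https://<an-image-with-printed-text>.jpg"}

result = requests.post(f"{endpoint}/vision/v2.0/ocr",
                       params=params, headers=headers, json=body).json()
# The response nests regions -> lines -> words; join the recognized text.
for region in result.get("regions", []):
    for line in region["lines"]:
        print(" ".join(word["text"] for word in line["words"]))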
Question 29 of 65
Identify the missing word(s) in the following sentence within the context of Microsoft Azure.
Language detection is useful for content stores that collect arbitrary text, where language is unknown. You can parse the results of this analysis to determine which language is used in the input document.
If you are performing language detection and there are multiple languages in a single document, the predominant language will be the one listed in the [?] array.
If you are performing language detection and there are multiple languages in a single document, the predominant language will be the one listed in the detectedLanguages array.
JSON
{
  "documents": [
    {
      "id": "1",
      "detectedLanguages": [
        { "name": "English", "iso6391Name": "en", "score": 1 }
      ]
    },
    {
      "id": "2",
      "detectedLanguages": [
        { "name": "Spanish", "iso6391Name": "es", "score": 1 }
      ]
    },
    {
      "id": "3",
      "detectedLanguages": [
        { "name": "French", "iso6391Name": "fr", "score": 1 }
      ]
    }
  ],
  "errors": []
} https://docs.microsoft.com/en-us/rest/api/cognitiveservices/textanalytics/detect%20language/detect%20language
Question 30 of 65
Which of the following output formats are supported by Speech-to-Text?
Identify the missing word(s) in the following sentence within the context of Microsoft Azure.
When you first create a LUIS application, a starter key is created for you [?].
When you first create a LUIS application, a starter key is created for you automatically. You can use this starter key for the following:
• Free authoring service requests to your app. This can be accomplished using the LUIS portal or through the supported SDKs.
• With this starter key, you are permitted 1,000 prediction endpoint requests per month for free. These requests may come in through the browser, API, or SDK. https://docs.microsoft.com/en-us/azure/cognitive-services/luis/luis-how-to-azure-subscription
Question 32 of 65
Identify the missing word(s) in the following sentence within the context of Microsoft Azure.
The different versions of your LUIS app can be found on the [?] page. The various options for your LUIS app versions are:
• Import a LUIS app
• Rename an existing LUIS app
• Clone a specific version
• Activate a previously deactivated LUIS app
• Export a version of your LUIS app
• Delete an existing app version
• Search versions
The different versions of your LUIS app can be found on the Manage page. The various options for your LUIS app versions are:
• Import a LUIS app
• Rename an existing LUIS app
• Clone a specific version (doing so is how you create a new version)
• Activate a previously deactivated LUIS app
• Export a version of your LUIS app
• Delete an existing app version
• Search versions
Translating to multiple target languages with Azure Speech Translation requires an extra step in your code. The extra step is …
Translating to multiple target languages requires only one extra step in your code: adding the additional target languages is all that is required. If you want to synthesize the translated text, you will need to configure an appropriate language and voice for each target language separately.
The SpeechTranslationConfig object contains a TargetLanguages property (a Dictionary type of collection). You use the AddTargetLanguage() method to add the languages you want to translate into.
C#
// Add the German and Indonesian languages to the target language dictionary.
config.AddTargetLanguage("de");
config.AddTargetLanguage("id-ID");
Python
# Add the German and Indonesian languages to the target language dictionary
speech_config.add_target_language('de')
speech_config.add_target_language('id-ID') https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/speech-synthesis-markup?tabs=csharp
Question 34 of 65
Wayne Enterprises is using LUIS in a bot to help a user book a flight.
A user may use the following utterance – “Book 2 tickets on a flight to New York for New Year’s Eve.” If we evaluate this utterance for key aspects, we can determine the user’s intent. The user wants to book a flight.
• The number 2: This number maps to a built-in entity. In LUIS, such an entity is known as a [?] entity.
Wayne Enterprises is using LUIS in a bot to help a user book a flight.
A user may use the following utterance – “Book 2 tickets on a flight to New York for New Year’s Eve.” If we evaluate this utterance for key aspects, we can determine the user’s intent. The user wants to book a flight.
• We can state the Intent as BookFlight.
Entities aren’t only words or phrases, but also simply data. This data helps provide specific context for the utterance and aids the algorithm in more accurately identifying the intent. Not every utterance contains entities, though.
In the utterance above, we can identify entities like:
• New York: We can classify this Entity as Location.Destination.
• New Year’s Eve: We can classify this Entity as Event.
• The number 2: This number maps to a built-in entity. In LUIS, such an entity is known as a prebuilt entity, specifically a prebuilt number.
True or False: The Face Detection API provides information about detected faces in an image, but isn’t designed to identify or recognize a specific face.
The face detection API provides information about detected faces in an image, but isn’t designed to identify or recognize a specific face. However, the Face API provides this capability through a facial recognition API.
Facial recognition is used in many areas, including security, natural user interfaces, image analysis, mobile apps, and robotics.
The ability to use artificial intelligence to recognize and match faces is one of the more powerful aspects of the Face API.
Entity recognition is one part of the Text Analytics set of APIs in Azure Cognitive Services. Which of the below are valid Pre-built Entity Types? (Select three)
Entity recognition is one part of the Text Analytics set of APIs in Azure Cognitive Services. Using pre-built entity types, such as Person, Location, Organization, and others, the service evaluates the content of documents and returns information related to these entities. You can use Named Entity Recognition to identify personal and sensitive information in documents. Use the data to classify documents or redact them so they can be shared safely.
You want to add 20 friends to a collection, and you have four face images for each friend. Each face image is referred to as a face, and each friend is referred to as a person. In the Face API, what is this collection of friends called?
This cognitive service enables you to quickly build a knowledge base of questions and answers that can form the basis of a dialog between a human and an AI agent.
Conversational AI in Microsoft Azure
To create conversational AI solutions on Microsoft Azure, you can use the following services: QnA Maker, which enables you to quickly build a knowledge base of question-and-answer pairs that can form the basis of a dialog between a human and an AI agent, and Azure Bot Service, which provides a framework for developing, publishing, and managing bots on Azure.
The Key Phrase Extraction API expects a well-formed JSON input. You can create a JSON file that contains an array of documents. Each document size must be [A] or less and you can have up to [B] per collection.
Correct
The Key Phrase Extraction API expects a well-formed JSON input. You can create a JSON file that contains an array of documents. Each document size must be 5,120 characters or less and you can have up to 1,000 documents per collection (1,000 IDs).
JSON
{
  "documents": [
    {
      "language": "en",
      "id": "1",
      "text": "Document 1 text here"
    },
    {
      "language": "fr",
      "id": "2",
      "text": "Document 2 text here"
    }
  ]
}
Note the entries in the JSON array consist of a language attribute, an ID attribute, and the text that will be evaluated. There are other specific requirements for the data that you will pass to the service for evaluation. Each document size must be 5,120 characters or less and you can have up to 1,000 documents per collection (1,000 IDs). When you make the request to the service, the JSON document is passed in the body. https://docs.microsoft.com/en-us/azure/search/cognitive-search-skill-keyphrases
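As an illustrative sketch of such a request (the region and key below are placeholders, not values from this test), the JSON body above can be posted to the v2.1 keyPhrases endpoint:
Azure CLI
curl "https://<region>.api.cognitive.microsoft.com/text/analytics/v2.1/keyPhrases" \
-H "Ocp-Apim-Subscription-Key: $key" \
-H "Content-Type: application/json" \
-d "{'documents': [{'language': 'en', 'id': '1', 'text': 'Document 1 text here'}]}" \
| jq '.'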
Question 40 of 65
40. Question
Identify the missing word(s) in the following sentence within the context of Microsoft Azure.
The following content has been added to a section of the LUIS model.
• Share this pic
• Can you tweet that?
• Post to Twitter
These texts are considered [?].
Correct
The following values are Utterances for training the SharePic intent:
• Share this pic
• Can you tweet that?
• Post to Twitter
Tony at Stark Industries is building a bot using the Bot Framework and Azure Bot Service. He wants to extend the capabilities of his bot.
Which service will he use to achieve his goal?
Correct
Using Bot Framework Skills, he can easily extend the capabilities of his bot. Skills are like standalone bots that focus on a specific function, such as Calendar, To Do, or Point of Interest.
In the virtual assistant design, the Bot Framework dispatches actions to Skills; in Microsoft's reference architecture, the Bot Framework and its Skills are components of the virtual assistant.
Bot Framework Skills are re-usable conversational skill building-blocks covering conversational use cases, enabling you to add extensive functionality to a bot within minutes. Skills include Language Understanding (LUIS) models, dialogs, and integration code, delivered as source code so you can customize and extend them as required. At this time Microsoft provides Calendar, Email, To Do, and Point of Interest skills, along with a number of other experimental skills.
A Skill is like a standard conversational bot but with the ability to be plugged in to a broader solution. This can be a complex Virtual Assistant, or perhaps an Enterprise Bot seeking to stitch together multiple bots within an organization.
Apart from some minor differences that enable this special invocation pattern, a Skill looks and behaves like a regular bot. The same protocol is maintained between the two bots to ensure a consistent approach. Skills for common scenarios like productivity and navigation can be used as-is or customized however a customer prefers. https://microsoft.github.io/botframework-solutions/overview/skills/
Custom Vision is incorrect because Custom Vision is a vision service: it can extend bot functionality only as part of a Skill, not act as the extension mechanism itself. Likewise, Language Translation and Text to Speech are language and speech services that can extend bot functionality as part of a Skill.
Chit-Chat and FAQ documents are parts of the knowledge base for QnA Maker and Azure Bot Service.
Question 42 of 65
42. Question
True or False: The Prediction key is used for requests against your LUIS app and is not used for Authoring purposes.
Correct
The Starter key is a good choice for initial creation and simple testing, but once your requests to the prediction endpoint go beyond 1,000 per month, you need to consider using a Prediction key. The prediction key is used for requests against your LUIS app and is not used for authoring purposes. You may decide to work with developers to author the LUIS app through the SDKs; in that case, do not use the prediction key for authoring. https://docs.microsoft.com/en-us/azure/cognitive-services/luis/luis-how-to-azure-subscription
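As a sketch of a prediction request made with the prediction key (the region, app ID, and key below are placeholders), the v3.0 prediction endpoint can be called like this:
Azure CLI
curl "https://<region>.api.cognitive.microsoft.com/luis/prediction/v3.0/apps/<app-id>/slots/production/predict?subscription-key=<prediction-key>&query=share%20this%20pic" \
| jq '.'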
Question 43 of 65
43. Question
True or False: The option is available to enable Bing Spell Check to help catch spelling errors in the input utterances.
Correct
When you publish your app, you have the option to select additional settings to alter or handle the input data. On the Settings page in the Manage section, you can adjust the version settings by enabling punctuation normalization and word-form normalization. Punctuation normalization means that punctuation in the utterances will not affect the prediction scoring when turned on.
When you select the Publish button, you can also enable additional settings that can impact how LUIS handles the input from users. Turning on Sentiment Analysis will cause LUIS to send the utterances to the Text Analytics API and perform a sentiment analysis on the utterance. You can use that information to detect the “mood” of the utterances entered by the users.
You can also enable the Bing Spell Check to help catch spelling errors in the input, which may affect the scoring of the utterances. https://docs.microsoft.com/en-ca/learn/modules/manage-language-understanding-intelligent-service-apps/4-manage-data-language-understanding-intelligent-service-app?pivots=csharp
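As a sketch (the region, app ID, and keys below are placeholders), spell checking is enabled on a v2.0 prediction request through two query parameters; the misspelled utterance ("flihgt") is corrected before LUIS scores it:
Azure CLI
curl "https://<region>.api.cognitive.microsoft.com/luis/v2.0/apps/<app-id>?subscription-key=<prediction-key>&spellCheck=true&bing-spell-check-subscription-key=<bing-key>&q=book%20a%20flihgt" \
| jq '.'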
Question 44 of 65
44. Question
Identify the missing word(s) in the following sentence within the context of Microsoft Azure.
When using the Text Moderation API, images need to have a minimum of [A] and a maximum file size of [B].
Correct
The Text Moderation API scans content as it is generated. Content Moderator then processes the content and sends the results, along with relevant information, either back to the user’s systems or to the built-in review tool. This information can be used to make decisions, e.g. take the content down or send it to a human judge.
When using the API, images need to have a minimum of 128 pixels and a maximum file size of 4MB. Text can be at most 1024 characters long. If the content passed to the text API or the image API exceeds the size limits, the API will return an error code that informs about the issue.
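As a sketch of a text screening call (the region and key below are placeholders), the Screen operation can classify text and detect PII in one request:
Azure CLI
curl "https://<region>.api.cognitive.microsoft.com/contentmoderator/moderate/v1.0/ProcessText/Screen?classify=True&PII=True" \
-H "Ocp-Apim-Subscription-Key: $key" \
-H "Content-Type: text/plain" \
-d "Contact me at someone@example.com or 555-0100." \
| jq '.'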
Identify the missing word(s) in the following sentence within the context of Microsoft Azure.
[?] can be the basis of AI solutions for:
• Customer support for products or services.
• Reservation systems for restaurants, airlines, cinemas, and other appointment based businesses.
• Health care consultations and self-diagnosis.
• Home automation and personal digital assistants.
Correct
Conversational AI is the term used to describe solutions where AI agents participate in conversations with humans. Most commonly, conversational AI solutions use bots to manage dialogs with users. These dialogs can take place through web site interfaces, email, social media platforms, messaging systems, phone calls, and other channels.
Bots can be the basis of AI solutions for:
• Customer support for products or services.
• Reservation systems for restaurants, airlines, cinemas, and other appointment based businesses.
• Health care consultations and self-diagnosis.
• Home automation and personal digital assistants. https://docs.microsoft.com/en-us/learn/modules/get-started-ai-fundamentals/6-understand-conversational-ai
Question 46 of 65
46. Question
True or False: Choosing a voice that is not native for the language you are using may not provide the results you expect.
Bruce Banner has a website which hosts a variety of pictures. A user logs onto his site and searches for “images with a beach”. What is the query the user entered referred to as?
Correct
When you create Intents for a LUIS app, you are creating aspects that represent a user’s intention. Intents represent a task or an action that a user would like to perform. The user will express the intent through an utterance.
An example might be Search for images with a beach.
The Intent that would represent the action could be SearchPics.
The user wants to search pictures that contain beaches. You summarize the user’s intentions into a succinct or concise word.
This concise word is used as the Intent.
Clark Kent works at the Daily Planet, and he needs to make the organization’s automated telephone attendant read voice menus in humanlike voices. He has opted to use a Standard voice for the solution.
Does this decision help to achieve the goal?
Correct
Azure Text-to-Speech service provides two options for the voice:
• Standard voices – Created using Statistical Parametric Synthesis and/or Concatenation Synthesis techniques. These voices are highly intelligible and sound natural. You can easily enable your applications to speak in more than 45 languages, with a wide range of voice options. These voices provide high pronunciation accuracy, including support for abbreviations, acronym expansions, date/time interpretations, polyphones, and more. For a full list of standard voices, see supported languages.
• Neural voices – Deep neural networks are used to overcome the limits of traditional speech synthesis with regards to stress and intonation in spoken language. Prosody prediction and voice synthesis are performed simultaneously, which results in more fluid and natural-sounding outputs. Neural voices can be used to make interactions with chatbots and voice assistants more natural and engaging, convert digital texts such as e-books into audiobooks, and enhance in-car navigation systems. With the human-like natural prosody and clear articulation of words, neural voices significantly reduce listening fatigue when you interact with AI systems. For a full list of neural voices, see supported languages. https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/text-to-speech
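As a sketch of how the voice is selected in practice (the region, key, and voice name below are placeholders, and this assumes the endpoint accepts a subscription key directly rather than a bearer token), an SSML request names the voice explicitly:
Azure CLI
curl "https://<region>.tts.speech.microsoft.com/cognitiveservices/v1" \
-H "Ocp-Apim-Subscription-Key: $key" \
-H "Content-Type: application/ssml+xml" \
-H "X-Microsoft-OutputFormat: riff-24khz-16bit-mono-pcm" \
-H "User-Agent: curl" \
-d "<speak version='1.0' xml:lang='en-US'><voice name='en-US-AriaNeural'>Press one for sales. Press two for support.</voice></speak>" \
--output menu.wav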
Question 50 of 65
50. Question
Identify the missing word(s) in the following sentence within the context of Microsoft Azure.
If you would like to maintain the data in your LUIS app, separately from the internal storage mechanisms, you can export your LUIS app. Exporting the LUIS app results in the creation of either a [A] file or an [B] file. Both file types are comprehensive and contain the intents, utterances, and entities, among other information.
Correct
Export data
If you would like to maintain the data in your LUIS app, separately from the internal storage mechanisms, you can export your LUIS app. Exporting the LUIS app results in the creation of either a JSON file or an LU file. Both file types are comprehensive and contain the intents, utterances, and entities, among other information. In this way, you can ensure that the data for your LUIS application is available offline. You can use the information to reconstruct the intents, entities, and utterances if needed. You can also import the file into LUIS to create another version of the app in a different LUIS region or account.
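As a sketch (the region, authoring key, app ID, and version below are placeholders), the authoring API exposes this export directly:
Azure CLI
curl "https://<region>.api.cognitive.microsoft.com/luis/api/v2.0/apps/<app-id>/versions/<version>/export" \
-H "Ocp-Apim-Subscription-Key: $authoring_key" \
| jq '.' > luis-app.json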
True or False: The idea behind building this LUIS application was to integrate it into a bot application.
Correct
The idea behind building this LUIS application was to integrate it into a bot application.
To integrate LUIS into your own AI applications, use the API documentation to understand how to do that integration.
LUIS allows you to map natural language utterances to intents. In other words, LUIS maps the user’s words, phrases, or sentences to tasks or actions the user wants to do. https://docs.microsoft.com/en-us/azure/cognitive-services/luis/what-is-luis
Identify the missing word(s) in the following sentence within the context of Microsoft Azure.
When automating a Function based on live content being sent to a specified folder so that an action can be taken upon the content received, a [?] is implemented.
Correct
A function is activated through a trigger. Azure Functions can run as new Azure Queue storage messages are created and can write queue messages within a function. A runtime will poll a queue and start a function to process a new message. In the list of templates available to a Function app, select Azure Queue Storage trigger.
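A minimal sketch of the binding configuration for such a trigger (the queue name and the name of the connection setting below are placeholder values) looks like this in a Function app's function.json:
JSON
{
  "bindings": [
    {
      "name": "msg",
      "type": "queueTrigger",
      "direction": "in",
      "queueName": "incoming-content",
      "connection": "AzureWebJobsStorage"
    }
  ]
}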
Identify the missing word(s) in the following sentence within the context of Microsoft Azure.
A(n) [?] represents a single event within the video. It groups consecutive shots that are related. It will have a start time, end time, and thumbnail.
Correct
Scene
A scene represents a single event within the video. It groups consecutive shots that are related. It will have a start time, end time, and thumbnail (first keyframe in the scene).
Identify the missing word(s) in the following sentence within the context of Microsoft Azure.
The response from the [?] includes the following information:
• A list of potentially unwanted words found in the text.
• What type of potentially unwanted words were found.
• Possible personally identifiable information (PII) found in the text.
Identify the missing word(s) in the following sentence within the context of Microsoft Azure.
The capacities for Face API storage collections are as follows:
• Face list: Up to [A] distinct faces
• Person group: Up to [B] persons
• Person: Up to [C] faces
Identify the missing word(s) in the following sentence within the context of Microsoft Azure.
Video Indexer is an artificial intelligence service that is part of Microsoft Azure Media Services. Video Indexer provides an orchestration of multiple machine learning models that enable you to easily extract deep insight from a video. To provide advanced and accurate insights, Video Indexer makes use of multiple channels of the video:
• Audio
• Speech
• Visual
Video Indexer does have its limitations, below are a few:
• A name of the video must be no greater than [A] characters.
• The upload size with the URL option is limited to [B].
• Video Indexer has a max duration limit of [C] for a single file.
Correct
Video Indexer is an artificial intelligence service that is part of Microsoft Azure Media Services. Video Indexer provides an orchestration of multiple machine learning models that enable you to easily extract deep insight from a video. To provide advanced and accurate insights, Video Indexer makes use of multiple channels of the video: audio, speech, and visual.
Video Indexer uploading considerations and limitations
• A name of the video must be no greater than 80 characters.
• When uploading your video based on the URL (preferred), the endpoint must be secured with TLS 1.2 (or higher).
• The upload size with the URL option is limited to 30GB.
• The request URL length is limited to 6,144 characters, and the query string length is limited to 4,096 characters.
• The upload size with the byte array option is limited to 2GB.
• The byte array option times out after 30 min.
• The URL provided in the videoURL param needs to be encoded.
• Indexing Media Services assets has the same limitation as indexing from URL.
• Video Indexer has a max duration limit of 4 hours for a single file.
• The URL needs to be accessible (for example a public URL).
• If it is a private URL, the access token needs to be provided in the request.
• The URL has to point to a valid media file and not to a webpage, such as a link to the http://www.youtube.com page.
• In a paid account you can upload up to 50 movies per minute, and in a trial account up to 5 movies per minute. https://docs.microsoft.com/en-us/azure/media-services/video-indexer/upload-index-videos#uploading-considerations-and-limitations
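As a sketch of an upload that stays within those limits (the location, account ID, access token, and video URL below are placeholders), the upload-from-URL call looks like this:
Azure CLI
curl -X POST "https://api.videoindexer.ai/<location>/Accounts/<account-id>/Videos?name=shelf-video&videoUrl=<encoded-video-url>&accessToken=<access-token>" \
| jq '.'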
Question 59 of 65
59. Question
Identify the missing word(s) in the following sentence within the context of Microsoft Azure.
A key factor in [?] is the ability to interact intelligently with the user. Some of the most common [?] occur through bots. You can build conversational intelligence into your bot by using Azure Language Understanding Intelligent Service (LUIS).
Correct
A key factor in AI applications is the ability to interact intelligently with the user. Some of the most common AI interactions occur through bots. You can build conversational intelligence into your bot by using Azure Language Understanding Intelligent Service (LUIS).
Identify the missing word(s) in the following sentence within the context of Microsoft Azure.
Entity recognition is a part of [?] under Azure Cognitive Services.
Correct
The Entity Recognition skill extracts entities of different types from text. This skill uses the machine learning models provided by Text Analytics in Cognitive Services.
You can provide the service with unstructured text and it will return a list of entities, or items in the text that it recognizes. The service can also provide links to more information about that entity on the web. An entity is essentially a type or a category that certain text elements can fall under.
The service supports two types of recognition.
Named entity recognition
Named entity recognition provides the ability to recognize and identify items in text that are categorized according to some pre-defined classes. Version 3, which is in preview, will add the ability to identify more items such as personal and/or sensitive information like phone numbers, social security numbers, email addresses, and bank account numbers.
Entity linking
The entity linking feature helps to remove ambiguity that may exist around an identified entity. A document may contain an entity such as ARES, which could mean the Greek god of war or could be an acronym for Amateur Radio Emergency Services. Text Analytics is not able to make this link on its own. It requires a knowledge base, in the required language, to provide the necessary recognition. This is a way to customize linked entities to your own organization's list of entity elements. https://docs.microsoft.com/en-us/azure/search/cognitive-search-skill-entity-recognition
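As a sketch (the region and key below are placeholders), the same documents format used elsewhere in this test can be posted to the v2.1 entities endpoint:
Azure CLI
curl "https://<region>.api.cognitive.microsoft.com/text/analytics/v2.1/entities" \
-H "Ocp-Apim-Subscription-Key: $key" \
-H "Content-Type: application/json" \
-d "{'documents': [{'language': 'en', 'id': '1', 'text': 'Satya Nadella spoke at the Seattle campus.'}]}" \
| jq '.'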
Question 61 of 65
61. Question
Identify the missing word(s) in the following sentence within the context of Microsoft Azure.
As a lead developer at Stark Industries, you’re responsible for building and maintaining a line-of-business app that lets your frontline distributors scan and upload images of the store shelves they are restocking.
You want to validate that any images posted by users respect the content rules set by your company. The company doesn’t want inappropriate content posted to company sites.
You need to decide whether to build or buy a solution. Building a sophisticated image processing and analysis engine is costly. One alternative is to use the Computer Vision API from Microsoft.
Frontline distributors of your products scan and upload images of store shelves they’re restocking. As a lead developer at your company, you are responsible for creating thumbnails of the images. The thumbnails are used in the online reports you create for the sales team. Recently, the sales manager said that the images in the report are blurry and often don’t have the product front and centre, making it difficult to scan the large report. It’s up to you to improve the situation.
You decide to try the thumbnail generation feature of the Computer Vision API. Perhaps it can do a better job than the resizing function you wrote.
The following code is prepared to be entered into Azure CLI:
Azure CLI
curl "https://<region>.api.cognitive.microsoft.com/vision/v2.0/[?]" \
-H "Ocp-Apim-Subscription-Key: $key" \
-H "Content-Type: application/json" \
-d "{'url' : 'https://raw.githubusercontent.com/MicrosoftDocs/mslearn-process-images-with-the-computer-vision-service/master/images/ebook.png'}" \
| jq '.'
What should replace [?] in the first line of code?
Correct
“ocr” should replace [?] in the first line of code.
The following JSON is an example of the response we get from this call. Some lines of JSON have been removed to make the snippet fit better on the page.
JSON
{
  "language": "en",
  "orientation": "Up",
  "textAngle": 0,
  "regions": [
    /* ... snipped */
    {
      "boundingBox": "766,1419,302,33",
      "words": [
        {
          "boundingBox": "766,1419,126,25",
          "text": "Microsoft"
        },
        {
          "boundingBox": "903,1420,165,32",
          "text": "Corporation"
        }
      ]
    }
  ]
}
The service identified the text as being English. The value of the language field contains the BCP-47 language code of the text detected in the image. In this example it is en, or English.
The orientation was detected as up. This property is the direction that the top of the recognized text is facing, after the image has been rotated around its centre according to the detected text angle.
The textAngle is the angle by which the text must be rotated to become horizontal or vertical. In this example, the text was perfectly horizontal, so the value returned is 0.
The regions property contains a list of values used to show where the text is, its position in the picture, and the word found in that part of the image.
The four integers of the boundingBox value are:
• the x-coordinate of the left edge
• the y-coordinate of the top edge
• the width of the bounding box
• the height of the bounding box.
You can use these values to draw boxes around every piece of text found in the image.
As you can see in this example, the ocr service gives detailed information about the printed text in an image. https://docs.microsoft.com/en-us/cli/azure/format-output-azure-cli
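As a small follow-on sketch (the region placeholder is not from the original question), the same call with ocr in place can be piped through a jq filter that lists every recognized word with its bounding box, whatever level of nesting the response uses:
Azure CLI
curl "https://<region>.api.cognitive.microsoft.com/vision/v2.0/ocr" \
-H "Ocp-Apim-Subscription-Key: $key" \
-H "Content-Type: application/json" \
-d "{'url' : 'https://raw.githubusercontent.com/MicrosoftDocs/mslearn-process-images-with-the-computer-vision-service/master/images/ebook.png'}" \
| jq -r '.. | objects | select(has("text")) | "\(.text)\t\(.boundingBox)"'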
Question 62 of 65
62. Question
Identify the missing word(s) in the following sentence within the context of Microsoft Azure.
The following items are considered [?] and are of critical importance in many applications including the Text Moderation API.
• Name
• Email addresses
• Mailing addresses
• IP addresses
• Phone numbers
• Driver’s license number
• Social Security numbers
• Bank account numbers
• Passport numbers
Correct
Personally identifiable information (PII) is of critical importance in many applications. Azure PII Detection can help you detect if any values in the text might be considered PII before you release it publicly. Key aspects that are detected include:
• Name
• Email addresses
• Mailing addresses
• IP addresses
• Phone numbers
• Driver’s license number
• Social Security numbers
• Bank account numbers
• Passport numbers
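As a sketch (the resource endpoint and key below are placeholders, and the v3.1 path is assumed), PII detection is exposed through the Text Analytics REST API:
Azure CLI
curl "https://<resource-name>.cognitiveservices.azure.com/text/analytics/v3.1/entities/recognition/pii" \
-H "Ocp-Apim-Subscription-Key: $key" \
-H "Content-Type: application/json" \
-d "{'documents': [{'language': 'en', 'id': '1', 'text': 'Call 555-0100 or email someone@example.com.'}]}" \
| jq '.'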
Calling a Custom Vision model prediction endpoint over HTTP requires three pieces of information.
• [A]: This key has to be set as a header in all requests. That’s what gives us access to the endpoint.
• [B]: The dialog shows two different URLs. If we’re posting an image URL, then use the first URL, which ends in /url. If we want to post a raw image in the body of our request, we use the second URL, which ends in /image.
• [C]: If we’re posting a raw image, we set the body of the request to the binary representation of the image and the content type to application/octet-stream. If we’re posting an image URL, we put that as JSON in the body and set the content type to application/json.
Correct
Calling a Custom Vision model prediction endpoint over HTTP requires three pieces of information.
• Prediction-Key: This key has to be set as a header in all requests. That’s what gives us access to the endpoint.
• Request URL: The dialog shows two different URLs. If we’re posting an image URL, then use the first URL, which ends in /url. If we want to post a raw image in the body of our request, we use the second URL, which ends in /image.
• Content-Type: If we’re posting a raw image, we set the body of the request to the binary representation of the image and the content type to application/octet-stream. If we’re posting an image URL, we put that as JSON in the body and set the content type to application/json. https://docs.microsoft.com/en-ca/learn/modules/classify-images-with-custom-vision-service/5-call-the-prediction-endpoint-curl
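Putting the three pieces together as a sketch (the region, project ID, iteration name, key, and image URL below are placeholders), a URL-based prediction request looks like this:
Azure CLI
curl "https://<region>.api.cognitive.microsoft.com/customvision/v3.0/Prediction/<project-id>/classify/iterations/<iteration-name>/url" \
-H "Prediction-Key: $prediction_key" \
-H "Content-Type: application/json" \
-d "{'Url': 'https://example.com/shelf.jpg'}" \
| jq '.'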
Question 64 of 65
64. Question
Identify the missing word(s) in the following sentence within the context of Microsoft Azure.
After creating a set of question-and-answer pairs, you must [?] your knowledge base. This process analyzes your literal questions and answers and applies a built-in natural language processing model to match appropriate answers to questions, even when they are not phrased exactly as specified in your question definitions.
Correct
Train and test the knowledge base
After creating a set of question-and-answer pairs, you must train your knowledge base. This process analyzes your literal questions and answers and applies a built-in natural language processing model to match appropriate answers to questions, even when they are not phrased exactly as specified in your question definitions.
After training, you can use the built-in test interface in the QnA Maker portal to test your knowledge base by submitting questions and reviewing the answers that are returned. https://docs.microsoft.com/en-us/azure/cognitive-services/QnAMaker/how-to/test-knowledge-base?tabs=v1
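Once the knowledge base is trained and published, it can also be queried over REST; as a sketch (the resource name, knowledge base ID, and endpoint key below are placeholders):
Azure CLI
curl -X POST "https://<resource-name>.azurewebsites.net/qnamaker/knowledgebases/<kb-id>/generateAnswer" \
-H "Authorization: EndpointKey <endpoint-key>" \
-H "Content-Type: application/json" \
-d "{'question': 'How do I reset my password?'}" \
| jq '.'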
Question 65 of 65
65. Question
True or False: Most flags for Azure CLI parameters can be abbreviated to a single character.
Correct
The Azure CLI includes the cognitiveservices command to manage Cognitive Services accounts in Azure. We can supply several subcommands to do specific tasks.
The most common include account create, account list, account show, account delete, and account keys list. Many of the parameters these commands take also have single-character short forms, for example -g for --resource-group and -n for --name.
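As a sketch (the resource group and account names below are placeholders), here is the same command written with long flags and with their single-character short forms:
Azure CLI
az cognitiveservices account show --resource-group my-rg --name my-account
az cognitiveservices account show -g my-rg -n my-account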