AZ-400 Practice Test 4
Question
You need to complete the following code to initialize App Center in the mobile application.
What should you select for Dropdown1?
Correct Answer(s): MSAnalytics.self
MSAnalytics.self is the CORRECT answer because, according to the case study, we need to integrate the new application code with Visual Studio App Center to centrally manage all mobile application analytics, crashes, and device types.
Visual Studio App Center uses a modular architecture that lets us integrate only the features we need into existing code. To start the App Center SDK with Analytics and Crashes, we first import the required libraries/packages and then pass the selected services to the start call (Analytics for the first dropdown and Crashes for the second dropdown). Doing this satisfies the requirements described in the case study.
The full code that we include is:
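(Reproduced from the Dropdown2 explanation later in this test, since the original code image is not available here; the app secret value is a placeholder.)
AppCenter.start(withAppSecret: "{xxx App Secret xxx}", services: [Analytics.self, Crashes.self])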
MSDistribute.self is the INCORRECT answer because the App Center Distribute feature is used to distribute new versions of the application to users as they are released. That is not a requirement for Litware's investment planning application suite. https://docs.microsoft.com/en-us/appcenter/sdk/distribute/ios
MSPush.self is the INCORRECT answer because this feature is used to send push notifications to application users from the Visual Studio App Center portal. Moreover, App Center Push is being retired, with migration guidance provided instead. https://docs.microsoft.com/en-us/appcenter/migration/push/
Question
You need to complete the following code to initialize App Center in the mobile application.
What should you select for Dropdown2?
Correct Answer(s): MSCrashes.self]
MSCrashes.self is the CORRECT answer because the requirements call for integrating the new application code with Visual Studio App Center to centrally manage crashes and device types.
The Visual Studio App Center architecture allows us to integrate it with existing code and use only selected features. Since we need the App Center Crashes feature alongside Analytics, we import the required libraries/packages and then pass both services to the start call (Analytics for the first dropdown and Crashes for the second dropdown). Doing this fulfills the case study requirement to manage crash reports centrally from the App Center portal. https://docs.microsoft.com/en-us/appcenter/sdk/crashes/ios
The full code that we include is:
AppCenter.start(withAppSecret: "{xxx App Secret xxx}", services: [Analytics.self, Crashes.self])
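As a rough sketch only (assuming the current App Center Swift SDK module names AppCenter, AppCenterAnalytics, and AppCenterCrashes; older SDK versions used the MS-prefixed class names shown in the answer options), the imports and the start call typically sit in the AppDelegate like this:

import UIKit
import AppCenter
import AppCenterAnalytics
import AppCenterCrashes

@main
class AppDelegate: UIResponder, UIApplicationDelegate {
    func application(_ application: UIApplication,
                     didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        // Start only the services the case study needs: Analytics (Dropdown1) and Crashes (Dropdown2).
        // The app secret below is a placeholder, not a real value.
        AppCenter.start(withAppSecret: "{xxx App Secret xxx}", services: [Analytics.self, Crashes.self])
        return true
    }
}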
MSAnalytics.self] is the INCORRECT answer because we need the crash-reporting feature in addition to analytics to centralize the reporting of mobile application crashes and device types in use. Analytics is already selected in the first dropdown, so Crashes is used as the second service.
MSDistribute.self] is the INCORRECT answer because this feature is used to distribute new versions of the application to users as they are released. That is not a requirement here, so we rule this option out. https://docs.microsoft.com/en-us/appcenter/sdk/distribute/ios
Question
Which branching strategy should you recommend for the investment planning applications suite?
Correct Answer(s): feature isolation
Feature isolation is the CORRECT answer because our requirement is to move to a more agile development methodology and Litware needs a branching strategy that supports developing new functionality in isolation.
The feature isolation branching strategy promotes the use of separate branches known as feature branches. Developers use these branches to build new features against a copy of the main branch and integrate them back later through a pull request. There can be multiple feature branches, each used to develop a different piece of functionality, provided merge conflicts are kept under control. https://docs.microsoft.com/en-us/azure/devops/repos/tfvc/branching-strategies-with-tfvc?view=azure-devops#feature-isolation
Development isolation is the INCORRECT answer because our requirement is a branching strategy that supports developing new functionality in isolation. Development isolation focuses on keeping a development branch as a pre-production branch, separate from the production (main) branch; it does not provide isolated branches for developing individual features. https://docs.microsoft.com/en-us/azure/devops/repos/tfvc/branching-strategies-with-tfvc?view=azure-devops#development-isolation
Release isolation is the INCORRECT answer because our requirement is a branching strategy that supports developing new functionality in isolation. Release isolation focuses on branching a release branch off the main branch whenever a release of the application is needed; it does not provide a way to develop individual features in an isolated environment. https://docs.microsoft.com/en-us/azure/devops/repos/tfvc/branching-strategies-with-tfvc?view=azure-devops#release-isolation
Question
You need to configure a cloud service to store the secrets required by the mobile applications to call the share pricing service.
What should you include in the solution for Required secrets?
Correct Answer(s): Shared Access Authorisation token
Shared Access Authorisation token is the CORRECT answer because Litware's share pricing service in the existing retirement fund management system supports only basic authentication over HTTPS. Until new features are added, shared access signature (SAS) tokens let the mobile apps call the share pricing service without storing authorization keys in the code. With the source code on the TFS server in the main office, reachable through TFS proxy servers, tokens carried in authorization headers (SAS tokens) provide basic, authorized authentication over HTTPS.
SAS tokens are signed with the primary or secondary key of a storage account to grant secure access to storage resources. https://docs.microsoft.com/en-us/rest/api/storageservices/authorize-with-shared-key https://docs.microsoft.com/en-us/rest/api/storageservices/delegate-access-with-shared-access-signature https://docs.microsoft.com/en-us/rest/api/storageservices/service-sas-examples
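As a purely illustrative sketch (the endpoint, container, and token values below are hypothetical placeholders, not taken from the case study), a mobile client might attach a SAS token to the request URL like this:

import Foundation

// Hypothetical share-pricing resource secured with a SAS token appended as query parameters.
let sasToken = "sv=2020-08-04&ss=b&srt=o&sp=r&sig=placeholder-signature"
let url = URL(string: "https://litwarepricing.blob.core.windows.net/prices/latest.json?\(sasToken)")!

// The SAS query string authorizes the call over HTTPS; no account key is embedded in the app.
let task = URLSession.shared.dataTask(with: url) { data, response, error in
    if let data = data {
        print(String(decoding: data, as: UTF8.self))
    } else if let error = error {
        print("Request failed: \(error)")
    }
}
task.resume()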
Certificate is the INCORRECT answer because the share pricing service of the existing retirement fund management system supports only basic authentication over HTTPS (authorized requests to the APIs over HTTPS). A certificate is an authentication mechanism that, on its own, does not fit this requirement, so we rule this option out.
Personal access token is an INCORRECT answer because a PAT is an alternative to credentials: a unique, generated token that grants access to specific services. The share pricing service supports only basic authentication over HTTPS, so we rule this option out.
Username and password is an INCORRECT answer because we need an authorization and authentication method supported by the share pricing service, which only supports basic authentication over HTTPS. Username/password is a directory-backed authentication flow, much like certificate-based authentication: the credentials are validated at the directory level before the handshake completes, which isn't supported by Litware's existing legacy system.
Question
You need to configure a cloud service to store the secrets required by the mobile applications to call the share pricing service.
What should you include in the solution for Storage location?
Correct Answer(s): Azure Storage with HTTPS access
Azure Storage with HTTPS access is the CORRECT answer because the share pricing service of the existing retirement fund management system supports only basic authentication over HTTPS, and we need a cloud service to store the secrets that the mobile applications use to call that service. A storage account with secure transfer (HTTPS) enabled allows authorization headers in the form of SAS tokens, so only authorized access is granted. https://docs.microsoft.com/en-us/azure/storage/common/storage-require-secure-transfer https://docs.microsoft.com/en-us/rest/api/storageservices/create-user-delegation-sas
Azure Storage with HTTP access is an INCORRECT answer because only basic authentication over HTTPS is supported by Litware's share pricing service of the existing retirement fund management system.
Azure Data Lake is the INCORRECT answer because the Azure Data Lake service provides enterprise-scale storage for many types of data; it is not a secrets store and does not fit our requirements. https://azure.microsoft.com/en-in/solutions/data-lake/
Azure Key Vault is the INCORRECT answer because requests to a key vault must be authenticated with Azure AD, which is not a supported scenario in our case. Those authenticated requests then perform HTTP(S) operations on Key Vault secrets, keys, and certificates, whereas we need basic authentication over HTTPS to reach the existing legacy service, so we rule this option out. https://docs.microsoft.com/en-us/azure/key-vault/general/authentication-requests-and-responses
Question
To resolve the current technical issue, what should you do to the Register-AzureRmAutomationDscNode command?
Correct Answer(s): Change the value of the ConfigurationMode parameter
Change the value of the ConfigurationMode parameter is the CORRECT answer because, per the current technical issue, we want Azure Automation State Configuration to prevent configuration drift of the servers over time. Changing the value from ApplyOnly to ApplyAndAutoCorrect makes drift detection and correction automatic.
This is the current PowerShell command used to register nodes with Azure Automation State Configuration:
Register-AzureRmAutomationDscNode `
    -ResourceGroupName TestResourceGroup `
    -AutomationAccountName LitwareAutomationAccount `
    -AzureVMName $vmname `
    -ConfigurationMode ApplyOnly
The ApplyAndMonitor value causes Automation State Configuration to periodically check whether the nodes still match the desired configuration and to log any changes that caused drift, but it does not correct them.
The ApplyAndAutoCorrect value causes Automation State Configuration to periodically check the desired configuration of the nodes, log any configuration changes, and then automatically reapply the desired state. This is the value needed to prevent drift.
Add the AllowModuleOverwrite parameter is the INCORRECT answer because this Boolean setting only controls whether modules already present on a node are overwritten by new modules whenever a new configuration is pushed by Azure Automation State Configuration; it does not address configuration drift.
A post-deployment approval is the INCORRECT answer because an approval only ensures that the release proceeds after someone (an admin, owner, or lead) has approved it to move forward. It does not help enforce a code quality restriction before the release.
A pre-deployment approval is also an INCORRECT answer because the approval process only ensures that the release to a particular stage happens after a reviewer has approved it. It does not help enforce a code quality restriction before the release.
A trigger is an INCORRECT answer because release pipeline triggers only configure when a stage runs, for example: the production stage is released only after the pre-production stage completes successfully. A trigger does not help add the code quality restriction that is required.
Question
You have an Azure subscription that contains resources in several resource groups.
You need to design a monitoring strategy that will provide a consolidated view. The solution must support the following requirements:
– Support role-based access control (RBAC) by using Azure Active Directory (Azure AD) identities.
– Include visuals from Azure Monitor that are generated by using the Kusto query language.
– Support documentation written in markdown.
– Use the latest data available for each visual.
What should you use to create the consolidated view?
Azure Monitor is an INCORRECT answer because the Monitor service collects resource-specific data for Azure resources. Users can drill down into this data to analyze and monitor their resources or to optimize resource configurations, but it does not by itself provide the consolidated view with markdown documentation required here.
Azure Data Explorer is an INCORRECT answer because the Azure Data Explorer service is used to analyze large amounts of data collected from different sources in real time. https://azure.microsoft.com/en-in/services/data-explorer/
Although it also allows custom KQL queries, it isn't the ideal choice here because one of our requirements is support for documentation written in markdown, which Azure dashboards provide through a markdown tile. See: https://docs.microsoft.com/en-us/azure/azure-portal/azure-portal-markdown-tile
Microsoft Power BI is the INCORRECT answer because Power BI is a business analytics service from Microsoft for visualizing data as charts and dashboards to give teams business intelligence capabilities; it does not meet the requirements stated here.
Question
Your company creates a web application.
You need to recommend a solution that automatically sends to Microsoft Teams a daily summary of the exceptions that occur in the application.
Which two Azure services should you recommend?
Azure Application Insights and Azure Logic Apps are the two services to recommend because the first requirement is to capture application telemetry and produce a custom summary of the exceptions that occur, which Application Insights provides. The second requirement is to repeat this task daily and send the summary, fetched with a KQL query, to a Microsoft Teams channel or group chat, which a Logic App can automate end to end.
A Logic App workflow of this kind would typically use a daily Recurrence trigger, a step that runs the Application Insights query, and an action that posts the results to Microsoft Teams.
Azure DevOps Project is an INCORRECT answer because an Azure DevOps project is a container (public or private) that holds project assets such as pipelines and repos. It cannot be used to monitor application exceptions.
Azure Pipelines is an INCORRECT answer because pipelines provide the tooling needed to build and release an application or a block of code. They do not monitor application telemetry or track the exceptions that occur.
Microsoft Visual Studio App Center is an INCORRECT answer because we need to collect exception information for a running web application, which is not what App Center provides. App Center is used to build, test, release, and manage application code in one place, not to monitor the web application's exceptions.
Question
You have a multi-tier application that has an Azure Web Apps front end and an Azure SQL Database backend.
You need to recommend a solution to capture and store telemetry data. The solution must meet the following requirements:
– Support using ad-hoc queries to identify baselines.
– Trigger alerts when metrics in the baseline are exceeded.
– Store application and database metrics in a central location.
Azure SQL Database Intelligent Insights is the INCORRECT answer because this feature applies only to SQL databases, not to Azure App Service. Intelligent Insights monitors database performance to analyze and surface key bottlenecks, so it cannot store both application and database metrics in a central location. https://docs.microsoft.com/en-us/azure/azure-sql/database/intelligent-insights-overview
Azure Application Insights is an INCORRECT answer because Application Insights captures application-specific telemetry rather than metrics for both the Azure App Service and the Azure SQL Database resources. It monitors application data to analyze performance and identify problems, but it is not the central store for both sets of metrics required here.
You have an Azure DevOps organization named Contoso and an Azure subscription. The subscription contains an Azure virtual machine scale set named VMSS1 and an Azure Standard Load Balancer named LB1. LB1 distributes incoming requests across VMSS1 instances.
You use Azure DevOps to build a web app named App1 and deploy App1 to VMSS1. App1 is accessible via HTTPS only and is configured to require mutual authentication by using a client certificate.
You need to recommend a solution for implementing a health check of App1. The solution must meet the following requirements:
– Identify whether individual instances of VMSS1 are eligible for an upgrade operation.
– Minimize administrative effort.
What should you include in the recommendation?
Correct Answer(s): the Application Health extension
The Application Health extension is the CORRECT answer because we need a mechanism that identifies whether individual instances of the virtual machine scale set are eligible for an upgrade operation while minimizing administrative effort.
The Application Health extension reports each instance's application health from inside the instance, and the scale set uses that information for rolling upgrades and automatic instance repairs.
An Azure Load Balancer health probe is an INCORRECT answer because health probes are used by the load balancer itself to assess the health of backend instances; if an instance is unhealthy, the probe tells the load balancer to stop sending traffic to it. In this scenario App1 requires mutual authentication with a client certificate, which a load balancer probe cannot provide, so extra configuration would be needed.
The Custom Script Extension is an INCORRECT answer because this extension is used to push and run a custom script inside a VM without logging in, which helps with troubleshooting and quick changes, but it would require building and maintaining custom health-check logic, adding administrative effort.
You manage an Azure web app that supports an e-commerce website.
You need to increase the logging level when the web app exceeds normal usage patterns. The solution must minimize administrative overhead.
Which two resources should you include in the solution?
Correct Answer(s):
an Azure Monitor alert that has a dynamic threshold
the Azure Monitor autoscale settings
An Azure Monitor alert that has a dynamic threshold and the Azure Monitor autoscale settings are the two CORRECT answers because we need the response to kick in when usage exceeds normal patterns. An Azure Monitor alert with a dynamic threshold detects deviation from normal behaviour using machine-learning-based thresholds, and the autoscale settings in Azure Monitor automatically handle scale-out and scale-in of the resources. This also minimizes administrative overhead because no manual scaling has to be configured. https://docs.microsoft.com/en-us/azure/azure-monitor/platform/autoscale-understanding-settings https://docs.microsoft.com/en-us/azure/azure-monitor/platform/alerts-dynamic-thresholds
An Azure Monitor alert that has a static threshold is an INCORRECT answer because a static threshold is used when the exact threshold for a metric is known, for example 70% CPU utilization. Here we want to watch for deviation from normal behavior, which is why a dynamic threshold is used.
An Azure Automation runbook is an INCORRECT answer because a runbook is used to automate an Azure task. A runbook can do anything a PowerShell script can do (including scaling out, monitoring data, or alerting via Office 365), but we also have to reduce operational and administrative overhead, and maintaining runbooks requires considerable effort, so we rule this option out.
An Azure Monitor alert that uses an action group with an email action is an INCORRECT answer because such an alert does nothing beyond notifying an administrator. After the notification, significant administrative effort is still required to act on the problem.
Question
You are monitoring the health and performance of an Azure web app by using Azure Application Insights.
You need to ensure that an alert is sent when the web app has a sudden rise in performance issues and failures.
What should you use?
Correct
Correct Answer(s): Smart Detection
Smart Detection is the CORRECT answer because this feature actively monitors and analyzes the telemetry that the application sends to Application Insights and fires an alert (an email to the administrator) when there is a sudden rise in failure rates or other abnormal behavior patterns.
More information on the smart detection feature of application insights: https://docs.microsoft.com/en-us/azure/azure-monitor/app/proactive-diagnostics
Application Insights Profiler is an INCORRECT answer because as the name suggests, this feature extracts all the information about request handling done by the applications running in the cloud. This helps us to analyze how different components are handling different requests. https://docs.microsoft.com/en-us/azure/azure-monitor/app/profiler-overview
Continuous export is an INCORRECT answer because continuous export is a feature of classic Application Insights resources that streams telemetry to an Azure Storage account (workspace-based resources use diagnostic settings instead). It only moves the data out of Application Insights; it does not analyze it or raise alerts. https://docs.microsoft.com/en-us/azure/azure-monitor/app/export-telemetry
You are creating a container for an ASP.NET Core app.
You need to create a Dockerfile file to build the image. The solution must ensure that the size of the image is minimized.
You need to configure the file.
What should go in the Value1 box?
Correct
Correct Answer(s): microsoft/dotnet:2.2-sdk
microsoft/dotnet:2.2-sdk is the CORRECT answer because the first step of building a container image for an ASP.NET Core app is to initialize a new build stage and set its base image. In this case the build stage needs the .NET Core 2.2 SDK, so the first line of the file is:
FROM microsoft/dotnet:2.2-sdk
dotnet publish -c Release -o out is the INCORRECT answer because this command is used to build/publish the application and related artifacts to the out folder. This cannot be the first step and that is why we rule this out.
dotnet restore is an INCORRECT answer because this command is used when there is a requirement of restoring a project and its dependencies that are specified in the main project file.
microsoft/dotnet:2.2-aspnetcore-runtime is an INCORRECT answer because that image belongs in the final stage that runs the published app, not in the first line. In our case we start the build from the .NET SDK image (which also contains the runtime) and keep the image small later by carrying only the published output forward into the runtime image.
Question 15 of 60
15. Question
You are creating a container for an ASP.NET Core app.
You need to create a Dockerfile file to build the image. The solution must ensure that the size of the image is minimized.
You need to configure the file.
What should go in the Value2 box?
Correct
Correct Answer(s): dotnet publish -c Release -o out
dotnet publish -c Release -o out is the CORRECT answer because this is the command that we will use when we want to build or publish the application and related artifacts to the out folder. We build our code after the base image has been set up using the dotnet publish command for ASP.NET core applications. https://docs.microsoft.com/en-us/dotnet/core/tools/dotnet-publish
dotnet restore is an INCORRECT answer because we use this command when we need to restore a project and its dependencies that are specified in the project file. However, that is not the case here because we need to build our application using the base image, and that is why we rule this option out.
microsoft/dotnet:2.2-aspnetcore-runtime is an INCORRECT answer because we need to complete the RUN command statement as per the question. However, this option refers to an image source as part of the FROM command.
microsoft/dotnet:2.2-sdk is an INCORRECT answer because we need to complete the RUN command statement as per the question. However, this option refers to an image source as part of the FROM command, and that is why we rule this option out.
Question 16 of 60
16. Question
You are creating a container for an ASP.NET Core app.
You need to create a Dockerfile file to build the image. The solution must ensure that the size of the image is minimized.
You need to configure the file.
microsoft/dotnet:2.2-aspnetcore-runtime is the CORRECT answer because the requirement is to keep the final image as small as possible. The final stage therefore starts from the lightweight ASP.NET Core runtime image and copies in only the output that the earlier dotnet publish -c Release -o out step produced (see the reconstructed Dockerfile sketch below). https://docs.docker.com/develop/develop-images/multistage-build/#use-multi-stage-builds
dotnet restore is the INCORRECT answer because this command is used when a project and the dependencies specified in its project file need to be restored.
microsoft/dotnet:2.2-sdk is the INCORRECT answer because we want the final image to be as small as possible, so the final stage references the published output instead of the full SDK image; the SDK is no longer needed once the code has been built.
dotnet publish -c Release -o out is the INCORRECT answer because that line was already used to publish the application to a folder; the final stage only references its output. Basing the last stage on the runtime image ensures the final image stays small, because all the artifacts that were only needed during the build are left behind in the earlier stage.
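Putting the three answers together, the Dockerfile in the exhibit is a standard multi-stage build. The sketch below is a reconstruction rather than the exact exhibit: the working directories, the COPY instructions, and the assembly name MyApp.dll are assumed placeholders.

# Build stage: the full SDK image is used only to restore and publish the app
FROM microsoft/dotnet:2.2-sdk AS build
WORKDIR /app
COPY . .
RUN dotnet restore
RUN dotnet publish -c Release -o out

# Final stage: a lightweight runtime-only image that carries just the published output
FROM microsoft/dotnet:2.2-aspnetcore-runtime AS final
WORKDIR /app
COPY --from=build /app/out .
ENTRYPOINT ["dotnet", "MyApp.dll"]

Because only the out folder is copied into the runtime image, all SDK layers and intermediate build artifacts stay behind in the first stage, which is what keeps the final image small.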
Question 17 of 60
17. Question
You are developing an iOS application by using Azure DevOps.
You need to test the application manually on 10 devices without releasing the application to the public.
Which two actions should you perform?
Correct
To manually test an iOS application on 10 devices without releasing the application to the public, two key actions are required:
Register the IDs of the devices in the Apple Developer portal: This allows the devices to install the application that is not yet publicly available on the App Store.
Distribute a new release of the application: This creates a build of the application that can be installed on the registered devices for testing.
Here’s why the other options are not necessary for this scenario:
Deploy a certificate from an internal certification authority (CA) to each device: This might be required for some internal testing scenarios, but it’s not essential for installing a pre-release app on registered devices.
Onboard the devices into Microsoft Intune: Microsoft Intune is a mobile device management (MDM) tool that can be used to manage and secure devices. While it can be helpful for enterprise deployments, it is not strictly necessary for installing an app on a few devices for testing purposes.
Create a Microsoft Intune device compliance policy: Similar to Intune enrollment, creating a compliance policy is more for managing a large number of devices in an organization and not essential for this specific scenario.
Register the application in the iTunes store: Registering the application in the iTunes store would make it publicly available, which is the opposite of what we want for internal testing.
By following these two steps, you can distribute the pre-release app to the registered devices for manual testing without making it publicly available on the App Store.
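As an illustration of the distribution step, a build can be pushed to a group of registered testers through a tool such as Visual Studio App Center. This is only a sketch under assumed names; the organization, app, file path, and group are placeholders, and the exact CLI options should be confirmed against the App Center CLI documentation.

appcenter login
appcenter distribute release --app contoso/contoso-ios --file ./build/ContosoApp.ipa --group "Internal Testers"

The .ipa must be signed with a provisioning profile that includes the device IDs registered in the Apple Developer portal, otherwise the registered devices will not be able to install it.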
Question 18 of 60
18. Question
You are planning projects for three customers. Each customer's preferred process for work items is shown in the following table.
The customers all plan to use Azure DevOps for work item management.
Which work item process should you use for the customer A. Datum Corporation?
Correct
Correct Answer(s): CMMI
The CORRECT process for the customer A. Datum Corporation is CMMI because there is a need to track requirements, change requests, risks, and reviews. CMMI (Capability Maturity Model Integration) is best suited to teams that follow more formal project methods and require a framework for process improvement. https://docs.microsoft.com/en-us/azure/devops/boards/work-items/guidance/choose-process
XP is an INCORRECT answer because the Extreme Programming methodology focuses on continuous integration and delivery, test-focused development, pair programming, and similar practices. For A. Datum Corporation, XP would not satisfy the requirement to track requirements, change requests, risks, and reviews. https://en.wikipedia.org/wiki/Extreme_programming_practices
Your company uses a Git source-code repository.
You plan to implement GitFlow as a workflow strategy.
You need to identify which branch types are used for production code and pre-production code in the strategy.
Which branch type should you identify for each code type?
Correct
Correct Answer(s):
Production code: Master
Preproduction code: Develop
The CORRECT answer for production code is the Master branch and for pre-production code the Develop branch, because GitFlow is being used as the workflow strategy.
The GitFlow workflow defines a strict branching model designed around the project release cycle (a minimal command sketch follows below).
Develop and Feature are INCORRECT choices for the PRODUCTION code because in GitFlow the feature branches are merged into the Develop branch, and Develop is merged into the parent Master branch only when a release is ready; Master alone represents production.
Master and Feature are INCORRECT choices for the PRE-PRODUCTION code because Master is the parent branch of Develop; Develop is the integration (pre-production) branch, and the feature branches are its children.
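As a minimal illustration of the branch roles (the feature name below is an arbitrary example), the long-lived branches and a feature branch could be created like this:

# create the long-lived pre-production integration branch from master
git checkout -b develop master
# feature work branches off develop and merges back into develop
git checkout -b feature/login develop
git checkout develop
git merge --no-ff feature/login
# when a release is ready, the changes reach master, which always holds production code
git checkout master
git merge --no-ff develop

In full GitFlow the merge into master normally goes through a release/* or hotfix/* branch, but the roles stay the same: Master is production, Develop is pre-production integration, and feature branches are children of Develop.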
Question 20 of 60
20. Question
Your company uses Azure DevOps for Git source control.
Subscribe is the CORRECT answer for Dropdown1 and https://dev.azure.com/contoso/contoso-app/ is the CORRECT answer for Dropdown2 because our requirement is to receive Slack notifications when there are pull requests created for Contoso App which has three repositories.
/azrepos subscribe https://dev.azure.com/contoso/contoso-app/ will be the full command that we will run to subscribe to all the three repositories within our Contoso App project.
Signin is the INCORRECT answer because the /azrepos signin command is used to authenticate and connect the Slack workspace to Azure Repos.
Feedback is an INCORRECT choice because the /azrepos feedback command is used to report a problem or suggest a feature.
Subscriptions is an INCORRECT choice because the /azrepos subscriptions command is used to view, add, or remove subscriptions for a channel. It lists all the current subscriptions for the channel so that new ones can be added or existing ones removed.
https://dev.azure.com/contoso/contoso-app/core-spa is the INCORRECT answer because this URL refers only to the core-spa repository within the Contoso App project, not to all three repositories in the project.
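Taken together, the typical sequence of commands in the Slack channel would therefore be (the project URL is the one from the question):

/azrepos signin
/azrepos subscribe https://dev.azure.com/contoso/contoso-app/
/azrepos subscriptions

Because the subscription is created at the project level, pull request notifications are delivered for all three repositories in the Contoso App project.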
You have a project in Azure DevOps named Contoso App that contains pipelines in Azure Pipelines for GitHub repositories.
You need to ensure that developers receive Microsoft Teams notifications when there are failures in a pipeline of Contoso App.
What should you run in Teams?
Select the option which has the correct values for Dropdown1 and Dropdown2.
Subscribe is the CORRECT answer for Dropdown1 and https://dev.azure.com/contoso/contoso-app/ is the CORRECT answer for Dropdown2 because our requirement is to receive Microsoft Teams notifications when there are failures in pipelines of Contoso App.
@azure pipelines subscribe https://dev.azure.com/contoso/contoso-app/ will be the full command that we will run to subscribe to all the pipelines within our Contoso App project.
After we have subscribed to the pipelines, we can later configure notifications of build/release status for individual pipelines.
Signin is the INCORRECT answer because the @azure pipelines signin command is used to authenticate and connect Microsoft Teams to the Azure Pipelines account.
Feedback is an INCORRECT choice because the @azure pipelines feedback command is used to report a problem or suggest a feature.
Subscriptions is an INCORRECT choice because the @azure pipelines subscriptions command is used to view, add, or remove subscriptions for a channel. It lists all the current subscriptions for the Teams channel so that new ones can be added or existing ones removed.
https://dev.azure.com/contoso/contoso-app/_build is the INCORRECT answer because this link refers neither to a specific build (it contains no buildId/releaseId or definitionId) nor to all the pipelines of the Contoso App project (/_release is missing for release pipelines).
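Similarly, the sequence in the Microsoft Teams channel would be:

@azure pipelines signin
@azure pipelines subscribe https://dev.azure.com/contoso/contoso-app/

Once the project-level subscription exists, the notifications can be narrowed down to build or release failures for individual pipelines, as noted above.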
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals.
You have an approval process that contains a condition. The condition requires that releases be approved by a team leader before they are deployed.
You have a policy stating that approvals must occur within eight hours.
You discover that deployments fail if the approvals take longer than two hours.
You need to ensure that the deployment only fails if the approvals take longer than eight hours.
Solution: From Pre-deployment conditions, you modify the Time between re-evaluation of gates option.
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals.
You have an approval process that contains a condition. The condition requires that releases be approved by a team leader before they are deployed.
You have a policy stating that approvals must occur within eight hours.
You discover that deployments fail if the approvals take longer than two hours.
You need to ensure that the deployment only fails if the approvals take longer than eight hours.
Solution: From Pre-deployment conditions, you modify the Timeout setting for pre-deployment approvals.
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals.
You have an approval process that contains a condition. The condition requires that releases be approved by a team leader before they are deployed.
You have a policy stating that approvals must occur within eight hours.
You discover that deployments fail if the approvals take longer than two hours.
You need to ensure that the deployment only fails if the approvals take longer than eight hours.
Solution: From Post-deployment conditions, you modify the Timeout setting for post-deployment approvals.
Does this meet the goal?
Correct
Correct Answer(s): No
NO is the CORRECT answer because the requirement is to obtain the team leader's approval before the deployment takes place, which is a pre-deployment condition. Post-deployment conditions are evaluated after a deployment has completed and the release is ready to move to the next stage (for example, from QA to Production). Changing the timeout to eight hours would work if it were applied to the pre-deployment approvals, but this solution modifies the post-deployment conditions, so it does not meet the goal.
Question 25 of 60
25. Question
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals.
The lead developer at a company reports that adding new application features takes longer than expected due to a large accumulated technical debt.
You need to recommend changes to reduce the accumulated technical debt.
Solution: You recommend increasing the test coverage.
Does this meet the goal?
Correct
No
Explanation:
While increasing test coverage is generally a good practice, it does not directly address the root cause of the problem: accumulated technical debt.
Technical Debt refers to shortcuts or suboptimal design choices made during development for faster delivery, but which ultimately lead to increased development time and maintenance costs in the long run.
How increasing test coverage might help, but not directly address the debt:
Improved code quality: More tests can help identify and prevent regressions caused by changes to existing code, which can be a symptom of technical debt (e.g., tightly coupled components).
Increased confidence in refactoring: With good test coverage, developers can refactor code (improve its structure and design) with more confidence, as the tests will help ensure that the refactoring does not break existing functionality. This can help reduce technical debt over time.
However, increasing test coverage alone won’t:
Automatically fix existing design flaws: It doesn’t address the underlying issues that led to the technical debt in the first place.
Prevent future technical debt: It doesn’t directly discourage developers from making short-term decisions that create technical debt in the future.
To truly reduce accumulated technical debt, you need to:
Identify and address the root causes: Analyze the codebase to pinpoint areas with high technical debt (e.g., tight coupling, code duplication, lack of modularity).
Refactor the code: Invest time in refactoring the code to improve its design, maintainability, and testability.
Implement code reviews: Encourage peer code reviews to identify and prevent the introduction of new technical debt.
Prioritize code quality: Emphasize the importance of writing clean, maintainable code from the start.
Regularly address technical debt: Allocate dedicated time for addressing technical debt on a regular basis, instead of letting it accumulate further.
In summary: While increasing test coverage is a valuable practice, it’s not the most effective solution for reducing accumulated technical debt. A more comprehensive approach that addresses the underlying causes and actively improves the codebase is required.
Question 26 of 60
26. Question
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals.
The lead developer at a company reports that adding new application features takes longer than expected due to a large accumulated technical debt.
You need to recommend changes to reduce the accumulated technical debt.
Solution: You recommend reducing the code coupling and the dependency cycles.
Does this meet the goal?
Correct
Correct Answer(s): Yes
Yes is the CORRECT answer because, according to modern architecture design best practices focused on microservices, tight code coupling and dependency cycles contribute to poor code quality. To pay down accumulated technical debt, developers should focus on reducing coupling and dependencies; as the codebase grows, dependencies and quality become increasingly difficult to manage. https://www.hbs.edu/faculty/Publication%20Files/2016-JSS%20Technical%20Debt_d793c712-5160-4aa9-8761-781b444cc75f.pdf
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals.
The lead developer at a company reports that adding new application features takes longer than expected due to a large accumulated technical debt. You need to recommend changes to reduce the accumulated technical debt.
Solution: You recommend increasing the code duplication.
In order to identify and remediate code duplication in your existing code, third-party tools like SonarQube can be used.
Question 28 of 60
28. Question
You have a containerized solution that runs in Azure Container Instances. The solution contains a frontend container named App1 and a backend container named DB1. DB1 loads a large amount of data during startup.
You need to verify that DB1 can handle incoming requests before users can submit requests to App1.
What should you configure?
Correct
Correct Answer(s): a readiness probe
A readiness probe is the CORRECT answer because we need to verify that the container is ready to handle incoming traffic while it loads a large amount of data at startup. A readiness probe defines the condition that must succeed before the container starts receiving requests. https://docs.microsoft.com/en-us/azure/container-instances/container-instances-readiness-probe
An Azure Load Balancer health probe is the INCORRECT answer because a health probe helps Azure Load Balancer detect the status of backend endpoints. When balancing network-level load within a virtual network, the load balancer verifies that an endpoint is healthy before sending new flows to that backend pool instance, which is unrelated to Azure Container Instances readiness. https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-custom-probe-overview https://docs.microsoft.com/en-us/azure/load-balancer/
A liveness probe is an INCORRECT choice because a liveness probe comes into play when there is a need to repair a container in a broken state by restarting it if it is not live. With a liveness probe, we can configure the containers within the container group to restart if a critical functionality is not working. https://docs.microsoft.com/en-us/azure/container-instances/container-instances-liveness-probe
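For reference, in Azure Container Instances a readiness probe is declared per container in the container group YAML. The fragment below is only a sketch for DB1: the image name, the /tmp/ready marker file, and the timings are hypothetical and would have to match however DB1 actually signals that its startup data load has finished.

properties:
  containers:
  - name: db1
    properties:
      image: contoso.azurecr.io/db1:latest
      resources:
        requests:
          cpu: 1.0
          memoryInGB: 1.5
      readinessProbe:
        exec:
          command:
          - cat
          - /tmp/ready      # DB1 is assumed to create this file once its data load completes
        initialDelaySeconds: 30
        periodSeconds: 5

Until the probe succeeds, requests sent to the container group are not served by DB1, so App1 users cannot reach it before it is ready.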
Question 29 of 60
29. Question
You create an alert rule in Azure Monitor as shown in the following exhibit.
Which action will trigger an alert?
Correct
Correct Answer(s): a failed attempt to delete the ASP-9bb7 resource
The CORRECT answer is a failed attempt to delete the ASP-9bb7 resource because the Azure Monitor alert is configured at the activity log level, and this is the only option that matches the conditions specified in the alert rule.
Azure Activity logs provide insight into subscription-level events in Azure and are grouped into categories such as Administrative, Security, and Policy. The alert above uses the Administrative category, which records all create, update, delete, and action operations performed through Azure Resource Manager, and it also filters on a status of Failed; therefore the alert fires when a deletion attempt fails.
Refer below for types of Activity logs and their details.
Administrative
Contains the record of all create, update, delete, and action operations performed through Resource Manager. Examples of Administrative events include create virtual machine and delete network security group.
Every action taken by a user or application using Resource Manager is modeled as an operation on a particular resource type. If the operation type is Write, Delete, or Action, the records of both the start and success or fail of that operation are recorded in the Administrative category. Administrative events also include any changes to role-based access control in a subscription.
Service Health
Contains the record of any service health incidents that have occurred in Azure. An example of a Service Health event is "SQL Azure in East US is experiencing downtime."
Service Health events come in six varieties: Action Required, Assisted Recovery, Incident, Maintenance, Information, or Security. These events are only created if you have a resource in the subscription that would be impacted by the event.
Resource Health
Contains the record of any resource health events that have occurred to your Azure resources. An example of a Resource Health event is Virtual Machine health status changed to unavailable.
Resource Health events can represent one of four health statuses: Available, Unavailable, Degraded, and Unknown. Additionally, Resource Health events can be categorized as being Platform Initiated or User Initiated.
Alert
Contains the record of activations for Azure alerts. An example of an Alert event is CPU % on myVM has been over 80 for the past 5 minutes.
Autoscale
Contains the record of any events related to the operation of the autoscale engine based on any autoscale settings you have defined in your subscription. An example of an Autoscale event is Autoscale scale up action failed.
Recommendation
Contains recommendation events from Azure Advisor.
Security
Contains the record of any alerts generated by Azure Security Center. An example of a Security event is Suspicious double extension file executed.
Policy
Contains records of all effect action operations performed by Azure Policy. Examples of Policy events include Audit and Deny. Every action taken by Policy is modeled as an operation on a resource.
a successful attempt to delete the ASP-9bb7 resource is an INCORRECT choice because, even though it is an Administrative activity, it doesn't match the Failed filter; the logged activity would have a status of Succeeded.
a change to a role assignment for the ASP-9bb7 resource is an INCORRECT choice because it also does not match the Failed criterion. A change to Azure role-based access control is an Administrative activity log entry, but to trigger the alert above, the operation would have to fail.
a failed attempt to scale up the ASP-9bb7 resource is an INCORRECT answer because it does not match the administrative category.
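An equivalent activity log alert can also be created from the CLI. The command below is only a sketch: the names, the scope, and the action group are hypothetical placeholders, and the exact --condition fields should be verified against the current az monitor activity-log alert documentation.

az monitor activity-log alert create --name asp9bb7-delete-failed --resource-group rg-demo --scope /subscriptions/<subscription-id>/resourceGroups/rg-demo/providers/Microsoft.Web/serverfarms/ASP-9bb7 --condition "category=Administrative and status=Failed" --action-group ag-ops

The two filters mirror the exhibit: the Administrative category covers create, update, and delete operations made through Resource Manager, and the Failed status restricts the alert to unsuccessful attempts, such as a failed deletion of ASP-9bb7.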
Question 30 of 60
30. Question
You have an application named App1 that has a custom domain of app.contoso.com.
You create a test in Azure Application Insights as shown in the following exhibit.
Based on the information presented in the graphic, complete the following statement with the correct answer choice.
“The test will execute _______________”
Correct Answer(s): every five minutes per location
every five minutes per location is the CORRECT answer because as per the image in the question, we have set the test frequency to 5 minutes and have four test locations. Test frequency sets how often the test is run from each of the test locations. That is why in our case, servers from each location send web requests to the specified URL. https://docs.microsoft.com/en-us/azure/azure-monitor/app/monitor-web-app-availability
Details on parameters for creating a URL test:
every 30 seconds per location is an INCORRECT choice because the test frequency is set to 5 minutes, not 30 seconds.
every 30 seconds at a random location is an INCORRECT option because the test frequency is set to 5 minutes, not 30 seconds. Moreover, the requests originate every 5 minutes from each location, not from a random location.
every 5 minutes at a random location is an INCORRECT answer because the test frequency sets how often the test runs from each test location; it does not pick a random location.
Question 31 of 60
31. Question
You have an application named App1 that has a custom domain of app.contoso.com.
You create a test in Azure Application Insights as shown in the following exhibit.
Based on the information presented in the graphic, complete the following statement with the correct answer choice.
“The test will pass if _______________ within 30 seconds”
When the Parse dependent requests option is checked, the test can fail for issues that might not even be noticeable when manually browsing the site.
App1 responds to ICMP ping is an INCORRECT answer because the Application Insights availability test (URL ping test) does not use the Internet Control Message Protocol to check a site's availability. Instead, it uses HTTP requests to validate whether an endpoint is responding.
All the HTML, JavaScript, and images of App1 load is an INCORRECT answer because that would be the case when the Parse dependent requests option is not checked. However, in our case, the option is checked.
Question 32 of 60
32. Question
You have an Azure DevOps organization named Contoso and an Azure subscription. The subscription contains an Azure virtual machine scale set named VMSS1 that is configured for autoscaling.
You use Azure DevOps to build a web app named App1 and deploy App1 to VMSS1. App1 is used heavily and has usage patterns that vary on a weekly basis.
You need to recommend a solution to detect an abnormal rise in the rate of failed requests to App1. The solution must minimize administrative effort.
What should you include in the recommendation?
Correct Answer(s): the Smart Detection feature in Azure Application Insights
The Smart Detection feature in Azure Application Insights is the CORRECT answer because we have the requirement of a solution that can be used to detect an abnormal rise in the rate of failed requests to App1. Moreover, we have to make sure that administrative efforts are minimal.
Smart Detection in Application Insights automatically warns of potential performance problems and failure anomalies in the configured web application. If there is a sudden rise in failure rates or abnormal patterns in performance, an alert is triggered to warn the app owners.
For failed web requests, the response code is 400 or higher. The failure anomalies feature detects an unusual rise in the rate of HTTP requests or dependency calls that are reported as failed, using Microsoft machine learning algorithms to detect deviations from normal behavior. This feature needs no additional setup or configuration beyond enabling Application Insights, so it requires minimal administrative effort. https://docs.microsoft.com/en-us/azure/azure-monitor/app/proactive-diagnostics
https://docs.microsoft.com/en-us/azure/azure-monitor/app/proactive-failure-diagnostics
After setting up the Application Insights for a web application, and if it generates a certain minimum amount of data, Smart Detection of failure anomalies takes 24 hours to learn the normal behavior of the application, before it is switched on and can send alerts. Refer below for an example of an alert:
the Failures feature in Azure Application Insights is an INCORRECT choice because this feature helps diagnose failures in monitored applications; it does not automatically detect anomalous behavior in failed requests. https://docs.microsoft.com/en-us/azure/azure-monitor/app/asp-net-exceptions
To use this feature from the Azure portal, click on the failures option in the Application insights resource menu, on the left side of the screen. On the right of this feature screen, you can see some of the useful distributions specific to the selected failing operations.
Refer to the image below:
An Azure Service Health alert is an INCORRECT answer because this feature tracks the health of the Azure services in the regions where you use them, not an abnormal rise in failed application requests.
The Service Health feature tracks four types of health events that may impact Azure resources: service issues, planned maintenance, health advisories, and security advisories. https://docs.microsoft.com/en-us/azure/service-health/service-health-overview
Question 33 of 60
33. Question
Your company is concerned that when developers introduce open source libraries, it creates licensing compliance issues.
You need to add an automated process to the build pipeline to detect when common open source libraries are added to the code base.
What should you do?
Microsoft Visual SourceSafe is an INCORRECT option because Microsoft Visual SourceSafe (VSS) is used for version control, not for assessing licensing problems with open source libraries. VSS is a client/server system that was commonly used in an integrated mode with Visual Studio and has since been discontinued.
PDM is an INCORRECT choice because product data management does not add an automated step to the build pipeline to detect when common open source libraries are added to the code base. PDM generally helps engineers and designers work more efficiently by ensuring they no longer accidentally (or even purposefully) overwrite each other's work on a file.
Question 34 of 60
34. Question
You have a GitHub repository.
You create a new repository in Azure DevOps.
You need to recommend a procedure to clone the repository from GitHub to Azure DevOps.
What should you recommend?
Correct Answer(s): From import a Git repository, click Import.
From import a Git repository, click Import is the CORRECT answer because we need to clone an existing repository from GitHub to Azure DevOps without any hassle. We simply provide the URL of the GitHub repository to clone and, if authentication is required, supply credentials for the source repository. After that, a single click clones the entire GitHub repository into Azure Repos. Please refer to the screenshot below for more clarity:
Create a pull request is the INCORRECT choice because a pull request is created to propose changes to the main project branch, not to clone an existing repository from one source control system to another.
Create a Webhook is INCORRECT because we use a webhook when we need to send event information to another service such as GitHub. Webhooks, called service hooks in Azure DevOps, provide a way to run tasks on other services when an event happens in Azure DevOps, such as sending a push notification if a pipeline fails or integrating pipeline events with Azure Functions. https://docs.microsoft.com/en-us/azure/devops/service-hooks/services/webhooks?view=azure-devops
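For reference, the same import can also be scripted with the Azure DevOps CLI extension; the organization, project, repository name, and URL below are placeholders rather than values from the question:
# Sketch using the azure-devops CLI extension; URL and names are illustrative.
az devops configure --defaults organization=https://dev.azure.com/contoso project=WebApp
az repos import create \
  --git-source-url https://github.com/contoso/webapp.git \
  --repository WebApp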
Question 35 of 60
35. Question
Your company uses Azure DevOps to manage the build and release processes for applications.
You use a Git repository for application source control.
You plan to create a new branch from an existing pull request. Later, you plan to merge the new branch and the target branch of the pull request.
You need to use a pull request action to create the new branch. The solution must ensure that the branch uses only a portion of the code in the pull request.
Which pull request action should you use?
Approve with suggestions is an INCORRECT answer because it will not fulfill the requirement to create a new branch from an existing pull request. Approving with suggestions simply approves the merge into the target branch while leaving optional feedback.
Set as default branch is an INCORRECT choice because this feature only changes the repository's default branch from master to the specified branch. It has nothing to do with creating a new branch from an existing pull request. https://medium.com/objectsharp/set-default-branch-in-azure-repos-f879ec1509d0
Revert is an INCORRECT choice because reverting a pull request creates a new branch with changes that undo the initial pull request for the target branch. The new branch contains one reverted commit for each of the commits merged in the original pull request. https://docs.microsoft.com/en-us/azure/devops/repos/git/pull-requests?view=azure-devops
Reactivate is an INCORRECT answer because this action simply reopens a pull request that was previously abandoned. It does not help us use only a portion of the code from the pull request in a new branch.
Question 36 of 60
36. Question
You have an existing project in Azure DevOps.
You plan to integrate GitHub as the repository for the project.
You need to ensure that Azure Pipelines runs under the Azure Pipelines identity.
Which authentication mechanism should you use?
Correct Answer(s): GitHub App
GitHub App is the CORRECT answer because we need to give Azure Pipelines access to the GitHub repositories without running the pipelines under our personal GitHub identity. Instead, pipelines should run under the Azure Pipelines identity, and for that we use the GitHub App authentication type. Builds and GitHub status updates will be performed using the Azure Pipelines identity. https://docs.microsoft.com/en-us/azure/devops/pipelines/repos/github?view=azure-devops&tabs=yaml#access-to-github-repositories
Three authentication types are available for granting Azure Pipelines access to your GitHub repositories:
Azure Active Directory (Azure AD) is an INCORRECT option because Azure AD cannot grant access to GitHub repositories on behalf of Azure Pipelines while ensuring that a personal identity isn't used to run the pipelines. Azure AD is not integrated with GitHub in our case. https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/github-tutorial
Question 37 of 60
37. Question
You are implementing an Azure DevOps strategy for mobile devices using App Center.
You plan to use distribution groups to control access to releases.
You need to create the distribution groups shown in the following table.
Which type of distribution group should you use for Group1?
Correct Answer(s): Private
Private is the CORRECT answer because for Group1 users, the requirement is to send an invitation by email. Private distribution groups ensure that the release/application is accessible only to the invited users and no one else. An invited user needs to verify their identity by signing in to their App Center account in order to access and download the release. https://docs.microsoft.com/en-us/appcenter/distribution/groups#private-distribution-groups
Distribution groups are used to control access to releases. A distribution group helps to manage a set of users as a single entity, so access to a particular release can be controlled for multiple users through a single group.
Public is the INCORRECT answer because we do not need anonymous users to access our release from App Center without authentication.
Shared is the INCORRECT answer because shared distribution groups, which can be public or private, provide access at the organization level rather than the application level. These groups are used by multiple applications inside an organization, but our requirement is to provide invited users access to a specific release. Therefore, Group1 won't use a shared distribution group.
Question 38 of 60
38. Question
You are implementing an Azure DevOps strategy for mobile devices using App Center.
You plan to use distribution groups to control access to releases.
You need to create the distribution groups shown in the following table.
Which type of distribution group should you use for Group2?
Correct Answer(s): Public
Public is the CORRECT answer because there is a requirement to give users unauthenticated public links for accessing the release. The public distribution group will be used to provide access to the early release users without using any authentication credentials.
If public access is enabled for an already existing private group, the users who were part of that group still receive email invites to access the release/application, but the email also contains a public link for unauthenticated access. New users can use the public link to gain direct, unauthenticated access. https://docs.microsoft.com/en-us/appcenter/distribution/groups#public-distribution-groups https://docs.microsoft.com/en-us/appcenter/distribution/uploading
Private is the INCORRECT choice because we need unauthenticated access without providing any credentials using a public link. The private distribution group requires users to sign-in to their app center account for using the release/application.
Shared is the INCORRECT choice because shared groups are used for providing access to multiple projects within an organization which is not a requirement for Group2. Technically, a shared distribution group can be either public or private and used by multiple projects. Since there is no mention of multiple projects for group 2, we rule this option out.
Question 39 of 60
39. Question
You are implementing an Azure DevOps strategy for mobile devices using App Center.
You plan to use distribution groups to control access to releases.
You need to create the distribution groups shown in the following table.
Which type of distribution group should you use for Group3?
Correct Answer(s): Shared
Shared is the CORRECT answer here because Group3 users (application testers) need access to all of the company's apps. A shared distribution group can be shared across multiple applications inside an organization and can be used to grant access to any combination of applications residing in the parent organization.
Shared distribution groups can be either public or private and exist at the organization level rather than at the application level. https://docs.microsoft.com/en-us/appcenter/distribution/groups#shared-distribution-groups
Private is the INCORRECT answer because this type of distribution group won't by itself give access to all of the company's applications; private groups control application-level access.
Public is the INCORRECT answer because a public distribution group cannot by itself grant access to all of the projects in an organization. That is why we rule this option out and choose a shared (organization-level) distribution group.
Question 40 of 60
40. Question
You need to recommend a Docker container build strategy that meets the following requirements:
– Minimized image sizes
– Minimizes the security surface area of the final image
What should you include in the recommendation?
Correct Answer(s): multi-stage builds
Multi-stage builds is the CORRECT choice because we need to minimize the size of the final image. Every instruction in a Dockerfile adds a layer to the image, so writing an efficient single-stage Dockerfile traditionally required careful structuring. Multi-stage builds are now preferred over single-stage builds to keep the layers as small as possible and to ensure that each stage carries forward only the artifacts it needs from the previous stage and nothing else. This practice keeps the size and security surface area of the final image as small as possible. https://docs.docker.com/develop/develop-images/multistage-build/
This is made possible by using the FROM statement multiple times in a Dockerfile and referencing a previous stage as the source for a later stage, leaving behind anything that is not required for the final image, as sketched below. https://medium.com/capital-one-tech/multi-stage-builds-and-dockerfile-b5866d9e2f84
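A minimal sketch of this pattern for a .NET Core application is shown below; the base image tags and the project/DLL names are illustrative placeholders, not values from the question:
# Stage 1: build and publish using the full SDK image
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /app/publish

# Stage 2: copy only the published output into the smaller runtime image
FROM mcr.microsoft.com/dotnet/aspnet:6.0
WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "MyApp.dll"]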
Docker Swarm is the INCORRECT answer because a Docker Swarm is a group of nodes (physical machines or VMs) that are running the Docker application and that have been configured to join together in a cluster. Once a group of machines have been clustered together, you can still run the Docker commands that you’re used to, but they will now be carried out by the machines in your cluster. The activities of the cluster are controlled by a swarm manager, and machines that have joined the cluster are referred to as nodes.
The docker swarm is a container orchestration tool that has nothing to do with the image size and therefore we rule this option out.
Single-stage builds is the INCORRECT answer because it will not satisfy our requirement to minimize the size and surface area of the final Docker image. With a single-stage build, the final image contains all of the dependencies and files used from the start, resulting in a larger final image.
Question 41 of 60
41. Question
You plan to create an image that will contain a .NET Core application.
You have a Dockerfile file that contains the following code. (Line numbers are included for reference only.)
You need to ensure that the image is as small as possible when the image is built.
Which line should you modify in the file?
Correct Answer(s): 4
4 is the CORRECT answer because we need to minimize the size of the final image built and to do that we need to employ the concept of multi-stage builds.
With multi-stage builds, we use multiple FROM statements in the Dockerfile. Each FROM instruction should use a different base, and each of them should begin a new stage of the build. Artifacts can be copied selectively from one stage to another, leaving behind everything that is not wanted in the final image.
Referring to the code above: the FROM statement on the fourth line can use a different base image from the one used on the first line. Preferably, it should reference the first build stage as its starting point.
Note: by default, the first stage of a multi-stage build is referred to as 0 (the number zero) if no explicit name is given. https://docs.docker.com/develop/develop-images/multistage-build/#use-multi-stage-builds
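Since the exhibit is not reproduced here, the following is only a hypothetical illustration of that pattern: a second stage that starts from a runtime base image on line 4 and copies the published output from the unnamed first stage (stage 0). The image tag, paths, and DLL name are placeholders:
# Hypothetical sketch of line 4 onward; tag, paths, and DLL name are illustrative.
FROM mcr.microsoft.com/dotnet/aspnet:6.0
WORKDIR /app
COPY --from=0 /app/publish .
ENTRYPOINT ["dotnet", "MyApp.dll"]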
3 is the INCORRECT choice because the RUN instruction on the third line executes a command and commits the result as a new layer of the image; it is required for the build. https://docs.docker.com/engine/reference/commandline/run/
7 is the INCORRECT choice because the ENTRYPOINT instruction on the seventh line configures how the container will run. Similar to the CMD instruction, ENTRYPOINT takes a command and optional parameters.
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals.
Your company has a project in Azure DevOps for a new web application.
You need to ensure that when code is checked in, a build runs automatically.
Solution: From the Continuous deployment trigger settings of the release pipeline, you enable the Pull request trigger setting.
Does this meet the goal?
Correct Answer(s): No
No is the CORRECT answer because, first and foremost, to run a build we need to trigger the build pipeline, not the release pipeline. As mentioned in the question, when we enable the pull request trigger in the Continuous deployment trigger settings, a release is created every time a selected artifact becomes available as part of a pull request workflow.
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals.
Your company has a project in Azure DevOps for a new web application.
You need to ensure that when code is checked in, a build runs automatically.
Solution: From the Triggers tab of the build pipeline, you select Enable continuous integration.
Does this meet the goal?
Correct Answer(s): Yes
Yes is the CORRECT answer because the Continuous Integration trigger on a build pipeline builds the code automatically as soon as code is checked in to the target branch of the repository.
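The question describes the Triggers tab of the classic editor; the equivalent in a YAML build pipeline is the trigger section. Below is a minimal sketch (the branch name and build step are assumptions, not part of the question):
# azure-pipelines.yml - the build runs automatically on every check-in to the listed branches
trigger:
  branches:
    include:
      - main
pool:
  vmImage: 'ubuntu-latest'
steps:
  - script: echo "Building on every check-in"
    displayName: 'Placeholder build step'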
You are configuring a release pipeline in Azure DevOps as shown in the exhibit.
Based on the information presented in the graphic, answer the question that follows.
How many stages have triggers set?
Correct Answer(s): 7
7 or Seven is the CORRECT answer because every stage that is part of the release structure has its trigger set in the pre-deployment conditions settings.
The triggers are of three types:
Manual: the stage deploys only when triggered manually
After stage: the stage deploys after a preceding stage has been deployed
After release: the stage deploys as soon as the release is created (the first stage, after new artifacts are generated)
In the exhibit given in the question:
– The Development stage has its trigger set to After release
– The Internal Review stage has its trigger set to Manual only
– All the other stages have their trigger set to After stage, as indicated by the arrows connecting them to a previously deployed stage. https://docs.microsoft.com/en-us/azure/devops/pipelines/build/triggers?view=azure-devops
Question 45 of 60
45. Question
You are configuring a release pipeline in Azure DevOps as shown in the exhibit.
Based on the information presented in the graphic, answer the question that follows.
Which component should you modify to enable continuous delivery?
Correct Answer(s): The Internal Review stage
The Internal Review stage is the CORRECT answer because, to enable continuous delivery, we need a continuous workflow in which the code is built, tested, configured and deployed all the way from the build stage to the production environment. Since the Web Application artifact already has the continuous deployment trigger enabled, the Internal Review stage is the only stage left out of the CD cycle (its trigger is set to manual).
Continuous Delivery sequences multiple deployment rings for progressive exposure as a release reaches a wider audience. Through progressive exposure, different user groups get to try new releases and have their experience monitored in rings. Rings define the different user groups who use the software version progressively (over a release cycle) and provide feedback on its maturity. Example: the first deployment ring is used to test new versions in production before a broader rollout. Continuous Delivery aims to automate deployment from one ring to the next and may optionally depend on a decision maker's approval. CD may create an auditable record of the approval in order to satisfy regulatory procedures or other control objectives. https://azuredevopslabs.com/labs/azuredevops/continuousdeployment/#exercise-1-embracing-continuous-delivery-with-azure-devops
Without Continuous Delivery, teams often relied on handoffs that resulted in issues during release cycles. The automated release pipeline allows a fail fast approach to validation, where the tests most likely to fail quickly are run first and longer-running tests happen after the faster ones complete successfully.
The goal of CD is to keep production fresh by achieving the shortest path from the availability of new code in version control or new components in package management to deployment. By automation, CD minimizes the time to deploy and time to mitigate or time to remediate production incidents (TTM and TTR). In lean terms, this optimizes process time and eliminates idle time. https://docs.microsoft.com/en-us/azure/devops/learn/what-is-continuous-delivery
The Development stage is an INCORRECT choice because it is already part of the continuous automated release pipeline and its trigger is not set to manual.
The Production stage is an INCORRECT choice because it is also part of the continuous release pipeline and has several pre-production stages and pre-deployment conditions controlling deployment to it.
The Web Application artifact is an INCORRECT answer because it already has the continuous deployment trigger enabled as part of the automated pipeline. We can confirm this by looking at the small tick mark at the top right of the web application artifact in the exhibit.
Question 46 of 60
46. Question
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals.
You have an approval process that contains a condition. The condition requires that releases be approved by a team leader before they are deployed.
You have a policy stating that approvals must occur within eight hours.
You discover that deployment fails if the approvals take longer than two hours.
You need to ensure that the deployments only fail if the approvals take longer than eight hours.
Solution: From Post-deployment conditions, you modify the Time between re-evaluation of gates option.
You use Azure pipelines to manage the build and deployment of apps.
You are planning the release strategies for a new app.
Which strategy should you choose for the following scenario?
Releases will be made available to users who are grouped by their tolerance for software faults.
Correct
Progressive exposure is the CORRECT answer here because we want to release the software to users in a progressive manner. Progressive exposure deployment is used where we want a new release to be exposed to a small set of users first and then, over time, extended to a larger group.
The concept is usually pictured as concentric rings – an inner circle of canaries, then early adopters, and finally the sphere containing all remaining users. This is often referred to as the impact/blast radius: the release is analysed at each ring before being exposed to more users.
Examples of this can be seen in Azure cloud services. Microsoft relies on its user base and customers to implement and test new features against existing systems to make sure they continue to function as expected. Sometimes features are released to private preview, then extended to public preview, followed by general availability if everything works well. The impact of new changes on different users is increased over time. The advantage of this method is that users know beforehand that they are in an early adoption phase and should expect to see issues.
Blue/green is an INCORRECT answer here because this type of deployment involves running two identical production environments called Blue and Green. It is used when we need two identical environments and want to flip a switch to redirect traffic without interrupting the user's experience if a newer environment or feature does not work as expected. This methodology does not help us release to user groups in order to reduce the blast radius, and that is why we rule this option out.
Feature flags is an INCORRECT choice because we need to reduce the impact of new features by grouping the users and making the release available based on those user groups. Feature flags, by contrast, are used to enable functionality that becomes available in later stages of development.
Question 48 of 60
48. Question
You use Azure pipelines to manage the build and deployment of apps.
You are planning the release strategies for a new app.
Which strategy should you choose for the following scenario?
Code will be deployed to enable functionality that will be available in later releases of the app.
Correct Answer(s): feature flags
Feature flags is the CORRECT answer here because our requirement is to deploy code that enables functionality which will only become available in later releases of the app. Feature flags (also called feature toggles or switches) are the concept of adding a flag to a feature so that it can be enabled or disabled for the application. This allows a feature to be tested even before it is completed and ready for release. Feature toggles or flags are used to hide, enable, or disable a feature as the application evolves over time.
Blue/green is an INCORRECT option because this type of deployment cannot be used to enable functionality intended for a later release, and that is why we rule it out. We use blue/green deployment when we need two identical environments so that, if a newer release does not work as expected, we can switch back to the previous working version.
Progressive exposure is an INCORRECT option here because it is used in cases where we want to make a release available to different users who are grouped by their tolerance for software faults. It has nothing to do with enabling functionality that will be available in later releases of the app, and that is why we rule this option out.
Question 49 of 60
49. Question
You use Azure pipelines to manage the build and deployment of apps.
You are planning the release strategies for a new app.
Which strategy should you choose for the following scenario?
When a new release occurs, the existing deployment will remain active to minimise recovery time if a return to the previous version is required.
Correct Answer(s): Blue/green
Blue/green is the CORRECT answer because we have a requirement to keep the existing deployment active to minimise recovery time if a return to the previous version is required.
Blue/green deployment is a technique that reduces downtime and risk by running two identical production environments called Blue and Green. At any given time, only one of the environments is live and hosts the production traffic. A router, or Traffic Manager in Azure, is used to redirect traffic. https://www.redhat.com/en/topics/devops/what-is-blue-green-deployment
This setup helps ensure that the new environment is up to the mark and serving its purpose before traffic is routed to it, and it avoids interrupting the user's experience. Another benefit is that if the change needs to be rolled back, we can redirect traffic to the last known working deployment without any hassle.
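As one concrete illustration (not part of the original explanation), with Azure App Service deployment slots the switch between the two environments is a single swap operation; the resource group, app and slot names below are placeholders:
# Swap the verified staging (green) slot into production (blue)
az webapp deployment slot swap \
  --resource-group MyResourceGroup \
  --name MyWebApp \
  --slot staging \
  --target-slot production
If the new version misbehaves, running the same swap again returns traffic to the previous deployment.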
Progressive exposure is an INCORRECT choice because we need to keep the existing deployment active to minimise recovery time if a return to the previous version is required. Progressive exposure does not provide such functionality; instead, it focuses on gradually widening the exposure of new features.
Feature flags is an INCORRECT answer here because feature flags provide a way to enable more features over time, but they do not keep the existing deployment active so that we can switch back from a newer deployment if something goes wrong with it.
Question 50 of 60
50. Question
Your company has a hybrid cloud between Azure and Azure Stack.
The company uses Azure DevOps for its full CI/CD pipelines. Some applications are built by using Erlang and Hack.
You need to ensure that Erlang and Hack are supported as part of the build strategy across the hybrid cloud. The solution must minimize management overhead.
What should you use to execute the build pipeline?
Correct Answer(s): Azure DevOps self-hosted agents on Virtual Machines that run on Azure Stack
Azure DevOps self-hosted agents on virtual machines that run on Azure Stack is the CORRECT answer because we need to make sure that support for Erlang and Hack is extended across the hybrid cloud environment while keeping the management overhead to a minimum.
In our case, we will use virtual machines provided as part of Azure Stack Hub, because these VMs are scalable, offer an experience consistent with Azure VMs, and run inside the hybrid environment itself. A self-hosted agent on such a VM can be provisioned with whatever toolchains the builds require, including Erlang and HHVM for Hack. https://docs.microsoft.com/en-us/azure-stack/user/azure-stack-compute-overview
Azure DevOps self-hosted agents on Azure DevTest Labs virtual machines is an INCORRECT answer because Azure DevTest Labs is intended for testing and development purposes only. Infrastructure provided by Azure DevTest Labs should not be used for live or production environments.
Azure DevOps self-hosted agents on Hyper-V virtual machines is an INCORRECT answer because leveraging Hyper-V virtual machines will not minimize operational overhead, as we would have to manage the Hyper-V environment ourselves.
A Microsoft-hosted agent is an INCORRECT answer because we need to ensure that Erlang and Hack are supported as part of the build strategy across the hybrid cloud. Erlang is supported on Microsoft-hosted agents (Linux-based), but Hack, which runs on the open-source HipHop Virtual Machine (HHVM), is not a good choice to run on a Microsoft-hosted agent.
Question 51 of 60
51. Question
You have an Azure DevOps project named Project1 and an Azure subscription named Sub1. Sub1 contains an Azure SQL database named DB1.
You need to create a release pipeline that uses the Azure SQL Database Deployment task to update DB1.
Which artifact should you deploy?
Correct Answer(s): a DACPAC
a DACPAC is the CORRECT answer here because we need to update the existing database DB1 in Azure SQL Database (PaaS) using the Azure SQL Database Deployment task in the pipeline. This task uses either DACPACs or SQL scripts to deploy or update a database on an existing Azure SQL server.
A DAC (data-tier application, a logical entity) encapsulates all the SQL Server objects, such as instance objects and logins, used by database users into a single artifact (a .zip package) known as a DACPAC. Administrators can use a DACPAC to capture and deploy a schema and to upgrade an existing database.
Note: a DACPAC contains only the schema of the database, not the data. https://docs.microsoft.com/en-us/azure/devops/pipelines/tasks/deploy/sql-azure-dacpac-deployment?view=azure-devops https://github.com/microsoft/azure-pipelines-tasks/blob/master/Tasks/SqlAzureDacpacDeploymentV1/README.md
The original question includes an image showing the task configuration for reference.
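In a YAML pipeline, the task looks roughly like the sketch below. This is a hedged example: the service connection, server, credentials and DACPAC path are placeholders, and the input names should be verified against the task documentation linked above.
# Deploy a DACPAC produced by the build to the existing Azure SQL database DB1
- task: SqlAzureDacpacDeployment@1
  inputs:
    azureSubscription: 'MyAzureServiceConnection'
    ServerName: 'myserver.database.windows.net'
    DatabaseName: 'DB1'
    SqlUsername: '$(sqlAdminUser)'
    SqlPassword: '$(sqlAdminPassword)'
    DacpacFile: '$(Pipeline.Workspace)/drop/DB1.dacpac'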
An LDF file is an INCORRECT answer because it is a transaction log file created by SQL Server. It contains a log of recent actions (transactions) executed against the database and is used to track events. An LDF file is helpful when we need to recover or restore a database after unexpected or unplanned events.
An MDF file is an INCORRECT choice because an MDF file is the primary data file for a database. It is the primary location for storing all the database objects, including tables, views, triggers, etc. However, it cannot be used to update the SQL database through the Azure SQL Database Deployment task, which accepts only DACPACs and .sql scripts, not .mdf files.
A BACPAC is an INCORRECT answer because, unlike a DACPAC, this file holds the data as well as the schema of a database, which is why we do not use it to update the database. It is useful when we want to export or import a database.
Question 52 of 60
52. Question
You have a web app hosted on Azure App Service. The web app stores data in Azure SQL database.
You need to generate an alert when there are 10,000 simultaneous connections to the database. The solution must minimise development effort.
Which option should you select in the Diagnostic settings of the database?
Correct Answer(s): Send to Log Analytics
Send to Log Analytics is the CORRECT answer here because we have a requirement to use Azure Monitor and raise an alert whenever there are 10,000 simultaneous connections to a SQL database. When we send our diagnostic data to Log Analytics, we can run native KQL queries against that data and build custom log alerts from those queries.
Once the data is sent to Log Analytics, we write a KQL query that fires an alert whenever the threshold of 10,000 simultaneous connections is exceeded for a particular database, or for all the databases for which data is being collected in the Log Analytics workspace.
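A sketch of the kind of log alert query involved is shown below. The table, metric name and aggregation are assumptions about how the SQL diagnostic metrics land in the workspace and should be verified against the actual schema before use.
// Fire the alert when simultaneous connections to the database exceed 10,000
AzureMetrics
| where ResourceProvider == "MICROSOFT.SQL" and MetricName == "sessions_count"
| summarize SimultaneousConnections = max(Maximum) by bin(TimeGenerated, 5m)
| where SimultaneousConnections > 10000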
Archive to a Storage Account is an INCORRECT answer because we need to configure alerts based on a custom condition. The solution is to use Azure Monitor to configure those alerts, but data archived in a storage account cannot be queried by Azure Monitor alerts, and that is why we rule this option out.
Stream to an Event Hub is an INCORRECT answer because we stream data to an event hub when we want to pass it to a big data platform or an event ingestion service, such as Databricks or other third-party analytics. Since we want to create alerts from the diagnostic data, we use Log Analytics instead of an event hub.
Question 53 of 60
53. Question
You plan to create alerts that will be triggered based on the page load performance of a homepage.
You have the Application Insights log query shown in the following exhibit.
Based on the information presented in the graphic, complete the following statement with the correct answer choice.
To create an alert based on the page load experience of most users, the alerting level must be based on __________________.
Correct Answer(s): percentile_duration_95
percentile_duration_95 is the CORRECT answer because we need to capture the experience of most users, and the percentiles() aggregation function in the query calculates the value of duration that is larger than the duration experienced by 95% of the sample set (the users, in our case).
Percentile in general: let's say a person ranks at the 99th percentile in an exam. This means that the person scored more than 99% of the people who appeared in that exam.
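For illustration, a query along the lines of the sketch below produces exactly those column names. The table, page name and time bin are assumptions, not the exhibit's actual query.
// percentiles() emits columns named percentile_duration_50, percentile_duration_90 and percentile_duration_95
pageViews
| where name == "Home Page"
| summarize percentiles(duration, 50, 90, 95) by bin(timestamp, 5m)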
percentile_duration_50 is an INCORRECT answer because this aggregation gives the 50th percentile – a duration larger than that experienced by 50% of the users. That covers only half of the users, not most of them, and that is why we rule this option out.
percentile_duration_90 is an INCORRECT answer because it covers only 90% of the sample set, which is less than the 95% offered by the other option.
threshold is an INCORRECT answer because the threshold value is the trigger condition for the alert (if the measured value exceeds the threshold, the alert fires); it is not an aggregation that can be used to represent the share of users.
Question 54 of 60
54. Question
You plan to create alerts that will be triggered based on the page load performance of a homepage.
You have the Application Insights log query shown in the following exhibit.
Based on the information presented in the graphic, complete the following statement with the correct answer choice.
To only create an alert when an authentication error occurs on the server, the query must be filtered on __________________.
Correct Answer(s): success
Success is the CORRECT answer because we need to check the status of the authentication request and alert only if an authentication error occurs on the server; filtering on the success field limits the results to failed requests.
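As a hedged illustration (the table, operation-name filter and comparison below are assumptions about the underlying Application Insights schema, not the exhibit's query; in some schemas success is stored as the string "False" rather than a boolean):
// Keep only failed (unsuccessful) authentication requests on the server
requests
| where name contains "authenticate" and success == false
| summarize FailedRequests = count() by bin(timestamp, 5m)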
itemType is the INCORRECT answer because in our case the item type would simply be equal to request:
itemType == request
We cannot use the itemType property to check whether a request succeeded or failed, and that is why we rule this option out.
Source is the INCORRECT answer because the source property identifies where the incoming request originated, for example Azure Service Bus or an endpoint URL. It does not provide a way to determine whether the request was successful.
resultCode is the INCORRECT answer because this property gives us the response code returned by the web page or application, for example a status code of 400.
Question 55 of 60
55. Question
You have a multi-tier application. The front end of the application is hosted in Azure App Service.
You need to identify the average load times of the application pages.
What should you use?
Correct Answer(s): Azure Application Insights
Azure Application Insights is the CORRECT answer because we need to identify and analyze the average page load times of the web pages for an application running in an Azure App Service.
We can enable Application Insights for our Azure App Service and then use customised log queries to analyse application-specific data, such as page load times, without touching the application code itself.
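Once Application Insights is enabled, a query like the sketch below returns the figure in question. The table and column names follow the standard Application Insights schema; the time range is an assumption.
// Average page load time per page over the last 24 hours (duration is the page load time)
pageViews
| where timestamp > ago(24h)
| summarize AvgLoadTime = avg(duration) by name
| order by AvgLoadTime desc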
Azure Advisor is the INCORRECT answer because Azure Advisor is a tool that does analysis on the existing configurations in Azure and based on that it provides recommendations in order to help the user to optimise the current Azure resources for better reliability, security, operational excellence, performance and cost. However, it does not help analyze the application page load times.
The activity log of the App Service is the INCORRECT answer because activity logs are subscription level logs which give insights on categories like security, service health, etc.
The diagnostic logs of the App Service is the INCORRECT answer because the diagnostic logs of an app service provide resource-level (Azure App Service) information, such as average utilisation. They do not provide metrics on the average page load times of the running application.
Question 56 of 60
56. Question
You use Azure DevOps to manage the build and deployment of an app named App1.
You have a release pipeline that deploys a virtual machine named VM1.
You plan to monitor the release pipeline by using Azure Monitor.
You need to create an alert to monitor the performance of VM1. The alert must be triggered when the average CPU usage exceeds 70 percent for five minutes. The alert must calculate the average once every minute.
You need to configure the alert for the Aggregation granularity (Period).
What should you choose?
Correct
Correct Answer(s): 5 minutes
5 minutes is the CORRECT answer because the requirement is to calculate the average CPU usage over a period of five minutes, so the aggregation granularity (period) must be set to 5 minutes.
1 minute, 3 minutes, and 7 minutes are INCORRECT answers because the CPU usage must be averaged over the last 5 minutes (the aggregation granularity, or period), while the alert is evaluated once every minute (the frequency of evaluation).
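For illustration only, a hedged Azure CLI sketch of such an alert rule (the alert name, resource group, and scope ID are placeholders, not part of the question):
az monitor metrics alert create \
  --name "vm1-cpu-over-70" \
  --resource-group MyResourceGroup \
  --scopes "/subscriptions/<subscription-id>/resourceGroups/MyResourceGroup/providers/Microsoft.Compute/virtualMachines/VM1" \
  --condition "avg Percentage CPU > 70" \
  --window-size 5m \
  --evaluation-frequency 1m \
  --description "Average CPU above 70% over the last 5 minutes, evaluated every minute"
Here --window-size maps to the aggregation granularity (period) and --evaluation-frequency to how often the average is recalculated.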
Question 57 of 60
57. Question
You use Azure DevOps to manage the build and deployment of an app named App1.
You have a release pipeline that deploys a virtual machine named VM1.
You plan to monitor the release pipeline by using Azure Monitor.
You need to create an alert to monitor the performance of VM1. The alert must be triggered when the average CPU usage exceeds 70 percent for five minutes. The alert must calculate the average once every minute.
You need to configure the alert for the Threshold value.
What should you choose?
Correct
Correct Answer(s): Static
Static is the CORRECT answer because the requirement is to alert when the CPU usage of VM1 exceeds a fixed 70 percent threshold. Since 70% is a known, fixed value, a static threshold is used for the alert.
Dynamic is the INCORRECT answer because a dynamic threshold is used when there is no specific value (in our case, 70%) to alert on. Dynamic threshold alerts use machine learning to learn the resource's normal behaviour and fire when the metric deviates from that baseline.
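For contrast with the static-threshold rule sketched under question 56, a dynamic-threshold variant might look like the following (illustration only; the "dynamic" condition grammar is an assumption and may differ by CLI version):
az monitor metrics alert create \
  --name "vm1-cpu-dynamic" \
  --resource-group MyResourceGroup \
  --scopes "/subscriptions/<subscription-id>/resourceGroups/MyResourceGroup/providers/Microsoft.Compute/virtualMachines/VM1" \
  --condition "avg Percentage CPU > dynamic Medium 2 of 4" \
  --window-size 5m \
  --evaluation-frequency 1m
The static form required by this question simply replaces the condition with "avg Percentage CPU > 70".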
Question 58 of 60
58. Question
You use Azure DevOps to manage the build and deployment of an app named App1.
You have a release pipeline that deploys a virtual machine named VM1.
You plan to monitor the release pipeline by using Azure Monitor.
You need to create an alert to monitor the performance of VM1. The alert must be triggered when the average CPU usage exceeds 70 percent for five minutes. The alert must calculate the average once every minute.
You need to configure the alert for the Operator.
What should you choose?
Correct
Correct Answer(s): Greater than
Greater than is the CORRECT answer because the alert must fire only when the average CPU usage exceeds 70 percent, so the condition must be strictly greater than the 70 percent threshold.
Greater than or equal to is an INCORRECT choice because this operator would fire the alert even at exactly the 70% mark, whereas the alert must fire only when the CPU usage exceeds 70 percent.
Less than is an INCORRECT answer because this operator would fire the alert when the CPU usage is below 70%, which is the opposite of the requirement.
Less than or equal to is the INCORRECT answer because this operator would fire the alert when the CPU usage is at or below 70%, which again does not meet the requirement.
Question 59 of 60
59. Question
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals.
Your company uses Azure DevOps to manage the build and release processes for applications.
You use a Git repository for application source control.
You need to implement a pull request strategy that reduces the history volume in the master branch.
Solution: You implement a pull request strategy that uses squash merges.
Does this meet the goal?
Correct
Correct Answer(s): Yes
Yes is the CORRECT answer because the requirement is a pull request strategy that keeps the volume of history in the master (default) branch as small as possible.
Squash merging condenses the history of a feature branch when its pull request is completed into master: the feature branch's entire series of commits becomes a single commit on the master branch, which reduces the complexity of tracking changes (see the sketch below). https://docs.microsoft.com/en-us/azure/devops/repos/git/branch-policies?view=azure-devops
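As a local illustration of what a squash merge does (the branch name is hypothetical; in Azure Repos this is normally chosen as the merge type when completing the pull request):
# Combine all commits from the feature branch into one staged change on master.
git checkout master
git merge --squash feature/payment-report
# No merge commit is created automatically; a single commit records the whole feature.
git commit -m "Add payment report (squashed from feature/payment-report)"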
Question 60 of 60
60. Question
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals.
Your company uses Azure DevOps to manage the build and release processes for applications.
You use a Git repository for application source control.
You need to implement a pull request strategy that reduces the history volume in the master branch.
Solution: You implement a pull request strategy that uses a three-way merge.
Does this meet the goal?
Correct
Correct Answer(s): No
No is the CORRECT answer because we want to reduce the commit history carried into the master branch, and a three-way merge does not reduce commit history at all; it simply describes how Git combines two diverged branches by using their common ancestor as the third input.
A three-way merge is typically used when a branch has diverged from its parent: while one feature branch is being developed, the parent branch (and possibly other feature branches) continues to advance. When the feature branch is eventually merged back, Git uses the two branch tips plus their common ancestor (three "heads") to produce a merge commit, and the feature branch's full history remains visible in master (see the sketch below). https://www.atlassian.com/git/tutorials/using-branches/git-merge
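For contrast with the squash merge shown under question 59, a minimal sketch of an ordinary (three-way) merge (the branch name is hypothetical):
# A regular merge creates a merge commit with two parents and keeps the
# feature branch's full commit history visible in master.
git checkout master
git merge feature/payment-report
git log --oneline --graph   # the feature branch's individual commits remain in the history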