AZ-400 Practice Test 5
Question 1 of 55
1. Question
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals.
Your company uses Azure DevOps to manage the build and release processes for applications.
You use a Git repository for the application's source control.
You need to implement a pull request strategy that reduces the history volume in the master branch.
Solution: You implement a pull request strategy that uses an explicit merge.
Does this meet the goal?
Correct Answer(s): No
No is the CORRECT answer because the requirement is to reduce the volume of commit history and keep the master branch as clean as possible. Using an explicit merge produces the exact opposite of what is expected.
An explicit merge is essentially the opposite of a squash merge. A squash merge condenses a feature branch's commit history into a single commit when merging into the master branch, whereas an explicit merge keeps the full feature-branch history and adds a dedicated merge commit, which makes it obvious where a merge happened.
Explicit merges therefore add complexity and grow the commit/merge history, which can create operational overhead in a large code base. https://www.atlassian.com/git/tutorials/using-branches/merge-strategy
Question 2 of 55
2. Question
Your company builds a multi-tier web application.
You use Azure DevOps and host the production application on Azure virtual machines.
Your team prepares an Azure Resource Manager template of the virtual machine that you will use to test new features.
You need to create a staging environment in Azure that meets the following requirements:
– Minimizes the cost of Azure hosting
– Provisions the virtual machines automatically
– Uses the custom Azure Resource Manager template to provision the virtual machines
What should you do?
Correct Answer(s): In Azure DevOps, configure new tasks in the release pipeline to create and delete the virtual machines in Azure DevTest Labs
In Azure DevOps, configure new tasks in the release pipeline to create and delete the virtual machines in Azure DevTest Labs is the CORRECT answer because the requirements are to minimize cost, provision the virtual machines automatically, and use the custom Azure Resource Manager template for provisioning.
Azure DevTest Labs provides on-demand lab environments in which developers can test their code in an Azure-like environment. It supports Azure Resource Manager templates for automated provisioning of workloads and also provides an artifact repository for managing code and pushing it into the workloads.
The DevTest Labs service itself is free; you pay only for the Azure resources (such as compute and storage) that the lab provisions, and built-in policies such as auto-shutdown help keep those costs down.
Refer to the DevTest Labs documentation for the full list of features and the cost-optimization/automation options: https://azure.microsoft.com/en-in/pricing/details/devtest-lab/
In Azure Cloud Shell, run Azure CLI commands to create and delete the new virtual machines in a staging resource group is an INCORRECT choice because a staging resource group driven by the Azure CLI does not meet the requirements of using the custom ARM template and optimizing cost.
In Azure DevOps, configure new tasks in the release pipeline to deploy to Azure Cloud Services is an INCORRECT choice because this approach does not provide any additional cost optimization.
From Azure Cloud Shell, run Azure PowerShell commands to create and delete the new virtual machines in a staging resource group is an INCORRECT answer for the same reason as the Azure CLI option: it does not meet the requirements of using the custom ARM template and optimizing cost.
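As an illustration only, a staging stage could pair a DevTest Labs create task with a delete task around the test run. Everything below that is not in the question — the task names/versions, input names, the 'staging-arm' service connection, the lab ID variable, and the template path — is an assumption, not a verified reference:
```yaml
# Sketch of a staging stage in an Azure Pipelines YAML definition.
# Assumes the Azure DevTest Labs Tasks extension is installed; task names,
# versions, input names, service connection, lab ID, and template path are
# illustrative assumptions.
stages:
- stage: Staging
  jobs:
  - job: ProvisionTestTearDown
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    - task: AzureDevTestLabsCreateVM@3        # create the VM from the custom ARM template
      inputs:
        ConnectedServiceName: 'staging-arm'   # Azure Resource Manager service connection (placeholder)
        LabId: '$(stagingLabResourceId)'      # resource ID of the DevTest Lab (placeholder variable)
        TemplateName: 'armTemplates/staging-vm.json'
    # ... run the feature tests against the provisioned VM here ...
    - task: AzureDevTestLabsDeleteVM@3        # delete the VM afterwards to minimize cost
      inputs:
        ConnectedServiceName: 'staging-arm'
        LabVmId: '$(labVmId)'                 # output of the create step (assumed variable name)
```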
Question 3 of 55
3. Question
You are designing the development process for your company.
You need to recommend a solution for continuous inspection of the company’s code base to locate common code patterns that are known to be problematic.
What should you include in the recommendation?
Correct Answer(s): SonarCloud analysis
SonarCloud analysis is the CORRECT answer because we need continuous inspection of the code base to locate problematic code patterns.
SonarCloud can be enabled in the build pipeline for continuous inspection of the code base. The SonarCloud extension scans the code on every build and flags common problems (bugs, code smells, and security vulnerabilities) by analyzing the quality and security of the code base.
Microsoft Visual Studio Test Plans is an INCORRECT answer because Test Plans provide a way to group and manage the different types of tests an application team runs. They do not provide continuous inspection of the code base to flag anomalies.
Gradle wrapper scripts is an INCORRECT answer because the wrapper is a script that invokes a declared version of Gradle for the developers; it does not scan the code base for problematic patterns. https://docs.gradle.org/current/userguide/gradle_wrapper.html
The JavaScript task runner is an INCORRECT answer because it provides a way to compile and run JavaScript-based code; it does not scan the code base for problematic patterns.
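For illustration, with the SonarCloud extension installed, continuous inspection is typically wired into the build pipeline as a prepare/analyze/publish sequence; the service connection name, organization, and project key below are placeholders:
```yaml
# Minimal sketch of SonarCloud continuous inspection in a build pipeline.
# Assumes the SonarCloud extension is installed; 'SonarCloudConnection',
# the organization, and the project key are placeholders.
steps:
- task: SonarCloudPrepare@1
  inputs:
    SonarCloud: 'SonarCloudConnection'
    organization: 'contoso'
    scannerMode: 'MSBuild'
    projectKey: 'contoso-webapp'
- task: DotNetCoreCLI@2        # the build step that the analysis wraps around
  inputs:
    command: 'build'
- task: SonarCloudAnalyze@1    # runs the code analysis on the built code
- task: SonarCloudPublish@1    # publishes the quality gate result to the pipeline
  inputs:
    pollingTimeoutSec: '300'
```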
Question 4 of 55
4. Question
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals.
You need to recommend an integration strategy for the build process of a Java application. The solution must meet the following requirements:
The builds must access an on-premises dependency management system.
The build outputs must be stored as Server artifacts in Azure DevOps.
The source code must be stored in a Git repository in Azure DevOps.
Solution: Configure the build pipeline to use a Hosted VS 2017 agent pool. Include the Java Tool Installer task in the build pipeline.
Does this meet the goal?
Correct Answer(s): No
No is the CORRECT answer because a Microsoft-hosted agent from the Hosted VS 2017 pool has no network line of sight to an on-premises dependency management system, so adding the Java Tool Installer task does not satisfy the requirements. The builds need to run on an agent that can reach the on-premises system, such as a self-hosted agent.
For the deployment side of the lifecycle, tools such as Octopus Deploy can also help: application endpoints (the infrastructure, such as VMs, where the code runs) are managed from a central Octopus server with Tentacle agents on the endpoints, which streamlines the deployment process and the application lifecycle as a whole. https://octopus.com/docs/infrastructure/deployment-targets/azure
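By contrast, a configuration that could satisfy all three requirements is sketched below: a self-hosted agent pool with line of sight to the on-premises system, a Maven build, and the output published as a server artifact. The pool name and paths are assumptions:
```yaml
# Sketch only: the 'OnPremAgents' pool name is a placeholder for a self-hosted
# agent pool whose agents have network access to the on-premises dependency
# management system (something a Microsoft-hosted agent does not have).
pool:
  name: 'OnPremAgents'
steps:
- task: Maven@3
  inputs:
    mavenPomFile: 'pom.xml'
    goals: 'package'
- task: PublishBuildArtifacts@1        # stores the output as a server artifact in Azure DevOps
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    ArtifactName: 'drop'
    publishLocation: 'Container'       # 'Container' keeps the artifact in Azure Pipelines
```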
Question 5 of 55
5. Question
You manage build and release pipelines by using Azure DevOps. Your entire managed environment resides in Azure. You need to configure a service endpoint for accessing Azure Key Vault secrets.
The solution must meet the following requirements:
Ensure that the secrets are retrieved by Azure DevOps.
Avoid persisting credentials and tokens in Azure DevOps.
How should you configure the service endpoint for the following?
Service connection type
Correct Answer(s): Azure Resource Manager
Azure Resource Manager is the CORRECT answer because the requirement is to configure a service endpoint for accessing the Key Vault secrets without persisting credentials or tokens in Azure DevOps. The Azure Key Vault task fulfils this requirement, and it needs a service connection of type Azure Resource Manager.
This connection type is used because the pipeline must connect to the Azure subscription in which the Key Vault resides. A service principal (or managed identity) connects the Azure subscription and the Azure DevOps organization. https://docs.microsoft.com/en-us/azure/devops/pipelines/library/connect-to-azure?view=azure-devops#use-msi https://azuredevopslabs.com/labs/vstsextend/azurekeyvault/
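As a sketch of how this fits together, once an Azure Resource Manager service connection exists, the Azure Key Vault task can pull secrets into the pipeline at run time; the connection name, vault name, and secret name below are placeholders:
```yaml
# Minimal sketch: 'azure-rm-connection' (an Azure Resource Manager service
# connection backed by a service principal or managed identity) and the
# 'contoso-kv' vault name are placeholders.
steps:
- task: AzureKeyVault@2
  inputs:
    azureSubscription: 'azure-rm-connection'
    KeyVaultName: 'contoso-kv'
    SecretsFilter: '*'          # or a comma-separated list of specific secret names
# Each downloaded secret becomes a secret pipeline variable for this job,
# e.g. $(MySecretName), so no credential or token is stored in Azure DevOps.
```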
Question 6 of 55
6. Question
Your company has a project in Azure DevOps.
You plan to create a release pipeline that will deploy resources by using Azure Resource Manager templates. The templates will reference secrets stored in Azure Key Vault.
You need to recommend a solution for accessing the secrets stored in the key vault during deployments. The solution must use the principle of least privilege.
Which configuration should you use for the following?
‘Restrict access to the secrets in Key Vault’:
Correct Answer(s): an Azure Key Vault access policy
An Azure Key Vault access policy is the CORRECT answer. Access policies grant users, groups, and service principals specific permissions on Key Vault entities (keys, secrets, and certificates). Since the requirement is to restrict access to the secrets in the key vault while following the principle of least privilege, an access policy can give the deployment's service principal only the secret permissions it needs (for example, Get and List) and nothing more.
RBAC is an INCORRECT answer because role-based access control governs access to the key vault as an Azure resource rather than providing the secret-level permissions needed here. For example, a user granted the Key Vault Reader role can view the vault and its configuration but the role does not control which secrets a deployment may retrieve.
A personal access token (PAT) is an INCORRECT answer because PATs are used to authenticate to Azure DevOps in place of primary Microsoft credentials, for example when connecting from a third-party tool. A PAT is not an appropriate authentication method for accessing Key Vault entities, and as a shared credential it can be compromised if it falls into the wrong hands. https://docs.microsoft.com/en-us/azure/devops/integrate/get-started/authentication/authentication-guidance?view=azure-devops
Question 7 of 55
7. Question
Your company has a project in Azure DevOps.
You plan to create a release pipeline that will deploy resources by using Azure Resource Manager templates. The templates will reference secrets stored in Azure Key Vault.
You need to recommend a solution for accessing the secrets stored in the key vault during deployments. The solution must use the principle of least privilege.
Which configuration should you use for the following?
Restrict access to delete the key vault
Correct Answer(s): RBAC
RBAC is the CORRECT answer because the requirement is to prevent specific users or service principals from deleting the key vault itself. Role-based access control provides built-in and custom Azure roles that operate at the resource level, which is what is needed to restrict deletion of the vault.
For example, a principal that holds only the Key Vault Reader role can view the vault and its configuration but cannot edit or delete the vault; a custom role can also be created for exactly this purpose.
An Azure Key Vault access policy is an INCORRECT answer because access policies manage permissions for principals on Key Vault entities such as secrets, certificates, and keys. An access policy cannot allow or deny access to the key vault as a resource; a resource-level control (RBAC) is required for that.
Question 8 of 55
8. Question
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals.
You have an approval process that contains a condition. The condition requires that releases be approved by a team leader before they are deployed.
You have a policy stating that approvals must occur within eight hours.
You discover that deployment fails if the approvals take longer than two hours.
You need to ensure that the deployments only fail if the approvals take longer than eight hours.
Solution: From Pre-deployment conditions, you modify the Time between re-evaluation of gates option.
Does this meet the goal?
Correct Answer(s): No
No is the CORRECT answer. The Time between re-evaluation of gates option only controls how frequently the gate conditions are re-evaluated during a release; it does not change how long a pending approval may wait. If no gates are configured, the deployment simply waits for the configured delay and then proceeds.
Because this option (like the Delay before evaluation option) does not affect the allowed time for the approval process, the deployment would still fail after two hours if the approver has not responded. To ensure that deployments fail only when approvals take longer than eight hours, the Timeout setting for pre-deployment approvals should be modified instead.
Question 9 of 55
9. Question
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals.
You have an approval process that contains a condition. The condition requires that releases be approved by a team leader before they are deployed.
You have a policy stating that approvals must occur within eight hours.
You discover that deployment fails if the approvals take longer than two hours.
You need to ensure that the deployments only fail if the approvals take longer than eight hours.
Solution: From Pre-deployment conditions, you modify the Timeout setting for pre-deployment approvals.
Does this meet the goal?
Correct Answer(s): Yes
Yes is the CORRECT answer: the Timeout setting under Pre-deployment conditions controls how long a pending approval may wait before the deployment is rejected, so setting it to eight hours ensures the deployment fails only if approvals take longer than eight hours.
Question 10 of 55
10. Question
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals.
You plan to create a release pipeline that will deploy Azure resources by using Azure Resource Manager templates. The release pipeline will create the following resources:
– Two resource groups
– Four Azure virtual machines in one resource group
– Two Azure SQL databases in the other resource group
You need to recommend a solution to deploy the resources.
Solution: Create two standalone templates, each of which will deploy the resources in its respective group.
Does this meet the goal?
Correct Answer(s): Yes
Yes, this solution meets the goal. By creating two standalone templates, one per resource group, you can clearly define the resources and dependencies within each resource group. This approach provides a modular and maintainable solution for deploying the Azure resources.
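For example, the release pipeline could run one ARM deployment task per template, each scoped to its own resource group; the service connection, resource group names, location, and template paths below are placeholders:
```yaml
# Sketch: two standalone templates deployed to their respective resource groups.
# 'azure-rm-connection', the resource group names, and the template paths are placeholders.
steps:
- task: AzureResourceGroupDeployment@2
  inputs:
    azureSubscription: 'azure-rm-connection'
    action: 'Create Or Update Resource Group'
    resourceGroupName: 'rg-web-vms'
    location: 'West Europe'
    csmFile: 'templates/virtual-machines.json'   # template for the four VMs
- task: AzureResourceGroupDeployment@2
  inputs:
    azureSubscription: 'azure-rm-connection'
    action: 'Create Or Update Resource Group'
    resourceGroupName: 'rg-sql'
    location: 'West Europe'
    csmFile: 'templates/sql-databases.json'      # template for the two SQL databases
```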
Question 11 of 55
11. Question
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals.
You plan to create a release pipeline that will deploy Azure resources by using Azure Resource Manager templates. The release pipeline will create the following resources:
– Two resource groups
– Four Azure virtual machines in one resource group
– Two Azure SQL databases in the other resource group
You need to recommend a solution to deploy the resources.
Solution: Create a single standalone template that will deploy all the resources.
Question 12 of 55
12. Question
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals.
You have an Azure DevOps project.
Your build process creates several artifacts.
You need to deploy the artifacts to on-premises servers.
Solution: You deploy a Docker build to an on-premises server. You add a Download Build Artifacts task to the deployment pipeline.
Does this meet the goal?
Correct Answer(s): No
No is the CORRECT answer. The requirement is to deploy the artifacts produced by the build process (which runs in Azure Pipelines) to on-premises servers. A pipeline needs an agent to run on; that agent can come from a Microsoft-hosted pool or from a self-hosted pool, but a Docker build deployed to an on-premises server is not, by itself, an agent that Azure Pipelines can use. In addition, the Download Build Artifacts task only downloads the finished build artifacts to the agent; it does not publish or deploy them. Deploying a Docker build to an on-premises server and adding a Download Build Artifacts task to the deployment pipeline therefore does not meet the goal.
If containerized agents are wanted, the Azure Pipelines self-hosted agent can also run inside a Windows Server Core container (for Windows hosts) or an Ubuntu container (for Linux hosts) with Docker, including with outer orchestration such as Azure Container Instances (ACI).
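For reference, a deployment job that does reach an on-premises server runs on a self-hosted agent installed on that server and can then use the Download Build Artifacts task to fetch the build output locally; the pool and artifact names below are placeholders:
```yaml
# Sketch: a deployment job running on a self-hosted agent installed on the
# on-premises server. The 'OnPremServers' pool name and artifact name are placeholders.
pool:
  name: 'OnPremServers'
steps:
- task: DownloadBuildArtifacts@0
  inputs:
    buildType: 'current'
    downloadType: 'single'
    artifactName: 'drop'
    downloadPath: '$(System.ArtifactsDirectory)'
# ...followed by the steps that install or copy the artifact on this server...
```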
Question 13 of 55
13. Question
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals.
You have an Azure DevOps project.
Your build process creates several artifacts.
You need to deploy the artifacts to on-premises servers.
Solution: You deploy an Azure self-hosted agent to an on-premises server. You add a Copy and Publish Build Artifacts task to the deployment pipeline.
Does this meet the goal?
Correct Answer(s): Yes
Yes is the CORRECT answer because the goal is to deploy the completed build artifacts to an on-premises server, and this can be done with a self-hosted agent installed on that server together with the Copy and Publish Build Artifacts task.
Note that the combined Copy and Publish Build Artifacts task has since been deprecated; the same result is achieved today with two separate tasks in the pipeline, Copy Files followed by Publish Build Artifacts, as in the sketch below.
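A minimal sketch of the two replacement tasks, running on a self-hosted agent pool; the pool name, folder paths, and artifact name are placeholders:
```yaml
# Sketch: 'OnPremServers', the source folder, and the artifact name are placeholders.
pool:
  name: 'OnPremServers'          # self-hosted agent installed on the on-premises server
steps:
- task: CopyFiles@2
  inputs:
    SourceFolder: '$(Build.SourcesDirectory)/output'
    Contents: '**'
    TargetFolder: '$(Build.ArtifactStagingDirectory)'
- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    ArtifactName: 'drop'
```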
Question 14 of 55
14. Question
Your company hosts a web application in Azure. The company uses Azure Pipelines for the build and release management of the application.
Stakeholders report that the past few releases have negatively affected system performance.
You configure alerts in Azure Monitor.
You need to ensure that new releases are only deployed to production if the releases meet defined performance baseline criteria in the staging environment first.
What should you use to prevent the deployment of releases that fail to meet the performance baseline?
Correct Answer(s): a gate
A gate is the CORRECT answer because gates are configured as part of the release process to enforce conditions that must pass before a deployment can proceed. A gate (for example, Query Azure Monitor alerts) evaluates the configured rules against the staging environment, and the release only continues if the baseline criteria are met; if the check does not pass, the deployment fails. https://docs.microsoft.com/en-us/azure/devops/pipelines/release/approvals/gates?view=azure-devops
An Azure Scheduler job is an INCORRECT choice because the requirement is to prevent the deployment of releases that fail to meet the performance baseline, not to automate repetitive tasks with an Azure service. Moreover, Azure Scheduler has been retired in favor of Azure Logic Apps.
A trigger is an INCORRECT choice because triggers control when a pipeline or stage runs (for example, on a commit or on a schedule); they cannot block a release based on performance criteria. https://docs.microsoft.com/en-us/azure/devops/pipelines/build/triggers?view=azure-devops
An Azure Function is an INCORRECT answer because it is a serverless compute service for developers; by itself it is not the mechanism Azure Pipelines provides for checking whether the baseline performance criteria are met. https://azure.microsoft.com/en-in/services/functions/
Question 15 of 55
15. Question
You plan to share packages that you wrote, tested, validated, and deployed by using Azure Artifacts.
You need to release multiple builds of each package by using a single feed. The solution must limit the release of packages that are in development.
What should you use?
Correct Answer(s): upstream sources
Upstream sources is the CORRECT answer because the requirement is to consume packages from multiple sources and make them available through a single feed.
Upstream sources let you manage all of a product's dependencies in a single feed. The recommended approach is to publish all of the packages for a given product to that product's feed and to manage the product's dependencies from remote feeds in the same feed, via upstream sources.
A few benefits of this approach are:
Simplicity: the NuGet.config, .npmrc, or settings.xml contains exactly one feed.
Determinism: the feed resolves package requests in order, so rebuilding the same codebase at the same commit or changeset uses the same set of packages.
Provenance: the feed knows the provenance of packages it saved via upstream sources, so you can verify that you are using the original package and not a custom or malicious copy published to your feed.
Peace of mind: packages used via upstream sources are guaranteed to be saved in the feed on first use; if the upstream source is disabled or removed, or the remote feed goes down or deletes a package, you can continue to develop and build.
Views is an INCORRECT answer because a view is a way to share some packages while keeping others private: views filter the feed to the subset of packages that meet the criteria defined by that view. There are three types of views: Local, Prerelease, and Release. However, the requirement here is to release multiple builds of each package through a single feed, which is why this option is ruled out. https://docs.microsoft.com/en-us/azure/devops/artifacts/concepts/views?view=azure-devops#views-and-upstream-sources
Local symbols is an INCORRECT answer because symbols are debugging information stored in symbol files; they have nothing to do with releasing multiple builds through a single feed. To debug compiled executables, especially those compiled from native-code languages such as C++, you need symbol files that contain the debugging information; these files generally have the PDB (program database) extension.
Global symbols is an INCORRECT answer for the same reason: global symbols are also debugging information held in symbol files and are unrelated to releasing multiple builds through a single feed. https://docs.microsoft.com/en-us/azure/devops/artifacts/concepts/symbols?view=azure-devops
Question 16 of 55
16. Question
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals.
You manage a project in Azure DevOps.
You need to prevent the configuration of the project from changing over time.
Solution: Add a code coverage step to the build pipelines.
Does this meet the goal?
Correct Answer(s): No
No is the CORRECT answer because code coverage measures the proportion of a project's code that is actually exercised by tests such as unit tests. To increase code quality and guard effectively against bugs, the tests should cover a large proportion of the code, and reviewing the code coverage results helps identify code blocks that the tests do not cover, which reduces technical debt in the long run. https://docs.microsoft.com/en-us/azure/devops/pipelines/test/review-code-coverage-results?view=azure-devops
A code coverage step in the build pipeline, however, will not prevent the configuration of the project from changing over time. That requires a mechanism that periodically scans the current configuration against a standard configuration and reverts any unauthorized or unnecessary changes; this mechanism or practice is also known as Continuous Assurance.
Question 17 of 55
17. Question
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals.
You manage a project in Azure DevOps.
You need to prevent the configuration of the project from changing over time.
Solution: Implement Continuous Integration for the project.
Does this meet the goal?
Correct Answer(s): No
NO is the CORRECT answer here because we need to make sure that the configuration of our project stays intact and does not change over time. For that, we need a mechanism that keeps scanning the current state against a desired state to achieve continuous assurance and continuous compliance.
Continuous Integration can be a part of this whole exercise of ensuring that the desired configuration is always intact, but CI for the project alone cannot meet the requirement.
https://azsk.azurewebsites.net/04-Continous-Assurance/Readme.html
The basic idea behind Continuous Assurance is to set up the ability to check for “drift” from what is considered a secure snapshot of a system. Support for Continuous Assurance lets us treat security truly as a ‘state’ as opposed to a ‘point in time’ achievement.
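To illustrate the drift-checking idea in practice, here is a minimal Log Analytics sketch (an illustration only, not the AzSK Continuous Assurance implementation itself). It assumes the Activity Log of the relevant subscription is exported to a Log Analytics workspace, and it simply surfaces recent configuration write operations that a periodic drift scan could review:
// Illustration: list recent configuration-changing (write) operations from the Activity Log
AzureActivity
| where TimeGenerated > ago(24h)
| where OperationNameValue endswith "write"
| summarize ChangeCount = count() by ResourceGroup, Caller, OperationNameValue
| order by ChangeCount desc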
Question 18 of 55
18. Question
Your company uses ServiceNow for incident management.
You develop an application that runs on Azure.
The company needs to generate a ticket in ServiceNow when the application fails to authenticate.
Which Azure Log Analytics solution should you use?
Correct Answer(s): IT Service Management Connector (ITSM)
IT Service Management Connector (ITSM) is the CORRECT answer because the IT Service Management Connector (ITSMC) allows you to connect Azure and a supported IT Service Management (ITSM) product/service.
Using this, we can create work items (tickets in our case) in the ITSM tool (ServiceNow in our case), based on Azure alerts (metric alerts, Activity Log alerts, and Log Analytics alerts). Optionally, we can also sync the incident and change request data from the ITSM tool to an Azure Log Analytics workspace. https://docs.microsoft.com/en-us/azure/azure-monitor/platform/itsmc-overview
ITSMC supports connections with the following ITSM tools:
– ServiceNow
– System Center Service Manager
– Provance
– Cherwell
Application Insights Connector is the INCORRECT answer because it is used to connect the resources where our code is running, and Azure Application Insights, to Log Analytics to better analyze application-specific data and logs. It is not a solution that helps connect to ServiceNow, and that is why we rule this option out. https://docs.microsoft.com/en-us/connectors/applicationinsights/
Automation & Control is an INCORRECT answer because Automation & Control in Operations Management Suite (OMS) delivers unified capabilities to deploy, configure, and maintain your infrastructure and applications in Azure or any other cloud, including on-premises, across Windows Server and Linux.
Following are the features it provides:
– Integrate process automation and configuration for automated delivery of services using PowerShell or graphical authoring
– Combine change tracking with configuration management to identify and apply desired configurations and enable compliance
– Deliver orchestrated update management for both Windows Server and Linux from the cloud
Insight & Analytics is an INCORRECT choice because this solution deals with analysis and insights for Azure services. It does not help with integration with ServiceNow, and that is why we rule this option out.
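As an illustration of how the ticket could be triggered (this query is an assumption for the example, not part of the original answer), a log alert rule could be based on a query like the sketch below, with an action group that uses the ITSM action to create the ServiceNow ticket. It assumes a workspace-based Application Insights resource and treats HTTP 401 responses as the authentication-failure signal:
// Illustration: count failed authentication attempts (HTTP 401) reported by Application Insights
AppRequests
| where ResultCode == "401"
| summarize FailedAuthentications = count() by bin(TimeGenerated, 5m)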
Question 19 of 55
19. Question
Your company is building a new web application.
You plan to collect feedback from pilot users on the features being delivered.
All the pilot users have a corporate computer that has Google Chrome and the Microsoft Test & Feedback extension installed. The pilot users will test the application by using Chrome.
You need to identify which access levels are required to ensure that developers can request and gather feedback from the pilot users. The solution must use the principle of least privilege.
Which access levels in Azure DevOps should you identify for Developers?
Correct Answer(s): Basic
Basic is the CORRECT answer because raising a feedback request and gathering the resulting feedback requires at least Basic-level access. In our case, the developers should be able to raise requests and gather feedback from the pilot users, and for that they need Basic-level access in Azure DevOps.
Stakeholder is the INCORRECT answer because Stakeholder-level access would not be sufficient for developers to use the Azure DevOps features needed to raise feedback requests.
Question 20 of 55
20. Question
Your company is building a new web application.
You plan to collect feedback from pilot users on the features being delivered.
All the pilot users have a corporate computer that has Google Chrome and the Microsoft Test & Feedback extension installed. The pilot users will test the application by using Chrome.
You need to identify which access levels are required to ensure that developers can request and gather feedback from the pilot users. The solution must use the principle of least privilege.
Which access levels in Azure DevOps should you identify for Pilot users?
Correct Answer(s): Stakeholder
Stakeholder is the CORRECT answer here because the Stakeholder access level makes sure that users have access to only the limited set of services they require.
Once the feedback request is raised by the developers for a feature, the pilot users can then provide feedback either directly using the link in the feedback request email or by using the Test & Feedback extension in their browser (in our case, the Google Chrome extension).
Basic is the INCORRECT answer here because we are applying the principle of least privilege. If given Basic-level access, the pilot users would have unnecessary access to additional features that are not required for giving feedback. That is why we rule this option out.
Question 21 of 55
21. Question
You use Azure SQL Database Intelligent Insights and Azure Application Insights for monitoring.
You need to write ad-hoc queries against the monitoring data.
Which query language should you use?
Correct Answer(s): Azure Log Analytics
Azure Log Analytics is the CORRECT answer because our requirement is to write ad-hoc (unplanned, as they come) queries against the monitored data. Since we use Application Insights and Azure SQL Database Intelligent Insights to monitor the data, we can leverage Azure Log Analytics to analyze the data as requirements come up.
Azure Log Analytics uses KQL (Kusto Query Language) for writing queries, which is Azure native. Below is an example of KQL for getting the CPU percentage of SQL databases.
AzureMetrics
| where ResourceProvider == "MICROSOFT.SQL"
| where ResourceId contains "/DATABASES/"
| where MetricName == "cpu_percent"
| summarize AggregatedValue = max(Maximum) by bin(TimeGenerated, 5m)
| render timechart
PL/pgSQL is the INCORRECT answer because PL/pgSQL is a procedural programming language supported by the PostgreSQL ORDBMS. However, this language is not supported for writing ad-hoc queries against the monitoring data of live Azure resources. https://www.postgresql.org/docs/9.6/plpgsql.html
PL/SQL is the INCORRECT answer here because PL/SQL is Oracle Corporation’s procedural extension for SQL and the Oracle relational database. However, it is not supported for analyzing/monitoring resources running in the Azure cloud. https://www.oracle.com/database/technologies/appdev/plsql.html
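For the Application Insights side of this scenario, a similar ad-hoc KQL query can be written. The sketch below assumes a workspace-based Application Insights resource, so request telemetry lands in the AppRequests table:
// Illustration: failed vs. total requests over the last hour
AppRequests
| where TimeGenerated > ago(1h)
| summarize FailedRequests = countif(Success == false), TotalRequests = count() by bin(TimeGenerated, 5m)
| render timechart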
Question 22 of 55
22. Question
Your company has two virtual machines that run Linux in a third-party public cloud.
You plan to use the company’s Azure Automation State Configuration implementation to manage the two virtual machines and detect configuration drift.
You need to onboard the Linux virtual machines.
You install PowerShell Desired State Configuration (DSC) on the virtual machines, and then run register.py.
Which three actions should you perform next in sequence?
ACTIONS
1. From the virtual machines, run setdsclocalconfigurationmanager.py.
2. Create a DSC metaconfiguration.
3. Add the virtual machines as DSC nodes in Azure Automation.
4. Copy the metaconfiguration to the virtual machines.
5. Install Windows Management Framework 5.1 on the virtual machines.
Suggested Answer: 2-4-3
Step 1: Create a DSC metaconfiguration
Load up the DSC Configuration into Azure Automation.
Step 2: Copy the metaconfiguration to the virtual machines.
Linking the Node Configuration to the Linux Host
Step 3: Add the virtual machines as DSC nodes in Azure Automation. Go to DSC Nodes, select your node, and then click Assign node configuration. This step assigns the DSC configuration to the Linux machine.
Next up is to link the node configuration to the host. Go to the host and press the “Assign node…” button. Next, you can select your node configuration.
Question 23 of 55
23. Question
You are building an ASP.NET core application.
You plan to create an application utilisation baseline by capturing telemetry data.
You need to add code to the application to capture the telemetry data. The solution must minimise the costs of storing the telemetry data.
Which two actions should you perform?
Enabling adaptive sampling from the code of the application and adding Azure Application Insights telemetry are the two actions you should perform to capture telemetry data in an ASP.NET Core application while minimizing storage costs.
Enable adaptive sampling: Adaptive sampling dynamically reduces the telemetry volume sent to Application Insights based on pre-defined criteria. This helps in reducing the amount of data stored and lowers costs.
Add Azure Application Insights telemetry: Application Insights is a service by Azure that monitors your web applications. It automatically collects telemetry data including performance metrics, exceptions, and traces. You can configure adaptive sampling within the Application Insights service to further reduce storage costs.
Here are the steps to add Application Insights telemetry to your ASP.NET Core application:
Install the Microsoft.ApplicationInsights.AspNetCore NuGet package.
Add the following code to your Startup.cs file:
C#
public void ConfigureServices(IServiceCollection services)
{
    services.AddApplicationInsightsTelemetry();
    // ... other configuration code
}
Configure the Application Insights connection string in your appsettings.json file.
By following these steps, you can capture telemetry data from your ASP.NET Core application and minimize storage costs using adaptive sampling.
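As an optional follow-up (an illustration, not one of the required actions), you can verify how much adaptive sampling is reducing the stored telemetry volume by comparing the stored items with the items they represent. The sketch below uses the classic Application Insights table and column names (requests, itemCount), which are assumptions for the example:
// Illustration: estimate the percentage of request telemetry retained after sampling
requests
| where timestamp > ago(24h)
| summarize storedItems = count(), representedItems = sum(itemCount)
| extend retainedPercentage = 100.0 * storedItems / representedItems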
Question 24 of 55
24. Question
You are configuring the settings of a new Git repository in Azure Repos.
You need to ensure that pull requests in a branch meet the following criteria before they are merged:
– Committed code must compile successfully.
Which policy type should you configure for the above requirement?
Correct Answer(s): A check-in policy
A check-in policy is the CORRECT answer because a check-in policy adds a required check at the time of a user’s check-in, such as verifying that the last build was successful. Our check-in policy will ensure that the committed code (new additions) compiles successfully. https://docs.microsoft.com/en-us/azure/devops/repos/tfvc/add-check-policies?view=azure-devops
A build policy is an INCORRECT answer because a build policy defines when and how the build process is associated with the pull request process. A build policy checks the status of a build validation before a pull request can be completed.
A status policy is an INCORRECT answer because with this policy we add a status check condition for a third-party service, which enables us to integrate that service into our pull request process. The status reported by the third-party service can make the difference between a passed and a failed pull request (passing all the checks).
Question 25 of 55
25. Question
You are configuring the settings of a new Git repository in Azure Repos.
You need to ensure that pull requests in a branch meet the following criteria before they are merged:
– Pull requests must have a Quality Gate status of Passed in SonarCloud
Which policy type should you configure for the above requirement?
The correct policy type to configure for the requirement is a Status Policy.
Status Policies in Azure Repos allow you to define requirements for the status of external services before a pull request can be completed.
In this case, you want to ensure that the pull request has a “Passed” status in SonarCloud before it can be merged. SonarCloud provides a status check that can be integrated into your Azure Repos workflow. By configuring a Status Policy, you can enforce this requirement and prevent merges until the quality gate criteria are met.
Build Policies are used to enforce requirements related to build pipelines.
Check-in Policies are used to enforce rules on the code itself before it can be checked in, such as code style or formatting rules.
Therefore, a Status Policy is the most appropriate choice for ensuring that pull requests meet the SonarCloud Quality Gate requirement before they are merged in your Azure Repos.
Question 26 of 55
26. Question
Your team uses an agile development approach.
You need to recommend a branching strategy for the team’s Git repository.
The strategy must meet the following requirements:
– Provide the ability to work on multiple independent tasks in parallel.
– Ensure that checked-in code remains in a releasable state always.
– Ensure that new features can be abandoned at any time.
– Encourage experimentation.
What should you recommend?
Correct Answer(s): a single long-running branch with multiple short-lived feature branches
A single long-running branch with multiple short-lived feature branches is the CORRECT answer because we need a strategy that provides the ability to work on independent tasks (new features) in parallel while keeping successfully built, checked-in code releasable at any time.
Using this methodology, we will have one main branch, which is a long-running branch holding code that can be released at any time. Apart from the main branch, we will have multiple feature branches derived from the main branch so that multiple features can be worked on together. These feature branches are independent, short-lived branches that can be used to develop different features and to encourage experimentation.
Azure DevOps documentation of branching strategies: https://docs.microsoft.com/en-us/azure/devops/repos/git/git-branching-guidance?view=azure-devops
a single fork per team member is an INCORRECT answer because limiting each team member to one fork inhibits experimentation and limits the number of independent tasks that can run in parallel.
multiple long-running branches is an INCORRECT answer because we have to keep the checked-in code in a releasable state at all times. That is not practical with multiple long-running branches, because building, running, and managing all the branches of an application is difficult and prone to errors and backlogs.
a single long-running branch without forking is an INCORRECT answer because with only a single branch we cannot work on multiple independent tasks (different features) in parallel.
Question 27 of 55
27. Question
You store source code in a Git repository in Azure Repos. You use a third-party continuous integration (CI) tool to control builds.
What will Azure DevOps use to authenticate with the tool?
Correct Answer(s): a personal access token (PAT)
A personal access token (PAT) is the CORRECT answer because Azure DevOps prefers and uses personal access tokens whenever there is a need to connect with third-party services (a third-party CI tool in our case). PATs allow seamless access to cloud-hosted Git repositories (Azure DevOps and TFS) without entering credentials every time, and they are a secure way to connect, especially over HTTPS. https://docs.microsoft.com/en-us/azure/devops/repos/git/auth-overview?view=azure-devops
NTLM authentication is an INCORRECT answer because this type of authentication is used only for Windows authentication, on Windows Server 2012 or earlier versions. Azure DevOps does not use it to connect with third-party services. https://docs.microsoft.com/en-us/windows-server/security/kerberos/ntlm-overview
a Shared Access Signature (SAS) token is an INCORRECT answer because this authentication mechanism is used by storage accounts in Azure to allow client-side connections. When a request is made by a client, the SAS token signature is verified against one of the keys of the storage account to complete the authentication request. https://docs.microsoft.com/en-us/azure/storage/common/storage-sas-overview
Certificate authentication is an INCORRECT answer because that kind of authentication uses a client-server certificate hierarchy, which is not the case here. Certificate authentication is used when we want to verify the identity of a user or the device from which a request originates.
Question 28 of 55
28. Question
Your company develops a client banking application that processes a large volume of data.
Code quality is an ongoing issue for the company. Recently, the code quality has deteriorated because of an increase in time pressure on the development team.
You need to implement static code analysis.
During which phase should you use static code analysis?
Correct Answer(s): build
Build is the CORRECT answer because we have the requirement to enable static code analysis, which can be done using Microsoft’s analysis tools or third-party tools like SonarCloud as part of the build stage of a pipeline.
According to best practices, in a software development life cycle the static analysis for a feature is done during the implementation phase of the application. A build policy (validation of the build process) is placed on a PR so that, before it is completed, the new code is verified to run seamlessly with the existing code without any flaws (language-native or security flaws). This promotes the practice of test-driven development, which is widely followed in DevOps culture. https://devblogs.microsoft.com/premier-developer/microsoft-security-code-analysis/
Staging is an INCORRECT choice because the staging phase is the stage immediately before production. Generally, a staging environment is used to keep the latest copy of the production code. However, we need to include static analysis early in the SDLC, and that is why we rule this option out.
Production release is an INCORRECT answer because we need to implement static code analysis as part of our PR process, as early as possible in the development cycle. Production release is almost the final step, and that is why we rule this option out.
Integration testing is an INCORRECT answer because integration testing is done to test how different modules/packages combine to form and run the application after the build process. However, we need to include static code analysis as part of our early build phase, and that is why we rule this option out.
Question 29 of 55
29. Question
You use GitHub Enterprise Server as a source code repository
You create an Azure DevOps organization named Contoso.
In the Contoso organization, you create a project named Project1.
You need to link github.com commits, pull requests, and issues to the work items of Project1. The solution must use OAuth-based authentication.
Which three actions should you perform in sequence?
ACTIONS
1. From Project Settings in Azure DevOps, create a service hook subscription.
2. From Organisation settings in Azure DevOps, connect to Azure Active Directory (Azure AD).
3. From Organisation settings in Azure DevOps, add an OAuth configuration.
4. From Developer settings in GitHub Enterprise Server, generate a private key.
5. From Developer settings in GitHub Enterprise Server, register a new OAuth app.
6. From Project Settings in Azure DevOps, add a GitHub connection.
Correct Answer(s): 3,6,5
3,6,5 is the CORRECT answer because we need to link GitHub commits, pull requests, and issues to the work items of Project1 (Azure Boards) using the OAuth-based authentication.
To link the GitHub Enterprise Server with Azure Boards, so that we can check the status of builds/PRs/commits that happen in GitHub through the user stories in Azure Boards, we perform the following steps:
After logging in to the Azure DevOps portal, we move to the organization that holds Project1 and navigate to the organization settings. From the organization settings, we select OAuth configurations and add a new GitHub Enterprise OAuth configuration for our GitHub Enterprise Server.
After setting up the OAuth configuration, we need to add a GitHub connection for our Azure DevOps project, Project1. We select OAuth from the authentication options.
Once we have set up the OAuth configuration in Azure DevOps, we need to do the same in GitHub as well. To do this, we register a new OAuth app from Developer settings > OAuth apps > Register a new application in the GitHub Enterprise Server.
Question 30 of 55
30. Question
You are configuring Azure Pipelines for three projects in Azure DevOps as shown in the following table.
Which version control system should you recommend for Project1?
Correct Answer(s): Git in Azure Repos
Git in Azure Repos is the CORRECT answer because in our case we want to manage the future pipeline states using YAML files (a markup language). Azure DevOps provides .yml files to manage the configuration of pipelines in a project (build and release pipelines) by treating the configuration as another code file. Any pipeline in Azure DevOps consists of agents, stages, and tasks. Traditionally, all of these were created through a graphical interface or the Classic editor. Now, all of that configuration can be managed through a code-based approach in a single file. This makes the whole configuration of pipelines really easy to manage as part of the version control system itself.
This hierarchy is reflected directly in the structure of the YAML file.
Assembla Subversion is an INCORRECT answer because Assembla is cloud-based version control for Subversion (SVN). It does not provide YAML files for the configuration of pipelines.
Bitbucket Cloud is an INCORRECT answer because we want to use YAML-based configuration files for managing the build and release pipelines, and Bitbucket is just a cloud-based version control service provided by Atlassian.
GitHub Enterprise is an INCORRECT answer because it is an on-premises version of GitHub hosting that allows enterprises to host version control within their own corporate network. However, it does not provide YAML-based files to manage pipeline configuration.
Question 31 of 55
31. Question
You are configuring Azure Pipelines for three projects in Azure DevOps as shown in the following table.
Which version control system should you recommend for Project2?
Correct
Correct Answer(s): GitHub Enterprise
GitHub Enterprise is the CORRECT answer because we have a requirement to host the source code on a managed Windows server inside the company's corporate network. Because of the sensitivity of the code for some projects, the GitHub Enterprise solution lets the company host the source code on its own managed infrastructure while working in the same way as github.com. https://github.com/enterprise
Assembla Subversion is an INCORRECT answer because Assembla is a cloud-based version control service for Subversion (SVN) and is not an enterprise-grade on-premises hosting service for version control.
Bitbucket Cloud is an INCORRECT answer because we want to host the source code on-premises, but Bitbucket Cloud is a cloud-based version control service provided by Atlassian.
Git in Azure Repos is an INCORRECT answer because we need to host our source code on a Windows server that is part of the corporate network. However, Git in Azure Repos is a cloud-hosted service, which is why we rule this option out.
Question 32 of 55
32. Question
You are configuring Azure Pipelines for three projects in Azure DevOps as shown in the following table.
Which version control system should you recommend for Project3?
Correct
Correct Answer(s): Assembla Subversion
Assembla Subversion is the CORRECT answer because we have a requirement to configure a centralized version control system instead of a distributed one. Subversion (SVN) is a centralized version control system, unlike distributed version control systems such as Git.
In SVN, the entire version control history is stored on a central server. Whenever a change has to be made to a file, the developer checks that file out from the server to their local machine (a client-server model). In distributed version control, by contrast, developers have their own full copies of the repository, including all of its history, on their machines; they make changes locally and then create a pull request to merge the changes back. https://www.geeksforgeeks.org/centralized-vs-distributed-version-control-which-one-should-we-choose/ https://www.perforce.com/blog/vcs/what-svn
Bitbucket Cloud and Git in Azure Repos are INCORRECT answers because they are distributed version control systems, which is the exact opposite of our current requirement.
GitHub Enterprise is also an INCORRECT answer because, like the cloud version of GitHub, GitHub Enterprise is a distributed version control system, whereas we need a centralized one.
Question 33 of 55
33. Question
You manage the Git repository for a large enterprise application. You need to minimize the data size of the repository.
You need to complete the following command.
What should you select for Dropdown1?
Correct
Correct Answer(s): --aggressive
--aggressive is the CORRECT answer because we want to minimize the size of the repository, and for that we need an extensive clean-up of the loose objects within it. Using the --aggressive option with the git gc command makes it perform a more aggressive clean-up (repacking and re-computing deltas to find more optimal ones). Using this option can keep the operation running for a while.
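With this option selected, the first part of the completed command is simply the following (the second dropdown is covered in the next question):

  git gc --aggressive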
--no-prune is an INCORRECT answer because using this option with git gc tells the command not to prune (cut away or get rid of) any loose objects in the repository.
--auto is an INCORRECT answer because using this option makes git gc first check whether any clean-up is actually needed, based on configurable thresholds (see gc.auto in the configuration). It therefore does not guarantee that a clean-up of loose objects runs every time.
--force is an INCORRECT answer because using this option makes git gc run even if another instance of git gc is already running on the repository. The force parameter only ensures that the command runs; it does not make the clean-up more thorough.
Question 34 of 55
34. Question
You manage the Git repository for a large enterprise application.
You need to minimize the data size of the repository.
You need to complete the following command.
What should you select for Dropdown2?
Correct
Correct Answer(s): prune
prune is the CORRECT answer because we have a requirement to minimize the size of our repository. The git prune command removes unreachable (loose) objects from the repository's object database. Moreover, the argument shown in the command, --expire now, is an option of git prune and not of any of the other listed commands.
Once we run git gc --aggressive we perform an extensive garbage collection; we then run git prune to remove all unreachable objects. Together, this helps us get rid of the extra data in the repository.
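Putting the two answers together, the completed command from the question is most likely along these lines (the exact layout of the original exhibit is not shown here):

  git gc --aggressive
  git prune --expire now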
merge is an INCORRECT option because the git merge command is used to join development histories together, for example when merging a feature branch into the main branch. Moreover, there is no --expire now option for git merge. https://git-scm.com/docs/git-merge
rebase is an INCORRECT answer because git rebase is used to reapply commits from one branch on top of another, for example when rebasing a fork onto the main repository. Moreover, there is no --expire now option for git rebase. https://git-scm.com/docs/git-rebase
reset is an INCORRECT option because git reset is used to move the current branch head back to a specified commit and optionally undo the changes made since then. Also, there is no --expire now option for git reset, which is why we rule this option out. https://git-scm.com/docs/git-reset
Question 35 of 55
35. Question
Your company develops an app for iOS. All users of the app have devices that are members of a private distribution group in Microsoft Visual Studio App Center.
You plan to distribute a new release of the app.
You need to identify which certificate file you require to distribute the new release from App Center.
Which file type should you upload to the App Center?
Correct Answer(s): .p12
.pfx is an INCORRECT answer because we are using App Center to distribute an iOS application, and this setup needs a certificate with the .p12 extension.
Although there is no longer a technical difference between .pfx and .p12 (both are PKCS#12 containers and the extensions are interchangeable), we use a .p12 certificate because that is the format accepted by the Apple Developer portal as well as the Visual Studio App Center. https://docs.microsoft.com/en-us/appcenter/build/ios/code-signing#certificates-p12
.pvk is an INCORRECT answer because certificates with the .pvk extension are used internally by Microsoft to sign code for its products. The format is Microsoft proprietary and is not what App Center expects here.
.cer is an INCORRECT answer because we are using App Center to distribute an iOS application, which needs a .p12 certificate. Moreover, a .cer file contains only a public key rather than a public/private key pair.
Question 36 of 55
36. Question
You use Azure Pipelines to manage project builds and deployments.
You plan to use Azure Pipelines for Microsoft Teams to notify the legal team when a new build is ready for release.
You need to configure the Organization Settings in Azure DevOps to support Azure Pipelines for Microsoft Teams.
What should you turn on?
Correct
Correct Answer(s): Third-party application access via OAuth
Third-party application access via OAuth is the CORRECT answer because we have a requirement to connect Azure Pipelines and Microsoft Teams. Since the Azure Pipelines app for Teams uses OAuth-based authentication, the Third-party application access via OAuth policy must be enabled for the organization in the Security > Policies section of the Azure DevOps organization settings.
Alternate authentication credentials is an INCORRECT answer because this authentication method is not supported for integrating Microsoft Teams with the Azure Pipelines app; moreover, it is no longer supported by Azure DevOps at all.
SSH authentication is an INCORRECT answer because this option is enabled when we want to allow secure access to our repos from macOS- and Linux-based devices. It does not affect the integration of Microsoft Teams with Azure Pipelines.
Azure Active Directory Conditional Access Policy Validation is an INCORRECT answer because this option is used when we want to restrict access to the Azure DevOps organization from outside the corporate network or require MFA at sign-in. It has nothing to do with the integration of Teams and pipelines.
Question 37 of 55
37. Question
You are configuring an Azure DevOps deployment pipeline. The deployed application will authenticate to a web service by using a secret stored in an Azure key vault.
You need to use the secret in the deployment pipeline.
Which three actions should you perform in sequence?
ACTIONS
1. Create a service principal in Azure Active Directory (Azure AD).
2. Add an app registration in Azure Active Directory (Azure AD).
3. Configure an access policy in the key vault.
4. Export a certificate from the key vault.
5. Generate a self-signed certificate.
6. Add an Azure Resource Manager service connection to the pipeline.
Correct
Correct Answer(s): 1-3-6
1-3-6 is the CORRECT answer because we need a way for the tasks in the release pipeline to consume a secret value stored in a key vault.
To securely download and use key vault entities in a release pipeline, we use the AzureKeyVault task. To configure it we first need a service principal in Azure AD; this is what connects the Azure subscription to Azure Pipelines and is used for authentication. We then grant that service principal the correct permissions (access policies for keys, certificates, and secrets) in the key vault's access policies.
With the service principal created and given the appropriate permissions on the key vault and the subscription, we add an Azure Resource Manager service connection to the pipeline (from the Azure DevOps project settings).
After all of the above is configured, we can use the following task to consume secrets from the key vault within a pipeline:
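A minimal sketch of that step in YAML; the service connection name, vault name, and the $(MySecretName) variable are placeholders, not values from the question:

steps:
- task: AzureKeyVault@2
  inputs:
    azureSubscription: 'my-arm-service-connection'   # the ARM service connection added in step 6 (placeholder name)
    KeyVaultName: 'my-key-vault'                     # placeholder vault name
    SecretsFilter: '*'                               # download all secrets, or a comma-separated list of names
- script: echo "Downloaded secrets are exposed to later tasks as pipeline variables, e.g. $(MySecretName)"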
Question 38 of 55
38. Question
Your company has a project in Azure DevOps.
You need to ensure that when there are multiple builds pending deployment, only the most recent build is deployed.
What should you use?
Correct
Correct Answer(s): deployment queue settings
Deployment queue settings is the CORRECT answer because we need to ensure that, when multiple releases are queued for deployment, only the most recent one is deployed.
This is configured by enabling the deploy latest and cancel the others setting under the deployment queue settings in the pre-deployment conditions of a release stage. We can use this option when releases are created faster than they are deployed: it deploys the most recently built code and discards the remaining queued deployments.
Release gates is an INCORRECT answer because gates add a status check that a stage must pass before it can proceed. Gates are configured with a condition (for example, the successful completion of some other event) that must be met before the release moves on from the stage.
Deployment conditions is an INCORRECT answer because these are configured checks that must be met before a deployment can proceed. They include the pre-deployment and post-deployment conditions of any stage in a pipeline.
Question 39 of 55
39. Question
You have the Azure DevOps pipeline shown in the following exhibit.
From the information presented in the graphic, you need to identify the number of job(s) and task(s) in the pipeline.
Correct
Correct Answer(s): Job(s): 1 Task(s): 4
Job(s): 1 Task(s): 4 is the CORRECT answer because, in the exhibit, there is only one job, of type Run on agent, named Cloud Agent.
This job, Cloud Agent, contains four tasks: NuGet Restore, Compile Application, Copy Files, and Publish Artifacts.
Question 40 of 55
40. Question
Your company has a project in Azure DevOps for a new application. The application will be deployed to several Azure virtual machines that run Windows Server 2019.
You need to recommend a deployment strategy for the virtual machines. The strategy must meet the following requirements:
– Ensure that the virtual machines maintain a consistent configuration.
– Minimize administrative effort to configure the virtual machines.
What should you include in the recommendation?
Correct
Correct Answer(s): Azure Resource Manager templates and the PowerShell Desired State Configuration (DSC) extension for Windows
Azure Resource Manager templates and the PowerShell Desired State Configuration (DSC) extension for Windows is the CORRECT answer because we need a deployment strategy for the virtual machines that maintains a consistent configuration while minimizing the administrative effort of configuring each VM from inside the guest (or manually from the portal).
PowerShell Desired State Configuration is a declarative approach: we tell the VM what the expected configuration is, without focusing on how that configuration is applied. The latter part is taken care of by the DSC Local Configuration Manager.
An example DSC configuration, written declaratively, to make sure the Web-Server role (IIS) is present on the Windows VM:
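A minimal sketch of such a configuration; the configuration name and target node are illustrative:

Configuration WebServerConfig {
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node 'localhost' {
        # Declare the desired state: the IIS role must be installed
        WindowsFeature IIS {
            Name   = 'Web-Server'
            Ensure = 'Present'
        }
    }
}

WebServerConfig    # compiles the configuration into a .mof document that the LCM applies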
Deployment YAML and Azure pipeline deployment groups is an INCORRECT answer because deployment groups are a classic release pipeline concept and are not combined with a .yml file; YAML files are used for writing pipelines as code in a declarative format.
Deployment YAML and Azure pipeline stage templates is an INCORRECT answer because YAML-based stage templates are used to define pipelines as code so that the pipeline configuration can be stored and managed in source control; they do not configure the guest operating system of the virtual machines.
Azure Resource Manager templates and the Custom Script Extension for Windows is an INCORRECT answer because this combination is used to run scripts (for ad-hoc tasks or one-off configuration) inside a Windows VM without logging in to the server. It cannot be leveraged to maintain a consistent configuration over longer periods of time; for that, we leverage PowerShell DSC.
Question 41 of 55
41. Question
You are defining release strategies for two applications as shown in the following table.
Which release strategy should you use for each application?
Correct Answer(s): For App1: Canary deployment; for App2: Blue/Green deployment
For App1: Canary deployment and for App2: Blue/Green deployment is the CORRECT answer because for App1 we need to release the application to only a few testers (a small group of early users) to try out new features, and for App2 we need a solution that deploys quickly and provides a way to roll back to the previous version if necessary.
Canary deployment follows the progressive exposure release pattern. We use this deployment type when we want to minimize the blast radius of every release: new features are first made available to a small group, and the number of users is then increased gradually over time. https://docs.microsoft.com/en-us/azure/devops/pipelines/process/deployment-jobs?view=azure-devops
Blue/green deployment runs two environments of the same application in parallel, referred to as the blue environment and the green environment. One environment runs the new release and the other runs the previous stable version. If any issues appear in the environment running the new features, traffic can be switched smoothly back to the environment running the previous stable release, with minimal effort and no disruption to end users. https://azure.microsoft.com/en-in/blog/blue-green-deployments-using-azure-traffic-manager/
Rolling deployment is an INCORRECT answer because this strategy gradually replaces instances running the previous version with instances running the new one, which is not what either App1 or App2 requires, and that is why we rule this option out.
A/B testing is an INCORRECT answer because we use this strategy when we want to compare two variants against each other, such as two web pages or releases. This release strategy is not required for either App1 or App2, which is why we rule this option out.
Question 42 of 55
42. Question
As part of your application build process, you need to deploy a group of resources to Azure by using an Azure Resource Manager template located on GitHub.
Which three actions should you perform in sequence?
ACTIONS
1. Create a release pipeline.
2. Set the template parameters.
3. Add an Azure Resource Group Deployment task.
4. Create a package.
5. Create a job agent.
Correct
Correct Answer(s): 1-3-2
(1) Create a release pipeline: This is the starting point for defining the deployment process. A release pipeline in Azure DevOps allows you to automate the deployment steps for your application.
(3) Add an Azure Resource Group Deployment task: This task specifically integrates with Azure Resource Manager (ARM) and allows you to deploy resources based on an ARM template. Within this task, you can define the location of your ARM template on GitHub (using a URL) and set any necessary parameters for the template.
(2) Set the template parameters: The ARM template might require specific parameters to define things like resource names, locations, or sizes. In the Azure Resource Group Deployment task, you can configure these parameters with their corresponding values.
Here’s why the other actions are not part of the ideal sequence:
Create a package (Action 4): This isn’t necessary for deploying resources with an ARM template. The ARM template itself defines the resources to be deployed.
Create a job agent (Action 5): Job agents are used to run tasks in a specific environment. While you might need a job agent for your pipeline execution, creating it isn’t directly related to deploying resources with an ARM template. The pipeline itself can determine the appropriate agent based on your configuration.
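A hedged YAML sketch of the Azure Resource Group Deployment task pulling the template from a GitHub URL; the service connection, resource group, location, template URL, and override parameter are all placeholders:

steps:
- task: AzureResourceGroupDeployment@2
  inputs:
    azureSubscription: 'my-arm-service-connection'   # placeholder ARM service connection
    action: 'Create Or Update Resource Group'
    resourceGroupName: 'rg-demo'
    location: 'East US'
    templateLocation: 'URL of the file'
    csmFileLink: 'https://raw.githubusercontent.com/contoso/templates/main/azuredeploy.json'   # ARM template hosted on GitHub (placeholder URL)
    overrideParameters: '-siteName demo-site'        # template parameters are set here (placeholder)
    deploymentMode: 'Incremental'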
Question 43 of 55
43. Question
Your development team is building a new web solution by using the Microsoft Visual Studio integrated development environment (IDE).
You need to make a custom package available to all the developers. The package must be managed centrally, and the latest version must be available for consumption in Visual Studio automatically.
Which three actions should you perform?
Correct
Correct Answer(s):
Add the package URL to the NuGet Package Manager settings in Visual Studio,
Create a new feed in Azure Artifacts, and
Publish the package to a feed.
Create a new feed in Azure Artifacts, Publish the package to a feed, and Add the package URL to the NuGet Package Manager settings in Visual Studio are the three actions, in that order, that we perform to make the latest version of a centrally managed custom package available to all developers through Visual Studio.
Our solution needs to be managed centrally and must make the latest version of the package available to all developers automatically. To achieve this we use an Azure Artifacts feed to host our custom packages and publish the package to that feed.
Then, for the latest version of the package to be available automatically in Visual Studio, we add the feed's package URL as a source in the NuGet Package Manager settings in Visual Studio. Whenever a new version of the package is published to the feed, the developers will have access to it.
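The same package source can also be registered from the command line; a hedged sketch using the dotnet CLI, where {organization}, {feedName}, and the source name are placeholders:

  dotnet nuget add source "https://pkgs.dev.azure.com/{organization}/_packaging/{feedName}/nuget/v3/index.json" --name MyCompanyFeed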
Upload a package to a Git repository is an INCORRECT answer because we need the latest version of the package to be available for consumption in Visual Studio automatically, and uploading a package to a Git repository won't fulfill that requirement.
Create a Git repository in Azure Repos is an INCORRECT answer because just creating a new repository does not solve the problem of managing and distributing the releases of our custom package.
Question 44 of 55
44. Question
You have an existing build pipeline in Azure pipelines.
You need to use incremental builds without purging the environment between pipeline executions.
What should you use?
Correct
Correct Answer(s): a self-hosted agent
A self-hosted agent is the CORRECT answer because we have a requirement to run incremental builds. For an incremental build to work, we need to run the build pipeline on a self-hosted agent, whose working directory persists between pipeline executions.
When a build runs on a Microsoft-hosted agent, the pipeline always gets a clean host environment: after every build the agent is cleaned up and any build data is deleted. That is why the output of a previous build on a Microsoft-hosted agent cannot be reused to build the code the next time.
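On a self-hosted agent, by contrast, the working directory survives between runs. A hedged YAML sketch of a pipeline that keeps the previous outputs instead of purging them; the pool name and build step are placeholders:

pool:
  name: 'Default'            # a self-hosted agent pool (placeholder name)

steps:
- checkout: self
  clean: false               # do not purge the working directory between runs
- script: echo "Build incrementally, reusing outputs left by the previous run"   # placeholder build step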
Microsoft-hosted parallel jobs is an INCORRECT choice because this feature is used when we want to run multiple agent jobs in parallel instead of waiting for queued jobs to finish. In our case, however, we need a solution for running incremental builds, which is why we rule this option out. https://docs.microsoft.com/en-us/azure/devops/pipelines/licensing/concurrent-jobs?view=azure-devops&tabs=ms-hosted
Question 45 of 55
45. Question
You have an Azure DevOps project that uses many package feeds.
You need to simplify the project by using a single feed that stores packages produced by your company and packages consumed from remote feeds. The solution must support public feeds and authenticated feeds.
What should you enable in DevOps?
Correct
Correct Answer(s): upstream sources
Upstream sources is the CORRECT answer because we need a single feed that stores both the organization's own packages (authenticated feeds, such as another of the organization's package feeds) and packages consumed from remote feeds (public feeds such as NuGet.org). Upstream sources let one Azure Artifacts feed proxy and cache packages from both public and authenticated sources.
Views in Azure Artifacts is an INCORRECT answer because views provide a way to share some packages while keeping others private.
Views filter the feed to a subset of packages that meet criteria defined by that view.
There are three types of views: Local, Prerelease, and Release. Local is the default view and holds all packages published to the feed as well as packages saved from upstream sources. https://docs.microsoft.com/en-us/azure/devops/artifacts/concepts/views?view=azure-devops#views-and-upstream-sources
A symbol server is an INCORRECT answer because a symbol server helps debuggers automatically retrieve the correct symbol files without knowing specifics such as product names, build numbers, or package names.
To debug compiled executables, especially executables compiled from native-code languages like C++, we need symbol files that contain debugging information. These files generally have the PDB (program database) extension. https://docs.microsoft.com/en-us/azure/devops/artifacts/concepts/symbols?view=azure-devops
Universal Packages is an INCORRECT answer because this service is used when we need to package a set of files (a folder or directory) and publish them as a single package to an Azure Artifacts feed. By default, the Universal Packages task publishes all files in the staging directory of the build.
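For illustration only (the project and feed names are assumptions), once upstream sources are enabled on the feed, a build pipeline can restore every package, internal and public, from that one feed, for example with the NuGet restore task:

# Sketch: restore all packages from a single Azure Artifacts feed
# that has upstream sources (e.g. nuget.org) enabled.
steps:
- task: NuGetCommand@2
  inputs:
    command: 'restore'
    restoreSolution: '**/*.sln'
    feedsToUse: 'select'
    vstsFeed: 'MyProject/MyConsolidatedFeed'   # placeholder project/feed name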
Question 46 of 55
46. Question
You need to increase the security of your team's development process.
Which type of security tool should you recommend for the Pull request stage of the development process?
Correct
Correct Answer(s): Static code analysis
Static code analysis is the CORRECT answer because scanning the code at an early stage, as part of the pull request, promotes well-structured, tested, and verified code from the start.
Using tools that run security scans on the static code helps drive continuous quality of the source code throughout the application lifecycle, from the pull request stage all the way to the release stage.
A few tools/extensions that can be included in the build policy triggered by pull requests in Azure Repos are:
– Microsoft Security Code Analysis extension
– SonarCloud
– WhiteSource Bolt
Penetration testing is the INCORRECT answer because this type of testing is done once the application is running, not at the pull request stage.
Threat modelling is the INCORRECT answer because this technique, as its name suggests, helps security teams and architects identify threats, attacks, vulnerabilities, design flaws, and counter-measures that could affect application security. It is used in the design, monitoring, or auditing stages of an application.
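As a hedged sketch only (the service connection, organization, and project key are placeholders, and the exact task names and inputs depend on the version of the SonarCloud extension you install), a pipeline used as a branch-policy build validation can run static analysis on every pull request:

# Sketch: static analysis in the build that branch policy runs for each PR.
steps:
- task: SonarCloudPrepare@1
  inputs:
    SonarCloud: 'MySonarCloudConnection'   # placeholder service connection
    organization: 'my-org'
    scannerMode: 'MSBuild'
    projectKey: 'my-project-key'
- script: dotnet build MySolution.sln --configuration Release
  displayName: Build (the analyzer hooks into this build)
- task: SonarCloudAnalyze@1
- task: SonarCloudPublish@1
  inputs:
    pollingTimeoutSec: '300'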
Question 47 of 55
47. Question
You need to increase the security of your team's development process.
Which type of security tool should you recommend for the Continuous integration stage of the development process?
Correct
Correct Answer(s): Static code analysis
Static code analysis is the CORRECT answer because scanning the code at an early stage, as part of continuous integration, helps developers produce well-structured, tested, and verified code from the start.
To reinforce code quality from the beginning, we use tools that run security scans as part of the build pipelines (CI). Every time there is a merge request to the master branch, a build is triggered that drives continuous quality and performs status checks.
Penetration testing is an INCORRECT answer because this type of testing is done once the application is running, not during the continuous integration phase.
Threat modelling is the INCORRECT answer because this technique, as its name suggests, helps security teams and architects identify threats, attacks, vulnerabilities, design flaws, and counter-measures that could affect application security. It is used in the design, monitoring, or auditing stages of an application.
Question 48 of 55
48. Question
You have an Azure DevOps project named Project1 and an Azure subscription named Sub1.
You need to prevent releases from being deployed unless the releases comply with the Azure Policy rules assigned to Sub1.
What should you do in the release pipeline of Project1?
Correct
Correct Answer(s): Add a deployment gate
Add a deployment gate is the CORRECT answer because a gate is configured as part of a release process to set baselines that deployments must meet to succeed. A gate in a pipeline checks the release against a set of rules and only lets it proceed if it meets the configured baseline (in our case, the release must comply with the Azure Policy rules assigned to the Sub1 subscription). If the release does not pass the gate check, it fails.
Gates allow automatic collection of health signals from different services (Azure Policy compliance at the subscription scope in our case) and then promote the release when all the signals are successful at the same time, or stop the deployment on timeout. https://docs.microsoft.com/en-us/azure/devops/pipelines/release/approvals/gates?view=azure-devops
Modify the Deployment queue settings is the INCORRECT answer because this setting defines the behaviour of the deployment queue when multiple releases are queued for deployment.
Create a pipeline variable is an INCORRECT answer because doing this just creates a variable that can be used by any task in the pipeline, e.g. creating a pipeline variable $(ENV) and giving it different values in different tasks.
Configure a deployment trigger is an INCORRECT answer because this setting is used when we want to set up a trigger (manual, after a stage, or after a build) to kick off a specific deployment stage.
Question 49 of 55
49. Question
During a code review, you discover quality issues in a Java application.
You need to recommend a solution to detect quality issues including unused variables and empty catch blocks.
What should you recommend?
Correct
Correct Answer(s): In a Maven build task, select Run PMD
In a Maven build task, select Run PMD is the CORRECT answer because PMD is a static source-code analyzer that detects common Java quality issues such as unused variables, empty catch blocks, unnecessary object creation, and dead code. The Maven build task in Azure Pipelines can run PMD as part of the build and publish the results.
In an Xcode build task, select Use xcpretty from advanced is INCORRECT because xcpretty is a formatter for Xcode build output and does not check for unused variables or empty catch blocks. The Xcode build task is used for Xcode workspaces on macOS. https://docs.microsoft.com/en-us/azure/devops/pipelines/tasks/build/xcode
In a Grunt build task, select Enabled from Control Options is INCORRECT because merely checking Enabled on a Grunt task won't make it scan code; it only ensures the task runs as part of the build pipeline. Moreover, Grunt is a JavaScript task runner used to automate frequent tasks such as unit testing and compilation, so it won't serve our purpose here. https://docs.microsoft.com/en-us/azure/devops/pipelines/tasks/build/grunt https://gruntjs.com/getting-started
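A minimal sketch of that Maven task in YAML (the POM path and goal are assumptions; the Run PMD checkbox in the classic editor corresponds to the pmdRunAnalysis input):

# Sketch: Maven build with PMD static analysis enabled.
steps:
- task: Maven@3
  inputs:
    mavenPomFile: 'pom.xml'                  # placeholder path to the project POM
    goals: 'package'
    publishJUnitResults: true
    testResultsFiles: '**/surefire-reports/TEST-*.xml'
    pmdRunAnalysis: true                     # flags unused variables, empty catch blocks, etc.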
Question 50 of 55
50. Question
You have a project in Azure DevOps that uses packages from multiple public feeds. Some of the feeds are unreliable.
You need to consolidate the packages into a single feed.
Which three actions should you perform in sequence?
ACTIONS
1. Create an Azure Artifacts feed that uses upstream sources.
2. Create a Microsoft Visual Studio project that includes all the packages.
3. Create a NuGet package.
4. Run an initial package restore.
5. Modify the configuration files to reference the Azure Artifacts feed.
6. Create an npm package.
Correct
The three actions you should perform in sequence to consolidate packages into a single, reliable Azure Artifacts feed are: 1-5-4
(1) Create an Azure Artifacts feed that uses upstream sources: This is the foundation for consolidating packages. The Azure Artifacts feed acts as a central repository that retrieves packages from the unreliable public feeds.
(5) Modify configuration files to reference the feed: Update your project’s configuration files to point to the newly created Azure Artifacts feed instead of the public feeds. This ensures your project downloads packages from your reliable internal source.
(4) Run an initial package restore: After creating the feed and updating configurations, run a package restore to populate your Azure Artifacts feed with all the necessary packages.
These steps effectively consolidate packages and improve your project’s build and deployment stability by eliminating reliance on unreliable public feeds.
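For step (5), a minimal nuget.config that references only the new Azure Artifacts feed might look like the sketch below (organization, project, and feed names are placeholders):

<?xml version="1.0" encoding="utf-8"?>
<!-- Sketch: use the single Azure Artifacts feed (with upstream sources)
     instead of several public feeds. All names are placeholders. -->
<configuration>
  <packageSources>
    <clear />
    <add key="MyConsolidatedFeed"
         value="https://pkgs.dev.azure.com/MyOrg/MyProject/_packaging/MyConsolidatedFeed/nuget/v3/index.json" />
  </packageSources>
</configuration>

The initial restore in step (4) then pulls every package through this feed, saving copies from the upstream sources into it.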
Question 51 of 55
51. Question
You use Azure Artifacts to host NuGet packages that you create.
You need to make one of the packages available to anonymous users outside your organization. The solution must minimize the number of publication points.
What should you do?
Correct
Correct Answer(s): Create a new feed for the package
Create a new feed for the package is the CORRECT answer because a dedicated feed lets you control access to just that package. Azure Artifacts supports public feeds with anonymous access, so publishing the package to its own feed makes it available to users outside the organization while keeping a single publication point in Azure Artifacts.
Changing the feed URL is INCORRECT because merely changing the feed URL won't change the access level of the package.
Promoting the package to a release view is an INCORRECT answer because it won't satisfy the requirement: feed views are a way to share working and final versions, or subsets of packages, within the organization or the Azure Active Directory tenant. https://docs.microsoft.com/en-us/azure/devops/artifacts/concepts/views
Publish the package to a public NuGet repository is INCORRECT because we need to make the package available to anonymous users outside the organization while minimizing the number of publication points, which is possible with Azure Artifacts package feeds themselves. https://azure.microsoft.com/en-in/blog/deep-dive-into-azure-artifacts/
Question 52 of 55
52. Question
You have a DevOps organization named Contoso.
You have 10 Azure virtual machines that run Windows Server 2019. The virtual machines host an application that you build and deploy by using Azure Pipelines.
Each virtual machine has the Web Server (IIS) role installed and configured.
You need to ensure that the webserver configurations on the virtual machines are maintained automatically. The solution must provide centralized management of the configuration settings and minimize management overhead.
Which four actions should you perform in sequence?
ACTIONS
1. Compile the Desired State Configuration (DSC) configuration.
2. Onboard the virtual machines to the Azure Automation account.
3. Import a Desired State Configuration (DSC) configuration into the Azure Automation account.
4. Create an Azure Automation account.
5. Install the custom Desired State Configuration (DSC) extension on the virtual machines.
Correct
Correct Answer(s): 4-3-1-2
4-3-1-2 is the CORRECT answer because we need the web server configurations on the Azure virtual machines to be maintained automatically, with centralized management of the configuration settings and minimal operational overhead.
We use the Azure Automation State Configuration service to achieve this and reduce manual intervention.
To onboard the web server virtual machines running IIS to Azure Automation State Configuration, we perform the following steps in order (see the PowerShell sketch after this list):
1. As a prerequisite, we need an Azure Automation account to host the whole workflow.
2. Once we have the Automation account, we create a PowerShell DSC configuration script and import it into the Automation account.
3. With the configuration imported, we compile it before enabling it for any virtual machines.
4. Finally, after the configuration has been compiled, we onboard the virtual machines to the Azure Automation account as DSC nodes.
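A hedged PowerShell sketch of these steps using the Az.Automation cmdlets (the resource group, account, and VM names are placeholders, and the DSC configuration simply ensures the Web Server role is present):

# WebServerConfig.ps1 - DSC configuration that keeps IIS installed.
Configuration WebServerConfig {
    Import-DscResource -ModuleName PSDesiredStateConfiguration
    Node 'localhost' {
        WindowsFeature IIS {
            Ensure = 'Present'
            Name   = 'Web-Server'
        }
    }
}

# Import the configuration into the Automation account (placeholder names).
Import-AzAutomationDscConfiguration -SourcePath '.\WebServerConfig.ps1' `
    -ResourceGroupName 'rg-automation' -AutomationAccountName 'aa-contoso' -Published

# Compile it so a node configuration is produced.
Start-AzAutomationDscCompilationJob -ConfigurationName 'WebServerConfig' `
    -ResourceGroupName 'rg-automation' -AutomationAccountName 'aa-contoso'

# Onboard a VM as a DSC node that pulls and auto-corrects that configuration.
Register-AzAutomationDscNode -AzureVMName 'vm-web-01' `
    -ResourceGroupName 'rg-automation' -AutomationAccountName 'aa-contoso' `
    -NodeConfigurationName 'WebServerConfig.localhost' -ConfigurationMode 'ApplyAndAutocorrect'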
Question 53 of 55
53. Question
You have a build pipeline in Azure Pipelines that occasionally fails.
You discover that a test measuring the response time of an API endpoint causes the failures.
You need to prevent the build pipeline from failing due to the test.
Which two actions should you perform?
Correct
Correct Answer(s): Manually mark the test as flaky and Clear Flaky tests included in test pass percentage
Clear Flaky tests included in test pass percentage and Manually mark the test as flaky are the two actions we'll perform to prevent the build pipeline from failing because of this test.
Flaky tests inhibit a developer's ability to work productively and to troubleshoot real bugs, because they cause build pipeline failures on their own. These tests produce different outcomes, pass or fail, even when there are no changes to the source code or the execution environment.
The flaky test management capabilities of Azure Pipelines help manage these tests more efficiently by reducing the chance of a build failing because of them.
Set Flaky test detection to Off is an INCORRECT choice because this would disable the Azure Pipelines auto-detection feature for flaky tests. That is the exact opposite of what is required, so we rule this option out.
Question 54 of 55
54. Question
You have an Azure Resource Manager template that deploys a multi-tier application.
You need to prevent the user who performs a deployment from viewing the account credentials and connection strings used by the application.
What should you use?
Correct
Correct Answer(s): Azure Key Vault
Azure Key Vault is the CORRECT answer because we need to prevent the user who performs a deployment with Azure Resource Manager (ARM) templates from viewing the account credentials and connection strings used by the application.
Azure Key Vault is secure cloud storage for secrets, certificates, keys, and other credentials. The ARM template parameters file can reference Key Vault secrets so that the values are retrieved from the vault at deployment time without the credentials being exposed anywhere in the process. Please see the snippet below for an idea of how to reference a Key Vault secret from an ARM template parameters file:
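(Illustrative sketch only: the parameter name, subscription ID, resource group, vault, and secret name below are placeholders, not values from this scenario.)

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "sqlAdminPassword": {
      "reference": {
        "keyVault": {
          "id": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.KeyVault/vaults/<vault-name>"
        },
        "secretName": "sqlAdminPassword"
      }
    }
  }
}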
An Azure Resource Manager parameter file is an INCORRECT answer because the parameters file alone cannot fulfill the requirement; it is only the mechanism for referencing a particular secret in a particular key vault. The Key Vault itself is what holds the secret value.
A Web.config file is an INCORRECT answer because this file holds the configuration settings of a web application and does not help keep the secrets hidden in this case.
An appsettings.json file is an INCORRECT answer because this file holds application-specific configuration data such as database connection strings and application-scoped global settings.
An Azure storage table is an INCORRECT answer because this is a storage service for NoSQL data and has nothing to do with passing credentials securely to an ARM template deployment.
Question 55 of 55
55. Question
You have an Azure DevOps project that contains a build pipeline. The build pipeline uses approximately 50 open source libraries.
You need to ensure that the project can be scanned for known security vulnerabilities in the open-source libraries.
What should you choose for Dropdown1?
Correct
Correct Answer(s): A build task
A build task is the CORRECT answer because our build pipeline uses about 50 open-source libraries, and we need the project to be scanned for known security vulnerabilities in those libraries.
To drive continuous quality from the project's build pipeline, we use tools that scan the codebase and its dependencies. To enable any third-party scanning tool, we include its build task in the build pipeline and, if necessary, add the pipeline to the build policy as well. https://docs.microsoft.com/en-us/azure/devops/pipelines/process/tasks?view=azure-devops&tabs=yaml
A deployment task is the INCORRECT answer because these tasks are usually part of a release pipeline that deploys the built code to a specified endpoint.
An artifacts repository is the INCORRECT answer because a repository hosts the artifacts of a particular code base and has nothing to do with scanning open-source libraries.
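As an illustrative sketch only, the scanning step is simply one more task in the build YAML. The task name below is a hypothetical placeholder; substitute the build task installed by whichever scanning extension you choose (for example Mend Bolt / WhiteSource Bolt or OWASP Dependency-Check from the marketplace).

# Sketch: normal build followed by a dependency-vulnerability scan task.
trigger:
- master

steps:
- script: dotnet build MySolution.sln --configuration Release
  displayName: Build (restores the ~50 open-source libraries)
- task: MyDependencyScanTask@1        # hypothetical placeholder task name
  displayName: Scan open-source libraries for known vulnerabilities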