AZ-400 Practice Test 13
Question 1 of 55
1. Question
Your company is concerned that when developers introduce open source libraries, it creates licensing compliance issues.
You need to add an automated process to the build pipeline to detect when common open source libraries are added to the code base.
What should you use?
Explanation:
The Black Duck by Synopsys plugin for TFS and Azure DevOps automatically identifies open source components and their security vulnerabilities during your application build process. The integration lets you enforce policies configured in Black Duck, receive alerts, and fail builds when policy violations occur. https://marketplace.visualstudio.com/items?itemName=black-duck-software.detect-for-tfs
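A minimal sketch of how such a scan might run as a step in a YAML build pipeline. The Detect script URL, the --blackduck.* and --detect.* property names, and the variable names are illustrative assumptions based on the vendor's documented invocation pattern, not details given in the question:

steps:
# Build first so the scanner can inspect the produced artifacts.
- script: dotnet build --configuration Release
  displayName: Build application
# Run a Synopsys Detect scan and fail the build on policy violations.
- bash: |
    bash <(curl -s -L https://detect.synopsys.com/detect.sh) \
      --blackduck.url=$(BLACKDUCK_URL) \
      --blackduck.api.token=$(BLACKDUCK_TOKEN) \
      --detect.policy.check.fail.on.severities=ALL
  displayName: Black Duck open source scan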
Question 2 of 55
2. Question
You need to find and isolate shared code. The shared code will be maintained in a series of packages. Which three actions should you perform in sequence?
Explanation:
The correct sequence of actions to find and isolate shared code, while maintaining it in a series of packages, is:
1. Create a dependency graph for the application.
2. Group the related components.
3. Assign ownership to each component group.
Create a dependency graph for the application: this provides a visual representation of how the different parts of the code interact. By analyzing the dependencies, you can identify components that are used by multiple parts of the application, indicating candidates for shared code.
Group the related components: once you have identified potential shared components, group them based on their functionality and dependencies. This organizes the shared code into logical, maintainable units.
Assign ownership to each component group: clearly define which team or individual is responsible for maintaining each group of shared components. This ensures accountability and facilitates efficient code management.
The other options are incorrect:
Identifying the most common language: while important for code consistency, this should generally be done after grouping components and assigning ownership.
Rewriting components: this is a major undertaking that should only follow careful consideration and planning, not be the initial step.
By following these steps, you can effectively identify, isolate, and manage shared code within your application, improving code reusability, maintainability, and overall quality.
Question 3 of 55
3. Question
You are creating a NuGet package. You plan to distribute the package to your development team privately. You need to share the package and test that the package can be consumed.
Which four actions should you perform in sequence?
Question 4 of 55
4. Question
You have a private project in Azure DevOps.
You need to ensure that a project manager can create custom work item queries to report on the project’s progress. The solution must use the principle of least privilege.
To which security group should you add the project manager?
Question 5 of 55
5. Question
You use Azure Pipelines to manage build pipelines, GitHub to store source code, and Dependabot to manage dependencies.
You have an app named App1.
Dependabot detects a dependency in App1 that requires an update.
What should you do first to apply the update?
Explanation:
Dependabot pulls down your dependency files and looks for any outdated or insecure requirements. If any of your dependencies are out-of-date, Dependabot opens individual pull requests to update each one. You check that your tests pass, scan the included changelog and release notes, then hit merge with confidence. https://dependabot.com/#how-it-works
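For context, GitHub-hosted Dependabot is configured through a .github/dependabot.yml file in the repository. A minimal sketch for an npm-based project; the weekly schedule and the pull request limit are arbitrary example values:

version: 2
updates:
  # Check npm dependencies declared in the repository root.
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "weekly"
    # Cap how many update pull requests Dependabot keeps open at once.
    open-pull-requests-limit: 5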
Question 6 of 55
6. Question
You are automating the testing process for your company.
You need to automate UI testing of a web application.
Which framework should you use?
Question 7 of 55
7. Question
You have an approval process that contains a condition. The condition requires that releases be approved by a team leader before they are deployed.
You have a policy stating that approvals must occur within eight hours.
You discover that deployments fail if the approvals take longer than two hours.
You need to ensure that the deployments only fail if the approvals take longer than eight hours.
Solution: From Post-deployment conditions, you modify the Time between re-evaluation of gates option.
Does this meet the goal?
Question 8 of 55
8. Question
You have an approval process that contains a condition. The condition requires that releases be approved by a team leader before they are deployed.
You have a policy stating that approvals must occur within eight hours.
You discover that deployments fail if the approvals take longer than two hours.
You need to ensure that the deployments only fail if the approvals take longer than eight hours.
Solution: From Pre-deployment conditions, you modify the Time between re-evaluation of gates option.
Does this meet the goal?
Explanation:
No. Modifying the "Time between re-evaluation of gates" option in pre-deployment conditions will not directly address the issue of deployments failing after two hours.
This option controls how often the system re-checks gate conditions that might affect the deployment. While it can indirectly affect the timing of deployments, it does not guarantee that deployments will wait eight hours before failing.
To ensure that deployments only fail after approvals have been pending for eight hours, you should:
Adjust the approval Timeout: if your deployment tool or process allows, configure the timeout period for approvals directly. This ensures the deployment waits for the specified time before failing.
Use a custom script or workflow: implement a custom script or workflow that monitors the approval status and automatically fails the deployment if it is not approved within eight hours.
Re-evaluate pre-deployment conditions: modifying the re-evaluation time might indirectly affect the timing, but it is not the most direct solution. Consider adjusting the specific conditions, or creating a custom condition that explicitly checks the approval status and elapsed time.
By addressing the root cause of the early deployment failures, you can ensure that deployments are only delayed when necessary and that the eight-hour approval policy is enforced.
Question 9 of 55
9. Question
You have an approval process that contains a condition. The condition requires that releases be approved by a team leader before they are deployed.
You have a policy stating that approvals must occur within eight hours.
You discover that deployments fail if the approvals take longer than two hours.
You need to ensure that the deployments only fail if the approvals take longer than eight hours.
Solution: From Pre-deployment conditions, you modify the Timeout setting for pre-deployment approvals.
Does this meet the goal?
Explanation:
You can define approvals at the start of a stage (pre-deployment approvers), at the end of a stage (post-deployment approvers), or both. You can add multiple approvers for both pre-deployment and post-deployment settings. If no approval is granted within the Timeout specified for the approval, the deployment is rejected. https://docs.microsoft.com/en-us/azure/devops/pipelines/release/approvals/approvals?view=azure-devops
Question 10 of 55
10. Question
You have an approval process that contains a condition. The condition requires that releases be approved by a team leader before they are deployed.
You have a policy stating that approvals must occur within eight hours.
You discover that deployments fail if the approvals take longer than two hours. You need to ensure that the deployments only fail if the approvals take longer than eight hours.
Solution: From Post-deployment conditions, you modify the Timeout setting for post-deployment approvals.
Does this meet the goal?
Explanation:
No. The timeout that causes the failures in this scenario applies to the pre-deployment approval, so configure the Timeout setting for pre-deployment approvals instead.
Question 11 of 55
11. Question
Case Study –
This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to complete each case. To answer the questions included in a case study, you will need to reference information that is provided in the case study. Case studies might contain exhibits and other resources that provide more information about the scenario that is described in the case study. Each question is independent of the other question in this case study.
Overview –
Contoso, Ltd. is a manufacturing company that has a main office in Chicago.
Existing Environment –
Contoso plans to improve its IT development and operations processes by implementing Azure DevOps principles. Contoso has an Azure subscription and creates an Azure DevOps organization.
The Azure DevOps organization includes:
The Docker extension
A deployment pool named Pool7 that contains 10 Azure virtual machines that run Windows Server 2016
The Azure subscription contains an Azure Automation account.
Requirements –
Planned changes –
Contoso plans to create projects in Azure DevOps as shown in the following table.
Technical requirements –
Contoso identifies the following technical requirements:
Implement build agents for Project1.
Whenever possible, use Azure resources.
Avoid using deprecated technologies.
Implement a code flow strategy for Project2 that will:
Enable Team2 to submit pull requests for Project2.
Enable Team2 to work independently on changes to a copy of Project2.
Ensure that any intermediary changes performed by Team2 on a copy of Project2 will be subject to the same restrictions as the ones defined in the build policy of Project2.
Whenever possible, implement automation and minimize administrative effort.
Implement Project3, Project5, Project6, and Project7 based on the planned changes.
Implement Project4 and configure the project to push Docker images to Azure Container Registry.
Question –
You add the virtual machines as managed nodes in Azure Automation State Configuration.
You need to configure the managed computers in Pool7.
What should you do next?
Explanation:
Scenario:
A deployment pool named Pool7 that contains 10 Azure virtual machines that run Windows Server 2016.
Question 12 of 55
12. Question
Case Study –
This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to complete each case. To answer the questions included in a case study, you will need to reference information that is provided in the case study. Case studies might contain exhibits and other resources that provide more information about the scenario that is described in the case study. Each question is independent of the other question in this case study.
Overview –
Contoso, Ltd. is a manufacturing company that has a main office in Chicago.
Existing Environment –
Contoso plans to improve its IT development and operations processes by implementing Azure DevOps principles. Contoso has an Azure subscription and creates an Azure DevOps organization.
The Azure DevOps organization includes:
The Docker extension
A deployment pool named Pool7 that contains 10 Azure virtual machines that run Windows Server 2016
The Azure subscription contains an Azure Automation account.
Requirements –
Planned changes –
Contoso plans to create projects in Azure DevOps as shown in the following table.
Technical requirements –
Contoso identifies the following technical requirements:
Implement build agents for Project1.
Whenever possible, use Azure resources.
Avoid using deprecated technologies.
Implement a code flow strategy for Project2 that will:
Enable Team2 to submit pull requests for Project2.
Enable Team2 to work independently on changes to a copy of Project2.
Ensure that any intermediary changes performed by Team2 on a copy of Project2 will be subject to the same restrictions as the ones defined in the build policy of Project2.
Whenever possible, implement automation and minimize administrative effort.
Implement Project3, Project5, Project6, and Project7 based on the planned changes.
Implement Project4 and configure the project to push Docker images to Azure Container Registry.
Question –
You need to configure Azure Automation for the computers in Pool7.
Which three actions should you perform in sequence?
Question 13 of 55
13. Question
Case Study –
This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to complete each case. To answer the questions included in a case study, you will need to reference information that is provided in the case study. Case studies might contain exhibits and other resources that provide more information about the scenario that is described in the case study. Each question is independent of the other question in this case study.
Overview –
Contoso, Ltd. is a manufacturing company that has a main office in Chicago.
Existing Environment –
Contoso plans to improve its IT development and operations processes by implementing Azure DevOps principles. Contoso has an Azure subscription and creates an Azure DevOps organization.
The Azure DevOps organization includes:
The Docker extension
A deployment pool named Pool7 that contains 10 Azure virtual machines that run Windows Server 2016
The Azure subscription contains an Azure Automation account.
Requirements –
Planned changes –
Contoso plans to create projects in Azure DevOps as shown in the following table.
Technical requirements –
Contoso identifies the following technical requirements:
Implement build agents for Project1.
Whenever possible, use Azure resources.
Avoid using deprecated technologies.
Implement a code flow strategy for Project2 that will:
Enable Team2 to submit pull requests for Project2.
Enable Team2 to work independently on changes to a copy of Project2.
Ensure that any intermediary changes performed by Team2 on a copy of Project2 will be subject to the same restrictions as the ones defined in the build policy of Project2.
Whenever possible, implement automation and minimize administrative effort.
Implement Project3, Project5, Project6, and Project7 based on the planned changes.
Implement Project4 and configure the project to push Docker images to Azure Container Registry.
Question –
How should you configure the filters for the Project5 trigger?
Explanation:
Continuous integration (CI) triggers cause a pipeline to run whenever you push an update to the specified branches or you push specified tags. For more complex triggers that use exclude or batch, you must use the full syntax as shown in the following example.
trigger:
  branches:
    include:
    - master
    - releases/*
    - feature/*
    exclude:
    - releases/old*
    - feature/*-working
  paths:
    include:
    - '*' # same as '/' for the repository root
    exclude:
    - 'docs/*' # same as 'docs/'
In the above example, the pipeline is triggered if a change is pushed to master or to any releases or feature branch. However, it is not triggered for releases branches that start with old, for feature branches that end in -working, or for changes that only touch files under the docs folder.
Question 14 of 55
14. Question
Case Study –
This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to complete each case. To answer the questions included in a case study, you will need to reference information that is provided in the case study. Case studies might contain exhibits and other resources that provide more information about the scenario that is described in the case study. Each question is independent of the other question in this case study.
Overview –
Contoso, Ltd. is a manufacturing company that has a main office in Chicago.
Existing Environment –
Contoso plans to improve its IT development and operations processes by implementing Azure DevOps principles. Contoso has an Azure subscription and creates an Azure DevOps organization.
The Azure DevOps organization includes:
The Docker extension
A deployment pool named Pool7 that contains 10 Azure virtual machines that run Windows Server 2016
The Azure subscription contains an Azure Automation account.
Requirements –
Planned changes –
Contoso plans to create projects in Azure DevOps as shown in the following table.
Technical requirements –
Contoso identifies the following technical requirements:
Implement build agents for Project1.
Whenever possible, use Azure resources.
Avoid using deprecated technologies.
Implement a code flow strategy for Project2 that will:
Enable Team2 to submit pull requests for Project2.
Enable Team2 to work independently on changes to a copy of Project2.
Ensure that any intermediary changes performed by Team2 on a copy of Project2 will be subject to the same restrictions as the ones defined in the build policy of Project2.
Whenever possible, implement automation and minimize administrative effort.
Implement Project3, Project5, Project6, and Project7 based on the planned changes.
Implement Project4 and configure the project to push Docker images to Azure Container Registry.
Question –
In Azure DevOps, you create Project3. You need to meet the requirements of the project.
What should you do first?
Explanation:
From Visual Studio Marketplace, install the SonarQube extension. After installing the extension, you need to declare your SonarQube server as a service endpoint in your Azure DevOps project settings. https://docs.sonarqube.org/latest/analysis/azuredevops-integration/
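A sketch of how the extension's tasks might then be wired into a build pipeline. The task major versions, the service connection name, and the project key are assumptions:

steps:
# Configure the analysis before the build runs.
- task: SonarQubePrepare@5
  inputs:
    SonarQube: 'SonarQubeEndpoint'   # the service connection declared in project settings
    scannerMode: 'MSBuild'
    projectKey: 'Project3'
# Build the solution so the scanner can observe compilation.
- task: VSBuild@1
  inputs:
    solution: '**/*.sln'
# Run the analysis and send the results to the SonarQube server.
- task: SonarQubeAnalyze@5
# Optionally wait for the server-side quality gate result.
- task: SonarQubePublish@5
  inputs:
    pollingTimeoutSec: '300'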
Question 15 of 55
15. Question
You use a Git repository in Azure Repos to manage the source code of a web application. Developers commit changes directly to the master branch.
You need to implement a change management procedure that meets the following requirements:
The master branch must be protected, and new changes must be built in the feature branches first.
Changes must be reviewed and approved by at least one release manager before each merge.
Changes must be brought into the master branch by using pull requests.
What should you configure in Azure Repos?
Question 16 of 55
16. Question
You need to deploy Azure Kubernetes Service (AKS) to host an application. The solution must meet the following requirements:
Containers must only be published internally.
AKS clusters must be able to create and manage containers in Azure.
What should you use for the requirement – “Containers must only be published internally”?
Question 17 of 55
17. Question
You need to deploy Azure Kubernetes Service (AKS) to host an application. The solution must meet the following requirements:
Containers must only be published internally.
AKS clusters must be able to create and manage containers in Azure.
What should you use for the requirement – “AKS clusters must be able to create and manage containers in Azure”?
Question 18 of 55
18. Question
You have 50 Node.js-based projects that you scan by using WhiteSource. Each project includes package.json, package-lock.json, and npm-shrinkwrap.json files.
You need to minimize the number of libraries reported by WhiteSource to only the libraries that you explicitly reference.
What should you do?
Question 19 of 55
19. Question
You need to execute inline testing of an Azure DevOps pipeline that uses a Docker deployment model. The solution must prevent the results from being published to the pipeline.
What should you use for the inline testing?
Explanation:
For Docker-based apps there are many ways to build your application and run tests:
Build and test in a build pipeline: build and tests execute in the pipeline, and test results are published using the Publish Test Results task.
Build and test with a multi-stage Dockerfile: build and tests execute inside the container using a multi-stage Dockerfile; as such, test results are not published back to the pipeline.
Build, test, and publish results with a Dockerfile: build and tests execute inside the container and results are published back to the pipeline. https://docs.microsoft.com/en-us/azure/devops/pipelines/tasks/test/publish-test-results?view=azure-devops&tabs=trx%2Cyaml#docker
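A sketch of the multi-stage option: the tests run inside an intermediate Dockerfile stage (for example, a stage whose RUN instruction executes the test suite), so the pipeline only builds the image and no results flow back. The image name is an assumption:

steps:
# Building the image executes every Dockerfile stage, including the test
# stage; a test failure fails the build, but no results are published.
- task: Docker@2
  displayName: Build image and run in-container tests
  inputs:
    command: build
    repository: app1
    dockerfile: '**/Dockerfile'
    tags: $(Build.BuildId)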
Question 20 of 55
20. Question
You have a build pipeline in Azure Pipelines that uses different jobs to compile an application for 10 different architectures.
The build pipeline takes approximately one day to complete.
You need to reduce the time it takes to execute the build pipeline.
Which two actions should you perform?
Question 21 of 55
21. Question
You have a project in Azure DevOps. You have an Azure Resource Group deployment project in Microsoft Visual Studio that is checked in to the Azure DevOps project.
You need to create a release pipeline that will deploy resources by using Azure Resource Manager templates. The solution must minimize administrative effort.
Which task type should you include in the solution?
Explanation:
You can integrate Azure Resource Manager templates (ARM templates) with Azure Pipelines for continuous integration and continuous deployment (CI/CD). The options for deploying an ARM template from a pipeline are:
Use the ARM template deployment task: this is the easiest option. This approach works when you want to deploy a template directly from a repository.
Add a task that runs an Azure PowerShell script: this option provides consistency throughout the development life cycle because you can use the same script that you used when running local tests. Your script deploys the template but can also perform other operations, such as getting values to use as parameters. https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/add-template-to-azure-pipelines
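A sketch of the ARM template deployment task in a pipeline stage. The service connection, subscription variable, resource group, location, and file paths are assumptions:

steps:
- task: AzureResourceManagerTemplateDeployment@3
  inputs:
    deploymentScope: 'Resource Group'
    azureResourceManagerConnection: 'AzureServiceConnection'
    subscriptionId: '$(subscriptionId)'
    resourceGroupName: 'rg-app1'
    location: 'East US'
    templateLocation: 'Linked artifact'
    csmFile: '$(Build.SourcesDirectory)/azuredeploy.json'
    csmParametersFile: '$(Build.SourcesDirectory)/azuredeploy.parameters.json'
    # Incremental mode adds or updates the template's resources without
    # deleting other resources in the group.
    deploymentMode: 'Incremental'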
Question 22 of 55
22. Question
You administer an Azure DevOps project that includes package feeds.
You need to ensure that developers can unlist and deprecate packages. The solution must use the principle of least privilege.
Which access level should you grant to the developers?
Explanation:
Feeds have four permission roles: Readers, Collaborators, Contributors, and Owners. The Contributor role is the least-privileged role that allows unlisting and deprecating packages.
Question 23 of 55
23. Question
You have a project in Azure DevOps named Project1. Project1 contains a build pipeline named Pipe1 that builds an application named App1.
You have an agent pool named Pool1 that contains a Windows Server 2019-based self-hosted agent. Pipe1 uses Pool1.
You plan to implement another project named Project2. Project2 will have a build pipeline named Pipe2 that builds an application named App2.
App1 and App2 have conflicting dependencies.
You need to minimize the possibility that the two build pipelines will conflict with each other. The solution must minimize infrastructure costs.
What should you do?
Explanation:
By default, jobs run on the host machine where the agent is installed. This is convenient and typically well-suited for projects that are just beginning to adopt Azure Pipelines. Over time, you may find that you want more control over the context where your tasks run. YAML pipelines offer container jobs for this level of control.
On Linux and Windows agents, jobs may be run on the host or in a container. (On macOS and Red Hat Enterprise Linux 6, container jobs are not available.) Containers provide isolation from the host and allow you to pin specific versions of tools and dependencies. Host jobs require less initial setup and infrastructure to maintain.
Containers offer a lightweight abstraction over the host operating system. You can select the exact versions of operating systems, tools, and dependencies that your build requires. https://docs.microsoft.com/en-us/azure/devops/pipelines/process/container-phases?view=azure-devops
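A sketch of a container job that keeps Pipe2's dependencies isolated while reusing the existing self-hosted pool. The container image name is an assumption, and a Windows container image must be compatible with the host's kernel version:

jobs:
- job: BuildApp2
  pool: Pool1   # the existing self-hosted Windows Server 2019 agent pool
  container: mcr.microsoft.com/windows/servercore:ltsc2019   # image name is an assumption
  steps:
  # App2's tools and dependencies come from the image, not from the shared
  # host, so they cannot conflict with App1's builds.
  - script: echo Building App2 inside an isolated container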
Question 24 of 55
24. Question
You are building an application that has the following assets:
Source code
Logs from automated tests and builds
Large and frequently updated binary assets
A common library used by multiple applications
Where should you store source code?
Question 25 of 55
25. Question
You are building an application that has the following assets:
Source code
Logs from automated tests and builds
Large and frequently updated binary assets
A common library used by multiple applications
Where should you store the common libraries used by multiple applications?
Explanation:
With Azure Artifacts you can create and share Maven, npm, and NuGet package feeds from public and private sources with teams of any size. You can add fully integrated package management to your continuous integration/continuous delivery (CI/CD) pipelines with a single click. https://docs.microsoft.com/en-au/azure/devops/artifacts/overview?view=azure-devops
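A sketch of publishing the shared library to an Azure Artifacts feed from a pipeline. The project path and the feed name are assumptions:

steps:
# Pack the shared library into a .nupkg file.
- task: NuGetCommand@2
  inputs:
    command: pack
    packagesToPack: '**/SharedLib.csproj'
# Push the package to an internal Azure Artifacts feed.
- task: NuGetCommand@2
  inputs:
    command: push
    packagesToPush: '$(Build.ArtifactStagingDirectory)/**/*.nupkg'
    nuGetFeedType: internal
    publishVstsFeed: 'SharedLibraries'   # feed name is an assumption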
Question 26 of 55
26. Question
You are building an application that has the following assets:
Source code
Logs from automated tests and builds
Large and frequently updated binary assets
A common library used by multiple applications
Where should you view the logs from automated tests and builds?
Question 27 of 55
27. Question
You are building an application that has the following assets:
Source code
Logs from automated tests and builds
Large and frequently updated binary assets
A common library used by multiple applications
Where should you store large and frequently updated binary assets?
Question 28 of 55
28. Question
Your company uses the following resources:
Windows Server 2019 container images hosted in an Azure Container Registry.
Azure virtual machines that run the latest version of Ubuntu
An Azure Log Analytics workspace
Azure Active Directory (Azure AD)
An Azure key vault
For which of the resources can you receive vulnerability assessments in Azure Security Center?
Question 29 of 55
29. Question
You are integrating Azure Pipelines and Microsoft Teams.
You install the Azure Pipelines app in Microsoft Teams.
You have an Azure DevOps organization named Contoso that contains a project name Project1.
You subscribe to Project1 in Microsoft Teams.
You need to ensure that you only receive events about failed builds in Microsoft Teams.
What should you do first?
Explanation:
From Microsoft Teams, run @azure pipelines subscriptions.
Here's why:
List existing subscriptions: this command lists all existing subscriptions for Azure Pipelines notifications in your Microsoft Teams channel. It shows which pipelines you are currently subscribed to and their configurations.
Identify current settings: by viewing the existing subscriptions, you can check whether there is already a subscription for Project1 and which events trigger notifications (e.g., all builds, successful builds, failed builds).
Plan for filtering: knowing the existing setup helps you determine the next steps. You might need to remove an existing subscription for all events or modify it to filter for failed builds only.
Running @azure pipelines subscriptions is the most efficient way to gather this information before configuring notifications for failed builds in Microsoft Teams.
Here's why the other options are not the best choices:
@azure pipelines subscribe https://dev.azure.com/Contoso/Project1: this command subscribes to all events for Project1 by default. While it creates a subscription, it doesn't address the need to filter for failed builds only.
Add a Publish Build Artifacts task: this task publishes build artifacts during pipeline execution. It doesn't affect notifications in Microsoft Teams.
Enable continuous integration: enabling continuous integration might lead to more frequent builds, but it doesn't control the type of notifications received in Microsoft Teams.
Question 30 of 55
30. Question
You have an Azure DevOps organization named Contoso and an Azure subscription.
You use Azure DevOps to build and deploy a web app named App1. Azure Monitor is configured to generate an email notification in response to alerts generated whenever App1 generates a server-side error.
You need to receive notifications in Microsoft Teams whenever an Azure Monitor alert is generated.
Which two actions should you perform?
Explanation:
You can set up and trigger a logic app to create a conversation in Microsoft Teams when an alert fires. When an Azure Monitor alert triggers, it calls an action group. Action groups allow you to trigger one or more actions to notify others about an alert and also remediate it. https://docs.microsoft.com/en-us/azure/azure-monitor/alerts/action-groups-logic-app
Question 31 of 55
31. Question
You manage build and release pipelines by using Azure DevOps. Your entire managed environment resides in Azure.
You need to configure a service endpoint for accessing Azure Key Vault secrets. The solution must meet the following requirements:
Ensure that the secrets are retrieved by Azure DevOps.
Avoid persisting credentials and tokens in Azure DevOps.
How should you configure the service endpoint?
Question 32 of 55
32. Question
You need to increase the security of your team’s development process. You need to recommend a security tool for each stage of the development process.
Which type of security tool should you recommend for Pull request?
Explanation:
Continuous security validation should be added at each step from development through production to help ensure the application is always secure. Validation in the CI/CD pipeline begins before the developer commits his or her code. Static code analysis tools in the IDE provide the first line of defense to help ensure that security vulnerabilities are not introduced into the CI/CD process. The process for committing code into a central repository should have controls to help prevent security vulnerabilities from being introduced. The pull request should require a code review.
Question 33 of 55
33. Question
You need to increase the security of your team’s development process. You need to recommend a security tool for each stage of the development process.
Which type of security tool should you recommend for continuous integration?
Explanation:
The CI builds should run static code analysis tests to ensure that the code follows all rules for both maintenance and security.
Question 34 of 55
34. Question
You need to increase the security of your team’s development process. You need to recommend a security tool for each stage of the development process.
Which type of security tool should you recommend for continuous delivery?
Explanation:
Once your code quality is verified and the application is deployed to a lower environment like development or QA, the process should verify that there are no security vulnerabilities in the running application. This can be accomplished by executing automated penetration tests against the running application to scan it for vulnerabilities.
Question 35 of 55
35. Question
You need to recommend project metrics for dashboards in Azure DevOps.
Which chart widgets should you recommend for the elapsed time from the creation of work items to their completion?
Question 36 of 55
36. Question
You need to recommend project metrics for dashboards in Azure DevOps.
Which chart widgets should you recommend for the elapsed time to complete work items once they are active?
Question 37 of 55
37. Question
Your company plans to use an agile approach to software development.
You need to recommend an application to provide communication between members of the development team who work in locations around the world. The application must meet the following requirements:
Provide the ability to isolate the members of different project teams into separate communication channels and to keep a history of the chats within those channels.
Be available on Windows 10, Mac OS, iOS, and Android operating systems.
Provide the ability to add external contractors and suppliers to projects.
Integrate directly with Azure DevOps.
What should you recommend?
Explanation:
In Teams, users can create different channels to organize their communications by topic. Each channel can include a couple of users or scale to thousands of users.
Microsoft Teams works on Android, iOS, Mac and Windows systems and devices. It also works in Chrome, Firefox, Internet Explorer 11 and Microsoft Edge web browsers.
The guest-access feature in Microsoft Teams allows users to invite people outside their organizations to join internal channels for messaging, meetings and file sharing. This capability helps to facilitate business-to-business project management.
Teams integrates with Azure DevOps.
Note: Slack would also be a correct answer, but it is not an option here. https://docs.microsoft.com/en-us/microsoftteams/teams-overview https://docs.microsoft.com/en-us/azure/devops/service-hooks/services/teams?view=azure-devops
Question 40 of 55
40. Question
Your company has an on-premises Bitbucket Server that is used for Git-based source control. The server is protected by a firewall that blocks inbound Internet traffic.
You plan to use Azure DevOps to manage the build and release processes.
Which two components are required to integrate Azure DevOps and Bitbucket?
Correct
You can integrate your on-premises Bitbucket Server or another Git server with Azure Pipelines. Your on-premises server may or may not be exposed to the Internet. If the Bitbucket Server cannot be reached from Azure Pipelines, you can either add exceptions to your firewall rules to allow traffic from Azure Pipelines through, or set up a self-hosted agent within your network that has access to the Bitbucket Server. Self-hosted agents require only outbound connections to Azure Pipelines, so there is no need to open the firewall for inbound connections.
An External Git service connection defines and secures the connection to the Git repository server. https://docs.microsoft.com/en-us/azure/devops/pipelines/repos/on-premises-bitbucket?view=azure-devops https://docs.microsoft.com/en-us/azure/devops/pipelines/library/service-endpoints?view=azure-devops&tabs=yaml#sep-extgit
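A minimal sketch of registering such a self-hosted agent from inside the firewall (the organization URL, pool name, agent name, and PAT are placeholders):

# Run from the extracted agent package on a machine that can reach the Bitbucket Server
./config.sh --url https://dev.azure.com/yourorg --auth pat --token <PAT> --pool OnPremPool --agent bb-agent-01
# Start the agent; it polls Azure Pipelines over outbound HTTPS only
./run.sh

Because the agent initiates all communication with Azure Pipelines, no inbound firewall rules are required.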
Incorrect
You can integrate your on-premises Bitbucket server or another Git server with Azure Pipelines. Your on-premises server may be exposed to the Internet or it may not be. If the Bitbucket server cannot be reached from Azure Pipelines, then either you can add exceptions to your firewall rules to allow traffic from Azure Pipelines to flow through or You need to setup a self-hosted agent within your network and hence will have access to the Bitbucket server. These agents only require outbound connections to Azure Pipelines. There is no need to open a firewall for inbound connections.
External Git service connection defines and secures a connection to a Git repository server https://docs.microsoft.com/en-us/azure/devops/pipelines/repos/on-premises-bitbucket?view=azure-devops https://docs.microsoft.com/en-us/azure/devops/pipelines/library/service-endpoints?view=azure-devops&tabs=yaml#sep-extgit
Unattempted
You can integrate your on-premises Bitbucket server or another Git server with Azure Pipelines. Your on-premises server may be exposed to the Internet or it may not be. If the Bitbucket server cannot be reached from Azure Pipelines, then either you can add exceptions to your firewall rules to allow traffic from Azure Pipelines to flow through or You need to setup a self-hosted agent within your network and hence will have access to the Bitbucket server. These agents only require outbound connections to Azure Pipelines. There is no need to open a firewall for inbound connections.
External Git service connection defines and secures a connection to a Git repository server https://docs.microsoft.com/en-us/azure/devops/pipelines/repos/on-premises-bitbucket?view=azure-devops https://docs.microsoft.com/en-us/azure/devops/pipelines/library/service-endpoints?view=azure-devops&tabs=yaml#sep-extgit
Question 41 of 55
41. Question
You are planning projects for three customers. Each customer’s preferred process for work items is shown in the following table.
The customers all plan to use Azure DevOps for work item management.
Which work item process should you use for Litware?
You have a GitHub repository.
You create a new repository in Azure DevOps.
You need to recommend a procedure to clone the repository from GitHub to Azure DevOps.
What should you recommend?
Correct
You can import an existing Git repo from GitHub, Bitbucket, GitLab, or other location into a new or empty existing repo in your project in Azure DevOps.
Import into a new repo:
1. Select Repos, Files.
2. From the repo drop-down, select Import repository.
3. If the source repo is publicly available, just enter the clone URL of the source repository and a name for your new Git repository. https://docs.microsoft.com/en-us/azure/devops/repos/git/import-git-repository?view=azure-devops
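As an alternative to the Import repository feature, the same result can be achieved from the command line; a sketch with placeholder URLs:

# Mirror-clone the source repository from GitHub (bare clone with all refs)
git clone --mirror https://github.com/yourorg/yourrepo.git
cd yourrepo.git
# Push all branches and tags to the new, empty Azure Repos repository
git push --mirror https://dev.azure.com/yourorg/yourproject/_git/yourrepo

The --mirror options ensure every branch and tag is copied, not just the default branch.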
Question 45 of 55
45. Question
Your company is concerned that when developers introduce open source libraries, it creates licensing compliance issues.
You need to add an automated process to the build pipeline to detect when common open source libraries are added to the code base.
What should you use?
Correct
WhiteSource integrates with your Azure DevOps or Team Foundation Server (TFS) continuous integration servers and detects all open source components in your software, without ever scanning your code. It provides you with real-time alerts on vulnerable and outdated open source components and generates comprehensive up-to-date inventory, licenses and security reports with only one click. https://marketplace.visualstudio.com/items?itemName=whitesource.ws-bolt
Question 46 of 55
46. Question
You plan to use a NuGet package in a project in Azure DevOps. The NuGet package is in a feed that requires authentication.
You need to ensure that the project can restore the NuGet package automatically.
What should the project use to automate the authentication?
Correct
The Azure Artifacts Credential Provider automates the acquisition of credentials needed to restore NuGet packages as part of your .NET development workflow. It integrates with MSBuild, dotnet, and NuGet(.exe) and works on Windows, Mac, and Linux. Any time you want to use packages from an Azure Artifacts feed, the Credential Provider will automatically acquire and securely store a token on behalf of the NuGet client you’re using. https://github.com/Microsoft/artifacts-credprovider
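Assuming the authenticated feed is already referenced in nuget.config, a one-time interactive restore lets the Credential Provider acquire and cache a token; later restores then run non-interactively (a sketch, feed details omitted):

# First restore: the Credential Provider prompts for sign-in
dotnet restore --interactive
# Subsequent restores reuse the securely stored token
dotnet restore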
Question 47 of 55
47. Question
The lead developer at your company reports that adding new application features takes longer than expected due to a large accumulated technical debt.
You need to recommend changes to reduce the accumulated technical debt.
Solution: You recommend increasing the code duplication.
Does this meet the goal?
Correct
No. Increasing code duplication will further increase technical debt: the same logic must then be understood, tested, and fixed in multiple places.
Question 48 of 55
48. Question
The lead developer at your company reports that adding new application features takes longer than expected due to a large accumulated technical debt.
You need to recommend changes to reduce the accumulated technical debt.
Solution: You recommend reducing the code coupling and the dependency cycles.
Does this meet the goal?
Correct
Yes, reducing code coupling and dependency cycles can help reduce accumulated technical debt. Here’s how:
Improved maintainability: Decoupled code is easier to understand, modify, and test, reducing the time it takes to add new features.
Reduced risk of unintended side effects: Breaking down dependencies can help prevent changes in one part of the code from causing unexpected behavior in other parts.
Easier refactoring: Decoupled code is more amenable to refactoring, allowing you to address technical debt issues more effectively.
Enhanced reusability: Smaller, more focused components can be reused in different parts of the application, reducing development time and effort.
By reducing code coupling and dependency cycles, you can create a more maintainable, flexible, and extensible codebase, which will help to reduce technical debt and make it easier to add new features.
Question 49 of 55
49. Question
The lead developer at your company reports that adding new application features takes longer than expected due to a large accumulated technical debt.
You need to recommend changes to reduce the accumulated technical debt.
Solution: You recommend increasing the test coverage.
Does this meet the goal?
Correct
No
Explanation:
While increasing test coverage is a crucial part of software development best practices, it doesn’t directly address the root cause of accumulated technical debt.
Technical Debt refers to shortcuts or suboptimal design decisions made during development for faster delivery. These can include:
Poorly written code: Difficult to understand, maintain, and extend.
Insufficient documentation: Making it hard to understand how the system works.
Tightly coupled components: Making changes in one area difficult without impacting others.
Lack of modularity: Hindering code reusability and making it hard to isolate and fix issues.
Increased Test Coverage helps:
Identify regressions: Ensure that new changes don’t break existing functionality.
Improve code quality: By forcing developers to write more robust and maintainable code.
However, it doesn’t directly:
Refactor existing code: To improve its design, structure, and maintainability.
Address the root causes of the technical debt: Such as time pressures, lack of planning, or insufficient knowledge.
To effectively reduce technical debt, you should also consider:
Regular code reviews: To identify and address potential issues early on.
Dedicated refactoring sprints: To address specific areas of technical debt.
Investing in code quality tools: Such as static analysis tools and linters.
Improving development processes: To encourage better design and reduce the need for shortcuts.
Educating developers: On best practices, design patterns, and the importance of avoiding technical debt.
In summary:
Increasing test coverage is a valuable practice but not a sufficient solution for reducing accumulated technical debt. It needs to be part of a broader strategy that addresses the underlying issues and actively improves the codebase.
Question 50 of 55
50. Question
The lead developer at your company reports that adding new application features takes longer than expected due to a large accumulated technical debt.
You need to recommend changes to reduce the accumulated technical debt.
Solution: You recommend reducing the code complexity.
Does this meet the goal?
Correct
Yes. By reducing code complexity, we can reduce the number of bugs and defects, along with the lifetime cost of the code.
Question 51 of 55
51. Question
Your company uses Git as a source code control system for a complex app named App1.
You plan to add a new functionality to App1.
You need to design a branching model for the new functionality.
Which branch lifetime and branch type should you use in the branching model?
Correct
A short-lived branch is something that should carry a consistent piece of code that contributes to the feature you are building.
Develop your features and fix bugs in feature branches based off your main branch. These branches are also known as topic branches. Feature branches isolate work in progress from the completed work in the main branch. Git branches are inexpensive to create and maintain. Even small fixes and changes should have their own feature branch. https://docs.microsoft.com/en-us/azure/devops/repos/git/git-branching-guidance?view=azure-devops https://gist.github.com/digitaljhelms/4287848
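A typical short-lived feature branch workflow looks like this (branch names are illustrative):

# Create a short-lived feature branch based off main
git checkout main
git pull
git checkout -b feature/new-functionality
# Commit the work, publish the branch, then open a pull request
git push -u origin feature/new-functionality

The branch is deleted once the pull request is merged, keeping its lifetime short.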
Question 52 of 55
52. Question
You store source code in a Git repository in Azure repos. You use a third-party continuous integration (CI) tool to control builds.
What will Azure DevOps use to authenticate with the tool?
Question 53 of 55
53. Question
You use Azure Pipelines to manage project builds and deployments.
You plan to use Azure Pipelines for Microsoft Teams to notify the legal team when a new build is ready for release.
You need to configure the Organization Settings in Azure DevOps to support Azure Pipelines for Microsoft Teams.
What should you turn on?
Correct
If Microsoft Teams is your choice for collaboration, you can use the Azure Pipelines app built for Microsoft Teams to easily monitor the events for your pipelines. Set up and manage subscriptions for builds, releases, YAML pipelines, pending approvals and more from the app and get notifications for these events in your Teams channels.
The Azure Pipelines app uses the OAuth authentication protocol, and requires Third-party application access via OAuth for the organization to be enabled. To enable this setting, navigate to Organization Settings > Security > Policies, and set the Third-party application access via OAuth for the organization setting to On. https://docs.microsoft.com/en-us/azure/devops/pipelines/integrations/microsoft-teams
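With the policy turned on, a Teams channel can then subscribe to pipeline events directly from chat; a sketch of the documented app commands (the pipeline URL is a placeholder):

@azure pipelines signin
@azure pipelines subscribe https://dev.azure.com/yourorg/yourproject/_build?definitionId=1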
Question 54 of 55
54. Question
You manage the Git repository for a large enterprise application.
You need to minimize the data size of the repository.
How should you complete the commands?
Correct
git gc --aggressive
Cleans up unnecessary files and optimizes the local repository.
git prune
Prunes all unreachable objects from the object database. https://git-scm.com/docs/git-gc
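To verify the effect, the repository's object count and on-disk size can be checked before and after the cleanup (a sketch; run inside the repository):

# Report object counts and sizes in human-readable form
git count-objects -vH
git gc --aggressive
git prune
git count-objects -vH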
Question 55 of 55
55. Question
You have an Azure DevOps project named Project1 and an Azure subscription named Sub1. Sub1 contains an Azure SQL database named DB1.
You need to create a release pipeline that uses the Azure SQL Database Deployment task to update DB1.
Which artifact should you deploy?