AZ-400 Practice Test 3
Question 1 of 56
1. Question
A team is currently using Azure DevOps for a Java-based project. They need to use a static code analysis tool for the Java project. Which of the following are tools that can be used along with Azure DevOps for this purpose? [Select Two]
Correct
Answer A and C
You can use tools such as PMD and FindBugs along with Azure DevOps for static code analysis, as noted in the Microsoft documentation.
A team needs to create Resource Manager templates for the deployment of virtual machines that would have a custom script extension.
You need to complete the Resource Manager template below.
Which of the following would go into SLOT_1?
Correct
Answer A
An example of this is given in the Microsoft documentation
A team needs to create Resource Manager templates for the deployment of virtual machines that would have a custom script extension.
You need to complete the Resource Manager template below.
Which of the following would go into SLOT_2?
Correct
Answer C
An example of this is given in the Microsoft documentation
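The original template and answer options are not reproduced here, but a typical custom script extension resource inside a Resource Manager template looks roughly like the following sketch. A Windows VM is assumed; the VM name parameter, API version, and script URL are placeholders, not values from the question.

```json
{
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "apiVersion": "2021-03-01",
  "name": "[concat(parameters('vmName'), '/customScript')]",
  "location": "[parameters('location')]",
  "dependsOn": [
    "[resourceId('Microsoft.Compute/virtualMachines', parameters('vmName'))]"
  ],
  "properties": {
    "publisher": "Microsoft.Compute",
    "type": "CustomScriptExtension",
    "typeHandlerVersion": "1.10",
    "autoUpgradeMinorVersion": true,
    "settings": {
      "fileUris": [ "https://example.com/install.ps1" ],
      "commandToExecute": "powershell -ExecutionPolicy Unrestricted -File install.ps1"
    }
  }
}
```

The `publisher`/`type` pair and the `settings` block are the parts the SLOT placeholders typically target.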
A company is developing a new web application. The Web application is going to be given to a set of users for testing. All of those test users will use the Microsoft Test and Feedback extension which would be installed in Chrome.
You need to ensure you provide the right level of access for development teams and the test users.
– The developers would need to request and gather feedback from the test users
– You would need to collect feedback from the test users
What is the access level in Azure DevOps you would give to the test users?
Correct
Answer – D
The right level of access is Stakeholder access, which allows users to provide feedback and respond to feedback requests.
The access levels are given in the Microsoft documentation.
A company is developing a new web application. The Web application is going to be given to a set of users for testing. All of those test users will use the Microsoft Test and Feedback extension which would be installed in Chrome.
You need to ensure you provide the right level of access for development teams and the test users.
– The developers would need to request and gather feedback from the test users
– You would need to collect feedback from the test users
What is the access level in Azure DevOps you would give to the developers?
Correct
Answer – D
The right level of access is Basic access, which allows users to request and respond to feedback requests.
The access levels are given in the Microsoft documentation.
A development team has set up an Azure repository for source code versioning. They currently have the Git client installed on their Windows workstations. They will be using this client to connect to the Azure Repo.
Which of the following mechanisms is used to authenticate into the repo?
Correct
Answer A
Git clients would normally use personal access tokens for authentication, as noted in the Microsoft documentation.
A development team has set up an Azure repository for source code versioning. They currently have the Git client installed on their Windows workstations. They will be using this client to connect to the Azure Repo.
Which of the following should be used to manage the authentication from the Windows client?
Correct
Answer B
This needs to be managed via the Git Credential Manager, as noted in the Microsoft documentation.
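As a rough sketch tying both answers together: the Git Credential Manager caches the personal access token after the first sign-in, and Azure DevOps accepts a PAT as a Basic-auth password by base64-encoding `:<PAT>`. The helper name and the PAT value below are placeholder assumptions, not values from the questions.

```shell
# Point Git at the credential manager so the PAT is cached after first use
# ('manager-core' is the helper name in recent Git builds -- an assumption here).
git config --global credential.helper manager-core

# Azure DevOps also accepts a PAT over HTTPS as a Basic-auth password:
# base64-encode ":<PAT>". The PAT below is a placeholder, not a real token.
PAT="xxxx"
AUTH=$(printf ":%s" "$PAT" | base64)
echo "Authorization: Basic $AUTH"
```

The same `Authorization: Basic` header works for the Azure DevOps REST API as well.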
A company is setting up Azure Artifacts. Several feeds are being created in Azure Artifacts. You need to set the permissions for the feed for the following groups
– GroupA – This group should be able to list and install packages from the feed
– GroupB – This group should be able to push packages to the feed
You need to ensure the least privileges are given to each group
Which of the following permission would you assign to GroupA?
Correct
Answer A
Based on the Microsoft documentation, the Reader role is the right role for this requirement.
A company is setting up Azure Artifacts. Several feeds are being created in Azure Artifacts. You need to set the permissions for the feed for the following groups
– GroupA – This group should be able to list and install packages from the feed
– GroupB – This group should be able to push packages to the feed
You need to ensure the least privileges are given to each group
Which of the following permission would you assign to GroupB?
Correct
Answer C
Based on the Microsoft documentation, the Contributor role is the right role for this requirement.
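As an illustration of why GroupB needs Contributor, pushing a package to a feed might look like the following. The organization, feed, and package names are placeholders, and this assumes the Azure Artifacts credential provider is installed; Azure Artifacts ignores the `--api-key` value, but the tool requires one.

```shell
# Push a package to an Azure Artifacts feed -- requires the Contributor role
# on the feed (Readers can only list and install packages).
dotnet nuget push MyPackage.1.0.0.nupkg \
  --source "https://pkgs.dev.azure.com/myorg/_packaging/myfeed/nuget/v3/index.json" \
  --api-key az
```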
Your team needs to ensure that the runbooks for the Azure Automation account are stored in an Azure Repo in Azure DevOps. How would you configure this?
Correct
Answer D
This can be done with the help of source control integration in Azure Automation
You need to assign permissions so that a particular user can add the agent machines to an agent pool. You need to ensure you provide the least privilege required. Which of the following would you consider assigning to the user?
Correct
Answer D
This can only be accomplished via the Administrator role. The Microsoft documentation mentions the following when it comes to the security of agent pools.
Which of the following can be used by the company to manage technical debt for applications?
Correct
Answer B
The right tool for this is SonarQube, as described in the Azure DevOps documentation.
Option A is incorrect since this is used for finding and fixing open source vulnerabilities
Option C is incorrect since this is used as a CI/CD tool
Option D is incorrect since this is used to check for errors in Java-based applications
For more information on implementing SonarQube, please visit the below URL https://www.azuredevopslabs.com/labs/vstsextend/sonarqube/
Question 14 of 56
14. Question
Case Study:
Which of the following would you use to centralize all of the Nuget packages for the company?
Correct
Answer A
The right tool to use for this is Azure Artifacts, as described in the Microsoft documentation.
Option B is incorrect since this is used as a source code versioning system
Option C is incorrect since this is used for continuous integration and deployments
Option D is incorrect since this is used as an orchestration service for container-based applications
For more information on Azure Artifacts, please visit the below URL https://azure.microsoft.com/en-in/services/devops/artifacts/
Question 15 of 56
15. Question
A company wants to follow an Azure DevOps strategy for their development and deployment process. They want to ensure that the following requirements are met during the development phase:
– Scanning of 3rd party packages in the code base for vulnerabilities
– Checking for unlicensed libraries in the code base
The company decides to implement these checks using Continuous Integration.
Would this fulfil the requirement?
Correct
Answer A
Yes, you would use tools as part of the Continuous Integration process to check for these vulnerabilities.
The Microsoft documentation shows the following representation of the CI/CD pipeline.
A company wants to follow an Azure DevOps strategy for their development and deployment process. They want to ensure that the following requirements are met during the development phase:
– Scanning of 3rd party packages in the code base for vulnerabilities
– Checking for unlicensed libraries in the code base
The company decides to implement these checks using Continuous Deployment.
Would this fulfil the requirement?
A company wants to follow an Azure DevOps strategy for their development and deployment process. They want to ensure that the following requirements are met during the development phase:
– Scanning of 3rd party packages in the code base for vulnerabilities
– Checking for unlicensed libraries in the code base
The company decides to implement automated security tools.
Would this fulfil the requirement?
Correct
Answer A
You can implement many tools as part of the CI/CD pipeline to check for security vulnerabilities, as noted in the Microsoft documentation.
You have an Azure DevOps project. You have to add a test case to the project. Which of the following are steps you would implement for this scenario? Choose 2 answers from the options given below
Correct
Answer A and B
You can first create a new work item in your Azure Board and then add a test for the work item, as shown in the Microsoft documentation.
Your company has an Azure DevOps project. They want to create release pipelines in the DevOps project. The release pipelines need to access secrets stored in an existing Key Vault.
Which of the following would you create to ensure the release pipeline could access the secret in the key vault?
Correct
Answer B
To access the secret in the key vault, you have to create a service principal.
This is also mentioned in the Azure DevOps Lab documentation
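A rough sketch of the setup with the Azure CLI is shown below. The service principal name, vault name, and permissions are placeholder assumptions, not values from the question.

```shell
# Create a service principal for the release pipeline to authenticate as
az ad sp create-for-rbac --name "my-release-sp"

# Grant that principal read access to secrets in the existing Key Vault
# (the appId printed by the previous command goes into --spn)
az keyvault set-policy --name myKeyVault \
  --spn "<appId-from-previous-step>" \
  --secret-permissions get list
```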
Your company has an Azure DevOps project. They want to create release pipelines in the DevOps project. The release pipelines need to access secrets stored in an existing Key Vault.
Which of the following would you create in your pipeline?
Correct
Answer – B
You have to create a new service connection
Your company has an Azure DevOps project that contains several build pipelines. The build pipelines use around 30 open source libraries. You have to ensure that the open source libraries comply with the company's licensing standards. Which of the following could you use for this purpose?
Correct
Answer C
The Black Duck tool can be used to comply with this requirement.
This is also mentioned in the Visual Studio marketplace
You currently have an Azure Kubernetes cluster in place. You have to deploy an application to the cluster by using Azure DevOps. Which of the following steps would you carry out to implement this? Choose 3 answers from the options given below
Correct
Answer B, C and E
You can have your images pushed to Azure Container Registry. Even though this is not explicitly mentioned in the question, based on the options available we can assume that the Azure Kubernetes cluster needs to authenticate against a repository and pull the image from it.
For this, we have to create a service principal and assign RBAC roles to allow the cluster to pull images from Azure Container Registry.
An example of this is given in the Microsoft documentation
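A hedged sketch of the service principal and RBAC step with the Azure CLI (registry and principal names are placeholders; the documented example may differ in detail):

```shell
# Look up the resource ID of the container registry
ACR_ID=$(az acr show --name myregistry --query id --output tsv)

# Create a service principal for the cluster and capture its appId
SP_APPID=$(az ad sp create-for-rbac --name myAksSp --skip-assignment \
  --query appId --output tsv)

# Allow the service principal to pull images from the registry
az role assignment create --assignee "$SP_APPID" --role AcrPull --scope "$ACR_ID"
```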
You have to set up an Azure Kubernetes cluster to host a set of applications. Below are the key requirements for the deployment
– The images must be published internally
– The Azure Kubernetes clusters must be able to create and manage containers in Azure
Which of the following would you use for the requirement?
The images must be published internally
Correct
Answer B
You can use Azure Container Registry to provide a private registry service
This is also mentioned in the Microsoft documentation
You have to set up an Azure Kubernetes cluster to host a set of applications. Below are the key requirements for the deployment
– The images must be published internally
– The Azure Kubernetes clusters must be able to create and manage containers in Azure
Which of the following would you use for the requirement?
The Azure Kubernetes clusters must be able to create and manage containers in Azure
Correct
Answer D
To provide access for the Azure Kubernetes cluster to the Azure Container Registry, we have to use a service principal.
This is also mentioned in the Microsoft documentation
Your company needs to deploy a set of Azure Kubernetes clusters. Each cluster has different requirements. The requirements are given below
Which of the following would you use in the deployment of the cluster photonappclusterA?
Correct
Answer D
In the YAML deployment file, you can use the kubernetes.io/azure-file provisioner to access an SMB-based file share.
This is also mentioned in the Microsoft documentation
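A minimal sketch of such a storage class is shown below. The class name and SKU are assumptions; newer AKS versions use CSI provisioners instead of this in-tree one.

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azurefile-smb              # placeholder name
provisioner: kubernetes.io/azure-file   # in-tree Azure Files (SMB) provisioner
parameters:
  skuName: Standard_LRS
```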
Your company needs to deploy a set of Azure Kubernetes clusters. Each cluster has different requirements. The requirements are given below
Which of the following would you use in the deployment of the cluster photonappclusterB?
Correct
Answer C
In the YAML deployment file, you can use the kubernetes.io/azure-disk provisioner to access disks.
This is also mentioned in the Microsoft documentation
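For comparison with the azure-file case, a minimal azure-disk storage class might look like this (class name and account type are assumptions):

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: managed-premium            # placeholder name
provisioner: kubernetes.io/azure-disk   # in-tree Azure managed disk provisioner
parameters:
  storageaccounttype: Premium_LRS
  kind: Managed
```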
You currently have an existing Azure Kubernetes cluster in place. You have to enable monitoring. You have to execute the required Azure CLI command for enabling monitoring.
az aks SLOT_1 -a SLOT_2 -g -n
Which of the following would go into SLOT_1?
Correct
Answer D
First, we have to use the enable-addons option.
This is also given in the Microsoft documentation
You currently have an existing Azure Kubernetes cluster in place. You have to enable monitoring. You have to execute the required Azure CLI command for enabling monitoring.
az aks SLOT_1 -a SLOT_2 -g -n
Which of the following would go into SLOT_2?
Correct
Answer B
Next, we need to enable monitoring
This is also given in the Microsoft documentation
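Putting both slots together, the completed command would look roughly like this (the resource group and cluster names are placeholders):

```shell
# Enable the Azure Monitor for containers add-on on an existing cluster
az aks enable-addons -a monitoring -g myResourceGroup -n myAKSCluster
```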
A team needs to deploy a web application onto Azure using the Azure Web App service. The following requirements need to be met
– Send telemetry about the web application onto Azure
– Scale the web application based on the status of availability tests for the web application
Which of the following would you use for storage of telemetry data?
Correct
Answer B
You can use Azure Application Insights to store the telemetry data.
This is also given in the Microsoft documentation
A team needs to deploy a web application onto Azure using the Azure Web App service. The following requirements need to be met
– Send telemetry about the web application onto Azure
– Scale the web application based on the status of availability tests for the web application
You have to define scaling rules. Which of the following would you define as the Metric source?
A team needs to deploy a web application onto Azure using the Azure Web App service. The following requirements need to be met
– Send telemetry about the web application onto Azure
– Scale the web application based on the status of availability tests for the web application
Where would you define the condition of basing the metric on Availability tests?
Your company currently has an on-premises setup that uses Team Foundation Server for managing continuous delivery pipelines. They also now want to make use of Azure DevOps Services. They are planning to use Slack for team communication as well.
They want to create work items directly from Slack channels.
Which of the following should they consider when integrating Team Foundation Server and Slack?
Correct
Answer B
For Team Foundation Server they should create a service hook.
This is also given in the Microsoft documentation
Your company currently has an on-premises setup that uses Team Foundation Server for managing continuous delivery pipelines. They also now want to make use of Azure DevOps Services. They are planning to use Slack for team communication as well.
They want to create work items directly from Slack channels.
Which of the following should they consider when integrating Azure DevOps and Slack?
Correct
Answer A
For this, they should consider using Azure Boards for Slack.
This is also given in the Microsoft documentation
A company is planning on using Azure DevOps along with DevOps practices for the development and deployment of applications. They want to implement the following patterns for different development teams when it comes to build automation:
– Ensure that only code that compiles and passes unit testing is checked into the Integration branch
– Ensure code is of good quality before it is released to the test area
– Ensure security vulnerabilities are identified as soon as possible in the code base
Which of the following techniques in build automation can be used for the following requirement?
Ensure that only code that compiles and passes unit testing is checked into the Integration branch
Correct
Answer B
You can accomplish this with Gated Check-ins.
This is also given in the Microsoft documentation
A company is planning on using Azure DevOps along with DevOps practices for the development and deployment of applications. They want to implement the following patterns for different development teams when it comes to build automation:
– Ensure that only code that compiles and passes unit testing is checked into the Integration branch
– Ensure code is of good quality before it is released to the test area
– Ensure security vulnerabilities are identified as soon as possible in the code base
Which of the following techniques in build automation can be used for the following requirement?
Ensure code is of good quality before it is released to the test area
Correct
Answer C
You can accomplish this with Code Analysis Integrations
This is also given in the Microsoft documentation
A company is planning on using Azure DevOps along with DevOps practices for the development and deployment of applications. They want to implement the following patterns for different development teams when it comes to build automation:
– Ensure that only code that compiles and passes unit testing is checked into the Integration branch
– Ensure code is of good quality before it is released to the test area
– Ensure security vulnerabilities are identified as soon as possible in the code base
Which of the following techniques in build automation can be used for the following requirement?
Ensure security vulnerabilities are identified as soon as possible in the code base
Correct
Answer D
You can accomplish this with Fortify Integrations
This is also given in the Microsoft documentation
A company wants to provision a set of Windows virtual machines on Azure. They want to automate the installation of applications and configuration of the VM during the creation of the VM.
The company decides to use Azure Custom Script Extensions for the virtual machine.
Would this fulfil the requirement?
Correct
Answer A
Yes, you can use Azure Custom Script Extensions for this requirement
The Microsoft documentation mentions the following
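One hedged way to attach the extension with the Azure CLI after the VM exists is shown below; the resource names and the script are placeholders, not from the question.

```shell
# Attach the Custom Script Extension to an existing Windows VM and run a script
az vm extension set \
  --resource-group myResourceGroup \
  --vm-name myWindowsVM \
  --name CustomScriptExtension \
  --publisher Microsoft.Compute \
  --settings '{"fileUris": ["https://example.com/install.ps1"], "commandToExecute": "powershell -ExecutionPolicy Unrestricted -File install.ps1"}'
```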
A company wants to provision a set of Windows virtual machines on Azure. They want to automate the installation of applications and configuration of the VM during the creation of the VM.
The company decides to use Cloud-init for the virtual machine.
Would this fulfil the requirement?
Correct
Answer B
No, this is used to customize a Linux VM
The Microsoft documentation mentions the following
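For contrast, a minimal cloud-init file for a Linux VM might look like the following (the package chosen is just an example):

```yaml
#cloud-config
package_upgrade: true
packages:
  - nginx
runcmd:
  - systemctl enable nginx
```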
A company wants to provision a set of Windows virtual machines on Azure. They want to automate the installation of applications and configuration of the VM during the creation of the VM.
The company decides to use Chef as the automation platform.
Would this fulfil the requirement?
Correct
Answer A
Yes, you can use this to define how your infrastructure gets deployed.
The Microsoft documentation mentions the following
Your company is setting up a DevOps environment using Azure DevOps. They will be using Git for source code versioning in Azure Repos. They want to set up the right policies and permissions in Azure Repos. Below are the key requirements
– Provide a set of users the ability to remove locks set on branches by other users
– Require a minimum number of reviewers before completing a pull request
– Enforce a merge strategy for pull requests
Which of the following can be used to implement the requirement?
Provide a set of users the ability to remove locks set on branches by other users
Correct
Answer B
You can use Branch permissions for this requirement
The Microsoft documentation mentions the following
Your company is setting up a DevOps environment using Azure DevOps. They will be using Git for source code versioning in Azure Repos. They want to set up the right policies and permissions in Azure Repos. Below are the key requirements
– Provide a set of users the ability to remove locks set on branches by other users
– Require a minimum number of reviewers before completing a pull request
– Enforce a merge strategy for pull requests
Which of the following can be used to implement the requirement?
Require a minimum number of reviewers before completing a pull request
Correct
Answer A
You can use Branch policies for this requirement
The Microsoft documentation mentions the following
Your company is setting up a DevOps environment using Azure DevOps. They will be using Git for source code versioning in Azure Repos. They want to set up the right policies and permissions in Azure Repos. Below are the key requirements
– Provide a set of users the ability to remove locks set on branches by other users
– Require a minimum number of reviewers before completing a pull request
– Enforce a merge strategy for pull requests
Which of the following can be used to implement the requirement?
Enforce a merge strategy for pull requests
Correct
Answer A
You can use Branch policies for this requirement
The Microsoft documentation mentions the following
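With the azure-devops CLI extension installed, branch policies such as the minimum-reviewer rule can also be scripted, roughly as follows. The organization, project, repository ID, and branch below are placeholders.

```shell
# Require a minimum number of reviewers on pull requests targeting main
az repos policy approver-count create \
  --org "https://dev.azure.com/myorg" --project MyProject \
  --repository-id 00000000-0000-0000-0000-000000000000 \
  --branch main \
  --minimum-approver-count 2 \
  --creator-vote-counts false \
  --allow-downvotes false \
  --reset-on-source-push true \
  --blocking true --enabled true
```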
You have an Azure DevOps organization named Contoso.
You need to recommend an authentication mechanism that meets the following requirements:
– Supports authentication from Git
– Minimizes the need to provide credentials during authentication
Alternate credentials in Azure DevOps is the INCORRECT option because even though authentication would work from Git, it won't satisfy the requirement to minimize the need to provide credentials during authentication. Moreover, using alternate credentials is not a secure method, and it is being deprecated. https://devblogs.microsoft.com/devops/azure-devops-will-no-longer-support-alternate-credentials-authentication/
User accounts in Azure Active Directory (Azure AD) are also INCORRECT because this is Azure AD authentication rather than Git authentication. Also, this method won't satisfy the requirement to minimize the need to provide credentials – it will ask for credentials every time someone signs in.
Question 44 of 56
44. Question
You have a build pipeline in Azure Pipelines.
You create a Slack App Integration.
You need to send build notifications to a Slack channel name #Development.
What should you do first?
Correct
Correct Answer(s): Create a service hook subscription
Configure a Service Connection is INCORRECT because service connections are used to connect Microsoft or external/remote services in order to execute tasks in a pipeline agent job. They will not help in integrating with Slack for channel-level notifications of a build. https://docs.microsoft.com/en-us/azure/devops/pipelines/library/service-endpoints
The variety of service connections available in Azure DevOps includes:
– Azure Classic
– Azure Resource Manager
– Azure Service Bus
– Bitbucket Cloud
– Chef
– Docker Host
– Docker Registry
– External Git
– Generic
– GitHub
– GitHub Enterprise Server
– Jenkins
– Kubernetes
– Maven
– npm
– NuGet
– Python package download
– Python package upload
– Service Fabric
– SSH
– Subversion
– Team Foundation Server/Azure Pipelines
– Visual Studio App Center
Create a global notification is an INCORRECT option because global notifications in Azure DevOps manage notifications for all projects defined for an Azure DevOps organization or collection. Global notifications won't manage notifications to a particular Slack channel; instead they are helpful for notifying Azure DevOps groups, teams, or individuals. https://docs.microsoft.com/en-us/azure/devops/notifications/manage-organization-notifications
Create a project-level notification is INCORRECT because the working of project-level notification is similar to the Global notifications, the only difference is that it is used for managing project level notifications as compared to all projects in an organization in the Global level notifications. https://docs.microsoft.com/en-us/azure/devops/notifications/manage-team-group-notifications
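To make the correct answer concrete, here is a minimal sketch of creating a service hook subscription through the Azure DevOps REST API. The publisher/consumer IDs, the project GUID, and the Slack webhook URL below are assumptions for illustration; verify them against your organization and the service hooks documentation before relying on them.

```python
import json
import urllib.request


def build_slack_build_subscription(project_id: str, webhook_url: str) -> dict:
    """Build a service hook subscription payload that posts completed-build
    events to a Slack incoming webhook (e.g. one wired to #Development).
    The consumer IDs here are assumptions to verify in your org."""
    return {
        "publisherId": "tfs",                       # Azure DevOps event publisher
        "eventType": "build.complete",              # fire when a build finishes
        "resourceVersion": "1.0",
        "consumerId": "slack",                      # Slack consumer (assumption)
        "consumerActionId": "postMessageToChannel",  # assumption
        "publisherInputs": {"projectId": project_id},
        "consumerInputs": {"url": webhook_url},     # Slack incoming-webhook URL
    }


def create_subscription(organization: str, payload: dict) -> urllib.request.Request:
    """Prepare the POST against the service hooks endpoint. Authentication
    (e.g. a PAT in a Basic auth header) is omitted from this sketch."""
    return urllib.request.Request(
        f"https://dev.azure.com/{organization}/_apis/hooks/subscriptions?api-version=6.0",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # urllib.request.urlopen(...) would actually create the subscription
```

In practice the Slack app integration walks you through this same subscription from the Slack side; the sketch just shows what gets created underneath.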
Question 45 of 56
45. Question
You have a private GitHub repository.
You need to display the commit status of the repository on Azure Boards.
What should you do first?
Correct
Correct Answer(s): Add the Azure Boards app to the repository
Add the Azure Boards App to the repository is the CORRECT answer because doing this links GitHub commits and pull requests to work items in Azure Boards, making it easier to track and plan work. The Azure Boards app is available in the GitHub Marketplace.
Add the Azure Pipelines app to the GitHub repository is INCORRECT because that integrates GitHub repositories with Azure Pipelines for end-to-end tracking and traceability of code changes, commits, builds, and releases. It helps automate CI/CD for the code in the repository, but it won't integrate the repository with Azure Boards.
Configure multi-factor authentication (MFA) for your GitHub account is INCORRECT because MFA hardens the login process for your GitHub account by adding another layer of authentication (verifying it really is you logging in); it does not help integrate Azure Boards with a private GitHub repository. An example of MFA is a verification code sent to your personal mobile number that must be entered, after your password, before you can log in to GitHub. https://docs.github.com/en/github/authenticating-to-github/securing-your-account-with-two-factor-authentication-2fa
Create a GitHub Action in GitHub is INCORRECT because GitHub Actions automate software workflows by enabling CI/CD for the code. Using workflows/templates within GitHub Actions one can build, test, and deploy more efficiently, but it won't solve the problem of displaying the commit status of the repository on Azure Boards.
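Once the Azure Boards app is installed on the repository, linking happens through the commit message: mentioning "AB#<work item ID>" ties the commit to that work item so its status shows up on the board. The work item ID below is hypothetical; the `git commit` line is commented out so the sketch can be read standalone.

```shell
# Hypothetical work item ID 123. The "AB#123" token in the commit message is
# what the Azure Boards app scans for to create the link.
msg="Fix login timeout AB#123"
echo "$msg"
# git commit -m "$msg"    # run inside the GitHub repository
```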
Question 46 of 56
46. Question
You are integrating Azure Pipelines and Microsoft Teams.
You install the Azure pipelines app in Microsoft Teams.
You have an Azure DevOps organization named Contoso that contains a project named Project1.
You subscribe to Project1 in Microsoft Teams.
You need to ensure that you only receive events about failed builds in Microsoft Teams.
What should you do first?
Correct
Correct Answer(s): From Microsoft Teams, Run @azure pipelines subscriptions
From Microsoft Teams, Run @azure pipelines subscriptions is the CORRECT answer because, as a first step, you list all the existing subscriptions and pipelines so you can manage them. Once the subscriptions are listed, select the appropriate one and use the dropdown menus to configure it to receive event notifications for failed builds only.
From Microsoft Teams, run @azure pipelines subscribe https://dev.azure.com/Contoso/Project1 is the INCORRECT answer because this command subscribes to Project1 as a whole, not to failed builds specifically; it covers all of Project1's pipelines, both build and release. Moreover, we already subscribed to Project1 in Microsoft Teams, so this is not the command to run first.
For example, to subscribe for Build pipelines enter
@azure pipelines subscribe https://dev.azure.com/test/abcaders/_build?definitionId=2
From Azure Pipelines, add a Publish Build Artifacts task to Project1 is also an INCORRECT choice because the Publish Build Artifacts task picks up the final artifacts from the source build pipeline and publishes them as artifacts, which any other pipeline (usually a release pipeline) can then pick up as a source. https://docs.microsoft.com/en-us/azure/devops/pipelines/tasks/utility/publish-build-artifacts
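For reference, the chat commands the Azure Pipelines app for Teams accepts look like the following; the project URL is illustrative, and the exact command set should be confirmed against the app's own help output:

```
@azure pipelines subscriptions                                  # list and manage existing subscriptions
@azure pipelines subscribe https://dev.azure.com/Contoso/Project1
@azure pipelines unsubscribe all https://dev.azure.com/Contoso/Project1
```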
Question 47 of 56
47. Question
You are planning projects for three customers. Each customer's preferred process for work items is shown in the following table.
The customers all plan to use Azure DevOps for work item management.
Which work item process should you use for the customer Litware, Inc.?
Correct
Correct Answer(s): Scrum
Scrum is the CORRECT answer because the scrum process in Azure DevOps works great when you want to track product backlog items (PBIs) and bugs on the Kanban board or break PBIs and bugs down into tasks on the taskboard. This process supports the original Scrum methodology.
XP is an INCORRECT answer because XP, or Extreme Programming, is a different software development methodology focused on moving fast using CI/CD and test-focused development; it fundamentally consists of 12 core principles. For the Litware client, XP won't fulfill the requirement of tracking user stories and maintaining PBIs on either the Kanban board or the taskboard.
Detailed information about Extreme Programming practices and principles: https://www.geeksforgeeks.org/software-engineering-extreme-programming-xp/ https://www.agilealliance.org/glossary/xp/
CMMI is INCORRECT because CMMI is used when change requests must be tracked to create an auditable process. It won't support tracking PBIs/bugs broken down into tasks as Litware Inc. requires.
Agile is an INCORRECT choice because there is no requirement to track user stories, and Scrum alone fulfills the requirement.
Question 48 of 56
48. Question
You are planning projects for three customers. Each customer's preferred process for work items is shown in the following table.
The customers all plan to use Azure DevOps for work item management.
Which work item process should you use for the customer Contoso, Ltd.?
Correct
Correct Answer(s): Agile
The CORRECT answer will be Agile because the company wants to track user stories and/or bugs on the Kanban board. Moreover, there is also a requirement to track bugs and tasks on the Taskboard. Agile process is chosen in Azure DevOps when the team uses Agile planning methods and tracks development and test activities separately.
The link to a detailed Microsoft document on how to choose a process in Azure DevOps: https://docs.microsoft.com/en-us/azure/devops/boards/work-items/guidance/choose-process
CMMI is the INCORRECT answer here because the CMMI process is for when a process-improvement framework is required to support estimates, work status, and so on. It won't satisfy Contoso's requirements.
Scrum is the INCORRECT answer because the Scrum process in Azure DevOps won't fulfill the requirement to track user stories and bugs on the Kanban board.
Question 49 of 56
49. Question
You have a project in Azure DevOps. You have an Azure Resource Group deployment project in Microsoft Visual Studio that is checked into the Azure DevOps project.
You need to create a release pipeline that will deploy resources by using Azure Resource Manager templates. The solution must minimize administrative effort.
Which task type should you include in the solution?
Correct
Correct Answer(s): Azure PowerShell
Azure PowerShell is the CORRECT choice because Azure PowerShell commands can deploy Azure resources into a resource group using Azure Resource Manager (ARM) templates. Moreover, since we need to minimize administrative effort, we deploy using PowerShell and ARM templates, i.e., infrastructure as code.
Azure PowerShell, an extension of Windows PowerShell that supports Azure resources, provides flexibility in terms of scoped deployments; different Azure PowerShell commands can be used for different deployments.
Azure Cloud Service Deployment is an INCORRECT choice because an Azure cloud service is a PaaS offering and does not satisfy our requirement of deploying Azure resources with ARM templates. An Azure cloud service deployment task in the pipeline will not let us use ARM templates to deploy different kinds of Azure resources. https://docs.microsoft.com/en-us/azure/cloud-services/cloud-services-choose-me
Azure App Service Manage is INCORRECT because this task controls or manages an App Service already deployed on Azure: start, stop, restart, slot swap, Swap with Preview, install site extensions, or enable continuous monitoring. The Azure App Service Manage task has nothing to do with deployments using ARM templates, which is why we rule this option out. https://docs.microsoft.com/en-us/azure/devops/pipelines/tasks/deploy/azure-app-service-manage?view=azure-devops
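A hedged sketch of what such a pipeline step could look like, using the Azure PowerShell task. The service connection name, resource group, and template paths are placeholders, not values from the question:

```yaml
steps:
  - task: AzurePowerShell@5
    inputs:
      azureSubscription: 'my-arm-service-connection'   # placeholder service connection
      ScriptType: 'InlineScript'
      azurePowerShellVersion: 'LatestVersion'
      Inline: |
        New-AzResourceGroupDeployment `
          -ResourceGroupName 'rg-demo' `
          -TemplateFile '$(Pipeline.Workspace)/drop/azuredeploy.json' `
          -TemplateParameterFile '$(Pipeline.Workspace)/drop/azuredeploy.parameters.json'
```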
Question 50 of 56
50. Question
Your company is building a new solution in Java.
The company currently uses a SonarQube server to analyze the code of .NET solutions.
You need to analyze and monitor the code quality of the Java solution.
Which task types should you add to the build pipeline?
Maven is a build automation tool from Apache used mainly for Java projects; it makes the build process easy and uniform while encouraging development best practices. http://maven.apache.org/what-is-maven.html
Chef is an INCORRECT answer because Chef is a configuration management and infrastructure automation tool, which will not help when analyzing code quality for Java-based solutions. https://www.chef.io/
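A sketch of how Maven and SonarQube analysis combine in a build pipeline. The task names and inputs follow the SonarQube extension for Azure DevOps, but the service connection name is a placeholder and the exact task versions should be checked against your installed extension:

```yaml
steps:
  - task: SonarQubePrepare@5
    inputs:
      SonarQube: 'my-sonarqube-connection'   # placeholder SonarQube service connection
      scannerMode: 'Other'                   # analysis runs through Maven
  - task: Maven@3
    inputs:
      mavenPomFile: 'pom.xml'
      goals: 'package'
      sonarQubeRunAnalysis: true             # hand results to the SonarQube tasks
      sqMavenPluginVersionChoice: 'latest'
  - task: SonarQubePublish@5
    inputs:
      pollingTimeoutSec: '300'               # wait for the quality gate result
```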
Question 51 of 56
51. Question
You have 50 Node.js-based projects that you scan by using WhiteSource. Each project includes Package.json, Package-lock.json, and Npm-shrinkwrap.json files.
You need to minimize the number of libraries reported by WhiteSource to only the libraries that you explicitly reference.
What should you do?
Correct
Correct Answer(s): Add a devDependencies section to Package-lock.json
Add a devDependencies section to Package-lock.json is the CORRECT answer because we need to reduce the number of reported libraries to the bare minimum that must be reviewed.
To achieve that, within the package.json file we split the npm dependencies between devDependencies and (production) dependencies, after which we can use a production flag to exclude most of the unnecessary packages. https://docs.microsoft.com/en-us/archive/blogs/visualstudioalmrangers/manage-your-open-source-usage-and-security-as-reported-by-your-cicd-pipeline
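A minimal example of the split described above; the package names and versions are illustrative only:

```json
{
  "name": "sample-app",
  "dependencies": {
    "express": "^4.18.0"
  },
  "devDependencies": {
    "jest": "^29.0.0",
    "eslint": "^8.0.0"
  }
}
```

With this layout, installing with the production flag (e.g. `npm install --production`) leaves only the runtime dependencies in node_modules, so the scan reports only what the application explicitly needs.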
Configure the Artifactory plug-in is an INCORRECT answer because this plugin integrates Artifactory Artifacts with WhiteSource. The Artifactory plugin adds additional information to the Artifactory artifacts and updates WhiteSource.
Once invoked, all the artifacts' metadata in your Artifactory will be uploaded to your WhiteSource inventory. However, this won't serve our purpose of minimizing the number of reported libraries. https://whitesource.atlassian.net/wiki/spaces/WD/pages/34046111/Artifactory+Plugin
Delete Package-lock.json is INCORRECT because package-lock.json is a very important file that locks each dependency to its installed version. It is generated automatically by any operation in which npm modifies either the node_modules tree or package.json.
Because package-lock.json is such an important file, it is highly recommended not to delete it, even when resolving conflicts.
Your company deploys applications in Docker containers.
You want to detect known exploits in the Docker images used to provision the Docker containers.
You need to integrate image scanning into the application lifecycle. The solution must expose the exploits as early as possible during the application lifecycle.
What should you configure?
Correct
Correct Answer(s): a task executed in the continuous integration pipeline and a scheduled task that analyzes the image registry
a task executed in the continuous integration pipeline and a scheduled task that analyzes the image registry is CORRECT because our requirement is to expose the exploits as early as possible. We can use the Docker task in a build or release pipeline to sign in to ACR, then use a script to pull the image and scan it for vulnerabilities.
New vulnerabilities are discovered all the time, so scanning for and identifying vulnerabilities is a continuous process. Incorporate vulnerability scanning throughout the container lifecycle: perform a vulnerability scan on container images before pushing them to a public or private registry, and continue to scan images in the registry, both to identify flaws that were missed during development and to address newly discovered vulnerabilities in the code used in the images. Integrating Azure Security Center with Azure Container Instances provides an additional way to scan images for vulnerabilities. https://docs.microsoft.com/en-us/azure/container-instances/container-instances-image-security
manual tasks performed during the planning phase and the deployment phase is an INCORRECT answer because it does not meet the requirement of scanning for vulnerabilities as early as possible. Moreover, it depends on manual tasks in the planning phase, which is not a best practice: manual steps slow down application delivery and keep the development team from delivering continuous value.
a task executed in the continuous deployment pipeline and a scheduled task against a running production container is an INCORRECT choice because we don’t need to wait until the container is in production. We want to identify vulnerabilities as early as possible, so we rule this option out.
a task executed in the continuous integration pipeline and a scheduled task that analyzes the production container is an INCORRECT choice because the requirement is to detect potential threats as early as possible, whereas this option waits for a scheduled task that analyzes the production container.
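A minimal sketch of such a CI scan step follows. It assumes the open-source Trivy scanner and placeholder registry/image names (`myregistry`, `myapp`), none of which come from the question itself:

```yaml
# Azure Pipelines CI sketch: build the image, then scan it before it is pushed.
steps:
  - task: Docker@2
    displayName: Build image
    inputs:
      command: build
      repository: myregistry.azurecr.io/myapp
      tags: $(Build.BuildId)
  - script: |
      # Scan the freshly built image; fail the build on high/critical findings.
      trivy image --exit-code 1 --severity HIGH,CRITICAL \
        myregistry.azurecr.io/myapp:$(Build.BuildId)
    displayName: Scan image for known vulnerabilities
```

The scheduled registry scan from the correct answer would be a separate pipeline on a cron trigger that pulls and re-scans the published tags, catching vulnerabilities disclosed after an image was first built.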
Question 53 of 56
53. Question
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals.
You plan to create a release pipeline that will deploy Azure resources by using Azure Resource Manager templates. The release pipeline will create the following resources:
– Two resource groups
– Four Azure virtual machines in one resource group
– Two Azure SQL databases in the other resource group
You need to recommend a solution to deploy the resources.
Solution: Create a main template that will deploy the resources in one resource group and a nested template that will deploy the resources in the other resource group.
Does this meet the goal?
Correct
Yes, this solution meets the goal.
Here’s a breakdown of why:
Main Template: This template can be used to deploy the two resource groups and the four virtual machines in one of them.
Nested Template: The nested template can be used to deploy the two Azure SQL databases in the second resource group.
Flexibility: This approach provides flexibility as you can modify the nested template without affecting the main template, and vice versa.
Modularity: The use of nested templates promotes modularity and reusability of the deployment logic.
Therefore, using a main template and a nested template is an effective way to deploy the specified Azure resources.
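The pattern above can be sketched as follows. This is an illustrative fragment, not the question's own template: the resource group and deployment names are placeholders, and the nested template's `resources` array is left empty where the SQL database definitions would go.

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Resources/deployments",
      "apiVersion": "2021-04-01",
      "name": "sqlDeployment",
      "resourceGroup": "rg-databases",
      "properties": {
        "mode": "Incremental",
        "template": {
          "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
          "contentVersion": "1.0.0.0",
          "resources": []
        }
      }
    }
  ]
}
```

The main template is deployed against the first resource group (where the virtual machines live), while the embedded `Microsoft.Resources/deployments` resource uses its `resourceGroup` property to direct the nested template at the second group.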
Question 54 of 56
54. Question
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals.
You plan to create a release pipeline that will deploy Azure resources by using Azure Resource Manager templates. The release pipeline will create the following resources:
– Two resource groups
– Four Azure virtual machines in one resource group
– Two Azure SQL databases in the other resource group
You need to recommend a solution to deploy the resources.
Solution: Create a main template that has two linked templates, each of which will deploy the resources in its respective group.
Does this meet the goal?
Correct
Correct Answer(s): Yes
Yes is the CORRECT answer here because we need to deploy resources to multiple resource group scopes. Multiple linked templates referenced from one main template are the best way to manage deployments at different scopes, and they keep the whole infrastructure-as-code process easy to manage and scale.
To link a template, add a deployments resource to the main template and specify the URI of the template to include in the templateLink property. https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/linked-templates#linked-template
Please refer to the template below to get an idea of a linked template referenced from a main template:
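A hedged fragment of what such a deployments resource might look like (the deployment name, resource group, and storage account URI are placeholders, not values from the question):

```json
{
  "type": "Microsoft.Resources/deployments",
  "apiVersion": "2021-04-01",
  "name": "linkedVmDeployment",
  "resourceGroup": "rg-vms",
  "properties": {
    "mode": "Incremental",
    "templateLink": {
      "uri": "https://mystorageaccount.blob.core.windows.net/templates/vms.json",
      "contentVersion": "1.0.0.0"
    }
  }
}
```

The main template would contain two such resources, one per linked template, each pointed at its respective resource group.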
Question 55 of 56
55. Question
Your company has a project in Azure DevOps for a new web application.
The company uses ServiceNow for change management.
You need to ensure that a change request is processed before any components can be deployed to the production environment.
What are two ways to integrate ServiceNow into the Azure DevOps release pipeline?
Correct
Correct Answer(s):
Define a pre-deployment gate before the deployment to the Prod stage.
Define a post-deployment gate after the deployment to the QA stage.
Define a pre-deployment gate before the deployment to the Prod stage and
Define a post-deployment gate after the deployment to the QA stage are the TWO CORRECT answers because the requirement is to integrate ServiceNow into the Azure DevOps release pipeline so that any change request coming through ServiceNow is processed before components can be deployed to the production environment.
Define a deployment control that invokes the ServiceNow REST API is an INCORRECT choice because it is only part of a solution, not a complete one. A deployment control that invokes the ServiceNow API is needed, but the option does not specify where and how it would be set to ensure change management happens before deployment to the production stage.
Define a deployment control that invokes the ServiceNow SOAP API is an INCORRECT answer for the same reason: it gives only half the solution.
A deployment control that calls the ServiceNow APIs must be set, but the option does not specify when, or as part of which stage. Since the requirement is to allow deployments to production only if certain criteria are met, we rule this option out.
Question 56 of 56
56. Question
Your company uses Azure DevOps.
Only users who have accounts in Azure Active Directory can access the Azure DevOps environment.
You need to ensure that only devices that are connected to the on-premises network can access the Azure DevOps environment.
What should you do?
Correct
Correct Answer(s): In Azure Active Directory, configure conditional access
In Azure Active Directory, configure conditional access is the CORRECT answer because a conditional access policy allows us to control access based on defined conditions. We can use a conditional access policy to control user access to cloud-based services (and to on-premises services linked with Azure) through condition-based logic.
In our case, we will allow only trusted networks (the on-premises IP range/network) to connect to the cloud-based Azure DevOps organization. This ensures that access to Azure DevOps is blocked from outside the organization’s network.
In Azure DevOps, configure Security in Project Settings is an INCORRECT answer because the requirement is to allow access only from the organization’s on-premises network, and there is no option to do that in the Project Settings of an Azure DevOps project.
Assign the Stakeholder access level to all users is an INCORRECT answer because the Stakeholder access level governs access to Azure DevOps services such as Azure Repos, Azure Pipelines, and Azure Boards; it cannot configure network-based access to the Azure DevOps environment.