Your results for "Google Professional Cloud DevOps Engineer Practice Test 6"
0 of 65 questions answered correctly
Your time: elapsed
Final score: 0
Attempted: 0
Correct questions: 0 (scored 0)
Incorrect questions: 0 (negative marks 0)
You can review your answers by clicking the “View Answers” option. Important note: open reference documentation links in a new tab (right-click and select “Open in New Tab”).
Question 1 of 65
1. Question
You are the on-call SRE for a betting company. You are managing an application deployed on App Engine flexible environment within a custom VPC. The application accepts user traffic from anywhere using HTTPS. You have been tasked with logging all successful incoming SSH traffic to the GCE instances from the company network. How will you achieve this?
Explanation
Option A is incorrect: the firewall rule should allow ingress (incoming) traffic, not deny it. Option B is CORRECT: the rule should allow ingress traffic on port 22 (SSH) with logging turned on, so the logs appear in Cloud Logging. Options C and D are incorrect: the rule should apply to ingress (incoming) traffic, not egress (outgoing).
Reference: https://cloud.google.com/vpc/docs/firewall-rules-logging
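The rule described in option B could be sketched with gcloud roughly as follows; the rule name, network and source range are hypothetical placeholders for the company network:

```shell
# Hypothetical sketch: allow ingress SSH from the company network
# with firewall-rules logging enabled, so matches appear in Cloud Logging.
gcloud compute firewall-rules create allow-ssh-from-corp \
    --network=custom-vpc \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:22 \
    --source-ranges=203.0.113.0/24 \
    --enable-logging
```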
Question 2 of 65
2. Question
Your company is planning to deploy a Python application on Google App Engine Standard Environment. There is a requirement to continuously gather CPU usage information from your production application. What steps will help achieve this? (select 2)
Explanation
Options A and B are incorrect: Cloud Trace is not used to continuously gather CPU and memory usage from applications. Option D is incorrect because you don’t have access to the underlying instance when using the App Engine standard environment. Options C and E are CORRECT: Cloud Profiler is used to continuously gather CPU and memory usage from applications. The Cloud Profiler API needs to be enabled (if it is not already), the agent library added to your requirements.txt file (so it is downloaded), and the profiler imported and started in your code.
References: https://cloud.google.com/profiler/docs/about-profiler and https://cloud.google.com/profiler/docs/profiling-python?authuser=2#flexible-environment
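The "import and start" step could look roughly like the sketch below, following the documented pattern for the Python agent (the `google-cloud-profiler` package in requirements.txt exposes the `googlecloudprofiler` module); the service name and version are hypothetical placeholders, and the import is guarded so the sketch also runs where the library is absent:

```python
# Sketch of starting the Cloud Profiler agent at application startup.
try:
    import googlecloudprofiler
except ImportError:  # library not installed in this environment
    googlecloudprofiler = None

def start_profiler(service='my-service', version='1.0.0'):
    """Start the profiler agent; best-effort, never crash the app."""
    if googlecloudprofiler is None:
        return False
    try:
        # Continuously samples CPU and memory usage once started.
        googlecloudprofiler.start(service=service, service_version=version)
        return True
    except (ValueError, NotImplementedError):
        # Profiling is optional; keep serving traffic if it fails.
        return False
```

Calling `start_profiler()` once at startup is enough; the agent samples in the background thereafter.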
Question 3 of 65
3. Question
You are tasked with designing an automated CI pipeline for building and pushing images to Container Registry when there is a commit with a particular tag. In the current system, developers issue build commands after code is pushed to the test branch in the source repository. What steps can you take to automate the build described above with the least amount of management overhead?
Explanation
Option A is incorrect: triggers are created in Cloud Build. Option B is CORRECT: the correct trigger event is “Push new tag”, which starts a build when developers commit code that contains a particular tag. Option C is incorrect: the requirement is to automate the build when code is committed with a particular tag; there was no mention of raising a pull request. Option D is incorrect: it carries a lot of management overhead.
Reference: https://cloud.google.com/build/docs/automating-builds/create-manage-triggers#build_trigger
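A "push new tag" trigger could be sketched from the command line roughly as below; the repository name, tag pattern and config file name are hypothetical placeholders:

```shell
# Hypothetical sketch: trigger a build when a tag matching a
# release-style pattern is pushed to the source repository.
gcloud builds triggers create cloud-source-repositories \
    --repo=my-app-repo \
    --tag-pattern='^v[0-9]+\.[0-9]+\.[0-9]+$' \
    --build-config=cloudbuild.yaml
```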
Question 4 of 65
4. Question
You are one of the on-call engineers in a global team managing an application running in production. A recent update has caused the application’s response time to increase drastically. An incident has been declared and actions to mitigate the issue have not yet been deployed. Your team is coming to the end of your workday. Following Google’s SRE practice, what should be done?
Explanation
Options A, B and D are incorrect: the incident should be handed off to another Incident Commander to coordinate the activities. Overtime is not recommended because it leads to burnout, especially when you have a global team, and leaving an incident unresolved with no one responsible for it is not recommended either. Option C is CORRECT.
Reference: https://sre.google/sre-book/managing-incidents/ (Clear, Live Handoff)
Question 5 of 65
5. Question
Your team is creating an incident management procedure which will guide your team during incidents. Part of Google’s SRE incident management best practice is the separation of responsibilities. Which of the following responsibilities is not essential during an incident?
Explanation
Options A, B and C are incorrect: these roles represent the Incident Commander, Operations Lead and Communications Lead, which are the essential roles for incident management. Option D is CORRECT: creating the incident management procedure is a team effort done in advance, so anyone can use it when an incident occurs.
Reference: https://sre.google/sre-book/managing-incidents/
Question 6 of 65
6. Question
A large professional services client uses Google Cloud for some of its workload. Your DevOps team is now required to route all logs that show actions taken by Google staff in its account to a separate logging bucket. Which of the following helps you achieve this?
Explanation
Option A is CORRECT: Access Transparency logs show all actions taken in the account by Google staff. Option B is incorrect: Admin Activity logs show actions that modify the configuration or metadata of resources. Option C is incorrect: Data Access logs show actions that read the configuration or metadata of resources, as well as user API calls that perform CRUD operations. Option D is incorrect: System Event logs are generated by Google systems for Google Cloud actions that modify the configuration of resources.
References: https://cloud.google.com/logging/docs/view/available-logs and https://cloud.google.com/logging/docs/audit#types
Question 7 of 65
7. Question
Your company has tasked you with setting up a Continuous Integration pipeline. When code is committed to the source repository, the pipeline will build Docker containers to be pushed to Artifact Registry. How would you accomplish this?
Explanation
Options B and C are incorrect: there is no such thing as a source repository config file. The images field and artifacts field in the build config file specify, respectively, the Docker images to be stored in Container Registry (or Artifact Registry) and the non-container artifacts to be stored in Cloud Storage. Option D is incorrect because there is no such thing as a source repository config file.
References: https://cloud.google.com/build/docs/build-config#images and https://cloud.google.com/artifact-registry/docs/configure-cloud-build
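The images field could be sketched in a minimal cloudbuild.yaml like this; the repository and image names are hypothetical placeholders:

```yaml
# Build a Docker image, then push it via the `images` field.
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'us-central1-docker.pkg.dev/$PROJECT_ID/my-repo/my-app', '.']
images:
- 'us-central1-docker.pkg.dev/$PROJECT_ID/my-repo/my-app'
```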
Question 8 of 65
8. Question
You are planning on deploying a JVM application using Compute Engine. You need to track the peak number of live threads in the instance. Which of the following can help you achieve this? (select 2)
Explanation
Options A and C are incorrect: on GKE the monitoring agent is already installed; all you need to do is enable it and select the logging and monitoring options you want. Options B and D are CORRECT: this is the procedure for installing the monitoring agent to capture application-level metrics on Compute Engine. Option E is incorrect: that is the procedure for installing the logging agent on Compute Engine.
Reference: https://cloud.google.com/monitoring/agent/plugins/jvm
Question 9 of 65
9. Question
You are on the SRE team of your company. The client has decided to also keep the logs that record Compute Engine operations that read user-provided data in the Production project for two years, in order to fulfil a new compliance requirement. Which of the following can help you achieve this? (select 2)
Explanation
Option A is incorrect: the _Required logs bucket cannot be edited. Option B is incorrect: the _Required sink’s inclusion filters do not capture the specified logs. Option C is CORRECT: create a new logs bucket with the desired retention and a sink to collate the specified logs; the inclusion filter of the _Default sink captures them. Option D is CORRECT: the Data Read audit logs for Compute Engine need to be enabled. Option E is incorrect: the Data Write audit logs for Compute Engine do not meet the criteria of the logs specified.
References: https://cloud.google.com/logging/docs/audit/configure-data-access#config-console-enable and https://cloud.google.com/logging/docs/audit#data-access
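The bucket-plus-sink part of option C could be sketched roughly as below; PROJECT_ID, the bucket name and the sink name are hypothetical placeholders:

```shell
# Hypothetical sketch: a log bucket with two-year retention, plus a
# sink routing Compute Engine Data Access (read) audit logs into it.
gcloud logging buckets create compliance-bucket \
    --location=global \
    --retention-days=730

gcloud logging sinks create compliance-sink \
    logging.googleapis.com/projects/PROJECT_ID/locations/global/buckets/compliance-bucket \
    --log-filter='logName="projects/PROJECT_ID/logs/cloudaudit.googleapis.com%2Fdata_access" AND protoPayload.serviceName="compute.googleapis.com"'
```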
Question 10 of 65
10. Question
You work as a DevOps Engineer for a start-up company. The company’s strategy is to use an automated CI/CD pipeline to deliver software faster. You have been tasked with choosing the tools for the pipeline. A key requirement is selecting a repository that can trigger builds in Cloud Build. Which of the following repositories does not meet the requirements?
Explanation
Option A is CORRECT: there is currently no way to trigger a build in Cloud Build using AWS CodeCommit. Options B, C and D are incorrect: each of these options can be used to trigger builds in Cloud Build.
Reference: https://cloud.google.com/build/docs/automating-builds/create-webhook-triggers
Question 11 of 65
11. Question
You work as a DevOps Engineer for an energy client. The client runs their applications on Google Kubernetes Engine and logs are sent to Cloud Logging. They would like to use the logs generated to monitor the application usage in real time. What is the best destination for the export sink?
Question 12 of 65
12. Question
You are on a cross-functional team of SREs and product developers managing an application that needs to be deployed to production. Metrics for measuring reliability and performance of the application have been agreed on. There is a need to decide the frequency of releasing new changes. Following Google’s SRE practice, what measure should be used to control this?
Explanation
Options A, B and C are incorrect: the decision on how frequently to push new changes is based on the amount of error budget left. Option D is CORRECT: the remaining error budget determines how frequently new releases should be pushed to production, so that the application’s reliability does not fall below the agreed SLO/SLA.
Reference: https://sre.google/sre-book/embracing-risk/ (Forming Your Error Budget)
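The arithmetic behind an error budget can be illustrated with a small sketch (the SLO, window size and failure counts below are hypothetical examples, not from the question):

```python
def error_budget(slo: float, total_events: int) -> float:
    """Number of events allowed to fail in a window under an availability SLO."""
    return (1.0 - slo) * total_events

def budget_remaining(slo: float, total_events: int, failed_events: int) -> float:
    """Fraction of the error budget still unspent (negative means overspent)."""
    budget = error_budget(slo, total_events)
    return (budget - failed_events) / budget

# A 99.9% SLO over 1,000,000 requests leaves room for ~1,000 failures;
# with 250 failures so far, ~75% of the budget remains, so releases
# can continue at the normal pace.
```

When the remaining fraction approaches zero, releases are slowed or frozen until reliability recovers.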
Question 13 of 65
13. Question
You are one of the on-call engineers managing an application running in production. A recent update has caused the application’s response time to increase drastically. An incident has been declared and all the roles except Planning Lead have been assigned. Following Google’s SRE practice, who is to assume this role and its responsibilities?
Explanation
Options A, C and D are incorrect: the Incident Commander takes on any unassigned roles during an incident. Option B is CORRECT: any unassigned responsibilities should be handled by the Incident Commander.
Reference: https://sre.google/sre-book/managing-incidents/ (Recursive Separation of Responsibilities)
Question 14 of 65
14. Question
You are on the SRE team that monitors production-grade applications. One of your team members notices that an application’s performance has degraded, and customers are noticing. As this incident begins to unfold, what is Google’s recommended first action for managing incidents?
Explanation
Options A, C and D are incorrect: these are not the first role that needs to be assigned. Option B is CORRECT: this is the first role that should be assigned during an incident.
Reference: https://sre.google/sre-book/managing-incidents/ (Elements of Incident Management Process)
Question 15 of 65
15. Question
You are part of the SRE team in your organisation. After a recent incident in production and the follow-up post-mortem, your team has been invited to a production meeting. Following Google SRE’s best practice, which of the following should not be discussed at the meeting?
Explanation
Options A, B and D are incorrect: these should be on the agenda at production meetings. Option C is CORRECT: a blameless SRE culture promotes openness about faults, so finger-pointing is not recommended.
Reference: https://sre.google/sre-book/communication-and-collaboration/
Question 16 of 65
16. Question
You manage a Java application running on Kubernetes Engine in production. The organization has decided there is a need to understand and benchmark the performance of the application, such as CPU time and heap usage. The continuous measuring process should not affect the performance of the application. Which of the following can help you achieve this?
Explanation
Option A is CORRECT: Cloud Profiler is a statistical, low-overhead profiler that continuously gathers CPU usage and memory-allocation information from your production applications. Options B, C and D are incorrect.
Reference: https://cloud.google.com/profiler/docs/about-profiler
Question 17 of 65
17. Question
You manage an application running on App Engine Standard in a production project. The application serves customers worldwide and downtime needs to be kept to a minimum. There is a need to troubleshoot the application behaviour by injecting logging without stopping it. Which of the following can help you achieve this?
Explanation
Options B, C and D are incorrect: the Logging agent and Cloud Monitoring cannot be used to inject logging. Option A is CORRECT: the Cloud Debugger agent is needed to use logpoints, which allow you to inject logging into running services without restarting or interfering with the normal function of the service.
Reference: https://cloud.google.com/debugger/docs/using/logpoints#logpoints
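Injecting a logpoint could be sketched roughly as below; the file, line number, message and target name are hypothetical placeholders:

```shell
# Hypothetical sketch: add a logpoint to a running App Engine service
# without redeploying or restarting it.
gcloud debug logpoints create main.py:45 \
    "checkout latency high: cart_size={cart.size}" \
    --target=default
```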
Question 18 of 65
18. Question
A company wants to use GCP for their development and deployment of applications. They have set up an organization, folders and projects. They want to set up multiple Cloud Source Repositories (CSR) in one Project. Different teams have different access requirements to the CSRs in the Project. Which of the following is the best way of managing access to the CSR for the different teams?
Explanation
Options A, B and C are incorrect: there is no way to assign permissions per repository at the project, folder or organization level, so these are not the most suitable ways to assign the permissions. Option D is CORRECT: this is the best way, because you can assign different roles to different teams on each repository.
Reference: https://cloud.google.com/source-repositories/docs/granting-users-access#grant_push_permissions_for_a_repository
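Per-repository access could be sketched roughly as below; the repository name, group address and role binding are hypothetical placeholders:

```shell
# Hypothetical sketch: grant one team's group write access on a
# single repository, rather than at project/folder/org level.
cat > policy.json <<'EOF'
{
  "bindings": [
    {
      "role": "roles/source.writer",
      "members": ["group:team-a@example.com"]
    }
  ]
}
EOF
gcloud source repos set-iam-policy team-a-repo policy.json
```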
Question 19 of 65
19. Question
You have been tasked with building an automated build for the deployment of applications to serverless infrastructure in Google Cloud Platform. Which of the following can help you complete the task with little overhead? (select 2)
Question 20 of 65
20. Question
You are developing a completely serverless application. The application is going to be built using Cloud Build. There is a requirement to store all non-container artifacts in Cloud Storage. How will you meet this requirement?
Explanation
Option A is CORRECT: the artifacts field in the build config file is used to specify the storage location for non-container artifacts. Option B is incorrect: the options field holds optional arguments such as env, volumes and secretEnv. Option C is incorrect: the images field specifies one or more Docker images to be pushed by Cloud Build to Container Registry. Option D is incorrect: the substitutions field in your build config file is used to substitute specific variables at build time.
Reference: https://cloud.google.com/build/docs/build-config#artifacts
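The artifacts field could be sketched in a minimal cloudbuild.yaml like this; the builder, bucket and file names are hypothetical placeholders:

```yaml
# Build a binary, then upload it to Cloud Storage via `artifacts`.
steps:
- name: 'gcr.io/cloud-builders/go'
  args: ['build', '-o', 'my-app', '.']
artifacts:
  objects:
    location: 'gs://my-artifact-bucket/builds/'
    paths: ['my-app']
```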
Question 21 of 65
21. Question
You are managing an application that generates many logs in the staging Project. The Company has an organization in GCP, two folders and four projects. The folders are dev and prod, while the projects are dev, test, staging and production. The dev and test projects are in the dev folder and the staging and production projects are in the prod folder. The company wants to generate metrics from the logs for alerting purposes for that application alone. What IAM solution will help achieve the requirement following the principle of least privilege?
Correct
Option A is incorrect. The Logging Admin role is too permissive, and granting it at the folder level would give the developers permissions in the production projects as well. Option B is incorrect. The Logs Configuration Writer role is sufficient for creating logs-based metrics, but assigning it at the folder level would give developers access to production project logs. Option C is incorrect. The Logging Admin role is too permissive. Option D is CORRECT. The Logs Configuration Writer role is sufficient for creating logs-based metrics. Reference https://cloud.google.com/logging/docs/logs-based-metrics
Question 22 of 65
22. Question
The Company has a GCP organization that has applications running in GCP projects. The applications push logs into Cloud logging. The company wants to analyse the logs using a third-party software such as Elasticsearch. You have set up the Logs sink to route logs to a Pub/Sub topic but no logs are appearing in Elasticsearch. Which of the following could be the reason?
Your company has decided to use GCP services to automate its Continuous Integration and Deployment process. Cloud Build will be used to build images and other artifacts. A new developer has been tasked with creating the build config files. The Cloud Build process is failing. Which of the following could be the reason? (Choose two)
Your company has deployed compute resources in VPCs. There are three VPCs in the Development Project, and applications are deployed to GCE instances in the VPCs. There is a new security requirement to collect a sample of network flows sent to and received by the VM instances. Which of the following can help you achieve this?
Correct
Option A is incorrect. The FluentD agent is useful for application and OS-specific logs, not the network traffic in the VPC. Option B is incorrect. This will only work after VPC Flow Logs have been enabled. Option C is incorrect. This is used to capture traffic allowed or denied by a particular firewall rule. Option D is CORRECT. VPC Flow Logs record a sample of network flows sent from and received by VM instances, including instances used as GKE nodes. These logs can be used for network monitoring, forensics, real-time security analysis, and expense optimization. Reference https://cloud.google.com/vpc/docs/using-flow-logs
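For context, VPC Flow Logs are enabled per subnet rather than per VPC. A sketch of the command, with placeholder subnet and region names, might look like this:

```shell
# Enable VPC Flow Logs on an existing subnet
# (subnet and region names are placeholders).
gcloud compute networks subnets update my-subnet \
    --region=us-central1 \
    --enable-flow-logs
```

Once enabled, sampled flows appear in Cloud Logging under the subnet resource.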
Question 25 of 65
25. Question
You are part of the DevOps team that manages applications running in the production project of your company. After a recent security incident, there is a new requirement to capture network traffic going to and from the Compute instances in the VPCs in the production project. VPC Flow Logs was enabled on the production VPC, but no vpc_flows logs are present in Cloud Logging. Which of the following could be the reason?
Correct
Option A is incorrect. Logging inclusion filter does not block any log from being sent. Option B is incorrect. There is no configuration needed for enabling VPC Flow Logs. Option C is incorrect. The service account of the instances is not used in capturing VPC Flow Logs. Option D is CORRECT. Logging exclusion filters block specified logs. Make sure there are no exclusion rules that discard VPC Flow Logs. Reference https://cloud.google.com/vpc/docs/using-flow-logs#no-vpc-flows
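As a hedged illustration, an exclusion filter capable of silently discarding these logs would match the vpc_flows log name; a filter expression along these lines would have that effect:

```
resource.type="gce_subnetwork"
logName:"compute.googleapis.com%2Fvpc_flows"
```

Checking the _Default sink's exclusion filters for patterns like this is a reasonable first troubleshooting step.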
Question 26 of 65
26. Question
A financial organization analyses, at night, the transactions carried out throughout the day. The analysis takes about three hours and must be run between midnight and 5 am. The analysis currently runs on standard Compute Engine instances, with several OS-level guardrails to satisfy government regulations, and it can handle interruptions. You have been tasked with optimising the cost of the analysis, which will run for another six months. Which of the following will optimise the cost?
Your team has been tasked with deploying a python application to Cloud Run. The developer team needs a way to inspect the state of a python application in real time, without stopping or slowing it down. You are responsible for implementing the requirement. Which of the following is needed?
Correct
Option A is incorrect. The application is deployed to Cloud Run, not Compute Engine. Option B is incorrect. This is used for installing the Debugger on App Engine. Option C is incorrect. The application is deployed to Cloud Run, not Compute Engine. Option D is CORRECT. This is required for installing the Cloud Debugger agent in Cloud Run. References https://cloud.google.com/debugger/docs/setup/python#cloud-run https://cloud.google.com/debugger
Question 28 of 65
28. Question
You are on the SRE team of your company. There is a new government regulation to keep the logs of all API calls made in the Production project for three years. Which of the following can help you achieve this?
Your SRE team is responsible for monitoring and logging of the applications in Production Projects. The applications are deployed on different resources like Compute Engine and GKE. Your team has created a centralised monitoring dashboard in the monitoring Project for the metrics from all the production Projects. An uptime check was created for the applications. You have been tasked with setting up the Notification channels for one of the applications to send notifications to the team. Which of these helps you meet the requirement?
You are tasked with investigating the gradual degradation of a production application’s response time. The application is deployed to a Managed Instance Group of five instances. What steps can you take to investigate this issue with the least amount of overhead?
Correct
Option A is incorrect; the logging agent does not capture latency data. Option B is CORRECT because Cloud Trace provides distributed tracing data for your applications. After instrumenting your application, you can inspect latency data for a single request and view the aggregate latency for an entire application in the Cloud Trace console. Option C is incorrect; the monitoring agent does not capture latency data. Option D is incorrect; this is used to inspect the state of your applications in real time and does not contain the latency data needed. Reference https://cloud.google.com/trace/docs/setup
Question 31 of 65
31. Question
You provide support for a Python application in production on Compute Engine. Recently there have been complaints about the application's slow response. You want to investigate how requests propagate through your entire application. What should you do?
Correct
Option A is CORRECT. Cloud Trace shows how requests propagate through the different components (microservices or functions) of an application. Options B and C are incorrect. The monitoring and logging agents do not show how requests propagate through the different components (microservices or functions) of an application. Option D is incorrect. CPU utilization does not show how requests propagate through the different components (microservices or functions) of an application. Reference https://cloud.google.com/trace/docs/setup
Question 32 of 65
32. Question
Your Site Reliability Engineering team does toil work to archive unused data in tables within your application’s relational database. This toil work is required to ensure that your application has a low Latency Service Level Indicator (SLI) to meet your Service Level Objective (SLO). Toil is preventing your team from focusing on a high-priority engineering project that will improve the Availability SLI of your application. You want to reduce repetitive tasks to avoid burnout, improve organizational efficiency, and follow the Site Reliability Engineering recommended practices. What should you do?
Correct
The term “toil” in Site Reliability Engineering (SRE) refers to work that is manual, repetitive, and brings no enduring value to the organization. Toil is undesirable because it can lead to burnout, and it is not in line with SRE-recommended practices. The goal is to reduce or eliminate toil while still meeting the Service Level Objective (SLO) and Service Level Indicator (SLI) of the application. The correct answer is “Identify repetitive tasks that contribute to toil and automate them.” Automating these tasks will reduce or eliminate toil and free up the SRE team to focus on the high-priority engineering project that will improve the Availability SLI of the application. Automation also improves organizational efficiency and is in line with SRE-recommended practices. “Assign the availability SLI engineering project to the software engineering team” – This does not address the toil itself, does not improve organizational efficiency, and is not in line with SRE-recommended practices. “Change the SLO of your latency SLI to accommodate toil being done less often. Use this capacity to work on the availability SLI engineering project.” – This does not reduce or eliminate toil, does not improve organizational efficiency, and is not in line with SRE-recommended practices. “Identify repetitive tasks that contribute to toil and onboard additional team members for support.” – This spreads the toil rather than eliminating it, does not improve organizational efficiency, is not in line with SRE-recommended practices, and would require additional resources that may not be available.
Question 33 of 65
33. Question
Your team is planning on the deployment and monitoring of a new application to the production environment. You are responsible for defining the SLIs, SLOs and SLAs while the application is tested in a staging environment. Which of the following is NOT true about error budgets?
Correct
The correct answer is “Error budget is expressed as a percentage with a value close to 100%” – Error budgets are derived from the Service Level Objective (SLO), the target level of service that a service or application aims to achieve. For example, if the SLO for an application is 99.95% uptime, the error budget is 0.05%, or about 21 minutes of downtime in a 30-day month. The error budget is therefore a small percentage close to 0%, not a value close to 100%. References: https://cloud.google.com/blog/products/management-tools/sre-error-budgets-and-maintenance-windows “The error budget is 100%-SLO%, for a given SLI” – This statement is true: for a given SLI, the error budget is 100% minus the SLO percentage. “Developers can work on new features if they are within their error budget” – This statement is generally true, as long as the error budget is not being excessively consumed and the service or application is meeting the agreed-upon SLO. “There should be an alerting strategy that will notify developers of an event that is consuming an unusually high percentage of the error budget” – This statement is true. Such an alerting strategy lets developers take appropriate action to address the issue and maintain the agreed-upon level of service.
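The arithmetic behind an error budget is simple enough to sketch. The helper below (a hypothetical function, not from any Google library) converts an availability SLO into allowed downtime for a period:

```python
def error_budget_minutes(slo_percent: float,
                         period_minutes: float = 30 * 24 * 60) -> float:
    """Return the allowed downtime in minutes for a given availability SLO
    over a period (default: a 30-day month of 43,200 minutes)."""
    # Error budget = 100% - SLO%, expressed as a fraction of the period.
    error_budget_fraction = (100.0 - slo_percent) / 100.0
    return period_minutes * error_budget_fraction

# A 99.95% monthly SLO leaves a 0.05% error budget,
# i.e. roughly 21.6 minutes of downtime in a 30-day month.
print(round(error_budget_minutes(99.95), 1))
```

With a 99.9% SLO the same helper yields 43.2 minutes per 30-day month, matching the commonly cited figure.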
Question 34 of 65
34. Question
You are a DevOps engineer for a tech company, responsible for the production Project. At the end of the month, finance informs you that charges from stored logs are very high. You have been asked to investigate and reduce the number of logs generated in the project. Which of the following is unlikely to be generating a lot of logs?
As a DevOps engineer working with Google Cloud Platform, you want to foster a learning culture and promote healthy communication and collaboration among team members. Which of the following strategies can help you achieve this goal in GCP?
Correct
Create a safe and supportive environment where team members are encouraged to experiment, take risks, and learn from their mistakes. -> Correct. Creating a safe and supportive environment where team members are encouraged to experiment, take risks, and learn from their mistakes is a key strategy for fostering a learning culture and promoting healthy communication and collaboration. This approach allows team members to learn from each other and from their experiences, which can lead to better performance and job satisfaction. Encourage team members to work independently and avoid collaboration to minimize the risk of conflicts and misunderstandings. -> Incorrect. Discouraging collaboration can lead to misunderstandings and miscommunications, which can negatively impact team dynamics and increase the risk of burnout. Provide team members with clear instructions and directives to avoid confusion and ensure that everyone is working towards the same goals. -> Incorrect. Providing team members with clear instructions and directives can create a rigid work environment that may stifle creativity and discourage experimentation. Focus solely on meeting project deadlines and delivering results to ensure that projects are completed on time and within budget. -> Incorrect. Focusing solely on meeting project deadlines and delivering results can create a high-pressure environment that may undermine team communication and collaboration.
Question 36 of 65
36. Question
You work as a DevOps Engineer and your company experiences bugs, outages, and slowness in its production systems. Developers use the production environment for new feature development and bug fixes. Configuration changes and experiments are done in the production environment, causing outages for users. Testers use the production environment for load testing, which often slows the production systems. You need to redesign the environment to reduce the number of bugs and outages in production and to enable testers to load test new features. What should you do?
Correct
Create a development environment for writing code and a test environment for configurations, experiments, and load testing. -> Correct. Using the production environment for development, configuration, experiments, and load testing is not a best practice, as it leads to bugs, outages, and slowness in the production system. By creating separate environments for development, testing, and production, you reduce the likelihood of introducing errors into the production environment. Create an automated testing script in production to detect failures as soon as they occur. -> Incorrect. This does not address the root cause of the issue, which is the use of the production environment for development, testing, and experimentation. Create a development environment with smaller server capacity and give access only to developers and testers. -> Incorrect. This still involves using the production environment for testing and experimentation. Secure the production environment to ensure that developers can't change it and set up one controlled update per year. -> Incorrect. This limits the ability of developers to make necessary changes and improvements to the system, and it does not provide a separate environment for testing and experimentation.
Question 37 of 65
37. Question
As a DevOps Engineer, you are managing the production deployment to a set of Google Kubernetes Engine (GKE) clusters. You want to make sure only images which are successfully built by your trusted CI/CD pipeline are deployed to production. What should you do?
Correct
Set up the Kubernetes Engine clusters with Binary Authorization. -> Correct. Binary Authorization is a security feature in Google Kubernetes Engine (GKE) that ensures that only trusted container images are deployed in a Kubernetes cluster. It allows you to define and enforce policies that require container images to be signed by trusted authorities before they can be deployed to the cluster. With Binary Authorization, you can prevent unverified images from being deployed to your production environment, which helps to reduce the risk of vulnerabilities and ensure the integrity of your software supply chain.
Enable Cloud Security Scanner on the clusters. -> Incorrect. Cloud Security Scanner is a web security scanner that checks for vulnerabilities in web applications running on Google Cloud Platform (GCP). It does not check container images or deployments in a Kubernetes cluster.
Enable Vulnerability Analysis on the Container Registry. -> Incorrect. Vulnerability Analysis is a feature in Google Container Registry that scans container images for vulnerabilities. While it can help identify potential security issues in your images, it does not prevent unverified images from being deployed to your production environment.
Set up the Kubernetes Engine clusters as private clusters. -> Incorrect. Setting up a cluster as a private cluster only restricts access to the control plane of the cluster. It does not provide any additional security measures for the deployment of container images in the cluster.
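A Binary Authorization policy of the kind described above can be sketched as a YAML document; the project and attestor names below are placeholders, assuming the trusted CI/CD pipeline signs images with an attestor named ci-pipeline-attestor:

```yaml
# Hypothetical Binary Authorization policy: block and log any image
# that has not been attested by the trusted CI/CD pipeline's attestor.
defaultAdmissionRule:
  evaluationMode: REQUIRE_ATTESTATION
  enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG
  requireAttestationsBy:
    - projects/my-project/attestors/ci-pipeline-attestor
globalPolicyEvaluationMode: ENABLE
```

A policy like this can then be applied with `gcloud container binauthz policy import policy.yaml`.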
Question 38 of 65
38. Question
In a Google Cloud Platform (GCP) project, you are designing a monitoring system for a multi-region application using Cloud Monitoring metrics scopes. Your goal is to analyze the performance and usage of resources in each region separately, as well as to have an aggregated view of the entire application. Which of the following configurations would you choose to implement Cloud Monitoring metrics scopes correctly?
Correct
Configure a single metrics scope that covers all the regions and filter the data by region using labels. -> Correct. This allows you to monitor the performance and usage of resources in each region separately by using labels for filtering, while still having an aggregated view of the entire application.
Create individual metrics scopes for each region and use a separate metrics scope for the aggregated view. -> Incorrect. It is inefficient and increases management complexity. Metrics scopes are designed to provide a holistic view of your resources, and using labels for filtering is a more effective approach.
Use the default metrics scope and apply a resource group to each region for easier management. -> Incorrect. It does not address the requirement to analyze performance and usage in each region separately, nor does it provide an aggregated view of the entire application.
Set up individual metrics scopes for each region and use Pub/Sub to aggregate the data. -> Incorrect. It is not recommended, as it adds unnecessary complexity to the monitoring process. Metrics scopes are designed to provide a holistic view of your resources, and using labels for filtering is a more effective approach.
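With a single metrics scope, the per-region view becomes a filtered query. A sketch of a Cloud Monitoring filter that narrows CPU utilization to one region's zones (the metric type is the documented Compute Engine one; the zone prefix is an example):

```
metric.type = "compute.googleapis.com/instance/cpu/utilization" AND
resource.labels.zone = starts_with("europe-west1")
```

Dropping the zone clause gives the aggregated view across all regions.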
Question 39 of 65
39. Question
A multinational company has recently migrated its infrastructure to Google Cloud Platform. The company uses a variety of Compute Engine instances to run its applications, and they want to optimize resource utilization and utilize committed use discounts where appropriate. As a DevOps Engineer, which of the following strategies should you recommend to optimize their costs?
Correct
As a DevOps Engineer, it is important to optimize the cost of the infrastructure to achieve maximum cost-efficiency. One of the strategies to achieve this is by utilizing committed use discounts (CUDs) for Compute Engine instances, which provide substantial discounts for prepaying for specific instance types for a certain period of time. However, purchasing CUDs for all instance types, regardless of their actual utilization rates, may not be cost-effective. Similarly, relying solely on preemptible instances, which are cheaper but may be interrupted at any time, may not be suitable for all workloads. Implementing autoscaling on all instances is also not a viable option, as it may not be necessary for all workloads and may result in underutilized resources and wasted expenses. Therefore, the recommended strategy is to analyze the instance utilization patterns and purchase CUDs only for the instances with high and predictable usage. This approach ensures that the cost savings from CUDs are maximized while avoiding unnecessary expenses for underutilized or sporadic instances. Reference: Compute Engine Committed Use Discounts documentation: https://cloud.google.com/compute/docs/instances/signing-up-committed-use-discounts
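The break-even logic above can be made concrete with some simple arithmetic. This is an illustrative sketch only: the hourly rate and discount percentage below are hypothetical placeholders, not actual Google Cloud prices.

```python
# Illustrative cost comparison: committed use discount (CUD) vs
# on-demand billing for one Compute Engine instance. Rates are
# hypothetical placeholders, not actual Google Cloud prices.

ON_DEMAND_RATE = 0.10   # $/hour (assumed)
CUD_DISCOUNT = 0.37     # assumed 1-year CUD discount fraction
HOURS_PER_MONTH = 730

def monthly_cost(utilization: float, committed: bool) -> float:
    """Monthly cost of one instance at the given utilization fraction.

    A commitment bills for every hour of the month regardless of use;
    on-demand bills only for the hours the instance actually runs.
    """
    if committed:
        return ON_DEMAND_RATE * (1 - CUD_DISCOUNT) * HOURS_PER_MONTH
    return ON_DEMAND_RATE * utilization * HOURS_PER_MONTH

# High, predictable usage (95%): the CUD is cheaper (~46 vs ~69).
print(monthly_cost(0.95, committed=True) < monthly_cost(0.95, committed=False))
# Sporadic usage (30%): on-demand is cheaper (~22 vs ~46), so a CUD
# here would waste money.
print(monthly_cost(0.30, committed=False) < monthly_cost(0.30, committed=True))
```

This is exactly why the explanation recommends buying CUDs only for instances with high and predictable utilization.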
Question 40 of 65
40. Question
Your team has deployed a Python application to a Managed Instance Group. The Managed Instance Group is placed behind a load balancer. You have been tasked with ensuring the load balancer only sends requests to instances that are working. Which of the following helps you achieve this?
Correct
Option A is incorrect. Readiness Probe is used by Kubernetes to check if pods are ready to receive traffic.
Option B is incorrect. Liveness Probe is used by Kubernetes to check if a pod is in the running state; if it is not, it restarts the pod.
Option C is CORRECT. Health checks are used by the load balancer to determine if the backend is reachable (responds to traffic).
Option D is incorrect. Uptime Check is used to check if an application responds or if it is reachable.
Reference https://cloud.google.com/monitoring/uptime-checks https://cloud.google.com/load-balancing/docs/health-check-concepts
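A health check of the kind the correct option describes can be sketched with gcloud; the resource names, port, and path here are placeholders:

```shell
# Create an HTTP health check (hypothetical port and path).
gcloud compute health-checks create http app-health-check \
    --port=8080 \
    --request-path=/healthz \
    --check-interval=10s \
    --timeout=5s \
    --healthy-threshold=2 \
    --unhealthy-threshold=3

# Attach it to the backend service fronting the Managed Instance Group.
gcloud compute backend-services update app-backend \
    --global \
    --health-checks=app-health-check
```

Instances that fail the check are taken out of rotation until they pass again.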
Question 41 of 65
41. Question
A customer has opted to use an external source code management such as GitLab. The customer wants to use Cloud Build for its Continuous Integration and Deployment to Cloud Run. They would like to automatically trigger a build in Cloud Build when code is pushed to GitLab. How can this be done?
Question 42 of 65
42. Question
You are designing the CI/CD pipeline for a customer. The pipeline will be used by developers to push changes to production. The customer's strategy dictates the use of cloud-native tools in the pipeline. Cloud Source Repositories and Cloud Build have been chosen. The customer has requested that automated builds in the pipeline be approved by a senior engineer. How can this be done?
Correct
Option A is incorrect. Approval is turned on in Cloud Build, not in the cloudbuild.yaml file.
Option B is incorrect. Triggers are created in Cloud Build.
Option C is incorrect. This does not specify whether Approval is turned on or not.
Option D is CORRECT. Cloud Build can be triggered and remain in a pending state until approval is received, if Approval is turned on.
Reference https://cloud.google.com/build/docs/automating-builds/approve-builds
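Approval is enabled on the trigger itself. A sketch of such a trigger (repository, owner, and trigger names are placeholders):

```shell
# Hypothetical trigger requiring manual approval before each build runs.
gcloud builds triggers create github \
    --name=prod-deploy \
    --repo-owner=my-org \
    --repo-name=my-repo \
    --branch-pattern='^main$' \
    --build-config=cloudbuild.yaml \
    --require-approval
```

With --require-approval set, pushed commits create builds that sit in a pending state until a user with the approver permission accepts or rejects them.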
Question 43 of 65
43. Question
You are the DevOps Engineer in a healthcare start-up firm. The company has a new application it is testing. Before the application is promoted to production for live traffic, you have been tasked with creating an incident response strategy. Which of the following are incident response team roles that should be delegated?
Correct
Option A is incorrect. These are not roles as defined in Google’s incident response team.
Option B is CORRECT. These are the distinct roles that need to be delegated during an incident as defined in Google’s SRE book.
Option C is incorrect. These are not roles as defined in Google’s incident response team.
Option D is incorrect. These are not roles as defined in Google’s incident response team.
Reference https://sre.google/sre-book/managing-incidents/
Question 44 of 65
44. Question
A gaming company recently launched a new version of its popular game. The traffic to the company’s site has increased by over 70%. Users are now complaining of timed out requests when they attempt to launch the game. Your team declares an incident. What action is the most important?
Correct
Option A is incorrect. Restoring service during an incident should be the topmost priority; root cause analysis can happen after.
Option B is incorrect. Restoring service during an incident should be the topmost priority; writing the postmortem document can happen afterwards.
Option C is CORRECT. Restoring service (mitigation) during an incident should be the topmost priority.
Option D is incorrect. Finger pointing is not recommended during or after incidents.
Reference https://sre.google/sre-book/managing-incidents/ https://sre.google/workbook/incident-response/
Question 45 of 65
45. Question
You are planning on deploying Nginx using Kubernetes Engine. You need to track the number of requests Nginx has serviced. Which of the following can help you achieve this? (select 2)
Correct
The monitoring agent is already installed on GKE; all you need to do is enable it and select the logging and monitoring type you want.
Options B and D are incorrect. This is the procedure for installing the monitoring agent on Compute Engine.
Option E is incorrect. This is used for installing the logging agent on Compute Engine.
Question 46 of 65
46. Question
Your company has several Google projects in its organisation. As part of the monitoring strategy, the projects will be added to specified workspaces. Your team has been assigned the task of creating the workspaces. Following the principle of least privilege, what IAM role would your team need to create workspaces?
Correct
The principle of least privilege is an important security concept that recommends providing only the minimum access necessary for a user or team to perform their tasks. In this scenario, the team has been assigned the task of creating the monitoring workspaces. The Monitoring Editor role allows a user to create, modify, and delete monitoring configurations, alerts, and dashboards, but does not grant access to manage other resources in the project. This role satisfies the principle of least privilege and allows the team to complete their task without unnecessary access. Assigning the Project Editor or Project Owner role to the team provides broad access to manage all resources in the project, which is not necessary for the task at hand and violates the principle of least privilege. Similarly, the Monitoring Admin role is not necessary as it provides more privileges than required for creating workspaces. Reference: Google Cloud documentation on Monitoring access control: https://cloud.google.com/monitoring/access-control Google Cloud documentation on Understanding roles: https://cloud.google.com/iam/docs/understanding-roles
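Granting only the Monitoring Editor role can be sketched as follows; the project and group names are placeholders:

```shell
# Grant only roles/monitoring.editor, per least privilege —
# no broader project access is included (hypothetical names).
gcloud projects add-iam-policy-binding my-monitoring-project \
    --member='group:monitoring-team@example.com' \
    --role='roles/monitoring.editor'
```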
Question 47 of 65
47. Question
You are a DevOps engineer for a social media company. You are on the monitoring team for their flagship web application that is growing rapidly. The application is deployed on Managed Instance Groups behind an HTTP(S) load balancer. The number of logs created by the application is causing the project to exceed the logging API quota. You have created exclusion filters in Cloud Logging. You notice the issue persists. What could be the problem?
Correct
Options A and B are incorrect. Exclusion filters work after the logging API has been called and the logs are in Cloud Logging.
Option C is CORRECT. The problem is the number of entries.write API calls, which push logs to Cloud Logging before exclusion filters can be applied. The solution is to reduce the logs collected.
Option D is incorrect. There is no need for extra permissions; the Managed Instance Group can already access Cloud Logging.
References https://cloud.google.com/logging/docs/exclusions https://cloud.google.com/logging/quotas#log-limits
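For context, exclusions are attached to a sink, which is why they cannot reduce entries.write usage: the API call has already happened by the time the exclusion is evaluated. A sketch (the exclusion name and filter are examples):

```shell
# Exclusions on the _Default sink drop entries after ingestion;
# the entries.write call has already counted against the quota.
gcloud logging sinks update _Default \
    --add-exclusion=name=drop-debug,filter='severity<=DEBUG'
```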
Question 48 of 65
48. Question
Your company has multiple projects in Google Cloud. The projects represent the available environments such as development, test, pre-production and production. A centralised logging system needs to be implemented where all the environments send their logs to a security project. There is a requirement to not send any logs generated by an apache application to the security project. What steps can you take to achieve this? (select 2)
Create a logging bucket in the security project. The sink destination should be the logging bucket in the security project, and the exclusion filter rate should be 100 so all Apache logs are excluded. Options B and E are incorrect. Keeping the logs in the different projects does not meet the requirement of sending logs to the security project. Option C is incorrect. A filter rate of 0 means no Apache logs are excluded. Reference: https://cloud.google.com/logging/docs/exclusions#create-filter
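The steps above can be sketched with gcloud; all project, bucket, and filter names here are placeholders, and the Apache log filter would need to match how the application actually names its logs:

```shell
# In the security project: create the central logging bucket.
gcloud logging buckets create central-logs \
    --project=security-project --location=global

# In each environment project: create a sink whose destination is the
# security project's bucket, excluding 100% of Apache logs.
gcloud logging sinks create to-security-project \
    logging.googleapis.com/projects/security-project/locations/global/buckets/central-logs \
    --project=dev-project \
    --exclusion=name=exclude-apache,filter='logName:"apache"'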
Question 49 of 65
49. Question
You are part of the Site Reliability Engineering Team at your company. Your team manages all the updates to production, and review of application performance in production. Recently there was an incident in production that affected a whole region of users. A meeting has been called to review the incident. Following Google’s best practice, which of the following should not be discussed?
Team members' involvement in causing the incident should not be part of the post-mortem review; it does not promote a blameless culture, and others may be motivated to cover up facts critical to understanding and preventing recurrence. Reference: https://sre.google/workbook/postmortem-culture/
Question 50 of 65
50. Question
A company has deployed a multi-tier web application on Google Cloud Platform (GCP) and wants to use Cloud Monitoring to analyze application performance data. They have decided to integrate Cloud Monitoring with BigQuery to perform more complex analysis on the collected metrics. Which of the following approaches is the most appropriate way to achieve this integration while ensuring a scalable, cost-effective solution?
Create a Pub/Sub topic to export monitoring data, use Dataflow to process the data, and then use a BigQuery sink to store the data in BigQuery. -> Correct. Creating a Pub/Sub topic to export monitoring data allows for real-time streaming and decouples the data-ingestion process. Dataflow provides a scalable, serverless solution for data transformation, and the BigQuery sink enables efficient storage of the data in BigQuery. This is the most appropriate approach, offering a scalable, cost-effective, real-time integration of Cloud Monitoring with BigQuery.
Export the monitoring data to a Cloud Storage bucket, then set up a Data Transfer service to move the data to BigQuery. -> Incorrect. This adds unnecessary complexity and latency, and incurs additional storage and transfer costs, making it less cost-effective.
Use the Cloud Monitoring API to fetch the metric data and then use the BigQuery Streaming API to insert the data into BigQuery tables in real time. -> Incorrect. This might work for small-scale scenarios, but it is not scalable, as it would require significant compute resources to handle large volumes of metric data.
Enable the BigQuery export feature within Cloud Monitoring, which will export the monitoring data directly into BigQuery tables. -> Incorrect. While a direct BigQuery export feature within Cloud Monitoring would be ideal, no such feature currently exists, so this option is not a valid solution.
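The Dataflow step of this pipeline can use Google's provided Pub/Sub-to-BigQuery template rather than custom code. A sketch, with project, topic, dataset, and table names as placeholders:

```shell
# Run the provided streaming template that reads from a Pub/Sub topic
# and writes rows to a BigQuery table.
gcloud dataflow jobs run monitoring-to-bq \
    --gcs-location=gs://dataflow-templates/latest/PubSub_to_BigQuery \
    --region=us-central1 \
    --parameters=inputTopic=projects/my-project/topics/monitoring-metrics,outputTableSpec=my-project:monitoring.metrics
```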
Question 51 of 65
51. Question
You are the on-call SRE for a growing media company. You are managing an application deployed on Compute Engine within a custom VPC. The application accepts user traffic from anywhere using HTTPS. You have been tasked with logging all failed incoming SSH traffic to the GCE instances. How will you achieve this?
The firewall rule should deny ingress (incoming) traffic on port 22 (SSH), and logging should be turned on so the logs appear in Cloud Logging. Option B is incorrect. The firewall should deny ingress traffic, not allow it. Options C and D are incorrect. The firewall should affect ingress (incoming), not egress (outgoing). Reference: https://cloud.google.com/vpc/docs/firewall-rules-logging
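Such a rule could be created with gcloud along these lines; the rule and network names are placeholders:

```shell
# Deny inbound SSH and log every match (each denied connection attempt),
# so failed SSH traffic appears in Cloud Logging.
gcloud compute firewall-rules create deny-ssh-ingress \
    --network=custom-vpc \
    --direction=INGRESS \
    --action=DENY \
    --rules=tcp:22 \
    --enable-logging
```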
Question 52 of 65
52. Question
You are part of an on-call Site Reliability Engineering team managing a web application in production. The application serves user requests from several regions. A new update was deployed over the weekend to introduce new features into the application. Users are reporting errors and failed requests from the application. Your team declares an incident, assesses the impact, and discovers the issue is affecting users in one region. Which of the following is the recommended action?
Option A is CORRECT. This is the first recommended step after assessing the impact or extent of the incident. Options B, C and D are incorrect. Mitigating the impact is the recommended next course of action once you know the extent of the impact; performing root-cause analysis and the post-mortem is done after service is fully restored. Reference: https://sre.google/workbook/incident-response/ (Case Study 2)
Question 53 of 65
53. Question
You are the DevOps Engineer in a Finance company. You manage the Cloud Landscape. The company has several applications on GKE clusters, and the clusters write logs to Cloud Logging. There is a legal requirement to store logs for 7 years. What is the most cost-effective place to store the logs?
Option A is incorrect. A multi-region Standard storage class bucket is more expensive than a single-region Archive storage class bucket. Option B is CORRECT. This is the most cost-effective option. Option C is incorrect. This is an expensive option. Option D is incorrect. BigQuery is suited to analytics, not long-term storage. References: https://cloud.google.com/logging/docs/routing/overview https://cloud.google.com/storage/docs/storage-classes
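A sketch of the setup, with bucket, sink, and location names as placeholders: create a single-region Archive-class bucket and route the GKE container logs to it with a sink.

```shell
# Single-region bucket with the cheapest storage class for long-term retention.
gcloud storage buckets create gs://my-log-archive \
    --location=us-central1 \
    --default-storage-class=ARCHIVE

# Sink that routes GKE container logs to the bucket.
gcloud logging sinks create archive-sink \
    storage.googleapis.com/my-log-archive \
    --log-filter='resource.type="k8s_container"'
```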
Question 54 of 65
54. Question
You are part of an on-call SRE team managing an apache web service application in production. The application is deployed to Google Compute Engine. The FluentD agent is installed on the GCE instance. You have been tasked with reviewing the apache logs from the application. Which of the following queries helps you do this?
Option A is incorrect. This has a resource type set to App Engine. Option B is CORRECT. This has a resource type of Compute Engine and the appropriate log name. Options C and D are incorrect. These queries target the Admin Activity logs and Data Access logs. Reference: https://cloud.google.com/logging/docs/view/query-library-preview#logging-agent-filters
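A query of this shape (the project ID is a placeholder, and the exact log name depends on how the agent is configured to label the Apache access log) would look like:

```
resource.type="gce_instance"
logName="projects/my-project/logs/apache-access"
```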
Question 55 of 65
55. Question
Your organization has several applications running on Compute Engine. The instances generate logs and metrics which are being monitored on dashboards. There is a new requirement to capture operating system (OS) level logs for security reasons. How can you achieve this?
Option A is incorrect. The FluentD agent is needed for OS-level logs, not Collectd. Option B is incorrect. Logs-based metrics are only possible for logs available in Cloud Logging. Option C is CORRECT. The FluentD agent is needed for OS-level logs. Option D is incorrect. A sink can only route logs that are already available in Cloud Logging. Reference: https://cloud.google.com/logging/docs/agent/logging/installation#gce-ui-install
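Per the installation docs referenced above, installing the legacy Logging (FluentD) agent on a Linux Compute Engine instance is roughly:

```shell
# Download and run Google's agent-repo script, then install and start the agent.
curl -sSO https://dl.google.com/cloudagents/add-logging-agent-repo.sh
sudo bash add-logging-agent-repo.sh --also-install
sudo service google-fluentd start
```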
Question 56 of 65
56. Question
Your company has three environments called production, staging and development. A GCP Project has been set up for each environment; there is also a monitoring project with two workspaces, one for production and the other for development and staging. A GKE Cluster has been set up in both staging and development for testing an application to be deployed to production. Both clusters have a service called app-serve, and an alerting policy was created to monitor the service in the workspace. When there is an incident on the service, the GKE monitoring dashboard can't associate the incident uniquely with the development service or the staging service. How can you resolve this with little operational overhead?
Options A and B are incorrect. These options both involve the additional operational overhead of monitoring another workspace and renaming all the components that use the service. Option C is CORRECT. This is the best option with the least amount of overhead. Option D is incorrect. This has a lot of operational overhead and might not solve the problem if the same service name is used. Reference: https://cloud.google.com/stackdriver/docs/solutions/gke/troubleshooting#alerting
Question 57 of 65
57. Question
Your SRE team is responsible for monitoring and logging of the applications in Production Projects. The applications are deployed on different resources like Compute Engine and GKE. Your team has created a centralised monitoring dashboard in the monitoring Project for the metrics from all the production Projects. An uptime check was created for the applications. You have been tasked with setting up the Notification channels for one of the applications to send the notification to a public endpoint. Which of these helps you meet the requirement?
Question 58 of 65
58. Question
Your Site Reliability Engineering (SRE) team members are managing the CI/CD of your organization. Applications are deployed to Compute Engine instances. There is a requirement to send the logs of the instances in the Development Projects to a user-created bucket. Which step can you take to achieve this?
The log sink is created in Cloud Logging, and the destination should be a Cloud Storage bucket because the logs should go into a user-created bucket. Option B is incorrect. Cloud Pub/Sub is not a bucket. Options C and D are incorrect. The log sink is created in Cloud Logging. Reference: https://cloud.google.com/logging/docs/export/configure_export_v2
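A sketch of the sink setup, with bucket and sink names as placeholders; the writer identity printed by the create command must be granted write access to the bucket:

```shell
# Route Compute Engine instance logs to a user-created Cloud Storage bucket.
gcloud logging sinks create dev-instance-logs \
    storage.googleapis.com/my-dev-logs-bucket \
    --log-filter='resource.type="gce_instance"'

# The create command outputs a writer identity (a service account);
# grant it object-creation rights on the bucket.
gcloud storage buckets add-iam-policy-binding gs://my-dev-logs-bucket \
    --member='serviceAccount:WRITER_IDENTITY_FROM_SINK_OUTPUT' \
    --role='roles/storage.objectCreator'
```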
Question 59 of 65
59. Question
Your team is planning on the structure of the Cloud Monitoring workspace that will monitor multiple projects. You need to grant permissions to the service account of Compute Engine instances to send metric data to Cloud Monitoring. Following the principle of least privilege, which of the following roles should be assigned?
Option A is incorrect. The Monitoring Admin role assigns more permissions than are required for the job. Option B is CORRECT. Monitoring Metric Writer provides just enough permissions for users or service accounts to write metrics to Cloud Monitoring. Option C is incorrect. The Logging Admin role is used for access to Cloud Logging. Option D is incorrect. The Logs Configuration Writer role is used for access to Cloud Logging. Reference: https://cloud.google.com/monitoring/access-control
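Granting the role to the instances' service account could look like this; the project ID and service account email are placeholders:

```shell
# Least-privilege binding: the Compute Engine service account may only
# write metric data to Cloud Monitoring.
gcloud projects add-iam-policy-binding my-project \
    --member='serviceAccount:123456789-compute@developer.gserviceaccount.com' \
    --role='roles/monitoring.metricWriter'
```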
Question 60 of 65
60. Question
Your team is developing an application that will be deployed to production. During the testing of the application there were some incidents which were documented and resolved. Which of the following is not a best practice for Incident management?
Options A, B and C are incorrect. These are Google SRE's best practices for incident management. Option D is CORRECT. Best practice is to prioritize restoring service before root-cause investigation. Reference: https://sre.google/sre-book/managing-incidents/
Question 61 of 65
61. Question
Your client just recovered from a major outage that disrupted application service for almost an hour. Your DevOps team has been tasked with creating a document that summarizes the events that took place during the incident. Which of the following documents will you create?
Option A is incorrect. Alerts are created to notify based on measured metrics exceeding or falling below a threshold. Option B is incorrect. Support tickets are usually created for tasks requested by a customer. Option C is CORRECT. The postmortem is a document that records an incident, its impact and any mitigating actions taken to resolve it. Option D is incorrect. This is the job of the Communications Lead in the project, not DevOps. Reference: https://sre.google/sre-book/postmortem-culture/
Question 62 of 65
62. Question
An organization is planning to use an automated CI/CD pipeline to deploy applications to Compute Engine. The organization would like to use a combination of cloud native and open-source tools for the pipeline. Which of the following helps you achieve this?
Question 63 of 65
63. Question
A customer has multiple projects in Google Cloud. The projects represent the different environments. You have been tasked with sending certain logs from all projects to Splunk. There is a requirement to send any Data Access logs to Splunk. Which of the following DOES NOT help you meet this requirement?
Option A is incorrect. A Pub/Sub topic is needed to send logs to Splunk. Option B is incorrect. A log sink is needed to route the selected logs to the required destination. Option C is incorrect. The Logging service account needs permissions to write to the Pub/Sub topic. Option D is CORRECT. A Cloud Storage bucket is not needed for routing logs to Splunk. Reference: https://cloud.google.com/architecture/exporting-stackdriver-logging-for-splunk#set_up_the_logging_export
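The needed pieces (topic, sink, and permission) can be sketched as follows; topic and sink names are placeholders, and the sink's writer identity comes from the create command's output:

```shell
# Topic Splunk will pull from.
gcloud pubsub topics create logs-to-splunk

# Sink routing Data Access audit logs to the topic.
gcloud logging sinks create splunk-sink \
    pubsub.googleapis.com/projects/my-project/topics/logs-to-splunk \
    --log-filter='logName:"cloudaudit.googleapis.com%2Fdata_access"'

# Allow the sink's writer identity to publish to the topic.
gcloud pubsub topics add-iam-policy-binding logs-to-splunk \
    --member='serviceAccount:WRITER_IDENTITY_FROM_SINK_OUTPUT' \
    --role='roles/pubsub.publisher'
```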
Question 64 of 65
64. Question
You work as a DevOps Engineer for a client. Developers make changes and push code to branches in a repository. Each branch is merged into a staging branch daily. The client wants to trigger a build of the staging branch every night. How can you achieve this? (select 2)
Option A is incorrect. Scheduled triggers are created with “manual invocation” as the event. Option B is CORRECT. Scheduled triggers are created with “manual invocation” as the event. Option C is incorrect. Scheduled triggers are created with “manual invocation” as the event. Option D is incorrect. Triggers are created in Cloud Build, not Cloud Scheduler. Option E is CORRECT. After creating the trigger in Cloud Build, a Cloud Scheduler job needs to be created to invoke it on a schedule. Reference: https://cloud.google.com/build/docs/automating-builds/create-scheduled-triggers
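A sketch following the scheduled-triggers docs; the project, repository, trigger, and service account names are placeholders:

```shell
# Manual-invocation trigger in Cloud Build for the staging branch.
gcloud builds triggers create manual \
    --name=nightly-staging-build \
    --repo=https://source.developers.google.com/p/my-project/r/my-repo \
    --repo-type=CLOUD_SOURCE_REPOSITORIES \
    --branch=staging \
    --build-config=cloudbuild.yaml

# Cloud Scheduler job that runs the trigger every night at midnight.
gcloud scheduler jobs create http nightly-build \
    --schedule='0 0 * * *' \
    --uri=https://cloudbuild.googleapis.com/v1/projects/my-project/triggers/nightly-staging-build:run \
    --message-body='{"branchName":"staging"}' \
    --oauth-service-account-email=build-scheduler@my-project.iam.gserviceaccount.com
```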
Question 65 of 65
65. Question
You work as a DevOps Engineer for a client. The company uses cloud native tools for its CICD pipeline. Automated build is done using Cloud Build when code is pushed to repositories in Cloud Source Repositories. Which of the following CANNOT be used as a trigger with Cloud Source Repositories?
Option A is incorrect. This can be used to trigger Cloud Build when code is pushed to a specified branch (or any branch) in Cloud Source Repositories. Option B is CORRECT. This option cannot be used to trigger builds in Cloud Build if Cloud Source Repositories is used. Option C is incorrect. This can be used to trigger Cloud Build when code is pushed with a new tag to any branch of the repository in Cloud Source Repositories. Option D is incorrect. Cloud Build can be manually triggered to build code in the repository in Cloud Source Repositories. Reference: https://cloud.google.com/build/docs/automating-builds/create-manage-triggers