Google Professional Cloud DevOps Engineer Practice Test 3
You can review your answers by clicking on the “View Answers” option.
Important note: open reference documentation links in a new tab (right-click and choose “Open in New Tab”).
Question 1 of 65
1. Question
Your team is developing a Python application for a government agency. The company has decided that the application should be deployed to the App Engine flexible environment in GCP. There is a security requirement to collect the application logs. Which steps can you take to fulfil this requirement? Select TWO
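As background for this question (an illustrative sketch, not tied to any specific answer option): on the App Engine flexible environment, anything the application writes to stdout/stderr is picked up by Cloud Logging, so one common pattern is to route Python's standard logging module to stdout. The logger name below is a placeholder.

```python
# Route the standard logging module to stdout; on App Engine flexible,
# stdout/stderr output is collected by Cloud Logging automatically.
import logging
import sys

logger = logging.getLogger("app")  # placeholder logger name
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(logging.Formatter("%(levelname)s %(name)s %(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("request handled")
```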
Question 2 of 65
2. Question
You are working on a new application for a gambling company. The application will use a microservices architecture to allow loose coupling of the different components. You are using Cloud Build to build the Docker images. You have tested the build locally using the local builder, but when you try to run the build in Cloud Build it fails. Which of the following could be the problem?
Question 3 of 65
3. Question
Your company has decided to migrate from on-premises to Google Cloud. The first environments to be migrated are the development and testing environments. Currently each environment is fully documented and consists of a network with 3 subnets, several firewall rules, routes, VMs, storage, databases and DNS. The environments need to be consistent and immutable. Following best practice, how would you deploy the environments and make them reproducible with little overhead?
Options A, C and E are incorrect: they do not follow the best practice of automating infrastructure creation for reproducibility. Option B is CORRECT: this is Google’s best practice for creating Infrastructure as Code, and the templates can be version-controlled with minimal overhead. Option D is incorrect: it introduces the overhead of managing the Cloud Function.
References:
https://cloud.google.com/deployment-manager/docs/quickstart
https://cloud.google.com/docs/terraform
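To make the Infrastructure-as-Code idea concrete, here is a minimal Deployment Manager configuration sketch (resource names and CIDR ranges are placeholders) declaring a network and one subnet; a file like this can be committed to version control and deployed with `gcloud deployment-manager deployments create dev-env --config config.yaml`.

```yaml
# Hypothetical Deployment Manager config: environment declared as code,
# so it is reproducible and version-controllable.
resources:
- name: dev-network
  type: compute.v1.network
  properties:
    autoCreateSubnetworks: false
- name: dev-subnet-1
  type: compute.v1.subnetwork
  properties:
    network: $(ref.dev-network.selfLink)
    ipCidrRange: 10.0.1.0/24
    region: us-central1
```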
Question 4 of 65
4. Question
You are developing a mobile application for a financial institution. A key security requirement is that application passwords are changed frequently. The application comprises two parts: a frontend deployed on Google Kubernetes Engine and a Google Cloud SQL database. You need a secure way to pass the database credentials to the application at runtime while meeting the security requirement. How can you achieve this following best practice?
Options A and B are incorrect: they do not follow best practice. Storing credentials in the application is not recommended, and injecting the credentials into the application is also not recommended because the credentials then get stored in the application code. Option C is incorrect: you currently cannot configure secret rotation via the console. Option D is CORRECT: secret rotation policies can only be configured through the API or gcloud commands.
Reference: https://cloud.google.com/secret-manager/docs/secret-rotation
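A command sketch of the gcloud-only rotation setup (secret name, timestamps and topic are placeholders; running this requires an authenticated gcloud with Secret Manager enabled):

```shell
# Create a secret with a rotation schedule; rotation notifications are
# published to a Pub/Sub topic so an automated process can rotate the value.
gcloud secrets create db-password \
  --replication-policy="automatic" \
  --next-rotation-time="2025-01-01T00:00:00Z" \
  --rotation-period="2592000s" \
  --topics="projects/my-project/topics/secret-rotation"
```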
Question 5 of 65
5. Question
Your company has several Google Cloud Projects. As part of the CI/CD pipeline, it has a Project where automated Compute and Docker image creation is done. Users in the developer, staging and production Projects require access to the images created for deployments. Following the principle of least privilege, what IAM role would you assign to users to achieve this?
Assign the compute.imageUser role to users in the Project where the images are created. Option B is incorrect: that role is too permissive. Options C and D are incorrect: the role must be assigned in the Project where the images are created.
Reference: https://cloud.google.com/compute/docs/images/image-management-best-practices
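A command sketch of that binding (project ID and member are placeholders; requires an authenticated gcloud session):

```shell
# Grant compute.imageUser in the image-hosting project so users in other
# projects can use (but not modify) the images.
gcloud projects add-iam-policy-binding image-project \
  --member="user:dev@example.com" \
  --role="roles/compute.imageUser"
```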
Question 6 of 65
6. Question
Your team is developing a containerized Python application for a government project. The application uses a microservices architecture and will be deployed using Cloud Run. You have been asked to capture the application’s top or new errors in a clear dashboard in real time. How would you achieve this?
Question 7 of 65
7. Question
Your team manages a financial application for an organisation. You have been given a requirement to preserve the logs from the application for 10 years as part of a compliance process. Logs will be reviewed once a year. What is the most cost-effective way to achieve this?
Question 8 of 65
8. Question
Your Site Reliability Engineering (SRE) team manages an application deployed in three regions. The application is deployed on Managed Instance Groups placed behind a global HTTP(S) Load Balancer. You are applying a critical security patch to the Compute Engine instances. You successfully patch the instances in the first two regions, but an error in the patching of the third region causes requests to that region to fail. You want to mitigate the impact of the unsuccessful patching on users. What should you do?
Options A and B are incorrect: these options try to fix the problem immediately with no guarantee of solving it, thereby increasing the Mean Time to Repair (MTTR). Option C is incorrect: increasing the number of instances does not mitigate the incident, because the new instances will have the same error as the other instances in the Managed Instance Group. Option D is CORRECT: the recommended approach is to make the system work as well as it can under the circumstances; this gives you time to fix the errors in region 3 and apply a new patch.
References:
https://sre.google/sre-book/effective-troubleshooting/
https://cloud.google.com/load-balancing/docs/enabling-connection-draining
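A command sketch of taking the failing region out of rotation (backend service, instance group and region names are placeholders; requires an authenticated gcloud session): removing the unhealthy region's instance group from the global backend service routes traffic to the healthy regions while the patch is fixed.

```shell
# Temporarily remove the failing region's instance group from the
# global HTTP(S) load balancer's backend service.
gcloud compute backend-services remove-backend web-backend-service \
  --instance-group=mig-region3 \
  --instance-group-region=europe-west1 \
  --global
```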
Question 9 of 65
9. Question
Your Site Reliability Engineering (SRE) team members are frequently interrupted by tasks and requests from customers, such as handling quota requests, that prevent them from making progress on engineering work or feature launches. A recent review shows that most of the requests are repetitive. Which steps can you take to reduce the interruptions, following Google’s SRE best practice, to avoid exhaustion or burnout?
Question 10 of 65
10. Question
You are developing a new application for a global media company. The application will serve content to users in several countries and needs high availability and reliability. Your team has agreed on relevant SLOs and an error budget policy with stakeholders. Which of the following is not a recommended action when the service has consumed its entire error budget?
Lowering the SLOs is not a recommended action when the error budget is exhausted, because lowering the SLO means lowering the reliability of the system. Options B, C and D are incorrect: halting releases to production and focusing on bugs that affect reliability are recommended actions when the error budget is exhausted.
Reference: https://sre.google/workbook/implementing-slos/ (Establishing an Error Budget)
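A short worked example of the error-budget arithmetic behind this question: a 99.9% availability SLO leaves a 0.1% error budget, which over a 30-day month corresponds to roughly 43.2 minutes of allowed downtime.

```python
# Error budget = 1 - SLO; expressed as allowed downtime per 30-day month.
SLO = 0.999
error_budget = 1 - SLO                      # 0.1%
minutes_per_30_days = 30 * 24 * 60          # 43200 minutes
allowed_downtime_minutes = error_budget * minutes_per_30_days
print(round(allowed_downtime_minutes, 1))   # 43.2
```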
Question 11 of 65
11. Question
You are responsible for managing the release of a new version, with breaking changes, of an API that your company owns. Numerous customers consume this API. Which of the following is the recommended release order?
Question 12 of 65
12. Question
To meet a security compliance requirement to centrally collect VPC Flow Logs, your company asked you to configure a logs routing sink. The sink destination is a Logging bucket in another project. A few days after you configure the logs sink, one of the security team members points out that there are no logs in the Logging bucket. Which of the following is not a possible reason?
Options A, C and D are incorrect (they are possible reasons): if Flow Logs are not enabled on the subnets to be monitored there will be no logs; if log exclusion filters are wrongly configured, the desired logs will be discarded; and if the security team is looking in the wrong bucket, they will not see the logs. Option B is CORRECT: firewall rules do not affect the logs generated by Flow Logs.
Reference: https://cloud.google.com/vpc/docs/using-flow-logs#no-vpc-flows
Question 13 of 65
13. Question
To meet industry compliance, your company has asked you to configure VPC Flow Logs. A key priority is to streamline the logs collected from Flow Logs to reduce storage costs. What steps can you take to achieve this? Choose TWO
Filtering and metadata annotations are ways of modifying the number of logs generated and stored from VPC Flow Logs. Option D is incorrect: it applies to logs generated by infrastructure such as GKE and GCE, not VPC Flow Logs. Options C and E are incorrect: they do not streamline the logs collected; they focus on storage of the logs.
Reference: https://cloud.google.com/vpc/docs/flow-logs
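A command sketch of sampling plus filtering on a subnet (subnet, region and filter expression are placeholders; requires an authenticated gcloud session): sampling a fraction of flows, filtering by expression, and excluding metadata all reduce the volume of stored Flow Logs.

```shell
# Enable flow logs with 25% sampling, a filter expression, and no
# metadata annotations to cut storage volume.
gcloud compute networks subnets update my-subnet \
  --region=us-central1 \
  --enable-flow-logs \
  --logging-flow-sampling=0.25 \
  --logging-filter-expr='inIpRange(connection.src_ip, "10.0.0.0/8")' \
  --logging-metadata=exclude-all
```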
Question 14 of 65
14. Question
Your team is developing an application using Java. Cloud Build is used to build images for applications. There is a requirement to store the Java images and Maven packages in GCP for use in deployment. What is the recommended solution to achieve this?
Container Registry can hold the images, and the packages can be stored in Cloud Storage. This is Google’s recommended approach. Options A, B and C are incorrect: images cannot be stored in Cloud Source Repositories, and packages cannot be stored in Container Registry.
Reference: https://cloud.google.com/build/docs/building/store-build-artifacts
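A cloudbuild.yaml sketch of that split (image name, bucket and paths are placeholders): the built image is pushed to Container Registry via the `images` field, while non-container artifacts such as Maven packages are uploaded to a Cloud Storage bucket via the `artifacts` field.

```yaml
# Build the Java app, push the image to Container Registry, and upload
# the Maven artifact to Cloud Storage.
steps:
- name: 'gcr.io/cloud-builders/mvn'
  args: ['package']
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/java-app', '.']
images:
- 'gcr.io/$PROJECT_ID/java-app'
artifacts:
  objects:
    location: 'gs://my-artifact-bucket/'
    paths: ['target/*.jar']
```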
Question 15 of 65
15. Question
Your team uses Docker images to build applications. There is a requirement that exploits be detected in Docker images built with Cloud Build before they are used in deployments. You have been tasked with deploying the process to detect vulnerabilities in built images before they are deployed. What steps can you take to achieve this? Choose TWO
Vulnerability scanning can be used to scan images in Container Registry or Artifact Registry. Options A, B and D are incorrect: vulnerability scanning is not integrated with these services, and images are only stored in Container Registry or Artifact Registry.
Reference: https://cloud.google.com/container-analysis/docs/get-image-vulnerabilities
Question 16 of 65
16. Question
You are responsible for deploying a web-facing application. The application will serve users in multiple regions. There is a reliability requirement for the system not to be overloaded with requests during peak periods. Following GCP’s SRE best practice, which of these is not recommended?
Options A, B and D are incorrect: they are the recommended approaches for managing requests during peak periods to reduce or prevent cascading failures. Option C is CORRECT: it is not recommended because intra-layer communication is susceptible to distributed deadlock.
Reference: https://sre.google/sre-book/addressing-cascading-failures/
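An illustrative load-shedding sketch (not from the exam material; class and limits are invented for illustration): rejecting requests beyond a concurrency limit is one way a service degrades gracefully under overload instead of cascading into failure.

```python
# Minimal load shedder: admit up to max_in_flight concurrent requests,
# shed (reject) the rest so the service stays responsive under overload.
class LoadShedder:
    def __init__(self, max_in_flight):
        self.max_in_flight = max_in_flight
        self.in_flight = 0

    def try_acquire(self):
        if self.in_flight >= self.max_in_flight:
            return False          # shed: caller should return 429/503
        self.in_flight += 1
        return True

    def release(self):
        self.in_flight -= 1

shedder = LoadShedder(max_in_flight=2)
results = [shedder.try_acquire() for _ in range(3)]
print(results)  # [True, True, False]
```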
Question 17 of 65
17. Question
Your team is designing a web-facing application for your organization. The application is intended to serve users globally. Your job is to plan for the capacity of the application. Following GCP’s SRE best practice for capacity management, which of these is not recommended?
Options A, B and C are incorrect: these are the recommended approaches for capacity planning. Load testing shows how the application will scale or fail under load; monitoring allows remedial actions to be taken promptly; and graceful degradation allows the application to keep functioning under overwhelming request volumes by rejecting excess requests so the system is not overloaded. Option D is CORRECT: it is not recommended because it is costly and there is no guarantee the application will need that amount of resources.
Reference: https://static.googleusercontent.com/media/sre.google/en//static/pdf/login_winter20_10_torres.pdf
Question 18 of 65
18. Question
Your team is designing a CI/CD pipeline for your organization. Jenkins was chosen as the continuous deployment tool. Following GCP’s recommended practice, how should the CD tool be deployed? Choose TWO.
Question 19 of 65
19. Question
You are part of an on-call SRE team managing a production application. The application receives requests, processes them and returns responses to users. A new update was deployed yesterday to introduce new features into the application. Users are now complaining about errors and failed requests from the application. Your team declares an incident. Which of the following is the recommended first action after an incident is declared?
Options A, B and C are incorrect: mitigating the impact is not the first action because you do not yet know the extent of the impact, and performing root-cause analysis and the postmortem is done after service is fully restored. Option D is CORRECT: the first recommended step is assessing the impact or extent of the incident.
Reference: https://sre.google/workbook/incident-response/ (Case Study 2)
Question 20 of 65
20. Question
You are part of an on-call SRE team managing a frontend web service application in production. The application offers an HTTP-based API that consumers can use to manipulate various data. A new version has been developed and needs to be tested with live traffic. There is a requirement to minimize the number of users that will be affected if the new version fails. Which of the following helps you meet the requirement?
In a canary deployment, you partially roll out a change to a subset of users and then evaluate its performance against a baseline deployment. Options B, C and D are incorrect. B and D represent the same technique: blue/red represents the current application version and green/black represents the new application version, and only one version is live at a time, so these methods will affect every user if there is a failure. Option C updates the live application gradually until it is deployed to all instances; if there is a failure, the number of affected users increases as the deployment rolls out.
References:
https://sre.google/workbook/canarying-releases/
https://cloud.google.com/architecture/application-deployment-and-testing-strategies
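An illustrative sketch of the canary idea (the routing function and percentages are invented for illustration, not a GCP API): hashing the user ID gives a stable assignment, so a small, fixed subset of users sees the canary version while everyone else stays on stable.

```python
# Deterministic canary routing: hash the user id into 100 buckets and
# send the lowest canary_percent buckets to the canary version.
import hashlib

def route(user_id, canary_percent=5):
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "canary" if bucket < canary_percent else "stable"

versions = {route(f"user-{i}") for i in range(1000)}
```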
Question 21 of 65
21. Question
You are part of a team designing a containerized application to be deployed to GKE. The application will be deployed to a five-node cluster in a single region. The application will process sensitive user data, and there is a requirement to remove any sensitive data from the logs before they go to Cloud Logging. Which of the following helps you meet the requirement? Choose TWO
Options A, B and D are incorrect. System and workload logging does not allow you to customise the logging; legacy logging is deprecated and is not recommended for newer clusters; and a Kubernetes Deployment does not guarantee that fluentd pods run on every node in the cluster. Options C and E are CORRECT. Default logging needs to be disabled so fluentd can be installed manually and customised, and a DaemonSet is the correct object for logging because it ensures a fluentd pod is deployed on every node in the cluster to collect logs.
Reference: https://cloud.google.com/architecture/customizing-stackdriver-logs-fluentd
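An abbreviated DaemonSet sketch (names, namespace and image tag are placeholders; the fluentd filter config that scrubs sensitive fields would be mounted from a ConfigMap, not shown): the DaemonSet guarantees one fluentd pod per node.

```yaml
# One fluentd pod per node; a custom fluentd config can scrub sensitive
# fields before logs are forwarded to Cloud Logging.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-gcp
  namespace: logging
spec:
  selector:
    matchLabels:
      app: fluentd-gcp
  template:
    metadata:
      labels:
        app: fluentd-gcp
    spec:
      containers:
      - name: fluentd
        image: fluent/fluentd:v1.16
        volumeMounts:
        - name: varlog
          mountPath: /var/log
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
```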
Question 22 of 65
22. Question
You are part of the DevOps team in a growing analytics company. The company currently deploys its Docker applications on virtual machines on-premises. The company has three different environments: dev, staging and production. The company is planning to move its applications to GKE. The key requirement is to keep the environments separate in a way that allows access to be restricted using IAM policy. Which of the following helps you meet the requirement following GCP’s best practice?
Options A, B and C are incorrect. There is no way to manage IAM permissions at the VPC or subnet level, and while it is possible to apply RBAC using namespaces in a GKE cluster to separate environments, this is not the best practice for separating environments. Option D is CORRECT: the best practice is to manage environments and IAM policy at the Project level.
Reference: https://cloud.google.com/docs/enterprise/best-practices-for-enterprise-organizations#project-structure
Question 23 of 65
23. Question
Your SRE team is responsible for monitoring and logging of the applications in different Production Projects. The applications are deployed on different resources like Compute Engine and GKE. Your team has created a centralised monitoring dashboard in the monitoring Project for the metrics from all the production Projects. A new member needs to be given access to one of the charts in the centralised dashboard for training purposes. Which steps will help you meet the requirements? Choose TWO
Question 24 of 65
24. Question
Your organization has recently decided to move its applications to the Cloud. The current CI/CD pipeline uses GitHub repositories for source code version control. You have been directed to build a proof-of-concept deployment linking GitHub to Cloud Build for image creation and deployment. What steps can you take to achieve this with minimal overhead? Choose TWO
You are part of the SRE team working for a data-processing company. Your team manages an application that was manually deployed to App Engine. The application source code is stored in Cloud Source Repositories. A new version of the application has been developed and tested. Approval has been given to deploy to production. You pushed the code update to the Cloud Source Repository, and after some time you notice the old version has not been updated. What should you do?
Options A and D are incorrect: the application was deployed manually, so Cloud Build is not involved. Option B is incorrect: gcloud app browse is only used to verify the app is running. Running gcloud app deploy app.yaml will deploy the new version of the application to App Engine. Reference: https://cloud.google.com/source-repositories/docs/integrating-with-app-engine
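A hedged sketch of the manual redeploy (the project ID is a placeholder): deploy the updated code, then open the app to verify the new version is serving.

```shell
# Deploy the new version to App Engine from the working copy,
# then open the serving URL to verify (project ID is a placeholder)
gcloud app deploy app.yaml --project=my-project
gcloud app browse --project=my-project
```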
Question 26 of 65
26. Question
You are responsible for setting up an automated CICD pipeline. The pipeline will be used to build docker images for application deployment to GKE. Recently the performance (build speed) of Cloud Build in your pipeline has been dropping. What steps can you take to improve the speed of builds? Choose TWO
Using a .gcloudignore file to exclude unneeded files and selecting a higher machine type will both speed up builds. Options B, C and E are incorrect: larger base images slow down builds, and the Service Account permissions do not affect build speed. Reference: https://cloud.google.com/build/docs/speeding-up-builds
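As a hedged sketch, both fixes live in the build setup: a .gcloudignore file at the repository root lists paths to exclude from the uploaded build context, and the machineType option requests a higher-spec build machine (the build step and image name below are illustrative).

```yaml
# cloudbuild.yaml — request a higher-spec build machine
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/app', '.']
options:
  machineType: 'E2_HIGHCPU_8'
```

A .gcloudignore file alongside it (listing, for example, .git/ or local build output) keeps those paths out of the context uploaded to Cloud Build.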
Question 27 of 65
27. Question
Your company has multiple Projects in its Google Cloud Organization hierarchy. There are resources in the different Projects which have been configured to send metrics to a centralised monitoring workspace. You recently deployed Apache on a Compute Engine instance with a custom Service Account in one of the Projects. You installed and configured the monitoring agent to get metrics from the Apache application. You notice there are no Apache metrics in the centralised monitoring workspace. Which of the following is a possible reason?
Options B, C & D are incorrect. FluentD is used for logging, not monitoring; the agent is running, because the question says it was installed and configured; and the region of a Compute Engine instance has no effect on the monitoring workspace. The remaining option, a problem with the custom Service Account's credentials or permissions, is a possible reason metrics are not showing up in the workspace. Reference: https://cloud.google.com/monitoring/agent/monitoring/troubleshooting#verify-creds
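A hedged sketch of one possible fix, assuming the custom Service Account simply lacks permission to write metrics (project and account names are placeholders):

```shell
# Hypothetical fix: allow the instance's custom Service Account
# to write monitoring metrics (names are placeholders)
gcloud projects add-iam-policy-binding my-project \
    --member="serviceAccount:apache-vm@my-project.iam.gserviceaccount.com" \
    --role="roles/monitoring.metricWriter"
```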
Question 28 of 65
28. Question
You are designing an online gaming application. The web application allows users to select games and view leaderboards. Game scores are stored in a database. You want to identify the minimum Service Level Indicators (SLIs) for the application to ensure the leaderboard has the latest scores. What SLIs should you select?
Option B is CORRECT. This is the best SLI for the application. The web application is request-driven for which latency and availability are good SLIs. The database is a storage system, the recommended SLIs are latency, availability and durability. Options A, C & D are incorrect. Durability and Coverage are not suitable SLIs for the web application which is request-driven. Coverage is a suitable SLI for batch processing systems, so it is not suitable for the database of the gaming application. Reference: https://sre.google/sre-book/service-level-objectives/ (Indicator in Practice)
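The request-driven SLIs named above can be sketched as simple ratios; the counters and thresholds below are illustrative, not taken from the question.

```python
# Illustrative sketch: availability and latency SLIs for a
# request-driven web application (all numbers are hypothetical).

def availability_sli(successful: int, total: int) -> float:
    """Fraction of well-formed requests served successfully."""
    return successful / total if total else 1.0

def latency_sli(latencies_ms, threshold_ms: float) -> float:
    """Fraction of requests served faster than the latency threshold."""
    latencies_ms = list(latencies_ms)
    if not latencies_ms:
        return 1.0
    fast = sum(1 for l in latencies_ms if l < threshold_ms)
    return fast / len(latencies_ms)

print(availability_sli(9990, 10000))          # 0.999
print(latency_sli([120, 80, 450, 95], 300))   # 0.75
```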
Question 29 of 65
29. Question
Your team is building an automated CICD pipeline in the development Project. The Cloud Source Repository will be used for code versioning, Cloud Build will be used to build and deploy the application to Google Kubernetes Engine. The Cloud Build Service account has been given the Kubernetes Engine Developer permissions. After a developer pushes code to the Cloud Source Repository, you notice the application is not getting deployed to GKE. Which of the following could be the reason?
Option A is incorrect. The Kubernetes Engine Developer permission is sufficient for Cloud Build to deploy to GKE. Option B is CORRECT. For Cloud Build to automatically build code pushed to Cloud Source Repository, triggers must be created. Options C and D are incorrect. Cloud Source Repository does not need permissions and the GKE API will be enabled by the Cloud Build service before deployment. Reference: https://cloud.google.com/build/docs/automating-builds/create-manage-triggers
Question 30 of 65
30. Question
Your Site Reliability (SRE) team members are managing the CICD of your organization. The organization uses GCP Projects to separate environments. The pipeline consists of Cloud Source Repository, Cloud Build and Spinnaker. There is a security requirement to send the logs of Cloud Build in the Production Project to a user-created bucket in a Project designated for logs. Which step can you take to achieve this?
The Storage Admin role should be given to the Cloud Build Service account of the production Project in the logging Project. Options B and D are incorrect: the permission should be added in the logging Project. Option C is incorrect: the Viewer role is not sufficient for Cloud Build to write logs. Reference: https://cloud.google.com/build/docs/securing-builds/store-manage-build-logs#store-custom-bucket
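A hedged sketch of the setup (bucket and image names are illustrative): the build config in the production Project points logsBucket at the user-created bucket, while the production Project's Cloud Build service account is granted Storage Admin in the logging Project.

```yaml
# cloudbuild.yaml in the production Project — send build logs to a
# user-created bucket owned by the logging Project (names illustrative)
logsBucket: 'gs://central-build-logs'
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/app', '.']
```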
Question 31 of 65
31. Question
Your team has been tasked with the monitoring of a new application to be deployed on Managed Instance Groups. You are responsible for setting up the monitoring agent and the custom metrics for the application. You have chosen to create the metric descriptor manually. You need to monitor the memory utilization metric of the application and create an alerting policy. What value should the Metric Kind be set to in the descriptor?
Your team has developed and tested a video processing service for your company. The video service accepts videos in one format and converts them to another specified format. Your team has agreed on the indicator metrics to track the performance of the system. All stakeholders of the application have agreed on a minimum target value, within a rolling 4-week window, for the indicator metric used to measure the service. What is needed to guarantee a level of service to the customer with consequences for missing it?
Options A, B and C are incorrect. Only Service Level Agreements carry consequences for not meeting Service Level Objectives. Option D is CORRECT: an SLA is needed. Reference: https://sre.google/sre-book/service-level-objectives/ (Agreements)
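The link between a target over a rolling 4-week window and its consequences can be made concrete as an error budget; the 99.9% target below is illustrative, not from the question.

```python
# Illustrative sketch: the error budget implied by an SLO target
# over a rolling 4-week window (the 99.9% target is hypothetical).

def error_budget_minutes(slo: float, window_days: int = 28) -> float:
    """Minutes of allowed unavailability within the window."""
    total_minutes = window_days * 24 * 60
    return (1.0 - slo) * total_minutes

print(round(error_budget_minutes(0.999), 1))  # 40.3 minutes over 28 days
```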
Question 33 of 65
33. Question
Your organization has created three monitoring workspaces called dev-workspace, test-workspace and prod-workspace. The workspaces monitor the Projects outlined below: dev-workspace: dev-1, dev-2, dev-3 test-workspace: test-1, test-2 prod-workspace: prod-1, prod-b and prod-c You have been asked to monitor the project prod-1 alongside test-1 and test-2 in the same workspace. How will you achieve this?
A project can only be monitored by one workspace at any time. Options B, C and D are incorrect: the monitoring workspace of a Project can be updated after creation, and merging both workspaces would mean five projects being monitored in test-workspace instead of three. Reference: https://cloud.google.com/monitoring/workspaces/manage
Question 34 of 65
34. Question
You provide support for a Python application in production on Compute Engine. In recent times there have been complaints about the slow response of the application. You want to investigate how requests propagate through your entire application. Which should you do?
Option A is CORRECT. Cloud Trace shows how requests propagate through the different components (microservices or functions) of an application. Options B and C are incorrect: the monitoring and logging agents do not show how requests propagate through the components of an application. Option D is incorrect: CPU utilization does not show how requests propagate through an application. Reference: https://cloud.google.com/trace/docs/setup
Question 35 of 65
35. Question
You provide support for a Python application in production on Compute Engine. In recent times there have been complaints about the slow response of the application. You want to investigate how requests propagate through your entire application. Which should you do?
Option A is CORRECT. Cloud Trace shows how requests propagate through the different components (microservices or functions) of an application. Options B and C are incorrect: the monitoring and logging agents do not show how requests propagate through the components of an application. Option D is incorrect: CPU utilization does not show how requests propagate through an application. Reference: https://cloud.google.com/trace/docs/setup
Question 36 of 65
36. Question
Your team is developing an application that will be deployed to production. During the testing of the application there were some incidents which were documented and resolved. Which of the following is not a best practice for Incident management?
Options A, B & C are incorrect. These are Google SRE’s best practices for Incident Management. Option D is CORRECT. Best practice is to prioritize restoring service before root-cause investigations. Reference: https://sre.google/sre-book/managing-incidents/
Question 37 of 65
37. Question
Your team is creating an incident management procedure which will be a guide for your team during incidents. Part of Google’s SRE incident management best practice is the separation of responsibilities. Which of the following responsibilities is not essential during an incident?
Options A, B & C are incorrect. These roles represent the Incident Commander, Operations Lead and Communications Lead, which are the essential roles for incident management. Option D is CORRECT: creating the incident management procedure is a team effort, so anyone can use it when an incident occurs. Reference: https://sre.google/sre-book/managing-incidents/
Question 38 of 65
38. Question
Your team recently pushed an update to production. Several customers are complaining that the service is taking too long to respond. What should you do first following Google’s SRE best practice for effective troubleshooting?
Options A, C & D are incorrect: according to Google’s effective troubleshooting guide, they are not the first thing to do. Option B is CORRECT: according to the guide, it is the first thing to do. Reference: https://sre.google/sre-book/effective-troubleshooting/
Question 39 of 65
39. Question
Your team is designing a new user-facing application to serve requests. Service Level Objectives (SLOs) have been set. Your team has been mandated to ensure the application always meets the set SLOs. Your job is to choose Service Level Indicators (SLIs) that will allow your team to effectively monitor the system so it does not breach the SLOs. Which of the following is Google’s SRE suggested best practice for selecting SLIs?
Option A is incorrect, because choosing too many indicators makes it hard to pay the right level of attention to the indicators that matter. Option B is incorrect, because choosing too few may leave significant behaviours of your system unexamined. Option C is CORRECT: Google’s recommended approach is that an understanding of what your users want from the system will inform the judicious selection of a few indicators. Option D is incorrect, because it means the system isn’t monitored, which is not recommended. Reference: https://sre.google/sre-book/service-level-objectives/
Question 40 of 65
40. Question
Your team manages an application serving a global audience. A recent update caused a service downtime. You have been designated as the Incident commander. Which of the following should not be in the Incident Document according to Google SRE’s best practices?
Options A, B and D are incorrect, because Google SRE best practice is for that information to be in the live Incident State Document while the service is being restored. Option C is CORRECT: this should not be in the document, to promote a blameless culture. Reference: https://sre.google/sre-book/incident-document/
Question 41 of 65
41. Question
You are helping with the design of a data processing pipeline for a company. Data is streamed from different devices into the pipeline and then processed before it is loaded into the final storage for analytic use. You want to identify minimal Service Level Indicators (SLIs) for the pipeline to ensure that the data in the final storage is up to date. Which SLI should not be part of your consideration?
Option D is CORRECT: this SLI provides no monitoring value for the data processing pipeline. Options A, B & C are incorrect because they are recommended SLIs for big data systems: throughput shows the speed of processing, latency shows the total time to process a request, and correctness measures the accuracy of results returned. Reference: https://sre.google/sre-book/service-level-objectives/
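The question's goal, data in the final storage being up to date, is a freshness SLI; it can be sketched as the fraction of records updated within a target age (all names and numbers below are illustrative).

```python
# Illustrative sketch: a data-freshness SLI for a processing pipeline.
# A record is "fresh" if it was updated within target_s seconds of now.

def freshness_sli(update_times, now: float, target_s: float) -> float:
    """Fraction of records refreshed within target_s seconds of 'now'."""
    update_times = list(update_times)
    if not update_times:
        return 1.0
    fresh = sum(1 for t in update_times if now - t <= target_s)
    return fresh / len(update_times)

# Records updated 30s, 90s and 600s ago against a 120s freshness target:
print(freshness_sli([70.0, 10.0, -500.0], now=100.0, target_s=120.0))
```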
Question 42 of 65
42. Question
You are part of the SRE team tasked with writing a postmortem of an outage for one of the services your team manages. Which of these should not be a part of the creation of the postmortem document according to the Google’s SRE best practices?
Options A, B & D are incorrect because they are part of the process for creating a postmortem, which includes figuring out what caused the issue and how to prevent it; it is also a collaborative process with the output shared. Option C is CORRECT because, according to Google’s SRE best practices, all postmortems should be reviewed as part of the culture of learning. Reference: https://sre.google/sre-book/postmortem-culture/
Question 43 of 65
43. Question
Your company currently has its containerised applications deployed in an on-premises Kubernetes cluster. There is a plan to deploy a similar environment in GCP. The company is concerned about the amount of operational effort that will be needed to keep both environments in sync. Which of the following can be used to keep the Kubernetes environments in sync and provide centralised multi-cluster management?
Option A is incorrect. Cloud Build cannot be used to keep environments in sync or for centralised multi-cluster management. Option B is CORRECT. Anthos provides centralised multi-cluster management and can keep the on-premises and GCP Kubernetes environments in sync. Option C is incorrect: Jenkins is a CI/CD tool and cannot be used to keep the environments in sync or for centralised multi-cluster management. Option D is incorrect: connecting VPCs for private IP communication does not keep Kubernetes environments in sync. Reference: https://cloud.google.com/anthos/clusters
Question 44 of 65
44. Question
Your company has tasked you with setting up a Continuous Integration pipeline. When code is committed to the source repository, the pipeline will build docker containers to be pushed to Container Registry and non-container artifacts to be pushed to Cloud Storage. How would you accomplish this? Choose Two.
Options A and C are CORRECT. The images field and the artifacts field in the build config file specify the docker images to be stored in Container Registry and the non-container artifacts to be stored in Cloud Storage, respectively. Options B, D & E are incorrect because there is no such thing as a source repository config file. References: https://cloud.google.com/build/docs/build-config#images https://cloud.google.com/build/docs/build-config#artifacts
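The two fields can be sketched in a single build config; the build step, image name, bucket and artifact path below are all illustrative.

```yaml
# cloudbuild.yaml — images go to Container Registry,
# non-container artifacts go to Cloud Storage (names illustrative)
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-app', '.']
images:
  - 'gcr.io/$PROJECT_ID/my-app'
artifacts:
  objects:
    location: 'gs://my-artifact-bucket/'
    paths: ['my-binary']   # a non-container file produced by the build
```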
Question 45 of 65
45. Question
You are tasked with designing an automated CI pipeline for building and pushing images to Container Registry. In the current system, developers have to issue build commands after code is pushed to the test branch in the source repository. What steps can you take to automate the build of the test branch with the least amount of management overhead?
Correct
Option A is incorrect; triggers are created in Cloud Build. Option B is CORRECT because the correct trigger is the “Push to a branch” event, which starts a build when developers push code to their Cloud Source Repositories branch. Option C is incorrect; the requirement is automating the build when code is committed to the test branch, and there was no mention of raising a pull request. Option D is incorrect; it has a lot of management overhead. Reference: https://cloud.google.com/build/docs/automating-builds/create-manage-triggers#build_trigger
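A sketch of such a trigger created with the gcloud CLI (repo name, branch pattern, and config path are placeholders):

```shell
# Create a Cloud Build trigger that fires on pushes to the "test" branch
# of a Cloud Source Repositories repository.
gcloud builds triggers create cloud-source-repositories \
  --repo=my-repo \
  --branch-pattern='^test$' \
  --build-config=cloudbuild.yaml
```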
Question 46 of 65
46. Question
Your company has deployed all its Cloud Source Repositories in a separate GCP Project. You have been tasked with granting developers in the dev Project access to commit code to the dev repository in that Project. How can you achieve this according to Google’s best practice of least privilege?
Correct
Option A is incorrect. This is too permissive, does not follow the least-privilege best practice, and grants access to all the repos in that Project. Option B is CORRECT. This grants permissions at the repo level to list, clone, fetch, and update the repository. Option C is incorrect. This does not give permission to update repositories. Option D is incorrect. This is too permissive and does not follow the least-privilege best practice. Reference: https://cloud.google.com/source-repositories/docs/configure-access-control#roles_and_permissions_matrix
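As a sketch, a repo-level grant of `roles/source.writer` can be applied with an IAM policy file (repo, project, and group names are placeholders):

```shell
# Grant roles/source.writer on a single repository rather than the whole project.
gcloud source repos get-iam-policy dev-repo \
  --project=repos-project --format=json > policy.json
# Edit policy.json to add a binding such as:
#   {"role": "roles/source.writer", "members": ["group:devs@example.com"]}
gcloud source repos set-iam-policy dev-repo policy.json \
  --project=repos-project
```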
Question 47 of 65
47. Question
Your team is running a production Apache application on Google Compute Engine. You currently monitor default metrics such as CPU utilization. You have a new requirement to monitor metrics from the Apache application in the Google Cloud console. What should you do? Choose Two.
Correct
Options A, D and E are incorrect. Fluentd is used for logging, and you have to install the Monitoring (collectd) agent in order to monitor custom metrics. Options B and C are CORRECT. You have to install the Monitoring (collectd) agent in order to monitor custom metrics such as Apache metrics. Reference: https://cloud.google.com/monitoring/agent/plugins/apache
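A rough sketch of the setup on a Debian/Ubuntu VM (file paths are illustrative, and Apache’s mod_status must be enabled for the plugin to scrape metrics):

```shell
# Install the legacy Monitoring (collectd) agent, then enable its Apache plugin.
curl -sSO https://dl.google.com/cloudagents/add-monitoring-agent-repo.sh
sudo bash add-monitoring-agent-repo.sh --also-install
# Drop the Apache plugin config into the agent's config directory and restart.
sudo cp /opt/stackdriver/collectd/etc/collectd.d/apache.conf \
  /etc/stackdriver/collectd.d/apache.conf
sudo service stackdriver-agent restart
```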
Question 48 of 65
48. Question
Your company is serving an application through the Compute Engine service behind a global load balancer. You have been tasked with monitoring the availability of the application and alerting the on-call engineer if the application is unavailable for more than five minutes. What should you do with the least management overhead?
Correct
Option A is incorrect; this has a lot of overhead, such as installing the logging agent, configuring the right logs to be sent to Cloud Logging, and creating log-based metrics. Option B is incorrect because a service on the instance may not respond when the instance fails. Option C is incorrect. This has a lot of overhead: assuming there are 50 instances, you would have to create an uptime check and alerting policy for each VM instance. Also, if one VM is replaced it will trigger an alert, which is counter-productive when the application is still available. Option D is CORRECT. Creating an uptime check against the load balancer has the least administrative overhead. Reference: https://cloud.google.com/monitoring/uptime-checks
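A sketch of such an uptime check created from the CLI, assuming a recent gcloud version (hostname, project, and check name are placeholders):

```shell
# HTTPS uptime check against the load balancer's public hostname,
# probing every minute with a 10-second timeout.
gcloud monitoring uptime create lb-availability \
  --resource-type=uptime-url \
  --resource-labels=host=lb.example.com,project_id=my-project \
  --protocol=https \
  --period=1 \
  --timeout=10
```

An alerting policy on the uptime check's `check_passed` metric then pages the on-call engineer when the check fails for five minutes.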
Question 49 of 65
49. Question
Your company is planning to deploy a Python application on Google App Engine Standard Environment. There is a requirement to continuously gather CPU usage information from your production application. What steps will help achieve this? Choose Two.
Correct
Options A and B are incorrect. Cloud Trace is not used to continuously gather CPU and memory usage from applications. Option D is incorrect because you don’t have access to the underlying instance when using App Engine Standard Environment. Options C and E are CORRECT. Cloud Profiler is used to continuously gather CPU and memory usage from applications. The Cloud Profiler API needs to be enabled (if it is not already), the client library added to your requirements.txt file (so it is downloaded), and the profiler imported and started in the application. References: https://cloud.google.com/profiler/docs/about-profiler https://cloud.google.com/profiler/docs/profiling-python?authuser=2#flexible-environment
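The setup can be sketched as follows (the service name and version passed to the profiler are placeholders):

```shell
# Enable the Cloud Profiler API and pull in the Python client library.
gcloud services enable cloudprofiler.googleapis.com
echo "google-cloud-profiler" >> requirements.txt
# Then start the profiler early in the app's entry point (main.py):
#   import googlecloudprofiler
#   googlecloudprofiler.start(service='my-service', service_version='1.0.0')
```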
Question 51 of 65
51. Question
Your team is developing an application that will be deployed to production. During the load testing of the application there were application failures, infrastructure issues, and some capacity issues which were resolved and documented for reference in future incidents. Which of the following is not a recommended practice for Incident management?
Correct
Options A, B & C are incorrect. These are Google SRE best practices for incident management, which include prioritizing service restoration during an incident, documenting incident management procedures in advance, and encouraging team members to be familiar with each role in the process. Option D is CORRECT (it is not a recommended practice): best practice is to restore service first and investigate the root cause afterwards. Reference: https://sre.google/sre-book/managing-incidents/
Question 52 of 65
52. Question
Your team is creating an incident management procedure which will be a guide for your team during incidents. Part of Google’s SRE incident management best practice is the separation of responsibilities. Which of the following responsibilities is not essential during an incident?
Correct
Options A, B & C are incorrect. These roles represent the Incident Commander, Operations Lead, and Communications Lead, which are the essential roles for incident management. Option D is CORRECT. Creating the incident management procedure is a team effort done in advance so anyone can use it when an incident occurs; it is not a responsibility during an incident. Reference: https://sre.google/sre-book/managing-incidents/
Question 53 of 65
53. Question
Your team recently pushed an update to production. Several customers are now complaining that the service is taking too long to respond. What should you do first following Google’s SRE best practice for effective troubleshooting?
Correct
Options A, C & D are incorrect because, according to Google’s effective troubleshooting guide, they are not the first thing to do. Option B is CORRECT. According to Google’s effective troubleshooting guide, it is the first thing to do. Reference: https://sre.google/sre-book/effective-troubleshooting/
Question 54 of 65
54. Question
Your team is managing multiple Projects with different applications. You have been asked to centralize all billing data for the projects for ease of analysis. What steps should you take, following Google’s best practice? Choose Two.
You are a DevOps engineer on a large-scale application development project for a multinational company. The development, testing, and production environments consist of several Projects. You have been tasked with designing and implementing a billing export from the multiple Projects to a central billing Project. Following the principle of least privilege, what role will be needed?
Your team manages several applications in different Projects with a central billing Project. There is a requirement from finance to provide a billing breakdown by department or project in BigQuery. How would you accomplish this?
You manage an application deployed on Google Kubernetes Engine (GKE). The application logs are captured by Cloud Logging. You need to remove sensitive data from the application logs before it reaches the Cloud Logging API. Which logging plugin would you use to accomplish this?
Correct
Option A is CORRECT. This plugin is used to add, modify, and delete fields in log entries. Option B is incorrect. This is used to modify tags. Option C is incorrect. This is used for scanning a log stream, either unstructured (text) or JSON-format log records, for multi-line exception stack traces. Option D is incorrect. This “parses” a string field in event records and mutates the event record with the parsed result. Reference: https://cloud.google.com/logging/docs/agent/configuration#modifying_log_records
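For illustration, a record-modifying fluentd filter that redacts an SSN-like pattern before log entries leave the node might look like the sketch below; the match pattern and file path are hypothetical, and on GKE the agent runs as a DaemonSet so the equivalent config lives in a ConfigMap rather than on disk:

```shell
# Hypothetical record_transformer filter for the Logging agent (google-fluentd).
cat <<'EOF' | sudo tee /etc/google-fluentd/config.d/redact.conf
<filter **>
  @type record_transformer
  enable_ruby true
  <record>
    message ${record["message"].to_s.gsub(/\d{3}-\d{2}-\d{4}/, "[REDACTED]")}
  </record>
</filter>
EOF
sudo service google-fluentd restart
```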
Question 58 of 65
58. Question
You manage an application deployed on Google Compute Engine (GCE) in a Managed Instance Group. The application requires high availability and will be used to serve requests for some years. Which option provides the lowest cost for you to accomplish this?
You are responsible for the VPC network design of an application that your team will be deploying on Compute Engine (GCE). Minimal cost for Internet egress traffic charges is a requirement. Which Network Service Tier option provides the lowest cost?
Correct
Option A is incorrect. This tier is more expensive. Option B is incorrect. The question is about Network Service Tiers. Option C is CORRECT. The Standard tier provides a cheaper internet egress rate. Option D is incorrect. There are only two tiers (Premium and Standard). Reference: https://cloud.google.com/vpc/network-pricing#internet_egress
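The Standard tier can be selected per resource or as the project-wide default; a sketch with placeholder names:

```shell
# Create an instance on the cheaper Standard network tier.
gcloud compute instances create web-1 \
  --zone=us-central1-a \
  --network-tier=STANDARD
# Or make Standard the default tier for all new resources in the project.
gcloud compute project-info update --default-network-tier=STANDARD
```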
Question 60 of 65
60. Question
You are on-call managing an application in production. You receive alerts from the monitoring system showing that the application is failing uptime checks. What should you do first, following the SRE best practice for managing incidents?
Correct
Option A is incorrect. This is done at a later stage, after the application is back online. Option B is incorrect. This focuses on just the technical problem and does not cover the bigger picture as an SRE. Option C is incorrect. This is not the first thing to do because you don’t know what the real problem is yet. Option D is CORRECT. This is Google’s recommended approach for incident management: investigate the problem and, if it persists, appoint an incident commander to oversee the resolution. Reference: https://sre.google/sre-book/managing-incidents/
Question 61 of 65
61. Question
Your team is planning to deploy an application to App Engine in the production Project. You need to be able to inspect the state of the app in real time, without stopping or slowing it down. How can you accomplish this?
Correct
Options A, B & C are incorrect. None of them allows you to inspect your application state in real time. Option D is CORRECT. Cloud Debugger lets you inspect the state of an application at any code location without stopping or slowing it down. Reference: https://cloud.google.com/debugger/docs/setup
Question 62 of 65
62. Question
You are responsible for designing the logging of an application. Your company has asked you to ensure logs are sent to the company’s Splunk instance. How should you accomplish this with the least amount of operational overhead?
Correct
Option A is incorrect. This introduces the overhead of managing the Cloud Function. Option B is CORRECT. This is the recommended approach for exporting logs to third-party applications. Option C is incorrect. It is not recommended, and it introduces the overhead of managing the Cloud Function. Option D is incorrect. This is not currently possible. Reference: https://cloud.google.com/logging/docs/export
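Assuming the recommended path is a Pub/Sub log sink consumed by Splunk, the setup can be sketched as follows (topic, sink, project, and filter are placeholders):

```shell
# Route matching logs to a Pub/Sub topic for Splunk to consume.
gcloud pubsub topics create splunk-export
gcloud logging sinks create splunk-sink \
  pubsub.googleapis.com/projects/my-project/topics/splunk-export \
  --log-filter='severity>=INFO'
# Then grant the sink's writer identity (printed by the create command)
# roles/pubsub.publisher on the topic, and point the Splunk Add-on for
# Google Cloud Platform at a subscription on the topic.
```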
Question 63 of 65
63. Question
You are responsible for designing a new logs collection system in your organization. Your company has asked you to ensure all audit logs from all projects in the organization are aggregated in one location. How should you accomplish this? Choose Two.
Correct
Option A is incorrect. In the console, the Logging bucket is created in Logging under Logs Storage, not in Cloud Storage. Options B and D are CORRECT. The logging bucket is created in Logging, and the sink for the organization’s logs is created via the CLI. Option C is incorrect. This cannot be created in the Console. Option E is incorrect. The question asks for aggregating logs in one location, and this does not support that. Reference: https://cloud.google.com/logging/docs/central-log-storage
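The two steps can be sketched from the CLI as follows (project, organization ID, bucket, and sink names are placeholders):

```shell
# Create a central Logging bucket in the dedicated logging project.
gcloud logging buckets create central-audit \
  --location=global --project=central-logging-project
# Create an aggregated, organization-level sink that routes audit logs
# from all child projects into that bucket.
gcloud logging sinks create org-audit-sink \
  logging.googleapis.com/projects/central-logging-project/locations/global/buckets/central-audit \
  --organization=123456789 --include-children \
  --log-filter='logName:"cloudaudit.googleapis.com"'
```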
Question 64 of 65
64. Question
You are responsible for designing a CI/CD pipeline in your organization. Your company has asked you to ensure all data access logs for the pipeline are turned on and kept for at least 90 days. What should you take into consideration before Data Access logs are turned on? Choose Three.
You are responsible for designing a CI/CD pipeline in your organization. Your company has asked you to ensure the continuous deployment (CD) part of the pipeline can handle Blue/Green deployment. How could you accomplish this? Choose Two.