Google Professional Cloud DevOps Engineer Practice Test 4
Question 1 of 65
You are the DevOps Engineer in a Finance company. You manage the Cloud Landscape. The company has several applications on GKE clusters, and the clusters write logs to Cloud Logging. There is a legal requirement to store logs for 7 years. What is the most cost-effective place to store the logs?
Explanation:
Option A is incorrect. A multi-region Standard storage class bucket is more expensive than a single-region Archive storage class bucket.
Option B is CORRECT. This is the best option.
Option C is incorrect. This is an expensive option.
Option D is incorrect. BigQuery is mostly suited to analytics, not long-term storage.
References:
https://cloud.google.com/logging/docs/routing/overview
https://cloud.google.com/storage/docs/storage-classes
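The recommended setup can be sketched as two gcloud commands: a single-region Archive-class bucket, plus a log sink that routes the GKE logs into it. The bucket name, region, and log filter below are illustrative; the commands are printed rather than executed because running them requires an authenticated gcloud session and a real project.

```shell
# Hypothetical bucket name; region and filter are illustrative too.
BUCKET="gke-logs-archive-example"

# 1. Single-region bucket with the Archive storage class (cheapest for 7-year retention).
CREATE_BUCKET="gcloud storage buckets create gs://${BUCKET} --location=us-central1 --default-storage-class=ARCHIVE"

# 2. Log sink routing GKE container logs from Cloud Logging into the bucket.
CREATE_SINK="gcloud logging sinks create gke-archive-sink storage.googleapis.com/${BUCKET} --log-filter='resource.type=\"k8s_container\"'"

# Printed so the sketch stays self-contained.
echo "${CREATE_BUCKET}"
echo "${CREATE_SINK}"
```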
Question 2 of 65
Your company has tasked you with setting up a Continuous Integration pipeline. When code is committed to the source repository, the pipeline will build Docker containers to be pushed to Artifact Registry. How would you accomplish this?
Explanation:
Options B and C are incorrect. There is no such thing as a source repository config file. The images field and the artifacts field in the build config file specify the Docker images to be stored in Container Registry (or Artifact Registry) and the non-container artifacts to be stored in Cloud Storage, respectively.
Option D is incorrect because there is no such thing as a source repository config file.
References:
https://cloud.google.com/build/docs/build-config#images
https://cloud.google.com/artifact-registry/docs/configure-cloud-build
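The role of the images field can be illustrated with a minimal cloudbuild.yaml. The repository path and image name are placeholders; anything listed under images is pushed to the registry after the build succeeds.

```shell
# Write a minimal build config; repo and image names are placeholders.
cat > cloudbuild.yaml <<'EOF'
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'us-central1-docker.pkg.dev/$PROJECT_ID/my-repo/my-app:$COMMIT_SHA', '.']
# Images listed here are pushed to Artifact Registry when the build succeeds.
images:
- 'us-central1-docker.pkg.dev/$PROJECT_ID/my-repo/my-app:$COMMIT_SHA'
EOF

# The image path appears twice: once in the build step, once under images.
grep -c 'pkg.dev' cloudbuild.yaml
```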
Question 3 of 65
You are tasked with designing an automated CI pipeline for building and pushing images to Container Registry when there is a commit with a particular tag. In the current system, developers issue build commands after code is pushed to the test branch in the source repository. What steps can you take to automate the build described above with the least amount of management overhead?
Explanation:
Option A is incorrect; triggers are created in Cloud Build.
Option B is CORRECT because the correct trigger is the “Push new tag” event, which will trigger a build when developers commit code that contains a particular tag.
Option C is incorrect; the requirement is automating the build when code is committed with a particular tag. There was no mention of raising a pull request.
Option D is incorrect; it has a lot of management overhead.
Reference:
https://cloud.google.com/build/docs/automating-builds/create-manage-triggers#build_trigger
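A tag-based trigger of this kind can be sketched with a single gcloud command. The repository name and tag pattern are hypothetical, and the command is printed rather than executed because it needs project credentials.

```shell
# Hypothetical repo and tag pattern; printed rather than executed.
CREATE_TRIGGER="gcloud builds triggers create cloud-source-repositories \
  --repo=my-repo \
  --tag-pattern='^v.*' \
  --build-config=cloudbuild.yaml"

echo "${CREATE_TRIGGER}"
```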
Question 4 of 65
You are tasked with investigating the gradual degradation of a production application’s response time. The application is deployed to a Managed Instance Group of five instances. What steps can you take to investigate this issue with the least amount of overhead?
Explanation:
Option A is incorrect; the logging agent does not capture latency data.
Option B is CORRECT because Cloud Trace provides distributed tracing data for your applications. After instrumenting your application, you can inspect latency data for a single request and view the aggregate latency for an entire application in the Cloud Trace console.
Option C is incorrect; the monitoring agent does not capture latency data.
Option D is incorrect; this is used to investigate the state of your applications in real time and does not contain the latency data needed.
Reference:
https://cloud.google.com/trace/docs/setup
Question 5 of 65
A gaming company has decided to move its operations to the Cloud. Applications will be developed and deployed using cloud services such as Compute Engine. You have been tasked to capture all audit logs from the services used. How can you achieve this?
Explanation:
Options A, B, and D are incorrect. These security logs are enabled by default and do not capture the required logs.
Option C is CORRECT. Data Access logs are not enabled by default. If you want Data Access audit logs to be written for Google Cloud services other than BigQuery, you must explicitly enable them.
Reference:
https://cloud.google.com/logging/docs/audit#data-access
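Data Access audit logs are enabled per service via the auditConfigs section of the project's IAM policy. The fragment below is a sketch for Compute Engine; in practice you would merge it into the full policy and apply it with `gcloud projects set-iam-policy`.

```shell
# IAM policy fragment enabling Data Access audit logs for Compute Engine.
# This is only the auditConfigs portion, not a complete policy document.
cat > audit-policy.yaml <<'EOF'
auditConfigs:
- service: compute.googleapis.com
  auditLogConfigs:
  - logType: ADMIN_READ
  - logType: DATA_READ
  - logType: DATA_WRITE
EOF

grep -q 'DATA_READ' audit-policy.yaml && echo "fragment written"
```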
Question 6 of 65
A new public cloud provider is growing in popularity. Your SRE team has been handling a lot of tickets which relate to increasing the quotas for resources consumed. Following Google’s SRE best practice, how can you improve this?
Explanation:
Google’s SRE principles suggest identifying and reducing toil (repetitive, manual, and automatable processes).
Option B is incorrect. This does not address the repetitive nature of the quota requests, and if they continue to increase you will need to hire more people to handle them.
Options C and D are incorrect. These options do not minimize the toil of responding to quota requests.
Reference:
https://sre.google/workbook/eliminating-toil/ (Business Processes)
Question 7 of 65
Your team is planning on the deployment and monitoring of a new application to the production environment. You are responsible for defining the SLIs, SLOs and SLAs while the application is tested in a staging environment. Which of the following is NOT true about error budgets?
Explanation:
Option A is incorrect. Error budget = 100% – SLO%.
Option B is incorrect. The error budget determines whether new features can be developed or whether more effort is needed to improve the availability and reliability of the service/application.
Option C is incorrect. It is best practice to monitor how fast the error budget is burnt up.
Option D is CORRECT. If the error budget is close to 100%, the SLO of the application is very low, which means the downtime is very high and the application is unreliable.
Reference:
https://cloud.google.com/blog/products/management-tools/sre-error-budgets-and-maintenance-windows
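The "Error budget = 100% – SLO%" relationship can be checked with a couple of lines of arithmetic. The 99.9% SLO and the 30-day window are example values, not anything fixed by the question.

```shell
SLO=99.9   # example SLO over a 30-day window

# Error budget is whatever is left after the SLO: 100% - SLO%.
BUDGET=$(awk "BEGIN {printf \"%.1f\", 100 - $SLO}")

# Translated into allowed downtime for a 30-day month, in minutes.
DOWNTIME_MIN=$(awk "BEGIN {printf \"%.1f\", 30 * 24 * 60 * (100 - $SLO) / 100}")

echo "error budget: ${BUDGET}%"                    # 0.1%
echo "allowed downtime: ${DOWNTIME_MIN} minutes"   # 43.2 minutes
```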
Question 8 of 65
You are designing a banking application. The web application allows users to check their balances stored in a database. You want to identify the minimum Service Level Indicators (SLIs) for the database to ensure it responds within a certain time. What SLIs should you select?
Explanation:
Option A is incorrect. Throughput measures how many requests can be handled per unit of time.
Option B is incorrect. Availability has to do with the uptime of the database.
Option C is CORRECT. Latency measures how long a request takes to complete.
Option D is incorrect. Correctness measures whether the right data was returned.
Reference:
https://sre.google/sre-book/service-level-objectives/ (Indicators in Practice)
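A latency SLI is typically expressed as the fraction of requests completing under a threshold. The sample latencies and the 300 ms threshold below are made up for illustration.

```shell
# Hypothetical per-request latencies, in milliseconds.
cat > latencies.txt <<'EOF'
120
340
95
210
450
EOF

# A simple latency SLI: the fraction of requests served under 300 ms.
SLI=$(awk '$1 < 300 {ok++} END {printf "%.2f", ok/NR}' latencies.txt)
echo "requests under 300 ms: ${SLI}"   # 3 of 5 requests -> 0.60
```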
Question 9 of 65
Your organization has recently decided to build and deploy the new version of its applications in the Cloud. The application is deployed to Compute Engine. During testing, users complain of slow responses from the application. What steps can you take to understand why the application’s response time is high?
Explanation:
Option A is incorrect. Cloud Profiler is used to continuously gather information such as CPU and memory usage from applications.
Option B is CORRECT. Cloud Trace helps you understand how long your application takes to process requests and the overall latency of requests.
Option C is incorrect. Cloud Logging doesn’t show the latency or bottlenecks in your applications. It simply shows application logs as generated by the code.
Option D is incorrect. Cloud Debugger allows you to inspect the application code’s state while it is running.
Reference:
https://cloud.google.com/trace/docs/overview
Question 10 of 65
A customer deployed an application on Compute Engine. The instance uses the default service account, and the application writes to the logs of the instance. You have been asked to investigate why no logs are appearing in Cloud Logging. Which of the following is most likely the problem?
Explanation:
Without the logging agent, VMs cannot write logs to Cloud Logging.
Option B is incorrect. If a logging agent is installed, the VM can send logs to Cloud Logging.
Options C and D are incorrect. The default service account has the Editor role attached, which has sufficient permissions to write to Cloud Logging.
Reference:
https://cloud.google.com/logging/docs/agent/logging
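Installing the legacy Cloud Logging agent on a VM follows the pattern below. The commands are printed rather than executed here because they only make sense on a Compute Engine instance; check the current agent documentation before relying on the script URL.

```shell
# Typical install commands for the legacy logging agent on a Compute Engine VM;
# printed rather than executed since they need an actual instance.
INSTALL_CMDS='curl -sSO https://dl.google.com/cloudagents/add-logging-agent-repo.sh
sudo bash add-logging-agent-repo.sh --also-install
sudo service google-fluentd status'

echo "${INSTALL_CMDS}"
```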
Question 11 of 65
Your team has deployed a new version of a service and suddenly more instances are being created in your Kubernetes cluster. Your service scales when average CPU utilization is greater than 80%. What tool would help you investigate the problem?
Correct
Cloud Profiler provides insight into how CPU and memory is consumed by applications. This will help with understanding why your cluster is scaling out. Option B is incorrect. Cloud Trace shows the latency data and how requests flow through your application. Option C is incorrect. Cloud logging is used to collect logging data from applications and services, it does not show the root cause of the scaling action in your cluster. Option D is incorrect. Cloud Monitoring provides a centralised view of metrics that can be monitored for GCP services. It does not show the root cause of the scaling action in your cluster. Reference https://cloud.google.com/profiler/docs/about-profiler https://cloud.google.com/monitoring
Question 12 of 65
A betting organization analyses the bets placed on its website at night. The analysis takes about 5 hours and must be run between midnight and 6am. The bets are analysed using standard Compute Engine instances and cannot handle interruptions. You have been tasked with optimising the cost of the analysis which is to run for another twelve months. Which of the following is the best option?
Explanation:
Committed use discounts give the lowest cost, with a commitment of 1 or 3 years. The project will run for 12 months, which can make use of the discount from committed use.
Option B is incorrect. Preemptible instances provide the lowest cost and are excellent for batch processing, but the question specifies the analysis cannot handle interruptions.
Option C is incorrect. The standard instance is the most expensive option.
Option D is incorrect. This does not optimize the cost because of the overhead of re-platforming the application.
References:
https://cloud.google.com/compute/docs/instances/preemptible
https://cloud.google.com/compute/docs/instances/signing-up-committed-use-discounts
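The saving can be made concrete with back-of-the-envelope arithmetic. The $100/month on-demand price and the 37% one-year discount are illustrative assumptions only; actual committed-use discounts vary by machine type and region.

```shell
# Illustrative numbers only: assumed $100/month on-demand cost and an
# assumed 37% one-year committed-use discount.
ONDEMAND_MONTHLY=100
DISCOUNT_PCT=37

COMMITTED_MONTHLY=$(( ONDEMAND_MONTHLY * (100 - DISCOUNT_PCT) / 100 ))
echo "12-month on-demand total: \$$(( ONDEMAND_MONTHLY * 12 ))"    # $1200
echo "12-month committed total: \$$(( COMMITTED_MONTHLY * 12 ))"   # $756
```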
Question 13 of 65
Your DevOps team is responsible for implementing the aggregated logs collection for all the projects in the Google organization of your company. Your team needs to reduce the quantity of logs collected to save costs. Which of the following log types CANNOT be disabled?
Explanation:
Option A is incorrect. Firewall logs can be disabled.
Option B is incorrect. VPC Flow Logs can be disabled.
Option C is CORRECT. Policy Denied logs cannot be disabled.
Option D is incorrect. Data Access logs can be disabled.
Reference:
https://cloud.google.com/logging/docs/audit#types
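Reducing stored log volume is commonly done by adding exclusions to the _Default sink. The exclusion name and filter below are hypothetical, and the command is printed rather than executed since it needs project credentials; verify the exact flag syntax against the gcloud reference.

```shell
# Hypothetical exclusion on the _Default sink to stop storing VPC Flow Logs;
# printed rather than executed.
EXCLUDE_CMD='gcloud logging sinks update _Default \
  --add-exclusion=name=exclude-vpc-flows,filter="log_id(\"compute.googleapis.com/vpc_flows\")"'

echo "${EXCLUDE_CMD}"
```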
Question 14 of 65
Your team is planning on the structure of the Cloud Monitoring workspace that will monitor multiple projects. You need to grant permissions to the service account of Compute Engine instances to send metric data to Cloud Monitoring. Following the principle of least privilege, which of the following roles should be assigned?
Explanation:
Option A is incorrect. The Monitoring Admin role assigns more permissions than are required for the job.
Option B is CORRECT. Monitoring Metric Writer provides enough permissions for users or service accounts to write metrics to Cloud Monitoring.
Options C and D are incorrect. The Logging Admin and Logs Configuration Writer roles are used for access to Cloud Logging.
Reference:
https://cloud.google.com/monitoring/access-control
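Granting the role can be sketched as a single IAM binding. The project ID and service account email are placeholders; the command is printed rather than executed because it needs real credentials.

```shell
# Hypothetical project and default Compute Engine service account.
PROJECT="my-project"
SA="123456789012-compute@developer.gserviceaccount.com"

BIND_CMD="gcloud projects add-iam-policy-binding ${PROJECT} \
  --member=serviceAccount:${SA} \
  --role=roles/monitoring.metricWriter"

echo "${BIND_CMD}"
```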
Question 15 of 65
Your team has deployed a Java application to a Managed Instance Group. The Compute Engine instances have the logging agent installed, and application logs are sent to Cloud Logging. You have been tasked with creating an alerting policy if the errors in the application logs exceed a threshold. Which of the following is NOT a valid notification channel for your alert policy?
Explanation:
Option A is incorrect. Slack is a valid notification channel.
Option B is incorrect. Webhooks are a valid notification channel.
Option C is incorrect. PagerDuty Services is a valid notification channel.
Option D is CORRECT. Twitter is not a valid notification channel.
Reference:
https://cloud.google.com/monitoring/support/notification-options
Question 16 of 65
Your team has deployed a Python application to a Managed Instance Group. The Managed Instance Group is placed behind a load balancer. You have been tasked with ensuring the load balancer only sends requests to instances that are working. Which of the following helps you achieve this?
Explanation:
Option A is incorrect. Readiness probes are used by Kubernetes to check whether pods are ready to receive traffic.
Option B is incorrect. Liveness probes are used by Kubernetes to check whether a pod is in the running state; if it is not, the pod is restarted.
Option C is CORRECT. Health checks are used by the load balancer to determine whether a backend is reachable (responds to traffic).
Option D is incorrect. Uptime checks are used to check whether an application responds or is reachable.
References:
https://cloud.google.com/monitoring/uptime-checks
https://cloud.google.com/load-balancing/docs/health-check-concepts
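Creating an HTTP health check to attach to the backend service can be sketched as below. The check name, path, and thresholds are illustrative; the command is printed rather than executed because it needs project credentials.

```shell
# Hypothetical HTTP health check for the load balancer's backend service;
# printed rather than executed.
HC_CMD="gcloud compute health-checks create http app-hc \
  --port=80 \
  --request-path=/healthz \
  --check-interval=10s \
  --unhealthy-threshold=3"

echo "${HC_CMD}"
```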
Question 17 of 65
You are a DevOps engineer for a social media company. You are on the monitoring team for their flagship web application that is growing rapidly. The application is deployed on Managed Instance Groups behind a HTTP(S) load balancer. The number of logs created by the application is causing the Project to exceed the logging API quota. You have created exclusion filters in Cloud Logging. You notice the issue persists. What could be the problem?
Explanation:
Options A and B are incorrect. Exclusion filters work after the logging API has been called and the logs are in Cloud Logging.
Option C is CORRECT. The problem is the number of entries.write API calls, which push logs to Cloud Logging before exclusion filters can be applied. The solution is to reduce the logs collected.
Option D is incorrect. There is no need for extra permissions; the Managed Instance Group can already access Cloud Logging.
References:
https://cloud.google.com/logging/docs/exclusions
https://cloud.google.com/logging/quotas#log-limits
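To illustrate why the exclusion filter did not help: exclusions are attached to sinks and discard entries only after they have already been written through the entries.write API. A sketch (filter and name are illustrative):

```shell
# Add an exclusion to the _Default sink. Matching entries are dropped
# inside Cloud Logging, but the entries.write API quota is still
# consumed before the exclusion is evaluated.
gcloud logging sinks update _Default \
    --add-exclusion=name=drop-debug,filter='severity<=DEBUG'
```

Because the quota is charged on ingestion, the fix is to stop collecting the logs at the source (agent or application configuration), not to exclude them afterwards.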
Question 18 of 65
18. Question
You are a DevOps engineer for a tech company. You are responsible for the production Project. At the end of the month, you are informed by finance that charges from stored logs are very high. You have been asked to investigate and reduce the number of logs generated in the project. Which of the following is unlikely to be generating a lot of logs?
Question 19 of 65
19. Question
A customer has multiple projects in Google Cloud. The projects represent the different environments. You have been tasked with sending certain logs from all projects to Splunk. There is a requirement to send any data access logs to Splunk. Which of the following DOES NOT help you meet this requirement?
Correct
Option A is incorrect. A Pub/Sub topic is needed to send logs to Splunk. Option B is incorrect. A log sink is needed to route the selected logs to the required destination. Option C is incorrect. The Logging service account needs permission to write to the Pub/Sub topic. Option D is CORRECT. A Cloud Storage bucket is not needed for routing logs to Splunk. Reference https://cloud.google.com/architecture/exporting-stackdriver-logging-for-splunk#set_up_the_logging_export
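The export path above can be sketched with gcloud. Project, topic and sink names are placeholders, and the sink's writer identity (shown by `gcloud logging sinks describe`) is left as a placeholder rather than guessed:

```shell
# 1. Create the Pub/Sub topic that Splunk will consume from.
gcloud pubsub topics create splunk-logs

# 2. Create a sink that routes Data Access audit logs to the topic.
gcloud logging sinks create splunk-sink \
    pubsub.googleapis.com/projects/my-project/topics/splunk-logs \
    --log-filter='logName:"cloudaudit.googleapis.com%2Fdata_access"'

# 3. Grant the sink's writer identity permission to publish.
gcloud pubsub topics add-iam-policy-binding splunk-logs \
    --member="serviceAccount:SINK_WRITER_IDENTITY" \
    --role="roles/pubsub.publisher"
```

Note there is no Cloud Storage bucket anywhere in this chain, which is why Option D does not help.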
Question 20 of 65
20. Question
You work as a DevOps Engineer for an energy client. The client runs their applications on Google Kubernetes Engine and logs are sent to Cloud Logging. They would like to use the logs generated to monitor application usage in real time. What is the best destination for the export sink?
Question 21 of 65
21. Question
A large professional services client uses Google Cloud for some of its workload. Your DevOps team is now required to route all logs that show actions taken by Google staff in its account to a separate logging bucket. Which of the following helps you achieve this?
Correct
Option A is CORRECT. Access Transparency logs show all actions taken in the account by Google staff. Option B is incorrect. Admin Activity logs show actions that modify the configuration or metadata of resources. Option C is incorrect. Data Access logs show actions that read the configuration or metadata of resources, as well as user API calls that perform CRUD operations. Option D is incorrect. System Event logs are generated by Google systems for Google Cloud actions that modify the configuration of resources. Reference https://cloud.google.com/logging/docs/view/available-logs https://cloud.google.com/logging/docs/audit#types
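As a sketch, the routing itself is a log sink whose filter matches the Access Transparency log name and whose destination is the dedicated logging bucket (project, bucket and sink names are placeholders):

```shell
# Route Access Transparency logs (actions taken by Google staff)
# to a separate user-defined logging bucket.
gcloud logging sinks create access-transparency-sink \
    logging.googleapis.com/projects/my-project/locations/global/buckets/google-staff-logs \
    --log-filter='logName:"cloudaudit.googleapis.com%2Faccess_transparency"'
```

Access Transparency must also be enabled on the organization before these logs are generated at all.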
Question 22 of 65
22. Question
Your customer is a financial organization, and you are responsible for setting up an automated CICD pipeline to deploy applications to GKE clusters in production. You need to restrict the kinds and origin of images that can be used to deploy containers into clusters. How can you achieve this?
Correct
Option A is incorrect. Firewall rules are a network-level restriction on traffic flow; they cannot control the images that are used to deploy on a cluster. Option B is incorrect. Custom routes are a network-level mechanism for routing traffic to and from the VPC. Option C is CORRECT. Binary Authorization can be used to allow or block deployment of images using policies. Option D is incorrect. IAM can be used to grant access to services such as GKE and Container Registry, but it cannot control the images that are used to deploy on a GKE cluster. Reference https://cloud.google.com/binary-authorization/docs/overview
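As a rough sketch of how Binary Authorization is wired up (cluster name and zone are placeholders, and the exact enablement flag varies across gcloud versions, so treat this as illustrative):

```shell
# Turn on Binary Authorization enforcement for an existing cluster.
gcloud container clusters update prod-cluster \
    --zone=us-central1-a \
    --binauthz-evaluation-mode=PROJECT_SINGLETON_POLICY_ENFORCE

# Export the project policy, edit its allowlist/attestor rules to
# restrict allowed image origins, then re-import it.
gcloud container binauthz policy export > policy.yaml
gcloud container binauthz policy import policy.yaml
```

The policy file is where you restrict the kinds and origins of images, for example by allowlisting only a trusted registry path.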
Question 23 of 65
23. Question
Your Site Reliability (SRE) team members are managing the CICD of your organization. Applications are deployed to Compute Engine instances. There is a requirement to send the logs of the instances in the Development Projects to a user-created bucket. Which step can you take to achieve this?
Correct
Option A is CORRECT. The log sink is created in Cloud Logging, and the destination should be a Cloud Storage bucket because you are told the logs should go into a user-created bucket. Option B is incorrect. Cloud Pub/Sub is not a bucket. Options C and D are incorrect. The log sink is created in Cloud Logging. Reference https://cloud.google.com/logging/docs/export/configure_export_v2
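A minimal sketch of this setup, with bucket, sink name, filter and writer identity as placeholders:

```shell
# Route Compute Engine instance logs to a user-created
# Cloud Storage bucket.
gcloud logging sinks create dev-instance-logs \
    storage.googleapis.com/dev-logs-archive \
    --log-filter='resource.type="gce_instance"'

# Grant the sink's writer identity (reported by
# `gcloud logging sinks describe dev-instance-logs`) write access.
gsutil iam ch \
    serviceAccount:SINK_WRITER_IDENTITY:roles/storage.objectCreator \
    gs://dev-logs-archive
```

Without the IAM grant on the bucket, the sink is created but exports silently fail.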
Question 24 of 65
24. Question
A customer has opted to use an external source code management system such as GitLab. The customer wants to use Cloud Build for its Continuous Integration and Deployment to Cloud Run. They would like to automatically trigger a build in Cloud Build when code is pushed to GitLab. How can this be done?
Question 25 of 65
25. Question
You are designing the CICD pipeline for a customer. The pipeline will be used by developers to push changes to production. The customer strategy dictates the use of cloud native tools in the pipeline. Cloud Source Repositories and Cloud Build have been chosen. The customer has requested that automated builds in the pipeline are approved by a senior engineer. How can this be done?
Correct
Option A is incorrect. Approval is turned on in Cloud Build, not in the cloudbuild.yaml file. Option B is incorrect. Triggers are created in Cloud Build. Option C is incorrect. This does not specify whether Approval is turned on. Option D is CORRECT. Cloud Build can be triggered and remain in a pending state until approval is received, if Approval is turned on. Reference https://cloud.google.com/build/docs/automating-builds/approve-builds
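One way this could look on the command line, assuming a recent gcloud version where trigger creation supports a require-approval flag (repo and file names are placeholders):

```shell
# Create a trigger whose builds wait in a pending state until a user
# with the Cloud Build Approver role approves them.
gcloud builds triggers create cloud-source-repositories \
    --repo=my-repo \
    --branch-pattern='^main$' \
    --build-config=cloudbuild.yaml \
    --require-approval
```

The same setting can be toggled per trigger in the Cloud Build console, which is where the senior engineer would approve or reject pending builds.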
Question 26 of 65
26. Question
You work as a DevOps Engineer for a client. The company uses cloud native tools for its CICD pipeline. Automated build is done using Cloud Build when code is pushed to repositories in Cloud Source Repositories. Which of the following CANNOT be used as a trigger with Cloud Source Repositories?
Correct
Option A is incorrect. This can be used to trigger Cloud Build when code is pushed to a specified branch (or any branch) in Cloud Source Repositories. Option B is CORRECT. This option cannot be used to trigger builds in Cloud Build if Cloud Source Repositories is used. Option C is incorrect. This can be used to trigger Cloud Build when code is pushed with a new tag to any branch of the repository in Cloud Source Repositories. Option D is incorrect. Cloud Build can be manually triggered to build code in the repository in Cloud Source Repositories. Reference https://cloud.google.com/build/docs/automating-builds/create-manage-triggers
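The two event types that do work with Cloud Source Repositories can be sketched as follows (repo name, patterns and config file are placeholders):

```shell
# Trigger on pushes to any branch.
gcloud builds triggers create cloud-source-repositories \
    --repo=my-repo \
    --branch-pattern='.*' \
    --build-config=cloudbuild.yaml

# Trigger on pushes of a new tag.
gcloud builds triggers create cloud-source-repositories \
    --repo=my-repo \
    --tag-pattern='.*' \
    --build-config=cloudbuild.yaml
```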
Question 27 of 65
27. Question
You work as a DevOps Engineer for a client. Developers make changes and push code to branches in a repository. Each branch is merged into a staging branch daily. The client wants to trigger a build of the staging branch every night. How can you achieve this? Choose TWO.
Correct
Options A and C are incorrect, and Option B is CORRECT: scheduled triggers are created with “manual invocation” as the event. Option D is incorrect. Triggers are created in Cloud Build, not Cloud Scheduler. Option E is CORRECT. After creating the trigger in Cloud Build, a Cloud Scheduler job needs to be created to invoke it on a schedule. Reference https://cloud.google.com/build/docs/automating-builds/create-scheduled-triggers
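The scheduler half of this setup can be sketched like so, following the documented pattern of POSTing to the trigger's `:run` endpoint (project, trigger name, schedule and service-account email are placeholders):

```shell
# Nightly Cloud Scheduler job that runs a manual-invocation
# Cloud Build trigger against the staging branch at 02:00.
gcloud scheduler jobs create http nightly-staging-build \
    --schedule='0 2 * * *' \
    --uri='https://cloudbuild.googleapis.com/v1/projects/my-project/triggers/staging-trigger:run' \
    --http-method=POST \
    --message-body='{"branchName": "staging"}' \
    --oauth-service-account-email=build-scheduler@my-project.iam.gserviceaccount.com
```

The service account needs permission to invoke Cloud Build (for example the Cloud Build Editor role).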
Question 28 of 65
28. Question
An organization is planning to use an automated CI/CD pipeline to deploy applications to Compute Engine. The organization would like to use a combination of cloud-native and open-source tools for the pipeline. Which of the following helps you achieve this?
Question 29 of 65
29. Question
You are the DevOps Engineer in a healthcare start-up firm. The company has a new application it is testing. Before the application is promoted to production for live traffic, you have been tasked with creating an incident response strategy. Which of the following are incident response team roles that should be delegated?
Correct
Option A is incorrect. These are not roles as defined in Google’s incident response team. Option B is CORRECT. These are the distinct roles that need to be delegated during an incident as defined in Google’s SRE book. Option C is incorrect. These are not roles as defined in Google’s incident response team. Option D is incorrect. These are not roles as defined in Google’s incident response team. Reference https://sre.google/sre-book/managing-incidents/
Question 30 of 65
30. Question
A gaming company recently launched a new version of its popular game. The traffic to the company’s site has increased by over 70%. Users are now complaining of timed-out requests when they attempt to launch the game. Your team declares an incident. What action is the most important?
Correct
Option A is incorrect. Restoring service during an incident should be the top priority; root-cause analysis can happen afterwards. Option B is incorrect. Restoring service during an incident should be the top priority; writing the postmortem document can happen afterwards. Option C is CORRECT. Restoring service (mitigation) during an incident should be the top priority. Option D is incorrect. Finger-pointing is not recommended during or after incidents. Reference https://sre.google/sre-book/managing-incidents/ https://sre.google/workbook/incident-response/
Question 31 of 65
31. Question
Your client just recovered from a major outage that disrupted application service for almost an hour. Your DevOps team has been tasked with creating a document that summarizes the events that took place during the incident. Which of the following documents will you create?
Correct
Option A is incorrect. Alerts are created to notify based on measured metrics exceeding or falling below a threshold. Option B is incorrect. Support tickets are usually created for tasks requested by a customer. Option C is CORRECT. The postmortem is a document that records an incident, its impact and any mitigating actions taken to resolve it. Option D is incorrect. This is the job of the Communications Lead in the Project, not DevOps. Reference https://sre.google/sre-book/postmortem-culture/
Question 32 of 65
32. Question
You are planning on deploying Nginx using Kubernetes Engine. You need to track the number of requests Nginx has serviced.
Which of the following can help you achieve this? CHOOSE TWO
Question 33 of 65
33. Question
You are the on-call SRE for a betting company. You are managing an application deployed on App Engine flexible environment within a custom VPC. The application accepts user traffic from anywhere using HTTPS. You have been tasked with logging all successful incoming SSH traffic to the GCE instances from the company network. How will you achieve this?
Correct
Option A is incorrect. The firewall rule should allow ingress (incoming) traffic, not deny it. Option B is CORRECT. The firewall rule should allow ingress (incoming) traffic on port 22 (SSH) and logging should be turned on, so the logs appear in Cloud Logging. Options C and D are incorrect. The firewall rule should affect ingress (incoming) traffic, not egress (outgoing). Reference: https://cloud.google.com/vpc/docs/firewall-rules-logging
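A sketch of such a rule with gcloud (network name, rule name and the corporate CIDR are placeholders):

```shell
# Allow SSH from the company network and log every connection the
# rule matches; the logs then appear in Cloud Logging.
gcloud compute firewall-rules create allow-corp-ssh \
    --network=custom-vpc \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:22 \
    --source-ranges=203.0.113.0/24 \
    --enable-logging
```

Firewall rules logging records connections the rule allows or denies, which is what makes the successful SSH attempts visible.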
Question 34 of 65
34. Question
Your company has several Google projects in its organisation. As part of the monitoring strategy, the projects will be added to specified workspaces. Your team has been assigned the task of creating the workspaces. Following the principle of least privilege, what IAM role would your team need to create workspaces?
Correct
Option A is incorrect. The Project Editor role does not have the required permissions. Option B is CORRECT. The Monitoring Editor role has the required permissions and is not overly permissive. Options C and D are incorrect. These roles are too permissive. Reference: https://cloud.google.com/monitoring/workspaces/create
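Granting that role is a single IAM binding (project ID and member are placeholders):

```shell
# Grant the least-privileged role needed to create Monitoring
# workspaces in the project.
gcloud projects add-iam-policy-binding my-project \
    --member='user:sre-team-member@example.com' \
    --role='roles/monitoring.editor'
```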
Question 35 of 65
35. Question
Your company has multiple projects in Google Cloud. The projects represent the available environments such as development, test, pre-production and production. A centralised logging system needs to be implemented where all the environments send their logs to a security project. There is a requirement to not send any logs generated by an apache application to the security project. What steps can you take to achieve this? Choose TWO
Correct
Create a logging bucket in the security project. The sink destination should be the logging bucket in the security project, and the exclusion filter rate should be 100 so that all apache logs are excluded. Options B and E are incorrect. Keeping the logs in the different projects does not meet the requirement of sending logs to the security project. Option C is incorrect. A filter rate of 0 means no apache logs are excluded. Reference: https://cloud.google.com/logging/docs/exclusions#create-filter
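A sketch of the per-environment sink, created with an exclusion so apache entries never reach the security project (project, bucket and filter values are placeholders; exclusions default to dropping 100% of matching entries):

```shell
# In each environment project: route logs to the security project's
# logging bucket, excluding all apache logs.
gcloud logging sinks create central-security-sink \
    logging.googleapis.com/projects/security-project/locations/global/buckets/central-logs \
    --exclusion=name=drop-apache,filter='logName:"apache"'
```

The sink's writer identity in each source project still needs write access on the destination bucket in the security project.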
Question 36 of 65
36. Question
You are part of the Site Reliability Engineering Team at your company. Your team manages all the updates to production, and review of application performance in production. Recently there was an incident in production that affected a whole region of users. A meeting has been called to review the incident. Following Google’s best practice, which of the following should not be discussed?
Correct
Options A, B and D are incorrect. These can be discussed at the meeting. Option C is CORRECT. Team members‘ involvement in causing the incident should not be part of the post-mortem review: blaming individuals does not promote a blameless culture, and others may be motivated to cover up facts critical to understanding and preventing recurrence. Reference: https://sre.google/workbook/postmortem-culture/
Question 37 of 65
37. Question
You are the on-call SRE for a growing media company. You are managing an application deployed on Compute Engine within a custom VPC. The application accepts user traffic from anywhere using HTTPS. You have been tasked with logging all failed incoming SSH traffic to the GCE instances. How will you achieve this?
Correct
Option A is CORRECT. The firewall rule should deny ingress (incoming) traffic on port 22 (SSH) and logging should be turned on, so the logs appear in Cloud Logging. Option B is incorrect. The firewall rule should deny ingress (incoming) traffic, not allow it. Options C and D are incorrect. The firewall rule should affect ingress (incoming) traffic, not egress (outgoing). Reference: https://cloud.google.com/vpc/docs/firewall-rules-logging
Question 38 of 65
38. Question
You are part of an on-call Site Reliability Engineering team managing a web application in production. The application serves user requests from several regions. A new update was deployed over the weekend to introduce new features into the application. Users are reporting errors and failed requests from the application. Your team declares an incident, assesses the impact and discovers the issue is affecting users in one region. Which of the following is the recommended action?
Correct
Option A is CORRECT. Mitigating the impact is the first recommended step once you know the extent of the incident. Options B, C and D are incorrect. Performing root-cause analysis and writing the post-mortem are done after service is fully restored. Reference: https://sre.google/workbook/incident-response/ (Case Study 2)
Question 39 of 65
39. Question
You are part of an on-call SRE team managing an apache web service application in production. The application is deployed to Google Compute Engine. The FluentD agent is installed on the GCE instance. You have been tasked with reviewing the apache logs from the application. Which of the following queries helps you do this?
Correct
Option A is incorrect. This has a resource type set to App Engine. Option B is CORRECT. This has a resource type of Compute Engine and the appropriate log name. Options C and D are incorrect. These queries target the Admin Activity logs and Data Access logs. Reference: https://cloud.google.com/logging/docs/view/query-library-preview#logging-agent-filters
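The correct query resembles the following Cloud Logging filter (PROJECT_ID is a placeholder; `apache-access` is the default log name used by the agent's Apache plugin, and may differ if the agent was configured otherwise):

```
resource.type="gce_instance"
logName="projects/PROJECT_ID/logs/apache-access"
```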
Question 40 of 65
40. Question
Your organization has several applications running on Compute Engine. The instances generate logs and metrics which are being monitored on dashboards. There is a new requirement to capture operating system (OS) level logs for security reasons. How can you achieve this?
Correct
Option A is incorrect. The FluentD agent is needed for OS-level logs, not Collectd. Option B is incorrect. Logs-based metrics are only possible for logs already available in Cloud Logging. Option C is CORRECT. The FluentD agent is needed for OS-level logs. Option D is incorrect. A sink can only route logs that are available in Cloud Logging. Reference: https://cloud.google.com/logging/docs/agent/logging/installation#gce-ui-install
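Installing the FluentD-based Cloud Logging agent on a GCE instance follows the documented two-step script, run on the instance itself:

```shell
curl -sSO https://dl.google.com/cloudagents/add-logging-agent-repo.sh
sudo bash add-logging-agent-repo.sh --also-install
```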
Question 41 of 65
41. Question
Your company has three environments called production, staging and development. A GCP Project has been set up for each environment. There is also a monitoring project with two workspaces, one for production and the other for development and staging. A GKE cluster has been set up in both staging and development for testing an application to be deployed to production. Both clusters have a service called app-serve, and an alerting policy was created to monitor the service in the workspace. When there is an incident on the service, the GKE monitoring dashboard cannot associate this incident uniquely with the development service or the staging service. How can you resolve this with little operational overhead?
Correct
Options A and B are incorrect. These options both involve the additional operational overhead of monitoring an extra workspace and renaming all the components that use the service. Option C is CORRECT. This is the best option with the least amount of overhead. Option D is incorrect. This has a lot of operational overhead and might not solve the problem if the same service name is used. Reference: https://cloud.google.com/stackdriver/docs/solutions/gke/troubleshooting#alerting
Question 42 of 65
42. Question
Your SRE team is responsible for monitoring and logging of the applications in Production Projects. The applications are deployed on different resources like Compute Engine and GKE. Your team has created a centralised monitoring dashboard in the monitoring Project for the metrics from all the production Projects. An uptime check was created for the applications. You have been tasked with setting up the Notification channels for one of the applications to send the notification to a public endpoint. Which of these helps you meet the requirement?
A company wants to use GCP for their development and deployment of applications. They have set up an organization, folders and projects. They want to set up multiple Cloud Source Repositories (CSR) in one Project. Different teams have different access requirements to the CSRs in the Project. Which of the following is the best way of managing access to the CSR for the different teams?
Correct
Options A, B and C are incorrect. These are not suitable because there is no way to assign permissions per repository at the project, folder or organization level. Option D is CORRECT. This is the best way, because you can assign different roles to different teams in each repository. Reference: https://cloud.google.com/source-repositories/docs/granting-users-access#grant_push_permissions_for_a_repository
Question 44 of 65
44. Question
You are on the SRE team that monitors production-grade applications. One of your team members notices that an application's performance has degraded, and customers are noticing. As this incident begins to unfold, what is Google's recommended first action for managing incidents?
Correct
Options A, C and D are incorrect. These are not the first roles that need to be assigned. Option B is CORRECT. This is the first role that should be assigned during an incident. Reference: https://sre.google/sre-book/managing-incidents/ (Elements of Incident Management Process)
Question 45 of 65
45. Question
You are one of the on-call engineers managing an application running in production. A recent update has caused the application's response time to increase drastically. An incident has been declared and all the roles except the Planning Lead have been assigned. Following Google's SRE practice, who is to assume this role and its responsibilities?
Correct
Options A, C and D are incorrect. The Incident Commander takes on any unassigned roles during an incident. Option B is CORRECT. Any unassigned responsibilities should be handled by the Incident Commander. Reference: https://sre.google/sre-book/managing-incidents/ (Recursive Separation of Responsibilities)
Question 46 of 65
46. Question
You are on a cross-functional team of SREs and product developers managing an application that needs to be deployed to production. Metrics for measuring reliability and performance of the application have been agreed on. There is a need to decide the frequency of releasing new changes. Following Google’s SRE practice, what measure should be used to control this?
Correct
Options A, B and C are incorrect. The decision on how frequently to push new changes is based on the amount of error budget left. Option D is CORRECT. The amount of error budget is used to determine how frequently new releases should be pushed to production, so the application's reliability does not fall below the agreed SLO/SLA. Reference: https://sre.google/sre-book/embracing-risk/ (Forming Your Error Budget)
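As a concrete illustration of how an error budget is derived: a 99.9% availability SLO over a 30-day window leaves an error budget of 0.1%, i.e. 30 × 24 × 60 × 0.001 ≈ 43.2 minutes of tolerable downtime per month. Releases can continue as long as that budget has not been spent.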
Question 47 of 65
47. Question
You are on the SRE team of your company. The client has decided to also keep, for two years, the logs that record Compute Engine operations that read user-provided data in the Production project, in order to fulfil a new compliance requirement. Which of the following can help you achieve this? Choose TWO.
Correct
Option A is incorrect. The _Required Logs Bucket cannot be edited. Option B is incorrect. The _Required Logs Sink inclusion filters do not capture the specified logs. Option C is CORRECT. Create a new Logs Bucket with the desired retention and a Sink to collate the specified logs; the inclusion filter of the _Default Sink captures the specified logs. Option D is CORRECT. The Data Read audit log of Compute Engine needs to be enabled. Option E is incorrect. The Data Write audit log of Compute Engine does not meet the criteria of the logs specified. Reference: https://cloud.google.com/logging/docs/audit/configure-data-access#config-console-enable https://cloud.google.com/logging/docs/audit#data-access
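The correct pair of actions can be sketched with gcloud (names such as compliance-bucket and compliance-sink are illustrative; two years ≈ 730 days):

```shell
# Create a user-defined Logs Bucket with a 2-year retention period
gcloud logging buckets create compliance-bucket \
    --location=global --retention-days=730

# Route Compute Engine Data Access audit logs into that bucket
gcloud logging sinks create compliance-sink \
    logging.googleapis.com/projects/PROJECT_ID/locations/global/buckets/compliance-bucket \
    --log-filter='logName="projects/PROJECT_ID/logs/cloudaudit.googleapis.com%2Fdata_access"'
```

Data Read audit logs must also be enabled for Compute Engine, since they are disabled by default.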
Question 48 of 65
48. Question
You are planning on deploying a JVM application using Compute Engine. You need to track the peak number of live threads in the instance. Which of the following can help you achieve this? Choose TWO.
Correct
Options A and C are incorrect. The monitoring agent is pre-installed on GKE; all you need to do is enable it and select the logging and monitoring type you want. Options B and D are CORRECT. This is the procedure for installing the monitoring agent to capture application-level metrics on Compute Engine. Option E is incorrect. This is used for installing the logging agent on Compute Engine. Reference: https://cloud.google.com/monitoring/agent/plugins/jvm
Question 49 of 65
49. Question
You work as a DevOps Engineer for a start-up company. The company’s strategy is to use an automated CI/CD pipeline to deliver software faster. You have been tasked with choosing the tools for the pipeline. A key requirement is selecting a repository that can trigger builds in Cloud Build. Which of the following repositories does not meet the requirements?
Correct
Option A is CORRECT. There is currently no way to trigger a build in Cloud Build using AWS CodeCommit. Option B is incorrect. This option can be used to trigger builds in Cloud Build. Option C is incorrect. This option can be used to trigger builds in Cloud Build. Option D is incorrect. This option can be used to trigger builds in Cloud Build. Reference: https://cloud.google.com/build/docs/automating-builds/create-webhook-triggers
Question 50 of 65
50. Question
Your SRE team is responsible for monitoring and logging of the applications in Production Projects. The applications are deployed on different resources like Compute Engine and GKE. Your team has created a centralised monitoring dashboard in the monitoring Project for the metrics from all the production Projects. An uptime check was created for the applications. You have been tasked with setting up the Notification channels for one of the applications to send notifications to the team. Which of these helps you meet the requirement?
You are on the SRE team of your company. There is a new government regulation to keep the logs of all API calls made in the Production project for three years. Which of the following can help you achieve this?
You manage a Java application running on Kubernetes Engine in Production. The organization has decided there is a need to understand and benchmark the performance of the application such as CPU time and Heap. The continuous measuring process should not affect the performance of the application. Which of the following can help you achieve this?
Correct
Option A is CORRECT. Cloud Profiler is a statistical, low-overhead profiler that continuously gathers CPU usage and memory-allocation information from your production applications. Options B, C and D are incorrect. Reference: https://cloud.google.com/profiler/docs/about-profiler
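For a Java service, attaching the profiling agent looks roughly like the following (the agent path and flag names follow the Cloud Profiler Java documentation; my-service is illustrative):

```
java -agentpath:/opt/cprof/profiler_java_agent.so=-cprof_service=my-service,-cprof_service_version=1.0.0 \
     -jar app.jar
```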
Question 53 of 65
53. Question
You manage an application running on App Engine Standard in a production project. The application serves customers worldwide and downtime needs to be kept to a minimum. There is a need to troubleshoot the application behaviour by injecting logging without stopping it. Which of the following can help you achieve this?
Correct
Options B, C and D are incorrect. The Logging agent and Monitoring cannot be used to inject logging. Option A is CORRECT. The Cloud Debugger agent is needed to use Logpoints. Logpoints allow you to inject logging into running services without restarting or interfering with the normal function of the service. Reference: https://cloud.google.com/debugger/docs/using/logpoints#logpoints
Question 54 of 65
54. Question
You are part of the SRE team in your organisation. After a recent incident in production and the follow-up post-mortem, your team has been invited to a production meeting. Following Google SRE’s best practice, which of the following should not be discussed at the meeting?
Correct
Options A, B and D are incorrect. These should be on the agenda at production meetings. Option C is CORRECT. A blameless SRE culture promotes openness about faults, so finger-pointing is not recommended. Reference: https://sre.google/sre-book/communication-and-collaboration/
Question 55 of 65
55. Question
Your team has been tasked with deploying a Python application to Cloud Run. The developer team needs a way to inspect the state of the application in real time, without stopping or slowing it down. You are responsible for implementing the requirement. Which of the following is needed?
Correct
Option A is incorrect. The application is deployed to Cloud Run, not Compute Engine. Option B is incorrect. This is used for installing the Debugger on App Engine. Option C is incorrect. The application is deployed to Cloud Run, not Compute Engine. Option D is CORRECT. This is required for installing the Cloud Debugger agent in Cloud Run. References: https://cloud.google.com/debugger/docs/setup/python#cloud-run https://cloud.google.com/debugger
Question 56 of 65
56. Question
A financial organization analyses, at night, the transactions carried out throughout the day. The analysis takes about three hours and must run between midnight and 5 am. It currently runs on standard Compute Engine instances, with several OS-level guardrails to satisfy government regulations, and can handle interruptions. You have been tasked with optimising the cost of the analysis, which is to run for another six months. Which of the following will optimise the cost?
You are part of the DevOps team that manages applications running in the production project of your company. After a recent security incident, there was a new requirement to capture network traffic going to and from the Compute instances in the VPCs in the production project. VPC Flow Logs were enabled on the production VPC, but no vpc_flows logs are present in Cloud Logging. Which of the following could be the reason?
Correct
Option A is incorrect. Logging inclusion filter does not block any log from being sent. Option B is incorrect. There is no configuration needed for enabling VPC Flow Logs. Option C is incorrect. The service account of the instances is not used in capturing VPC Flow Logs. Option D is CORRECT. Logging exclusion filters block specified logs. Make sure there are no exclusion rules that discard VPC Flow Logs. Reference: https://cloud.google.com/vpc/docs/using-flow-logs#no-vpc-flows
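A quick way to check for such exclusions is to inspect the project's sinks, for example the _Default sink that every project has:

```shell
gcloud logging sinks describe _Default
# Look for an "exclusions" entry whose filter matches
# logName:"compute.googleapis.com%2Fvpc_flows"
```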
Question 58 of 65
58. Question
Your company has deployed compute resources in VPCs. There are three VPCs in the Development Project and applications are deployed to GCE Instances in the VPCs. There is a new security requirement to collect sample network flows sent to and received by the VM instances. Which of the following can help you achieve this?
Correct
Option A is incorrect. The FluentD agent is useful for application and OS-specific logs, not the network traffic in the VPC. Option B is incorrect. This will only work after VPC Flow Logs have been enabled. Option C is incorrect. This is used to capture traffic allowed or denied by a particular firewall rule. Option D is CORRECT. VPC Flow Logs record a sample of network flows sent from and received by VM instances, including instances used as GKE nodes. These logs can be used for network monitoring, forensics, real-time security analysis, and expense optimization. Reference: https://cloud.google.com/vpc/docs/using-flow-logs
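Enabling VPC Flow Logs on an existing subnet can be sketched as follows (subnet, region and sampling rate are illustrative):

```shell
gcloud compute networks subnets update my-subnet \
    --region=us-central1 \
    --enable-flow-logs \
    --logging-flow-sampling=0.5
```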
Question 59 of 65
59. Question
Your company has decided to use GCP services to automate its Continuous Integration and Deployment process. Cloud Build will be used to build images and other artifacts. A new developer has been tasked with creating the build config files. The Cloud Build process is failing. Which of the following could be the reason? Choose TWO.
The Company has a GCP organization that has applications running in GCP projects. The applications push logs into Cloud Logging. The company wants to analyse the logs using third-party software such as Elasticsearch. You have set up the Logs sink to route logs to a Pub/Sub topic, but no logs are appearing in Elasticsearch. Which of the following could be the reason?
You are managing an application that generates many logs in the staging Project. The Company has an organization in GCP, two folders and four projects. The folders are dev and prod, while the projects are dev, test, staging and production. The dev and test projects are in the dev folder and the staging and production projects are in the prod folder. The company wants to generate metrics from the logs for alerting purposes for that application alone. What IAM solution will help achieve the requirement following the principle of least privilege?
Correct
Option A is incorrect. The Log Admin role is too permissive and if given at Folder it means the developers will have permissions in production projects as well. Option B is incorrect. The Logs Configuration Writer role is enough for creating logs-based metrics but assigning it at folder level will give developers access to production project logs. Option C is incorrect. The Log Admin role is too permissive. Option D is CORRECT. The Logs Configuration Writer role is enough for creating logs-based metrics. Reference: https://cloud.google.com/logging/docs/logs-based-metrics
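Granting the role on the staging project alone might look like the following (the project ID and group name are illustrative):

```shell
gcloud projects add-iam-policy-binding staging-project-id \
    --member='group:app-devs@example.com' \
    --role='roles/logging.configWriter'
```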
Question 62 of 65
62. Question
You are developing a completely serverless application. The application is going to be built using Cloud Build. There is a requirement to store all non-container artifacts in Cloud Storage. How will you meet this requirement?
Correct
Option A is CORRECT. The artifacts field in the build config file is used to specify the storage location for non-container artifacts. Option B is incorrect. The options field specifies optional arguments like env, volumes, and secretEnv. Option C is incorrect. The images field specifies one or more Docker images to be pushed by Cloud Build to Container Registry. Option D is incorrect. The substitutions field in your build config file is used to substitute specific variables at build time. Reference: https://cloud.google.com/build/docs/build-config#artifacts
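A minimal cloudbuild.yaml using the artifacts field might look like this (the bucket and binary names are illustrative):

```yaml
steps:
- name: 'gcr.io/cloud-builders/go'
  args: ['build', '-o', 'myapp', '.']
artifacts:
  objects:
    location: 'gs://my-bucket/build-artifacts/'
    paths: ['myapp']
```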
Question 63 of 65
63. Question
You have been tasked with building an automated build for the deployment of applications to serverless infrastructure in Google Cloud Platform. Which of the following can help you complete the task with little overhead? Choose Two
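For illustration, a minimal Cloud Build config that builds a container and deploys it to Cloud Run, a common low-overhead serverless deployment path (the image, service, and region names are hypothetical):

```yaml
steps:
# Build and push the container image (image name is hypothetical).
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/myapp', '.']
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/$PROJECT_ID/myapp']
# Deploy the pushed image to a Cloud Run service.
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  entrypoint: 'gcloud'
  args: ['run', 'deploy', 'myapp',
         '--image', 'gcr.io/$PROJECT_ID/myapp',
         '--region', 'us-central1']
```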
Question 64 of 65
64. Question
You are one of the on-call engineers in a global team managing an application running in production. A recent update has caused the application’s response time to increase drastically. An incident has been declared and actions to mitigate the issue have not yet been deployed. Your team is coming to the end of your workday. Following Google’s SRE practice, what should be done?
Correct
Options A, B, and D are incorrect. The handoff needs to be given to another Incident Commander to coordinate the activities. Overtime is not recommended, to avoid burnout, especially when you have a global team. Leaving an incident unresolved, with no one responsible for it, is not recommended. Option C is CORRECT. Reference: https://sre.google/sre-book/managing-incidents/ (Clear, Live Handoff)
Question 65 of 65
65. Question
By default, the number of host projects to which a service project can attach for Shared VPC is ______