Your results for "Google Professional Cloud DevOps Engineer Practice Test 7"
0 of 65 questions answered correctly
Your time: time has elapsed
Final score: 0
Questions attempted: 0
Correct questions: 0 (scored 0)
Incorrect questions: 0 (negative marks: 0)
You can review your answers by clicking the "View Answers" option. Important note: open reference documentation links in a new tab (right-click and choose "Open in New Tab").
Question 1 of 65
1. Question
You are helping with the design of an e-commerce application. The web application receives web requests and stores sales transactions in a database. You need to identify minimal Service Level Indicators (SLIs) for the application to ensure that forecasted numbers are based on the latest sales numbers. Which SLIs should you set for the application?
Question 2 of 65
2. Question
You have been contacted by your CIO to improve your application's availability. You have decided to use instance groups, spreading your instances across three zones. Which type of instance group do you select?
Explanation
An instance group is a collection of virtual machine (VM) instances that you can manage as a single entity. There are two types: managed and unmanaged instance groups. Reference: https://cloud.google.com/compute/docs/instance-groups
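For context, a regional managed instance group spread across three zones can be created with gcloud; the template name, group name, and zones below are illustrative:

```shell
# Create an instance template (the name "web-template" is illustrative).
gcloud compute instance-templates create web-template \
    --machine-type=e2-medium \
    --image-family=debian-12 --image-project=debian-cloud

# Create a regional managed instance group that spreads instances
# across three zones in us-central1 for higher availability.
gcloud compute instance-groups managed create web-mig \
    --region=us-central1 \
    --zones=us-central1-a,us-central1-b,us-central1-c \
    --template=web-template \
    --size=3
```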
Question 3 of 65
3. Question
The __________ Tier delivers traffic over Google's well-provisioned, low-latency, highly reliable global network.
Explanation
The Premium Tier delivers traffic over Google's well-provisioned, low-latency, highly reliable global network. Reference: https://cloud.google.com/network-tiers/
Question 4 of 65
4. Question
Which of the following methods will not cause a shutdown script to be executed?
Explanation
Create and run shutdown scripts that execute commands right before an instance is terminated or restarted, on a best-effort basis. This is useful if you rely on automated scripts to start up and shut down instances, allowing instances time to clean up or perform tasks, such as exporting logs or syncing with other systems. Reference: https://cloud.google.com/compute/docs/shutdownscript
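A shutdown script is attached through instance metadata. A minimal sketch, where the instance name, zone, and bucket name are illustrative:

```shell
# Write a shutdown script; it runs on a best-effort basis right before
# the instance stops or restarts.
cat > shutdown.sh <<'EOF'
#!/bin/bash
# Example cleanup task: copy local logs to Cloud Storage before shutdown.
gsutil cp /var/log/app.log gs://example-bucket/logs/
EOF

# Attach the script to an existing instance via metadata.
gcloud compute instances add-metadata example-instance \
    --zone=us-central1-a \
    --metadata-from-file=shutdown-script=shutdown.sh
```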
Question 5 of 65
5. Question
Which of the following two statements are correct about Cloud Operations (Stackdriver) Logging with Kubernetes Engine? (select 2)
Explanation
To ingest logs, you must deploy the Stackdriver Logging agent to each node in your cluster. The agent is a configured fluentd instance, where the configuration is stored in a ConfigMap and the instances are managed using a Kubernetes DaemonSet. The actual deployment of the ConfigMap and DaemonSet for your cluster depends on your individual cluster setup. Stackdriver is the default logging solution for clusters deployed on Google Kubernetes Engine, and is deployed to a new cluster by default unless you explicitly opt out.
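On recent gcloud releases, cloud logging can be set explicitly at cluster creation time, or disabled to opt out; the cluster name below is illustrative:

```shell
# Create a cluster with system and workload logs sent to Cloud Logging
# (this is also the default for new clusters).
gcloud container clusters create example-cluster \
    --zone=us-central1-a \
    --logging=SYSTEM,WORKLOAD

# Opting out of cloud logging for an existing cluster:
gcloud container clusters update example-cluster \
    --zone=us-central1-a --logging=NONE
```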
Question 6 of 65
6. Question
You are currently planning a Kubernetes deployment on-premises, but are also extending Kubernetes to GCP. Your team would like to understand how management and routing would work, as well as how users could extend services in a cluster. What would you specify to address these concerns? (select 2)
Explanation
Kubernetes ingress is a collection of routing rules that govern how external users access services running in a Kubernetes cluster. The edge proxy is commonly called an ingress controller because it is commonly configured using ingress resources in Kubernetes; however, the edge proxy can also be configured with custom resource definitions (CRDs) or annotations.
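As a sketch of such a routing rule, `kubectl create ingress` can generate an Ingress resource directly; the host, service name, and port here are illustrative:

```shell
# Create an Ingress that routes external traffic for example.com
# to a Service named "web" on port 80.
kubectl create ingress web-ingress \
    --rule="example.com/*=web:80"
```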
Question 7 of 65
7. Question
You are developing an application that will need to meet strict GDPR requirements around several facets of the regulations. You have been asked to store your enterprise's data on GCP as efficiently as possible. This data will need to be archived for at least 5 years. What would be the best option?
Explanation
The Archive storage class in Cloud Storage is designed for long-term retention of rarely accessed data, and it is the most economical storage option in GCP, making it suitable for multi-year archives.
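A bucket defaulting to the Archive class can be sketched with gcloud; the bucket name and location are illustrative:

```shell
# Create a bucket using the Archive storage class for long-term
# retention.
gcloud storage buckets create gs://example-archive-bucket \
    --default-storage-class=ARCHIVE \
    --location=us-central1
```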
Question 8 of 65
8. Question
You have downloaded the SDK kit from Google and would now like to manage containers on GKE with gcloud. Which command would you type in the CLI to install kubectl?
Explanation
Using gcloud is very important for this exam's Kubernetes topics: gcloud commands interact with the GCP resources that create and manage the clusters, while kubectl, the Kubernetes command-line tool, is used to run commands against Kubernetes clusters on GKE.
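For reference, kubectl ships as a Cloud SDK component and is installed with gcloud:

```shell
# Install kubectl as a Cloud SDK component.
gcloud components install kubectl
```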
Question 9 of 65
9. Question
Which of the following is the "maximum size" of a single cached data value in Memcache?
Question 10 of 65
10. Question
The edge proxy in a Kubernetes ingress controller can be configured in several ways. Which are two ways we could configure the edge proxy? (select 2)
Explanation
The edge proxy is commonly called an ingress controller because it is commonly configured using ingress resources in Kubernetes; however, the edge proxy can also be configured with custom resource definitions (CRDs) or annotations.
Question 11 of 65
11. Question
You're currently learning that Google Cloud has two specific platforms that implement message passing and asynchronous integration for your message services. They have similarities but differences as well. From the following statements, select the two correct statements about Cloud Tasks. (select 2)
Question 12 of 65
12. Question
You need to create many projects for many different teams. You want to use a Cloud Deployment Manager (DM) deployment to create those projects in a folder called devops1. What should you do?
Explanation
The best option is to grant the Project Creator role (never Owner) to a service account. The command syntax is correct.
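Granting the Project Creator role on a folder to a service account can be sketched as follows; the folder ID and service account address are illustrative:

```shell
# Grant the Project Creator role (never Owner) on the target folder
# to the service account used by Deployment Manager.
gcloud resource-manager folders add-iam-policy-binding 123456789 \
    --member="serviceAccount:dm-sa@example-project.iam.gserviceaccount.com" \
    --role="roles/resourcemanager.projectCreator"
```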
Question 13 of 65
13. Question
Your application runs in Google Kubernetes Engine (GKE). You want to use Spinnaker with the Kubernetes Provider V2 to perform blue/green deployments and control which version of the application receives traffic. What should you do?
Explanation
Spinnaker can update the replica set in place without conflicting with Kubernetes.
Question 14 of 65
14. Question
What would be the best definition of "StatefulSets" with Google Kubernetes Engine?
Explanation
StatefulSets represent a set of Pods with unique, persistent identities and stable hostnames that GKE maintains regardless of where they are scheduled. The state information and other resilient data for any given StatefulSet Pod is maintained in persistent disk storage associated with the StatefulSet. Reference: https://cloud.google.com/kubernetes-engine/docs/concepts/statefulset
Question 15 of 65
15. Question
Your team has been working on building a web application that will have a local audience. The plan is to deploy to Kubernetes as soon as your deployments are reviewed and approved. You currently have a Dockerfile that works locally but needs to be deployed to the cloud. How can you get the application deployed to Kubernetes?
Explanation
kubectl apply applies a configuration to a resource by filename or stdin. The resource name must be specified; the resource will be created if it doesn't exist yet. To use 'apply', always create the resource initially with either 'apply' or 'create --save-config'. Reference: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
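A typical path from a local Dockerfile to a GKE deployment might look like this; the project, image, and file names are illustrative:

```shell
# Build the image locally and push it to Container Registry.
docker build -t gcr.io/example-project/webapp:v1 .
docker push gcr.io/example-project/webapp:v1

# deployment.yaml references the pushed image; apply it to the cluster.
kubectl apply -f deployment.yaml
```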
Question 16 of 65
16. Question
Where does Container Analysis store the resulting metadata and make it available for consumption through an API?
Explanation
Container Analysis is an API that is used to store trusted metadata about our software artefacts and is used during the Binary Authorization process. However, the scanning service performs vulnerability scans on images in Container Registry, then stores the resulting metadata and makes it available for consumption through an API. Reference: https://cloud.google.com/container-registry/docs/container-analysis
Question 17 of 65
17. Question
You are deploying an application to a Kubernetes cluster that requires a username and password to connect to another service. When you deploy the application, you want to ensure that the credentials are used securely in multiple environments with minimal code changes. What should you do?
Explanation
This will enable secrets usage without needing to modify the code per environment, update build pipelines, or store secrets insecurely.
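One way to hold such credentials is a Kubernetes Secret, which each environment's cluster carries with its own values while the application code stays unchanged; the names and values below are illustrative:

```shell
# Store the username and password as a Secret in the cluster.
kubectl create secret generic db-credentials \
    --from-literal=username=appuser \
    --from-literal=password='s3cr3t'
```

Pods then reference the secret through environment variables or a volume mount, so the same manifest works across environments.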
Question 18 of 65
18. Question
Google Cloud Platform has several unique and innovative benefits when it comes to billing and resource control. What are these benefits? (select 3)
Explanation
1. Sub-hour billing. 2. Sustained-use discounts, which automatically reward users who run virtual machines for over 25% of any calendar month. 3. Compute Engine custom machine types, so you pay only for the resources your application needs.
Question 19 of 65
19. Question
When we speak of best practices around IAM, and specifically the "principle of least privilege", what would be a best practice related to least privilege?
Explanation
This is the principle of least privilege: always apply the minimal access level required. Use groups as a best practice as well. Control who can change policies and group memberships. Enforce audit policy changes and always enable audit logs to record project-level permission changes.
Question 20 of 65
20. Question
What type of account would you use in your code when you want to have services interact with other services?
Explanation
A service account is a special kind of account used by an application or compute workload, such as a Compute Engine virtual machine (VM) instance, rather than a person. Applications use service accounts to make authorized API calls, authorized as either the service account itself, or as Google Workspace or Cloud Identity users through domain-wide delegation.
For example, a service account can be attached to a Compute Engine VM, so that applications running on that VM can authenticate as the service account. In addition, the service account can be granted IAM roles that let it access resources. The service account is used as the identity of the application, and the service account's roles control which resources the application can access.
Question 21 of 65
21. Question
You're currently ready to deploy some Cloud Deployment Manager templates, and you need to ensure that specific ("explicit") requirements exist before the templates deploy. Which option would you add to your templates or configuration files?
Question 22 of 65
22. Question
In Google Cloud Platform there are two types of managed instance groups. What are they?
Explanation
You can create two types of managed instance groups: a zonal managed instance group, which contains instances from the same zone, and a regional managed instance group, which contains instances from multiple zones across the same region. Don't confuse these with an unmanaged instance group. Reference: https://cloud.google.com/compute/docs/instance-groups/
Question 23 of 65
23. Question
You have just started your cluster and deployed your pods. You now need to view all the running pods. What is the proper CLI syntax to accomplish this task?
Explanation
The command syntax to inspect pods is the same as you would use for your on-premises deployments: kubectl get pods
Question 24 of 65
24. Question
App Engine has several solid use cases for the enterprise. What are three use cases that make App Engine a good candidate for a customer? (select 3)
Explanation
App Engine is a Platform as a Service (PaaS). It was built to develop, scale, and test applications. Reference: https://cloud.google.com/appengine/
Question 25 of 65
25. Question
You would like to add a strict deploy-time policy enforcement to your Kubernetes Engine cluster. What would be your best option?
Explanation
Binary Authorization is a deploy-time security control that ensures only trusted container images are deployed on Google Kubernetes Engine (GKE). Using Binary Authorization, you can require images to be signed by trusted authorities during the development process and then enforce signature validation when deploying. Reference: https://cloud.google.com/binary-authorization
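On recent gcloud releases, Binary Authorization enforcement can be enabled on an existing cluster roughly like this; the cluster name and zone are illustrative:

```shell
# Enforce the project's Binary Authorization policy at deploy time
# for this cluster.
gcloud container clusters update example-cluster \
    --zone=us-central1-a \
    --binauthz-evaluation-mode=PROJECT_SINGLETON_POLICY_ENFORCE
```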
Question 26 of 65
26. Question
Which of the following resources are Global Resources in Google Cloud Platform? (select 2)
Explanation
Global resources are accessible by any resource in any zone within the same project. When you create a global resource, you do not need to provide a scope specification. Global resources include snapshots, which are accessible by any resource in any zone within the same project, and images, which can be used by any instance or disk resource in the same project as the image. Google provides preconfigured images that you can use to boot your instance; you can customize one of these images, or build your own. Reference: https://cloud.google.com/compute/docs/regions-zones/global-regional-zonal-resources
Question 27 of 65
27. Question
You're getting ready to deploy a CI pipeline on GCP. You need to confirm that you have the proper syntax for creating a Kubernetes namespace called "production" that will logically isolate the deployment. What is the Kubernetes command to do this?
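For reference, the standard kubectl syntax for creating a namespace is:

```shell
# A namespace provides logical isolation within a cluster.
kubectl create namespace production
```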
Question 28 of 65
28. Question
Which command will configure Cloud Build to store the image in Container Registry as part of the build flow?
Explanation
The docker push command pushes an image or a repository to a registry such as Container Registry. Specify the hostname prefix for the location where you will store the image (multi-region): gcr.io hosts images in data centers in the United States, but the location may change in the future; us.gcr.io hosts images in data centers in the United States, in a separate storage bucket from images hosted by gcr.io; eu.gcr.io hosts images in the European Union; asia.gcr.io hosts images in data centers in Asia. The Docker credential helper is the simplest way to configure Docker to authenticate directly with Container Registry. You then use the docker command to tag, push, and pull images. Alternatively, you can use the client libraries to manage container images, or interact directly with the Docker API.
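A sketch of both flows, assuming an illustrative project and image name:

```shell
# Cloud Build: submitting a build with a gcr.io tag both builds the
# image and stores it in Container Registry.
gcloud builds submit --tag gcr.io/example-project/webapp:v1 .

# Equivalent manual Docker flow: authenticate, tag, push.
gcloud auth configure-docker
docker tag webapp:v1 gcr.io/example-project/webapp:v1
docker push gcr.io/example-project/webapp:v1
```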
Question 29 of 65
29. Question
You're currently an SRE for My Widgets Corp. The development team has asked you to deploy a Java 9 application on GCP App Engine. You realize that you can't use App Engine standard because Java 8/11 are the only Java versions supported at the time of your planning. What are your options for this scenario? (select 2)
Explanation
App Engine standard runtime support is documented here (note this may not be in sync at the time you view it): https://cloud.google.com/appengine/docs/standard/java/runtime. App Engine flexible will support this with containers: https://cloud.google.com/appengine/docs/the-appengine-environments
Question 30 of 65
30. Question
Who in an SRE organization coordinates efforts of the response team to address an active incident?
Explanation
The person who declares the incident typically steps into the Incident Commander (IC) role and directs the high-level state of the incident.
Question 31 of 65
31. Question
You have just created a cluster called “devops” in GKE and now you need to get authentication credentials to interact with the cluster. What is the proper CLI syntax to accomplish this task?
Correct
After creating your cluster, you need to get authentication credentials to interact with the cluster. This is done by a gcloud command, not a kubectl command. gcloud container clusters get-credentials “cluster-name“ will configure configures kubectl to use the cluster you created.
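A sketch of the full workflow for the "devops" cluster (the zone is an assumption for illustration):

```shell
# Fetch credentials for the cluster and update kubeconfig
gcloud container clusters get-credentials devops --zone us-central1-a

# Verify that kubectl now targets the cluster
kubectl get nodes
```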
Question 32 of 65
32. Question
Your company currently uses a third-party monitoring solution for your enterprise apps. You are using Kubernetes Engine for your container deployments and would like to enable this internal monitoring app for Kubernetes clusters. What would be the best approach?
Correct
Many monitoring solutions use the Kubernetes DaemonSet structure to deploy an agent on every cluster node. Note that each tool has its own software for cluster monitoring. Heapster is another option that could be used; it is a bridge between a cluster and a storage backend, designed to collect cluster metrics. Stackdriver is native to Google Cloud and is therefore the approach recommended by Google, but for a third-party solution a DaemonSet is the best fit.
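A minimal DaemonSet manifest for a hypothetical third-party agent might look like the following; the image name and namespace are placeholders, not a real product:

```shell
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: monitoring-agent
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: monitoring-agent
  template:
    metadata:
      labels:
        app: monitoring-agent
    spec:
      containers:
      - name: agent
        # Placeholder image: substitute your vendor's agent image
        image: example.com/monitoring-agent:latest
EOF
```

Because a DaemonSet schedules exactly one pod per node, new nodes added by the cluster autoscaler automatically get an agent as well.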
Question 33 of 65
33. Question
You would like to deploy a LAMP stack for your development team. The only issue is you’re not sure how to configure this LAMP stack. You would like to use a solution that has ready made templates to deploy. What GCP service could you use?
Correct
Google Cloud Marketplace (formerly Cloud Launcher) offers ready-to-go development stacks, solutions, and services to accelerate development, so you spend less time installing and more time developing.
Question 34 of 65
34. Question
You’re using Stackdriver to set up some alerts. You want to reuse your existing REST-based notification tools that your ops team has created. You also need the setup to be as simple as possible to configure and maintain since your customer does not have programming skills. Which notification option would be the best option?
Correct
A webhook would be the simplest and best option since the other answers won’t fit the requirements.
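In Cloud Monitoring (formerly Stackdriver), a webhook notification channel can be created from the CLI. The sketch below is illustrative; the endpoint URL is a placeholder, and the exact command surface (beta status, channel types) may differ by gcloud version, so check the current reference first.

```shell
# Create a webhook notification channel pointing at the ops team's REST endpoint
gcloud beta monitoring channels create \
  --display-name="Ops webhook" \
  --type=webhook_tokenauth \
  --channel-labels=url=https://ops.example.com/alerts   # placeholder endpoint
```

The channel can then be attached to alerting policies, so existing REST tooling receives alerts without any custom code.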
Question 35 of 65
35. Question
Your company asked you to configure a log routing sink to meet security compliance of centrally collecting Google Cloud VPC Flow Logs. The sink destination is a logging bucket in another project. After you configure the logs Sink, a few days later, one of the security team members points out that there are no logs in the logging bucket. Which of the following is NOT a potential reason?
Correct
To understand this question, it is essential to know about Google Cloud VPC Flow Logs and log sinks. VPC Flow Logs is a service that captures information about the IP traffic flowing in and out of your VPC network. A log sink is a destination to which you can route your logs, such as a Cloud Storage bucket, a BigQuery table, or a Cloud Logging bucket. Reference: https://cloud.google.com/vpc/docs/using-flow-logs#no-vpc-flows
The correct answer is "Firewall rules are blocking traffic." Firewall rules do not affect the logs generated by Flow Logs: they control inbound and outbound traffic to and from your VPC network, but they do not control the logs being generated.
Explanations for the incorrect answer choices:
"Flow Logs were not enabled in the monitored project." If Flow Logs are not enabled on the subnets to be monitored, there will be no logs. This is a possible reason for the missing logs.
"Logging exclusion filters defined on the sink block specified logs." If the filters are wrongly configured, desired logs could be discarded. This is a possible reason for the missing logs.
"Viewing the wrong Logging bucket." If the security team looks in the wrong bucket, they will not see the logs. This is a possible reason for the missing logs.
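A sketch of the setup described in the question; project, subnet, and bucket names are placeholders. Two of the listed failure modes correspond to concrete steps here: enabling Flow Logs on each monitored subnet, and getting the sink filter right.

```shell
# Enable Flow Logs on the subnet to be monitored (must be done per subnet)
gcloud compute networks subnets update my-subnet \
  --region=us-central1 \
  --enable-flow-logs

# Route VPC Flow Logs to a Cloud Logging bucket in another project;
# a too-narrow or wrong filter here silently drops the desired entries
gcloud logging sinks create vpc-flow-sink \
  logging.googleapis.com/projects/security-project/locations/global/buckets/flow-logs-bucket \
  --log-filter='resource.type="gce_subnetwork" AND log_id("compute.googleapis.com/vpc_flows")' \
  --project=monitored-project
```

Note also that the sink's writer service account must be granted write access on the destination bucket, or entries are dropped.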
Question 36 of 65
36. Question
Each Google Cloud project has three unique identifiers. Which one is NOT a correct identifier?
Correct
Each Google Cloud Platform project has three unique identifiers: a project name, which you provide; a project ID, which you can provide or Cloud Platform can provide for you; and a project number, which Cloud Platform provides. The correct answer is "Project Scope" – this is not one of the three unique identifiers of a Google Cloud Platform project.
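All three identifiers can be seen by describing a project; the project ID below is a placeholder, and the values shown in comments are illustrative:

```shell
gcloud projects describe my-sample-project
# Output includes, among other fields:
#   name:          my-sample-project    (project name, chosen by you)
#   projectId:     my-sample-project    (project ID, chosen by you or generated)
#   projectNumber: '123456789012'       (project number, assigned by Google)
```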
Question 37 of 65
37. Question
Your application runs on Google Cloud Platform. As a DevOps Engineer, you need to implement Jenkins for deploying application releases to GCP. You want to streamline the release process, lower operational toil, and keep user data secure. What should you do?
Correct
Implement Jenkins on Compute Engine virtual machines. -> Correct. This is the best approach to deploy application releases to GCP, streamline the release process, lower operational toil, and keep user data secure. Compute Engine provides virtual machines that can be customized to run Jenkins and other tools in a secure and scalable environment.
Implement Jenkins on local workstations. -> Incorrect. This can lead to scalability and security issues; local workstations may also not be accessible from outside the organization's network.
Implement Jenkins on Kubernetes on-premises. -> Incorrect. This can be complex and resource-intensive, which may not be necessary for the application's needs.
Implement Jenkins on Google Cloud Functions. -> Incorrect. This is not a recommended deployment option due to the limited execution time, memory, and environment options.
Question 38 of 65
38. Question
A DevOps Engineer is tasked with implementing an org-level export of Cloud Logging data to a specific destination. They must also manage the Cloud Logging platform for better organization and efficient usage. Which of the following methods should the engineer use to achieve the desired results?
Correct
Create a logs export sink with the --organization flag, and use the Cloud Logging API to manage logs. -> Correct. Creating a logs export sink with the --organization flag ensures that the sink is configured at the org level, and the Cloud Logging API provides the required flexibility and control over log data.
Implement a project-level logs export sink, and use the Cloud Logging API for log management. -> Incorrect. A project-level logs export sink is not the desired org-level outcome.
Create a logs export sink with the --folder flag, and use the gcloud command-line tool for log management. -> Incorrect. The --folder flag creates a folder-level logs export sink, not an org-level one. Also, the gcloud command-line tool is not as flexible and comprehensive as the Cloud Logging API for log management.
Implement an org-level logs export sink without any specific flag, and use the Cloud Logging API for log management. -> Incorrect. No flag is specified for creating an org-level sink; the --organization flag is required. Using the Cloud Logging API for log management is correct, but the overall answer is incomplete.
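An org-level sink might be created as follows; the org ID, destination dataset, and filter are placeholders for illustration:

```shell
# Org-level sink routing audit logs from every project under the org
# to a BigQuery dataset in a central project
gcloud logging sinks create org-audit-sink \
  bigquery.googleapis.com/projects/central-project/datasets/org_logs \
  --organization=123456789012 \
  --include-children \
  --log-filter='logName:"cloudaudit.googleapis.com"'
```

Note the --include-children flag: without it, an org-level sink only exports logs generated at the organization resource itself, not logs from the projects beneath it.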
Question 39 of 65
39. Question
A company has implemented a Google Cloud Platform (GCP) project that adheres to their Service Level Objectives (SLOs). As a Professional Cloud DevOps Engineer, you have been asked to define alerting policies based on Service Level Indicators (SLIs) with Cloud Monitoring for this project. Which of the following approaches would be the most appropriate way to implement this?
Correct
Create individual alerting policies for each SLI, and trigger alerts when the associated SLI thresholds are breached. Use alert documentation to provide context and recommended actions. -> Correct. Creating individual alerting policies for each SLI allows for better visibility and understanding of which specific indicator has breached its threshold. Including alert documentation provides context and recommended actions to help the team react accordingly.
Set up a single alerting policy based on an aggregation of all SLIs, and trigger the alert when the overall SLI threshold is breached. -> Incorrect. Aggregating all SLIs into a single alerting policy would not provide sufficient context for identifying which specific SLI was breached, making it difficult for the team to take appropriate action in response to the alert.
Utilize Google Error Reporting to automatically create alerts for all SLIs without any additional configuration. -> Incorrect. Google Error Reporting is primarily designed for application error tracking, not for defining alerting policies based on SLIs. It does not provide the necessary level of customization or granularity for this use case.
Monitor only the most critical SLIs and rely on default GCP policies to alert for the other SLIs. -> Incorrect. This would not be effective in ensuring that all relevant SLIs are properly monitored. It could lead to overlooking important indicators and negatively impact the company's ability to meet its SLOs.
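In practice, per-SLI policies are usually defined declaratively and created from the CLI. The sketch below assumes one policy file per SLI; the file names are placeholders, and the alpha command surface may differ by gcloud version, so consult the current reference before relying on it.

```shell
# One policy definition per SLI keeps alerts attributable to a single indicator;
# each JSON file would contain its own condition, threshold, and documentation block
gcloud alpha monitoring policies create --policy-from-file=latency-policy.json
gcloud alpha monitoring policies create --policy-from-file=error-rate-policy.json
```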
Question 40 of 65
40. Question
As a DevOps Engineer, you are tasked with optimizing resource utilization and utilizing committed use discounts where appropriate in a Google Cloud Platform (GCP) project. The project has multiple Compute Engine instances with varying resource requirements and usage patterns. Which of the following strategies should you implement to achieve the desired optimization and cost-saving goals?
Correct
Analyze the resource requirements and usage patterns of the Compute Engine instances, then purchase the appropriate committed use contracts for vCPUs and memory, while also enabling autoscaling based on custom metrics that reflect each instance's resource requirements and usage patterns. -> Correct. Analyzing resource requirements and usage patterns allows you to purchase committed use contracts that match your instances' needs, maximizing cost savings. Additionally, configuring autoscaling based on custom metrics tailored to each instance's requirements and usage patterns ensures optimal resource utilization.
Purchase committed use contracts for the maximum possible vCPUs and memory for all Compute Engine instances, regardless of their resource requirements and usage patterns, to ensure maximum cost savings. -> Incorrect. Purchasing committed use contracts for the maximum possible vCPUs and memory without considering the instances' resource requirements and usage patterns can lead to over-provisioning and increased costs.
Analyze the resource requirements and usage patterns of the Compute Engine instances and then migrate them all to Preemptible VMs to reduce costs. -> Incorrect. Preemptible VMs are not suitable for all workloads, as they can be terminated at any time. While they can provide cost savings, migrating all instances to Preemptible VMs without considering their specific requirements and usage patterns may result in service disruptions.
Allocate the minimum possible vCPUs and memory for all Compute Engine instances and enable autoscaling based on instance uptime to minimize costs. -> Incorrect. Allocating the minimum possible vCPUs and memory for all instances can lead to performance issues and doesn't take into account the unique resource requirements and usage patterns of each instance. Autoscaling based on instance uptime doesn't necessarily optimize resource utilization or lead to cost savings.
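Once the baseline vCPU and memory footprint is known, a committed use contract can be purchased from the CLI. The name, region, and resource sizes below are placeholders chosen for illustration:

```shell
# 12-month commitment covering the steady-state footprint identified by analysis;
# usage above the commitment is billed at on-demand (or autoscaled) rates
gcloud compute commitments create steady-state-commitment \
  --region=us-central1 \
  --resources=vcpu=8,memory=32GB \
  --plan=12-month
```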
Question 41 of 65
41. Question
A company is developing a web application on Google Cloud Platform that uses both front-end and back-end technologies. The front-end is built using React and the back-end is built using Node.js. They want to design a CI/CD pipeline to automate the build, test, and deployment processes. Which approach would be most appropriate for designing the pipeline?
Correct
Use Google Cloud Build and Cloud Run to create a single pipeline for both front-end and back-end. -> Correct. Google Cloud Build provides a fully managed CI/CD platform that integrates with Cloud Run to provide a comprehensive solution for deploying containerized applications. By creating a single pipeline for both front-end and back-end, the company can centralize its CI/CD processes and ensure consistent deployment workflows across different technologies.
Use Jenkins to create separate pipelines for front-end and back-end. -> Incorrect. While possible, this approach can introduce unnecessary complexity and increase the chances of errors. It also requires manual setup and maintenance of the Jenkins servers, which can be time-consuming and error-prone.
Use Google Cloud Build to create separate pipelines for front-end and back-end. -> Incorrect. While viable, this approach can lead to a fragmented CI/CD process, making it difficult to manage and maintain different pipelines for different technologies.
Use Kubernetes Engine and Jenkins to create a single pipeline for both front-end and back-end. -> Incorrect. While possible, this approach requires manual setup and maintenance of the Jenkins server, which can be time-consuming and error-prone. It also lacks the native support for Cloud Run, which provides a simpler and more scalable solution for deploying containerized applications.
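A single Cloud Build pipeline covering both tiers might be sketched in one cloudbuild.yaml as below. Directory layout, image names, the Cloud Run service name, and the region are all assumptions for illustration:

```shell
cat > cloudbuild.yaml <<'EOF'
steps:
# Install and test the React front-end
- name: node
  entrypoint: npm
  args: ['ci']
  dir: frontend
- name: node
  entrypoint: npm
  args: ['test']
  dir: frontend
# Build the Node.js back-end container image
- name: gcr.io/cloud-builders/docker
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/backend', 'backend']
# Deploy the image to Cloud Run
- name: gcr.io/google.com/cloudsdktool/cloud-sdk
  entrypoint: gcloud
  args: ['run', 'deploy', 'backend',
         '--image=gcr.io/$PROJECT_ID/backend',
         '--region=us-central1']
images: ['gcr.io/$PROJECT_ID/backend']
EOF

# Run the pipeline manually (a repository trigger would normally do this)
gcloud builds submit --config=cloudbuild.yaml
```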
Question 42 of 65
42. Question
Which of the following best describes the difference between a push and a pull trigger in a CI/CD pipeline with Cloud Source Repositories in Google Cloud Platform?
Correct
A push trigger initiates the CI/CD pipeline when code changes are pushed to the repository, while a pull trigger initiates the pipeline when a pull request is made. -> Correct. In Cloud Source Repositories, a push trigger initiates the CI/CD pipeline when code changes are pushed to the repository. This can be configured to trigger on any push or on specific branches. On the other hand, a pull trigger initiates the pipeline when a pull request is made. This allows developers to test their code changes before they are merged into the main branch.
A push trigger initiates the CI/CD pipeline when a new tag is pushed to the repository, while a pull trigger initiates the pipeline when a pull request is merged. -> Incorrect. A push trigger initiates the pipeline when code changes are pushed, not when a new tag is pushed.
A push trigger initiates the CI/CD pipeline when a pull request is made, while a pull trigger initiates the pipeline when code changes are pushed to the repository. -> Incorrect. A pull trigger initiates the pipeline when a pull request is made, not when code changes are pushed.
A push trigger initiates the CI/CD pipeline when code changes are pushed to the repository, while a pull trigger initiates the pipeline when a new branch is created. -> Incorrect. A push trigger initiates the pipeline when code changes are pushed, not when a new branch is created.
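As a hedged illustration of configuring a push trigger, the command below creates a Cloud Build trigger on a Cloud Source Repositories repo; the repository name, branch pattern, and build config file are placeholders:

```shell
# Hypothetical repo and config names: run the build on every push to main.
gcloud builds triggers create cloud-source-repositories \
  --repo=my-repo \
  --branch-pattern='^main$' \
  --build-config=cloudbuild.yaml
```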
Question 43 of 65
43. Question
A multinational company has deployed a distributed application on Google Cloud Platform (GCP) using multiple microservices. They have been experiencing intermittent performance issues in their application. As a DevOps Engineer, you have been tasked with optimizing service performance and debugging the application. Which of the following approaches would you take to identify and resolve the performance bottlenecks?
Correct
Enable Cloud Trace and analyze latency data, configure Cloud Debugger to identify issues in the source code, use Cloud Profiler to analyze CPU and memory usage, and implement circuit breaking patterns to isolate microservices. -> Correct. This is the correct answer because Cloud Trace helps analyze latency data, Cloud Debugger identifies issues in the source code, Cloud Profiler analyzes CPU and memory usage, and circuit breaking patterns isolate microservices to prevent cascading failures.
Enable Cloud Trace and analyze latency data, configure Cloud Monitoring to identify issues in the source code, use Cloud Debugger to analyze CPU and memory usage, and implement circuit breaking patterns to isolate microservices. -> Incorrect. Cloud Monitoring cannot identify issues in the source code; Cloud Debugger is required for that purpose. Cloud Debugger cannot analyze CPU and memory usage; Cloud Profiler is needed for this purpose.
Enable Cloud Monitoring and analyze latency data, configure Cloud Debugger to identify issues in the source code, use Cloud Trace to analyze CPU and memory usage, and implement circuit breaking patterns to isolate microservices. -> Incorrect. Cloud Monitoring is not for latency analysis, and Cloud Trace cannot analyze CPU and memory usage. Both tools are being used incorrectly in this answer.
Enable Cloud Trace and analyze latency data, configure Cloud Monitoring to identify issues in the source code, use Cloud Profiler to analyze CPU and memory usage, and implement load balancing patterns to distribute traffic evenly. -> Incorrect. Cloud Monitoring is not meant for identifying issues in the source code. Additionally, load balancing patterns help distribute traffic but do not isolate microservices during failures like circuit breaking patterns do.
Question 44 of 65
44. Question
By default, the number of host projects to which a service project can attach for Shared VPC is ______
Correct
The number of host projects to which a service project can attach is 1. This limit cannot be increased.
Question 45 of 65
45. Question
Cloud Endpoints can be implemented in which languages? (select 2)
Correct
Cloud Endpoints for the App Engine standard generation 1 environment historically used Endpoints Frameworks, which only supports the Java 8 and Python 2.7 runtime environments (at the time of writing). Reference https://cloud.google.com/endpoints/docs/choose-endpoints-option
Question 46 of 65
46. Question
You would like to deploy a new cluster on GCP with gcloud. The cluster you need is going be named devops1. You already set your profile and authenticated. What is the syntax to deploy a cluster?
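A minimal sketch of the syntax the question asks for (the zone flag is an assumption; it can be omitted if a default zone is already configured):

```shell
# Create a GKE cluster named devops1 in a chosen zone (placeholder zone).
gcloud container clusters create devops1 --zone=us-central1-a
```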
Question 47 of 65
47. Question
You are evaluating new GCP services and would like to use tools to help you evaluate the costs of using GCP. What are two tools available from GCP to help analyze costs? (select 2)
Correct
Expect several questions on pricing for Bigtable and Storage. You can take advantage of some tools to help you evaluate the costs of using GCP. The pricing calculator provides a quick and easy way to estimate what your GCP usage will look like. You can provide details about the services you want to use, such as the number of Compute Engine instances, persistent disks and their sizes, and so on, and then see a pricing estimate. The TCO assessment tool (second link) estimates total cost of ownership. Reference https://cloud.google.com/products/calculator https://inthecloud.withgoogle.com/tco-assessment-19/form.html
Question 48 of 65
48. Question
Your organization would like to obtain significant discounts on your VM instance deployments on Google Cloud. These VM instances only need to be used for a few hours a month. What pricing model would you want to consider?
Correct
Some of the terminology overlaps with AWS terms such as Spot and Reserved. Google's equivalent of “Spot“ instances is “preemptible“ VMs. A preemptible VM is an instance that you can create and run at a much lower price than normal instances. However, Compute Engine might terminate (preempt) these instances at will if it requires access to those resources for other tasks. Reference https://cloud.google.com/compute/docs/instances/preemptible
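A sketch of creating such an instance; the instance name, machine type, and zone below are placeholders:

```shell
# --preemptible requests the discounted, interruptible capacity.
gcloud compute instances create batch-vm-1 \
  --machine-type=e2-medium \
  --zone=us-central1-a \
  --preemptible
```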
Question 49 of 65
49. Question
Your company is getting ready to deploy a CI pipeline on Google Cloud Platform. You need to confirm that you have the proper syntax for creating a Kubernetes namespace called “production“ that will logically isolate the deployment. What is the Kubernetes command to do this?
Correct
Simple enough: kubectl create ns production (kubectl create namespace production is equivalent).
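For reference, the command and a quick verification step:

```shell
kubectl create namespace production   # or: kubectl create ns production
kubectl get namespaces                # confirm the new namespace is listed
```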
Question 50 of 65
50. Question
Which of these GCP features ‘automatically and digitally checks each component of your software supply chain, ensuring the quality and integrity of your software before an application is deployed to the production environment‘?
Correct
The correct answer is “Binary Authorization“ – Binary Authorization is a GCP feature that automatically and digitally checks each component of a software supply chain to ensure the quality and integrity of the software before it is deployed to the production environment. Binary Authorization uses policy-based controls to enforce security and compliance checks on container images and their associated metadata, ensuring that only trusted and approved images are deployed to production. Reference: https://cloud.google.com/binary-authorization/docs/overview#background
“Container Analysis“ – Container Analysis is a GCP service that provides visibility into the contents and provenance of container images stored in Container Registry. It can be used to scan container images for vulnerabilities and to ensure that they are compliant with security and compliance policies. However, it does not automatically and digitally check each component of a software supply chain to ensure the quality and integrity of the software before it is deployed to the production environment.
“GKE“ – Google Kubernetes Engine (GKE) is a GCP service that enables users to run Kubernetes clusters on GCP. GKE provides an easy way to create, manage, and scale containerized applications using Kubernetes. However, it does not automatically and digitally check each component of a software supply chain to ensure the quality and integrity of the software before it is deployed to the production environment.
“Container Registry“ – Container Registry is a GCP service that enables users to store and manage container images for use with GCP services such as GKE and Cloud Run. Container Registry provides a secure and scalable way to store and manage container images, but it does not automatically and digitally check each component of a software supply chain to ensure the quality and integrity of the software before it is deployed to the production environment.
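A minimal policy sketch, assuming a hypothetical project ID and attestor name, that blocks any image lacking a required attestation:

```yaml
# policy.yaml — MY_PROJECT and prod-attestor are placeholders
defaultAdmissionRule:
  evaluationMode: REQUIRE_ATTESTATION
  enforcementMode: ENFORCED_BLOCK_AND_AUDIT
  requireAttestationsBy:
    - projects/MY_PROJECT/attestors/prod-attestor
globalPolicyEvaluationMode: ENABLE
```

The policy can then be applied with `gcloud container binauthz policy import policy.yaml`.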
Question 51 of 65
51. Question
Which of the following is an API that is used to store trusted metadata about our software artifacts and is also used during the Binary Authorization process?
Correct
Container Analysis is an API that is used to store trusted metadata about our software artifacts and is used during the Binary Authorization process.
Question 52 of 65
52. Question
With Continuous ______________, revisions are deployed to a production environment automatically without explicit approval from a developer, making the entire software release process automated.
Correct
With continuous deployment, revisions are deployed to a production environment automatically without explicit approval from a developer, making the entire software release process automated
Question 53 of 65
53. Question
Which of the following two statements are correct about choices around Cloud Deployment Manager templates?
Correct
You can write templates in your choice of Python 2.7 or Jinja2. Python templates are more powerful and give you the option to programmatically create or manage your templates. If you are familiar with Python, use Python for your templates. Jinja2 is a simpler but less powerful templating language that uses the same syntax as YAML. If you aren’t familiar with Python or just want to write simple templates without messing with Python, use Jinja2. Reference https://cloud.google.com/deployment-manager/docs/step-by-step-guide/create-a-template
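A minimal Jinja2 template sketch (the file name, resource shape, and zone property are assumptions); `env["name"]` and `properties` are supplied by the deployment configuration that imports the template:

```yaml
# vm-template.jinja (hypothetical file name)
resources:
- name: {{ env["name"] }}-vm
  type: compute.v1.instance
  properties:
    zone: {{ properties["zone"] }}
    machineType: zones/{{ properties["zone"] }}/machineTypes/e2-medium
    disks:
    - deviceName: boot
      boot: true
      autoDelete: true
      initializeParams:
        sourceImage: projects/debian-cloud/global/images/family/debian-11
    networkInterfaces:
    - network: global/networks/default
```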
Question 54 of 65
54. Question
Which of the following is a GCP service/resource used for infrastructure automation, where you can also specify repeatable processes?
Correct
Google Cloud Deployment Manager allows you to specify all the resources needed for your application in a declarative format using yaml. You can also use Python or Jinja2 templates to parameterize the configuration and allow reuse of common deployment paradigms such as a load balanced, auto-scaled instance group.
Treat your configuration as code and perform repeatable deployments.
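A hedged sketch of the repeatable-deployment workflow; the deployment name and config file are placeholders:

```shell
# Deploy the declarative configuration, then preview a change before
# applying it — rerunning the same config is what makes this repeatable.
gcloud deployment-manager deployments create demo --config config.yaml
gcloud deployment-manager deployments update demo --config config.yaml --preview
```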
Question 55 of 65
55. Question
What type of resource is this? ___________ bundle application code and dependencies into a single unit, abstracting the application from the infrastructure.
Correct
Containers bundle application code and dependencies into a single unit, abstracting the application from the infrastructure
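A minimal Dockerfile sketch of that bundling (the app file names and entry point are hypothetical): code plus dependencies go into one image, abstracted from the infrastructure it runs on.

```dockerfile
FROM python:3.11-slim          # base image with the language runtime
WORKDIR /app
COPY requirements.txt .        # dependencies declared by the app
RUN pip install --no-cache-dir -r requirements.txt
COPY . .                       # the application code itself
CMD ["python", "main.py"]      # hypothetical entry point
```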
Question 56 of 65
56. Question
You’re currently running your containers on Google Kubernetes Engine. You have decided to also monitor the nodes that GKE has deployed for your containers. You have set up your application to send log information to stdout, and the app runs as a system service on your GKE nodes. Without changing the app, how do you have logs sent to Stackdriver?
Question 57 of 65
57. Question
Which of the following is a feature of using a VPC in Google Cloud?
Correct
A single Google Cloud VPC can span multiple regions without communicating across the public Internet. For on-premises scenarios, you can share a connection between VPC and on-premises resources with all regions in a single VPC. You don’t need a connection in every region. Reference https://cloud.google.com/vpc/
Question 58 of 65
58. Question
You‘re currently considering moving your CI pipeline from on-premises to Google Cloud Platform. You would like to have code maintained in a private Git repository hosted on the Google Cloud Platform. What service would you choose?
Correct
Cloud Source Repositories provides fully featured, private Git repositories hosted on Google Cloud.
You can create multiple repositories for a single Google Cloud project, allowing you to organize the code associated with your cloud project in whatever way works best for you.
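A sketch of creating and cloning such a repository; the repository name is a placeholder:

```shell
# Create a private Git repo in the current project, then clone it locally.
gcloud source repos create my-repo
gcloud source repos clone my-repo
```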
Question 59 of 65
59. Question
What does Intent-based Capacity Planning involve?
Question 60 of 65
60. Question
You are currently building an SRE organization and you would like to follow what Google does to build culture. Which of the following two ways could you introduce this culture? (select 2)
Correct
It‘s important to define and communicate your strategy and plans. It is also very important to evaluate your organization and its capabilities. When you initiate your plans you are launching them, and then you need to iterate improvements. Never lower your standards from a professional perspective. Reference https://landing.google.com/sre/sre-book/chapters/software-engineering-in-sre/
Question 61 of 65
61. Question
Container Analysis performs vulnerability scans on images in Container Registry and monitors the vulnerability information to keep it up to date. What are the two main tasks that Container Analysis performs? (select 2)
Correct
Incremental scanning: Container Analysis scans new images when they’re uploaded to Container Registry. Continuous analysis: Container Analysis continuously monitors the metadata of scanned images in Container Registry for new vulnerabilities. Reference https://cloud.google.com/container-registry/docs/vulnerability-scanning
Question 62 of 65
62. Question
The first step in Cloud Deployment Manager is to create what ____________?
You are currently reviewing your project in GCP using gcloud. You would like to confirm the DNS-related info for a project. What is the command to do this?
Correct
gcloud commands need to be memorized. An easy way to rule out two answers is generally to look at the service name, which comes immediately after gcloud. If the answer hinges on a flag, then in most cases it simply has to be memorized. Reference https://cloud.google.com/sdk/gcloud/reference/dns/project-info/
Incorrect
gcloud commands need to be memorized. An easy way to rule out two answers is generally to look at the service name, which comes immediately after gcloud. If the answer hinges on a flag, then in most cases it simply has to be memorized. Reference https://cloud.google.com/sdk/gcloud/reference/dns/project-info/
Unattempted
gcloud commands need to be memorized. An easy way to rule out two answers is generally to look at the service name, which comes immediately after gcloud. If the answer hinges on a flag, then in most cases it simply has to be memorized. Reference https://cloud.google.com/sdk/gcloud/reference/dns/project-info/
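Following the explanation's pattern (service name after gcloud, then the subcommand), a minimal sketch of the command in question is below. The project ID is a hypothetical placeholder, and the command is printed rather than executed since it needs gcloud credentials.

```shell
# Placeholder project ID -- substitute your own.
PROJECT_ID="example-project"

# Prints DNS-related metadata for the project (e.g. Cloud DNS quotas).
# Printed here, not run.
CMD="gcloud dns project-info describe ${PROJECT_ID}"
echo "${CMD}"
```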
Question 64 of 65
64. Question
The HTTPS load balancer can leverage which of the following types of GCP resources? NOTE: For this exam you must know about load balancers and the two different approaches to load balancing! (select 2)
Correct
1. The load balancer leverages additional resources, such as a global IP address (ephemeral or static). 2. One or more instance groups are allowed as backends.
Incorrect
1. The load balancer leverages additional resources, such as a global IP address (ephemeral or static). 2. One or more instance groups are allowed as backends.
Unattempted
1. The load balancer leverages additional resources, such as a global IP address (ephemeral or static). 2. One or more instance groups are allowed as backends.
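The two resource types named in the explanation can be sketched with gcloud as below. All resource names (web-ip, web-backend, web-ig) and the zone are hypothetical placeholders, and the commands are printed rather than executed.

```shell
# 1. A global IP address. Reserving one makes it static; if you skip this
#    step, the forwarding rule gets an ephemeral address instead.
CMD1="gcloud compute addresses create web-ip --global"

# 2. One or more instance groups, attached as backends of the
#    load balancer's backend service.
CMD2="gcloud compute backend-services add-backend web-backend --global --instance-group=web-ig --instance-group-zone=us-central1-a"

# Printed here, not run (both need gcloud credentials and a real project).
echo "${CMD1}"
echo "${CMD2}"
```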
Question 65 of 65
65. Question
Which role does your service account for GKE need to be granted to access Cloud Storage and perform a “storage.buckets.update”?
Correct
All of the roles listed could update a bucket except Storage Object Viewer. However, the key here is to understand that we must apply Google Cloud best practices using the Principle of Least Privilege.
Incorrect
All of the roles listed could update a bucket except Storage Object Viewer. However, the key here is to understand that we must apply Google Cloud best practices using the Principle of Least Privilege.
Unattempted
All of the roles listed could update a bucket except Storage Object Viewer. However, the key here is to understand that we must apply Google Cloud best practices using the Principle of Least Privilege.
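As a hedged sketch of how such a grant would be applied (the original question's answer choices are not shown, so the role below is my assumption): among the standard predefined roles, Storage Admin (roles/storage.admin) includes storage.buckets.update, while Storage Object Admin/Viewer only cover objects. The project ID and service account are hypothetical placeholders, and the command is printed rather than executed.

```shell
# Placeholders -- substitute your own project and GKE node service account.
PROJECT_ID="example-project"
SA="gke-sa@example-project.iam.gserviceaccount.com"

# Grant a role containing storage.buckets.update to the service account.
# Printed here, not run.
CMD="gcloud projects add-iam-policy-binding ${PROJECT_ID} --member=serviceAccount:${SA} --role=roles/storage.admin"
echo "${CMD}"
```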