Google Associate Cloud Engineer Exam Questions 2024 (Sample Questions)
Information
This sample test contains 10 exam questions. Please fill in your name and email address and click on "Start Test". You can view the results at the end of the test. You will also receive an email with the results. Please purchase to get lifetime access to the full practice tests.
Important Note: open reference documentation links in a new tab (right-click and choose Open in New Tab).
-
Question 1 of 10
1. Question
Your projects incurred more costs than you expected last month. Your research reveals that a development GKE container emitted a huge number of logs, which resulted in higher costs. You want to disable the logs quickly using the minimum number of steps. What should you do?
1. Go to the GKE console, and delete existing clusters.
2. Recreate a new cluster.
3. Clear the option to enable legacy Stackdriver Logging. is not right.
Our requirement is to disable the logs ingested from the GKE container. We don't need to delete the existing cluster and create a new one.
Go to the Logs ingestion window in Stackdriver Logging, and disable the log source for the GKE container resource. is the right answer.
We want to disable logs from a specific GKE container, and this is the only option that does that.
More information about log exclusions: https://cloud.google.com/logging/docs/exclusions
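As a rough sketch of how such an exclusion can be created from the command line (the exclusion name and the container label in the filter are hypothetical, not from the question):
# Add an exclusion filter to the _Default sink so matching GKE
# container logs are no longer ingested (and no longer billed).
gcloud logging sinks update _Default \
  --add-exclusion=name=exclude-dev-gke-container,filter='resource.type="k8s_container" AND resource.labels.container_name="dev-app"'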
-
Question 2 of 10
2. Question
Your team is working towards using desired state configuration for your application deployed on a GKE cluster. You have YAML files for the Kubernetes Deployment and Service objects. Your application is designed to have 2 pods, as defined by the replicas parameter in app-deployment.yaml. Your service uses a GKE load balancer, which is defined in app-service.yaml.
You created the Kubernetes resources by running
kubectl apply -f app-deployment.yaml
kubectl apply -f app-service.yaml
Your deployment is now serving live traffic but is suffering from performance issues. You want to increase the number of replicas to 5. What should you do in order to update the replicas in existing Kubernetes deployment objects?
Disregard the YAML file. Use the kubectl scale command to scale the replicas to 5. kubectl scale --replicas=5 -f app-deployment.yaml. is not right.
While the outcome is the same, this approach doesn't record the change in the desired state configuration (the YAML file). If you were to make some changes in your app-deployment.yaml and apply it, the update would scale the replicas back to 2. This is undesirable.
Ref: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#scaling-a-deployment
Disregard the YAML file. Enable autoscaling on the deployment to trigger on CPU usage and set both minimum and maximum pods to 5. kubectl autoscale deployment myapp --min=5 --max=5 --cpu-percent=80. is not right.
While the outcome is the same, this approach doesn't record the change in the desired state configuration (the YAML file). If you were to make some changes in your app-deployment.yaml and apply it, the update would scale the replicas back to 2. This is undesirable.
Ref: https://kubernetes.io/blog/2016/07/autoscaling-in-kubernetes/
Modify the current configuration of the deployment by using kubectl edit to open the YAML file of the current configuration, then modify and save the configuration. kubectl edit deployment/app-deployment -o yaml --save-config. is not right.
Like the above, the outcome is the same. This is equivalent to first getting the resource, editing it in a text editor, and then applying the resource with the updated version. This approach doesn't record the replicas change in our local YAML file. If you were to make some changes in your local app-deployment.yaml and apply it, the update would scale the replicas back to 2. This is undesirable.
Ref: https://kubernetes.io/docs/concepts/cluster-administration/manage-deployment/#in-place-updates-of-resources
Edit the number of replicas in the YAML file and rerun kubectl apply. kubectl apply -f app-deployment.yaml. is the right answer.
This is the only approach that guarantees that you use desired state configuration. By updating the YAML file to have 5 replicas and applying it using kubectl apply, you preserve the intended state of the Kubernetes cluster in the YAML file.
Ref: https://kubernetes.io/docs/concepts/cluster-administration/manage-deployment/#in-place-updates-of-resources
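A minimal sketch of the updated manifest (the metadata name, labels, and image are hypothetical placeholders; only the replicas field is the actual change):
# app-deployment.yaml with replicas raised from 2 to 5
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
spec:
  replicas: 5          # was 2; the only change needed
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: gcr.io/my-project/myapp:latest
Then rerun kubectl apply -f app-deployment.yaml so the live state and the YAML file stay in sync.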
-
Question 3 of 10
3. Question
Your team uses Splunk for centralized logging and you have a number of reports and dashboards based on the logs in Splunk. You want to install Splunk forwarder on all nodes of your new Kubernetes Engine Autoscaled Cluster. The Splunk forwarder forwards the logs to a centralized Splunk Server. You want to minimize operational overhead. What is the best way to install Splunk Forwarder on all nodes in the cluster?
SSH to each node and run a script to install the forwarder agent. is not right.
While this can be done, this approach does not scale. Every time Kubernetes cluster autoscaling adds a new node, we have to SSH to the instance and run the script, which is manual, possibly error-prone, and adds operational overhead. We need a solution that automates this task.
Include the forwarder agent in a StatefulSet deployment. is not right.
In GKE, StatefulSets represent a set of Pods with unique, persistent identities and stable hostnames that GKE maintains regardless of where they are scheduled. The main purpose of StatefulSets is to set up persistent storage for pods that are deployed across multiple zones. StatefulSets are not suitable for installing the forwarder agent, nor do they give us the one-pod-per-node placement we need.
Ref: https://cloud.google.com/kubernetes-engine/docs/concepts/statefulset
Use Deployment Manager to orchestrate the deployment of forwarder agents on all nodes. is not right.
You can use Deployment Manager to create a number of GCP resources, including a GKE cluster, but you cannot use it to create Kubernetes deployments or apply configuration files.
Ref: https://cloud.google.com/deployment-manager/docs/fundamentals
Include the forwarder agent in a DaemonSet deployment. is the right answer.
In GKE, DaemonSets manage groups of replicated Pods and adhere to a one-Pod-per-node model, either across the entire cluster or a subset of nodes. As you add nodes to a node pool, DaemonSets automatically add Pods to the new nodes. So by configuring the pod to use the Splunk forwarder agent image, with some minimal configuration (e.g. identifying which logs need to be forwarded), you can automate the installation and configuration of the Splunk forwarder agent on each GKE cluster node.
Ref: https://cloud.google.com/kubernetes-engine/docs/concepts/daemonset
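A minimal sketch of such a DaemonSet (the image, environment variable, and Splunk endpoint are hypothetical placeholders; a real forwarder needs its own configuration):
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: splunk-forwarder
spec:
  selector:
    matchLabels:
      app: splunk-forwarder
  template:
    metadata:
      labels:
        app: splunk-forwarder
    spec:
      containers:
      - name: forwarder
        image: splunk/universalforwarder:latest   # assumed image
        env:
        - name: SPLUNK_SERVER                     # hypothetical setting
          value: "splunk.example.com:9997"
        volumeMounts:
        - name: varlog
          mountPath: /var/log
      volumes:
      - name: varlog                              # expose the node's logs to the pod
        hostPath:
          path: /var/log
Because a DaemonSet schedules exactly one such pod per node, every node added by the autoscaler automatically gets a forwarder with no manual steps.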
-
Question 4 of 10
4. Question
Your VMs are running in a subnet that has a subnet mask of 255.255.255.240. The current subnet has no more free IP addresses and you require an additional 10 IP addresses for new VMs. The existing and new VMs should all be able to reach each other without additional routes. What should you do?
Use gcloud to expand the IP range of the current subnet. is the right answer.
The subnet mask of the existing subnet is 255.255.255.240, which means a /28 prefix: 4 host bits are free, so 2 to the power of 4 = 16 IP addresses in total.
As per the IETF (Ref: https://tools.ietf.org/html/rfc1918), the supported internal IP address ranges are:
1. 24-bit block 10.0.0.0/8 (16777216 IP addresses)
2. 20-bit block 172.16.0.0/12 (1048576 IP addresses)
3. 16-bit block 192.168.0.0/16 (65536 IP addresses)
A /28 prefix is a very small subnet and could be in any of the ranges above, and all of these ranges have room to expand the subnet.
A /27 prefix gives you 32 IP addresses, i.e. 16 more, and we only need 10 more. So expanding the subnet to a /27 prefix gives us the required capacity, and GCP lets you do exactly that with a single gcloud command:
https://cloud.google.com/sdk/gcloud/reference/compute/networks/subnets/expand-ip-range
gcloud compute networks subnets expand-ip-range SUBNET_NAME --region=REGION --prefix-length=27
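As a hedged worked example (the subnet name and region below are hypothetical placeholders):
# Expand the subnet from /28 to /27, then verify the new range.
gcloud compute networks subnets expand-ip-range dev-subnet \
  --region=us-central1 --prefix-length=27
gcloud compute networks subnets describe dev-subnet \
  --region=us-central1 --format="get(ipCidrRange)"
Expansion happens in place: existing VMs keep their addresses and new VMs land in the same subnet, so no additional routes are needed.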
-
Question 5 of 10
5. Question
Your customer is moving their corporate applications to Google Cloud Platform. The security team wants detailed visibility of all projects in the organization. You provision the Google Cloud Resource Manager and set yourself up as the org admin. What Google Cloud Identity and Access Management (Cloud IAM) roles should you give to the security team?
Correct answer is B. The security team only needs visibility into the projects; the Project Viewer role provides this while following the best practice of least privilege.
Refer to the GCP documentation on Organization & Project access control.
Option A is wrong as Project Owner would provide access, but it does not align with the best practice of least privilege.
Option C is wrong as Org Admin does not align with the best practice of least privilege.
Option D is wrong as the user needs to be granted Organization Viewer access to see the organization.
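A hedged sketch of granting that role from the command line (the project ID and group address are hypothetical placeholders):
# Grant the security team group read-only visibility into a project.
gcloud projects add-iam-policy-binding my-project-id \
  --member="group:security-team@example.com" \
  --role="roles/viewer"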
-
Question 6 of 10
6. Question
Auditors visit your teams every 12 months and ask to review all the Google Cloud Identity and Access Management (Cloud IAM) policy changes in the previous 12 months. You want to streamline and expedite the analysis and audit process. What should you do?
Correct answer is B as BigQuery is a good storage option with analysis capability. Also, access to the data can be controlled using ACLs and views.
BigQuery uses access control lists (ACLs) to manage permissions on projects and datasets.
BigQuery is a petabyte-scale analytics data warehouse that you can use to run SQL queries over vast amounts of data in near real time.
Giving a view access to a dataset is also known as creating an authorized view in BigQuery. An authorized view allows you to share query results with particular users and groups without giving them access to the underlying tables. You can also use the view's SQL query to restrict the columns (fields) the users are able to query.
Option A is wrong as alerts are real-time and the auditors do not need them.
Option C is wrong as Cloud SQL is not ideal for storage of log files and cannot be controlled through ACLs.
Option D is wrong as Cloud Storage is a good storage option but does not provide direct analytics capabilities.
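A hedged sketch of exporting the relevant audit logs to BigQuery (the sink name, project ID, and dataset are hypothetical placeholders):
# Route Admin Activity audit logs, which include IAM policy changes,
# to a BigQuery dataset the auditors can query.
gcloud logging sinks create iam-audit-sink \
  bigquery.googleapis.com/projects/my-project/datasets/iam_audit \
  --log-filter='logName:"cloudaudit.googleapis.com%2Factivity"'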
-
Question 7 of 10
7. Question
Your App Engine application needs to store stateful data in a proper storage service. Your data is non-relational database data. You do not expect the database size to grow beyond 10 GB and you need to have the ability to scale down to zero to avoid unnecessary costs. Which storage service should you use?
Correct answer is D as Cloud Datastore provides a scalable, fully managed NoSQL document database for your web and mobile applications. It requires no provisioned capacity, so costs scale down with usage. Typical uses: semistructured application data, user profiles, hierarchical data, product catalogs, durable key-value data, and game state.
Option A is wrong as Bigtable is not an ideal storage option for state management here. Cloud Bigtable is a scalable, fully managed NoSQL wide-column database suitable for both low-latency single-point lookups and precalculated analytics. Typical uses: IoT, finance, and adtech workloads that need low-latency read/write access, high-throughput data processing, time series, monitoring, geospatial datasets, and graphs.
Option B is wrong as Dataproc is not a storage solution. Cloud Dataproc is a fast, easy-to-use, fully managed cloud service for running Apache Spark and Apache Hadoop clusters in a simpler, more cost-efficient way.
Option C is wrong as you need to define a capacity while provisioning the database, so it cannot scale down to zero. Cloud SQL is a fully managed MySQL and PostgreSQL database service built on the strength and reliability of Google's infrastructure. Typical uses: web frameworks; websites, blogs, and content management systems (CMS); structured data and business intelligence (BI) applications; OLTP workloads such as ERP, CRM, and ecommerce applications; and geospatial applications.
-
Question 8 of 10
8. Question
You have a collection of media files over 50GB each that you need to migrate to Google Cloud Storage. The files are in your on-premises data center. What migration method can you use to help speed up the transfer process?
Correct answer is B as gsutil provides object composition (parallel composite uploads) to handle uploads of larger files.
Refer to the GCP documentation on Optimizing for Cloud Storage Performance.
More efficient large file uploads
The gsutil utility can automatically use object composition to perform uploads in parallel for large local files that you want to upload to Cloud Storage. It splits a large file into component pieces, uploads them in parallel, and then recomposes them once they're in the cloud (and deletes the temporary components it created locally).
You can enable this by setting the `parallel_composite_upload_threshold` option on gsutil (or by updating your .boto file, as the console output suggests).
gsutil -o GSUtil:parallel_composite_upload_threshold=150M cp ./localbigfile gs://your-bucket
Here localbigfile is a file larger than 150 MB. gsutil divides the data into chunks of roughly 150 MB and uploads them in parallel, increasing upload performance.
Option A is wrong as the multi-threaded option is best suited for uploading multiple files to better utilize the bandwidth, not a single large file.
Option C is wrong as the Cloud Transfer Service cannot handle uploads from an on-premises data center.
Option D is wrong as recursive upload helps handle folders and subfolders.
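Alternatively, the same threshold can be set once in your .boto configuration file so every gsutil cp benefits (a sketch; the 150M value simply mirrors the example above):
[GSUtil]
parallel_composite_upload_threshold = 150M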
-
Question 9 of 10
9. Question
A company is planning the migration of their web application to Google App Engine. However, they would still continue to use their on-premises database. How can they set up the application?
Correct answer is B as Google App Engine provides connectivity to on-premises networks using Cloud VPN.
Refer to the GCP documentation on App Engine Flexible Network Settings.
Advanced network configuration
You can segment your Compute Engine network into subnetworks. This allows you to enable VPN scenarios, such as accessing databases within your corporate network.
To enable subnetworks for your App Engine application:
1. Create a custom subnet network.
2. Add the network name and subnetwork name to your app.yaml file, as specified in the documentation.
3. To establish a simple VPN based on static routing, create a gateway and a tunnel for the custom subnet network. Otherwise, see how to create other types of VPNs.
Option A is wrong as Google App Engine Standard cannot use Cloud VPN.
Options C & D are wrong as you need a Cloud VPN to connect to the on-premises data center. Cloud Router supports dynamic routing.
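A hedged sketch of the corresponding app.yaml network settings for the App Engine flexible environment (the runtime, network, and subnetwork names are hypothetical placeholders):
# app.yaml (flexible environment)
runtime: python
env: flex
network:
  name: my-custom-network
  subnetwork_name: my-subnet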
-
Question 10 of 10
10. Question
A lead software engineer tells you that his new application design uses websockets and HTTP sessions that are not distributed across the web servers. You want to help him ensure his application will run properly on Google Cloud Platform. What should you do?
Correct answer is C as the HTTP(S) load balancer in GCP handles WebSocket traffic natively. Backends that use WebSocket to communicate with clients can use the HTTP(S) load balancer as a front end, for scale and availability.
Refer to the GCP documentation on HTTP Load Balancing.
HTTP(S) Load Balancing has native support for the WebSocket protocol. Backends that use WebSocket to communicate with clients can use the HTTP(S) load balancer as a front end, for scale and availability. The load balancer does not need any additional configuration to proxy WebSocket connections.
The WebSocket protocol, which is defined in RFC 6455, provides a full-duplex communication channel between clients and servers. The channel is initiated from an HTTP(S) request.
Option A is wrong as there is no compelling reason to move away from WebSockets as part of a move to GCP.
Option B is wrong; while this may be a good exercise anyway, it doesn't really have any bearing on the GCP migration.
Option D is wrong as there is no compelling reason to move away from WebSockets as part of a move to GCP.
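Because the HTTP sessions are not distributed across the web servers, it can also help to pin each client to one backend. A hedged sketch (the backend service name is a hypothetical placeholder):
# Enable cookie-based session affinity on the load balancer's backend service.
gcloud compute backend-services update my-backend-service \
  --global --session-affinity=GENERATED_COOKIE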
- We are offering 800 of the latest real Google Associate Cloud Engineer exam questions for practice, which will help you score higher on your exam.
- Aim for 85% or above in our mock exams before taking the main exam.
- Review both right and wrong answers, and thoroughly go through the explanation provided for each question; this will help you understand the topic.
- The Master Cheat Sheet was prepared by our instructors and contains their personal notes for all exam objectives. It is carefully written to help you understand the topics easily. We recommend using the Master Cheat Sheet as a final step of preparation to cram the important topics before the exam.
- Weekly updates: we have a dedicated team updating our question bank on a regular basis, based on student feedback about what appeared on the actual exam, as well as through external benchmarking.
An Associate Cloud Engineer deploys applications, monitors operations, and manages enterprise solutions. This individual is able to use Google Cloud Console and the command-line interface to perform common platform-based tasks to maintain one or more deployed solutions that leverage Google-managed or self-managed services on Google Cloud.
It is recommended to have knowledge of the following areas when attempting Google Associate Cloud Engineer Exam Questions 2024:
- Compute Engine
- Data Storage Services
- Google App Engine
- Google Kubernetes Engine
- Networking and VPCs
- Load Balancing
- Logging, Monitoring, and Debugging
- IAM
- Managed Instance Groups
- Deployment
- Billing
- Security
Before appearing for or planning any certification or course, the first questions that come to mind are "What is the use of this certification?" and "How will this certification help me in my career?" Here are some points to help with this:
- As demand for cloud engineers increases, a CV with this gleaming certification will give you an extra edge.
- Certification leads to a marked gain in job prospects and earnings.
- Most people agree that certification has improved their earnings, and 84% have seen better job prospects after getting certified.
- Updating your profile with this certificate will boost your job profile and increase your chances of being chosen.
- Getting certified helps you build familiarity with Google Cloud Platform services and technologies and attain comprehensive hands-on experience with GCP tools and processes.
Google Associate Cloud Engineer Exam Details
- Certification Name: Google Cloud Certified Associate Cloud Engineer
- Exam Duration: 2 hrs
- Exam Cost: $125 (plus tax where applicable)
- Exam Format: Multiple Choice Questions
- Exam Language: English, Japanese, Spanish, Portuguese, French, and German
- Number of Questions: 50 (approx)
- Prerequisites: None
- Recommendation: 6 months+ hands-on experience with Google Cloud
For the exam overview: https://cloud.google.com/certification/cloud-engineer
Singularitylensing –
Cleared the exam today. I have not seen a better practice set with detailed explanations than this one. One of the best available. It was really helpful in clearing the exam and boosting my confidence. Thanks a lot. Keep up the great work.
Pranjal Verlekar –
Passed the exam today. The range of questions is very extensive. I can say almost 80% of the test questions came from here. Amazing job creating such an extensive list with proper explanations. Highly recommended!!
Arun Jackie –
Extremely helpful. Helps you learn how the test is going to try to trick you. Many questions here are in the exam, or are variants of what you will see. Passed tonight with a 93%. I felt confident before finding this site, then worried, since the questions here were more of a challenge than my materials. Straight up saved me from making mistakes on the exam.
Jahasein Male –
The first time I took the Google ACE I failed and was feeling defeated. A colleague recommended this course, saying it helped him a lot. It was a no-brainer purchase; the price is just $19. I used the practice exams to warm up for my second shot at the real deal, and I passed. 🙂
TIRUPATHI YADA –
I registered for the exam on Oct 23, 2022, and passed the Google Cloud Associate Cloud Engineer. About 50% of the questions came from skillcertpro.com. Thank you for helping me so much in passing the certification.
Cousin Issac –
Very helpful in certification preparation. The explanations for each question are what make skillcertpro stand out from the others. Passed my exam.
Dive Preeti –
This is, so far, the best set of practice questions that I have found; it should be used as the example to follow. It covers what I missed in other practice exams, with complete explanations and several links explaining why, what, and when. This is amazing and avoids wasting a lot of time reviewing other documentation. I highly recommend it to anyone who wants to take the exam.
Conrad –
If you don't have any IT experience like me, please ensure you study all the questions here in order to pass this exam. Most of the questions that helped me pass were from this dump. Kudos to skillcertpro.
Szymon M –
Passed the exam today (14.01). I had 50 single-choice questions. Maybe 10-15 questions were repeats from this dump. You must carefully track and understand the answers from this dump; this will help you in the exam because the questions are thematically similar. Good luck.
Kenronishe Gumbs –
Roughly 5-10 questions on my exam came from these questions; however, that's not really important.
Nonetheless, I was able to pass my exam through dedicated studies.
These questions and the explanation provided really helped me develop my understanding and I was able to create a framework for approaching scenario-based questions for this exam.
In addition, the ref links provided gave me the opportunity to do additional reading on topics/questions I did not understand which is really appreciated. The approach taken by skillcertpro by providing explanations and reference links for past exam questions is really effective.
I strongly recommend skillcertspro!
Jeremy Venlet –
I sure am glad I found these, I was just going to take the test after reading the ACE Study guide from Sybex. I was getting all of those questions right. I looked on the web for some reassurance and found the skillcertpro questions. Boy was I not ready! I surely would have failed. After spending some time studying the questions in each test I was better prepared and passed the Exam today.
Alex B –
Practice from this set and read the explanations. You will pass. Most of the exam questions were close to the questions here.
Diwakar Reddy –
The questions in each of the papers are very important, and I found that most of them appeared in the ACE exam.
Also, the explanations in the answers are quite detailed and helpful.
My suggestion is to first learn the concepts and then solve these question papers, and you are all set to score really well.
Vidhya Nagarajan –
It helped to practice and get an understanding of certification exam. Thank you.
Nick S –
I prepared for the Associate Cloud Engineer but had not done any practice questions, which is vital before taking any online exam. I purchased this course a day before my exam and practiced the 6 question sets. Though I admit I did not pass the first 3 question sets, reviewing the questions gave me very good insight into what I had missed while preparing for the exam.
I recommend these practice tests; they will certainly be useful for candidates who have not taken any mock tests online and want to improve their chances of passing on the first attempt.
Finally, I have passed my Associate Cloud Engineer 🙂
Praveen Dandu –
Passed my exam recently by following an online course AND taking all 6 of these practice exams. These practice exams are harder than the actual exam but an amazing way to prepare. I kept scoring around low to mid 70% on the first 2 or 3 exams, then ramped it up to high 70%. I wanted to hit 80% or better but never could (on my first try). I read all the explanations for each question, studied those and my notes, then took the real exam. I scored above 90% on the real exam, so these practice exams are extremely valuable and I cannot recommend them enough! They were critical to me passing!
Udaid Khan –
These practice exams were very good and they were very instrumental in helping me to pass the certification test. The variety of questions was excellent. When the answers are revealed after taking the test, you receive a very detailed explanation of all of your answer selections, correct and incorrect. This proved very helpful to me as it allowed me to focus on the things that I still needed to improve. I highly recommend this course if you want to create a good foundation for your GOOGLE cloud certification journey.
Donald Rayman –
The course sets the bar high by striking a good balance across question complexity, which helps you remain grounded. One great thing while reviewing the answers is that it walks you through each option, explaining why that option is correct or incorrect, instead of simply telling you that since B is right, A, C, and D are wrong. Once you complete this course, you feel more confident going into the real exam.
Mohamed Suhail –
Thanks to the creator of this course, which helped me pass the certification; the effort you have put in and the updated questions seriously helped me. Thanks a lot for this content.
Jose Almeida –
Just finished my exam today, and I passed on my first attempt. Thanks to SkillPro; these mock exams were very good and very instrumental in helping me pass the certification test.
My advice is: read all the answers, correct and incorrect, to understand them.
Marcel Dreyer –
Passed the exam today on the first try, using SkillPro mock exams as my main source of study. The questions in the exam are not 100% like the mock tests (obviously), but the majority are word for word.
Martin Byrne –
I took the test this Monday, after using this course as one of my main sources of study. I passed. Many, many questions on these practice tests were the same as on the real test. I highly recommend going through these practice tests before taking the real exam. Worth the price.
Harpreet Dhaliwal –
The practice tests are really good and give you real insight into what you can expect on the actual exam.
The explanation of all the options in each answer is a must-read.
Avdhoot Harshe –
These exam sets helped me a lot as a confidence booster, as well as covering some areas left out of my actual study plan. I practiced most sets twice until I reached 85% or closer, then attempted the actual exam today and cleared it in one shot.
Mike Stope –
hi, this is a great test
Mauricio –
Excellent; many questions came up on my certification test.
Hanisa K –
Highly recommend this! The structure, complexity, content of the questions are the same as the actual exam. Best part is the explanation! Not only does it explain the correct answers but it also explains what each of the services do in the incorrect options. This then is a great way to not only understand the reasoning behind the correct answer but also learn new information while reading the incorrect options.
Vamsi Boppana –
I recently used the Skillcert Pro practice tests to prepare for my Google Cloud Platform (GCP) certification exam, and I couldn’t be more pleased with the results. The practice tests were incredibly helpful, offering a wide range of questions that mirrored the style and difficulty of the actual exam. Many of the questions were straightforward and directly relevant to what I encountered during the test. Thanks to Skillcert Pro, I felt well-prepared and confident, ultimately leading to my success in passing the exam. Highly recommended for anyone preparing for GCP certifications
Harshita Badhawn –
Exactly the complexity level I was expecting for the ACE exam. Questions are clearly written, with no ambiguous context (as you may have seen in many other practice exam series or sites). Very well documented explanations of correct vs incorrect answers. Good work!
Cindy Garcia –
Excellent practice tests. An eye opener for me. I learnt a lot of topics going through these tests. The explanations for each option, why is it correct and why is it wrong are excellent. Never saw such detailed explanations with proper links. A must for Google Associate Cloud engineer certification preparation.
Soham Roy –
Excellent coverage of the course syllabus. Please don't expect all the questions to come from these dumps, but they'll definitely boost your confidence, as you now have guidance on what to study. So please go through the explanations and the gcloud links. One more thing for the admin: SSL on port 443 for TCP requests is the SSL Proxy Load Balancer (not NLB). Passed my exam! Thanks!!