Google Cloud Certified – Professional Cloud Architect Sample Exam Questions (10 Questions)
Question 1 of 10
Your company wants to track whether someone is present in a meeting room reserved for a scheduled meeting. There are 1000 meeting rooms across 5 offices on 3 continents. Each room is equipped with a motion sensor that reports its status every second. You want to support the data ingestion needs of this sensor network. The receiving infrastructure needs to account for the possibility that the devices may have inconsistent connectivity. Which solution should you design?
A is not correct because having a persistent connection does not handle the case where the device is disconnected.
B is not correct because Cloud SQL is a regional, relational database and not the best fit for sensor data. Additionally, the frequency of the writes has the potential to exceed the supported number of concurrent connections.
C is correct because Cloud Pub/Sub can handle the frequency of this data, and consumers of the data can pull from the shared topic for further processing.
D is not correct because having a persistent connection does not handle the case where the device is disconnected.
https://cloud.google.com/sql/
https://cloud.google.com/pubsub/
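As a sketch of what each device would publish every second, the payload can be a small self-describing JSON message that Pub/Sub subscribers can process even when deliveries arrive late after a connectivity gap. The field names below are illustrative assumptions, not a Google-defined schema:

```python
import json
import time

def sensor_status_message(room_id, occupied, ts=None):
    """Build the JSON payload a room sensor might publish to a Pub/Sub topic.

    Field names are illustrative assumptions, not a Google-defined schema.
    Including the device timestamp lets subscribers order readings even when
    a device reconnects and flushes a backlog.
    """
    return json.dumps({
        "room_id": room_id,
        "occupied": occupied,
        "timestamp": ts if ts is not None else int(time.time()),
    }, sort_keys=True)
```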
Question 2 of 10
The development team has provided you with a Kubernetes Deployment file. You have no infrastructure yet and need to deploy the application. What should you do?
Correct answer is B as you would need gcloud to create a Kubernetes cluster. Once the cluster is created, you can use kubectl to manage the deployments.
Refer GCP documentation – Kubernetes Cluster Tutorial
To create a cluster with the gcloud command-line tool, use the gcloud container clusters command:
gcloud container clusters create hello-cluster --num-nodes=3
To deploy and manage applications on a GKE cluster, you must communicate with the Kubernetes cluster management system. You typically do this by using the kubectl command-line tool.
Kubernetes represents applications as Pods, which are units that represent a container (or group of tightly coupled containers). The Pod is the smallest deployable unit in Kubernetes. In this tutorial, each Pod contains only your hello-app container.
The kubectl run command below causes Kubernetes to create a Deployment named hello-web on your cluster. The Deployment manages multiple copies of your application, called replicas, and schedules them to run on the individual nodes in your cluster. In this case, the Deployment will be running only one Pod of your application.
kubectl run hello-web --image=gcr.io/${PROJECT_ID}/hello-app:v1 --port 8080
Options A & C are wrong as you need kubectl to do a Kubernetes deployment.
Options C & D are wrong as you need gcloud to create the Kubernetes cluster.
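The question assumes the team already provided a Deployment file. For illustration, such a manifest typically looks like the following (the name, labels, and image are assumptions, not taken from the question):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web            # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
      - name: hello-app
        image: gcr.io/PROJECT_ID/hello-app:v1   # illustrative image
        ports:
        - containerPort: 8080
```

Once the cluster exists, the file is applied with `kubectl apply -f deployment.yaml`.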
Question 3 of 10
You are designing a relational data repository on Google Cloud to grow as needed. The data will be transactionally consistent and added from any location in the world. You want to monitor and adjust node count for input traffic, which can spike unpredictably. What should you do?
B is correct because of the requirement for globally scalable transactions: use Cloud Spanner. CPU utilization is the recommended metric for scaling, per Google best practices; see the links below.
A is not correct because you should not use storage utilization as a scaling metric.
C and D are not correct because you should not use Cloud Bigtable for this scenario: the data will be transactionally consistent and added from any location in the world.
Reference
Cloud Spanner Monitoring Using Stackdriver: https://cloud.google.com/spanner/docs/monitoring
Monitoring a Cloud Bigtable Instance: https://cloud.google.com/bigtable/docs/monitoring-instance
The correct answer is: Use Cloud Spanner for storage. Monitor CPU utilization and increase node count if more than 70% utilized for your time span.
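The 70% CPU rule can be expressed as a simple sizing calculation. This is an illustrative helper, not a Spanner API; in practice the utilization metric comes from Stackdriver monitoring:

```python
import math

def recommended_node_count(current_nodes, cpu_utilization, target=0.70):
    """Suggest a Spanner node count keeping projected CPU below target.

    cpu_utilization is the current mean utilization as a fraction (0.0-1.0).
    Illustrative sketch only; it does not call any Google API.
    """
    if cpu_utilization <= target:
        return current_nodes
    # Scale nodes so the same total load lands under the target utilization.
    return math.ceil(current_nodes * cpu_utilization / target)
```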
Question 4 of 10
You have been asked to select the storage system for the click-data of your company’s large portfolio of websites. This data is streamed in from a custom website analytics package at a typical rate of 6,000 clicks per minute, with bursts of up to 8,500 clicks per second. It must be stored for future analysis by your data science and user experience teams. Which storage infrastructure should you choose?
The correct answer is B as Bigtable provides a scalable, fully managed NoSQL wide-column database that is suitable for both real-time access and analytics workloads.
Refer GCP documentation – Storage Options
It is best suited for:
- IoT, finance, ed-tech
- Personalization, recommendations
- Monitoring
- Geospatial datasets
- Graphs
Option A is wrong as Google Cloud SQL is mainly for OLTP (transactional, CRUD) workloads, not for ingesting and storing streaming data. It does not have the scalability and elasticity to absorb this amount of data in real time.
Option D is wrong as Google Cloud Datastore does not provide analytics capabilities. Google Cloud Datastore is a NoSQL document database built for automatic scaling, high performance, and ease of application development, and it integrates well with App Engine.
Option C is wrong as Google Cloud Storage is not suitable for handling real-time streaming data. It also needs to be used with BigQuery for analytics.
Extended Explanation:
A: The data is unstructured, so this is not correct.
B & C: Cloud Storage stores files, but Cloud Bigtable allows for labeled partitions of unstructured data, so it is better than Cloud Storage.
This leaves the answer between B and D.
D: The 10-second idle time of the ACID operations of option D eliminates this option.
Hence, B (Google Cloud Bigtable) is the most correct answer among the given options.
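One reason Bigtable fits this click stream is row-key design. A common pattern for time-series data (illustrative, not mandated by the question) prefixes the site ID and reverses the timestamp, so writes spread across sites and the newest clicks for a site sort first:

```python
MAX_TS_MS = 10**13  # far-future millisecond timestamp used for reversal

def click_row_key(site_id, ts_ms):
    """Build a Bigtable row key like 'site#reversed_timestamp'.

    Illustrative sketch; zero-padding keeps lexicographic order
    identical to numeric order, so scans return newest rows first.
    """
    return f"{site_id}#{MAX_TS_MS - ts_ms:013d}"
```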
Question 5 of 10
You have created a Kubernetes Engine cluster named project-1. You’ve realized that you need to change the machine type for the cluster from n1-standard-1 to n1-standard-4. What should be done to make this change happen?
The correct answer is A as the machine type for an existing cluster cannot be changed through commands. A new node pool with the updated machine type needs to be created and the workload migrated to the new node pool.
Refer GCP documentation – Kubernetes Engine Migrating Node Pools
A node pool is a subset of machines that all have the same configuration, including machine type (CPU and memory) and authorization scopes. Node pools represent a subset of nodes within a cluster; a container cluster can contain one or more node pools.
When you need to change the machine profile of your Compute Engine cluster, you can create a new node pool and then migrate your workloads over to the new node pool.
To migrate your workloads without incurring downtime, you need to:
- Mark the existing node pool as unschedulable.
- Drain the workloads running on the existing node pool.
- Delete the existing node pool.
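The migration steps above can be sketched with gcloud and kubectl. Pool names are illustrative assumptions; this assumes the original pool is named default-pool:

```shell
# Create a replacement pool with the new machine type
gcloud container node-pools create larger-pool --cluster=project-1 \
    --machine-type=n1-standard-4 --num-nodes=3

# Cordon every node in the old pool so no new Pods schedule there
for node in $(kubectl get nodes -l cloud.google.com/gke-nodepool=default-pool -o=name); do
  kubectl cordon "$node"
done

# Drain the old nodes; Pods get rescheduled onto the new pool
for node in $(kubectl get nodes -l cloud.google.com/gke-nodepool=default-pool -o=name); do
  kubectl drain --force --ignore-daemonsets "$node"
done

# Remove the old pool once workloads have moved
gcloud container node-pools delete default-pool --cluster=project-1
```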
Question 6 of 10
You have a data workflow that consists of a data ingestion layer, data transformation layer, data analytics layer, and data storage layer. You are looking for a service that would ease the tasks of creating, scheduling, monitoring, and managing workflows without dealing with the management of the infrastructure. Please select the right service that would fulfill the requirement.
Option B is the correct choice because Cloud Composer is a managed Apache Airflow service that helps you create, schedule, monitor, and manage workflows.
Option A is an incorrect choice because, while you could install Apache Airflow on a VM instance, it would mean you have to manage the infrastructure yourself.
Option C is incorrect because Istio is an open platform to connect, monitor, and secure microservices.
Option D is incorrect because Stackdriver is a monitoring and management service for services, containers, applications, and infrastructure.
Read more about it here:
https://cloud.google.com/composer/
Question 7 of 10
You have a definition for an instance template that contains a web application. You are asked to deploy the application so that it can scale based on the HTTP traffic it receives. What should you do?
A is not correct because the load balancer will just load-balance access to the uploaded image itself, and not create or autoscale VMs based on that image.
B is not correct because, while App Engine can scale as a proxy, all requests will still end up on the same Compute Engine instance, which needs to scale itself.
C is correct because a managed instance group can use an instance template to scale based on HTTP traffic.
D is not correct because unmanaged instance groups do not offer autoscaling.
Reference:
Managed instance groups and autoscaling
https://cloud.google.com/compute/docs/instance-groups/#managed_instance_groups_and_autoscaling
Exporting an Image
https://cloud.google.com/compute/docs/images/export-image
Adding a Cloud Storage Bucket to Content-based Load Balancing
https://cloud.google.com/compute/docs/load-balancing/http/adding-a-backend-bucket-to-content-based-load-balancing
The correct answer is: Create a managed instance group based on the instance template. Configure autoscaling based on HTTP traffic and configure the instance group as the backend service of an HTTP load balancer.
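A rough sketch of the commands involved (group, template, and zone names are assumptions, not from the question):

```shell
# Create a managed instance group from the existing instance template
gcloud compute instance-groups managed create web-mig \
    --template=web-template --size=2 --zone=us-central1-a

# Autoscale on load-balancing serving capacity, i.e. HTTP traffic
gcloud compute instance-groups managed set-autoscaling web-mig \
    --zone=us-central1-a --max-num-replicas=10 \
    --scale-based-on-load-balancing \
    --target-load-balancing-utilization=0.8
```

The group is then added as the backend service of an HTTP load balancer.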
Question 8 of 10
You have an application server running on Compute Engine in the europe-west1-d zone. You need to ensure high availability and replicate the server to the europe-west2-c zone using the fewest steps possible. What should you do?
A is correct because this makes sure the VM gets replicated in the new zone.
B is not correct because this takes more steps than A.
C is not correct because this will generate an error, as gcloud cannot copy disks.
D is not correct because the original VM will be moved, not replicated.
The correct answer is: Create a snapshot from the disk. Create a disk from the snapshot in the europe-west2-c zone. Create a new VM with that disk.
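A sketch of the three steps with gcloud (disk, snapshot, and instance names are assumptions):

```shell
# 1. Snapshot the source disk in the original zone
gcloud compute disks snapshot app-disk --zone=europe-west1-d \
    --snapshot-names=app-snap

# 2. Create a disk from the snapshot in the target zone
gcloud compute disks create app-disk-2 --source-snapshot=app-snap \
    --zone=europe-west2-c

# 3. Boot a new VM from that disk
gcloud compute instances create app-server-2 --zone=europe-west2-c \
    --disk=name=app-disk-2,boot=yes
```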
Question 9 of 10
You have developed an application using Cloud ML Engine that recognizes famous paintings from uploaded images. You want to test the application and allow specific people to upload images for the next 24 hours. Not all users have a Google Account. How should you have users upload images?
Correct answer is D. As not all the users have Google accounts, Cloud Storage can be exposed to users using signed URLs. The signed URLs can be configured to expire after 24 hours and do not need any manual intervention.
Refer GCP documentation – Signed URLs
In some scenarios, you might not want to require your users to have a Google account in order to access Cloud Storage, but you still want to control access using your application-specific logic. The typical way to address this use case is to provide a signed URL to a user, which gives the user read, write, or delete access to that resource for a limited time. Anyone who knows the URL can access the resource until the URL expires. You specify the expiration time in the query string to be signed.
Option A is wrong as Cloud Storage cannot be password protected.
Options B & C are wrong as App Engine would need setup and teardown. Also, Cloud Identity would not work as there is no credentials provider.
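The general mechanism behind a signed URL, an expiring HMAC-signed link, can be sketched with the standard library. The secret and URL below are hypothetical; real Cloud Storage signing is done with the client library (e.g. the blob's generate_signed_url method), not hand-rolled like this:

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

def sign_url(base_url, secret, ttl_seconds, now=None):
    """Return base_url with an expiry and HMAC signature appended."""
    expires = int(now if now is not None else time.time()) + ttl_seconds
    sig = hmac.new(secret, f"{base_url}?expires={expires}".encode(),
                   hashlib.sha256).hexdigest()
    return f"{base_url}?{urlencode({'expires': expires, 'sig': sig})}"

def verify_url(url, secret, now=None):
    """Check the signature and that the link has not expired."""
    base, _, query = url.partition("?")
    params = dict(p.split("=") for p in query.split("&"))
    expires = int(params["expires"])
    expected = hmac.new(secret, f"{base}?expires={expires}".encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, params["sig"]):
        return False
    return (now if now is not None else time.time()) < expires
```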
Question 10 of 10
You have multiple Data Analysts who work with the dataset hosted in BigQuery within the same project. As a BigQuery Administrator, you are required to grant the data analysts only the privilege to create jobs/queries and the ability to cancel self-submitted jobs. Which role should you assign to the users?
The correct answer is B as JobUser access grants users permissions to run jobs and cancel their own jobs within the same project.
Refer GCP documentation – BigQuery Access Control
roles/bigquery.jobUser: Permissions to run jobs, including queries, within the project. The jobUser role can get information about their own jobs and cancel their own jobs.
Rationale: This role allows the separation of data access from the ability to run work in the project, which is useful when team members query data from multiple projects. This role does not allow access to any BigQuery data. If data access is required, grant dataset-level access controls.
Resource Types:
- Organization
- Project
Option A is wrong as the User role would allow running queries across projects.
Option C is wrong as the Owner role would give more privileges to the users.
Option D is wrong as the Viewer role does not give users permissions to run jobs.
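As a quick sketch of the role discussed above, the project-level `roles/bigquery.jobUser` binding can be granted with `gcloud`, while data access is granted separately on the dataset. The project ID, user email, and dataset name below are hypothetical; verify flags against the current `gcloud` and `bq` documentation before use.

```shell
# Grant the BigQuery Job User role at the project level: the user can now
# run jobs (including queries) and cancel their own jobs in this project...
gcloud projects add-iam-policy-binding my-analytics-project \
  --member="user:analyst@example.com" \
  --role="roles/bigquery.jobUser"

# ...but jobUser grants no data access. Grant that separately at the
# dataset level, e.g. by adding a READER entry to the dataset's access list:
bq show --format=prettyjson my-analytics-project:sales_dataset > dataset.json
# Edit dataset.json and add to the "access" array:
#   {"role": "READER", "userByEmail": "analyst@example.com"}
bq update --source dataset.json my-analytics-project:sales_dataset
```

This separation is the point of the rationale: job execution is controlled per project, while data visibility is controlled per dataset.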
A Professional Cloud Architect enables organizations to leverage Google Cloud technologies. With a thorough understanding of cloud architecture and Google Cloud Platform, this individual can design, develop, and manage robust, secure, scalable, highly available, and dynamic solutions to drive business objectives.
SkillCertPro Offerings (Instructor Note):
- We are offering 1220 of the latest real Google Cloud Certified – Professional Cloud Architect exam questions (2024) for practice, which will help you score higher in your exam.
- Aim for 85% or above in our mock exams before taking the main exam.
- Review both wrong and right answers, and thoroughly go through the explanation provided for each question; this will help you understand the material.
- The Master Cheat Sheet was prepared by our instructors and contains their personal notes for all exam objectives, carefully written to help you understand the topics easily.
- We recommend using the Master Cheat Sheet in the 2-3 days just before the main exam to review the important notes.
- Weekly updates: A dedicated team updates our question bank regularly, based on student feedback about what appeared on the actual exam as well as external benchmarking.
The Google Cloud Certified – Professional Cloud Architect exam assesses your ability to:
- Design and plan a cloud solution architecture
- Manage and provision the cloud solution infrastructure
- Design for security and compliance
- Analyze and optimize technical and business processes
- Manage implementations of cloud architecture
- Ensure solution and operations reliability
Anshul Singhal –
Great set of practice tests! I am very satisfied with these SkillCertPro practice tests. I cleared my certification with their help; the quality of the questions and answers is good. Thanks a lot for arranging such a helpful course for us.
Nikhil Balakrishnan –
I cleared my Professional Cloud Architect exam today with a score of 895. I am happy I bought this practice test course and stuck to doing it multiple times and reading all the explanations without fail.
Though only a few of the questions from the course came up in the exam, it prepared me for the ‘twist’ Google puts in its exam. Thank you very much!
amar singh –
Cleared my GCP PCA exam today. 80% of real exam questions were actually covered in skillcertpro 16 question sets. Thank you skillcertpro.
Niyas Elavumkudy –
I was able to pass Professional Cloud Architect today with this course, which covered about 85% of the exam questions. Thanks for this wonderful course.
Shamoil Shaees –
The best practice exam tests if you are going for the Professional Cloud Architect certification. All up-to-date questions and topics. Passed.
Mahesh S –
Please prepare thoroughly on the concepts first and then attempt this question bank. It was very useful and provided good guidance; I was able to clear the exam with ease. Worth the spend, and for people preparing for this cert, please go for it.
Abdullah Zidan –
I would say this is the best set of questions, one that everyone should get. I failed on my first attempt because I was confident my own study would be enough to clear the exam, but it wasn’t. Then I purchased this material and boom!!! I cleared it.
Mohammed Rashid Azmi –
Cleared the exam.
These mock tests would really help anyone who wants to get the certification in less than 1-2 weeks’ time.
Would recommend this test series to everyone.
Riddhi Hathi –
The best decision I ever made was purchasing this practice exam. It has not only filled in the gaps where I was lacking, it has also helped me pass my Professional Cloud Architect exam. Great work guys, please keep up the great work. 🙂
Hephzibah Nagothi –
The sample test questions are very smartly twisted; some need repeated, thorough reading, and the answers will only ring a bell if you have gone through the GCP PCA training course material a good number of times.
These tests have given me a clear view of what to expect in the certification.
Great job by the folks who designed and outlined these questions.
Balaji Krishnamoorthy –
I failed on my first attempt even after enrolling in SkillCertPro for this course. I then realized that preparation in the areas below is very important:
IAM roles, networking concepts, Identity-Aware Proxy, VPC Service Controls, and NAT. I attempted it a second time and passed the exam. Please pay extra attention to question sets 10 through 15; they are very, very important for clearing the exam.
Dhanasekar Murugesan –
Very helpful. If anything, these were harder than the actual exam, so by the time I took the exam I felt quite confident and passed easily. My only complaint is that some of the questions are poorly worded, making them difficult to answer.
SkillCertPro GC Instructor –
Congratulations on passing your exam, and thank you for the feedback. We will get in touch with you to understand the issue and will definitely work on fixing it.
Umesh Kumar –
Simply an excellent resource for taking mock tests and getting your concepts clear. Very good explanation for each answer (why one answer is correct and the others are not). Very useful.
Ramandeep Singh –
Great question bank. Don’t get discouraged if you don’t pass the practice exams in the first attempts. In my experience, the real exam was easier than the practice exams. However, the higher difficulty just leads to better preparation. Passed in first attempt.
Raghuram KG –
I failed my first attempt despite the confidence I had from my own learning. I bought this material and focused on my weak areas, and SkillCertPro is wonderful at explaining each and every question in detail. Most of the questions in the real exam came from this bunch of question sets. A big shout out to the SkillCertPro team for providing this amazing course material.
Passed this exam on my second attempt.
Sayani Mitra –
I was directed to these exams by my colleague three days before my exam. I immediately purchased and started taking them, and can honestly say I would not have passed the exam without doing so.
The questions were straightforward and seemed very up to date with what I saw on the exam. Well worth it! Thank you.
Srinivas Brahmadev –
Each successive practice test ratchets it up a bit over the previous test, providing a great level of challenge and introducing new topics beyond even the course itself, covering items that are confirmed to be necessary for the actual exam! Skillcertpro tests are the gold standard among practice tests. They’re difficult, but they really help!
Daniel Stefen –
These practice tests were essential learning for the exam. They exposed one particular area where I was clearly weaker than in others. I focused more on that area, took a shot at the exam, and passed.
Mutiara Khikmatul Maolidah –
Must use this
Prathamesh Upadhye –
One of the best practice exams. I passed the exam on the very first attempt. There are some minor mistakes that you need to look out for, but it has an explanation for every question.
James Carphot –
Great practice set that helped me pinpoint weak areas of knowledge. Got my certification 🙂
Jay Peters –
These were great practice sets! They really helped me solidify my knowledge of Google Cloud before taking the professional certification, which I passed on my first attempt. I don’t think I would have passed without these tests.
Jubair Kaali –
I bought the full practice tests and they completely live up to my expectations. They are well organized to match the syllabus and are enough to get you a pass on your first try. A must-have for everybody who wants to pass the exam. You learn a lot of new topics just by answering the questions. A big thank you to SkillCertPro for this awesome content.