Google Certified Professional Cloud Developer Full Practice Tests – Total Questions: 837 – 15 Mock Exams & 1 Master Cheat Sheet
Practice Set 1
Question 1 of 55
A client of yours has asked for advice because he is looking for a quick and convenient solution for adding functionality to an application. Whenever a new customer is created in the Firebase database, he wants to perform a series of welcome activities and a series of follow-up actions, regardless of which specific function recorded the new customer record. Which of the following solutions will you suggest (choose 1)?
Correct answer: D
Any of these environments can host the additional functionality required. The best solution is Cloud Functions, because you can handle events in the Firebase Realtime Database with no need to update client code. Cloud Functions can run with full administrative privileges, which ensures that each change to the database is processed individually. Furthermore, Cloud Functions are a decoupled and economical solution because of the pay-as-you-go model. Functions handle database events in two ways: you can listen specifically for creation, update, or deletion events, or you can listen for any change of any kind to a path. The supported event handlers are listed here, and a minimal handler sketch follows:
onWrite(), which triggers when data is created, updated, or deleted in the Realtime Database.
onCreate(), which triggers when new data is created in the Realtime Database.
onUpdate(), which triggers when data is updated in the Realtime Database.
onDelete(), which triggers when data is deleted from the Realtime Database.
For any further detail: https://firebase.google.com/docs/database/extend-with-functions
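As an illustration, a 1st-gen Cloud Functions for Firebase handler for this scenario might look like the sketch below. The /customers/{customerId} path and the two helper functions are hypothetical, not part of the question:

    const functions = require('firebase-functions');

    // Fires once for every new child written under /customers.
    exports.onNewCustomer = functions.database
      .ref('/customers/{customerId}')
      .onCreate((snapshot, context) => {
        const customer = snapshot.val();
        const id = context.params.customerId;
        // sendWelcomeEmail() and scheduleFollowUp() are hypothetical helpers.
        return sendWelcomeEmail(customer)
          .then(() => scheduleFollowUp(id));
      });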
Question 2 of 55
A company asked you to plan a systems migration and a new technological architecture for the Cloud. You have been asked to indicate a series of patterns that can allow you to obtain greater efficiency in availability. Which of these options do you suggest (pick 2)?
Correct answers: A, E
Throttling: control the consumption of resources used by an instance or a service. It allows applications to use resources only up to a limit and then throttles them when this limit is reached. When usage exceeds the threshold, the system can throttle requests from one or more users. This enables the system to continue functioning and meet any service level agreements (SLAs) that are in place.
Queue-Based Load Leveling: use a queue that acts as a buffer between a task and a service it invokes in order to smooth intermittent heavy loads that can cause the service to fail or the task to time out. The task and the service run asynchronously (see the sketch below).
For any further detail: https://cloud.google.com/solutions/hybrid-and-multi-cloud-patterns-and-practices
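On GCP, queue-based load leveling is commonly built on Cloud Pub/Sub. A minimal Node.js sketch, assuming a pre-created topic and subscription (the work-queue names and the processOrder helper are hypothetical):

    const {PubSub} = require('@google-cloud/pubsub');
    const pubsub = new PubSub();

    // Producer: enqueue work instead of calling the backend directly.
    async function enqueueOrder(order) {
      await pubsub.topic('work-queue')
        .publishMessage({data: Buffer.from(JSON.stringify(order))});
    }

    // Consumer: the backend drains the queue at its own sustainable pace.
    pubsub.subscription('work-queue-sub').on('message', (message) => {
      processOrder(JSON.parse(message.data.toString()))  // hypothetical helper
        .then(() => message.ack());
    });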
Question 3 of 55
A company asked you to plan a systems migration and a new technological architecture for the Cloud. You have been asked to indicate a series of patterns that can allow you to obtain greater efficiency in processing large amounts of data. Which of these options do you suggest (pick 3)?
Correct answers: B, C, D
Cache-Aside: load data on demand into a cache from a data store. This can improve performance and also helps to maintain consistency between data held in the cache and data in the underlying data store. If an application updates information, it can follow the write-through strategy by making the modification to the data store and invalidating the corresponding item in the cache (see the sketch after this explanation).
Materialized View: prepopulated views over the data in one or more tables, for when the data isn't ideally structured for the required query operations. This can improve querying and data-extraction performance.
Sharding: split tables into a set of horizontal partitions, or shards. This improves scalability.
For any further detail: https://cloud.google.com/solutions/hybrid-and-multi-cloud-patterns-and-practices
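A cache-aside sketch in Node.js; the cache object (for example, a Memorystore/Redis client) and the db accessors are hypothetical stand-ins:

    // Read path: try the cache first, fall back to the data store on a miss.
    async function getClient(cache, db, id) {
      const hit = await cache.get(`client:${id}`);
      if (hit) return JSON.parse(hit);
      const row = await db.fetchClient(id);             // cache miss: load from the store
      await cache.set(`client:${id}`, JSON.stringify(row));
      return row;
    }

    // Write path: update the store, then invalidate the stale cached copy.
    async function updateClient(cache, db, id, fields) {
      await db.updateClient(id, fields);
      await cache.del(`client:${id}`);
    }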
Question 4 of 55
A customer of yours is planning to migrate their infrastructure to the Cloud. It is an international company that has the requirement to move its web and mobile applications in a short time, and only afterwards gradually optimize and re-engineer them. So, they want to perform a lift-and-shift migration and then improve the systems according to the strangler pattern. Which of the following strategies is the most advisable for your customer?
Correct answer: C
You can use Endpoints for OpenAPI as an interface so that you can gradually replace specific pieces of functionality served by legacy apps on Compute Engine with new software developed with, for example, the serverless technologies you prefer. Endpoints is an API management system that helps secure, monitor, analyze, and set quotas on backends. Moreover, after you deploy your API to Endpoints, you can use Cloud Endpoints Portal to create a developer portal, a website that users of your API can access to view documentation and interact with your API. A is wrong because Cloud Tasks is an asynchronous task-execution service. B is wrong because the migration is not related only to App Engine. D and E are wrong because you can use all the technologies you need and prefer, not only Cloud Functions and GKE.
For any further detail: https://cloud.google.com/endpoints/docs/
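For context, Cloud Endpoints for OpenAPI is configured with an OpenAPI 2.0 document. A minimal, hypothetical openapi.yaml fragment (the service name and path are invented for illustration):

    swagger: "2.0"
    info:
      title: legacy-orders-api
      version: "1.0.0"
    host: "orders-api.endpoints.my-project.cloud.goog"
    schemes:
      - https
    paths:
      /orders/{id}:
        get:
          operationId: getOrder
          parameters:
            - name: id
              in: path
              required: true
              type: string
          responses:
            "200":
              description: The requested order.

Because the public API surface stays stable while the implementation behind each path moves to new services over time, this is a natural fit for the strangler pattern.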
Question 5 of 55
A large bank has migrated most of its applications to the GCP cloud. Now it has the problem of replacing the database on the mainframe with a cloud solution that provides the same levels of reliability and transactional security. In addition, it wants to provide an integrated system all over the world and therefore let the different regional instances interact. Which is the best solution to adopt?
Correct
Correct Answer C Cloud Spanner is the first scalable, enterprise-grade, globally-distributed, and strongly consistent database service built for the cloud specifically to combine the benefits of relational database structure with non-relational horizontal scale. This combination delivers high-performance transactions and strong consistency across rows, regions, and continents with an industry-leading 99.999% availability SLA, no planned downtime, and enterprise-grade security. Cloud Spanner revolutionizes database administration and management and makes application development more efficient. Cloud SQL for PostgreSQL scale up to 64 processor cores and more than 400 GB of RAMand will automatically scale storage. It is powerful, but regional. The HA configuration, sometimes called a cluster, provides data redundancy. A Cloud SQL instance configured for HA is also called a regional instance and is located in a primary and secondary zone within the configured region. Within a regional instance, the configuration is made up of a primary instance (master) and a standby instance. Through synchronous replication to each zone’s persistent disk, all writes made to the primary instance are also made to the standby instance. In the event of an instance or zone failure, this configuration reduces downtime, and your data continues to be available to client applications. Cloud Bigtable is a noSQL DB: a sparsely populated table that can scale to billions of rows and thousands of columns, enabling you to store terabytes or even petabytes of data. A single value in each row is indexed; this value is known as the row key. Cloud Bigtable is ideal for storing very large amounts of single-keyed data with very low latency. It supports high read and write throughput at low latency, and it is an ideal data source for MapReduce operations. Very powerful, but non suitable for financial transactional applications. Also SQL Server is regional and not global. For Cloud SQL Services there are different location types, never global: A regional location is a specific geographic place, such as London. A multi-regional location is a large geographic area, such as the United States, that contains at least two geographic places. Multi- regional locations are only used for backups. For any further detail: https://cloud.google.com/spanner/https://cloud.google.com/sql/docs/postgres/high-availabilityhttps://cloud.google.com/bigtable/docs/overviewhttps://cloud.google.com/sql/docs/sqlserver/
Incorrect
Correct Answer C Cloud Spanner is the first scalable, enterprise-grade, globally-distributed, and strongly consistent database service built for the cloud specifically to combine the benefits of relational database structure with non-relational horizontal scale. This combination delivers high-performance transactions and strong consistency across rows, regions, and continents with an industry-leading 99.999% availability SLA, no planned downtime, and enterprise-grade security. Cloud Spanner revolutionizes database administration and management and makes application development more efficient. Cloud SQL for PostgreSQL scale up to 64 processor cores and more than 400 GB of RAMand will automatically scale storage. It is powerful, but regional. The HA configuration, sometimes called a cluster, provides data redundancy. A Cloud SQL instance configured for HA is also called a regional instance and is located in a primary and secondary zone within the configured region. Within a regional instance, the configuration is made up of a primary instance (master) and a standby instance. Through synchronous replication to each zone’s persistent disk, all writes made to the primary instance are also made to the standby instance. In the event of an instance or zone failure, this configuration reduces downtime, and your data continues to be available to client applications. Cloud Bigtable is a noSQL DB: a sparsely populated table that can scale to billions of rows and thousands of columns, enabling you to store terabytes or even petabytes of data. A single value in each row is indexed; this value is known as the row key. Cloud Bigtable is ideal for storing very large amounts of single-keyed data with very low latency. It supports high read and write throughput at low latency, and it is an ideal data source for MapReduce operations. Very powerful, but non suitable for financial transactional applications. Also SQL Server is regional and not global. For Cloud SQL Services there are different location types, never global: A regional location is a specific geographic place, such as London. A multi-regional location is a large geographic area, such as the United States, that contains at least two geographic places. Multi- regional locations are only used for backups. For any further detail: https://cloud.google.com/spanner/https://cloud.google.com/sql/docs/postgres/high-availabilityhttps://cloud.google.com/bigtable/docs/overviewhttps://cloud.google.com/sql/docs/sqlserver/
Unattempted
Correct Answer C Cloud Spanner is the first scalable, enterprise-grade, globally-distributed, and strongly consistent database service built for the cloud specifically to combine the benefits of relational database structure with non-relational horizontal scale. This combination delivers high-performance transactions and strong consistency across rows, regions, and continents with an industry-leading 99.999% availability SLA, no planned downtime, and enterprise-grade security. Cloud Spanner revolutionizes database administration and management and makes application development more efficient. Cloud SQL for PostgreSQL scale up to 64 processor cores and more than 400 GB of RAMand will automatically scale storage. It is powerful, but regional. The HA configuration, sometimes called a cluster, provides data redundancy. A Cloud SQL instance configured for HA is also called a regional instance and is located in a primary and secondary zone within the configured region. Within a regional instance, the configuration is made up of a primary instance (master) and a standby instance. Through synchronous replication to each zone’s persistent disk, all writes made to the primary instance are also made to the standby instance. In the event of an instance or zone failure, this configuration reduces downtime, and your data continues to be available to client applications. Cloud Bigtable is a noSQL DB: a sparsely populated table that can scale to billions of rows and thousands of columns, enabling you to store terabytes or even petabytes of data. A single value in each row is indexed; this value is known as the row key. Cloud Bigtable is ideal for storing very large amounts of single-keyed data with very low latency. It supports high read and write throughput at low latency, and it is an ideal data source for MapReduce operations. Very powerful, but non suitable for financial transactional applications. Also SQL Server is regional and not global. For Cloud SQL Services there are different location types, never global: A regional location is a specific geographic place, such as London. A multi-regional location is a large geographic area, such as the United States, that contains at least two geographic places. Multi- regional locations are only used for backups. For any further detail: https://cloud.google.com/spanner/https://cloud.google.com/sql/docs/postgres/high-availabilityhttps://cloud.google.com/bigtable/docs/overviewhttps://cloud.google.com/sql/docs/sqlserver/
Question 6 of 55
A team of mobile developers is developing a new application. It will require synchronizing data between mobile devices and a backend database. Which database service would you recommend?
Correct answer: C
Firestore, part of GCP and of Firebase, is the only database designed for web and mobile applications that provides live synchronization and offline support. Cloud Firestore is a fast, fully managed, serverless, cloud-native NoSQL document database that simplifies storing, syncing, and querying data for mobile, web, and IoT apps at global scale. Cloud Firestore is the next generation of Cloud Datastore, so for this purpose Datastore is effectively the same product as Firestore.
For any further detail: https://cloud.google.com/firestore/
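A minimal sketch of both features with the v8-style Firebase web SDK, assuming the app has already been initialized and using a hypothetical orders collection:

    import firebase from 'firebase/app';
    import 'firebase/firestore';

    const db = firebase.firestore();

    // Offline support: queued writes and cached reads while disconnected.
    db.enablePersistence().catch((err) => console.warn('persistence unavailable', err));

    // Live synchronization: the callback fires on every local or remote change.
    db.collection('orders').onSnapshot((snap) => {
      snap.docChanges().forEach((change) => {
        console.log(change.type, change.doc.id, change.doc.data());
      });
    });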
Question 7 of 55
An app for a finance company needs access to a database and a Cloud Storage bucket. There is no predefined role that grants all the needed permissions without granting some permissions that are not needed. You decide to create a custom role. When defining custom roles, you should follow which of the following principles?
Correct answer: D
The principle of least privilege states that users should have only the privileges that are needed to carry out their duties. Rotation of duties means that different people should perform a task at different times. Defense in depth is the practice of using multiple security controls to protect the same asset. The remaining option is not a real security principle. Hierarchical inheritance means that policies at the organization level, the folder level, the project level, or the resource level are inherited by all child resources.
For any further detail: https://cloud.google.com/iam/docs/using-iam-securely
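For illustration, a custom role scoped to just the needed permissions could be created like this; the project, role ID, title, and exact permission list are hypothetical and depend on what the app actually does:

    gcloud iam roles create financeAppRole \
      --project=my-project \
      --title="Finance App Access" \
      --permissions=cloudsql.instances.connect,storage.objects.get,storage.objects.create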
Question 8 of 55
As a developer, you have tested the features of interest to you in GCP, Firebase, and G Suite. You are now required to organize a development project and design the rules for security. Which of these directives is the most correct for managing team members in a coordinated and secure manner?
Correct Answer: B
Google Cloud uses Google accounts for authentication and access management. Your developers and other technical staff must have Google accounts to access Google Cloud. Google recommends using fully managed Google accounts tied to your corporate domain name through Cloud Identity. This way, the developers can access Google Cloud using their corporate email IDs, and your admins can see and control the accounts through the Admin Console. Cloud Identity is a unified identity, access, app, and endpoint management (IAM/EMM) platform that helps IT and security teams maximize end-user efficiency, protect company data, and transition to a digital workspace.
Question 9 of 55
Blue/green deployments, traffic-splitting deployments, rolling deployments, and canary deployments: which of them are supported by App Engine and which by GKE (pick 2)?
Question 10 of 55
HipLocal is planning an innovative system that analyzes the images of events with the GCP Vision API to determine if there are important events (identification of members, games, activities of interest, etc.).
The system must be efficient and safe, but economical.
For this reason it was decided not to use video but only the images uploaded by the users.
The system must recognize and classify the most interesting activities in real time, but also record usage and performance statistics.
The GCP products chosen for the purpose are: Vision API, AutoML Vision, Cloud Storage, Cloud Datastore, and Cloud Pub/Sub.
The first issue to solve is the processing after a picture is loaded into Cloud Storage.
After creating the STORETOPIC topic in Pub/Sub, which of the following procedures is the correct one?
Correct Answer: A
You have to use notifications for Cloud Storage.
Cloud Pub/Sub notifications send information about changes to objects in your buckets to Cloud Pub/Sub, where the information is added to a Cloud Pub/Sub topic of your choice in the form of messages. For example, you can track objects that are created and deleted in your bucket. Each notification contains information describing both the event that triggered it and the object that changed.
The correct parameter is OBJECT_FINALIZE, which is sent when a new object is successfully created in the bucket. This includes copying or rewriting an existing object. A failed upload does not trigger this event.
For any further detail: https://cloud.google.com/storage/docs/pubsub-notifications
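A single gsutil command wires the bucket to the topic; the bucket name below is hypothetical:

    gsutil notification create -t STORETOPIC -f json -e OBJECT_FINALIZE gs://my-uploads-bucket

The -e OBJECT_FINALIZE flag restricts notifications to successful object creation, and -f json sets the message payload format.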
Question 11 of 55
Examine the following statement:

    bq query \
      --use_legacy_sql=false \
      --external_table_definition=follows::/tmp/follows_def \
      'SELECT COUNT(rowkey) FROM Clients'

where external_table_definition is a JSON file with sourceUris and sourceFormat=BIGTABLE parameters. Which kind of data are we querying, and which GCP product are we using?
Correct
Correct Answer D With Big Query you can run sql queries with external data from: Cloud SQL, Cloud Storage, Google Drive. An external data source (also known as a federated data source) is a data source that you can query directly even though the data is not stored in BigQuery. Instead of loading or streaming the data, you create a table that references the external data source. To query an external data source without creating a permanent table, you run a command to combine: A table definition file with a query An inline schema definition with a query A JSON schema definition file with a query The table definition file or supplied schema is used to create the temporary external table, and the query runs against the temporary external table. Querying an external data source using a temporary table is supported by the BigQuery CLI and API. For any further detail: https://cloud.google.com/bigquery/external-data-sourceshttps://cloud.google.com/bigquery/external-data-bigtablehttps://cloud.google.com/bigquery/external-table-definition
Incorrect
Correct Answer D With Big Query you can run sql queries with external data from: Cloud SQL, Cloud Storage, Google Drive. An external data source (also known as a federated data source) is a data source that you can query directly even though the data is not stored in BigQuery. Instead of loading or streaming the data, you create a table that references the external data source. To query an external data source without creating a permanent table, you run a command to combine: A table definition file with a query An inline schema definition with a query A JSON schema definition file with a query The table definition file or supplied schema is used to create the temporary external table, and the query runs against the temporary external table. Querying an external data source using a temporary table is supported by the BigQuery CLI and API. For any further detail: https://cloud.google.com/bigquery/external-data-sourceshttps://cloud.google.com/bigquery/external-data-bigtablehttps://cloud.google.com/bigquery/external-table-definition
Unattempted
Correct Answer D With Big Query you can run sql queries with external data from: Cloud SQL, Cloud Storage, Google Drive. An external data source (also known as a federated data source) is a data source that you can query directly even though the data is not stored in BigQuery. Instead of loading or streaming the data, you create a table that references the external data source. To query an external data source without creating a permanent table, you run a command to combine: A table definition file with a query An inline schema definition with a query A JSON schema definition file with a query The table definition file or supplied schema is used to create the temporary external table, and the query runs against the temporary external table. Querying an external data source using a temporary table is supported by the BigQuery CLI and API. For any further detail: https://cloud.google.com/bigquery/external-data-sourceshttps://cloud.google.com/bigquery/external-data-bigtablehttps://cloud.google.com/bigquery/external-table-definition
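For reference, a Bigtable table definition file such as /tmp/follows_def has roughly this shape; the project, instance, table, and column-family names here are hypothetical:

    {
      "sourceFormat": "BIGTABLE",
      "sourceUris": [
        "https://googleapis.com/bigtable/projects/my-project/instances/my-instance/tables/my-table"
      ],
      "bigtableOptions": {
        "readRowkeyAsString": true,
        "columnFamilies": [
          { "familyId": "stats", "onlyReadLatest": true }
        ]
      }
    }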
Question 12 of 55
For a project of yours, a database is required with a non-rigid, high-performance schema that can easily manage Customers, Orders, and Invoices relationships; in other words, you need to deal with hierarchically structured objects and you are looking for an economically convenient solution. In addition, transactions with serializable isolation enforcement are required. Which of the following products do you choose?
Correct answer: A
Datastore manages relationships between entities (records) in a hierarchically structured space similar to the directory structure of a file system. When you create an entity, you can optionally designate another entity as its parent; the new entity is a child of the parent entity. An entity without a parent is a root entity. A transaction is a set of Datastore operations on one or more entities in up to 25 entity groups. Each transaction is guaranteed to be atomic, which means that transactions are never partially applied: either all of the operations in the transaction are applied, or none of them are applied. Regarding the other answers: for a flexible schema you need a NoSQL database, so Datastore or Bigtable; all the others are SQL DBs. Datastore is the one that can manage transactions, even with serializable isolation enforcement; Bigtable does not manage transactions.
For any further detail:
https://cloud.google.com/datastore/docs/concepts/entities#ancestor_paths
https://cloud.google.com/datastore/docs/concepts/cloud-datastore-transactions
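A minimal Node.js sketch of an ancestor path plus a transaction; the kind and entity names are hypothetical:

    const {Datastore} = require('@google-cloud/datastore');
    const datastore = new Datastore();

    // An 'Order' entity created as a child of a 'Client' root entity.
    const clientKey = datastore.key(['Client', 'acme-corp']);
    const orderKey = datastore.key(['Client', 'acme-corp', 'Order', 'order-1001']);

    async function addOrder() {
      const tx = datastore.transaction();
      await tx.run();
      const [client] = await tx.get(clientKey);        // read inside the transaction
      if (!client) throw new Error('unknown client');
      tx.save({key: orderKey, data: {total: 99.5}});   // atomic with the read above
      await tx.commit();                               // all or nothing
    }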
Question 13 of 55
How can you find out the origin of the problem and tune a slow SQL statement in Cloud SQL or Cloud Spanner?
Any SQL database uses declarative statements that specify what data you want to retrieve. If you want to understand how it obtains the results, you should look at execution plans. A query execution plan displays the cost associated with each step of the query. Using those costs, you can debug query performance issues and optimize your query.
A and B are for procedural languages. C is wrong because it doesn't address the problem; if an index is needed, you can find out which one to add by looking at the execution plan.
For any further detail: https://cloud.google.com/spanner/docs/sql-best-practices
Question 14 of 55
How do you set up an application in GKE organized with multiple containers that need to talk to each other and whose functions must be called by an external API?
Correct Answer: B
The only correct way is to use a Pod with multiple containers and create a Service, which is the equivalent of a load balancer with a static IP address for Compute Engine. Pods are the smallest objects in Kubernetes, and they act as self-contained, isolated "logical hosts" that contain one or more containers (they communicate via localhost) and all the systemic needs of the application they serve:
Network: Pods are automatically assigned unique IP addresses.
Storage: Pods can specify a set of shared storage volumes that can be shared among the containers.
You may configure one (default) or more node pools. When you deploy Pods you can choose how to scale: a Deployment runs multiple replicas of your application and automatically replaces any instances that fail or become unresponsive. Services group a set of Pod endpoints into a single resource. You get a stable cluster IP address that clients inside the cluster can use to contact Pods in the Service. A client sends a request to the stable IP address, and the request is routed to one of the Pods in the Service.
For any further detail: https://cloud.google.com/kubernetes-engine/docs/concepts/kubernetes-engine-overview
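A minimal sketch of this setup, with hypothetical names and images: a single Pod template holding both containers, plus a Service that exposes them externally:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:                      # both containers share the Pod's network
          - name: api
            image: gcr.io/my-project/api:1.0
            ports:
            - containerPort: 8080
          - name: helper
            image: gcr.io/my-project/helper:1.0
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: my-app-svc
    spec:
      type: LoadBalancer                   # stable external IP for API clients
      selector:
        app: my-app
      ports:
      - port: 80
        targetPort: 8080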
Question 15 of 55
In your Company you used to secure VMs in the standard GCP way, with the Compute Engine configuration of persistent SSH key metadata. But a new compliance rule now states that all the cryptographic keys for Linux instances must be produced locally. Is it possible to do that in GCP? How?
Correct answers: A, D, E
In order to produce cryptographic keys locally, you have to use ssh-keygen and then correctly set up the private and public keys as pointed out in the documentation. GCP stores the public key in metadata so that you can use IAM to manage user-controlled access. B is wrong because the cryptographic keys have to be produced locally. C is wrong because it is not secure to store them in Cloud Storage, and it is hard to manage. F is wrong because it drives in the opposite direction from security.
For any further detail:
https://cloud.google.com/compute/docs/instances/connecting-to-instance
https://cloud.google.com/compute/docs/instances/adding-removing-ssh-keys
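The local-generation flow looks roughly like this; the key file, user, VM name, and metadata file are hypothetical:

    # 1. Generate the key pair locally; the private key never leaves the workstation.
    ssh-keygen -t rsa -f ~/.ssh/gcp-vm-key -C jane@example.com

    # 2. Add only the PUBLIC key to instance (or project) metadata, in the
    #    USERNAME:KEY format expected by Compute Engine, e.g.
    #    jane:ssh-rsa AAAA... jane
    gcloud compute instances add-metadata my-vm \
      --metadata-from-file ssh-keys=formatted_keys.txt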
Question 16 of 55
In your Company you used to secure VMs in the standard GCP way, with the Compute Engine configuration of persistent SSH key metadata. But a new compliance rule now states that all users must undergo stricter security with two-factor authentication. Is it possible to do that in GCP? How?
Correct answer: A
OS Login is the standard GCP feature that allows you to use Compute Engine IAM roles to manage SSH access to Linux instances. It is possible and easy to add an extra layer of security by setting up OS Login with two-factor authentication, and to manage access at the organization level by setting up organization policies. What you have to do is: enable 2FA for your Google account or domain; enable 2FA on your project or instance; grant the necessary IAM roles to the correct users.
For any further detail: https://cloud.google.com/compute/docs/oslogin/setup-two-factor-authentication
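Once 2FA is enabled on the Google account or domain, the project-level switch is a metadata change; for instance:

    gcloud compute project-info add-metadata \
      --metadata enable-oslogin=TRUE,enable-oslogin-2fa=TRUE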
Question 17 of 55
It is necessary to migrate to GCP a REST API developed with Java 7 and MySQL, which currently works on-premises. The system may be subject to strong demand peaks and must always be available and have good performance. Furthermore, it is necessary to minimize costs. The system must be organized to be scalable and avoid any SPF (Single Point of Failure). Which of these solutions can be adopted for the database server?
In order to avoid any SPF (Single Point of Failure), you have to use a managed database service or manage a replica.
Cloud SQL is a managed MySQL service that handles high availability and failover out of the box. The alternative solution is to create transactional or merge DB replicas. A transactional replica keeps databases in sync at the transaction level. A merge replica keeps databases in sync at checkpoint times.
For any further detail:
https://cloud.google.com/sql/docs/mysql/
https://en.wikipedia.org/wiki/Distributed_database
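A high-availability (regional) Cloud SQL instance can be created in one step; the instance name, region, and tier below are hypothetical:

    gcloud sql instances create orders-db \
      --database-version=MYSQL_5_7 \
      --availability-type=REGIONAL \
      --region=europe-west1 \
      --tier=db-n1-standard-2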
Question 18 of 55
It is necessary to migrate to GCP a REST API developed with Java 7, which currently works on-premises. The system may be subject to strong demand peaks and must always be available and have good performance. Furthermore, it is necessary to minimize costs. The system must be organized to be scalable and avoid any SPF (Single Point of Failure). Which of these solutions is the best for the application server? How do you integrate the calculation procedure?
Correct
Google Kubernetes Engine (GKE) is a managed, production-ready environment for deploying containerized applications.
Kubernetes provides: automatic management, monitoring and liveness probes for application containers, automatic scaling, rolling updates. Managed instance groups (MIGs) let you operate apps on multiple identical VMs. You can make your workloads scalable and highly available by taking advantage of automated MIG services, including: autoscaling, autohealing, regional (multiple zone) deployment, and automatic updating. Managed instance groups need load balancing services to distribute traffic across all of the instances in the group. A and B are not correct because are both needed for a scalable and ha (high availability) solution and it is more resource intensive than a managed Kubernetes solution with GKE. D is wrong because App Engine Standard Environment is a PaaS that hosts and scales application in a secure, sandboxed environment with standard technologies and versions and Java 7 is not supported., allowing the App Engine standard environment to distribute requests across multiple servers, and scaling servers to meet traffic demands. Your application runs within its own secure, reliable environment that is independent of the hardware, operating system, or physical location of the server. For any further detail: https://cloud.google.com/kubernetes-engine/https://cloud.google.com/compute/docs/instance-groups/https://cloud.google.com/appengine/docs/standard/
Incorrect
Google Kubernetes Engine (GKE) is a managed, production-ready environment for deploying containerized applications.
Kubernetes provides: automatic management, monitoring and liveness probes for application containers, automatic scaling, rolling updates. Managed instance groups (MIGs) let you operate apps on multiple identical VMs. You can make your workloads scalable and highly available by taking advantage of automated MIG services, including: autoscaling, autohealing, regional (multiple zone) deployment, and automatic updating. Managed instance groups need load balancing services to distribute traffic across all of the instances in the group. A and B are not correct because are both needed for a scalable and ha (high availability) solution and it is more resource intensive than a managed Kubernetes solution with GKE. D is wrong because App Engine Standard Environment is a PaaS that hosts and scales application in a secure, sandboxed environment with standard technologies and versions and Java 7 is not supported., allowing the App Engine standard environment to distribute requests across multiple servers, and scaling servers to meet traffic demands. Your application runs within its own secure, reliable environment that is independent of the hardware, operating system, or physical location of the server. For any further detail: https://cloud.google.com/kubernetes-engine/https://cloud.google.com/compute/docs/instance-groups/https://cloud.google.com/appengine/docs/standard/
Unattempted
Google Kubernetes Engine (GKE) is a managed, production-ready environment for deploying containerized applications.
Kubernetes provides: automatic management, monitoring and liveness probes for application containers, automatic scaling, rolling updates. Managed instance groups (MIGs) let you operate apps on multiple identical VMs. You can make your workloads scalable and highly available by taking advantage of automated MIG services, including: autoscaling, autohealing, regional (multiple zone) deployment, and automatic updating. Managed instance groups need load balancing services to distribute traffic across all of the instances in the group. A and B are not correct because are both needed for a scalable and ha (high availability) solution and it is more resource intensive than a managed Kubernetes solution with GKE. D is wrong because App Engine Standard Environment is a PaaS that hosts and scales application in a secure, sandboxed environment with standard technologies and versions and Java 7 is not supported., allowing the App Engine standard environment to distribute requests across multiple servers, and scaling servers to meet traffic demands. Your application runs within its own secure, reliable environment that is independent of the hardware, operating system, or physical location of the server. For any further detail: https://cloud.google.com/kubernetes-engine/https://cloud.google.com/compute/docs/instance-groups/https://cloud.google.com/appengine/docs/standard/
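A regional GKE cluster with autoscaling, which removes both the zonal SPF and idle capacity during quiet periods, might be created like this (cluster name, region, and node limits are hypothetical):

    gcloud container clusters create api-cluster \
      --region=europe-west1 \
      --num-nodes=1 \
      --enable-autoscaling --min-nodes=1 --max-nodes=5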
Question 19 of 55
The field agents of your company have to send, via mobile devices, CSV files containing orders and data concerning their activity. The various files have as destination a set of Cloud Storage buckets. Your management asked you to create a system that loads these data into BigQuery in real time, in a quick and affordable way. The target tables are already structured for the purpose. Which of the following is the best method?
Correct
Correct Answer B This pipeline is a ready and optimized procedure that reliably can manage and execute all the process. So there is nothing to code, test and debug apart a very little and standard User Defined Function in order to comply with the structure of your tables.
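As a rough sketch of how such a pipeline can be launched programmatically, the snippet below starts the Google-provided "GCS Text to BigQuery" Dataflow template through the Dataflow REST API. All names (project, buckets, table, UDF file) are hypothetical, and the template parameters shown follow the documented batch version of the template; verify them against the current template reference before relying on them.
```python
from googleapiclient.discovery import build

# Launch the Google-provided GCS_Text_to_BigQuery Dataflow template.
# Project, bucket, and table names below are hypothetical.
dataflow = build("dataflow", "v1b3")
request = dataflow.projects().locations().templates().launch(
    projectId="my-project",
    location="us-central1",
    gcsPath="gs://dataflow-templates/latest/GCS_Text_to_BigQuery",
    body={
        "jobName": "orders-csv-to-bq",
        "parameters": {
            "inputFilePattern": "gs://orders-input/*.csv",
            "JSONPath": "gs://orders-config/schema.json",
            "outputTable": "my-project:sales.orders",
            "javascriptTextTransformGcsPath": "gs://orders-config/transform.js",
            "javascriptTextTransformFunctionName": "transform",
            "bigQueryLoadingTemporaryDirectory": "gs://orders-temp/bq-tmp",
        },
    },
)
print(request.execute())
```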
Question 20 of 55
20. Question
The following statement: SELECT * FROM Clients WHERE __key__ HAS ANCESTOR KEY(ClientId, 'default') — in which language is it written, and for which DB?
Correct
GQL is clearly similar to SQL, but the clause HAS ANCESTOR KEY is not part of standard SQL. GQL is the query language of Cloud Datastore. A is wrong because SQL does not have the HAS ANCESTOR clause. B is wrong because JSON is always enclosed in brackets. D is wrong because HBase is an API, not a declarative language. An example of HBase is: table.get(new Get(Bytes.toBytes(ClientId)));
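For comparison, the same ancestor query can be issued through the Cloud Datastore Python client library; the kind and key names below simply mirror the ones in the question and are illustrative:
```python
from google.cloud import datastore

client = datastore.Client()

# Build the ancestor key from the question: KEY(ClientId, 'default').
ancestor = client.key("ClientId", "default")

# Equivalent of: SELECT * FROM Clients WHERE __key__ HAS ANCESTOR KEY(...)
query = client.query(kind="Clients", ancestor=ancestor)
for entity in query.fetch():
    print(entity.key, dict(entity))
```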
Question 21 of 55
21. Question
The management asked you, as project leader of the development, to prepare a plan with the organizational proposals for the migration of corporate apps to the cloud, both as development projects and as an operational strategy. What do you propose for the new organization of development?
Correct
Correct answer: A In order to improve and modernize software development, it is advisable to adopt Agile. Agile software development organizes requirements and solutions through the collaborative effort of self-organizing, cross-functional teams and their customer/end user. It aims at adaptive planning, evolutionary development, early delivery, and continual improvement. Scrum is one of the best-known agile process frameworks for managing complex knowledge work. For any further detail: https://www.scrumguides.org/scrum-guide.html
Question 22 of 55
22. Question
The management of your company asked you to design a static website aimed at hosting your company's product sheets. The website must be cheap and simple to set up and manage. The site must have the corporate domain, but it does not need to be served through HTTPS. What is the optimal and most economical solution among the following?
Correct
Correct answer: A A Cloud Storage bucket can host a static website for a domain you own. Static web pages can contain client-side technologies such as HTML, CSS, and JavaScript. They cannot contain dynamic content such as server-side scripts like PHP. There is no additional cost beyond storage. HTTPS is possible only when serving through direct URIs such as https://storage.googleapis.com/my-bucket/my-object, because when hosting a static website using a CNAME redirect, Cloud Storage only supports HTTP. In case HTTPS serving is required, you can: set up a load balancer; use a third-party Content Delivery Network with Cloud Storage; or serve your static website content from Firebase Hosting instead of Cloud Storage. For any further detail: https://cloud.google.com/storage/docs/hosting-static-website
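A minimal sketch of the setup with the Cloud Storage Python client, assuming a bucket whose name matches a hypothetical domain www.example.com and that domain ownership has already been verified:
```python
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("www.example.com")  # hypothetical, must match your domain

# Serve index.html as the main page and 404.html for missing objects.
bucket.configure_website(main_page_suffix="index.html", not_found_page="404.html")
bucket.patch()

# Make the site publicly readable.
policy = bucket.get_iam_policy(requested_policy_version=3)
policy.bindings.append(
    {"role": "roles/storage.objectViewer", "members": {"allUsers"}}
)
bucket.set_iam_policy(policy)
```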
Question 23 of 55
23. Question
What is a columnar Database and which is the GCP Solution?
Correct
Bigtable is a NoSQL wide-column database. Wide-column, petabyte-scale databases store tables that can have a large and variable number of columns, which may be grouped into families.
Cloud Bigtable is a sparsely populated table with 3 dimensions (row, column, time) that can scale to billions of rows and thousands of columns, enabling you to store terabytes or even petabytes of data and to access data at sub-millisecond latencies. A single value in each row is indexed; this value is known as the row key. Cloud Bigtable is ideal for storing very large amounts of single-keyed data with very low latency. It supports high read and write throughput at low latency, and it is an ideal data source for MapReduce operations. Each row is indexed by a single row key, and columns that are related to one another are typically grouped together into a column family. Each column is identified by a combination of the column family and a column qualifier, which is a unique name within the column family. Each row/column intersection can contain multiple cells, or versions, at different timestamps, providing a record of how the stored data has been altered over time. Cloud Bigtable tables are sparse; if a cell does not contain any data, it does not take up any space. Cloud Bigtable scales in direct proportion to the number of machines in your cluster without any bottleneck. A is wrong because a SQL database does not act as a columnar database, which has no joins, no secondary indexes, and no multi-table queries. B is wrong because Cloud Datastore is a document database, not a columnar database. C is wrong because Cloud Dataprep is a completely different product: a data service for visually exploring, cleaning, and preparing structured and unstructured data for analysis, reporting, and machine learning. For any further detail: https://cloud.google.com/bigtable/docs/overview https://cloud.google.com/dataprep/
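To make the row-key / column-family model concrete, here is a small sketch with the Cloud Bigtable Python client; the instance, table, family, and row-key names are hypothetical:
```python
from google.cloud import bigtable

client = bigtable.Client(project="my-project", admin=True)
table = client.instance("my-instance").table("orders")

# The row key is the only indexed value; design it around your access pattern.
row = table.direct_row(b"client42#2020-06-01")
# Cells live at the intersection of a column family and a column qualifier.
row.set_cell("order_data", "total", b"199.90")
row.set_cell("order_data", "status", b"shipped")
row.commit()

# Point read by row key.
read = table.read_row(b"client42#2020-06-01")
print(read.cells["order_data"][b"total"][0].value)
```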
Question 24 of 55
24. Question
What is the difference between Blue/green deployments, Traffic-splitting deployments, Rolling deployments and Canary deployments (pick 2)?
Correct
Correct answers: A, C Rolling deployment is a general technique that incrementally replaces old software with the new one. It is designed to update your workloads without downtime. Blue-green deployment lets you have two production environments, as identical as possible, one old and one updated. The blue one is live and you perform your final stage of testing in the green environment. Once the software is working in the green environment, you switch: the blue one is now idle and the green totally active. Canary is used to deploy new software versions in production by gradually rolling out the change to a small subgroup of users, before rolling it out to the entire platform/infrastructure and making it available to everybody. Canary deployment is like blue-green, but instead of switching from blue to green in one step, you use a phased approach. Traffic splitting means that you have different environments and you divide the traffic among them. Any of these deployments lets you roll back to the previous stage. So, canary is a rolling and traffic-splitting deployment because both versions are active at the same time. Blue-green is not a traffic-splitting deployment because only one version is active. A managed deployment always lets you roll back. For any further detail: https://cloud.google.com/solutions/continuous-delivery/ https://martinfowler.com/bliki/BlueGreenDeployment.html https://cloud.google.com/kubernetes-engine/docs/how-to/updating-apps https://cloud.google.com/appengine/docs/admin-api/migrating-splitting-traffic
Question 25 of 55
25. Question
What is the difference between least privilege and separation of duties, and what is their meaning?
Correct
The principle of least privilege means users will be given access only to the resources that are strictly necessary for a legitimate purpose. With primitive roles it is not possible to comply with the principle of least privilege; it is necessary to use predefined roles.
Separation of duties requires that any critical task should need more than one person to complete it. Here too, you have to use predefined or custom roles. For any further detail: https://cloud.google.com/iam/docs/using-iam-securely
Question 26 of 55
26. Question
What is the difference between Primitive and Predefined roles?
Correct
Predefined roles, which provide granular access for a specific service and are managed by Google Cloud, prevent unwanted access to other resources. For example, roles/appengine.appAdmin has Read/Write/Modify access to all application configuration and settings and includes a set of specific permissions, like: appengine.instances.*, appengine.operations.*, appengine.runtimes.*, appengine.services.* and so on. Primitive roles were used prior to IAM. There are three primitive roles: owner, editor, and viewer. Viewers have permission to perform read-only operations. Editors have viewer permissions plus permission to modify an entity. Owners have editor permissions and can manage roles and permissions on an entity. Owners can also set up billing for a project. IAM roles are collections of permissions. They are tailored to provide identities with just the permissions they need to perform a task and no more. To see a list of users assigned a role, click the Roles tab in the Cloud Console IAM page. There are also custom roles that provide granular access according to a user-specified list of permissions, so answer C is wrong.
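As an illustration of granting a predefined role, the sketch below adds an IAM policy binding on a project through the Cloud Resource Manager API; the project ID, user, and role are hypothetical:
```python
from googleapiclient.discovery import build

crm = build("cloudresourcemanager", "v1")

# Read-modify-write of the project's IAM policy (hypothetical project/member).
policy = crm.projects().getIamPolicy(resource="my-project", body={}).execute()
policy["bindings"].append(
    {"role": "roles/appengine.appAdmin", "members": ["user:dev@example.com"]}
)
crm.projects().setIamPolicy(
    resource="my-project", body={"policy": policy}
).execute()
```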
Question 27 of 55
27. Question
Which are the GCP tools for SRE (Site Reliability Engineering)?
Correct
A Site Reliability Engineering team is responsible for availability, latency, performance, efficiency, change management, monitoring, emergency response, and capacity planning.
Stackdriver Trace is a performance analyzer that collects latency data from your applications and displays it in the Google Cloud Platform Console. You can track how requests propagate through your application and receive detailed near real-time performance insights. Stackdriver Profiler is a statistical, low-overhead profiler that continuously gathers CPU usage and memory-allocation information from your production applications. Stackdriver Debugger is a feature of Google Cloud Platform that lets you inspect the state of a running application in real time, without stopping or slowing it down. Your users are not impacted while you capture the call stack and variables at any location in your source code. For any further detail: https://cloud.google.com/apm/ (Stackdriver Trace, Stackdriver Debugger, Stackdriver Profiler); https://cloud.google.com/files/GCPDDoSprotection-04122016.pdf
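For instance, enabling Stackdriver Profiler in a Python service is a one-time agent start at application boot; a minimal sketch, where the service name and version are illustrative:
```python
import googlecloudprofiler


def start_profiler():
    # Starts the Profiler agent; it samples CPU and memory in the background.
    try:
        googlecloudprofiler.start(
            service="orders-backend",   # illustrative service name
            service_version="1.0.0",
            verbose=1,
        )
    except (ValueError, NotImplementedError) as exc:
        # Profiler unavailable, e.g. unsupported environment.
        print(exc)


start_profiler()
```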
Question 28 of 55
28. Question
What are the purposes of, and the differences between, the following security modules: HSM, KMS, and Secret Manager?
Correct
Correct answer: B Google’s infrastructure provides a variety of storage services, such as Bigtable and Spanner, and a central key management service: Cloud KMS. Most applications at Google access physical storage indirectly via these storage services. The storage services can be configured to use keys from the central key management service to encrypt data before it is written to physical storage. This key management service supports automatic key rotation, provides extensive audit logs, and integrates with the previously mentioned end-user permission tickets to link keys to particular end users. So, Google Cloud Key Management Service (KMS) is a cloud service for automatically managing all the services related to encryption keys for other Google Cloud services, which enterprises can use to implement cryptographic functions. An HSM is a physical computing device that stores and manages digital keys for strong authentication and provides crypto-processing. HSMs are usually plug-in cards or external devices attached directly to a computer or network server. Cloud HSM is a managed service for HSMs and is fully integrated with KMS for creating and using customer-managed encryption keys. It is necessary only in special cases where a hardware-enforced additional level of security is required. Secrets are database credentials, passwords, keys, any security token. Secret Manager lets you store, rotate, manage, and retrieve secrets and helps you protect the secrets needed to access your applications, services, and IT resources. It is needed when you store secrets but manage the security procedures yourself. For any further detail: https://cloud.google.com/hsm/ https://cloud.google.com/kms/ https://cloud.google.com/secret-manager/docs/
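A minimal sketch of retrieving a secret at runtime with the Secret Manager Python client, assuming a hypothetical project and secret name:
```python
from google.cloud import secretmanager

client = secretmanager.SecretManagerServiceClient()

# Hypothetical project and secret; "latest" points at the newest version.
name = "projects/my-project/secrets/db-password/versions/latest"
response = client.access_secret_version(request={"name": name})
db_password = response.payload.data.decode("UTF-8")
```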
Question 29 of 55
29. Question
Which are the similarities and differences between a Kubernetes Cluster and an Instance Group? Choose three correct statements among the following ones:
Correct
Both Kubernetes clusters and instance groups are sets of VMs that can be managed as a group. Instance groups, however, are much more restricted: all VMs in an instance group generally run the same image. That is not the case with Kubernetes. Also, instance groups have no mechanism to support the deployment of containers; containers can start and stop much faster (usually in seconds) and use fewer resources. Instance groups have some monitoring and restart instances that fail, but Kubernetes has much more flexibility with regard to maintaining a cluster of servers. You may notice that pods are similar to Compute Engine managed instance groups. A key difference is that pods are for executing applications in containers and may be placed on various nodes in the cluster, while managed instance groups all execute the same application code on each of the nodes. Also, you typically manage instance groups yourself by executing commands in the Cloud Console or through the command line, while pods are usually managed by a controller. Services: since pods are ephemeral and can be terminated by a controller, other services that depend on pods should not be tightly coupled to particular pods. For example, even though pods have unique IP addresses, applications should not depend on that IP address to reach an application. If the pod with that address is terminated and another is created, it may have another IP address. The IP address may be re-assigned to another pod running a different container. Kubernetes provides a level of indirection between applications running in pods and other applications that call them: it is called a service. A service, in Kubernetes terminology, is an object that provides API endpoints with a stable IP address that allow applications to discover pods running a particular application. Services update when changes are made to pods, so they maintain an up-to-date list of pods running an application. For any further detail: https://cloud.google.com/kubernetes-engine/docs/concepts/kubernetes-engine-overview https://cloud.google.com/kubernetes-engine/docs/concepts/node-pools https://cloud.google.com/compute/docs/instance-groups/adding-an-instance-group-to-a-load-balancer
Question 30 of 55
30. Question
Which kind of consistency is supported by Cloud Spanner?
Correct
Correct Answer: B Cloud Spanner provides a special kind of consistency, called external consistency. We are used to dealing with strong consistency, which makes it possible that, after an update, all queries will receive the same result. In other words, the state of the database is always consistent, no matter the distribution of the processing, partitions, and replicas. The problem with a global, horizontally scalable DB like Spanner is that transactions are executed on many distributed instances, and therefore it is really difficult to guarantee strong consistency. Spanner manages to achieve all that by means of TrueTime, a distributed clock across all GCP computing systems. With TrueTime, Spanner manages the serialization of transactions, achieving in this way external consistency, which is the strictest concurrency control for databases. A is wrong because eventual consistency is far weaker and is typically related to NoSQL instances. C is wrong because strong consistency for queries against an entity group is supported by Cloud Datastore. D is wrong because strong consistency within a partition doesn’t exist. For any further detail: https://cloud.google.com/spanner/docs/ https://cloud.google.com/spanner/docs/true-time-external-consistency
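To see the consistency options in practice, the sketch below issues a strong read (the default) and a bounded-stale read with the Cloud Spanner Python client; instance, database, and table names are hypothetical:
```python
import datetime

from google.cloud import spanner

client = spanner.Client()
database = client.instance("my-instance").database("orders-db")

# Strong read (default): observes every transaction committed before it starts.
with database.snapshot() as snapshot:
    for row in snapshot.execute_sql("SELECT OrderId, Status FROM Orders"):
        print(row)

# Stale read: trades freshness for latency, still consistent as of the
# chosen timestamp in the past.
with database.snapshot(
    exact_staleness=datetime.timedelta(seconds=15)
) as snapshot:
    rows = list(snapshot.execute_sql("SELECT COUNT(*) FROM Orders"))
```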
Question 31 of 55
31. Question
Which languages can you use with Cloud Spanner?
Correct
Cloud Spanner is a scalable, enterprise-grade, globally distributed, and strongly consistent relational database service built for the cloud that combines the benefits and consistency of traditional databases with non-relational horizontal scale. Cloud Spanner uses the industry-standard ANSI 2011 SQL for queries and has client libraries for many programming languages.
Question 32 of 55
32. Question
With Cloud Storage you may have different classes, and it is possible to pass from one class to another, but some transitions are not allowed. Which one of the following is not possible?
Correct
Correct answer: C When you create a bucket, you have to declare whether it will be regional or multi-regional; you cannot change this afterwards. All the other transitions are allowed. For any further detail: https://cloud.google.com/storage-transfer/docs/overview
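For the allowed transitions, an object's class can be changed by rewriting it; a minimal sketch with the Cloud Storage Python client (bucket and object names are hypothetical):
```python
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("my-archive-bucket")
blob = bucket.blob("reports/2020-01.csv")

# Rewrites the object in place under the new storage class.
blob.update_storage_class("NEARLINE")
```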
Question 33 of 55
33. Question
With your team, you are planning to migrate HTTPs apps to GKE clusters and worry about scalability and availability. You re-engineered the apps so that they are sessionless. You want the best performances and high availability. How do you manage the scalability of an app distributed in GKE easily and at the lowest cost?
Correct
Correct answer: D GKE’s cluster autoscaler automatically resizes the number of nodes in a given node pool, based on the demands of your workloads. You don’t need to manually add or remove nodes or over-provision your node pools. Instead, you specify a minimum and maximum size for the node pool, and the rest is automatic. If your node pool contains multiple managed instance groups with the same instance type, cluster autoscaler attempts to keep these managed instance group sizes balanced when scaling up. This can help prevent an uneven distribution of nodes among managed instance groups in multiple zones of a node pool. Cluster autoscaler considers the relative cost of the instance types in the various pools, and attempts to expand the least expensive possible node pool. The reduced cost of node pools containing preemptible VMs is taken into account. Vertical pod autoscaling (VPA) is a feature that can recommend values for CPU and memory requests and limits, or it can automatically update the values. With Vertical pod autoscaling: Cluster nodes are used efficiently, because Pods use exactly what they need. Pods are scheduled onto nodes that have the appropriate resources available. You don’t have to run time-consuming benchmarking tasks to determine the correct values for CPU and memory requests. Maintenance time is reduced, because the autoscaler can adjust CPU and memory requests over time without any action on your part. With GKE you don’t have to use the scalability features of Compute Engine. For any further detail: https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-autoscaler https://cloud.google.com/kubernetes-engine/docs/concepts/verticalpodautoscaler https://cloud.google.com/kubernetes-engine/docs/concepts/scalability
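A sketch of enabling the cluster autoscaler on an existing node pool with the GKE Python client; the project, location, cluster, pool names, and node-count bounds are all hypothetical:
```python
from google.cloud import container_v1

client = container_v1.ClusterManagerClient()

# Fully qualified node pool name (hypothetical identifiers).
name = (
    "projects/my-project/locations/us-central1"
    "/clusters/shop-cluster/nodePools/default-pool"
)
autoscaling = container_v1.NodePoolAutoscaling(
    enabled=True,
    min_node_count=1,
    max_node_count=5,
)
operation = client.set_node_pool_autoscaling(
    request={"name": name, "autoscaling": autoscaling}
)
print(operation.status)
```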
Question 34 of 55
34. Question
Question 34 of 55
34. Question
You are asked to manage the backend for a brand new e-commerce site that will also sell complex solutions, that is, kits of products that can be managed both as single items and as a composite unit across different warehouses.
The primary requirement is that costs are minimized while keeping the system highly scalable and secure.
Which of the following solutions do you consider for the acquisition and the confirmation of the orders?
Correct
Correct Answer: C
This is the only solution that scales to 0 and, at the same time, guarantees that the transaction will be completed in time.
Scaling to 0 means that, if the function is not executed, there are no running resources and no costs.
Cloud Functions may be configured to achieve the desired scaling behavior.
A is wrong because it is not advisable to create functions with complex and potentially long processing.
B and D are wrong because App Engine doesn’t scale to 0.
E is wrong because it is not a scalable solution.
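As a rough sketch of such a function, here is an HTTP-triggered Cloud Function for order confirmation written with the Python Functions Framework; the payload fields and validation are illustrative only:
```python
import functions_framework


@functions_framework.http
def confirm_order(request):
    """Confirms an incoming order; scales to zero when idle."""
    order = request.get_json(silent=True)
    if not order or "orderId" not in order:
        return ("Missing or invalid order payload", 400)

    # ... validate stock for single items and kits, persist the order,
    # and enqueue follow-up work here ...

    return ({"status": "confirmed", "orderId": order["orderId"]}, 200)
```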
Question 35 of 55
35. Question
You are asked to manage the backend for an e-commerce site that also sells complex solutions, that is, kits of products that can be managed both as single items and as a composite unit.
The primary requirement is that costs are minimized while keeping the system highly scalable and secure.
When a product is about to run out, it is necessary to inform the customers involved in any region and provide updates on the status of the remaining products.
How do you manage data storage and stocks quantity updates?
Correct
Correct Answer: D
This is the only solution that uses a feature called “realtime database” that allows you to listen to the results of a query and get real-time updates when the query results change.
All the other solutions require a complex and heavy approach for refreshing the data and managing stock updates.
Furthermore, Cloud Firestore is multi-regional and not expensive.
Cloud SQL and Cloud Datastore are single-region solutions and Cloud Spanner is expensive.
A sketch of such a listener, following the pattern shown in the official docs, appears below.
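A minimal sketch with the Cloud Firestore Python client, assuming a hypothetical products collection with a numeric stock field; the threshold and collection names are illustrative:
```python
from google.cloud import firestore

db = firestore.Client()


def on_snapshot(col_snapshot, changes, read_time):
    # Called every time the query results change.
    for change in changes:
        if change.type.name in ("ADDED", "MODIFIED"):
            doc = change.document
            print(f"Low stock: {doc.id} -> {doc.to_dict().get('stock')}")


# Listen for products about to run out (stock below a hypothetical threshold).
query = db.collection("products").where("stock", "<", 10)
watch = query.on_snapshot(on_snapshot)
```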
Question 36 of 55
36. Question
You are asked to manage the backend for an e-commerce site that also sells complex solutions, that is, kits of products that can be managed both as single items and as a composite unit.
The primary requirement is that costs are minimized while keeping the system highly scalable and secure.
How do you manage and correct errors that may happen during backend processing (choose 2)?
Correct
Correct Answers: B, D
The best way is to put all the data needed for error handling in a structured field (jsonPayload) and to query Stackdriver logs for the apps’ errors. In this way the errors can be passed to Pub/Sub and, consequently, to the function capable of managing, correcting, and informing the users about all the exceptions.
A is wrong because the errors could be caught in the code or could occur in some external service, so it is not always possible to organize everything inside the code.
C is wrong because textPayload is not structured.
E is wrong because with Cloud Storage you would need to scan and parse log texts.
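For example, writing a structured error entry (which lands in the jsonPayload field) with the Cloud Logging Python client; the log name and fields are illustrative:
```python
from google.cloud import logging as cloud_logging

client = cloud_logging.Client()
logger = client.logger("backend-errors")  # illustrative log name

# The dict becomes the entry's jsonPayload, queryable with a logs filter
# and routable to Pub/Sub through a sink.
logger.log_struct(
    {
        "message": "Order processing failed",
        "orderId": "A-1234",
        "errorType": "PAYMENT_DECLINED",
    },
    severity="ERROR",
)
```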
Question 37 of 55
37. Question
You are designing a logistics management system for your e-commerce site in GCP. The system is developed with Cloud Functions.
You can take advantage of prices that decrease with the number of deliveries to be made, and you must communicate shipments by 16:00 for next-day delivery.
So you want to process a batch of orders in order to optimize expenses.
How do you manage these operations in the best way (2 possible choices)?
Correct
Correct Answers: B, C
Cloud Scheduler is the simplest and cheapest way to schedule units of work to be executed at defined times or regular intervals.
On the other side, Cloud Tasks triggers actions based on how the individual task object is configured: you set `scheduleTime` and the action is triggered at that time.
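A sketch of deferring the batch submission with the Cloud Tasks Python client, scheduling an HTTP task for a fixed time; the project, queue, and target URL are hypothetical:
```python
import datetime

from google.cloud import tasks_v2
from google.protobuf import timestamp_pb2

client = tasks_v2.CloudTasksClient()
parent = client.queue_path("my-project", "us-central1", "shipments-queue")

# Run today at 15:45 UTC (illustrative; pick the cutoff that fits 16:00 local).
run_at = datetime.datetime.now(datetime.timezone.utc).replace(
    hour=15, minute=45, second=0, microsecond=0
)
schedule_time = timestamp_pb2.Timestamp()
schedule_time.FromDatetime(run_at)

task = {
    "http_request": {
        "http_method": tasks_v2.HttpMethod.POST,
        "url": "https://example.com/submit-shipments",  # hypothetical endpoint
    },
    "schedule_time": schedule_time,
}
response = client.create_task(request={"parent": parent, "task": task})
print(response.name)
```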
Question 38 of 55
38. Question
You are designing an app that allows users to upload images and videos to the Cloud from web and mobile interfaces. Users must not hold permanent permissions for uploading objects; the application must grant them the ability to carry out these tasks only when required. Which of the following is the best technique to use?
Correct answer: B
A signed URL is a simple, clean, and economical solution: it gives time-limited resource access to anyone in possession of the URL, regardless of whether they have a Google account. A signed URL provides limited permission and time to make a request. Signed URLs contain authentication information in their query string, allowing users without credentials to perform specific actions on a resource. When you generate a signed URL, you specify a user or service account which must have sufficient permission to make the request that the signed URL will make. After you generate a signed URL, anyone who possesses it can use it to perform the specified actions, such as reading an object, within a specified period of time.
For any further detail: https://cloud.google.com/storage/docs/access-control/signed-urls
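A minimal sketch, assuming a bucket named user-uploads: the server generates a V4 signed URL that lets its holder upload one object for fifteen minutes, with no Google credentials required on the client side.

```python
# Minimal sketch: time-limited upload permission via a V4 signed URL.
# Bucket and object names are illustrative assumptions.
import datetime

from google.cloud import storage

client = storage.Client()
blob = client.bucket("user-uploads").blob("videos/clip.mp4")

url = blob.generate_signed_url(
    version="v4",
    expiration=datetime.timedelta(minutes=15),
    method="PUT",                 # allow exactly one kind of action
    content_type="video/mp4",     # the client must send this header
)
print(url)  # hand this URL to the web or mobile client
```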
Question 39 of 55
39. Question
You are designing an application to be deployed in GKE that will manage multi-regional images. These files must remain available for a month and then be deleted. High performance must be maintained together with low cost and easy implementation. What kind of storage will you adopt?
The only solution for multi-regional object storage is Cloud Storage. To reach higher performance, the use of Cloud CDN is advisable.
Google Cloud CDN leverages Google's globally distributed edge points of presence to speed up delivery of content served from Compute Engine and Cloud Storage. Cloud Storage lifecycle management makes it easy to delete objects after a given period of time.
A is wrong because Local SSD is transient, local block storage.
B and C are wrong because Filestore and regional SSDs are not multi-regional.
D is correct.
E ("Cloud Storage and Cloud CDN") is wrong because Cloud CDN is not a kind of storage.
For any further detail: https://cloud.google.com/compute/docs/disks/ https://cloud.google.com/cdn/docs/overview https://cloud.google.com/storage/docs/
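The "available for a month, then deleted" requirement maps directly onto a lifecycle rule; a minimal sketch, assuming a bucket named image-assets:

```python
# Minimal sketch: delete objects 30 days after creation.
# The bucket name is an illustrative assumption.
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("image-assets")
bucket.add_lifecycle_delete_rule(age=30)  # age is in days
bucket.patch()                            # persist the new lifecycle rules
```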
Question 40 of 55
40. Question
You are designing an IoT system that collects and processes a large amount of data from different devices for a Smart City project (cameras, sensors, control devices, etc.). You are wondering which GCP product and which data schema to use, considering that: the information for each type of device may differ; you need immediate identification by area, time, and device type; you need access times with latencies in the millisecond range; the project has a limited geographical scope; and you need to be able to group and identify the data within each record in various ways. Which of the following products do you choose?
Correct answer: C
Bigtable is the perfect choice. Bigtable is suitable for ad tech, fintech, and IoT, and offers consistent sub-10ms latency. Replication provides higher availability, higher durability, and resilience in the face of zonal failures. Cloud Bigtable is designed with a storage engine for machine-learning applications and provides easy integration with open-source big data tools.
Regarding the other answers: a flexible schema requires a NoSQL database, so Datastore or Bigtable; all the others are SQL databases. Datastore cannot deliver millisecond latencies with petabytes of data; Bigtable definitely can.
For any further detail: https://cloud.google.com/bigtable/docs/schema-design https://cloud.google.com/bigtable/
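A minimal sketch of the schema idea, assuming an instance named iot-instance, a table named sensor-data, and a made-up key layout: a composite row key of area, device type, and timestamp keeps rows sorted for the required area/time/device lookups.

```python
# Minimal sketch: write one reading under a composite row key.
# Instance, table, key layout, and column family are assumptions.
from google.cloud import bigtable

client = bigtable.Client(project="smart-city-project")
table = client.instance("iot-instance").table("sensor-data")

row_key = b"zone42#camera#2024-01-15T15:30:00"  # area#deviceType#time
row = table.direct_row(row_key)
row.set_cell("readings", "temperature", b"21.5")
row.commit()
```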
Question 41 of 55
41. Question
You are going to deploy a set of applications in GKE. Which of the following choices do you need to make to be sure that the applications will have no single point of failure of any kind and will be scalable? Choose 2 of the following options.
Correct answers: A, D
In order to avoid single points of failure of any kind, you need worker nodes with auto-healing and auto-scaling (option D) and a backup master node, that is, a regional cluster (option A).
Nodes execute the workloads that run on the cluster. Nodes are VMs that run containers configured to run an application. Nodes are primarily controlled by the cluster master, but some commands can be run manually. The nodes run an agent called kubelet, which is the service that communicates with the cluster master.
A single-zone cluster has a single control plane (master) running in one zone. This control plane manages workloads on nodes running in the same zone.
A multi-zonal cluster has a single replica of the control plane running in a single zone and nodes running in multiple zones. During an upgrade of the cluster or an outage of the zone where the control plane runs, workloads still run; however, the cluster, its nodes, and its workloads cannot be configured until the control plane is available. Multi-zonal clusters balance availability and cost for consistent workloads. If you want to maintain availability while the number of your nodes and node pools changes frequently, consider using a regional cluster.
A regional cluster has multiple replicas of the control plane, running in multiple zones within a given region. Nodes also run in each zone where a replica of the control plane runs. Because a regional cluster replicates the control plane and nodes, it consumes more Compute Engine resources than a similar single-zone or multi-zonal cluster.
For any further detail: https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-autoscaler https://cloud.google.com/kubernetes-engine/docs/concepts/types-of-clusters
Question 42 of 55
42. Question
You are looking for a low-cost database service that supports strong consistency, atomic transactions, and serializable isolation. The data are partially structured. Which database and which configuration do you choose?
Cloud Datastore is a low-cost managed NoSQL database suited for partially structured data.
Datastore commits are either transactional, meaning they take place in the context of a transaction and the transaction's set of mutations is applied in full or not at all, or non-transactional, meaning the set of mutations may not apply on an all-or-nothing basis. Consistency ensures that a user reading data from the database will get the same data no matter which server in a cluster responds to the request. Datastore can be configured for strong consistency, but I/O operations will take longer than with a less strict consistency configuration. Datastore is a good option if your data is semi-structured.
For any further detail: https://cloud.google.com/datastore/docs/concepts/structuring_for_strong_consistency
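A minimal sketch of an atomic, all-or-nothing commit in Datastore; the entity kind and properties are illustrative assumptions:

```python
# Minimal sketch: a transactional read-modify-write in Datastore.
# Kind name and properties are illustrative assumptions.
from google.cloud import datastore

client = datastore.Client()

def transfer_credit(sender_id: int, receiver_id: int, amount: int) -> None:
    with client.transaction():  # either both puts apply, or neither does
        sender = client.get(client.key("Account", sender_id))
        receiver = client.get(client.key("Account", receiver_id))
        sender["balance"] -= amount
        receiver["balance"] += amount
        client.put_multi([sender, receiver])
```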
Question 43 of 55
43. Question
You are looking for a SQL system to integrate and query both historical and production data. The data must be organized in complex structures. In particular, it is necessary to store orders and invoices in a denormalized and complete manner, with the header and detail within the same structure. Which of the following products do you choose?
Correct answer: D
BigQuery is an OLAP engine, so even though it can manage normalized data and joins, it is far better to store denormalized information. In addition, BigQuery can manage nested and repeated columns and structures, as required. BigQuery is not a database but an enterprise-grade, serverless, highly scalable, and cost-effective cloud data warehouse that enables super-fast SQL queries using the processing power of Google's infrastructure. It can quickly analyze gigabytes to petabytes of data using ANSI SQL.
For any further detail: https://cloud.google.com/bigquery/what-is-bigquery https://cloud.google.com/bigquery/docs/nested-repeated
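A minimal sketch of the denormalized orders structure, assuming a project, dataset, and field names made up for illustration: the invoice detail lines live in a REPEATED RECORD column, so header and detail share one row.

```python
# Minimal sketch: an orders table with nested, repeated line items.
# Project, dataset, and field names are illustrative assumptions.
from google.cloud import bigquery

client = bigquery.Client()
schema = [
    bigquery.SchemaField("order_id", "STRING", mode="REQUIRED"),
    bigquery.SchemaField("order_date", "DATE"),
    bigquery.SchemaField(
        "items", "RECORD", mode="REPEATED",  # the detail lines
        fields=[
            bigquery.SchemaField("sku", "STRING"),
            bigquery.SchemaField("quantity", "INTEGER"),
            bigquery.SchemaField("unit_price", "NUMERIC"),
        ],
    ),
]
client.create_table(bigquery.Table("my-project.sales.orders", schema=schema))
```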
Question 44 of 55
44. Question
You are migrating a series of applications to Google Cloud Platform with a lift-and-shift methodology, using Compute Engine. The applications must be scalable, so a load balancer and instance groups are being configured. Some applications manage session data in memory. Which of the following configurations do you choose to allow the apps to work properly?
Correct answer: A
Session affinity provides a best-effort attempt to send requests from a particular client to the same backend for as long as the backend is healthy and has capacity, according to the configured balancing mode. It is the best way to ensure that session data kept in memory remains usable, and it is a feature of HTTP(S) Load Balancing.
Google Cloud SSL Proxy Load Balancing terminates user SSL (TLS) connections at the load balancing layer, then balances the connections across your instances using the SSL or TCP protocols. Cloud SSL Proxy is intended for non-HTTP(S) traffic; for HTTP(S) traffic, HTTP(S) Load Balancing is recommended instead. Network Load Balancing is a pass-through load balancer, so your backends receive the original client request.
For any further detail: https://cloud.google.com/load-balancing/docs/https/ https://cloud.google.com/load-balancing/docs/https/#session_affinity https://cloud.google.com/load-balancing/docs/ssl/ https://cloud.google.com/load-balancing/docs/network/
Question 45 of 55
45. Question
You are planning a procedure, which must be prepared by one of your colleagues, to create a VM with the required configuration (OS and disk). You need this VM for interruptible work, you want to keep expenses to a minimum, and you want to be sure that the configuration will be flexible and easily upgradeable and that it will be possible to automatically create other virtual machines. What are the steps and commands to be performed (pick 2)?
Correct answers: A, C
In order to automatically create other virtual machines from a saved and updatable configuration, you have to create a template, that is, a saved configuration that GCP uses to automate the process. A disk image doesn't set the preemptible flag. Instead, you create preemptible instances in a managed instance group by setting the preemptible option in the instance template before you create or update the group.
For any further detail: https://cloud.google.com/compute/docs/instances/preemptible
Question 46 of 55
46. Question
You are planning a procedure, which must be prepared by one of your colleagues, to create a VM with the required configuration (OS and disk); the main requirement is to make sure that the configuration will be flexible and easily upgradeable and that it will be possible to automatically create other virtual machines. What are the steps and commands to be performed (pick 2)?
In order to automatically create other virtual machines from a saved and updatable configuration, you have to create a template, that is, a saved configuration that GCP uses to automate the process. You usually start from a boot disk image. This command gives you the list of available choices: gcloud compute images list
Question 47 of 55
47. Question
You are planning to migrate to GCP a set of microservices apps that are already organized and deployed in containers. You have been asked to find the best managed platform for these applications. Scalability is a requirement, but there are no sudden, high bursts of requests. The applications are developed with different programming languages and tools and may be sessionless or may have to manage in-memory sessions. Which of the following solutions will you suggest (choose 1)?
Correct answer: B
This is the only platform that covers all the requirements. The App Engine Flexible Environment gives a broad range of solutions, with the only constraint being the use of containers, as in our case. Automatic scaling creates dynamic instances based on request rate, response latencies, and other application metrics. New instances take more time to start than in the Standard Environment, but that is clearly not an issue here. Cloud Run is not suitable because it supports only sessionless applications. The other solutions are not managed or don't support the required runtimes.
For any further detail: https://cloud.google.com/appengine/docs/flexible/java/how-instances-are-managed
Question 48 of 55
48. Question
You are planning to migrate to GCP an app with these features: C# language; time activated; it may need hours to complete its processing. A managed, effective, and simple solution is required and preferred. Which of the following solutions will you suggest (choose 1)?
Correct answer: B
This is the only solution that supports the .NET environment and long-running processing. Instances with manual and basic scaling should run indefinitely, but there is no uptime guarantee: hardware or software failures that cause early termination or frequent restarts can occur without warning and can take considerable time to resolve. All flexible instances are restarted on a weekly basis; during restarts, critical, backwards-compatible updates are automatically rolled out to the underlying operating system, while your application's image remains the same across restarts. Cloud Scheduler can trigger the procedure on schedule.
A is wrong because Compute Engine is not a managed solution.
C is wrong because the App Engine Standard Environment supports only these programming languages: Python 2.7 and 3.7; Java 8 and 11; Node.js 8 and 10; PHP 5.5, 7.2, and 7.3; Ruby 2.5 (beta); Go 1.9, 1.11, and 1.12.
D is wrong because Cloud Functions doesn't support C#.
E is wrong because Cloud Run doesn't support long-running processing.
For any further detail: https://cloud.google.com/appengine/docs/flexible/dotnet/quickstart https://cloud.google.com/scheduler/
Question 49 of 55
49. Question
You are planning to migrate several HTTP(S) apps to GKE and are concerned with scalability and with which methods are best to follow. To create and manage a load balancer for HTTP(S) apps, which of these options are possible (pick 3)?
Correct answers: A, C, E
When you create and deploy an app in GKE, the steps are as follows.
Create a GKE cluster: a cluster consists of at least one cluster master machine and multiple worker machines called nodes. Nodes are Compute Engine virtual machine (VM) instances that run the Kubernetes processes necessary to make them part of the cluster.
Get authentication credentials to interact with the cluster: gcloud container clusters get-credentials cluster-name
Deploy an application to the cluster: Kubernetes provides the Deployment object for deploying stateless applications like web servers. Service objects define rules and load balancing for accessing your application from the internet.
Expose the Deployment: to expose it to the internet so that users can access it, a Service is created, a Kubernetes resource that exposes your application to external traffic. So there is integrated support for internal and external load balancers (Services).
For a publicly accessible application, GKE offers two types of cloud load balancing. You can create TCP/UDP load balancers by specifying type: LoadBalancer on a Service resource manifest. Although a TCP load balancer works for HTTP web servers, it is not designed to terminate HTTP(S) traffic, as it is not aware of individual HTTP(S) requests; GKE does not configure any health checks for TCP/UDP load balancers. See the Guestbook tutorial for an example of this type of load balancer. Alternatively, you can create HTTP(S) load balancers by using an Ingress resource. HTTP(S) load balancers are designed to terminate HTTP(S) requests and can make better context-aware load-balancing decisions; they offer features like customizable URL maps and TLS termination, and GKE automatically configures health checks for them.
For an internal HTTP(S) load balancer service you have to set up: an HTTP health check, a backend service with a NEG as the backend, a URL map, an SSL certificate (for HTTPS), a target proxy, and a forwarding rule.
For any further detail: https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer https://cloud.google.com/load-balancing/docs/https/ https://cloud.google.com/load-balancing/docs/l7-internal/set-up-gke-pods
Question 50 of 55
50. Question
You are responsible for planning the migration to GCP of an important application that works with an Oracle database. A horizontally scalable, globally functioning SQL database is required. Which service is better to use, and which type of schema migration is recommended?
Correct answer: D
The requirements point to a SQL database that is global and distributed, with synchronized replicas and shards across multiple servers: Cloud Spanner. The risk of hotspotting needs to be addressed, that is, updates that are not distributed among multiple servers, so it is necessary to be careful not to create hotspots with the choice of your primary key. For example, if you insert records with a monotonically increasing integer as the key, you'll always insert at the end of your key space. This is undesirable because Cloud Spanner divides data among servers by key ranges, which means your inserts will be directed at a single server, creating a hotspot.
Techniques that can spread the load across multiple servers and avoid hotspots: hash the key and store it in a column, using the hash column (or the hash column together with the unique key columns) as the primary key; swap the order of the columns in the primary key; use a Universally Unique Identifier (UUID), preferably version 4, because it uses random values in the high-order bits (don't use a UUID algorithm, such as version 1, that stores the timestamp in the high-order bits); or bit-reverse sequential values.
For any further detail: https://cloud.google.com/spanner/docs/schema-and-data-model
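A minimal sketch of the UUID technique, assuming an Orders table keyed by a string OrderId (instance, database, and columns are made up for illustration): version 4 UUIDs scatter inserts across the key space instead of piling them onto one server.

```python
# Minimal sketch: insert rows keyed by a random version 4 UUID.
# Instance, database, table, and columns are illustrative assumptions.
import uuid

from google.cloud import spanner

client = spanner.Client()
database = client.instance("prod-instance").database("orders-db")

def insert_order(customer_id: str, total: float) -> None:
    with database.batch() as batch:
        batch.insert(
            table="Orders",
            columns=("OrderId", "CustomerId", "Total"),
            values=[(str(uuid.uuid4()), customer_id, total)],
        )
```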
Question 51 of 55
51. Question
You are the leader of a development group that is migrating some applications to the Cloud, and the team has asked you how to set up a local work environment. In the company they need to use specific development tools that are installed on the client machines. Which of these tips would you provide?
Correct answer: D
A is wrong because you are not connecting to the services, and it is not advisable to store credentials within the code.
B is wrong because it is terribly expensive and complex; moreover, it is not local.
C is wrong because it is a feasible way, but not local.
D is correct because the Cloud SDK is scoped to this very aim, and service accounts are the Google-recommended practice for authorization.
For any further detail: https://cloud.google.com/sdk/docs/quickstarts https://cloud.google.com/shell/docs/using-cloud-shell
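A minimal sketch of option D on a developer machine (the key path is an illustrative assumption): the Cloud SDK is authorized with a service account once, then Application Default Credentials do the rest, so no credentials ever appear in the code.

```python
# Minimal sketch: local code relies on Application Default Credentials.
# Beforehand, in the shell (the key path is an illustrative assumption):
#   gcloud auth activate-service-account --key-file=/home/dev/keys/dev-sa.json
#   export GOOGLE_APPLICATION_CREDENTIALS=/home/dev/keys/dev-sa.json
from google.cloud import storage

client = storage.Client()  # picks up the service-account key via ADC
for bucket in client.list_buckets():
    print(bucket.name)
```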
Question 52 of 55
52. Question
You are the leader of a development group; you deploy your code in containers, you want to use continuous integration and deployment techniques, and you care about organizing the procedure in the best way. In your company the new trend is to deploy apps and services within GKE, and you want to start the deployment as soon as new source code is committed. Which is the best method to deploy an application to GKE automatically?
Correct answer: A
Cloud Build provides a gke-deploy builder that enables you to deploy a containerized application to a GKE cluster. gke-deploy is a wrapper around kubectl, the command-line interface for Kubernetes. It applies Google's recommended practices for deploying applications to Kubernetes by: updating the application's Kubernetes configuration to use the container image's digest instead of a tag; adding recommended labels to the Kubernetes configuration; retrieving credentials for the GKE clusters to which you're deploying the image; and waiting for the submitted Kubernetes configuration to be ready.
If you want to deploy your applications using kubectl directly and do not need additional functionality, Cloud Build also provides a kubectl builder that you can use to deploy your application to a GKE cluster.
For any further detail: https://cloud.google.com/cloud-build/docs/deploying-builds/deploy-gke https://cloud.google.com/cloud-build/docs/quickstart-docker
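A minimal sketch of a Cloud Build configuration using the gke-deploy builder; the image name, manifest path, cluster name, and location are all illustrative assumptions:

```yaml
# Minimal sketch of a cloudbuild.yaml; names and paths are assumptions.
steps:
# Build and push the image, tagged with the commit that triggered the build.
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-app:$SHORT_SHA', '.']
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/$PROJECT_ID/my-app:$SHORT_SHA']
# Deploy to GKE with Google's recommended practices applied.
- name: 'gcr.io/cloud-builders/gke-deploy'
  args:
  - run
  - --filename=kubernetes/   # directory containing the app's manifests
  - --image=gcr.io/$PROJECT_ID/my-app:$SHORT_SHA
  - --cluster=my-cluster
  - --location=europe-west1-b
```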
Question 53 of 55
53. Question
You are the leader of a development group; you want to start using continuous integration and deployment techniques, and you care about organizing the procedure in the best way. In your company the new trend is to deploy apps and services within containers, and the idea is to use Kubernetes. You want to start the deployment as soon as new source code is committed. Which product is the most suitable for creating Docker images from code?
Correct answer: A
Cloud Build can define workflows for building, testing, and deploying across multiple environments such as VMs, serverless, Kubernetes, or Firebase.
B is wrong because Cloud Code is an integrated set of tools to help write, deploy, and debug cloud-native applications; it provides extensions to IDEs such as Visual Studio Code and IntelliJ that let you rapidly iterate, debug, and deploy code to Kubernetes.
C is wrong because Cloud Tasks is an asynchronous task execution service that encodes and executes tasks using queues.
D is wrong because Cloud Source Repositories are Git source repositories.
E is wrong because Cloud Run is a serverless platform for containerized applications.
For any further detail: https://cloud.google.com/docs/ci-cd/ https://cloud.google.com/cloud-build/docs/quickstart-docker https://cloud.google.com/source-repositories/docs/quickstart-triggering-builds-with-source-repositories
Question 54 of 55
54. Question
You are the leader of a development group; you want to start using continuous integration and deployment techniques, and you care about organizing the procedure in the best way. In your company you are not allowed to publish code on public or non-internally-certified sites. Where will the code developed by your team be stored and shared?
Correct answer: C
Google Cloud Source Repositories are private, fully featured, scalable Git repositories hosted on Google Cloud Platform. Git is a program that monitors files and tracks changes. A Git repository tracks any updates, registers a history, and may trigger actions.
A is wrong because Cloud Storage only registers complete versions of the files.
B is wrong because GitHub is a Git repository but is neither private nor internally certified.
D is wrong because App Engine is not a Git repository, and blue/green is a kind of deployment, not a source integration tool.
For any further detail: https://cloud.google.com/docs/ci-cd/
Question 55 of 55
55. Question
You are using some GCP services from an external, on-premises network and a VPC from another Cloud vendor. Which of these methods can you use to securely manage the authorization to access these services? Choose 3 answers.
Using an API management platform (option B) helps to secure server-to-server calls; GCP has Cloud Endpoints and Apigee for API management, and Apigee has the richer set of security functions. Using VPN tunnels creates a secure path between two different environments, whether on-premises or multi-cloud.
C is wrong because it is a security breach to store keys in the code.
D is wrong because signed JWT tokens are allowed only with some Google APIs, not all of them, so this will not address our issue.
For any further detail: https://cloud.google.com/apigee/ https://cloud.google.com/iam/docs/understanding-service-accounts#managing_service_account_keys