Google Professional Cloud Network Engineer Practice Test 7
Question 1 of 39
1. Question
You want to apply a new Cloud Armor policy to an application that is deployed in Google Kubernetes Engine (GKE). You want to find out which target to use for your Cloud Armor policy. Which GKE resource should you use?
Question 2 of 39
2. Question
You converted an auto mode VPC network to custom mode. Since the conversion, some of your Cloud Deployment Manager templates are no longer working. You want to resolve the problem. What should you do?
Explanation: The Deployment Manager templates must reference the converted custom mode VPC and its named subnets. When an auto mode network is converted to custom mode, templates that relied on the automatically created subnets are not updated for you; every affected template must be updated manually to point at the new subnet resources. Reference: https://cloud.google.com/deployment-manager/docs/quickstart
Question 3 of 39
3. Question
You have recently been put in charge of managing identity and access management for your organization. You have several projects and want to use scripting and automation wherever possible. You want to grant the editor role to a project member. Which two methods can you use to accomplish this? (Choose two.)
Explanation: The projects.setIamPolicy REST method sets (writes) a new IAM policy on the project (Reference: https://cloud.google.com/resource-manager/reference/rest/v1/projects/setIamPolicy). The gcloud projects add-iam-policy-binding command adds a member-and-role binding to the project's existing IAM policy (Reference: https://cloud.google.com/sdk/gcloud/reference/projects/add-iam-policy-binding). Either method can grant the editor role from a script.
Question 4 of 39
4. Question
You work for a multinational enterprise that is moving to GCP. These are the cloud requirements: an on-premises data center located in the United States (Oregon and New York) with Dedicated Interconnects connected to Cloud regions us-west1 (primary HQ) and us-east4 (backup); multiple regional offices in Europe and APAC; regional data processing required in europe-west1 and australia-southeast1; and a centralized Network Administration Team. Your security and compliance team requires a virtual inline security appliance to perform an L7 inspection for URL filtering. You want to deploy the appliance in us-west1. What should you do?
Explanation: Two VPC networks are needed because a single VM cannot attach more than one network interface to the same VPC network. By attaching two separate NICs to two separate VPCs and subnets, you can configure the routes and firewall rules needed to place the appliance inline. The interfaces must be created in the host project. Reference: https://cloud.google.com/vpc/docs/create-use-multiple-interfaces
Question 5 of 39
5. Question
You are designing a Google Kubernetes Engine (GKE) cluster for your organization. The current cluster size is expected to host 10 nodes, with 20 Pods per node and 150 services. Because of the migration of new services over the next 2 years, there is a planned growth for 100 nodes, 200 Pods per node, and 1500 services. You want to use VPC-native clusters with alias IP ranges while minimizing address consumption. How should you design this topology?
Explanation: A /25 subnet provides 2^(32-25) = 128 addresses, which covers the planned 100 nodes. A /17 secondary range (32,768 addresses) accommodates 100 nodes x 200 Pods = 20,000 Pods, and a /21 secondary range (2,048 addresses) accommodates the 1,500 Services, so the Pod and Service ranges are carried as secondary (alias) IP ranges of the subnet.
Question 6 of 39
6. Question
Your company has recently expanded their EMEA-based operations into APAC. Globally distributed users report that their SMTP and IMAP services are slow. Your company requires end-to-end encryption, but you do not have access to the SSL certificates. Which Google Cloud load balancer should you use?
Question 7 of 39
7. Question
You have a storage bucket that contains the following objects: /folder-a/object-rea32, /folder-a/object-rea321432, /folder-a/object-rea2143432, and /folder-a/object-rea56980345. Cloud CDN is enabled on the storage bucket, and all four objects have been successfully cached. You want to remove the cached copies of all the objects with the prefix folder-a, using the minimum number of commands. What should you do?
Explanation: You might want to remove an object from the cache prior to its normal expiration time. You can force an object or set of objects to be ignored by the cache by requesting a cache invalidation. Each invalidation request specifies a path pattern that identifies the object or set of objects to invalidate, so a single wildcard pattern covers every object under the prefix. Reference: https://cloud.google.com/cdn/docs/invalidating-cached-content
Question 8 of 39
8. Question
Your company is running out of network capacity to run a critical application in the on-premises data center. You want to migrate the application to GCP. You also want to ensure that the Security team does not lose its ability to monitor traffic to and from Compute Engine instances. Which two products should you incorporate into the solution? (Choose two.)
Explanation: VPC Flow Logs capture and record a sample of network flows sent from and received by VM instances, including instances used as GKE nodes; these logs can be used for network monitoring, forensics, real-time security analysis, and expense optimization (Reference: https://cloud.google.com/vpc/docs/using-flow-logs). When you enable logging for a firewall rule, Google Cloud creates an entry called a connection record each time the rule allows or denies traffic (Reference: https://cloud.google.com/vpc/docs/firewall-rules-logging).
Question 9 of 39
9. Question
Your new project currently requires 5 gigabits per second (Gbps) of egress traffic from your Google Cloud environment to your company’s private data center but may scale up to 80 Gbps of traffic in the future. You do not have any public addresses to use. Your company is looking for the most cost-effective long-term solution. Which type of connection should you use?
Question 10 of 39
10. Question
You have the Google Cloud load balancer backend configuration shown below. You want to reduce your instance group utilization by 20%. Which settings should you use?
Question 11 of 39
11. Question
You are adding steps to a working automation that uses a service account to authenticate. You need to give the automation the ability to retrieve files from a Cloud Storage bucket. Your organization requires using the least privilege possible. What should you do?
Question 12 of 39
12. Question
As a network engineer on a GCP project, you are required to set up logging for resources in various projects within the organization. Which of these is not a type of log within GCP?
Question 13 of 39
13. Question
You have a web application that is currently hosted in the us-central1 region. Users experience high latency when traveling in Asia. You've configured a network load balancer, but users have not experienced a performance improvement. You want to decrease the latency. What should you do?
Question 14 of 39
14. Question
You are designing a Google Kubernetes Engine (GKE) cluster for your organization. The current cluster size is expected to host 10 nodes, with 20 Pods per node and 150 services. Because of the migration of new services over the next 2 years, there is a planned growth for 100 nodes, 200 Pods per node, and 1500 services. You want to use VPC-native clusters with alias IP ranges while minimizing address consumption. How should you design this topology?
The correct approach to design the GKE cluster topology is:
Create a subnet of size /25 with two secondary ranges: /17 for Pods and /21 for Services. Create a VPC-native cluster and specify those ranges.
Here's why:
Subnet size: A /25 subnet provides 128 addresses, enough for the planned growth to 100 nodes.
Secondary ranges: Using secondary ranges for Pods and Services minimizes address consumption while keeping the ranges sized for 200 Pods per node and 1,500 Services.
VPC-native cluster: VPC-native clusters integrate with the VPC network, which improves network performance and security.
IP alias ranges: Specifying alias IP ranges for Pods and Services lets the same subnet carry both, further reducing address consumption.
This approach ensures that the cluster can accommodate the initial and planned growth while optimizing network performance and security (see the gcloud sketch under Question 5).
Question 15 of 39
15. Question
After a network change window, one of your company's applications stops working. The application uses an on-premises database server that no longer receives any traffic from the application. The database server IP address is 10.2.1.25. You examine the change request, and the only change is that 3 additional VPC subnets were created. The new VPC subnets created are 10.1.0.0/16, 10.2.0.0/16, and 10.3.1.0/24. The on-premises router is advertising 10.0.0.0/8. What is the most likely cause of this problem?
Correct Option:
B. The more specific VPC subnet route is taking priority: This is the most likely cause. When a more specific subnet route (10.2.0.0/16) is created, it takes precedence over a less specific route (10.0.0.0/8) because route selection always prefers the longest prefix match. Since the database server's IP (10.2.1.25) falls within the new 10.2.0.0/16 subnet, traffic destined for it now stays inside the VPC instead of following the broader 10.0.0.0/8 route advertised by the on-premises router, which breaks communication with the on-premises database.
Incorrect Options:
A. The less specific VPC subnet route is taking priority: Incorrect, as less specific routes do not take priority over more specific routes. In this case, the more specific 10.2.0.0/16 route overrides the broader 10.0.0.0/8 route.
C. The on-premises router is not advertising a route for the database server: Incorrect, because the router is advertising 10.0.0.0/8, which includes the database server's IP. The issue is the creation of a more specific route within the VPC.
D. A cloud firewall rule that blocks traffic to the on-premises database server was created during the change: Incorrect, since the scenario states the only change was the creation of new VPC subnets; no firewall rules were altered.
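To see which route wins for the database address, the VPC's routes can be listed and compared by prefix length; the network name below is a placeholder:

    # The subnet route for 10.2.0.0/16 beats the broader 10.0.0.0/8 dynamic
    # route because of longest-prefix matching.
    gcloud compute routes list --filter="network:my-vpc"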
Question 16 of 39
16. Question
Your company offers a popular gaming service. Your instances are deployed with private IP addresses, and external access is granted through a global load balancer. You believe you have identified a potential malicious actor, but aren't certain you have the correct client IP address. You want to identify this actor while minimizing disruption to your legitimate users. What should you do?
The correct answer is:
B. Create a Cloud Armor policy rule that denies traffic, enable preview mode, and review the logs.
Preview mode lets you evaluate the rule without actually blocking traffic. The rule's matches are recorded in the logs, so you can confirm the suspect client IP address without disrupting service; once confirmed, you disable preview mode to start enforcing the deny action. Legitimate users are unaffected while you gather the necessary information.
The incorrect answers are:
A. Create a Cloud Armor policy rule that denies traffic and review the logs. This immediately blocks traffic from the specified IP address, potentially disrupting legitimate users if you have misidentified the malicious actor.
C. Create a VPC firewall rule that denies traffic, enable logging with enforcement disabled, and review the logs. This logs traffic without blocking it, but VPC firewall rules sit at the instance level behind the proxied global load balancer, so they are less suitable than a Cloud Armor policy attached to the load balancer for identifying external clients.
D. Create a VPC firewall rule that denies traffic, enable logging with enforcement enabled, and review the logs. This immediately blocks traffic, potentially disrupting legitimate users if you have misidentified the malicious actor.
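A sketch of adding such a rule in preview mode (the policy name and suspect IP range are placeholders):

    # Deny the suspect range, but only log matches (preview) until confirmed.
    gcloud compute security-policies rules create 1000 \
        --security-policy=game-frontend-policy \
        --src-ip-ranges=203.0.113.7/32 \
        --action=deny-403 \
        --preview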
Question 17 of 39
17. Question
You have an application that is running in a managed instance group. Your development team has released an updated instance template which contains a new feature which was not heavily tested. You want to minimize impact to users if there is a bug in the new template. How should you update your instances?
Correct Answer: D. Perform a canary update by starting a rolling update and specifying a target size for your instances to receive the new template. Verify the new feature on the canary instances, and then roll forward to the rest of the instances.
Why D is correct:
Canary deployment: The new template is rolled out gradually to a small subset of instances, minimizing the potential impact of any unforeseen issues.
Targeted rollout: By specifying a target size, you control how many instances receive the new template, giving fine-grained control over the deployment.
Verification: The canary instances can be tested to confirm the new feature works as expected without affecting the overall application's stability.
Controlled rollout: If issues arise during the canary phase, the rollout can be paused or rolled back to the previous template.
Minimal impact: Only a small portion of users is exposed to the potential risks of the new feature.
Why the other options are incorrect:
A. Manually patch some of the instances, then perform a rolling restart on the instance group: Not recommended because it lacks automation and control over the update process; manual intervention invites human error and inconsistency.
B. Using the new instance template, perform a rolling update across all instances in the instance group and verify the new feature once the rollout completes: Risky because it exposes all instances to the new feature without any prior verification.
C. Deploy a new instance group and canary the updated template in that group, verify the new feature in the canary group, and then update the original instance group: Workable, but it requires additional resources and configuration and can complicate traffic routing and load balancing.
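A sketch of starting such a canary rolling update with gcloud (the group, zone, and template names are assumptions):

    # Send the new template to roughly 10% of the group; roll forward later by
    # re-running start-update with only --version=template=new-template.
    gcloud compute instance-groups managed rolling-action start-update my-mig \
        --zone=us-central1-a \
        --version=template=current-template \
        --canary-version=template=new-template,target-size=10%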
Question 18 of 39
18. Question
You are increasing your usage of Cloud VPN between on-premises and GCP, and you want to support more traffic than a single tunnel can handle. You want to increase the available bandwidth using Cloud VPN. What should you do?
Correct Answer: C. Add a second on-premises VPN gateway with a different public IP address. Create a second tunnel on the existing Cloud VPN gateway that forwards the same IP range, but points at the new on-premises gateway IP.
Increased bandwidth: Adding a second on-premises VPN gateway and tunnel effectively doubles the available bandwidth for the connection.
Load balancing: Traffic can be distributed across both tunnels, improving overall throughput.
Fault tolerance: If one tunnel fails, the other can continue to operate.
Simplified configuration: This approach avoids complex routing configurations and is easy to manage.
Why the other options are incorrect:
A. Double the MTU on your on-premises VPN gateway from 1460 bytes to 2920 bytes: Changing the MTU does not increase the tunnel's bandwidth, and an MTU that large is not supported across the VPN path; it would only cause fragmentation problems.
B. Create two VPN tunnels on the same Cloud VPN gateway that point to the same destination VPN gateway IP address: This does not add capacity, because both tunnels terminate on the same pair of gateway addresses and compete for the same resources.
D. Add a second Cloud VPN gateway in a different region than the existing VPN gateway, and create a new tunnel on it that forwards the same IP range but points to the existing on-premises VPN gateway IP address: This adds latency and complexity and is not the recommended way to scale bandwidth to a single on-premises gateway.
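A sketch of adding the second tunnel on the existing Classic VPN gateway with gcloud (the gateway, tunnel, and route names, the peer IP, the on-premises range, and the shared secret are all placeholders):

    # Second tunnel on the existing Cloud VPN gateway, pointing at the new
    # on-premises gateway's public IP.
    gcloud compute vpn-tunnels create tunnel-2 \
        --region=us-west1 \
        --target-vpn-gateway=cloud-vpn-gw \
        --peer-address=203.0.113.20 \
        --ike-version=2 \
        --shared-secret=MY_SHARED_SECRET \
        --local-traffic-selector=0.0.0.0/0 \
        --remote-traffic-selector=0.0.0.0/0

    # Route the same on-premises range through the new tunnel as well.
    gcloud compute routes create onprem-via-tunnel-2 \
        --network=my-vpc \
        --destination-range=192.168.0.0/16 \
        --next-hop-vpn-tunnel=tunnel-2 \
        --next-hop-vpn-tunnel-region=us-west1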
Question 19 of 39
19. Question
Your company has a security team that manages firewalls and SSL certificates. It also has a networking team that manages the networking resources. The networking team needs to be able to read firewall rules, but should not be able to create, modify, or delete them. How should you set up permissions for the networking team?
Correct Answer: B. Assign members of the networking team the compute.networkAdmin role.
As described in the Google Cloud documentation, the compute.networkAdmin role is designed for the case where a team manages network resources but should not modify firewall rules (which remain with the security team). The role grants:
View and use of all network resources, including subnets, routes, VPC networks, and Cloud VPN tunnels.
Read-only access to firewall rules (no create, modify, or delete).
This matches the scenario, where the networking team needs read-only access to firewall rules while managing the other network components.
Why the other options are incorrect:
A. Assign members of the networking team the compute.networkUser role: This role provides only limited access and does not grant permission to view firewall rules.
C. Assign members of the networking team a custom role with only the compute.networks.* and compute.firewalls.list permissions: Although this looks granular, the predefined compute.networkAdmin role already covers this use case, and custom roles add ongoing maintenance overhead.
D. Assign members of the networking team the compute.networkViewer role and add the compute.networks.use permission: This does not grant permission to view firewall rules, which is a requirement.
In short, the compute.networkAdmin role fulfills the networking team's requirements while respecting the security team's ownership of firewall management.
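A minimal sketch of granting the role (the project and group are placeholders):

    # Give the networking team's group the Network Admin role.
    gcloud projects add-iam-policy-binding host-project \
        --member=group:network-team@example.com \
        --role=roles/compute.networkAdmin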
Question 20 of 39
20. Question
You want to use Cloud Interconnect to connect your on-premises network to a GCP VPC. You cannot meet Google at one of its point-of-presence (POP) locations, and your on-premises router cannot run a Border Gateway Protocol (BGP) configuration. Which connectivity model should you use?
Correct Option:
D. Partner Interconnect with a layer 3 partner: Partner Interconnect lets you reach Google Cloud through a service provider without meeting Google at a POP location, and a layer 3 partner handles BGP on your behalf, which suits an on-premises router that cannot run BGP.
Incorrect Options:
A. Direct Peering: Requires meeting Google at one of its POP locations, which you cannot do, and it also requires you to handle BGP.
B. Dedicated Interconnect: Also requires physical connectivity at a Google POP location, so it is not suitable here.
C. Partner Interconnect with a layer 2 partner: Avoids the POP requirement, but a layer 2 partner does not handle BGP for you; you would still need to run BGP on your on-premises router, which it cannot do.
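For illustration, the Google Cloud side of a Partner Interconnect is a VLAN attachment whose pairing key is handed to the partner; the attachment, region, and router names below are placeholders:

    # Create a partner VLAN attachment; the layer 3 partner completes the
    # connection and runs BGP on your behalf.
    gcloud compute interconnects attachments partner create my-attachment \
        --region=us-west1 \
        --router=my-cloud-router \
        --edge-availability-domain=availability-domain-1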
Question 21 of 39
21. Question
You need to restrict access to your Google Cloud load-balanced application so that only specific IP addresses can connect. What should you do?
Explanation: Firewall rules control ingress and egress traffic to the application. Define an ingress rule that allows only the specific source IP addresses you want to permit, and attach it to the backend instances with a network tag referenced by the rule. Because the application sits behind a Google load balancer, the rule must also allow the load balancer health-check and proxy ranges, 130.211.0.0/22 and 35.191.0.0/16. Reference: https://cloud.google.com/load-balancing/docs/https/setting-up-https#configuring_firewall_rules
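A sketch of such a rule (the network, approved client range, and target tag are placeholders):

    # Allow only the approved clients plus the load balancer ranges.
    gcloud compute firewall-rules create allow-approved-clients \
        --network=my-vpc --direction=INGRESS --action=ALLOW \
        --rules=tcp:443 \
        --source-ranges=203.0.113.0/24,130.211.0.0/22,35.191.0.0/16 \
        --target-tags=web-app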
Question 22 of 39
22. Question
You want to create a service in GCP using IPv6. What should you do?
Question 23 of 39
23. Question
You want to deploy a VPN Gateway to connect your on-premises network to GCP. You are using a non-BGP-capable on-premises VPN device. You want to minimize downtime and operational overhead when your network grows. The device supports only IKEv2, and you want to follow Google-recommended practices. What should you do?
Question 24 of 39
24. Question
Your company just completed the acquisition of Altostrat (a current GCP customer). Each company has a separate organization in GCP and has implemented a custom DNS solution. Each organization will retain its current domain and host names until a full transition and architectural review is done in one year. These are the assumptions for both GCP environments: each organization has enabled full connectivity between all of its projects by using Shared VPC; both organizations strictly use the 10.0.0.0/8 address space for their instances, except for bastion hosts (for accessing the instances) and load balancers for serving web traffic; there are no prefix overlaps between the two organizations; both organizations already have firewall rules that allow all inbound and outbound traffic from the 10.0.0.0/8 address space; and neither organization has Interconnects to their on-premises environment. You want to integrate the networking and DNS infrastructure of both organizations as quickly as possible and with minimal downtime. Which two steps should you take? (Choose two.)
Explanation: Both organizations run customized DNS, so cross-organization name resolution can be achieved by configuring DNS forwarding (and, where supported, zone transfers) between the two DNS solutions (Reference: https://cloud.google.com/dns/docs/best-practices). Connecting the two organizations' VPCs with Cloud VPN is the quickest and cheapest way to establish network connectivity, since the address spaces do not overlap and no Interconnects exist (Reference: https://cloud.google.com/network-connectivity/docs/vpn/concepts/overview).
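If Cloud DNS were used for the forwarding piece, a private forwarding zone pointing at the other organization's name servers could look like this (the zone name, domain, network, and target IP are assumptions):

    # Forward queries for the other organization's domain to its DNS servers.
    gcloud dns managed-zones create altostrat-forwarding \
        --dns-name="altostrat.example." \
        --description="Forward to Altostrat DNS" \
        --visibility=private \
        --networks=my-vpc \
        --forwarding-targets=10.8.0.53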
Question 25 of 39
25. Question
Your on-premises data center has 2 routers connected to your Google Cloud environment through a VPN on each router. All applications are working correctly; however, all of the traffic is passing across a single VPN instead of being load-balanced across the 2 connections as desired. During troubleshooting you find:
– Each on-premises router is configured with a unique ASN.
– Each on-premises router is configured with the same routes and priorities.
– Both on-premises routers are configured with a VPN connected to a single Cloud Router.
– BGP sessions are established between both on-premises routers and the Cloud Router.
– Only 1 of the on-premises router's routes is being added to the routing table.
What is the most likely cause of this problem?
Explanation: When multiple on-premises routers are connected to a single Cloud Router, the Cloud Router learns and propagates routes only from the router with the lowest ASN and ignores advertised routes from routers with higher ASNs, which can produce unexpected behavior. For example, two on-premises routers might advertise routes over two different Cloud VPN tunnels; you expect traffic to be load-balanced between the tunnels, but Google Cloud uses only one of them because Cloud Router propagates routes only from the on-premises router with the lower ASN. Reference: https://cloud.google.com/network-connectivity/docs/router/support/troubleshooting#ecmp
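To confirm which peer's routes the Cloud Router is actually learning, its status can be inspected; the router name and region are placeholders:

    # Shows BGP peer status and the best routes the Cloud Router has learned.
    gcloud compute routers get-status my-cloud-router --region=us-west1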
Question 26 of 39
26. Question
You have ordered Dedicated Interconnect in the GCP Console and need to give the Letter of Authorization/Connecting Facility Assignment (LOA-CFA) to your cross-connect provider to complete the physical connection. Which two actions can accomplish this? (Choose two.)
Question 27 of 39
27. Question
Your company offers a popular gaming service. Your instances are deployed with private IP addresses, and external access is granted through a global load balancer. You believe you have identified a potential malicious actor, but aren't certain you have the correct client IP address. You want to identify this actor while minimizing disruption to your legitimate users. What should you do?
Question 28 of 39
28. Question
Your company's web server administrator is migrating on-premises backend servers for an application to GCP. Libraries and configurations differ significantly across these backend servers. The migration to GCP will be lift-and-shift, and all requests to the servers will be served by a single network load balancer frontend. You want to use a GCP-native solution when possible. How should you deploy this service in GCP?
Question 29 of 39
29. Question
You decide to set up Cloud NAT. After completing the configuration, you find that one of your instances is not using the Cloud NAT for outbound NAT. What is the most likely cause of this problem?
Explanation: The existence of an external IP address on an interface always takes precedence: the instance performs one-to-one NAT through that external address and does not use Cloud NAT. Reference: https://cloud.google.com/nat/docs/overview
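A sketch of removing the instance's external IP so that it starts egressing through Cloud NAT (the instance name, zone, and access-config name are assumptions):

    # Delete the external access config; outbound traffic then uses Cloud NAT.
    gcloud compute instances delete-access-config my-instance \
        --zone=us-central1-a \
        --access-config-name="External NAT"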
Question 30 of 39
30. Question
You want to set up two Cloud Routers so that one has an active Border Gateway Protocol (BGP) session, and the other one acts as a standby. Which BGP attribute should you use on your on-premises router?
Question 31 of 39
31. Question
You are trying to update firewall rules in a shared VPC for which you have been assigned only Network Admin permissions. You cannot modify the firewall rules. Your organization requires using the least privilege necessary. Which level of permissions should you request?
Correct
Explanation:- A Shared VPC Admin can define a Security Admin by granting an IAM member the Security Admin (compute.securityAdmin) role on the host project. Security Admins manage firewall rules and SSL certificates. Reference: https://cloud.google.com/vpc/docs/shared-vpc#net_and_security_admins
Incorrect
Explanation:- A Shared VPC Admin can define a Security Admin by granting an IAM member the Security Admin (compute.securityAdmin) role on the host project. Security Admins manage firewall rules and SSL certificates. Reference: https://cloud.google.com/vpc/docs/shared-vpc#net_and_security_admins
Unattempted
Explanation:- A Shared VPC Admin can define a Security Admin by granting an IAM member the Security Admin (compute.securityAdmin) role on the host project. Security Admins manage firewall rules and SSL certificates. Reference: https://cloud.google.com/vpc/docs/shared-vpc#net_and_security_admins
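For illustration only, a minimal Python sketch (Cloud Resource Manager API; the host project ID and member are hypothetical placeholders) of granting the compute.securityAdmin role on the host project via a read-modify-write of the IAM policy:

# Sketch: grant the Compute Security Admin role on the Shared VPC host project
# by reading, modifying, and writing back the project's IAM policy.
# The project ID and member below are hypothetical placeholders.
from googleapiclient import discovery

HOST_PROJECT = "my-host-project"          # hypothetical
MEMBER = "user:network-eng@example.com"   # hypothetical

crm = discovery.build("cloudresourcemanager", "v1")
policy = crm.projects().getIamPolicy(resource=HOST_PROJECT, body={}).execute()

# Add the compute.securityAdmin binding so the member can manage firewall rules.
policy.setdefault("bindings", []).append(
    {"role": "roles/compute.securityAdmin", "members": [MEMBER]}
)
crm.projects().setIamPolicy(resource=HOST_PROJECT, body={"policy": policy}).execute()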
Question 32 of 39
32. Question
You are disabling DNSSEC for one of your Cloud DNS-managed zones. You removed the DS records from your zone file, waited for them to expire from the cache, and disabled DNSSEC for the zone. You receive reports that DNSSEC-validating resolvers are unable to resolve names in your zone. What should you do?
Correct
Explanation:- Before you disable DNSSEC for a managed zone that you still want to use, you must deactivate DNSSEC for your zone at your domain registrar to ensure that DNSSEC-validating resolvers can still resolve names in the zone. Reference: https://cloud.google.com/dns/docs/registrars#del-ds
Incorrect
Explanation:- Before you disable DNSSEC for a managed zone that you still want to use, you must deactivate DNSSEC for your zone at your domain registrar to ensure that DNSSEC-validating resolvers can still resolve names in the zone. Reference: https://cloud.google.com/dns/docs/registrars#del-ds
Unattempted
Explanation:- Before you disable DNSSEC for a managed zone that you still want to use, you must deactivate DNSSEC for your zone at your domain registrar to ensure that DNSSEC-validating resolvers can still resolve names in the zone. Reference: https://cloud.google.com/dns/docs/registrars#del-ds
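For illustration only, a minimal Python sketch (Cloud DNS API; the project and managed-zone names are hypothetical placeholders) of the zone-side step of turning DNSSEC off; as the explanation notes, the DS record must first be removed at the domain registrar and allowed to expire from caches:

# Sketch: the zone-side step of disabling DNSSEC for a Cloud DNS managed zone.
# This complements, and does not replace, deactivating DNSSEC at the registrar.
# Project and zone names below are hypothetical placeholders.
from googleapiclient import discovery

PROJECT = "my-project"     # hypothetical
MANAGED_ZONE = "my-zone"   # hypothetical

dns = discovery.build("dns", "v1")
dns.managedZones().patch(
    project=PROJECT,
    managedZone=MANAGED_ZONE,
    body={"dnssecConfig": {"state": "off"}},
).execute()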
Question 33 of 39
33. Question
You have an application hosted on a Compute Engine virtual machine instance that cannot communicate with a resource outside of its subnet. When you review the flow and firewall logs, you do not see any denied traffic listed. During troubleshooting you find:
– Flow logs are enabled for the VPC subnet, and all firewall rules are set to log.
– The subnetwork logs are not excluded from Stackdriver.
– The instance that is hosting the application can communicate outside the subnet.
– Other instances within the subnet can communicate outside the subnet.
– The external resource initiates communication.
What is the most likely cause of the missing log lines?
Correct
Explanation:- The instances in the subnet can communicate outbound, so egress traffic is allowed. Because the external resource initiates the communication, the missing log lines must correspond to ingress traffic. Firewall Rules Logging only records traffic that matches a rule with logging enabled (the implied deny-ingress rule cannot be logged), so the most likely cause is that the incoming traffic is not matching the expected ingress firewall rule.
Incorrect
Explanation:- The instances in the subnet can communicate outbound, so egress traffic is allowed. Because the external resource initiates the communication, the missing log lines must correspond to ingress traffic. Firewall Rules Logging only records traffic that matches a rule with logging enabled (the implied deny-ingress rule cannot be logged), so the most likely cause is that the incoming traffic is not matching the expected ingress firewall rule.
Unattempted
Explanation:- The instances in the subnet can communicate outbound, so egress traffic is allowed. Because the external resource initiates the communication, the missing log lines must correspond to ingress traffic. Firewall Rules Logging only records traffic that matches a rule with logging enabled (the implied deny-ingress rule cannot be logged), so the most likely cause is that the incoming traffic is not matching the expected ingress firewall rule.
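For illustration only, a minimal Python sketch (Cloud Logging client library; the project and instance names are hypothetical placeholders, and the filter assumes the standard Firewall Rules Logging log name and payload fields) for checking which firewall log entries, if any, were recorded for the instance:

# Sketch: query Cloud Logging for firewall-rule log entries from the instance,
# to confirm whether any logged rule matched the inbound connection.
# Project and instance names, plus the assumed log name and payload field,
# are placeholders/assumptions.
from google.cloud import logging

PROJECT = "my-project"   # hypothetical
INSTANCE = "my-vm"       # hypothetical

client = logging.Client(project=PROJECT)
log_filter = (
    f'logName="projects/{PROJECT}/logs/compute.googleapis.com%2Ffirewall" '
    f'AND jsonPayload.instance.vm_name="{INSTANCE}"'
)
for entry in client.list_entries(filter_=log_filter, max_results=20):
    print(entry.timestamp, entry.payload)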
Question 34 of 39
34. Question
You have configured Cloud CDN using HTTP(S) load balancing as the origin for cacheable content. Compression is configured on the web servers, but responses served by Cloud CDN are not compressed. What is the most likely cause of the problem?
Question 35 of 39
35. Question
Your company’s web server administrator is migrating on-premises backend servers for an application to GCP. Libraries and configurations differ significantly across these backend servers. The migration to GCP will be lift-and-shift, and all requests to the servers will be served by a single network load balancer frontend. You want to use a GCP-native solution when possible. How should you deploy this service in GCP?
Question 36 of 39
36. Question
You are disabling DNSSEC for one of your Cloud DNS-managed zones. You removed the DS records from your zone file, waited for them to expire from the cache, and disabled DNSSEC for the zone. You receive reports that DNSSEC-validating resolvers are unable to resolve names in your zone. What should you do?
Correct
Explanation:- Before you disable DNSSEC for a managed zone that you still want to use, you must deactivate DNSSEC at your domain registrar to ensure that DNSSEC-validating resolvers can still resolve names in the zone. Reference: https://cloud.google.com/dns/docs/dnssec-config
Incorrect
Explanation:- Before you disable DNSSEC for a managed zone that you still want to use, you must deactivate DNSSEC at your domain registrar to ensure that DNSSEC-validating resolvers can still resolve names in the zone. Reference: https://cloud.google.com/dns/docs/dnssec-config
Unattempted
Explanation:- Before you disable DNSSEC for a managed zone that you still want to use, you must deactivate DNSSEC at your domain registrar to ensure that DNSSEC-validating resolvers can still resolve names in the zone. Reference: https://cloud.google.com/dns/docs/dnssec-config
Question 37 of 39
37. Question
You have a web application that is currently hosted in the us-central1 region. Users experience high latency when traveling in Asia. You’ve configured a network load balancer, but users have not experienced a performance improvement. You want to decrease the latency. What should you do?
Question 38 of 39
38. Question
You have an application running on Compute Engine that uses BigQuery to generate some results that are stored in Cloud Storage. You want to ensure that none of the application instances have external IP addresses. Which two methods can you use to accomplish this? (Choose two.)
Correct
Explanation:- Compute Engine instances that have only internal IP addresses (no external IP addresses) can use Private Google Access to reach the external IP addresses of Google APIs and services. Private Google Access is enabled at the subnet level. Reference: https://cloud.google.com/vpc/docs/private-google-access. Option E [CORRECT]: The VMs running the application need to reach the external IPs of Google APIs, so Cloud NAT must be configured (with the appropriate route) so that the Compute Engine instances can reach those external IPs. Reference: https://cloud.google.com/nat/docs/overview
Incorrect
Explanation:- Compute Engine instances that have only internal IP addresses (no external IP addresses) can use Private Google Access to reach the external IP addresses of Google APIs and services. Private Google Access is enabled at the subnet level. Reference: https://cloud.google.com/vpc/docs/private-google-access. Option E [CORRECT]: The VMs running the application need to reach the external IPs of Google APIs, so Cloud NAT must be configured (with the appropriate route) so that the Compute Engine instances can reach those external IPs. Reference: https://cloud.google.com/nat/docs/overview
Unattempted
Explanation:- Compute Engine instances that have only internal IP addresses (no external IP addresses) can use Private Google Access to reach the external IP addresses of Google APIs and services. Private Google Access is enabled at the subnet level. Reference: https://cloud.google.com/vpc/docs/private-google-access. Option E [CORRECT]: The VMs running the application need to reach the external IPs of Google APIs, so Cloud NAT must be configured (with the appropriate route) so that the Compute Engine instances can reach those external IPs. Reference: https://cloud.google.com/nat/docs/overview
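For illustration only, a minimal Python sketch (Compute Engine API; the project, region, and subnet names are hypothetical placeholders) of enabling Private Google Access on the subnet so instances with only internal IPs can reach Google APIs and services:

# Sketch: enable Private Google Access on a subnet. Names below are
# hypothetical placeholders.
from googleapiclient import discovery

PROJECT = "my-project"   # hypothetical
REGION = "us-central1"   # hypothetical
SUBNET = "my-subnet"     # hypothetical

compute = discovery.build("compute", "v1")
compute.subnetworks().setPrivateIpGoogleAccess(
    project=PROJECT,
    region=REGION,
    subnetwork=SUBNET,
    body={"privateIpGoogleAccess": True},
).execute()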
Question 39 of 39
39. Question
You are designing a shared VPC architecture. Your network and security team has strict controls over which routes are exposed between departments. Your Production and Staging departments can communicate with each other, but only via specific networks. You want to follow Google-recommended practices. How should you design this topology?