Google Professional Cloud Network Engineer Practice Test 8
Question 1 of 50
1. Question
You are using the gcloud command line tool to create a new custom role in a project by copying a predefined role. You receive this error message: "INVALID_ARGUMENT: Permission resourcemanager.projects.list is not valid". What should you do?
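For context, copying a predefined role into a project is done with gcloud iam roles copy; a minimal sketch of the kind of command the scenario describes (the source role, destination role ID, and project ID are hypothetical):
# Copy a predefined role into a project-level custom role (names are hypothetical).
gcloud iam roles copy --source="roles/compute.networkAdmin" --destination=customNetworkAdmin --dest-project=my-project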
You create multiple Compute Engine virtual machine instances to be used as TFTP servers. Which type of load balancer should you use?
Explanation: A network load balancer in GCP is a TCP/UDP load balancer. Because TFTP runs over UDP, the UDP network load balancer is the appropriate choice for these servers. Reference: https://cloud.google.com/load-balancing/docs/network
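As an illustration, a minimal sketch of a UDP external network load balancer built on a target pool (the pool, rule, instance names, and region are hypothetical):
# Create a target pool and register the TFTP instances (names are hypothetical).
gcloud compute target-pools create tftp-pool --region=us-central1
gcloud compute target-pools add-instances tftp-pool --region=us-central1 --instances=tftp-vm-1,tftp-vm-2 --instances-zone=us-central1-a
# Forward UDP port 69 (TFTP) to the pool.
gcloud compute forwarding-rules create tftp-udp-rule --region=us-central1 --ip-protocol=UDP --ports=69 --target-pool=tftp-pool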
Question 4 of 50
4. Question
You create multiple Compute Engine virtual machine instances to be used as TFTP servers. Which type of load balancer should you use?
Explanation: A network load balancer in GCP is a TCP/UDP load balancer. Because TFTP runs over UDP, the UDP network load balancer is the appropriate choice for these servers. Reference: https://cloud.google.com/load-balancing/docs/network
Question 5 of 50
5. Question
You want to configure load balancing for an internet-facing, standard voice-over-IP (VOIP) application. Which type of load balancer should you use?
Explanation: An external network load balancer in GCP is a TCP/UDP load balancer. Because standard VoIP traffic runs over UDP, the UDP network load balancer is the appropriate choice. Reference: https://cloud.google.com/load-balancing/docs/network
Question 6 of 50
6. Question
You need to ensure your personal SSH key works on every instance in your project. You want to accomplish this as efficiently as possible. What should you do?
Explanation: Use gcloud compute ssh, which generates a key pair if needed and adds your public key to the project-wide metadata. Because the key must work on every instance in the project, the public key has to live in project-wide metadata rather than in the metadata of individual instances. References: https://cloud.google.com/compute/docs/instances/adding-removing-ssh-keys and https://cloud.google.com/sdk/gcloud/reference/compute/ssh
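For illustration, two hedged examples of how this is typically done (instance, zone, and file names are hypothetical; gcloud compute ssh adds the key to project metadata only when OS Login is not enforced):
# Option 1: let gcloud generate a key pair and add the public key to project metadata.
gcloud compute ssh my-instance --zone=us-central1-a
# Option 2: add an existing public key to project-wide metadata so it works on all instances.
gcloud compute project-info add-metadata --metadata-from-file ssh-keys=ssh-keys.txt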
Question 7 of 50
7. Question
In order to provide subnet level isolation, you want to force instance-A in one subnet to route through a security appliance, called instance-B, in another subnet. What should you do?
Explanation: Create a route that takes precedence over the default route, set instance-B as its next hop, and apply it to instance-A by using a network tag. Reference: https://cloud.google.com/vpc/docs/routes
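A minimal sketch of such a tagged route (the VPC, instance, zone, and tag names are hypothetical; the appliance must be created with IP forwarding enabled):
# Send tagged traffic to the security appliance instead of the default internet gateway.
gcloud compute routes create route-via-appliance --network=my-vpc --destination-range=0.0.0.0/0 --priority=900 --next-hop-instance=instance-b --next-hop-instance-zone=us-central1-a --tags=use-appliance
# Apply the tag to instance-A so only it uses this route.
gcloud compute instances add-tags instance-a --zone=us-central1-a --tags=use-appliance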
Question 8 of 50
8. Question
Your company has a security team that manages firewalls and SSL certificates. It also has a networking team that manages the networking resources. The networking team needs to be able to read firewall rules, but should not be able to create, modify, or delete them. How should you set up permissions for the networking team?
Explanation: Assign members of the networking team the compute.networkAdmin role. The Network Admin role includes permissions to create, modify, and delete networking resources, except for firewall rules and SSL certificates, to which it grants read-only access, which is exactly what the networking team needs. Reference: https://cloud.google.com/compute/docs/access/iam#compute.networkAdmin
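For example, granting the role could look like this (the project ID and group address are hypothetical):
# Grant the Network Admin role to the networking team's group.
gcloud projects add-iam-policy-binding my-project --member="group:network-team@example.com" --role="roles/compute.networkAdmin"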
Question 9 of 50
9. Question
You have created an HTTP(S) load balanced service. You need to verify that your backend instances are responding properly. How should you configure the health check?
Explanation: Set proxy-header to the default value and set host to a custom host header that identifies the health check. The host header must be set because the backend may host many domains and the check needs to probe the right one; the request path should return a response that indicates the backend is healthy and functional. Reference: https://cloud.google.com/load-balancing/docs/health-check-concepts#content-based_health_checks
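A minimal sketch of such a health check (the name, path, and host are hypothetical; NONE is the default proxy header):
# HTTP health check that sends a custom Host header.
gcloud compute health-checks create http app-hc --port=80 --request-path=/healthz --host=app.example.com --proxy-header=NONE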
Question 10 of 50
10. Question
Your company offers a popular gaming service. Your instances are deployed with private IP addresses, and external access is granted through a global load balancer. You have recently engaged a traffic-scrubbing service and want to restrict your origin to allow connections only from the traffic-scrubbing service. What should you do?
Explanation: Cloud Armor protects applications served through a global external load balancer. A Cloud Armor security policy is attached to the load balancer's backend service and can allow only the traffic-scrubbing service's addresses while blocking everything else, effectively acting as a web application firewall. Reference: https://cloud.google.com/armor/docs/security-policy-overview
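A minimal sketch of such a policy (the policy name, backend service name, and IP range are hypothetical; 203.0.113.0/24 stands in for the scrubbing service's ranges):
# Allow only the scrubbing service and deny everything else by default.
gcloud compute security-policies create scrubber-only
gcloud compute security-policies rules create 1000 --security-policy=scrubber-only --src-ip-ranges=203.0.113.0/24 --action=allow
gcloud compute security-policies rules update 2147483647 --security-policy=scrubber-only --action=deny-403
# Attach the policy to the load balancer's backend service.
gcloud compute backend-services update my-backend-service --security-policy=scrubber-only --global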
Question 11 of 50
11. Question
You are creating a new application and require access to Cloud SQL from VPC instances without public IP addresses. Which two actions should you take? (Choose two.)
Explanation: The Service Networking API provides automatic management of the network configuration needed for certain managed services; Cloud SQL can be reached on a private IP only if this API is enabled in the project. In addition, a private connection must be created so that Cloud SQL can operate on a private IP inside the VPC. References: https://cloud.google.com/service-infrastructure/docs/service-networking/getting-started and https://cloud.google.com/sql/docs/mysql/private-ip
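A minimal sketch of enabling the API and creating the private services access connection (the VPC and reserved range names are hypothetical):
# Enable the Service Networking API.
gcloud services enable servicenetworking.googleapis.com
# Reserve a range and create the private connection used by Cloud SQL private IP.
gcloud compute addresses create google-managed-services --global --purpose=VPC_PEERING --prefix-length=16 --network=my-vpc
gcloud services vpc-peerings connect --service=servicenetworking.googleapis.com --ranges=google-managed-services --network=my-vpc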
Question 12 of 50
12. Question
You want to use Cloud Interconnect to connect your on-premises network to a GCP VPC. You cannot meet Google at one of its point-of-presence (POP) locations, and your on-premises router cannot run a Border Gateway Protocol (BGP) configuration. Which connectivity model should you use?
Explanation: Use Partner Interconnect with a Layer 3 partner. For Layer 3 connections, the service provider establishes the BGP session between your Cloud Routers and their edge routers for each VLAN attachment, so you do not need to configure BGP on your on-premises router; Google and the service provider set the correct configurations automatically. Reference: https://cloud.google.com/network-connectivity/docs/interconnect/concepts/partner-overview
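A minimal sketch of the GCP side of a partner attachment (the router, attachment, and VPC names and the region are hypothetical; the pairing key the attachment produces is handed to the service provider):
# Cloud Routers for Partner Interconnect use ASN 16550.
gcloud compute routers create partner-router --network=my-vpc --region=us-central1 --asn=16550
gcloud compute interconnects attachments partner create my-attachment --region=us-central1 --router=partner-router --edge-availability-domain=availability-domain-1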
Question 13 of 50
13. Question
You have deployed a proof-of-concept application by manually placing instances in a single Compute Engine zone. You are now moving the application to production, so you need to increase your application availability and ensure it can autoscale. How should you provision your instances?
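The scenario points toward a regional managed instance group with autoscaling; a minimal sketch under that assumption (template, group, region, and thresholds are hypothetical):
# Instance template, regional managed instance group, and autoscaler.
gcloud compute instance-templates create app-template --machine-type=e2-medium --image-family=debian-12 --image-project=debian-cloud
gcloud compute instance-groups managed create app-mig --region=us-central1 --template=app-template --size=3
gcloud compute instance-groups managed set-autoscaling app-mig --region=us-central1 --min-num-replicas=3 --max-num-replicas=10 --target-cpu-utilization=0.6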
You have the Google Cloud load balancer backend configuration shown below. You want to reduce your instance group utilization by 20%. Which settings should you use?
You want to deploy a VPN Gateway to connect your on-premises network to GCP. You are using a non-BGP-capable on-premises VPN device. You want to minimize downtime and operational overhead when your network grows. The device supports only IKEv2, and you want to follow Google-recommended practices. What should you do?
You want to use Partner Interconnect to connect your on-premises network with your VPC. You already have an Interconnect partner. What should you do first?
Your company has just launched a new critical revenue-generating web application. You deployed the application for scalability using managed instance groups, autoscaling, and a network load balancer as the frontend. One day, you notice severe bursty traffic that causes autoscaling to reach the maximum number of instances, and users of your application cannot complete transactions. After an investigation, you suspect it is a DDoS attack. You want to quickly restore user access to your application and allow successful transactions while minimizing cost. Which two steps should you take? (Choose two.)
The two steps you should take to quickly restore user access to your application and allow successful transactions while minimizing cost are:
Use Cloud Armor to blacklist the attacker's IP addresses. This blocks traffic from the attacker and prevents further disruption of your application.
Increase the maximum autoscaling backend limit to accommodate the severe bursty traffic. This allows your application to scale up to handle the increased load so users can complete transactions.
Why the other options are not recommended:
Create a global HTTP(S) load balancer and move your application backend to this load balancer: while this may help distribute traffic across multiple regions, it is not an immediate remediation for a DDoS attack.
Shut down the entire application in GCP for a few hours: this is a drastic measure that should only be taken as a last resort; it disrupts user access and may not be effective in stopping the attack.
Question 18 of 50
18. Question
In your company, two departments with separate GCP projects (code-dev and data-dev) in the same organization need to allow full cross-communication between all of their virtual machines in GCP. Each department has one VPC in its project and wants full control over its network. Neither department intends to recreate its existing computing resources. You want to implement a solution that minimizes cost. Which two steps should you take? (Choose two.)
Correct options:
B. Connect the VPCs in project code-dev and data-dev using VPC Network Peering. VPC Network Peering connects VPC networks from different projects, enabling full cross-communication without additional infrastructure. It is straightforward, cost-effective, and leaves each department in full control of its own network.
D. Enable firewall rules to allow all ingress traffic from all subnets of project code-dev to all instances in project data-dev, and vice versa. Even with VPC peering, firewall rules must permit traffic between the projects' subnets for instances in both VPCs to communicate.
Incorrect options:
A. Connect both projects using Cloud VPN. Cloud VPN adds complexity and cost compared to VPC Network Peering; VPNs are typically used to connect on-premises networks to GCP, not to connect projects within GCP.
C. Enable Shared VPC in one project (e.g., code-dev) and make the second project (e.g., data-dev) a service project. This requires significant changes to the network setup and centralizes control, which conflicts with each department's requirement for full control over its own network.
E. Create a route in the code-dev project to the destination prefixes in project data-dev with the default gateway as the next hop, and vice versa. Custom routes are unnecessary because VPC Network Peering exchanges routes automatically.
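A minimal sketch of the two steps (project IDs, network names, and the source range are hypothetical):
# Peering must be created from both sides.
gcloud compute networks peerings create code-to-data --project=code-dev --network=code-vpc --peer-project=data-dev --peer-network=data-vpc
gcloud compute networks peerings create data-to-code --project=data-dev --network=data-vpc --peer-project=code-dev --peer-network=code-vpc
# Allow ingress from the peer project's subnets (repeat in the other direction).
gcloud compute firewall-rules create allow-from-code-dev --project=data-dev --network=data-vpc --direction=INGRESS --action=ALLOW --rules=all --source-ranges=10.10.0.0/16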
Question 19 of 50
19. Question
You have created a firewall with rules that only allow traffic over HTTP, HTTPS, and SSH ports. While testing, you specifically try to reach the server over multiple ports and protocols; however, you do not see any denied connections in the firewall logs. You want to resolve the issue. What should you do?
The correct answer is:
D. Create an explicit Deny Any rule and enable logging on the new rule.
Logging cannot be enabled on the implied deny rule, so denied connections never appear in the firewall logs. Creating an explicit low-priority deny rule with logging enabled ensures that all denied traffic is logged, which lets you capture and analyze it to find the root cause.
The incorrect answers are:
A. Enable logging on the default Deny Any Firewall Rule. The implied default deny rule does not support Firewall Rules Logging.
B. Enable logging on the VM Instances that receive traffic. Instance-level logging does not capture traffic that is denied at the firewall.
C. Create a logging sink forwarding all firewall logs with no filters. A sink only exports logs that already exist; it does not generate log entries for denied traffic.
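A minimal sketch of such a rule (the rule and network names and the priority are hypothetical):
# Explicit low-priority deny rule with Firewall Rules Logging enabled.
gcloud compute firewall-rules create deny-all-ingress --network=my-vpc --direction=INGRESS --action=DENY --rules=all --priority=65000 --enable-logging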
Question 20 of 50
20. Question
You have enabled HTTP(S) load balancing for your application, and your application developers have reported that HTTP(S) requests are not being distributed correctly to your Compute Engine virtual machine instances. You want to find data about how the requests are being distributed. Which two methods can accomplish this? (Choose two.)
The correct answers are:
A. On the Load Balancer details page of the GCP Console, click the Monitoring tab, select your backend service, and look at the graphs. This gives a visual representation of how traffic is being distributed to your backend instances, along with health and performance metrics.
D. In Stackdriver Monitoring, select Resources > Google Cloud Load Balancers and review the Key Metrics graphs in the dashboard. These graphs also show how traffic is being distributed, including request counts, response times, and error rates.
The incorrect answers are:
B. In Stackdriver Error Reporting, look for any unacknowledged errors for the Cloud Load Balancers service. This only surfaces errors related to the load balancer itself, not the distribution of traffic to backend instances.
C. In Stackdriver Monitoring, select Resources > Metrics Explorer and search for the https/request_bytes_count metric. This metric reports the amount of data transferred, not how requests are distributed.
E. In Stackdriver Monitoring, create a new dashboard and track the https/backend_request_count metric for the load balancer. While this metric is useful, on its own it is not sufficient to understand how requests are distributed across backend instances.
Question 21 of 50
21. Question
Your company offers a popular gaming service. Your instances are deployed with private IP addresses, and external access is granted through a global load balancer. You have recently engaged a traffic-scrubbing service and want to restrict your origin to allow connections only from the traffic-scrubbing service. What should you do?
Correct option:
A. Create a Cloud Armor security policy that blocks all traffic except for the traffic-scrubbing service. Cloud Armor security policies can explicitly allow or deny traffic at the global load balancer based on criteria such as source IP ranges, so allowing only the scrubbing service's ranges protects the origin while legitimate, scrubbed traffic still passes through.
Incorrect options:
B. Create a VPC firewall rule that blocks all traffic except for the traffic-scrubbing service. Firewall rules operate at the instance network level and do not provide the same granularity or ease of management as Cloud Armor policies for external HTTP(S) traffic arriving through the global load balancer.
C. Create a VPC Service Controls perimeter that blocks all traffic except for the traffic-scrubbing service. VPC Service Controls create perimeters around Google Cloud resources to prevent data exfiltration; they are not intended for controlling incoming HTTP(S) traffic to a load-balanced application and would not restrict access to the origin here.
D. Create iptables firewall rules that block all traffic except for the traffic-scrubbing service. Managing iptables on each instance requires manual configuration and maintenance, and this approach does not integrate with the existing load balancer as cleanly as Cloud Armor.
Question 22 of 50
22. Question
You have a storage bucket that contains the following objects: folder-a/image-a-1.jpg, folder-a/image-a-2.jpg, folder-b/image-b-1.jpg, folder-b/image-b-2.jpg. Cloud CDN is enabled on the storage bucket, and all four objects have been successfully cached. You want to remove the cached copies of all the objects with the prefix folder-a, using the minimum number of commands. What should you do?
Correct option:
B. Issue a cache invalidation command with the pattern /folder-a/*. An invalidation with this pattern targets exactly the cached objects under that prefix, removing them with a single command without affecting other cached content.
Incorrect options:
A. Add an appropriate lifecycle rule on the storage bucket. Lifecycle rules manage stored objects over time (for example, changing their storage class or deleting them); they do not invalidate content already cached by Cloud CDN.
C. Make sure that all the objects with prefix folder-a are not shared publicly. Changing the public access settings does not remove objects that are already cached.
D. Disable Cloud CDN on the storage bucket, wait 90 seconds, and re-enable it. This is an indirect and disruptive approach that does not specifically target the desired objects and could interrupt service.
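For illustration, the invalidation would look like this (the URL map name is hypothetical):
# Invalidate everything cached under /folder-a/.
gcloud compute url-maps invalidate-cdn-cache my-url-map --path "/folder-a/*"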
Question 23 of 50
23. Question
You are creating an instance group and need to create a new health check for HTTP(s) load balancing. Which two methods can you use to accomplish this? (Choose two.)
The two methods you can use to create a new health check for HTTP(S) load balancing in Google Cloud Platform (GCP) are:
Create a new health check using the gcloud command-line tool.
Create a new health check, or select an existing one, when you complete the load balancer's backend configuration in the GCP Console.
The gcloud CLI can create and manage health checks directly, and the GCP Console lets you create a new health check or reuse an existing one while configuring the load balancer's backend service.
The other options are less suitable: the VPC Network section of the Console is not where load-balancing health checks are created, and legacy health checks, while still supported, are generally not recommended for new deployments; the newer, more flexible health check types should be used instead.
With either method, you define the health check parameters (for example, port, request path, and expected response) that the load balancer uses to determine whether backend instances are healthy.
You need to create a GKE cluster in an existing VPC that is accessible from on-premises. You must meet the following requirements: -IP ranges for pods and services must be as small as possible. -The nodes and the master must not be reachable from the internet. -You must be able to use kubectl commands from on-premises subnets to manage the cluster. How should you create the GKE cluster?
Explanation: Create a VPC-native GKE cluster using user-managed IP ranges, enable a private endpoint on the cluster master, set the pod and service ranges as /24, set up a network proxy to access the master, and enable master authorized networks.
This approach meets the requirements: a VPC-native cluster integrates tightly with the existing VPC and gives fine-grained control over networking; user-managed IP ranges let you choose small /24 ranges for pods and services; a private endpoint keeps the master off the internet; master authorized networks restrict access to the master to your on-premises subnets; and a network proxy provides a path for kubectl from on-premises when the private endpoint cannot be reached directly.
The other options are less suitable: a routes-based private cluster offers less control over networking than a VPC-native cluster, GKE-managed IP ranges do not allow the small, customized ranges required, and network policies control traffic within the cluster but do not address isolating the master or reaching it from on-premises.
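A minimal sketch of such a cluster under these assumptions (all names, the region, and the CIDR ranges are hypothetical):
# Private, VPC-native cluster with user-managed /24 ranges and master authorized networks.
gcloud container clusters create private-cluster --region=us-central1 --network=my-vpc --subnetwork=gke-subnet --enable-ip-alias --cluster-ipv4-cidr=10.56.0.0/24 --services-ipv4-cidr=10.57.0.0/24 --enable-private-nodes --enable-private-endpoint --master-ipv4-cidr=172.16.0.32/28 --enable-master-authorized-networks --master-authorized-networks=10.0.0.0/8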
You want to use Cloud Interconnect to connect your on-premises network to a GCP VPC. You cannot meet Google at one of its point-of-presence (POP) locations, and your on-premises router cannot run a Border Gateway Protocol (BGP) configuration. Which connectivity model should you use?
Correct
Partner Interconnect with a Layer 3 partner is the most suitable connectivity model in this scenario.
Here’s why:
No on-premises BGP requirement: With a Layer 3 partner, the service provider establishes the BGP sessions with your Cloud Routers and handles routing to your on-premises network, so your on-premises router does not need to run BGP.
Flexibility in location: Partner Interconnect lets you connect through a service provider’s network, which reaches many more locations than Google’s direct POP locations.
Why other options are less suitable:
Direct Peering: Requires you to physically connect to Google at a POP location, which is not possible in this scenario.
Dedicated Interconnect: Requires meeting Google at a supported colocation facility and is not an option when you cannot reach a POP.
Partner Interconnect with a Layer 2 partner: A Layer 2 connection extends your network to the VPC, so your on-premises router must establish the BGP sessions with the Cloud Router itself, which your router cannot do.
By choosing Partner Interconnect with a Layer 3 partner, you can establish a reliable connection to GCP while accommodating your location and routing constraints.
You have created an HTTP(S) load balanced service. You need to verify that your backend instances are responding properly. How should you configure the health check?
Correct
Explanation:- Configure the health check with the expected host header and a request path that returns a known response, because a backend can host multiple domains and the health check must verify that the correct one is serving. A successful response on that path indicates the backend is healthy and functional. Reference: https://cloud.google.com/load-balancing/docs/health-check-concepts#content-based_health_checks
Incorrect
Explanation:- Configure the health check with the expected host header and a request path that returns a known response, because a backend can host multiple domains and the health check must verify that the correct one is serving. A successful response on that path indicates the backend is healthy and functional. Reference: https://cloud.google.com/load-balancing/docs/health-check-concepts#content-based_health_checks
Unattempted
Explanation:- Configure the health check with the expected host header and a request path that returns a known response, because a backend can host multiple domains and the health check must verify that the correct one is serving. A successful response on that path indicates the backend is healthy and functional. Reference: https://cloud.google.com/load-balancing/docs/health-check-concepts#content-based_health_checks
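For example, a content-based HTTP health check with an explicit host header and request path could be created like this (a sketch; the name, host, and path are illustrative):
gcloud compute health-checks create http web-hc \
  --host=www.example.com \
  --request-path=/healthz \
  --port=80
The backend must return an HTTP 200 on that path for the instance to be marked healthy.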
Question 27 of 50
27. Question
Your company just moved to Google Cloud. You configured separate VPC networks for the Finance and Sales departments. Finance needs access to some resources that are part of the Sales VPC. You want to allow the private RFC 1918 address space traffic to flow between Sales and Finance VPCs without any additional cost and without compromising the security or performance. What should you do?
You created two subnets named Test and Web in the same VPC network. You enabled VPC Flow Logs for the Web subnet. You are trying to connect instances in the Test subnet to the web servers running in the Web subnet, but all of the connections are failing. You do not see any entries in the Stackdriver logs. What should you do?
Correct
Explanation:- The connections fail because the traffic is blocked by a firewall rule (the implied deny-ingress rule), and traffic that never reaches the VM does not generate flow log entries. Once an allow rule for the Test subnet's traffic is configured, the requests will reach the web servers and the flows will be logged in Stackdriver.
Incorrect
Explanation:- The connections fail because the traffic is blocked by a firewall rule (the implied deny-ingress rule), and traffic that never reaches the VM does not generate flow log entries. Once an allow rule for the Test subnet's traffic is configured, the requests will reach the web servers and the flows will be logged in Stackdriver.
Unattempted
Explanation:- The connections fail because the traffic is blocked by a firewall rule (the implied deny-ingress rule), and traffic that never reaches the VM does not generate flow log entries. Once an allow rule for the Test subnet's traffic is configured, the requests will reach the web servers and the flows will be logged in Stackdriver.
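As an illustration, an allow rule for the Test subnet's range could look like this (a sketch; the network name, target tag, and CIDR are assumptions based on the scenario):
gcloud compute firewall-rules create allow-test-to-web \
  --network=my-vpc \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:80,tcp:443 \
  --source-ranges=10.10.1.0/24 \
  --target-tags=web
After the rule is in place, the connections succeed and the flows appear in the logs for the Web subnet.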
Question 29 of 50
29. Question
You are configuring a hybrid cloud topology for your organization. You are using Cloud VPN and Cloud Router to establish connectivity to your on-premises environment. You need to transfer data from on-premises to a Cloud Storage bucket and to BigQuery. Your organization has a strict security policy that mandates the use of VPN for communication to the cloud. You want to follow Google-recommended practices. What should you do?
Correct
Explanation:- Private Google Access for on-premises hosts enables On-Prem Private API access, allowing VPN and Interconnect customers to reach Google APIs such as BigQuery and Cloud Storage natively across the Interconnect/VPN connection.
Incorrect
Explanation:- Private Google Access for on-premises hosts enables On-Prem Private API access, allowing VPN and Interconnect customers to reach Google APIs such as BigQuery and Cloud Storage natively across the Interconnect/VPN connection.
Unattempted
Explanation:- Private Google Access for on-premises hosts enables On-Prem Private API access, allowing VPN and Interconnect customers to reach Google APIs such as BigQuery and Cloud Storage natively across the Interconnect/VPN connection.
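A minimal sketch of the Google-recommended setup (the router name, region, and use of the private.googleapis.com range are assumptions; on-premises DNS for *.googleapis.com must also be pointed at this range):
gcloud compute routers update my-vpn-router \
  --region=us-central1 \
  --advertisement-mode=CUSTOM \
  --set-advertisement-groups=ALL_SUBNETS \
  --set-advertisement-ranges=199.36.153.8/30
This advertises the private.googleapis.com range (199.36.153.8/30) to on-premises over the VPN so that BigQuery and Cloud Storage API calls stay on the private connection.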
Question 30 of 50
30. Question
Your manager has asked for a list of all custom roles with stage General Availability within Identity Access Management (IAM). What should you do?
Correct
Explanation:- This command lists the custom roles together with their stage field, so the roles in the General Availability (GA) stage can be identified.
Incorrect
Explanation:- This command lists the custom roles together with their stage field, so the roles in the General Availability (GA) stage can be identified.
Unattempted
Explanation:- This command lists the custom roles together with their stage field, so the roles in the General Availability (GA) stage can be identified.
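For example, project-level custom roles and their stage can be listed and filtered with gcloud (a sketch; the project ID is a placeholder):
gcloud iam roles list --project=my-project --format="table(name,title,stage)" --filter="stage=GA"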
Question 31 of 50
31. Question
One of the secure web applications in your Google Cloud project is currently only serving users in North America. All of the application’s resources are currently hosted in a single Google Cloud region. The application uses a large catalog of graphical assets from a Cloud Storage bucket. You are notified that the application now needs to serve global clients without adding any additional Google Cloud regions or Compute Engine instances. What should you do?
Correct
Explanation:- A is correct because Cloud CDN can front a Cloud Storage bucket and cache the graphical assets at edge locations closest to users, so global clients can be served without adding regions or Compute Engine instances.
Incorrect
Explanation:- A is correct because Cloud CDN can front a Cloud Storage bucket and cache the graphical assets at edge locations closest to users, so global clients can be served without adding regions or Compute Engine instances.
Unattempted
Explanation:- A is correct because Cloud CDN can front a Cloud Storage bucket and cache the graphical assets at edge locations closest to users, so global clients can be served without adding regions or Compute Engine instances.
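For illustration, the bucket can be added behind the existing HTTP(S) load balancer as a CDN-enabled backend bucket (a sketch; names are assumptions):
gcloud compute backend-buckets create assets-backend \
  --gcs-bucket-name=my-assets-bucket \
  --enable-cdn
The load balancer's URL map is then updated so that requests for the graphical assets are routed to this backend bucket.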
Question 32 of 50
32. Question
You want to configure load balancing for an internet-facing, standard voice-over-IP (VOIP) application. Which type of load balancer should you use?
Correct
Explanation:- An external network load balancer in GCP is a UDP/TCP load balancer. We can use the UDP network load balancer for VoIP applications as it works on the UDP protocol. Reference: https://cloud.google.com/load-balancing/docs/network
Incorrect
Explanation:- An external network load balancer in GCP is a UDP/TCP load balancer. We can use the UDP network load balancer for VoIP applications as it works on the UDP protocol. Reference: https://cloud.google.com/load-balancing/docs/network
Unattempted
Explanation:- An external network load balancer in GCP is a UDP/TCP load balancer. We can use the UDP network load balancer for VoIP applications as it works on the UDP protocol. Reference: https://cloud.google.com/load-balancing/docs/network
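As a sketch of an external network (UDP) load balancer using a target pool (names, region, and port are illustrative; 5060 is the standard SIP port):
gcloud compute target-pools create voip-pool --region=us-central1
gcloud compute target-pools add-instances voip-pool \
  --instances=voip-1,voip-2 --instances-zone=us-central1-a --region=us-central1
gcloud compute forwarding-rules create voip-fr \
  --region=us-central1 --ip-protocol=UDP --ports=5060 --target-pool=voip-pool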
Question 33 of 50
33. Question
You want to configure a NAT to perform address translation between your on-premises network blocks and GCP. Which NAT solution should you use?
Correct
Explanation:- Cloud NAT configures the Andromeda software that powers your Virtual Private Cloud (VPC) network so that it provides source network address translation (SNAT) for VMs without external IP addresses. Cloud NAT also provides destination network address translation (DNAT) for established inbound response packets only. Reference: https://cloud.google.com/nat/docs/overview
Incorrect
Explanation:- Cloud NAT configures the Andromeda software that powers your Virtual Private Cloud (VPC) network so that it provides source network address translation (SNAT) for VMs without external IP addresses. Cloud NAT also provides destination network address translation (DNAT) for established inbound response packets only. Reference: https://cloud.google.com/nat/docs/overview
Unattempted
Explanation:- Cloud NAT configures the Andromeda software that powers your Virtual Private Cloud (VPC) network so that it provides source network address translation (SNAT) for VMs without external IP addresses. Cloud NAT also provides destination network address translation (DNAT) for established inbound response packets only. Reference: https://cloud.google.com/nat/docs/overview
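A minimal Cloud NAT sketch, as described in the explanation (router and NAT names, network, and region are assumptions):
gcloud compute routers create nat-router --network=my-vpc --region=us-central1
gcloud compute routers nats create my-nat \
  --router=nat-router --region=us-central1 \
  --auto-allocate-nat-external-ips --nat-all-subnet-ip-ranges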
Question 34 of 50
34. Question
You need to ensure your personal SSH key works on every instance in your project. You want to accomplish this as efficiently as possible. What should you do?
Correct
Explanation:- Since we need to allow our SSH key to work on every instance of the GCP project, we need to upload it to project-wide metadata. This is done by adding the public key to the GCP project metadata. Reference: https://cloud.google.com/compute/docs/instances/adding-removing-ssh-keys
Incorrect
Explanation:- Since we need to allow our SSH key to work on every instance of the GCP project, we need to upload it to project-wide metadata. This is done by adding the public key to the GCP project metadata. Reference: https://cloud.google.com/compute/docs/instances/adding-removing-ssh-keys
Unattempted
Explanation:- Since we need to allow our SSH key to work on every instance of the GCP project, we need to upload it to project-wide metadata. This is done by adding the public key to the GCP project metadata. Reference: https://cloud.google.com/compute/docs/instances/adding-removing-ssh-keys
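For example, a public key can be added to the project-wide metadata like this (a sketch; the username and key value are placeholders, and the key must use the expected USERNAME:KEY format):
gcloud compute project-info add-metadata \
  --metadata=ssh-keys="alice:ssh-ed25519 AAAA... alice"
If the ssh-keys metadata entry already exists, append to its current value (for example with --metadata-from-file=ssh-keys=FILE) instead of overwriting it.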
Question 35 of 50
35. Question
In order to provide subnet level isolation, you want to force instance-A in one subnet to route through a security appliance, called instance-B, in another subnet. What should you do?
Correct
Explanation:- We need to create a custom route that takes precedence over the default route (a more specific destination or a lower priority value), with the security appliance instance-B as its next hop, and apply it only to instance-A by using a network tag. Instance-B must have IP forwarding enabled. Reference: https://cloud.google.com/vpc/docs/routes
Incorrect
Explanation:- We need to create a custom route that takes precedence over the default route (a more specific destination or a lower priority value), with the security appliance instance-B as its next hop, and apply it only to instance-A by using a network tag. Instance-B must have IP forwarding enabled. Reference: https://cloud.google.com/vpc/docs/routes
Unattempted
Explanation:- We need to create a custom route that takes precedence over the default route (a more specific destination or a lower priority value), with the security appliance instance-B as its next hop, and apply it only to instance-A by using a network tag. Instance-B must have IP forwarding enabled. Reference: https://cloud.google.com/vpc/docs/routes
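As an illustration (names, zone, and CIDR are assumptions; instance-B must have been created with --can-ip-forward):
gcloud compute routes create via-security-appliance \
  --network=my-vpc \
  --destination-range=0.0.0.0/0 \
  --next-hop-instance=instance-b \
  --next-hop-instance-zone=us-central1-a \
  --priority=900 \
  --tags=route-via-appliance
gcloud compute instances add-tags instance-a \
  --zone=us-central1-a --tags=route-via-appliance
The lower priority value (900) makes this route win over the default route (priority 1000) for the tagged instance only.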
Question 36 of 50
36. Question
You create a Google Kubernetes Engine private cluster and want to use kubectl to get the status of the pods. In one of your instances you notice the master is not responding, even though the cluster is up and running. What should you do to solve the problem?
You work on a centralized network administration team for a multinational enterprise that is moving to Google Cloud. Your company has on-premises data centers located in the United States in Oregon and New York, with dedicated interconnects to cloud regions us-west1 and us-east4. There are multiple regional offices in Europe and APAC and regional data processing in europe-west1 and australia-southeast1. You want to configure your Cloud Routers so that data from the US data centers can be processed by Compute Engine instances in regional offices in London, UK and Sydney, Australia. How should you configure the topology?
You want to allow access over ports 80 and 443 to servers with the tag “webservers” from external addresses. Currently, there is a firewall rule with a priority of 1000 that denies all incoming traffic from an external address on all ports and protocols. You want to allow the desired traffic without deleting the existing rule. What should you do?
You are designing a shared VPC architecture. Your network and security team has strict controls over which routes are exposed between departments. Your Production and Staging departments can communicate with each other, but only via specific networks. You want to follow Google-recommended practices. How should you design this topology?
You need to give each member of your network operations team least privilege access to create, modify, and delete Cloud Interconnect VLAN attachments. What should you do?
Correct
Explanation:- The permissions required to create an Interconnect VLAN attachment are: compute.interconnectAttachments.create, compute.interconnectAttachments.get, compute.routers.create, compute.routers.get, and compute.routers.update. Grant the team a role limited to these permissions to follow least privilege. Reference: https://cloud.google.com/network-connectivity/docs/interconnect/how-to/dedicated/creating-vlan-attachments
Incorrect
Explanation:- The permissions required to create an Interconnect VLAN attachment are: compute.interconnectAttachments.create, compute.interconnectAttachments.get, compute.routers.create, compute.routers.get, and compute.routers.update. Grant the team a role limited to these permissions to follow least privilege. Reference: https://cloud.google.com/network-connectivity/docs/interconnect/how-to/dedicated/creating-vlan-attachments
Unattempted
Explanation:- The permissions required to create an Interconnect VLAN attachment are: compute.interconnectAttachments.create, compute.interconnectAttachments.get, compute.routers.create, compute.routers.get, and compute.routers.update. Grant the team a role limited to these permissions to follow least privilege. Reference: https://cloud.google.com/network-connectivity/docs/interconnect/how-to/dedicated/creating-vlan-attachments
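For example, a custom role limited to these permissions could be created and granted (a sketch; the role ID, project, and member are placeholders):
gcloud iam roles create interconnectAttachmentEditor \
  --project=my-project \
  --title="Interconnect Attachment Editor" \
  --permissions=compute.interconnectAttachments.create,compute.interconnectAttachments.get,compute.routers.create,compute.routers.get,compute.routers.update \
  --stage=GA
gcloud projects add-iam-policy-binding my-project \
  --member=user:netops@example.com \
  --role=projects/my-project/roles/interconnectAttachmentEditor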
Question 41 of 50
41. Question
You have an application that is running in a managed instance group. Your development team has released an updated instance template which contains a new feature which was not heavily tested. You want to minimize impact to users if there is a bug in the new template. How should you update your instances?
Your organization is deploying a single project for 3 separate departments. Two of these departments require network connectivity between each other, but the third department should remain in isolation. Your design should create separate network administrative domains between these departments. You want to minimize operational overhead. How should you design the topology?
Correct
Explanation:- Use Shared VPC to connect to a common VPC network. Resources in those projects can communicate with each other securely and efficiently across project boundaries using internal IPs. You can manage shared network resources, such as subnets, routes, and firewalls, from a central host project, enabling you to apply and enforce consistent network policies across the projects. With Shared VPC and IAM controls, you can separate network administration from project administration. This separation helps you implement the principle of least privilege. For example, a centralized network team can administer the network without having any permissions in the participating projects. Similarly, the project admins can manage their project resources without any permissions to manipulate the shared network. Reference: https://cloud.google.com/docs/enterprise/best-practices-for-enterprise-organizations
Incorrect
Explanation:- Use Shared VPC to connect to a common VPC network. Resources in those projects can communicate with each other securely and efficiently across project boundaries using internal IPs. You can manage shared network resources, such as subnets, routes, and firewalls, from a central host project, enabling you to apply and enforce consistent network policies across the projects. With Shared VPC and IAM controls, you can separate network administration from project administration. This separation helps you implement the principle of least privilege. For example, a centralized network team can administer the network without having any permissions in the participating projects. Similarly, the project admins can manage their project resources without any permissions to manipulate the shared network. Reference: https://cloud.google.com/docs/enterprise/best-practices-for-enterprise-organizations
Unattempted
Explanation:- Use Shared VPC to connect to a common VPC network. Resources in those projects can communicate with each other securely and efficiently across project boundaries using internal IPs. You can manage shared network resources, such as subnets, routes, and firewalls, from a central host project, enabling you to apply and enforce consistent network policies across the projects. With Shared VPC and IAM controls, you can separate network administration from project administration. This separation helps you implement the principle of least privilege. For example, a centralized network team can administer the network without having any permissions in the participating projects. Similarly, the project admins can manage their project resources without any permissions to manipulate the shared network. Reference: https://cloud.google.com/docs/enterprise/best-practices-for-enterprise-organizations
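A minimal sketch of attaching the two departments that need connectivity to a Shared VPC, as described in the explanation (project IDs are placeholders; the isolated department simply is not attached):
gcloud compute shared-vpc enable host-project
gcloud compute shared-vpc associated-projects add dept-a-project --host-project=host-project
gcloud compute shared-vpc associated-projects add dept-b-project --host-project=host-project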
Question 43 of 50
43. Question
Your company's web server administrator is migrating on-premises backend servers for an application to GCP. Libraries and configurations differ significantly across these backend servers. The migration to GCP will be lift-and-shift, and all requests to the servers will be served by a single network load balancer frontend. You want to use a GCP-native solution when possible. How should you deploy this service in GCP?
You have deployed a proof-of-concept application by manually placing instances in a single Compute Engine zone. You are now moving the application to production, so you need to increase your application availability and ensure it can autoscale. How should you provision your instances?
You have a storage bucket that contains two objects. Cloud CDN is enabled on the bucket, and both objects have been successfully cached. Now you want to make sure that one of the two objects will not be cached anymore, and will always be served to the internet directly from the origin. What should you do?
Correct
Explanation:- Adding a Cache-Control metadata entry (such as private or no-store) to the object in the bucket prevents Cloud CDN from caching it, so it is always served from the origin. The previously cached copies of the object should also be invalidated. Reference: https://cloud.google.com/cdn/docs/caching
Incorrect
Explanation:- Adding a Cache-Control metadata entry (such as private or no-store) to the object in the bucket prevents Cloud CDN from caching it, so it is always served from the origin. The previously cached copies of the object should also be invalidated. Reference: https://cloud.google.com/cdn/docs/caching
Unattempted
Explanation:- Adding a Cache-Control metadata entry (such as private or no-store) to the object in the bucket prevents Cloud CDN from caching it, so it is always served from the origin. The previously cached copies of the object should also be invalidated. Reference: https://cloud.google.com/cdn/docs/caching
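For example (bucket, object, and URL map names are placeholders):
gsutil setmeta -h "Cache-Control: private, max-age=0" gs://my-bucket/logo.png
gcloud compute url-maps invalidate-cdn-cache my-url-map --path="/logo.png"
The metadata change stops future caching, and the invalidation removes the copies that are already cached.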
Question 46 of 50
46. Question
Your company offers a popular gaming service. Your instances are deployed with private IP addresses, and external access is granted through a global load balancer. You have recently engaged a traffic-scrubbing service and want to restrict your origin to allow connections only from the traffic-scrubbing service. What should you do?
Correct
Explanation:- Cloud Armor is used to protect applications that are served through a global load balancer. A Cloud Armor security policy is attached to the load balancer's backend service; its rules can allow only the traffic-scrubbing service's IP ranges and block all other traffic. Cloud Armor acts as a web application firewall. Reference: https://cloud.google.com/armor/docs/security-policy-overview
Incorrect
Explanation:- Cloud Armor is used to protect applications that are served through a global load balancer. A Cloud Armor security policy is attached to the load balancer's backend service; its rules can allow only the traffic-scrubbing service's IP ranges and block all other traffic. Cloud Armor acts as a web application firewall. Reference: https://cloud.google.com/armor/docs/security-policy-overview
Unattempted
Explanation:- Cloud Armor is used to protect applications that are served through a global load balancer. A Cloud Armor security policy is attached to the load balancer's backend service; its rules can allow only the traffic-scrubbing service's IP ranges and block all other traffic. Cloud Armor acts as a web application firewall. Reference: https://cloud.google.com/armor/docs/security-policy-overview
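A sketch of such a policy (the scrubbing service's IP range and the backend service name are placeholders):
gcloud compute security-policies create scrubber-only
gcloud compute security-policies rules create 1000 \
  --security-policy=scrubber-only --src-ip-ranges=203.0.113.0/24 --action=allow
gcloud compute security-policies rules update 2147483647 \
  --security-policy=scrubber-only --action=deny-403
gcloud compute backend-services update web-backend \
  --security-policy=scrubber-only --global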
Question 47 of 50
47. Question
You are creating a new application and require access to Cloud SQL from VPC instances without public IP addresses. Which two actions should you take? (Choose two.)
Correct
Explanation:- The Service Networking API provides automatic management of the network configuration required for private services access; Cloud SQL can be reached on its private IP once this API is enabled in the project. Reference: https://cloud.google.com/service-infrastructure/docs/service-networking/getting-started. In addition, a private connection must be created between your VPC and the service producer network so that Cloud SQL can operate on a private IP. Reference: https://cloud.google.com/sql/docs/mysql/private-ip
Incorrect
Explanation:- The Service Networking API provides automatic management of the network configuration required for private services access; Cloud SQL can be reached on its private IP once this API is enabled in the project. Reference: https://cloud.google.com/service-infrastructure/docs/service-networking/getting-started. In addition, a private connection must be created between your VPC and the service producer network so that Cloud SQL can operate on a private IP. Reference: https://cloud.google.com/sql/docs/mysql/private-ip
Unattempted
Explanation:- The Service Networking API provides automatic management of the network configuration required for private services access; Cloud SQL can be reached on its private IP once this API is enabled in the project. Reference: https://cloud.google.com/service-infrastructure/docs/service-networking/getting-started. In addition, a private connection must be created between your VPC and the service producer network so that Cloud SQL can operate on a private IP. Reference: https://cloud.google.com/sql/docs/mysql/private-ip
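A minimal sketch of the two actions (the network and range names are placeholders):
gcloud services enable servicenetworking.googleapis.com
gcloud compute addresses create sql-private-range \
  --global --purpose=VPC_PEERING --prefix-length=20 --network=my-vpc
gcloud services vpc-peerings connect \
  --service=servicenetworking.googleapis.com \
  --ranges=sql-private-range --network=my-vpc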
Question 48 of 50
48. Question
You want to use Cloud Interconnect to connect your on-premises network to a GCP VPC. You cannot meet Google at one of its point-of-presence (POP) locations, and your on-premises router cannot run a Border Gateway Protocol (BGP) configuration. Which connectivity model should you use?
Correct
Explanation:- For Layer 3 connections, your service provider establishes a BGP session between your Cloud Routers and their edge routers for each VLAN attachment. You don't need to configure BGP on your on-premises router. Google and your service provider automatically set the correct configurations. Reference: https://cloud.google.com/network-connectivity/docs/interconnect/concepts/partner-overview
Incorrect
Explanation:- For Layer 3 connections, your service provider establishes a BGP session between your Cloud Routers and their edge routers for each VLAN attachment. You don't need to configure BGP on your on-premises router. Google and your service provider automatically set the correct configurations. Reference: https://cloud.google.com/network-connectivity/docs/interconnect/concepts/partner-overview
Unattempted
Explanation:- For Layer 3 connections, your service provider establishes a BGP session between your Cloud Routers and their edge routers for each VLAN attachment. You don't need to configure BGP on your on-premises router. Google and your service provider automatically set the correct configurations. Reference: https://cloud.google.com/network-connectivity/docs/interconnect/concepts/partner-overview
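For illustration, the GCP side of a Partner Interconnect attachment could be prepared like this (a sketch; names, region, and edge availability domain are placeholders, and the Cloud Router ASN must be 16550 for Partner Interconnect):
gcloud compute routers create partner-router \
  --network=my-vpc --region=us-central1 --asn=16550
gcloud compute interconnects attachments partner create my-attachment \
  --region=us-central1 --router=partner-router \
  --edge-availability-domain=availability-domain-1
The pairing key from the attachment is then given to the Layer 3 partner, who establishes the BGP sessions with the Cloud Router on your behalf.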
Question 49 of 50
49. Question
You need to enable Cloud CDN for all the objects inside a storage bucket. You want to ensure that all the objects in the storage bucket can be served by the CDN. What should you do in the GCP Console?
Your new project currently requires 5 gigabits per second (Gbps) of egress traffic from your Google Cloud environment to your company’s private data center but may scale up to 80 Gbps of traffic in the future. You do not have any public addresses to use. Your company is looking for the most cost-effective long-term solution. Which type of connection should you use?