OCI Architect Professional Total Questions: 529 – 9 Mock Exams
Practice Set 1
Oracle Cloud Infrastructure Architect Professional [1Z0-997-20]
Question 1 of 54
1. Question
A large financial services company uses two Oracle DB Systems in Oracle Cloud Infrastructure (OCI) to store user data. One runs on a VM.Standard2.8 shape and the other on a VM.Standard2.4 shape.
As the business grows, data is growing rapidly on both databases and performance is degrading. The company wants to address this problem with a viable and economical solution.
As the solution architect for the company, you have suggested that they move their databases to an Autonomous Transaction Processing Serverless (ATP-S) database.
Which two factors should you consider before arriving at that recommendation?
Correct
Not all features present in Oracle Database Enterprise Edition are available in ATP, and some Oracle Database features are restricted; for example, database features designed for administration are not available. You need to validate this first. You can find a complete list of the features that are not supported in the documentation.
Also, you must specify the initial storage required for your database, but ADB is elastic, so you can grow or shrink your database as needed.
Question 2 of 54
2. Question
Copying block volume backups to another region at regular intervals makes it easier for you to rebuild applications and data in the destination region if a region-wide disaster occurs in the source region.
Which IAM policy statement allows the VolumeAdmins group to copy volume backups between regions?
Correct
The backups feature of the Oracle Cloud Infrastructure Block Volume service lets you make a point-in-time snapshot of the data on a block volume. These backups can then be restored to new volumes, either immediately after a backup or at a later time that you choose.
You can copy block volume backups between regions using the Console, command line interface (CLI), SDKs, or REST APIs.
To copy volume backups between regions, you must have permission to read and copy volume backups in the source region, and permission to create volume backups in the destination region.
The following statement allows the VolumeAdmins group to do all things with block storage volumes, volume backups, and volume groups in all compartments, with the exception of copying volume backups across regions:
Allow group VolumeAdmins to manage volume-family in tenancy
The aggregate resource type volume-family does not include the VOLUME_BACKUP_COPY permission, so to enable copying volume backups across regions you need to ensure that you also include this statement in the policy:
Allow group VolumeAdmins to use volume-backups in tenancy where request.permission='VOLUME_BACKUP_COPY'
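The policy logic can be sketched as a toy permission check. This is a hypothetical model, not the real OCI IAM engine; the statement-to-permission mapping below is illustrative, except that VOLUME_BACKUP_COPY is genuinely excluded from the volume-family aggregate.

```python
# Toy model of why "manage volume-family" alone does not allow cross-region
# backup copies: the aggregate's permission set omits VOLUME_BACKUP_COPY,
# which must be granted by a separate statement.

# Permissions implied by each statement in this simplified model.
STATEMENT_PERMISSIONS = {
    "Allow group VolumeAdmins to manage volume-family in tenancy": {
        "VOLUME_CREATE", "VOLUME_DELETE",
        "VOLUME_BACKUP_CREATE", "VOLUME_BACKUP_DELETE",
        # ...but NOT VOLUME_BACKUP_COPY
    },
    "Allow group VolumeAdmins to use volume-backups in tenancy "
    "where request.permission='VOLUME_BACKUP_COPY'": {
        "VOLUME_BACKUP_COPY",
    },
}

def is_allowed(policy, permission):
    """Return True if any statement in the policy grants the permission."""
    return any(permission in STATEMENT_PERMISSIONS[s] for s in policy)

manage_only = ["Allow group VolumeAdmins to manage volume-family in tenancy"]
full_policy = list(STATEMENT_PERMISSIONS)

print(is_allowed(manage_only, "VOLUME_BACKUP_COPY"))  # False
print(is_allowed(full_policy, "VOLUME_BACKUP_COPY"))  # True
```

With only the first statement, the copy is denied; adding the second statement grants exactly the missing permission.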
Question 3 of 54
3. Question
A civil engineering company is running an online portal in which engineers can upload their construction photos, videos, and other digital files.
There is a new requirement for you to implement: the online portal must offload the digital content to an Object Storage bucket for a period of 72 hours. After the provided time limit has elapsed, the portal will hold all the digital content locally and wait for the next offload period.
Which option fulfills this requirement?
Correct
Pre-authenticated requests provide a way to let users access a bucket or an object without having their own credentials, as long as the request creator has permission to access those objects.
For example, you can create a request that lets an operations support user upload backups to a bucket without owning API keys. Or, you can create a request that lets a business partner update shared data in a bucket without owning API keys.
When creating a pre-authenticated request, you have the following options:
You can specify the name of a bucket that a pre-authenticated request user has write access to and can upload one or more objects to.
You can specify the name of an object that a pre-authenticated request user can read from, write to, or read from and write to.
Scope and Constraints
Understand the following scope and constraints regarding pre-authenticated requests:
Users can’t list bucket contents.
You can create an unlimited number of pre-authenticated requests.
There is no time limit to the expiration date that you can set.
You can't edit a pre-authenticated request. If you want to change user access options in response to changing requirements, you must create a new pre-authenticated request.
The target and actions for a pre-authenticated request are based on the creator’s permissions. The request is not, however, bound to the creator’s account login credentials. If the creator’s login credentials change, a pre-authenticated request is not affected.
You cannot delete a bucket that has a pre-authenticated request associated with that bucket or with an object in that bucket.
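Since there is no limit on a pre-authenticated request's expiration date, the 72-hour offload window can be implemented by computing a time-expires value 72 hours out when creating the request. A minimal sketch; the CLI command and flag named in the comment are assumptions to illustrate, so check the Object Storage documentation before relying on them:

```python
# Compute a "time-expires" value 72 hours from now, as you might pass when
# creating a pre-authenticated request (for example with the OCI CLI's
# "oci os preauth-request create --time-expires ..."; flag name assumed).
from datetime import datetime, timedelta, timezone

def par_expiry(hours=72, now=None):
    """Return an RFC 3339 timestamp 'hours' from 'now' (UTC)."""
    now = now or datetime.now(timezone.utc)
    return (now + timedelta(hours=hours)).strftime("%Y-%m-%dT%H:%M:%SZ")

start = datetime(2024, 1, 1, tzinfo=timezone.utc)  # illustrative fixed start
print(par_expiry(now=start))  # 2024-01-04T00:00:00Z
```

After the request expires, uploads through it fail and the portal holds content locally until a new request is created for the next offload period.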
Question 4 of 54
4. Question
A retail company has recently adopted a hybrid architecture. They have the following requirements for their end-to-end connectivity model between their on-premises data center and an Oracle Cloud Infrastructure (OCI) region:
* Highly available connection with service level redundancy
* Dedicated network bandwidth with low latency
Which connectivity setup is the most cost effective solution for this scenario?
Correct
There are two main requirements for this customer.
First, a highly available connection with service-level redundancy, which can be achieved by any of the following:
1- VPN Connect with a redundant customer edge device
2- FastConnect plus a single VPN Connect connection
3- Redundant FastConnect
Second, dedicated network bandwidth with low latency, which can be achieved by selecting FastConnect as the primary path.
The option that uses a single edge device in your on-premises data center for each connection means you have a single point of failure, and it is also not the most cost-effective solution.
Question 5 of 54
5. Question
A retail bank is currently hosting their mission-critical customer application on-premises. The application has a standard 3-tier architecture: 4 application servers process the incoming traffic and store application data in an Oracle Exadata Database Server. The bank has recently had service disruptions to other internal applications, so they are looking to avoid this issue for their mission-critical customer application.
Which Oracle Cloud Infrastructure services should you recommend as part of the DR solution?
Correct
OCI Traffic Management Steering Policies can account for health of answers to provide failover capabilities, provide the ability to load balance traffic across multiple resources, and account for the location where the query was initiated to provide a simple, flexible and powerful mechanism to efficiently steer DNS traffic.
A public load balancer accepts traffic from the internet using a public IP address that serves as the entry point for incoming traffic. The Load Balancing service creates a primary load balancer and a standby load balancer, each in a different availability domain.
Question 6 of 54
6. Question
A large financial company has a web application hosted in their on-premises data center. They are migrating their application to Oracle Cloud Infrastructure (OCI) and require no downtime while the migration is ongoing. To achieve this, they have decided to divert only 30% of the traffic to the new application running in OCI and keep the remaining 70% of traffic on their on-premises infrastructure. Once the migration is completed and the application works fine, they will divert all traffic to OCI.
As a solution architect working with this customer, which suggestion should you provide them?
Correct
Traffic Management Steering Policies can account for the health of answers to provide failover capabilities, provide the ability to load balance traffic across multiple resources, and account for the location where the query was initiated to provide a simple, flexible, and powerful mechanism to efficiently steer DNS traffic.
1- OCI Traffic Management with failover
Failover policies allow you to prioritize the order in which you want answers served in a policy (for example, Primary and Secondary). Oracle Cloud Infrastructure Health Checks are leveraged to determine the health of answers in the policy. If the primary answer is determined to be unhealthy, DNS traffic will automatically be steered to the secondary answer.
So this option is not correct, because the customer wants to divert only 30% of the traffic to OCI while keeping 70% on-premises, not fail over all traffic at once.
2- OCI Traffic Management with load balancing
Load balancer policies allow distribution of traffic across multiple endpoints. Endpoints can be assigned equal weights to distribute traffic evenly, or custom weights may be assigned for ratio load balancing. Oracle Cloud Infrastructure Health Checks are leveraged to determine the health of each endpoint. DNS traffic will be automatically distributed to the other endpoints if an endpoint is determined to be unhealthy.
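The 30/70 ratio steering can be simulated locally with weighted random choice, which is essentially what a Traffic Management load balancer policy with custom weights does at the DNS layer. The endpoint names and weights here are illustrative:

```python
# Minimal simulation of ratio load balancing with custom weights:
# 30% of lookups steered to OCI, 70% kept on-premises.
import random

def pick_endpoint(rng, endpoints):
    """Choose one endpoint according to its weight."""
    names = list(endpoints)
    weights = [endpoints[n] for n in names]
    return rng.choices(names, weights=weights, k=1)[0]

endpoints = {"oci-app": 30, "onprem-app": 70}  # hypothetical endpoint names
rng = random.Random(42)  # seeded for reproducibility
counts = {name: 0 for name in endpoints}
for _ in range(10_000):
    counts[pick_endpoint(rng, endpoints)] += 1

print(counts)  # roughly a 3,000 / 7,000 split
```

Shifting all traffic to OCI at cutover is then just a weight change (100/0) rather than a new architecture.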
Question 7 of 54
7. Question
As a part of migration exercise for an existing on premises application to Oracle Cloud Infrastructure (OCI), you are required to transfer a 7 TB file to OCI Object Storage. You have decided to upload it using the multipart upload functionality of Object Storage.
Which two statements are true?
Correct
You can check on an active multipart upload by listing all parts that have been uploaded. (You cannot list information for an individual object part in an active multipart upload.)
The Oracle Cloud Infrastructure Object Storage service supports multipart uploads for more efficient and resilient uploads, especially for large objects. You can perform multipart uploads using the API and CLI
Before you use the multipart upload API, you are responsible for creating the parts to upload. Object Storage provides API operations for the remaining steps.
Note:
When you perform a multipart upload using the CLI, you do not need to split the object into parts as you are required to do by the API. Instead, you specify the part size of your choice, and Object Storage splits the object into parts and performs the upload of all parts automatically.
After you finish creating object parts, initiate a multipart upload by making a CreateMultipartUpload REST API call. Provide the object name and any object metadata. Object Storage responds with a unique upload ID that you must include in any requests related to this multipart upload. Object Storage also marks the upload as active. The upload remains active until you explicitly commit it or abort it.
You do not need to assign contiguous part numbers; Object Storage constructs the object by ordering part numbers in ascending order.
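The part-numbering behavior can be illustrated with a small local sketch. This only mimics the assembly rule; the real service splits, uploads, and commits parts server-side through the multipart API:

```python
# Sketch of how a multipart upload is assembled: parts are ordered by
# ascending part number, and the numbers need not be contiguous.

def split_into_parts(data: bytes, part_size: int):
    """Split 'data' into (part_number, chunk) pairs, numbering by tens to
    show that part numbers do not have to be contiguous."""
    return [
        (10 * (i + 1), data[off:off + part_size])
        for i, off in enumerate(range(0, len(data), part_size))
    ]

def commit(parts):
    """Reassemble by sorting on part number, as Object Storage does."""
    return b"".join(chunk for _, chunk in sorted(parts))

data = b"abcdefghij"
parts = split_into_parts(data, part_size=4)  # part numbers 10, 20, 30
random_order = [parts[2], parts[0], parts[1]]  # upload order doesn't matter
print(commit(random_order) == data)  # True
```

Because assembly sorts on part number, parts can be uploaded in parallel and in any order, which is what makes multipart uploads resilient for a 7 TB object.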
Question 8 of 54
8. Question
All three Data Guard configurations are fully supported on Oracle Cloud Infrastructure (OCI). You want to deploy a maximum availability architecture (MAA) for your database workload.
Which option should you consider while designing your Data Guard configuration to ensure the best RTO and RPO without causing any data loss?
Correct
All three Data Guard configurations are fully supported on Oracle Cloud Infrastructure. However, because of a high risk of production outage, we don’t recommend using the maximum protection mode for your Data Guard configuration.
We recommend using the maximum availability mode in SYNC mode between two availability domains (same region), and using the maximum availability mode in ASYNC mode between two regions. This architecture provides you the best RTO and RPO without causing any data loss. We recommend building this architecture in daisy-chain mode: the primary database ships redo logs to the first standby database in another availability domain in SYNC mode, and then the first standby database ships the redo logs to another region in ASYNC mode. This method ensures that your primary database is not doing the double work of shipping redo logs, which can cause performance impact on a production workload.
This configuration offers the following benefits:
No data loss within a region.
No overhead on the production database to maintain standbys in another region.
Option to configure lagging on the DR site if needed for business reasons.
Option to configure multiple standbys in different regions without any additional overhead on the production database. A typical use case is a CDN application
Question 9 of 54
9. Question
A large London based eCommerce company is running Oracle DB System Virtual RAC database on Oracle Cloud Infrastructure (OCI) for their eCommerce application activity. They are launching a new product soon, which is expected to sell in large quantities all over the world.
The application architecture should have minimal cost, no data loss, no performance impacts during the database backup windows and should have minimal downtime.
Correct
Active Data Guard or GoldenGate is used for disaster recovery when fast recovery times or additional levels of data protection are required, and to offload queries and backups to the standby system.
Using Oracle GoldenGate to support a disaster recovery site requires a working bi-directional data flow, from the primary system to the live standby system and vice versa.
Data Guard and Automatic Backup
You can enable the Automatic Backup feature on a database with the standby role in a Data Guard association. However, automatic backups for that database will not be created until it assumes the primary role.
Question 10 of 54
10. Question
The Finance department of your company has reached out to you. They have customer-sensitive data on compute instances in Oracle Cloud Infrastructure (OCI) which they want to store in OCI Object Storage for long-term retention and archival.
To meet security requirements, they want to ensure this data is NOT transferred over the public internet, even if encrypted.
Which option meets this requirement?
Correct
A service gateway is a virtual router that you can add to your VCN. It provides a path for private network traffic between your VCN and supported services in the Oracle Services Network (such as Object Storage),
so compute instances in a private subnet in your VCN can back up data to Object Storage without needing public IP addresses or access to the internet.
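The routing this relies on can be sketched as follows; the rule shape, region label, and target name are illustrative, not the actual OCI API schema:

```python
# Minimal sketch of the route rule that sends Object Storage traffic through
# a service gateway instead of the internet. In OCI the target would be the
# service gateway's OCID and the destination a service CIDR label such as
# "All <region> Services in Oracle Services Network" (region assumed here).
private_subnet_route_table = [
    {
        "destination": "All PHX Services in Oracle Services Network",
        "destination_type": "SERVICE_CIDR_BLOCK",
        "target": "service_gateway",
    },
]

# No rule targets an internet gateway, so backups to Object Storage
# never leave the Oracle network.
assert all(rule["target"] != "internet_gateway" for rule in private_subnet_route_table)
assert private_subnet_route_table[0]["destination_type"] == "SERVICE_CIDR_BLOCK"
```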
Question 11 of 54
11. Question
A company has an urgent requirement to migrate 300 TB of data to Oracle Cloud Infrastructure (OCI) in two weeks. Their data center has recently been struck by a massive hurricane and the building has been badly damaged, although it is still operational. They have a 100 Mbps internet line, but the connection is intermittent due to the damage caused to the electrical grid.
In this scenario, what is the most effective service to use to migrate the data to OCI given the time constraints?
Correct
Because the network speed is not good enough and the connection is intermittent due to the damage caused to the electrical grid, an online transfer is not viable. Oracle offers offline data transfer solutions that let you migrate data to Oracle Cloud Infrastructure.
You have two options for offline data transfer:
DISK-BASED DATA TRANSFER
You send your data as files on encrypted commodity disks to an Oracle transfer site. Operators at the Oracle transfer site upload the files into your designated Object Storage bucket in your tenancy.
APPLIANCE-BASED DATA TRANSFER
You send your data as files on secure, high-capacity, Oracle-supplied storage appliances to an Oracle transfer site. Operators at the Oracle transfer site upload the data into your designated Object Storage bucket in your tenancy.
Each appliance's storage capacity is 150 TB of protected usable space, so two appliances can accommodate the 300 TB of data.
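The arithmetic behind ruling out an online transfer is worth making explicit; a quick back-of-the-envelope check:

```python
# How long would 300 TB take over a 100 Mbps link at full line rate?
data_bits = 300 * 10**12 * 8   # 300 TB expressed in bits
line_rate = 100 * 10**6        # 100 Mbps in bits per second

seconds = data_bits / line_rate
days = seconds / 86_400

# Even at full, uninterrupted line rate this is roughly 278 days --
# far beyond the two-week window, before accounting for the intermittent
# connection. Hence the appliance-based offline transfer.
assert 270 < days < 285
```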
Question 12 of 54
12. Question
After performing maintenance on an Oracle Linux compute instance, the system is returned to a running state. You attempt to connect using SSH but are unable to do so. You decide to create an instance console connection to troubleshoot the issue.
Which three tasks would enable you to connect to the console connection and begin troubleshooting?
Correct
The Oracle Cloud Infrastructure Compute service provides console connections that enable you to remotely troubleshoot malfunctioning instances, such as:
An imported or customized image that does not complete a successful boot.
A previously working instance that stops responding.
The steps to connect to the console and troubleshoot the OS issue are:
1- Before you can connect to the serial console, you need to create the instance console connection.
Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances.
Click the instance that you’re interested in.
Under Resources, click Console Connections.
Click Create Console Connection.
Upload the public key (.pub) portion for the SSH key. You can browse to a public key file on your computer or paste your public key into the text box.
Click Create Console Connection.
When the console connection has been created and is available, the status changes to ACTIVE.
2- Connecting to the Serial Console
You can connect to the serial console by using a Secure Shell (SSH) connection to the service endpoint of the console connection service.
Open the navigation menu. Under Core Infrastructure, go to Compute and click Instances.
Click the instance that you’re interested in.
Under Resources, click Console Connections.
Click the Actions icon (three dots), and then click Copy Serial Console Connection for Linux/Mac.
Paste the connection string copied from the previous step to a terminal window on a Mac OS X or Linux system, and then press Enter to connect to the console.
If you are not using the default SSH key or ssh-agent, you can modify the serial console connection string to include the identity file flag, -i, to specify the SSH key to use. You must specify this for both the SSH connection and the SSH ProxyCommand, as shown in the following line:
ssh -i /<path>/<ssh_key> -o ProxyCommand='ssh -i /<path>/<ssh_key> -W %h:%p -p 443…
Press Enter again to activate the console.
3- Troubleshooting Instances from Instance Console Connections
To boot into maintenance mode
Reboot the instance from the Console.
When the reboot process starts, switch back to the terminal window, and you see Console messages start to appear in the window. As soon as you see the GRUB boot menu appear, use the up/down arrow key to stop the automatic boot process, enabling you to use the boot menu.
In the boot menu, highlight the top item in the menu, and type e to edit the boot entry.
In edit mode, use the down arrow key to scroll down through the entries until you reach the line that starts with either linuxefi for instances running Oracle Autonomous Linux 7.x or Oracle Linux 7.x, or kernel for instances running Oracle Linux 6.x.
At the end of that line, add the following:
init=/bin/bash
Reboot the instance from the terminal window by entering the keyboard shortcut CTRL+X.
Question 13 of 54
13. Question
You have provisioned a new VM.DenseIO2.24 compute instance with local NVMe drives. The compute instance is running a production application. This is a write-heavy application, with a significant impact to the business if the application goes down.
What should you do to help maintain write performance and protect against NVMe device failure?
Correct
VM.DenseIO2.24 compute instances include locally attached NVMe devices. These devices provide extremely low-latency, high-performance block storage that is ideal for big data, OLTP, and any other workload that can benefit from high-performance block storage.
A protected RAID array is the most recommended way to protect against an NVMe device failure. There are three RAID levels that can be used for the majority of workloads:
RAID 1: An exact copy (or mirror) of a set of data on two or more disks; a classic RAID 1 mirrored pair contains two disks
RAID 10: Stripes data across multiple mirrored pairs. As long as one disk in each mirrored pair is functional, data can be retrieved
RAID 6: Block-level striping with two parity blocks distributed across all member disks
If you need the best possible performance and can sacrifice some of your available space, then a RAID 10 array is an option.
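The capacity trade-off between these RAID levels can be sketched as follows, assuming n identical disks (the disk counts are illustrative):

```python
# Usable-capacity sketch for the RAID levels above, in units of whole
# disks, assuming n identical disks of equal size.
def usable_disks(level: str, n: int) -> int:
    if level == "RAID1":   # mirrored pair(s): half the disks hold copies
        return n // 2
    if level == "RAID10":  # striped mirrors: half the disks hold copies
        return n // 2
    if level == "RAID6":   # two disks' worth of parity across the set
        return n - 2
    raise ValueError(level)

# With 8 NVMe devices: RAID 10 trades capacity for write performance,
# while RAID 6 keeps more space but pays a parity write penalty.
assert usable_disks("RAID10", 8) == 4
assert usable_disks("RAID6", 8) == 6
assert usable_disks("RAID1", 2) == 1
```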
Question 14 of 54
14. Question
A hospital in Austin has hosted its web-based medical records portal entirely in Oracle Cloud Infrastructure (OCI), using compute instances for its web tier and a DB System database for its data tier. While validating compliance with the Health Insurance Portability and Accountability Act (HIPAA), security professionals checking the systems found a large number of unauthorized requests coming from a set of IP addresses originating from a country in Southeast Asia.
Which option can mitigate this type of attack?
Correct
WAF can protect any internet-facing endpoint, providing consistent rule enforcement across a customer's applications.
WAF provides you with the ability to create and manage rules for internet threats including Cross-Site Scripting (XSS), SQL Injection, and other OWASP-defined vulnerabilities. Unwanted bots can be mitigated while tactically allowing desirable bots to enter. Access rules can limit based on geography or the signature of the request.
As a WAF administrator you can define explicit actions for requests that meet various conditions. Conditions use various operations and regular expressions. A rule action can be set to log and allow, detect, or block requests.
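The rule/action model can be sketched as a minimal evaluator; the rule shape, rule name, and country code below are hypothetical, not the OCI WAF API schema:

```python
# Toy evaluator for a geography-based WAF access rule. A request whose
# country of origin matches the rule's block list gets the rule's action;
# everything else is allowed through.
def evaluate(rule: dict, request: dict) -> str:
    if request["country"] in rule["blocked_countries"]:
        return rule["action"]
    return "ALLOW"

geo_block_rule = {
    "name": "block-unauthorized-region",  # hypothetical rule name
    "blocked_countries": {"XX"},          # placeholder country code
    "action": "BLOCK",
}

assert evaluate(geo_block_rule, {"country": "XX"}) == "BLOCK"
assert evaluate(geo_block_rule, {"country": "US"}) == "ALLOW"
```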
Question 15 of 54
15. Question
You are working as a solution architect with a global automotive provider who is looking to create a multi-cloud solution. They want to run their application tier in Microsoft Azure while utilizing the Oracle DB Systems in the Oracle Cloud Infrastructure (OCI).
What is the most fault tolerant and secure solution for this customer?
Correct
Oracle and Microsoft have created a cross-cloud connection between Oracle Cloud Infrastructure and Microsoft Azure in certain regions. This connection lets you set up cross-cloud workloads without the traffic between the clouds going over the internet.
You can connect your VNet and VCN so that traffic that uses private IP addresses goes over the cross-cloud connection.
For example, consider a VNet connected to a VCN: resources in the VNet run a .NET application that accesses an Oracle database running on Database service resources in the VCN. The traffic between the application and the database uses a logical circuit that runs on the cross-cloud connection between Azure and Oracle Cloud Infrastructure.
The two virtual networks must belong to the same company and not have overlapping CIDRs. The connection requires you to create an Azure ExpressRoute circuit and an Oracle Cloud Infrastructure FastConnect virtual circuit.
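The non-overlapping-CIDR requirement can be checked up front with Python's standard ipaddress module; the address plans below are illustrative:

```python
# Verify a proposed VNet/VCN pairing before provisioning the
# ExpressRoute/FastConnect circuits: the two CIDRs must not overlap.
import ipaddress

def cidrs_overlap(a: str, b: str) -> bool:
    return ipaddress.ip_network(a).overlaps(ipaddress.ip_network(b))

# Example address plans (illustrative):
assert cidrs_overlap("10.0.0.0/16", "10.0.128.0/20")       # clashes: invalid pairing
assert not cidrs_overlap("10.0.0.0/16", "192.168.0.0/24")  # disjoint: valid pairing
```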
Question 16 of 54
16. Question
Your customer has gone through a recent department restructure. As part of this change, they are reorganizing their Oracle Cloud Infrastructure (OCI) compartment structure to align with the company's new organizational structure.
They have made the following change:
* Compartment X is moved, and its parent compartment is now compartment C.
* Policy defined in compartment A: Allow group networkadmins to manage subnets in compartment X
* Policy defined in root compartment: Allow group admins to read subnets in compartment Finance:A:X
After the compartment move, which action will provide users of group networkadmins and admins with similar privileges as before the move?
Correct
You can move a compartment to a different parent compartment within the same tenancy. When you move a compartment, all its contents (subcompartments and resources) are moved with it.
After you move a compartment to a new parent compartment, the access policies of the new parent take effect and the policies of the previous parent no longer apply. Before you move a compartment, ensure that:
– You are aware of the policies that govern access to the compartment in its current position.
– You are aware of the policies in the new parent compartment that will take effect when you move the compartment.
1- Policy defined in the root compartment: Allow group admins to read subnets in compartment Finance:A:X
You move compartment X from Finance:A to HR:C. The policy that governs compartment X is attached to the shared parent, the root compartment. When compartment X is moved, the policy statement is automatically updated by the IAM service to specify the new compartment location.
The policy
Allow group admins to read subnets in compartment Finance:A:X
is updated to
Allow group admins to read subnets in compartment HR:C:X
So the admins group will have the same access after compartment X is moved.
2- Policy defined in compartment A: Allow group networkadmins to manage subnets in compartment X
You move compartment X from Finance:A to HR:C. However, the policy that governs compartment X here is attached directly to compartment A. When the compartment is moved, the policy is not automatically updated. The policy that specifies compartment X is no longer valid and must be manually removed. Group networkadmins no longer has access to compartment X in its new location under HR:C. Unless another existing policy grants access to group networkadmins, you must create a new policy to allow networkadmins to continue to manage subnets in compartment X.
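The automatic rewrite that the IAM service performs on the root-attached policy amounts to a substitution on the compartment path, which can be sketched as:

```python
# Sketch of the automatic policy rewrite for a policy attached to a
# shared ancestor (here, the root compartment). IAM performs this only
# when the policy lives on a common ancestor of the old and new parents.
policy = "Allow group admins to read subnets in compartment Finance:A:X"

old_path, new_path = "Finance:A:X", "HR:C:X"
updated = policy.replace(old_path, new_path)

assert updated == "Allow group admins to read subnets in compartment HR:C:X"

# The networkadmins policy is attached directly to compartment A, so no
# such rewrite happens; it must be removed and re-created manually.
```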
Question 17 of 54
17. Question
You are designing the network infrastructure for two application servers, appserver-1 and appserver-2, running in two different subnets inside the same Virtual Cloud Network (VCN) in Oracle Cloud Infrastructure (OCI). You have a requirement where your end users will access appserver-1 from the internet and appserver-2 from the on-premises network. The on-premises network is connected to your VCN over a FastConnect virtual circuit.
How should you design your routing configuration to meet these requirements?
Correct
An internet gateway is an optional virtual router you can add to your VCN to enable direct connectivity to the internet. Resources that need to use the gateway for internet access must be in a public subnet and have public IP addresses. Each public subnet that needs to use the internet gateway must have a route table rule that specifies the gateway as the target. For traffic to flow between a subnet and an internet gateway, you must create a route rule accordingly in the subnet’s route table (for example, destination CIDR = 0.0.0.0/0 and target = internet gateway).
A Dynamic Routing Gateway (DRG) is a virtual edge router attached to your VCN and is necessary for private peering. The DRG is a single point of entry for private traffic coming into your VCN. After creating the DRG, you must attach it to your VCN and add a route for the DRG in the VCN's route table to enable traffic flow.
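The resulting route tables can be sketched as follows; the targets are symbolic names standing in for gateway OCIDs, and the on-premises CIDR is assumed:

```python
# Two route tables, one per subnet, satisfying the requirement.
public_subnet_routes = [   # subnet hosting appserver-1 (internet users)
    {"destination": "0.0.0.0/0", "target": "internet_gateway"},
]
private_subnet_routes = [  # subnet hosting appserver-2 (on-premises users)
    {"destination": "172.16.0.0/12", "target": "drg"},  # assumed on-prem CIDR
]

def route_for(routes, destination):
    """Return the target of the rule matching the given destination CIDR."""
    return next(r["target"] for r in routes if r["destination"] == destination)

assert route_for(public_subnet_routes, "0.0.0.0/0") == "internet_gateway"
assert route_for(private_subnet_routes, "172.16.0.0/12") == "drg"
```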
Question 18 of 54
18. Question
You are working as a solution architect for an online retail store to create a portal to allow the users to pay for their groceries using credit cards. Since the application is not fully compliant with the Payment Card Industry Data Security Standard (PCI DSS), your company is looking to use a third party payment service to process credit card payments.
The third party service allows a maximum of 5 public IP addresses at a time. However, your website uses an Oracle Cloud Infrastructure (OCI) Instance Pool Auto Scaling policy to create up to 15 instances during peak traffic demand, which are launched in private subnets in the VCN and attached to an OCI public load balancer. Upon user payment, the portal connects to the payment service over the internet to complete the transaction.
What solution can you implement to make sure that all compute Instances can connect to the third party system to process the payments at peak traffic demand?
Correct
You can use the OCI public Load Balancer for this solution: the third party service sees only the load balancer's public IP addresses, which keeps you within the service's limit of 5 public IP addresses at a time, even though the Instance Pool Auto Scaling policy creates up to 15 instances during peak traffic demand.
Question 19 of 54
19. Question
A digital marketing company is planning to host a website on Oracle Cloud Infrastructure (OCI) and leverage OCI Container Engine for Kubernetes (OKE). The web server will make API calls to access OCI Object Storage to store all images uploaded by users.
For security purposes, your manager instructed you to ensure that the credentials used by the web server to access Object Storage are not stored locally on the compute instance.
What solution results in an implementation with the least effort for this scenario?
Correct
INSTANCE PRINCIPALS
The IAM service feature that enables instances to be authorized actors (or principals) that perform actions on service resources. Each compute instance has its own identity, and it authenticates using the certificates that are added to it. These certificates are automatically created, assigned to instances, and rotated, removing the need for you to distribute credentials to your hosts and rotate them.
Dynamic groups A special type of group that contains resources (such as compute instances) that match rules that you define (thus the membership can change dynamically as matching resources are created or deleted). These instances act as “principal” actors and can make API calls to services according to policies that you write for the dynamic group.
The following steps summarize the process flow for setting up and using instances as principals. The subsequent sections provide more details.
1. Create a dynamic group. In the dynamic group definition, you provide the matching rules to specify which instances you want to allow to make API calls against services.
2. Create a policy granting permissions to the dynamic group to access services in your tenancy (or compartment).
3. A developer in your organization configures the application built using the Oracle Cloud Infrastructure SDK to authenticate using the instance principals provider. The developer deploys the application and the SDK to all the instances that belong to the dynamic group.
4. The deployed SDK makes calls to Oracle Cloud Infrastructure APIs as allowed by the policy (without needing to configure API credentials).
5. For each API call made by an instance, the Audit service logs the event, recording the OCID of the instance as the value of principalId in the event log.
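Step 1's matching rules can be illustrated with a toy evaluator. This is a deliberately simplified sketch: real OCI dynamic-group rules also support `ANY {...}` / `ALL {...}` combinators and variables like `instance.id` and tags, while this version handles only a single equality check, and the OCIDs are made up:

```python
# Toy evaluator for a simplified dynamic-group matching rule of the form
# "instance.compartment.id = '<ocid>'". Real OCI rule syntax is richer
# (ANY/ALL combinators, tag-based rules); this handles equality only.
def matches_rule(rule, instance):
    key, _, value = rule.partition("=")
    key, value = key.strip(), value.strip().strip("'")
    return instance.get(key) == value

# Hypothetical rule: admit every instance in the web-tier compartment.
rule = "instance.compartment.id = 'ocid1.compartment.oc1..example'"
web_server = {"instance.compartment.id": "ocid1.compartment.oc1..example"}
other_vm = {"instance.compartment.id": "ocid1.compartment.oc1..other"}
```

An instance that matches the rule is dynamically a member of the group, so the policy written in step 2 applies to it with no credentials deployed to the host.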
Question 20 of 54
20. Question
An online stock trading application is deployed to multiple Availability Domains in the us-phoenix-1 region. Considering the high volume of transactions that the trading application handles, the company has hired you to ensure that the data stored by the application is available and disaster resilient. In the event of failure, the Recovery Time Objective (RTO) must be less than 2 hours to meet regulatory requirements.
Which Disaster Recovery strategy should be used to achieve the RTO requirement in the event of system failure?
Correct
You can use the CLI, REST APIs, or the SDKs to automate, script, and manage volume backups and their lifecycle.
Planning Your Backup
The primary use of backups is to support business continuity, disaster recovery, and long-term archiving requirements. When determining a backup schedule, your backup plan and goals should consider the following:
Frequency: How often you want to back up your data.
Recovery time: How long you can wait for a backup to be restored and accessible to the applications that use it. The time for a backup to complete depends on several factors, but it will generally take a few minutes or longer, depending on the size of the data being backed up and the amount of data that has changed since your last backup.
Number of stored backups: How many backups you need to keep available, and the deletion schedule for those you no longer need. You can create only one backup at a time, so if a backup is underway, it must complete before you can create another one. For details about the number of backups you can store, see the Block Volume service limits.
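The RTO requirement in the question can be framed as a simple feasibility check. The sketch below is hypothetical: the strategy names and the recovery-time estimates are illustrative assumptions chosen for the example, not Oracle-published figures:

```python
from datetime import timedelta

# Illustrative recovery-time estimates for two block-volume DR approaches
# (assumed numbers for the sketch, not Oracle figures).
strategies = {
    "restore-backup-in-region": timedelta(hours=1),
    "copy-backup-cross-region-then-restore": timedelta(hours=3),
}

def meets_rto(strategy, rto):
    """True if the strategy's estimated recovery time fits within the RTO."""
    return strategies[strategy] <= rto

rto = timedelta(hours=2)  # the regulatory requirement from the question
```

Under these assumed numbers, only the in-region restore satisfies the 2-hour RTO; the same comparison drives the choice of DR strategy in the question.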
Question 21 of 54
21. Question
An online registration system is currently hosted on one large Oracle Cloud Infrastructure (OCI) bare metal compute instance with attached block volumes to store all of the users' data. The registration system accepts information from the user, including documents and photos, then performs automated verification and processing to check if the user is eligible for registration.
The registration system becomes unavailable at times when there is a surge of users using the system. The existing architecture needs improvement, as it takes a long time for the system to complete the processing and the attached block volumes are not large enough to store the ever-growing data being uploaded by the users.
Which is the most effective option to achieve a highly scalable solution?
Correct
The Oracle Cloud Infrastructure Streaming service provides a fully managed, scalable, and durable storage solution for ingesting continuous, high-volume streams of data that you can consume and process in real time. Streaming can be used for messaging, ingesting high-volume data such as application logs, operational telemetry, web click-stream data, or other use cases in which data is produced and processed continually and sequentially in a publish-subscribe messaging model.
Streaming Usage Scenarios
Here are some of the many possible uses for Streaming:
Metric and log ingestion: Use the Streaming service as an alternative for traditional file-scraping approaches to help make critical operational data more quickly available for indexing, analysis, and visualization.
Messaging: Use Streaming to decouple components of large systems. Streaming provides a pull/buffer-based communication model with sufficient capacity to flatten load spikes and the ability to feed multiple consumers with the same data independently. Key-scoped ordering and guaranteed durability provide reliable primitives to implement various messaging patterns, while high throughput potential allows for such a system to scale well.
Web/Mobile activity data ingestion: Use Streaming for capturing activity from websites or mobile apps (such as page views, searches, or other actions users may take). This information can be used for real-time monitoring and analytics, as well as in data warehousing systems for offline processing and reporting.
Infrastructure and apps event processing: Use Streaming as a unified entry point for cloud components to report their life cycle events for audit, accounting, and related activities.
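The "pull/buffer-based communication model" that flattens load spikes can be sketched with a plain in-memory queue standing in for a stream partition. This is a toy model only (the real service is consumed through the OCI Streaming SDK; the class and message names here are invented for the illustration):

```python
from collections import deque

# Minimal stand-in for a stream partition: producers append, consumers
# pull at their own pace, so an upload surge is buffered instead of
# overwhelming the processing tier.
class Stream:
    def __init__(self):
        self._buffer = deque()

    def put(self, message):
        self._buffer.append(message)

    def pull(self, limit):
        """Consume up to `limit` messages in arrival order."""
        n = min(limit, len(self._buffer))
        return [self._buffer.popleft() for _ in range(n)]

stream = Stream()
for i in range(15):           # surge: 15 user uploads arrive at once
    stream.put(f"document-{i}")
first_batch = stream.pull(5)  # consumer processes 5 at a time
```

Decoupling the upload path from the verification workers this way is exactly why the registration system stops falling over during surges: producers never block on slow processing, and consumers drain the backlog in order.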
Question 22 of 54
22. Question
An automobile company wants to deploy their CRM application for Oracle Database on Oracle Cloud Infrastructure (OCI) DB Systems for one of its major clients. In compliance with the client's Business Continuity Program, they need to provide a Recovery Point Objective (RPO) of 24 hours and a Recovery Time Objective (RTO) of 1 hour.
The CRM application should be available even if one entire region is down.
Which approach is the most suitable and cost effective configuration for this scenario?
Correct
You can configure an Autonomous Database instance as a target database for Oracle GoldenGate, but you can't set up Oracle Autonomous Database as a source database for Oracle GoldenGate.
Recovery Point Objective (RPO) of 24 hours and Recovery Time Objective (RTO) of 1 hour:
– Provisioning a new VM and restoring the production database from the backup on Object Storage will exceed the 1-hour RTO.
– You can create the standby DB system in a different availability domain from the primary DB system for availability and disaster recovery purposes. With Data Guard and switchover/failover, you can meet the 1-hour RTO.
– A RAC database is not required in this solution; a standalone DB system is the most suitable and cost-effective option.
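The reasoning above is "cheapest option that still meets both objectives," which can be made explicit. The option names, RPO/RTO estimates, and relative cost units below are illustrative assumptions for the sketch, not Oracle figures:

```python
from datetime import timedelta as td

# Hypothetical comparison of the DR options discussed above
# (estimates and relative costs are assumed for the illustration).
options = [
    {"name": "restore-backup-to-new-vm",        "rto": td(hours=4),    "rpo": td(hours=24), "cost": 1},
    {"name": "standalone-db-with-data-guard",   "rto": td(minutes=30), "rpo": td(hours=1),  "cost": 2},
    {"name": "rac-db-with-rac-data-guard",      "rto": td(minutes=15), "rpo": td(hours=1),  "cost": 4},
]

def cheapest_meeting(options, rto, rpo):
    """Pick the lowest-cost option whose RTO and RPO fit the requirements."""
    ok = [o for o in options if o["rto"] <= rto and o["rpo"] <= rpo]
    return min(ok, key=lambda o: o["cost"])["name"] if ok else None
```

With a 1-hour RTO and 24-hour RPO, restore-from-backup is disqualified on RTO and RAC is disqualified on cost, leaving the standalone DB system with Data Guard, which matches the explanation.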
Question 23 of 54
23. Question
You have deployed a web application targeting a global audience across multiple Oracle Cloud Infrastructure (OCI) regions. You decide to use a Traffic Management Geolocation steering policy to serve web requests to users from the region closest to the user. Within each region you have deployed a public load balancer with 4 servers in a backend set. During a DR test you disable all web servers in one of the regions; however, Traffic Management does not automatically direct all users to the other region.
Which two are possible causes?
Correct
Managing Traffic Management GEOLOCATION Steering Policies
Geolocation steering policies distribute DNS traffic to different endpoints based on the location of the end user. Customers can define geographic regions composed of originating continent, countries or states/provinces (North America) and define a separate endpoint or set of endpoints for each region.
The Health Checks service allows you to monitor the health of IP addresses and hostnames, as measured from geographic vantage points of your choosing, using HTTP and ping probes. After configuring a health check, you can view the monitor’s results. The results include the location from which the host was monitored, the availability of the endpoint, and the date and time the test was performed.
Also you can Combine Managing Traffic Management GEOLOCATION Steering Policies with Oracle Health Checks to fail over from one region to another
The Load Balancing service provides health status indicators that use your health check policies to report on the general health of your load balancers and their components.
If you misconfigure the health check protocol between the load balancer and the backend set, you may not get an accurate response. For example:
If you run a TCP-level health check against an HTTP service, you might not get an accurate response. The TCP handshake can succeed and indicate that the service is up even when the HTTP service is incorrectly configured or having other issues. Although the health check appears good, customers might experience transaction failures.
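The TCP-versus-HTTP mismatch can be shown in a few lines. This is a simplified model of a health checker's verdict (the function and its inputs are invented for the illustration, not a Load Balancing API):

```python
# Sketch of why a TCP health check can mask an HTTP failure: a successful
# TCP handshake says nothing about the HTTP status the application returns.
def backend_health(protocol, tcp_ok, http_status):
    if protocol == "TCP":
        return "OK" if tcp_ok else "CRITICAL"
    if protocol == "HTTP":
        # Healthy only if the port answers AND the app returns the expected status.
        return "OK" if tcp_ok and http_status == 200 else "CRITICAL"
    raise ValueError(f"unknown protocol: {protocol}")

# Broken web server: the port accepts connections but the app returns 500.
tcp_verdict = backend_health("TCP", tcp_ok=True, http_status=500)
http_verdict = backend_health("HTTP", tcp_ok=True, http_status=500)
```

Here the TCP check reports the broken backend as healthy while the HTTP check correctly flags it, which is exactly the DR-test failure mode described above: the geolocation policy never fails over because its health signal still looks good.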
Question 24 of 54
24. Question
You are responsible for migrating your on-premises legacy databases on version 11.2.0.4 to Autonomous Transaction Processing Dedicated (ATP-D) in Oracle Cloud Infrastructure (OCI). As a solution architect, you need to plan your migration approach.
Which two options do you need to implement together to migrate your on-premises databases to OCI?
Correct
Autonomous Database is an Oracle Managed and Secure environment.
A physical database can’t simply be migrated to autonomous because:
– Database must be converted to PDB, upgraded to 19c, and encrypted
– Any changes to Oracle shipped privileges, stored procedures or views must be removed
– All legacy structures and unsupported features must be removed (e.g. legacy LOBs)
GoldenGate replication can be used to keep database online during migration
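The checklist above can be turned into a small pre-migration gate. This is an illustrative sketch, not an official Oracle tool; the field names describing the source database are assumptions for the example:

```python
# Hypothetical pre-migration check condensed from the points above:
# what must change before an on-premises 11.2.0.4 non-CDB database
# can be moved to ATP-D.
def required_steps(db):
    steps = []
    if not db.get("is_pdb"):
        steps.append("convert non-CDB to PDB")
    if db.get("version") != "19c":
        steps.append("upgrade to 19c")
    if not db.get("encrypted"):
        steps.append("enable encryption")
    return steps

# The legacy database from the question needs all three steps.
legacy = {"version": "11.2.0.4", "is_pdb": False, "encrypted": False}
```

A database that is already a 19c, encrypted PDB would pass through with no required steps, which is why those conversions (plus removing unsupported legacy structures) are the core of the migration plan.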
Question 25 of 54
25. Question
You are a solutions architect for a global health care company which has numerous data centers around the globe. Due to the ever-growing data that your company is storing, you were instructed to set up a durable, cost-effective solution to archive your data from your existing on-premises tape-based backup infrastructure to Oracle Cloud Infrastructure (OCI).
What is the most effective mechanism to implement this requirement?
Correct
Oracle Cloud Infrastructure offers two distinct storage tiers for you to store your unstructured data. Use the Object Storage Standard tier for data to which you need fast, immediate, and frequent access. Use the Archive Storage service’s Archive tier for data that you access infrequently, but which must be preserved for long periods of time. Both storage tiers use the same manageable resources (for example, objects and buckets). The difference is that when you upload a file to Archive Storage, the object is immediately archived. Before you can access an archived object, you must first restore the object to the Standard tier.
You can use Storage Gateway to move files to Oracle Cloud Infrastructure Archive Storage as a cost-effective backup solution. You can move individual files and compressed or uncompressed ZIP or TAR archives. Storing secondary copies of data is an ideal use case for Storage Gateway.
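The Archive-tier access model described above (restore before download) can be sketched as a tiny state machine. The class, object name, and states are invented for the illustration; real restores go through Object Storage and take time to complete:

```python
# Toy model of the Archive Storage access pattern: an archived object
# must be restored to the Standard tier before it can be read.
class ArchivedObject:
    def __init__(self, name):
        self.name = name
        self.state = "Archived"

    def restore(self):
        # Real restores are asynchronous and take time; modeled as instant here.
        self.state = "Restored"

    def download(self):
        if self.state != "Restored":
            raise RuntimeError("restore the object to the Standard tier first")
        return f"contents of {self.name}"

backup = ArchivedObject("tape-backup-2019.tar")
```

This is the trade-off that makes the Archive tier the cheap choice for tape replacement: storage is inexpensive precisely because reads are not immediate.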
Question 26 of 54
26. Question
You want to move a compute instance that is in the ‘Compute’ compartment to the ‘SysTest-Team’ compartment.
You log in to your Oracle Cloud Infrastructure (OCI) account and use the ‘Move Resource’ option.
What will happen when you attempt moving the compute resource?
Correct
Moving Resources to a Different Compartment
Most resources can be moved after they are created. There are a few resources that you can’t move from one compartment to another. Some resources have attached resource dependencies and some don’t. Not all attached dependencies behave the same way when the parent resource moves.
For some resources, the attached dependencies move with the parent resource to the new compartment. The parent resource moves immediately, but in some cases attached dependencies move asynchronously and are not visible in the new compartment until the move is complete.
For other resources, the attached resource dependencies do not move to the new compartment. You can move these attached resources independently.
You can move Compute resources such as instances, instance pools, and custom images from one compartment to another. When you move a Compute resource to a new compartment, associated resources such as boot volumes and VNICs are not moved.
You can move a VCN from one compartment to another. When you move a VCN, its associated VNICs, private IPs, and ephemeral IPs move with it to the new compartment.
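The dependency behavior above can be captured in a small lookup (an illustrative subset based only on the resources named in this explanation, not an exhaustive mapping):

```python
# Which attached dependencies follow the parent resource when it moves
# compartments, per the behavior described above (illustrative subset).
MOVES_WITH_PARENT = {
    "instance": {"boot_volume": False, "vnic": False},
    "vcn": {"vnic": True, "private_ip": True, "ephemeral_public_ip": True},
}

def stays_behind(parent, dependency):
    """True if the dependency remains in the old compartment after the move
    and must be moved independently."""
    return not MOVES_WITH_PARENT[parent][dependency]

assert stays_behind("instance", "boot_volume")   # move it separately
assert not stays_behind("vcn", "private_ip")     # moves with the VCN
```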
Question 27 of 54
27. Question
You have designed and deployed your Autonomous Data Warehouse (ADW) such that it is accessible from your on-premises data center and servers running on both private and public networks in Oracle Cloud Infrastructure (OCI).
As you are testing the connectivity to your ADW database from the different access paths, you notice that the server running on the private network is unable to connect to ADW.
Which two steps do you need to take to enable connectivity from the server on the private network to ADW?
Correct
There are three connection paths to ADW:
1. Connecting to ADW from the public internet
2. Connecting to ADW (via NAT or Service Gateway) from a server running on a private subnet in OCI (in the same tenancy)
3. Connecting to ADW (via Internet Gateway) from a server running on a public subnet in OCI (in the same tenancy)
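The three access paths can be summarized as a simple source-to-gateway mapping (a sketch; the function and labels are illustrative, not an OCI API):

```python
def adw_gateway(source):
    """Return the OCI gateway each source network uses to reach ADW,
    mirroring the three access paths listed above."""
    paths = {
        "public_internet": "none (public ADW endpoint)",
        "public_subnet": "Internet Gateway",
        "private_subnet": "NAT Gateway or Service Gateway",
    }
    return paths[source]

# the failing case in this question: a server on a private subnet
assert adw_gateway("private_subnet") == "NAT Gateway or Service Gateway"
```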
Question 28 of 54
28. Question
You are working as a cloud consultant for a major media company in the US, and your client has requested to consolidate all of their log streams (access logs, application logs, and security logs) into a single system. The client wants to analyze all of their logs in real time based on heuristics, and the results should be validated as well. This validation process requires going back to data samples extracted from the last 8 hours.
What approach should you take for this scenario?
Correct
The Oracle Cloud Infrastructure Streaming service provides a fully managed, scalable, and durable storage solution for ingesting continuous, high-volume streams of data that you can consume and process in real time. Streaming can be used for messaging, ingesting high-volume data such as application logs, operational telemetry, web click-stream data, or other use cases in which data is produced and processed continually and sequentially in a publish-subscribe messaging model.
Streaming Usage Scenarios
Here are some of the many possible uses for Streaming:
Metric and log ingestion: Use the Streaming service as an alternative for traditional file-scraping approaches to help make critical operational data more quickly available for indexing, analysis, and visualization.
Messaging: Use Streaming to decouple components of large systems. Streaming provides a pull/buffer-based communication model with sufficient capacity to flatten load spikes and the ability to feed multiple consumers with the same data independently. Key-scoped ordering and guaranteed durability provide reliable primitives to implement various messaging patterns, while high throughput potential allows for such a system to scale well.
Web/Mobile activity data ingestion: Use Streaming for capturing activity from websites or mobile apps (such as page views, searches, or other actions users may take). This information can be used for real-time monitoring and analytics, as well as in data warehousing systems for offline processing and reporting.
Infrastructure and apps event processing: Use Streaming as a unified entry point for cloud components to report their life cycle events for audit, accounting, and related activities.
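The key-scoped ordering mentioned above comes from hashing each message key to a fixed partition, so all messages for one key stay on one partition in append order. A minimal sketch of that idea (the function names and partition count are assumptions for illustration):

```python
import hashlib
from collections import defaultdict

def partition_for(key, n_partitions):
    """All messages with the same key hash to the same partition,
    which is what gives a stream its key-scoped ordering."""
    digest = hashlib.sha256(key.encode()).digest()
    return int.from_bytes(digest[:8], "big") % n_partitions

stream = defaultdict(list)            # partition index -> ordered messages
for i in range(10):
    key = f"host-{i % 3}"             # e.g. one key per log-emitting host
    stream[partition_for(key, 4)].append((key, f"log line {i}"))

# every key lives in exactly one partition, so its messages stay ordered
homes = defaultdict(set)
for part, msgs in stream.items():
    for key, _ in msgs:
        homes[key].add(part)
assert all(len(parts) == 1 for parts in homes.values())
```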
Question 29 of 54
29. Question
You work for a German company as the lead Oracle Cloud Infrastructure architect. You have designed a highly scalable architecture for your company’s business-critical application, which uses the Load Balancer service, an autoscaling configuration for the application servers, and a 2-node VM Oracle RAC database. During the peak utilization period of the application you notice that the application is running slow and customers are complaining. This is resulting in support tickets being created for API timeouts and negative sentiment from the customer base.
What are two possible reasons for this application slowness?
Correct
Autoscaling
Autoscaling enables you to automatically adjust the number of Compute instances in an instance pool based on performance metrics such as CPU utilization. This helps you provide consistent performance for your end users during periods of high demand, and helps you reduce your costs during periods of low demand.
Prerequisites
– You have an instance pool. Optionally, you can attach a load balancer to the instance pool. For steps to create an instance pool and attach a load balancer, see Creating an Instance Pool.
– Monitoring is enabled on the instances in the instance pool. For steps to enable monitoring, see Enabling Monitoring for Compute Instances.
– The instance pool supports the maximum number of instances that you want to scale to. This limit is determined by your tenancy’s service limits.
About Service Limits and Usage
When you sign up for Oracle Cloud Infrastructure, a set of service limits are configured for your tenancy. The service limit is the quota or allowance set on a resource. For example, your tenancy is allowed a maximum number of compute instances per availability domain. These limits are generally established with your Oracle sales representative when you purchase Oracle Cloud Infrastructure.
Compartment Quotas
Compartment quotas are similar to service limits; the biggest difference is that service limits are set by Oracle, and compartment quotas are set by administrators, using policies that allow them to allocate resources with a high level of flexibility.
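The interaction between autoscaling and service limits can be sketched as a scaling decision capped by the tenancy limit (a simplified model; the policy thresholds and step size are illustrative assumptions, not the OCI autoscaling algorithm):

```python
def autoscale_target(current, cpu_pct, scale_out_at, scale_in_at,
                     pool_max, service_limit):
    """Next pool size under a simple CPU-based autoscaling policy.

    The pool can never grow past the smaller of the pool's configured
    maximum and the tenancy's service limit -- silently hitting that cap
    during peak load is one way an 'autoscaled' app still runs slow.
    """
    if cpu_pct >= scale_out_at:
        target = current + 1
    elif cpu_pct <= scale_in_at:
        target = current - 1
    else:
        target = current
    return max(1, min(target, pool_max, service_limit))

# peak load, but the tenancy service limit caps the pool at 4 instances
assert autoscale_target(4, 95, 80, 20, pool_max=10, service_limit=4) == 4
assert autoscale_target(3, 95, 80, 20, pool_max=10, service_limit=4) == 4
assert autoscale_target(4, 10, 80, 20, pool_max=10, service_limit=4) == 3
```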
Question 30 of 54
30. Question
Your customer recently ordered a 1-Gbps FastConnect connection in the ap-tokyo-1 region of Oracle Cloud Infrastructure (OCI). They will connect this 1-Gbps FastConnect to one Virtual Cloud Network (VCN) in their production OCI tenancy and one VCN in their development OCI tenancy. As a Solution Architect, how should you configure and architect the connectivity between on-premises and the VCNs in OCI?
Correct
There’s an advanced routing scenario called transit routing that enables communication between an on-premises network and multiple VCNs over a single Oracle Cloud Infrastructure FastConnect or IPSec VPN. The VCNs must be in the same region and locally peered in a hub-and-spoke layout. As part of the scenario, the VCN that is acting as the hub has a route table associated with each LPG (typically route tables are associated with a VCN’s subnets).
Question 31 of 54
31. Question
You are part of a project team working in the development environment created in Oracle Cloud Infrastructure (OCI). You realize that the CIDR block specified for one of the subnets in a Virtual Cloud Network (VCN) is not correct and want to delete the subnet. While deleting you get an error indicating that there are still resources that you must delete first. The error includes the OCID of the VNIC that is in the subnet.
Which of the following actions will you take to troubleshoot this issue?
A customer has a Virtual Machine instance running in their Oracle Cloud Infrastructure tenancy. They realized that they wrongly picked a smaller shape for their compute instance. They are reaching out to you to help them fix the issue.
Which of the following options is the best recommendation for the customer?
Correct
You can change the shape of a virtual machine (VM) instance without having to rebuild your instances or redeploy your applications. This lets you scale up your Compute resources for increased performance, or scale down to reduce cost.
When you change the shape of an instance, it affects the number of OCPUs, amount of memory, network bandwidth, and maximum number of VNICs for the instance. Optionally, you can select a shape that uses a different processor. The instance’s public and private IP addresses, volume attachments, and VNIC attachments remain the same.
You have configured backups for your Oracle Cloud Infrastructure (OCI) 2-node RAC DB systems on virtual machines. In the console, the database backup displays a Failed status.
Which of the following options is the most likely reason for this backup issue?
Correct
An Oracle-generated token that you can use to authenticate with third-party APIs. For example, use an auth token to authenticate with a Swift client when using Recovery Manager (RMAN) to back up an Oracle Database System (DB System) database to Object Storage.
An e-commerce company wants to deploy their web application for Oracle Database on Oracle Cloud Infrastructure (OCI) DB Systems. In compliance with the company’s business continuity program, they need to provide a Recovery Point Objective (RPO) of 1 hour and a Recovery Time Objective (RTO) of 5 minutes. The web application should be highly available within the region and meet the RTO and RPO requirements in case of a region outage.
Which approach is the most suitable and cost effective configuration for this scenario?
You have been asked to create a mobile application which will be used for submitting orders by users of a popular e-commerce site. The application is built to work with an Autonomous Transaction Processing – Serverless (ATP-S) database as the backend and HTML5 on Oracle Application Express as the front end. During peak usage of the application you notice that the application response time is very slow. The ATP-S database is deployed with 3 CPU cores and 1 TB of storage.
Which two options are expensive or impractical ways to improve the application response times?
Identify the maximum memory capacity needed for peak times and scale the memory for the ATP-S database to that number. ATP-S will scale the memory down when not needed.
Correct
ADB (serverless) does have auto-scaling – you can select auto scaling during provisioning or later using the Scale Up/Down button on the Oracle Cloud Infrastructure console.
Autonomous Database comes with over 30 machine learning algorithms implemented as SQL functions that leverage the strengths of the Oracle Autonomous Database.
You have deployed an application server in a private subnet in your Virtual Cloud Network (VCN). For the database, you have provisioned an Autonomous Transaction Processing (ATP) serverless instance. However, you are unable to connect to the database instance from your application server.
Which two steps do you need to take to enable this connectivity?
Correct
You can add a NAT gateway to your VCN to give instances in a private subnet access to the internet.
Instances in a private subnet don’t have public IP addresses. With the NAT gateway, they can initiate connections to the internet and receive responses, but not receive inbound connections initiated from the internet.
You need to add one egress rule to allow all outbound traffic generated from the private subnet.
In the route table, you need to add a route to send the traffic through the NAT gateway.
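The two required steps can be expressed as a small configuration check (a sketch; the rule dictionaries are simplified stand-ins for real route-table and security-list entries):

```python
def can_reach_internet(route_rules, egress_rules):
    """Check the two items called out above for a private subnet:
    a route rule sending 0.0.0.0/0 to a NAT gateway, and a
    security-list egress rule allowing the outbound traffic."""
    routed = any(r["destination"] == "0.0.0.0/0" and r["target"] == "NAT_GATEWAY"
                 for r in route_rules)
    allowed = any(e["destination"] == "0.0.0.0/0" for e in egress_rules)
    return routed and allowed

routes = [{"destination": "0.0.0.0/0", "target": "NAT_GATEWAY"}]
egress = [{"destination": "0.0.0.0/0"}]
assert can_reach_internet(routes, egress)
assert not can_reach_internet([], egress)    # route rule missing
```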
As part of planning the network design on Oracle Cloud Infrastructure, you have been asked to create an Oracle Cloud Infrastructure Virtual Cloud Network (VCN) with 3 subnets, one in each Availability Domain. Each subnet needs to have a minimum of 64 usable IP addresses.
What is the smallest subnet and VCN size you should use to implement this design? The requirements are static, so no growth is expected.
Correct
Each subnet in a VCN consists of a contiguous range of IPv4 addresses that do not overlap with other subnets in the VCN.
Example: 172.16.1.0/24. The first two IPv4 addresses and the last in the subnet’s CIDR are reserved by the Networking service. You can’t change the size of the subnet after creation, so it’s important to think about the size of subnets you need before creating them.
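The sizing arithmetic follows directly from the three reserved addresses. A quick check with Python's `ipaddress` module (the example CIDRs are illustrative):

```python
import ipaddress

def usable_ips(cidr):
    """Usable addresses in an OCI subnet: the first two IPs and the
    last IP of the CIDR are reserved by the Networking service."""
    return ipaddress.ip_network(cidr).num_addresses - 3

assert usable_ips("10.0.0.0/26") == 61    # too small for 64 hosts
assert usable_ips("10.0.0.0/25") == 125   # smallest size that fits 64

# three /25 subnets need at least a /23 VCN (a /24 only holds two)
vcn = ipaddress.ip_network("10.0.0.0/23")
assert len(list(vcn.subnets(new_prefix=25))) == 4
```

So with three subnets of at least 64 usable addresses each, a /25 per subnet inside a /23 VCN is the smallest static layout.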
You are designing the network infrastructure for an application consisting of a web server (server-1) and a Domain Name Server (server-2) running in two different subnets inside the same Virtual Cloud Network (VCN) in Oracle Cloud Infrastructure (OCI). You have a requirement where your end users will access server-1 from the internet and server-2 from your customer’s on-premises network. The on-premises network is connected to your VCN over a FastConnect virtual circuit.
How should you design your routing configuration to meet these requirements?
Correct
Each VCN automatically comes with a default route table that has no rules. If you don’t specify otherwise, every subnet uses the VCN’s default route table. When you add route rules to your VCN, you can simply add them to the default table if that suits your needs. However, if you need both a public subnet and a private subnet, you instead create a separate (custom) route table for each subnet.
Each subnet in a VCN uses a single route table. When you create the subnet, you specify which one to use. You can change which route table the subnet uses at any time. You can also edit a route table’s rules, or remove all the rules from the table.
Dynamic routing gateway (DRG): For subnets that need private access to networks connected to your VCN (for example, your on-premises network connected with an IPSec VPN or FastConnect, or a peered VCN in another region)
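Route resolution picks the most specific matching rule, which is why giving each subnet its own custom route table works here. A minimal sketch of that lookup (the CIDRs, including the 172.16.0.0/16 on-premises range, are hypothetical):

```python
import ipaddress

def resolve(route_table, dest_ip):
    """Return the target of the most specific route rule matching dest_ip,
    or None if no rule matches (traffic is dropped)."""
    ip = ipaddress.ip_address(dest_ip)
    best = None
    for cidr, target in route_table:
        net = ipaddress.ip_network(cidr)
        if ip in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, target)
    return best[1] if best else None

# hypothetical layout: on-premises 172.16.0.0/16 reached over FastConnect
server1_rt = [("0.0.0.0/0", "INTERNET_GATEWAY")]   # web server subnet
server2_rt = [("172.16.0.0/16", "DRG")]            # DNS server subnet

assert resolve(server1_rt, "203.0.113.9") == "INTERNET_GATEWAY"
assert resolve(server2_rt, "172.16.5.20") == "DRG"
assert resolve(server2_rt, "203.0.113.9") is None  # no internet path
```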
Your Oracle database is deployed on-premises and has produced a 100 TB database backup locally. You have a disaster recovery plan that requires you to create redundant database backups in Oracle Cloud Infrastructure (OCI). Once the initial backup is completed, the backup must be available for retrieval in less than 30 minutes to support the Recovery Time Objective (RTO) of your solution.
Which is the most cost effective option to meet these requirements?
You are building a highly available and fault-tolerant web application deployment for your company. A similar application deployed by a competitor experienced web site attacks, including DDoS, which resulted in web server failure. You have decided to use Oracle Web Application Firewall (WAF) to implement an architecture that will provide protection against such attacks.
Which additional configuration will you need to implement to make sure WAF is protecting your web application 24×7?
Correct
Origin Management
An origin is an endpoint (typically an IP address) of the application protected by the WAF. An origin can be an Oracle Cloud Infrastructure load balancer public IP address. A load balancer IP address can be used for high availability to an origin. Multiple origins can be defined, but only a single origin can be active for a WAF. You can set HTTP headers for outbound traffic from the WAF to the origin server. These name value pairs are then available to the application.
Oracle Cloud Infrastructure Web Application Firewall (WAF) is a cloud-based, Payment Card Industry (PCI) compliant, global security service that protects applications from malicious and unwanted internet traffic. WAF can protect any internet-facing endpoint, providing consistent rule enforcement across a customer’s applications. WAF provides you with the ability to create and manage rules for internet threats including Cross-Site Scripting (XSS), SQL Injection, and other OWASP-defined vulnerabilities. Unwanted bots can be mitigated while tactically allowing desirable bots to enter. Access rules can limit access based on geography or the signature of the request.
Distributed Denial of Service (DDoS)
A DDoS attack is an often intentional attack that consumes an entity’s resources, usually using a large number of distributed sources. DDoS can be categorized into either Layer 7 or Layer 3/4 (L3/4).
A layer 7 DDoS attack is a DDoS attack that sends HTTP/S traffic to consume resources and hamper a website’s ability to deliver content or to harm the owner of the site. The Web Application Firewall (WAF) service can protect layer 7 HTTP-based resources from layer 7 DDoS and other web application attack vectors.
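As a hedged sketch of the origin configuration described above, a WAAS policy can be created with the OCI CLI and pointed at the load balancer's public IP as its origin. The domain, compartment OCID, IP address, and the exact shape of the `--origins` JSON below are illustrative assumptions, not values from the question:

```shell
# Sketch (assumed values): create a WAF (WAAS) policy whose origin is the
# application load balancer's public IP, so the WAF fronts the app 24x7.
oci waas waas-policy create \
    --compartment-id ocid1.compartment.oc1..exampleuniqueid \
    --domain www.example.com \
    --origins '{"primary": {"uri": "203.0.113.10", "httpPort": 80, "httpsPort": 443}}'
```

Multiple origins could be defined in the JSON map, but per the explanation above only one can be active for the WAF at a time.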
Question 41 of 54
41. Question
You are working as a solutions architect for an online retail store in Frankfurt which uses multiple compute instance VMs spread among three availability domains in the eu-frankfurt-1 region.
You noticed the website is experiencing very high traffic, so you enabled autoscaling to adjust the number of your application instances, but you observed that one of the availability domains is not receiving any traffic.
What could be wrong in this situation?
Correct
Autoscaling lets you automatically adjust the number of Compute instances in an instance pool based on performance metrics such as CPU utilization. This helps you provide consistent performance for your end users during periods of high demand, and helps you reduce your costs during periods of low demand.
You can associate a load balancer with an instance pool. If you do this, when you add an instance to the instance pool, the instance is automatically added to the load balancer’s backend set. After the instance reaches a healthy state (the instance is listening on the configured port number), incoming traffic is automatically routed to the new instance.
Instance pools let you provision and create multiple Compute instances based off the same configuration, within the same region.
By default, the instances in a pool are distributed across all fault domains in a best-effort manner based on capacity. If capacity isn’t available in one fault domain, the instances are placed in other fault domains to allow the instance pool to launch successfully.
In a high availability scenario, you can require that the instances in a pool are evenly distributed across each of the fault domains that you specify. When sufficient capacity isn’t available in one of the fault domains, the instance pool will not launch or scale successfully, and a work request for the instance pool will return an “out of capacity” error. To fix the capacity error, either wait for capacity to become available, or use the UpdateInstancePool operation to update the placement configuration (the availability domain and fault domain) for the instance pool.
When creating the instance pool, you can select the location where you want to place the instances:
In the Availability Domain list, select the availability domain to launch the instances in.
If you want the instances in the pool to be placed evenly in one or more fault domains, select the Distribute instances evenly across selected fault domains check box. Then, select the fault domains to place the instances in.
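The placement fix described above can also be applied after the fact via the CLI's UpdateInstancePool operation. A sketch, where the instance pool OCID, subnet OCID, and AD names are placeholders:

```shell
# Sketch (assumed OCIDs): update an instance pool's placement configuration so
# that instances can launch in all three ADs of eu-frankfurt-1.
oci compute-management instance-pool update \
    --instance-pool-id ocid1.instancepool.oc1.eu-frankfurt-1.exampleuniqueid \
    --placement-configurations file://placements.json

# placements.json holds one entry per availability domain, for example:
# [
#   {"availabilityDomain": "Uocm:EU-FRANKFURT-1-AD-1",
#    "primarySubnetId": "ocid1.subnet.oc1.eu-frankfurt-1.exampleuniqueid"},
#   {"availabilityDomain": "Uocm:EU-FRANKFURT-1-AD-2",
#    "primarySubnetId": "ocid1.subnet.oc1.eu-frankfurt-1.exampleuniqueid2"}
# ]
```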
Question 42 of 54
42. Question
A startup company is looking for a solution for processing data transmitted by the IoT devices fitted to transport vehicles that carry frozen foods. The data should be consumed and processed in real time. The processed data should be archived to an OCI Object Storage bucket, and Autonomous Data Warehouse (ADW) should be used to handle analytics.
Which architecture will help you meet this requirement?
Correct
Real-time processing of high-volume streams of data
– OCI Streaming service provides a fully managed, scalable, durable storage option for continuous, high-volume streams of data that you can consume and process in real-time
– Use cases
Log and Event data collection
Web/Mobile activity data ingestion
IoT Data streaming for processing and alerts
Messaging: use streaming to decouple components of large systems
– Oracle managed service with REST APIs (Create, Put, Get, Delete)
– Integrated Monitoring
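Messages put to a stream carry Base64-encoded keys and values. A minimal, stdlib-only sketch of packaging an IoT temperature reading into that shape (the function name and field names are illustrative; the actual send would use the Streaming service's PutMessages operation via the SDK or REST API):

```python
import base64
import json

def to_stream_entry(device_id: str, reading: dict) -> dict:
    """Encode one IoT reading as a PutMessages-style entry:
    both key and value must be Base64-encoded strings."""
    return {
        "key": base64.b64encode(device_id.encode()).decode(),
        "value": base64.b64encode(json.dumps(reading).encode()).decode(),
    }

# One reading from a refrigerated truck (illustrative payload).
entry = to_stream_entry("truck-042", {"temp_c": -18.5, "ts": 1700000000})

# A consumer decodes the value to recover the original JSON payload.
decoded = json.loads(base64.b64decode(entry["value"]))
print(decoded["temp_c"])  # → -18.5
```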
Question 43 of 54
43. Question
A retail company runs their online shopping platform entirely on Oracle Cloud Infrastructure (OCI). This is a 3-tier web application that includes a 100 Mbps Load Balancer, Virtual Machine instances for the web and application tiers, and an Oracle DB Systems Virtual Machine. Due to unprecedented growth, they noticed an increase in the incoming traffic to their website, and all users start getting 503 (Service Unavailable) errors.
What is the potential problem in this scenario?
Correct
A 503 Service Unavailable Error is an HTTP response status code indicating that a server is temporarily unable to handle the request. This may be due to the server being overloaded or down for maintenance.
Question 44 of 54
44. Question
Your company will soon start moving critical systems into the Oracle Cloud Infrastructure (OCI) platform. These systems will reside in the us-phoenix-1 and us-ashburn-1 regions. As part of the migration planning, you are reviewing the company’s existing security policies and written guidelines for OCI platform usage within the company. You have to work with a company-managed key.
Which two options ensure compliance with this policy?
Correct
To comply with the policy, you have to work with a company-managed (customer-managed) key that you create and manage in the Vault service.
Block Volume Encryption
By default all volumes and their backups are encrypted using the Oracle-provided encryption keys. Each time a volume is cloned or restored from a backup the volume is assigned a new unique encryption key.
You have the option to encrypt all of your volumes and their backups using the keys that you own and manage using the Vault service. If you do not configure a volume to use the Vault service, or you later unassign a key from the volume, the Block Volume service uses the Oracle-provided encryption key instead. This applies to both encryption at-rest and in-transit encryption.
Object Storage Encryption
Object Storage employs 256-bit Advanced Encryption Standard (AES-256) to encrypt object data on the server. Each object is encrypted with its own data encryption key. Data encryption keys are always encrypted with a master encryption key that is assigned to the bucket. Encryption is enabled by default and cannot be turned off. By default, Oracle manages the master encryption key. However, you can optionally configure a bucket so that it’s assigned an Oracle Cloud Infrastructure Vault master encryption key that you control and rotate on your own schedule.
Encryption: Buckets are encrypted with keys managed by Oracle by default, but you can optionally encrypt the data in this bucket using your own Vault encryption key. To use Vault for your encryption needs, select Encrypt Using Customer-Managed Keys. Then, select the Vault Compartment and Vault that contain the master encryption key you want to use. Also select the Master Encryption Key Compartment and Master Encryption Key.
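A hedged CLI sketch of the Object Storage side: creating a bucket that is assigned a customer-managed Vault master encryption key. The bucket name and OCIDs are placeholders:

```shell
# Sketch (assumed names/OCIDs): create a bucket encrypted with a
# customer-managed Vault master encryption key instead of the default
# Oracle-managed key.
oci os bucket create \
    --compartment-id ocid1.compartment.oc1..exampleuniqueid \
    --name critical-systems-bucket \
    --kms-key-id ocid1.key.oc1.phx.examplevaultkey.exampleuniqueid
```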
Question 45 of 54
45. Question
The development team has deployed quite a few instances under the ‘Compute’ compartment, and the operations team needs to list the instances under the same compartment for their testing. Both teams, development and operations, are part of a group called ‘Eng-group’. You have been looking for an option to allow the operations team to list the instances without accessing any confidential information or metadata of the resources.
Which IAM policy should you write based on these requirements?
Correct
Policy Attachment
When you create a policy you must attach it to a compartment (or the tenancy, which is the root compartment). Where you attach it controls who can then modify it or delete it. If you attach it to the tenancy (in other words, if the policy is in the root compartment), then anyone with access to manage policies in the tenancy can then change or delete it. Typically that’s the Administrators group or any similar group you create and give broad access to. Anyone with access only to a child compartment cannot modify or delete that policy.
When you attach a policy to a compartment, you must be in that compartment and you must indicate directly in the statement which compartment it applies to. If you are not in the compartment, you’ll get an error if you try to attach the policy to a different compartment. Notice that attachment occurs during policy creation, which means a policy can be attached to only one compartment.
Policies and Compartment Hierarchies
A policy statement must specify the compartment for which access is being granted (or the tenancy).
Where you create the policy determines who can update the policy. If you attach the policy to the compartment or its parent, you can simply specify the compartment name. If you attach the policy further up the hierarchy, you must specify the path. The format of the path is each compartment name (or OCID) in the path, separated by a colon:
<compartment-name>:<compartment-name>: . . .
To allow actions in the Compute compartment, you need to set the compartment path according to where you attach the policy, as in the following examples:
If you attach it to the root compartment, specify the path as:
Engineering:Dev-Team:Compute
If you attach it to the Engineering compartment, specify the path as:
Dev-Team:Compute
If you attach it to the Dev-Team or Compute compartment, specify the path as:
Compute
Note: in the policy, the inspect verb gives the ability to list resources without access to any confidential information or user-specified metadata that may be part of those resources.
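Putting the pieces above together, a policy statement for this scenario (attached at a level where the Compute compartment can be referenced directly by name) might read:

```text
Allow group Eng-group to inspect instances in compartment Compute
```

The inspect verb grants only the listing capability, which matches the requirement that the operations team see no confidential information or user-specified metadata.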
Question 46 of 54
46. Question
A global retailer has decided to redesign its e-commerce platform to have a microservices architecture. They would like to decouple the application architecture into smaller, independent services using Oracle Cloud Infrastructure (OCI). They have decided to use both container and serverless technologies to run these application instances.
Which option should you recommend to build this new platform?
Correct
Oracle Functions is a fully managed, multi-tenant, highly scalable, on-demand, Functions-as-a-Service platform. It is built on enterprise-grade Oracle Cloud Infrastructure and powered by the Fn Project open source engine. Use Oracle Functions (sometimes abbreviated to just Functions) when you want to focus on writing code to meet business needs.
Oracle Cloud Infrastructure Container Engine for Kubernetes is a fully-managed, scalable, and highly available service that you can use to deploy your containerized applications to the cloud. Use Container Engine for Kubernetes (sometimes abbreviated to just OKE) when your development team wants to reliably build, deploy, and manage cloud-native applications. You specify the compute resources that your applications require, and Container Engine for Kubernetes provisions them on Oracle Cloud Infrastructure in an existing OCI tenancy.
Question 47 of 54
47. Question
Multiple departments in your company use a shared Oracle Cloud Infrastructure (OCI) tenancy to implement their projects. You are in charge of managing the cost of OCI resources in the tenancy and need to obtain better insights into each department’s usage.
Which three options can you implement together to accomplish this?
Correct
Budgets
You can use budgets to track costs in your tenancy. After creating a budget for a compartment, you can set up alerts that will notify you if a budget is forecast to be exceeded or if spending surpasses a certain amount.
OCI Cost Analysis
• Visualization tools help you understand spending patterns at a glance
• Filter costs by Date, Tags and Compartments
• Trend lines show how spending patterns are changing
• To use Cost Analysis you must be a member of the Administrators group
Question 48 of 54
48. Question
An organization has its IT infrastructure in a hybrid setup with an on-premises environment and an Oracle Cloud Infrastructure (OCI) Virtual Cloud Network (VCN) in the us-phoenix-1 region. The on-premises applications communicate with compute instances inside the VCN over a hardware VPN connection. They are looking to implement an Intrusion Detection and Prevention (IDS/IPS) system for their OCI environment. This platform should have the ability to scale to thousands of compute instances running inside the VCN.
How should they architect their solution on OCI to achieve this goal?
Correct
With transit routing through a private IP in the VCN, you set up an instance in the VCN to act as a firewall or intrusion detection system to filter or inspect the traffic between the on-premises network and the Oracle Services Network.
The Networking service lets you implement network security functions such as intrusion detection, application-level firewalls
In fact, the IDS model can be host-based IDS (HIDS) or network-based IDS (NIDS). HIDS is installed at a host to periodically monitor specific system logs for patterns of intrusions. In contrast, an NIDS sniffs the traffic to analyze suspicious behaviors. A signature-based NIDS (SNIDS) examines the traffic for patterns of known intrusions. SNIDS can quickly and reliably diagnose the attacking techniques and security holes without generating an overwhelming number of false alarms because SNIDS relies on known signatures. However, anomaly-based NIDS (ANIDS) detects unusual behaviors based on statistical methods. ANIDS could detect symptoms of attacks without specific knowledge of details. However, if the training data of the normal traffic are inadequate, ANIDS may generate a large number of false alarms.
Question 49 of 54
49. Question
Which three scenarios are suitable for Oracle Cloud Infrastructure (OCI) Autonomous Transaction Processing Serverless (ATP-S) deployment?
Correct
MongoDB is a cross-platform, document-oriented database program. Classified as a NoSQL database, MongoDB uses JSON-like documents with optional schemas, so it is best migrated to Oracle NoSQL Database.
Autonomous Transaction Processing Serverless (ATP-S) is not yet supported for E-Business Suite (EBS) databases.
Question 50 of 54
50. Question
You have multiple IAM users who launch different types of compute instances and block volumes every day. As a result, your Oracle Cloud Infrastructure (OCI) tenancy quickly hit the service limit and you can no longer create any new instances. As you clean up the environment, you notice that the majority of the instances and block volumes are untagged, so it is difficult to pinpoint the owners of these resources and verify whether they are safe to terminate. Because of this, your company has issued a new mandate that requires tagging all newly created compute instances.
Which option is the simplest way to implement this new requirement?
Correct
Tag Variables
You can use a variable to set the value of a defined tag. When you add the tag to a resource, the variable resolves to the data it represents. You can use tag variables in defined tags and default tags.
Supported Tag Variables
The following tag variables are supported:
${iam.principal.name}: The name of the principal that tagged the resource.
${iam.principal.type}: The type of principal that tagged the resource.
${oci.datetime}: The date and time that the tag was created.
Consider the following example:
Operations.CostCenter="${iam.principal.name} at ${oci.datetime}"
Operations is the namespace, CostCenter is the tag key, and the tag value contains two tag variables, ${iam.principal.name} and ${oci.datetime}. When you add this tag to a resource, the variables resolve to your user name (the name of the principal that applied the tag) and a date-time stamp for when you added the tag:
user_name at 2019-06-18T18:00:57.604Z
The variables are replaced with data at the time you apply the tag. If you later edit the tag, the variables are gone and only the data remains. You can edit the tag value in all the ways you would edit any other tag value.
To create a tag variable, you must use a specific format:
${}
Type a dollar sign followed by open and close curly brackets. The tag variable goes between the curly brackets. You can combine tag variables with other tag variables and with string values.
Tag defaults let you specify tags to be applied automatically to all resources, at the time of creation, in a specific compartment. This feature lets you ensure that appropriate tags are applied at resource creation without requiring the user who is creating the resource to have access to the tag namespaces.
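The resolution behavior described above can be sketched as follows. This is a local simulation of how the variables are substituted at tag-apply time, not an actual OCI API call; the principal name and type are hypothetical inputs.

```python
from datetime import datetime, timezone

def resolve_tag_value(template, principal_name, principal_type):
    """Simulate how a defined-tag value containing tag variables resolves
    at the moment the tag is applied to a resource."""
    # Millisecond-precision UTC timestamp, like 2019-06-18T18:00:57.604Z
    now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.%f")[:-3] + "Z"
    substitutions = {
        "${iam.principal.name}": principal_name,
        "${iam.principal.type}": principal_type,
        "${oci.datetime}": now,
    }
    for variable, value in substitutions.items():
        template = template.replace(variable, value)
    return template

# Operations.CostCenter = "${iam.principal.name} at ${oci.datetime}"
resolved = resolve_tag_value("${iam.principal.name} at ${oci.datetime}",
                             principal_name="user_name", principal_type="user")
print(resolved)  # e.g. "user_name at 2019-06-18T18:00:57.604Z"
```

As the documentation notes, once applied the tag stores only the resolved string; editing the tag later edits plain data, not the variable.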
Question 51 of 54
51. Question
To serve web traffic for a popular product, your cloud engineer has provisioned four BM.Standard2.52 instances, evenly spread across two availability domains in the us-ashburn-1 region; a Load Balancer is used to distribute the traffic across the instances.
After several months, the product grows even more popular and you need additional compute capacity. As a result, an engineer provisions two additional VM.Standard2.8 instances.
You register the two VM.Standard2.8 instances with your Load Balancer backend set and quickly find that the VM.Standard2.8 instances are running at 100% CPU utilization while the BM.Standard2.52 instances have significant unused CPU capacity.
Which option is the most cost effective and uses the instances' capacity most effectively?
Correct
The customer has four BM.Standard2.52 instances and after several months needs additional compute capacity.
The customer finds that the VM.Standard2.8 instances are running at 100% CPU utilization while the BM.Standard2.52 instances have significant unused CPU capacity.
The customer therefore needs to review the load balancer policy so that traffic to the four BM instances and two VM instances is distributed in proportion to each instance's capacity.
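A weighted distribution policy fixes the imbalance described above. The sketch below weights each backend by its shape's OCPU count (52 for BM.Standard2.52, 8 for VM.Standard2.8); the backend names are hypothetical, and a real Load Balancer backend set would carry these as per-backend weights.

```python
import itertools
from collections import Counter

# Hypothetical backend set: weights proportional to each shape's OCPU count,
# so a BM.Standard2.52 (52 OCPUs) receives 52/8 times the traffic of a
# VM.Standard2.8 (8 OCPUs).
backends = [("bm-1", 52), ("bm-2", 52), ("bm-3", 52), ("bm-4", 52),
            ("vm-1", 8), ("vm-2", 8)]

def weighted_round_robin(backends):
    """Yield backend names in proportion to their configured weights."""
    schedule = [name for name, weight in backends for _ in range(weight)]
    return itertools.cycle(schedule)

rr = weighted_round_robin(backends)
# One full cycle: 4*52 + 2*8 = 224 requests.
first_cycle = Counter(next(rr) for _ in range(224))
print(first_cycle)  # each BM backend gets 52 requests, each VM backend gets 8
```

With equal weights (the default round robin), each VM would receive the same 1/6 share as each BM, which is exactly the saturation pattern in the question.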
Question 52 of 54
52. Question
Your company has recently deployed a new web application that uses Oracle Functions. Your manager has instructed you to implement a change to manage your systems more effectively. You know that Oracle Functions automatically monitors functions on your behalf and reports metrics through the Service Metrics page.
Which two metrics are collected and made available by this feature?
Correct
You can monitor the health, capacity, and performance of functions you have deployed to Oracle Functions by using metrics.
Oracle Functions monitors function execution, and collects and reports metrics such as:
The number of times a function is invoked.
The length of time a function runs for.
The number of times a function failed.
The number of requests to invoke a function that returned a ‘429 Too Many Requests’ error in the response (known as ‘throttled function invocations’).
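The four metric families listed above can be derived from per-invocation records, as the sketch below shows. The record format and the metric key names here are illustrative only and are not OCI's exact telemetry schema or metric identifiers.

```python
# Hypothetical invocation records: (function_name, duration_ms, http_status).
invocations = [
    ("resize-image", 120, 200),
    ("resize-image", 95, 200),
    ("resize-image", 110, 500),   # a failed invocation
    ("resize-image", 5, 429),     # a throttled invocation
]

def summarize(records, function_name):
    """Aggregate the four metric families described above for one function.
    The dictionary keys are illustrative names, not OCI's metric identifiers."""
    rows = [r for r in records if r[0] == function_name]
    return {
        "InvocationCount": len(rows),
        "ExecutionDurationMs": sum(d for _, d, _ in rows),
        "ErrorCount": sum(1 for _, _, s in rows if s >= 500),
        "ThrottledInvocationCount": sum(1 for _, _, s in rows if s == 429),
    }

print(summarize(invocations, "resize-image"))
```

The 429 responses are counted separately from server errors, matching the "throttled function invocations" metric described above.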
Question 53 of 54
53. Question
Your team is conducting a root cause analysis (RCA) following a recent, unplanned outage. One of the block volumes attached to your production WebLogic server was deleted, and you have been tasked with identifying the source of the action. You search the Audit logs and find several Delete actions that occurred in the previous 24 hours.
Given the sample excerpt of this event
Correct
The Oracle Cloud Infrastructure Audit service automatically records calls to all supported Oracle Cloud Infrastructure public application programming interface (API) endpoints as log events. Currently, all services support logging by Audit.
Every audit log event includes two main parts:
Envelopes that act as a container for all event messages
Payloads that contain data from the resource emitting the event message
The identity object contains the following attributes.
data.identity.authType: The type of authentication used.
….
data.identity.principalId: The OCID of the principal.
data.identity.principalName: The name of the user or service. This value is the friendly name associated with principalId.
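For the RCA above, the deleting principal can be read straight out of the event payload. The event below is a hypothetical, heavily trimmed sample with made-up identifiers, not a real audit record, which would carry many more envelope and payload fields.

```python
import json

# Hypothetical, trimmed audit event; identifiers are invented for illustration.
event = json.loads("""
{
  "eventType": "com.oraclecloud.blockvolumes.deletevolume",
  "data": {
    "identity": {
      "authType": "natv",
      "principalId": "ocid1.user.oc1..exampleuniqueid",
      "principalName": "alice@example.com"
    }
  }
}
""")

def who_did_it(evt):
    """Return the friendly principal name that performed the audited action,
    i.e. the data.identity.principalName attribute described above."""
    return evt["data"]["identity"]["principalName"]

print(who_did_it(event))  # alice@example.com
```

principalId gives the unambiguous OCID of the same principal; principalName is simply the friendly name associated with it.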
Question 54 of 54
54. Question
A global retailer is setting up the cloud architecture to be deployed in Oracle Cloud Infrastructure (OCI) which will have thousands of users from two major geographical regions: North America and Asia Pacific. The requirements of the services are:
* Service needs to be available 24/7 to avoid any business disruption
* North American customers should be served by application running in North American regions
* Asia Pacific customers should be served by applications running in Asia Pacific regions
* Must be resilient enough to handle the outage of an entire OCI region
Correct
GEOLOCATION STEERING
Geolocation steering policies distribute DNS traffic to different endpoints based on the location of the end user. Customers can define geographic regions composed of originating continents, countries, or states/provinces (for example, North America) and define a separate endpoint or set of endpoints for each region.
Combine this with Oracle Health Checks to fail over from one region to another.
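The steering-plus-health-check behavior can be sketched as follows. The endpoint hostnames and the health map are hypothetical; in practice the DNS Traffic Management service performs this selection and the health states come from the Health Checks service.

```python
# Hypothetical region endpoints, purely for illustration.
ENDPOINTS = {
    "NORTH_AMERICA": "app.us-ashburn-1.example.com",
    "ASIA_PACIFIC": "app.ap-tokyo-1.example.com",
}

def steer(user_continent, healthy):
    """Pick the endpoint for the user's continent; if a health check marks it
    down, fail over to another healthy region so service stays available."""
    preferred = ENDPOINTS.get(user_continent)
    if preferred and healthy.get(preferred, False):
        return preferred
    # Failover: return any healthy endpoint from the remaining regions.
    for endpoint in ENDPOINTS.values():
        if healthy.get(endpoint, False):
            return endpoint
    return None  # total outage of all regions

healthy = {"app.us-ashburn-1.example.com": False,  # simulated region outage
           "app.ap-tokyo-1.example.com": True}
print(steer("NORTH_AMERICA", healthy))  # fails over to the Tokyo endpoint
```

When all regions are healthy, each user is served from their own geography; when an entire region fails its health checks, traffic shifts to the surviving region, meeting the 24/7 requirement.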