Results for "Nutanix Certified Professional - Unified Storage NCP-US 6.10 Practice Test 2"
Question 1 of 60
1. Question
Users are complaining about reconnecting to a share when there are networking issues. Which Files feature should the administrator enable to ensure the sessions will auto-reconnect when networking issues occur?
Explanation:
Enabling Durable File Handles ensures that users do not experience disconnections or need to reconnect manually after networking interruptions. It allows file handles to survive network failures, maintaining uninterrupted access to shared files. This is the correct feature to address the users' complaints about reconnecting after networking issues.
Question 2 of 60
2. Question
What is a prerequisite for deploying Smart DR?
Explanation:
Smart DR (Disaster Recovery) is a Nutanix Files feature that enables automated replication and failover of file server shares between clusters for business continuity. A key prerequisite before deploying Smart DR is ensuring that TCP port 7515 is open unidirectionally from the source site to the recovery site on all file server client IPs. This port is used for inter-cluster communication related to replication traffic. If it is not open, replication jobs and Smart DR orchestration will fail because the two file server clusters will be unable to establish the required communication.
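Before enabling Smart DR, the port requirement above can be verified with a simple TCP probe from the source site. A minimal sketch in Python; the recovery-site IP addresses below are hypothetical placeholders to substitute with your own.

```python
# Probe TCP port 7515 from the source site toward each recovery-site
# file server client IP before configuring Smart DR.
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical recovery-site file server client IPs:
recovery_client_ips = ["10.0.1.21", "10.0.1.22", "10.0.1.23"]
for ip in recovery_client_ips:
    state = "open" if port_open(ip, 7515, timeout=0.5) else "BLOCKED"
    print(f"{ip}:7515 {state}")
```

Run this from a machine at the source site; every IP must report the port as open before Smart DR replication can work.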
Question 3 of 60
3. Question
Which primary criterion should be considered for performance-sensitive application shares with sequential I/O?
Explanation:
When designing and optimizing performance-sensitive application shares, particularly those involving sequential I/O, the most important metric to consider is throughput. Sequential I/O refers to operations where data is read or written in a continuous, ordered stream (e.g., large file transfers, video editing workloads, backup targets). For these workloads:
Throughput (measured in MB/s or GB/s) becomes the dominant performance factor, not the number of IOPS.
Higher throughput ensures that large blocks of data are processed quickly and efficiently.
Increasing bandwidth (e.g., NIC speeds), tuning block size, and ensuring adequate buffering can all help maximize throughput.
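The relationship between IOPS, block size, and throughput can be illustrated with a short calculation (a generic sizing sketch, not Nutanix-specific):

```python
# throughput (MiB/s) = IOPS x block size; for sequential workloads, large
# blocks deliver high throughput even at a modest IOPS rate.
def throughput_mib_s(iops: float, block_size_kib: float) -> float:
    """Sustained throughput in MiB/s for a given IOPS rate and block size."""
    return iops * block_size_kib / 1024

# Large-block sequential workload (e.g., a backup stream):
print(throughput_mib_s(2000, 1024))  # 2,000 IOPS at 1 MiB blocks -> 2000.0
# A small-block random workload needs far more IOPS for the same throughput:
print(throughput_mib_s(2000, 8))     # 2,000 IOPS at 8 KiB blocks -> 15.625
```

This is why throughput, not IOPS, is the metric to size for when the workload streams data sequentially.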
Question 4 of 60
4. Question
Refer to the exhibit.
A Nutanix administrator is able to review and modify objects in a registered ESXi cluster from a PE instance, but when the administrator attempts to deploy an Objects cluster to the same ESXi cluster, the error that is shown in the exhibit is displayed.
What is the appropriate configuration to verify to allow a successful Objects cluster deployment to this ESXi cluster?
Explanation:
Ensure that vCenter in the PE cluster is registered using FQDN and that vCenter details in the Objects UI use an IP address.
Incorrect: This mismatch (FQDN in PE and IP in the Objects UI) is often the cause of the issue. The vCenter registration must be consistent between PE and Objects. Mismatched FQDN/IP entries can lead to SSL trust issues or internal validation failures, which would prevent deployment.
Replace the expired self-signed SSL certificate for the Object Store with a non-expired public certificate signed by a valid Certificate Authority.
Incorrect: While certificate expiration can cause issues, the problem described occurs during the deployment phase, not during secure object access or communication. Also, the Objects UI does not require a public CA certificate for deployment success.
Correct answer
Ensure that vCenter in the PE cluster is registered using FQDN and that vCenter details in the Objects UI use FQDN.
Correct: For Nutanix Objects to deploy correctly to an ESXi cluster, the vCenter registration must match (FQDN vs. IP) across all components: Prism Element and the Objects UI. Using inconsistent references to the vCenter server (e.g., FQDN in one and IP in the other) leads to validation and communication issues, causing the deployment to fail. Matching FQDNs ensures trusted SSL/TLS communication and hostname resolution compatibility.
Replace the expired self-signed SSL certificate for the Object Store with a non-expired self-signed SSL certificate.
Incorrect: Again, this is not about expired certificates on the Object Store itself. If this were the issue, you would likely see access or trust issues post-deployment, not during cluster creation. It also does not relate to the vCenter connection, which is the source of the error in this scenario.
Details:
When deploying Nutanix Objects to an ESXi cluster, vCenter registration must be consistent between Prism Element (PE) and the Objects UI. This includes using the FQDN (Fully Qualified Domain Name) instead of an IP address in both places. This is because:
SSL certificates and authentication tokens are often tied to FQDNs.
Inconsistent use of FQDN vs. IP (e.g., FQDN in PE and IP in the Objects UI) can break trust validation and result in deployment failures.
Objects and Prism communicate with vCenter, and vCenter expects the name used in registration to match the CN (Common Name) in its certificate, usually the FQDN.
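As a sanity check, the FQDN-vs-IP consistency described above can be expressed in a few lines (a generic sketch; the hostnames are hypothetical placeholders, not values from any real deployment):

```python
# Verify that both registrations reference vCenter the same way, and by
# FQDN rather than a literal IP, so the name can match the CN in
# vCenter's certificate.
import ipaddress

pe_vcenter = "vcenter01.corp.example.com"       # hypothetical: PE registration
objects_vcenter = "vcenter01.corp.example.com"  # hypothetical: Objects UI entry

def is_ip(addr: str) -> bool:
    """True if addr is a literal IP address rather than an FQDN."""
    try:
        ipaddress.ip_address(addr)
        return True
    except ValueError:
        return False

assert pe_vcenter == objects_vcenter, "vCenter references do not match"
assert not is_ip(pe_vcenter), "use the FQDN, not an IP address"
print("vCenter registration is consistent (FQDN in both places)")
```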
Question 5 of 60
5. Question
ABC Corp currently has two Objects instances deployed across two sites. Both instances are managed via the same Prism Central to simplify management. They have a critical application with all of its data in a bucket that needs to be replicated to the secondary site for DR purposes. The replication needs to be asynchronous, including all delete marker versions. How can ABC achieve this requirement in Objects?
Explanation:
The proper and supported way to replicate object data for DR purposes across Nutanix Objects instances is to use bucket replication rules. These rules enable:
Asynchronous replication
Automatic handling of object versions and delete markers
Site-to-site resilience
Configuration and monitoring through Prism Central
Because both instances are registered to the same Prism Central, the administrator can easily create a replication rule on the source bucket that designates the destination bucket on the remote site.
Question 6 of 60
6. Question
An administrator is searching for a tool that includes these features:
Permission Denials
Top 5 Active Users
Top 5 Accessed Files
File Distribution by Type
Which Nutanix tool could the administrator choose?
Explanation:
The correct answer is File Analytics, which is part of Nutanix Files and provides visibility into file activity, user behavior, and data distribution. It includes dashboards and reports such as:
Permission Denials: identifies access issues or failed attempts.
Top Active Users / Accessed Files: shows usage trends and helps audit activity.
File Type Distribution: assists with storage optimization and data classification.
These features enable administrators to monitor security, analyze user behavior, and manage file storage efficiently; these capabilities are not available in Files Console, File Server Manager, or Prism Central.
Question 7 of 60
7. Question
An administrator has performed an AOS upgrade, but noticed that compression on containers is not happening. How long is the delay before compression begins on the Files container?
Explanation:
After an AOS upgrade, the system introduces a mandatory 60-minute delay before compression resumes on Files containers. This delay ensures that the system has enough time to stabilize after the upgrade. Even if the administrator had manually configured a different compression delay earlier, it is reset to 60 minutes after an upgrade. Note that while this delay can be adjusted, 60 minutes is the minimum recommended value for compression to work optimally in Nutanix Files.
Question 8 of 60
8. Question
What is the binary image extension of File Analytics?
Explanation:
In the context of File Analytics within the Nutanix ecosystem, QCOW2 is the correct binary image format. It is used for virtual machine disk images and provides features such as dynamic resizing and support for copy-on-write operations, making it suitable as the underlying storage format in the File Analytics environment.
Question 9 of 60
9. Question
What is the action required to allow the deletion of file server audit data in Data Lens?
Explanation:
In Nutanix Data Lens, audit data from a file server is retained based on a configured data retention period. By default, audit data cannot be deleted manually; it is purged automatically after the retention period expires. To allow the deletion of audit data, you must update (reduce) the data retention period. This setting defines how long Data Lens retains audit logs for a file server. Once the retention period expires, the system automatically deletes data older than the specified duration.
Question 10 of 60
10. Question
When deploying Nutanix Unified Storage, which of the following components provides the storage resources for virtual machines?
Explanation:
The Nutanix Storage Controller, also known as the Controller VM (CVM), is the core component that provides storage services in a Nutanix environment. When deploying Nutanix Unified Storage (including Files, Volumes, and Objects), it is the Storage Controller that aggregates local storage (SSDs and HDDs) across the cluster and presents it as a distributed storage fabric. For virtual machines, the CVM is responsible for:
Managing I/O operations
Providing data locality
Handling features like deduplication, compression, replication, and snapshots
This makes the Storage Controller the key backend that virtual machines interact with when they need storage resources.
Question 11 of 60
11. Question
ABC company is currently using Nutanix Objects 3.2 with a single Object Store and a single S3 bucket that was created as a repository for their data protection (backup) application. In the near future, additional S3 buckets will be created as this was requested by their DevOps team. After facing several issues when writing backup images to the S3 bucket, the vendor of the data protection solution found the issue to be a compatibility issue with the S3 protocol. The proposed solution is to use an NFS repository instead of the S3 bucket as backup is a critical service, and this issue was unknown to the backup software vendor with no foreseeable date to solve this compatibility issue. What is the fastest solution that requires the least consumption of compute capacity (CPU and memory) of their Nutanix infrastructure?
Explanation:
The company is currently using Objects 3.2, which does not support NFS access. Starting with Objects 3.4, NFS v3 access was introduced as a feature that allows an Object Store bucket to be mounted via NFS, making it compatible with legacy backup solutions that do not fully support the S3 protocol. Given that the issue is protocol compatibility between the S3 bucket and the backup application, the most efficient solution that avoids deploying new services (which would consume extra CPU and memory) is to upgrade the existing Objects deployment to a version that supports NFS v3 access (i.e., 3.4 or later), then create a new bucket with NFS v3 access enabled. This approach is:
Fastest: An upgrade is generally quicker than deploying Nutanix Files.
Light on resources: Does not require new file server VMs (as Nutanix Files would).
Compatible: Solves the S3 protocol compatibility issue by using NFS access.
Future-proof: Leaves the existing Object Store architecture mostly intact for DevOps use cases.
Question 12 of 60
12. Question
After configuring Smart DR, an administrator is unable to see the policy in the Policies tab. The administrator has confirmed that all FSVMs are able to connect to Prism Central via port 9440 bidirectionally. What is the most likely reason for this issue?
Explanation:
For Smart DR policies to be displayed and managed in Prism Central, both the primary and recovery FSVM clusters must be running the same Files version. A version mismatch can result in the policy not being registered or visible in the Policies tab, despite network connectivity and Prism Central communication being intact. This is a known and documented prerequisite for enabling Smart DR replication. While network ports like 9440 (PC communication) and 7515 (replication traffic) are essential for ongoing operation, they are not the blocker for policy visibility in this scenario.
Question 13 of 60
13. Question
What information can be obtained through file access auditing in Nutanix Unified Storage?
Explanation:
File access auditing logs actions such as read, modify, and delete, and records who performed them over SMB/NFS.
Question 14 of 60
14. Question
An administrator is responsible for creating an Objects store with the following settings: Medium Performance (around 10,000 requests per second), 10 TiB capacity, Versioning disabled, Hosted on an AHV cluster. Immediately after creation, the administrator is asked to change the name of the Objects store. How will the administrator achieve this request?
Correct
In Nutanix, once an Objects store is created, the name cannot be modified. The solution to change the name is to delete the existing store and create a new one with the desired name. The other options do not provide a valid method for renaming an Objects store.
Question 15 of 60
15. Question
An administrator successfully installed Nutanix Objects and was able to create a bucket. When using the reference URL to access this object store, the administrator is unable to write data in the bucket when using an Active Directory account. What action should the administrator take to resolve this issue?
Correct
Nutanix Objects integrates with Active Directory through IAM services, allowing administrators to control access at the bucket level. If a user cannot write data despite being able to access the bucket, it likely means the bucket policy does not permit write actions for that user. In some cases, usernames with special characters (which may be altered internally) require administrators to adjust policies using the modified form. Reviewing and updating the bucket's sharing policies ensures proper write access is granted.
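As a hedged illustration of the kind of policy adjustment described above (the bucket name, principal, and exact policy shape are placeholders, not values from this question), an S3-style bucket policy granting write access can be assembled with Python's standard json module:

```python
import json

# Hypothetical example: an S3-style bucket policy granting write (PutObject)
# access to a directory user. The bucket name and the internally normalized
# username are placeholders, not values from the exam item.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": ["ad-user_example"]},  # AD username as normalized by IAM
            "Action": ["s3:PutObject", "s3:GetObject"],
            "Resource": ["arn:aws:s3:::dev-bucket/*"],
        }
    ],
}

print(json.dumps(policy, indent=2))
```

The key point is that the Principal must match the username form the IAM service actually stores, which may differ from the raw Active Directory name when special characters are involved.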
Question 16 of 60
16. Question
Which two minimum permission roles must a non-administrator user have to enable Nutanix Objects?
Correct
To enable Nutanix Objects, a non-admin user must be able to configure cluster-level settings (via Cluster Admin) and manage categories (via Category Admin) used in Objects. The other roles do not provide the required permissions.
Question 17 of 60
17. Question
An administrator has received alert A130358 – ConsistencyGroupWithStaleEntities, with alert details as shown below:
Which scenario is causing the alert and needs to be solved to allow the entities to be protected?
Correct
One or more VMs or Volume Groups belonging to the Consistency Group is part of multiple Recovery Plans configured with a Witness.
Incorrect: This scenario does not directly correlate with the “ConsistencyGroupWithStaleEntities” alert. The issue here pertains to stale entities, not Recovery Plans or Witness configurations.
One or more VMs or Volume Groups belonging to the Consistency Group contains stale metadata.
Incorrect: While stale metadata can cause issues, this specific alert is triggered when entities within a Consistency Group have been deleted, not by stale metadata.
Correct answer
One or more VMs or Volume Groups belonging to the Consistency Group may have been deleted.
Correct: The alert A130358 is typically triggered when a VM or Volume Group that was previously part of a Consistency Group has been deleted. This leaves “stale” entries in the consistency group, which prevents protection from being applied.
The logical timestamp for one or more of the Volume Groups is not consistent between clusters.
Incorrect: This scenario might cause other types of issues, but it does not specifically trigger the “ConsistencyGroupWithStaleEntities” alert. The issue here is related to deleted entities within the group, not timestamp inconsistencies.
Details:
The A130358 – ConsistencyGroupWithStaleEntities alert occurs when a VM or Volume Group that was part of a Consistency Group has been deleted. When these entities are deleted, the system has stale references in the consistency group that need to be resolved to ensure proper protection.
To address the issue, the administrator should verify whether any VMs or Volume Groups were deleted and, if so, remove them from the consistency group configuration.
Question 18 of 60
18. Question
According to the exhibit.
An administrator has created a volume and needs to attach it to a Windows host via iSCSI. The Data Services IP has been configured in the MS iSCSI initiator, but no targets are visible.
What seems to be the source of this problem?
Correct
The host’s IP Address is not authorized to access the volume
Incorrect: The IP address of the host needs to be authorized for iSCSI access, but the issue here seems more related to the target visibility. If the IP were unauthorized, you’d typically receive a different error or no connection attempt, not simply an invisible target.
Correct answer
The host’s IQN is not authorized to access the volume
Correct: The iSCSI Qualified Name (IQN) of the Windows host must be authorized to access the volume. If the IQN is not explicitly authorized on the target (volume) configuration, the target will not be visible. This is a common cause of the issue where targets are not showing up despite the Data Services IP being configured.
The CHAP password configured on the client is incorrect
Incorrect: CHAP (Challenge Handshake Authentication Protocol) issues would typically result in a failed connection rather than an invisible target. If the password is incorrect, you’d see an authentication failure message rather than simply no targets being visible.
The CHAP Authentication has not been configured on the client
Incorrect: CHAP Authentication is used for secure login but is not required for every iSCSI setup. If CHAP is not configured, you may experience authentication failures, but this would not cause the target to be invisible. The issue here appears to be related to the authorization of the IQN or other access settings.
Details:
The most likely cause of the issue is that the host’s IQN is not authorized to access the volume. In iSCSI environments, each initiator (the client, in this case, the Windows host) has a unique IQN (iSCSI Qualified Name). If the IQN is not explicitly authorized on the storage target (volume), the target will not appear in the iSCSI initiator, even though the Data Services IP is configured correctly.
Question 19 of 60
19. Question
An administrator has discovered that the File server services are down on a cluster. Which service should the administrator investigate to solve this issue?
Correct
The minerva_nvm service is responsible for File Server services on a Nutanix cluster. If this service is down, it can cause the File Server services to be down. One possible reason for minerva_nvm going down could be overdue tasks, such as quota_stats_collector_and_monitor. It's also possible for the minerva_nvm service to go down during upgrades to Files 5.1, or while running tests, especially those involving multicluster and smartsync.
Question 20 of 60
20. Question
A Nutanix administrator has received alert A160068 – AFSDuplicateIPDetected, with alert details as shown below:
Which error log should the administrator check to determine the related Duplicate IP address involved?
Correct
solver.log
Incorrect: solver.log is used for analyzing metadata-related decisions, not networking or IP conflict issues.
minerva_cvm.log
Incorrect: minerva_cvm.log contains cluster-wide management service info, not specific to IP conflicts.
tcpkill.log
Incorrect: tcpkill.log records TCP drops and conflicts, not duplicate IP detection for FSVMs.
Correct answer
minerva_nvm.log
Correct: The minerva_nvm.log file contains information related to duplicate IP address detection for File Server VMs (FSVMs).
Details:
To find the duplicate IP address involved in the A160068 – AFSDuplicateIPDetected alert, the administrator should review the minerva_nvm.log. The minerva_nvm.log file contains information related to duplicate IP address detection for File Server VMs (FSVMs). Specifically, the administrator should look for error messages similar to “Some other host already uses IP address” or “Detected duplicate IP address” along with the actual IP address flagged as a duplicate.
Additionally, the administrator can use the information from the alert body to identify the affected file server and its IP addresses. If the alert body is empty, as in this case, other logs like minerva_cvm.log can be used to help identify the File Server and its IP addresses, which can help narrow the search in minerva_nvm.log.
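The search described above can be sketched with a short Python snippet. The sample log content here is made up for illustration, since the real minerva_nvm.log lives on the FSVMs and its exact wording may vary:

```python
import re

# Hypothetical sample lines in the style of the minerva_nvm.log messages
# described above; real log wording and timestamps may differ.
sample_log = """\
2024-05-01 10:02:11 INFO  share_manager: health check ok
2024-05-01 10:02:14 ERROR network: Detected duplicate IP address 10.10.20.15
2024-05-01 10:02:15 ERROR network: Some other host already uses IP address 10.10.20.15
"""

# Match either of the two error phrasings and capture the IPv4 address.
pattern = re.compile(
    r"(?:Detected duplicate IP address|already uses IP address)\s+(\d+\.\d+\.\d+\.\d+)"
)
duplicates = {m.group(1) for line in sample_log.splitlines()
              if (m := pattern.search(line))}
print(duplicates)  # the IP(s) flagged as duplicates
```

In practice the same pattern can be grepped directly on the FSVM, with the set above narrowing multiple hits down to the unique conflicting addresses.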
Question 21 of 60
21. Question
A DevOps team is working on a new processing application and requires a solution where they can upload the latest version of the code for testing via API calls. Older iterations should be retained as newer code is developed and tested. Which solution is the best choice to satisfy the DevOps team's requirements?
Correct
The best solution is Nutanix Objects with Versioning enabled. It supports API-based uploads, allowing developers to programmatically manage and test new code versions. The versioning feature automatically retains older iterations of files whenever a new version is uploaded, making rollback and historical access simple.
Other options are less suitable:
NFS Share: allows file storage but lacks automated version control.
SMB Share with Previous Version: limited to Windows environments and not API-driven.
Volume Group (iSCSI): designed for block storage (VMs, databases), not versioned file management.
Thus, Objects with Versioning provides the scalability, automation, and version retention ideal for DevOps workflows.
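Because Nutanix Objects exposes an S3-compatible API, the DevOps workflow can be sketched with a generic S3 client. This is a minimal sketch assuming boto3 is available; the endpoint URL, bucket name, and credentials are placeholders, not values from this question:

```python
def upload_code_version(endpoint_url, bucket, key, data, access_key, secret_key):
    """Upload a new iteration of a code artifact to a version-enabled bucket.

    With versioning enabled on the bucket, each call stores a new object
    version while retaining the previous ones, so older iterations remain
    available for listing and rollback. All connection parameters here are
    placeholders for a real Objects deployment.
    """
    import boto3  # imported lazily; assumes boto3 is installed

    s3 = boto3.client(
        "s3",
        endpoint_url=endpoint_url,  # e.g. the Objects store reference URL
        aws_access_key_id=access_key,
        aws_secret_access_key=secret_key,
    )
    resp = s3.put_object(Bucket=bucket, Key=key, Body=data)
    # When versioning is enabled, the response carries a VersionId that
    # identifies this iteration for later retrieval or rollback.
    return resp.get("VersionId")
```

Repeated calls with the same key then accumulate versions, which is exactly the retain-older-iterations behavior the question asks for.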
Question 22 of 60
22. Question
A healthcare administrator configures a Nutanix cluster with the following requirements: Enabled for long-term data retention of large files Data should be kept for two years Deletion or overwriting of the data must not be allowed Which Nutanix feature should the administrator employ to satisfy these requirements?
Correct
In highly regulated environments like healthcare, data immutability is crucial for compliance. Nutanix Objects with WORM ensures that once data is written, it cannot be modified or deleted for the defined retention period. Versioning complements this by storing previous versions of files, ensuring full traceability and historical access. This setup directly meets all requirements: long-term retention, immutability, and data protection.
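To make the retention requirement concrete, here is a hedged sketch using the S3 Object Lock API, which expresses the same WORM semantics (a two-year compliance retention). Whether a given Objects release honors this exact API call is an assumption; WORM can also be enabled per bucket in the Objects UI, and all parameters below are placeholders:

```python
def configure_two_year_worm(endpoint_url, bucket, access_key, secret_key):
    """Sketch: apply a 2-year compliance-mode retention via S3 Object Lock.

    COMPLIANCE mode means no user can overwrite or delete locked objects
    until the retention period expires, matching the healthcare requirement.
    Endpoint, bucket, and credentials are placeholders.
    """
    import boto3  # imported lazily; assumes boto3 is installed

    s3 = boto3.client(
        "s3",
        endpoint_url=endpoint_url,
        aws_access_key_id=access_key,
        aws_secret_access_key=secret_key,
    )
    s3.put_object_lock_configuration(
        Bucket=bucket,
        ObjectLockConfiguration={
            "ObjectLockEnabled": "Enabled",
            "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": 2}},
        },
    )
```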
Question 23 of 60
23. Question
ABC organization deployed Files in multiple sites, including different geographical locations across the globe. ABC has the following requirements to improve its data management lifecycle: Provide a centralized management solution. Automate archiving tier policies for compliance purposes. Protect the data against ransomware. Which solution will satisfy the organization's requirements?
Correct
Data Lens satisfies all the organization's requirements. It provides centralized management and automation of archiving tier policies for compliance purposes. Additionally, it protects data against ransomware. While Prism Central (PC) can deploy and manage Files clusters, it doesn't specifically address data lifecycle management or ransomware protection in the same way Data Lens does. Files Analytics provides insights but not direct control over tiering or ransomware protection. File Manager is a tool within Files for general management, not multi-site lifecycle automation.
Question 24 of 60
24. Question
According to the exhibit:
A Nutanix Files administrator needs to generate a report listing the files matching those in the above exhibit.
What is the most efficient way to complete this task?
Correct
Use Report Builder in Files Console.
Incorrect: Files Console does not have a Report Builder feature. It focuses on managing file shares, protocols, and other file-related settings but does not provide report generation capabilities in the form described.
Correct answer
Use Report Builder in File Analytics.
Correct: File Analytics provides a Report Builder specifically designed to generate detailed reports, including file metadata, access patterns, and other file system behaviors. This tool is the most efficient way to generate custom reports based on file data.
Create a custom report in Prism Central.
Incorrect: Prism Central does not provide the same level of file-specific reporting features as File Analytics. While it handles cluster-wide management and monitoring, it doesn’t have the same tools for generating reports on file-related data like File Analytics.
Create a custom report in Files Console.
Incorrect: Files Console, while integral for managing file shares and protocols, lacks built-in capabilities for custom report creation. It does not have a “custom report” functionality like File Analytics.
Details:
File Analytics is specifically designed for the analysis and reporting of file data. Its Report Builder feature allows administrators to generate detailed and customizable reports, which can include specific file types, access patterns, metadata, and other important information related to files in the system. This is the most efficient way to create a report that matches the file criteria described in the exhibit. Prism Central and Files Console do not provide the same level of file-specific reporting functionality, making File Analytics the optimal choice for this task.
Question 25 of 60
25. Question
An administrator needs to allow individual users to restore files and folders hosted in Files. How can the administrator satisfy this requirement?
Correct
To allow individual users to restore files and folders in Files, enable Self-Service Restore (SSR) on the shares/exports. SSR allows users to open, copy, and restore previous versions of files. It is enabled at the share level through the Prism web console. SSR snapshots are taken at the share level and stored within the share itself, consuming File Server storage. The SSR snapshot schedule is distinct from the Protection Domain snapshot schedule and is configured in Prism under the File Server dropdown under “Protect”. Enabling SSR on shares/exports gives users the ability to perform file-level restores.
Question 26 of 60
26. Question
An administrator wants to ensure maximum performance, throughput, and redundancy for the company's Oracle RAC on Linux implementation, while using the native method for securing workloads. Which configuration satisfies these requirements?
Correct
Nutanix Volumes provide native iSCSI block storage, ideal for Oracle RAC on Linux. When combined with MPIO, a single vDisk can deliver the required redundancy and performance. MPIO ensures multiple network paths are available for fault tolerance and throughput, and Nutanix supports exporting these volumes both to physical and virtual machines, aligning with Oracle RAC requirements.
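On the Linux guest, MPIO for an iSCSI-attached vDisk is typically handled by device-mapper multipath. A minimal sketch using RHEL-family tooling; the target IQN and portal IP are placeholders, not values from the question:

```shell
# Enable device-mapper multipath with the multipathd service (RHEL-family helper).
mpathconf --enable --with_multipathd y

# Log in to the Nutanix iSCSI target (IQN and Data Services IP are illustrative).
iscsiadm -m node -T iqn.2010-06.com.nutanix:example-vg -p 10.0.0.50:3260 --login

# Verify that multiple paths to the vDisk are active.
multipath -ll
```

In practice the multipath device (e.g. /dev/mapper/mpatha) is then used as the shared disk for Oracle RAC, rather than the individual /dev/sdX paths.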
Question 27 of 60
27. Question
How can an administrator block specific file types from being uploaded or accessed in Nutanix Files?
Correct
Nutanix Files uses file content filtering to block file types, introduced in version 3.6 and enhanced in 3.8.
Question 28 of 60
28. Question
What is the result of an administrator applying the lifecycle policy “Expire current objects after # days/months/years” to an object with versioning enabled?
Correct
If versioning is enabled, applying the “Expire current objects after # days/months/years” lifecycle policy will delete the current version of the object after the specified time. Past versions of the object will also be deleted after the specified time. Lifecycle policies determine what happens to an object after a certain amount of time passes.
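Since Nutanix Objects exposes an S3-compatible API, a lifecycle rule of this kind can be expressed with standard S3 tooling. A hedged sketch; the endpoint URL, bucket name, and 30-day period are placeholders, not values from the question:

```shell
# Write a lifecycle rule that expires current versions after 30 days
# and cleans up noncurrent (past) versions after 30 days as well.
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "expire-current-and-past-versions",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Expiration": { "Days": 30 },
      "NoncurrentVersionExpiration": { "NoncurrentDays": 30 }
    }
  ]
}
EOF

# Apply the rule to a bucket on the Objects endpoint (both names illustrative).
aws s3api put-bucket-lifecycle-configuration \
  --endpoint-url https://objects.example.com \
  --bucket my-bucket \
  --lifecycle-configuration file://lifecycle.json
```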
Question 29 of 60
29. Question
What is the most efficient way to allow users to restore their files without administrator intervention across multiple Files shares?
Correct
Enabling Self Service Restore through the Files Console allows users to restore previous file versions through share-level snapshots, without requiring administrator intervention.
Question 30 of 60
30. Question
Within the Prism Central > Services > Objects menu option, what is the correct task order for creating an Object Store?
Correct
The correct sequence for creating an Object Store in Prism Central is:
1. Enable Object Store Services: this activates the Objects functionality within Prism Central.
2. Download the Creation Checklist: this step provides guidance on prerequisites and configuration details.
3. Create Object Store: once prerequisites are verified, the Object Store can be created.
Following this order ensures that the service is fully active before setup begins, aligning with Nutanix best practices and preventing configuration or dependency errors.
Question 31 of 60
31. Question
What prerequisite must be fulfilled before a Nutanix Files SMB share can be used?
Correct
Before you can use an SMB share in Nutanix Files, you must configure Active Directory integration, as SMB authentication relies on AD users and groups to control access. This is a mandatory setup step before any SMB share becomes usable.
Question 32 of 60
32. Question
An administrator is enabling Nutanix Volumes for use with workloads within a Nutanix-based environment.
Based on the exhibit, which field must be populated for Nutanix Volumes to function properly?
Correct
Virtual IP
Incorrect: A Virtual IP can be used in some management or high availability setups, but it is not required for Nutanix Volumes to function. The key requirement for iSCSI connections is the IP address that handles iSCSI traffic, which is configured separately.
FQDN
Incorrect: Fully Qualified Domain Names (FQDNs) are not required for Nutanix Volumes. iSCSI clients connect to an IP address, not a DNS name. This is unlike file-based access (e.g., SMB, NFS) where FQDNs are more common.
Virtual IPv6
Incorrect: Nutanix Volumes does not require IPv6 to function. All iSCSI connectivity in typical environments is handled over IPv4 using the configured Data Services IP. IPv6 support may exist in some environments but is not a mandatory field.
Correct answer
iSCSI Data Services IP
Correct: This is the required field when enabling Nutanix Volumes. It defines the IP address used by external clients to connect to the Nutanix cluster over iSCSI. This IP must be reachable by the iSCSI initiators (e.g., on a physical server or VM) and is essential for establishing the block-level storage connection.
Details:
The iSCSI Data Services IP is the key component that allows external hosts to discover and connect to volume groups presented by the Nutanix cluster. When configuring Nutanix Volumes, this IP must be provided so clients can establish iSCSI sessions with the Nutanix backend. It is the required field that enables the functionality of the Volumes service. Other fields like FQDN, virtual IP, or IPv6 are not necessary for the core functionality of Nutanix Volumes.
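As a sketch of how a Linux initiator would use the Data Services IP for discovery and login; the IP address and target IQN below are placeholders, not values from the question:

```shell
# Discover all iSCSI targets exposed by the cluster via the Data Services IP.
iscsiadm -m discovery -t sendtargets -p 10.0.0.50:3260

# Log in to one of the discovered targets (IQN is cluster-specific).
iscsiadm -m node -T iqn.2010-06.com.nutanix:example-vg -p 10.0.0.50:3260 --login
```

The key point the question tests: clients only ever need the single Data Services IP, and the cluster redirects sessions to a healthy CVM behind it.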
Question 33 of 60
33. Question
After completing a failover to the disaster recovery (DR) site, what steps should an administrator take to complete the migration of volume groups? (Choose two)
Correct
To finalize the VG migration during a planned failover scenario with synchronous replication, the administrator must activate the protection domain on the DR site and power on the VMs there. This ensures the DR site takes full ownership and continues serving data without interruption.
Question 34 of 60
34. Question
For how long does File Analytics retain and analyze data?
Question 35 of 60
35. Question
Which of the following is a prerequisite for enabling and using Smart DR?
Correct
Smart DR requires that primary and recovery file servers support the same protocols for compatibility, and they may have different FSVM instance numbers on each site. Domain names and the number of file servers managed are not strict requirements.
Question 36 of 60
36. Question
What is the maximum number of configured snapshots supported for Self-Service Restore (SSR) in a Nutanix file server?
Correct
Nutanix Files supports 50 configured snapshots for Self-Service Restore (SSR). This limit is considerably lower than some competitors' offerings such as NetApp (1,023 snapshots per volume), PowerScale/Isilon (20K per cluster), and Pure Storage (400K per cluster). Additionally, there is a limit of 30,000 SSR snapshots per node. Any attempt to create more than 50 configured SSR snapshots will result in an error message.
Question 37 of 60
37. Question
An administrator plans to expand an Object Store by adding a secondary cluster. Network evaluation shows consistent latency of approximately 8 milliseconds. Is this latency level acceptable for configuring a secondary Nutanix Objects cluster?
Correct
Clusters in an Object Store federation must meet strict latency requirements (<5 ms) to prevent issues in replication and access.
Question 38 of 60
38. Question
An administrator needs to configure a bare-metal server to boot from a Nutanix Volumes-hosted virtual disk over the network. Which volume group configuration setting must the administrator specify?
Correct
To boot a bare-metal server from a Nutanix Volumes virtual disk, you must enable external client access on the volume group. This allows the server to connect to the iSCSI target during boot.
Question 39 of 60
39. Question
An administrator is concerned that storage in the Nutanix File Server is being used to store personal photos and videos. What tool or method can the administrator use to identify and confirm such storage usage?
Correct
To determine if users are storing personal photos and videos in Nutanix Files, you need visibility into the types of files stored. This is exactly what the File Distribution by Type widget in File Analytics provides. It categorizes files by MIME type or extension, making it easy to identify if the storage is being misused for personal media.
Question 40 of 60
40. Question
Several users are facing access issues when attempting to connect to a shared folder using both SMB and NFS protocols. What should the administrator check to ensure consistent access permissions across both protocols?
Correct
When users face access issues on a share accessed via both SMB and NFS, the user mapping settings for multiprotocol shares are the key configuration to check. This mapping ensures that user identities are correctly translated between Windows and Unix authentication models, allowing consistent permissions across both protocols. Other settings like DNS, backup, or RBAC are less directly related to cross-protocol access issues.
Question 41 of 60
41. Question
An administrator is required to provide a summary of metrics to the Security team. The entity information being asked for by the Security team is as follows: – Total folders where permissions are tracked – Size of those folders – Total unique users – Total unique groups Which product and dashboard combination provides all this information?
Correct
Both File Analytics and Data Lens offer rich insights into Nutanix Files usage and access, but the Data Lens Footprint widget is better aligned with security-focused summaries. It is designed to visualize folder sizes, user and group access, and permission tracking across the file system. While File Analytics may provide partial metrics like folder sizes and user/group counts, it does not unify all the required information, especially folder-level permission tracking, in a single dashboard. The Data Lens Footprint widget offers the most comprehensive, security-aligned view suitable for the Security team's needs.
Question 42 of 60
42. Question
An administrator would like to load balance an SMB share across multiple FSVMs. Which feature should be enabled to achieve this?
Correct
To load balance an SMB share across multiple FSVMs, the administrator should enable the Distributed feature. This allows the SMB share to serve clients from multiple FSVMs at once, improving throughput and resilience through active distribution of load.
Question 43 of 60
43. Question
An administrator has configured a volume group with four vDisks and needs them to be load-balanced across multiple CVMs. The volume group will be directly connected to the VM. What configuration step must the administrator take to enable this?
Correct
To ensure that the volume group with multiple vDisks is load-balanced across multiple CVMs, the administrator must enable load-balancing using acli. This configures the volume group to distribute I/O load properly, enhancing performance and reliability.
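A minimal sketch of enabling this from a CVM with acli; the volume group name "vg1" is a placeholder, and the flag shown follows the acli vg.update convention:

```shell
# From any CVM: enable vDisk load balancing for a directly attached volume group.
# "vg1" is an illustrative volume group name.
acli vg.update vg1 load_balance_vm_attachments=true

# Inspect the volume group to confirm the setting took effect.
acli vg.get vg1
```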
Question 44 of 60
44. Question
An administrator needs to deploy Nutanix Objects on a dark site (offline environment). Which two file bundles must be installed to complete this deployment? (Choose two.)
Correct
On a dark site, no internet access is available, so LCM bundles must be manually uploaded. Both Objects LCM and MSP LCM are essential to deploy and manage Objects services in such environments.
Question 45 of 60
45. Question
An administrator has received a complaint from a user that the Windows VM had a service disruption that caused it to randomly lose access to an iSCSI Volume Group (VG) during a maintenance window of an ESXi-based Nutanix cluster.
The cluster configuration is as follows:
– Six-node cluster:
ESXi IP addresses: 172.20.100.41 to 172.20.100.46
vCenter IP address: 172.20.100.40
CVM IP addresses: 172.20.100.101 to 172.20.100.106
Virtual IP (VIP): 172.20.100.200
Data Services IP (DSIP): 172.20.100.50
The administrator retrieved the configuration from the VM, as shown in the exhibit. After reviewing the VM ISCSI configuration, they found that the iSCSI is not properly configured.
What configuration change should be made to prevent future disruptions?
Correct
Select the Enable multi-path checkbox.
Incorrect: While multipathing is important for redundancy and performance, enabling it alone does not resolve the issue of using an incorrect iSCSI target discovery IP. If the Discovery IP is wrong or incomplete, the volume will not be reachable regardless of multipathing.
Add all missing CVM IPs in Discovery tab.
Incorrect: This was the legacy method in older Nutanix versions, where each CVM’s IP was added manually. However, it is no longer the best practice. Modern clusters use the DSIP for discovery, and using CVM IPs can lead to disruptions if a CVM is unavailable during maintenance.
Correct answer
Remove Discovery IP and configure with DSIP.
Correct: Nutanix recommends using the Data Services IP (DSIP) for iSCSI target discovery. The DSIP acts as a single IP front for all CVMs and ensures high availability. This change prevents disruptions during maintenance or CVM failover events.
Change the Discovery IP to match the configured VIP.
Incorrect: The VIP (Virtual IP) is used for Prism (UI/API) access, not for storage traffic. Using the VIP for iSCSI discovery is not supported and can result in failure to connect to the volume group.
Details:
The iSCSI disruption occurred because the discovery mechanism wasn’t correctly configured. Nutanix clusters use a DSIP specifically for iSCSI data path high availability. By replacing any individual IPs or VIPs with the correct DSIP (172.20.100.50), the administrator ensures the Windows VM maintains continuous access, even during node or CVM maintenance.
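On the Windows VM, repointing the initiator at the DSIP can be sketched with the built-in iSCSI PowerShell cmdlets. The DSIP (172.20.100.50) matches the scenario; the removed portal address is illustrative, and target names are discovered at runtime:

```shell
# Windows PowerShell (run as Administrator on the guest VM)

# Remove the incorrectly configured discovery portal (address illustrative).
Remove-IscsiTargetPortal -TargetPortalAddress 172.20.100.101 -Confirm:$false

# Add the cluster's Data Services IP as the single discovery portal.
New-IscsiTargetPortal -TargetPortalAddress 172.20.100.50

# Reconnect to the discovered target(s), persistently and with multipath enabled.
Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true -IsPersistent $true
```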
Question 46 of 60
46. Question
A Nutanix Files cluster is unreachable after an administrator changed its name. What steps should the administrator take to resolve this issue and restore connectivity?
Correct
When an administrator changes the name of a Nutanix Files cluster, any existing DNS entries referencing the old name become invalid. To restore reachability, the administrator needs to update DNS with the new name, specifically by removing the old entries and adding new ones pointing to the correct IP addresses of the FSVMs. This ensures that client systems and network services can resolve the new hostname correctly. Therefore, the correct answer is Option A.
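On a Windows DNS server, the record swap described above can be done with `dnscmd`. The server name, zone, hostnames, and FSVM IPs below are placeholders for illustration, not values from the scenario:

```shell
# Run from an elevated prompt on (or against) the DNS server "dns01".

# Remove stale A records for the old file-server name
dnscmd dns01 /RecordDelete corp.example.com files-old A /f

# Add A records for the new name, one per FSVM external IP
dnscmd dns01 /RecordAdd corp.example.com files-new A 10.20.30.11
dnscmd dns01 /RecordAdd corp.example.com files-new A 10.20.30.12
dnscmd dns01 /RecordAdd corp.example.com files-new A 10.20.30.13
```

Clients then resolve the new name across all FSVMs; Nutanix Files can also push these updates automatically when DNS is integrated with the file server configuration.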
Question 47 of 60
47. Question
Which two use cases are appropriate for Nutanix Volumes? (Choose two)
Correct
Correct:
C. Boot over iSCSI: This is a valid and supported use case for Nutanix Volumes. Nutanix Volumes provides block storage via iSCSI, which allows physical or virtual machines to boot directly from volumes hosted on Nutanix clusters. This is especially useful in bare-metal environments where diskless servers need centralized boot storage.
E. iSCSI for Microsoft Exchange Server: Correct. Nutanix Volumes is designed to support enterprise applications like Microsoft Exchange that require high-performance block storage. By exposing volumes over iSCSI, Exchange servers can store mailbox databases and logs on Nutanix Unified Storage, benefiting from resiliency, scalability, and performance.
Incorrect:
A. IAM Synchronization: Incorrect. IAM (Identity and Access Management) synchronization is a directory or identity service function, not related to block storage. Nutanix Volumes does not handle IAM tasks, and this is outside the scope of Unified Storage.
B. Non-bare-metal environments: Incorrect. While Nutanix Volumes can technically be used in virtualized environments, its primary use case is bare-metal or external host access via iSCSI. Virtual machines on AHV typically use native disk provisioning, not Volumes.
D. CVM Replication: Incorrect. CVM (Controller VM) replication is handled by Nutanix Data Protection and DR features, not Nutanix Volumes. Volumes are for block storage, not VM-level replication or backup.
Question 48 of 60
48. Question
An administrator is configuring an object store for backups. The administrator creates the S3 bucket as the backup target. While creating the Nutanix Objects endpoint to the newly created S3 bucket, the following error is observed: Method Not Allowed: An object from the object-lock enabled bucket can not be modified or deleted unless the retention period is elapsed. What is the most likely cause of this error?
Correct
The "Method Not Allowed" error on an object-lock-enabled bucket indicates that WORM (Write Once Read Many) is active and that the system is blocking changes or deletions because the retention policy is still in force. This behavior is expected and consistent with how object lock works in S3-compatible storage systems.
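The retention check behind that error can be modeled in a few lines. This is an illustrative sketch of S3-style object-lock semantics, not the Nutanix Objects API; the type and function names are invented for the example:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone


@dataclass
class LockedObject:
    key: str
    retain_until: datetime  # object-lock retention expiry


def try_delete(obj: LockedObject, now: datetime) -> str:
    # WORM semantics: mutation and deletion are refused until retention elapses.
    if now < obj.retain_until:
        return "MethodNotAllowed: retention period has not elapsed"
    return "Deleted"


now = datetime(2024, 1, 1, tzinfo=timezone.utc)
backup = LockedObject("backups/db.bak", retain_until=now + timedelta(days=30))
print(try_delete(backup, now))                       # delete refused while locked
print(try_delete(backup, now + timedelta(days=31)))  # retention elapsed -> allowed
```

The backup application sees exactly this split: writes of new objects succeed, but overwrite or delete calls fail until the configured retention window has passed.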
Question 49 of 60
49. Question
To secure the Nutanix Files environment, an administrator wants to detect suspicious user activities such as file changes, unauthorized access attempts, and denied permissions. What is the most effective method for monitoring this behavior?
Correct
To detect suspicious user activity in Nutanix Files, deploying File Analytics and enabling anomaly rules is the most effective method. It allows real-time monitoring and alerting based on behavioral baselines, ensuring proactive security. Other methods offer visibility but lack automation or specificity needed for true behavioral security monitoring.
Question 50 of 60
50. Question
Which Nutanix Files feature allows for enforcing strict capacity limits for individual users?
Correct
To enforce strict capacity limits for individual users, the system must use a quota policy with a hard quota limit. This ensures users cannot exceed their allocated storage, preventing overconsumption and guaranteeing resource fairness. Soft limits and storage policies typically offer warnings or configure settings but do not strictly enforce capacity usage per user.
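The hard-versus-soft distinction can be shown with a minimal enforcement sketch. This models the behavior described above rather than any actual Nutanix Files internals; the function name and GiB units are assumptions for illustration:

```python
def enforce_quota(usage_gib: float, write_gib: float, limit_gib: float,
                  hard: bool = True):
    """Return (allowed, message). A hard quota denies writes that would
    exceed the limit; a soft quota allows them but raises a warning."""
    over = usage_gib + write_gib > limit_gib
    if over and hard:
        return False, f"denied: hard quota of {limit_gib} GiB reached"
    if over:
        return True, f"warning: soft quota of {limit_gib} GiB exceeded"
    return True, None


# User at 9 GiB attempts a 2 GiB write against a 10 GiB quota:
print(enforce_quota(9, 2, 10, hard=True))   # hard quota -> write denied
print(enforce_quota(9, 2, 10, hard=False))  # soft quota -> allowed, with warning
```

Only the hard-quota branch actually prevents the user from exceeding the allocation, which is why a hard quota limit is required for strict enforcement.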
Question 51 of 60
51. Question
What type of performance or log data is primarily collected by the objects_collectperf tool?
Correct
objects_collectperf is used to collect performance insights, including system resource usage and snapshots of Objects service UIs for diagnostics.
Question 52 of 60
52. Question
What should be enabled for Windows clients when using the SMB protocol in a Nutanix Files deployment to ensure compatibility and functionality?
Correct
To enhance SMB-based access in Nutanix Files and enable seamless failover and namespace abstraction for users, Distributed File System (DFS) support must be enabled on Windows clients. It is the only correct and relevant option in this context.
Question 53 of 60
53. Question
As a result of migration, an administrator updates the iSCSI Data Services IP address for a Nutanix cluster. What step must an administrator take to maintain client access to iSCSI targets?
Correct
The iSCSI Data Services IP acts as the front-end access point for iSCSI traffic to the Nutanix cluster. If it changes during a migration, clients that previously connected to the old IP will lose access unless they are manually reconfigured to use the new IP. This is a client-side setting and must be updated on each iSCSI initiator to re-establish connectivity.
Question 54 of 60
54. Question
Where are standard tiering policies managed within the Nutanix environment?
Correct
Although Nutanix Data Lens helps in identifying cold or infrequently accessed data, the actual standard tiering policy setup and management is performed in the local Nutanix Files console. As per Nutanix documentation (Files Tiering Reference), administrators configure how and where files are tiered (e.g., from Nutanix Files to Nutanix Objects) directly from the Files server UI. Only one policy per file server is allowed, and this setup is not managed through Prism Central or Data Lens.
Question 55 of 60
55. Question
After upgrading Prism Central on a cluster running Nutanix Objects, what is the appropriate procedure for upgrading the Objects Manager?
Correct
Nutanix Objects Manager must be manually upgraded after Prism Central to ensure the management interface and components are aligned with the latest version.
Question 56 of 60
56. Question
An administrator is managing two Nutanix clusters that are both hosting Nutanix Files instances. One cluster is running out of space. Compression is already enabled, and data can't be deleted. Which feature could help alleviate the space limitation?
Correct
Since data cannot be deleted and compression is already in place, the best option is to free up space by offloading inactive files. Smart Tiering is the only option designed to automatically move cold data from Nutanix Files to a lower-cost storage tier, effectively reducing storage usage on the source cluster.
Question 57 of 60
57. Question
An administrator has been asked to classify data in Data Lens to help with monitoring data usage. What purpose does the file category configuration serve in this context?
Correct
The file category configuration in Nutanix Data Lens is specifically designed to classify files based on their extensions (e.g., .docx, .pdf, .jpg, .mp4, etc.). Administrators can group these into categories like "Documents," "Videos," or "Executables." This classification helps in analyzing storage consumption by file type, identifying unwanted data (like media in a document share), and making informed decisions for data cleanup, archiving, or policy enforcement.
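The extension-to-category mapping works like the following sketch. Data Lens ships its own default categories and lets administrators adjust them; the specific groupings here are assumptions for illustration:

```python
import os
from collections import defaultdict

# Illustrative category map; real deployments would tune these groupings.
CATEGORIES = {
    "Documents": {".docx", ".pdf", ".txt"},
    "Images": {".jpg", ".png"},
    "Videos": {".mp4", ".mov"},
}


def categorize(filename: str) -> str:
    """Map a filename to a category via its extension."""
    ext = os.path.splitext(filename)[1].lower()
    for category, extensions in CATEGORIES.items():
        if ext in extensions:
            return category
    return "Other"


def usage_by_category(files):
    """files: iterable of (name, size_bytes) -> bytes consumed per category."""
    totals = defaultdict(int)
    for name, size in files:
        totals[categorize(name)] += size
    return dict(totals)


print(usage_by_category([("report.pdf", 100), ("clip.mp4", 500), ("tool.exe", 50)]))
# -> {'Documents': 100, 'Videos': 500, 'Other': 50}
```

Aggregating sizes per category is what lets an administrator spot, say, video files dominating a share intended for documents.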
Question 58 of 60
58. Question
An administrator is looking to establish connectivity to cluster storage for iSCSI sessions. However, the administrator is aware that it is not best practice to allow iSCSI sessions to connect directly to CVMs. What should the administrator configure in their cluster to support discovery and serve as the initial access point?
Correct
In Nutanix, the iSCSI Data Services IP is the correct and recommended method for iSCSI discovery and connectivity. It simplifies management, provides high availability, and prevents single points of failure. The other options are related to access control or client configuration, not discovery endpoints.
Question 59 of 60
59. Question
An administrator needs to use Smart DR to ensure that, in the event of an unplanned loss of service, users are redirected automatically to the recovery site. What configuration or feature satisfies this requirement?
Correct
For Smart DR to allow seamless redirection to the recovery site, DNS and Active Directory must be configured correctly so clients resolve the same name to the DR location.
Question 60 of 60
60. Question
An administrator has just registered Prism Element to Prism Central. The administrator cannot see any clusters listed when attempting to create a new object store.
What is the most likely cause?
Correct
Although Prism Element is registered, object stores cannot be added via Prism Central.
Incorrect: Object stores can be created and managed via Prism Central. This option is incorrect because it misstates the capability of Prism Central. In fact, Prism Central is the correct place to create and manage Nutanix Objects.
The administrator did not manually sync Prism Element to Prism Central after registration.
Incorrect: Manual syncing is not required. Synchronization occurs automatically after registration, although it may take a few minutes. There's no manual sync button needed in normal operations.
Prism Element cluster CVMs must be restarted after registration.
Incorrect: There is no need to restart CVMs after registering Prism Element to Prism Central. This would be a drastic and unnecessary step, and not a valid requirement for synchronization or Object Store creation.
Correct answer
Prism Element has not yet completed synchronization with Prism Central.
Correct: After registering a Prism Element to Prism Central, it may take a few minutes for synchronization to complete. Until this sync is finished, the cluster may not appear in dropdowns or lists when attempting to create services like Nutanix Objects. This is the most common and logical cause for the behavior described.
Details:
When you register Prism Element to Prism Central, cluster synchronization is not immediate. During this brief delay, actions such as creating an Object Store will not list the cluster as an available option. Waiting a few minutes usually resolves the issue as Prism Central completes the background sync process.