Linux LPIC-2 (202-450) Practice Test 4
Question 1 of 60
1. Question
Which of the following services should be started on an NFS server first?
Correct: A. portmap
portmap (or rpcbind on newer systems) is a fundamental RPC (Remote Procedure Call) service that must be running before other RPC-based services like NFS can start. portmap maps RPC program numbers to the TCP/UDP ports on which they are listening. When an NFS client wants to connect to an NFS server, it first queries portmap on the server to find out which ports mountd, nfsd, and other NFS-related services are listening on. Without portmap running, clients wouldn't be able to locate and communicate with the NFS services.
Incorrect:
B. mountd: The mountd (NFS mount daemon) service handles mount requests from NFS clients. It needs portmap/rpcbind to be running so that it can register its service with portmap and clients can find it.
C. statd: The statd (NFS status monitor daemon) service is part of the NLM (Network Lock Manager) protocol, used for file locking in NFS. It also relies on portmap/rpcbind to function correctly.
D. nfsd: The nfsd (NFS daemon) is the core NFS server process that serves the actual file data. It also registers its services with portmap/rpcbind and cannot function without it.
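For reference, a quick way to confirm that the port mapper is answering and that the NFS daemons have registered with it is the rpcinfo utility (the server address here is illustrative):
rpcinfo -p 192.168.1.5
A healthy NFS server lists portmapper/rpcbind first, followed by entries such as mountd, nlockmgr and nfs together with the ports they registered.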
Question 2 of 60
2. Question
Which option in named.conf specifies which hosts can request information about server domain names?
Correct:
D. allow-query
The allow-query option in BIND's named.conf file is used to specify which client IP addresses are allowed to query the DNS server for domain name information. This directive can be applied globally (in the options block) or within specific zone blocks, allowing administrators to control access to DNS resolution services.
Incorrect:
A. allowed-hosts: This is not a standard BIND option for controlling query access. While the concept relates to allowed hosts, the specific directive is allow-query.
B. accept-query: This is not a valid BIND option. BIND uses allow-query for this purpose.
C. permit-query: This is not a valid BIND option. Similar to accept-query, it does not exist in BIND's configuration syntax.
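A minimal named.conf fragment showing the directive (the addresses are illustrative):
options {
    allow-query { 127.0.0.1; 192.168.1.0/24; };
};
The same statement can also be placed inside an individual zone block to override the global setting for that zone.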
Question 3 of 60
3. Question
What is the purpose of the PAM management session group?
Correct: D. It sets up the environment for a login session and clears when the user logs off.
The PAM (Pluggable Authentication Modules) session management group is responsible for tasks that need to be performed before a user's session begins and after it ends. This includes setting up the user's environment (e.g., mounting home directories, establishing logging, setting resource limits) when they log in and cleaning up that environment (e.g., unmounting directories, closing log files) when they log out. These actions are crucial for the proper functioning and security of a user's session.
Incorrect:
A. It performs authentication based on a username and password or perhaps some other criteria, such as a biometric scan: This describes the purpose of the authentication management group in PAM. This group verifies the user's identity.
B. It changes the password when the user requests a password change: This describes the purpose of the password management group in PAM. This group handles password-related operations like setting or changing passwords.
C. It validates or denies a login based on non-authentication data, such as an IP address: This describes the purpose of the account management group in PAM. This group checks for access permissions, account validity (e.g., expiry), and restrictions (e.g., based on time or source IP).
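As an illustration, session lines in a typical /etc/pam.d/login stack might look like this (modules and options vary by distribution):
session    required    pam_unix.so
session    required    pam_limits.so
session    optional    pam_mkhomedir.so skel=/etc/skel umask=0077
Here pam_limits.so applies resource limits and pam_mkhomedir.so creates the home directory if it does not yet exist; both are invoked when the session is opened.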
Question 4 of 60
4. Question
You want to export the /home directory to two computers via NFS: remington must have full read/write access, while gentle must have read-only access. How would you set this up in /etc/exports?
Correct: A. /home remington(rw) gentle(ro)
In the /etc/exports file, each line defines an exported directory and the permissions for specific hosts. The syntax is export_directory host(options). /home: This is the directory being exported. remington(rw): This grants the host named remington read/write access to /home; rw is the shorthand for read/write. gentle(ro): This grants the host named gentle read-only access to /home; ro is the shorthand for read-only. This format correctly specifies the directory and the per-host permissions.
Incorrect:
B. [homes] remington(readwrite) gentle(readonly): [homes] is a special share name used in Samba's smb.conf to refer to users' home directories; it is not a literal directory path and is not valid syntax in /etc/exports. Also, readwrite and readonly are not the standard options; rw and ro are.
C. Remington(/home,readwrite) gentle(/home,readonly): The format is incorrect. The directory to be exported (/home) should come first, followed by the host and its options. The options are also not standard (readwrite instead of rw, readonly instead of ro).
D. Remington(/home,rw) gentle(/home,ro): Similar to C, the format is incorrect because the directory /home should precede the host definitions. While rw and ro are correct options, their placement in this syntax is wrong.
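For reference, a complete /etc/exports line often also carries additional options such as sync (the values here are illustrative):
/home    remington(rw,sync)    gentle(ro,sync)
After editing the file, exportfs -ra re-reads /etc/exports, and showmount -e localhost lets you verify what is actually exported.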
Question 5 of 60
5. Question
You want to add a Samba server to an existing Windows network, where users are accustomed to using their full names, including spaces, as usernames. Which file should you edit to allow users to continue using these usernames, while converting them to more conventional and shorter Linux usernames?
Correct: B. The file pointed to by the user map option in smb.conf
The user map option in smb.conf specifies the path to a plain text file that contains mappings between Windows usernames (which can include spaces or special characters) and their corresponding Linux usernames (which typically do not allow spaces). This allows users to log in using their familiar Windows full names while Samba internally translates them to valid Linux usernames.
Incorrect:
A. The file pointed to by the smb passwd file option in smb.conf: The smb passwd file option points to the file that stores Samba's encrypted password database (e.g., /etc/samba/smbpasswd). This file is for password authentication and does not handle username mapping.
C. The /etc/samba/smbpasswd file: This is the default location for Samba's encrypted password database. While it is a critical Samba file, it is used for storing passwords, not for mapping Windows usernames with spaces to Linux usernames. The actual file used for mapping is specified by the user map option.
D. The /etc/samba/username.map file: While a file named username.map might be used in practice, this option is incorrect because the path to this file is what is configured by the user map option in smb.conf. Simply stating the filename without mentioning the user map option is incomplete and implies a fixed path, which isn't always the case. The user map option tells Samba to use this specific file for mapping.
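A sketch of the configuration (current Samba releases spell the parameter "username map"; the path and names below are illustrative):
[global]
   username map = /etc/samba/username.map
And the map file itself, with the Linux name on the left and the Windows name on the right:
jdoe = "John Doe"
asmith = "Alice Smith"
Quotes are needed around Windows names that contain spaces.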
Question 6 of 60
6. Question
Which of the following is true when using the following /etc/pam.d/login file?
Correct: B. Ordinary users will be able to change their password to be blank.
This answer relies on a common PAM module option, nullok, often used with pam_unix.so in the password stack. If nullok is present, a user can set an empty (null) password. While this is a security risk, it is a direct consequence of this specific PAM configuration.
Incorrect:
A. If the control flags for auth were changed to necessary, local users would not be able to log on: PAM control flags (such as required, requisite, sufficient, and optional) determine how the success or failure of a module affects the overall outcome of the PAM stack. necessary is not a standard PAM control flag, but even assuming it implied required or requisite behaviour, changing the flag on a module that handles local users (e.g., pam_unix.so) would only prevent logins if that module failed. Simply changing a flag does not inherently prevent local users from logging on if their credentials are correct and the other modules in the stack allow it; the effect would depend on the rest of the stack.
C. This is the only file needed to set up LDAP authentication on Linux: This is incorrect. While /etc/pam.d/login (or other files in /etc/pam.d/) is where you configure PAM to use an LDAP module (like pam_ldap.so or pam_sss.so), you also need to install the relevant PAM LDAP module, configure the LDAP client (e.g., /etc/ldap/ldap.conf or sssd.conf) with the LDAP server, base DN, and so on, and configure nsswitch.conf to use ldap for the relevant lookups. A single PAM service file is not sufficient for a complete LDAP authentication setup.
D. All users will be authenticated in the LDAP directory: Not necessarily. The /etc/pam.d/login file dictates how authentication for the login service (e.g., console logins) is performed. If pam_ldap.so or pam_sss.so is configured, users might be authenticated against LDAP, but there are usually multiple lines in a PAM stack, and other modules (like pam_unix.so for local users) may still be active. Whether all users are authenticated against LDAP depends on the full PAM configuration and potentially the order in nsswitch.conf; it is common to allow both local and LDAP users.
E. Only local users will be able to log in when the /etc/nologin file exists: When the /etc/nologin file exists, it prevents all non-root users from logging in and displays the content of the file to users attempting to log in. Root is typically exempt from this restriction. Therefore it would prevent most users from logging in, not just allow local ones.
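A minimal sketch of the kind of lines involved (the options shown are illustrative):
auth       required   pam_unix.so nullok
password   required   pam_unix.so nullok sha512
The nullok argument is what relaxes pam_unix's default refusal of blank passwords; without it, empty passwords are rejected.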
Question 7 of 60
7. Question
What is a significant difference between the host and zone keys generated by dnssec-keygen?
Correct: C. Both zone key files (.key / .private) contain a public and private key.
When dnssec-keygen is used to generate either a Key Signing Key (KSK) or a Zone Signing Key (ZSK) for a DNS zone, it creates two files for each key: a .key file and a .private file. The .key file contains the public part of the key pair; this is the part that gets published in the DNS zone as a DNSKEY record. The .private file contains the private part of the key pair; this part is kept secret and is used by the DNS server (e.g., BIND) to cryptographically sign the zone data. Therefore, both the public and private components are generated and stored in separate files for each key type (zone key or host key, if "host key" refers to a specific type of zone key like KSK/ZSK or a key for a host-specific record). In the context of DNSSEC, the primary keys are KSKs and ZSKs, both of which follow this public/private key pair storage convention.
Incorrect:
A. Both host key files (.key / .private) contain a public and private key: This statement is misleading because it implies that each file contains both, which is incorrect. The .key file contains the public key, and the .private file contains the private key. Also, the term "host key" is not a standard DNSSEC key type in the same way KSK and ZSK are. While a host might have its own SSH keys or other cryptographic keys, dnssec-keygen is typically used for zone signing.
B. Host keys must always be generated if DNSSEC is used; the zone keys are optional: This is fundamentally incorrect in the context of DNSSEC. For a zone to be signed with DNSSEC, Zone Signing Keys (ZSKs) are mandatory (to sign resource records), and Key Signing Keys (KSKs) are also mandatory (to sign the DNSKEY records themselves and form the chain of trust). "Host keys" are not a concept that must always be generated for DNSSEC; DNSSEC revolves around ZSKs and KSKs.
D. There is no difference: This is incorrect. There is a significant difference in how these keys are used in DNSSEC (the KSK signs other keys, the ZSK signs records), although the storage mechanism of public/private pairs is similar. The question specifically asks about the files generated, and the structure of public/private key pairs is a key aspect.
E. Zone keys must always be generated if used; host keys are optional: While "zone keys" (ZSKs and KSKs) are indeed mandatory if DNSSEC is used for a zone, the term "host keys" is ambiguous in this context. If it refers to KSKs, then KSKs are also mandatory. If it refers to something else entirely, it is not a direct part of the DNSSEC signing process. The distinction between KSKs and ZSKs is more relevant than a generic "host key" concept for DNSSEC.
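A sketch of typical key generation for a zone (algorithm and key size are illustrative; recent BIND versions also default to ECDSA algorithms):
dnssec-keygen -a RSASHA256 -b 2048 -n ZONE example.com
dnssec-keygen -a RSASHA256 -b 2048 -n ZONE -f KSK example.com
Each invocation produces a pair of files named like Kexample.com.+008+12345.key (the public DNSKEY record) and Kexample.com.+008+12345.private (the signing key), where 008 is the algorithm number and 12345 the key tag.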
Question 8 of 60
8. Question
Which of the following Samba configuration parameters is functionally identical to the "read only = yes" parameter?
Correct: B. writeable = no
The writeable parameter in Samba's smb.conf is the opposite of read only. Setting writeable = no has the exact same functional effect as setting read only = yes: it makes the share read-only for clients, preventing them from writing, creating, or deleting files.
Incorrect:
A. write only = no: This is not a standard Samba parameter. The closest would be writeable = no or read only = yes.
C. read write = no: This is not a standard Samba parameter. The standard parameters are read only (boolean) or writeable (boolean).
D. browseable = no: This parameter controls whether a share is visible when clients browse the network. It does not affect read/write permissions. A share can be non-browseable but still fully writable if accessed directly.
E. write access = no: This is not a standard Samba parameter. The correct parameter for controlling write access is writeable or read only.
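An illustrative share definition (the share name and path are made up) showing the two equivalent spellings:
[reports]
   path = /srv/samba/reports
   read only = yes
   ; the line above is functionally identical to:
   ; writeable = no
Samba also accepts writable as a spelling variant of writeable.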
Question 9 of 60
9. Question
With which command is it possible to list the 192.168.1.5 machine's NFS shares?
Correct: D. showmount
The showmount command is specifically designed to query an NFS server and display information about its exported file systems. When used with the -e (exports) option and the hostname or IP address of the NFS server (e.g., showmount -e 192.168.1.5), it will list the directories that the specified server is exporting and to which clients.
Incorrect:
A. mount: The mount command is used on the client side to mount an NFS share from a server to a local directory. It shows what is currently mounted on the client, not what shares are available on a remote server.
B. exportfs: The exportfs command is used on the NFS server itself to manage its list of exported file systems (e.g., to reread /etc/exports). It is used by the server administrator, not typically by a client to discover shares on a remote server.
C. nmap: nmap is a powerful network scanning tool used for network discovery and security auditing. While nmap can identify open ports (like 2049 for NFS), it does not directly list the exported NFS shares themselves. You would need to use showmount after identifying that NFS is running.
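A sketch of the command and the sort of output it returns (the export list shown is invented):
showmount -e 192.168.1.5
Export list for 192.168.1.5:
/data  192.168.1.0/24
/home  remington,gentle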
Question 10 of 60
10. Question
What does the testparm command confirm regarding the Samba configuration?
Correct: B. The configuration file will be loaded successfully.
The testparm command is a utility provided with Samba that checks the syntax and validity of the smb.conf configuration file. It parses the file, checks for errors, and displays a processed version of the configuration, including any default values applied. It is essentially a syntax checker and a basic validator for the configuration file itself.
Incorrect:
A. Samba services will start automatically when the system is started: testparm does not check system startup scripts (e.g., systemd units, init.d scripts) to determine if Samba will start automatically. That is a system-level configuration, not a Samba configuration file validation.
C. The netfilter configuration of the Samba server does not block any access to the services defined in the configuration: testparm only checks Samba's internal configuration file (smb.conf). It has no knowledge of or ability to interact with the system's firewall (netfilter/iptables) rules. Firewall rules are an external factor that would need to be checked separately.
D. The services will work as expected: testparm performs a syntax check and basic validation of the configuration. It does not test the runtime behavior, network connectivity, user authentication, or actual file sharing functionality. For example, it will not tell you if a shared directory actually exists or if user permissions are correctly set for access. It only confirms that the smb.conf file is syntactically correct and can be parsed by Samba.
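Typical invocations (the explicit path is only needed when checking a non-default file):
testparm
testparm -s /etc/samba/smb.conf
The -s flag skips the "press enter" prompt and dumps the parsed service definitions directly, which is convenient in scripts.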
Question 11 of 60
11. Question
The generic share that allows a user to directly access their personal directory via Samba is:
Correct: C. [homes]
The [homes] share is a special, predefined share in Samba that allows users to connect to their individual home directories. When a user tries to connect to a share with the same name as their Unix username (e.g., \\sambaserver\johndoe), Samba will automatically map this request to the user's home directory as defined in /etc/passwd. This is a very common and convenient way to provide personal network shares.
Incorrect:
A. [personal]: This is not a predefined or special share name in Samba. If you created a share named [personal], it would just be a regular share that points to a specific directory, not automatically to users' home directories.
B. [global]: The [global] section in smb.conf defines parameters that apply to the entire Samba server, not a specific share. It sets default behaviors for all services and is not itself a share that users can connect to.
D. [users]: This is not a predefined or special share name in Samba. Similar to [personal], if you created a share named [users], it would just be a regular share. There is no automatic mechanism for it to link to individual user home directories like [homes] does.
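A typical [homes] definition (the option values are illustrative):
[homes]
   comment = Home Directories
   browseable = no
   read only = no
   valid users = %S
The %S macro expands to the name of the requested service, which for [homes] is the connecting user's own username.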
Question 12 of 60
12. Question
How should Samba be configured so that it can verify passwords against those of /etc/passwd and /etc/shadow?
Correct: D. It is not possible for Samba to use /etc/passwd and /etc/shadow.
Samba, by default and for security reasons, does not directly authenticate against /etc/passwd and /etc/shadow. It maintains its own separate password database, which is a crucial security design choice. If encrypt passwords = yes (the default and recommended setting for modern Samba installations), Samba uses its own encrypted password file, typically smbpasswd (e.g., /var/lib/samba/private/smbpasswd or /etc/samba/smbpasswd). If encrypt passwords = no (an older, less secure method), Samba would send unencrypted passwords, which could theoretically be checked against /etc/passwd, but this is highly insecure and deprecated. The most common modern setup for integrated authentication is to use PAM (Pluggable Authentication Modules) with Samba, often combined with an external authentication source like Active Directory or LDAP, where Samba acts as a client. But even in these cases, Samba is not directly reading /etc/passwd or /etc/shadow to verify passwords for network logins.
Incorrect:
A. Set the parameters "encrypt passwords = yes", "password file = /etc/passwd" and "password algorithm = crypt": This is incorrect. While encrypt passwords = yes is correct for modern Samba, the password file should point to Samba's own password database, not /etc/passwd. There is no password algorithm parameter like "crypt" that makes Samba use /etc/passwd for authentication; Samba handles its own password encryption and storage.
B. Delete the smbpasswd file and create a symbolic link to the passwd and shadow file: This is a highly insecure and unsupported method. Symbolically linking /etc/passwd and /etc/shadow as Samba's password file would expose sensitive system password hashes and is not how Samba is designed to work. It would also likely break Samba or compromise system security.
C. Run the smbpasswd command to convert /etc/passwd and /etc/shadow to a Samba password file: The smbpasswd command is used to add, change, or delete users in Samba's own password database, or to sync passwords with the system. It does not "convert" /etc/passwd and /etc/shadow into a Samba password file. Users typically need to be added to Samba's database using smbpasswd -a after they exist as system users.
E. Set the parameters "encrypt passwords = yes" and "password file = /etc/passwd": Similar to option A, setting password file = /etc/passwd is incorrect. Samba's password file parameter refers to its own password database, not the system's /etc/passwd file. Modern Samba versions use a separate, encrypted database for security.
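A minimal sketch of the usual workflow (the username is made up): the account must already exist as a system user and is then added to Samba's own database:
smbpasswd -a alice
If the Samba and system passwords should stay in sync afterwards, the unix password sync parameter in the [global] section of smb.conf can be enabled; note that this synchronises password changes rather than making Samba read /etc/shadow.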
Question 13 of 60
13. Question
You have to mount the /data filesystem from an NFS server (srvl) that does not support file locking. Which of the following mount commands should you use?
Correct
Correct: E. mount -o nolock srvl:/data /mnt/data
When an NFS server does not support file locking (normally provided by the Network Lock Manager, NLM, protocol), the client must be told not to attempt locking. The nolock option of the mount command disables file locking for that NFS mount.
mount: the command to mount a filesystem.
-o nolock: specifies mount options, in this case disabling file locking.
srvl:/data: the remote NFS share, in the format hostname_or_ip:exported_directory.
/mnt/data: the local mount point where the remote filesystem will be accessible.
Incorrect:
A. mount -a -t nfs:
mount -a mounts all filesystems listed in /etc/fstab that have the auto option, and -t nfs limits this to NFS filesystems. This command does not include the nolock option and would only work if nolock were already specified in /etc/fstab for this particular mount, or if the server did support locking. It does not directly address the problem of a server that does not support locking.
B. mount -o nolock /data@srvl /mn/data:
The syntax /data@srvl /mn/data is incorrect for specifying the remote share and local mount point. The standard NFS mount syntax is server:/remote/path /local/path.
C. mount -o nolocking srvl:/data /mnt/data:
While conceptually similar, nolocking is not a valid mount option for disabling NFS locking; the correct option is nolock.
D. mount -o locking=off srvl:/data /mnt/data:
locking=off is not a valid option for disabling NFS locking; the correct option is nolock.
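As a sketch of what this looks like in practice (assuming the mount point /mnt/data already exists on the client), the one-off command and a roughly equivalent /etc/fstab entry would be:
mount -t nfs -o nolock srvl:/data /mnt/data
# or persistently in /etc/fstab:
# srvl:/data   /mnt/data   nfs   nolock   0 0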
Question 14 of 60
14. Question
A user requests a "hidden" Samba share, called confidential, similar to a Windows administrative share. How can this be configured?
Correct
Correct:
C. [confidential$] comment = hidden share path = /srv/smb/hidden write list = user create mask = 0700 directory mask = 0700
In Samba, a "hidden" share, similar to Windows administrative shares (such as C$ and ADMIN$), is created by appending a dollar sign ($) to the end of the share name in the smb.conf file. When a share name ends with $, it is not listed when users browse the network, but it can still be accessed directly if the user knows the share name.
Incorrect:
A. [$confidential]: The dollar sign ($) used to hide a share must be at the end of the share name, not the beginning. This syntax would not create a hidden share.
B. [#confidential]: The hash symbol (#) denotes a comment in smb.conf. A share defined this way would be ignored by Samba and not created at all.
D. [confidential]: This defines a regular Samba share named "confidential". It would be visible to users browsing the network, failing the "hidden" requirement.
E. [%confidential]: The percent sign (%) has no special meaning for hiding shares in Samba. This syntax would not create a hidden share.
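Written out as an smb.conf section, the accepted answer would look roughly like the sketch below; in practice browseable = no is often added as well, since the trailing $ is primarily a Windows client convention for hiding shares from browse lists:
[confidential$]
    comment = hidden share
    path = /srv/smb/hidden
    write list = user
    create mask = 0700
    directory mask = 0700
    browseable = no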
Question 15 of 60
15. Question
You have prepared an LDIF file (newusers.ldif) with several new user definitions and want to add these new users to your LDAP-based account directory for example.com, using the administrative account called manager. How can you do this, assuming your system is properly configured to allow for such modifications?
Correct
Correct: B. ldapadd -D cn=manager,dc=example,dc=com -W -f newusers.ldif
ldapadd: the command-line utility used to add entries to an LDAP directory.
-D cn=manager,dc=example,dc=com: the Distinguished Name (DN) of the LDAP administrative user (the "bind DN") who has permission to add entries. In LDAP, administrative accounts are identified by their full DN, not a simple username or e-mail-style name.
-W: prompts for the password of the bind DN, a secure way to provide it without placing it on the command line.
-f newusers.ldif: reads the entries to be added from the file newusers.ldif.
Incorrect:
A. The format manager@example.com is generally not the correct way to specify an LDAP bind DN. LDAP DNs follow a specific hierarchical structure (e.g., cn=manager,dc=example,dc=com). While some LDAP servers can be configured to accept e-mail-like names for binding, it is not the standard method. This option is also missing the -f flag to indicate that newusers.ldif is a file.
C. ldapadd cn=manager,dc=example,dc=com newusers.ldif:
This command is missing the -D flag to mark cn=manager,dc=example,dc=com as the bind DN; without -D, ldapadd would not interpret it correctly. It is also missing the -f flag for the input file and the -W flag for prompting for the password securely.
D. ldapadd -D manager@example.com -W -f newusers.ldif:
While it includes -W and -f, the bind DN format manager@example.com is still generally incorrect for standard LDAP binding, as explained for option A. The full Distinguished Name is preferred.
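A minimal sketch of the command in use (the -x flag for a simple bind is a common addition and is not part of the original answer):
ldapadd -x -D "cn=manager,dc=example,dc=com" -W -f newusers.ldif
# you are then prompted for the manager password because of -W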
Question 16 of 60
16. Question
A web server handles approximately 200 simultaneous requests during normal use, but with occasional spikes in activity it has been running slowly. Which directives in httpd.conf need to be adjusted?
Correct
Correct: D. MinSpareServers, MaxSpareServers, StartServers & MaxClients.
MinSpareServers: defines the minimum number of idle child server processes Apache tries to maintain. If the number of idle processes falls below this value, Apache forks new ones, so new requests can be handled without waiting for a process to be created.
MaxSpareServers: defines the maximum number of idle child server processes Apache will keep. If the number of idle processes exceeds this value, Apache kills the excess, preventing too many idle processes from consuming memory.
StartServers: sets the number of child server processes created when Apache starts, providing an initial pool to handle immediate requests.
MaxClients: (renamed MaxRequestWorkers in Apache 2.4) sets the maximum number of simultaneous requests the server will handle. If the server is slow during spikes, it is often hitting this limit, so new connections are queued or rejected. Adjusting it, along with the spare-server settings, is vital for performance tuning under load.
Why these are the most relevant:
When a web server runs slowly during activity spikes, it typically indicates that it is struggling to handle the increased number of concurrent connections.
MinSpareServers, MaxSpareServers, StartServers: These directives (from the prefork MPM; similar concepts exist in the worker and event MPMs) control how Apache manages the pool of child processes or threads that are ready to serve requests. If there are not enough spare servers, new requests must wait for a new process to be forked, causing delays; if there are too many, memory is wasted. Adjusting these helps Apache scale its worker pool efficiently to meet demand without excessive overhead.
MaxClients: This is the absolute upper limit on concurrent connections Apache will handle. If the value is too low for the number of simultaneous requests during a spike, new connections are queued or rejected, directly causing the server to appear slow or unresponsive. Raising the limit allows Apache to handle more concurrent users, provided the server has sufficient CPU and RAM.
Incorrect:
A. MinServers, MaxServers & MaxClients: MinServers and MaxServers are not standard Apache directives; the correct ones are MinSpareServers and MaxSpareServers.
B. MinSpareServers, MaxSpareServers, StartServers, MaxClients & KeepAlive: While KeepAlive is an important performance setting (it allows multiple requests over a single TCP connection and reduces connection overhead), it mainly affects persistent connections. The core problem during spikes is the server's ability to create and manage enough worker processes for new connections, which is governed by MinSpareServers, MaxSpareServers, StartServers, and MaxClients.
C. MinSpareServers & MaxSpareServers: These only control the idle pool. Without also adjusting StartServers (for initial readiness) and, most critically, MaxClients (the hard limit on active connections), the server may still hit its capacity limit during activity spikes.
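For illustration, a prefork-MPM tuning block might look like the sketch below; the numbers are arbitrary examples and must be sized to the server's available RAM (and on Apache 2.4+ the last directive is spelled MaxRequestWorkers):
<IfModule mpm_prefork_module>
    StartServers          10
    MinSpareServers       10
    MaxSpareServers       30
    MaxClients           250
</IfModule>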
Question 17 of 60
17. Question
How can you check that the smb.conf file does not contain serious syntax errors before starting a Samba server?
Correct
Correct: C. testparm
testparm is the standard Samba utility specifically designed to check the syntax and validity of the smb.conf configuration file. It parses the file, checks for errors, and prints a dump of the processed configuration, including any default values. This lets administrators catch configuration mistakes before restarting the Samba server, preventing potential service disruptions.
Incorrect:
A. smbchkconfig: This is not a standard Samba command for checking configuration syntax.
B. check-samba: This is not a standard Samba command for checking configuration syntax.
D. smbd --check smb.conf: While smbd is the main Samba daemon, and some daemons offer a --check or -t option for syntax checking, smbd does not provide such an option for this purpose. The dedicated tool for validating smb.conf is testparm.
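For example, the check can be run against the default configuration file or an explicit path:
testparm                       # check the default smb.conf
testparm /etc/samba/smb.conf   # check a specific file
testparm -s                    # dump the parsed configuration without pausing for a key press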
Question 18 of 60
18. Question
The /etc/pam.d/login file includes the following authentication stack. Which authentication system does the login tool use?
Correct
Correct: D. The correct answer cannot be determined from the information provided.
The question provides no actual content of the /etc/pam.d/login file. PAM (Pluggable Authentication Modules) is highly flexible: the authentication system used (local Unix accounts, LDAP, Kerberos, Winbind, and so on) is determined by the specific PAM modules listed in /etc/pam.d/login. Without seeing the contents of that file, it is impossible to know which authentication system is configured. For example:
If auth sufficient pam_unix.so is present, it might use local accounts.
If auth sufficient pam_ldap.so is present, it might use an LDAP server.
If auth sufficient pam_winbind.so is present, it might use a Winbind server.
A combination of these could also be present, in which case the order and the control flags (sufficient, required, etc.) dictate the behavior.
Incorrect:
A. A Winbind server. B. Standard Unix/Linux local accounts. C. An LDAP server.
These are all plausible authentication systems that could be configured in /etc/pam.d/login, but without the actual content of the file none of them can be confirmed. The question states that the file "includes the following authentication stack" yet provides no stack, so the information is insufficient.
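As a purely hypothetical illustration, a stack that would point to LDAP with a local fallback could look like this fragment of /etc/pam.d/login:
auth    sufficient    pam_ldap.so
auth    required      pam_unix.so try_first_pass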
Question 19 of 60
19. Question
What is the role of the portmapper in relation to NFS?
Correct
Correct: C. It tells clients which port the NFS server is running on.
The portmapper (also known as rpcbind on modern Linux systems) acts as a directory service for RPC (Remote Procedure Call) programs, and NFS relies heavily on RPC. When the various NFS services (such as nfsd, mountd, and statd) start, they register themselves with the portmapper and tell it which port they are listening on. When an NFS client wants to communicate with one of these services, it first queries the portmapper on the NFS server to obtain the correct port number. This allows NFS services to run on dynamic ports while still being discoverable by clients.
Incorrect:
A. It listens to the NFS port and disconnects connections to the NFS server: The portmapper does not listen on the NFS data port (2049/TCP) itself or disconnect connections. Its role is to provide port mapping, not to handle the actual data transfer or connection management of NFS.
B. Maintains information about local file systems and their relationship to NFS: This is primarily the role of the NFS server daemon (nfsd) and the /etc/exports file, which defines which local file systems are exported via NFS. The portmapper is not involved in managing filesystem information.
D. It creates an NFS client map to help the server optimize its speed: The portmapper does not create client maps for optimization. Its function is solely to provide port lookups for RPC services. NFS server optimization typically involves caching, network tuning, and proper export options.
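A quick way to see this registration in action is to query the portmapper directly (the hostname nfsserver is hypothetical):
rpcinfo -p nfsserver   # lists the registered RPC programs (portmapper, mountd, nfs, nlockmgr, ...) and their ports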
Question 20 of 60
20. Question
You want to temporarily export the /mnt/cdrom directory using NFS so that reader.example.org can read, but not write, the export. Assuming an NFS server is already running, what do you type at a shell prompt to accomplish this goal?
Correct
Correct:
C. exportfs -o ro reader.example.org:/mnt/cdrom
exportfs is the correct command to manage NFS exports dynamically (without editing /etc/exports).
-o ro sets the export as read-only.
reader.example.org:/mnt/cdrom restricts access to the specified host.
Incorrect:
A. showmount -o ro reader.example.org:/mnt/cdrom
showmount lists NFS shares but cannot configure exports.
B. exports -o ro reader.example.org:/mnt/cdrom
There is no exports command in Linux (common typo for exportfs).
D. mount -o ro reader.example.org:/mnt/cdrom
mount is used on the client side to attach NFS shares, not on the server.
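A possible end-to-end workflow on the server, sketched under the assumption that the export is only needed temporarily:
exportfs -o ro reader.example.org:/mnt/cdrom   # create the temporary read-only export
exportfs -v                                    # verify the current export list
exportfs -u reader.example.org:/mnt/cdrom      # withdraw the export when it is no longer needed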
Question 21 of 60
21. Question
You know that the hereville computer has an NFS export that you want to use, but you cannot remember its name. How can you find out this information?
Correct
Correct: A. showmount -e hereville
The showmount command with the -e (exports) option is specifically used to display the list of directories exported by an NFS server. Running showmount -e hereville queries the hereville server, which returns a list of all its NFS exports along with the allowed clients for each. This directly answers the question of finding out the names of the NFS shares.
Incorrect:
B. nfsstat --server hereville: The nfsstat command displays NFS statistics (e.g., RPC calls and server operations). While it provides information about NFS activity, it does not list the names of exported shares.
C. nfsclient --show hereville: There is no standard Linux command named nfsclient that performs this function.
D. smbclient //hereville: smbclient is used to access Samba/SMB/CIFS shares, not NFS shares. While hereville might also be a Samba server, this command would not help in finding NFS exports.
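For example, running the command and the kind of output it produces (the exports shown are purely illustrative):
showmount -e hereville
# Export list for hereville:
# /srv/projects  *.example.org
# /srv/public    (everyone)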
Question 22 of 60
22. Question
A correctly formatted entry has been added to /etc/hosts.allow to allow certain clients to connect to a service, but this has no effect. What could be the cause of this?
Correct
Correct: A. The service does not support tcpwrappers.
/etc/hosts.allow (and /etc/hosts.deny) are configuration files used by TCP Wrappers, a host-based access control system that checks incoming network connections against these files. For TCP Wrappers to work, however, the service itself must be compiled with TCP Wrappers support (linked against libwrap) or be launched by a super-server such as inetd or xinetd that provides TCP Wrappers functionality. If the service (e.g., sshd, vsftpd, rpcbind) is launched directly by systemd or runs as a standalone daemon without TCP Wrappers support, entries in /etc/hosts.allow are simply ignored by that service.
Incorrect:
B. The machine needs to be restarted: A reboot is overkill for changes to /etc/hosts.allow, and if the service does not support TCP Wrappers, a restart will not add that support.
C. The service needs to be restarted: Even for services that do support TCP Wrappers, the hosts.allow and hosts.deny files are consulted when a connection is checked, and more importantly, if the fundamental issue is the lack of TCP Wrappers support, restarting the service will not help.
D. There is a conflicting entry in /etc/hosts.deny: This would only matter if TCP Wrappers were in use, and even then /etc/hosts.allow is checked first, so a matching allow rule takes precedence over hosts.deny. The question also says the entry has no effect at all, which implies the allow rule is not being considered, pointing to a lack of TCP Wrappers support rather than a rule conflict.
E. tcpd needs to be sent the HUP signal: tcpd is the program that performs the access control check for services wrapped by inetd/xinetd, and the hosts.allow/hosts.deny files are read when connections are checked, so no HUP signal is required for changes to take effect. If the service is not TCP-wrapped at all, signaling tcpd would still have no effect on it.
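A common way to check whether a standalone daemon was built with TCP Wrappers support is to look for libwrap among its linked libraries (the path to the daemon binary may differ on your distribution):
ldd /usr/sbin/sshd | grep libwrap
# no output means the daemon is not linked against libwrap,
# so entries in /etc/hosts.allow are ignored for it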
Question 23 of 60
23. Question
What can be said about how user names are mapped to user IDs, based on the following line in the NSS configuration file (/etc/nsswitch.conf)?
passwd: files ldap
Correct
Correct: C. LDAP is used after local files.
The line passwd: files ldap indicates the order in which sources are consulted for user information (usernames, UIDs, and other passwd data).
files: the system first looks for user information in local files, primarily /etc/passwd.
ldap: if the user is not found in the local files, the system then queries an LDAP server for the user's information.
Therefore, LDAP is used after local files.
Incorrect:
A. Nothing can be determined from these lines alone: This is incorrect; the line explicitly defines the lookup order for user information.
B. The computer uses only LDAP accounts: This is incorrect; because files appears before ldap, local accounts are consulted first and are still in use. The system supports both.
D. LDAP is called in compatibility mode: The compat keyword is a specific setting in nsswitch.conf that means to use local files while also honoring traditional NIS/YP-style entries. The line passwd: files ldap lists files and ldap as separate sources and does not involve a compatibility mode.
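For illustration, the related nsswitch.conf lines typically look like this:
passwd: files ldap
group:  files ldap
shadow: files ldap
The combined lookup order can be tested with getent, which honours nsswitch.conf (the user name jsmith is hypothetical):
getent passwd jsmith   # served from /etc/passwd if the user is local, otherwise from LDAP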
Question 24 of 60
24. Question
Why should you use the no_root_squash option with caution when exporting directories via NFS?
Correct
Correct: B. This option preserves the client system's root privileges in the exported directories on the server, which is dangerous if the client is compromised.
By default, NFS performs "root squashing" (the root_squash option, which is implied if not explicitly set): any requests coming from the root user on an NFS client are mapped to an anonymous user ID (typically nfsnobody or nobody) on the NFS server. This security measure prevents a client-side root user from having root privileges on the server's file system, even within exported directories. The no_root_squash option disables this protection: the root user on the NFS client is then treated as root on the NFS server for the exported directory. This is dangerous because, if the client system is compromised, an attacker with root access on the client gains full root access to the files within the exported directory on the server, potentially allowing them to modify system files or sensitive data. It should only be used in very controlled environments, such as with trusted clients on a secure network.
Incorrect:
A. This option causes the NFS server to run outside its chroot jail, giving it access to user and system files it should not be able to access: no_root_squash has nothing to do with chroot environments for the NFS server; chroot is a separate security mechanism that confines a process to a specific part of the filesystem.
C. This option provides full root login privileges on the server without going through a regular account, making it easier for an intruder who does not have a user password: no_root_squash concerns file system access privileges for the client's root user, not direct root logins to the NFS server itself (e.g., via SSH). It affects how the client root user's UID is mapped when accessing exported files, not how users log in to the server.
D. This option allows ordinary users to log in to the NFS server as root, which can quickly lead to security compromises if those users are not trusted: no_root_squash applies specifically to the root user on the client. It does not grant ordinary (non-root) client users the ability to log in as root on the NFS server or to gain root privileges; their UIDs are mapped as usual.
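As a sketch, an /etc/exports file contrasting the default squashing with the riskier option (hosts and paths are hypothetical):
/srv/data    client1.example.org(rw,sync,root_squash)      # client root is mapped to nobody/nfsnobody
/srv/build   trusted1.example.org(rw,sync,no_root_squash)  # client root keeps root privileges, use with caution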
Question 25 of 60
25. Question
The host called “lpi”, with the MAC address 08:00:2b:4c:59:23, must always receive the IP address 192.168.1.2 from the DHCP server. Which of the following settings will achieve this?
Correct:
host lpi { … }: This defines a host-specific configuration block within the dhcpd.conf file, where lpi is the name given to this host entry. hardware ethernet 08:00:2b:4c:59:23;: This directive correctly specifies the MAC address (hardware address) of the client machine. DHCP servers use the MAC address to identify specific clients for fixed assignments. The keyword hardware is followed by the hardware type (ethernet) and the MAC address. fixed-address 192.168.1.2;: This directive assigns a specific, fixed IP address to the client identified by the hardware ethernet address. This ensures that the host lpi will always receive 192.168.1.2 from the DHCP server. Incorrect:
A. host lpi {mac = 08: 00: 2b: 4c: 59: 23; ip = 192.168.1.2; }: This uses non-standard keywords (mac and ip) and an incorrect syntax for assigning values (using = instead of a space and a semicolon). DHCP server configurations use specific directive names. B. host lpi = 08: 00: 2b: 4c: 59: 23 192.168.1.2: This uses an entirely incorrect syntax for defining a host block and its parameters. DHCP configuration requires curly braces {} and specific keywords for the hardware address and the fixed IP. C. host lpi {hardware-address 08: 00: 2b.4c: 59: 23; fixed-ip 192.168.1.2; }: hardware-address is close but not the standard keyword (hardware ethernet). The MAC address uses a period (.) instead of a colon (:) in 2b.4c, which is typically not accepted. fixed-ip is not the standard keyword; fixed-address is. E. host lpi {hardware-ethernet 08: 00: 2b: 4c: 59: 23; fixed-address 192.168.1.2; }: hardware-ethernet is not the standard keyword; the correct form is hardware ethernet (two separate words). fixed-address is correct, but the first part makes the whole option incorrect.
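Put together, the complete host declaration in /etc/dhcp/dhcpd.conf looks roughly like this (a sketch of the standard ISC dhcpd syntax):

host lpi {
    hardware ethernet 08:00:2b:4c:59:23;   # identify the client by its MAC address
    fixed-address 192.168.1.2;             # always assign this IP to that client
}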
Question 26 of 60
26. Question
Which option in dhcpd.conf defines the DNS server address(es) to be sent to DHCP clients?
Correct
Correct: B. domain-name-servers
The domain-name-servers option in dhcpd.conf is the correct directive used to specify a list of IP addresses for DNS (Domain Name System) servers. These IP addresses are then provided to DHCP clients during the lease offer, allowing the clients to resolve hostnames on the network and the internet. Incorrect:
A. domain-name-server: This is very similar but is not the exact standard directive used in the ISC DHCP server configuration. The correct directive is plural: domain-name-servers. C. domainname: This is not a valid dhcpd.conf option; the related option, domain-name, specifies the domain name (e.g., example.com) that clients should use to resolve unqualified hostnames, not the IP addresses of DNS servers. D. domain-nameserver: Similar to option A, this is not the exact standard directive. The correct directive uses the plural form.
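A typical subnet declaration using this option might look like the following (addresses and names are illustrative):

subnet 192.168.1.0 netmask 255.255.255.0 {
    range 192.168.1.100 192.168.1.200;
    option domain-name "example.com";
    option domain-name-servers 192.168.1.10, 192.168.1.11;   # DNS servers handed out to clients
}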
Question 27 of 60
27. Question
Which of the following things can you do when accessing a file on an SMB/CIFS share using mount that you cannot do when accessing it via smbclient?
Correct
Correct: A. Use Emacs to edit the file while it is on the server.
Explanation: When you mount an SMB/CIFS share, the remote filesystem becomes an integral part of your local filesystem hierarchy. This means you can interact with files on the share using standard Linux utilities and applications as if they were local files. This includes using text editors like Emacs (or Vim, Nano, etc.) to open, edit, and save files directly on the mounted share. Incorrect:
B. Rename the file while it is on the server.
smbclient allows renaming files on the server using commands like rename or mv. When an SMB/CIFS share is mounted, you can also rename files using standard shell commands like mv. Therefore, this is something you can do with smbclient. C. Delete the file from the server.
smbclient allows deleting files on the server using commands like del or rm. When an SMB/CIFS share is mounted, you can also delete files using standard shell commands like rm. Therefore, this is something you can do with smbclient. D. Copy the file from the server to the client.
smbclient is primarily a command-line FTP-like client for SMB/CIFS. It has get and put commands specifically for copying files between the server and the client. When an SMB/CIFS share is mounted, you can also copy files using standard shell commands like cp. Therefore, this is something you can do with smbclient.
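To make the difference concrete, here is a brief sketch of both approaches (server, share, and credentials are assumptions for the example):

# Mount the share into the local filesystem, then edit files in place with any editor
mount -t cifs //fileserver/share /mnt/share -o username=alice
emacs /mnt/share/report.txt

# smbclient behaves like an FTP-style client: files are transferred, not edited in place
smbclient //fileserver/share -U alice -c 'get report.txt'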
Question 28 of 60
28. Question
Which PAM control denies authentication, regardless of the response from other modules for the same type of authentication?
Correct
Correct: D. requisite
requisite: If a module with requisite control flag fails, PAM immediately denies authentication and no further modules in the stack for that type (e.g., auth, account, password, session) are consulted. If a requisite module succeeds, its success is noted, and the PAM stack continues processing subsequent modules. This is a very strong “fail fast“ mechanism. Incorrect:
A. sufficient: If a module with sufficient control flag succeeds, PAM immediately returns success to the application (assuming no preceding required or requisite module has failed). If a sufficient module fails, it is ignored, and the PAM stack continues processing subsequent modules. It does not deny authentication regardless of other modules; rather, it can grant it. B. optional: If a module with optional control flag succeeds or fails, its result generally does not directly determine the overall success or failure of the PAM stack, unless it‘s the only module configured or if all other non-optional modules have passed. It‘s used for services that are not critical for authentication. It does not deny authentication regardless of other modules. C. required: If a module with required control flag fails, PAM will ultimately deny authentication. However, unlike requisite, even if a required module fails, PAM will continue processing all subsequent modules in the stack for that type before returning the ultimate failure status to the application. This is useful for preventing information leakage (e.g., whether a username exists or not) by always executing all checks. It does not deny immediately.
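As an illustrative fragment of an auth stack (the module choices are examples, not a recommended policy):

# /etc/pam.d/login (illustrative fragment)
auth requisite pam_securetty.so   # on failure the stack stops here and access is denied immediately
auth required  pam_unix.so        # on failure the remaining modules still run before the final denial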
Question 29 of 60
29. Question
The main LDAP server daemon configuration file is:
Correct
Correct: D. slapd.conf
slapd.conf: This is the primary configuration file for slapd, the standalone LDAP daemon (server). It defines how the LDAP server operates, including its database backend, access control rules, schema definitions, and other operational parameters. While modern OpenLDAP versions increasingly use cn=config (a DIT-based configuration) instead of a flat slapd.conf file, for the LPIC-2 exam, slapd.conf is still a relevant answer as many systems might still use it or concepts related to it. Incorrect:
A. ldap.conf: This file is typically the client-side configuration file for LDAP. It tells LDAP client utilities (like ldapsearch, ldapadd, or applications configured to use LDAP) how to connect to an LDAP server (e.g., the server‘s URI, base DN, TLS settings). It does not configure the LDAP server daemon itself. B. ldif.conf: There is no standard LDAP configuration file named ldif.conf. LDIF (LDAP Data Interchange Format) files are used to represent LDAP entries and changes in a plain text format, but they are data files, not configuration files for the daemon. C. ldaprc: This is an optional client-side configuration file for LDAP utilities. It typically stores per-user or application-specific LDAP client settings, similar in purpose to ldap.conf but often more granular or for specific tools. It does not configure the LDAP server daemon.
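A minimal slapd.conf sketch might contain entries such as the following (suffix, rootdn, and password are placeholders):

# /etc/openldap/slapd.conf (illustrative fragment)
include   /etc/openldap/schema/core.schema
database  mdb
suffix    "dc=example,dc=com"
rootdn    "cn=admin,dc=example,dc=com"
rootpw    {SSHA}hashed-password-here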
Question 30 of 60
30. Question
Which of the following are valid options in the /etc/exports file?
Correct
Correct:
A. rw (read/write): This is a valid option in /etc/exports that grants both read and write access to the NFS client(s) for the exported directory. D. ro (read-only): This is a valid option in /etc/exports that grants only read access to the NFS client(s) for the exported directory, preventing them from writing, creating, or deleting files. Incorrect:
B. norootsquash: The correct option to disable root squashing is no_root_squash. Options in /etc/exports typically use underscores, not just concatenating words directly without an underscore. C. uid: While UID mapping is a concept in NFS, uid by itself is not a valid option in /etc/exports to configure user ID mapping. Options like anonuid and anongid are used for specific anonymous user mappings. E. rootsquash: The correct option to enable root squashing (which is also the default behavior if neither root_squash nor no_root_squash is specified) is root_squash. Similar to norootsquash, the underscore is missing.
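For example, an /etc/exports file combining both valid options (paths and networks are illustrative):

# read/write for the local network, read-only for everyone else
/srv/projects  192.168.1.0/24(rw,sync)
/srv/docs      *(ro)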
Question 31 of 60
31. Question
With Nginx, which of the following directives is used to proxy requests to a FastCGI server?
Correct
Correct: B. fastcgi_pass
The fastcgi_pass directive in Nginx is specifically designed to pass requests to a FastCGI server. It specifies the address of the FastCGI server and initiates the FastCGI protocol communication. This is the correct and standard directive used for proxying to FastCGI backends. Incorrect:
A. fastcgi_proxy: This is not a valid Nginx directive. While proxy is a common term in Nginx, for FastCGI, the dedicated directive is fastcgi_pass. C. proxy_fastcgi_pass: This is not a valid Nginx directive. It attempts to combine the generic proxy_pass concept with fastcgi, but the correct and specific directive for FastCGI is simply fastcgi_pass. D. proxy_fastcgi: This is not a valid Nginx directive. Similar to option C, it tries to create a composite directive that doesn‘t exist in Nginx‘s configuration syntax for FastCGI proxying.
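A typical use, assuming a PHP-FPM backend listening on 127.0.0.1:9000 (an assumption made for this example):

location ~ \.php$ {
    include       fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass  127.0.0.1:9000;   # hand the request to the FastCGI backend
}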
Question 32 of 60
32. Question
The MinSpareServers option defines:
Correct
Correct:
D. The minimum number of idle processes triggered by Apache. MinSpareServers is an Apache MPM (Multi-Processing Module) directive. It specifies the minimum number of idle (ready to handle new requests) child server processes that Apache should maintain. If the number of idle processes drops below this value, Apache will fork new child processes until this minimum is met. This helps ensure that Apache always has enough pre-spawned processes available to quickly respond to incoming requests without incurring the overhead of creating a new process for each request. Incorrect:
A. The number of virtual servers.
MinSpareServers has nothing to do with the number of virtual servers (virtual hosts) configured. Virtual hosts are configured using VirtualHost directives. B. The minimum number of virtual domains.
This is a rephrasing of option A and is also incorrect. MinSpareServers controls process management, not domain configuration. C. The minimum number of idle threads fired by Apache.
While Apache can use threads (depending on the MPM, e.g., event or worker), the MinSpareServers directive specifically refers to processes (servers), not threads within those processes. For threaded MPMs, there are equivalent directives like MinSpareThreads for managing idle threads. The question specifically refers to MinSpareServers, which is for processes.
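An illustrative prefork MPM configuration fragment (the values are examples, not tuning advice):

<IfModule mpm_prefork_module>
    StartServers       5
    MinSpareServers    5     # fork new children whenever fewer than 5 are idle
    MaxSpareServers   10
    MaxRequestWorkers 150
</IfModule>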
Question 33 of 60
33. Question
When setting up Nginx as a reverse proxy for a web server, which keyword is missing from the following configuration example?
Correct
Correct:
B. proxy_pass In Nginx, the proxy_pass directive is used to define the address of the proxied server (the backend server) and pass the client‘s request to it. This is the fundamental directive for configuring a reverse proxy. When a request matches a location block, proxy_pass specifies where Nginx should forward that request. Incorrect:
A. reverse_proxy
While the concept is “reverse proxy,“ reverse_proxy is not the specific directive keyword used in Nginx. C. proxy_reverse
Similar to reverse_proxy, this is not the correct directive keyword in Nginx. D. remote_proxy
This is not an Nginx directive. Nginx uses proxy_pass for forwarding requests to a backend server, which can be local or remote.
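A minimal reverse-proxy sketch, assuming a backend application on 127.0.0.1:8080:

location / {
    proxy_set_header Host $host;
    proxy_pass       http://127.0.0.1:8080;   # forward matching requests to the backend
}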
Question 34 of 60
34. Question
What configuration block in Nginx is used to configure settings for a reverse proxy server?
Correct
Correct:
C. location In Nginx, while server blocks define virtual hosts (listening ports and hostnames), the actual configuration for how requests are handled for specific URL paths or patterns within that server block is typically done using location blocks. The proxy_pass directive, which is essential for reverse proxying, is almost always placed inside a location block. This allows Nginx to reverse proxy different paths to different backend servers. Incorrect:
A. server
The server block defines a virtual server (similar to Apache‘s VirtualHost). It specifies what IP address and port Nginx listens on and for which domain names. While a location block for reverse proxying will reside inside a server block, the server block itself isn‘t used to configure settings for a reverse proxy server in the granular sense of where proxy_pass or other proxy-specific directives like proxy_set_header are placed. It sets the context for where reverse proxying might occur. B. reverse
There is no standard Nginx configuration block named reverse. D. http
The http block is the top-level configuration block in Nginx for HTTP-related settings. It contains global settings that apply to all server blocks unless overridden. While server blocks (and thus location blocks for reverse proxying) are nested inside the http block, the http block itself doesn‘t contain the specific reverse proxy settings like proxy_pass. It‘s a parent container.
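The nesting of these blocks, with the proxy configuration living inside location, can be sketched as follows (names and addresses are illustrative):

http {
    server {
        listen 80;
        server_name www.example.com;

        location /app/ {
            proxy_pass http://10.0.0.5:8080;   # reverse-proxy settings belong here
        }
    }
}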
Question 35 of 60
35. Question
You want your Apache server to use /var/mywww as the directory from which your site is served. What Apache configuration option should you set to do this?
Correct
Correct:
D. DocumentRoot /var/mywww/ This is the standard and correct Apache directive used to specify the main directory from which the web server serves files for a given website or virtual host. When a client requests content, Apache looks for the requested path relative to the DocumentRoot. Incorrect:
A. Root /var/mywww/
While “Root“ intuitively sounds like it would define the web root, Root is not a valid Apache configuration directive for this purpose. The correct directive is DocumentRoot. B. Base = /var/www
Base is not a valid Apache configuration directive for defining the document root. Apache configuration uses specific directive names, not generic assignments like Base =. C. set root /var/mywww/
set root is not a valid Apache configuration directive or syntax for defining the document root. Apache directives follow a specific DirectiveName Value format.
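In context, the directive can appear in the main server configuration or inside a virtual host, for example (the ServerName is an assumption):

<VirtualHost *:80>
    ServerName   www.example.com
    DocumentRoot /var/mywww/
</VirtualHost>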
Question 36 of 60
36. Question
You want to use the VirtualHost directive to define a limited number of virtual hosts on an Apache server. In addition, this server has two network interfaces, one for your local network (eth0, 172.24.21.78) and one for the Internet (eth1, 10.203.17.26). Which directive can you include to ensure that your virtual hosts are defined only on your local network?
Correct
Correct:
C. NameVirtualHost 172.24.21.78 This is the correct Apache directive for specifying an IP address (or hostname) on which name-based virtual hosts will listen. By setting NameVirtualHost 172.24.21.78, you explicitly tell Apache to only accept requests for named virtual hosts on the network interface associated with 172.24.21.78 (eth0, your local network). Any VirtualHost blocks that refer to names that resolve to other IPs (like the Internet-facing 10.203.17.26) will not be associated with this NameVirtualHost configuration, effectively limiting the virtual hosts defined within this context to the local network. Incorrect:
A. ExcludeVirtualHosts 10,203.17.26
There is no standard Apache directive called ExcludeVirtualHosts. Apache‘s virtual host mechanism works by explicitly defining what it should listen on, not by excluding. B. VirtualHostOnly eth0
There is no standard Apache directive called VirtualHostOnly. Apache uses NameVirtualHost or Listen directives to specify the interfaces and ports for virtual hosts. D. Bind eth0
While Listen is the directive used to bind Apache to specific IP addresses and ports (e.g., Listen 172.24.21.78:80), Bind itself is not a standard Apache directive for this purpose. More importantly, simply binding to eth0 (or its IP) with Listen makes Apache listen on that interface, but NameVirtualHost is specifically used to declare which IP named virtual hosts should be associated with, which is a more precise way to limit the scope of virtual hosts.
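An Apache 2.2-style sketch restricting name-based virtual hosts to the internal address (hostnames and paths are illustrative; note that Apache 2.4 deprecates NameVirtualHost):

Listen 172.24.21.78:80
NameVirtualHost 172.24.21.78

<VirtualHost 172.24.21.78>
    ServerName   intranet.example.local
    DocumentRoot /srv/www/intranet
</VirtualHost>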
Question 37 of 60
37. Question
Which Apache configuration section defines a virtual domain?
Correct
Correct:
A. VirtualHost The <VirtualHost> directive in Apache is the primary and essential configuration block used to define a virtual domain (or virtual host). This block allows a single Apache server to host multiple websites, each with its own domain name, content, and often separate configurations (like document roots, logging, or SSL settings). You define the IP address and/or port for which the virtual host will respond, along with the ServerName for the domain. Incorrect:
B. ServerHost
ServerHost is not a valid Apache configuration directive for defining a virtual domain. The correct term is VirtualHost. C. RootServer
RootServer is not a valid Apache configuration directive. This term might be associated with DNS root servers, but not Apache virtual hosts. D. NameHost
While Apache uses NameVirtualHost to enable name-based virtual hosting (which allows multiple virtual hosts on a single IP address based on the requested hostname), NameHost itself is not a configuration section or directive for defining a virtual domain. The VirtualHost block is where the domain's settings are actually defined.
Question 38 of 60
38. Question
Why are different IP addresses recommended when hosting multiple HTTPS virtual hosts? (Choose TWO correct answers.)
Correct
Correct:
A. The SSL connection is made before the virtual host name is known to the server.
When an HTTPS connection is initiated, the SSL/TLS handshake occurs before the HTTP request (which contains the Host header indicating the virtual host name) is sent. During the SSL/TLS handshake, the server needs to present its certificate. If multiple HTTPS virtual hosts share the same IP address, the server doesn‘t know which virtual host‘s certificate to present at that early stage because it hasn‘t yet received the Host header. This leads to a certificate mismatch if the client requests a different virtual host than the one associated with the default certificate. B. The Server Name Indication extension for TLS is not universally supported.
Server Name Indication (SNI) is a TLS extension that allows a client to specify the hostname it is trying to reach during the SSL/TLS handshake. This enables a server to present the correct certificate even when multiple HTTPS virtual hosts share the same IP address. However, while SNI is widely supported by modern browsers and operating systems, there are still older clients (e.g., Windows XP with IE, older Android versions) that do not support SNI. For compatibility with these older clients, separate IP addresses are recommended for each HTTPS virtual host. (Note: some renderings of this option read “Server Name Referral,” but the intended term is “Server Name Indication,” the correct technical name; LPI wording can be slightly off while still referring to the correct concept.) Incorrect:
C. This is only necessary when dynamic content is being generated by more than one of the virtual hosts.
The type of content (static vs. dynamic) served by the virtual hosts has no bearing on the requirements for distinct IP addresses for HTTPS. The issue is with the SSL/TLS handshake and certificate presentation, not with how the content is generated. D. The SSL key is linked to a specific IP address when issued by the certification authority.
SSL certificates are primarily issued for domain names (Common Name and Subject Alternative Names), not directly for IP addresses. While a certificate can be issued for an IP address, the fundamental issue with shared IP HTTPS virtual hosts is about distinguishing which domain‘s certificate to serve, not a direct link between the key and IP by the CA. E. Apache caches SSL keys based on the IP address.
Apache (or mod_ssl) does not primarily cache SSL keys based on the IP address in a way that necessitates different IPs for multiple virtual hosts. The caching mechanisms are more related to SSL session IDs and certificate validity, which are separate from the core problem of certificate selection during the initial handshake without SNI.
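To check which certificate a server presents for a given name during the TLS handshake, a quick test with openssl (the IP address and hostname below are placeholders) is to supply the server name explicitly via SNI and print the certificate subject:

# Send the virtual host name via SNI and show the subject of the certificate returned
openssl s_client -connect 192.0.2.10:443 -servername www.example.org </dev/null 2>/dev/null | openssl x509 -noout -subject

Repeating the command with a different -servername value shows whether the server selects a different certificate per virtual host or always falls back to the default one for that IP and port.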
Incorrect
Unattempted
Question 39 of 60
39. Question
The list below is a part of a Squid configuration file, where:
Correct
Incorrect
Unattempted
Question 40 of 60
40. Question
Which Apache directive allows the use of external configuration files defined by the AccessFileName directive?
Correct
Correct:
E. AllowOverride The AllowOverride directive in Apache's main configuration file (httpd.conf or apache2.conf) controls which directives are permitted in .htaccess files (or whatever file is specified by AccessFileName) within individual directories. If AllowOverride None is set, no directives in .htaccess files will be processed. To allow external configuration files (like .htaccess) to function and define settings, AllowOverride must be set to something other than None (e.g., All, AuthConfig, Indexes, etc.) within the relevant <Directory> block. Incorrect:
A. AllowExternalConfig
This is not a standard Apache directive. The functionality described is handled by AllowOverride. B. AllowConfig
This is not a standard Apache directive. C. IncludeAccessFile
While Apache has an Include directive to pull in other configuration files, IncludeAccessFile is not a directive that specifically enables .htaccess-like files. AccessFileName defines the name of these files, and AllowOverride controls their processing. D. AllowAccessFile
This is not a standard Apache directive. The control over AccessFileName files is managed by AllowOverride.
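A minimal sketch of how the two directives work together, assuming a placeholder document root:

# Name of the per-directory configuration file (".htaccess" is the compiled-in default)
AccessFileName .htaccess

<Directory /srv/www/example.org/html>
    # Permit authentication-related directives in .htaccess files below this directory
    AllowOverride AuthConfig
</Directory>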
Incorrect
Unattempted
Question 41 of 60
41. Question
Which of these would be the simplest way to configure BIND to return a different version number for queries?
Correct
Correct:
A. Define version “my version“ in the BIND configuration file. Explanation: In BIND‘s named.conf file, placing the directive version “your custom version string“; within the options { … }; block is the standard and simplest way to change the version string that BIND reports in response to version queries (e.g., dig @server version.bind chaos txt). This is a common security practice to obfuscate the actual BIND version, making it harder for attackers to identify known vulnerabilities. Incorrect:
B. Define version-bind “my version“ in the BIND configuration file.
Explanation: The correct directive name is version, not version-bind. BIND configuration directives have specific names. C. Define version-string “my version“ in the BIND configuration file.
Explanation: Similar to option B, the correct directive name is version, not version-string. D. Compile BIND with the -blur-version = my version option.
Explanation: While BIND can be compiled with various options, blur-version is not a standard compile-time option for changing the reported version string. The recommended and simpler method is a runtime configuration directive in named.conf. Customizing compile-time options is far more complex than a configuration file entry and often unnecessary for this purpose. E. Set version = my version in the BIND configuration file.
Explanation: This is very close to the correct answer. However, BIND's named.conf does not use an equals sign for assignments; an option is written as the keyword followed by its (quoted) value and a terminating semicolon, so version "my version"; is the exact syntax. version = my version is not valid syntax.
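A minimal named.conf sketch (the directory path and server address are placeholders) and a query to verify the change:

options {
    directory "/var/named";      // placeholder working directory
    version "not disclosed";     // string returned for version.bind queries
};

# Verify from a client:
dig @192.0.2.53 version.bind CH TXT +short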
Incorrect
Unattempted
Question 42 of 60
42. Question
Which of these tools can provide more information about DNS queries?
Correct
Correct:
E. dig Explanation: dig (Domain Information Groper) is the most powerful and flexible command-line tool for querying DNS name servers. It provides detailed information about DNS lookups, including the exact query sent, the server that responded, the response time, the full answer section, authority section, and additional section, as well as various flags and statistics. It allows for querying specific record types (A, MX, NS, SOA, etc.), using specific name servers, and controlling various query options, making it ideal for troubleshooting and in-depth DNS analysis. Incorrect:
A. host
host is a simpler utility for performing basic DNS lookups (e.g., converting hostnames to IP addresses and vice-versa). While useful for quick checks, it provides less detailed information and fewer options than dig. It‘s good for “what is the IP for this hostname?“, but not for deep query analysis. B. named-checkconf
named-checkconf is a utility used to check the syntax and validity of the BIND DNS server‘s main configuration file (named.conf). It verifies the structural correctness of the configuration but does not perform or provide information about actual DNS queries. C. nslookup
nslookup is an older utility for querying DNS. While it can perform various lookups, it is generally considered deprecated in favor of dig for most advanced tasks. Its output can be less consistent and harder to parse than dig‘s, and it lacks some of dig‘s advanced features for detailed query information. D. named-checkzone
named-checkzone is a utility used to check the syntax and validity of BIND DNS zone files (e.g., db.example.com). It ensures that the zone file is correctly formatted and free of common errors, but it does not perform or provide information about live DNS queries.
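A few typical dig invocations (the server address and domain are placeholders):

# Ask a specific name server for MX records and show the full answer, authority and additional sections
dig @192.0.2.53 example.org MX

# Follow the delegation chain from the root servers
dig example.org A +trace

# Print only the answer data
dig example.org A +short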
Incorrect
Unattempted
Question 43 of 60
43. Question
Consider the following /srv/www/default/html/restricted/.htaccess file, where DocumentRoot is set to /srv/www/default/html. Which TWO of the following statements are true?
Correct
Incorrect
Unattempted
Question 44 of 60
44. Question
Which of the following are log file directives commonly used in Apache? (Choose TWO correct answers).
Correct
Correct: B. ErrorLog
Defines the file where Apache logs error messages (e.g., ErrorLog /var/log/apache2/error.log).
Critical for debugging HTTP 500 errors, permission issues, etc.
C. CustomLog
Defines access logs with customizable formats (e.g., CustomLog /var/log/apache2/access.log combined).
Logs client requests, response codes, and other traffic details.
Incorrect: A. ServerLog
Fake directive: Apache has no such directive.
D. VHostLog
Fake directive: Virtual host logs are configured via ErrorLog/CustomLog inside <VirtualHost> blocks.
E. ConfigLog
Fake directive: Apache logs configuration errors to ErrorLog, not a separate file.
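A minimal logging sketch, assuming Debian-style paths and the predefined "combined" format nickname:

ErrorLog /var/log/apache2/error.log
LogLevel warn
CustomLog /var/log/apache2/access.log combined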
Incorrect
Unattempted
Question 45 of 60
45. Question
When configuring Nginx as a reverse proxy for a web server, which directive is needed to pass the Host header from the original request on to the proxied server?
Correct
Correct: C. proxy_set_header
In Nginx, the proxy_set_header directive is used to redefine or add new headers to the request that is passed to the proxied server. To pass the original Host header from the client‘s request to the upstream server, you would typically use proxy_set_header Host $host; or proxy_set_header Host $http_host; within your location or server block. This ensures that the backend server receives the correct host information, which is crucial for many web applications and virtual hosts. Incorrect:
A. proxy_pass_header: Although proxy_pass_header is a real Nginx directive, it controls which otherwise disabled response header fields from the proxied server are passed back to the client; it does not set the request headers sent upstream, so it cannot be used to forward the client's Host header. B. proxy_forward_header: This is not a recognized Nginx directive for header manipulation in proxying. D. proxy_header: This is also not a valid Nginx directive. While it contains "proxy" and "header," it does not correspond to any configuration keyword for setting or passing headers to a proxied server.
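A minimal reverse-proxy sketch, assuming a placeholder backend on 127.0.0.1:8080:

location / {
    proxy_pass http://127.0.0.1:8080;            # placeholder upstream
    proxy_set_header Host $host;                 # forward the originally requested host name
    proxy_set_header X-Real-IP $remote_addr;     # commonly forwarded alongside Host
}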
Incorrect
Unattempted
Question 46 of 60
46. Question
Which of the following is not a valid ACL type when configuring squid?
Correct
Correct:
C. source source is not a valid ACL type in Squid‘s configuration. While ACLs can match based on the source of a request (e.g., client IP address), the correct ACL type for this purpose is src (short for source). Incorrect:
A. url_regex
url_regex is a valid Squid ACL type. It is used to match URLs based on a regular expression pattern. This is powerful for filtering or allowing access to specific URL patterns. B. time
time is a valid Squid ACL type. It allows you to define access rules based on specific times of the day or days of the week. D. src
src is a valid Squid ACL type. It is used to match requests based on the client‘s source IP address or network. This is one of the most commonly used ACL types for controlling who can use the proxy. E. dstdomain
dstdomain is a valid Squid ACL type. It is used to match requests based on the destination domain name. This is useful for controlling access to specific websites or domains.
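For reference, a few ACL definitions using the valid types discussed above (the names, networks and patterns are placeholders):

acl office_net src 192.168.10.0/24          # client source addresses
acl work_hours time MTWHF 08:00-18:00       # weekdays between 08:00 and 18:00
acl blocked    dstdomain .example.net       # destination domain
acl installers url_regex -i \.exe$          # URLs matching a regular expression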
Incorrect
Unattempted
Question 47 of 60
47. Question
The file /var/squid/url_blacklist contains a list of URLs that users should not be allowed to access. Which of the following is the correct line in the Squid configuration file to create an ACL called blacklist based on this file?
Correct
Correct: E. acl blacklist urlpath_regex "/var/squid/url_blacklist"
This option correctly defines an Access Control List (ACL) named "blacklist" using the urlpath_regex type, which matches the URL path against regular expressions. The file containing the list of patterns (/var/squid/url_blacklist) is enclosed in double quotes, which is what tells Squid to read the ACL values from that file rather than treating the argument as a literal pattern. The urlpath_regex type is appropriate for a blacklist file where each line is expected to be a regular expression matching a prohibited URL path. Incorrect:
A. acl urlpath_regex blacklist /var/squid/url_blacklist: The order is incorrect. In Squid ACL definitions, the ACL name comes immediately after acl, followed by the ACL type. B. acl blacklist urlpath_regex /var/squid/url_blacklist: This is very close to the correct answer, but it omits the double quotes around the file path. The quotes are what tell Squid to read the ACL values from a file; without them the path itself would be treated as the regular expression to match. C. acl blacklist "/var/squid/url_blacklist": This option is missing the ACL type (urlpath_regex). Squid needs to know how to interpret the contents of the file (for example, as exact URLs, regular expressions, or domains). Simply providing the filename without the type is insufficient. D. acl blacklist file /var/squid/url_blacklist: file is not a valid Squid ACL type. For URL matching based on regular expressions read from a file, urlpath_regex (or url_regex for full URLs) is required.
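In practice the ACL is only useful together with an http_access rule, for example:

acl blacklist urlpath_regex "/var/squid/url_blacklist"
http_access deny blacklist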
Incorrect
Unattempted
Question 48 of 60
48. Question
Which BIND option should be used to limit which IP addresses slave name servers can connect to?
Correct
Correct: A. allow-transfer
The allow-transfer option in BIND‘s named.conf file is specifically used within a zone definition to control which IP addresses are permitted to request a zone transfer from the master name server. This is crucial for security and ensuring that only authorized slave (secondary) name servers can obtain a copy of the zone data. Incorrect:
B. allow-slaves: This is not a valid BIND option. While it conceptually relates to allowing slaves, the correct directive for zone transfers is allow-transfer. C. allow-secondary: This is also not a valid BIND option. Similar to allow-slaves, it‘s not the correct syntax for controlling zone transfers to secondary servers. D. allow-queries: The allow-queries option controls which IP addresses are permitted to send DNS queries to the name server. It does not control zone transfers. A server might allow queries from a wide range of IPs but only allow zone transfers from a very restricted set of slave servers. E. allow-zone-transfer: While this option precisely describes the action, it is not the correct syntax for the BIND configuration file. The correct directive is allow-transfer. BIND options often use concise names.
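A zone definition sketch restricting transfers to two placeholder slave addresses:

zone "example.org" {
    type master;
    file "/var/named/db.example.org";
    // only these slave servers may request zone transfers
    allow-transfer { 192.0.2.53; 198.51.100.53; };
};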
Incorrect
Correct: A. allow-transfer
The allow-transfer option in BIND‘s named.conf file is specifically used within a zone definition to control which IP addresses are permitted to request a zone transfer from the master name server. This is crucial for security and ensuring that only authorized slave (secondary) name servers can obtain a copy of the zone data. Incorrect:
B. allow-slaves: This is not a valid BIND option. While it conceptually relates to allowing slaves, the correct directive for zone transfers is allow-transfer. C. allow-secondary: This is also not a valid BIND option. Similar to allow-slaves, it‘s not the correct syntax for controlling zone transfers to secondary servers. D. allow-queries: The allow-queries option controls which IP addresses are permitted to send DNS queries to the name server. It does not control zone transfers. A server might allow queries from a wide range of IPs but only allow zone transfers from a very restricted set of slave servers. E. allow-zone-transfer: While this option precisely describes the action, it is not the correct syntax for the BIND configuration file. The correct directive is allow-transfer. BIND options often use concise names.
Unattempted
Question 49 of 60
49. Question
When Apache is configured to use name-based virtual hosts:
Correct
Correct: B. It is necessary to create a VirtualHost block for the primary host.
When Apache is configured for name-based virtual hosts, it‘s good practice and often necessary to define a VirtualHost block for the “primary“ or default host. This is the VirtualHost that will handle requests that don‘t match any other explicitly defined ServerName in other VirtualHost blocks on the same IP/port combination. If a request comes in that doesn‘t match any specific ServerName, Apache will serve content from the first VirtualHost block it encounters for that IP/port. Explicitly defining a default VirtualHost (often with _default_:80 or ServerName matching the server‘s main hostname) ensures predictable behavior and avoids unexpected serving of content. Incorrect:
A. It starts several daemons (one for each virtual host). This is incorrect. Apache typically runs as one or a few main daemon processes, which then fork child processes or threads to handle requests. It does not start a separate daemon for each virtual host. All name-based virtual hosts are handled by the same set of Apache processes listening on the configured IP addresses and ports. C. The Listen directive is ignored by the server. This is incorrect. The Listen directive is fundamental to Apache's operation, regardless of whether name-based or IP-based virtual hosts are used. It tells Apache on which IP address(es) and port(s) to listen for incoming connections. Name-based virtual hosts rely on the Listen directive to receive requests on a shared IP/port. D. Only the ServerName and DocumentRoot directives can be used within a <VirtualHost> block. This is incorrect. <VirtualHost> blocks are highly flexible and can contain almost any Apache directive that can be applied per-host. This includes directives like ErrorLog, CustomLog, Directory blocks, AllowOverride, Require, RewriteEngine, and many others, allowing for fine-grained configuration of each virtual host. E. You must configure a different IP address for each virtual host. This describes IP-based virtual hosts, not name-based virtual hosts. The primary purpose and advantage of name-based virtual hosts is to allow multiple domains to share a single IP address (and port), with Apache distinguishing between them based on the Host header sent by the client.
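A sketch of two name-based virtual hosts sharing one IP and port (domains and paths are placeholders; on Apache 2.4 the old NameVirtualHost directive is no longer required):

Listen 80

<VirtualHost *:80>
    # The first block for this IP/port also serves requests whose Host header matches no other vhost
    ServerName www.example.org
    DocumentRoot /srv/www/example.org/html
</VirtualHost>

<VirtualHost *:80>
    ServerName www.example.net
    DocumentRoot /srv/www/example.net/html
</VirtualHost>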
Incorrect
Unattempted
Question 50 of 60
50. Question
A security certificate is considered valid when:
Correct
Correct: D. It is issued by a CA – Certification Authority – recognized by the browser.
A security certificate (specifically, an X.509 certificate used for SSL/TLS) is considered valid primarily when it has been issued by a Certificate Authority (CA) that is trusted by the web browser or operating system. Browsers maintain a list of trusted root CAs. When a browser receives a certificate from a website, it checks if the certificate‘s issuer is in its list of trusted CAs, or if it can trace a chain of trust back to a trusted root CA. If the chain of trust is valid and the certificate has not expired or been revoked, the browser considers the certificate valid. Incorrect:
A. The connection is established with a known website. While a valid certificate is usually associated with a known website, the mere establishment of a connection does not automatically validate the certificate. An attacker could establish a connection with a malicious site and present a self-signed or invalid certificate. B. The certificate has more than 128 bits of encryption. The number of bits of encryption (e.g., 128-bit, 256-bit) refers to the strength of the symmetric encryption key used for the secure connection after the certificate has been validated and the handshake is complete. It does not determine the validity of the certificate itself. A certificate with strong encryption could still be invalid if it‘s expired or issued by an untrusted CA. C. The browser informs about the security of the website. The browser informs you about the security (often with a padlock icon and “HTTPS“) because it has validated the certificate. The information itself is a result of the validation process, not the condition that makes the certificate valid. If the certificate is not valid, the browser will typically display a warning.
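Two openssl commands that make these checks visible (the file paths are placeholders):

# Show the issuer, subject and validity period of a certificate
openssl x509 -in server.crt -noout -issuer -subject -dates

# Check that the certificate chains up to a trusted CA bundle
openssl verify -CAfile /etc/ssl/certs/ca-certificates.crt server.crt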
Incorrect
Unattempted
Question 51 of 60
51. Question
In the configuration of the Squid proxy, the http_access entry is intended to:
Correct
Correct: D. Control http access to an ACL.
In Squid configuration, the http_access directive is fundamental for controlling access to the proxy server based on various criteria defined by Access Control Lists (ACLs). You use http_access allow or http_access deny followed by one or more ACL names to create rules that determine who can access what resources through the proxy. This allows for granular control over web traffic. Incorrect:
A. Inform which is the HTTP access server. This is incorrect. http_access is about defining access rules for the proxy, not about specifying the HTTP access server itself. The proxy server is the HTTP access server. B. Release user access. While http_access rules can effectively “release“ (allow) user access, its purpose is broader: it‘s about defining the rules for both allowing and denying access, not just allowing it. C. Define the proxy access port. The proxy access port is defined by the http_port directive in Squid‘s configuration, not http_access. http_access comes into play after a connection is received on the http_port.
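A typical pattern in squid.conf (the network is a placeholder); rules are evaluated top to bottom and the first match wins:

acl localnet src 192.168.0.0/16
http_access allow localnet
http_access deny all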
Incorrect
Unattempted
Question 52 of 60
52. Question
A custom log record can be created in the Apache configuration file with the instructions:
Correct
Correct: A. LogFormat and CustomLog
LogFormat: This directive defines the format of your log entries. You create a custom format string using format specifiers (e.g., %h for the remote host, %t for the time, %r for the request line, %>s for the status, etc.) and assign it a nickname. CustomLog: This directive then uses the defined LogFormat (by its nickname) and specifies the path to the log file where the entries should be written. Together, these two directives allow for highly flexible and custom log configurations in Apache. Incorrect:
B. Log and Access: These are not valid Apache directives for defining custom log formats or locations. C. LogError and Directory: LogError is not a valid Apache directive (error logging is configured with ErrorLog), and Directory is a container directive for applying configuration to specific filesystem directories; neither is used for defining custom access log formats. D. Syslog and LogFormat: While LogFormat is correct, Syslog is not an Apache directive; sending logs to the system syslog daemon is done by pointing ErrorLog at a syslog target, and it is not how custom file-based access log formats are defined.
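A minimal sketch combining the two directives (the format string and log path are illustrative):

# Define a format string and give it the nickname "vhost_combined"
LogFormat "%v %h %l %u %t \"%r\" %>s %b" vhost_combined
# Write access log entries in that format to a dedicated file
CustomLog /var/log/apache2/access.log vhost_combined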
Incorrect
Unattempted
Correct: A. LogFormat and CustomLog
LogFormat: This directive is used to define the format of your log entries. You create a custom format string using various directives (e.g., %h for host, %t for time, %r for request line, %s for status, etc.) and assign it a nickname. CustomLog: This directive then uses the defined LogFormat (by its nickname) and specifies the path to the log file where the entries should be written. Together, these two directives allow for highly flexible and custom log configurations in Apache. Incorrect:
B. Log and Access: These are not valid Apache directives for defining custom log formats or locations in this manner. Log is a general term, and Access often refers to access control directives like Require. C. LogError and Directory: LogError is related to error logging and Directory is a container directive for applying configurations to specific file system directories. Neither is used for defining custom access log formats. D. Syslog and LogFormat: While LogFormat is correct, Syslog is used to send logs to a system syslog daemon, not to define custom log files and formats for standard access logging within Apache‘s configuration. While Apache can integrate with syslog, it‘s not the primary directive for defining custom file-based log formats.
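As a sketch of how the two directives pair up (the format nickname and the log file path below are arbitrary choices, not fixed values):
LogFormat "%h %l %u %t \"%r\" %>s %b" mylog
CustomLog /var/log/apache2/custom_access.log mylog
The LogFormat line defines the format string and names it, and the CustomLog line writes entries in that named format to the given file.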
Question 53 of 60
53. Question
The standard communication port for the HTTPS protocol is:
Correct
Correct: B. 443
HTTPS (Hypertext Transfer Protocol Secure) uses port 443 by default for secure communication over the internet. This port is specifically registered for HTTPS, ensuring that secure web traffic is directed to the correct service. Incorrect:
A. 143: This port is commonly used for IMAP (Internet Message Access Protocol), which is a protocol for retrieving emails from a mail server. It is not associated with HTTPS. C. 80: This is the standard communication port for HTTP (Hypertext Transfer Protocol), which is the unencrypted version of web traffic. While related to web browsing, it does not provide the security features of HTTPS. D. 8080: This port is often used for alternative HTTP services or proxy servers, but it is not the standard or default port for HTTPS. It‘s frequently used for development servers or as a non-standard port for web applications.
Incorrect
Correct: B. 443
HTTPS (Hypertext Transfer Protocol Secure) uses port 443 by default for secure communication over the internet. This port is specifically registered for HTTPS, ensuring that secure web traffic is directed to the correct service. Incorrect:
A. 143: This port is commonly used for IMAP (Internet Message Access Protocol), which is a protocol for retrieving emails from a mail server. It is not associated with HTTPS. C. 80: This is the standard communication port for HTTP (Hypertext Transfer Protocol), which is the unencrypted version of web traffic. While related to web browsing, it does not provide the security features of HTTPS. D. 8080: This port is often used for alternative HTTP services or proxy servers, but it is not the standard or default port for HTTPS. It‘s frequently used for development servers or as a non-standard port for web applications.
Unattempted
Correct: B. 443
HTTPS (Hypertext Transfer Protocol Secure) uses port 443 by default for secure communication over the internet. This port is specifically registered for HTTPS, ensuring that secure web traffic is directed to the correct service. Incorrect:
A. 143: This port is commonly used for IMAP (Internet Message Access Protocol), which is a protocol for retrieving emails from a mail server. It is not associated with HTTPS. C. 80: This is the standard communication port for HTTP (Hypertext Transfer Protocol), which is the unencrypted version of web traffic. While related to web browsing, it does not provide the security features of HTTPS. D. 8080: This port is often used for alternative HTTP services or proxy servers, but it is not the standard or default port for HTTPS. It‘s frequently used for development servers or as a non-standard port for web applications.
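As a quick way to verify which ports a web server is actually using (output depends on the system; the hostname below is a placeholder):
# show local listeners on the standard web ports
ss -tlnp | grep -E ':(80|443) '
# curl uses port 443 automatically for an https:// URL
curl -I https://www.example.org/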
Question 54 of 60
54. Question
Which Squid configuration keyword is used to define networks and times when a service can be accessed?
Correct
Correct: D. acl
The acl (Access Control List) keyword in Squid is the fundamental building block for defining various criteria, including networks, time ranges, user authentications, URL patterns, and more. Once an acl is defined, it can then be used with directives like http_access to specify when a service can be accessed (e.g., http_access allow my_network time_range_working_hours). Therefore, acl is the keyword used to define these conditions. Incorrect:
A. http_allow: This is not a valid Squid configuration keyword. While http_access allow is used to allow access, http_allow itself does not exist to define conditions. B. allow: This is part of the http_access allow or icp_access allow directives, but it‘s not the keyword used to define the networks and times themselves. It‘s an action taken based on previously defined ACLs. C. permit: Similar to allow, permit is not a Squid configuration keyword for defining access criteria. Squid primarily uses allow and deny with http_access directives.
Incorrect
Correct: D. acl
The acl (Access Control List) keyword in Squid is the fundamental building block for defining various criteria, including networks, time ranges, user authentications, URL patterns, and more. Once an acl is defined, it can then be used with directives like http_access to specify when a service can be accessed (e.g., http_access allow my_network time_range_working_hours). Therefore, acl is the keyword used to define these conditions. Incorrect:
A. http_allow: This is not a valid Squid configuration keyword. While http_access allow is used to allow access, http_allow itself does not exist to define conditions. B. allow: This is part of the http_access allow or icp_access allow directives, but it‘s not the keyword used to define the networks and times themselves. It‘s an action taken based on previously defined ACLs. C. permit: Similar to allow, permit is not a Squid configuration keyword for defining access criteria. Squid primarily uses allow and deny with http_access directives.
Unattempted
Correct: D. acl
The acl (Access Control List) keyword in Squid is the fundamental building block for defining various criteria, including networks, time ranges, user authentications, URL patterns, and more. Once an acl is defined, it can then be used with directives like http_access to specify when a service can be accessed (e.g., http_access allow my_network time_range_working_hours). Therefore, acl is the keyword used to define these conditions. Incorrect:
A. http_allow: This is not a valid Squid configuration keyword. While http_access allow is used to allow access, http_allow itself does not exist to define conditions. B. allow: This is part of the http_access allow or icp_access allow directives, but it‘s not the keyword used to define the networks and times themselves. It‘s an action taken based on previously defined ACLs. C. permit: Similar to allow, permit is not a Squid configuration keyword for defining access criteria. Squid primarily uses allow and deny with http_access directives.
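A sketch combining a network ACL with a time ACL in squid.conf (the names, network range, and hours are hypothetical):
acl office_net src 10.0.0.0/8
# weekdays (M T W H F), 09:00 to 17:00
acl work_hours time M T W H F 09:00-17:00
http_access allow office_net work_hours
http_access deny all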
Question 55 of 60
55. Question
You have made extensive changes to the Apache configuration files and want to check them for errors before restarting the server. What command can you enter to do this?
Correct
Correct: C. apache2ctl configtest
The apache2ctl configtest command (or httpd -t on some systems) is the correct and most commonly used method to check the syntax of Apache configuration files without actually restarting the server. It parses the configuration files and reports any syntax errors, misspelled directives, or other configuration issues, which is crucial for preventing a broken server after a restart. Incorrect:
A. apache2ctl teststat: This is not a standard or valid Apache control command for checking configuration syntax. B. apachectl testconfig: While apachectl is a common utility (often symlinked to apache2ctl), testconfig is not a standard subcommand for syntax checking. The correct subcommand is configtest. D. apachectl configstatus: This is not a standard or valid Apache control command for checking configuration syntax. It sounds like it might relate to runtime status, but not configuration validation.
Incorrect
Correct: C. apache2ctl configtest
The apache2ctl configtest command (or httpd -t on some systems) is the correct and most commonly used method to check the syntax of Apache configuration files without actually restarting the server. It parses the configuration files and reports any syntax errors, misspelled directives, or other configuration issues, which is crucial for preventing a broken server after a restart. Incorrect:
A. apache2ctl teststat: This is not a standard or valid Apache control command for checking configuration syntax. B. apachectl testconfig: While apachectl is a common utility (often symlinked to apache2ctl), testconfig is not a standard subcommand for syntax checking. The correct subcommand is configtest. D. apachectl configstatus: This is not a standard or valid Apache control command for checking configuration syntax. It sounds like it might relate to runtime status, but not configuration validation.
Unattempted
Correct: C. apache2ctl configtest
The apache2ctl configtest command (or httpd -t on some systems) is the correct and most commonly used method to check the syntax of Apache configuration files without actually restarting the server. It parses the configuration files and reports any syntax errors, misspelled directives, or other configuration issues, which is crucial for preventing a broken server after a restart. Incorrect:
A. apache2ctl teststat: This is not a standard or valid Apache control command for checking configuration syntax. B. apachectl testconfig: While apachectl is a common utility (often symlinked to apache2ctl), testconfig is not a standard subcommand for syntax checking. The correct subcommand is configtest. D. apachectl configstatus: This is not a standard or valid Apache control command for checking configuration syntax. It sounds like it might relate to runtime status, but not configuration validation.
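Typical usage looks like this (the exact binary name depends on the distribution):
apache2ctl configtest    # Debian/Ubuntu-style systems
httpd -t                 # Red Hat-style systems
Both report "Syntax OK" when the configuration files parse cleanly, and print the file and line of the first problem otherwise.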
Question 56 of 60
56. Question
When the Apache HTTP Server is configured to use name-based virtual hosts:
Correct
Correct:
E. The NameVirtualHost *:80 setting indicates that all name-based virtual hosts will listen on port 80.
The NameVirtualHost directive (though largely superseded by the default behavior in modern Apache versions, it‘s still relevant for older configurations or explicit definition) explicitly informs Apache that the specified IP address and port (*:80 in this case, meaning all IP addresses on port 80) will be used to serve name-based virtual hosts. This means that Apache will differentiate between multiple virtual hosts on that shared IP/port based on the Host header sent by the client.
Incorrect:
A. You must configure a different IP address for each virtual host. This describes IP-based virtual hosts, not name-based virtual hosts. The primary benefit of name-based virtual hosts is to allow multiple domains to share a single IP address.
B. It is necessary to create a VirtualHost block for the main host. While it is highly recommended and often good practice to define a default VirtualHost block that catches requests not matching any other specific ServerName, it‘s not strictly “necessary“ in the sense that Apache won‘t function without it. If no default is defined, the first VirtualHost block listed for a given IP/port will act as the default. However, for predictable behavior, explicitly defining a default is best. The question asks about necessity, and technically, Apache will still operate.
C. The Listen directive is required for each virtual host. The Listen directive is required once to tell Apache which IP address(es) and port(s) to listen on. It defines the network sockets Apache will bind to. You do not repeat Listen for each VirtualHost block that uses the same IP/port combination. The VirtualHost directive itself specifies the IP/port for that virtual host.
D. Each virtual host can service requests for a single hostname only. This is incorrect. While a ServerName directive specifies the primary hostname for a VirtualHost block, the ServerAlias directive allows you to specify additional hostnames (aliases) that the same VirtualHost block should respond to. This means a single virtual host can serve requests for multiple domain names.
Incorrect
Correct:
E. The NameVirtualHost *:80 setting indicates that all name-based virtual hosts will listen on port 80.
The NameVirtualHost directive (though largely superseded by the default behavior in modern Apache versions, it‘s still relevant for older configurations or explicit definition) explicitly informs Apache that the specified IP address and port (*:80 in this case, meaning all IP addresses on port 80) will be used to serve name-based virtual hosts. This means that Apache will differentiate between multiple virtual hosts on that shared IP/port based on the Host header sent by the client.
Incorrect:
A. You must configure a different IP address for each virtual host. This describes IP-based virtual hosts, not name-based virtual hosts. The primary benefit of name-based virtual hosts is to allow multiple domains to share a single IP address.
B. It is necessary to create a VirtualHost block for the main host. While it is highly recommended and often good practice to define a default VirtualHost block that catches requests not matching any other specific ServerName, it‘s not strictly “necessary“ in the sense that Apache won‘t function without it. If no default is defined, the first VirtualHost block listed for a given IP/port will act as the default. However, for predictable behavior, explicitly defining a default is best. The question asks about necessity, and technically, Apache will still operate.
C. The Listen directive is required for each virtual host. The Listen directive is required once to tell Apache which IP address(es) and port(s) to listen on. It defines the network sockets Apache will bind to. You do not repeat Listen for each VirtualHost block that uses the same IP/port combination. The VirtualHost directive itself specifies the IP/port for that virtual host.
D. Each virtual host can service requests for a single hostname only. This is incorrect. While a ServerName directive specifies the primary hostname for a VirtualHost block, the ServerAlias directive allows you to specify additional hostnames (aliases) that the same VirtualHost block should respond to. This means a single virtual host can serve requests for multiple domain names.
Unattempted
Correct:
E. The NameVirtualHost *:80 setting indicates that all name-based virtual hosts will listen on port 80.
The NameVirtualHost directive (though largely superseded by the default behavior in modern Apache versions, it‘s still relevant for older configurations or explicit definition) explicitly informs Apache that the specified IP address and port (*:80 in this case, meaning all IP addresses on port 80) will be used to serve name-based virtual hosts. This means that Apache will differentiate between multiple virtual hosts on that shared IP/port based on the Host header sent by the client.
Incorrect:
A. You must configure a different IP address for each virtual host. This describes IP-based virtual hosts, not name-based virtual hosts. The primary benefit of name-based virtual hosts is to allow multiple domains to share a single IP address.
B. It is necessary to create a VirtualHost block for the main host. While it is highly recommended and often good practice to define a default VirtualHost block that catches requests not matching any other specific ServerName, it‘s not strictly “necessary“ in the sense that Apache won‘t function without it. If no default is defined, the first VirtualHost block listed for a given IP/port will act as the default. However, for predictable behavior, explicitly defining a default is best. The question asks about necessity, and technically, Apache will still operate.
C. The Listen directive is required for each virtual host. The Listen directive is required once to tell Apache which IP address(es) and port(s) to listen on. It defines the network sockets Apache will bind to. You do not repeat Listen for each VirtualHost block that uses the same IP/port combination. The VirtualHost directive itself specifies the IP/port for that virtual host.
D. Each virtual host can service requests for a single hostname only. This is incorrect. While a ServerName directive specifies the primary hostname for a VirtualHost block, the ServerAlias directive allows you to specify additional hostnames (aliases) that the same VirtualHost block should respond to. This means a single virtual host can serve requests for multiple domain names.
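A minimal Apache 2.2-style sketch of two name-based virtual hosts sharing *:80 (hostnames and document roots are placeholders; in Apache 2.4 the NameVirtualHost line is no longer needed):
Listen 80
NameVirtualHost *:80

<VirtualHost *:80>
    ServerName www.example.org
    ServerAlias example.org
    DocumentRoot /var/www/example-org
</VirtualHost>

<VirtualHost *:80>
    ServerName www.example.net
    DocumentRoot /var/www/example-net
</VirtualHost>
Apache picks the block whose ServerName or ServerAlias matches the Host header; requests that match none fall through to the first block listed for that address and port.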
Question 57 of 60
57. Question
A DNS server has the IP address 192.168.0.1. Which of the following needs to be applied to a client to use this DNS server? (Select 2 responses).
Correct
Correct:
A. Make sure the dns service is listed in the hosts entry in the /etc/nsswitch.conf file.
Explanation: The /etc/nsswitch.conf file (Name Service Switch configuration file) determines the order in which a system looks up hostnames and other network information. The hosts entry specifies which services are consulted for hostname resolution. To use DNS, dns must appear in that entry; it is typically listed after files (which refers to /etc/hosts), for example hosts: files dns, so local entries are checked first and DNS is used for everything else. If dns is missing, the system won‘t use DNS for name resolution even if a nameserver is configured. B. Add the 192.168.0.1 nameserver to /etc/resolv.conf
Explanation: The /etc/resolv.conf file is the primary configuration file for the DNS resolver client on Linux systems. To tell a client which DNS server to use, you add lines in the format nameserver IP_ADDRESS to this file. In this case, nameserver 192.168.0.1 would be added. This file is read by applications that need to resolve hostnames. Incorrect:
C. Run route add nameserver 192.168.0.1
Explanation: The route command (or ip route) is used to manage the system‘s routing table. It deals with how network packets are sent to different destinations, not with configuring DNS servers. There is no nameserver option for the route add command. D. Run ifconfig eth0 nameserver 192.168.0.1
Explanation: The ifconfig command (or ip addr) is used to configure network interfaces (like eth0). While it‘s used for setting IP addresses, netmasks, and other interface-specific parameters, it does not directly configure DNS nameservers. DNS server settings are handled by resolver configuration files like /etc/resolv.conf or by network configuration tools that write to that file (e.g., NetworkManager, systemd-resolved). E. Run bind nameserver 192.168.1.1
Explanation: bind (Berkeley Internet Name Domain) is a software suite used to implement a DNS server. Running a bind command in this context doesn‘t configure a client to use a DNS server. If anything, it would be related to setting up a DNS server on the machine itself, not configuring it as a client. Also, the IP address 192.168.1.1 is different from the one specified in the question (192.168.0.1).
Incorrect
Correct:
A. Make sure the dns service is listed in the hosts entry in the /etc/nsswitch.conf file.
Explanation: The /etc/nsswitch.conf file (Name Service Switch configuration file) determines the order in which a system looks up hostnames and other network information. The hosts entry specifies which services are consulted for hostname resolution. To use DNS, dns must appear in that entry; it is typically listed after files (which refers to /etc/hosts), for example hosts: files dns, so local entries are checked first and DNS is used for everything else. If dns is missing, the system won‘t use DNS for name resolution even if a nameserver is configured. B. Add the 192.168.0.1 nameserver to /etc/resolv.conf
Explanation: The /etc/resolv.conf file is the primary configuration file for the DNS resolver client on Linux systems. To tell a client which DNS server to use, you add lines in the format nameserver IP_ADDRESS to this file. In this case, nameserver 192.168.0.1 would be added. This file is read by applications that need to resolve hostnames. Incorrect:
C. Run route add nameserver 192.168.0.1
Explanation: The route command (or ip route) is used to manage the system‘s routing table. It deals with how network packets are sent to different destinations, not with configuring DNS servers. There is no nameserver option for the route add command. D. Run ifconfig eth0 nameserver 192.168.0.1
Explanation: The ifconfig command (or ip addr) is used to configure network interfaces (like eth0). While it‘s used for setting IP addresses, netmasks, and other interface-specific parameters, it does not directly configure DNS nameservers. DNS server settings are handled by resolver configuration files like /etc/resolv.conf or by network configuration tools that write to that file (e.g., NetworkManager, systemd-resolved). E. Run bind nameserver 192.168.1.1
Explanation: bind (Berkeley Internet Name Domain) is a software suite used to implement a DNS server. Running a bind command in this context doesn‘t configure a client to use a DNS server. If anything, it would be related to setting up a DNS server on the machine itself, not configuring it as a client. Also, the IP address 192.168.1.1 is different from the one specified in the question (192.168.0.1).
Unattempted
Correct:
A. Make sure the dns service is listed in the hosts entry in the /etc/nsswitch.conf file.
Explanation: The /etc/nsswitch.conf file (Name Service Switch configuration file) determines the order in which a system looks up hostnames and other network information. The hosts entry specifies which services are consulted for hostname resolution. To use DNS, dns must appear in that entry; it is typically listed after files (which refers to /etc/hosts), for example hosts: files dns, so local entries are checked first and DNS is used for everything else. If dns is missing, the system won‘t use DNS for name resolution even if a nameserver is configured. B. Add the 192.168.0.1 nameserver to /etc/resolv.conf
Explanation: The /etc/resolv.conf file is the primary configuration file for the DNS resolver client on Linux systems. To tell a client which DNS server to use, you add lines in the format nameserver IP_ADDRESS to this file. In this case, nameserver 192.168.0.1 would be added. This file is read by applications that need to resolve hostnames. Incorrect:
C. Run route add nameserver 192.168.0.1
Explanation: The route command (or ip route) is used to manage the system‘s routing table. It deals with how network packets are sent to different destinations, not with configuring DNS servers. There is no nameserver option for the route add command. D. Run ifconfig eth0 nameserver 192.168.0.1
Explanation: The ifconfig command (or ip addr) is used to configure network interfaces (like eth0). While it‘s used for setting IP addresses, netmasks, and other interface-specific parameters, it does not directly configure DNS nameservers. DNS server settings are handled by resolver configuration files like /etc/resolv.conf or by network configuration tools that write to that file (e.g., NetworkManager, systemd-resolved). E. Run bind nameserver 192.168.1.1
Explanation: bind (Berkeley Internet Name Domain) is a software suite used to implement a DNS server. Running a bind command in this context doesn‘t configure a client to use a DNS server. If anything, it would be related to setting up a DNS server on the machine itself, not configuring it as a client. Also, the IP address 192.168.1.1 is different from the one specified in the question (192.168.0.1).
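Put together, the two client-side pieces look roughly like this (standard file locations, values taken from the question):
# /etc/resolv.conf
nameserver 192.168.0.1

# hosts line in /etc/nsswitch.conf
hosts: files dns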
Question 58 of 60
58. Question
What is a valid option for Squid to define a listening port?
Correct
Correct: B. http_port 3128
The http_port directive in Squid‘s configuration file (squid.conf) is used to specify the TCP port(s) on which Squid will listen for incoming HTTP client requests. The default port for Squid is commonly 3128, but it can be configured to any valid port. Incorrect:
A. squid_port 3128: This is not a valid Squid configuration keyword for defining the listening port. Squid uses http_port. C. http-listen-port = 3128: This syntax is not used by Squid. Squid uses a directive-value pair, not a key-value pair with an equals sign for this specific configuration, and the directive name itself is incorrect. D. port = 3128: This is too generic and not the specific directive Squid uses for defining its listening port. While other applications might use a generic port directive, Squid requires http_port.
Incorrect
Correct: B. http_port 3128
The http_port directive in Squid‘s configuration file (squid.conf) is used to specify the TCP port(s) on which Squid will listen for incoming HTTP client requests. The default port for Squid is commonly 3128, but it can be configured to any valid port. Incorrect:
A. squid_port 3128: This is not a valid Squid configuration keyword for defining the listening port. Squid uses http_port. C. http-listen-port = 3128: This syntax is not used by Squid. Squid uses a directive-value pair, not a key-value pair with an equals sign for this specific configuration, and the directive name itself is incorrect. D. port = 3128: This is too generic and not the specific directive Squid uses for defining its listening port. While other applications might use a generic port directive, Squid requires http_port.
Unattempted
Correct: B. http_port 3128
The http_port directive in Squid‘s configuration file (squid.conf) is used to specify the TCP port(s) on which Squid will listen for incoming HTTP client requests. The default port for Squid is commonly 3128, but it can be configured to any valid port. Incorrect:
A. squid_port 3128: This is not a valid Squid configuration keyword for defining the listening port. Squid uses http_port. C. http-listen-port = 3128: This syntax is not used by Squid. Squid uses a directive-value pair, not a key-value pair with an equals sign for this specific configuration, and the directive name itself is incorrect. D. port = 3128: This is too generic and not the specific directive Squid uses for defining its listening port. While other applications might use a generic port directive, Squid requires http_port.
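In squid.conf this is a single directive, and Squid accepts more than one http_port line if it should listen on several ports (the second port below is just an example):
http_port 3128
http_port 8080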
Question 59 of 60
59. Question
Which option is used to configure pppd to use up to two DNS server addresses provided by the remote server?
Correct
Correct:
D. usepeerdns
The usepeerdns option in pppd (Point-to-Point Protocol Daemon) configuration tells pppd to ask the remote peer for DNS server addresses and then to use those addresses. When this option is enabled, pppd typically writes the received DNS server IP addresses into the /etc/resolv.conf file, allowing the system to use them for name resolution. It will usually accept up to two DNS server addresses. Incorrect:
A. nameserver: While nameserver is used in /etc/resolv.conf to list DNS servers, it is not a direct pppd option to request them from the peer. If you explicitly use nameserver in your pppd configuration, you are providing static DNS server addresses, not asking the peer for them. B. ms-dns: The ms-dns option in pppd is used to explicitly provide a specific DNS server address to pppd, often used on the client side when you want to use a particular DNS server, rather than accepting one from the peer. It‘s for static configuration, not requesting from the peer. C. dns: This is not a standard pppd configuration option for handling DNS server addresses. E. NDA: This is not a valid option and likely stands for “No Data Available“ or “Non-Disclosure Agreement,“ neither of which is relevant to pppd configuration.
Incorrect
Correct:
D. usepeerdns
The usepeerdns option in pppd (Point-to-Point Protocol Daemon) configuration tells pppd to ask the remote peer for DNS server addresses and then to use those addresses. When this option is enabled, pppd typically writes the received DNS server IP addresses into the /etc/resolv.conf file, allowing the system to use them for name resolution. It will usually accept up to two DNS server addresses. Incorrect:
A. nameserver: While nameserver is used in /etc/resolv.conf to list DNS servers, it is not a direct pppd option to request them from the peer. If you explicitly use nameserver in your pppd configuration, you are providing static DNS server addresses, not asking the peer for them. B. ms-dns: The ms-dns option in pppd is used to explicitly provide a specific DNS server address to pppd, often used on the client side when you want to use a particular DNS server, rather than accepting one from the peer. It‘s for static configuration, not requesting from the peer. C. dns: This is not a standard pppd configuration option for handling DNS server addresses. E. NDA: This is not a valid option and likely stands for “No Data Available“ or “Non-Disclosure Agreement,“ neither of which is relevant to pppd configuration.
Unattempted
Correct:
D. usepeerdns
The usepeerdns option in pppd (Point-to-Point Protocol Daemon) configuration tells pppd to ask the remote peer for DNS server addresses and then to use those addresses. When this option is enabled, pppd typically writes the received DNS server IP addresses into the /etc/resolv.conf file, allowing the system to use them for name resolution. It will usually accept up to two DNS server addresses. Incorrect:
A. nameserver: While nameserver is used in /etc/resolv.conf to list DNS servers, it is not a direct pppd option to request them from the peer. If you explicitly use nameserver in your pppd configuration, you are providing static DNS server addresses, not asking the peer for them. B. ms-dns: The ms-dns option in pppd is used to explicitly provide a specific DNS server address to pppd, often used on the client side when you want to use a particular DNS server, rather than accepting one from the peer. It‘s for static configuration, not requesting from the peer. C. dns: This is not a standard pppd configuration option for handling DNS server addresses. E. NDA: This is not a valid option and likely stands for “No Data Available“ or “Non-Disclosure Agreement,“ neither of which is relevant to pppd configuration.
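A sketch of a pppd options or peers file using this setting (the file name and the other options shown are illustrative, not required):
# /etc/ppp/peers/provider
usepeerdns      # accept up to two DNS addresses from the peer
defaultroute    # use the link as the default route
noauth          # do not require the peer to authenticate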
Question 60 of 60
60. Question
What records must be inserted into a zone file to use “Round Robin Load Distribution“ for a web server?
DNS “Round Robin“ load distribution is achieved by simply having multiple A (Address) records for the same hostname, each pointing to a different IP address. When a DNS resolver queries for that hostname, the DNS server will return the A records in a rotating order. This simple mechanism distributes incoming requests across multiple servers, providing a basic form of load balancing. Each A record must be on its own line and follow the standard DNS record syntax. Incorrect:
A. www.example.org. 60 IN A 192.168.1-3: This syntax (192.168.1-3) is not valid for an A record in a DNS zone file. A records require a single, specific IP address. B. www.example.org. 60 IN RR 192.168.1: 3: RR is not a standard DNS record type for IP addresses. A is the correct record type. The syntax 192.168.1: 3 is also invalid. D. www.example.org. 60 IN RR 192.168.1.1; 192.168.1.2; 192.168.1.3: Again, RR is not a valid DNS record type. While it attempts to list multiple IPs, the syntax and record type are incorrect. E. www.example.org. 60 IN A 192.168.1.1; 192.168.1.2; 192.168.1.3: While A is the correct record type, you cannot list multiple IP addresses separated by semicolons on a single line for an A record. Each IP address for Round Robin must be specified on its own separate A record line, as shown in the correct answer.
DNS “Round Robin“ load distribution is achieved by simply having multiple A (Address) records for the same hostname, each pointing to a different IP address. When a DNS resolver queries for that hostname, the DNS server will return the A records in a rotating order. This simple mechanism distributes incoming requests across multiple servers, providing a basic form of load balancing. Each A record must be on its own line and follow the standard DNS record syntax. Incorrect:
A. www.example.org. 60 IN A 192.168.1-3: This syntax (192.168.1-3) is not valid for an A record in a DNS zone file. A records require a single, specific IP address. B. www.example.org. 60 IN RR 192.168.1: 3: RR is not a standard DNS record type for IP addresses. A is the correct record type. The syntax 192.168.1: 3 is also invalid. D. www.example.org. 60 IN RR 192.168.1.1; 192.168.1.2; 192.168.1.3: Again, RR is not a valid DNS record type. While it attempts to list multiple IPs, the syntax and record type are incorrect. E. www.example.org. 60 IN A 192.168.1.1; 192.168.1.2; 192.168.1.3: While A is the correct record type, you cannot list multiple IP addresses separated by semicolons on a single line for an A record. Each IP address for Round Robin must be specified on its own separate A record line, as shown in the correct answer.
DNS “Round Robin“ load distribution is achieved by simply having multiple A (Address) records for the same hostname, each pointing to a different IP address. When a DNS resolver queries for that hostname, the DNS server will return the A records in a rotating order. This simple mechanism distributes incoming requests across multiple servers, providing a basic form of load balancing. Each A record must be on its own line and follow the standard DNS record syntax. Incorrect:
A. www.example.org. 60 IN A 192.168.1-3: This syntax (192.168.1-3) is not valid for an A record in a DNS zone file. A records require a single, specific IP address. B. www.example.org. 60 IN RR 192.168.1: 3: RR is not a standard DNS record type for IP addresses. A is the correct record type. The syntax 192.168.1: 3 is also invalid. D. www.example.org. 60 IN RR 192.168.1.1; 192.168.1.2; 192.168.1.3: Again, RR is not a valid DNS record type. While it attempts to list multiple IPs, the syntax and record type are incorrect. E. www.example.org. 60 IN A 192.168.1.1; 192.168.1.2; 192.168.1.3: While A is the correct record type, you cannot list multiple IP addresses separated by semicolons on a single line for an A record. Each IP address for Round Robin must be specified on its own separate A record line, as shown in the correct answer.
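For reference, the correct form is simply repeated A records for the same name, each on its own line (TTL and addresses taken from the question‘s options):
www.example.org.  60  IN  A  192.168.1.1
www.example.org.  60  IN  A  192.168.1.2
www.example.org.  60  IN  A  192.168.1.3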