Salesforce Certified Platform Integration Architect Practice Test 6
Question 1 of 60
Internal team emails about company policy changes are best classified as:
Option 3 is correct because internal emails generally contain operational information not meant for the public but also not critically sensitive. Option 1 overstates the sensitivity. Option 2 is risky and not compliant with standard practices. Option 4 confuses protection level with classification.
Question 2 of 60
An API call from Salesforce returns "403 Forbidden." What is the likely root cause?
Option 2 is correct because 403 implies access was denied, typically due to missing credentials or wrong permissions. Data types and volume lead to different errors. UI settings don't influence API access.
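The triage logic above can be sketched as a small status-code lookup. This is a hypothetical helper, not Salesforce-defined behavior; the cause messages are illustrative assumptions:

```python
# Hypothetical triage helper: map HTTP status codes from a failed REST call
# to likely root causes. The wording of each cause is illustrative.

def diagnose_status(status: int) -> str:
    """Return a likely root cause for a failed API call."""
    causes = {
        401: "Session expired or token invalid - re-authenticate",
        403: "Access denied - check credentials, permissions, or IP restrictions",
        404: "Endpoint or record not found - verify the URL and record Id",
        429: "Request volume exceeded - client is being throttled",
    }
    return causes.get(status, "Unexpected status - inspect the response body")

print(diagnose_status(403))
# prints: Access denied - check credentials, permissions, or IP restrictions
```

Note how 403 points at authorization, while 429 (covered in a later question) points at throttling; keeping the two distinct avoids chasing the wrong fix.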
Question 3 of 60
A team notices duplicated records after integration. What might be the underlying issue?
Option 2 is correct because deduplication using External IDs is crucial during upsert operations. Bandwidth, dashboards, and authentication do not influence record uniqueness.
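The External ID point can be illustrated with a minimal in-memory upsert: keying each record on a stable external identifier makes message replays idempotent, so they update rather than insert. The field name `Ext_Id__c` is a hypothetical example:

```python
# Minimal sketch of upsert-by-external-ID. Replaying the same payload
# updates the existing record instead of creating a duplicate.

def upsert_all(store: dict, records: list) -> dict:
    for rec in records:
        key = rec["Ext_Id__c"]                      # stable external identifier
        store[key] = {**store.get(key, {}), **rec}  # update-or-insert
    return store

store = {}
payload = [{"Ext_Id__c": "A-1", "Name": "Acme"}]
upsert_all(store, payload)
upsert_all(store, payload)   # replayed message: no duplicate created
print(len(store))            # prints: 1
```

Without the external key (e.g., inserting on every message), the replay would have produced two records.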
Question 4 of 60
A system cannot connect to Salesforce due to a firewall. What type of constraint is this?
Option 2 is correct because firewalls enforce network boundaries, preventing unauthorized traffic. The other options deal with internal configuration, not connectivity.
Question 5 of 60
Why might a Salesforce Flow not trigger after an external system updates a record?
Option 2 is correct because if updates don't go through the API, automation like Flows may not fire. Deactivation would cause all flows to fail, not selectively. Reports and tabs don't influence automation.
Question 6 of 60
A Salesforce API integration returns "Too Many Requests." What should be reviewed first?
Option 1 is correct because the error suggests throttling, which can be mitigated through limits and retries. Permissions and UI settings don't impact request volumes.
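A common client-side mitigation for throttling is exponential backoff with jitter. The sketch below assumes the wrapped callable returns a numeric HTTP status; the retry counts and delays are illustrative defaults, not Salesforce-mandated values:

```python
import random
import time

def call_with_backoff(call, max_retries=5, base=0.5):
    """Retry a callable that returns an HTTP status, backing off on 429."""
    for attempt in range(max_retries):
        status = call()
        if status != 429:
            return status
        # exponential backoff with a little jitter before the next attempt
        time.sleep(base * (2 ** attempt) + random.uniform(0, base / 10))
    raise RuntimeError("still throttled after retries")
```

Usage: `call_with_backoff(lambda: send_request())` returns the first non-429 status, spacing retries further apart each time so a throttled client stops amplifying the load.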
Question 7 of 60
Which of the following is a potential bottleneck in a large-scale real-time integration?
Option 2 is correct because real-time integration depends on event-driven architectures, and a lack of event publishing causes lag. The other choices are UI or metadata concerns and are irrelevant.
Question 8 of 60
Which log file would help analyze integration errors over time?
Option 2 is correct because API debug logs capture call failures, payloads, and response codes critical for troubleshooting. Login or dashboard files do not reveal integration behavior.
Question 9 of 60
What type of data classification applies to a customer's credit card number?
Option 1 is correct because credit card numbers are considered highly sensitive and confidential due to regulatory and compliance requirements such as PCI DSS. Option 2 is incorrect because such data should never be publicly exposed. Option 3 is close, but "secure" is not a classification label; it's a control measure. Option 4, internal, refers to organizational info not intended for external use, but still not as sensitive as credit card data.
Question 10 of 60
How should a product catalog meant for public-facing websites be classified?
Option 3 is correct because product catalogs intended for public consumption are typically classified as public data. Option 1 is incorrect because there’s no sensitivity around the data. Option 2, internal, would restrict access unnecessarily. Option 4, secure, applies to data that needs encryption and access control—not product listings.
Question 11 of 60
A healthcare integration includes patient medical records. What is the correct classification?
Option 2 is correct because medical records are legally protected under laws such as HIPAA and must be treated as confidential data. Option 1 is incorrect because "secure" refers to how the data is protected, not its classification. Option 3 is invalid as such data should never be public. Option 4 is too lenient and insufficient for compliance purposes.
Question 12 of 60
Which type of data should be encrypted both at rest and in transit?
Option 1 is correct because confidential agreements often contain legal and financial terms that require protection via encryption. Options 2 and 3 contain general access data that doesn't require such levels of protection. Option 4 is clearly public-facing and not sensitive.
Question 13 of 60
An API transmits employee salary data to a payroll vendor. What classification should this data have?
Option 3 is correct because salary data contains private financial information that must be treated as confidential. Option 1 is incorrect because salary information should never be exposed publicly. Option 2, while internal, does not fully reflect the required confidentiality. Option 4 is irrelevant.
Question 14 of 60
A system shares press releases with partners. What is the best classification?
Option 1 is correct because press releases are intended for external and public distribution. Option 2 implies access only within an organization, which is overly restrictive. Options 3 and 4 are meant for sensitive data and don't apply to public communications.
Question 15 of 60
What classification applies to unpublished quarterly financial results?
Option 4 is correct because unpublished financial data can affect market behavior and must be restricted and confidential. Option 1 is wrong because the data hasn't been released. Option 2 underestimates the sensitivity. Option 3 describes how it should be handled, not its classification.
Question 16 of 60
What is a common cause for inconsistent record sync between Salesforce and an external system?
Option 2 is correct because sync depends on schedules or events; if missing, records go out of sync. Themes and layouts don't affect data flows. Reports are read-only and don't cause sync issues.
Question 17 of 60
What data classification is suitable for customer names shown on public testimonials?
Option 3 is correct because customer names already exposed in public-facing testimonials are public by nature. Option 1 or 2 would unnecessarily restrict access. Option 4 focuses on protection rather than classification and is too restrictive.
Question 18 of 60
What classification is appropriate for internal training videos not available externally?
Option 3 is correct because the data is meant only for internal personnel but doesn't necessarily contain sensitive details. Option 1 is invalid as the data is not for external use. Option 2 implies it's legally or financially sensitive. Option 4 is overly cautious.
Question 19 of 60
What is a functional requirement in an integration project?
Option 1 is correct because functional requirements describe what the system must do — in this case, real-time updates. Options 2 and 3 are non-functional as they define how the system performs. Option 4 is about data governance, not system functionality.
Question 20 of 60
Which is an example of a non-functional requirement?
Option 3 is correct because response time is about performance, a classic non-functional attribute. Options 1, 2, and 4 describe actual features and behavior of the system, so they are functional.
Question 21 of 60
What's an example of a data consistency requirement in an integration?
Option 1 is correct because data consistency ensures that changes in one system are accurately reflected in another. Option 2 relates to security, not consistency. Option 3 is usability-related. Option 4 is a security setting.
Question 22 of 60
A requirement that "data must be encrypted during transfer" is an example of:
Option 3 is correct because encryption relates to how data is protected, making it a non-functional requirement. Option 1 is incorrect as it does not describe business functionality. Option 2 refers to speed, which is unrelated. Option 4 is incorrect because it doesn't define behavior or logic rules.
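In practice, "encrypted during transfer" is usually satisfied by enforcing TLS on the client side. A minimal Python sketch using the standard `ssl` module, pinning a minimum protocol version so older, weaker protocols are refused:

```python
import ssl

# Build a client-side TLS context that refuses anything older than TLS 1.2,
# so payloads are always encrypted in transit with a modern protocol.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

print(ctx.minimum_version.name)   # prints: TLSv1_2
```

The context would then be passed to whatever HTTP or socket client performs the transfer; certificate verification and hostname checking are already on by default in `create_default_context()`.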
Question 23 of 60
Which of the following best describes a functional requirement?
Option 4 is correct because rolling back transactions defines actual behavior of the system, a functional aspect. Option 1 is a security control. Option 2 is availability-related, and Option 3 is about protection, both of which are non-functional.
Question 24 of 60
What describes a non-functional requirement for an API gateway?
Option 2 is correct because throughput relates to performance, making it a non-functional requirement. Options 1 and 3 describe behavior, and Option 4, while helpful, is more of a monitoring or auditing feature than a pure non-functional spec.
Question 25 of 60
A requirement that "the system must comply with GDPR" is an example of:
Option 3 is correct because compliance falls under non-functional needs, defining standards or laws the system must adhere to. Option 1 is about labeling data. Option 2 implies functionality. Option 4 is too broad to be precise.
Question 26 of 60
A business states that API results must be returned in under 1 second 95% of the time. This is a:
Option 2 is correct because this is a performance requirement under SLA definitions, hence non-functional. Option 1 would require specifying behavior. Options 3 and 4 are unrelated to timing or service expectations.
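The "under 1 second, 95% of the time" target is a p95 latency SLA, which can be checked against measured samples. The sketch below uses the nearest-rank percentile, one common convention among several; the sample data is illustrative:

```python
def p95(samples):
    """Nearest-rank 95th percentile of a list of latency samples (seconds)."""
    ordered = sorted(samples)
    rank = -(-95 * len(ordered) // 100) - 1   # ceil(0.95 * n) - 1
    return ordered[rank]

def meets_sla(samples, threshold=1.0):
    """True if at least 95% of calls finished under the threshold."""
    return p95(samples) < threshold

# 100 samples: mostly fast, with a couple of slow outliers the SLA tolerates
latencies = [0.2, 0.4, 0.3, 0.9, 2.5] + [0.5] * 95
print(meets_sla(latencies))   # prints: True
```

Note that a mean or max would mislead here: the 2.5 s outlier blows the max but sits in the 5% tail the SLA explicitly allows.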
Question 27 of 60
Which is a functional requirement for an order management integration?
Option 1 is correct because it defines a core task the system must perform — logging orders. Option 2 is about format, not function. Options 3 and 4 are security and reliability measures, not functional behaviors.
Question 28 of 60
What is a key CRM success factor that should be supported through integration?
Option 1 is correct because successful CRM adoption relies on timely insights into customer behavior and touchpoints, which require integrated, real-time data across systems. Option 2 limits visibility to outdated data, which affects decision-making. Option 3 undermines adoption and usability. Option 4 is error-prone and inefficient, introducing bottlenecks and data mismatches.
Question 29 of 60
What integration requirement best supports sales user adoption of CRM?
Option 1 is correct because sales users benefit from automation that reduces friction, such as leads being automatically pushed from marketing systems to CRM. Option 2 introduces delays and requires extra effort from users. Option 3 reduces accessibility and usability. Option 4 impacts data trust, leading to low adoption.
Question 30 of 60
Which factor is most critical to ensure CRM data quality through integration?
Option 1 is correct because deduplication at the point of integration prevents the accumulation of redundant or conflicting records, which directly supports CRM data quality. Option 2 is UI-related and cosmetic. Option 3 is reactive and not preventative. Option 4 increases the chance of human error and inconsistency.
Question 31 of 60
Which constraint affects real-time data availability in an existing system?
Option 1 is correct because real-time integration depends on systems that can notify or push events. The other options are about UI or storage, which don't trigger live updates.
Question 32 of 60
A healthcare firm integrates an EMR with Salesforce. The EMR only supports SOAP 1.1 over TLS 1.2 with client certificates and enforces a 2 MB message size. The Salesforce side uses a middleware that can do REST or SOAP. Which factor most directly represents a "limitation" that drives interface design choices now?
The EMR's enforced SOAP 1.1 with mutual TLS and a 2 MB limit is an immediate, non-negotiable limitation that constrains how requests are formed, how security is handled, and how payloads are chunked. Standards and boundaries are often dictated by systems that cannot change quickly, so integration must adapt to them now. This constraint directly impacts serialization (XML), security setup (client certs), and potential need for message splitting or compression. Option 2 describes middleware flexibility, which is helpful but not constraining; it does not force a design path. Option 3 is a preference, not a boundary; standards can be adapted or mapped by transformation layers. Option 4 is a future plan and does not influence current compliance needs. Prioritizing hard external constraints prevents build-time surprises. Recognizing payload ceilings early avoids runtime failures and retries. Finally, security handshakes like mTLS require certificate lifecycle planning that must be reflected in the design immediately.
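A hard payload ceiling like the EMR's 2 MB limit is typically handled by batching records so each serialized message stays under the cap. A minimal sketch (the limit, record shape, and JSON serialization are illustrative; a SOAP payload would serialize to XML instead):

```python
import json

def chunk_records(records, max_bytes=2_000_000):
    """Group records into batches whose serialized size stays under max_bytes.

    Assumes any single record fits under the cap on its own; an oversized
    record would need compression or splitting instead.
    """
    batches, current = [], []
    for rec in records:
        candidate = current + [rec]
        if current and len(json.dumps(candidate).encode("utf-8")) > max_bytes:
            batches.append(current)   # close the full batch, start a new one
            current = [rec]
        else:
            current = candidate
    if current:
        batches.append(current)
    return batches

# Demo with a tiny 500-byte ceiling so the batching is visible
msgs = chunk_records([{"id": i, "note": "x" * 50} for i in range(100)], max_bytes=500)
print(all(len(json.dumps(b).encode("utf-8")) <= 500 for b in msgs))   # prints: True
```

Sizing against the serialized bytes, not the record count, is the point: record sizes vary, so a fixed "N records per call" rule would eventually breach the ceiling at runtime.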
Question 33 of 60
33. Question
A fintech org has integrations crossing a corporate DMZ with IP allowlists, proxy inspection, and outbound-only traffic from Salesforce. The business needs near real-time customer updates from a core banking host into Salesforce. Which approach best respects current network “boundaries and protocols” while achieving the requirement?
Correct
The best fit is to avoid direct inbound calls to Salesforce and instead leverage a pattern that respects outbound-only from Salesforce while using the DMZ proxy and allowed egress to middleware. A near real-time approach is achieved by event publishing from core banking to a secure gateway, then middleware consumes and pushes via an allowed path (or Salesforce subscribes via outbound connections). Option 2 proposes direct inbound to Salesforce REST APIs from the banking host, which violates outbound-only and likely fails the allowlist/proxy boundary. Option 3 requires opening bidirectional rules, breaching the established security posture. Option 4 asks for a VPN to the banking network, which is usually disallowed from multi-tenant SaaS to core banking and contradicts the outbound-only constraint. The selected pattern decouples producers and consumers and adheres to inspection and allowlist controls. It leverages DMZ components to maintain enterprise security standards. Event/middleware buffering also helps with resilience and back-pressure. Near real-time delivery is preserved without violating boundaries.
Question 34 of 60
34. Question
A subscription service has nightly SFTP batch loads of 5–10 GB and also a REST integration for real-time entitlement checks. The business wants “consolidation” but cannot change the upstream billing vendor this quarter. Which standardization should the architect recommend FIRST to align with current protocols and reduce risk?
Correct
Establishing a common contract for identifiers, naming, and versioning is a standards step that can be applied without replacing protocols immediately. It respects existing SFTP and REST patterns while creating consistency that lowers mapping errors and future migration cost. Option 2 replaces SFTP with queues, which is high-risk given the vendor cannot change this quarter. Option 3 similarly forces a vendor change and ignores contractual or technical constraints. Option 4 degrades capability by eliminating real-time checks, harming customer experience and violating non-functional needs. A canonical contract aligns schema semantics across flows, enabling reliable transformations. It also supports observability because fields are predictable and traceable. Versioning avoids breaking changes when either side evolves. This incremental standardization is a low-risk, high-value first step. It creates a path for later protocol convergence.
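A canonical contract can be introduced as a thin mapping layer over the existing SFTP and REST flows. The sketch below is illustrative: the field names, the `contractVersion` scheme, and the vendor payload shapes are assumptions, not the actual billing vendor's formats.

```python
# Sketch: one canonical shape produced from two existing flows,
# without changing either protocol. All field names are illustrative.
CANONICAL_VERSION = "1.0"

def from_billing_csv(row: dict) -> dict:
    """Map a nightly SFTP batch CSV row into the canonical shape."""
    return {
        "contractVersion": CANONICAL_VERSION,
        "customerId": row["CUST_NO"].strip(),   # vendor's customer identifier
        "entitlementId": row["ENT_ID"].strip(),
        "status": row["STATUS"].lower(),
    }

def from_entitlement_api(payload: dict) -> dict:
    """Map a real-time REST entitlement response into the same shape."""
    return {
        "contractVersion": CANONICAL_VERSION,
        "customerId": payload["customer"]["id"],
        "entitlementId": payload["id"],
        "status": payload["state"].lower(),
    }
```

Because both mappers emit the same versioned shape, downstream consumers and monitoring can treat batch and real-time records identically, and a later protocol change only swaps one mapper.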
Question 35 of 60
35. Question
An enterprise has Salesforce and multiple ERPs. Discovery reveals different auth schemes (OAuth 2.0, Basic, API keys) and inconsistent error formats. The business requires a uniform governance model for “standards and limitations” without disrupting live traffic. What should the architect define as the immediate enforcement point?
Correct
A mediation layer (API gateway or middleware) policy set allows normalization of authentication, rate limits, and error formats without touching each backend immediately. This approach enforces standards at a boundary where change is manageable and reversible. It provides consistent client experience while honoring backend limitations. Option 2 implies rewriting all adapters, which is disruptive and risky for live systems. Option 3 is an abrupt directive that may cause outages and noncompliance; deprecation must be phased with compensating controls. Option 4 is a platform consolidation strategy, not an immediate enforcement mechanism for standards. Policy-as-code at the gateway enables progressive hardening and observability. It also supports exception handling where legacy constraints apply. Over time, the gateway can deprecate weak methods safely. This incremental control aligns governance with operational realities and minimizes business disruption.
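Error-format normalization is one concrete policy such a mediation layer can apply without touching backends. The sketch below is a minimal illustration; the backend identifiers and their raw error shapes are hypothetical.

```python
# Sketch: a gateway policy that maps heterogeneous backend errors into
# one client-facing format. Backend names and raw shapes are assumptions.
def normalize_error(backend: str, raw: dict) -> dict:
    if backend == "erp_soap":
        code = raw.get("faultcode", "UNKNOWN")      # SOAP-fault style
        message = raw.get("faultstring", "")
    elif backend == "erp_rest":
        code = str(raw.get("status", "UNKNOWN"))    # HTTP-status style
        message = raw.get("detail", "")
    else:
        code, message = "UNKNOWN", str(raw)
    return {"errorCode": code, "message": message, "source": backend}
```

Clients then handle one error contract regardless of which ERP answered, while each backend's native format stays untouched behind the boundary.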
Question 36 of 60
36. Question
A government client mandates FIPS 140-2 validated cryptography and TLS 1.2+, and prohibits long-lived refresh tokens. The current mobile app uses OAuth 2.0 with offline access and 90-day refresh tokens. Which action best aligns with “standards and limitations” while maintaining usability?
Correct
Moving to short-lived access tokens with PKCE and tightening refresh token rotation meets the security standards while staying within OAuth 2.0 best practices for mobile. It directly addresses long-lived credential risk and aligns with FIPS/TLS mandates. PKCE strengthens public clients without secrets, and frequent refresh rotation reduces token theft impact. Option 2 leaves the core violation (90-day refresh) in place; fingerprinting is not a substitute for proper token lifetime controls. Option 3 misapplies SAML in a mobile API context and complicates flows without solving token lifetime constraints. Option 4 ignores the mandate; exceptions undermine governance and risk acceptance. The chosen option is technically feasible and compatible with current protocols. It preserves user experience via silent refresh while staying compliant. It also simplifies future audits by matching policy to implementation.
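The PKCE piece of that recommendation is mechanical: the mobile client generates a high-entropy `code_verifier` and sends its S256 `code_challenge` with the authorization request (per RFC 7636). A minimal sketch:

```python
import base64
import hashlib
import secrets

def make_pkce_pair():
    """Generate a PKCE code_verifier and its S256 code_challenge (RFC 7636).
    Base64url encoding without padding, as the spec requires."""
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode("ascii")
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return verifier, challenge
```

The verifier stays on the device and is sent only on the token exchange, so an intercepted authorization code is useless without it; token lifetime and refresh rotation are then enforced server-side by policy.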
Question 37 of 60
37. Question
During discovery, you find a legacy order system that exposes SOAP operations requiring WS-Security UsernameToken with timestamps. The middleware prefers OAuth-backed REST. The business wants minimal change to the legacy. How should the “boundary” be defined to respect existing protocols?
Correct
A façade pattern preserves the legacy SOAP contract with WS-Security while allowing upstream consumers to use standardized OAuth REST. This defines a boundary where translation, security headers, and timestamp handling are centralized. It minimizes change to the legacy, which meets the business constraint. Option 2 forces migration, which is risky and likely out of scope for “evaluate current landscape.” Option 3 couples Salesforce to SOAP and WS-Security details, increasing complexity and governance overhead. Option 4 weakens security by bypassing WS-Security requirements and is unlikely to pass audit. The façade also provides a place for schema mapping and error normalization. It supports gradual modernization without breaking existing clients. Observability and retries can be handled consistently at this boundary. This approach is a standard solution for protocol mismatch in heterogeneous estates.
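At that boundary, the façade is what injects the WS-Security header the legacy system expects. The sketch below builds a plain-text UsernameToken fragment with a Created timestamp; a production façade would typically use a SOAP library, a PasswordDigest, and a signed Timestamp block, so treat this as illustrative only.

```python
import datetime
import uuid

# OASIS WS-Security 1.0 namespace and profile URIs.
WSSE_NS = ("http://docs.oasis-open.org/wss/2004/01/"
           "oasis-200401-wss-wssecurity-secext-1.0.xsd")
WSU_NS = ("http://docs.oasis-open.org/wss/2004/01/"
          "oasis-200401-wss-wssecurity-utility-1.0.xsd")
PASSWORD_TEXT = ("http://docs.oasis-open.org/wss/2004/01/"
                 "oasis-200401-wss-username-token-profile-1.0#PasswordText")

def username_token_header(username: str, password: str) -> str:
    """Build the UsernameToken header a facade could inject when
    translating an OAuth-authenticated REST call into legacy SOAP."""
    created = datetime.datetime.now(datetime.timezone.utc).strftime(
        "%Y-%m-%dT%H:%M:%SZ")
    token_id = f"UsernameToken-{uuid.uuid4().hex}"
    return (
        f'<wsse:Security xmlns:wsse="{WSSE_NS}" xmlns:wsu="{WSU_NS}">'
        f'<wsse:UsernameToken wsu:Id="{token_id}">'
        f'<wsse:Username>{username}</wsse:Username>'
        f'<wsse:Password Type="{PASSWORD_TEXT}">{password}</wsse:Password>'
        f'<wsu:Created>{created}</wsu:Created>'
        f'</wsse:UsernameToken>'
        f'</wsse:Security>'
    )
```

Centralizing this in the façade means consumers never see WS-Security details, and credential or timestamp-skew handling lives in one place.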
Question 38 of 60
38. Question
A media company uses Platform Events, Change Data Capture, and an external webhook system. They complain about duplicate downstream processing. As you evaluate “standards and limitations,” what should you recommend first to clarify event semantics across protocols?
Correct
An enterprise event taxonomy defines what each event represents, how it is identified, and the delivery guarantees (at-least-once, exactly-once via idempotency keys). Clarifying semantics reduces accidental duplication and aligns diverse mechanisms. This does not require ripping out existing protocols; it standardizes how they are used. Option 2 forces a single mechanism but may not be feasible for external systems and ignores legitimate use cases for webhooks. Option 3 treats symptoms rather than causes; faster consumers still mis-handle duplicates. Option 4 removes async benefits and adds coupling, harming resilience and scale. Idempotency keys let handlers safely de-dup across CDC, events, and webhooks. Taxonomy and standards provide a shared language for producers and consumers. This is essential for governance and monitoring. It also simplifies replay strategies and error handling consistently.
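The de-duplication side of that taxonomy is simple to express: every consumer checks the event's idempotency key before processing. The sketch below uses an in-memory set and an assumed `idempotencyKey` field; a production consumer would use a durable cache with a TTL sized to the replay window.

```python
# Sketch: an at-least-once consumer de-duplicating by idempotency key,
# usable in front of CDC, platform-event, or webhook handlers alike.
class IdempotentHandler:
    def __init__(self, process):
        self._seen = set()       # illustrative; use a durable store in production
        self._process = process

    def handle(self, event: dict) -> bool:
        """Process the event exactly once; return False for duplicates."""
        key = event["idempotencyKey"]   # field name assumed by the taxonomy
        if key in self._seen:
            return False
        self._seen.add(key)
        self._process(event)
        return True
```

Because the key is defined by the taxonomy rather than by the transport, the same guard works no matter which mechanism delivered the event.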
Question 39 of 60
39. Question
A regional deployment must comply with data residency laws. Current architecture centralizes integrations in a single US-hosted middleware, and Salesforce orgs exist in multiple regions. Which change best respects “boundaries and limitations” without a full re-platform?
Correct
Adding regional mediation nodes creates enforcement points within legal boundaries, ensuring data residency while maintaining a federated integration pattern. These nodes can filter, tokenize, or anonymize fields before forwarding, satisfying local restrictions. Option 2 depends on legal mechanisms but does not technically enforce residency, risking non-compliance. Option 3 violates residency by mirroring first, even if masking occurs later; the breach happens on ingress. Option 4 degrades business operations and introduces data quality risks. Regional nodes respect existing platforms and minimize disruption while embedding compliance controls. They also help with audit trails and regional SLAs. This approach scales as new regions are added. It preserves a common governance layer with localized policies.
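The tokenization step a regional node performs can be as simple as replacing restricted fields with deterministic tokens before forwarding. In this sketch the field list and salt handling are illustrative; a real deployment would use a vaulted tokenization service so tokens are reversible only in-region.

```python
import hashlib

# Sketch: a regional node tokenizing resident-restricted fields before
# the record leaves the region. Field names and salt use are assumptions.
RESTRICTED_FIELDS = {"nationalId", "dateOfBirth"}

def tokenize_record(record: dict, salt: str) -> dict:
    out = {}
    for field, value in record.items():
        if field in RESTRICTED_FIELDS:
            # Deterministic so the same value always maps to the same token,
            # preserving joins downstream without exposing the raw value.
            digest = hashlib.sha256(f"{salt}:{value}".encode("utf-8")).hexdigest()
            out[field] = f"tok_{digest[:16]}"
        else:
            out[field] = value
    return out
```

The central middleware then only ever sees tokens, so residency is enforced technically at ingress rather than contractually after the fact.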
Question 40 of 60
40. Question
In discovery workshops, teams report hitting Salesforce API limits during peak hours and inconsistent pagination logic across consumers. To document “limitations and standards,” what should the architect specify for client consumption patterns?
Correct
A shared client standard that defines exponential backoff, retry policies, consistent pagination, and appropriate use of Composite/Batch APIs directly addresses the operational limitations. It makes consumption predictable and fair, reducing spikes and errors. Option 2 treats the symptom by asking for more limits but does not fix poor client behavior; limits are finite. Option 3 over-caches and risks staleness, violating data freshness needs and potentially causing logic errors. Option 4 restricts business capability by forbidding daytime reads, which is not realistic for interactive workloads. Client patterns are a controllable lever that improve resilience under constraints. Standard pagination avoids duplicate or missing records. Backoff and jitter reduce thundering herds. Composite endpoints minimize round trips within limits. Documenting and enforcing these standards institutionalizes good behavior.
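Two of those standards can be sketched directly: full-jitter exponential backoff, and cursor pagination with a single agreed loop shape. Names here are illustrative; `fetch_page` stands in for whatever paginated API call the client makes.

```python
import random

def backoff_delays(attempts: int, base: float = 0.5, cap: float = 30.0):
    """Full-jitter exponential backoff: each delay is uniform in
    [0, min(cap, base * 2**attempt)], which avoids thundering herds."""
    return [random.uniform(0.0, min(cap, base * (2 ** i))) for i in range(attempts)]

def fetch_all(fetch_page):
    """Standard cursor pagination loop.
    fetch_page(cursor) -> (records, next_cursor); next_cursor None ends."""
    records, cursor = [], None
    while True:
        page, cursor = fetch_page(cursor)
        records.extend(page)
        if cursor is None:
            return records
```

Every consumer using the same loop shape eliminates the duplicate-or-missing-record bugs that ad-hoc offset arithmetic causes, and the jittered delays spread retries out under rate limits.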
Question 41 of 60
41. Question
What pain-point might indicate a poorly integrated third-party data source?
Correct
Option 2 is correct because inconsistent data reflects synchronization failures or missing transformation logic. The other options are positive or irrelevant and don’t signal an integration issue.
Question 42 of 60
42. Question
Which scenario suggests a system boundary issue during integration?
Correct
Option 2 is correct because lack of access due to firewall or VPN rules is a classic system boundary problem. UI, app installs, or reports are end-user issues, not architectural boundaries.
Question 43 of 60
43. Question
A system reports multiple failed authentication attempts. What could be a root cause?
Correct
Option 2 is correct because failed logins often stem from incompatible authentication standards. Dashboards, code coverage, and email issues don’t relate to auth flows.
Question 44 of 60
44. Question
When a system landscape involves both REST and SOAP APIs, what should the architect evaluate?
Correct
Option 2 is correct because handling both APIs requires careful management of limits and formats. The other options are unrelated to the technical constraints of mixed protocol integration.
Question 45 of 60
45. Question
A customer’s FTP-based integration is dropping files. What might be the technical pain-point?
Correct
Option 2 is correct because FTP-based issues often relate to transfer size limits or unstable sessions. Layouts, filters, and record rules are not part of file transport logic.
Question 46 of 60
46. Question
A global retailer is documenting their current integrations. They discover multiple endpoints, some using REST with OAuth 2.0, some SOAP with mutual TLS, and some nightly CSV SFTP drops. Given a requirement to map “standards, limitations, boundaries, and protocols,” what is the FIRST artifact the architect should produce to represent the as-is landscape consistently?
Correct
The correct answer is a canonical integration inventory because the task explicitly asks to identify the current system landscape and determine standards, limitations, boundaries, and protocols, which requires a structured catalog of what exists today. An inventory captures each endpoint’s protocol (REST, SOAP, SFTP), authentication (OAuth, mTLS), volumes, SLAs, schedules, and ownership, enabling a precise view of constraints before any design decisions. Starting with as-is facts prevents premature solutioning and avoids confirmation bias. It also makes gaps and duplicates visible across teams. Additionally, this inventory aligns stakeholders on terminology and scope. Option 2 is future-state; it assumes decisions have been made and obscures current constraints. Option 3 is an execution artifact and presupposes that analysis is complete; it is too granular and time-boxed for landscape discovery. Option 4 is a financial artifact and does not capture technical standards or boundaries. None of the incorrect options provide the breadth of metadata needed to reason about standards and limitations. The inventory becomes the foundation for later design reviews and risk analysis. It also supports governance by feeding into integration lifecycle management.
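One row of such an inventory can be modeled as a small structured record so it stays machine-readable and diffable. The attribute names below simply mirror the metadata listed in the explanation and are illustrative, not a prescribed schema.

```python
from dataclasses import dataclass, asdict

# Sketch: one entry of a canonical integration inventory. Field names
# follow the metadata discussed above and are illustrative.
@dataclass
class IntegrationInventoryEntry:
    name: str
    protocol: str      # e.g. "REST", "SOAP", "SFTP"
    auth: str          # e.g. "OAuth 2.0", "mTLS", "SSH key"
    direction: str     # "inbound" | "outbound" | "batch"
    peak_volume: str   # e.g. "10k req/day", "5-10 GB nightly"
    sla: str
    schedule: str      # "continuous" or a batch window
    owner: str         # accountable team
```

Keeping the inventory as structured data (rather than slideware) lets governance tooling query it, flag duplicate flows, and detect missing ownership automatically.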
Question 47 of 60
47. Question
During an integration review, an API returns ‘429 Too Many Requests’. What does this imply?
Correct
Option 2 is correct because a 429 error specifically signals rate limiting, which is a key performance constraint. Other options suggest unrelated issues.
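A client that receives HTTP 429 should back off before retrying, honoring the server's Retry-After header when one is sent. A minimal sketch of that delay calculation (constants and names are illustrative):

```python
import random
from typing import Optional

def retry_delay(attempt: int, retry_after: Optional[float] = None,
                base: float = 1.0, cap: float = 60.0) -> float:
    """Seconds to wait before retrying a rate-limited (429) call.

    If the server supplied a Retry-After value, honor it; otherwise use
    exponential backoff with full jitter, capped at `cap` seconds.
    """
    if retry_after is not None:
        return retry_after
    return random.uniform(0, min(cap, base * 2 ** attempt))
```

For example, `retry_delay(3, retry_after=10.0)` returns `10.0`, while `retry_delay(3)` picks a random delay up to 8 seconds. Jitter spreads retries out so many throttled clients do not all hammer the endpoint at the same instant.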
Question 48 of 60
48. Question
What does inconsistent data between systems typically point to?
Correct
Option 2 is correct because poor synchronization or missing data mapping causes discrepancies across systems. Branding or visuals do not affect data integrity.
Question 49 of 60
49. Question
Why would an integration architect review API logs during system analysis?
Correct
Option 2 is correct because API logs provide insight into failures, latency, and usage — all critical for diagnosing integration health. The other choices focus on unrelated UI or metadata updates.
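To make the point concrete, even a handful of log records lets an architect quantify failure rate and spot latency outliers. The records below are invented for illustration; real API logs (e.g. Salesforce Event Monitoring) carry richer fields:

```python
# Hypothetical API log entries: status code and round-trip latency per call.
logs = [
    {"endpoint": "/orders", "status": 200, "latency_ms": 120},
    {"endpoint": "/orders", "status": 500, "latency_ms": 2300},
    {"endpoint": "/orders", "status": 200, "latency_ms": 180},
    {"endpoint": "/orders", "status": 429, "latency_ms": 15},
]

# Failure rate: share of calls with a 4xx/5xx status.
failures = [r for r in logs if r["status"] >= 400]
failure_rate = len(failures) / len(logs)

# Latency outlier: the slowest call often points at the failing dependency.
slowest = max(logs, key=lambda r: r["latency_ms"])
```

Here half the calls failed and the slowest call is the 500 error, which is exactly the kind of signal log review during system analysis is meant to surface.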
Question 50 of 60
50. Question
What is the first step when determining the authentication needs of an external system integrating with Salesforce?
Correct
Option 1 is correct because understanding which authentication protocols (like OAuth, SAML) are supported is foundational to planning secure access. Option 2 is about UI and not related to authentication. Option 3 focuses on analytics, which doesn‘t help with identity verification. Option 4 addresses scale, not security design.
Question 51 of 60
51. Question
Which authentication method is most appropriate when integrating Salesforce with a mobile app that cannot store long-lived credentials?
Correct
Option 2 is correct because the Username-password OAuth flow provides short-term tokens and avoids the need for persistent credential storage on the device. Option 1 requires longer-lived sessions. Option 3 is typically used for server-to-server without user involvement. Option 4 is insecure for mobile contexts.
Question 52 of 60
52. Question
What's the most secure way to authenticate an external system that does not support user interaction during login?
Correct
Option 1 is correct because the JWT Bearer Flow is designed for server-to-server integrations where no user is present. Option 2 requires browser interaction. Option 3 involves manual steps and is unsuitable for background systems. Option 4 is a security anti-pattern and highly discouraged.
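In the JWT Bearer Flow, the server builds a short-lived JWT asserting who it is (`iss` = connected app consumer key, `sub` = integration username, `aud` = login URL) and exchanges it for an access token. A sketch of the assertion's construction; the key and username are placeholders, and the RS256 signature step (which needs a crypto library and the app's private key) is deliberately omitted:

```python
import base64
import json
import time

def b64url(data: bytes) -> bytes:
    """Base64url-encode without padding, as JWT requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=")

header = {"alg": "RS256"}
claims = {
    "iss": "3MVG9...consumer_key",            # placeholder consumer key
    "sub": "integration.user@example.com",    # placeholder integration user
    "aud": "https://login.salesforce.com",
    "exp": int(time.time()) + 180,            # expire in 3 minutes
}

# header.claims — the input that gets signed with the app's private key.
signing_input = (b64url(json.dumps(header).encode()) + b"." +
                 b64url(json.dumps(claims).encode()))
# An RS256 signature over signing_input is appended as the third segment,
# then the full JWT is POSTed to the token endpoint as an assertion.
```

Because the trust comes from the pre-shared certificate rather than a user session, no user is ever prompted, which is why this flow fits unattended server-to-server integrations.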
Question 53 of 60
53. Question
How should an architect handle authorization when a partner system needs access only to a specific set of Salesforce objects?
Correct
Option 2 is correct because permission sets allow granular control of object-level access based on needs. Option 1 violates least-privilege principles. Option 3 is too broad and not tailored to the use case. Option 4 addresses network security, not access rights.
Question 54 of 60
54. Question
Which Salesforce feature ensures that external users can only access data scoped to their associated account?
Correct
Option 3 is correct because External Account Hierarchies enforce data access based on the user's account. Role hierarchies expand access upward. Login Flows manage UI flow, not record visibility. Community profiles help assign base access but lack dynamic scoping.
Question 55 of 60
55. Question
A client's system requires login via a centralized Identity Provider. What integration pattern satisfies this?
Correct
Option 2 is correct because SAML enables federated identity and supports SSO through a central identity provider. Option 1 uses credentials directly and bypasses the IDP. Option 3 is a UI choice, not a protocol. Option 4 is unrelated to authentication.
Question 56 of 60
56. Question
Which situation best calls for using Named Credentials in Salesforce?
Correct
Option 2 is correct because Named Credentials handle endpoint and authentication details, simplifying secure external API calls. Option 1 and 3 are unrelated to integration. Option 4 adds unnecessary complexity and duplicates native features.
Question 57 of 60
57. Question
In a scenario where multiple third-party systems must access different Salesforce APIs, what is the best practice for managing credentials?
Correct
Option 2 is correct because separate users with scoped access ensure better auditability and minimize breach risk. Shared or guest users violate security best practices. Admin reuse creates an over-permissioned attack surface.
Question 58 of 60
58. Question
What is the main benefit of using OAuth scopes in an integration scenario?
Correct
Option 2 is correct because OAuth scopes clearly define what a client app can access, reducing exposure. Option 1 is done through HTTPS. Option 3 is UI-level, not API-level. Option 4 is unrelated to token-based access.
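Scope enforcement boils down to a set check: the scopes granted to the token must cover everything the call needs. A minimal sketch (the scope names are illustrative, though Salesforce does commonly grant scopes such as `api` and `refresh_token`):

```python
def has_required_scopes(granted, required):
    """True if the token's granted scopes cover everything the call requires."""
    return set(required) <= set(granted)

# A token granted only API and refresh access cannot exercise broader scopes.
granted = {"api", "refresh_token"}
api_ok = has_required_scopes(granted, {"api"})       # True
full_ok = has_required_scopes(granted, {"full"})     # False
```

This is the "reduced exposure" the explanation refers to: even a leaked token can only do what its scopes allow.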
Question 59 of 60
59. Question
What problem might occur if an external system tries to call a Salesforce API without authentication?
Correct
Option 3 is correct because failing to authenticate leads to HTTP 401, preventing access. Option 1 is false; unauthenticated calls are blocked. Option 2 is a data issue, not auth-related. Option 4 is not relevant unless login succeeds.
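The distinction between the statuses matters for troubleshooting: 401 means the caller never proved who it is, while 403 means it authenticated but lacks permission. A client-side sketch of that mapping (exception names are illustrative):

```python
class AuthenticationError(Exception):
    """Raised when a call is rejected for missing/invalid credentials."""

def check_response(status: int) -> None:
    """Translate common HTTP statuses from an API call into exceptions."""
    if status == 401:
        raise AuthenticationError("Missing or invalid credentials (HTTP 401)")
    if status == 403:
        raise PermissionError("Authenticated but not authorized (HTTP 403)")
    if status >= 400:
        raise RuntimeError(f"Request failed with HTTP {status}")
    # 2xx/3xx: nothing to raise.
```

An unauthenticated call to a Salesforce API lands in the first branch: the request is blocked before any data or record-level permissions even come into play.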
Question 60 of 60
60. Question
A batch job fails because the target system rejects payloads over 2MB. What type of constraint is this?
Correct
Option 2 is correct because the system's rejection is due to exceeding payload limits, which is a common constraint in APIs. Storage limits affect saved data, not in-transit data. Schema mismatches would throw validation errors. Permission issues relate to access, not size.
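The usual remedy for a payload-size constraint is to split the batch so each outgoing payload serializes under the cap. A greedy sketch, assuming no single record exceeds the limit on its own:

```python
import json

LIMIT_BYTES = 2 * 1024 * 1024  # the target system's 2 MB payload cap

def chunk_records(records, limit=LIMIT_BYTES):
    """Greedily pack records into payloads that each serialize under `limit`.

    Assumes no individual record is larger than `limit` by itself.
    """
    batches, current = [], []
    for rec in records:
        candidate = current + [rec]
        if current and len(json.dumps(candidate).encode()) > limit:
            batches.append(current)   # current batch is full; start a new one
            current = [rec]
        else:
            current = candidate
    if current:
        batches.append(current)
    return batches
```

Re-serializing the candidate batch on every record is O(n²) and fine for a sketch; a production job would track the running byte count instead.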