Salesforce Certified Platform Integration Architect Practice Test 4
Question 1 of 60
A customer requires real-time updates, but the external system only supports batch imports. What is the best trade-off solution?
Explanation: Option 1 is correct because middleware can bridge the gap by capturing real-time events and storing them until batch processing is triggered, respecting the limitations of the external system. Option 2 is too drastic and expensive. Option 3 delays value delivery. Option 4 misuses Platform Events, which are meant for real-time delivery and may expire before batch processing.
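As a rough illustration of the middleware pattern the explanation describes, the following Python sketch captures events as they arrive and flushes them on the batch schedule. The in-memory queue, the JSON file drop, and the hourly interval are hypothetical stand-ins for whatever the middleware and external system actually use.

```python
import json
import queue
import threading
import time

event_queue: "queue.Queue[dict]" = queue.Queue()

def capture_event(event: dict) -> None:
    """Called in real time as events arrive (e.g., from a webhook)."""
    event_queue.put(event)

def flush_batch() -> None:
    """Drain everything captured so far and hand it to the batch import."""
    batch = []
    while not event_queue.empty():
        batch.append(event_queue.get())
    if batch:
        # Placeholder for the external system's batch import (e.g., a file drop).
        with open("batch_import.json", "w") as f:
            json.dump(batch, f)

def run_scheduler(interval_seconds: int = 3600) -> None:
    """Trigger the batch import on the external system's schedule."""
    while True:
        time.sleep(interval_seconds)
        flush_batch()

threading.Thread(target=run_scheduler, daemon=True).start()
```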
Question 2 of 60
A solution requires syncing customer data from an on-premise system into Salesforce every hour. Which integration component is best suited?
Explanation: Option 2 is correct because middleware can schedule hourly jobs, handle data transformation, and manage errors when syncing between on-prem and Salesforce. Option 1 is good for internal jobs, not cross-system. Option 3 is event-driven, not scheduled. Option 4 is triggered by record changes in Salesforce, not external events.
Question 3 of 60
You need to expose Salesforce data to a third-party reporting system. Which component should be used?
Explanation: Option 3 is correct because the REST API is ideal for exposing Salesforce data securely to external systems in a structured and scalable format. Option 1 helps manage credentials but doesn't expose data. Option 2 is used to call external services, not to expose data. Option 4 is designed for receiving data, not for exposing it.
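A minimal sketch of the call a reporting system might make against the REST API's query endpoint, assuming an OAuth access token has already been obtained; the instance URL, token, and SOQL query are placeholders.

```python
import requests

INSTANCE_URL = "https://yourInstance.my.salesforce.com"  # hypothetical org
ACCESS_TOKEN = "<access token from OAuth>"

# Run a SOQL query through the REST API and consume the JSON result.
response = requests.get(
    f"{INSTANCE_URL}/services/data/v59.0/query",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    params={"q": "SELECT Id, Name, AnnualRevenue FROM Account LIMIT 100"},
)
response.raise_for_status()
for record in response.json()["records"]:
    print(record["Id"], record["Name"])
```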
Question 4 of 60
A client wants to visualize external ERP data inside Salesforce without persisting it. What should you recommend?
Explanation: Option 2 is correct because Salesforce Connect allows you to view external data in real-time without storing it in Salesforce, which reduces storage costs. Option 1 requires writing logic and stores data, which violates the requirement. Option 3 involves data duplication. Option 4 creates storage bloat and delays.
Question 5 of 60
What integration component helps simplify authentication and endpoint configuration in Salesforce?
Explanation: Option 2 is correct because Named Credentials securely manage authentication and endpoint details, reducing hardcoding and improving maintainability. Option 1 only supports basic outbound use. Option 3 is for configuration data, not integrations. Option 4 is a UI tool and not relevant to endpoint integration.
Question 6 of 60
Which integration pattern supports a use case where Salesforce needs to send data to another system and receive a reply immediately?
Explanation: Option 1 is correct because this pattern ensures Salesforce sends data and waits for a response, making it suitable for situations requiring immediate confirmation or follow-up actions. Option 2 does not expect a reply. Option 3 is not real-time. Option 4 is asynchronous and best for publish/subscribe use cases.
Question 7 of 60
A retail company needs Salesforce to receive order updates from their order management system without polling. Which integration pattern should they use?
Explanation: Option 1 is correct because Platform Events enable the external system to push data to Salesforce asynchronously, removing the need for polling and improving efficiency. Option 2 would delay updates. Option 3 is unrelated. Option 4 is Salesforce exposing endpoints, not subscribing to external pushes.
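A sketch of the push side: an external order management system can publish a platform event with an ordinary REST sObject POST, and Salesforce then delivers it to every subscriber (flows, Apex triggers, CometD clients). The event Order_Update__e and its fields are hypothetical; a matching platform event definition would have to exist in the org.

```python
import requests

INSTANCE_URL = "https://yourInstance.my.salesforce.com"  # hypothetical org
ACCESS_TOKEN = "<access token from OAuth>"

# Publishing a platform event is a plain sObject POST.
resp = requests.post(
    f"{INSTANCE_URL}/services/data/v59.0/sobjects/Order_Update__e",
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "Content-Type": "application/json",
    },
    json={"Order_Number__c": "ORD-1042", "Status__c": "Shipped"},
)
resp.raise_for_status()
print(resp.json())  # {"id": "<event id>", "success": true, "errors": []}
```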
Question 8 of 60
Which pattern is ideal for Salesforce to call an external service without waiting for a response?
Explanation: Option 1 is correct because Fire & Forget allows Salesforce to initiate a call without blocking for a response, ideal for non-critical or delayed processing. Option 2 would hold execution. Option 3 is scheduled and not event-driven. Option 4 is not outbound in nature.
Question 9 of 60
A customer support team needs real-time updates from Salesforce to a chat platform. Which pattern suits this need?
Explanation: Option 3 is correct because Platform Events provide asynchronous, real-time communication, perfect for pushing updates from Salesforce to external apps like chat platforms. Option 1 is for UI reactions. Option 2 lacks immediacy. Option 4 is overkill for one-way communication.
Question 10 of 60
What pattern should be used when Salesforce must receive real-time requests from a third-party app?
Explanation: Option 1 is correct because Remote Call-In allows external apps to make direct calls into Salesforce APIs, enabling real-time data push. Option 2 is outbound only. Option 3 relates to frontend behavior. Option 4 does not support real-time.
Question 11 of 60
A utility provider must upload large data sets from smart meters to Salesforce nightly. What pattern is ideal?
Explanation: Option 1 is correct because batch data synchronization handles large volumes efficiently and is suitable for non-real-time data such as nightly uploads. Option 2 lacks monitoring and reliability for large volumes. Option 3 is unrelated to data loads. Option 4 is designed for smaller, near real-time events.
Question 12 of 60
Which pattern is best when Salesforce must notify multiple downstream systems simultaneously about a data change?
Explanation: Option 1 is correct because the publish/subscribe model allows Salesforce to publish events that multiple systems can subscribe to and react to independently and in near real-time. Option 2 is limited to frontend updates. Option 3 is not push-based. Option 4 is not real-time and not multi-system friendly.
Question 13 of 60
What integration pattern allows Salesforce to process external data without storing it?
Explanation: Option 1 is correct because Salesforce Connect allows viewing of external data via OData without persisting it in Salesforce, reducing storage and sync requirements. Option 2 is for inbound real-time event processing. Options 3 and 4 involve writing data into Salesforce.
Question 14 of 60
An enterprise customer needs to orchestrate complex business logic between multiple systems including Salesforce. Which pattern should be used?
Explanation: Option 1 is correct because middleware can orchestrate logic, handle retries, error handling, and data transformation across systems. Apex-only logic can't handle cross-system orchestration well. Email alerts are not suitable for automation logic. Sharing rules control access, not process logic.
Question 15 of 60
Which pattern is most appropriate for a use case where Salesforce data must be exposed securely via APIs to external partners?
Explanation: Option 1 is correct because Remote Call-In allows secure, authenticated access to Salesforce via REST/SOAP APIs for external systems. Option 2 is outbound only. Option 3 tracks changes but does not expose data directly. Option 4 is internal processing.
Question 16 of 60
What is the most appropriate integration component to use when an external system must trigger a Salesforce process in real-time?
Explanation: Option 4 is correct because Salesforce API endpoints (REST/SOAP) allow external systems to initiate actions in real-time, offering flexibility and synchronous responses. Option 1 is more suitable for event-driven use cases within Salesforce. Option 2 can send data but does not support externally triggered real-time communication. Option 3 is time-based and not event-driven.
Question 17 of 60
A company's firewall restricts external incoming traffic. Which trade-off solution enables Salesforce to interact with their on-premise database?
Explanation: Option 2 is correct because middleware running inside the firewall can safely initiate outbound calls to Salesforce, complying with the firewall policy. Option 1 incorrectly assumes VPN calls can be made via outbound messaging. Option 3 is often blocked or complex to maintain. Option 4 is risky and violates security best practices.
Question 18 of 60
A government customer mandates on-premise data storage. What is a valid limitation of using Salesforce in this scenario?
Explanation: Option 1 is correct because government regulations may restrict data from being stored outside their borders, posing a limitation to cloud-based platforms like Salesforce. Option 2 is incorrect; Salesforce does support encryption. Option 3 is false; on-prem apps can consume Salesforce APIs. Option 4 is misleading; firewalls can be configured for Salesforce use.
Question 19 of 60
What is a limitation when using Salesforce Connect with OData for real-time data access?
Explanation: Option 4 is correct because Salesforce Connect reads data in real-time but cannot trigger automation like Flows or Triggers based on external object changes. Option 1 is incorrect; write-back is supported with certain connectors. Option 2 contradicts the purpose of Salesforce Connect. Option 3 is irrelevant to external object usage.
Question 20 of 60
A retail client wants to sync real-time inventory but has a hard limit of 1000 API calls/hour. What is the best constraint-based solution?
Explanation: Option 1 is correct because Platform Events allow for consolidation and reduce overall API call volume while still maintaining near real-time sync. Option 2 is not always possible and doesn't solve architectural inefficiency. Option 3 contradicts the real-time requirement. Option 4 is a fallback and ignores the business need.
Question 21 of 60
A customer has strict latency requirements but also wants to reuse legacy batch interfaces. What trade-off should be considered?
Explanation: Option 1 is correct because middleware can queue incoming real-time requests and respond immediately while deferring the actual batch execution, simulating low latency. Option 2 is costly and time-consuming. Option 3 limits functionality and access. Option 4 is inefficient and defeats the purpose of integration.
Question 22 of 60
A system cannot handle high transaction volumes during peak hours. What is a constraint-aware integration design?
Explanation: Option 1 is correct because it respects the system's limits and smooths out demand using a queuing model. Option 2 assumes infrastructure can scale, which may not be the case. Option 3 compromises data integrity. Option 4 reduces user experience and leads to errors.
Question 23 of 60
A partner system only supports FTP uploads, and the business wants near real-time integration. What is a trade-off approach?
Explanation: Option 1 is correct because this allows Salesforce to process data in real-time, then transform and export data in a format compatible with FTP constraints. Option 2 is not always feasible or within your control. Option 3 compromises data quality. Option 4 is protocol-specific and may not meet FTP format requirements.
Question 24 of 60
What is a constraint when sending large datasets through Salesforce REST API?
Explanation: Option 1 is correct because Salesforce REST APIs have payload size limits (e.g., 6 MB for synchronous calls), and large datasets may need to be chunked. Option 2 is false; REST APIs can be secured. Option 3 is inaccurate; REST supports JSON/XML. Option 4 is incorrect; Apex can invoke REST endpoints.
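To work within those payload limits, one common approach is chunking through the sObject Collections endpoint, which accepts up to 200 records per call and keeps each request well under the synchronous ceiling. A sketch, with placeholder instance URL and token:

```python
import requests

INSTANCE_URL = "https://yourInstance.my.salesforce.com"  # hypothetical org
ACCESS_TOKEN = "<access token from OAuth>"
URL = f"{INSTANCE_URL}/services/data/v59.0/composite/sobjects"

def chunked(records, size=200):
    """Yield successive fixed-size slices of the record list."""
    for i in range(0, len(records), size):
        yield records[i:i + size]

records = [
    {"attributes": {"type": "Contact"}, "LastName": f"Smith {n}"}
    for n in range(1000)
]

# Send the dataset as a series of <=200-record collection calls.
for batch in chunked(records):
    resp = requests.post(
        URL,
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        json={"allOrNone": False, "records": batch},
    )
    resp.raise_for_status()
```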
Question 25 of 60
A developer needs to integrate a mobile app that performs CRUD operations on Salesforce records. Which API is most appropriate?
Explanation: Option 1 is correct because the REST API is lightweight and ideal for mobile apps that require CRUD operations with minimal overhead. Option 2 is designed for large data sets. Option 3 is used for notifications, not transactions. Option 4 is focused on configuration, not data.
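A minimal sketch of the CRUD calls a mobile client might issue against the REST sObject endpoints; the instance URL and token are placeholders, and error handling is omitted for brevity.

```python
import requests

BASE = "https://yourInstance.my.salesforce.com/services/data/v59.0"  # hypothetical
HEADERS = {"Authorization": "Bearer <access token from OAuth>"}

# Create
resp = requests.post(f"{BASE}/sobjects/Contact", headers=HEADERS,
                     json={"FirstName": "Ada", "LastName": "Lovelace"})
contact_id = resp.json()["id"]

# Update (PATCH returns 204 No Content on success)
requests.patch(f"{BASE}/sobjects/Contact/{contact_id}",
               headers=HEADERS, json={"Title": "Engineer"})

# Read
print(requests.get(f"{BASE}/sobjects/Contact/{contact_id}",
                   headers=HEADERS).json())

# Delete
requests.delete(f"{BASE}/sobjects/Contact/{contact_id}", headers=HEADERS)
```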
Question 26 of 60
Which API is best suited for performing large-scale inserts and updates of 1 million records?
Explanation: Option 2 is correct because Bulk API is specifically designed for handling large data volumes efficiently in asynchronous batches. Option 1 handles configuration, not data. Option 3 is not suitable for such volumes due to limits. Option 4 is focused on developer tools and metadata.
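The Bulk API 2.0 job lifecycle is short: create a job, upload CSV, then close it so Salesforce processes the batches asynchronously. A sketch with placeholder credentials and a trivial two-row CSV:

```python
import requests

BASE = "https://yourInstance.my.salesforce.com/services/data/v59.0"  # hypothetical
HEADERS = {"Authorization": "Bearer <access token from OAuth>"}

# 1. Create an ingest job.
job = requests.post(f"{BASE}/jobs/ingest", headers=HEADERS,
                    json={"object": "Account", "operation": "insert"}).json()

# 2. Upload the CSV payload (Salesforce splits it into batches server-side).
csv_data = "Name,Industry\nAcme,Manufacturing\nGlobex,Energy\n"
requests.put(f"{BASE}/jobs/ingest/{job['id']}/batches",
             headers={**HEADERS, "Content-Type": "text/csv"},
             data=csv_data)

# 3. Mark the upload complete; processing continues asynchronously.
requests.patch(f"{BASE}/jobs/ingest/{job['id']}", headers=HEADERS,
               json={"state": "UploadComplete"})
```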
Question 27 of 60
A system must receive notifications about data changes in Salesforce. Which API should be used?
Explanation: Option 3 is correct because the Streaming API allows clients to subscribe to events triggered by DML operations in Salesforce, enabling push-based communication. Options 1 and 2 do not provide real-time notifications. Option 4 is used for data load, not notifications.
Question 28 of 60
Which API would you use to deploy new custom objects and fields?
Explanation: Option 1 is correct because Metadata API supports deployment of metadata components like custom objects and fields across environments. REST API is for data operations. Streaming API is event-driven. Chatter API is for collaboration.
Question 29 of 60
What API is used to manage user permissions, roles, and profiles?
Explanation: Option 3 is correct because Tooling API provides fine-grained access to setup and developer-related data such as profiles and permissions. Metadata API also applies, but Tooling offers faster, more flexible access for development tools. Option 1 is more for deployments. Options 2 and 4 are incorrect for this purpose.
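The Tooling API follows the same query pattern as the REST API, just under the /tooling path. This sketch lists Apex classes as one example of setup data; the same pattern applies to other setup entities, and the instance URL and token are placeholders.

```python
import requests

BASE = "https://yourInstance.my.salesforce.com/services/data/v59.0"  # hypothetical
HEADERS = {"Authorization": "Bearer <access token from OAuth>"}

# Query setup metadata through the Tooling API's query endpoint.
resp = requests.get(
    f"{BASE}/tooling/query",
    headers=HEADERS,
    params={"q": "SELECT Id, Name FROM ApexClass ORDER BY Name"},
)
resp.raise_for_status()
for rec in resp.json()["records"]:
    print(rec["Name"])
```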
Question 30 of 60
Which API supports importing large CSV files asynchronously into Salesforce?
Explanation: Option 1 is correct because Bulk API handles large data imports via CSV in an asynchronous and scalable way. REST is not suitable for file-based batch loads. Chatter and Streaming APIs are not data import tools.
Question 31 of 60
In what scenario is the Composite API most appropriate?
Explanation: Composite API enables sending multiple related requests in a single call, preserving execution order and reducing API call usage. It's ideal for scenarios where a series of dependent operations must execute atomically. Option 1 is better suited for Bulk API. Option 2 doesn't require Composite API's complexity. Option 4 would be better handled with platform events or scheduled jobs.
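A sketch of that dependency chaining: two inserts in one composite call, with the second referencing the first record's new Id via @{refAccount.id}, and allOrNone making the pair atomic. Instance URL, token, and record values are placeholders.

```python
import requests

BASE = "https://yourInstance.my.salesforce.com/services/data/v59.0"  # hypothetical
HEADERS = {"Authorization": "Bearer <access token from OAuth>"}

# Two dependent inserts in one round trip: the Contact picks up the new
# Account's Id through the reference expression.
payload = {
    "allOrNone": True,
    "compositeRequest": [
        {
            "method": "POST",
            "url": "/services/data/v59.0/sobjects/Account",
            "referenceId": "refAccount",
            "body": {"Name": "Acme Corp"},
        },
        {
            "method": "POST",
            "url": "/services/data/v59.0/sobjects/Contact",
            "referenceId": "refContact",
            "body": {"LastName": "Coyote", "AccountId": "@{refAccount.id}"},
        },
    ],
}
resp = requests.post(f"{BASE}/composite", headers=HEADERS, json=payload)
resp.raise_for_status()
```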
Question 32 of 60
When designing for a mobile app requiring offline capability, what is the primary constraint to consider for data sync with Salesforce?
Explanation: Option 4 is correct because data sync for offline use must consider sync conflict resolution strategies, which is a core technical constraint. Option 1 affects performance but is not as critical as handling update collisions. Option 2 is unrelated to offline capability. Option 3 misinterprets UI constraints as technical ones.
Question 33 of 60
A global organization wants to ensure minimal latency for integrations between Salesforce and external systems hosted in different continents. Which of the following should be prioritized?
Explanation: Option 4 is correct because using regionalized middleware helps optimize latency by routing traffic locally and offering failover mechanisms. Option 1 helps for static assets, not transactional integrations. Option 2 would increase latency and risk failures due to timeouts. Option 3 is inefficient and adds unnecessary complexity.
Question 34 of 60
Which Salesforce API is best suited for large data volume extraction with minimal system impact?
Explanation: Option 3 is correct because Bulk API is specifically designed to handle large data volumes efficiently by processing them in batches, minimizing impact on system performance. Option 1 is used for real-time events and is not optimized for bulk. Option 2 is more suitable for small-volume, synchronous transactions. Option 4 manages configuration data and not transactional records.
Question 35 of 60
For a use case where data must be updated in real-time and with guaranteed delivery, which API would be most appropriate?
Explanation: Option 2 is correct because SOAP API supports transactional operations and is designed for enterprise-level reliability and guaranteed delivery when configured correctly. Option 1 is for configuration data. Option 3 is event-driven and does not provide guaranteed delivery without replay mechanisms. Option 4 is batch-oriented and not suitable for real-time needs.
Question 36 of 60
A Salesforce Architect is designing a solution that must push changes in Account records to an external system in near real-time. Which API is best suited?
Explanation: Option 4 is correct because Platform Events are designed for pushing data asynchronously and in near real-time, ideal for triggering changes to external systems. Option 1 is for request-response APIs and is not push-based. Option 2 is for high-volume operations but not event-driven. Option 3 is limited to tooling data, not transactional.
Question 37 of 60
A company needs to monitor and act upon changes in high-volume custom object records. Which API is most efficient for this use case?
Explanation: Option 1 is correct because Change Data Capture is built to notify subscribers about record-level changes in high-volume objects, making it ideal for monitoring and reacting to data changes. Option 2 lacks scalability for high-volume triggers. Option 3 is used for metadata changes. Option 4 is used for analytics, not real-time data sync.
Question 38 of 60
A developer needs to retrieve metadata about custom fields and objects. Which API should be used?
Explanation: Option 1 is correct because the Metadata API allows access to the definitions of objects, fields, layouts, etc., necessary for deploying and retrieving configurations. REST and Bulk APIs deal with data, not metadata. Streaming API is irrelevant to metadata operations.
Question 39 of 60
A project requires integrating Salesforce with a legacy system using XML messaging and strong contract enforcement. Which API best supports this need?
Explanation: Option 3 is correct because SOAP API supports strongly typed contracts and XML-based messaging, making it compatible with older systems that require rigid schemas. Option 1 is lightweight and not contract-driven. Option 2 is batch-based and does not support complex message contracts. Option 4 is event-based and not suitable for strict schema validation.
Question 40 of 60
What trade-off must be considered when using real-time integration over batch processing in Salesforce?
Explanation: Real-time integration reduces data latency, providing up-to-date information, but consumes more resources, such as API call allocations and processing power. Option 2 is incorrect, as real-time requires more complexity in error handling and concurrency. Option 3 is misleading because batch provides more consistent timing but less flexibility. Option 4 is the opposite: real-time favors speed but may risk reliability without proper error handling.
Question 41 of 60
What is a limitation of using Platform Events for integration?
Explanation: Platform Events support near real-time and asynchronous processing. However, they are not stored permanently; retention is short-term (usually 24 hours). Option 1 is incorrect because platform events are built for high concurrency. Option 2 is incorrect because they are specifically designed for async processing. Option 4 is unrelated; Platform Events work alongside Salesforce objects.
Question 42 of 60
Which factor should be prioritized when selecting Bulk API 2.0 over REST API for data ingestion?
Explanation: Bulk API 2.0 is optimized for high-volume asynchronous operations like importing large datasets. REST API is better suited for real-time, lower-volume, or interactive use cases. Metadata manipulation is not a typical use for either of these APIs. Choosing the right API depends on volume, timing, and use case.
Question 43 of 60
When integrating with external systems, what trade-off does caching introduce?
Explanation: Caching improves system performance by reducing repeated calls to external systems but introduces the possibility of using outdated or stale data. It is a common trade-off in systems where performance is critical. Option 1 is incorrect because caching can compromise accuracy. Option 2 is incorrect as caching inherently goes against real-time sync. Option 4 is unrelated to caching.
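The trade-off can be made concrete with a small time-to-live cache: a longer TTL means fewer callouts but staler reads. Everything here is illustrative; fetch_from_external_system is a hypothetical stand-in for the real callout.

```python
import time

def fetch_from_external_system(currency: str) -> float:
    # Stand-in for the real callout (e.g., an HTTP request to a rate service).
    return 1.0

_cache: dict[str, tuple[float, float]] = {}
TTL_SECONDS = 300  # the trade-off knob: longer TTL = fewer callouts, staler data

def get_exchange_rate(currency: str) -> float:
    """Return a cached value if still fresh; otherwise call the external system."""
    now = time.time()
    hit = _cache.get(currency)
    if hit and now - hit[0] < TTL_SECONDS:
        return hit[1]  # fast path, but possibly stale
    value = fetch_from_external_system(currency)
    _cache[currency] = (now, value)
    return value
```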
Question 44 of 60
Why might REST API be chosen over SOAP API in modern Salesforce integrations?
Explanation: REST API uses standard HTTP verbs (GET, POST, etc.) and is lightweight, making it easier to work with for web and mobile apps. SOAP is heavier and requires WSDLs, making it better suited for legacy enterprise integrations. Option 1 is wrong because SOAP is XML-based, not JSON-friendly. Option 4 is incorrect; Apex supports both.
Question 45 of 60
What should be considered when using Named Credentials with an external REST service?
Explanation: Named Credentials securely store authentication and endpoint details, simplifying external callout configurations. They do not support runtime endpoint switching or eliminate authentication. Throttling must be managed separately. This makes them ideal for secure, centralized management of integration secrets.
Question 46 of 60
A company needs to synchronize customer data from an external ERP to Salesforce in near real-time. Which limitation is most relevant when designing this solution?
Explanation: Option 4 is correct because near real-time integration is highly sensitive to network latency and reliability, which directly affect response time and user experience. Option 1, although important, is more relevant to bulk or batch updates. Option 2 is irrelevant if the ERP supports real-time APIs. Option 3 might be a factor but not a technical limitation impacting performance.
Question 47 of 60
When should the Streaming API be used in Salesforce integration?
Explanation: Streaming API is designed to push data changes in real-time to clients via PushTopic or Change Data Capture. It is ideal for updating external systems or dashboards without polling. It is not suitable for bulk data loads or analytics. Two-way sync typically requires additional logic beyond Streaming API.
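For orientation only, here is a bare-bones long-polling subscriber using the Bayeux (CometD) handshake/subscribe/connect sequence against a hypothetical org. A production client should use a full CometD library with replay-id handling and reconnection logic rather than this loop; the instance URL, token, and channel are placeholders.

```python
import requests

INSTANCE_URL = "https://yourInstance.my.salesforce.com"  # hypothetical org
COMETD = f"{INSTANCE_URL}/cometd/59.0"

session = requests.Session()
session.headers.update({"Authorization": "Bearer <access token from OAuth>"})

# 1. Handshake to obtain a clientId.
hs = session.post(COMETD, json=[{
    "channel": "/meta/handshake", "version": "1.0",
    "supportedConnectionTypes": ["long-polling"],
}]).json()[0]
client_id = hs["clientId"]

# 2. Subscribe to a Change Data Capture channel.
session.post(COMETD, json=[{
    "channel": "/meta/subscribe", "clientId": client_id,
    "subscription": "/data/AccountChangeEvent",
}])

# 3. Long-poll: each /meta/connect response carries any pushed events.
while True:
    for msg in session.post(COMETD, json=[{
        "channel": "/meta/connect", "clientId": client_id,
        "connectionType": "long-polling",
    }]).json():
        if msg.get("channel") == "/data/AccountChangeEvent":
            print(msg["data"])
```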
Question 48 of 60
What API would best support mobile applications that require minimal payload and fast response times?
Explanation: REST API is suited for lightweight interactions and quick data retrieval, making it ideal for mobile apps. Bulk API is too heavy and asynchronous. Metadata and Tooling APIs are specialized and not typically used by mobile clients. REST's statelessness and performance advantages make it preferable in mobile contexts.
Question 49 of 60
Which is a primary benefit of using External Services in Salesforce integrations?
Explanation: External Services let you declaratively connect to APIs described using OpenAPI, exposing them as invocable actions in Flow. They don't eliminate the need for Apex in all use cases. They also don't offer persistent storage or enforce OAuth; those are separate concerns. Their main benefit is enabling integration without code.
Question 50 of 60
50. Question
What is a secure method for authenticating outbound calls from Salesforce to an external service?
Correct
Option 3 is correct because Named Credentials with OAuth manage token exchange securely and automatically, reducing manual security handling. Option 1 is insecure and exposes sensitive information in code. Option 2 is more secure than hardcoding but still inferior to OAuth-based Named Credentials. Option 4 lacks authentication altogether, posing a major security risk.
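For contrast, the sketch below shows the manual OAuth client-credentials exchange that a Named Credential performs on your behalf; every URL and client value is a placeholder, and in Apex the whole block collapses to a callout against the Named Credential endpoint:

```python
# Sketch of the token exchange a Named Credential automates: fetch a
# token, then call the API with it. All URLs and client values are
# placeholders; secrets belong in a secret store, which is exactly the
# handling Named Credentials remove from your code.
import requests

token = requests.post(
    "https://auth.example.com/oauth/token",   # hypothetical token endpoint
    data={
        "grant_type": "client_credentials",
        "client_id": "my-client-id",
        "client_secret": "from-a-secret-manager-never-hardcoded",
    },
    timeout=10,
).json()["access_token"]

resp = requests.get(
    "https://api.example.com/orders/42",      # hypothetical protected API
    headers={"Authorization": f"Bearer {token}"},
    timeout=10,
)
print(resp.status_code)
```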
Question 51 of 60
51. Question
A subscription app must ingest 8M records nightly within a four-hour window, with no user-facing latency requirements. Transformations are simple, and ordering is not important. Which approach best meets the throughput and window constraints?
Correct
Bulk API 2.0 in parallel mode is designed for high-volume, time-bounded loads and removes ordering constraints while maximizing throughput. It handles chunking server-side and supports automatic retry of failed batches, which keeps the pipeline within the four-hour window. The lack of user-facing latency means synchronous semantics are unnecessary, so asynchronous bulk fits perfectly. Composite REST is optimized for small transactional sets, not millions of rows, and client loops will throttle the ingest rate badly. Per-record synchronous REST inserts would hit API limits and dramatically increase elapsed time with network overhead. Platform Events target event-driven near-real-time delivery and subscriber processing, but eight million events would stress retention, replay, and consumers. Bulk provides better error handling with partial success, which simplifies reruns. It also offers monitoring endpoints to track job progress against the SLA. Finally, parallelization can be tuned to balance lock contention with speed.
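A minimal sketch of the Bulk API 2.0 ingest flow described above, assuming an existing OAuth token and a prepared CSV extract; the object and field names are hypothetical, and a file this large may need splitting across jobs:

```python
# Sketch of a Bulk API 2.0 ingest: create the job, stream the CSV, then
# mark the upload complete so the platform chunks and processes it
# asynchronously. Token, instance, object, and fields are placeholders.
import requests

INSTANCE = "https://example.my.salesforce.com"
TOKEN = "00D...access_token"
BASE = f"{INSTANCE}/services/data/v59.0/jobs/ingest"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

job = requests.post(BASE, headers=HEADERS, json={
    "object": "Order__c",                      # hypothetical custom object
    "operation": "upsert",
    "externalIdFieldName": "External_Id__c",   # upsert keeps reruns idempotent
    "contentType": "CSV",
}).json()

with open("orders.csv", "rb") as csv_file:     # the nightly extract
    requests.put(
        f"{BASE}/{job['id']}/batches",
        headers={**HEADERS, "Content-Type": "text/csv"},
        data=csv_file,
    )

# Signal that the upload is done; server-side parallel processing begins.
requests.patch(f"{BASE}/{job['id']}", headers=HEADERS,
               json={"state": "UploadComplete"})
```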
Question 52 of 60
52. Question
A mobile checkout flow must show inventory availability in under 250 ms p95 while traffic peaks at 2k requests/sec. Inventory changes originate from an ERP every few minutes. What integration pattern should be used to hit the latency target?
Correct
Meeting a 250 ms p95 at 2k RPS requires data to be local and precomputed; a cache updated asynchronously is the only practical way to hit that latency. ERP systems rarely deliver sub-250 ms response at that concurrency, and network hops add variability. By pushing periodic changes to a cache or using a CDN/edge cache, the checkout path becomes a fast read. Synchronous calls to the ERP (options 2 and 4) create a per-request dependency on a slower system, making tail latency uncontrollable. Middleware aggregation (option 3) is still synchronous and adds another hop, which worsens latency and limits RPS. An asynchronous refresh (CDC, events, or scheduled jobs) aligns with the “changes every few minutes” cadence and avoids over-fetching. This approach also reduces API costs and isolates failures from the user path. Cache expiration can be tuned to business tolerance for slight staleness. Observability on cache hit rates ensures the SLA is met.
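The sketch below illustrates the shape of the pattern with a plain in-process cache and a background refresher; a production system would use Redis or an edge cache, and the ERP feed here is stubbed:

```python
# Sketch of the cache-backed read path: checkout reads a local snapshot
# while a background refresher applies the ERP's periodic changes.
# Names and the ERP feed are illustrative stand-ins.
import threading
import time

inventory = {}          # sku -> available quantity, refreshed asynchronously
lock = threading.Lock()

def refresh_from_erp():
    """Background loop: pull ERP deltas every few minutes (stubbed here)."""
    while True:
        delta = {"SKU-1": 42, "SKU-2": 0}    # stand-in for the ERP feed
        with lock:
            inventory.update(delta)
        time.sleep(180)                      # matches the ERP change cadence

def check_availability(sku: str) -> bool:
    """Hot path: a dictionary lookup, comfortably under 250 ms."""
    with lock:
        return inventory.get(sku, 0) > 0

threading.Thread(target=refresh_from_erp, daemon=True).start()
```

The refresh interval is the tunable staleness knob: shorten it if the business tolerance for stale availability tightens.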
Question 53 of 60
53. Question
A partner uploads order files every hour (~100k rows). Business needs orders available to agents within five minutes after the file arrives. Occasional data-quality rejects must be reviewed. What design best balances timeliness and reprocessing needs?
Correct
A file-arrival trigger that immediately launches a Bulk upsert aligns with the five-minute availability SLA and the hourly cadence. Bulk API provides partial success and failure results, which can be stored in a dead-letter queue or error object for review and reprocessing. Nightly processing violates the timeliness requirement and delays visibility. Per-record synchronous inserts would be slow, hit API limits, and be fragile in the face of transient errors. Platform Events at 100k/hour may be feasible but adds complexity with durable subscriptions and replay while still requiring row-by-row handling; Bulk is simpler and purpose-built. Bulk monitoring allows quick detection of delays. Upsert ensures idempotency to avoid duplicates on retry. The dead-letter mechanism provides an explicit workflow for data quality remediation. The pattern is scalable as volumes grow.
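As an illustration of the reject-review step, this sketch pulls the failed rows of a completed Bulk API 2.0 ingest job into a dead-letter file; the job id, token, and file paths are placeholders:

```python
# Sketch of dead-lettering rejects: after the triggered Bulk upsert
# finishes, fetch the failed rows for review and reprocessing.
# Job id, token, and paths are placeholders.
import requests

INSTANCE = "https://example.my.salesforce.com"
TOKEN = "00D...access_token"
JOB_ID = "750...bulk_job_id"   # returned when the file-arrival trigger created the job

failed = requests.get(
    f"{INSTANCE}/services/data/v59.0/jobs/ingest/{JOB_ID}/failedResults/",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
failed.raise_for_status()

# Each failed row carries an sf__Error column explaining the reject.
with open(f"dead_letter_{JOB_ID}.csv", "w", encoding="utf-8") as dlq:
    dlq.write(failed.text)
```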
Question 54 of 60
54. Question
A call center needs a “customer summary” panel to load under 500 ms p95. The summary aggregates Salesforce data plus two external systems. Each external service responds in 150–400 ms. What pattern best meets the latency target reliably?
Correct
A middleware fan-out calling external services in parallel reduces total latency to roughly the slowest service plus small overhead, keeping p95 under 500 ms when combined with caching. Client-side serial calls accumulate latencies (150–400 ms each), making the combined time exceed the target routinely. Nightly snapshots are too stale for an interactive summary that must reflect recent changes. Conditional fallback on cache misses can help, but without centralized aggregation it still requires the UI to manage complexity and error handling. Middleware centralizes timeouts, retries with jitter, and circuit breakers, improving tail behavior. Caching of stable parts further reduces average latency. The approach also simplifies client code and reduces network chatter. Observability at the middleware layer enables precise SLO tracking.
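A minimal sketch of the parallel fan-out, assuming the aiohttp library and two hypothetical service URLs; the hard per-call timeout keeps total latency near the slowest service rather than the sum:

```python
# Sketch of middleware fan-out: call both external services concurrently
# with a hard timeout so tail latency stays bounded. URLs are hypothetical
# and aiohttp is assumed to be available.
import asyncio
import aiohttp

SERVICES = [
    "https://billing.example.com/summary",   # ~150-400 ms
    "https://support.example.com/summary",   # ~150-400 ms
]

async def fetch(session, url):
    async with session.get(url, timeout=aiohttp.ClientTimeout(total=0.45)) as resp:
        return await resp.json()

async def customer_summary():
    async with aiohttp.ClientSession() as session:
        # gather() runs the calls in parallel; return_exceptions lets one
        # slow or failed service degrade gracefully instead of failing the panel.
        results = await asyncio.gather(
            *(fetch(session, url) for url in SERVICES),
            return_exceptions=True,
        )
    return {"external": results}   # merged with Salesforce data upstream

print(asyncio.run(customer_summary()))
```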
Question 55 of 60
55. Question
An integration needs to process 50 updates per second from IoT devices into Salesforce and provide device status to dashboards within seconds. The payloads are small, and ordering per device matters. Which solution fits both throughput and ordering needs?
Correct
A queue or Pub/Sub with partitioning by device achieves per-key ordering while absorbing bursty traffic, then a subscriber can update Salesforce quickly. This design matches the 50 updates/sec load and preserves order constraints. Direct REST per device would suffer API limit pressure and lacks backpressure, risking drops under bursts. Nightly batches violate the “within seconds” visibility requirement. Hourly CSV bulk uploads likewise fail the timeliness need and complicate ordering. Partitioned streams allow parallelism across devices while serializing per device. Consumers can scale horizontally to meet throughput. The subscriber can use upsert to ensure idempotency. Monitoring lag on the stream ensures the “seconds” objective is met. Retries are handled off the user path, increasing resilience.
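The sketch below shows the partitioning idea with in-process queues: hashing the device id picks the partition, one worker drains each partition in order, and the Salesforce upsert is stubbed out:

```python
# Sketch of per-key ordering: same device id always maps to the same
# partition, so its events process in order while devices run in parallel.
# Queues and the upsert are illustrative stand-ins for a real broker.
import queue
import threading

NUM_PARTITIONS = 8
partitions = [queue.Queue() for _ in range(NUM_PARTITIONS)]

def publish(device_id: str, reading: dict):
    # Stable hash -> stable partition -> per-device ordering preserved.
    partitions[hash(device_id) % NUM_PARTITIONS].put((device_id, reading))

def upsert_device_status(device_id, reading):
    pass   # stand-in for an idempotent Salesforce upsert keyed on device id

def worker(part: queue.Queue):
    while True:
        device_id, reading = part.get()    # FIFO within the partition
        upsert_device_status(device_id, reading)

for part in partitions:
    threading.Thread(target=worker, args=(part,), daemon=True).start()
```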
Question 56 of 60
56. Question
A data science team needs to export 30M records from Salesforce to a data lake weekly. The window is eight hours, and the export must minimize API consumption. Which approach is most appropriate?
Correct
Bulk Query with PK chunking is optimized for extracting large datasets efficiently and within time windows. It reduces API calls by letting the platform handle chunking and parallelization server-side. REST SOQL paging would generate a massive number of calls and likely exceed limits and time windows. Report exports are intended for interactive analytics, not for industrial-scale data movement, and lack automation controls for this volume. CDC is for ongoing changes, not initial or periodic full extracts of 30M records. Bulk Query also simplifies resume logic on failures and supports selective field export. Writing directly to staged files aligns with downstream lake ingestion patterns. It also enables compression to reduce transfer time. Observability on job progress prevents window overruns.
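For illustration, a sketch of a Bulk API 2.0 query job with paged CSV downloads via the Sforce-Locator header; the token, instance, object, and fields are placeholders, chunking happens server-side, and job-completion polling is omitted for brevity:

```python
# Sketch of a bulk extract: one query job submission, then paged CSV
# downloads, instead of thousands of REST SOQL calls. All identifiers
# are illustrative.
import requests

INSTANCE = "https://example.my.salesforce.com"
TOKEN = "00D...access_token"
BASE = f"{INSTANCE}/services/data/v59.0/jobs/query"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

job = requests.post(BASE, headers=HEADERS, json={
    "operation": "query",
    "query": "SELECT Id, Status, Region__c FROM Case",  # illustrative fields
}).json()

# After the job reaches JobComplete, stream result pages to staged files.
locator, page = None, 0
while True:
    params = {"maxRecords": 500000}
    if locator:
        params["locator"] = locator
    resp = requests.get(f"{BASE}/{job['id']}/results",
                        headers=HEADERS, params=params)
    with open(f"extract_part{page}.csv", "w", encoding="utf-8") as out:
        out.write(resp.text)
    page += 1
    locator = resp.headers.get("Sforce-Locator")
    if not locator or locator == "null":    # "null" marks the final page
        break
```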
Question 57 of 60
57. Question
A B2B storefront must create orders in an external OMS. The UX requires an end-to-end response under 800 ms p95 at 200 RPS. The OMS create API takes 1.2–2.5 seconds. What pattern should the architect propose to meet UX goals?
Correct
An asynchronous acceptance pattern decouples the UX latency from OMS processing time by issuing a token immediately and completing creation off the user path. This meets the 800 ms p95 requirement and scales to 200 RPS by buffering. Synchronous calls to OMS would always miss the SLA because the OMS is slower than the target latency. Splitting into multiple synchronous calls increases call count and adds overhead, likely worsening latency. Retrying until latency drops is ineffective and amplifies load on the OMS, risking cascading failures. The token allows the client to poll or receive a webhook when the order is ready. A background worker can handle retries with backoff and idempotency. This also improves resiliency if the OMS is intermittently slow. Observability can correlate token to final order for traceability.
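A minimal sketch of the acceptance handshake, with an in-process queue standing in for a durable broker; the OMS call is stubbed, and the token doubles as the idempotency key:

```python
# Sketch of asynchronous acceptance: the storefront call returns a token
# in well under 800 ms while a background worker performs the 1.2-2.5 s
# OMS create off the user path. Queue and OMS are illustrative stubs.
import queue
import threading
import uuid

pending = queue.Queue()
order_status = {}   # token -> "accepted" | "created"

def accept_order(payload: dict) -> dict:
    """Fast path: validate, enqueue, acknowledge. No OMS call here."""
    token = str(uuid.uuid4())
    order_status[token] = "accepted"
    pending.put((token, payload))
    return {"token": token, "status": "accepted"}  # client polls or gets a webhook

def create_in_oms(payload, idempotency_key):
    pass   # stand-in for the slow OMS create API, retried with backoff

def oms_worker():
    while True:
        token, payload = pending.get()
        create_in_oms(payload, idempotency_key=token)  # safe to retry
        order_status[token] = "created"

threading.Thread(target=oms_worker, daemon=True).start()
```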
Question 58 of 60
58. Question
A marketing system needs near-real-time updates (≤5 seconds) when Leads change in Salesforce. The downstream system can handle 1,000 msgs/sec and requires at-least-once delivery with de-duplication keys. Which Salesforce mechanism best fits?
Correct
CDC is designed for change streams with near-real-time delivery and supports durable replay, which fits the ≤5-second requirement. At-least-once delivery is the default, and using record identifiers or custom keys enables de-duplication downstream. Nightly ETL clearly misses the timeliness requirement. Minute-by-minute polling increases load, risks missing changes at high scale, and introduces variable latency. Outbound messages are legacy, have delivery limitations, and lack the robust replay semantics and scale assurances the downstream needs. CDC integrates cleanly with event buses and can throttle or buffer to match subscriber capacity. Replay IDs provide recovery from consumer outages. CDC also includes changed fields, reducing payload size. Monitoring subscription lag ensures the SLO is maintained.
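The sketch below shows the subscriber-side de-duplication this explanation assumes, keyed on record id plus replay id; the transport (CometD or the Pub/Sub API) is omitted and the event shape is simplified:

```python
# Sketch of at-least-once handling on the consumer: the same change event
# may arrive twice, so each event is keyed on (record id, replay id) and
# repeats are skipped. Event shape is a simplified stand-in for a CDC
# payload with its ChangeEventHeader.
processed = set()   # in production: a shared, TTL'd store such as Redis

def handle_lead_change(event: dict):
    header = event["ChangeEventHeader"]
    dedup_key = (header["recordIds"][0], event["replayId"])
    if dedup_key in processed:
        return                       # duplicate delivery; side effects already applied
    processed.add(dedup_key)
    forward_to_marketing(header["recordIds"][0], header["changedFields"])

def forward_to_marketing(record_id, changed_fields):
    pass   # stand-in for the downstream call (capacity: 1,000 msgs/sec)
```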
Question 59 of 60
59. Question
A compliance dashboard must show aggregated KPIs over 200M historical case records with sub-second drill-downs by region. The data changes slowly (daily). What is the most appropriate approach to meet query latency?
Correct
An external analytics store (data warehouse or OLAP engine) with pre-aggregations is necessary for sub-second drill-downs on 200M rows. Pre-computed cubes or materialized views deliver the required latency and scale. Live SOQL against that volume will be seconds to minutes and risks timeouts, especially with complex filters. Reports refreshed every minute still require heavy queries and cannot guarantee sub-second interactive drill-down. Synchronous Apex computing aggregates per session would be extremely slow and consume CPU limits, failing to meet the UX target. Externalizing heavy analytics also isolates operational workloads from reporting spikes. Daily change cadence aligns with batch updates to the store. Caching at the dashboard or CDN further improves p95. This approach provides elasticity for peak hours.
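As a shape-of-the-solution sketch, the rollup below shows why drill-downs become hash lookups once aggregates are precomputed; in practice this lives in the warehouse as a materialized view or cube, not in Python:

```python
# Sketch of pre-aggregation: a daily batch rolls raw case rows up into
# per-(region, status) counts, so the dashboard drill-down never scans
# raw data. Field names are illustrative.
from collections import defaultdict

def build_region_cube(case_rows):
    """Daily batch: one pass over the extract, grouped by (region, status)."""
    cube = defaultdict(int)
    for row in case_rows:
        cube[(row["region"], row["status"])] += 1
    return dict(cube)

def drill_down(cube, region):
    """Dashboard path: sub-second because it only reads precomputed counts."""
    return {status: n for (r, status), n in cube.items() if r == region}

cube = build_region_cube([
    {"region": "EMEA", "status": "Closed"},
    {"region": "EMEA", "status": "Open"},
])
print(drill_down(cube, "EMEA"))   # {'Closed': 1, 'Open': 1}
```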
Question 60 of 60
60. Question
A payments team requires webhook processing from a PSP with bursts of 5k events/sec for short intervals. Each event must be acknowledged within two seconds, and idempotency is required. Which design meets the throughput and acknowledgement SLA?
Correct
A scalable gateway that immediately enqueues events allows the system to acknowledge within two seconds while smoothing bursts. Workers can process at the sustainable rate, preserving idempotency via keys. Posting directly into Apex REST risks hitting concurrency and CPU limits under bursts, causing timeouts and missed acknowledgements. Writing to a database table and polling introduces latency and may not meet the two-second ACK, particularly during spikes. A monolith that processes synchronously before ACK ties acknowledgement to business logic latency, violating the SLA under load. Queues provide backpressure and horizontal scalability. The gateway can rate-limit downstream while remaining elastic at the edge. Idempotency keys prevent duplicate side effects on retries. Metrics on queue depth ensure resilience during spikes.
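A minimal sketch of the gateway's fast path, assuming the PSP supplies an idempotency key per event; the in-process queue and set stand in for a horizontally scalable broker and a shared TTL store:

```python
# Sketch of the webhook gateway: acknowledge inside the two-second SLA by
# doing only an idempotency check and an enqueue, never business logic.
# Queue and key store are illustrative stand-ins.
import queue

event_queue = queue.Queue()
seen_keys = set()   # in production: shared store with TTL, not process memory

def handle_webhook(event: dict) -> tuple[int, str]:
    key = event["idempotency_key"]          # assumed supplied by the PSP per event
    if key in seen_keys:
        return 200, "duplicate-ignored"     # safe ACK; side effects ran already
    seen_keys.add(key)
    event_queue.put(event)                  # O(1); workers drain at their own rate
    return 200, "accepted"                  # ACK well inside the 2 s SLA

# Workers consume event_queue asynchronously, applying each payment exactly
# once per idempotency key and retrying transient failures with backoff.
```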