Salesforce Certified Platform Integration Architect Practice Test 2
Question 1 of 60
1. Question
An upstream provider will enforce stricter rate limits after their next release. Your integration currently spikes during batch jobs and risks 429 responses. What is the most resilient remediation?
Adaptive rate limiting smooths traffic by shaping request rates to remain within provider limits, and jittered schedules avoid synchronized spikes across jobs. Longer timeouts do not address server-side throttling and may worsen tail latency. More parallelism intensifies the spike and triggers even more 429s. Disabling retries removes resilience to transient failures and can lose work. Token bucket or leaky bucket algorithms match well with provider quotas and are predictable. Jitter prevents thundering herd effects when many clients start simultaneously. Scheduling jobs in staggered windows aligns with provider guidance during updates. Observability on rate-limit headers allows dynamic adjustment. Backoff with full jitter ensures fair use under contention. This combination keeps throughput high while staying compliant.
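To make the mechanics concrete, here is a minimal Python sketch of a token bucket combined with backoff using full jitter; the rate, capacity, and retry counts are assumed values rather than figures from any provider.

```python
import random
import time

class TokenBucket:
    """Shapes outbound call rate so batch jobs stay under an assumed provider quota."""

    def __init__(self, rate_per_sec: float, capacity: int):
        self.rate = rate_per_sec
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def acquire(self) -> None:
        # Refill based on elapsed time, then wait until a token is available.
        while True:
            now = time.monotonic()
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return
            time.sleep((1 - self.tokens) / self.rate)

def full_jitter_delay(attempt: int, base: float = 0.5, cap: float = 30.0) -> float:
    # Full jitter: a random delay between 0 and the capped exponential value.
    return random.uniform(0, min(cap, base * (2 ** attempt)))

bucket = TokenBucket(rate_per_sec=5, capacity=10)   # assumed limits for the example

def call_with_limits(send_request, max_attempts: int = 5):
    for attempt in range(max_attempts):
        bucket.acquire()                    # smooth the spike before each call
        response = send_request()           # caller-supplied callable
        if response.status_code != 429:     # not throttled by the provider
            return response
        time.sleep(full_jitter_delay(attempt))
    raise RuntimeError("still throttled after retries")
```

The same jitter can be applied to batch start times so scheduled jobs do not begin in lockstep.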
Question 2 of 60
2. Question
A system landscape includes Kafka, Salesforce, and AWS. Which integration pattern does Kafka most likely represent?
Option 2 is correct. Kafka is a messaging platform commonly used in Event-Driven Architectures. Option 1 implies synchronous interaction. Options 3 and 4 do not align with Kafka's use cases.
Question 3 of 60
3. Question
The landscape shows Salesforce using Heroku Connect. What type of integration is this?
Option 2 is correct. Heroku Connect replicates data between Salesforce and Heroku Postgres. Option 1 doesn't store data. Option 3 refers to the user interface. Option 4 doesn't apply here.
Question 4 of 60
4. Question
A diagram shows Salesforce integrating with SAP using MuleSoft APIs. What role does MuleSoft play?
Option 1 is correct. MuleSoft acts as middleware facilitating communication. Option 2 handles presentation, not integration. Option 3 manages auth. Option 4 is unrelated.
Question 5 of 60
5. Question
A system sends messages to Salesforce via webhooks. What should the architect document in the landscape?
Option 3 is correct. Webhooks are event-driven mechanisms to notify external systems. Option 1 involves batch. Option 2 is security-related. Option 4 is not push-based like webhooks.
Question 6 of 60
6. Question
The landscape includes Salesforce, a CMS, and SSO. What should the inventory include?
Option 2 is correct. Auth flows and endpoints are integration-relevant. Option 1 is about content. Options 3 and 4 are Salesforce-internal and not integration-specific.
Question 7 of 60
7. Question
A diagram includes middleware orchestration between Salesforce and other apps. What pattern should be noted?
Option 2 is correct. Middleware performing steps and coordinating flows fits orchestration. The other options either don't match or refer to unrelated functions.
Question 8 of 60
8. Question
A system uses Salesforce Connect to show external data. What pattern does this represent?
Option 2 is correct. Salesforce Connect virtualizes data access without storing it locally. Option 1 copies data. Option 3 involves files. Option 4 is not relevant.
Question 9 of 60
9. Question
A manufacturing company wants to integrate Salesforce with its factory equipment system, which can only receive data in XML over a secured FTP protocol. What is the most compatible integration method?
Option 1 is correct. Since the target system requires XML via SFTP, a batch job that produces the correct format and uses the right protocol is the most suitable. Options 2 and 3 involve JSON and real-time processing, which aren't compatible with the factory system's constraints. Option 4 (Salesforce Connect) is used to virtualize external data, not push data in file format.
Question 10 of 60
10. Question
An integration architect reviews a diagram showing Salesforce, a data warehouse, and a nightly data push from Salesforce to the warehouse using CSV files. What integration pattern is represented?
Option 4 is correct. A nightly push of data in CSV format clearly represents a batch data synchronization pattern. Option 1 is incorrect because it refers to synchronous real-time requests. Option 2 involves UI interactions, which are unrelated. Option 3 is used for near real-time messaging, not batch data export.
Question 11 of 60
11. Question
A payment gateway plans a mandatory TLS version upgrade next quarter. The integration must keep processing during the cutover with minimal errors. What should the architect implement first to build resilience for this system update?
Dual-stack support with canary routing allows validating the new TLS configuration with a safe percentage of traffic before full adoption, which materially reduces risk during upgrades. Canarying detects incompatibilities early and provides telemetry for rapid rollback, preserving transaction continuity. A single hard cutover maximizes blast radius and forces a full outage if anything is misconfigured. Adding client retries without topology change does not address protocol incompatibility and simply increases load during failure. Manual host file switches are brittle, error-prone, and not auditable at scale, which undermines resilience. Canary routing combined with automated fallback ensures mean time to recovery is minimized. It also supports progressive migration aligned with governance. Observability hooks during canary phases catch certificate or cipher issues. Dual endpoints let teams perform synthetic tests continuously. This approach creates a repeatable pattern for future protocol updates.
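As an illustration of the canary idea (the endpoint URLs and the 5% weight are invented for this sketch), a small Python router can send a slice of traffic to the upgraded TLS endpoint and fall back automatically:

```python
import random

LEGACY_ENDPOINT = "https://pay.example.com/v1/charge"      # current TLS configuration
CANARY_ENDPOINT = "https://pay-new.example.com/v1/charge"  # upgraded TLS configuration
CANARY_WEIGHT = 0.05            # raised gradually as canary telemetry stays clean

def pick_endpoint() -> str:
    return CANARY_ENDPOINT if random.random() < CANARY_WEIGHT else LEGACY_ENDPOINT

def send_payment(post, payload: dict):
    endpoint = pick_endpoint()
    try:
        return post(endpoint, json=payload, timeout=5)
    except Exception:
        # Automated fallback: a handshake or cipher failure on the canary path is
        # retried once against the known-good endpoint so transactions continue.
        if endpoint == CANARY_ENDPOINT:
            return post(LEGACY_ENDPOINT, json=payload, timeout=5)
        raise
```

Here `post` stands in for whatever HTTP client the integration already uses; in practice the weighting usually lives in a load balancer or gateway rather than application code.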
Question 12 of 60
12. Question
A core ERP will change response schemas in a minor release while keeping the same URLs. The integration must avoid production breakage when the new version rolls out region by region. What is the most resilient design choice?
A version-aware mediation layer can detect and adapt to different ERP schema versions, normalizing them to a stable canonical model for clients. This isolates downstream consumers from staggered rollouts and reduces coordination overhead across regions. Direct client mappings are fragile; every client must change simultaneously, magnifying risk. A big-bang refactor ties many teams to one date, and any delay causes outages. Suppressing errors hides problems and can corrupt data by dropping important changes silently. Mediation supports feature flags, A/B testing, and schema evolution with contract tests. It provides a controlled boundary where transformations are observable and reversible. Centralized normalization eases audit and troubleshooting. Consumers remain stable while backends evolve independently. This design supports blue/green or region-by-region adoption seamlessly.
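A minimal sketch of the mediation idea, assuming invented field names and a "schemaVersion" marker in the ERP payload: detect the version and normalize it to one canonical shape so consumers are unaffected by the staggered rollout.

```python
def to_canonical_order(erp_payload: dict) -> dict:
    """Normalize either ERP schema version to the canonical model consumers rely on."""
    version = str(erp_payload.get("schemaVersion", "1.0"))
    if version.startswith("2"):
        # Shape introduced by the minor release (rolled out region by region).
        return {
            "order_id": erp_payload["orderId"],
            "total": erp_payload["amounts"]["grandTotal"],
            "currency": erp_payload["amounts"]["currency"],
        }
    # Legacy shape from regions that have not upgraded yet.
    return {
        "order_id": erp_payload["order_number"],
        "total": erp_payload["total_amount"],
        "currency": erp_payload.get("currency_code", "USD"),
    }
```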
Question 13 of 60
13. Question
Your organization will rotate OAuth client secrets every 30 days across multiple connectors. The integration must continue operating through rotations without human intervention. What mechanism most increases resilience?
Vault-backed dynamic credentials or JWKS-based key discovery allow automated rotation without service interruption, ensuring tokens validate with current keys and reducing operational toil. Manual runbooks are error-prone and introduce human latency, which can cause downtime. Extending token lifetime increases blast radius for compromise and violates security best practices. Hardcoding multiple secrets is insecure and inflexible, complicating audits and rollbacks. Automated rotation integrates with CI/CD and policy-as-code, enabling predictable change windows. JWKS provides standard discovery for public keys supporting seamless key rollover. Dynamic secrets also support immediate revocation if compromise is suspected. This pattern keeps secrets out of code and configuration repositories. It reduces mean time to recovery in key incidents. Observability around rotation success prevents silent failures.
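For example, with an identity provider that publishes a JWKS endpoint, token validation can discover the current signing key automatically. The sketch below assumes the PyJWT library; the URL and audience are placeholders.

```python
import jwt                      # PyJWT, assumed available
from jwt import PyJWKClient

JWKS_URL = "https://auth.example.com/.well-known/jwks.json"   # placeholder issuer

jwks_client = PyJWKClient(JWKS_URL)   # fetches and caches the issuer's public keys

def validate(token: str) -> dict:
    # Key rollover needs no redeploy: the client matches the token's "kid" header
    # against whatever keys the issuer currently publishes.
    signing_key = jwks_client.get_signing_key_from_jwt(token)
    return jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        audience="integration-api",   # placeholder audience
    )
```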
Question 14 of 60
14. Question
A third-party shipping API is deprecating an endpoint and will throttle legacy calls before shutdown. You must preserve order fulfillment with minimal customer impact during the deprecation window. What pattern should be adopted?
A circuit breaker with graceful degradation avoids cascading failures during throttling by quickly detecting error rates and backing off, while progressively migrating to the new endpoint reduces risk. Retrying aggressively amplifies throttling and can trigger broader rate limits. Stopping all calls interrupts the business process and creates backlogs and manual work. Waiting to switch until the shutdown date compresses risk into one moment without telemetry. The breaker protects upstream systems and keeps the UX responsive with fallbacks like queuing or delayed labels. Progressive migration enables canary test coverage and performance validation. Degradation paths can display accurate status to users, preserving trust. This approach supports monitoring of both legacy and new endpoints. It provides levers to tune traffic during the transition. Ultimately it balances availability, performance, and change risk.
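A minimal circuit-breaker sketch in Python, with the thresholds and fallback behavior invented for illustration: once the throttled legacy endpoint fails repeatedly, calls are short-circuited for a cooldown window and orders degrade gracefully to a queued state.

```python
import time

class CircuitBreaker:
    def __init__(self, failure_threshold: int = 5, reset_after: float = 60.0):
        self.failure_threshold = failure_threshold
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def allow(self) -> bool:
        if self.opened_at is None:
            return True
        if time.monotonic() - self.opened_at >= self.reset_after:
            self.opened_at = None      # half-open: probe the dependency again
            self.failures = 0
            return True
        return False                   # open: skip the call entirely

    def record(self, ok: bool) -> None:
        if ok:
            self.failures = 0
            return
        self.failures += 1
        if self.failures >= self.failure_threshold:
            self.opened_at = time.monotonic()

breaker = CircuitBreaker()

def create_label(call_shipping_api, order: dict) -> dict:
    if breaker.allow():
        try:
            result = call_shipping_api(order)
            breaker.record(ok=True)
            return result
        except Exception:
            breaker.record(ok=False)
    # Graceful degradation: queue the order and surface an accurate "pending" status.
    return {"status": "label_pending", "order_id": order["id"]}
```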
Question 15 of 60
15. Question
A compliance update requires moving PII fields to tokenized forms, and downstream consumers will update at different times. How do you design for resilience during this multi-phase rollout?
Dual-writing with a feature flag keeps both representations available during the transition and allows consumers to migrate on their timeline. Read-through detokenization maintains backward compatibility while you validate downstream readiness. Forcing immediate replacement creates coordinated outages across many systems. Freezing writes is operationally unacceptable and risks data loss or business interruption. Emailing PII breaks security controls and auditability and is noncompliant. Feature flags permit controlled rollout per consumer or region. Detokenization services can be monitored and rate-limited to prevent abuse. This approach provides a reversible path if issues are found. Data lineage remains intact with clear mapping between original and tokenized values. It also supports progressive compliance hardening once migration completes.
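A sketch of the dual-write and read-through idea; the flag name, field names, and the tokenize/detokenize callables are placeholders, not a prescribed implementation.

```python
WRITE_TOKENIZED = True   # feature flag, e.g. enabled per region or per consumer group

def write_customer(store, customer: dict, tokenize) -> None:
    record = dict(customer)              # legacy PII field stays until consumers migrate
    if WRITE_TOKENIZED:
        record["ssn_token"] = tokenize(customer["ssn"])
    store.save(record)

def read_ssn(record: dict, detokenize, consumer_supports_tokens: bool) -> str:
    if consumer_supports_tokens and "ssn_token" in record:
        return record["ssn_token"]
    if "ssn" in record:                  # dual-written records still carry the raw value
        return record["ssn"]
    return detokenize(record["ssn_token"])   # read-through for consumers not yet migrated
```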
Question 16 of 60
16. Question
A diagram shows real-time API calls from Salesforce to an external system when a case is created. What integration type is used?
The correct answer is 2. Real-time API calls for immediate processing align with the Request and Reply pattern. Option 1 is replication-focused. Option 3 involves delay. Option 4 refers to async notification, not real-time processing.
Question 17 of 60
17. Question
A middleware upgrade will change connection pools and thread limits. During the upgrade window, you must prevent cascading failures into Salesforce and downstream ERPs. Which design element most improves resilience?
Bulkheads isolate failures by confining resource exhaustion to a single dependency, while prioritized queues ensure critical workloads are served first during constrained capacity. A single shared pool allows one noisy neighbor to starve others and magnifies failure impact. Removing queues eliminates backpressure and can translate transient slowdowns into widespread timeouts. Forcing synchronous paths increases coupling and raises the chance that a single slow hop breaks the entire chain. Bulkheads mirror ship compartments: a leak in one does not sink all. Prioritized queues protect key flows (e.g., payments) during maintenance. This pattern enables controlled degradation rather than total failure. It works well with circuit breakers and rate limiting. Metrics on queue depth and latency guide dynamic throttling. The result is graceful performance under constrained resources.
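To illustrate (the pool size and priority values are assumptions), a bulkhead can be as simple as a bounded semaphore per dependency, paired with a priority queue so payment work is served before reporting work while capacity is constrained:

```python
import itertools
import queue
import threading

erp_bulkhead = threading.BoundedSemaphore(4)    # assumed ERP-only connection budget
work_queue: queue.PriorityQueue = queue.PriorityQueue()
_seq = itertools.count()                        # tie-breaker so equal priorities stay FIFO

PRIORITY_PAYMENT, PRIORITY_REPORTING = 0, 9     # lower number is served first

def submit(priority: int, job) -> None:
    work_queue.put((priority, next(_seq), job))

def erp_worker() -> None:
    while True:
        _, _, job = work_queue.get()
        with erp_bulkhead:                      # failure or slowness is confined to the ERP pool
            job()
        work_queue.task_done()
```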
Question 18 of 60
18. Question
A SaaS vendor will change their DNS and introduce new IP ranges. Your integration must survive this change without manual redeploys. Which approach best supports resilience to such infrastructure updates?
Using short TTLs with health-checked discovery ensures clients pick up new endpoints quickly, while automating allowlists keeps network controls in sync with vendor IP changes. Hardcoding IPs leads to outages when ranges change and is difficult to maintain across environments. Static hosts entries are similarly brittle and invisible to standard monitoring. Client retries do not help if connections fail due to blocked IPs or stale DNS; they only waste resources. Automated network policy updates reduce manual errors and speed recovery. Health checks validate reachability before traffic shifts. Short TTLs balance responsiveness with DNS load. This design supports blue/green and failover scenarios. It also provides governance artifacts for audit. Together these practices prevent downtime during DNS or IP migrations.
Question 19 of 60
19. Question
A quarterly Salesforce release may change governor behavior for synchronous Apex under heavy load. Your integration calls external services from triggers today. What is the most resilient redesign?
Moving external calls out of triggers and into asynchronous mechanisms like Platform Events or Queueables decouples business transactions from external dependencies and governor changes. Try/catch in triggers does not mitigate callout limits or transaction rollbacks under load. Parallelizing synchronous futures is constrained by platform limits and can still hit governor changes during releases. Formula fields and validation rules cannot perform callouts or orchestrate retries. Asynchronous patterns allow backoff, idempotency, and dead-letter handling during vendor updates. They reduce user-facing latency and avoid partial commits. Subscriber services can be versioned independently. Monitoring and replay capabilities increase operational resilience. This architecture aligns with Salesforce best practices for integration robustness.
Question 20 of 60
20. Question
A vendor library used by your integration will be upgraded with breaking changes to error codes and transient failure semantics. You must avoid client regressions. What is the best resilience strategy?
Contract tests formalize expected behaviors and, with consumer-driven mocks, ensure the upgraded library still satisfies downstream expectations or is adapted via compatibility shims. Upgrading in production is risky and shifts validation to customers. Removing error handling increases blast radius and degrades user experience during incidents. Disabling timeouts can create resource starvation and cascading failures under load. Compatibility shims translate new error codes to stable ones, shielding clients. Tests run in CI prevent regressions from reaching production. Mocks capture edge cases that real dependencies may not easily reproduce. This approach supports phased rollout with confidence. Observability verifies parity in real traffic. Together they create a robust safety net for library updates.
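As a sketch (the error codes and mapping are invented), a compatibility shim plus a consumer-driven contract test might look like this; the test runs in CI so a library upgrade that changes error semantics is caught before release.

```python
LEGACY_CODE_MAP = {
    "ERR_THROTTLED_V2": "RATE_LIMITED",   # new library code -> code clients already handle
    "ERR_TRANSIENT_V2": "RETRYABLE",
}

def shim_error(new_error_code: str) -> str:
    """Translate the upgraded library's error codes back to the stable contract."""
    return LEGACY_CODE_MAP.get(new_error_code, "UNKNOWN")

def test_upgraded_library_keeps_contract():
    # Consumer-driven expectations: existing retry logic depends on these codes.
    assert shim_error("ERR_THROTTLED_V2") == "RATE_LIMITED"
    assert shim_error("ERR_TRANSIENT_V2") == "RETRYABLE"
```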
Question 21 of 60
21. Question
What approach supports resilience when a dependent system is being upgraded?
Option 2 is correct because fallback systems with caching can handle temporary disruptions without losing data or functionality. Option 1 increases failure risk. Option 3 affects user trust. Option 4 is an overreaction and halts business.
Question 22 of 60
22. Question
How do you ensure encrypted outbound traffic from Salesforce to an external API?
Option 1 is correct because HTTPS with TLS provides encryption and secure transmission. Option 2 is insecure. Option 3 exposes secrets. Option 4 is a compliance violation.
Question 23 of 60
23. Question
Which method ensures message durability when the receiving system is intermittently unavailable?
Option 1 is correct because Platform Events with durable subscribers guarantee that messages are retained until acknowledged. Option 2 is unreliable. Option 3 pushes complexity to the client. Option 4 results in lost data.
Question 24 of 60
24. Question
How do mutual TLS integrations provide security?
Option 3 is correct because mutual TLS confirms both parties' identities, ensuring encrypted and trusted communication. Option 1 is partial. Option 2 misses the client side. Option 4 is irrelevant to authentication.
Question 25 of 60
25. Question
What's an appropriate strategy for handling transient errors in outbound REST callouts?
Option 1 is correct because exponential backoff avoids overwhelming systems and provides time for recovery. Option 2 may cause spikes. Option 3 ignores resolution chances. Option 4 is inefficient.
Question 26 of 60
26. Question
How should you prevent unauthorized inbound access to a public API endpoint?
Option 2 is correct because token-based authentication validates authorized access. Option 1 is reactive. Option 3 is insecure. Option 4 is passive.
Question 27 of 60
27. Question
Which element adds resiliency for systems that may respond with delays or timeouts?
Option 2 is correct because combining retries with timeout handling addresses latency and ensures retry after temporary failure. Option 1 causes blocking. Option 3 bypasses security. Option 4 is irresponsible.
Question 28 of 60
28. Question
What protects API credentials during transmission?
Option 2 is correct because TLS encrypts the entire communication channel, including sensitive credentials. Option 1 exposes credentials. Options 3 and 4 violate best practices.
Question 29 of 60
29. Question
What prevents an integration from breaking during schema updates on a third-party API?
Option 3 is correct because decoupling using mapping layers allows flexibility during schema evolution. Option 1 creates fragility. Option 2 is negligent. Option 4 introduces risk.
Question 30 of 60
30. Question
What is the first step when handling an integration failure in a critical process?
Option 3 is correct because before taking any further action, identifying the root cause ensures informed resolution. Option 1 may repeat failures without understanding the issue. Option 2 should follow analysis. Option 4 may spread unnecessary concern before a fix is in place.
Question 31 of 60
31. Question
What key metric should be monitored to understand traffic volume trends in APIs?
API call volume over time reveals usage spikes and helps plan for scaling. Licenses and object count are not traffic indicators. Login history is user-level.
Question 32 of 60
32. Question
How can a system be designed to gracefully handle a partner API that is frequently updated?
An API gateway with versioning allows the integration to abstract away backend changes and provide controlled exposure of API versions. This prevents direct impact on consumers and ensures compatibility during updates. Direct SOAP callouts make the system fragile due to tight coupling. Ignoring versioning leads to failure when APIs change. Statelessness is beneficial, but without versioning it doesn't ensure resilience.
Question 33 of 60
33. Question
What method should be used to avoid integration failure due to system maintenance windows?
Asynchronous processing with retry logic ensures that messages or operations are retried automatically if the target system is down, reducing the need for manual intervention. Manual resumption and pausing are not scalable or reliable in high-availability environments. Availability calendars may not always reflect unexpected downtime. Retry logic ensures resilience and uptime.
Question 34 of 60
34. Question
In the event of partial data failure in a bulk integration process, what design approach helps ensure resilience?
Breaking data into smaller batches allows targeted retries and minimizes reprocessing. It improves throughput and isolates issues, preventing a single failure from halting the entire job. Skipping records without retrying can result in data loss. Transactional commits per record may not be efficient. Aborting the whole batch compromises availability.
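As shown in the sketch below (the batch size and retry count are assumptions), splitting the load into small batches lets the job retry only what failed and report the remainder instead of aborting everything.

```python
def chunk(records: list, size: int = 200):
    for i in range(0, len(records), size):
        yield records[i:i + size]

def bulk_load(records: list, upsert_batch, max_retries: int = 2) -> list:
    failed = []
    for batch in chunk(records):
        for attempt in range(max_retries + 1):
            try:
                upsert_batch(batch)          # caller-supplied loader
                break                        # this batch succeeded; move on
            except Exception:
                if attempt == max_retries:
                    failed.extend(batch)     # isolate the failure, keep the job running
    return failed                            # candidates for targeted reprocessing
```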
Question 35 of 60
35. Question
What pattern best supports resilience for Salesforce Platform Events integration with external systems?
Durable streaming with replay IDs ensures reliable message delivery even if the subscriber temporarily disconnects. It supports replay of missed events and maintains resilience. Polling and batch jobs are not optimal for real-time and reliable processing. Confirmation patterns add complexity and don't ensure replay.
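Conceptually (this is not the actual CometD or Pub/Sub API client code), the subscriber's resilience comes from checkpointing the last replay ID it processed and resuming from it after a disconnect, as in the sketch below.

```python
import pathlib

CHECKPOINT = pathlib.Path("order_events.replay")   # durable checkpoint for the subscriber

def load_replay_id() -> int:
    return int(CHECKPOINT.read_text()) if CHECKPOINT.exists() else -1   # -1: new events only

def process(payload: dict) -> None:
    print("processing", payload)                   # stand-in for real business logic

def handle(event: dict) -> None:
    process(event["payload"])
    CHECKPOINT.write_text(str(event["replayId"]))  # checkpoint only after success

def subscribe(stream_from) -> None:
    # stream_from(replay_id) is assumed to yield events at or after that replay ID,
    # so events published while the subscriber was down are replayed, not lost.
    for event in stream_from(load_replay_id()):
        handle(event)
```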
Question 36 of 60
36. Question
What is the primary role of idempotency in a resilient integration design?
Idempotency ensures that retries of the same operation don't cause duplication or inconsistency. This is vital in ensuring resilience, especially with retry logic, so that failures can be retried safely. It doesn't directly address latency or response times. It's unrelated to authentication.
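A minimal idempotency sketch (the key field and ledger call are placeholders): the producer sends a stable key with each request, and the receiver records processed keys so a retried delivery is not applied twice.

```python
processed_keys: set = set()        # in production this would be durable, shared storage

def apply_payment(request: dict, post_to_ledger) -> str:
    key = request["idempotency_key"]          # e.g. derived from the source record ID
    if key in processed_keys:
        return "duplicate_ignored"            # retry arrived after a success: no double post
    post_to_ledger(request["amount"], request["account"])
    processed_keys.add(key)
    return "applied"
```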
Question 37 of 60
37. Question
Which strategy helps reduce downtime impact during system deployments in a resilient integration?
Canary releases or blue-green deployments reduce risk by shifting traffic to updated systems gradually. This ensures fallback is possible and avoids total system disruption. Manual steps and stopping all integrations increase downtime risk. Rollback scripts are reactive rather than preventive.
Question 38 of 60
38. Question
What should be included in a design to prevent message loss during system downtime?
Persistent message queues store messages until they can be successfully processed. This design ensures that messages aren't lost during outages. Synchronous APIs and manual intervention don't provide guarantees. Disabling endpoints blocks messages entirely.
Question 39 of 60
39. Question
What integration behavior supports system updates without breaking existing consumers?
Versioning APIs allows backward compatibility, so current consumers continue functioning while newer consumers adopt updated versions. In-place updates and undocumented changes break consumers. Tight coupling creates fragility.
Question 40 of 60
40. Question
Which of the following should be used to track API usage, latency, and error rates over time?
Real-time monitoring with alerts allows teams to proactively detect issues and maintain performance SLAs. Static or manual tracking lacks immediacy. Sampling misses full data. This enables faster resolution and more reliable integrations.
Question 41 of 60
41. Question
Which approach ensures accurate performance analysis of integrations?
Response time, throughput, and error rate tracking provide direct indicators of integration performance. CPU alone doesn't reflect API behavior. Uptime logs show availability, not performance. Surveys provide subjective data.
Question 42 of 60
42. Question
Which tool is best for automated identification of performance degradation in APIs?
Correct
Application performance monitoring (APM) tools analyze metrics such as latency, error rate, and throughput in real time and trigger alerts when thresholds are breached. Schema Builder and Change Sets are design tools. Custom objects are not suited for monitoring.
Incorrect
Application performance monitoring (APM) tools analyze metrics such as latency, error rate, and throughput in real time and trigger alerts when thresholds are breached. Schema Builder and Change Sets are design tools. Custom objects are not suited for monitoring.
Unattempted
Application performance monitoring (APM) tools analyze metrics such as latency, error rate, and throughput in real time and trigger alerts when thresholds are breached. Schema Builder and Change Sets are design tools. Custom objects are not suited for monitoring.
Question 43 of 60
43. Question
What is an important benefit of using integration monitoring dashboards?
Correct
Dashboards consolidate key metrics and provide visibility across systems. They don't eliminate the need for logging; they complement it. Network issues and QA testing still require dedicated approaches. Dashboards help stakeholders track trends and incidents.
Incorrect
Dashboards consolidate key metrics and provide visibility across systems. They don't eliminate the need for logging; they complement it. Network issues and QA testing still require dedicated approaches. Dashboards help stakeholders track trends and incidents.
Unattempted
Dashboards consolidate key metrics and provide visibility across systems. They don't eliminate the need for logging; they complement it. Network issues and QA testing still require dedicated approaches. Dashboards help stakeholders track trends and incidents.
Question 44 of 60
44. Question
What should be used to capture end-to-end latency in a distributed integration system?
Correct
Distributed tracing tools are designed to monitor the flow of a request across services and measure end-to-end latency. Server metrics and API limits give only partial data. Manual testing isn't scalable.
Incorrect
Distributed tracing tools are designed to monitor the flow of a request across services and measure end-to-end latency. Server metrics and API limits give only partial data. Manual testing isn't scalable.
Unattempted
Distributed tracing tools are designed to monitor the flow of a request across services and measure end-to-end latency. Server metrics and API limits give only partial data. Manual testing isn't scalable.
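The sketch below illustrates the core idea behind distributed tracing: one trace ID is propagated through each hop so the spans can be stitched together and the end-to-end latency computed. Service names are placeholders, and a production system would use OpenTelemetry or a comparable tracer rather than this hand-rolled version.

```python
# Minimal trace-ID propagation: every hop records a span tagged with the same
# trace ID, and the spans are summed for end-to-end latency.
import time
import uuid

SPANS = []  # collected spans; a real tracer exports these to a backend

def traced(service: str, trace_id: str, work) -> None:
    start = time.perf_counter()
    work()
    SPANS.append({
        "trace_id": trace_id,
        "service": service,
        "duration_ms": (time.perf_counter() - start) * 1000,
    })

trace_id = str(uuid.uuid4())  # created once at the entry point
traced("salesforce-outbound", trace_id, lambda: time.sleep(0.02))
traced("middleware", trace_id, lambda: time.sleep(0.05))
traced("erp-endpoint", trace_id, lambda: time.sleep(0.03))

total = sum(s["duration_ms"] for s in SPANS if s["trace_id"] == trace_id)
print(f"end-to-end latency for trace {trace_id[:8]}: {total:.1f} ms")
for span in SPANS:
    print(span["service"], f"{span['duration_ms']:.1f} ms")
```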
Question 45 of 60
45. Question
How can performance issues be diagnosed after an integration failure?
Correct
Logs and monitoring tools provide historical and real-time insight into the root cause. Guessing and rebuilding are inefficient. The Setup menu offers only limited diagnostics.
Incorrect
Logs and monitoring tools provide historical and real-time insight into the root cause. Guessing and rebuilding are inefficient. The Setup menu offers only limited diagnostics.
Unattempted
Logs and monitoring tools provide historical and real-time insight into the root cause. Guessing and rebuilding are inefficient. The Setup menu offers only limited diagnostics.
Question 46 of 60
46. Question
What is a key consideration for building resilience into an integration when a downstream system may be temporarily unavailable?
Correct
The circuit breaker pattern helps avoid overwhelming a failing system by detecting faults and breaking the connection for a period of time. It prevents endless retries and system strain, which are risks with continuous retry logic. While message queues can help buffer requests, they don't protect against cascading failures. Synchronous callouts fail immediately and don't provide resilience.
Incorrect
The circuit breaker pattern helps avoid overwhelming a failing system by detecting faults and breaking the connection for a period of time. It prevents endless retries and system strain, which are risks with continuous retry logic. While message queues can help buffer requests, they don't protect against cascading failures. Synchronous callouts fail immediately and don't provide resilience.
Unattempted
The circuit breaker pattern helps avoid overwhelming a failing system by detecting faults and breaking the connection for a period of time. It prevents endless retries and system strain, which are risks with continuous retry logic. While message queues can help buffer requests, they don't protect against cascading failures. Synchronous callouts fail immediately and don't provide resilience.
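A minimal circuit breaker sketch, with call_downstream() standing in for the real integration call; the failure threshold and cool-down period are arbitrary example values.

```python
# Simple circuit breaker: after a run of consecutive failures the breaker
# "opens" and short-circuits calls for a cool-down period, protecting the
# downstream system from retry storms.
import time

class CircuitBreaker:
    def __init__(self, failure_threshold: int = 3, reset_after_s: float = 30.0):
        self.failure_threshold = failure_threshold
        self.reset_after_s = reset_after_s
        self.failures = 0
        self.opened_at = None  # timestamp when the breaker tripped, if any

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after_s:
                raise RuntimeError("circuit open: skipping call to failing system")
            self.opened_at = None  # cool-down elapsed; allow a trial call
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        self.failures = 0  # success resets the failure count
        return result

def call_downstream():
    raise ConnectionError("downstream system unavailable")

breaker = CircuitBreaker(failure_threshold=2, reset_after_s=10)
for attempt in range(4):
    try:
        breaker.call(call_downstream)
    except Exception as exc:
        print(f"attempt {attempt + 1}: {exc}")
```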
Question 47 of 60
47. Question
What is the role of SLAs in integration performance monitoring?
Correct
SLAs define metrics such as response time and availability. These are monitored to meet business commitments. SLAs don't control access or release frequency.
Incorrect
SLAs define metrics such as response time and availability. These are monitored to meet business commitments. SLAs don't control access or release frequency.
Unattempted
SLAs define metrics such as response time and availability. These are monitored to meet business commitments. SLAs don't control access or release frequency.
Question 48 of 60
48. Question
How can an organization ensure integrations remain healthy after new deployments?
Correct
Post-deployment monitoring helps detect regressions and issues quickly. Relying on user reports or blindly increasing volume can lead to failure. Skipping testing is risky.
Incorrect
Post-deployment monitoring helps detect regressions and issues quickly. Relying on user reports or blindly increasing volume can lead to failure. Skipping testing is risky.
Unattempted
Post-deployment monitoring helps detect regressions and issues quickly. Relying on user reports or blindly increasing volume can lead to failure. Skipping testing is risky.
Question 49 of 60
49. Question
A company needs to synchronize customer records between Salesforce and an on-premises ERP. The ERP can only handle 50 API calls per minute. What is the best integration strategy to prevent system overload?
Correct
The correct answer is 4. A batch job ensures data is sent at controlled intervals, avoiding overload. Option 1 risks exceeding the ERP limit. Option 2 helps throttle but requires careful handling of peak volume. Option 3 doesn't provide timing control, so it's not suitable for strict limits.
Incorrect
The correct answer is 4. A batch job ensures data is sent at controlled intervals, avoiding overload. Option 1 risks exceeding the ERP limit. Option 2 helps throttle but requires careful handling of peak volume. Option 3 doesn't provide timing control, so it's not suitable for strict limits.
Unattempted
The correct answer is 4. A batch job ensures data is sent at controlled intervals, avoiding overload. Option 1 risks exceeding the ERP limit. Option 2 helps throttle but requires careful handling of peak volume. Option 3 doesn't provide timing control, so it's not suitable for strict limits.
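One way to picture the pacing a batch job provides, with push_to_erp() standing in for the real ERP client: a fixed delay between calls keeps throughput under the 50-calls-per-minute ceiling.

```python
# Pacing a batch synchronization so it never exceeds the ERP's rate limit:
# records are sent in order with a fixed delay between calls.
import time

MAX_CALLS_PER_MINUTE = 50
DELAY_S = 60.0 / MAX_CALLS_PER_MINUTE  # 1.2 s between calls stays under the cap

def push_to_erp(record: dict) -> None:
    print(f"sent {record['id']}")  # placeholder for the real API call

def run_batch(records: list) -> None:
    for record in records:
        push_to_erp(record)
        time.sleep(DELAY_S)  # simple pacing; a token bucket would allow short bursts

run_batch([{"id": f"CUST-{n}"} for n in range(3)])
```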
Question 50 of 60
50. Question
A healthcare provider wants real-time insurance validation during patient registration. Their third-party insurance system has an API with a 5-second timeout. What's the best integration approach?
Correct
The correct answer is 2. A synchronous API call ensures immediate validation during registration. Option 1 is asynchronous and not suitable for real-time needs. Option 3 delays feedback, breaking the real-time requirement. Option 4 offloads logic but still relies on synchrony; option 2 handles it directly and securely.
Incorrect
The correct answer is 2. A synchronous API call ensures immediate validation during registration. Option 1 is asynchronous and not suitable for real-time needs. Option 3 delays feedback, breaking the real-time requirement. Option 4 offloads logic but still relies on synchrony; option 2 handles it directly and securely.
Unattempted
The correct answer is 2. A synchronous API call ensures immediate validation during registration. Option 1 is asynchronous and not suitable for real-time needs. Option 3 delays feedback, breaking the real-time requirement. Option 4 offloads logic but still relies on synchrony; option 2 handles it directly and securely.
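A sketch of a synchronous validation call with a hard 5-second timeout, so the registration flow gets an immediate answer or a clear failure. The URL and payload shape are hypothetical; the real insurer API contract would differ.

```python
# Synchronous callout with an explicit 5-second timeout, matching the
# provider's limit; failures surface immediately so the user can retry.
import json
import urllib.error
import urllib.request

VALIDATION_URL = "https://insurer.example.com/validate"  # hypothetical endpoint

def validate_insurance(member_id: str) -> dict:
    request = urllib.request.Request(
        VALIDATION_URL,
        data=json.dumps({"member_id": member_id}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    try:
        # Block for at most 5 seconds.
        with urllib.request.urlopen(request, timeout=5) as response:
            return json.loads(response.read())
    except (urllib.error.URLError, TimeoutError) as exc:
        # Return a clear, immediate failure so registration can prompt a retry.
        return {"valid": False, "error": str(exc)}

print(validate_insurance("MBR-001"))
```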
Question 51 of 60
51. Question
A retail company uses a mobile app to update inventory in Salesforce. Connectivity is unreliable. What is the best integration method?
Correct
Option 2 is correct. Storing data locally during offline periods and syncing later ensures reliability. Option 1 requires constant connectivity. Option 3 would fail under poor network conditions. Option 4 only pulls data and doesn't address updates from the app.
Incorrect
Option 2 is correct. Storing data locally during offline periods and syncing later ensures reliability. Option 1 requires constant connectivity. Option 3 would fail under poor network conditions. Option 4 only pulls data and doesn't address updates from the app.
Unattempted
Option 2 is correct. Storing data locally during offline periods and syncing later ensures reliability. Option 1 requires constant connectivity. Option 3 would fail under poor network conditions. Option 4 only pulls data and doesn't address updates from the app.
Question 52 of 60
52. Question
A bank must transmit financial transactions to a legacy mainframe system. The mainframe only accepts files via SFTP nightly. What's the best way to integrate?
Correct
The correct answer is 2. A nightly batch job fits the timing and format constraints of the legacy system. Options 1 and 3 are real-time, which the mainframe doesn't support. Option 4 is invalid because the mainframe lacks HTTP support.
Incorrect
The correct answer is 2. A nightly batch job fits the timing and format constraints of the legacy system. Options 1 and 3 are real-time, which the mainframe doesn't support. Option 4 is invalid because the mainframe lacks HTTP support.
Unattempted
The correct answer is 2. A nightly batch job fits the timing and format constraints of the legacy system. Options 1 and 3 are real-time, which the mainframe doesn't support. Option 4 is invalid because the mainframe lacks HTTP support.
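A sketch of the nightly file drop, assuming the third-party paramiko library is available for SFTP; the host, credentials, paths, and file columns are placeholders, and real credentials would come from a secrets store rather than source code.

```python
# Nightly batch export: transactions are written to a flat file and pushed to
# the mainframe's SFTP drop folder by a scheduled job.
import csv
from datetime import date

import paramiko  # third-party SSH/SFTP library (assumed available)

def write_transaction_file(transactions: list, path: str) -> None:
    # Flat-file export matching the mainframe's expected layout (illustrative columns).
    with open(path, "w", newline="") as handle:
        writer = csv.DictWriter(handle, fieldnames=["txn_id", "amount", "posted_on"])
        writer.writeheader()
        writer.writerows(transactions)

def upload_via_sftp(local_path: str, remote_path: str) -> None:
    # Credentials are hard-coded only for the sketch; use a secrets store in practice.
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect("mainframe.example.com", username="batchuser", password="placeholder")
    try:
        sftp = client.open_sftp()
        sftp.put(local_path, remote_path)  # drop the file for the nightly ingest job
        sftp.close()
    finally:
        client.close()

filename = f"transactions_{date.today():%Y%m%d}.csv"
write_transaction_file(
    [{"txn_id": "T-1", "amount": "125.00", "posted_on": str(date.today())}], filename
)
# Scheduled nightly, e.g. by the middleware's job scheduler:
# upload_via_sftp(filename, f"/inbound/{filename}")
```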
Question 53 of 60
53. Question
A SaaS billing system can only receive data once every hour. What should the integration architect do to comply with this constraint?
Correct
The correct answer is 3. An hourly batch respects the external system's update window. Options 1 and 4 stream changes in near real time, violating the hourly limit. Option 2 would flood the system and be incompatible with its constraint.
Incorrect
The correct answer is 3. An hourly batch respects the external system's update window. Options 1 and 4 stream changes in near real time, violating the hourly limit. Option 2 would flood the system and be incompatible with its constraint.
Unattempted
The correct answer is 3. An hourly batch respects the external system's update window. Options 1 and 4 stream changes in near real time, violating the hourly limit. Option 2 would flood the system and be incompatible with its constraint.
Question 54 of 60
54. Question
A logistics company has an aging warehouse system that can only ingest data in CSV format through email. Which approach is most aligned with this constraint?
Correct
The correct answer is 2. Generating and emailing a CSV meets the system's input format and timing. Options 1 and 3 are real-time and format-incompatible. Salesforce Connect only reads external data and wouldn't generate files.
Incorrect
The correct answer is 2. Generating and emailing a CSV meets the system's input format and timing. Options 1 and 3 are real-time and format-incompatible. Salesforce Connect only reads external data and wouldn't generate files.
Unattempted
The correct answer is 2. Generating and emailing a CSV meets the system's input format and timing. Options 1 and 3 are real-time and format-incompatible. Salesforce Connect only reads external data and wouldn't generate files.
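A sketch of the email-a-CSV approach using Python's standard library; the SMTP host, mailbox addresses, and CSV columns are illustrative only.

```python
# Build a CSV extract in memory and attach it to a message addressed to the
# warehouse system's intake mailbox.
import csv
import io
import smtplib
from email.message import EmailMessage

def build_csv(rows: list) -> bytes:
    # Serialize inventory rows to CSV (illustrative columns).
    buffer = io.StringIO()
    writer = csv.DictWriter(buffer, fieldnames=["sku", "quantity"])
    writer.writeheader()
    writer.writerows(rows)
    return buffer.getvalue().encode("utf-8")

def build_email(csv_bytes: bytes) -> EmailMessage:
    message = EmailMessage()
    message["Subject"] = "Daily inventory extract"
    message["From"] = "integration@retailer.example.com"
    message["To"] = "intake@warehouse.example.com"
    message.set_content("Attached is today's inventory extract.")
    message.add_attachment(
        csv_bytes, maintype="text", subtype="csv", filename="inventory.csv"
    )
    return message

extract = build_email(build_csv([{"sku": "SKU-100", "quantity": 42}]))
# Sending is a one-liner once an SMTP relay is available:
# with smtplib.SMTP("smtp.example.com") as smtp:
#     smtp.send_message(extract)
```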
Question 55 of 60
55. Question
A business needs to call an external API, but the provider enforces strict daily limits. What is the best strategy to handle this?
Correct
The correct answer is 2. Middleware can track and throttle calls to stay within limits. Option 1 may exceed limits quickly. Option 3 is for asynchronous delivery but doesn't throttle. Option 4 ignores the constraint and risks lockout.
Incorrect
The correct answer is 2. Middleware can track and throttle calls to stay within limits. Option 1 may exceed limits quickly. Option 3 is for asynchronous delivery but doesn't throttle. Option 4 ignores the constraint and risks lockout.
Unattempted
The correct answer is 2. Middleware can track and throttle calls to stay within limits. Option 1 may exceed limits quickly. Option 3 is for asynchronous delivery but doesn't throttle. Option 4 ignores the constraint and risks lockout.
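A bare-bones sketch of the daily-quota bookkeeping a middleware layer might perform; the limit value and call_provider() function are placeholders.

```python
# Count calls against a daily allowance and defer work once the allowance is
# spent, instead of risking a lockout by the provider.
from datetime import date

DAILY_LIMIT = 1000

class DailyQuota:
    def __init__(self, limit: int):
        self.limit = limit
        self.day = date.today()
        self.used = 0

    def try_acquire(self) -> bool:
        if date.today() != self.day:       # new day: reset the counter
            self.day, self.used = date.today(), 0
        if self.used >= self.limit:
            return False                    # out of quota; caller should defer
        self.used += 1
        return True

quota = DailyQuota(DAILY_LIMIT)

def call_provider(payload: dict) -> None:
    if not quota.try_acquire():
        print("quota exhausted: queueing for tomorrow")  # defer, don't drop
        return
    print(f"calling provider with {payload}")

call_provider({"invoice": "INV-7"})
```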
Question 56 of 60
56. Question
A service requires OAuth2 authentication and expires tokens every 15 minutes. What's the best way to manage this in Salesforce?
Correct
The correct answer is 3. Named Credentials handle token refresh securely. Option 1 is risky and prone to failure. Option 2 is not secure and not dynamic. Option 4 is insecure and brittle.
Incorrect
The correct answer is 3. Named Credentials handle token refresh securely. Option 1 is risky and prone to failure. Option 2 is not secure and not dynamic. Option 4 is insecure and brittle.
Unattempted
The correct answer is 3. Named Credentials handle token refresh securely. Option 1 is risky and prone to failure. Option 2 is not secure and not dynamic. Option 4 is insecure and brittle.
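To show why delegating token handling matters, the sketch below caches a client-credentials access token and refreshes it shortly before the 15-minute expiry. In Salesforce this bookkeeping is handled declaratively by Named Credentials; the token URL, client ID, and secret here are placeholders, not a depiction of how Named Credentials are implemented.

```python
# Generic OAuth2 client-credentials token cache: reuse the token while valid,
# refresh it a little before expiry so requests never carry a stale token.
import json
import time
import urllib.parse
import urllib.request

TOKEN_URL = "https://auth.example.com/oauth2/token"  # hypothetical
CLIENT_ID = "my-client-id"
CLIENT_SECRET = "my-client-secret"  # real secrets belong in a secure store

_cached = {"token": None, "expires_at": 0.0}

def get_token() -> str:
    # Refresh 60 seconds early to avoid in-flight requests using a stale token.
    if _cached["token"] and time.time() < _cached["expires_at"] - 60:
        return _cached["token"]
    body = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
    }).encode("utf-8")
    with urllib.request.urlopen(urllib.request.Request(TOKEN_URL, data=body)) as resp:
        payload = json.loads(resp.read())
    _cached["token"] = payload["access_token"]
    _cached["expires_at"] = time.time() + payload.get("expires_in", 900)
    return _cached["token"]
```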
Question 57 of 60
57. Question
A Salesforce org receives data from multiple external systems. To avoid data overwrites, what should the architect consider?
Correct
Option 2 is correct. Versioning helps determine the latest correct record. Option 1 describes a capability that doesn't exist. Option 3 doesn't prevent conflicts. Option 4 increases the risk of overwrites without coordination.
Incorrect
Option 2 is correct. Versioning helps determine the latest correct record. Option 1 describes a capability that doesn't exist. Option 3 doesn't prevent conflicts. Option 4 increases the risk of overwrites without coordination.
Unattempted
Option 2 is correct. Versioning helps determine the latest correct record. Option 1 describes a capability that doesn't exist. Option 3 doesn't prevent conflicts. Option 4 increases the risk of overwrites without coordination.
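A sketch of version-based conflict detection (optimistic concurrency): each inbound update carries the version it was based on, and stale writes are rejected instead of silently overwriting newer data. The record store and field names are invented for the example.

```python
# Reject updates based on an outdated version so concurrent source systems
# cannot silently overwrite each other's changes.
records = {"ACC-1": {"version": 3, "phone": "555-0100"}}

class StaleUpdateError(Exception):
    pass

def apply_update(record_id: str, based_on_version: int, changes: dict) -> None:
    current = records[record_id]
    if based_on_version != current["version"]:
        # Another system already wrote a newer version; do not overwrite it.
        raise StaleUpdateError(
            f"{record_id}: update based on v{based_on_version}, "
            f"current is v{current['version']}"
        )
    current.update(changes)
    current["version"] += 1  # bump so later stale writes are caught too

apply_update("ACC-1", based_on_version=3, changes={"phone": "555-0199"})
try:
    apply_update("ACC-1", based_on_version=3, changes={"phone": "555-0111"})
except StaleUpdateError as exc:
    print(exc)
```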
Question 58 of 60
58. Question
A Salesforce org must aggregate data from four different systems before showing it on a dashboard. Performance is a concern. What's the best approach?
Correct
Option 2 is correct. Middleware allows data consolidation and performance tuning before the data reaches Salesforce. Option 1 would be slow. Option 3 doesn't aggregate data. Option 4 adds complexity without solving latency issues.
Incorrect
Option 2 is correct. Middleware allows data consolidation and performance tuning before the data reaches Salesforce. Option 1 would be slow. Option 3 doesn't aggregate data. Option 4 adds complexity without solving latency issues.
Unattempted
Option 2 is correct. Middleware allows data consolidation and performance tuning before the data reaches Salesforce. Option 1 would be slow. Option 3 doesn't aggregate data. Option 4 adds complexity without solving latency issues.
Question 59 of 60
59. Question
An architect is reviewing a landscape diagram showing Salesforce, SAP, and MuleSoft. What should the architect identify first?
Correct
The correct answer is 1. Understanding the integration endpoints and mechanisms is key to mapping the data flow. Options 2, 3, and 4 are important but secondary to establishing technical connectivity.
Incorrect
The correct answer is 1. Understanding the integration endpoints and mechanisms is key to mapping the data flow. Options 2, 3, and 4 are important but secondary to establishing technical connectivity.
Unattempted
The correct answer is 1. Understanding the integration endpoints and mechanisms is key to mapping the data flow. Options 2, 3, and 4 are important but secondary to establishing technical connectivity.
Question 60 of 60
60. Question
The landscape diagram shows batch jobs between systems. What integration pattern should be documented?
Correct
Option 2 is correct. Batch jobs are a classic example of the batch data synchronization pattern. Option 1 is synchronous. Option 3 applies to UI cases. Option 4 doesn't describe batch behavior.
Incorrect
Option 2 is correct. Batch jobs are a classic example of the batch data synchronization pattern. Option 1 is synchronous. Option 3 applies to UI cases. Option 4 doesn't describe batch behavior.
Unattempted
Option 2 is correct. Batch jobs are a classic example of the batch data synchronization pattern. Option 1 is synchronous. Option 3 applies to UI cases. Option 4 doesn't describe batch behavior.