Salesforce Certified B2C Commerce Architect Practice Test 14
Question 1 of 44
1. Question
Your personalization API returns gzip-compressed JSON over TLS with an enterprise CA. Dev sandboxes have a different root certificate. How should you implement this?
Correct
Managing certificates in Business Manager keeps trust anchored in the platform and avoids disabling verification. Service Framework automatically handles compressed responses; signaling Accept-Encoding: gzip is harmless and often unnecessary, but explicit headers are fine. Parsing JSON in parseResponse centralizes mapping and error handling. A mock profile prevents sandbox blockers while waiting for cert import. Option 1 is insecure and teaches bad patterns. Option 3 adds latency and expands your attack surface. Option 4 moves secrets and logic to the browser and complicates CSRF/PII controls. The chosen approach also keeps logs redacted, supports profile-specific timeouts, and ensures consistent behavior across environments. It aligns with compliance. It is fully testable.
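As a rough illustration of that pattern, here is a minimal LocalServiceRegistry sketch, assuming an illustrative service ID (int.personalization.http) defined in Business Manager with the imported enterprise CA; the mock response mirrors the HTTP-client shape used in the platform documentation examples, and the redaction regex is only a placeholder.

```javascript
'use strict';

var LocalServiceRegistry = require('dw/svc/LocalServiceRegistry');

// Illustrative service ID; the real ID, profile, and credential are defined in
// Business Manager (Administration > Operations > Services).
module.exports = LocalServiceRegistry.createService('int.personalization.http', {
    createRequest: function (svc, params) {
        svc.setRequestMethod('POST');
        svc.addHeader('Content-Type', 'application/json');
        // Explicit Accept-Encoding is optional; the HTTP client handles gzip transparently.
        svc.addHeader('Accept-Encoding', 'gzip');
        return JSON.stringify(params);
    },
    parseResponse: function (svc, client) {
        // Centralized mapping and error handling for the JSON payload.
        return JSON.parse(client.text);
    },
    mockCall: function (svc, requestBody) {
        // Deterministic sandbox response while the enterprise root CA is being imported.
        return {
            statusCode: 200,
            statusMessage: 'OK',
            text: JSON.stringify({ recommendations: [] })
        };
    },
    filterLogMessage: function (msg) {
        // Redact anything sensitive before it reaches Log Center (illustrative field name).
        return msg.replace(/"customerId":"[^"]*"/g, '"customerId":"***"');
    }
});
```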
Question 2 of 44
2. Question
You must evaluate three AppExchange cartridges: A uses controllers, B uses pipelines with heavy pipelets, C uses controllers but calls B's pipeline internally. What's your integration recommendation?
Correct
Option 2 is correct because it converges on controllers by refactoring B's pipelines into reusable modules, then updating C to call those modules instead of pipeline endpoints. Keeping A unchanged avoids unnecessary risk. This approach reduces technical debt and creates a consistent middleware environment. Option 1 ignores conflicts and leaves pipeline debt in place. Option 3 is costly and discards vendor value without cause. Option 4 creates a hybrid that still depends on deprecated endpoints. The recommendation also supports progressive rollout behind site preferences. It facilitates unified logging and error handling. It preserves cartridge layering and avoids forks. It allows unit and integration tests across shared modules. It sets the stage for smoother upgrades.
Question 3 of 44
3. Question
A third-party warranty provider's pipeline endpoint is referenced in multiple legacy URL bookmarks from email campaigns. You're moving to controllers next sprint. How do you prevent broken links and meet best practices?
Correct
Option 3 is correct because it preserves customer journeys by adding a 301 from old pipeline URLs to a secure controller route, while consolidating business logic in shared modules. This avoids link rot and maintains SEO/analytics hygiene. Middleware ensures security requirements are met in the new route. Option 1 jeopardizes campaign traffic and customer experience. Option 2 keeps you on deprecated tech and delays modernization. Option 4 is unreliable and does not fix server-side behavior or security. The controller also enables proper caching and header control. It centralizes metrics for campaign tracking. It keeps cartridge layering intact. It eases future deprecation of the redirect once links are updated. It aligns with governance and upgrade strategy.
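A minimal SFRA sketch of the redirect shim, assuming an illustrative controller named after the legacy pipeline (Warranty) and an illustrative new Warranty-Show route; res.setRedirectStatus depends on the SFRA version, and a Business Manager URL redirect rule is an alternative if it is unavailable.

```javascript
'use strict';

var server = require('server');
var URLUtils = require('dw/web/URLUtils');

// Legacy pipeline URLs (e.g. .../Warranty-Start) stay routable by registering a controller
// route with the same name whose only job is to issue a permanent redirect.
server.get('Start', function (req, res, next) {
    // 301 so bookmarks, campaign links, and search engines update over time.
    res.setRedirectStatus(301); // if unsupported in your SFRA version, use a BM URL redirect rule
    res.redirect(URLUtils.https('Warranty-Show'));
    next();
});

module.exports = server.exports();
```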
Question 4 of 44
4. Question
Checkout requires real-time address validation. In sandboxes the vendor is unreachable, and production must fail fast with graceful fallback. What approach best fits SFCC's Service Framework?
Correct
Using the Service Framework centralizes configuration (URL, credentials, timeouts, headers) and gives you hooks (createRequest, parseResponse, filterLogMessage) for mapping and redaction. A mock profile lets sandboxes return deterministic responses without reaching the vendor, which speeds QA and reduces external dependency. Short timeouts protect the request thread and allow you to trigger a graceful fallback path in the controller (e.g., local postal rules). The framework’s availability tracking/circuit breaker helps prevent cascading failures during incidents. Option 1 bypasses the Service Framework, losing standardized logging, credential management, and mock support. Option 3 breaks the real-time requirement and would corrupt the user experience by validating “tomorrow.” Option 4 violates security and observability patterns and exposes keys/PII in the browser. The chosen design also enables per-site configuration via Business Manager and structured error telemetry to Log Center. It is upgrade-safe and keeps code testable with stubs.
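A hedged sketch of the validation helper, assuming an illustrative service ID and a simple postal-code fallback; the timeout and mock/live mode come from the service profile in Business Manager rather than from code.

```javascript
'use strict';

var LocalServiceRegistry = require('dw/svc/LocalServiceRegistry');
var Logger = require('dw/system/Logger');

var addressService = LocalServiceRegistry.createService('int.addressvalidation.http', {
    createRequest: function (svc, address) {
        svc.setRequestMethod('POST');
        svc.addHeader('Content-Type', 'application/json');
        return JSON.stringify(address);
    },
    parseResponse: function (svc, client) {
        return JSON.parse(client.text);
    },
    mockCall: function () {
        // Sandboxes get a deterministic "valid" answer without reaching the vendor.
        return { statusCode: 200, statusMessage: 'OK', text: JSON.stringify({ valid: true, source: 'mock' }) };
    }
});

function validateAddress(address) {
    var result = addressService.call(address);
    if (result.ok) {
        return result.object;
    }
    // Fail fast and degrade gracefully: fall back to local postal-code rules.
    Logger.getLogger('service', 'addressvalidation').warn('Validation unavailable: {0}', result.errorMessage);
    return { valid: /^\d{5}(-\d{4})?$/.test(address.postalCode), source: 'fallback' };
}

module.exports = { validateAddress: validateAddress };
```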
Question 5 of 44
5. Question
A payment API requires OAuth2 bearer tokens with rotation and strict SLAs. What is the most robust way to implement this in SFCC for real-time capture?
Correct
The Service Framework allows you to model separate concerns: one service for OAuth token acquisition and one for payment actions. Storing secrets in Service Credentials (not code) and attaching the token in createRequest lets you standardize headers and log redacted values. Handling 401/403 with a controlled refresh-and-retry limits latency while keeping capture resilient. Idempotency keys prevent double charges on retries. Option 1 is brittle and insecure; tokens expire and should not live in preferences. Option 2 confuses responsibilities—OCAPI is for platform APIs, not a general proxy. Option 3 exposes your payment integration in the browser and risks PCI scope/abuse. The chosen approach also enables circuit breaker controls, metrics, and distinct timeouts per operation. It supports mock profiles for automated tests. It’s easy to rotate credentials and segregate by site.
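A condensed sketch of the two-service pattern, assuming illustrative service IDs, a custom cache named oauthTokens registered in the cartridge's caches.json, and a provider that accepts an Idempotency-Key header; the exact header name and grant details are PSP-specific.

```javascript
'use strict';

var LocalServiceRegistry = require('dw/svc/LocalServiceRegistry');
var CacheMgr = require('dw/system/CacheMgr');
var UUIDUtils = require('dw/util/UUIDUtils');

// Token acquisition: client credentials live in the Service Credential, never in code.
var tokenService = LocalServiceRegistry.createService('int.psp.oauth', {
    createRequest: function (svc) {
        var cred = svc.configuration.credential;
        svc.setRequestMethod('POST');
        svc.addHeader('Content-Type', 'application/x-www-form-urlencoded');
        return 'grant_type=client_credentials&client_id=' + encodeURIComponent(cred.user)
            + '&client_secret=' + encodeURIComponent(cred.password);
    },
    parseResponse: function (svc, client) { return JSON.parse(client.text); },
    filterLogMessage: function (msg) { return msg.replace(/client_secret=[^&"]*/g, 'client_secret=***'); }
});

// Payment capture: bearer token attached in createRequest, idempotency key per attempt.
var captureService = LocalServiceRegistry.createService('int.psp.capture', {
    createRequest: function (svc, payload) {
        svc.setRequestMethod('POST');
        svc.addHeader('Content-Type', 'application/json');
        svc.addHeader('Authorization', 'Bearer ' + getToken());
        svc.addHeader('Idempotency-Key', payload.idempotencyKey);
        return JSON.stringify(payload.body);
    },
    parseResponse: function (svc, client) { return JSON.parse(client.text); }
});

// 'oauthTokens' must be registered in caches.json for CacheMgr to resolve it.
function getToken() {
    return CacheMgr.getCache('oauthTokens').get('psp.accessToken', function () {
        var result = tokenService.call();
        if (!result.ok) { throw new Error('Token acquisition failed: ' + result.errorMessage); }
        return result.object.access_token;
    });
}

function capture(orderNo, amount) {
    var payload = { idempotencyKey: orderNo + '-' + UUIDUtils.createUUID(), body: { orderNo: orderNo, amount: amount } };
    var result = captureService.call(payload);
    if (!result.ok && (result.error === 401 || result.error === 403)) {
        // One controlled refresh-and-retry: drop the cached token and repeat the same request.
        CacheMgr.getCache('oauthTokens').invalidate('psp.accessToken');
        result = captureService.call(payload);
    }
    return result;
}

module.exports = { capture: capture };
```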
Question 6 of 44
6. Question
You must check inventory in real time across two upstream APIs and degrade gracefully under incident conditions. Which design best fits?
Correct
Real-time PDP requires low latency and resilience. The Service Framework lets you configure timeouts and track availability, enabling a circuit breaker to trip after repeated failures. A short-TTL cache of last-known inventory, refreshed frequently, provides a sensible fallback when upstream is down, while UI messaging sets expectation. Option 1 increases latency and harms conversion; waiting longer rarely helps. Option 2 ignores the real-time requirement and risks overselling. Option 4 changes the interaction to batch and introduces staleness inconsistent with the scenario. The recommended approach also isolates per-site endpoints via profiles, supports structured error logging, and keeps controllers slim. It balances customer experience with operational safety. It is testable with mock profiles. It adheres to least-surprise behavior under incident.
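A sketch of the fallback logic, assuming two pre-configured upstream services whose profiles enable the circuit breaker, an illustrative short-TTL custom cache (inventorySnapshot) registered in caches.json, and an illustrative ats field in the response.

```javascript
'use strict';

var CacheMgr = require('dw/system/CacheMgr');

// upstreamServices: an array of pre-configured LocalServiceRegistry services.
function getAvailability(productId, upstreamServices) {
    var cache = CacheMgr.getCache('inventorySnapshot');

    for (var i = 0; i < upstreamServices.length; i++) {
        var result = upstreamServices[i].call({ productId: productId });
        if (result.ok) {
            cache.put('ats:' + productId, result.object); // refresh the last-known value
            return { ats: result.object.ats, stale: false };
        }
        // When the profile's circuit breaker is open, call() fails fast without a network
        // round trip, so trying the next upstream (or the cache) is cheap.
    }

    // Both upstreams failed or are unavailable: serve the last-known cached value and let
    // the UI explain that availability may not be current.
    var lastKnown = cache.get('ats:' + productId);
    return lastKnown ? { ats: lastKnown.ats, stale: true } : { ats: 0, stale: true };
}

module.exports = { getAvailability: getAvailability };
```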
Question 7 of 44
7. Question
A fraud service must be called on order placement. Security wants request/response logging for audits but forbids PII in logs. What's the correct implementation?
Correct
Service Framework supports log redaction via filterLogMessage, allowing you to remove PII before any payload is written. Using a dedicated category and correlation IDs ties requests to orders for audits without leaking protected data. Payload capping avoids log bloat and improves performance. Option 2 stores raw PII and shifts the risk elsewhere while adding storage overhead. Option 3 removes needed auditability and hampers incident response. Option 4 violates SFCC’s stateless runtime and operational model—there is no writable disk for custom logs, and external shipping is unsupported. The chosen solution also centralizes error handling, preserves compliance, and enables sampling if volume is high. It integrates with Log Center dashboards for oversight. It supports per-environment log levels via profiles. It is consistent with least-privilege principles.
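A sketch of the redaction hooks, assuming illustrative field names (email, phone) and payload shape; the point is that filterLogMessage runs before anything is written to Log Center, and the correlation header ties the entry to the order without PII.

```javascript
'use strict';

var LocalServiceRegistry = require('dw/svc/LocalServiceRegistry');

var MAX_LOG_CHARS = 2000; // cap logged payload size to avoid log bloat

function redact(message) {
    // Remove obvious PII fields before the message reaches Log Center (illustrative patterns).
    return message
        .replace(/"email":"[^"]*"/g, '"email":"***"')
        .replace(/"phone":"[^"]*"/g, '"phone":"***"')
        .substring(0, MAX_LOG_CHARS);
}

module.exports = LocalServiceRegistry.createService('int.fraud.screen', {
    createRequest: function (svc, order) {
        svc.setRequestMethod('POST');
        svc.addHeader('Content-Type', 'application/json');
        // Correlation ID links the audit entry to the order without logging protected data.
        svc.addHeader('X-Correlation-Id', order.orderNo);
        return JSON.stringify({ orderNo: order.orderNo, total: order.totalGrossPrice.value });
    },
    parseResponse: function (svc, client) {
        return JSON.parse(client.text);
    },
    filterLogMessage: function (msg) {
        return redact(msg);
    }
});
```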
Question 8 of 44
8. Question
Your tax provider exposes a SOAP endpoint with regional WSDL differences. You must calculate tax in real time during checkout across multiple sites/locales. What's best?
Correct
Multiple WSDLs and schemas across regions argue for profile- or service-level separation, which the Service Framework supports. Per-profile mappers ensure correct request shapes and headers; parseResponse can normalize typed SOAP replies to your internal structure. Tight timeouts safeguard the checkout experience. Option 1 will break as WSDLs diverge and is error-prone. Option 3 adds latency and external maintenance without functional benefit. Option 4 sacrifices accuracy and compliance, as tax must be computed per basket state in real time. The chosen design also enables site-specific credentials, better observability per region, and safer change control. It eases unit testing with mocked responses. It allows progressive profile rollout. It aligns with checkout SLAs.
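A heavily simplified sketch of per-region service selection, assuming one Business Manager SOAP service definition per region and an illustrative calculateTax operation on the stub generated from that region's WSDL; real SOAP stubs come from webreferences2 and differ per provider, and the site IDs below are placeholders.

```javascript
'use strict';

var LocalServiceRegistry = require('dw/svc/LocalServiceRegistry');
var Site = require('dw/system/Site');

// One service definition per region, each with its own WSDL, profile, and credential.
var REGION_SERVICE_IDS = {
    'RefArch-EU': 'int.tax.soap.eu',
    'RefArch-US': 'int.tax.soap.us'
};

function getTaxService() {
    var serviceId = REGION_SERVICE_IDS[Site.getCurrent().getID()] || 'int.tax.soap.us';
    return LocalServiceRegistry.createService(serviceId, {
        createRequest: function (svc, basket) {
            // Region-specific request shaping (schema differences) belongs here, or in a
            // per-region mapper module chosen by service ID.
            return basket;
        },
        execute: function (svc, requestObject) {
            // The generated webreferences2 stub is invoked here; the operation name
            // depends on the regional WSDL (calculateTax is illustrative).
            return svc.serviceClient.calculateTax(requestObject);
        },
        parseResponse: function (svc, response) {
            // Normalize the typed SOAP reply into the internal tax structure.
            return { totalTax: response.totalTax };
        }
    });
}

module.exports = { getTaxService: getTaxService };
```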
Question 9 of 44
9. Question
A shipping-rate API caps you at 100 RPS. At peak, your checkout calls exceed that. You must stay real-time and within limits. What's the best approach with Service Framework?
Correct
Coalescing prevents multiple identical requests from hitting the provider simultaneously. A short-TTL cache keyed on relevant inputs (ship-to, items, weight) absorbs bursts while keeping results fresh. Service Framework retry rules can detect 429 and apply jittered backoff, while a distributed lock (e.g., dw/system/Cache with token) dedupes in-flight calls. Option 1 invites timeouts and poor UX. Option 2 invalidates the real-time requirement and may violate carrier contracts. Option 3 doesn't change the upstream limit and risks being blocked. The recommended design respects provider limits, protects checkout SLAs, and keeps observability through structured metrics. It's compatible with site profiles for different carriers. It supports error mapping to user-friendly messages. It scales predictably under load.
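A sketch of the coalescing cache, assuming an illustrative short-TTL custom cache (shippingRates) registered in caches.json and a pre-configured rate service; the jittered backoff and in-flight lock described above are omitted for brevity.

```javascript
'use strict';

var CacheMgr = require('dw/system/CacheMgr');

// rateService: a configured LocalServiceRegistry HTTP service (circuit breaker and rate
// limiter enabled on its profile in Business Manager).
function getRates(rateService, basket) {
    // Key on the inputs that actually change the quote, so identical baskets coalesce
    // onto one upstream call instead of each shopper consuming the 100 RPS budget.
    var key = [
        basket.defaultShipment.shippingAddress ? basket.defaultShipment.shippingAddress.postalCode : '',
        basket.productQuantityTotal,
        basket.totalGrossPrice.value
    ].join('|');

    return CacheMgr.getCache('shippingRates').get(key, function () {
        var result = rateService.call(basket);
        if (result.ok) {
            return result.object;
        }
        if (result.error === 429) {
            // The provider throttled us; surface a retriable condition instead of caching a failure.
            throw new Error('Rate limited by carrier API');
        }
        throw new Error('Shipping rate call failed: ' + result.errorMessage);
    });
}

module.exports = { getRates: getRates };
```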
Question 10 of 44
10. Question
A shared cartridge must call different loyalty endpoints and keys per site. What should you implement?
Correct
Service Framework profiles allow per-site configuration (URL, headers, credentials) without code duplication. Using createRequest keeps header construction and payload mapping centralized while controllers stay clean. Business Manager changes do not require redeployments and are audit-friendly. Option 1 violates separation of config and code and risks leakage. Option 2 bypasses the framework, losing mocks, logging, and circuit-breaker semantics. Option 4 multiplies maintenance cost and increases merge conflicts. The correct approach also enables targeted logging per site, controlled timeouts per region, and consistent retry behavior. It supports test doubles via mock profile. It improves upgrade safety. It aligns with governance policies for secrets.
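A sketch of the per-site service factory, assuming site-suffixed service IDs in Business Manager (illustrative) and the API key stored in each site's Service Credential.

```javascript
'use strict';

var LocalServiceRegistry = require('dw/svc/LocalServiceRegistry');
var Site = require('dw/system/Site');

// One shared cartridge, one code path: the service ID is derived from the current site,
// so URL, credentials, and headers come from that site's Business Manager definition
// (e.g. 'int.loyalty.http.us', 'int.loyalty.http.eu' as illustrative IDs).
function getLoyaltyService() {
    var serviceId = 'int.loyalty.http.' + Site.getCurrent().getID().toLowerCase();
    return LocalServiceRegistry.createService(serviceId, {
        createRequest: function (svc, payload) {
            var cred = svc.configuration.credential;
            svc.setRequestMethod('POST');
            svc.addHeader('Content-Type', 'application/json');
            // API key lives in the per-site Service Credential, never in code.
            svc.addHeader('X-Api-Key', cred.password);
            return JSON.stringify(payload);
        },
        parseResponse: function (svc, client) {
            return JSON.parse(client.text);
        }
    });
}

module.exports = { getLoyaltyService: getLoyaltyService };
```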
Question 11 of 44
11. Question
Payment authorization occasionally returns ambiguous timeouts. You must avoid double charges and still give customers a clean experience. What's the right pattern?
Correct
The idempotency key prevents duplicate authorizations when the same request is retried due to network faults. Short timeouts keep the web request responsive, while exponential backoff reduces pressure on the gateway. Restricting retries to transient errors avoids repeating definitive declines. If the gateway's status is unknown, a background reconciliation (e.g., using a job or follow-up route) can confirm the outcome. Option 2 risks duplicate charges and latency spikes. Option 3 destroys the real-time checkout promise and complicates accounting. Option 1 (disabling retries entirely) sacrifices resilience against transient faults without resolving the ambiguity; the idempotency-key approach (Option 4) is the correct pattern.
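A sketch of the bounded retry with a stable idempotency key, assuming a pre-configured authorization service; how transient conditions are classified depends on the gateway's contract.

```javascript
'use strict';

var UUIDUtils = require('dw/util/UUIDUtils');

// The same idempotency key is reused for every retry of a single authorization attempt,
// so the gateway can de-duplicate requests that "timed out" on our side but reached it.
function authorize(authService, order, amount) {
    var idempotencyKey = order.orderNo + '-auth-' + UUIDUtils.createUUID();
    var maxAttempts = 2;
    var attempts = 0;
    var result;

    do {
        attempts++;
        result = authService.call({
            idempotencyKey: idempotencyKey,
            orderNo: order.orderNo,
            amount: amount
        });
        if (result.ok) {
            return { state: 'AUTHORIZED', detail: result.object };
        }
        // Classify transient vs. definitive per the gateway contract; here unavailability and
        // 5xx responses are treated as retriable (illustrative), declines as final.
    } while (attempts < maxAttempts && (result.status === 'SERVICE_UNAVAILABLE' || result.error >= 500));

    // Outcome still unknown: hand the order to background reconciliation instead of
    // re-authorizing blindly or failing the shopper outright.
    return { state: 'PENDING_RECONCILIATION', detail: result.errorMessage };
}

module.exports = { authorize: authorize };
```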
Question 12 of 44
12. Question
During code review you find a vendor pipeline that posts credit card numbers to a third-party service. PCI requirements mandate tokenization and controllers with CSRF. What should you do?
Correct
Option 2 is correct because controllers allow you to enforce HTTPS and CSRF and to integrate with tokenization flows that prevent raw PAN handling. Using the Service Framework centralizes credentials, timeouts, and logging. Removing pipeline endpoints closes insecure paths. Option 1 still processes PAN on the server, increasing scope and risk. Option 3 adds CSRF markup but leaves you in an outdated execution model without middleware guarantees. Option 4 adds indirection but retains insecure pipeline logic. The controller path also supports idempotent retries and error mapping. It enables structured logging with PII redaction. It simplifies future PCI audits by demonstrating modern patterns. It retains template compatibility via view data. It aligns with platform best practices and upgrade paths.
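A sketch of the controller route, assuming SFRA's standard https and csrf middleware and a hypothetical paymentHelper that forwards only the PSP token through the Service Framework.

```javascript
'use strict';

var server = require('server');
var csrfProtection = require('*/cartridge/scripts/middleware/csrf');

// Illustrative SFRA route: HTTPS and CSRF are enforced as middleware, and only a payment
// token (never a raw PAN) is accepted and passed on.
server.post(
    'SubmitPayment',
    server.middleware.https,
    csrfProtection.validateRequest,
    function (req, res, next) {
        var paymentToken = req.form.paymentToken; // produced client-side by the PSP's tokenization SDK
        var paymentHelper = require('*/cartridge/scripts/helpers/paymentHelper'); // hypothetical helper
        var result = paymentHelper.authorizeWithToken(paymentToken, req.form.orderNo);

        res.json({ success: result.ok, error: result.ok ? null : 'PAYMENT_DECLINED' });
        next();
    }
);

module.exports = server.exports();
```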
Question 13 of 44
13. Question
PDP content must call a real-time recommendation API. UX requires no spinner and acceptable latency. What is the best compromise?
Correct
Service Framework enables tight timeouts and telemetry to guard UX. If the call fails or exceeds the budget, you render a default module to avoid blocking. A short TTL cache by product absorbs bursts and improves performance while keeping recs reasonably fresh. Option 1 harms experience and conversion under even minor slowness. Option 2 breaks personalization requirements. Option 3 exposes keys and logic in the browser and reduces observability/control. The chosen pattern keeps controllers clean, centralizes mapping and logging, and allows per-site configuration of endpoints and budgets. It is resilient under incident and still delivers value. It supports AB testing and progressive enhancement. It adheres to security best practices and PII redaction.
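A sketch of the budget-and-fallback flow, assuming an illustrative short-TTL custom cache (recsByProduct), a service profile that carries the tight timeout, and illustrative template paths for the personalized and default modules.

```javascript
'use strict';

var CacheMgr = require('dw/system/CacheMgr');

// recService: a configured HTTP service whose profile defines the latency budget.
function getRecommendations(recService, productId) {
    var recs = CacheMgr.getCache('recsByProduct').get('recs:' + productId, function () {
        var result = recService.call({ productId: productId });
        // On failure or timeout, fall back to an empty list so the PDP never blocks on a spinner.
        return result.ok ? result.object : [];
    });

    return {
        items: recs,
        // Personalized module when recs exist, default merchandising slot otherwise.
        template: recs.length ? 'product/components/personalizedRecs' : 'product/components/defaultRecs'
    };
}

module.exports = { getRecommendations: getRecommendations };
```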
Question 14 of 44
14. Question
A retailer launches EU and US sites with different price books, VAT vs. sales tax, locale promotions, and GDPR consent per profile at checkout. Which technical specification most accurately reflects the requirement?
Correct
The correct specification must transform each business rule into concrete, verifiable technical artifacts. Option 1 does that by binding locales to specific price books and tax configurations, preventing accidental cross-locale pricing or tax errors. It scopes promotions per locale so qualifiers, calendars, and audiences are testable. It persists GDPR consent on the shopper profile with purpose, timestamp, and policy version—making revocation and audit feasible. It also calls out Business Manager preference groups so configuration is portable across environments. Acceptance criteria for consent and taxation ensure QA can prove correctness. Service timeouts are included because external tax/consent services can fail and must be handled gracefully. Option 2 breaks legal and pricing requirements by flattening jurisdictional differences. Option 3 relies on cookies only, which fails auditability and logged-in cross-device consistency. Option 4 contradicts scope and invites revenue and compliance risk. The best spec is traceable, testable, and environment-aware.
Question 15 of 44
15. Question
Customer Care must place orders on behalf of customers from Service Cloud using SSO, reserve inventory in OMS in real time, and apply agent-only discounts. What should the spec include?
Correct
Option 2 precisely turns requirements into identity, authorization, and integration specifications. SAML/OIDC with role mapping gives least-privilege access and traceability for agents. Explicit OCAPI/SCAPI scopes restrict the API surface to just what's needed, reducing risk. A synchronous OMS reservation call guarantees inventory is honored at cart and order time. Discount eligibility rules ensure only authorized agents can apply concessions with clear guardrails. Timeouts and retries define resilient behavior under partial failures. Audit fields tie each order to the acting agent for compliance. Option 1 defers critical design decisions. Option 3 is insecure and unscalable. Option 4 violates the real-time reservation requirement and risks oversells. Therefore, Option 2 is the only specification that is secure, testable, and compliant.
Question 16 of 44
16. Question
The business wants BOPIS with curbside pickup, store-level safety stock, and order splitting by fulfillment location while enforcing store hours. Which spec is correct?
Correct
BOPIS succeeds only when inventory, selection, and timing are explicit and verifiable. Option 2 specifies real-time store-level ATS with safety stock to prevent overselling. It defines how the shopper chooses a store (geo plus explicit choice) so consent and accuracy are respected. Basket partitioning by store enables correct taxes, receipts, and split shipments. Validating store hours at checkout avoids failed pickups. OCAPI hooks capture curbside details (contact/vehicle) so stores can identify arrivals. Partial cancellations and refunds are critical because pickup orders often change. Option 1 is reactive and non-deterministic. Option 3's batch cadence is insufficient for pickup promises. Option 4 punts core logic, guaranteeing poor experience. Option 2 is therefore the only complete, testable blueprint.
Question 17 of 44
17. Question
A headless mobile app will use SCAPI while SFRA powers web. Requirements include strict rate limits, CORS allowlists, per-app key rotation, and event-driven cache invalidation on product updates. What must the spec state?
Correct
Option 4 converts non-functional requirements into enforceable controls. Least-privilege scopes reduce risk surface. Per-app keys and rotation rules limit blast radius and align with secret hygiene. CORS allowlists prevent token exfiltration by restricting origins. WAF and rate limits defend capacity and deter abuse. Event-driven cache purge ensures freshness after catalog updates without wasteful scheduled invalidations. SLAs for latency and error budgets make performance measurable and operable. Option 1 is too vague to be testable. Option 2's permissive CORS is unsafe and annual rotation is inadequate. Option 3 omits critical transactional endpoints and invites scope creep. Thus Option 4 best reflects the stated needs.
Question 18 of 44
18. Question
Marketing commits to Core Web Vitals targets and SEO continuity after redesign, requiring image optimization, canonical URLs, hreflang, and single-hop redirects from legacy URLs. Which spec wins?
Correct
Option 1 maps business outcomes to concrete, verifiable controls. CDN renditions and lazy-loading reduce payload and speed rendering. Proper caching headers improve repeat visits and CDN efficiency. Canonical and hreflang preserve equity and correct geo/indexing. A single-hop 301 map with param preservation protects tracking and SEO signals. CWV thresholds make performance measurable in QA and monitoring. Option 2's client-side redirects harm crawlability and UX. Option 3 removes a critical lever and jeopardizes targets. Option 4 prolongs signal dilution and is operationally error-prone. Therefore, Option 1 is the only specification that fulfills both SEO and performance commitments.
Question 19 of 44
19. Question
Three brands share a catalog but differ in price books, tax policies, theming, and analytics. They want shared components with brand-specific overrides. What should the spec propose?
Correct
Option 2 aligns with SFCC patterns for reuse and isolation. A shared master catalog avoids duplication while brand-specific price books and tax configs meet legal and pricing needs. Site preference groups provide predictable configuration separation. Cartridge inheritance concentrates common code and allows brand overrides without forks. Separate analytics and consent configurations respect governance per brand and region. Option 1 underestimates pricing/tax variance and becomes unmaintainable. Option 3 inflates cost and operational risk by duplicating everything. Option 4 breaks SEO, caching, and compliance by pushing brand identity into the client. Therefore, Option 2 is the accurate, scalable specification.
Question 20 of 44
20. Question
Finance requires asynchronous refunds with PSP webhooks, idempotency, partial refunds, OMS reconciliation, and agent audit trails. Which specification is right?
Correct
Option 2 addresses event-driven flow, correctness, and controls. Webhooks with signature validation guarantee authenticity. Idempotency keys prevent double processing under retries. A state machine models partial and full refunds, allowing deterministic transitions. A reconciliation job ensures SFCC/OMS/books agree. Audit logs provide compliance and traceability for Customer Care actions. Explicit retry/backoff and timeouts document resilience. Option 1 ignores asynchronous realities and edge cases. Option 3 introduces latency and manual failure modes. Option 4 bypasses systems of record and breaks auditability. The chosen spec is thus the only robust, compliant solution.
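A sketch of the webhook controls, assuming an HMAC-SHA-256 signature scheme, an illustrative RefundEvent custom object keyed by the PSP event ID, and illustrative attribute names; the signature header and shared-secret handling are PSP-specific.

```javascript
'use strict';

var Mac = require('dw/crypto/Mac');
var Encoding = require('dw/crypto/Encoding');

// Validate authenticity of the webhook before acting on it.
function verifySignature(rawBody, signatureHeader, sharedSecret) {
    var mac = new Mac(Mac.HMAC_SHA_256);
    var computed = Encoding.toHex(mac.digest(rawBody, sharedSecret));
    return computed === signatureHeader;
}

function processRefundEvent(event) {
    var CustomObjectMgr = require('dw/object/CustomObjectMgr');
    var Transaction = require('dw/system/Transaction');

    // Idempotency: the PSP event ID is the custom-object key, so a redelivered webhook
    // is recognized and skipped instead of refunding twice.
    if (CustomObjectMgr.getCustomObject('RefundEvent', event.id)) {
        return 'DUPLICATE_IGNORED';
    }
    Transaction.wrap(function () {
        var record = CustomObjectMgr.createCustomObject('RefundEvent', event.id);
        record.custom.status = 'RECEIVED';       // illustrative attribute
        record.custom.payload = JSON.stringify(event); // illustrative attribute
    });
    // State-machine transitions (REQUESTED -> PARTIALLY_REFUNDED -> REFUNDED) and the OMS
    // reconciliation job would continue from here.
    return 'ACCEPTED';
}

module.exports = { verifySignature: verifySignature, processRefundEvent: processRefundEvent };
```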
Question 21 of 44
21. Question
Loyalty partner awards points by product category, shows estimated points on PDP/Cart, upgrades tiers after order capture, and syncs balances hourly. Which spec fits?
Correct
Option 4 ties the business promise to UI, data, and integration details. Category rules ensure estimates match program policy. PDP/Cart decorators set expectation before purchase. A post-order event aligns awarding with business process timing. Hourly sync balances freshness and cost. Profile attributes store tier/balance for consistent display and eligibility checks. Retries/fallbacks handle partner instability gracefully. Option 1 misses Cart and delayed tiers degrade experience. Option 2 is insecure and easily manipulated. Option 3 lacks real-time feedback customers expect. Thus, Option 4 is the correct, testable specification.
Question 22 of 44
22. Question
Gift cards must support balance check, partial authorization, split tender with credit cards, and monthly breakage reporting. What is the correct spec?
Correct
Option 1 models gift cards as stored value through the full payment lifecycle. Balance inquiry and authorization prevent overspend. Partial auth enables using remaining balances, while split tender supports mixed payments—both are common customer expectations. Capture/void/rollback cover failure paths. A settlement job with breakage reporting satisfies finance and legal obligations. Option 2 conflates coupons and currency and breaks accounting controls. Option 3 contradicts the requirement and harms conversion. Option 4 moves critical checks too late, risking declines after order creation. Therefore, Option 1 is accurate, auditable, and shopper-friendly.
Incorrect
Option 1 models gift cards as stored value through the full payment lifecycle. Balance inquiry and authorization prevent overspend. Partial auth enables using remaining balances, while split tender supports mixed payments—both are common customer expectations. Capture/void/rollback cover failure paths. A settlement job with breakage reporting satisfies finance and legal obligations. Option 2 conflates coupons and currency and breaks accounting controls. Option 3 contradicts the requirement and harms conversion. Option 4 moves critical checks too late, risking declines after order creation. Therefore, Option 1 is accurate, auditable, and shopper-friendly.
Unattempted
Option 1 models gift cards as stored value through the full payment lifecycle. Balance inquiry and authorization prevent overspend. Partial auth enables using remaining balances, while split tender supports mixed payments—both are common customer expectations. Capture/void/rollback cover failure paths. A settlement job with breakage reporting satisfies finance and legal obligations. Option 2 conflates coupons and currency and breaks accounting controls. Option 3 contradicts the requirement and harms conversion. Option 4 moves critical checks too late, risking declines after order creation. Therefore, Option 1 is accurate, auditable, and shopper-friendly.
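A simplified illustration of the split-tender math once the balance inquiry has returned (both inputs are dw.value.Money in the same currency; rounding and currency edge cases are omitted):

'use strict';

/**
 * Splits an order total between a gift card and a credit card.
 * orderTotal and giftCardBalance are dw.value.Money objects in the same currency.
 * Returns the amounts to authorize on each tender.
 */
function splitTender(orderTotal, giftCardBalance) {
    var giftCardAmount = giftCardBalance.value >= orderTotal.value
        ? orderTotal
        : giftCardBalance; // partial authorization up to the remaining balance
    var creditCardAmount = orderTotal.subtract(giftCardAmount);
    return { giftCard: giftCardAmount, creditCard: creditCardAmount };
}

module.exports = { splitTender: splitTender };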
Question 23 of 44
23. Question
You inherit a job that sometimes “hangs.” Logs show a huge memory footprint while reading a 2-GB CSV. You must fix it without changing the upstream feed. What should you do?
Correct
Option 3 is correct because streaming transforms memory behavior from O(file size) to O(buffer), and small transactions reduce lock contention and rollback cost. Periodic flushes protect against collection bloat. Checkpointing allows safe resumption after failures and avoids reprocessing from the start. Option 1 treats the symptom and risks hitting platform limits and instability. Option 2 is ideal but out of scope since the upstream cannot change. Option 4 moves a batch workload onto storefront threads, endangering shopper performance and adding timeout risks. The chosen fix leverages Job Framework strengths while keeping SLAs intact. It also improves observability by logging progress per N rows. It supports dead-lettering poison rows. It enables backpressure if downstream writes slow. It maintains security posture with PII redaction.
Incorrect
Option 3 is correct because streaming transforms memory behavior from O(file size) to O(buffer), and small transactions reduce lock contention and rollback cost. Periodic flushes protect against collection bloat. Checkpointing allows safe resumption after failures and avoids reprocessing from the start. Option 1 treats the symptom and risks hitting platform limits and instability. Option 2 is ideal but out of scope since the upstream cannot change. Option 4 moves a batch workload onto storefront threads, endangering shopper performance and adding timeout risks. The chosen fix leverages Job Framework strengths while keeping SLAs intact. It also improves observability by logging progress per N rows. It supports dead-lettering poison rows. It enables backpressure if downstream writes slow. It maintains security posture with PII redaction.
Unattempted
Option 3 is correct because streaming transforms memory behavior from O(file size) to O(buffer), and small transactions reduce lock contention and rollback cost. Periodic flushes protect against collection bloat. Checkpointing allows safe resumption after failures and avoids reprocessing from the start. Option 1 treats the symptom and risks hitting platform limits and instability. Option 2 is ideal but out of scope since the upstream cannot change. Option 4 moves a batch workload onto storefront threads, endangering shopper performance and adding timeout risks. The chosen fix leverages Job Framework strengths while keeping SLAs intact. It also improves observability by logging progress per N rows. It supports dead-lettering poison rows. It enables backpressure if downstream writes slow. It maintains security posture with PII redaction.
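A condensed sketch of the streaming read with small per-chunk transactions and a checkpoint (the file path is assumed, and applyRow/saveCheckpoint are placeholders for the real row update and progress persistence):

'use strict';

var File = require('dw/io/File');
var FileReader = require('dw/io/FileReader');
var CSVStreamReader = require('dw/io/CSVStreamReader');
var Transaction = require('dw/system/Transaction');
var Logger = require('dw/system/Logger');

var CHUNK_SIZE = 500;

// Placeholder for the real per-row update (e.g. a product or custom object write).
function applyRow(row) {
    Logger.debug('would apply row for key {0}', row[0]);
}

// Persisting progress outside the job context lets a rerun resume after failure.
function saveCheckpoint(rowsProcessed) {
    // e.g. write rowsProcessed to a custom object or file; omitted for brevity
}

function execute() {
    var file = new File(File.IMPEX + '/src/feeds/large-feed.csv'); // assumed path
    var reader = new FileReader(file, 'UTF-8');
    var csv = new CSVStreamReader(reader, ',');
    var buffer = [];
    var processed = 0;
    var line;
    try {
        while ((line = csv.readNext()) !== null) {
            buffer.push(line);
            if (buffer.length >= CHUNK_SIZE) {
                // One short transaction per chunk: memory stays O(chunk), locks stay brief.
                Transaction.wrap(function () { buffer.forEach(applyRow); });
                processed += buffer.length;
                saveCheckpoint(processed);
                buffer = [];
            }
        }
        if (buffer.length) {
            Transaction.wrap(function () { buffer.forEach(applyRow); });
            processed += buffer.length;
            saveCheckpoint(processed);
        }
    } finally {
        csv.close();
        reader.close();
    }
    Logger.info('Processed {0} rows', processed);
}

module.exports = { execute: execute };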
Question 24 of 44
24. Question
Loyalty balance must appear in the header and PDP for authenticated shoppers. The provider offers REST with OAuth 2.0 and rate limits. If the service is down, show a generic message. Best approach?
Correct
Option 3 matches the need for per-request freshness while mitigating latency and failure through caching and circuit breaking. OAuth 2.0 client credentials handled by Service Credentials and service profiles keep secrets safe and rotatable. A short timeout and a single retry guard against transient faults without harming UX. Option 1 uses the wrong protocol and doubles down on per-request latency. Option 2 fails to reflect real-time balances and would quickly become stale. Option 4 is insecure and hard to monitor. The recommended pattern also enforces header-based idempotency and rate-limit backoff. It allows feature flags to disable the call during incidents. It supports structured metrics for SLOs. It integrates cleanly into the PDP and header controllers in SFRA.
Incorrect
Option 3 matches the need for per-request freshness while mitigating latency and failure through caching and circuit breaking. OAuth 2.0 client credentials handled by Service Credentials and service profiles keep secrets safe and rotatable. A short timeout and a single retry guard against transient faults without harming UX. Option 1 uses the wrong protocol and doubles down on per-request latency. Option 2 fails to reflect real-time balances and would quickly become stale. Option 4 is insecure and hard to monitor. The recommended pattern also enforces header-based idempotency and rate-limit backoff. It allows feature flags to disable the call during incidents. It supports structured metrics for SLOs. It integrates cleanly into the PDP and header controllers in SFRA.
Unattempted
Option 3 matches the need for per-request freshness while mitigating latency and failure through caching and circuit breaking. OAuth 2.0 client credentials handled by Service Credentials and service profiles keep secrets safe and rotatable. A short timeout and a single retry guard against transient faults without harming UX. Option 1 uses the wrong protocol and doubles down on per-request latency. Option 2 fails to reflect real-time balances and would quickly become stale. Option 4 is insecure and hard to monitor. The recommended pattern also enforces header-based idempotency and rate-limit backoff. It allows feature flags to disable the call during incidents. It supports structured metrics for SLOs. It integrates cleanly into the PDP and header controllers in SFRA.
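A trimmed sketch of the balance call behind such a pattern (the service ID, cache ID, and response shape are assumptions; OAuth token acquisition is omitted, and the short timeout lives on the service profile):

'use strict';

var LocalServiceRegistry = require('dw/svc/LocalServiceRegistry');
var CacheMgr = require('dw/system/CacheMgr');

// Service profile carries the tight timeout and rate limits; credentials hold the OAuth client id/secret.
var balanceService = LocalServiceRegistry.createService('loyalty.http.balance', {
    createRequest: function (svc, customerNo) {
        svc.setRequestMethod('GET');
        svc.addHeader('Accept', 'application/json');
        svc.setURL(svc.getURL() + '/balances/' + encodeURIComponent(customerNo));
        return null;
    },
    parseResponse: function (svc, client) {
        return JSON.parse(client.text); // e.g. { points: 1234, tier: "gold" } (assumed shape)
    },
    mockCall: function () {
        return { statusCode: 200, statusMessage: 'OK', text: '{"points":0,"tier":"none"}' };
    }
});

/**
 * Returns the loyalty balance, cached for a short TTL (configured in caches.json for the
 * assumed cache ID 'LoyaltyBalance') so header and PDP share one call per shopper.
 * Returns null when the service is degraded, so templates can show a generic message.
 */
function getBalance(customerNo) {
    var cache = CacheMgr.getCache('LoyaltyBalance');
    return cache.get('balance-' + customerNo, function () {
        var result = balanceService.call(customerNo);
        return result.ok ? result.object : null;
    });
}

module.exports = { getBalance: getBalance };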
Question 25 of 44
25. Question
Store locator updates include latitude/longitude enrichment via a geocoding API. Data changes once daily; no SLA for immediate display. Which pattern is best?
Correct
The nightly REST batch is appropriate since updates are infrequent and there’s no demand for real-time accuracy. Processing only new or changed records reduces cost and time, and storing keys in Service Credentials with rate-limit handling adds robustness. Excluding precise addresses from logs preserves privacy. Option 1 wastes requests and adds latency to shopper flows. Option 2 picks the wrong protocol without justification. Option 4 exposes API keys publicly and lacks observability. The chosen design also uses checkpointing, retries, and dead-letter handling for failed lookups. It can be scheduled to avoid peak traffic. It supports reprocessing via custom flags. It produces audit metrics for coverage.
Incorrect
The nightly REST batch is appropriate since updates are infrequent and there’s no demand for real-time accuracy. Processing only new or changed records reduces cost and time, and storing keys in Service Credentials with rate-limit handling adds robustness. Excluding precise addresses from logs preserves privacy. Option 1 wastes requests and adds latency to shopper flows. Option 2 picks the wrong protocol without justification. Option 4 exposes API keys publicly and lacks observability. The chosen design also uses checkpointing, retries, and dead-letter handling for failed lookups. It can be scheduled to avoid peak traffic. It supports reprocessing via custom flags. It produces audit metrics for coverage.
Unattempted
The nightly REST batch is appropriate since updates are infrequent and there’s no demand for real-time accuracy. Processing only new or changed records reduces cost and time, and storing keys in Service Credentials with rate-limit handling adds robustness. Excluding precise addresses from logs preserves privacy. Option 1 wastes requests and adds latency to shopper flows. Option 2 picks the wrong protocol without justification. Option 4 exposes API keys publicly and lacks observability. The chosen design also uses checkpointing, retries, and dead-letter handling for failed lookups. It can be scheduled to avoid peak traffic. It supports reprocessing via custom flags. It produces audit metrics for coverage.
Question 26 of 44
26. Question
Your returns flow must create an RMA in an external OMS with REST. Shopper confirmation must appear within 2 s even if the OMS can take 8–10 s. What design do you choose?
Correct
Option 1 decouples shopper UX from the OMS latency by using an asynchronous pattern with reliable posting later. Custom Objects allow tracking state, and idempotency keys prevent duplicates in retries. The job can use exponential backoff and alerting, while the storefront shows a pending RMA status that updates when the OMS confirms. Option 2 breaks the UX SLA and risks hung threads. Option 3 introduces an unnecessary protocol mismatch. Option 4 is insecure and removes server-side control and logging. The chosen approach also lets you re-drive failures safely. It provides compliance-friendly logging without leaking PII. It integrates with service quotas more safely. It gives customer service agents visibility in Business Manager. It scales better under spikes.
Incorrect
Option 1 decouples shopper UX from the OMS latency by using an asynchronous pattern with reliable posting later. Custom Objects allow tracking state, and idempotency keys prevent duplicates in retries. The job can use exponential backoff and alerting, while the storefront shows a pending RMA status that updates when the OMS confirms. Option 2 breaks the UX SLA and risks hung threads. Option 3 introduces an unnecessary protocol mismatch. Option 4 is insecure and removes server-side control and logging. The chosen approach also lets you re-drive failures safely. It provides compliance-friendly logging without leaking PII. It integrates with service quotas more safely. It gives customer service agents visibility in Business Manager. It scales better under spikes.
Unattempted
Option 1 decouples shopper UX from the OMS latency by using an asynchronous pattern with reliable posting later. Custom Objects allow tracking state, and idempotency keys prevent duplicates in retries. The job can use exponential backoff and alerting, while the storefront shows a pending RMA status that updates when the OMS confirms. Option 2 breaks the UX SLA and risks hung threads. Option 3 introduces an unnecessary protocol mismatch. Option 4 is insecure and removes server-side control and logging. The chosen approach also lets you re-drive failures safely. It provides compliance-friendly logging without leaking PII. It integrates with service quotas more safely. It gives customer service agents visibility in Business Manager. It scales better under spikes.
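A reduced sketch of the enqueue side (the RMARequest custom object type and its attributes are illustrative); a scheduled job would later read PENDING records and post them to the OMS through the Service Framework:

'use strict';

var Transaction = require('dw/system/Transaction');
var CustomObjectMgr = require('dw/object/CustomObjectMgr');
var UUIDUtils = require('dw/util/UUIDUtils');

/**
 * Records the return request immediately so the shopper gets confirmation within the UX budget.
 * The idempotency key travels with every later POST, so OMS retries cannot create duplicate RMAs.
 */
function queueRmaRequest(orderNo, items) {
    var idempotencyKey = orderNo + '-' + UUIDUtils.createUUID();
    Transaction.wrap(function () {
        var rma = CustomObjectMgr.createCustomObject('RMARequest', idempotencyKey);
        rma.custom.orderNo = orderNo;
        rma.custom.payload = JSON.stringify(items);
        rma.custom.status = 'PENDING';   // the job flips this to SENT / FAILED with a retry count
        rma.custom.attempts = 0;
    });
    return idempotencyKey; // shown to the shopper as a pending RMA reference
}

module.exports = { queueRmaRequest: queueRmaRequest };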
Question 27 of 44
27. Question
A payments risk vendor requires signed REST requests and mTLS, and mandates rotating keys quarterly. The call happens during checkout review with a 400 ms budget. What’s the most appropriate configuration?
Correct
Option 2 is correct because it satisfies both security (mTLS + HMAC + secret storage) and performance (tight timeouts and minimal retries) while keeping the call synchronous as required by the checkout step. Using Service Profiles and Credentials enables safe rotation without code changes. Option 1 cannot pre-approve meaningfully due to fast-changing basket context. Option 3 downgrades security and changes protocol without vendor support. Option 4 exposes secrets and removes server-side governance and observability. The selected design also enables circuit breaking, per-environment endpoints, and connection pooling. It keeps logs compliant by redacting PII and secrets. It allows canary testing by toggling the service profile. It reduces overall blast radius by failing softly when appropriate.
Incorrect
Option 2 is correct because it satisfies both security (mTLS + HMAC + secret storage) and performance (tight timeouts and minimal retries) while keeping the call synchronous as required by the checkout step. Using Service Profiles and Credentials enables safe rotation without code changes. Option 1 cannot pre-approve meaningfully due to fast-changing basket context. Option 3 downgrades security and changes protocol without vendor support. Option 4 exposes secrets and removes server-side governance and observability. The selected design also enables circuit breaking, per-environment endpoints, and connection pooling. It keeps logs compliant by redacting PII and secrets. It allows canary testing by toggling the service profile. It reduces overall blast radius by failing softly when appropriate.
Unattempted
Option 2 is correct because it satisfies both security (mTLS + HMAC + secret storage) and performance (tight timeouts and minimal retries) while keeping the call synchronous as required by the checkout step. Using Service Profiles and Credentials enables safe rotation without code changes. Option 1 cannot pre-approve meaningfully due to fast-changing basket context. Option 3 downgrades security and changes protocol without vendor support. Option 4 exposes secrets and removes server-side governance and observability. The selected design also enables circuit breaking, per-environment endpoints, and connection pooling. It keeps logs compliant by redacting PII and secrets. It allows canary testing by toggling the service profile. It reduces overall blast radius by failing softly when appropriate.
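The signing portion might look like the sketch below (header names and the canonical string are assumptions about the vendor contract; mTLS itself comes from the client certificate configured on the service credential, not from code):

'use strict';

var LocalServiceRegistry = require('dw/svc/LocalServiceRegistry');
var Mac = require('dw/crypto/Mac');
var Encoding = require('dw/crypto/Encoding');
var Bytes = require('dw/util/Bytes');

var riskService = LocalServiceRegistry.createService('risk.http.screen', {
    createRequest: function (svc, payload) {
        var body = JSON.stringify(payload);
        var credential = svc.getConfiguration().getCredential();
        var timestamp = String(Date.now());
        // HMAC over timestamp + body; the exact canonical string is vendor-defined.
        var signature = Encoding.toHex(
            new Mac(Mac.HMAC_SHA_256).digest(new Bytes(timestamp + body), new Bytes(credential.password))
        );
        svc.setRequestMethod('POST');
        svc.addHeader('Content-Type', 'application/json');
        svc.addHeader('X-Timestamp', timestamp);   // assumed header names
        svc.addHeader('X-Signature', signature);
        return body;
    },
    parseResponse: function (svc, client) {
        return JSON.parse(client.text);
    }
});

module.exports = riskService;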
Question 28 of 44
28. Question
A PIM drops a 3-GB zipped catalog delta on SFTP at 01:00 daily. You must validate schema, reject bad rows to a quarantine file, import valid items, and email a summary before 03:00. What Job Framework design fits?
Correct
Option 2 is correct because the Job Framework’s step graph lets you compose system steps (file transfer, unzip, catalog import) with custom script steps for validation and reporting, which is precisely what this nightly batch needs. Streaming validation avoids loading the 3-GB payload in memory and cleanly separates good from bad records, enabling partial success with quarantining. Checkpoints and fail-fast thresholds keep the run predictable and stop on systemic data issues, preserving the SLA. Using the standard Catalog Import step maintains supportability and leverages platform indexing hooks. The quarantine upload plus summary email gives governance and auditability. Option 1 is fragile: a monolithic script increases memory pressure, hinders reuse, and makes recovery and partial reruns difficult. Option 3 ignores automation and relies on manual ops, risking missed windows and inconsistent execution. Option 4 replaces a proven batch pull with a push model that is harder to govern, and OCAPI is not designed for 3-GB delta ingestion in a single night. The selected approach also lets you parameterize file paths and gracefully no-op when no new file is present. Finally, it centralizes logging per step for faster troubleshooting.
Incorrect
Option 2 is correct because the Job Framework’s step graph lets you compose system steps (file transfer, unzip, catalog import) with custom script steps for validation and reporting, which is precisely what this nightly batch needs. Streaming validation avoids loading the 3-GB payload in memory and cleanly separates good from bad records, enabling partial success with quarantining. Checkpoints and fail-fast thresholds keep the run predictable and stop on systemic data issues, preserving the SLA. Using the standard Catalog Import step maintains supportability and leverages platform indexing hooks. The quarantine upload plus summary email gives governance and auditability. Option 1 is fragile: a monolithic script increases memory pressure, hinders reuse, and makes recovery and partial reruns difficult. Option 3 ignores automation and relies on manual ops, risking missed windows and inconsistent execution. Option 4 replaces a proven batch pull with a push model that is harder to govern, and OCAPI is not designed for 3-GB delta ingestion in a single night. The selected approach also lets you parameterize file paths and gracefully no-op when no new file is present. Finally, it centralizes logging per step for faster troubleshooting.
Unattempted
Option 2 is correct because the Job Framework’s step graph lets you compose system steps (file transfer, unzip, catalog import) with custom script steps for validation and reporting, which is precisely what this nightly batch needs. Streaming validation avoids loading the 3-GB payload in memory and cleanly separates good from bad records, enabling partial success with quarantining. Checkpoints and fail-fast thresholds keep the run predictable and stop on systemic data issues, preserving the SLA. Using the standard Catalog Import step maintains supportability and leverages platform indexing hooks. The quarantine upload plus summary email gives governance and auditability. Option 1 is fragile: a monolithic script increases memory pressure, hinders reuse, and makes recovery and partial reruns difficult. Option 3 ignores automation and relies on manual ops, risking missed windows and inconsistent execution. Option 4 replaces a proven batch pull with a push model that is harder to govern, and OCAPI is not designed for 3-GB delta ingestion in a single night. The selected approach also lets you parameterize file paths and gracefully no-op when no new file is present. Finally, it centralizes logging per step for faster troubleshooting.
Question 29 of 44
29. Question
Every hour you must export paid orders to an ERP via REST. Requirements: idempotency, backoff on 429/5xx, and resume from last exported order if a run aborts. Which Job pattern is best?
Correct
Option 2 is correct because it implements explicit idempotency with orderNo keys, resilient backoff for throttling, and a durable resume point via a lastSuccess marker stored outside the job context. Chunking ensures predictable memory and better retry semantics at a batch granularity. Checkpointing only after a successful chunk prevents gaps and duplicates across reruns. The Service Framework call within the job step centralizes authentication and logging, keeping secrets in Service Credentials. Option 1 relies on the ERP to deduplicate and risks partial duplication on retries. Option 3 mixes manual and automated steps and lacks durability and observability. Option 4 removes the benefits of rate smoothing; near real-time spikes can overwhelm the ERP and complicate failure recovery. The chosen pattern also eases auditing with per-chunk metrics. It allows configurable windowing and dry-run modes. It isolates transient failures from poison records by dead-lettering irrecoverable items.
Incorrect
Option 2 is correct because it implements explicit idempotency with orderNo keys, resilient backoff for throttling, and a durable resume point via a lastSuccess marker stored outside the job context. Chunking ensures predictable memory and better retry semantics at a batch granularity. Checkpointing only after a successful chunk prevents gaps and duplicates across reruns. The Service Framework call within the job step centralizes authentication and logging, keeping secrets in Service Credentials. Option 1 relies on the ERP to deduplicate and risks partial duplication on retries. Option 3 mixes manual and automated steps and lacks durability and observability. Option 4 removes the benefits of rate smoothing; near real-time spikes can overwhelm the ERP and complicate failure recovery. The chosen pattern also eases auditing with per-chunk metrics. It allows configurable windowing and dry-run modes. It isolates transient failures from poison records by dead-lettering irrecoverable items.
Unattempted
Option 2 is correct because it implements explicit idempotency with orderNo keys, resilient backoff for throttling, and a durable resume point via a lastSuccess marker stored outside the job context. Chunking ensures predictable memory and better retry semantics at a batch granularity. Checkpointing only after a successful chunk prevents gaps and duplicates across reruns. The Service Framework call within the job step centralizes authentication and logging, keeping secrets in Service Credentials. Option 1 relies on the ERP to deduplicate and risks partial duplication on retries. Option 3 mixes manual and automated steps and lacks durability and observability. Option 4 removes the benefits of rate smoothing; near real-time spikes can overwhelm the ERP and complicate failure recovery. The chosen pattern also eases auditing with per-chunk metrics. It allows configurable windowing and dry-run modes. It isolates transient failures from poison records by dead-lettering irrecoverable items.
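A compressed sketch of the chunked export loop (the ExportState custom object, the chunk size, and the ERP service wrapper are assumptions; the registry object is assumed to have been seeded once):

'use strict';

var OrderMgr = require('dw/order/OrderMgr');
var Order = require('dw/order/Order');
var Transaction = require('dw/system/Transaction');
var CustomObjectMgr = require('dw/object/CustomObjectMgr');

var CHUNK_SIZE = 50;

function execute() {
    var state = CustomObjectMgr.getCustomObject('ExportState', 'erp-orders'); // holds the lastSuccess marker
    var lastSuccess = state.custom.lastSuccess;
    var orders = OrderMgr.searchOrders(
        'paymentStatus = {0} AND creationDate > {1}', 'creationDate asc',
        Order.PAYMENT_STATUS_PAID, lastSuccess);
    var chunk = [];
    try {
        while (orders.hasNext()) {
            var order = orders.next();
            chunk.push({ orderNo: order.orderNo, total: order.totalGrossPrice.value }); // orderNo doubles as the idempotency key
            if (chunk.length >= CHUNK_SIZE) {
                sendChunkWithBackoff(chunk);
                advanceCheckpoint(state, order.creationDate); // only after the chunk succeeded
                chunk = [];
            }
        }
        if (chunk.length) {
            sendChunkWithBackoff(chunk);
            advanceCheckpoint(state, new Date());
        }
    } finally {
        orders.close();
    }
}

// Placeholder: posts the chunk via a Service Framework HTTP service and retries 429/5xx with exponential backoff.
function sendChunkWithBackoff(chunk) { /* call the ERP service, inspect result.status, back off and retry */ }

function advanceCheckpoint(state, newMarker) {
    Transaction.wrap(function () { state.custom.lastSuccess = newMarker; });
}

module.exports = { execute: execute };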
Question 30 of 44
30. Question
A vendor sends price deltas every 15 minutes to WebDAV. You need to apply them quickly but avoid reprocessing the same file if a job reruns. What should you implement?
Correct
Option 2 is correct because maintaining a durable processed-file registry by checksum avoids duplicate application when jobs rerun, which is common after failures. Listing and filtering files within a job step ensures only relevant deltas are processed, preserving throughput. Archiving or deleting processed files keeps the folder clean and reduces future scans. Option 1 risks reprocessing on any rerun and offers no idempotency. Option 3 couples inbound vendor triggers to storefront code paths and has poor reliability and governance. Option 4 introduces unnecessary latency and increases blast radius if a bad delta appears early in the day. The chosen approach also supports parallelization when safe by chunking file sets. It provides clear metrics on files found versus applied. It supports dry-run validation before commit. It allows targeted replays by checksum.
Incorrect
Option 2 is correct because maintaining a durable processed-file registry by checksum avoids duplicate application when jobs rerun, which is common after failures. Listing and filtering files within a job step ensures only relevant deltas are processed, preserving throughput. Archiving or deleting processed files keeps the folder clean and reduces future scans. Option 1 risks reprocessing on any rerun and offers no idempotency. Option 3 couples inbound vendor triggers to storefront code paths and has poor reliability and governance. Option 4 introduces unnecessary latency and increases blast radius if a bad delta appears early in the day. The chosen approach also supports parallelization when safe by chunking file sets. It provides clear metrics on files found versus applied. It supports dry-run validation before commit. It allows targeted replays by checksum.
Unattempted
Option 2 is correct because maintaining a durable processed-file registry by checksum avoids duplicate application when jobs rerun, which is common after failures. Listing and filtering files within a job step ensures only relevant deltas are processed, preserving throughput. Archiving or deleting processed files keeps the folder clean and reduces future scans. Option 1 risks reprocessing on any rerun and offers no idempotency. Option 3 couples inbound vendor triggers to storefront code paths and has poor reliability and governance. Option 4 introduces unnecessary latency and increases blast radius if a bad delta appears early in the day. The chosen approach also supports parallelization when safe by chunking file sets. It provides clear metrics on files found versus applied. It supports dry-run validation before commit. It allows targeted replays by checksum.
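A minimal sketch of the idempotency registry (the ProcessedFile custom object type and the computeChecksum helper are assumptions; the checksum could equally come from a vendor-supplied manifest):

'use strict';

var File = require('dw/io/File');
var Transaction = require('dw/system/Transaction');
var CustomObjectMgr = require('dw/object/CustomObjectMgr');
var Logger = require('dw/system/Logger');

// Hypothetical helper returning a SHA-256 hex digest of the file contents.
function computeChecksum(file) { /* e.g. dw.crypto.MessageDigest over the file contents */ return ''; }

// Placeholder for the actual price delta application (streaming read + price book updates).
function applyDelta(file) { /* omitted for brevity */ }

function processDeltaFolder() {
    var folder = new File(File.IMPEX + '/src/price-deltas'); // assumed WebDAV location
    folder.listFiles().toArray().forEach(function (file) {
        if (!file.isFile()) { return; }
        var checksum = computeChecksum(file);
        if (CustomObjectMgr.getCustomObject('ProcessedFile', checksum)) {
            Logger.info('Skipping already processed file {0}', file.name);
            return;
        }
        applyDelta(file);
        Transaction.wrap(function () {
            var entry = CustomObjectMgr.createCustomObject('ProcessedFile', checksum);
            entry.custom.fileName = file.name;
            entry.custom.processedAt = new Date();
        });
        file.remove(); // or move to an archive folder
    });
}

module.exports = { processDeltaFolder: processDeltaFolder };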
Question 31 of 44
31. Question
You must load a 10-million-row customer suppression list nightly from SFTP and update a custom table. The operation must complete within a 2-hour window and not starve other jobs. Which design is appropriate?
Correct
Option 2 is correct because partitioning the workload and streaming each shard keeps memory low and throughput high while respecting job concurrency limits. Small transaction batches reduce lock contention and allow partial progress even if a shard fails. Throttling concurrency avoids starving other important jobs and maintains platform health. An aggregation step at the end delivers governance with counts and error reasons. Option 1 will blow memory and risks a long-running transaction with rollback storms. Option 3 is unrealistic for this volume and breaks the batch contract. Option 4 adds needless complexity and risks missing complete datasets since the upstream delivers once nightly. The chosen design also simplifies retries by reprocessing failed shards only. It enables backpressure when downstream writes slow. It uses parameterized shard counts to tune performance. It provides clean isolation for poison records via dead-letter files.
Incorrect
Option 2 is correct because partitioning the workload and streaming each shard keeps memory low and throughput high while respecting job concurrency limits. Small transaction batches reduce lock contention and allow partial progress even if a shard fails. Throttling concurrency avoids starving other important jobs and maintains platform health. An aggregation step at the end delivers governance with counts and error reasons. Option 1 will blow memory and risks a long-running transaction with rollback storms. Option 3 is unrealistic for this volume and breaks the batch contract. Option 4 adds needless complexity and risks missing complete datasets since the upstream delivers once nightly. The chosen design also simplifies retries by reprocessing failed shards only. It enables backpressure when downstream writes slow. It uses parameterized shard counts to tune performance. It provides clean isolation for poison records via dead-letter files.
Unattempted
Option 2 is correct because partitioning the workload and streaming each shard keeps memory low and throughput high while respecting job concurrency limits. Small transaction batches reduce lock contention and allow partial progress even if a shard fails. Throttling concurrency avoids starving other important jobs and maintains platform health. An aggregation step at the end delivers governance with counts and error reasons. Option 1 will blow memory and risks a long-running transaction with rollback storms. Option 3 is unrealistic for this volume and breaks the batch contract. Option 4 adds needless complexity and risks missing complete datasets since the upstream delivers once nightly. The chosen design also simplifies retries by reprocessing failed shards only. It enables backpressure when downstream writes slow. It uses parameterized shard counts to tune performance. It provides clean isolation for poison records via dead-letter files.
Question 32 of 44
32. Question
Your OMS drops inventory CSVs hourly. You must: fetch via SFTP, validate schema, update inventory lists, then trigger a search index rebuild only if changes were applied. What should you build?
Correct
Option 1 is correct because it chains inventory update steps and conditionally triggers indexing only when needed, saving resources. Parameterization by site supports multi-site operations cleanly. SFTP and schema validation as distinct steps improve resilience and observability. A separate indexing job avoids long critical paths if indexing is slow. Option 2 couples admin UI to batch plumbing and ignores security and timeouts. Option 3 disregards the hourly freshness requirement. Option 4 misuses catalog import for inventory list updates and risks corrupting product data. The chosen design also makes it easy to add a metrics step reporting records processed and deltas applied. It supports fail-safe archiving of the original CSV. It enables quick reruns with checkpoints. It maintains clear logs and alerts.
Incorrect
Option 1 is correct because it chains inventory update steps and conditionally triggers indexing only when needed, saving resources. Parameterization by site supports multi-site operations cleanly. SFTP and schema validation as distinct steps improve resilience and observability. A separate indexing job avoids long critical paths if indexing is slow. Option 2 couples admin UI to batch plumbing and ignores security and timeouts. Option 3 disregards the hourly freshness requirement. Option 4 misuses catalog import for inventory list updates and risks corrupting product data. The chosen design also makes it easy to add a metrics step reporting records processed and deltas applied. It supports fail-safe archiving of the original CSV. It enables quick reruns with checkpoints. It maintains clear logs and alerts.
Unattempted
Option 1 is correct because it chains inventory update steps and conditionally triggers indexing only when needed, saving resources. Parameterization by site supports multi-site operations cleanly. SFTP and schema validation as distinct steps improve resilience and observability. A separate indexing job avoids long critical paths if indexing is slow. Option 2 couples admin UI to batch plumbing and ignores security and timeouts. Option 3 disregards the hourly freshness requirement. Option 4 misuses catalog import for inventory list updates and risks corrupting product data. The chosen design also makes it easy to add a metrics step reporting records processed and deltas applied. It supports fail-safe archiving of the original CSV. It enables quick reruns with checkpoints. It maintains clear logs and alerts.
Question 33 of 44
33. Question
Compliance requires an auditable “GDPR delete export” weekly: compile records queued for deletion, create a signed archive, push to an external vault, and write a tamper-evident manifest. What design satisfies this?
Correct
Option 2 is correct because it targets only queued records, produces verifiable artifacts (signed archive and hash), and persists metadata for audit, which are core compliance needs. Uploading via the Service Framework centralizes security and logging. Keeping the process in the Job Framework provides scheduling, history, and alerting. Option 1 is excessive and risks exporting non-requested data; emailing zips is insecure and not auditable. Option 3 conflates UX with compliance processing and would be brittle. Option 4 introduces manual error and violates the need for repeatable, evidential processing. The correct approach also supports replays of specific batches using the stored manifest. It isolates PII by encrypting at rest and redacting logs. It parameterizes retention periods. It provides dashboards for counts and exceptions.
Incorrect
Option 2 is correct because it targets only queued records, produces verifiable artifacts (signed archive and hash), and persists metadata for audit, which are core compliance needs. Uploading via the Service Framework centralizes security and logging. Keeping the process in the Job Framework provides scheduling, history, and alerting. Option 1 is excessive and risks exporting non-requested data; emailing zips is insecure and not auditable. Option 3 conflates UX with compliance processing and would be brittle. Option 4 introduces manual error and violates the need for repeatable, evidential processing. The correct approach also supports replays of specific batches using the stored manifest. It isolates PII by encrypting at rest and redacting logs. It parameterizes retention periods. It provides dashboards for counts and exceptions.
Unattempted
Option 2 is correct because it targets only queued records, produces verifiable artifacts (signed archive and hash), and persists metadata for audit, which are core compliance needs. Uploading via the Service Framework centralizes security and logging. Keeping the process in the Job Framework provides scheduling, history, and alerting. Option 1 is excessive and risks exporting non-requested data; emailing zips is insecure and not auditable. Option 3 conflates UX with compliance processing and would be brittle. Option 4 introduces manual error and violates the need for repeatable, evidential processing. The correct approach also supports replays of specific batches using the stored manifest. It isolates PII by encrypting at rest and redacting logs. It parameterizes retention periods. It provides dashboards for counts and exceptions.
Question 34 of 44
34. Question
ERP publishes price lists via SOAP once per day at 02:00. Prices must be updated before 06:00 with a verifiable audit trail. What should you build?
Correct
The correct choice is a scheduled SOAP batch using the Job Framework, because the source is SOAP and the business window allows asynchronous processing. Idempotent upserts ensure reruns won’t duplicate or corrupt data, and checkpointing enables partial recovery. Secrets must be stored in Service Credentials to avoid code leakage. Option 2 adds latency to every PDP and is unnecessary. Option 3 is insecure and would leak API keys. Option 4 burdens checkout with vendor availability and undermines predictability. The batch job can generate audit logs with counts and checksums per feed. It can throttle requests to respect ERP limits. It provides a clear alarm path for misses. It separates business validation from transport.
Incorrect
The correct choice is a scheduled SOAP batch using the Job Framework, because the source is SOAP and the business window allows asynchronous processing. Idempotent upserts ensure reruns won’t duplicate or corrupt data, and checkpointing enables partial recovery. Secrets must be stored in Service Credentials to avoid code leakage. Option 2 adds latency to every PDP and is unnecessary. Option 3 is insecure and would leak API keys. Option 4 burdens checkout with vendor availability and undermines predictability. The batch job can generate audit logs with counts and checksums per feed. It can throttle requests to respect ERP limits. It provides a clear alarm path for misses. It separates business validation from transport.
Unattempted
The correct choice is a scheduled SOAP batch using the Job Framework, because the source is SOAP and the business window allows asynchronous processing. Idempotent upserts ensure reruns won’t duplicate or corrupt data, and checkpointing enables partial recovery. Secrets must be stored in Service Credentials to avoid code leakage. Option 2 adds latency to every PDP and is unnecessary. Option 3 is insecure and would leak API keys. Option 4 burdens checkout with vendor availability and undermines predictability. The batch job can generate audit logs with counts and checksums per feed. It can throttle requests to respect ERP limits. It provides a clear alarm path for misses. It separates business validation from transport.
Question 35 of 44
35. Question
Two downstream systems (ESP and CDP) must receive a nightly “customers changed” file. If one fails, the other should continue; you also need consolidated metrics and a single alert summarizing both outcomes. What’s the right setup?
Correct
Option 2 is correct because fan-out with independent child jobs isolates failures and preserves success for the healthy target, while the parent still provides unified governance and alerting. It prevents duplicate generation of the source file and keeps a single source of truth for counts. Aggregation of results in the parent allows a clear summary to on-call teams. Option 1 creates tight coupling and all-or-nothing behavior. Option 3 loses central visibility and complicates alerting and retries. Option 4 abdicates operational responsibility and breaks the requirement for a coordinated nightly delivery. The chosen design also simplifies reruns by retrying only the failed child. It supports staggered schedules if needed. It enables per-target rate controls. It uses shared code for posting logic to reduce drift.
Incorrect
Option 2 is correct because fan-out with independent child jobs isolates failures and preserves success for the healthy target, while the parent still provides unified governance and alerting. It prevents duplicate generation of the source file and keeps a single source of truth for counts. Aggregation of results in the parent allows a clear summary to on-call teams. Option 1 creates tight coupling and all-or-nothing behavior. Option 3 loses central visibility and complicates alerting and retries. Option 4 abdicates operational responsibility and breaks the requirement for a coordinated nightly delivery. The chosen design also simplifies reruns by retrying only the failed child. It supports staggered schedules if needed. It enables per-target rate controls. It uses shared code for posting logic to reduce drift.
Unattempted
Option 2 is correct because fan-out with independent child jobs isolates failures and preserves success for the healthy target, while the parent still provides unified governance and alerting. It prevents duplicate generation of the source file and keeps a single source of truth for counts. Aggregation of results in the parent allows a clear summary to on-call teams. Option 1 creates tight coupling and all-or-nothing behavior. Option 3 loses central visibility and complicates alerting and retries. Option 4 abdicates operational responsibility and breaks the requirement for a coordinated nightly delivery. The chosen design also simplifies reruns by retrying only the failed child. It supports staggered schedules if needed. It enables per-target rate controls. It uses shared code for posting logic to reduce drift.
Question 36 of 44
36. Question
A job imports store hours from a partner API at 04:00. Daylight-saving changes caused off-by-one-hour errors in production. You must harden the process. What improvement should you make?
Correct
Option 2 is correct because explicit normalization with the partner’s declared time zone to UTC eliminates ambiguity and makes storage and comparisons stable through DST transitions. Validation around boundary dates detects partner defects proactively. Keeping test fixtures ensures regressions are caught before production. Option 1 is insufficient because you cannot assume the partner’s TZ, and server TZ alone does not fix data semantics. Option 3 preserves ambiguity and guarantees recurring errors. Option 4 is unreliable; DST problems are about interpretation, not just timing. The selected approach also provides audit logs of raw versus normalized values. It allows per-store overrides where local law differs. It can quarantine suspect records rather than corrupt live data. It improves customer experience by ensuring correct opening hours.
Incorrect
Option 2 is correct because explicit normalization with the partner’s declared time zone to UTC eliminates ambiguity and makes storage and comparisons stable through DST transitions. Validation around boundary dates detects partner defects proactively. Keeping test fixtures ensures regressions are caught before production. Option 1 is insufficient because you cannot assume the partner’s TZ, and server TZ alone does not fix data semantics. Option 3 preserves ambiguity and guarantees recurring errors. Option 4 is unreliable; DST problems are about interpretation, not just timing. The selected approach also provides audit logs of raw versus normalized values. It allows per-store overrides where local law differs. It can quarantine suspect records rather than corrupt live data. It improves customer experience by ensuring correct opening hours.
Unattempted
Option 2 is correct because explicit normalization with the partner’s declared time zone to UTC eliminates ambiguity and makes storage and comparisons stable through DST transitions. Validation around boundary dates detects partner defects proactively. Keeping test fixtures ensures regressions are caught before production. Option 1 is insufficient because you cannot assume the partner’s TZ, and server TZ alone does not fix data semantics. Option 3 preserves ambiguity and guarantees recurring errors. Option 4 is unreliable; DST problems are about interpretation, not just timing. The selected approach also provides audit logs of raw versus normalized values. It allows per-store overrides where local law differs. It can quarantine suspect records rather than corrupt live data. It improves customer experience by ensuring correct opening hours.
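A small sketch of that normalization step using dw.util.Calendar (the partner payload shape is assumed; the key point is that the local fields are interpreted in the partner’s declared zone before the instant is stored):

'use strict';

var Calendar = require('dw/util/Calendar');

/**
 * Converts a partner-supplied local date/time plus its declared IANA zone into a Date.
 * The result is an absolute instant, so storage and comparisons stay stable across DST.
 */
function toUtcInstant(year, month, day, hour, minute, partnerTimeZone) {
    var cal = new Calendar();
    cal.setTimeZone(partnerTimeZone);            // interpret the following fields in the partner's zone
    cal.set(year, month - 1, day, hour, minute); // Calendar months are zero-based
    cal.set(Calendar.SECOND, 0);
    cal.set(Calendar.MILLISECOND, 0);
    return cal.getTime();                        // absolute instant, safe to store and compare
}

// Example: 09:00 local time in New York on the day DST ends.
// var opening = toUtcInstant(2024, 11, 3, 9, 0, 'America/New_York');

module.exports = { toUtcInstant: toUtcInstant };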
Question 37 of 44
37. Question
An AppExchange payment cartridge still calls a legacy Checkout-Start pipeline for authorization. You’re on SFRA and must avoid forking the vendor code. What is the best integration approach?
Correct
The adapter controller pattern is correct because it avoids executing legacy pipelines while preserving business logic by calling the vendor’s script modules directly. Mapping pipeline dictionary variables to res.setViewData() keeps data flow consistent with SFRA, enabling templating and response handling to remain modern. Using server.append or server.prepend lets you hook into base SFRA checkout routes without forking vendor or base cartridges, which eases upgrades. Keeping server.middleware.https and the CSRF middleware (csrfProtection.validateAjaxRequest) on critical routes meets security best practices. Option 1 merely forwards URLs and risks missing middleware, data mapping, and route contracts. Option 2 is risky for timelines and abdicates architecture responsibility even when an adapter can bridge safely. Option 4 (invoking the pipeline URL) couples you back to deprecated tech and bypasses controller policies like CSRF and response caching. The adapter also centralizes logging and error handling using dw/system/Logger, improving observability. It enables gradual vendor migration by toggling features via site preferences. Finally, it keeps cartridge layering clean (app_custom before vendor and base).
Incorrect
The adapter controller pattern is correct because it avoids executing legacy pipelines while preserving business logic by calling the vendor’s script modules directly. Mapping pipeline dictionary variables to res.setViewData() keeps data flow consistent with SFRA, enabling templating and response handling to remain modern. Using server.append or server.prepend lets you hook into base SFRA checkout routes without forking vendor or base cartridges, which eases upgrades. Keeping server.middleware.https and the CSRF middleware (csrfProtection.validateAjaxRequest) on critical routes meets security best practices. Option 1 merely forwards URLs and risks missing middleware, data mapping, and route contracts. Option 2 is risky for timelines and abdicates architecture responsibility even when an adapter can bridge safely. Option 4 (invoking the pipeline URL) couples you back to deprecated tech and bypasses controller policies like CSRF and response caching. The adapter also centralizes logging and error handling using dw/system/Logger, improving observability. It enables gradual vendor migration by toggling features via site preferences. Finally, it keeps cartridge layering clean (app_custom before vendor and base).
Unattempted
The adapter controller pattern is correct because it avoids executing legacy pipelines while preserving business logic by calling the vendor’s script modules directly. Mapping pipeline dictionary variables to res.setViewData() keeps data flow consistent with SFRA, enabling templating and response handling to remain modern. Using server.append or server.prepend lets you hook into base SFRA checkout routes without forking vendor or base cartridges, which eases upgrades. Keeping server.middleware.https and the CSRF middleware (csrfProtection.validateAjaxRequest) on critical routes meets security best practices. Option 1 merely forwards URLs and risks missing middleware, data mapping, and route contracts. Option 2 is risky for timelines and abdicates architecture responsibility even when an adapter can bridge safely. Option 4 (invoking the pipeline URL) couples you back to deprecated tech and bypasses controller policies like CSRF and response caching. The adapter also centralizes logging and error handling using dw/system/Logger, improving observability. It enables gradual vendor migration by toggling features via site preferences. Finally, it keeps cartridge layering clean (app_custom before vendor and base).
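An abbreviated adapter along these lines (the vendor module path and its authorize signature are assumptions about the third-party cartridge, not its documented API):

'use strict';

// app_custom/cartridge/controllers/CheckoutServices.js — adapter layer; the file name follows the base controller it extends
var server = require('server');
server.extend(module.superModule);

// The base SubmitPayment route already runs server.middleware.https and CSRF validation,
// so the appended step only adds the vendor call and the data mapping.
server.append('SubmitPayment', function (req, res, next) {
    // Hypothetical vendor script module that the legacy pipeline used to drive.
    var vendorAuth = require('*/cartridge/scripts/payment/vendorAuthorize');
    var viewData = res.getViewData();

    // Map what the pipeline dictionary used to carry into plain arguments (keys are illustrative).
    var authResult = vendorAuth.authorize({
        billingForm: req.form,
        currencyCode: req.session.currency ? req.session.currency.currencyCode : null
    });

    viewData.paymentAuth = {
        approved: authResult.approved,
        transactionID: authResult.transactionID
    };
    res.setViewData(viewData); // keeps templates and JSON responses on the standard SFRA data flow
    next();
});

module.exports = server.exports();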
Question 38 of 44
38. Question
A loyalty AppExchange add-on exposes a pipeline Loyalty-Apply invoked via an ISML form submission. You must retain functionality and add rate-limiting to protect a downstream service. What should you do?
Correct
Option 2 is correct because it migrates invocation from a pipeline endpoint to a controller route where middleware can be applied cleanly. Converting the pipeline’s internal steps into a script module preserves logic while eliminating the deprecated entry point. Adding HTTPS and a custom rate-limit middleware aligns with platform security and resilience guidelines and keeps throttling decisions close to business rules. Option 1 relies on external infra and still leaves the insecure pipeline running without CSRF or route guards. Option 3 continues to use legacy pipelines indirectly and bypasses middleware guarantees. Option 4 changes the interaction pattern and breaks the required real-time customer experience. The controller route also simplifies testing via mocha-compatible unit tests in cartridge/scripts. It makes it easier to emit telemetry to Log Center with consistent categories. It reduces upgrade risk by avoiding vendor cartridge forks. It enables feature flags to switch between vendor and custom implementations during rollout.
Incorrect
Option 2 is correct because it migrates invocation from a pipeline endpoint to a controller route where middleware can be applied cleanly. Converting the pipeline’s internal steps into a script module preserves logic while eliminating the deprecated entry point. Adding HTTPS and a custom rate-limit middleware aligns with platform security and resilience guidelines and keeps throttling decisions close to business rules. Option 1 relies on external infra and still leaves the insecure pipeline running without CSRF or route guards. Option 3 continues to use legacy pipelines indirectly and bypasses middleware guarantees. Option 4 changes the interaction pattern and breaks the required real-time customer experience. The controller route also simplifies testing via mocha-compatible unit tests in cartridge/scripts. It makes it easier to emit telemetry to Log Center with consistent categories. It reduces upgrade risk by avoiding vendor cartridge forks. It enables feature flags to switch between vendor and custom implementations during rollout.
Unattempted
Option 2 is correct because it migrates invocation from a pipeline endpoint to a controller route where middleware can be applied cleanly. Converting the pipeline’s internal steps into a script module preserves logic while eliminating the deprecated entry point. Adding HTTPS and a custom rate-limit middleware aligns with platform security and resilience guidelines and keeps throttling decisions close to business rules. Option 1 relies on external infra and still leaves the insecure pipeline running without CSRF or route guards. Option 3 continues to use legacy pipelines indirectly and bypasses middleware guarantees. Option 4 changes the interaction pattern and breaks the required real-time customer experience. The controller route also simplifies testing via mocha-compatible unit tests in cartridge/scripts. It makes it easier to emit telemetry to Log Center with consistent categories. It reduces upgrade risk by avoiding vendor cartridge forks. It enables feature flags to switch between vendor and custom implementations during rollout.
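One way such a middleware could look, using a per-session fixed window (the window, limit, and attribute names are assumptions, and the counter is deliberately simple rather than exact):

'use strict';

// app_custom/cartridge/scripts/middleware/rateLimit.js (path and limits are illustrative)
var WINDOW_SECONDS = 60;
var MAX_CALLS = 10;

/**
 * SFRA middleware that rejects requests once the current session exceeds MAX_CALLS per window.
 * Usage: server.post('Apply', server.middleware.https, rateLimit, function (req, res, next) { ... });
 */
function rateLimit(req, res, next) {
    var windowId = Math.floor(Date.now() / (WINDOW_SECONDS * 1000));

    // session.privacy is the platform's per-session attribute store, available in controllers.
    if (session.privacy.loyaltyWindow !== windowId) {
        session.privacy.loyaltyWindow = windowId;
        session.privacy.loyaltyCalls = 0;
    }

    if (session.privacy.loyaltyCalls >= MAX_CALLS) {
        res.setStatusCode(429);
        res.json({ error: 'Too many requests, please try again shortly.' });
        return next();
    }

    session.privacy.loyaltyCalls++;
    return next();
}

module.exports = rateLimit;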
Question 39 of 44
39. Question
A tax provider’s cartridge contains pipelines for address validation called during Checkout-Shipping. You want to keep SFRA base controllers and avoid template changes. What’s the best path?
Correct
Option 2 is correct because it keeps the SFRA route intact and attaches behavior via server.append, preserving extension-safe upgrades. By extracting business logic from the pipeline node graph into a script module, you avoid pipeline execution and can unit-test the logic. Mapping input from req.form to the module and returning normalized output via res.setViewData() retains template compatibility. Option 1 reintroduces view-level logic and leans on pipeline constructs (pdict) that SFRA tries to move away from. Option 3 again couples you to a pipeline endpoint and bypasses middleware guarantees. Option 4 adds unnecessary infrastructure and latency to work around a local refactor. The chosen approach also lets you enforce HTTPS, CSRF, and input validation consistently. It supports feature toggles for progressive rollout. It uses cartridge layering to place adapters in app_custom without touching vendor/base. It improves observability and error handling with try/catch and structured logs.
Incorrect
Option 2 is correct because it keeps the SFRA route intact and attaches behavior via server.append, preserving extension-safe upgrades. By extracting business logic from the pipeline node graph into a script module, you avoid pipeline execution and can unit-test the logic. Mapping input from req.form to the module and returning normalized output via res.setViewData() retains template compatibility. Option 1 reintroduces view-level logic and leans on pipeline constructs (pdict) that SFRA tries to move away from. Option 3 again couples you to a pipeline endpoint and bypasses middleware guarantees. Option 4 adds unnecessary infrastructure and latency to work around a local refactor. The chosen approach also lets you enforce HTTPS, CSRF, and input validation consistently. It supports feature toggles for progressive rollout. It uses cartridge layering to place adapters in app_custom without touching vendor/base. It improves observability and error handling with try/catch and structured logs.
Unattempted
Option 2 is correct because it keeps the SFRA route intact and attaches behavior via server.append, preserving extension-safe upgrades. By extracting business logic from the pipeline node graph into a script module, you avoid pipeline execution and can unit-test the logic. Mapping input from req.form to the module and returning normalized output via res.setViewData() retains template compatibility. Option 1 reintroduces view-level logic and leans on pipeline constructs (pdict) that SFRA tries to move away from. Option 3 again couples you to a pipeline endpoint and bypasses middleware guarantees. Option 4 adds unnecessary infrastructure and latency to work around a local refactor. The chosen approach also lets you enforce HTTPS, CSRF, and input validation consistently. It supports feature toggles for progressive rollout. It uses cartridge layering to place adapters in app_custom without touching vendor/base. It improves observability and error handling with try/catch and structured logs.
Question 40 of 44
40. Question
A fraud-screening AppExchange solution injects a Fraud-Review pipeline link on the Order Confirmation page. You must modernize with controllers and keep deep links working. What should you implement?
Correct
Option 1 is correct because it gives you a controller route with the right middleware and a compatible path for legacy deep links via URL rewrite. Calling the vendor’s underlying script module maintains business functionality without executing the deprecated pipeline. Enforcing HTTPS and CSRF on the route meets security standards for order data access. Option 2 attempts to patch a pipeline with CSRF markup, but pipelines lack the controller middleware stack and are still discouraged. Option 3 defers the requirement instead of solving it; deep links would still break. Option 4 moves the problem to the client and keeps the insecure pipeline alive. The controller route also allows precise caching rules and header management. It centralizes error handling and logging with structured messages. It keeps templates unchanged by providing expected view data. It prepares you for later deprecation of the pipeline endpoint entirely.
Incorrect
Option 1 is correct because it gives you a controller route with the right middleware and a compatible path for legacy deep links via URL rewrite. Calling the vendor’s underlying script module maintains business functionality without executing the deprecated pipeline. Enforcing HTTPS and CSRF on the route meets security standards for order data access. Option 2 attempts to patch a pipeline with CSRF markup, but pipelines lack the controller middleware stack and are still discouraged. Option 3 defers the requirement instead of solving it; deep links would still break. Option 4 moves the problem to the client and keeps the insecure pipeline alive. The controller route also allows precise caching rules and header management. It centralizes error handling and logging with structured messages. It keeps templates unchanged by providing expected view data. It prepares you for later deprecation of the pipeline endpoint entirely.
Unattempted
Option 1 is correct because it gives you a controller route with the right middleware and a compatible path for legacy deep links via URL rewrite. Calling the vendor’s underlying script module maintains business functionality without executing the deprecated pipeline. Enforcing HTTPS and CSRF on the route meets security standards for order data access. Option 2 attempts to patch a pipeline with CSRF markup, but pipelines lack the controller middleware stack and are still discouraged. Option 3 defers the requirement instead of solving it; deep links would still break. Option 4 moves the problem to the client and keeps the insecure pipeline alive. The controller route also allows precise caching rules and header management. It centralizes error handling and logging with structured messages. It keeps templates unchanged by providing expected view data. It prepares you for later deprecation of the pipeline endpoint entirely.
Question 41 of 44
41. Question
A returns RMA plugin still uses a Returns-Start pipeline and posts to an OCAPI custom endpoint that assumes pipeline context. You must integrate with controllers and keep OCAPI behavior. What do you do?
Correct
Option 2 is correct because you separate reusable transforms into utilities that both a controller and OCAPI hook can call, eliminating pipeline dependency while keeping the API contract. Sharing validators and mappers ensures consistent behavior across channels. Option 1 breaks headless and external use cases that depend on OCAPI. Option 3 continues to rely on pipelines and mixes old and new paradigms, complicating security and upgrades. Option 4 changes the interaction model and degrades customer experience for RMAs. The refactor also enables comprehensive unit tests on the shared utilities. It makes error codes consistent for clients while enabling localization in controllers. It allows feature flags to roll out the controller path first. It preserves cartridge layering without forking the vendor. It eases future migration to newer APIs.
Incorrect
Option 2 is correct because you separate reusable transforms into utilities that both a controller and OCAPI hook can call, eliminating pipeline dependency while keeping the API contract. Sharing validators and mappers ensures consistent behavior across channels. Option 1 breaks headless and external use cases that depend on OCAPI. Option 3 continues to rely on pipelines and mixes old and new paradigms, complicating security and upgrades. Option 4 changes the interaction model and degrades customer experience for RMAs. The refactor also enables comprehensive unit tests on the shared utilities. It makes error codes consistent for clients while enabling localization in controllers. It allows feature flags to roll out the controller path first. It preserves cartridge layering without forking the vendor. It eases future migration to newer APIs.
Unattempted
Option 2 is correct because you separate reusable transforms into utilities that both a controller and an OCAPI hook can call, eliminating the pipeline dependency while keeping the API contract. Sharing validators and mappers ensures consistent behavior across channels. Option 1 breaks headless and external use cases that depend on OCAPI. Option 3 continues to rely on pipelines and mixes old and new paradigms, complicating security and upgrades. Option 4 changes the interaction model and degrades the customer experience for RMAs. The refactor also enables comprehensive unit tests on the shared utilities. It makes error codes consistent for clients while enabling localization in controllers. It allows feature flags to roll out the controller path first. It preserves cartridge layering without forking the vendor cartridge. It eases future migration to newer APIs.
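A rough sketch of that separation follows; the module path, function names, and payload fields (orderNo, items, sku, qty, reason) are all invented for illustration and would come from the vendor's actual contract in practice.

'use strict';

// Shared utility module (illustrative): cartridge/scripts/rma/rmaUtils.js
// Both the storefront controller and the OCAPI hook require() this file, so
// validation and mapping behave identically across channels.
function validateReturnRequest(payload) {
    // Accept only requests that carry an order number and at least one line item.
    return !!(payload && payload.orderNo && payload.items && payload.items.length);
}

function buildRmaRequest(payload) {
    // Map the inbound JSON body to the structure the RMA provider expects.
    return {
        orderNo: payload.orderNo,
        items: payload.items.map(function (item) {
            return { sku: item.sku, qty: item.qty, reason: item.reason };
        })
    };
}

module.exports = {
    validateReturnRequest: validateReturnRequest,
    buildRmaRequest: buildRmaRequest
};

A Returns controller route and the OCAPI hook script (registered through the cartridge's hooks.json) would each require this module and run the same validation and mapping before creating the RMA, so neither channel depends on the Returns-Start pipeline.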
Question 42 of 44
42. Question
A gift registry integration ships a vendor cartridge that relies on Template.isml includes that reference PipelineDictionary. How can you modernize with controllers while minimizing template rewrite?
Correct
Option 2 is correct because controllers can populate view data via res.setViewData() with the same key structure, letting most ISML includes remain functional while removing pipeline execution. This approach reduces the blast radius and allows phased template cleanup. Option 1 is a full replatform and not a pragmatic incremental step. Option 3 leaves the deprecated pipeline active and relies on edge security rather than proper middleware. Option 4 complicates the client and keeps insecure endpoints. The controller approach also enables standard middleware such as HTTPS and CSRF. It improves caching control through cache.applyDefaultCache() where safe. It allows A/B testing between legacy and modern paths. It centralizes error handling and logging. It maintains cartridge-order hygiene to prevent override conflicts.
Incorrect
Option 2 is correct because controllers can populate view data via res.setViewData() with the same key structure, letting most ISML includes remain functional while removing pipeline execution. This approach reduces the blast radius and allows phased template cleanup. Option 1 is a full replatform and not a pragmatic incremental step. Option 3 leaves the deprecated pipeline active and relies on edge security rather than proper middleware. Option 4 complicates the client and keeps insecure endpoints. The controller approach also enables standard middleware such as HTTPS and CSRF. It improves caching control through cache.applyDefaultCache() where safe. It allows A/B testing between legacy and modern paths. It centralizes error handling and logging. It maintains cartridge-order hygiene to prevent override conflicts.
Unattempted
Option 2 is correct because controllers can populate view data via res.setViewData() with the same key structure, letting most ISML includes remain functional while removing pipeline execution. This approach reduces the blast radius and allows phased template cleanup. Option 1 is a full replatform and not a pragmatic incremental step. Option 3 leaves the deprecated pipeline active and relies on edge security rather than proper middleware. Option 4 complicates the client and keeps insecure endpoints. The controller approach also enables standard middleware such as HTTPS and CSRF. It improves caching control through cache.applyDefaultCache() where safe. It allows A/B testing between legacy and modern paths. It centralizes error handling and logging. It maintains cartridge-order hygiene to prevent override conflicts.
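A minimal sketch of that pattern follows, assuming the legacy ISML includes read keys such as Registry and RegistryItems from the pipeline dictionary; those keys, the route name, the registryHelper module, and the template path are all invented for this example.

'use strict';

// app_custom_registry/cartridge/controllers/GiftRegistry.js (illustrative)
var server = require('server');
var csrfProtection = require('*/cartridge/scripts/middleware/csrf');

// Hypothetical script module extracted from the vendor cartridge's pipeline logic.
var registryHelper = require('*/cartridge/scripts/registryHelper');

server.get('Show', server.middleware.https, csrfProtection.generateToken, function (req, res, next) {
    var registry = registryHelper.getRegistry(req.querystring.registryID);

    // Populate view data with the same keys the legacy ISML includes expect,
    // so the templates keep rendering without a pipeline dictionary.
    res.setViewData({
        Registry: registry,
        RegistryItems: registry ? registry.items : []
    });

    res.render('registry/registryshow');
    next();
});

module.exports = server.exports();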
Question 43 of 44
43. Question
A search merchandising app provides a pipeline Search-Boost invoked on PLP. You need to preserve the boost logic but ensure compatibility with SFRA's Search-Show route. What is the best plan?
Correct
Option 2 is correct because it leverages SFRA extension points to inject logic without forking the base cartridge, preserving upgradability. Extracting the pipeline logic into a script module keeps the business behavior but removes reliance on pipeline execution. Merging the transformed data into the productSearch view data maintains template compatibility. Option 1 creates maintenance debt and conflicts during upgrades. Option 3 mixes legacy view execution with controllers and undermines middleware. Option 4 may help performance but does not satisfy the need to keep dynamic logic aligned with current queries. The chosen method also allows feature flags to toggle boosts. It supports diagnostics by logging adjustments. It enforces HTTPS and caching semantics appropriately. It keeps the vendor cartridge untouched, layered beneath app_custom in the cartridge path. It eases unit testing of ranking rules.
Incorrect
Option 2 is correct because it leverages SFRA extension points to inject logic without forking the base cartridge, preserving upgradability. Extracting the pipeline logic into a script module keeps the business behavior but removes reliance on pipeline execution. Merging the transformed data into the productSearch view data maintains template compatibility. Option 1 creates maintenance debt and conflicts during upgrades. Option 3 mixes legacy view execution with controllers and undermines middleware. Option 4 may help performance but does not satisfy the need to keep dynamic logic aligned with current queries. The chosen method also allows feature flags to toggle boosts. It supports diagnostics by logging adjustments. It enforces HTTPS and caching semantics appropriately. It keeps the vendor cartridge untouched, layered beneath app_custom in the cartridge path. It eases unit testing of ranking rules.
Unattempted
Option 2 is correct because it leverages SFRA extension points to inject logic without forking the base cartridge, preserving upgradability. Extracting the pipeline logic into a script module keeps the business behavior but removes reliance on pipeline execution. Merging the transformed data into the productSearch view data maintains template compatibility. Option 1 creates maintenance debt and conflicts during upgrades. Option 3 mixes legacy view execution with controllers and undermines middleware. Option 4 may help performance but does not satisfy the need to keep dynamic logic aligned with current queries. The chosen method also allows feature flags to toggle boosts. It supports diagnostics by logging adjustments. It enforces HTTPS and caching semantics appropriately. It keeps the vendor cartridge untouched, layered beneath app_custom in the cartridge path. It eases unit testing of ranking rules.
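One common way to use that extension point is sketched below with a hypothetical boostHelper module standing in for the logic extracted from the Search-Boost pipeline; the cartridge name and helper signature are assumptions, while server.extend(module.superModule) and server.append('Show', ...) are standard SFRA mechanics.

'use strict';

// app_custom_search/cartridge/controllers/Search.js (illustrative)
var server = require('server');
server.extend(module.superModule);

// Hypothetical script module holding the boost logic formerly in Search-Boost.
var boostHelper = require('*/cartridge/scripts/search/boostHelper');

// Append to the base SFRA Search-Show route instead of forking it.
server.append('Show', function (req, res, next) {
    var viewData = res.getViewData();

    if (viewData.productSearch) {
        // Merge boost adjustments into the existing productSearch view data
        // so downstream PLP templates keep rendering unchanged.
        viewData.productSearch = boostHelper.applyBoosts(viewData.productSearch, req.querystring);
        res.setViewData(viewData);
    }

    next();
});

module.exports = server.exports();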
Question 44 of 44
44. Question
A promotions provider exposes both a legacy pipeline endpoint and a controller endpoint. Your site currently hits the pipeline from several templates. You want a low-risk cutover. What sequence is best?
Correct
Option 2 is correct because the proxy controller lets you maintain the response contract while controlling rollout with a site preference. Changing templates to point to your proxy reduces the number of code paths you must edit later. Staged rollout lowers risk and allows quick fallback. Option 1 is high-risk with many call sites and no safety net. Option 3 adds complexity in views and keeps deprecated endpoints. Option 4 bypasses middleware and can cause subtle differences in headers or caching. The proxy also allows metrics collection on usage. It can enforce CSRF/HTTPS independent of the vendor. It simplifies A/B testing and rollback. It centralizes mapping logic away from templates. It improves governance and logging.
Incorrect
Option 2 is correct because the proxy controller lets you maintain the response contract while controlling rollout with a site preference. Changing templates to point to your proxy reduces the number of code paths you must edit later. Staged rollout lowers risk and allows quick fallback. Option 1 is high-risk with many call sites and no safety net. Option 3 adds complexity in views and keeps deprecated endpoints. Option 4 bypasses middleware and can cause subtle differences in headers or caching. The proxy also allows metrics collection on usage. It can enforce CSRF/HTTPS independent of the vendor. It simplifies A/B testing and rollback. It centralizes mapping logic away from templates. It improves governance and logging.
Unattempted
Option 2 is correct because the proxy controller lets you maintain the response contract while controlling rollout with a site preference. Changing templates to point to your proxy reduces the number of code paths you must edit later. Staged rollout lowers risk and allows quick fallback. Option 1 is high-risk with many call sites and no safety net. Option 3 adds complexity in views and keeps deprecated endpoints. Option 4 bypasses middleware and can cause subtle differences in headers or caching. The proxy also allows metrics collection on usage. It can enforce CSRF/HTTPS independent of the vendor. It simplifies A/B testing and rollback. It centralizes mapping logic away from templates. It improves governance and logging.
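A simplified sketch of such a proxy controller follows; the Promotions-Proxy route, the enablePromoController site preference ID, and the two promo service wrapper modules are all assumptions for illustration, with the response contract assumed to be identical from both wrappers.

'use strict';

// app_custom_promotions/cartridge/controllers/Promotions.js (illustrative)
var server = require('server');
var Site = require('dw/system/Site');

server.get('Proxy', server.middleware.https, function (req, res, next) {
    // Hypothetical site preference used to stage the cutover per site.
    var useController = Site.getCurrent().getCustomPreferenceValue('enablePromoController') === true;

    // Hypothetical wrappers around the vendor's two endpoints; both return
    // data in the same shape so templates see an unchanged response contract.
    var promoService = useController
        ? require('*/cartridge/scripts/promos/controllerPromoService')
        : require('*/cartridge/scripts/promos/legacyPromoService');

    res.json(promoService.getPromotions(req.querystring.pid));
    next();
});

module.exports = server.exports();

Templates then call only Promotions-Proxy, so flipping the site preference (and later deleting the legacy wrapper) completes the cutover without touching every call site again.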
Best wishes. Don’t forget to leave feedback in the Contact Us form after your result.