Salesforce Certified B2C Commerce Architect Practice Test 8
You can review your answers by clicking the “View Answers” option. Important note: open reference documentation links in a new tab (right-click and choose “Open in New Tab”).
Question 1 of 60
1. Question
Security mandates mTLS for third-party calls, key rotation, and PII minimization. API docs show optional mTLS, OAuth client credentials, and data redaction endpoints. The AppExchange package supports OAuth but not mTLS. What's your path?
Explanation:
Option 3 is correct because it meets security requirements by implementing mTLS as documented, while still leveraging the package where mTLS is not required. Short-lived OAuth tokens plus mTLS satisfy key rotation mandates, and redaction endpoints reduce data retention. Option 1 dismisses explicit security requirements and risks compliance. Option 2 increases long-term maintenance cost and diverges from vendor upgrades. Option 4 exposes secrets and increases attack surface. The recommended split is pragmatic and defensible in security reviews. It documents cert rotation and pinning procedures. It defines fallbacks for mTLS handshake failures. It keeps upgrade paths open for the package. It aligns with privacy-by-design principles. It provides clear QA test cases for handshake and redaction flows.
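A minimal sketch of the split in practice, assuming a Node.js runtime with @types/node; the host name, certificate paths, and token source are placeholders. The custom client performs the mTLS handshake and attaches a short-lived OAuth token, while the package continues to serve the endpoints that do not mandate mTLS.

```typescript
// Illustrative only: an outbound call combining mTLS (client cert/key) with a
// short-lived OAuth bearer token. All paths and host names are placeholders.
import * as https from "node:https";
import * as fs from "node:fs";

const agent = new https.Agent({
  cert: fs.readFileSync("/secrets/client-cert.pem"), // rotated per the documented key-rotation schedule
  key: fs.readFileSync("/secrets/client-key.pem"),
  ca: fs.readFileSync("/secrets/vendor-ca.pem"),     // pin the vendor CA used for the mTLS handshake
});

function callVendor(path: string, accessToken: string): Promise<string> {
  return new Promise((resolve, reject) => {
    const req = https.request(
      {
        host: "api.vendor.example",
        path,
        method: "GET",
        agent,
        headers: { Authorization: `Bearer ${accessToken}` }, // short-lived token satisfies rotation
      },
      (res) => {
        let body = "";
        res.on("data", (chunk) => (body += chunk));
        res.on("end", () => resolve(body));
      }
    );
    req.on("error", reject); // handshake failures route to the documented fallback behavior
    req.end();
  });
}
```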
Question 2 of 60
2. Question
Finance needs precise tax rounding and multiple payment providers for redundancy. The spec shows a direct SDK call in controllers. What revision should you defend?
Explanation:
Option 2 is correct because an adapter decouples business logic from provider specifics and supports safe swaps. Feature flags allow canary release and instant rollback. Idempotency prevents double charges, and documented reconciliation keeps books correct. Option 1 is risky and slow to change under incident pressure. Option 3 ignores resilience and increases business risk if that provider degrades. Option 4 breaks user expectations and can cause fulfillment without payment if failures occur. The proposed change is measurable with error budgets. It aligns with audit requirements. Stakeholders can defend it by reduced MTTR and controlled experimentation. It also improves testability with provider mocks.
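A sketch of the adapter seam the revision argues for; the provider classes, flag key, and idempotency scheme are illustrative rather than any specific PSP's SDK.

```typescript
// Controllers depend only on the PaymentProvider interface, never on a vendor SDK directly.
interface PaymentProvider {
  authorize(orderNo: string, amount: number, idempotencyKey: string): Promise<{ authId: string }>;
}

class PrimaryProvider implements PaymentProvider {
  async authorize(orderNo: string, amount: number, idempotencyKey: string) {
    // a real implementation calls the provider's REST API, passing idempotencyKey
    // so a retried request cannot create a second charge
    return { authId: `primary-${idempotencyKey}` };
  }
}

class BackupProvider implements PaymentProvider {
  async authorize(orderNo: string, amount: number, idempotencyKey: string) {
    return { authId: `backup-${idempotencyKey}` };
  }
}

// The feature flag selects the concrete provider, enabling canary rollout and instant rollback.
function resolveProvider(flags: { usePrimaryProvider: boolean }): PaymentProvider {
  return flags.usePrimaryProvider ? new PrimaryProvider() : new BackupProvider();
}
```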
Question 3 of 60
3. Question
Product data comes from a PIM; inventory from an OMS; prices from ERP. Spec uses a single nightly import job for all. What is the best defensible change?
Explanation:
Option 2 is correct because dependency ordering prevents partial publishes and corrupted storefront states. Reconciliation counts detect drift and missing data rapidly. Partial-failure and retry windows document how to recover without reloading everything. Option 1 hides errors and makes rollback coarse. Option 3 breaks references and risks prices for missing products. Option 4 over-rotates to real time, increasing cost and risk without business need. The chosen plan scales with volume and is easy to test. It offers traceability per domain. Stakeholders can defend it with reliability and time-to-recover metrics. It aligns with release windows cleanly.
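A sketch of dependency ordering with reconciliation counts; the step names and count source are illustrative, not a particular job framework's API.

```typescript
// Each step reports how many records it expected versus loaded; a mismatch halts the chain
// before downstream steps publish against incomplete data.
type Step = { name: string; run: () => Promise<{ expected: number; loaded: number }> };

async function runOrdered(steps: Step[]): Promise<void> {
  for (const step of steps) {
    const { expected, loaded } = await step.run();
    if (loaded !== expected) {
      throw new Error(`${step.name}: reconciliation failed (${loaded}/${expected})`);
    }
    console.log(`${step.name}: ok (${loaded}/${expected})`);
  }
}

// Products land before inventory, and both before prices, so failures never orphan references.
runOrdered([
  { name: "pim-products", run: async () => ({ expected: 1000, loaded: 1000 }) },
  { name: "oms-inventory", run: async () => ({ expected: 1000, loaded: 1000 }) },
  { name: "erp-prices", run: async () => ({ expected: 1000, loaded: 1000 }) },
]).catch((err) => console.error(err)); // the documented retry window picks up from the failed step
```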
Question 4 of 60
4. Question
Analytics needs consent-aware events and BI wants a unified schema. The spec shows a generic GTM container pasted site-wide. What should you recommend and defend?
Explanation:
Option 2 is correct because a governed data layer reduces drift, enforces consent, and stabilizes downstream analytics. Server-side events improve performance and control. CI linting prevents unauthorized tags and breaks. Option 1 is not enforceable and risks compliance. Option 3 is impractical and blocks insight. Option 4 punts governance and misses real-time personalization needs. The recommended plan yields measurable improvements in performance and compliance. It is defensible with audit logs and tag diffs. It reduces vendor lock-in. It aligns to privacy-by-design principles. It also keeps marketing agile within guardrails.
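A sketch of a consent-gated data layer event; the event names, consent categories, and collector are assumptions, not a specific tag manager's API.

```typescript
type ConsentState = { analytics: boolean; marketing: boolean };

interface CommerceEvent {
  name: "product_view" | "add_to_cart" | "purchase";
  requires: keyof ConsentState;           // which consent category must be granted
  payload: Record<string, unknown>;
}

const outbound: CommerceEvent[] = [];     // a server-side collector would forward this batch

function emit(event: CommerceEvent, consent: ConsentState): void {
  if (!consent[event.requires]) return;   // drop events the shopper has not consented to
  outbound.push(event);
}

emit(
  { name: "product_view", requires: "analytics", payload: { sku: "SKU-123" } },
  { analytics: true, marketing: false }
);
```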
Question 5 of 60
5. Question
The organization demands 99.9% checkout availability and global expansion next year. The spec lacks SLOs and rollback plans. What change should you defend?
Explanation:
Option 2 is correct because SLOs tie engineering work to business targets, while blue/green plus DB-backed sessions enables safe deploys and instant rollback. Automated criteria reduce human delay during incidents. Option 1 adds cost without risk control. Option 3 is a one-time test and does not guarantee ongoing resilience. Option 4 is reactive and slow. The chosen plan scales to global expansion with repeatable processes. It creates clear acceptance tests. Stakeholders can defend it with error-budget policy. It reduces MTTR and change failure rate. It also clarifies ownership during incidents.
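The error-budget arithmetic behind a 99.9% checkout SLO, with an illustrative automated-rollback rule; the thresholds are assumptions chosen only to make the math concrete.

```typescript
const sloTarget = 0.999;
const minutesPerMonth = 30 * 24 * 60;                          // 43,200
const errorBudgetMinutes = minutesPerMonth * (1 - sloTarget);  // ~43.2 minutes of downtime per month

// Example automated criterion: if a deploy burns more than a quarter of the monthly budget
// within an hour, the blue/green switch rolls back without waiting on a human decision.
function shouldRollback(downtimeMinutesThisHour: number): boolean {
  return downtimeMinutesThisHour > errorBudgetMinutes / 4;
}

console.log(errorBudgetMinutes.toFixed(1), shouldRollback(12)); // "43.2" true
```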
Question 6 of 60
6. Question
Your brand must comply with PSD2/SCA, support Apple Pay/Google Pay, and switch payment providers during incidents with minimal code change. You found two LINK cartridges and a custom REST integration in the API docs. Which evaluation outcome and spec revision is best?
Explanation:
Option 2 is correct because it evaluates the cartridge not just for features, but for integration shape: adapter abstraction, webhook callbacks, and idempotency are explicitly documented, which aligns with incident-friendly provider swaps. Official wallet certifications and SCAPI support reduce risk in checkout, and the Service Framework adds resilience (timeouts, circuit breakers, logging). Option 1 still couples controller logic to a vendor surface and makes future swaps expensive. Option 3 underestimates the scope and ignores wallet certification and SCA nuances described in the docs. Option 4 selects stability by age, but the API docs likely show missing SCA/webhook patterns and no wallet certification matrix, creating compliance and maintenance gaps. Version notes and deprecation timelines further favor the modern adapter cartridge. This choice improves observability because events are standardized. It also satisfies architecture review checklists for security controls. The approach enables canary and rollback via feature flags. It meets PSD2 mandates with documented flows.
Question 7 of 60
7. Question
Tax rules vary by region, and promotions must recompute tax consistently. Vendors offer SOAP v2 with WSDLs, REST v1 with HMAC, and REST v2 with mTLS and async callbacks. Which evaluation should you defend?
Explanation:
Option 4 is correct because the vendor's documented async pattern with mTLS provides the strongest security and a reliable flow for recomputation and order state transitions. The API docs for v2 likely include paging, versioning, SLAs, and callback schemas that reduce ambiguity during incidents. Idempotency and request correlation are critical to avoid double commits; the Service Framework supports these patterns cleanly. Option 1 (SOAP v2) can work but often lacks modern retry and callback semantics in the docs, increasing latency risk. Option 2 prefers simpler auth but ignores the absence of callbacks and weaker transport assurances. Option 3 mixes concerns and breaks auditability; tax logic belongs with a tax service, and price books aren't substitutes. The selected approach documents timeouts and fallback cache behavior up front. It aligns with governance and security reviews. It positions QA to simulate callbacks via the vendor's sandbox. It reduces change risk with version pinning. It meets promotion recomputation needs documented by the business.
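A sketch of the correlation and idempotency pattern for the async tax callback; the field names and the in-memory store are placeholders for whatever durable storage the project uses.

```typescript
import { randomUUID } from "node:crypto";

const processedCallbacks = new Set<string>(); // use durable storage in a real integration

// The request carries a correlationId so the later callback can be matched to the order.
function buildTaxRequest(orderNo: string, lines: { sku: string; net: number }[]) {
  return { correlationId: randomUUID(), orderNo, lines };
}

// Duplicate webhook deliveries are ignored, so tax is never committed twice.
function handleTaxCallback(callback: { correlationId: string; totalTax: number }): boolean {
  if (processedCallbacks.has(callback.correlationId)) return false;
  processedCallbacks.add(callback.correlationId);
  // ...apply callback.totalTax to the order and advance its state here...
  return true;
}
```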
Question 8 of 60
8. Question
Ratings & reviews must syndicate to the PDP, allow moderation, and export UGC to analytics. Docs show: AppExchange package v5 (REST + webhooks), v3 (REST without webhooks), and a raw GraphQL API. What should your spec say?
Explanation:
Option 1 is correct because v5's documented webhooks reduce polling, enable near real-time moderation flows, and the marketplace package covers auth, pagination, and retry patterns. The Service Framework formalizes timeouts and logging, while the governed data layer ensures events land consistently in analytics. Option 2 increases surface area and misses cartridge-level best practices the package already solved; docs for GraphQL may not include Commerce-ready helpers. Option 3 is workable but introduces data staleness and higher quota usage; the v3 docs lack push semantics noted as required by marketing. Option 4 splits sources and risks inconsistencies between counts and content, often flagged in SEO and QA. Version notes, upgrade guides, and deprecation dates further favor v5. The marketplace certification implies tested compatibility with SFRA/SCAPI. Webhook schemas are contract-friendly for QA mocks. Observability is stronger with events over polling. The plan is defensible to stakeholders on freshness and cost.
Question 9 of 60
9. Question
ERP will be the SoR for price and inventory. Docs show a bulk delta REST v3 with CDC webhooks, a nightly CSV SFTP drop, and a legacy SOAP API. The business wants near real-time price changes and accurate OOS signals. What's your recommendation?
Explanation:
Option 3 is correct because the API docs for CDC and bulk deltas provide the blend of freshness and reliability the business needs, minimizing load while covering gaps. Persisted deltas and backfill jobs reduce lost updates during outages, which the docs likely address via sequence IDs or watermarks. The Job Framework is built for bulk operations and retries; the Service Framework covers low-latency fallbacks. Option 1 is too coarse and cannot meet near real-time price changes. Option 2 mixes protocols without a clear consistency model and ignores CDC capabilities. Option 4 over-calls ERP and will breach latency SLAs; the docs usually warn about rate limits. The chosen design also supports A/B experiments on price rules with deterministic updates. It simplifies QA with replayable delta streams. It aligns to scalability by avoiding hot-path ERP dependencies. It lets monitoring focus on stream health KPIs. Stakeholders can defend it with concrete SLA math.
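A sketch of watermark-driven delta consumption with replay; the delta shape and the fetch call are assumptions about what a CDC-style ERP feed typically exposes.

```typescript
interface PriceDelta { sequenceId: number; sku: string; price: number }

let watermark = 0; // last sequenceId applied; persisted between job runs in practice

async function fetchDeltasSince(seq: number): Promise<PriceDelta[]> {
  // stand-in for the ERP's "changes since sequence N" endpoint
  return [{ sequenceId: seq + 1, sku: "SKU-1", price: 19.99 }];
}

async function applyDeltas(): Promise<void> {
  const deltas = await fetchDeltasSince(watermark);
  for (const d of deltas.sort((a, b) => a.sequenceId - b.sequenceId)) {
    // ...write the price update to storefront data here...
    watermark = d.sequenceId; // advance only after a successful apply, so a crash replays rather than loses
  }
}

applyDeltas().catch((err) => console.error(err)); // a backfill job reuses the same path after outages
```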
Question 10 of 60
10. Question
A search vendor offers: Plugin v2 (supports synonyms and facet pinning, SCAPI events), Plugin v1 (OCAPI-only), and a raw REST API. The roadmap adds headless PLP in six months. What is your evaluation outcome?
Explanation:
Option 2 is correct because Plugin v2's docs show native SCAPI event formats, reducing translation work when headless arrives. Indexed webhooks decrease lag behind catalog updates, and relevance controls meet merch needs. Version pinning plus a compatibility matrix mitigate breaking changes. Option 1 lacks SCAPI awareness and increases migration effort later. Option 3 rebuilds solved problems (schema, events, ranking) and increases risk. Option 4 delays value and misses measurable SEO uplift. The selected path documents rollout and rollback steps. It clarifies monitoring on event throughput. It standardizes analytics mappings via the plugin's schemas. It aligns to headless architecture goals with less churn. It satisfies merch governance with facet/pinning features.
Question 11 of 60
11. Question
The OMS vendor has a new REST v3 with webhooks for fulfillment states; your current AppExchange connector supports v2 only. The business wants split shipments and store pickup. What should you propose after evaluating docs and versions?
Explanation:
Option 4 is correct because it respects upgrade paths while unblocking business value by adding a thin shim based on the v3 docs. Webhook schemas for fulfillment states are mapped in the shim, enabling split shipments and pickup without forking the entire package. Option 1 corrupts order integrity and complicates finance and returns. Option 2 unduly delays value and ignores the clearly documented v3 capabilities. Option 3 increases fragmentation and long-term maintenance. The shim approach documents contracts, error handling, and observability. It can be removed once the connector upgrades. It keeps governance intact by avoiding core changes. It supports QA with mocked v3 payloads. It enables staged rollout with feature flags. It aligns to future growth while minimizing risk.
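A sketch of the thin shim: it accepts an assumed v3 fulfillment webhook shape and fans it out into the flat status updates the v2-era connector already understands. All field names are illustrative.

```typescript
interface V3FulfillmentEvent {
  orderNo: string;
  shipments: { shipmentId: string; status: "PICKED" | "SHIPPED" | "READY_FOR_PICKUP"; items: string[] }[];
}

interface V2StatusUpdate { orderNo: string; shipmentId: string; status: string }

// One v3 event can describe a split shipment or store pickup; the shim emits one update per shipment.
function mapToV2(event: V3FulfillmentEvent): V2StatusUpdate[] {
  return event.shipments.map((s) => ({
    orderNo: event.orderNo,
    shipmentId: s.shipmentId,
    status: s.status,
  }));
}

// Once the connector supports v3 natively, mapToV2 is deleted and the webhook points at it directly.
```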
Question 12 of 60
12. Question
A loyalty platform lists OAuth 2.0 Authorization Code with PKCE, idempotent earn/burn, and rate limits; an older SSO-only package exists on AppExchange. Marketing requires real-time point balance on PDP. What's your evaluated recommendation?
Explanation:
Option 1 is correct because the documented OAuth 2.0 flow with PKCE meets modern security requirements, while idempotent earn/burn calls and caching patterns protect latency on PDP. The Service Framework provides retries, circuit breakers, and logging as per best practices. Option 2 is brittle and likely violates the API terms and security guidance. Option 3 cannot meet the real-time requirement, and the docs' rate limits imply server-side caching rather than batching. Option 4 exposes tokens and increases attack surface; the docs advise server-side token handling. This approach aligns with consent and privacy flags. It scales with rate limits through caching and fallback. It simplifies QA with sandbox tokens and scopes. It's defensible in security and architecture reviews. It avoids lock-in to a legacy package with limited features.
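A sketch of the server-side cache that keeps the PDP balance call inside the vendor's rate limits; the TTL, key shape, and fetch call are assumptions.

```typescript
const balanceCache = new Map<string, { balance: number; expiresAt: number }>();
const TTL_MS = 60_000; // tune against the vendor's documented rate limits and freshness needs

async function fetchBalanceFromVendor(customerId: string): Promise<number> {
  // stand-in for the OAuth-authenticated loyalty call; tokens stay server-side
  return 1250;
}

async function getPointBalance(customerId: string): Promise<number> {
  const hit = balanceCache.get(customerId);
  if (hit && hit.expiresAt > Date.now()) return hit.balance;   // serve the cached value within TTL
  const balance = await fetchBalanceFromVendor(customerId);    // otherwise fall back to the live call
  balanceCache.set(customerId, { balance, expiresAt: Date.now() + TTL_MS });
  return balance;
}
```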
Question 13 of 60
13. Question
PIM integration must support variant-rich catalogs and localized attributes. Docs show a bulk GraphQL export and an AppExchange connector that maps to catalog import files. Which path should the spec take?
Explanation:
Option 3 is correct because the GraphQL API's documented delta and localization fields let you minimize volume and generate precise delta files, while the connector covers edge transformations the docs identify. This hybrid recognizes strengths: custom control for variants/locales with high change rates and package convenience where mappings are complex. Option 1 risks large, slow imports and misses delta efficiencies listed in the docs. Option 2 ignores the connector's value for tricky edge mappings and increases build scope. Option 4 contradicts the requirement for localization and creates rework. The chosen design documents version pinning for the GraphQL schema. It sets acceptance criteria on delta correctness. It creates replay plans for failures. It isolates performance tuning to high-churn entities. It supports auditability with manifest logs.
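A sketch of pulling a localized delta from the PIM's GraphQL endpoint; the query fields, URL, and variables are assumptions about what such an export typically exposes (Node 18+ global fetch assumed).

```typescript
const DELTA_QUERY = `
  query Deltas($since: String!, $locale: String!) {
    productChanges(since: $since, locale: $locale) {
      sku
      name
      attributes { code value }
    }
  }`;

async function fetchDelta(since: string, locale: string): Promise<unknown> {
  const res = await fetch("https://pim.example.com/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query: DELTA_QUERY, variables: { since, locale } }),
  });
  return res.json(); // this payload feeds the per-locale catalog delta file generation
}

fetchDelta("2024-01-01T00:00:00Z", "de-DE").catch((err) => console.error(err));
```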
Question 14 of 60
14. Question
Subscription products require proration, pause/resume, and dunning. A vendor offers REST v2 with webhooks and an AppExchange package v1 lacking proration. What should the evaluated spec propose?
Explanation:
Option 2 is correct because the REST v2 docs enumerate proration and lifecycle events that are critical to the requirement, while the package v1 lacks them. A split approach leverages the package where it helps (admin setup) but ensures the runtime path uses the versioned API with webhooks for state changes. Option 1 delays a must-have and risks rework. Option 3 is high risk and reinvents a complex domain the vendor already covers. Option 4 cannot meet real-time dunning and proration scenarios described in the docs. The selected plan documents webhook payload contracts and idempotency. It defines rollback and migration steps. It aligns to financial audit trails. It reduces customer impact during incidents. It enables canary via feature flags.
Question 15 of 60
15. Question
Customer service requires a unified order timeline in Service Cloud. You found an AppExchange connector (orders-only) and OMS docs with a webhook for status changes plus a reporting API for shipments/returns. What should your evaluation conclude?
Explanation:
Option 3 is correct because it preserves proven order flows from the connector while the cartridge extension adds documented webhooks for shipments/returns, creating a complete timeline. Publishing normalized events to Service Cloud via Platform Events aligns with the API's async models and scales. Option 1 misses a critical support need and contradicts the requirement. Option 2 creates stale timelines and ignores real-time support expectations. Option 4 increases scope and risk unnecessarily when the connector already meets part of the need. The selected design documents event schemas and replay strategy. It defines error handling and idempotency. It clarifies ownership between package and extension. It improves observability with event logs. It's defensible to stakeholders on value and risk.
Question 16 of 60
16. Question
The CMS will own rich content blocks, but engineering proposes embedding HTML snippets in custom attributes. Your spec review must call out gaps and defend a path. What do you recommend?
Explanation:
Option 2 is correct because it separates content ownership from commerce data using IDs and placements, which scales across locales and channels. Sanitization guards security while still giving editors freedom. Preview flows ensure governance and reduce rollbacks. Option 1 invites XSS risks and localization drift and makes versioning painful. Option 3 couples presentation to data and becomes brittle; layout changes then require code changes. Option 4 limits editorial agility and typically fails brand requirements. The proposed approach documents the interface contract and reduces future rework. It supports A/B testing and search indexing. Stakeholders can defend it as a balance of velocity and safety. It creates clear SLOs for content delivery.
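A sketch of resolving content by ID and placement with sanitization before render; the content shape and allow-list are illustrative, and a production build would use a vetted HTML sanitizer rather than these regexes.

```typescript
interface ContentBlock { id: string; locale: string; html: string }

const ALLOWED_TAGS = ["p", "h2", "ul", "li", "strong", "em", "a", "img"];

// Simplified sanitizer: drops script/style blocks and any tag not on the allow-list.
function sanitize(html: string): string {
  return html
    .replace(/<(script|style)[\s\S]*?<\/\1>/gi, "")
    .replace(/<\/?([a-z][a-z0-9]*)[^>]*>/gi, (tag, name: string) =>
      ALLOWED_TAGS.includes(name.toLowerCase()) ? tag : ""
    );
}

// Commerce data stores only the placement/content ID; the CMS owns the HTML itself.
function renderPlacement(placementId: string, blocks: ContentBlock[], locale: string): string {
  const block = blocks.find((b) => b.id === placementId && b.locale === locale);
  return block ? sanitize(block.html) : "";
}
```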
Question 17 of 60
17. Question
A global apparel brand is migrating from a legacy platform to B2C Commerce. The PIM exposes delta GraphQL feeds for 2M SKUs, and the OMS emits fulfillment webhooks. Black Friday requires near-real-time inventory while allowing staged content testing. Which end-to-end design and migration approach is best?
Explanation:
Option 3 is correct because delta generation from the PIM reduces volume and aligns to B2C Commerce import formats (catalog, inventory lists), while the Job Framework provides retryable, observable processing at scale. Persisting deltas enables replay if a run fails and supports backfilling after outages. OMS webhooks shift the order timeline to event-driven updates instead of fragile polling. Using staging for content and configuration lets you validate promotions, search indexes, and page variations before replicating to production. Option 1 cannot meet near-real-time inventory needs and creates long maintenance windows. Option 2 couples storefront availability to external SLAs, increasing latency and incident blast radius. Option 4 adds unnecessary hops and duplicates storage; system objects are the right target for runtime, and weekly cadence is too slow. The chosen approach decomposes sync (batch) and serve (real-time) concerns. It also supports scale testing by replaying deltas at peak rates. Finally, it drives a clean architecture diagram with clear async boundaries and monitoring points.
Question 18 of 60
18. Question
You're consolidating three regional sites into one multi-site B2C Commerce realm. Price books differ per country, and promotions reference specific categories. What load and sequencing plan should you document for the initial migration dress rehearsal?
Explanation:
Option 1 is correct because dependencies flow from foundational configuration to downstream references: promotions depend on categories and price books, and price books depend on the catalog and currencies. Inventory lists must reference existing products and sites. Loading customers and only the most recent orders last keeps data volume manageable while providing end-to-end flows for checkout and service. Executing on staging and replicating to production matches B2C Commerce governance and lets you validate import jobs under production-like conditions. Option 2 inverts dependencies and will break promotion references when catalog entities aren't present. Option 3 misuses production for seeding and complicates audit/rollback, and OCAPI /orders isn't intended for bulk historical backfills. Option 4 conflates content (Page Designer) with product data; price books cannot be inferred reliably from templates. This sequence also aligns with search/vendor indexing prerequisites. It enables repeatable dress rehearsals with measurable KPIs. Finally, it yields a clear swimlane diagram of systems and data flows.
Question 19 of 60
19. Question
Checkout must support 3DS2, token migration from the existing PSP, and resilient settlement during peak. The PSP offers REST with webhooks for auth/capture and a PCI vault export of tokens. What integration and migration plan is preferable?
Explanation:
Option 4 is correct because Service Framework with idempotency ensures retriable, safe auth flows, and webhooks deliver authoritative settlement states without excessive polling. Mapping vault-exported tokens to payment instruments moves PCI scope out of B2C Commerce while preserving saved cards. Option 1 violates PCI principles by holding PANs. Option 2 introduces rate-limit risk and data drift compared with webhook push. Option 3 cannot meet peak resilience or real-time capture scenarios and delays customer value. The recommended plan also documents backoff strategies and webhook signature validation. It outlines a cutover window with dual-write token checks. It identifies monitoring points (auth latency, webhook lag). It cleanly separates sync migration (token import) from real-time checkout. It produces an architecture diagram that shows PSP, webhooks, and storefront boundaries.
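A sketch of validating the PSP webhook signature before trusting a settlement update; the header format and HMAC scheme are assumptions, so the PSP's actual documentation governs.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Verify the HMAC the PSP is assumed to compute over the raw body before acting on the event.
function isValidWebhook(rawBody: string, signatureHex: string, secret: string): boolean {
  const expected = createHmac("sha256", secret).update(rawBody).digest("hex");
  const a = Buffer.from(expected, "hex");
  const b = Buffer.from(signatureHex, "hex");
  return a.length === b.length && timingSafeEqual(a, b); // constant-time compare avoids timing leaks
}

// Only a verified event updates auth/capture state, keyed by the PSP transaction ID so that
// redelivered webhooks remain idempotent.
```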
Question 20 of 60
20. Question
The business wants agents to see a 5-year order history in Service Cloud, but only the last 18 months should appear in customer Order History pages for performance. How do you design data placement and migration?
Explanation:
Option 2 is correct because it places run-time, customer-facing reads on a bounded dataset while giving agents full context through Service Cloud and the data lake. This reduces storefront table growth and index pressure yet satisfies service requirements. Option 1 inflates order tables, slowing queries and increasing quota risks. Option 3 misuses custom objects for transactional data and complicates reporting, indexing, and security. Option 4 fails the requirement and degrades customer experience. The recommended plan includes a migration job that transforms legacy order schemas into normalized events. It documents identity resolution and masking for PII. It sets clear retention policies by channel. It provides agent UI affordances to pivot between recent and historical records. It draws a system diagram showing async feeds and lookup paths. It defines success metrics such as storefront latency and agent handle time.
Question 21 of 60
21. Question
Your search provider must index 12M SKUs across 8 locales with freshness under 5 minutes. The provider supports event-driven indexing with backpressure and bulk fallbacks. What integration pattern and SFCC components should you propose?
Correct
Option 3 is correct because event-driven indexing with buffered, idempotent deltas achieves freshness without hammering bulk endpoints, and scheduled replays mitigate outages. The Job Framework suits generating reliable change events, and isolating SCAPI/OCAPI keeps runtime APIs clean. Option 1 misses the freshness SLA and creates merch frustration. Option 2 ties page latency to external SLAs and risks errors surfacing to shoppers. Option 4 exposes secrets in the browser and can be throttled by clients. The recommended design also documents retry policies and dead-letter handling. It defines a backfill path using bulk endpoints. It includes monitoring on lag and failure rates. It maps locale handling to separate indexes or fields. It diagrams flows from catalog import → change event → index. It sets KPIs for freshness and error budgets.
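To make "buffered, idempotent deltas" concrete, a small TypeScript sketch follows. The ChangeEvent shape, the publish() transport, and the retry limits are assumptions; the real search provider defines the event contract.

```typescript
// Buffer change events per SKU/locale, keep only the newest version so replays
// stay idempotent, retry with backoff, and park failures in a dead-letter list.
type ChangeEvent = { sku: string; locale: string; version: number };

export class DeltaBuffer {
  private pending = new Map<string, ChangeEvent>();
  readonly deadLetter: ChangeEvent[] = [];

  add(event: ChangeEvent): void {
    const key = `${event.sku}:${event.locale}`;
    const existing = this.pending.get(key);
    if (!existing || existing.version < event.version) {
      this.pending.set(key, event);
    }
  }

  async flush(
    publish: (batch: ChangeEvent[]) => Promise<void>,
    maxAttempts = 3
  ): Promise<void> {
    const batch = [...this.pending.values()];
    this.pending.clear();
    for (let attempt = 1; attempt <= maxAttempts; attempt++) {
      try {
        await publish(batch);
        return;
      } catch {
        // Exponential backoff before the next attempt.
        await new Promise((r) => setTimeout(r, 2 ** attempt * 100));
      }
    }
    this.deadLetter.push(...batch); // replayed later by the scheduled backfill
  }
}
```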
Question 22 of 60
22. Question
You're integrating a tax service across 15 countries with frequent rule updates. Docs show an async quote → commit flow with webhook confirmations and recommended caching. What approach should you specify, and how do you migrate tax mappings?
Correct
Option 3 is correct because the service's async model with cache reduces latency and cost while providing accurate commits at checkout, and reconciling tax categories during migration ensures proper rule application. Option 1 conflates pricing with tax law and cannot keep pace with updates. Option 2 over-calls on browse traffic and will hit quotas. Option 4 oversimplifies complex tax regimes and fails compliance. The plan includes idempotency for commit calls and webhook signature validation. It documents cache invalidation on cart changes. It outlines test cases for tricky promotions and returns. It identifies monitoring points like webhook lag and error rates. It provides a rollback to cached quotes if the service is down. It includes a diagram showing browse/checkout split and async callbacks.
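One way to picture "cache invalidation on cart changes" is to derive the cache key from a hash of the cart itself, so any change naturally misses the cache. The sketch below is a TypeScript illustration only; line-item and quote shapes, the TTL, and the fetchQuote() call are assumptions.

```typescript
import { createHash } from "crypto";

// Short-TTL tax-quote cache keyed by a hash of cart contents and destination.
type LineItem = { sku: string; qty: number; price: number };
type TaxQuote = { total: number; expiresAt: number };

const quoteCache = new Map<string, TaxQuote>();
const TTL_MS = 60_000;

function cartKey(items: LineItem[], shipToCountry: string): string {
  const canonical = JSON.stringify({ items, shipToCountry });
  return createHash("sha256").update(canonical).digest("hex");
}

export async function getTaxQuote(
  items: LineItem[],
  shipToCountry: string,
  fetchQuote: () => Promise<number>
): Promise<number> {
  const key = cartKey(items, shipToCountry);
  const cached = quoteCache.get(key);
  if (cached && cached.expiresAt > Date.now()) return cached.total;
  const total = await fetchQuote(); // async quote call to the tax service
  quoteCache.set(key, { total, expiresAt: Date.now() + TTL_MS });
  return total;
}
```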
Question 23 of 60
23. Question
Identity must unify logins across Commerce, Service, and Marketing while supporting consent management and progressive profiling. Legacy sites stored passwords locally. What architecture and migration path do you propose?
Correct
Option 4 is correct because it cleanly separates identity to an IdP, uses gateway-issued JWTs to create Commerce sessions, and propagates profile/consent via CDC. The phased cutover plan reduces risk and avoids storing passwords in Commerce. Option 2 is close but lacks the rollout details and cross-domain session considerations emphasized for multi-cloud. Option 1 continues a risky pattern and complicates compliance. Option 3 exposes tokens in the browser and weakens security. The recommended plan includes consent versioning and audit. It documents token lifetimes and refresh patterns. It defines customer experience for first login post-cutover. It diagrams auth flows and event paths. It sets KPIs for login success and session stability. It outlines rollback to legacy auth if needed.
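For the "gateway-issued JWTs" step, a minimal TypeScript sketch of verifying an HS256 token before creating a Commerce session is shown below. Real deployments would normally use RS256 against the IdP's JWKS via a vetted library; the claim names and secret handling here are assumptions for illustration.

```typescript
import { createHmac, timingSafeEqual } from "crypto";

// Verify an HS256 JWT and return its claims, or null if invalid/expired.
export function verifyJwt(
  token: string,
  secret: string
): Record<string, unknown> | null {
  const [header, payload, signature] = token.split(".");
  if (!header || !payload || !signature) return null;
  const expected = createHmac("sha256", secret)
    .update(`${header}.${payload}`)
    .digest("base64url");
  const a = Buffer.from(expected);
  const b = Buffer.from(signature);
  if (a.length !== b.length || !timingSafeEqual(a, b)) return null;
  const claims = JSON.parse(Buffer.from(payload, "base64url").toString("utf8"));
  if (typeof claims.exp === "number" && claims.exp * 1000 < Date.now()) return null;
  return claims; // e.g. hand claims.sub to the session-creation flow
}
```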
Question 24 of 60
24. Question
A subscription program needs real-time pricing preview during checkout and reliable renewal processing later. The vendor offers REST with preview endpoints and renewal webhooks. How should you split real-time vs. batch, and what migration is needed?
Correct
Option 2 is correct because preview endpoints belong in the real-time path with Service Framework controls, while renewals and dunning are asynchronous and fit Job Framework plus webhook consumption. Vendor exports provide authoritative subscription IDs for migration and mapping. Option 1 cannot deliver accurate checkout previews and will cause bill shock. Option 3 exposes secrets and moves transactional state to unsupported objects. Option 4 misrepresents entitlements and collapses complex lifecycle needs. The plan documents idempotency for create/update calls. It includes retry and dead-letter handling for webhooks. It sets data contracts for entitlement flags used on PDP/Cart. It diagrams the split between synchronous and asynchronous flows. It defines KPIs for preview latency and renewal success. It includes rollback strategy for failed renewals.
Question 25 of 60
25. Question
SEO requires preserving legacy URLs while moving to a locale-aware structure. You have a CSV of ~500k legacy slugs and their canonical targets. What approach should be in the spec?
Correct
Option 3 is correct because precomputing redirects into the platform's alias module scales to hundreds of thousands of entries, respects locale and currency routing, and enables validation before replication. Option 1 loses link equity and creates needless 404s. Option 2 blocks the desired locale structure and causes ongoing inconsistency. Option 4 adds per-request latency and custom object bloat, and may exceed quota under load. The plan includes a migration dress rehearsal using a subset. It documents 301 vs. 302 rules. It integrates crawl log feedback to catch misses. It defines monitoring for 404 rates post-launch. It diagrams the URL resolution flow. It sets KPIs such as crawl budget and conversion impact. It provides rollback by restoring the previous alias set.
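The "validation before replication" step can be made concrete with a small pre-check over the parsed CSV: flag duplicate source slugs and redirect chains that loop. This TypeScript sketch assumes the rows are already parsed; the row shape is illustrative.

```typescript
// Pre-validate legacy-slug → target mappings before loading the alias set.
type RedirectRow = { from: string; to: string };

export function validateRedirects(rows: RedirectRow[]): {
  duplicates: string[];
  loops: string[];
} {
  const target = new Map<string, string>();
  const duplicates: string[] = [];
  for (const { from, to } of rows) {
    if (target.has(from)) duplicates.push(from);
    target.set(from, to);
  }
  const loops: string[] = [];
  for (const start of target.keys()) {
    const seen = new Set<string>([start]);
    let current = target.get(start);
    // Follow the chain while each target is itself a redirect source.
    while (current !== undefined && target.has(current)) {
      if (seen.has(current)) {
        loops.push(start);
        break;
      }
      seen.add(current);
      current = target.get(current);
    }
  }
  return { duplicates, loops };
}
```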
Question 26 of 60
26. Question
ERP will push CDC for inventory and prices with strict rate limits. Marketing wants price overrides live within 15 minutes; ops needs accurate OOS at scale. What architecture balances limits, freshness, and resilience?
Correct
Option 2 is correct because a mediation layer (e.g., MuleSoft) can enforce rate limits, buffer spikes, and replay missed messages, while B2C Commerce consumes deltas via supported imports. A hot-fix SCAPI path allows urgent corrections without waiting for the next batch. Option 1 ties customer latency to ERP and risks rate-limit errors in the critical path. Option 3 misses the freshness target and adds operational toil. Option 4 pushes operational concerns into the web tier and violates separation of concerns. The recommended design documents sequencing (price before promo eval, inventory before OOS checks). It includes poison-queue handling and alerting. It defines monitoring on lag and success rates. It diagrams the CDC → iPaaS → SFCC flow. It sets KPIs for freshness and accuracy. It includes fallback to last-known-good deltas if CDC pauses.
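A simplified TypeScript sketch of the mediation behaviour (enforce the downstream rate limit, buffer spikes, replay on failure) is below. The per-second limit, message shape, and send() transport are assumptions; an iPaaS would provide equivalent policies out of the box.

```typescript
// Token-bucket forwarder: refills once per second, queues bursts, requeues on failure.
export class RateLimitedForwarder<T> {
  private queue: T[] = [];
  private tokens: number;

  constructor(
    private readonly ratePerSecond: number,
    private readonly send: (msg: T) => Promise<void>
  ) {
    this.tokens = ratePerSecond;
    setInterval(() => {
      this.tokens = this.ratePerSecond; // refill the bucket
      void this.drain();
    }, 1000);
  }

  enqueue(msg: T): void {
    this.queue.push(msg);
    void this.drain();
  }

  private async drain(): Promise<void> {
    while (this.tokens > 0 && this.queue.length > 0) {
      this.tokens--;
      const msg = this.queue.shift()!;
      try {
        await this.send(msg);
      } catch {
        this.queue.unshift(msg); // keep for replay on the next refill
        return;
      }
    }
  }
}
```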
Question 27 of 60
27. Question
During cutover, you must migrate 1.8M customer profiles with hashed passwords, consents, and loyalty IDs from multiple regions. Some identities are duplicates. What migration and integration approach will you defend?
Correct
Option 2 is correct because pre-normalizing identities prevents duplicate accounts, external IDs preserve lineage, consent versioning supports audits, and moving auth to an IdP enables secure password upgrades. Backfilling loyalty with vendor exports maintains entitlements. Option 1 disregards duplication and loses auditability of consent. Option 3 cements fragmentation and undermines the program's goals. Option 4 misuses custom objects for authentication and adds latency/security concerns. The plan documents a dry-run migration with reconciliation reports. It defines PII masking and regional residency handling. It outlines customer communications. It diagrams flows among MDM, IdP, and Commerce. It sets KPIs for successful logins and dedupe accuracy. It includes rollback for accounts with conflicts.
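As a sketch of the pre-normalization step, the TypeScript below lower-cases and trims emails, then groups profiles that collapse to the same key so they can be merged or routed to review. The profile shape is an assumption; a real MDM would also match on phone, address, and loyalty ID.

```typescript
// Group legacy profiles by normalized email and keep only ambiguous groups.
type LegacyProfile = { id: string; email: string; region: string };

export function groupDuplicates(
  profiles: LegacyProfile[]
): Map<string, LegacyProfile[]> {
  const groups = new Map<string, LegacyProfile[]>();
  for (const profile of profiles) {
    const key = profile.email.trim().toLowerCase();
    const bucket = groups.get(key) ?? [];
    bucket.push(profile);
    groups.set(key, bucket);
  }
  // Keep only keys with more than one profile: these need dedupe decisions.
  for (const [key, bucket] of groups) {
    if (bucket.length < 2) groups.delete(key);
  }
  return groups;
}
```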
Question 28 of 60
28. Question
You're replacing a heavily customized SiteGenesis build with SFRA. The business requires multi-site branding, localized promotions, and minimal regression risk. Which implementation process best ensures the solution meets these requirements?
Correct
Option 3 is correct because it ties epics and acceptance criteria to concrete SFRA extension points, ensuring coverage and reducing regression risk. Cartridge inheritance and ISML decorators keep brand variations modular without duplicating controllers. Proof-of-concepts on promotions de-risk complex business rules early. A traceability matrix makes it transparent which business goals are satisfied by which technical components. Automated integration tests guard against regressions across multi-site behavior. Option 1 creates brittle duplication and inflates maintenance. Option 2 defers essential localization and bundles concerns into a monolith, which complicates future growth. Option 4 ignores modern controllers, increases technical debt, and lacks alignment to SFRA best practices. The chosen process also supports phased rollout by brand while keeping a common core. Finally, it enables repeatable builds and cleaner code reviews tied to business outcomes.
Question 29 of 60
29. Question
The company is going headless with React. Requirements: single sign-on with SLAS, SCAPI for cart/checkout, and resilient performance under peak. Which implementation plan best ensures the build meets those needs?
Correct
Option 2 is correct because a BFF enforces SLAS token hygiene, abstracts scopes, and centralizes resilience patterns (circuit breaking, caching) to meet peak performance goals. Contract tests ensure the React app and BFF agree on payloads, reducing runtime surprises. Chaos drills validate that retries and fallbacks behave as intended under failure. Option 1 weakens security by persisting tokens in the browser and lacks granular control. Option 3 skips SLAS and over-relies on client retries, which can amplify load during incidents. Option 4 mixes APIs and defers identity, risking rework and inconsistent behavior. The recommended plan also eases observability with correlation IDs across BFF → SCAPI calls. It supports feature flags for progressive release. It provides an audit trail of scopes used per endpoint. It aligns directly to the business needs of SSO, robust checkout, and performance SLAs.
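The circuit-breaking half of those resilience patterns can be sketched in a few lines of TypeScript. Thresholds, cool-down, and what counts as a "failure" are assumptions; a production BFF would likely use an established resilience library instead.

```typescript
// After maxFailures consecutive errors the circuit opens and calls fail fast
// until the cool-down elapses; a successful trial call closes it again.
export class CircuitBreaker {
  private failures = 0;
  private openedAt = 0;

  constructor(
    private readonly maxFailures = 5,
    private readonly coolDownMs = 30_000
  ) {}

  async call<T>(fn: () => Promise<T>): Promise<T> {
    if (this.failures >= this.maxFailures) {
      if (Date.now() - this.openedAt < this.coolDownMs) {
        throw new Error("circuit open: failing fast");
      }
      this.failures = 0; // half-open: allow a trial call through
    }
    try {
      const result = await fn();
      this.failures = 0;
      return result;
    } catch (err) {
      this.failures++;
      if (this.failures >= this.maxFailures) this.openedAt = Date.now();
      throw err;
    }
  }
}
```

Wrapping each SCAPI call in `breaker.call(() => fetchBasket(...))` keeps outage behaviour deterministic and observable.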
Question 30 of 60
30. Question
Finance mandates external tax with variable nexus rules. Requirements: accurate tax on checkout, graceful degradation, and auditability. Which implementation process meets the requirements?
Correct
Option 2 is correct because Service Framework plus idempotency ensures reliable, replay-safe calls, while short-TTL caching meets performance without sacrificing accuracy. A formal functional integration test pack verifies promotions, shipping, and returns across scenarios. Storing signed responses creates an audit trail meeting Finance needs. Explicit outage fallbacks protect the checkout experience. Option 1 replicates a tax engine poorly and reduces accuracy. Option 3 overloads browse traffic and risks rate limits. Option 4's static tables won't handle complex nexus rules. The process also documents responsibility matrices with the provider and deploy gates that require passing FIT packs. It includes monitoring of quote latency and error rates. It ensures privacy by not persisting PII beyond what is required.
Question 31 of 60
31. Question
Analytics needs event streaming for checkout and account events, gated by consent and resilient to outages. What must your spec include?
Correct
The spec needs structure and governance, not ad-hoc logging. A taxonomy ensures consistent event names and fields. Consent flags prevent unlawful collection and keep behavior predictable. Data layer contracts make front-end and back-end integration deterministic. Dual real-time/batch paths improve timeliness and reliability. Retries and DLQ protect against transient failures while backfill repairs gaps. Option 3 is slow, heavy, and risky for PII. Option 4 breaks compliance and user trust. Option 1 is too vague to implement. The chosen answer is implementable, auditable, and aligns with privacy by design.
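A consent-gated emitter with a dead-letter path can be expressed compactly; the sketch below is illustrative TypeScript, and the event/consent shapes, purposes, and send() transport are assumptions rather than a specific analytics SDK.

```typescript
// Forward an event only when the matching consent purpose is granted;
// failed sends go to a dead-letter list for the batch/backfill path.
type AnalyticsEvent = {
  name: string;
  purpose: "analytics" | "marketing";
  payload: object;
};
type ConsentState = Record<"analytics" | "marketing", boolean>;

export async function emitEvent(
  event: AnalyticsEvent,
  consent: ConsentState,
  send: (e: AnalyticsEvent) => Promise<void>,
  deadLetter: AnalyticsEvent[]
): Promise<void> {
  if (!consent[event.purpose]) return; // drop silently: no lawful basis
  try {
    await send(event);
  } catch {
    deadLetter.push(event); // replayed later to repair gaps
  }
}
```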
Question 32 of 60
32. Question
You must migrate 220k legacy redirects and canonical tags while switching to locale-aware domains. How should this be executed to minimize SEO risk and operational impact?
Correct
Chunked, pre-validated redirect imports reduce the risk of timeouts and catch loops before go-live. Prioritizing 301s preserves link equity and helps search engines settle quickly. Locale-aware mapping avoids sending users to the wrong domain. Pre-warming caches makes initial traffic smoother. Option 1 concentrates risk into a single failure point and unnecessary full reindex. Option 3 is SEO-hostile and provides poor UX. Option 4 teaches search engines impermanence and delays ranking stabilization. Sampled testing provides confidence without crawling all URLs. The plan allows quick rollback by removing a batch. It keeps platform resources available for other cutover tasks. It aligns with best practice for large redirect sets.
Question 33 of 60
33. Question
Inventory will be sourced from a new OMS: one full baseline the weekend before launch and 15-minute deltas after DNS cutover. PDP/PLP must not show stale stock during the switch. What migration pattern should you use?
Correct
Freezing legacy updates removes drifting targets while you build the baseline. A parity check confirms correctness prior to traffic. Queuing and draining deltas keeps you consistent across the switch window. Enabling traffic only after draining ensures PDP/PLP reflect the latest state. Option 1 risks incoherence if product/price are not stable yet. Option 3 invites holes because deltas can miss older records. Option 4 harms conversion and undermines confidence. The chosen pattern is resilient to variable cutover timing. It minimizes customer-visible inventory flicker. It also yields a clean rollback path by re-pointing to legacy until parity passes.
Question 34 of 60
34. Question
Lower environments need production-like data for end-to-end UAT, but PII must be protected and marketing consents honored. What is the correct data seeding plan?
Correct
Irreversible masking satisfies privacy while keeping data realistic for workflows. Tokenized emails prevent accidental sends, and synthetic payment artifacts avoid compliance exposure. Preserving consent states enables testing of opt-in/opt-out logic. Suppressing notifications avoids noisy side effects during import. Option 1 breaks privacy and contractual obligations. Option 3 can't capture the distribution and edge cases seen in production. Option 4 is dangerous because reversibility risks a breach. The plan balances realism and compliance. It fits CI/UAT cycles and can be automated. It also supports repeatable refreshes with audit logs.
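For illustration, a TypeScript sketch of the masking transform: emails become deterministic, undeliverable tokens (stable across refreshes so relationships survive) and names are replaced. Field names are assumptions, and in practice a secret salt kept outside the lower environments would make the tokens harder to reverse.

```typescript
import { createHash } from "crypto";

// Replace PII with stable, undeliverable tokens while keeping the join key.
type CustomerRecord = {
  customerNo: string;
  email: string;
  firstName: string;
  lastName: string;
};

export function maskCustomer(record: CustomerRecord): CustomerRecord {
  const token = createHash("sha256")
    .update(record.email.trim().toLowerCase()) // add a secret salt in practice
    .digest("hex")
    .slice(0, 16);
  return {
    customerNo: record.customerNo,        // keep the join key
    email: `${token}@example.invalid`,    // undeliverable by design
    firstName: "Masked",
    lastName: token.slice(0, 6),
  };
}
```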
Question 35 of 60
35. Question
You are migrating multi-currency price books with locale rounding rules and tiered B2C pricing. Which sequence prevents mispriced items appearing in search and PLP during cutover?
Correct
Catalog must exist first, and prices should be complete before indexing for accurate facets and sorting. Derived currency books depend on base values to apply rounding matrices correctly. A single reindex after all prices land avoids half-priced items in PLP. Replicating per-locale keeps blast radius small. Option 1 risks incomplete currency coverage in the index if some books are late. Option 2 creates thrash and inconsistent multi-currency views. Option 4 inverts dependencies and can produce invalid effective prices. Validation of rounding and tiers before the reindex reduces backouts. The plan supports deterministic replication scheduling. It also eases rollback by reapplying the prior index snapshot.
Question 36 of 60
36. Question
Content assets and slot configurations with translations must be live at launch. Translated assets may arrive hours after base content. What is the safest plan?
Correct
Base content can go live without a full index rebuild because assets/slots don't require it. Staging translations until QA avoids broken experiences. Batch activation plus targeted cache purges minimize operational risk. Option 1 replicates too frequently and burns the window needlessly. Option 3 delays launch and isn't business-friendly. Option 4 couples unrelated concerns and increases index churn. The dashboard gives stakeholders visibility by locale. This plan allows partial but correct experiences that later improve with translations. It reduces rollback complexity to cache reinstatement and asset versioning. It aligns with the platform's caching and content lifecycles.
Question 37 of 60
37. Question
You want to de-risk launch with a full rehearsal of data migration and cutover. What must be included for a meaningful dress rehearsal?
Correct
Only an end-to-end rehearsal exposes sequencing bugs, environment limits, and duration risks. Using production-sized data validates throughput and quota assumptions. Including deltas tests cutover windows and queue draining. Reconciliation reports prove correctness and provide baseline timings. Option 1 focuses on a single stream and misses cross-dependencies. Option 3 ignores the dominant launch risk: data readiness and timing. Option 4 yields false confidence without empirical data. Repetition until SLAs pass lowers the likelihood of surprises. Rollback drills ensure you can recover under pressure. Documented checkpoints make success measurable and repeatable.
Question 38 of 60
38. Question
A retailer needs multi-site, multi-locale checkout with SLAS SSO, device-level cart persistence, and fraud checks before authorization. Which technical specification set best translates the requirement?
Correct
The correct spec must encode flows, contracts, states, and NFRs so engineers can implement without guessing. Sequence diagrams clarify the exact step order for SLAS login and cart merge across devices. API contracts remove ambiguity about request/response shapes and authentication. State models for baskets and sessions prevent edge-case bugs from unclear lifecycles. NFRs like TTFB or P95 set performance targets testable in CI/CD. Error matrices ensure predictable handling of fraud declines, timeouts, and partial failures. Option 1 is too shallow and postpones critical SSO scope. Option 3 lists endpoints but omits states and acceptance tests. Option 4 is tracking, not a specification, and provides no implementable detail. A tight spec like option 2 aligns directly to the customers goals and is verifiable.
Question 39 of 60
39. Question
Marketing wants Page Designer components that render Einstein recommendations with graceful fallback if the model or API is unavailable. What belongs in the spec?
Correct
The requirement is behavioral under failure, so the spec must define cache, fallback, and observability. Slot inputs and contracts make Page Designer composition concrete for content authors. Cache keys and TTLs constrain stale or erroneous recs from polluting the page. Fallback rules (e.g., show top sellers) replace hand-waving about resilience with deterministic behavior. A/B test flags enable experimentation without code changes. Telemetry events create signals for monitoring and SLOs. Option 1 provides aspirations, not executable design. Option 2 outsources design to QA and leaves gaps. Option 4 is a mock, not a specification, and ignores failure semantics. The chosen answer translates business outcomes into technical controls.
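The fallback behavior the spec should pin down might look like the TypeScript sketch below: try the recommender, serve a short-TTL cached result when possible, and fall back to a static top-sellers list when both fail. All names are illustrative; the real Einstein and Page Designer wiring differs.

```typescript
// Cache per slot with a 5-minute TTL and a deterministic fallback on failure.
type ProductTile = { id: string; name: string };

const recCache = new Map<string, { tiles: ProductTile[]; expiresAt: number }>();
const TTL_MS = 5 * 60_000;

export async function getRecommendations(
  slotId: string,
  fetchRecs: () => Promise<ProductTile[]>,
  topSellers: ProductTile[]
): Promise<ProductTile[]> {
  const cached = recCache.get(slotId);
  if (cached && cached.expiresAt > Date.now()) return cached.tiles;
  try {
    const tiles = await fetchRecs();
    recCache.set(slotId, { tiles, expiresAt: Date.now() + TTL_MS });
    return tiles;
  } catch {
    // Stale cache is acceptable during an outage; otherwise show top sellers.
    return cached?.tiles ?? topSellers;
  }
}
```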
Question 40 of 60
40. Question
The business is adding a "redeem loyalty points" option at checkout with real-time balance checks and idempotent charges. Which spec element set is correct?
Correct
The spec must show how data and calls behave in real time and avoid double charging. Mapping balances into structured basket attributes keeps totals consistent and auditable. Idempotency keys protect against duplicate redemption on retries. Service Framework retries with backoff handle transient network issues. Option 2 defers all design and risks UX and correctness problems. Option 3 violates the real-time requirement and misrepresents redemption as coupons. Option 4 is presentation-only with no implementable integration detail. By locking contract, data shape, and retry semantics, option 1 is directly buildable and testable. It also supports reconciliation of redemptions post-purchase. Finally, it makes rollback clear by removing redemption without basket corruption.
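A minimal TypeScript sketch of the idempotency and retry semantics follows. The request/response shapes and the way the key is transmitted are assumptions; the essential point is that the same idempotency key is reused across every retry of one logical redemption.

```typescript
import { randomUUID } from "crypto";

// Idempotent redemption call with bounded retries and exponential backoff.
export async function redeemPoints(
  customerId: string,
  points: number,
  post: (body: object, idempotencyKey: string) => Promise<{ ok: boolean }>
): Promise<boolean> {
  const idempotencyKey = randomUUID(); // one key per logical redemption
  for (let attempt = 1; attempt <= 3; attempt++) {
    try {
      const response = await post({ customerId, points }, idempotencyKey);
      return response.ok;
    } catch {
      await new Promise((r) => setTimeout(r, 2 ** attempt * 200)); // backoff
    }
  }
  return false; // surface as a recoverable checkout error, never a double charge
}
```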
Question 41 of 60
41. Question
A strict performance goal requires PDP TTFB ≤ 1.2 s at P95 under peak, using CDN edge caching with currency/locale variance and origin failover. What should your spec mandate?
Correct
Meeting a quantified goal needs precise rules, not slogans. Vary keys ensure correct content per currency and locale without cache poisoning. ESI or fragment strategies let dynamic blocks bypass full-page cache misses. An origin failover plan is necessary for availability during partial outages. Synthetic monitoring validates P95 continuously rather than once. A capacity model guards against under-provisioning. Option 1 is non-actionable. Option 2 risks stale or incorrect content and ignores variation. Option 4 shifts work to the browser and likely increases TTFB and complexity. The chosen answer ties technical controls to measurable outcomes and verifiable tests.
Question 42 of 60
42. Question
Legal mandates WCAG 2.2 AA, while Finance requires precise multi-currency rounding rules; both must be testable. Which spec fits?
Correct
The spec should convert policy into testable artifacts. Gherkin scenarios communicate behavior to business and QA. Component-level WCAG checklists localize responsibility and prevent gaps. A rounding matrix removes ambiguity across currencies and locales. CI hooks catch regressions early with automated accessibility scans. Option 1 is not verifiable. Option 3 is documentation without executable checks. Option 4 focuses on visuals and ignores behavioral and numerical precision. The selected answer makes requirements measurable and enforceable. It also eases audits by linking tests to controls. That alignment eliminates ambiguity at delivery time.
Question 43 of 60
43. Question
The OMS requires exactly-once order submission with compensation if capture fails after confirmation. What belongs in the spec?
Correct
Exactly-once behavior needs explicit mechanics, not aspiration. Correlation IDs make tracing and deduplication possible across services. An idempotency window defines how long duplicates must be recognized. Store-and-forward ensures transient outages do not lose orders; a DLQ isolates poison messages. Compensating actions are necessary when downstream capture fails after confirmation. Retry/backoff profiles must align with the OMS SLA to avoid cascading failures. Option 1 is vague and unsafe. Option 2 finds issues late and violates exactly-once semantics. Option 3 risks duplicates by design. The chosen elements create determinism and auditability and are testable. They also let operations tune behavior safely over time.
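A minimal sketch of an idempotency window, assuming a durable store stands behind the in-memory map; the key and window values are illustrative:

```typescript
// Illustrative sketch: deduplicate order submissions inside an idempotency
// window keyed by an idempotency/correlation ID. The in-memory map stands in
// for whatever durable store a real implementation would use.
interface SubmissionRecord { result: string; expiresAt: number }

class IdempotentSubmitter {
  private seen = new Map<string, SubmissionRecord>();

  constructor(private windowMs: number, private submit: (orderNo: string) => string) {}

  submitOnce(idempotencyKey: string, orderNo: string): string {
    const now = Date.now();
    const prior = this.seen.get(idempotencyKey);
    if (prior && prior.expiresAt > now) {
      // Duplicate inside the window: return the original result, do not resubmit.
      return prior.result;
    }
    const result = this.submit(orderNo);
    this.seen.set(idempotencyKey, { result, expiresAt: now + this.windowMs });
    return result;
  }
}

// Usage: the second call with the same key is recognized as a duplicate.
const submitter = new IdempotentSubmitter(24 * 60 * 60 * 1000, (orderNo) => `accepted:${orderNo}`);
console.log(submitter.submitOnce("key-123", "ORDER-1")); // accepted:ORDER-1
console.log(submitter.submitOnce("key-123", "ORDER-1")); // same result, no second submission
```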
Question 44 of 60
44. Question
Customer service needs Business Manager search by email, phone, and external order ID with sub-second response for the last two years. What should the spec contain?
Correct
The requirement targets searchability and latency, so the spec must define indexed fields and SLAs. Custom attributes allow storing external identifiers in supported structures. Index configuration ensures BM can query by those fields efficiently. Replication notes prevent stale data across Staging/Production. Option 2 is manual and non-compliant with security and governance. Option 3 is a visualization without implementation details. Option 4 treats performance as infrastructure only, ignoring data design and index strategy. The correct answer is implementable, testable, and aligned to the stated outcome. It also eases future extensions like partial PII masking. Finally, it supports monitoring of query performance against the SLA.
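A minimal sketch that captures the search spec as data shared by QA and monitoring; the field names and the SLA figure are assumptions:

```typescript
// Illustrative sketch: express the searchable-order spec as one data object so
// index configuration, QA, and monitoring all reference the same definition.
// Field names, retention, and the latency target are hypothetical.
interface OrderSearchSpec {
  indexedFields: string[];    // attributes the index must cover
  retentionMonths: number;    // how far back search must reach
  p95LatencyMsTarget: number; // the sub-second SLA, expressed as P95
}

const spec: OrderSearchSpec = {
  indexedFields: ["customerEmail", "phone", "externalOrderId"],
  retentionMonths: 24,
  p95LatencyMsTarget: 800,
};

// A monitoring job can assert measured latency against the same spec object.
function meetsSla(measuredP95Ms: number, s: OrderSearchSpec): boolean {
  return measuredP95Ms <= s.p95LatencyMsTarget;
}

console.log(meetsSla(640, spec)); // true while the index keeps up with the SLA
```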
Question 45 of 60
45. Question
SEO requires canonicalization, hreflang, and strict legacy URL preservation during a domain/locale split. Which spec is right?
Correct
The requirement spans generation, mapping, and caching, not just redirects. A URL schema defines deterministic paths for products and content. A primary domain per locale prevents duplicate content in search results. Canonical and hreflang rules encode the correct signals for search engines. A job to generate and validate the redirect matrix reduces manual errors. Edge headers ensure crawlers observe intended behavior consistently. Options 1 and 2 are partial and invite SEO regressions. Option 4 is necessary but insufficient alone. The chosen spec covers design, operations, and observability facets of SEO. It directly maps to measurable outcomes in rankings and crawl efficiency.
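A minimal sketch of a redirect-matrix validation step; the entries and the hop limit are illustrative:

```typescript
// Illustrative sketch: validate a generated redirect matrix before it ships,
// catching missing targets and redirect chains/loops. Entries are hypothetical.
type RedirectMatrix = Record<string, string>; // legacy path -> new path

function validateRedirects(matrix: RedirectMatrix, liveTargets: Set<string>): string[] {
  const problems: string[] = [];
  for (const [from, to] of Object.entries(matrix)) {
    if (!liveTargets.has(to) && !(to in matrix)) {
      problems.push(`${from} -> ${to}: target does not exist`);
    }
    // Follow the chain to detect loops or chains longer than a few hops.
    let hops = 0;
    let cursor = to;
    while (cursor in matrix) {
      cursor = matrix[cursor];
      hops += 1;
      if (hops > 5 || cursor === from) {
        problems.push(`${from}: redirect chain or loop detected`);
        break;
      }
    }
  }
  return problems;
}

// Example run with a deliberate loop to show the report a CI step would fail on.
const report = validateRedirects(
  { "/old/shoes": "/en-gb/shoes", "/a": "/b", "/b": "/a" },
  new Set(["/en-gb/shoes"])
);
console.log(report);
```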
Question 46 of 60
46. Question
A promotion library migration must land on Production at 05:30 with zero invalid references (products, price books, customer groups). Catalog and price books finish importing at 05:10. Which plan is best?
Correct
Validating dependencies after catalog and price books complete ensures promotions won't reference missing entities. Auto-disabling invalid promotions protects storefront integrity. Importing promotions next and replicating data plus index as a single unit keeps the site view consistent. Option 2 breaches governance and invites human error. Option 3 risks the storefront showing products without associated promo metadata. Option 4 couples concerns that are better sequenced, increasing the failure blast radius. The proposed plan confines risk to a narrow window with clear checkpoints. It aligns with replication consistency principles. It enables deterministic rollback by re-running with a clean state. Logging makes audits straightforward. This flow supports on-time activation with minimal chaos.
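A minimal sketch of the dependency check that could run between the 05:10 import and the promotion import; the object shapes are assumptions:

```typescript
// Illustrative sketch: check promotion references against the just-imported
// catalog and price books, auto-disabling anything that points at a missing
// entity before promotions are imported and replicated. Shapes are hypothetical.
interface Promotion {
  id: string;
  enabled: boolean;
  productIds: string[];
  priceBookId: string;
  customerGroupId: string;
}

function disableInvalidPromotions(
  promotions: Promotion[],
  knownProducts: Set<string>,
  knownPriceBooks: Set<string>,
  knownGroups: Set<string>
): string[] {
  const disabled: string[] = [];
  for (const promo of promotions) {
    const missingProduct = promo.productIds.some((p) => !knownProducts.has(p));
    const valid =
      !missingProduct &&
      knownPriceBooks.has(promo.priceBookId) &&
      knownGroups.has(promo.customerGroupId);
    if (!valid) {
      promo.enabled = false; // protect the storefront; log the ID for the audit trail
      disabled.push(promo.id);
    }
  }
  return disabled;
}

// Usage: run after the catalog/price book import completes, log the returned
// IDs, then proceed with the promotion import and the single replication unit.
```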
Question 47 of 60
47. Question
Gift card processing must support balance check, authorization, capture, partial refunds, strict rate limits, and offline fallback. What should the spec dictate?
Correct
Different operations have different latency and idempotency characteristics, so profiles must be tuned individually. Circuit breakers protect checkout from provider instability. Rate-limit handling avoids breaches and throttling penalties. An offline voucher cache maintains UX during brief outages while TTL limits risk. Reconciliation ensures financial accuracy and detects drift. Option 1 over-generalizes and increases failure risk. Option 2 abdicates responsibility for resilience. Option 3 violates real-time requirements and risks stale balances. The selected approach encodes reliability, compliance, and customer experience. It is verifiable with targeted failure tests. It also provides clear rollback paths and monitoring hooks.
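A minimal sketch of a circuit breaker with a fallback, assuming per-operation instances with their own thresholds; the values are placeholders:

```typescript
// Illustrative sketch: a minimal circuit breaker so gift card calls stop
// hammering an unstable provider. Thresholds and timings are placeholders.
class CircuitBreaker {
  private failures = 0;
  private openUntil = 0;

  constructor(private maxFailures: number, private cooldownMs: number) {}

  async call<T>(operation: () => Promise<T>, fallback: () => T): Promise<T> {
    if (Date.now() < this.openUntil) {
      return fallback(); // circuit open: fail fast, use the offline/voucher fallback
    }
    try {
      const result = await operation();
      this.failures = 0; // a healthy call closes the circuit again
      return result;
    } catch {
      this.failures += 1;
      if (this.failures >= this.maxFailures) {
        this.openUntil = Date.now() + this.cooldownMs; // trip the breaker
        this.failures = 0;
      }
      return fallback();
    }
  }
}

// Usage: each operation gets its own breaker so a balance-check outage does not
// trip the breaker that guards capture.
const balanceBreaker = new CircuitBreaker(3, 30_000);
const captureBreaker = new CircuitBreaker(5, 60_000);
```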
Question 48 of 60
48. Question
You must design end-to-end observability for a SFRA + SCAPI build. What is the first configuration to ensure traceability across tiers and Log Center correlation?
Correct
The correct approach begins with a single correlation ID created at the earliest boundary, typically the CDN or web tier, then propagated through controllers, pipelines, SCAPI calls, and outbound integrations. This ensures Log Center queries can stitch events from multiple sources into a single timeline. Without a shared identifier, you cannot reliably reconstruct flows, which blocks governance and incident response. Capturing errors alone loses context for slow successes and masked timeouts. Random IDs per hop fragment traces and make joins guesswork. Separate IDs that are stripped at the CDN remove the anchor needed for edge-to-origin analysis. A consistent correlation ID also powers dashboards, alert deduplication, and SLO burn-rate checks. It enables sampling strategies while preserving high-value paths. Finally, it supports root-cause analysis across environments by standardizing the header and log key. Therefore, correlation ID propagation is the foundational first step.
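A minimal sketch of correlation ID creation and propagation; the header name and field names are assumptions, not a documented contract:

```typescript
// Illustrative sketch: accept an inbound correlation ID (or mint one at the
// edge), attach it to every log line, and forward it on outbound calls.
// The header and field names are assumptions for this example.
const CORRELATION_HEADER = "x-correlation-id";

function getOrCreateCorrelationId(headers: Record<string, string>): string {
  return headers[CORRELATION_HEADER] ?? `cid-${Date.now()}-${Math.random().toString(36).slice(2, 10)}`;
}

function logWithCorrelation(correlationId: string, message: string, fields: Record<string, unknown> = {}): void {
  // One structured line per event; the shared key lets Log Center-style tools
  // stitch edge, application, and integration events into a single timeline.
  console.log(JSON.stringify({ correlationId, message, ...fields, ts: new Date().toISOString() }));
}

function outboundHeaders(correlationId: string): Record<string, string> {
  return { [CORRELATION_HEADER]: correlationId }; // propagate downstream unchanged
}

// Usage: the same ID appears in the request log and on the outbound call.
const cid = getOrCreateCorrelationId({});
logWithCorrelation(cid, "checkout.start", { route: "Cart-Submit" });
console.log(outboundHeaders(cid));
```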
Question 49 of 60
49. Question
A security review requires that sensitive fields never appear in logs while still enabling troubleshooting. What configuration is most appropriate in B2C Commerce?
Correct
Log scrubbing with an allowlist and tokenized placeholders preserves useful context while ensuring PII and secrets never land in storage. This pattern replaces sensitive values with deterministic tokens so issues can be traced without revealing data. Disabling all logging eliminates observability and violates operational best practices. Logging everything verbatim increases breach impact and may contravene compliance frameworks. Client-side masking does not protect server logs, where the real exposure risk lives. An allowlist strategy forces engineers to justify fields, reducing accidental leakage. It also makes compliance audits straightforward, as scrubbing rules are declarative and testable. Pairing scrubbing with correlation IDs allows investigations without seeing the raw value. Automated tests should assert that protected keys never appear. This approach balances governance, trust, and troubleshooting efficiency.
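A minimal sketch of an allowlist scrubber with deterministic tokens; the field names and hash choice are illustrative, and a real system might use an HMAC with a managed secret:

```typescript
// Illustrative sketch: serialize log payloads through an allowlist, replacing
// everything else with a deterministic token so traces stay joinable without
// exposing the raw value. Field names are examples only.
const ALLOWED_FIELDS = new Set(["orderNo", "status", "latencyMs", "correlationId"]);

function tokenize(value: string): string {
  // Simple deterministic hash (djb2 variant) so the same input always yields
  // the same placeholder; a real system might use an HMAC with a managed secret.
  let hash = 5381;
  for (let i = 0; i < value.length; i++) hash = ((hash * 33) ^ value.charCodeAt(i)) >>> 0;
  return `tok_${hash.toString(16)}`;
}

function scrub(payload: Record<string, unknown>): Record<string, unknown> {
  const safe: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(payload)) {
    safe[key] = ALLOWED_FIELDS.has(key) ? value : tokenize(String(value));
  }
  return safe;
}

// "email" is not on the allowlist, so only its token reaches the log line.
console.log(JSON.stringify(scrub({ orderNo: "00012345", email: "jane@example.com", status: 201 })));
```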
Question 50 of 60
50. Question
You notice intermittent cart failures under load. The spec calls for SCAPI with idempotent order submission. Where should you instrument first to leverage Log Center?
Correct
Instrumenting the SCAPI cart and order endpoints with structured fields is the most direct way to observe failure modes under load. These logs should include correlation IDs, idempotency keys, status codes, latency, and retry metadata. Client-side prints do not reach Log Center and disappear in production. Job logs are valuable for batch operations but will not surface shopper-time issues. Screenshots are anecdotal and unsearchable, making them unsuitable for systemic analysis. With structured API logs, Log Center can slice errors by route, tenant, time, and region. This enables alerting on spikes that breach SLOs and correlating with upstream CDN metrics. It also validates that retries respect backoff and keys are honored. The visibility will quickly differentiate network, timeout, or validation errors. That is why server-side SCAPI instrumentation is the correct starting point.
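A minimal sketch of the structured record such instrumentation could emit per call; the field set is an assumption based on the list above:

```typescript
// Illustrative sketch: one structured record per cart/order API call, carrying
// the fields named in the explanation. The shape is an assumption for the sketch.
interface ApiCallLog {
  correlationId: string;
  idempotencyKey?: string;
  route: string;          // e.g. "POST /checkout/orders"
  status: number;
  latencyMs: number;
  attempt: number;        // 1 for the first try, >1 for retries
  retryAfterMs?: number;  // backoff actually applied before this attempt
}

function recordApiCall(entry: ApiCallLog): void {
  // Emit a single JSON line so a log platform can filter by route,
  // status class, latency percentile, or idempotency key.
  console.log(JSON.stringify({ type: "scapi.call", ...entry }));
}

recordApiCall({
  correlationId: "cid-abc123",
  idempotencyKey: "order-789-submit",
  route: "POST /checkout/orders",
  status: 504,
  latencyMs: 8042,
  attempt: 2,
  retryAfterMs: 400,
});
```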
Question 51 of 60
51. Question
Commerce jobs are overrunning their windows and impacting replication. You need to diagnose and prevent repeats. What logging strategy best supports governance and root cause?
Correct
Job lifecycle logs with explicit step timings allow you to see where time is consumed, not just that a job ran long. Including run IDs, input sizes, and downstream API response codes supports reproducibility and trend analysis. Final status-only logging hides variability and makes tuning impossible. Email alerts provide notice but lack the depth for forensic reviews. Local text files are not centralized, breaching governance and auditability. With structured lifecycle logs, Log Center can visualize duration histograms and trigger alerts on p95 deviations. This helps differentiate data growth from regressions in code or throttling downstream. Correlating with replication start events avoids overlap windows. The approach also documents ownership by tagging the job and responsible team. As a result, both prevention and post-mortems are materially improved.
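A minimal sketch of per-step lifecycle logging; the step names and fields are illustrative:

```typescript
// Illustrative sketch: wrap each job step so the lifecycle log captures
// per-step duration, input size, and downstream status. Names are examples.
interface StepResult { step: string; durationMs: number; inputCount: number; downstreamStatus?: number }

async function timedStep<T>(
  runId: string,
  step: string,
  inputCount: number,
  work: () => Promise<{ value: T; downstreamStatus?: number }>
): Promise<T> {
  const started = Date.now();
  const { value, downstreamStatus } = await work();
  const record: StepResult = { step, durationMs: Date.now() - started, inputCount, downstreamStatus };
  console.log(JSON.stringify({ type: "job.step", runId, ...record }));
  return value;
}

// Usage: each step emits its own line, so a slow export step is visible
// even when the overall job still finishes "OK".
async function runJob(): Promise<void> {
  const runId = `run-${Date.now()}`;
  const items = await timedStep(runId, "extract", 0, async () => ({ value: [1, 2, 3] }));
  await timedStep(runId, "export", items.length, async () => ({ value: undefined, downstreamStatus: 200 }));
}
runJob();
```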
Question 52 of 60
52. Question
A headless implementation proxies SCAPI through an API gateway. During incidents, teams debate where latency occurs. Which configuration enables efficient triage in Log Center?
Correct
Hop-by-hop timings using consistent correlation and span identifiers allow you to see exactly where time is spent. With a parent correlation ID and child spans for gateway, origin, and OMS, Log Center can render a waterfall for each request. Gateway totals alone cannot separate edge throttling from origin slowness. Random IDs per hop prevent joins, forcing manual and error-prone analysis. Disabling logs in production removes the only meaningful signal during real traffic. Span-level logging exposes patterns like TLS handshakes, cache misses, or downstream backoffs. It also informs capacity planning by revealing saturation points. Pairing spans with status codes and retry tags reveals error amplification. This setup is the backbone of efficient, evidence-based triage.
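A minimal sketch of parent-correlation-plus-span records; the hop names and timings are invented for the example:

```typescript
// Illustrative sketch: a parent correlation ID plus child spans per hop, so a
// single request can be rendered as a waterfall. Hop names and fields are examples.
interface Span {
  correlationId: string; // shared parent across all hops
  spanId: string;
  hop: "gateway" | "origin" | "oms";
  startMs: number;
  durationMs: number;
  status: number;
}

function emitSpan(span: Span): void {
  console.log(JSON.stringify({ type: "trace.span", ...span }));
}

// Three spans for one request: the origin hop dominates, so triage starts there.
const requestId = "cid-42";
emitSpan({ correlationId: requestId, spanId: "s1", hop: "gateway", startMs: 0, durationMs: 620, status: 200 });
emitSpan({ correlationId: requestId, spanId: "s2", hop: "origin", startMs: 15, durationMs: 540, status: 200 });
emitSpan({ correlationId: requestId, spanId: "s3", hop: "oms", startMs: 120, durationMs: 95, status: 200 });
```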
Question 53 of 60
53. Question
Your security team requests evidence that no payment tokens appear in any logs across environments. What is the most reliable validation approach?
Correct
Automated log scans with denylisted patterns provide continuous, objective validation that sensitive tokens are absent. This can run on ingestion or periodically within Log Center with alerts on matches. Visual inspection is subjective and cannot cover volume or concurrency. Blind trust in cartridges ignores custom code paths and configuration drift. Reducing log level does not guarantee sensitive fields are excluded and may also hide essential diagnostics. Automated scanning can be paired with unit tests that assert scrubbing on serialization. Audit artifacts then prove compliance to stakeholders. This also surfaces regressions quickly after deployments. Combined with a secure allowlist approach, the risk of leakage drops substantially.
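A minimal sketch of a denylist scan; the patterns are simplified examples, not production-grade detectors:

```typescript
// Illustrative sketch: scan log lines against denylisted patterns (card-like
// numbers, bearer tokens) and report hits. Patterns are simplified examples.
const DENYLIST: Array<{ name: string; pattern: RegExp }> = [
  { name: "card-like-number", pattern: /\b\d{13,19}\b/ },
  { name: "bearer-token", pattern: /bearer\s+[a-z0-9._-]{20,}/i },
];

function scanLogLines(lines: string[]): Array<{ line: number; rule: string }> {
  const hits: Array<{ line: number; rule: string }> = [];
  lines.forEach((text, index) => {
    for (const rule of DENYLIST) {
      if (rule.pattern.test(text)) hits.push({ line: index + 1, rule: rule.name });
    }
  });
  return hits;
}

// A hit here should page the owning team and fail the compliance check.
console.log(scanLogLines([
  "order 00012345 captured ok",
  'auth header was "Bearer abcdefghijklmnopqrstuvwxyz0123456789"',
]));
```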
Question 54 of 60
54. Question
A merchandising bug appears only in one locale and price book combination. What logging enhancements will most help isolate the defect using Log Center?
Correct
Adding locale and price book as structured fields allows queries that isolate exactly where behavior differs. Without these tags, you cannot correlate errors with merchandising variations. Generic messages provide no filtering capability and waste operator time. Stack traces only on 500s miss subtle rendering or data mismatches that still return 200s. Flat files without metadata make Log Center enrichment difficult and reduce searchability. With explicit tags, you can pivot error rates by locale or price book and compare response payload sizes. This precision supports faster defect localization and verification of fixes. It also improves ongoing monitoring by catching regressions confined to specific merchandising contexts.
Question 55 of 60
55. Question
You're asked to certify that production logging follows least privilege, retention, and access controls. What should be part of the logging configuration review?
Correct
Governance requires role-based access so only authorized users can read sensitive logs. Retention policies must follow corporate and legal requirements, deleting data after the approved window. Redaction rules ensure sensitive fields never appear regardless of role. Giving all developers admin access violates least privilege and auditability. Indefinite retention increases compliance risk and storage cost without benefit. Cross-tenant access threatens isolation and can leak data between brands or regions. A proper review also checks encryption at rest and in transit. Evidence should include access reviews, policy documents, and successful redaction tests. Together these show logging aligns with trust and best practices.
Question 56 of 60
56. Question
Checkout timeouts spike after a cartridge release. You must identify the offending change quickly. What practice best enables rapid detection via Log Center?
Correct
Tagging every request with the release version and commit SHA lets you slice metrics by deployment boundary instantly. You can compare latency and error rates pre- and post-release without guessing. Inferring deployments from calendars is slow and error-prone. Environment tags without versioning cannot pinpoint which build caused regressions. Removing logs during peak eliminates the data needed most during incidents. Version tagging also supports canary analysis and rollbacks. Coupled with correlation IDs, operators can trace a regression to a specific code path. It is a low-overhead practice with outsized incident response benefits.
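A minimal sketch of version tagging on log lines; how the version and SHA are injected at build time is an assumption:

```typescript
// Illustrative sketch: stamp every log line with the deployed version and
// commit SHA so metrics can be split at the deployment boundary.
// The values are placeholders read from wherever the build injects them.
const RELEASE = { version: "2024.05.3", commit: "a1b2c3d" };

function logTagged(message: string, fields: Record<string, unknown> = {}): void {
  console.log(JSON.stringify({ ...RELEASE, message, ...fields, ts: new Date().toISOString() }));
}

// After a release, comparing the error rate where commit === "a1b2c3d" against
// the previous SHA immediately shows whether the new build introduced the spike.
logTagged("checkout.timeout", { route: "CheckoutServices-PlaceOrder", latencyMs: 12000 });
```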
Question 57 of 60
57. Question
Leadership wants proactive detection of emerging issues across channels. Which combination leverages Log Center effectively while aligning to best practices?
Correct
Service-level objectives with alerts on the four golden signals (errors, latency, traffic, saturation) provide timely, quantifiable detection. Log Center can compute rate and percentile metrics with thresholds tied to business impact. Weekly CSVs delay response and miss fast-moving issues. Waiting for ticket volume makes customers your monitoring system, which harms trust. Dashboards without thresholds invite complacency and inconsistent reactions. SLOs align engineering and business around explicit reliability goals. Alerting based on burn rate avoids noise while catching sustained degradations. This practice supports governance by documenting what good looks like and how to react. It also drives continuous improvement by exposing chronic pain points.
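A minimal sketch of a multi-window burn-rate check; the SLO target, thresholds, and windows are placeholders:

```typescript
// Illustrative sketch: compute an error-budget burn rate from windowed counts
// and alert when it exceeds a threshold. Targets and windows are placeholders.
interface WindowCounts { total: number; errors: number }

function burnRate(counts: WindowCounts, sloTarget: number): number {
  // sloTarget of 0.999 means an allowed error fraction of 0.001.
  const allowed = 1 - sloTarget;
  const observed = counts.total === 0 ? 0 : counts.errors / counts.total;
  return observed / allowed; // 1.0 means burning budget exactly at plan
}

function shouldAlert(shortWindow: WindowCounts, longWindow: WindowCounts, sloTarget: number): boolean {
  // Multi-window rule: alert only when both the fast and slow windows burn
  // hot, which filters brief blips while catching sustained degradation.
  return burnRate(shortWindow, sloTarget) > 14 && burnRate(longWindow, sloTarget) > 14;
}

console.log(shouldAlert({ total: 1_000, errors: 30 }, { total: 60_000, errors: 1_200 }, 0.999)); // true
```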
Question 58 of 60
58. Question
A new loyalty provider is selected late. Checkout integration must be resilient, and marketing wants near real-time earn events. Which implementation spec is most defensible?
Correct
Option 3 is correct because it balances latency, resilience, and correctness with an adapter contract, idempotency, and circuit breakers. The DLQ ensures partial outages do not block orders and allows replay, which is defensible during incidents. A rollback path to earn on ship protects margins if the provider underperforms. Option 1 risks lost accrual for cancellations/returns and ignores idempotency realities. Option 2 introduces unacceptable latency to marketing use cases that expect same-session messaging. Option 4 overloads checkout and increases user-visible failures; every cart change call is costly. The chosen pattern isolates loyalty concerns from business logic and documents failure semantics. It supports clear SLOs and observability. It also enables A/B rollout via feature flags. Stakeholders can defend it using controlled risk and measurable impact.
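A minimal sketch of publish-with-DLQ semantics using an in-memory stand-in for the queue; retry counts and backoff are placeholders:

```typescript
// Illustrative sketch: publish loyalty earn events with retries, parking poison
// messages in a dead-letter queue for later replay so checkout is never blocked.
// Queue semantics here are an in-memory stand-in, not a specific product.
interface EarnEvent { orderNo: string; points: number; attempts: number }

const deadLetter: EarnEvent[] = [];

async function publishWithDlq(
  event: EarnEvent,
  send: (e: EarnEvent) => Promise<void>,
  maxAttempts = 3
): Promise<void> {
  try {
    await send(event);
  } catch {
    event.attempts += 1;
    if (event.attempts >= maxAttempts) {
      deadLetter.push(event); // ops can replay the DLQ once the provider recovers
      return;
    }
    // Exponential backoff before the next attempt (capped for the sketch).
    await new Promise((resolve) => setTimeout(resolve, Math.min(2 ** event.attempts * 100, 2000)));
    await publishWithDlq(event, send, maxAttempts);
  }
}

// Replay is simply draining deadLetter through the same publisher.
```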
Question 59 of 60
59. Question
The brand will expand to five languages next year and demands SEO continuity. Your review finds URL rules and alias plans are minimal. What should you defend to stakeholders?
Correct
Option 2 is correct because it implements explicit locale routing, SEO signals, and automated verification, reducing regressions at scale. Hreflang and canonical rules prevent duplicate content and geotargeting mistakes. A strict alias policy avoids broken links and inconsistent casing/diacritics. Link-map validation in CI catches redirect loops before production. Option 1 with query params harms SEO and dilutes analytics attribution. Option 3 increases domain overhead, complicates trust and tracking, and is unnecessary if one domain can serve locales cleanly. Option 4 is passive and risks traffic loss; search engines will not fix misconfigured signals. The chosen plan is future-proof for more locales. It also yields clear acceptance criteria for QA. Stakeholders can defend it by pointing to measurable organic performance. It provides a rollback matrix if anomalies appear.
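A minimal sketch of hreflang and canonical generation that a CI link-map check could validate; the domain and locale paths are invented:

```typescript
// Illustrative sketch: generate hreflang and canonical links for one page
// across locales, the kind of output a CI link-map check could validate.
// The domain and locale paths are invented for the example.
const LOCALES: Record<string, string> = {
  "en-gb": "https://www.example.com/en-gb",
  "de-de": "https://www.example.com/de-de",
  "fr-fr": "https://www.example.com/fr-fr",
};

function hreflangLinks(pagePath: string, currentLocale: string): string[] {
  const links = Object.entries(LOCALES).map(
    ([locale, base]) => `<link rel="alternate" hreflang="${locale}" href="${base}${pagePath}" />`
  );
  links.push(`<link rel="alternate" hreflang="x-default" href="${LOCALES["en-gb"]}${pagePath}" />`);
  links.push(`<link rel="canonical" href="${LOCALES[currentLocale]}${pagePath}" />`);
  return links;
}

console.log(hreflangLinks("/shoes/running", "de-de").join("\n"));
```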
Question 60 of 60
60. Question
A flash-sale pattern will drive 8× traffic spikes. The spec shows standard caching and default OCAPI profiles. What revision is most defensible for growth?
Correct
Option 2 is correct because growth planning must address amplification effects, not just add servers. Request coalescing prevents stampedes on product data during spikes. Varying cache by price book and locale preserves correctness without cache poisoning. Prewarming top SKUs plus surge runbooks ensures predictable ramp-up and recovery. Rate limits on APIs protect critical paths and downstream systems. Option 1's long TTL risks stale data and misses API protection. Option 3 scales cost without fixing contention on shared resources. Option 4 hurts business intent and is not necessary with proper engineering. The chosen plan is measurable with SLOs. It supports chaos testing and drills. It is easy to defend with capacity evidence and failure budgets.
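A minimal sketch of request coalescing for identical product lookups; the fetch function and key shape are assumptions:

```typescript
// Illustrative sketch: coalesce concurrent misses for the same product key so
// a flash-sale stampede produces one origin fetch instead of thousands.
// The fetch function and key shape are assumptions for the example.
const inFlight = new Map<string, Promise<string>>();

async function getProduct(key: string, fetchFromOrigin: (key: string) => Promise<string>): Promise<string> {
  const existing = inFlight.get(key);
  if (existing) return existing; // piggyback on the request already in flight

  const promise = fetchFromOrigin(key).finally(() => inFlight.delete(key));
  inFlight.set(key, promise);
  return promise;
}

// Usage: 1,000 concurrent callers for the same SKU share one origin call.
let originCalls = 0;
const fakeOrigin = async (key: string) => { originCalls++; return `payload:${key}`; };
Promise.all(Array.from({ length: 1000 }, () => getProduct("sku-1|en_GB|gbp-list", fakeOrigin)))
  .then(() => console.log(`origin calls: ${originCalls}`)); // origin calls: 1
```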