Salesforce Certified B2C Commerce Architect Practice Test 6
Question 1 of 60
Your merchandising team needs real-time stock checks with a legacy OMS that only supports SOAP 1.2 with WS-Security signature and server-side certificate validation. Peak traffic is high, but checks must be synchronous during add-to-cart. What's the best path?
Explanation
The OMS is SOAP-only and requires WS-Security; SFCC’s Services framework can sign requests and send XML over TLS, making real-time SOAP the fit. Because traffic is high, you pair it with short timeouts, a circuit breaker, and fallbacks to a recent cache so the site stays responsive when the OMS slows down. Mutual trust is enforced by TLS and signature validation, and schema checks reduce bad data. Option 1 fails the “synchronous must” requirement and risks stale adds. Option 2 breaks accuracy for flash sales and backorders. Option 3 is unsafe: browsers should not call OMS directly nor convert protocols client-side; it leaks credentials and violates CORS/security. Therefore, secured real-time SOAP with resilience patterns is appropriate.
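A minimal sketch of the fallback half of this pattern, assuming a SOAP service profile (here called 'oms.stock.soap') with WS-Security signing and a short timeout is already configured in Business Manager and wrapped in a require-able module, and that a custom cache 'omsStockFallback' is registered in caches.json; the module path, IDs, and response field names are illustrative:

```js
'use strict';

var CacheMgr = require('dw/system/CacheMgr');
var Logger = require('dw/system/Logger');

// Hypothetical module exporting a pre-configured dw.svc.Service for the OMS stock endpoint.
var stockService = require('*/cartridge/scripts/services/omsStockService');

/**
 * Returns a live stock level when the OMS responds in time; otherwise falls back
 * to the most recently cached value so add-to-cart stays responsive.
 */
function getStockLevel(productId) {
    var fallbackCache = CacheMgr.getCache('omsStockFallback');
    var result = stockService.call({ productId: productId }); // short timeout set on the service profile

    if (result.ok && result.object) {
        // Refresh the fallback copy on every successful call.
        fallbackCache.put(productId, result.object.quantity);
        return result.object.quantity;
    }

    Logger.getLogger('oms').warn('Stock call failed ({0}); serving cached value', result.status);
    return fallbackCache.get(productId); // may be null if the product was never cached
}

module.exports = { getStockLevel: getStockLevel };
```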
Question 2 of 60
Switching locales mid-checkout occasionally throws 500s. Stack traces reference missing resource keys in a custom cartridge. What diagnosis and action make the most sense?
Explanation
The stack clearly implicates missing resource keys, which often surface only on specific locale switches where fallback logic fails or keys differ by case. Normalizing key casing, ensuring complete bundles, and configuring a predictable fallback chain prevent 500s and replace them with sane defaults. Build-time checks keep regressions out of Production. Option 1 chases infrastructure ghosts. Options 3 and 4 do not relate to resource resolution and introduce new risks. The root cause sits squarely in i18n data and fallback behavior.
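As a small illustration of the fallback idea, B2C Commerce resource lookups accept a default value, so a missing key degrades to a sane label instead of an exception; the bundle and key names below are hypothetical:

```js
'use strict';

var Resource = require('dw/web/Resource');

// The third argument is returned when the key is missing for the current locale,
// so an incomplete bundle never produces a hard failure mid-checkout.
var shippingLabel = Resource.msg('label.shipping.method', 'checkout', 'Shipping method');

// msgf() behaves the same way for parameterized messages.
var itemCount = Resource.msgf('cart.items.count', 'checkout', 'Items in cart: {0}', 3);
```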
Question 3 of 60
A memory spike and long GC pauses began after a refactor of PLP aggregation. CPU is moderate, but heap old-gen climbs steadily during browsing sessions. What is the likeliest culprit and targeted fix?
Explanation
A slowly rising old-gen with moderate CPU often means long-lived objects are being retained, not simply a traffic surge. Storing big per-user aggregates or whole result sets in caches without bounds or TTL causes heap bloat and eventual GC pain. Bounded caches with LRU/TTL, streaming or paginating server responses, and avoiding retention of entire lists remove the leak pressure. Options 1 and 2 affect bandwidth and origin calls rather than heap residency. Option 4 postpones the symptom but increases GC pause times later and avoids the real fix. The refactor likely introduced a retention pattern that must be corrected.
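A minimal sketch of the bounded-cache idea using a platform custom cache, assuming a cache registered in the cartridge's caches.json; the cache ID, TTL, and the aggregate shape are illustrative:

```js
'use strict';

// Assumed caches.json entry: { "caches": [ { "id": "plpAggregates", "expireAfterSeconds": 300 } ] }
// Platform caches are bounded and expire entries, so aggregates are not pinned on the heap.
var CacheMgr = require('dw/system/CacheMgr');

// Hypothetical aggregate builder: summarize hits into a small object (counts,
// refinement buckets), never the full product list.
function buildAggregate(categoryId) {
    return { categoryId: categoryId, computedAt: Date.now() };
}

function getCategoryAggregate(categoryId) {
    // The loader runs only on a cache miss; entries age out via expireAfterSeconds.
    return CacheMgr.getCache('plpAggregates').get(categoryId, function () {
        return buildAggregate(categoryId);
    });
}

module.exports = { getCategoryAggregate: getCategoryAggregate };
```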
Question 4 of 60
After a Service timeout mitigation, average latency improved but P99 remained erratic during sales. Log Center shows bursts of concurrent regenerations for the same PLP fragment. What is the best explanation and fix?
Explanation
Erratic tails with evidence of multiple regenerations for the same key indicate coordinated expiry rather than raw query time or GC. When many requests miss at once, origin work spikes and produces long P99s even if averages look fine. Stale-while-revalidate ensures users see acceptable pages while refresh happens in the background. Jitter spreads expirations, and coalescing prevents dog-piles. Option 1 can help baseline times but not synchronized spikes. Options 2 and 4 are generic changes without connection to the symptom. Addressing coordinated expiry directly stabilizes tail latency.
Question 5 of 60
Your retailer expects a 6× traffic spike during seasonal drops. Current SLOs are 300 ms p95 PLP and 500 ms p95 checkout. Which proactive plan best balances risk, cost, and scalability?
Explanation
The best plan mixes measurement and controlled risk. A capacity model based on real diurnal patterns plus progressive load testing validates headroom against SLOs rather than guesses. Autoscaling on saturation (not just CPU) prevents both under and over-scaling, while canary ramps with rollback gates reduce blast radius. Options 1 and 4 are blunt: overprovisioning wastes cost and may hide bottlenecks, while moving all dynamics to the edge can break personalization and checkout correctness. Option 2 under-tests, narrows SLOs to PLP only, and creates a long change freeze that can increase risk by delaying fixes. The chosen plan is proactive, cost-aware, and operationally safe.
Question 6 of 60
Nightly jobs collide with morning traffic, causing queue backlogs and slow carts. What proactive adjustment yields sustainable stability without losing throughput?
Explanation
Sustainable stability hinges on shaping load, not just making jobs longer. Tiered concurrency and staggering avoid synchronized bursts, while idempotency and jittered backoff prevent thundering herds on retries. Per-tenant caps protect fairness and isolate noisy neighbors. Option 2 extends tail latency and can starve foreground requests. Option 3 risks data drift and weekend overloads that can harm recovery if a weekend run fails. Option 4 creates a single fragile peak that collides with maintenance windows and regional traffic. The selected plan is proactive, predictable, and resilient.
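A small sketch of the jittered-backoff piece, written as plain script with no platform-specific assumptions; the base delay and cap are examples:

```js
'use strict';

// Full-jitter exponential backoff: each retry picks a random delay between 0 and
// an exponentially growing ceiling, so failed work is not retried in lockstep.
function backoffMillis(attempt, baseMillis, capMillis) {
    var ceiling = Math.min(capMillis, baseMillis * Math.pow(2, attempt));
    return Math.floor(Math.random() * ceiling);
}

// Example: first five retries with a 500 ms base and a 30 s cap; a job step would
// wait this long (or reschedule the chunk) before attempting again.
var delays = [];
for (var attempt = 0; attempt < 5; attempt++) {
    delays.push(backoffMillis(attempt, 500, 30000));
}
```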
Question 7 of 60
Payment gateway latency is rising at p99 during promos, driving sporadic checkout timeouts. Which forward-looking control best protects UX while containing partner risk?
Explanation
Circuit breakers plus bulkheads proactively contain partner slowness without letting it cascade. Adaptive timeouts and selective hedging protect p99 while preventing global slowdowns, and a pre-defined secondary route keeps revenue flowing. Option 1 masks the issue and increases the number of stuck threads. Option 2 increases pressure on a slow dependency and worsens tail latency. Option 4 is risky: payment tokenization responses are not broadly cacheable and reusing them can violate PCI and gateway semantics. The chosen pattern is a well-known resilience control aligned with proactive scaling and reliability.
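A minimal circuit-breaker sketch around a gateway call, assuming a custom cache 'circuitState' in caches.json and an already configured service module; the threshold and cool-off values are illustrative, and the cache is per app server, which is usually acceptable for this purpose:

```js
'use strict';

var CacheMgr = require('dw/system/CacheMgr');

var FAILURE_THRESHOLD = 5;   // consecutive failures before the circuit opens
var OPEN_MILLIS = 30000;     // how long to fail fast before probing again

function callWithBreaker(service, args) {
    var cache = CacheMgr.getCache('circuitState');
    var state = cache.get('paymentGateway') || { failures: 0, openedAt: 0 };

    if (state.failures >= FAILURE_THRESHOLD && (Date.now() - state.openedAt) < OPEN_MILLIS) {
        // Fail fast: the caller routes to the secondary provider or a retry-later UX.
        return { ok: false, circuitOpen: true };
    }

    var result = service.call(args);
    if (result.ok) {
        cache.put('paymentGateway', { failures: 0, openedAt: 0 });
    } else {
        var failures = state.failures + 1;
        cache.put('paymentGateway', {
            failures: failures,
            openedAt: failures >= FAILURE_THRESHOLD ? Date.now() : state.openedAt
        });
    }
    return result;
}

module.exports = { callWithBreaker: callWithBreaker };
```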
Question 8 of 60
After a search relevance upgrade, index rebuilds degrade PLP latencies for 90 minutes. What proactive design minimizes impact while keeping freshness?
Explanation
A blue-green index and alias swap isolates heavy rebuild work from query traffic, while warmups and throttles prevent IO contention. Rolling primaries reduce cache cold starts and keep latencies within SLOs during cutover. Option 1 ensures maximum contention and removes a performance safety net. Option 3 trades relevance and experience for questionable wins and can create stale content. Option 4 ignores global audiences and business calendars; “quiet hours” are unreliable in retail. The selected approach is proactive and minimizes user impact without sacrificing freshness.
Question 9 of 60
API quotas are frequently breached around noon by affiliate traffic bursts. Which plan most effectively scales fairly while preserving platform health?
Explanation
Per-client and per-route token buckets ensure fairness and keep global health intact. 429 with retry hints teaches well-behaved backoff, and a bulk async endpoint shifts expensive operations to controlled execution outside the request path. Option 1 treats symptoms and invites more traffic. Option 2 introduces latency and head-of-line blocking; it can still starve critical paths. Option 4 offers little control; DNS boundaries don't enforce quotas or fairness. The chosen plan is proactive control with clear migration for heavy workloads.
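The token-bucket logic itself is simple; the sketch below is illustrative only, since in practice the quota is enforced at the API gateway or eCDN layer rather than in cartridge code, and the capacity and refill numbers are examples:

```js
'use strict';

// One bucket per client key and per route gives fairness without starving others.
function TokenBucket(capacity, refillPerSecond) {
    this.capacity = capacity;
    this.tokens = capacity;
    this.refillPerSecond = refillPerSecond;
    this.lastRefill = Date.now();
}

TokenBucket.prototype.tryConsume = function () {
    var now = Date.now();
    var elapsedSeconds = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSeconds * this.refillPerSecond);
    this.lastRefill = now;
    if (this.tokens >= 1) {
        this.tokens -= 1;
        return true;  // serve the request
    }
    return false;     // respond 429 and include a Retry-After hint
};

// Example: a burst of 100 requests, then 10 requests/second sustained.
var affiliateBucket = new TokenBucket(100, 10);
```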
Question 10 of 60
Image rendition CPU usage spikes with new hi-res assets; cache hit rate is fine but origin hosts run hot. What anticipatory change best sustains scale?
Explanation
Pre-generation removes repeat CPU work for hot paths, quotas prevent bursts from overwhelming origin, and an edge transformation layer scales horizontally closer to users. Caching by full URL avoids collisions and ensures predictable reuse. Option 1 buys time but not efficiency. Option 3 reduces quality and only partly reduces CPU; negotiation is not the main cost driver. Option 4 risks serving stale images and does nothing about generation cost on misses. The recommendation proactively reshapes load and separates concerns for durability.
Question 11 of 60
Personalization expanded memory footprints; GC pauses grew and p95 rose slightly. Which change responsibly restores headroom without gutting features?
Explanation
Bounding caches and offloading bulky profiles reduce long-lived object retention. Streaming and pagination limit per-request heap growth while preserving functionality. This directly targets GC pressure while keeping value. Option 1 eliminates business benefits. Option 3 postpones pain and can worsen pause times. Option 4 shrinks customer value and can harm conversion without guaranteeing stability. The proactive fix adjusts data handling patterns rather than removing features.
Question 12 of 60
Observability gaps make it hard to predict saturation. What is the most effective forward plan to keep the system healthy?
Explanation
SLOs, error budgets, and saturation metrics provide predictive signals. Burn-rate alerting detects budget exhaustion early, while runbooks and auto-remediation turn detection into action. Option 2 is noisy and non-predictive. Option 3 helps availability visibility but misses resource stress and real-time control. Option 4 risks performance and cost and is reactive, not proactive. The chosen approach creates continuous, actionable health feedback aligned to business goals.
Question 13 of 60
Bot surges triggered by marketing links inflate sessions and distort analytics. Which plan safeguards capacity without harming legitimate shoppers?
Explanation
A WAF with behavioral analysis distinguishes bots from humans beyond IP lists. Burst rate limits protect capacity, while targeted challenges deter abuse with minimal friction. Allow-lists avoid harming partners or assistive tech. Option 1 blocks good traffic and invites cat-and-mouse IP changes. Option 2 adds friction in core funnels and hurts conversion. Option 4 changes the UX model and harms SEO and discovery. The recommended approach proactively preserves capacity and experience.
Question 14 of 60
Ad-hoc analytics on production slows PLP queries during month-end reporting. What proactive architecture keeps prod responsive and data timely?
Explanation
Offloading heavy analytics to a warehouse preserves prod while improving analytical scale. Read replicas cover light dashboards without harming OLTP, and throttling long queries protects SLOs. Option 1 scales cost without addressing the shared-resource problem. Option 3 delays insights and piles work into narrow windows. Option 4 harms shopper UX and conversion. The proactive solution separates concerns and enforces guardrails.
Question 15 of 60
A retailer must calculate tax at checkout using a third-party that exposes a JSON API with OAuth 2.0 client credentials. The business requires sub-300 ms added latency and accurate line-level tax on every change. What integration choice fits best?
Explanation
The requirement is interactive accuracy with low latency and an OAuth JSON API, which points to REST in real time through SFCC's Services framework. Token acquisition should use a dedicated auth service with caching to avoid token round-trips, and strict timeouts with limited retries protect checkout p95/p99. Logging must mask tokens and PII; credentials live in Service credentials, not code. Optional mTLS increases trust when the provider supports it. Option 1 is batch and SOAP, so it cannot meet per-change accuracy or latency. Option 3's hourly SFTP batch cannot reflect user edits instantly and breaks UX. Option 4 adds risk by stretching timeouts and using SOAP when the provider is JSON OAuth; longer timeouts harm capacity and don't ensure success. Real-time REST aligns with both business and security needs.
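A sketch of the token-caching half of this flow, assuming an HTTP service ID 'tax.auth' configured in Business Manager with the client ID and secret stored in the Service credential, and a custom cache 'oauthTokens' whose TTL is set slightly below the token lifetime; the IDs, response field names, and masking regex are assumptions:

```js
'use strict';

var LocalServiceRegistry = require('dw/svc/LocalServiceRegistry');
var CacheMgr = require('dw/system/CacheMgr');

var authService = LocalServiceRegistry.createService('tax.auth', {
    createRequest: function (svc) {
        svc.setRequestMethod('POST');
        svc.addHeader('Content-Type', 'application/x-www-form-urlencoded');
        return 'grant_type=client_credentials';
    },
    parseResponse: function (svc, httpClient) {
        return JSON.parse(httpClient.text);
    },
    filterLogMessage: function (msg) {
        // Never let tokens reach the service log.
        return msg.replace(/"access_token"\s*:\s*"[^"]+"/g, '"access_token":"***"');
    }
});

function getAccessToken() {
    // Cached lookups skip the token round-trip; the loader only runs on a miss.
    return CacheMgr.getCache('oauthTokens').get('tax', function () {
        var result = authService.call();
        if (!result.ok) {
            throw new Error('Token request failed with status ' + result.status);
        }
        return result.object.access_token;
    });
}

module.exports = { getAccessToken: getAccessToken };
```

The tax-calculation service itself would then attach the cached bearer token in its Authorization header and keep its own short timeout and retry budget.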
Question 16 of 60
Android in-app webviews render blank PDPs after a deploy, while desktop and iOS are fine. No errors appear in server logs; client console shows blocked inline scripts. What root cause and remedy are most consistent?
Explanation
Blank pages with client-side console messages about blocked scripts align with a Content Security Policy mismatch rather than server performance. Some Android webviews enforce stricter interpretations, so inline snippets or eval-based code fails to execute. Adding nonces or hashes for legitimate inline bootstrap and removing eval-like usage restores functionality without broadly weakening CSP. A report-only endpoint validates real violations. Options 1 and 2 target performance rather than execution denial. Option 4 would break more clients and doesn't relate to CSP errors. The precise fix addresses policy and bundle behavior to restore rendering safely.
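A small sketch of the nonce plus report-only part, assuming the nonce is passed to templates so legitimate inline scripts can carry it; the policy string and report endpoint are illustrative:

```js
'use strict';

var SecureRandom = require('dw/crypto/SecureRandom');
var Encoding = require('dw/crypto/Encoding');

function applyCspHeaders(response) {
    // Fresh nonce per response; the same value must appear on allowed inline script tags.
    var nonce = Encoding.toBase64(new SecureRandom().nextBytes(16));

    // Report-only first: collect real violations from webviews before enforcing.
    response.setHttpHeader('Content-Security-Policy-Report-Only',
        "script-src 'self' 'nonce-" + nonce + "'; report-uri /csp-report");

    return nonce; // hand to the template (e.g., via pdict) for nonce="..." attributes
}

module.exports = { applyCspHeaders: applyCspHeaders };
```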
Question 17 of 60
Marketing wants a consent service to be updated within 30 seconds of profile edits. The provider offers REST and recommends a webhook pattern for near real-time updates. What integration should you implement from SFCC?
Explanation
The requirement is near real-time and the provider supports REST; SFCC should push on profile save and use a small queue for resilience. OAuth client-credentials with token caching protects secrets, while HMAC signing (if offered) adds tamper protection. Secrets belong in Service credentials, and logs must be scrubbed. A Job fallback handles transient outages without losing updates. Option 2 violates the 30-second expectation and delays consent alignment. Option 3's hourly batch and API key in a query string are weak security practices and miss timeliness. Option 4 uses SOAP and long timeouts, adding risk and exposing PII in logs; Basic Auth over TLS is weaker than OAuth and should be avoided where possible. The real-time REST push best fits latency and security.
Question 18 of 60
A tax provider supports both SOAP and REST, but legal requires all tax requests to be replay-protected and traceable per order. Latency tolerance is moderate (≤700 ms). Which option balances compliance and maintainability?
Explanation
REST real-time meets the latency target, and idempotency keys plus correlation IDs satisfy replay protection and traceability. OAuth 2.0 with TLS 1.2 (or higher) is modern and widely supported; signed webhooks enable asynchronous adjustments safely. Masking logs and keeping secrets in Service credentials finishes the security posture. Option 1 breaks real-time accuracy and relies on stale local tax. Option 2 lacks explicit replay protection and audit correlation; “no retries” can still duplicate server-side processing. Option 4’s UsernameToken without signatures is weak, long timeouts harm throughput, and storing secrets in custom objects is not best practice. The selected REST approach is secure, traceable, and maintainable.
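A sketch of the replay-protection and traceability headers on the outbound call, assuming a REST service ID 'tax.rest' and header names agreed with the provider; the service ID, header names, and payload shape are all assumptions:

```js
'use strict';

var LocalServiceRegistry = require('dw/svc/LocalServiceRegistry');
var UUIDUtils = require('dw/util/UUIDUtils');

var taxService = LocalServiceRegistry.createService('tax.rest', {
    createRequest: function (svc, order, payload) {
        svc.setRequestMethod('POST');
        svc.addHeader('Content-Type', 'application/json');
        // The same order always sends the same key, so provider-side retries deduplicate.
        svc.addHeader('Idempotency-Key', order.orderNo + '-tax');
        // Correlation ID ties the request to Log Center entries and the order audit trail.
        svc.addHeader('X-Correlation-Id', UUIDUtils.createUUID());
        return JSON.stringify(payload);
    },
    parseResponse: function (svc, httpClient) {
        return JSON.parse(httpClient.text);
    }
});

// Usage: taxService.call(order, payloadObject) from the tax calculation flow.
```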
Question 19 of 60
A content enrichment vendor returns SEO metadata and image tags for products. Turnaround under 10 minutes is acceptable; volume is ~200k products nightly. Which integration suits scale and security?
Explanation
The workload is large, latency tolerance is minutes, and enrichment is non-transactional; batch fits. REST batch with Jobs scales well, and asynchronous callbacks reduce wait times while keeping PDPs fast. Securing callbacks with shared secrets and allowlists, and processing via an ingestion Job, aligns with SFCC practices and avoids exposing credentials to browsers. Option 1 ties enrichment to PDP latency and explodes traffic/quotas. Option 3 makes 200k real-time SOAP calls with little benefit and high risk. Option 4 still uses SOAP, increases operational complexity during peaks, and may collide with shopper load. Therefore, a batch REST job with secure callbacks is best.
Question 20 of 60
Fraud screening must happen before order placement. The vendor provides REST with client-certificate (mTLS) auth and strict rate limits. Which solution is most appropriate?
Explanation
Fraud checks are a gating control, so the call must be synchronous. The vendor supports REST with mTLS, which SFCC Services can handle by attaching a client cert and enforcing TLS. Client-side rate limiting and honoring Retry-After avoid hammering the service and improve success under quotas. Short timeouts and masked logs keep performance and security strong; secrets remain in Service credentials. Options 1 and 4 are batch and allow fraudulent orders through, creating costs and poor CX. Option 2 weakens security by skipping mTLS and tries to hide rate-limit issues with long timeouts, increasing tail latency. The chosen approach is secure and aligned with business flow.
Question 21 of 60
A price service sends daily price lists in CSV over SFTP. Business asks for minimal change risk and auditability, not immediate price flips. What should you recommend?
Explanation
The provider is SFTP CSV and the business wants low risk and auditability. SFCC Jobs handle SFTP with SSH keys; adding PGP (or comparable file encryption) and checksums protects in transit/at rest. Staging and validation prevent corrupt data from going live; versioned price books enable controlled cutovers in quiet windows. Option 1 changes the contract and increases PDP latency and risk. Option 3 creates unnecessary real-time dependencies and exposes credentials if logging is excessive. Option 4 is insecure—browsers should not hit vendor APIs and would expose tokens and CORS issues. Batch with strong controls is the proper fit.
Question 22 of 60
A shipping carrier provides both REST and SOAP. They require HMAC request signing and will send delivery webhooks. Checkout needs rates in <400 ms p95. What approach is most balanced?
Explanation
Rate quoting is interactive with tight p95 constraints, so real-time is required. REST is simpler to sign with HMAC and carries JSON payloads; SFCC Services can attach signatures and enforce TLS. Adaptive timeouts and minimal retries preserve tail latency; token caching helps if OAuth is used. Signed webhooks should be verified server-side on a hardened endpoint, never in the browser, and secrets must be masked in logs and stored in Service credentials. Option 1 risks stale or incorrect rates. Option 3 exposes credentials and violates security best practices. Option 4Â’s long timeouts increase checkout risk and misplaces secret storage. The REST real-time signed approach fits best.
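A minimal sketch of HMAC signing inside a service's createRequest, assuming the carrier's string-to-sign is timestamp plus body and the shared secret is read from the Service credential; the service ID, header names, and signing recipe are assumptions to be replaced with the carrier's specification:

```js
'use strict';

var LocalServiceRegistry = require('dw/svc/LocalServiceRegistry');
var Mac = require('dw/crypto/Mac');
var Encoding = require('dw/crypto/Encoding');

var rateService = LocalServiceRegistry.createService('carrier.rates', {
    createRequest: function (svc, payload) {
        var body = JSON.stringify(payload);
        var timestamp = String(Date.now());
        var secret = svc.configuration.credential.password; // shared secret from the Service credential

        // HMAC-SHA256 over timestamp + body; hex-encode for the signature header.
        var mac = new Mac(Mac.HMAC_SHA_256);
        var signature = Encoding.toHex(mac.digest(timestamp + '\n' + body, secret));

        svc.setRequestMethod('POST');
        svc.addHeader('Content-Type', 'application/json');
        svc.addHeader('X-Timestamp', timestamp);
        svc.addHeader('X-Signature', signature);
        return body;
    },
    parseResponse: function (svc, httpClient) {
        return JSON.parse(httpClient.text);
    }
});
```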
Question 23 of 60
Loyalty redemptions can be queued; SLA is within 15 minutes. The vendor only supports REST and prefers bulk endpoints. What's the most appropriate design?
Explanation
The SLA is minutes, and the vendor prefers bulk, so batch Jobs are appropriate. Using OAuth client-credentials with token caching keeps security strong, and idempotency keys ensure safe retries. A cached read-only balance lookup covers PDP needs without forcing writes in critical paths. Option 1 adds unnecessary coupling and latency to the cart flow and risks quota exhaustion. Option 3 adds needless protocol conversion and complexity. Option 4 is insecure and exposes keys to the client. The selected hybrid keeps UX responsive while meeting vendor expectations and security best practices.
Question 24 of 60
A compliance rule mandates that PII never appears in logs and that secrets rotate quarterly. The gift card provider enforces mutual TLS and rate limiting. Which solution addresses both integration and security?
Explanation
The provider demands mTLS and rate limiting; REST real-time with SFCC Services fits and allows attaching a client certificate. Secrets must live in Service credentials with masked logs to satisfy the no-PII logging rule. Implementing client-side rate limiting and automated rotation windows meets quota and governance requirements. Option 1 embeds secrets in code and uses weaker auth. Option 3's full-payload logging violates policy and batch misses real-time redemption needs common to gift cards. Option 4's clear-text logs and ad hoc rotation conflict with compliance. The chosen design unites integration mechanics and security controls.
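A sketch of the log-masking side of this, assuming a service ID 'giftcard.rest'; the mTLS client certificate and rate limits are configured on the service profile and credential in Business Manager, and the field names and regexes below are illustrative:

```js
'use strict';

var LocalServiceRegistry = require('dw/svc/LocalServiceRegistry');

var giftCardService = LocalServiceRegistry.createService('giftcard.rest', {
    createRequest: function (svc, payload) {
        svc.setRequestMethod('POST');
        svc.addHeader('Content-Type', 'application/json');
        return JSON.stringify(payload);
    },
    parseResponse: function (svc, httpClient) {
        return JSON.parse(httpClient.text);
    },
    getRequestLogMessage: function (request) {
        // Mask card numbers before anything is written to the service log.
        return String(request).replace(/"cardNumber"\s*:\s*"\d+"/g, '"cardNumber":"****"');
    },
    getResponseLogMessage: function (httpClient) {
        return httpClient && httpClient.text
            ? httpClient.text.replace(/\d{12,19}/g, '****')
            : '';
    }
});
```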
Question 25 of 60
Your program must import 5M product records per night for three sites. Feeds arrive as PGP-encrypted CSV on SFTP. The business needs resumability, row-level error reporting, and post-import reindexing without blocking storefront traffic. Which Job Framework design best meets the requirements?
Explanation
The correct choice uses the productized Job Framework the way it is intended for high-volume, recoverable processing: staged, chunked, and checkpointed. Streaming parsing avoids memory spikes, and small transactions reduce lock contention. Checkpointing in a Custom Object enables resumability on failure without reprocessing the entire file. Site-scoped steps and a staged index with a final alias switch keep the storefront responsive. Credentials belong in Service credentials instead of code, and SFTP/PGP protect the feed in transit/at rest. Option 1 is risky: a single long transaction is prone to contention and rollback storms, and in-memory parsing is not scalable. Option 2 unnecessarily introduces an external real-time hop and OCAPI during batch, increasing latency and failure modes. Option 4 invites race conditions and data corruption because parallel Jobs modify the same catalog without coordination. The selected approach balances scale, safety, and operational control.
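A condensed sketch of one chunk-oriented step in such a Job, assuming a step parameter FileName, a Custom Object type 'ImportCheckpoint' with a lastProcessedRow attribute, and a hypothetical custom product attribute; chunk size, row-level error reporting, and the real field mapping live in the step configuration and are omitted here:

```js
'use strict';

var File = require('dw/io/File');
var FileReader = require('dw/io/FileReader');
var CSVStreamReader = require('dw/io/CSVStreamReader');
var Transaction = require('dw/system/Transaction');
var CustomObjectMgr = require('dw/object/CustomObjectMgr');
var ProductMgr = require('dw/catalog/ProductMgr');
var Logger = require('dw/system/Logger').getLogger('catalog.import');

var csvReader;
var rowsProcessed = 0;

exports.beforeStep = function (parameters) {
    var file = new File(File.IMPEX + '/src/catalog/' + parameters.FileName);
    csvReader = new CSVStreamReader(new FileReader(file, 'UTF-8'));
    csvReader.readNext(); // skip the header row
};

// Streamed read: one CSV row at a time, never the whole file in memory.
exports.read = function () {
    return csvReader.readNext();
};

// Validate and transform; report bad rows instead of failing the whole run.
exports.process = function (row) {
    if (!row || !row[0]) {
        Logger.warn('Skipping malformed row near record {0}', rowsProcessed);
        return null;
    }
    return { productId: row[0], enrichedName: row[1] };
};

// Each chunk commits in one small transaction, keeping locks short.
exports.write = function (chunk) {
    Transaction.wrap(function () {
        for (var i = 0; i < chunk.size(); i++) {
            var item = chunk.get(i);
            var product = ProductMgr.getProduct(item.productId);
            if (product) {
                product.custom.enrichedName = item.enrichedName; // hypothetical custom attribute
            }
            rowsProcessed++;
        }
        // Checkpoint for resumability after a failure.
        var checkpoint = CustomObjectMgr.getCustomObject('ImportCheckpoint', 'catalog')
            || CustomObjectMgr.createCustomObject('ImportCheckpoint', 'catalog');
        checkpoint.custom.lastProcessedRow = rowsProcessed;
    });
};

exports.afterStep = function () {
    csvReader.close();
};
```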
Question 26 of 60
You must apply nightly price book deltas per brand. Each brand must be processed sequentially to avoid overlapping edits, but the overall run should still utilize parallelism across independent steps. What approach should you take?
Explanation
The first option uses the Job Framework's strengths: controlled sequencing within a Job while still leveraging parallel execution at a finer granularity. Using a distributed lock per brand guarantees no overlap for the same target resources, while sub-job or step execution preserves observability and retry semantics. Chunking with Transaction.wrap keeps database work lean and reversible, and checkpoints permit resuming after failure. Option 2 abdicates control to the scheduler, which doesn't inherently serialize shared resources and can still overlap. Option 3 moves batch responsibility off-platform and into real-time OCAPI calls, introducing latency and rate-limit exposure. Option 4 creates a massive blast radius by replacing multiple price books in one transaction, which is fragile and difficult to recover from. Therefore, orchestrated sequential brand processing with locks is the most robust and auditable plan.
Question 27 of 60
27. Question
Orders must be exported to ERP every 10 minutes. Requirements include exactly-once delivery, idempotent retries, archiving payloads, and clear replay procedures. Which Job design is most appropriate?
Correct
The fourth option aligns with batch integration best practices on SFCC Jobs: an export state machine, idempotent artifacts, and an auditable archive. Selecting by status prevents duplicates; atomic state updates coupled with unique filenames and checksums provide exactly-once semantics even under retries. SFTP via the Services framework keeps credentials safe, and a replay Step offers controlled reprocessing without code changes. Option 1 relies on timeouts and storefront behavior, which is brittle and mixes transactional flows with batch. Option 2 offloads control outside the platform, losing Job monitoring, credential governance, and audit trails. Option 3 uses a controller for batch, risking request timeouts and contention while polluting request logs. The chosen Job structure gives durability, observability, and repeatability.
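As an illustration of idempotent export artifacts, a small TypeScript sketch (exportArtifact, the filename layout, and the state comments are assumptions, not the platform's export format):

```typescript
import { createHash } from "node:crypto";

// Sketch: derive a deterministic filename and checksum for an export batch so a
// retried upload produces the same artifact and the receiving ERP can de-duplicate.

interface ExportBatch { windowStart: string; orderNos: string[]; payload: string; }

function exportArtifact(batch: ExportBatch): { fileName: string; checksum: string } {
  const checksum = createHash("sha256").update(batch.payload).digest("hex");
  // Same window + same payload => same name on every retry attempt.
  const fileName = `orders_${batch.windowStart}_${checksum.slice(0, 12)}.xml`;
  return { fileName, checksum };
}

const batch: ExportBatch = {
  windowStart: "2024-05-01T10-00Z",
  orderNos: ["00012345", "00012346"],
  payload: "<orders>...</orders>",
};

const { fileName, checksum } = exportArtifact(batch);
console.log(fileName, checksum);
// The job would then move each order through a state machine:
// READY -> EXPORTING -> upload via SFTP -> verify checksum -> EXPORTED,
// so a crash between steps can be replayed without producing duplicates.
```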
Question 28 of 60
28. Question
Your warehouse sends hourly inventory CSVs with occasional bad lines. Business needs partial success, a reject file for bad rows, and quick recovery if a file is re-sent. What should your Job do?
Correct
The second option reflects resilient batch design: stream, validate, and commit in small units so a few bad lines don't block good data. A dedicated reject file supports operations and can be reprocessed after correction. Idempotent upsert logic keyed by SKU and a feed watermark allows safe re-ingestion of re-sent files. Option 1's all-or-nothing approach harms availability and delays corrections. Option 3 bloats storage and defers work without improving throughput or traceability in the Job Framework. Option 4 converts batch to chatty real-time OCAPI calls, incurring rate limits and losing unified error handling. Streaming with partial commits and rejects provides the best balance of integrity and speed.
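A minimal sketch of the validate-and-reject flow (TypeScript; parseRow, the reject-line format, and the in-memory inventory map are illustrative stand-ins):

```typescript
// Sketch: validate each row, upsert good rows keyed by SKU, and collect bad rows
// for a reject file so one malformed line never blocks the rest of the feed.

interface InventoryRow { sku: string; qty: number; }

const inventory = new Map<string, number>();   // stand-in for the inventory list
const rejects: string[] = [];                  // rows destined for the reject file

function parseRow(line: string, lineNo: number): InventoryRow | null {
  const [sku, qtyRaw] = line.split(",");
  const qty = Number(qtyRaw);
  if (!sku || !Number.isInteger(qty) || qty < 0) {
    rejects.push(`${lineNo},${line},invalid`);
    return null;
  }
  return { sku: sku.trim(), qty };
}

function ingest(lines: string[]): void {
  lines.forEach((line, i) => {
    const row = parseRow(line, i + 1);
    if (!row) return;                 // bad row -> reject file, keep going
    inventory.set(row.sku, row.qty);  // upsert keyed by SKU: re-sent files are safe
  });
}

ingest(["SKU-1,10", "SKU-2,abc", "SKU-3,0"]);
console.log(inventory, rejects); // SKU-2 lands in the reject file, the others commit
```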
Question 29 of 60
29. Question
Compliance mandates that customer-data purge runs nightly with the ability to pause during peak traffic and resume later. The run spans multiple sites and must not overlap with itself. What is your recommended pattern?
Correct
The second approach uses Job capabilities for long-running, controllable batch work. Pausing at chunk boundaries avoids half-written transactions, and a persisted cursor enables precise resume. A distributed lock ensures no concurrent runs of the same purge operate on the same resources. This design also keeps visibility in Business Manager with logs and history intact. Option 1 risks overlap or unpredictable queuing and makes coordination tricky at scale. Option 3's single large transaction is dangerous, likely to hit limits and cause lock contention. Option 4 removes governance and observability by moving control out of the Job Framework and disables logging, which contradicts compliance needs. The recommended pattern balances safety, performance, and control.
Question 30 of 60
30. Question
Product media transformations (thumbnails, webp) must be generated weekly from a 200GB source set stored in SFTP. Operations want automatic retry, throughput control, and a consolidated run report emailed to admins. Which Job design fits?
Correct
The first choice leverages the Job Framework for paging large listings, controlling throughput, and orchestrating multi-step processing with durable retry. Using Custom Objects to track failed items makes targeted re-runs possible. Generating a consolidated report within the Job and emailing via configured BM settings preserves auditability and avoids leaking credentials. Option 2's single Step with a long timeout risks termination, provides poor visibility, and lacks throttling. Option 3 is insecure and unreliable; browsers shouldn't run privileged batch work or send system emails. Option 4 tries to escape the sandbox with OS processes and loses platform-level observability and governance. The selected multi-step Job strikes the right balance of resilience, performance, and reporting.
Question 31 of 60
31. Question
Checkout declines increase and page timeouts appear on payment authorization under peak load. How should you harden the integration?
Correct
The correct answer balances user experience with safety and performance. Separate, conservative timeouts avoid tying up threads, and limiting retries to timeouts (not hard declines) prevents double-charges. Exponential backoff with a circuit breaker stops cascading failures and reduces queueing delays. Pre-tokenization moves expensive steps earlier to reduce critical-path latency. Clear messaging keeps user trust. Option 1 risks duplicate authorizations and gateway throttling. Option 2 increases tail latency and hurts throughput under load. Option 3 jeopardizes revenue integrity and increases manual work. The recommended pattern is a standard service-resilience design and produces actionable telemetry to distinguish gateway health from business rejections.
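A rough sketch of the retry policy (TypeScript; TimeoutError, DeclinedError, and the thresholds are illustrative assumptions, not gateway-specific types):

```typescript
// Sketch: retry only timeouts (never hard declines), back off exponentially, and
// trip a circuit breaker after repeated failures so checkout threads fail fast
// instead of queueing behind a sick gateway.

class TimeoutError extends Error {}
class DeclinedError extends Error {}

let consecutiveFailures = 0;
let circuitOpenUntil = 0;

async function authorize(call: () => Promise<string>, maxRetries = 2): Promise<string> {
  if (Date.now() < circuitOpenUntil) throw new Error("circuit open: failing fast");

  for (let attempt = 0; ; attempt++) {
    try {
      const result = await call();
      consecutiveFailures = 0; // a healthy response closes the breaker again
      return result;
    } catch (err) {
      if (err instanceof DeclinedError) throw err; // hard decline: never retry (double-charge risk)
      if (!(err instanceof TimeoutError) || attempt >= maxRetries) {
        if (++consecutiveFailures >= 5) circuitOpenUntil = Date.now() + 30_000; // open for 30s
        throw err;
      }
      await new Promise((r) => setTimeout(r, 200 * 2 ** attempt)); // exponential backoff
    }
  }
}

// Usage sketch (gatewayAuthorize and the friendly retry message are placeholders):
// authorize(() => gatewayAuthorize(payload)).catch(() => showFriendlyRetryMessage());
```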
Question 32 of 60
32. Question
The spec outlines “idempotent order submission, correlation IDs from cart to OMS, and retry with backoff.” Logs show duplicate orders under retry storms. What fix best meets the specification?
Correct
Idempotency keys ensure that multiple submissions of the same logical order result in a single committed order. Combined with exponential backoff, they prevent thundering herds on transient failures. Increasing retry frequency undercuts stability and exacerbates duplicates. Disabling retries harms UX and revenue during brief downstream hiccups. Random UUIDs per attempt break idempotency and force manual dedupe, which violates the spec. Correlation IDs should persist from cart through OMS to trace behavior. Implementing these controls aligns resiliency with correctness. It also simplifies post-incident analysis and SLA reporting. This fix precisely delivers on the technical specification and business reliability goals.
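A minimal sketch of how an idempotency key can behave (TypeScript; the key inputs and the dedupe store are illustrative assumptions):

```typescript
import { createHash } from "node:crypto";

// Sketch: derive the idempotency key from the logical order (not per attempt), and
// let retries reuse it so the receiving system commits the order exactly once.

function idempotencyKey(basketId: string, customerId: string, totalCents: number): string {
  // Stable inputs only: a random UUID per attempt would defeat the purpose.
  return createHash("sha256")
    .update(`${basketId}|${customerId}|${totalCents}`)
    .digest("hex");
}

const committed = new Set<string>(); // stand-in for the OMS-side dedupe store

function submitOrder(key: string): "created" | "duplicate" {
  if (committed.has(key)) return "duplicate"; // retry recognized, not re-created
  committed.add(key);
  return "created";
}

const key = idempotencyKey("basket-123", "cust-9", 15998);
console.log(submitOrder(key)); // "created"
console.log(submitOrder(key)); // "duplicate" (the retry is short-circuited)
```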
Question 33 of 60
33. Question
You need a fast rollback plan for a risky release. Which compile/deploy approach provides the cleanest rollback?
Correct
Option 4 is correct because code version activation is instantaneous and reversible when versions are immutable and preserved. Keeping prior versions on the instance allows a quick, low-risk rollback without re-uploading. If data was changed, re-importing the previous package completes the rollback and restores parity. Option 1 produces unknown states that can't be cleanly reversed. Option 2 relies on a personal copy and lacks governance. Option 3 deletes history and can remove artifacts needed for analysis. The recommended plan is auditable, fast, and aligned with controlled change windows.
Question 34 of 60
34. Question
You're standardizing logging across multiple cartridges (SFRA app, payments, search). Compliance wants traceability without flooding Production. What's the best end-to-end approach?
Correct
Option 2 is correct because governance requires structure (categories), observability requires correlation, and best practice keeps DEBUG off by default. A TTL’d flag prevents “debug left on” incidents. Log Center aggregation plus SIEM forwarding provides central visibility and retention. Sampling verbose classes limits cost and noise while preserving signal. Option 1 creates excessive volume and risk. Option 3 lacks the application context needed to triage business logic faults. Option 4 violates data-minimization and security rules even if masked. The proposed design balances traceability, performance, and compliance.
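A simplified sketch of category-scoped logging with a TTL'd debug flag (TypeScript; the JSON shape and the flag store are assumptions, not the dw.system.Logger API):

```typescript
// Sketch: category-scoped logging with a correlation ID and a debug flag that
// expires automatically, so DEBUG can never be "left on" in Production.

type Level = "DEBUG" | "INFO" | "WARN" | "ERROR";

interface DebugFlag { category: string; expiresAt: number; }
const debugFlags: DebugFlag[] = [{ category: "payments", expiresAt: Date.now() + 3_600_000 }];

function debugEnabled(category: string): boolean {
  const now = Date.now();
  return debugFlags.some((f) => f.category === category && f.expiresAt > now);
}

function log(level: Level, category: string, correlationId: string, msg: string): void {
  if (level === "DEBUG" && !debugEnabled(category)) return; // DEBUG is off by default
  // One structured line per event keeps Log Center / SIEM queries simple.
  console.log(JSON.stringify({ ts: new Date().toISOString(), level, category, correlationId, msg }));
}

log("INFO", "payments", "c0ffee-01", "authorization requested");
log("DEBUG", "payments", "c0ffee-01", "emitted only while the TTL'd flag is alive");
log("DEBUG", "search", "c0ffee-01", "never emitted: no active debug flag for this category");
```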
Question 35 of 60
35. Question
A sporadic 500 occurs after checkout for a minority of users in one region. Which plan best leverages Log Center to isolate and monitor the defect?
Correct
Option 4 is correct because focused searches reduce noise and saved queries accelerate every investigation. Threshold-based alerts surface regressions quickly. Narrow, time-boxed DEBUG on the impacted category respects performance and governance. Correlation IDs allow stitching user flow across services. Option 1 is passive and risks customer impact. Option 2 is excessive and expensive. Option 3's synthetic checks lack the user context necessary to pinpoint the path. The chosen plan uses Log Center as intended: scoped search, alerting, and disciplined verbosity.
Question 36 of 60
36. Question
Security asks for provable log hygiene: no PII, consistent redaction, and auditable configuration. What should you implement?
Correct
Option 1 is correct because redaction must occur at source to prevent leakage at rest and in transit. A shared helper ensures consistency across cartridges. Pattern-based masking with tests provides governance and prevents regressions. A toggle for extra redaction supports regional rules without code changes. A runbook plus change control gives auditors the trace they need. Option 2 relies on downstream controls and risks exposure. Option 3 normalizes bad habits and increases data sprawl. Option 4 leads to drift and inconsistent masking. This approach raises trust while keeping logs actionable.
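A minimal sketch of a shared redaction helper with a guard test (TypeScript; the patterns shown are deliberately crude examples, not a complete PII rule set):

```typescript
// Sketch: mask known PII patterns before a message is ever handed to the logger,
// so nothing sensitive reaches Log Center or downstream SIEM storage.

const EMAIL = /[\w.+-]+@[\w-]+\.[\w.]+/g;
const PAN = /\b\d{13,16}\b/g; // crude card-number pattern for illustration only

function redact(message: string): string {
  return message.replace(EMAIL, "[email]").replace(PAN, "[pan]");
}

// A unit test like this guards against regressions in the masking rules.
const sample = "payment failed for jane.doe@example.com card 4111111111111111";
const masked = redact(sample);
console.assert(!masked.includes("@") && !masked.includes("4111"), "redaction regressed");
console.log(masked); // "payment failed for [email] card [pan]"
```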
Question 37 of 60
37. Question
After a promotion launch, latency spikes appear for search suggestions. Logs are voluminous but inconclusive. What should you add to the implementation to improve diagnosability before the next event?
Correct
Option 3 is correct because structured context turns free-text logs into analyzable telemetry. Cache and backend timings pinpoint where time is lost. Correlation IDs connect client calls to downstream latency. Dashboards with percentiles make regressions visible at a glance. Option 1 removes the very signal you need. Option 2 stores more noise rather than improving signal quality. Option 4 is incomplete because suggestions often bypass CDN. The recommended enhancements make future analysis faster and more precise, aligning with best practices.
Question 38 of 60
38. Question
An integration team wants end-to-end tracing across CDN → B2C Commerce → middleware → ERP. What logging and header strategy fits governance and Log Center usage?
Correct
Option 2 is correct because a single, propagated correlation value enables stitching spans across systems. Logging it consistently in all categories makes Log Center queries trivial. Saved searches and dashboards then become reusable runbooks. Option 1 prevents end-to-end linkage and slows investigations. Option 3 raises privacy concerns and is brittle across domains. Option 4 loses causality from the user edge. The recommended header strategy is lightweight, compliant, and observable.
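A small sketch of correlation-ID propagation (TypeScript; the header name x-correlation-id is an assumption, to be replaced by whatever the governance standard specifies):

```typescript
import { randomUUID } from "node:crypto";

// Sketch: accept an inbound correlation header if present, otherwise mint one,
// and propagate the same value on every outbound call and in every log line.

const HEADER = "x-correlation-id";

function correlationIdFrom(headers: Record<string, string>): string {
  return headers[HEADER] ?? randomUUID();
}

function outboundHeaders(correlationId: string): Record<string, string> {
  return { [HEADER]: correlationId, "content-type": "application/json" };
}

const inbound = { [HEADER]: "edge-7f3a" };        // e.g. set at the CDN edge
const cid = correlationIdFrom(inbound);
console.log(JSON.stringify({ category: "middleware.call", correlationId: cid }));
console.log(outboundHeaders(cid));                // the same value flows on to ERP
```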
Question 39 of 60
39. Question
Your Production logs show INFO-level noise from a recommendation plugin, pushing ingestion costs up and burying errors. What should you mandate?
Correct
Option 1 is correct because category thresholds let you tune verbosity precisely without losing critical events. Sampling preserves trend visibility while controlling cost. An alert on error bursts ensures genuine problems surface fast. Option 2 ignores operational realities. Option 3 is the opposite of best practice and will inflate volume. Option 4 removes useful context and may hide genuine failures. Proper tuning keeps logs useful and trustworthy under governance constraints.
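A tiny sketch of level-aware sampling (TypeScript; the 5% rate is an arbitrary illustration):

```typescript
// Sketch: keep WARN/ERROR intact, but emit only a sample of a noisy category's
// INFO lines so trends stay visible without inflating ingestion volume.

function shouldEmit(level: "INFO" | "WARN" | "ERROR", sampleRate = 0.05): boolean {
  if (level !== "INFO") return true;      // never sample away warnings or errors
  return Math.random() < sampleRate;      // roughly 5% of INFO survives
}

let emitted = 0;
for (let i = 0; i < 1_000; i++) {
  if (shouldEmit("INFO")) emitted++;
}
console.log(`kept roughly ${emitted} of 1000 INFO lines`);
console.log(shouldEmit("ERROR")); // always true
```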
Question 40 of 60
40. Question
A vendor requests direct access to Production logs for their connector triage. What approach satisfies trust and governance?
Correct
Option 4 is correct because scoped forwarding enforces least privilege, filtering sensitive fields and categories. Time-bound access and documented agreements meet governance expectations. Automatic revocation reduces long-lived risk. Option 1 violates credential hygiene. Option 2 grants overly broad access to Production. Option 3 creates uncontrolled data copies. The recommended method keeps auditability high and exposure low while still enabling effective vendor triage.
Question 41 of 60
41. Question
The team struggles to tell whether an incident is a regression or traffic anomaly. How should you use logs and tools to create actionable baselines?
Correct
Option 3 is correct because percentiles reflect tail pain that averages hide. Storing baselines makes deviations obvious and defensible. Pairing with SLOs turns trends into decisions. Anomaly detection shortens time to signal during off-pattern events. Option 1 is unreliable and not auditable. Option 2 misses critical regressions. Option 4 manipulates data and can hide problems that start at low volume. The approach makes logs a governance asset and not just a record of the past.
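A minimal sketch of the percentile baseline computation (TypeScript; the sample window and how baselines are stored are assumptions):

```typescript
// Sketch: compute p95/p99 from a window of latency samples; stored per hour, these
// become the baseline that alerting and anomaly detection compare against.

function percentile(samples: number[], p: number): number {
  const sorted = [...samples].sort((a, b) => a - b);
  const idx = Math.min(sorted.length - 1, Math.ceil((p / 100) * sorted.length) - 1);
  return sorted[Math.max(0, idx)];
}

const latenciesMs = [120, 130, 128, 140, 2200, 135, 150, 9000, 138, 142];
console.log({
  p50: percentile(latenciesMs, 50),
  p95: percentile(latenciesMs, 95),
  p99: percentile(latenciesMs, 99), // the tail pain that an average would hide
});
```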
Question 42 of 60
42. Question
During a penetration test, auditors find credentials and emails in exception messages. What is the best remediation plan tied to logging practices?
Correct
Option 2 is correct because it addresses the root: sanitize before emit, separate user-facing messages from diagnostic logs, and enforce by tests. Opaque codes preserve traceability without revealing details. Redaction ensures traces can be kept for analysis without violating policy. Option 1 ignores the issue. Option 3 removes critical incident data. Option 4 mislabels severity and undermines alerting. The plan combines security with operational readiness and meets governance requirements.
Question 43 of 60
43. Question
Scheduled jobs (catalog feed, price sync) fail intermittently. How do you enhance logging and monitoring to isolate and prevent recurrences?
Correct
Option 3 is correct because structured job telemetry reveals where failures occur and whether they are data or system related. Correlation IDs link to partner systems for end-to-end triage. Alerts on consecutive failures and SLA breaches catch silent degradations. Dashboards quantify impact and validate fixes. Option 1 hides symptoms and delays root cause analysis. Option 2 ignores the path where most clues live. Option 4 creates noise and performance risk in Production. The recommended approach makes jobs observable and auditable.
Question 44 of 60
44. Question
PDP TTFB spikes during traffic, and cache-hit ratio is low. The page includes localized content and customer-specific recommendations. What is the best remediation approach?
Correct
The correct answer keeps the heavy, stable PDP shell cacheable while isolating volatile, user-specific fragments. Page caching is most effective when cache keys encode all dimensions that change the HTML (locale, currency/pricebook, sometimes device). Moving personalized zones to remote includes or client-side calls prevents cache fragmentation and stampedes. Proper TTLs with cache debugging improve hit ratio and reduce origin CPU. Option 1 removes an important performance lever and shifts complexity to the CDN, where personalization still breaks caching. Option 2 adds cost without addressing the root cause: cache key design and content separation. Option 4 would leak personalization and is unsafe for privacy and relevance. The recommended design maximizes cache utility, reduces quota pressure on services, and provides measurable improvements visible in Log Center dashboards.
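A sketch of a cache key that encodes the HTML-changing dimensions while excluding personalization (TypeScript; PdpContext and the key layout are illustrative, not the platform's page-cache key format):

```typescript
// Sketch: the cacheable PDP shell is keyed on every dimension that changes its
// HTML (product, locale, price book, device class), while personalized zones are
// fetched separately and never enter this key.

interface PdpContext {
  productId: string;
  locale: string;       // e.g. "en_GB"
  priceBook: string;    // e.g. "gbp-list"
  deviceClass: "mobile" | "desktop";
}

function pdpCacheKey(ctx: PdpContext): string {
  // Deliberately excludes customer ID, segment, and recommendation state.
  return ["pdp", ctx.productId, ctx.locale, ctx.priceBook, ctx.deviceClass].join("|");
}

console.log(pdpCacheKey({ productId: "P123", locale: "en_GB", priceBook: "gbp-list", deviceClass: "mobile" }));
// => "pdp|P123|en_GB|gbp-list|mobile" — the same key for every shopper in that context
```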
Question 45 of 60
45. Question
Cart AJAX calls are hitting OCAPI rate limits during promos, causing intermittent 429s. What should you implement first to relieve pressure without breaking behavior?
Correct
The correct option reduces needless duplicate calls and smooths bursts. Debounce and coalescing prevent “type-ahead thrash” from spamming OCAPI, while short-lived caches on read paths lower per-user request counts. Backoff with jitter avoids retry storms that worsen quota violations. Log Center quota widgets validate improvements and catch regressions. Option 2 treats a symptom and may be infeasible or expensive; it also doesn’t scale for peak campaigns. Option 3 masks the error but adds latency and complexity without lowering call volume. The recommended steps are low risk, quick to ship, and directly target the call volume and burstiness behind the 429s.
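A small sketch of debouncing plus jittered backoff on the client (TypeScript; the timings and the basket call are illustrative):

```typescript
// Sketch: debounce rapid cart updates into one call and, when a 429 does occur,
// back off exponentially with jitter so clients do not retry in lockstep.

function debounce<T extends unknown[]>(fn: (...args: T) => void, waitMs: number) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: T): void => {
    if (timer) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), waitMs);
  };
}

function backoffDelay(attempt: number, baseMs = 250, capMs = 8_000): number {
  const exp = Math.min(capMs, baseMs * 2 ** attempt);
  return Math.random() * exp; // "full jitter": spread retries instead of synchronizing them
}

const updateQty = debounce((sku: string, qty: number) => {
  console.log(`PATCH basket item ${sku} -> ${qty}`); // only the last change in the window is sent
}, 300);

updateQty("SKU-1", 2);
updateQty("SKU-1", 3);
updateQty("SKU-1", 4); // a single request for qty=4 fires after 300 ms

console.log(backoffDelay(0), backoffDelay(1), backoffDelay(2)); // growing, randomized waits
```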
Question 46 of 60
46. Question
The spec requires “Page Designer for campaign pages, approvals in Business Manager, and staging to production via replication jobs.” The team deploys static templates and edits directly in production. What is the correct remediation plan?
Correct
The specification emphasizes governed content operations: Page Designer authoring, approvals, and controlled replication. Rebuilding those pages in Page Designer returns business control, auditability, and safe rollback. A spreadsheet is not an approval workflow and is error-prone. Screenshots in notes do not enforce process or prevent mistakes. Disabling approvals removes a required control and risks brand or legal issues. With Page Designer, content versions and experiences allow quick iterate-and-rollback. Replication guarantees parity across instances. Approvals ensure four-eyes review for regulated content. This remediation realigns the build to governance and business needs.
Question 47 of 60
47. Question
Search suggestions stall when users type quickly; service calls are chatty and cache misses are frequent. What is the most effective change?
Correct
The correct answer addresses burstiness and cacheability. Debounce and thresholds reduce keystroke chatter, cancellation prevents outdated requests from consuming capacity, and prefix-keyed caches make popular openings (“iph”, “run”) hot cache entries. Short TTLs keep results fresh while absorbing traffic spikes. Monitoring percentiles and cache hit rate validates the improvement. Option 1 increases server load and does nothing to remove extra calls. Option 3 expands index size but doesn’t treat the pacing or caching issues. The proposed approach is low effort and high impact for both latency and quota consumption.
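A sketch of keystroke cancellation with a prefix-keyed cache (TypeScript; this uses the standard AbortController and fetch APIs, and the /api/suggestions endpoint is a placeholder):

```typescript
// Sketch: cancel the previous in-flight suggestion request when a new keystroke
// arrives, and cache results under the typed prefix so popular openings stay hot.

const prefixCache = new Map<string, string[]>();
let inFlight: AbortController | undefined;

async function suggest(prefix: string): Promise<string[]> {
  const key = prefix.toLowerCase();
  const cached = prefixCache.get(key);
  if (cached) return cached;                      // hot prefix: no network call

  inFlight?.abort();                              // drop the now-stale request
  inFlight = new AbortController();

  const res = await fetch(`/api/suggestions?q=${encodeURIComponent(key)}`, {
    signal: inFlight.signal,
  });
  const results: string[] = await res.json();
  prefixCache.set(key, results);                  // a short TTL would evict this in practice
  return results;
}

suggest("iph").catch(() => {});                   // aborted by the next keystroke
suggest("ipho").then((r) => console.log(r));
```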
Question 48 of 60
48. Question
Custom cache shows frequent evictions and low efficacy for PLP filters; keys include raw query strings. How should you stabilize hit rates?
Correct
The correct answer fixes key explosion and stampedes. Normalization ensures logically identical requests map to one entry, and excluding tracking parameters prevents needless cache diversity. Namespacing reduces cross-site churn. Not caching error responses avoids poisoning the cache with transient failures. Short TTLs with jitter stagger refreshes and reduce thundering herds. Option 1 sacrifices a major performance lever. Option 2 treats symptoms and can still thrash if the key space remains fragmented. Proper key design and expiry strategy usually yields a larger gain than raw capacity increases.
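A minimal sketch of query-string normalization for cache keys (TypeScript; the tracking-parameter list and key layout are assumptions):

```typescript
// Sketch: normalize a PLP query string into a stable cache key — sort parameters,
// drop tracking noise, lowercase values — so logically identical requests share
// one entry instead of exploding the key space.

const TRACKING = new Set(["utm_source", "utm_medium", "utm_campaign", "gclid", "fbclid"]);

function plpCacheKey(siteId: string, rawQuery: string): string {
  const params = new URLSearchParams(rawQuery);
  const kept: [string, string][] = [];
  for (const [k, v] of params) {
    if (TRACKING.has(k.toLowerCase())) continue;   // tracking params never vary the HTML
    kept.push([k.toLowerCase(), v.toLowerCase()]);
  }
  kept.sort(([a], [b]) => a.localeCompare(b));     // parameter order no longer matters
  return `${siteId}|plp|${kept.map(([k, v]) => `${k}=${v}`).join("&")}`;
}

console.log(plpCacheKey("siteA", "color=Blue&size=M&utm_source=mail"));
console.log(plpCacheKey("siteA", "size=M&color=blue"));
// Both print "siteA|plp|color=blue&size=m" — one shared entry instead of two.
```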
Question 49 of 60
49. Question
PDP calls a real-time inventory API on every view; timeouts rise during drops. What's the best remediation that protects UX and origin capacity?
Correct
The correct option reduces on-request dependency while maintaining accuracy. Short-TTL caches absorb burst reads, and prewarming top SKUs aligns capacity with demand patterns. Graceful degradation maintains page speed if the API is slow, while alerts ensure operators see real failures. Option 1 worsens tail latency and harms throughput. Option 2 distorts business rules and risks overselling. Option 3 arbitrarily punishes guests and still leaves spikes unmitigated. The recommended design modernizes read behavior without rewriting the integration.
Question 50 of 60
50. Question
PLP latency regresses after enabling dynamic promotional pricing sourced from an external service. How should you restore performance without losing promo fidelity?
Correct
The correct answer moves expensive computation off the request path. Price materialization into price books lets PLP render from local data, and short-TTL cache per segment preserves personalization without chatty calls. A graceful fallback protects TTFB and revenue if the service hiccups. Logging deltas ensures finance can reconcile. Option 1 guarantees high latency and quota risk. Option 3 sacrifices business outcomes and user trust. The proposed approach is common in high-scale commerce and yields predictable performance under load.
Question 51 of 60
51. Question
A sudden TTFB jump coincides with peak traffic, and teams are unsure whether it's infrastructure saturation or quota throttling. How do you triage quickly?
Correct
The correct option uses evidence rather than guesswork. Correlating latency percentiles with quota and saturation signals shows causality: throttling events often precede latency if call volume spikes, while CPU/thread exhaustion shows the opposite pattern. Baselines distinguish genuine regressions from traffic shape changes. Remedies then target the cause: backoff for quotas, work shedding for saturation. Option 2 masks symptoms and worsens tail latency. Option 3 risks stampedes and further origin stress. Data-driven triage shortens time to mitigation and avoids harmful “fixes.”
Question 52 of 60
52. Question
EU users see slower checkout than US users; logs show cross-region service endpoints used for tax calculations. What change improves performance most with minimal risk?
Correct
The correct answer eliminates unnecessary cross-region latency and adds resilience. Regional endpoints reduce round-trip time, and circuit breakers prevent slow regions from degrading the whole checkout. Jurisdiction-scoped caches keep results fresh while avoiding recomputation on identical baskets. Regional dashboards confirm the improvement and catch regressions. Option 1 increases tail times and backend load. Option 2 jeopardizes correctness due to regional rules. Option 3 raises security and privacy concerns and fragments observability. Localizing services is the standard cure for region-specific slowness.
Question 53 of 60
53. Question
Cache stampedes occur when PLP fragments expire, causing origin spikes. What's the most appropriate mitigation?
Correct
The correct option addresses coordination, not just capacity. Soft-TTL lets users see slightly stale content while a background refresh updates the entry. Jitter spreads expirations to avoid synchronized invalidations. Coalescing ensures only one request triggers regeneration, and concurrency caps stop dog-piles. Option 1 removes a critical performance tool. Option 3 may delay the next storm and increases stale risk without solving synchronization. The recommended pattern is widely used to stabilize caches under bursty demand.
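A rough sketch of soft-TTL with jitter and single-flight refresh (TypeScript; the TTLs and cache structure are illustrative):

```typescript
// Sketch: serve slightly stale fragments past a soft TTL while one coalesced
// refresh runs in the background; expirations are jittered so entries do not
// all invalidate at the same instant. (Error handling omitted for brevity.)

interface Entry { value: string; softExpiry: number; }

const cache = new Map<string, Entry>();
const refreshing = new Map<string, Promise<string>>(); // single-flight guard

function jitteredTtl(baseMs: number): number {
  return baseMs + Math.random() * baseMs * 0.2; // +0..20% spreads expirations
}

function refresh(key: string, regenerate: () => Promise<string>): Promise<string> {
  let pending = refreshing.get(key);
  if (!pending) {
    pending = regenerate().then((value) => {
      cache.set(key, { value, softExpiry: Date.now() + jitteredTtl(60_000) });
      refreshing.delete(key);
      return value;
    });
    refreshing.set(key, pending);
  }
  return pending; // concurrent callers all share this one regeneration
}

async function getFragment(key: string, regenerate: () => Promise<string>): Promise<string> {
  const entry = cache.get(key);
  if (entry && entry.softExpiry > Date.now()) return entry.value; // fresh hit
  if (entry) {
    void refresh(key, regenerate);   // stale: refresh once in the background...
    return entry.value;              // ...but answer immediately with the old value
  }
  return refresh(key, regenerate);   // cold miss: coalesced regeneration
}

getFragment("plp:shoes", async () => "<ul>…</ul>").then((html) => console.log(html.length));
```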
Incorrect
The correct option addresses coordination, not just capacity. Soft-TTL lets users see slightly stale content while a background refresh updates the entry. Jitter spreads expirations to avoid synchronized invalidations. Coalescing ensures only one request triggers regeneration, and concurrency caps stop dog-piles. Option 1 removes a critical performance tool. Option 3 may delay the next storm and increases stale risk without solving synchronization. The recommended pattern is widely used to stabilize caches under bursty demand.
Unattempted
The correct option addresses coordination, not just capacity. Soft-TTL lets users see slightly stale content while a background refresh updates the entry. Jitter spreads expirations to avoid synchronized invalidations. Coalescing ensures only one request triggers regeneration, and concurrency caps stop dog-piles. Option 1 removes a critical performance tool. Option 3 may delay the next storm and increases stale risk without solving synchronization. The recommended pattern is widely used to stabilize caches under bursty demand.
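A minimal sketch of that coordinated pattern, assuming a simple in-memory fragment cache: soft TTL with background refresh, jittered expiries, and request coalescing so only one caller regenerates an expired entry. Class and field names are illustrative.

```typescript
// Stampede-safe cache sketch: serve stale during refresh (soft TTL), jitter
// expirations, and coalesce concurrent regenerations behind one promise.

type Loader = () => Promise<string>;

interface Entry {
  value: string;
  softExpiry: number; // after this, serve stale and refresh in the background
}

class StampedeSafeCache {
  private entries = new Map<string, Entry>();
  private inFlight = new Map<string, Promise<string>>();

  constructor(private softTtlMs = 60_000, private jitterMs = 10_000) {}

  async get(key: string, load: Loader): Promise<string> {
    const entry = this.entries.get(key);
    const now = Date.now();

    if (entry && now < entry.softExpiry) return entry.value; // fresh hit
    if (entry) {
      void this.refresh(key, load); // stale hit: refresh once, serve old value now
      return entry.value;
    }
    return this.refresh(key, load); // cold miss: coalesced regeneration
  }

  private refresh(key: string, load: Loader): Promise<string> {
    const pending = this.inFlight.get(key);
    if (pending) return pending; // another request is already regenerating

    const p = load()
      .then((value) => {
        // Jitter the expiry so entries written together do not expire together.
        const softExpiry =
          Date.now() + this.softTtlMs + Math.random() * this.jitterMs;
        this.entries.set(key, { value, softExpiry });
        return value;
      })
      .finally(() => this.inFlight.delete(key));

    this.inFlight.set(key, p);
    return p;
  }
}
```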
Question 54 of 60
54. Question
During peak, a subset of customers reports duplicate orders after clicking “Place Order” twice. Logs show two payment authorizations with the same basket ID within 500 ms. What’s the most likely root cause and best fix?
Correct
The key evidence is two authorizations for the same basket almost simultaneously, which points to the server treating repeat submits as independent transactions. Client-side debouncing helps UX but cannot be trusted for correctness under race conditions and dropped responses. The robust remedy is enforcing idempotency and atomicity when converting baskets to orders and when invoking the payment gateway. An idempotency key derived from basket and attempt metadata ensures the second request is recognized as a duplicate and short-circuited. Atomic locking around the conversion prevents double consumption of inventory and promotion counters. Option 1 alone remains vulnerable to retries, refreshes, and back-button behavior. Option 3 treats capacity, not concurrency, and duplicates can still slip through at high throughput. Option 4 is unrelated to order submission semantics and won't stop duplicate payment calls. Root-cause resolution requires server-side idempotency guarantees and transactional guards.
Incorrect
The key evidence is two authorizations for the same basket almost simultaneously, which points to the server treating repeat submits as independent transactions. Client-side debouncing helps UX but cannot be trusted for correctness under race conditions and dropped responses. The robust remedy is enforcing idempotency and atomicity when converting baskets to orders and when invoking the payment gateway. An idempotency key derived from basket and attempt metadata ensures the second request is recognized as a duplicate and short-circuited. Atomic locking around the conversion prevents double consumption of inventory and promotion counters. Option 1 alone remains vulnerable to retries, refreshes, and back-button behavior. Option 3 treats capacity, not concurrency, and duplicates can still slip through at high throughput. Option 4 is unrelated to order submission semantics and won't stop duplicate payment calls. Root-cause resolution requires server-side idempotency guarantees and transactional guards.
Unattempted
The key evidence is two authorizations for the same basket almost simultaneously, which points to the server treating repeat submits as independent transactions. Client-side debouncing helps UX but cannot be trusted for correctness under race conditions and dropped responses. The robust remedy is enforcing idempotency and atomicity when converting baskets to orders and when invoking the payment gateway. An idempotency key derived from basket and attempt metadata ensures the second request is recognized as a duplicate and short-circuited. Atomic locking around the conversion prevents double consumption of inventory and promotion counters. Option 1 alone remains vulnerable to retries, refreshes, and back-button behavior. Option 3 treats capacity, not concurrency, and duplicates can still slip through at high throughput. Option 4 is unrelated to order submission semantics and won't stop duplicate payment calls. Root-cause resolution requires server-side idempotency guarantees and transactional guards.
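For illustration only, a TypeScript sketch of server-side idempotency around order placement. The stores, the types, and the assumption that the client reuses one attempt identifier per logical checkout attempt are all hypothetical; a real implementation needs a shared, transactional store rather than in-process maps.

```typescript
// Sketch: derive an idempotency key from basket + attempt metadata and
// short-circuit duplicate submits, joining near-simultaneous clicks to the
// same in-flight attempt. Single-process illustration only.

import { createHash } from "node:crypto";

interface PlacedOrder {
  orderNo: string;
}

// Stand-ins for durable storage; must be shared and transactional in production.
const completedAttempts = new Map<string, PlacedOrder>();
const inFlight = new Map<string, Promise<PlacedOrder>>();

// Assumption: the client reuses the same attemptId when it retries or re-submits,
// so the derived key identifies one logical checkout attempt.
function idempotencyKey(basketId: string, attemptId: string): string {
  return createHash("sha256").update(`${basketId}:${attemptId}`).digest("hex");
}

async function placeOrderOnce(
  basketId: string,
  attemptId: string,
  submit: () => Promise<PlacedOrder> // basket-to-order conversion + payment authorization
): Promise<PlacedOrder> {
  const key = idempotencyKey(basketId, attemptId);

  const done = completedAttempts.get(key);
  if (done) return done; // duplicate submit after completion: no second authorization

  const pending = inFlight.get(key);
  if (pending) return pending; // second click within milliseconds joins the first attempt

  const attempt = submit()
    .then((order) => {
      completedAttempts.set(key, order);
      return order;
    })
    .finally(() => inFlight.delete(key));

  inFlight.set(key, attempt);
  return attempt;
}
```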
Question 55 of 60
55. Question
A new image optimization service was added in front of static content. Some locales intermittently show broken thumbnails and 5xx at the edge after deploys. Origin looks healthy. What's the most plausible cause and corrective action?
Correct
The failure pattern appears only after deploys and at the edge, while origin is fine, which suggests stale or mismatched variants in a middle cache tier. If the optimizer's cache key ignores the version parameter, the edge can serve outdated or partial files until purges propagate. Including the entire URL (with versioning query) in cache keys prevents collisions between old and new assets. Adding a content-version parameter on assets provides deterministic busting instead of broad purges. Option 1 reduces size but does not fix a key-collision bug. Option 2 targets origin saturation that logs do not support. Option 4 treats application cache, not the external optimizer logic, and introduces unnecessary operational risk. The recommended change aligns the cache key with deployment semantics and makes post-deploy behavior predictable.
Incorrect
The failure pattern appears only after deploys and at the edge, while origin is fine, which suggests stale or mismatched variants in a middle cache tier. If the optimizer's cache key ignores the version parameter, the edge can serve outdated or partial files until purges propagate. Including the entire URL (with versioning query) in cache keys prevents collisions between old and new assets. Adding a content-version parameter on assets provides deterministic busting instead of broad purges. Option 1 reduces size but does not fix a key-collision bug. Option 2 targets origin saturation that logs do not support. Option 4 treats application cache, not the external optimizer logic, and introduces unnecessary operational risk. The recommended change aligns the cache key with deployment semantics and makes post-deploy behavior predictable.
Unattempted
The failure pattern appears only after deploys and at the edge, while origin is fine, which suggests stale or mismatched variants in a middle cache tier. If the optimizer's cache key ignores the version parameter, the edge can serve outdated or partial files until purges propagate. Including the entire URL (with versioning query) in cache keys prevents collisions between old and new assets. Adding a content-version parameter on assets provides deterministic busting instead of broad purges. Option 1 reduces size but does not fix a key-collision bug. Option 2 targets origin saturation that logs do not support. Option 4 treats application cache, not the external optimizer logic, and introduces unnecessary operational risk. The recommended change aligns the cache key with deployment semantics and makes post-deploy behavior predictable.
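A small sketch of deterministic cache busting under these assumptions: the asset URL carries a content-version query, and the optimizer/edge cache key is the full URL plus the locale variant. Paths, versions, and function names are placeholders.

```typescript
// Versioned asset URLs plus full-URL cache keys: a deploy that bumps the
// content version naturally misses old cache entries, so old and new
// variants cannot collide.

function versionedAssetUrl(path: string, contentVersion: string): string {
  return `${path}?v=${encodeURIComponent(contentVersion)}`;
}

function cacheKey(url: string, locale: string): string {
  // Key on the complete URL (including ?v=...) and the locale variant.
  return `${locale}|${url}`;
}

// Example: after a deploy that ships build "2024-06-01.3"
const url = versionedAssetUrl("/images/thumb/shirt.jpg", "2024-06-01.3");
console.log(cacheKey(url, "fr_FR")); // "fr_FR|/images/thumb/shirt.jpg?v=2024-06-01.3"
```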
Question 56 of 60
56. Question
Merchants report that PLPs show stale prices even though the search index job finished successfully. No service timeouts are observed. What is the most likely root cause and corrective measure?
Correct
Search indexing success does not imply that pricebooks used to render PLP tiles are current or activated. If pricebook replication or activation lags behind indexing, the search documents are fresh but the price resolution layer serves old values. Chaining a pricebook replication/activation step in the post-index pipeline aligns data freshness, and targeted cache invalidation avoids a full-site purge. Option 2 is heavy-handed and harms performance without addressing price data freshness. Option 3 changes search topology, which is not indicated by the symptoms. Option 4 confuses session affinity with distributed caching and would not repair catalog price staleness. Root cause sits in pricebook lifecycle, not search runtime.
Incorrect
Search indexing success does not imply that pricebooks used to render PLP tiles are current or activated. If pricebook replication or activation lags behind indexing, the search documents are fresh but the price resolution layer serves old values. Chaining a pricebook replication/activation step in the post-index pipeline aligns data freshness, and targeted cache invalidation avoids a full-site purge. Option 2 is heavy-handed and harms performance without addressing price data freshness. Option 3 changes search topology, which is not indicated by the symptoms. Option 4 confuses session affinity with distributed caching and would not repair catalog price staleness. Root cause sits in pricebook lifecycle, not search runtime.
Unattempted
Search indexing success does not imply that pricebooks used to render PLP tiles are current or activated. If pricebook replication or activation lags behind indexing, the search documents are fresh but the price resolution layer serves old values. Chaining a pricebook replication/activation step in the post-index pipeline aligns data freshness, and targeted cache invalidation avoids a full-site purge. Option 2 is heavy-handed and harms performance without addressing price data freshness. Option 3 changes search topology, which is not indicated by the symptoms. Option 4 confuses session affinity with distributed caching and would not repair catalog price staleness. Root cause sits in pricebook lifecycle, not search runtime.
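Conceptually, the corrective ordering can be pictured as a strictly sequential pipeline in which price data is replicated and activated before any caches are touched. Every step name in the sketch below is a hypothetical placeholder, not an actual platform job component.

```typescript
// Sequential post-index pipeline sketch: later steps only run after earlier
// ones finish, so cache invalidation always sees activated pricebooks.

type Step = () => Promise<void>;

// Hypothetical placeholders for the real job steps.
const reindexSearch: Step = async () => { /* existing search index step */ };
const replicateAndActivatePricebooks: Step = async () => { /* newly chained step */ };
const invalidatePlpCaches: Step = async () => { /* targeted invalidation, not a full purge */ };

async function runPostIndexPipeline(steps: Step[]): Promise<void> {
  for (const step of steps) {
    await step(); // strictly sequential
  }
}

void runPostIndexPipeline([
  reindexSearch,
  replicateAndActivatePricebooks,
  invalidatePlpCaches,
]);
```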
Question 57 of 60
57. Question
After enabling additional DEBUG logging for a promotion ruleset, CPU usage and response times degrade across the site. What is the most defensible explanation and fix?
Correct
Elevated DEBUG logging often introduces heavy string formatting and repeated object allocations inside hot loops of promotion evaluation, which can consume CPU, increase lock contention on appenders, and inflate I/O wait. The fastest and least risky fix is to reduce verbosity, restructure log statements to avoid concatenation unless enabled, and introduce sampling where detail is needed. Using Log Center to compare percentiles before and after validates causality. Option 1 treats symptoms and can worsen GC without removing the root cause. Option 2 misunderstands why auth pages are slower and introduces correctness risks. Option 4 assumes unrelated DB issues and proposes operational Band-Aids. The performance regression is directly tied to logging discipline in critical code paths.
Incorrect
Elevated DEBUG logging often introduces heavy string formatting and repeated object allocations inside hot loops of promotion evaluation, which can consume CPU, increase lock contention on appenders, and inflate I/O wait. The fastest and least risky fix is to reduce verbosity, restructure log statements to avoid concatenation unless enabled, and introduce sampling where detail is needed. Using Log Center to compare percentiles before and after validates causality. Option 1 treats symptoms and can worsen GC without removing the root cause. Option 2 misunderstands why auth pages are slower and introduces correctness risks. Option 4 assumes unrelated DB issues and proposes operational Band-Aids. The performance regression is directly tied to logging discipline in critical code paths.
Unattempted
Elevated DEBUG logging often introduces heavy string formatting and repeated object allocations inside hot loops of promotion evaluation, which can consume CPU, increase lock contention on appenders, and inflate I/O wait. The fastest and least risky fix is to reduce verbosity, restructure log statements to avoid concatenation unless enabled, and introduce sampling where detail is needed. Using Log Center to compare percentiles before and after validates causality. Option 1 treats symptoms and can worsen GC without removing the root cause. Option 2 misunderstands why auth pages are slower and introduces correctness risks. Option 4 assumes unrelated DB issues and proposes operational Band-Aids. The performance regression is directly tied to logging discipline in critical code paths.
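A brief sketch of that logging discipline, using a generic logger interface rather than any specific platform API: the DEBUG message is built lazily behind a level check, and hot-loop detail is sampled. The interface, function, and sample rate are assumptions.

```typescript
// Guarded, sampled DEBUG logging: no string formatting unless the level is
// enabled, and only a fraction of hot-loop iterations emit detail.

interface Logger {
  isDebugEnabled(): boolean;
  debug(message: string): void;
}

function logPromotionEvaluation(
  log: Logger,
  promotionId: string,
  buildDetail: () => string, // expensive formatting deferred behind a closure
  sampleRate = 0.01          // log roughly 1% of evaluations in hot loops
): void {
  if (!log.isDebugEnabled()) return;       // cheap guard, no string work
  if (Math.random() >= sampleRate) return; // sampling bounds CPU and I/O
  log.debug(`promotion ${promotionId}: ${buildDetail()}`);
}
```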
Question 58 of 60
58. Question
Random checkout failures appear only for customers in one country. Logs show SSL handshake timeouts on a tax API in the same region. What explanation and action best fit the data?
Correct
Handshake timeouts point to the start of the connection lifecycle, not read latency. If clients connect per request with no keep-alive, handshake cost and sporadic TLS negotiation issues dominate. Updating cipher suites, enabling pooled persistent connections, and using a regional fallback path turns repeated handshakes into amortized costs and mitigates regional outages. Option 1 treats effects and risks tying up threads with long timeouts. Option 3 blames clients despite service evidence. Option 4 misroutes private service traffic through a CDN and doesn't address TLS negotiation. The fix is to modernize and reuse connections while providing regional redundancy.
Incorrect
Handshake timeouts point to the start of the connection lifecycle, not read latency. If clients connect per request with no keep-alive, handshake cost and sporadic TLS negotiation issues dominate. Updating cipher suites, enabling pooled persistent connections, and using a regional fallback path turns repeated handshakes into amortized costs and mitigates regional outages. Option 1 treats effects and risks tying up threads with long timeouts. Option 3 blames clients despite service evidence. Option 4 misroutes private service traffic through a CDN and doesn't address TLS negotiation. The fix is to modernize and reuse connections while providing regional redundancy.
Unattempted
Handshake timeouts point to the start of the connection lifecycle, not read latency. If clients connect per request with no keep-alive, handshake cost and sporadic TLS negotiation issues dominate. Updating cipher suites, enabling pooled persistent connections, and using a regional fallback path turns repeated handshakes into amortized costs and mitigates regional outages. Option 1 treats effects and risks tying up threads with long timeouts. Option 3 blames clients despite service evidence. Option 4 misroutes private service traffic through a CDN and doesn't address TLS negotiation. The fix is to modernize and reuse connections while providing regional redundancy.
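To illustrate connection reuse and regional fallback, the sketch below uses Node's built-in https agent as an example client; the endpoint URLs, timeout, and pool size are placeholders, not the actual integration.

```typescript
// Keep-alive pooling amortizes the TLS handshake across many requests
// instead of renegotiating per call; a regional fallback covers outages.

import * as https from "node:https";

const keepAliveAgent = new https.Agent({
  keepAlive: true, // reuse established TLS connections
  maxSockets: 20,  // bound concurrent connections to the tax API
});

const PRIMARY = "https://tax.eu.example.com/calculate";  // regional endpoint (placeholder)
const FALLBACK = "https://tax.us.example.com/calculate"; // cross-region fallback (placeholder)

function callTaxApi(url: string, body: string): Promise<string> {
  return new Promise((resolve, reject) => {
    const req = https.request(
      url,
      {
        method: "POST",
        agent: keepAliveAgent,
        timeout: 2000, // short timeout so checkout threads are not tied up
        headers: { "content-type": "application/json" },
      },
      (res) => {
        let data = "";
        res.on("data", (chunk) => (data += chunk));
        res.on("end", () => resolve(data));
      }
    );
    req.on("timeout", () => req.destroy(new Error("tax call timed out")));
    req.on("error", reject);
    req.end(body);
  });
}

// Try the regional endpoint first; fall back across regions only on failure.
async function calculateTax(body: string): Promise<string> {
  try {
    return await callTaxApi(PRIMARY, body);
  } catch {
    return callTaxApi(FALLBACK, body);
  }
}
```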
Question 59 of 60
59. Question
Abandoned carts spike after a new “stackable” promotion went live. CPU time in promotion evaluation doubled; some requests hit service timeouts. What root cause and approach are most credible?
Correct
A sharp CPU increase aligned with a combinable promotion strongly suggests rule interaction complexity, not cache size or compression. When combinability explodes, carts traverse many evaluation branches, leading to timeouts and poor UX. Imposing guardrails on maximum applicable promotions and evaluation depth immediately reduces worst-case cost. Precomputing frequent stacks or constraints offline brings latency back under control without losing business intent. Option 1 raises memory but not algorithmic efficiency. Option 2 helps bandwidth, not CPU-bound evaluation. Option 3 improves observability but not performance. The credible fix targets the rule engine's complexity while maintaining accuracy.
Incorrect
A sharp CPU increase aligned with a combinable promotion strongly suggests rule interaction complexity, not cache size or compression. When combinability explodes, carts traverse many evaluation branches, leading to timeouts and poor UX. Imposing guardrails on maximum applicable promotions and evaluation depth immediately reduces worst-case cost. Precomputing frequent stacks or constraints offline brings latency back under control without losing business intent. Option 1 raises memory but not algorithmic efficiency. Option 2 helps bandwidth, not CPU-bound evaluation. Option 3 improves observability but not performance. The credible fix targets the rule engine's complexity while maintaining accuracy.
Unattempted
A sharp CPU increase aligned with a combinable promotion strongly suggests rule interaction complexity, not cache size or compression. When combinability explodes, carts traverse many evaluation branches, leading to timeouts and poor UX. Imposing guardrails on maximum applicable promotions and evaluation depth immediately reduces worst-case cost. Precomputing frequent stacks or constraints offline brings latency back under control without losing business intent. Option 1 raises memory but not algorithmic efficiency. Option 2 helps bandwidth, not CPU-bound evaluation. Option 3 improves observability but not performance. The credible fix targets the rule engine's complexity while maintaining accuracy.
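A simplified sketch of the guardrail idea: cap the evaluation budget and the number of stacked promotions so worst-case cost stays bounded. The promotion model, limits, and greedy selection here are illustrative, not any engine's actual algorithm.

```typescript
// Bounded promotion stacking: evaluate at most MAX_CANDIDATES rules and
// stack at most MAX_STACKED of them, keeping worst-case cost predictable.

interface Promotion {
  id: string;
  stackable: boolean;
  discount: (subtotal: number) => number;
}

const MAX_STACKED = 3;     // business-approved cap on combinable promotions
const MAX_CANDIDATES = 25; // evaluation budget per basket

function applyPromotions(subtotal: number, promotions: Promotion[]): number {
  // Bound the search space first instead of exploring every combination.
  const candidates = promotions.slice(0, MAX_CANDIDATES);

  // Greedy, bounded stacking: take the highest-value stackable discounts.
  const applied = candidates
    .filter((p) => p.stackable)
    .map((p) => ({ id: p.id, value: p.discount(subtotal) }))
    .sort((a, b) => b.value - a.value)
    .slice(0, MAX_STACKED);

  const totalDiscount = applied.reduce((sum, a) => sum + a.value, 0);
  return Math.max(0, subtotal - totalDiscount);
}
```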
Question 60 of 60
60. Question
The OMS receives duplicate shipment notices for the same order when webhooks are retried after 500 errors. How should you resolve this without losing events?
Correct
Duplicate deliveries under retry indicate missing end-to-end idempotency. Adding an idempotency key per event (e.g., orderId+eventType+sequence) allows the OMS to treat repeats as safe no-ops while the sender retries transient failures. Persisting delivery outcomes supports at-least-once semantics with safe replay. Option 2 delays fulfillment and introduces manual effort. Option 3 increases tail latency and connection pressure without ensuring success. Option 4 downgrades from push to poll, increasing lag and load. The root fix is to make the integration idempotent under failure and retries.
Incorrect
Duplicate deliveries under retry indicate missing end-to-end idempotency. Adding an idempotency key per event (e.g., orderId+eventType+sequence) allows the OMS to treat repeats as safe no-ops while the sender retries transient failures. Persisting delivery outcomes supports at-least-once semantics with safe replay. Option 2 delays fulfillment and introduces manual effort. Option 3 increases tail latency and connection pressure without ensuring success. Option 4 downgrades from push to poll, increasing lag and load. The root fix is to make the integration idempotent under failure and retries.
Unattempted
Duplicate deliveries under retry indicate missing end-to-end idempotency. Adding an idempotency key per event (e.g., orderId+eventType+sequence) allows the OMS to treat repeats as safe no-ops while the sender retries transient failures. Persisting delivery outcomes supports at-least-once semantics with safe replay. Option 2 delays fulfillment and introduces manual effort. Option 3 increases tail latency and connection pressure without ensuring success. Option 4 downgrades from push to poll, increasing lag and load. The root fix is to make the integration idempotent under failure and retries.
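A minimal consumer-side sketch of that idempotency, assuming an event key of orderId + eventType + sequence and an in-memory stand-in for a durable dedup store; the types and function names are hypothetical.

```typescript
// Idempotent webhook handling: record the event key after successful
// processing, so retried deliveries become safe no-ops (at-least-once
// delivery plus deduplication).

interface ShipmentEvent {
  orderId: string;
  eventType: string; // e.g. "SHIPPED"
  sequence: number;  // per-order event sequence from the sender
  payload: unknown;
}

const processedKeys = new Set<string>(); // stand-in for a durable dedup store

function eventKey(e: ShipmentEvent): string {
  return `${e.orderId}:${e.eventType}:${e.sequence}`;
}

async function handleWebhook(
  e: ShipmentEvent,
  apply: (e: ShipmentEvent) => Promise<void>
): Promise<"applied" | "duplicate"> {
  const key = eventKey(e);
  if (processedKeys.has(key)) return "duplicate"; // retried delivery: acknowledge, do nothing

  await apply(e);         // create the shipment notice exactly once
  processedKeys.add(key); // record only after success, so failures can be replayed
  return "applied";
}
```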