
Design a Scalable Notification Service

Compare model answers for this System Design benchmark and review scores, judging comments, and related examples.




Task Prompt

You are a senior software engineer at a rapidly growing social media company. Your task is to design a scalable and reliable notification service. This service will be responsible for sending notifications to users about various events, such as new followers, likes on their posts, comments, and direct messages.

Task Context


The system must support the following requirements:

1. **Notification Types:** Push notifications (to mobile devices), in-app notifications (visible within the app), and email notifications.
2. **Scale:** The platform has 100 million daily active users. The system should handle a peak load of 50,000 notification requests per second.
3. **Latency:** 99% of push and in-app notifications should be delivered to the user's device within 2 seconds of the event occurring.
4. **Reliability:** The system must guarantee at-least-once delivery for all notifications. No notifications should be lost, even in the case of component failures.
5. **Personalization:** Users should be able to configure their notification preferences (e.g., disable email notifications for likes).

Propose a high-level system architecture. Describe the key components, their responsibilities, and how they interact. Explain your choices for technologies (e.g., databases, message queues, caching) and discuss the trade-offs involved. Address how your design meets the requirements for scalability, low latency, and reliability.

Judging Policy


A high-quality answer will present a coherent and well-reasoned system design. The evaluation will focus on the following aspects:

- **Architecture Clarity:** The proposed architecture should be logical and clearly explained, detailing the major components (e.g., API gateway, notification service, workers, message queues) and the data flow between them.
- **Scalability and Performance:** The design must effectively address how it would handle the specified high load and low latency requirements. This includes discussing strategies like horizontal scaling, load balancing, and the use of appropriate data stores and caching mechanisms.
- **Reliability and Fault Tolerance:** The answer should demonstrate an understanding of building resilient systems. It should explain how the design ensures at-least-once delivery and handles potential failures of individual components, for example, through retries, redundancy, and dead-letter queues.
- **Technology Choices and Trade-offs:** The justification for choosing specific technologies (e.g., Kafka vs. RabbitMQ, NoSQL vs. SQL databases) should be sound. The answer must articulate the trade-offs of the chosen approach, showing an understanding that there is no single 'perfect' solution.

Answers

Answer A

A scalable notification service should be built as an asynchronous, event-driven platform that separates event ingestion, preference evaluation, notification persistence, channel-specific delivery, and delivery tracking. The most important design principle is that the source product systems, such as those handling follows, likes, comments, and direct messages, should not synchronously call mobile push providers, email providers, or user devices. They should publish durable events into a messaging layer, and the notification platform should process those events independently with strong retry and idempotency guarantees.

At a high level, the architecture contains these components: event producers, an ingestion API, a durable event log, notification processors, a user preference service, a template and personalization service, a notification store, channel fanout queues, channel-specific delivery workers, third-party provider integrations, a real-time gateway for in-app delivery, and observability/retry infrastructure.

Product services generate notification events when user-facing actions occur. For example, the social graph service emits a new follower event, the post service emits a like or comment event, and the messaging service emits a direct message event. Each event contains an event ID, event type, actor user ID, recipient user ID or recipient set, object ID, creation timestamp, and metadata needed for rendering. Producers send these events to a notification ingestion API or directly to a durable message bus. The ingestion API validates schema, authenticates the producer, assigns or verifies an idempotency key, and writes the event to the durable log before acknowledging the producer. This prevents notification loss if downstream processors fail.

For the durable messaging backbone, I would use Apache Kafka, Amazon MSK, Google Pub/Sub, or Pulsar. Kafka/Pulsar are good fits because they provide high throughput, partitioned ordering, retention, replay, consumer groups, and durable storage. At 50,000 notification requests per second, the event stream should be partitioned by recipient user ID for user-level ordering where needed, or by event ID when strict per-user ordering is less important. Partitioning by recipient helps avoid out-of-order in-app notifications for a single user, but it can create hot partitions for celebrity accounts or group events. For large fanout cases, such as one event producing notifications to millions of followers, a separate fanout service should split recipients into batches and publish derived per-recipient notification jobs across many partitions.

Notification processors consume raw events from the durable event log. Their responsibilities are to determine recipients, fetch user preferences, apply rate limits and quiet hours, deduplicate events, generate channel-specific notification records, and publish delivery jobs. For direct events like a comment on a user’s post, the recipient set is small. For fanout events such as a celebrity posting, the processor should avoid doing all fanout synchronously. It should create a fanout job and process recipients in shards, using batch reads from the social graph store. This prevents one very large event from blocking the low-latency path for normal notifications.
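To make the ingestion and partitioning scheme above concrete, here is a minimal sketch of a producer publishing a recipient-keyed event, assuming the confluent-kafka Python client; the topic and field names are illustrative, not from the original answer.

```python
import json
import time
import uuid

from confluent_kafka import Producer  # assumed client; any Kafka producer works

producer = Producer({"bootstrap.servers": "kafka:9092", "acks": "all"})

def publish_notification_event(event_type: str, actor_id: str,
                               recipient_id: str, object_id: str) -> None:
    """Publish one durable notification event, keyed by recipient."""
    event = {
        "event_id": str(uuid.uuid4()),       # stable ID for downstream dedup
        "event_type": event_type,            # "like" | "comment" | "follow" | "dm"
        "actor_user_id": actor_id,
        "recipient_user_id": recipient_id,
        "object_id": object_id,
        "created_at_ms": int(time.time() * 1000),
    }
    producer.produce(
        topic="notification-events",         # illustrative topic name
        key=recipient_id.encode("utf-8"),    # partition by recipient user ID
        value=json.dumps(event).encode("utf-8"),
    )
    producer.flush()  # in practice, batch and poll rather than flush per event
```

As the answer notes, keying by recipient buys per-user ordering at the cost of possible hot partitions for celebrity accounts, which the separate fanout service mitigates.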
The user preference service stores configuration such as whether a user wants push, in-app, or email notifications for likes, comments, followers, and direct messages. Preferences should be stored in a highly available database such as DynamoDB, Cassandra, ScyllaDB, or a sharded relational database. The access pattern is mostly key-value lookup by user ID and notification type, so a distributed key-value or wide-column store is appropriate. To meet the 2-second latency target, preferences should also be cached in Redis, Memcached, or a local in-process cache with short TTLs. Preference updates are written to the source-of-truth database and propagated to caches through invalidation events. The trade-off is that cache staleness may cause a recently changed preference to take a few seconds to apply; if strict preference consistency is required, processors can read through to the database on cache miss or for recently updated users.

The template and personalization service renders notification content. It maps event types to templates such as “Alex liked your post” or “Maya commented: ...”. It handles localization, deep links, image URLs, and channel-specific payload constraints. Template definitions can be stored in a configuration database and cached aggressively because they change infrequently. Rendering should happen before delivery jobs are published so that each job is self-contained and can be retried safely.

The notification store is the source of truth for user-visible in-app notifications and delivery state. A good choice is Cassandra, DynamoDB, ScyllaDB, or another horizontally scalable store partitioned by recipient user ID and sorted by notification timestamp. The primary access pattern is “fetch the latest notifications for user X,” so the table can use recipient_user_id as the partition key and created_at or notification_id as the sort key. The service writes an in-app notification record before or atomically with publishing the in-app delivery job. Records include notification ID, recipient, type, content, status, read/unread state, timestamps, and deduplication key. This store guarantees that even if WebSocket delivery fails, the user can still see the notification when opening the app.
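A minimal sketch of the “fetch the latest notifications for user X” read path described above, assuming a DynamoDB-style table via boto3 with recipient_user_id as the partition key and created_at_ms as the sort key; the table and field names are illustrative.

```python
import boto3
from boto3.dynamodb.conditions import Key

# Illustrative table: partition key recipient_user_id, sort key created_at_ms.
table = boto3.resource("dynamodb").Table("notifications")

def fetch_inbox(recipient_user_id: str, limit: int = 50) -> list[dict]:
    """Return the newest notifications for one user, newest first."""
    resp = table.query(
        KeyConditionExpression=Key("recipient_user_id").eq(recipient_user_id),
        ScanIndexForward=False,  # descending sort-key order: newest first
        Limit=limit,
    )
    return resp["Items"]
```

Because the partition key is the recipient, this read touches a single partition and scales with user count rather than with total notification volume.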
After preferences and templates are applied, the processor publishes jobs to separate channel queues: push queue, in-app queue, and email queue. Separating queues is important because each channel has different latency and reliability characteristics. Push and in-app queues are latency-sensitive and should be provisioned for high throughput with minimal backlog. Email is less latency-sensitive and can tolerate longer delays, provider throttling, and batching. Separate queues also prevent a slow email provider from affecting push delivery.

Push delivery workers consume from the push queue and send notifications to Apple Push Notification service, Firebase Cloud Messaging, or other mobile push providers. Device tokens are stored in a device registry keyed by user ID, with token, platform, app version, locale, and last-seen timestamp. The registry can use a distributed key-value store and cache active tokens. Push workers must handle provider responses, remove invalid tokens, retry transient failures with exponential backoff, and record delivery attempts. Push provider acknowledgments do not guarantee that the user saw the notification, only that the provider accepted it, so the system should distinguish provider acceptance from actual user receipt.

In-app delivery has two paths. First, the notification is persisted in the notification store. Second, an in-app delivery worker sends it to the user’s currently connected devices through a real-time gateway. The gateway can be implemented using WebSockets, HTTP/2 streams, or a mobile push-like persistent connection infrastructure. Gateway nodes maintain user connection state in memory and publish presence information to a distributed presence service. A routing layer or Redis/NATS-based presence map tells the in-app worker which gateway node currently owns a user’s connection. If the user is offline or the gateway send fails, no notification is lost because the persisted notification will be fetched through the app’s notification inbox API on the next session. For low latency, gateway nodes should be regionally deployed close to users and the in-app queue should be processed by workers in the same region where possible.

Email delivery workers consume from the email queue and send through providers such as SES, SendGrid, or Mailgun. They should support provider failover, bounce handling, suppression lists, unsubscribe compliance, and per-provider rate limits. Email notifications can be batched or digested for low-priority event types like likes, while direct messages or security-related events may be sent immediately. Because email is slower and more expensive, user preferences and rate limiting are especially important.

Reliability is achieved through durable writes, at-least-once processing, idempotency, retries, and dead-letter queues. The ingestion layer only acknowledges producers after the event is durably written to Kafka/Pulsar. Consumers commit offsets only after they have successfully written notification records and published downstream channel jobs. Because retries can create duplicates, every event and notification must have stable idempotency keys. For example, a like notification key could be recipient_id + actor_id + post_id + event_type, while a comment notification key could include comment_id. The notification store enforces uniqueness on this key, or processors perform conditional writes. Delivery workers should also use attempt IDs and idempotent state transitions so that duplicate jobs do not create duplicate in-app records or duplicate emails when avoidable. The system guarantees at-least-once delivery, not exactly-once delivery, so clients should also deduplicate by notification ID.
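To make the conditional-write deduplication above concrete, here is a minimal sketch assuming a DynamoDB-style store via boto3. The table name, schema, and helper are illustrative: dedup works here because dedupe_key is assumed to be the table’s sort key, while a separate time-sorted inbox table, as sketched earlier, would serve reads.

```python
import time

import boto3
from botocore.exceptions import ClientError

# Illustrative table keyed by (recipient_user_id, dedupe_key).
table = boto3.resource("dynamodb").Table("notifications_by_key")

def idempotency_key(recipient_id: str, actor_id: str,
                    post_id: str, event_type: str) -> str:
    """Stable key mirroring the recipient_id + actor_id + post_id + event_type example."""
    return f"{recipient_id}:{actor_id}:{post_id}:{event_type}"

def write_notification_once(recipient_id: str, actor_id: str, post_id: str,
                            event_type: str, content: str) -> bool:
    """Conditionally insert a notification; duplicate retries become no-ops."""
    try:
        table.put_item(
            Item={
                "recipient_user_id": recipient_id,
                "dedupe_key": idempotency_key(recipient_id, actor_id,
                                              post_id, event_type),
                "event_type": event_type,
                "content": content,
                "created_at_ms": int(time.time() * 1000),
                "read": False,
            },
            # Fails if an item with this exact primary key already exists.
            ConditionExpression="attribute_not_exists(dedupe_key)",
        )
        return True
    except ClientError as err:
        if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return False  # duplicate job from an at-least-once retry; safe to ack
        raise
```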
Dead-letter queues are required for poison messages, malformed events, repeated provider failures, or records that cannot be rendered. A replay tool should allow operators to fix issues and reprocess events from the original durable log or from the dead-letter queue. Kafka retention should be long enough to support operational recovery, for example several days. Critical metadata and delivery state should also be persisted in the notification database for auditability.

To meet the scale requirement of 100 million daily active users and 50,000 notification requests per second, all major services should be horizontally scalable and stateless where possible. Ingestion APIs scale behind load balancers. Kafka/Pulsar topics are partitioned widely enough to support peak throughput and consumer parallelism. Processors and delivery workers run in autoscaling groups or Kubernetes deployments and scale based on queue lag, CPU, provider latency, and request rate. Databases are partitioned by user ID to spread load. Hot-key problems should be handled with sharded fanout jobs, celebrity-user special handling, and backpressure. For extremely large fanout, the system may use pull-based fanout for low-priority notifications: instead of writing one notification per follower immediately, it stores the event once and materializes it when a user opens the app. This reduces write amplification but increases read complexity and may not be appropriate for direct messages or comments.

The 2-second latency target for 99% of push and in-app notifications is met by keeping the critical path short: producer to durable log, processor preference lookup from cache, notification record write, publish to channel queue, and immediate delivery by warm workers. Push and in-app workers should be overprovisioned for peak load, and queues should use priority lanes so direct messages and comments are processed ahead of low-priority likes. Regional deployment reduces network latency. For users in multiple regions, routing can be based on the recipient’s home region, with cross-region replication for disaster recovery. The design should measure end-to-end latency from event creation to provider acceptance or gateway send, not merely internal processing time.

Observability is essential. The platform should track request rate, queue lag by partition, processing latency, preference cache hit rate, notification creation failures, provider error rates, retry counts, dead-letter volume, WebSocket delivery success, push provider acceptance latency, and end-to-end p50/p95/p99 latency. Distributed tracing should carry the original event ID through every component. Alerts should fire on queue lag growth, elevated failures, provider throttling, database write latency, and missed latency SLOs.

There are several trade-offs. Kafka or Pulsar adds operational complexity but provides the durability, replay, and throughput needed for at-least-once delivery at this scale. Cassandra or DynamoDB provides high write scalability and availability but offers limited ad hoc querying, so analytics should be sent to a separate data warehouse. Caching preferences improves latency but introduces short-lived consistency issues. At-least-once delivery is much simpler and more reliable than exactly-once delivery, but the system must tolerate duplicates using idempotency keys and client-side deduplication. Fanout-on-write gives fast inbox reads but can be expensive for high-follower events; fanout-on-read saves writes but complicates ranking and read-time materialization. A hybrid model is usually best: fanout-on-write for direct, important, and normal-volume notifications, and fanout-on-read or batched fanout for massive low-priority events.

In summary, the service should use a durable event-driven architecture with partitioned message streams, stateless processors, cached preference lookups, persistent notification storage, separate channel queues, channel-specific workers, and strong retry/idempotency mechanisms. This design scales horizontally to 50,000 notification requests per second, keeps push and in-app delivery within the 2-second p99 target through caching and low-latency queues, and guarantees that notifications are not lost by persisting events and notification records before acknowledging progress.

Result

#1 | Winner

Winning Votes

3 / 3

Average Score

89

Judge Models

OpenAI GPT-5.4

Total Score

92

Overall Comments

Answer A presents a thorough, coherent high-level architecture with clear component responsibilities, realistic data flow, and strong attention to operational concerns. It directly addresses all required channels, latency goals, at-least-once delivery semantics, preference handling, large fanout scenarios, idempotency, retries, persistence, and observability. Its technology discussion is balanced and nuanced, with concrete trade-offs such as fanout-on-write vs fanout-on-read, caching consistency, and Kafka/Pulsar operational complexity. The main weakness is that it is somewhat long and could be more condensed, but technically it is strong and well aligned with the prompt.

Score Details

Architecture Quality

Weight 30%
92

The architecture is well structured and end-to-end: ingestion, durable log, processors, preference service, template service, notification store, per-channel queues, delivery workers, real-time gateway, and observability all fit together coherently. It also distinguishes persisted in-app state from real-time delivery and handles fanout as a first-class concern.

Completeness

Weight 20%
94

It covers all required notification types, user preferences, scale, latency, reliability, technology choices, and trade-offs. It also addresses practical concerns that are often omitted, such as the device registry, dead-letter queues, idempotency keys, fanout batching, regional deployment, observability, and recovery tooling.

Trade-off Reasoning

Weight 20%
91

The answer gives strong comparative reasoning for Kafka/Pulsar, NoSQL choices, caching consistency, at-least-once vs exactly-once, and fanout-on-write vs fanout-on-read. These trade-offs are concrete and tied directly to workload and product behavior.

Scalability & Reliability

Weight 20%
93

This is a major strength. The design clearly explains horizontal scaling, partitioning, queue isolation by channel, hot-key mitigation, retries, consumer offset handling, conditional writes for deduplication, dead-letter queues, replay, and durability before acknowledgment. It directly supports at-least-once delivery and the 2-second target with realistic mechanisms.

Clarity

Weight 10%
84

The explanation is clear, logically ordered, and precise despite being long. It communicates the data flow well, though the length makes it slightly denser and less immediately scannable than a more structured response.

Total Score

92

Overall Comments

Answer A provides an exceptionally detailed and robust system design. It demonstrates a deep understanding of complex distributed system challenges, such as fanout for celebrity accounts, specific idempotency key construction, and the nuances of at-least-once delivery. The architecture is highly granular, well-reasoned, and explicitly addresses all requirements with sophisticated solutions and trade-off discussions, reflecting the expertise expected from a senior software engineer.

Score Details

Architecture Quality

Weight 30%
95

Answer A presents a highly detailed and logical architecture, clearly separating concerns and providing robust solutions for complex scenarios like large-scale fanout and two-path in-app delivery. The component interactions are well-defined.

Completeness

Weight 20%
92

Answer A comprehensively addresses all requirements, including advanced topics such as specific idempotency key examples, detailed observability, and nuanced fanout strategies (on-write vs. on-read), demonstrating a very complete understanding.

Trade-off Reasoning

Weight 20%
90

Answer A integrates trade-off discussions throughout the design and explicitly highlights fundamental system design trade-offs (e.g., at-least-once vs. exactly-once, fanout strategies), showcasing a deep understanding of implications beyond just technology choices.

Scalability & Reliability

Weight 20%
93

Answer A provides excellent coverage of both scalability and reliability, detailing specific mechanisms like partitioning strategies, consumer offset commits, durable writes before acknowledgment, hot-key handling, and priority queues, demonstrating a strong grasp of implementation details.

Clarity

Weight 10%
88

Answer A is very clear, well-structured, and uses professional language, making the complex design easy to follow despite its depth. The logical flow is excellent.

Total Score

84

Overall Comments

Answer A delivers a deeply reasoned, prose-driven system design that engages with subtle, important issues: hot partitions for celebrity fanout, fanout-on-write vs fanout-on-read hybrid, idempotency key construction, presence routing for WebSockets, regional deployment, priority lanes for queues, and the distinction between provider acceptance and user receipt. Trade-offs are discussed in context rather than listed superficially. The narrative is long but coherent and demonstrates senior-level depth. Minor weaknesses: lacks a visual diagram and structured headings/tables that would aid scanning.

Score Details

Architecture Quality

Weight 30%
85

Comprehensive component breakdown with sophisticated handling of fanout, partitioning by recipient, presence routing, separate channel queues, and persistent notification store as source of truth. Treats subtle issues like celebrity fanout and priority lanes.

Completeness

Weight 20%
85

Covers ingestion, durable log, processors, preferences, templates, notification store, channel queues, workers, WebSocket gateway, device registry, DLQ, observability, regional deployment, and explicit handling of all four requirements.

Trade-off Reasoning

Weight 20%
85

Discusses concrete trade-offs in context: at-least-once vs exactly-once, fanout-on-write vs fanout-on-read hybrid, cache staleness vs consistency, partitioning by recipient vs event ID, Kafka operational complexity vs durability benefits.

Scalability & Reliability

Weight 20%
85

Strong reliability story: durable writes before ack, offset commits after downstream success, idempotency keys with concrete examples, DLQ with replay tooling, hot-key mitigation, regional deployment for latency, priority lanes.

Clarity

Weight 10%
70

Well-structured prose but very long with few visual aids; dense paragraphs make scanning harder despite logical flow.

Answer B

## Scalable Notification Service Architecture

### System Overview

The notification service is designed as a distributed, event-driven system that processes notifications asynchronously to handle 50,000 requests per second with sub-2-second latency for 99% of deliveries. The architecture follows a microservices pattern with clear separation of concerns.

### High-Level Architecture

```
Event Sources → API Gateway → Notification Service → Message Queue → Delivery Workers
                                      ↓
                              Preference Service
                                      ↓
                             Cache Layer (Redis)
                                      ↓
                            Database (PostgreSQL)
```

### Key Components

#### 1. API Gateway

- **Responsibility:** Entry point for all notification requests. Handles rate limiting, authentication, and request validation.
- **Technology:** NGINX or AWS API Gateway
- **Rationale:** Provides a single entry point, enables request throttling to prevent overload, and distributes traffic across backend services.

#### 2. Notification Service (Core Processing)

- **Responsibility:** Receives notification events, validates them, enriches with user preferences, and publishes to message queue.
- **Technology:** Stateless microservice (Node.js, Go, or Java)
- **Key Features:**
  - Validates notification payload
  - Fetches user preferences from cache
  - Determines notification channels (push, in-app, email) based on user settings
  - Publishes enriched notifications to message queue
- **Scalability:** Horizontally scalable; multiple instances behind load balancer

#### 3. Message Queue (Event Bus)

- **Technology:** Apache Kafka or AWS SQS/SNS
- **Rationale:**
  - Decouples notification generation from delivery
  - Provides durability and replay capability for at-least-once delivery guarantee
  - Enables parallel processing of notifications
  - Handles traffic spikes by buffering requests
- **Configuration:** Multiple partitions (e.g., 100-200) to enable parallel consumption
- **Topic Structure:** Separate topics for push, in-app, and email notifications to allow independent scaling

#### 4. Preference Service

- **Responsibility:** Manages user notification preferences and settings.
- **Technology:** Microservice with PostgreSQL backend
- **Features:**
  - Stores user preferences (notification types, channels, frequency limits)
  - Provides API for users to update preferences
  - Caches preferences in Redis for fast access
- **Scalability:** Read-heavy service; caching significantly reduces database load

#### 5. Cache Layer

- **Technology:** Redis Cluster
- **Purpose:**
  - Caches user preferences (TTL: 1 hour)
  - Caches user device tokens for push notifications
  - Stores rate-limiting counters
  - Reduces database queries by 90%+
- **Rationale:** Sub-millisecond latency for preference lookups

#### 6. Delivery Workers

- **Push Notification Worker:**
  - Consumes from push topic
  - Integrates with FCM (Firebase Cloud Messaging) and APNs (Apple Push Notification service)
  - Handles device token management and invalid token cleanup
  - Implements exponential backoff for retries (sketched below)
- **In-App Notification Worker:**
  - Consumes from in-app topic
  - Stores notifications in a time-series database (e.g., Cassandra or ClickHouse)
  - Publishes to WebSocket servers for real-time delivery
  - Maintains notification read/unread status
- **Email Notification Worker:**
  - Consumes from email topic
  - Batches emails for efficiency
  - Integrates with email service (SendGrid, AWS SES)
  - Implements rate limiting to avoid spam filters
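The following sketch illustrates the push worker's retry and token-cleanup behavior described above. It is a sketch under stated assumptions: `send_push` and `remove_token` are hypothetical adapters over the provider SDK (FCM/APNs) and the device registry, and the error codes are illustrative FCM-style values.

```python
import time

# Illustrative FCM-style error codes that indicate a dead device token.
INVALID_TOKEN_ERRORS = {"UNREGISTERED", "INVALID_ARGUMENT"}

def handle_push_job(job: dict, send_push, remove_token, max_attempts: int = 5):
    """Deliver one push job with exponential backoff.

    send_push(token, payload) and remove_token(user_id, token) are
    hypothetical adapters over the provider SDK and device registry.
    """
    for attempt in range(max_attempts):
        result = send_push(token=job["device_token"], payload=job["payload"])
        if result.ok:
            return
        if result.error_code in INVALID_TOKEN_ERRORS:
            # Permanent failure: clean up the dead token instead of retrying.
            remove_token(job["user_id"], job["device_token"])
            return
        time.sleep(min(2 ** attempt, 30))  # transient failure: back off, retry
    raise RuntimeError("retries exhausted; caller routes the job to the DLQ")
```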
#### 7. WebSocket Server

- **Responsibility:** Maintains persistent connections with mobile and web clients for real-time in-app notifications.
- **Technology:** Node.js with Socket.io or Go with gorilla/websocket
- **Scalability:** Stateless servers; uses Redis pub/sub for cross-server communication
- **Fallback:** HTTP long-polling for clients that don't support WebSocket

#### 8. Notification Store (Time-Series Database)

- **Technology:** Cassandra or ClickHouse
- **Purpose:** Stores all notifications for audit, analytics, and user notification history
- **Rationale:** Optimized for write-heavy workloads and time-range queries
- **Retention:** 90 days of notification history

#### 9. User Device Registry

- **Technology:** Redis + PostgreSQL
- **Purpose:** Maintains mapping of users to their devices and push tokens
- **Features:**
  - Tracks device registration/deregistration
  - Stores device metadata (OS, app version)
  - Handles token refresh and invalidation

### Data Flow

1. **Event Ingestion:** Event sources (user actions, system events) send notification requests to API Gateway
2. **Validation & Enrichment:** Notification Service validates payload and fetches user preferences from cache
3. **Channel Determination:** Based on preferences, notification is routed to appropriate topic (push, in-app, email)
4. **Queuing:** Message Queue buffers notifications and ensures durability
5. **Delivery:** Specialized workers consume from topics and deliver via respective channels
6. **Acknowledgment:** Delivery workers confirm successful delivery and update notification status

### Meeting Requirements

#### Scalability (50,000 req/sec)

- **Horizontal Scaling:** All components are stateless and horizontally scalable
- **Message Queue Partitioning:** Kafka topics partitioned across multiple brokers; each partition can handle ~500-1000 req/sec
- **Database Sharding:** User data sharded by user_id to distribute load
- **Caching Strategy:** Redis caches hot data, reducing database queries by 90%+
- **Load Balancing:** API Gateway distributes traffic across Notification Service instances

#### Low Latency (99% within 2 seconds)

- **Asynchronous Processing:** Notifications processed asynchronously; API returns immediately after queuing
- **Caching:** User preferences cached in Redis (sub-millisecond access)
- **Direct Integration:** Push workers directly integrate with FCM/APNs without intermediate hops
- **WebSocket for In-App:** Real-time delivery via persistent connections
- **Optimization:** Batch processing for email notifications (acceptable for email use case)
- **Monitoring:** Latency tracked per component; alerts for degradation

#### Reliability (At-Least-Once Delivery)

- **Message Queue Durability:** Kafka replicates messages across brokers (replication factor: 3)
- **Persistent Queues:** Messages persisted to disk before acknowledgment
- **Idempotency:** Notifications include unique IDs; workers implement idempotent processing (see the sketch below)
- **Retry Logic:** Failed deliveries retried with exponential backoff (max 5 retries over 24 hours)
- **Dead Letter Queue:** Failed notifications after max retries moved to DLQ for manual investigation
- **Monitoring & Alerting:** Real-time dashboards track delivery success rates; alerts for failures
- **Backup & Recovery:** Regular backups of preference and notification data; disaster recovery procedures
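A minimal sketch of the idempotent-processing guard referenced above, assuming the redis-py client; key names and the TTL policy are illustrative. The worker claims the notification's unique ID with an atomic SET NX before performing side effects, so a redelivered message becomes a no-op.

```python
import redis  # assumed redis-py client

cache = redis.Redis(host="localhost", port=6379)
SEEN_TTL_SECONDS = 24 * 3600  # roughly matches the 24-hour retry window above

def process_once(notification_id: str, deliver) -> bool:
    """Run deliver() at most once per notification_id across redeliveries."""
    claimed = cache.set(f"seen:{notification_id}", "1",
                        nx=True, ex=SEEN_TTL_SECONDS)
    if not claimed:
        return False  # duplicate delivery; already handled
    try:
        deliver()
    except Exception:
        # Release the claim so a later retry can run; otherwise a crash here
        # would silently downgrade at-least-once to at-most-once delivery.
        cache.delete(f"seen:{notification_id}")
        raise
    return True
```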
#### Personalization

- **Preference Service:** Dedicated service manages user notification settings
- **Granular Controls:** Users can disable notifications per type (likes, comments, follows) and per channel
- **Frequency Limits:** Support for daily/weekly digest options
- **Quiet Hours:** Users can set quiet hours (e.g., no push notifications 10 PM - 8 AM)
- **Cache Invalidation:** Preference changes invalidate cache immediately for real-time effect

### Technology Choices & Trade-offs

| Component | Technology | Rationale | Trade-offs |
|-----------|-----------|-----------|-----------|
| Message Queue | Kafka | High throughput, durability, replay capability | Operational complexity, requires cluster management |
| Cache | Redis Cluster | Sub-millisecond latency, supports complex data structures | Memory-bound, requires careful eviction policies |
| Preference DB | PostgreSQL | ACID compliance, complex queries, mature ecosystem | Vertical scaling limits; requires read replicas |
| Notification Store | Cassandra | Write-optimized, distributed, handles time-series data | Eventually consistent, complex operational model |
| Push Integration | FCM/APNs | Industry standard, high reliability | Vendor lock-in, rate limits per provider |
| WebSocket | Socket.io/gorilla | Real-time communication, fallback support | Memory overhead per connection, requires sticky sessions |

### Operational Considerations

#### Monitoring & Observability

- **Metrics:** Notification throughput, latency percentiles (p50, p95, p99), delivery success rate
- **Logging:** Structured logging with correlation IDs for end-to-end tracing
- **Distributed Tracing:** Jaeger or Zipkin for request flow visualization
- **Alerting:** PagerDuty integration for critical failures

#### Deployment & Scaling

- **Containerization:** Docker containers for all services
- **Orchestration:** Kubernetes for automated scaling and deployment
- **Auto-scaling:** Horizontal Pod Autoscaler based on CPU/memory and queue depth
- **Blue-Green Deployment:** Zero-downtime deployments for service updates

#### Cost Optimization

- **Reserved Capacity:** Kafka brokers and database instances on reserved instances
- **Tiered Storage:** Hot data in Redis, warm data in PostgreSQL, cold data in S3
- **Batch Processing:** Email notifications batched to reduce API calls
- **Resource Limits:** Per-user rate limits to prevent resource exhaustion

### Potential Bottlenecks & Mitigation

1. **Database Bottleneck:** Preference lookups - Mitigation: Redis caching with 1-hour TTL; read replicas for PostgreSQL
2. **Message Queue Bottleneck:** Kafka throughput - Mitigation: Increase partition count; upgrade broker hardware; use compression
3. **Push Service Rate Limits:** FCM/APNs limits - Mitigation: Implement request queuing; batch requests; use multiple service accounts
4. **WebSocket Connection Limits:** Per-server connection limits - Mitigation: Horizontal scaling; connection pooling; load balancing across servers

### Conclusion

This architecture provides a scalable, reliable, and low-latency notification service capable of handling 50,000 requests per second while maintaining sub-2-second delivery for 99% of notifications. The use of asynchronous processing, message queues, and caching ensures both performance and reliability. The modular design allows independent scaling of components based on demand, and the comprehensive monitoring ensures operational visibility. The system gracefully handles failures through retry logic, dead letter queues, and redundancy, guaranteeing at-least-once delivery semantics.

Result

#2

Winning Votes

0 / 3

Average Score

75

Judge Models

OpenAI GPT-5.4

Total Score

73

Overall Comments

Answer B gives a reasonable event-driven design with the main building blocks expected for a notification service, including API gateway, queue, preference service, cache, workers, and WebSocket delivery. It addresses scalability, latency, and reliability at a decent high level and includes some implementation choices and operational notes. However, it is more generic, has less precise end-to-end flow detail, and provides weaker treatment of hard problems like deduplication semantics, notification persistence ordering, large fanout handling, hot partitions, replay, and exact mechanisms for guaranteeing no loss under failures. Some technology choices are also less well justified or slightly mismatched, such as ClickHouse for notification history and mixing Kafka with SQS/SNS without discussing their different semantics.

Score Details

Architecture Quality

Weight 30%
72

The architecture includes the expected major components and a sensible asynchronous flow, but it stays more generic and omits some important structural details. It does not fully specify persistence boundaries, replay paths, or how some components coordinate under failure, so the design feels less robust and less deeply thought through.

Completeness

Weight 20%
74

It covers the main prompt requirements and adds useful operational topics, but several areas are only lightly handled. In particular, large fanout events, detailed delivery-state modeling, stronger no-loss guarantees, and the interaction between persistence and delivery are not explored in enough depth.

Trade-off Reasoning

Weight 20%
70

There is a trade-off table and some rationale, which is helpful, but the reasoning is relatively surface-level. Some choices are presented too broadly or ambiguously, such as Kafka or SQS/SNS interchangeably, without discussing semantic differences that matter for this design.

Scalability & Reliability

Weight 20%
73

The answer addresses horizontal scaling, partitions, retries, and DLQ, so it is directionally correct. However, the reliability treatment is less rigorous: acknowledgment boundaries, idempotent writes, ordering, replay, and failure handling are not described with enough specificity to fully justify the no-loss and at-least-once requirements at this scale.

Clarity

Weight 10%
82

The answer is easy to read, well sectioned, and scan-friendly. Its formatting improves accessibility, though some of that clarity comes at the expense of technical depth and precision.

Total Score

80

Overall Comments

Answer B presents a clear and well-organized system design. Its use of a high-level diagram, bullet points, and a dedicated trade-offs table enhances readability. It covers all core requirements effectively, but the depth of detail and the sophistication of solutions for complex scenarios are not as pronounced as in Answer A. While solid, it lacks some of the advanced considerations present in Answer A.

Score Details

Architecture Quality

Weight 30%
80

Answer B provides a clear high-level architecture with a helpful diagram. However, some components like the 'Notification Service' are less granular than in Answer A, and it offers less detail on handling complex fanout scenarios.

Completeness

Weight 20%
80

Answer B covers all the core requirements, but the depth of detail for certain aspects, such as specific idempotency mechanisms or advanced fanout considerations, is less extensive compared to Answer A.

Trade-off Reasoning

Weight 20%
75

Answer B provides a clear table for technology choices and their trade-offs, which is well-presented. However, the trade-offs primarily focus on the chosen technology's pros and cons rather than broader system design implications, making it less profound than Answer A.

Scalability & Reliability

Weight 20%
82

Answer B offers good coverage of scalability and reliability, mentioning horizontal scaling, Kafka partitioning, DB sharding, caching, retries, and DLQs. However, it lacks some of the granular implementation details and specific strategies found in Answer A.

Clarity

Weight 10%
90

Answer B is exceptionally clear and well-organized, utilizing headings, bullet points, a high-level diagram, and a dedicated trade-offs table. The 'Meeting Requirements' section is particularly effective for clarity and directness.

Total Score

71

Overall Comments

Answer B is well-organized with clear headings, a diagram sketch, and a trade-off table, making it easy to read. It covers the major components (API gateway, Kafka, Redis, workers, WebSocket, Cassandra) and addresses each requirement explicitly. However, the design is more surface-level: it does not engage seriously with fanout amplification for celebrity users, partitioning strategies, hot-key handling, presence routing across WebSocket nodes, or the nuance of idempotency key design. Trade-offs are presented as a table but lack depth. PostgreSQL for preferences at this scale is mentioned without strong justification given the key-value access pattern.

Score Details

Architecture Quality

Weight 30%
70

Solid standard architecture with all expected components clearly identified, but treatment is more textbook-level. Misses depth on fanout amplification, partition hot keys, and presence/connection routing across WebSocket nodes.

Completeness

Weight 20%
75

Covers all major components and explicitly maps to each requirement, plus operational concerns like deployment, cost, and bottlenecks. Misses template/rendering service and detailed fanout handling.

Trade-off Reasoning

Weight 20%
65

Provides a trade-off table that lists pros and cons per technology, but the reasoning is brief and generic. Does not deeply justify choices like PostgreSQL for preferences at this scale or discuss delivery semantic trade-offs.

Scalability & Reliability

Weight 20%
70

Addresses replication factor, retries with backoff, DLQ, idempotency, and horizontal scaling, but treatment is high-level. Quantitative claims (e.g., 500-1000 req/sec per partition, 90% cache reduction) are asserted without nuance.

Clarity

Weight 10%
80

Excellent use of headings, bullet points, diagram sketch, and trade-off table making the design easy to navigate and review.

Comparison Summary

Final rank order is determined by judge-wise rank aggregation (average rank + Borda tie-break). Average score is shown for reference.

Judges: 3

Answer A

Winning Votes

3 / 3

Average Score

89

Answer B

Winning Votes

0 / 3

Average Score

75

Judging Results

Why This Side Won

Answer A wins on the highest-weighted criteria (architecture quality 30% and tradeoff reasoning 20%) due to substantially deeper engagement with non-obvious system design challenges like fanout sharding, hot partitions, hybrid fanout-on-write/read, idempotency key construction, and presence routing. While Answer B has better visual clarity and formatting, its technical depth is shallower, and these formatting advantages do not outweigh A's superior architectural reasoning under the given weights.

Why This Side Won

Answer A wins due to its superior depth, detail, and comprehensive approach to system design. It provides more granular architectural components, addresses complex edge cases like fanout for celebrity users, and offers a more profound discussion of trade-offs beyond just technology choices. While Answer B is clear and well-structured, Answer A demonstrates a higher level of expertise expected from a senior software engineer designing a highly scalable and reliable notification service.

Judge Models

OpenAI GPT-5.4

Why This Side Won

Answer A wins because its weighted strengths are significantly higher on the most important criteria: architecture quality, completeness, trade-off reasoning, and scalability/reliability. It gives a more realistic system decomposition, explains how components interact in failure scenarios, and shows deeper understanding of at-least-once delivery, idempotency, persistence, queueing, fanout, low-latency paths, and operational recovery. Answer B is solid but more template-like and less rigorous on the difficult design aspects required by the prompt.
