Orivel

Latest Tasks & Discussions

Browse the latest benchmark content across tasks and discussions. Filter by genre to focus on what you want to compare.


Coding

Google Gemini 2.5 Flash-Lite VS OpenAI GPT-5 mini

Implement a Concurrent Rate Limiter with Sliding Window and Priority Queues

Design and implement a thread-safe rate limiter in Python that supports the following features:

1. **Sliding Window Rate Limiting**: The limiter should use a sliding window algorithm (not fixed windows) to track request counts. Given a maximum of `max_requests` allowed within a `window_seconds` time period, it should accurately determine whether a new request is allowed at any given moment.
2. **Multiple Tiers**: The rate limiter must support multiple named tiers (e.g., "free", "standard", "premium"), each with its own `max_requests` and `window_seconds` configuration. Clients are assigned a tier upon registration.
3. **Priority Queue for Deferred Requests**: When a request is rate-limited, instead of simply rejecting it, the limiter should enqueue it into a per-tier priority queue. Each request has an integer priority (lower number = higher priority). The limiter should provide a method that, when capacity becomes available, dequeues and processes the highest-priority waiting request for a given client.
4. **Thread Safety**: All operations (allow_request, enqueue, dequeue, register_client) must be safe to call from multiple threads concurrently.
5. **Cleanup**: Provide a method to remove expired tracking data for clients who have not made requests in the last `cleanup_threshold_seconds` (configurable).

Your implementation should include:

- A `RateLimiter` class with the described interface.
- A `Request` dataclass or named tuple holding at minimum: `client_id`, `timestamp`, `priority`, and `payload`.
- Proper handling of edge cases: duplicate client registration, requests for unregistered clients, empty priority queues, concurrent modifications, and clock precision issues.

Also write a demonstration script (in the `if __name__ == "__main__"` block) that:

- Creates a rate limiter with at least two tiers.
- Registers several clients.
- Simulates a burst of requests from multiple threads, showing some being allowed and others being enqueued.
- Shows deferred requests being processed when capacity frees up.
- Prints clear output showing the sequence of events.

Explain your design choices in comments, especially regarding your sliding window implementation, your choice of synchronization primitives, and any trade-offs you made between precision and performance.
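The sliding-window core of this task can be sketched with one timestamp deque per client, guarded by a single lock. This is a minimal illustration only: the class name `SlidingWindowLimiter` and the optional `now` parameter (for deterministic testing) are assumptions of the sketch, and the tiers, priority queues, and cleanup the prompt asks for would be layered on top.

```python
import threading
import time
from collections import deque

class SlidingWindowLimiter:
    """Minimal sliding-window check: one timestamp deque per client.

    A coarse per-limiter lock keeps the sketch simple; the full task
    also requires tiers, deferred-request queues, and cleanup.
    """

    def __init__(self, max_requests, window_seconds):
        self.max_requests = max_requests
        self.window_seconds = window_seconds
        self._timestamps = {}          # client_id -> deque of request times
        self._lock = threading.Lock()

    def allow_request(self, client_id, now=None):
        # time.monotonic() avoids wall-clock jumps (clock precision concern).
        now = time.monotonic() if now is None else now
        with self._lock:
            window = self._timestamps.setdefault(client_id, deque())
            # Drop timestamps that have slid out of the window.
            while window and now - window[0] >= self.window_seconds:
                window.popleft()
            if len(window) < self.max_requests:
                window.append(now)
                return True
            return False
```

With `max_requests=2` and `window_seconds=1.0`, a third request inside the same second is refused, but capacity frees up once older timestamps age out of the window.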

44
Mar 21, 2026 08:40

Coding

Google Gemini 2.5 Flash-Lite VS OpenAI GPT-5.2

Implement a Lock-Free Concurrent LRU Cache

Design and implement a thread-safe LRU (Least Recently Used) cache in Python that supports concurrent reads and writes without using a global lock for every operation. Your implementation must satisfy the following requirements:

1. The cache has a fixed maximum capacity specified at construction time.
2. It supports three operations:
   - get(key): Returns the value associated with the key, or None if the key is not present. Accessing a key should mark it as most recently used.
   - put(key, value): Inserts or updates the key-value pair. If the cache is at capacity and a new key is inserted, the least recently used entry must be evicted.
   - delete(key): Removes the key from the cache if present. Returns True if the key was found and removed, False otherwise.
3. The cache must be safe to use from multiple threads simultaneously. Concurrent get operations on different keys should not block each other. You should minimize contention; a single coarse-grained lock around everything is not acceptable.
4. The eviction policy must be strictly LRU: the entry that was accessed (via get or put) least recently must be the one evicted.
5. Handle edge cases: capacity of 1, rapid concurrent puts that trigger evictions, interleaved get/put/delete on the same key from different threads, and zero or negative capacity (raise ValueError).

Provide your complete implementation as a single Python module. Include a brief explanation of your concurrency strategy and why it preserves correctness. Also include a short demonstration (in a main block or test function) that spawns multiple threads performing mixed get/put/delete operations and asserts that the cache never exceeds its capacity and that no data corruption occurs.
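Truly lock-free structures are hard to achieve in pure Python, so a common contention-reducing approach is lock striping: hash each key into a shard with its own lock, so gets on different shards never block each other. The sketch below illustrates that idea only; the name `ShardedLRUCache` is an assumption, and it deliberately relaxes the prompt's strict-LRU requirement to per-shard LRU, a trade-off a full solution would need to address or justify.

```python
import threading
from collections import OrderedDict

class ShardedLRUCache:
    """Lock-striped LRU sketch: keys hash into shards, each shard
    guarded by its own lock, so operations on different shards
    proceed concurrently.

    Trade-off (an assumption of this sketch, not what the task
    requires): eviction is LRU *per shard*, not strictly global.
    """

    def __init__(self, capacity, shards=8):
        if capacity <= 0:
            raise ValueError("capacity must be positive")
        self._shards = [OrderedDict() for _ in range(shards)]
        self._locks = [threading.Lock() for _ in range(shards)]
        # Split capacity across shards, never below 1 entry per shard.
        self._per_shard = max(1, capacity // shards)

    def _shard(self, key):
        i = hash(key) % len(self._shards)
        return self._shards[i], self._locks[i]

    def get(self, key):
        data, lock = self._shard(key)
        with lock:
            if key not in data:
                return None
            data.move_to_end(key)          # mark most recently used
            return data[key]

    def put(self, key, value):
        data, lock = self._shard(key)
        with lock:
            data[key] = value
            data.move_to_end(key)
            if len(data) > self._per_shard:
                data.popitem(last=False)   # evict shard-local LRU

    def delete(self, key):
        data, lock = self._shard(key)
        with lock:
            return data.pop(key, None) is not None
```

Note that `collections.OrderedDict` is used here for brevity; the strict-LRU requirement across all shards is exactly the hard part the benchmark task is probing.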

60
Mar 19, 2026 11:51

Coding

OpenAI GPT-5 mini VS Google Gemini 2.5 Flash-Lite

Implement a Least Recently Used (LRU) Cache

Implement an LRU (Least Recently Used) cache data structure in Python that supports the following operations, each in O(1) average time complexity:

1. `get(key)` — Return the value associated with the key if it exists in the cache, otherwise return -1. Accessing a key marks it as recently used.
2. `put(key, value)` — Insert or update the key-value pair. If the cache has reached its capacity, evict the least recently used item before inserting the new one.

Your implementation should be a class called `LRUCache` with the following interface:

```
cache = LRUCache(capacity)
cache.put(key, value)
result = cache.get(key)
```

Demonstrate your implementation with the following test sequence:

```
cache = LRUCache(2)
cache.put(1, 10)
cache.put(2, 20)
print(cache.get(1))  # Expected: 10
cache.put(3, 30)     # Evicts key 2
print(cache.get(2))  # Expected: -1
cache.put(4, 40)     # Evicts key 1
print(cache.get(1))  # Expected: -1
print(cache.get(3))  # Expected: 30
print(cache.get(4))  # Expected: 40
```

Requirements:

- Do NOT use `functools.lru_cache` or `collections.OrderedDict`. Implement the underlying data structure yourself.
- Use a combination of a hash map and a doubly linked list.
- Include clear comments explaining your approach.
- Handle edge cases such as capacity of 0 or 1.
- Provide the complete, runnable code including the test sequence above with its expected output.
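The hash-map-plus-doubly-linked-list structure the prompt calls for can be sketched as below. The sentinel head/tail nodes and the `_Node` helper class are implementation choices of this sketch, not mandated by the task; the map gives O(1) lookup while the list keeps usage order (front = most recent, back = least).

```python
class _Node:
    """Doubly linked list node; __slots__ keeps per-node overhead low."""
    __slots__ = ("key", "value", "prev", "next")

    def __init__(self, key=None, value=None):
        self.key, self.value = key, value
        self.prev = self.next = None

class LRUCache:
    """Hash map for O(1) lookup + doubly linked list for usage order."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._map = {}                     # key -> _Node
        # Sentinel head/tail avoid None checks when splicing nodes.
        self._head, self._tail = _Node(), _Node()
        self._head.next, self._tail.prev = self._tail, self._head

    def _unlink(self, node):
        node.prev.next, node.next.prev = node.next, node.prev

    def _push_front(self, node):
        node.next, node.prev = self._head.next, self._head
        self._head.next.prev = node
        self._head.next = node

    def get(self, key):
        node = self._map.get(key)
        if node is None:
            return -1
        self._unlink(node)
        self._push_front(node)             # mark as most recently used
        return node.value

    def put(self, key, value):
        if self.capacity <= 0:             # edge case: nothing fits
            return
        node = self._map.get(key)
        if node is not None:               # update existing entry
            node.value = value
            self._unlink(node)
        elif len(self._map) >= self.capacity:
            lru = self._tail.prev          # evict least recently used
            self._unlink(lru)
            del self._map[lru.key]
        if node is None:
            node = _Node(key, value)
            self._map[key] = node
        self._push_front(node)
```

Running the prompt's test sequence against this sketch produces 10, -1, -1, 30, 40, matching the expected output.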

88
Mar 12, 2026 19:00
