Implement a Lock-Free Concurrent LRU Cache
Implement a thread-safe LRU (Least Recently Used) cache in Python that supports concurrent reads and writes without serializing every operation behind a single global lock. Your implementation must satisfy the following requirements:
1. **Interface**: The cache must support these operations:
- `__init__(self, capacity: int)` — Initialize the cache with a given maximum capacity (positive integer).
- `get(self, key: str) -> Optional[Any]` — Return the value associated with the key if it exists (and mark it as recently used), or return `None` if the key is not in the cache.
- `put(self, key: str, value: Any) -> None` — Insert or update the key-value pair. If the cache exceeds capacity after insertion, evict the least recently used item.
- `delete(self, key: str) -> bool` — Remove the key from the cache. Return `True` if the key was present, `False` otherwise.
- `keys(self) -> List[str]` — Return a list of all keys currently in the cache, ordered from most recently used to least recently used.
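To make the interface concrete, here is a sketch of the baseline-quality solution the prompt alludes to: a `collections.OrderedDict` guarded by one mutex. It satisfies the interface but serializes all operations, which is exactly what a stronger submission should improve on. The class name and internals are illustrative, not prescribed.

```python
import threading
from collections import OrderedDict
from typing import Any, List, Optional


class LRUCache:
    """Baseline LRU cache: a single mutex serializes every operation.

    Correct but suboptimal under read-heavy contention; the exercise
    asks for something finer-grained than this.
    """

    def __init__(self, capacity: int) -> None:
        if capacity <= 0:
            raise ValueError("capacity must be a positive integer")
        self._capacity = capacity
        self._lock = threading.Lock()
        # Insertion order tracks recency: front = least recently used.
        self._data: "OrderedDict[str, Any]" = OrderedDict()

    def get(self, key: str) -> Optional[Any]:
        with self._lock:
            if key not in self._data:
                return None
            self._data.move_to_end(key)  # mark as most recently used
            return self._data[key]

    def put(self, key: str, value: Any) -> None:
        with self._lock:
            if key in self._data:
                self._data.move_to_end(key)
            self._data[key] = value
            if len(self._data) > self._capacity:
                self._data.popitem(last=False)  # evict least recently used

    def delete(self, key: str) -> bool:
        with self._lock:
            sentinel = object()  # distinguishes "absent" from a stored None
            return self._data.pop(key, sentinel) is not sentinel

    def keys(self) -> List[str]:
        with self._lock:
            return list(reversed(self._data))  # most recent first
```

Note the sentinel in `delete`: comparing `pop(key, None)` against `None` would misreport a stored `None` value as absent.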
2. **Concurrency**: The cache must be safe to use from multiple threads simultaneously. Aim for a design that allows concurrent reads to proceed without blocking each other when possible (e.g., using read-write locks, fine-grained locking, or lock-free techniques). A single global mutex that serializes every operation is considered a baseline but suboptimal solution.
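Since the standard library provides no reader-writer lock, one of the suggested directions requires building one from a mutex and a condition variable. The sketch below is one possible shape (the `RWLock`, `read_locked`, and `write_locked` names are illustrative). A caveat worth noting in any solution: in an LRU cache even `get` mutates recency metadata, so a read lock only pays off if recency updates are deferred, e.g. queued and batch-applied under the write lock.

```python
import threading
from contextlib import contextmanager


class RWLock:
    """Writer-preference reader-writer lock built from stdlib primitives.

    Multiple readers may hold the lock concurrently; a waiting writer
    blocks new readers so writers cannot be starved indefinitely.
    """

    def __init__(self) -> None:
        self._cond = threading.Condition()
        self._readers = 0           # readers currently inside
        self._writer = False        # a writer currently inside
        self._waiting_writers = 0   # queued writers (block new readers)

    @contextmanager
    def read_locked(self):
        with self._cond:
            while self._writer or self._waiting_writers:
                self._cond.wait()
            self._readers += 1
        try:
            yield
        finally:
            with self._cond:
                self._readers -= 1
                if self._readers == 0:
                    self._cond.notify_all()

    @contextmanager
    def write_locked(self):
        with self._cond:
            self._waiting_writers += 1
            while self._writer or self._readers:
                self._cond.wait()
            self._waiting_writers -= 1
            self._writer = True
        try:
            yield
        finally:
            with self._cond:
                self._writer = False
                self._cond.notify_all()
```

A quick way to verify the "readers don't block each other" property is to have two threads rendezvous at a `threading.Barrier` while both hold the read lock; the barrier only releases if both are inside simultaneously.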
3. **Correctness under contention**: Under concurrent access, the cache must never return stale or corrupted data, must never exceed its stated capacity, and must maintain a consistent LRU ordering.
4. **Edge cases to handle**:
- Capacity of 1
- `put` with a key that already exists (should update the value and mark the key as most recently used)
- `delete` of a key that does not exist
- Concurrent `put` and `get` on the same key
- Rapid sequential evictions when many threads insert simultaneously
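The first two edge cases have a precise expected behavior that is easy to pin down sequentially. This sketch spells it out using a bare `OrderedDict` (front = least recently used) as a stand-in, independent of any particular cache implementation:

```python
from collections import OrderedDict


def lru_edge_case_demo():
    """Expected observable behavior for capacity 1 and key updates."""
    cap = 1
    d = OrderedDict()

    def put(k, v):
        if k in d:
            d.move_to_end(k)       # refresh recency on update
        d[k] = v
        if len(d) > cap:
            d.popitem(last=False)  # evict least recently used

    # Capacity 1: every insert of a new key evicts the previous entry.
    put("a", 1)
    put("b", 2)
    assert list(d) == ["b"]

    # Updating an existing key refreshes both its value and its recency.
    cap = 3
    d.clear()
    put("x", 1); put("y", 2); put("z", 3)
    put("x", 99)   # "x" becomes most recent; its value is updated
    put("w", 4)    # evicts "y", now the least recently used
    assert d["x"] == 99
    return list(d)  # least -> most recently used
```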
5. **Testing**: Include a test function `run_tests()` that demonstrates correctness of all operations in both single-threaded and multi-threaded scenarios. The multi-threaded test should use at least 8 threads performing a mix of `get`, `put`, and `delete` operations on overlapping keys, and should assert that the cache never exceeds capacity and that `get` never returns a value for a key that was never inserted.
Provide your complete implementation in Python. Use only the standard library (no third-party packages). Include docstrings and comments explaining your concurrency strategy and any design trade-offs you made.