Answer A: OpenAI GPT-5 mini
```python
# LRU Cache implementation using a hash map + doubly linked list
# - Hash map (dict) provides O(1) access from key -> node
# - Doubly linked list maintains usage order: most recently used at head, least recently used at tail
# - When a key is accessed or updated, we move its node to the head
# - When capacity is exceeded, we remove the tail node (LRU) and delete its key from the map

class _Node:
    """Doubly linked list node to hold key and value."""
    def __init__(self, key=None, value=None):
        self.key = key
        self.value = value
        self.prev = None
        self.next = None


class LRUCache:
    def __init__(self, capacity: int):
        """Initialize LRUCache with given capacity.

        Handles capacity <= 0 by effectively making the cache unusable
        (get returns -1, put no-op).
        """
        self.capacity = max(0, int(capacity))
        self.cache = {}  # key -> _Node
        # Sentinel nodes for head and tail to avoid edge checks
        self.head = _Node()  # Most recently used comes right after head
        self.tail = _Node()  # Least recently used is right before tail
        self.head.next = self.tail
        self.tail.prev = self.head
        self.size = 0

    # Internal helper: add node right after head (mark as recently used)
    def _add_node(self, node: _Node) -> None:
        node.prev = self.head
        node.next = self.head.next
        self.head.next.prev = node
        self.head.next = node

    # Internal helper: remove an existing node from the linked list
    def _remove_node(self, node: _Node) -> None:
        prev = node.prev
        nxt = node.next
        if prev:
            prev.next = nxt
        if nxt:
            nxt.prev = prev
        node.prev = node.next = None

    # Internal helper: move an existing node to head (most recently used)
    def _move_to_head(self, node: _Node) -> None:
        self._remove_node(node)
        self._add_node(node)

    # Internal helper: pop the least recently used node (right before tail)
    def _pop_tail(self) -> _Node:
        node = self.tail.prev
        if node is self.head:
            return None
        self._remove_node(node)
        return node

    def get(self, key: int) -> int:
        """Return value associated with key if present; otherwise -1.

        Access marks the key as recently used. O(1) average time.
        """
        if self.capacity == 0:
            return -1
        node = self.cache.get(key)
        if not node:
            return -1
        # Move the accessed node to head (recently used)
        self._move_to_head(node)
        return node.value

    def put(self, key: int, value: int) -> None:
        """Insert or update key with value. Evict LRU if capacity exceeded.

        O(1) average time.
        """
        if self.capacity == 0:
            return
        node = self.cache.get(key)
        if node:
            # Update and mark recently used
            node.value = value
            self._move_to_head(node)
        else:
            # Insert new node
            new_node = _Node(key, value)
            self.cache[key] = new_node
            self._add_node(new_node)
            self.size += 1
            if self.size > self.capacity:
                # Evict least recently used node
                tail_node = self._pop_tail()
                if tail_node:
                    del self.cache[tail_node.key]
                    self.size -= 1


# Demonstration of the implementation with the given test sequence
if __name__ == "__main__":
    cache = LRUCache(2)
    cache.put(1, 10)
    cache.put(2, 20)
    print(cache.get(1))  # Expected: 10
    cache.put(3, 30)     # Evicts key 2
    print(cache.get(2))  # Expected: -1
    cache.put(4, 40)     # Evicts key 1
    print(cache.get(1))  # Expected: -1
    print(cache.get(3))  # Expected: 30
    print(cache.get(4))  # Expected: 40
```
Result

Winning Votes: 3 / 3
Overall Comments
This is a strong, well-structured implementation of an LRU cache using a hash map and doubly linked list. The code correctly produces all expected outputs (10, -1, -1, 30, 40), handles edge cases like capacity 0, uses sentinel nodes to simplify list operations, and includes clear comments throughout. The implementation is clean, follows Python conventions, and achieves O(1) average time for both get and put. Minor note: the get method uses 'if not node' which could theoretically be problematic if node were a falsy value, but since nodes are objects this is fine in practice. The capacity=1 edge case is handled implicitly and correctly. Overall a high-quality answer with only very minor stylistic considerations.
Correctness
Weight: 35%
The implementation correctly produces all five expected outputs (10, -1, -1, 30, 40). The doubly linked list operations are correct, sentinel nodes prevent edge case bugs, and the eviction logic is sound. Both get and put are O(1) average. The capacity=0 edge case is handled. The 'if not node' check in get is technically fine since nodes are objects, but using 'if node is None' would be more explicit.
Completeness
Weight: 20%
All required components are present: the _Node class, LRUCache class with get and put, internal helpers for list manipulation, sentinel nodes, edge case handling for capacity 0 and 1, the full test sequence with expected output comments, and a main guard. Nothing significant is missing.
Code Quality
Weight: 20%
Code is clean, well-organized, and follows Python conventions. Comments explain the overall approach and each method. Sentinel nodes are a good design choice. Helper methods are well-named and single-purpose. Type hints are used. Minor: 'if not node' is slightly less explicit than 'if node is None', and the _remove_node null checks for prev/nxt are unnecessary given sentinel nodes always exist, but these are very minor issues.
Practical Value
Weight: 15%
The implementation is directly usable in real scenarios. It handles capacity 0 gracefully, uses a clean interface, and the design with sentinel nodes makes it robust. The code is self-contained and runnable. It could be extended easily (e.g., thread safety, TTL). The demonstration clearly shows the expected behavior.
Instruction Following
Weight: 10%
All instructions are followed: no OrderedDict or functools.lru_cache used, hash map + doubly linked list implemented, class named LRUCache with correct interface, test sequence included with expected output comments, edge cases handled, and comments explain the approach. The code is complete and runnable.
Overall Comments
Provides a correct, runnable LRUCache implementation using a dict plus a doubly linked list with sentinel nodes, achieving O(1) average get/put and producing the expected outputs for the given test. Includes clear comments and handles capacity 0 gracefully. Minor issues: _pop_tail is annotated to return _Node but can return None; some defensive checks in _remove_node are unnecessary given sentinel usage, but not harmful.
Correctness
Weight: 35%
Correct LRU behavior: get moves nodes to MRU position, put updates/moves existing keys, and evicts LRU on overflow. The provided test sequence will output 10, -1, -1, 30, 40 as expected. Capacity 0 is handled with get=-1 and put no-op.
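The semantics described above can be cross-checked against a compact reference model. The `RefLRU` class below is a hypothetical verification aid only, not the graded answer's technique: it exploits insertion-ordered dicts (Python 3.7+), whereas the answer correctly uses an explicit doubly linked list as the prompt requires:

```python
class RefLRU:
    """Hypothetical reference model for cross-checking LRU semantics.

    Dicts preserve insertion order (Python 3.7+), so the first key is
    always the least recently used one.
    """
    def __init__(self, capacity):
        self.capacity = capacity
        self.d = {}
    def get(self, key):
        if key not in self.d:
            return -1
        self.d[key] = self.d.pop(key)        # re-insert: mark most recently used
        return self.d[key]
    def put(self, key, value):
        if key in self.d:
            self.d.pop(key)                  # remove so re-insert moves it to MRU
        self.d[key] = value
        if len(self.d) > self.capacity:
            self.d.pop(next(iter(self.d)))   # first key = least recently used

ref = RefLRU(2)
ref.put(1, 10)
ref.put(2, 20)
print(ref.get(1))   # 10
ref.put(3, 30)      # evicts key 2
print(ref.get(2))   # -1
ref.put(4, 40)      # evicts key 1
print(ref.get(1))   # -1
print(ref.get(3))   # 30
print(ref.get(4))   # 40
```

Running both models against the same sequence yields the identical 10, -1, -1, 30, 40 trace the judge cites.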
Completeness
Weight: 20%
Includes full class implementation, internal helpers, and the exact requested test sequence. Comments explain the approach and key operations. Edge cases for capacity 0 are explicitly handled; capacity 1 works naturally via eviction logic.
Code Quality
Weight: 20%
Code is clean, readable, and well-structured with sentinel nodes to simplify list operations. Minor quality nits: _pop_tail return annotation doesn't match possible None return, and some checks in _remove_node are redundant with the sentinel invariant.
Practical Value
Weight: 15%
A practical reference implementation: O(1) operations, avoids prohibited libraries, and is easy to adapt. Demonstration code is runnable as-is.
Instruction Following
Weight: 10%
Follows all instructions: no OrderedDict/functools, uses hashmap + doubly linked list, includes explanatory comments, handles edge cases (notably capacity 0), and provides complete runnable code with the specified test.
Overall Comments
This is an outstanding response that provides a textbook-quality implementation of an LRU cache. The code is correct, efficient, and exceptionally well-structured, using the sophisticated technique of sentinel nodes to simplify the doubly linked list logic. It perfectly adheres to all constraints, includes excellent comments, handles edge cases gracefully, and provides the required runnable test case. It serves as a model answer for this task.
Correctness
Weight: 35%
The implementation is perfectly correct. The logic for both `get` and `put` operations correctly maintains the least-recently-used order and adheres to the capacity constraint. The provided test sequence produces the exact expected output. The O(1) average time complexity for both operations is successfully achieved by the combination of a hash map and a doubly linked list. Edge cases, such as a capacity of 0, are handled correctly.
Completeness
Weight: 20%
The submission is fully complete. It includes the `LRUCache` class, the private helper `_Node` class, and the complete, runnable demonstration block (`if __name__ == "__main__":`) as requested. All required components of the prompt have been delivered.
Code Quality
Weight: 20%
The code quality is exceptional. The use of private helper methods for node manipulation (`_add_node`, `_remove_node`, etc.) makes the main `get` and `put` methods clean and readable. The implementation of sentinel head and tail nodes is a sophisticated and efficient pattern that eliminates the need for edge-case checks within the list manipulation logic. Comments are clear, concise, and explain both the what and the why of the implementation. It's a professional-grade solution.
Practical Value
Weight: 15%
The answer has high practical and educational value. It's a classic, from-scratch implementation of a fundamental caching algorithm used widely in systems design. This code serves as an excellent reference for anyone needing to understand or implement an LRU cache and is a canonical solution to a very common software engineering interview problem.
Instruction Following
Weight: 10%
The answer meticulously follows every instruction in the prompt. It correctly implements the `LRUCache` class using a hash map and a custom doubly linked list, while explicitly avoiding the forbidden `collections.OrderedDict` and `functools.lru_cache`. The code is runnable, commented, and includes the specified test case.