Answer A: OpenAI GPT-5.2
```python
class _Node:
    __slots__ = ("key", "value", "prev", "next")

    def __init__(self, key=None, value=None):
        self.key = key
        self.value = value
        self.prev = None
        self.next = None


class LRUCache:
    def __init__(self, capacity: int):
        if not isinstance(capacity, int) or capacity <= 0:
            raise ValueError("capacity must be a positive integer")
        self.capacity = capacity
        self.map = {}  # key -> _Node
        # Dummy head/tail to avoid edge cases.
        # Most-recent is right after head. Least-recent is right before tail.
        self.head = _Node()
        self.tail = _Node()
        self.head.next = self.tail
        self.tail.prev = self.head

    def _add_to_front(self, node: _Node) -> None:
        node.prev = self.head
        node.next = self.head.next
        self.head.next.prev = node
        self.head.next = node

    def _remove(self, node: _Node) -> None:
        node.prev.next = node.next
        node.next.prev = node.prev
        node.prev = None
        node.next = None

    def _move_to_front(self, node: _Node) -> None:
        self._remove(node)
        self._add_to_front(node)

    def _evict_lru(self) -> None:
        lru = self.tail.prev
        if lru is self.head:
            return
        self._remove(lru)
        del self.map[lru.key]

    def get(self, key: int) -> int:
        node = self.map.get(key)
        if node is None:
            return -1
        self._move_to_front(node)
        return node.value

    def put(self, key: int, value: int) -> None:
        node = self.map.get(key)
        if node is not None:
            node.value = value
            self._move_to_front(node)
            return
        new_node = _Node(key, value)
        self.map[key] = new_node
        self._add_to_front(new_node)
        if len(self.map) > self.capacity:
            self._evict_lru()


# Demonstration
cache = LRUCache(2)
cache.put(1, 10)
cache.put(2, 20)
print(cache.get(1))  # Expected: 10
cache.put(3, 30)     # Evicts key 2
print(cache.get(2))  # Expected: -1
cache.put(4, 40)     # Evicts key 1
print(cache.get(1))  # Expected: -1
print(cache.get(3))  # Expected: 30
print(cache.get(4))  # Expected: 40
```
Result
Winning Votes: 3 / 3
Overall Comments
The submission is outstanding. It provides a correct, efficient, high-quality implementation of an LRU cache from scratch, using the classic combination of a hash map and a doubly linked list. The implementation is robust, featuring dummy head/tail nodes to simplify edge cases, and demonstrates good practices such as single-purpose helper methods and type hints. It fully adheres to the task requirements and constraints.
Score Details
Correctness
Weight 35%. The implementation is correct. It uses a hash map for O(1) lookups and a doubly linked list to maintain the order of use. The logic for `get`, `put`, and eviction works flawlessly, and the provided demonstration code produces the exact expected output.
Completeness
Weight 20%. The solution is fully complete. It provides a self-contained `LRUCache` class, a private `_Node` helper class, and the exact demonstration code requested in the prompt. All required functionality is implemented.
Code Quality
Weight 20%. The code quality is excellent. It is well structured, with clear, single-responsibility helper methods (`_add_to_front`, `_remove`, etc.) that enhance readability. The use of dummy head/tail nodes is a sophisticated technique that simplifies the linked-list logic. The inclusion of type hints, `__slots__`, and input validation for capacity demonstrates strong, professional coding practices.
Practical Value
Weight 15%. The solution has high practical value. It provides a textbook-quality, efficient implementation of a fundamental and widely used data structure. This code could be used in real-world applications and serves as an excellent reference for building an LRU cache from first principles.
Instruction Following
Weight 10%. The solution adheres to all instructions. It implements the LRU cache from scratch without forbidden libraries such as `collections.OrderedDict`, achieves the required O(1) average time complexity for its core operations, and includes the mandatory demonstration.
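For contrast, the kind of shortcut the task ruled out: `collections.OrderedDict` already provides O(1) `move_to_end` and `popitem`, so it trivializes the exercise. This is an illustrative sketch only; the class name `OrderedDictLRU` is ours, not part of the submission.

```python
from collections import OrderedDict


class OrderedDictLRU:
    """The forbidden shortcut: OrderedDict tracks insertion order and
    supports O(1) move_to_end/popitem, so no manual linked list is needed."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key: int) -> int:
        if key not in self.data:
            return -1
        self.data.move_to_end(key)  # mark key as most recently used
        return self.data[key]

    def put(self, key: int, value: int) -> None:
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict the least recently used entry
```

Comparing the two makes the point of the constraint clear: the submission had to rebuild by hand exactly the recency bookkeeping that `OrderedDict` gives for free.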
Overall Comments
This is an excellent implementation of an LRU cache. The solution combines a doubly linked list with a hash map to achieve O(1) average time complexity for both `get` and `put`. The code avoids the forbidden constructs (`collections.OrderedDict`, `functools.lru_cache`), handles edge cases properly, and produces the exact expected output. The dummy head/tail sentinel nodes are a clean approach that eliminates special cases in the list operations. The code is well structured, uses `__slots__` for memory efficiency, includes input validation, and is clearly commented. The demonstration matches the required output exactly.
Score Details
Correctness
Weight 35%. The implementation is fully correct: the doubly linked list with dummy sentinels tracks recency, `get` moves accessed nodes to the front, `put` handles both the insert and update cases, eviction removes `tail.prev` (the LRU node), and all demonstration outputs match the expected values (10, -1, -1, 30, 40).
Completeness
Weight 20%. The solution is fully complete: it includes the `LRUCache` class with all required methods (constructor, `get`, `put`), the private helper methods, the `_Node` class, input validation, and the full demonstration code specified in the task.
Code Quality
Weight 20%. Code quality is excellent. The use of `__slots__` on `_Node` shows attention to memory efficiency. The private helper methods (`_add_to_front`, `_remove`, `_move_to_front`, `_evict_lru`) are well named and single-purpose. Type hints are used throughout, comments explain the sentinel-node design, and the code is clean and readable.
Practical Value
Weight 15%. The implementation is highly practical. It includes input validation with a meaningful error message, uses `__slots__` for memory efficiency, and the design is extensible. The only minor gap is that it is not thread-safe, but thread safety was not required by the task.
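If thread safety were needed, one minimal approach would be a lock-guarded wrapper around the cache. This is a sketch, not part of the submission; `ThreadSafeLRU` is a hypothetical name, and the wrapper accepts any cache object exposing `get`/`put`.

```python
import threading


class ThreadSafeLRU:
    """Lock-guarded wrapper around an LRU cache instance (hypothetical
    helper). A single lock is the simplest correct choice here: note that
    get() also mutates the recency order, so reads must lock too."""

    def __init__(self, cache):
        self._cache = cache  # e.g. an LRUCache instance
        self._lock = threading.Lock()

    def get(self, key):
        with self._lock:
            return self._cache.get(key)

    def put(self, key, value):
        with self._lock:
            self._cache.put(key, value)
```

A per-instance lock keeps the wrapper drop-in compatible; finer-grained locking rarely pays off for a structure whose every operation touches the shared list head.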
Instruction Following
Weight 10%. All instructions are followed precisely: no `collections.OrderedDict` or `functools.lru_cache` is used, O(1) complexity is achieved via a hash map plus doubly linked list, the underlying data structure is implemented manually, and the exact demonstration code from the task is included with correct expected-output comments.
Overall Comments
This answer provides a correct, efficient, and self-contained Python implementation of an LRU cache using a hash map and a doubly linked list. It meets the required O(1) operations, handles capacity and eviction correctly, and includes the requested demonstration, which prints the expected outputs. The code is clean and readable; the only issues are minor style points (e.g., the attribute name `map` shadows a builtin, and there are no docstrings), but these do not affect correctness or performance.
Score Details
Correctness
Weight 35%. Implements LRU correctly using a hash map plus doubly linked list: `get` and `put` operate in O(1) average time, the eviction logic is correct, and the provided demonstration produces the expected outputs.
Completeness
Weight 20%. All required features are present: a constructor with capacity validation, `get` and `put` semantics, LRU eviction when capacity is exceeded, no use of `collections.OrderedDict` or `functools.lru_cache`, and the requested demonstration.
Code Quality
Weight 20%. The code is well structured and clear, and uses dummy head/tail nodes to simplify list operations. Methods are small and focused. Minor style nits: the attribute name `map` shadows the builtin, and there are no docstrings, but neither affects correctness.
Practical Value
Weight 15%. The implementation is practical and ready for use in real code: efficient, safe (validates capacity), and memory-efficient (uses `__slots__`). It would benefit from small additions such as docstrings or unit tests, but it is functionally complete.
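The unit tests the reviewer suggests might look like the following sketch. The module name in the comment is hypothetical, and a compact dict-based stand-in is defined inline so the sketch runs on its own in place of the submission's `LRUCache` (plain dicts preserve insertion order on Python 3.7+, which the stand-in exploits).

```python
import unittest

# In a real project these tests would import the class under test, e.g.
# `from lru_cache import LRUCache` (hypothetical module name). The stand-in
# below mimics the submission's get/put semantics using dict ordering.
class LRUCache:
    def __init__(self, capacity):
        self.capacity, self.d = capacity, {}

    def get(self, key):
        if key not in self.d:
            return -1
        self.d[key] = self.d.pop(key)  # re-insert => most recently used
        return self.d[key]

    def put(self, key, value):
        self.d.pop(key, None)
        self.d[key] = value
        if len(self.d) > self.capacity:
            del self.d[next(iter(self.d))]  # evict the oldest entry


class TestLRUCache(unittest.TestCase):
    def test_miss_returns_minus_one(self):
        self.assertEqual(LRUCache(1).get(99), -1)

    def test_get_refreshes_recency(self):
        c = LRUCache(2)
        c.put(1, 10)
        c.put(2, 20)
        c.get(1)      # key 1 becomes most recent
        c.put(3, 30)  # should evict key 2, not key 1
        self.assertEqual(c.get(2), -1)
        self.assertEqual(c.get(1), 10)

    def test_update_existing_key(self):
        c = LRUCache(1)
        c.put(1, 10)
        c.put(1, 11)
        self.assertEqual(c.get(1), 11)
```

Tests like `test_get_refreshes_recency` are the valuable ones: they pin down the recency semantics that a naive FIFO cache would get wrong.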
Instruction Following
Weight 10%. Follows the instructions exactly: does not use the prohibited libraries, implements the required data structures manually, ensures O(1) operations, and includes the exact demonstration with the expected printed results.