Answer A: OpenAI GPT-5.2
class LRUCache:
    class _Node:
        __slots__ = ("key", "val", "prev", "next")

        def __init__(self, key=0, val=0):
            self.key = key
            self.val = val
            self.prev = None
            self.next = None

    def __init__(self, capacity: int):
        if capacity <= 0:
            raise ValueError("capacity must be a positive integer")
        self.capacity = capacity
        self.map = {}  # key -> _Node
        # Doubly-linked list with sentinels:
        #   head <-> ... <-> tail
        # Most recently used is right after head.
        # Least recently used is right before tail.
        self.head = self._Node()
        self.tail = self._Node()
        self.head.next = self.tail
        self.tail.prev = self.head

    def _remove(self, node: _Node) -> None:
        p, n = node.prev, node.next
        p.next = n
        n.prev = p

    def _add_to_front(self, node: _Node) -> None:
        # Insert right after head (mark as most recently used)
        first = self.head.next
        node.prev = self.head
        node.next = first
        self.head.next = node
        first.prev = node

    def _move_to_front(self, node: _Node) -> None:
        self._remove(node)
        self._add_to_front(node)

    def _evict_lru(self) -> None:
        # Remove node before tail
        lru = self.tail.prev
        if lru is self.head:
            return
        self._remove(lru)
        del self.map[lru.key]

    def get(self, key: int) -> int:
        node = self.map.get(key)
        if node is None:
            return -1
        self._move_to_front(node)
        return node.val

    def put(self, key: int, value: int) -> None:
        node = self.map.get(key)
        if node is not None:
            node.val = value
            self._move_to_front(node)
            return
        new_node = self._Node(key, value)
        self.map[key] = new_node
        self._add_to_front(new_node)
        if len(self.map) > self.capacity:
            self._evict_lru()


# Demonstration of correctness
cache = LRUCache(2)
cache.put(1, 10)
cache.put(2, 20)
print(cache.get(1))   # Expected: 10
cache.put(3, 30)      # Evicts key 2
print(cache.get(2))   # Expected: -1
cache.put(4, 40)      # Evicts key 1
print(cache.get(1))   # Expected: -1
print(cache.get(3))   # Expected: 30
print(cache.get(4))   # Expected: 40

# Output:
# 10
# -1
# -1
# 30
# 40

# Brief O(1) explanation:
# - A dictionary (hash map) maps keys to nodes, giving O(1) average lookup for get/put.
# - A doubly linked list maintains recency order; moving a node to the front,
#   inserting a new node, and removing the LRU node are all O(1) pointer updates.
# Combining these gives O(1) average time for both get and put.
Result
Winning Votes: 2 / 3
Average Score
Total Score
Overall Comments
Excellent implementation of an LRU cache that fully meets all requirements. The solution uses a well-designed combination of a hash map and doubly-linked list with sentinel nodes to achieve O(1) time complexity for both get and put operations. The code is clean, well-structured, and handles all edge cases correctly, including updating existing keys and proper eviction. The demonstration produces the exact expected output (10, -1, -1, 30, 40), and the explanation of O(1) complexity is accurate and clear. The use of __slots__ in the Node class shows attention to memory efficiency. Minor observations: the ValueError for non-positive capacity is a nice defensive touch but wasn't explicitly required; the explanation could have been slightly more detailed about why dictionary lookups are O(1) average. Overall, this is a production-quality implementation.
Correctness
Weight 35%
The implementation is correct and produces the exact expected output sequence (10, -1, -1, 30, 40). All core operations work properly: get returns correct values and marks items as recently used, put correctly inserts new items and updates existing ones, and eviction properly removes the least recently used item when capacity is exceeded. The doubly-linked list with sentinels correctly maintains recency order. Edge cases like updating an existing key are handled correctly. The only minor deduction is that the ValueError for non-positive capacity, while good practice, wasn't explicitly required by the task.
Completeness
Weight 20%
The answer is fully complete. It provides the entire LRUCache class with all required methods (__init__, get, put), includes a complete working demonstration with the exact test sequence specified, produces all expected outputs, and concludes with a clear explanation of the O(1) time complexity. The implementation includes helper methods (_remove, _add_to_front, _move_to_front, _evict_lru) that are well-organized and necessary for the solution. Nothing is missing or incomplete.
Code Quality
Weight 20%
The code is of high quality with excellent structure and readability. The use of a nested _Node class with __slots__ demonstrates memory efficiency awareness. Method names are descriptive and follow Python conventions. The doubly-linked list implementation with sentinel nodes is elegant and avoids null-checking edge cases. Comments clearly explain the data structure layout and purpose of each method. The code is well-organized with logical separation of concerns. Minor: the explanation comment at the end could be slightly more detailed about hash map average-case complexity, but this is a very minor point.
Practical Value
Weight 15%
This implementation has high practical value as a reference solution. It demonstrates best practices for implementing LRU caches: using appropriate data structures (hash map + doubly-linked list), proper sentinel node usage to eliminate edge cases, and clean separation of helper methods. The code is production-ready and could be used directly in real applications. The defensive ValueError check adds robustness. The implementation would serve as an excellent educational example for understanding how to combine data structures to achieve specific time complexity requirements.
Instruction Following
Weight 10%
The answer follows all instructions precisely. It provides a complete class named exactly 'LRUCache' with the three required methods having correct signatures. Both get and put achieve O(1) average time complexity using appropriate data structures. The demonstration sequence is executed exactly as specified with correct output. The explanation of O(1) complexity is included and accurate. All requirements from the judging policy are met: correct output, proper edge case handling (updating existing keys), use of O(1) data structures, and complete runnable code.
Total Score
Overall Comments
The provided LRUCache implementation is exceptionally well-structured, correct, and highly efficient. It correctly utilizes a combination of a hash map and a custom doubly linked list with sentinel nodes to achieve the required O(1) average time complexity for both `get` and `put` operations. The code demonstrates excellent attention to detail, including the use of `__slots__` for the node class and robust handling of various cache operations like updating existing keys and evicting the least recently used item. The demonstration output is perfectly correct, and the explanation of time complexity is clear and accurate.
Correctness
Weight 35%
The implementation is fully correct. All LRU cache logic, including key retrieval, updates, insertions, and eviction of the least recently used item, functions as expected. The provided demonstration sequence produces the exact correct output.
Completeness
Weight 20%
The answer provides a complete `LRUCache` class with all required methods (`__init__`, `get`, `put`), a clear demonstration of its usage, and a concise explanation of its time complexity. Nothing requested was missing.
Code Quality
Weight 20%
The code quality is excellent. It's well-organized with helper methods for linked list operations, uses sentinel nodes for cleaner list manipulation, and includes `__slots__` for memory efficiency in the `_Node` class. Variable names are descriptive, and the overall structure is highly readable and maintainable.
Practical Value
Weight 15%
This implementation is highly practical and production-ready. It demonstrates a deep understanding of data structures required for an LRU cache, providing a robust and efficient solution. The explicit custom doubly linked list implementation offers full control and insight into the underlying mechanism, making it valuable in scenarios where `collections.OrderedDict` might not be suitable or a deeper understanding is needed.
Instruction Following
Weight 10%
All instructions were followed precisely. The class is named `LRUCache`, methods conform to the specified signatures, O(1) average time complexity is achieved and explained, capacity validation is present, and the demonstration sequence, along with its output, matches the requirements exactly.
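For comparison with the `collections.OrderedDict` alternative mentioned in the Practical Value note above, here is a minimal sketch of the same cache built on `OrderedDict`. The `OrderedDictLRUCache` name is illustrative and this code is not part of the graded answer; it relies on `OrderedDict.move_to_end` and `popitem(last=False)` to track recency.

```python
from collections import OrderedDict


class OrderedDictLRUCache:
    """Illustrative LRU cache built on collections.OrderedDict."""

    def __init__(self, capacity: int):
        if capacity <= 0:
            raise ValueError("capacity must be a positive integer")
        self.capacity = capacity
        self.data = OrderedDict()  # insertion order doubles as recency order

    def get(self, key: int) -> int:
        if key not in self.data:
            return -1
        self.data.move_to_end(key)  # mark as most recently used
        return self.data[key]

    def put(self, key: int, value: int) -> None:
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict least recently used
```

Running the demonstration sequence from the answer against this variant yields the same 10, -1, -1, 30, 40 behavior, which is why the review treats the two approaches as interchangeable when explicit pointer control is not required.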
Total Score
Overall Comments
Well-implemented LRU cache: correct behavior, efficient O(1) operations using a hash map + doubly linked list, and the provided demonstration produces the expected output. Code is clear and runnable, with small style/annotation improvements possible (e.g., more type annotations or docstrings) but nothing that affects correctness or performance.
Correctness
Weight 35%
Implementation is correct: get and put behave as specified, updates move items to MRU, eviction removes the LRU when capacity is exceeded, and the demonstration prints the expected sequence (10, -1, -1, 30, 40). Edge cases like updating an existing key and zero/negative capacity (ValueError) are handled.
Completeness
Weight 20%
Solution includes a complete, runnable LRUCache class with __init__, get, and put, a demonstration of the required operation sequence, and a concise explanation of O(1) complexity. All required behaviors from the prompt are present.
Code Quality
Weight 20%
Code is clean and well-structured: inner Node class with __slots__, sentinel head/tail nodes, and clear helper methods for list operations. Minor style improvements could include additional type annotations (e.g., for self.map) and docstrings/comments for public methods, but these do not affect correctness.
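To make the annotation suggestion above concrete, a sketch of how the constructor could declare the mapping's type (the annotation style is an assumption, not taken from the answer; only the attributes relevant to the suggestion are shown):

```python
from __future__ import annotations


class LRUCache:
    class _Node:
        __slots__ = ("key", "val", "prev", "next")

    def __init__(self, capacity: int) -> None:
        self.capacity: int = capacity
        # Annotating the mapping documents that values are internal nodes,
        # not raw cached values:
        self.map: dict[int, LRUCache._Node] = {}
```

The `from __future__ import annotations` line defers annotation evaluation, so the self-referential `LRUCache._Node` annotation works on the Python versions that support it.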
Practical Value
Weight 15%
Implementation is efficient and practical for real use: O(1) average operations, low-overhead node design, and clear semantics. For production use one might add thread-safety considerations or richer typing/docs, but the core functionality is solid.
Instruction Following
Weight 10%
All instructions were followed: class named LRUCache provided, required methods implemented with correct signatures, demonstration included, and the explanation of O(1) complexity is accurate.
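The thread-safety consideration raised in the last review's Practical Value note could be addressed by serializing access behind a single lock. A minimal sketch, assuming a `LockedCache` wrapper (a hypothetical name, not part of the answer) around any object exposing the same get/put interface:

```python
import threading


class LockedCache:
    """Hypothetical sketch: guards any cache with get/put behind one lock."""

    def __init__(self, cache):
        self._cache = cache                # e.g. an LRUCache instance
        self._lock = threading.Lock()      # coarse-grained: one lock for all ops

    def get(self, key: int) -> int:
        with self._lock:
            return self._cache.get(key)

    def put(self, key: int, value: int) -> None:
        with self._lock:
            self._cache.put(key, value)
```

A single coarse lock keeps the O(1) operations correct under concurrent use at the cost of serializing them; finer-grained schemes exist but are well beyond what the reviewed task required.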