
Implement a Least Recently Used (LRU) Cache

Compare model answers for this Coding benchmark and review scores, judging comments, and related examples.




Task Prompt


Implement an LRU (Least Recently Used) cache data structure in Python that supports the following operations, each in O(1) average time complexity:

1. `get(key)` — Return the value associated with the key if it exists in the cache, otherwise return -1. Accessing a key marks it as recently used.
2. `put(key, value)` — Insert or update the key-value pair. If the cache has reached its capacity, evict the least recently used item before inserting the new one.

Your implementation should be a class called `LRUCache` with the following interface:

```
cache = LRUCache(capacity)
cache.put(key, value)
result = cache.get(key)
```

Demonstrate your implementation with the following test sequence:

```
cache = LRUCache(2)
cache.put(1, 10)
cache.put(2, 20)
print(cache.get(1))  # Expected: 10
cache.put(3, 30)     # Evicts key 2
print(cache.get(2))  # Expected: -1
cache.put(4, 40)     # Evicts key 1
print(cache.get(1))  # Expected: -1
print(cache.get(3))  # Expected: 30
print(cache.get(4))  # Expected: 40
```

Requirements:

- Do NOT use `functools.lru_cache` or `collections.OrderedDict`. Implement the underlying data structure yourself.
- Use a combination of a hash map and a doubly linked list.
- Include clear comments explaining your approach.
- Handle edge cases such as capacity of 0 or 1.
- Provide the complete, runnable code including the test sequence above with its expected output.

Judging Policy


A high-quality answer should:

1. Provide a complete, runnable Python implementation of an LRU cache using a hash map and a doubly linked list — not relying on OrderedDict or functools.lru_cache.
2. Make both get and put operations O(1) average time.
3. Correctly produce the expected output for the provided test sequence (10, -1, -1, 30, 40).
4. Handle edge cases like capacity 0 or 1 gracefully.
5. Include meaningful comments explaining the approach and key design decisions.
6. Be clean, well-structured, and follow reasonable Python conventions.

Deduct quality for: using prohibited built-ins, incorrect output, missing edge case handling, lack of comments, or O(n) operations where O(1) is required.
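Criterion (3) can be checked mechanically by capturing the demo's stdout. A minimal sketch of such a harness (the names `matches_expected` and `demo` are illustrative, not part of the benchmark):

```python
import contextlib
import io

# Expected stdout lines for the benchmark's test sequence
EXPECTED = ["10", "-1", "-1", "30", "40"]

def matches_expected(run_demo) -> bool:
    """Run a demo callable, capture its stdout, and compare it
    line-by-line against the expected sequence."""
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        run_demo()
    lines = [ln.strip() for ln in buf.getvalue().splitlines() if ln.strip()]
    return lines == EXPECTED

# Stand-in demo that prints the expected values; a real check would
# run the candidate's LRUCache test sequence instead.
def demo():
    for v in (10, -1, -1, 30, 40):
        print(v)

print(matches_expected(demo))  # → True
```

A harness like this also explains why extra descriptive print lines (see the second answer below) can trip strict output comparison.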

Answers


```python
# LRU Cache implementation using a hash map + doubly linked list
# - Hash map (dict) provides O(1) access from key -> node
# - Doubly linked list maintains usage order: most recently used at head,
#   least recently used at tail
# - When a key is accessed or updated, we move its node to the head
# - When capacity is exceeded, we remove the tail node (LRU) and delete its key from the map


class _Node:
    """Doubly linked list node to hold key and value."""

    def __init__(self, key=None, value=None):
        self.key = key
        self.value = value
        self.prev = None
        self.next = None


class LRUCache:
    def __init__(self, capacity: int):
        """Initialize LRUCache with given capacity.

        Handles capacity <= 0 by effectively making the cache unusable
        (get returns -1, put is a no-op).
        """
        self.capacity = max(0, int(capacity))
        self.cache = {}  # key -> _Node
        # Sentinel nodes for head and tail to avoid edge checks
        self.head = _Node()  # Most recently used comes right after head
        self.tail = _Node()  # Least recently used is right before tail
        self.head.next = self.tail
        self.tail.prev = self.head
        self.size = 0

    # Internal helper: add node right after head (mark as recently used)
    def _add_node(self, node: _Node) -> None:
        node.prev = self.head
        node.next = self.head.next
        self.head.next.prev = node
        self.head.next = node

    # Internal helper: remove an existing node from the linked list
    def _remove_node(self, node: _Node) -> None:
        prev = node.prev
        nxt = node.next
        if prev:
            prev.next = nxt
        if nxt:
            nxt.prev = prev
        node.prev = node.next = None

    # Internal helper: move an existing node to head (most recently used)
    def _move_to_head(self, node: _Node) -> None:
        self._remove_node(node)
        self._add_node(node)

    # Internal helper: pop the least recently used node (right before tail)
    def _pop_tail(self) -> _Node:
        node = self.tail.prev
        if node is self.head:
            return None
        self._remove_node(node)
        return node

    def get(self, key: int) -> int:
        """Return value associated with key if present; otherwise -1.

        Access marks the key as recently used. O(1) average time.
        """
        if self.capacity == 0:
            return -1
        node = self.cache.get(key)
        if not node:
            return -1
        # Move the accessed node to head (recently used)
        self._move_to_head(node)
        return node.value

    def put(self, key: int, value: int) -> None:
        """Insert or update key with value. Evict LRU if capacity exceeded.

        O(1) average time.
        """
        if self.capacity == 0:
            return
        node = self.cache.get(key)
        if node:
            # Update and mark recently used
            node.value = value
            self._move_to_head(node)
        else:
            # Insert new node
            new_node = _Node(key, value)
            self.cache[key] = new_node
            self._add_node(new_node)
            self.size += 1
            if self.size > self.capacity:
                # Evict least recently used node
                tail_node = self._pop_tail()
                if tail_node:
                    del self.cache[tail_node.key]
                    self.size -= 1


# Demonstration of the implementation with the given test sequence
if __name__ == "__main__":
    cache = LRUCache(2)
    cache.put(1, 10)
    cache.put(2, 20)
    print(cache.get(1))  # Expected: 10
    cache.put(3, 30)     # Evicts key 2
    print(cache.get(2))  # Expected: -1
    cache.put(4, 40)     # Evicts key 1
    print(cache.get(1))  # Expected: -1
    print(cache.get(3))  # Expected: 30
    print(cache.get(4))  # Expected: 40
```

Result

#1 | Winner

Winning Votes

3 / 3

Average Score

95

Total Score

93

Overall Comments

This is a strong, well-structured implementation of an LRU cache using a hash map and doubly linked list. The code correctly produces all expected outputs (10, -1, -1, 30, 40), handles edge cases like capacity 0, uses sentinel nodes to simplify list operations, and includes clear comments throughout. The implementation is clean, follows Python conventions, and achieves O(1) average time for both get and put. Minor note: the get method uses 'if not node' which could theoretically be problematic if node were a falsy value, but since nodes are objects this is fine in practice. The capacity=1 edge case is handled implicitly and correctly. Overall a high-quality answer with only very minor stylistic considerations.

Score Details

Correctness

Weight 35% · Score 95

The implementation correctly produces all five expected outputs (10, -1, -1, 30, 40). The doubly linked list operations are correct, sentinel nodes prevent edge case bugs, and the eviction logic is sound. Both get and put are O(1) average. The capacity=0 edge case is handled. The 'if not node' check in get is technically fine since nodes are objects, but using 'if node is None' would be more explicit.
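The truthiness caveat behind this note can be seen with a mapping whose stored values are falsy. A contrived sketch (node objects in the actual answer are always truthy, so the bug cannot fire there):

```python
d = {"a": 0, "b": ""}

def lookup_truthy(key):
    v = d.get(key)
    if not v:  # treats 0, "", [] the same as a missing key
        return "miss"
    return v

def lookup_is_none(key):
    v = d.get(key)
    if v is None:  # only a genuine miss
        return "miss"
    return v

print(lookup_truthy("a"))   # → miss (wrong: 0 is a stored value)
print(lookup_is_none("a"))  # → 0
```

This is why `if node is None` is the more defensive idiom even when `if not node` happens to be safe.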

Completeness

Weight 20% · Score 95

All required components are present: the _Node class, LRUCache class with get and put, internal helpers for list manipulation, sentinel nodes, edge case handling for capacity 0 and 1, the full test sequence with expected output comments, and a main guard. Nothing significant is missing.

Code Quality

Weight 20% · Score 90

Code is clean, well-organized, and follows Python conventions. Comments explain the overall approach and each method. Sentinel nodes are a good design choice. Helper methods are well-named and single-purpose. Type hints are used. Minor: 'if not node' is slightly less explicit than 'if node is None', and the _remove_node null checks for prev/nxt are unnecessary given sentinel nodes always exist, but these are very minor issues.

Practical Value

Weight 15% · Score 90

The implementation is directly usable in real scenarios. It handles capacity 0 gracefully, uses a clean interface, and the design with sentinel nodes makes it robust. The code is self-contained and runnable. It could be extended easily (e.g., thread safety, TTL). The demonstration clearly shows the expected behavior.
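The thread-safety extension mentioned here could be as simple as a coarse-grained lock around the cache's public methods. A sketch using a hypothetical dict-backed stand-in with the same `get`/`put` interface (in practice the wrapped object would be the `LRUCache` from the answer):

```python
import threading

class _DictCache:
    # Minimal stand-in exposing the same get/put interface as LRUCache
    def __init__(self):
        self._d = {}
    def get(self, key):
        return self._d.get(key, -1)
    def put(self, key, value):
        self._d[key] = value

class ThreadSafeCache:
    """Coarse-grained lock around any cache exposing get/put."""
    def __init__(self, cache):
        self._cache = cache
        self._lock = threading.Lock()
    def get(self, key):
        with self._lock:
            return self._cache.get(key)
    def put(self, key, value):
        with self._lock:
            self._cache.put(key, value)

cache = ThreadSafeCache(_DictCache())
cache.put(1, 10)
print(cache.get(1))  # → 10
print(cache.get(2))  # → -1
```

A single lock keeps both operations O(1) and is usually sufficient, since `get` mutates list order and therefore cannot safely run lock-free.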

Instruction Following

Weight 10% · Score 95

All instructions are followed: no OrderedDict or functools.lru_cache used, hash map + doubly linked list implemented, class named LRUCache with correct interface, test sequence included with expected output comments, edge cases handled, and comments explain the approach. The code is complete and runnable.

Judge Model: OpenAI GPT-5.2

Total Score

93

Overall Comments

Provides a correct, runnable LRUCache implementation using a dict plus a doubly linked list with sentinel nodes, achieving O(1) average get/put and producing the expected outputs for the given test. Includes clear comments and handles capacity 0 gracefully. Minor issues: _pop_tail is annotated to return _Node but can return None; some defensive checks in _remove_node are unnecessary given sentinel usage, but not harmful.
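The annotation nit could be fixed with `typing.Optional`. A standalone sketch of the pattern (names here are illustrative, not the answer's exact code):

```python
from typing import Optional

class Node:
    def __init__(self, key=None, value=None):
        self.key, self.value = key, value
        self.prev: Optional["Node"] = None
        self.next: Optional["Node"] = None

def pop_tail(head: Node, tail: Node) -> Optional[Node]:
    """Unlink and return the node just before the tail sentinel,
    or None when only the sentinels remain."""
    node = tail.prev
    if node is head:  # list is empty
        return None
    node.prev.next = node.next
    node.next.prev = node.prev
    node.prev = node.next = None
    return node

head, tail = Node(), Node()
head.next, tail.prev = tail, head
print(pop_tail(head, tail))  # → None (empty list)
```

`Optional[Node]` makes the possible `None` return visible to type checkers, which is what the annotation critique asks for.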

Score Details

Correctness

Weight 35% · Score 95

Correct LRU behavior: get moves nodes to MRU position, put updates/moves existing keys, and evicts LRU on overflow. The provided test sequence will output 10, -1, -1, 30, 40 as expected. Capacity 0 is handled with get=-1 and put no-op.

Completeness

Weight 20% · Score 95

Includes full class implementation, internal helpers, and the exact requested test sequence. Comments explain the approach and key operations. Edge cases for capacity 0 are explicitly handled; capacity 1 works naturally via eviction logic.

Code Quality

Weight 20% · Score 85

Code is clean, readable, and well-structured with sentinel nodes to simplify list operations. Minor quality nits: _pop_tail return annotation doesn’t match possible None return, and some checks in _remove_node are redundant with the sentinel invariant.

Practical Value

Weight 15% · Score 90

A practical reference implementation: O(1) operations, avoids prohibited libraries, and is easy to adapt. Demonstration code is runnable as-is.

Instruction Following

Weight 10% · Score 100

Follows all instructions: no OrderedDict/functools, uses hashmap + doubly linked list, includes explanatory comments, handles edge cases (notably capacity 0), and provides complete runnable code with the specified test.

Total Score

98

Overall Comments

This is an outstanding response that provides a textbook-quality implementation of an LRU cache. The code is correct, efficient, and exceptionally well-structured, using the sophisticated technique of sentinel nodes to simplify the doubly linked list logic. It perfectly adheres to all constraints, includes excellent comments, handles edge cases gracefully, and provides the required runnable test case. It serves as a model answer for this task.

Score Details

Correctness

Weight 35% · Score 100

The implementation is perfectly correct. The logic for both `get` and `put` operations correctly maintains the least-recently-used order and adheres to the capacity constraint. The provided test sequence produces the exact expected output. The O(1) average time complexity for both operations is successfully achieved by the combination of a hash map and a doubly linked list. Edge cases, such as a capacity of 0, are handled correctly.

Completeness

Weight 20% · Score 100

The submission is fully complete. It includes the `LRUCache` class, the private helper `_Node` class, and the complete, runnable demonstration block (`if __name__ == "__main__":`) as requested. All required components of the prompt have been delivered.

Code Quality

Weight 20% · Score 95

The code quality is exceptional. The use of private helper methods for node manipulation (`_add_node`, `_remove_node`, etc.) makes the main `get` and `put` methods clean and readable. The implementation of sentinel head and tail nodes is a sophisticated and efficient pattern that eliminates the need for edge-case checks within the list manipulation logic. Comments are clear, concise, and explain both the what and the why of the implementation. It's a professional-grade solution.

Practical Value

Weight 15% · Score 90

The answer has high practical and educational value. It's a classic, from-scratch implementation of a fundamental caching algorithm used widely in systems design. This code serves as an excellent reference for anyone needing to understand or implement an LRU cache and is a canonical solution to a very common software engineering interview problem.

Instruction Following

Weight 10% · Score 100

The answer meticulously follows every instruction in the prompt. It correctly implements the `LRUCache` class using a hash map and a custom doubly linked list, while explicitly avoiding the forbidden `collections.OrderedDict` and `functools.lru_cache`. The code is runnable, commented, and includes the specified test case.


```python
class Node:
    """Doubly linked list node to store key-value pairs and maintain order."""

    def __init__(self, key, value):
        self.key = key
        self.value = value
        self.prev = None
        self.next = None


class LRUCache:
    """Implements a Least Recently Used (LRU) cache using a hash map and a doubly linked list."""

    def __init__(self, capacity: int):
        """Initializes the LRUCache with a given capacity."""
        self.capacity = capacity
        self.cache = {}  # Hash map: key -> Node
        self.head = Node(0, 0)  # Dummy head node
        self.tail = Node(0, 0)  # Dummy tail node
        self.head.next = self.tail
        self.tail.prev = self.head

    def _remove_node(self, node):
        """Removes a node from the doubly linked list."""
        prev_node = node.prev
        next_node = node.next
        prev_node.next = next_node
        next_node.prev = prev_node

    def _add_to_head(self, node):
        """Adds a node to the head of the doubly linked list (most recently used)."""
        node.prev = self.head
        node.next = self.head.next
        self.head.next.prev = node
        self.head.next = node

    def get(self, key: int) -> int:
        """Retrieves the value for a key, marking it as recently used. Returns -1 if key not found."""
        if key in self.cache:
            node = self.cache[key]
            # Move the accessed node to the head (most recently used)
            self._remove_node(node)
            self._add_to_head(node)
            return node.value
        return -1

    def put(self, key: int, value: int) -> None:
        """Inserts or updates a key-value pair. Evicts the LRU item if capacity is reached."""
        if self.capacity == 0:
            return
        if key in self.cache:
            # Update existing key
            node = self.cache[key]
            node.value = value
            # Move to head as it's recently used
            self._remove_node(node)
            self._add_to_head(node)
        else:
            # Insert new key
            if len(self.cache) >= self.capacity:
                # Evict the least recently used item (node before tail)
                lru_node = self.tail.prev
                self._remove_node(lru_node)
                del self.cache[lru_node.key]
            new_node = Node(key, value)
            self.cache[key] = new_node
            self._add_to_head(new_node)


# Demonstrate implementation with the test sequence:
cache = LRUCache(2)
print("Putting (1, 10)")
cache.put(1, 10)
print("Putting (2, 20)")
cache.put(2, 20)
print("Getting key 1:")
print(cache.get(1))  # Expected: 10
print("Putting (3, 30)")
cache.put(3, 30)  # Evicts key 2
print("Getting key 2:")
print(cache.get(2))  # Expected: -1
print("Putting (4, 40)")
cache.put(4, 40)  # Evicts key 1
print("Getting key 1:")
print(cache.get(1))  # Expected: -1
print("Getting key 3:")
print(cache.get(3))  # Expected: 30
print("Getting key 4:")
print(cache.get(4))  # Expected: 40
```

Result

#2

Winning Votes

0 / 3

Average Score

91

Total Score

92

Overall Comments

The implementation is excellent overall. It correctly uses a hash map combined with a doubly linked list with sentinel head and tail nodes, achieving O(1) for both get and put. The test sequence produces the correct expected output (10, -1, -1, 30, 40). Edge cases like capacity 0 are handled. The code is clean, well-commented, and follows good Python conventions. The only minor weaknesses are that capacity 1 is not explicitly demonstrated (though it works correctly by design), and the comments, while present and helpful, could be slightly more detailed about the overall design rationale. Overall this is a high-quality, complete, and correct implementation.

Score Details

Correctness

Weight 35% · Score 95

All operations are O(1). The test sequence produces exactly the expected output: 10, -1, -1, 30, 40. LRU eviction logic is correct — after get(1) moves key 1 to most-recently-used, put(3,30) correctly evicts key 2, and put(4,40) correctly evicts key 1. Capacity 0 edge case is handled. No use of prohibited built-ins.

Completeness

Weight 20% · Score 90

The implementation includes the Node class, LRUCache class with all required methods, helper methods for list manipulation, capacity 0 handling, and the full test sequence with descriptive print statements. Capacity 1 edge case is not explicitly tested but works correctly by design. All required components are present.

Code Quality

Weight 20% · Score 90

Code is clean, well-structured, and uses meaningful names. Docstrings are present on all classes and methods. Helper methods _remove_node and _add_to_head are properly abstracted. Type hints are included. Sentinel nodes simplify boundary conditions elegantly. Minor: could add a brief high-level comment explaining the overall design strategy at the top.

Practical Value

Weight 15% · Score 85

The implementation is production-quality and directly usable. It handles the capacity 0 edge case gracefully. The demonstration output is clear with descriptive labels. Could be enhanced with additional edge case demonstrations (capacity 1, duplicate puts) to increase practical educational value.

Instruction Following

Weight 10% · Score 95

All instructions are followed: no OrderedDict or functools.lru_cache used, hash map + doubly linked list implemented from scratch, comments explaining the approach are included, edge cases handled, complete runnable code with the exact test sequence provided. The output matches expected values exactly.

Judge Model: OpenAI GPT-5.2

Total Score

85

Overall Comments

Implements an LRU cache correctly using a hashmap plus a doubly linked list with dummy head/tail, achieving O(1) average get/put and proper eviction. Handles capacity=0 and typical edge cases (including capacity=1) fine. The main weakness is the demo output: it includes extra explanatory print lines, so it does not match the required expected output sequence exactly, even though the returned values are correct. Otherwise the code is clean and well-commented.

Score Details

Correctness

Weight 35% · Score 92

Core LRU behavior is correct: get moves nodes to MRU position, put updates/moves existing nodes, and eviction removes tail.prev (LRU). Hash map points to nodes, and list ops are O(1). Capacity=0 is handled by early return. No obvious logical bugs.

Completeness

Weight 20% · Score 86

Provides full runnable code with Node, LRUCache, helper methods, and a demonstration sequence. Edge cases are addressed at least for capacity=0 (and capacity=1 works implicitly). However, the demonstration does not present output in the exact expected minimal form due to extra print statements.

Code Quality

Weight 20% · Score 84

Well-structured with clear separation of concerns and helpful docstrings/comments. Uses dummy head/tail sentinels appropriately. Minor improvements could include type hints for helper methods and ensuring helper methods are marked as internal consistently, but overall solid.

Practical Value

Weight 15% · Score 78

Practical, standard LRU implementation that can be reused. The extra demo verbosity may be inconvenient for automated checking, but the data structure itself is useful and efficient.

Instruction Following

Weight 10% · Score 70

Follows the main constraints: does not use OrderedDict or functools.lru_cache; uses hashmap + doubly linked list; includes comments; includes test sequence. Main deviation is that the printed output is not the exact expected output lines (adds descriptive text), which can violate the 'expected output' requirement.

Total Score

98

Overall Comments

The answer provides an excellent and textbook-quality implementation of an LRU Cache. It correctly uses a hash map and a doubly linked list to achieve O(1) time complexity for both `get` and `put` operations. The code is clean, well-commented, uses good practices like helper methods and dummy nodes, and correctly handles edge cases. All instructions from the prompt were followed meticulously, resulting in a complete and correct solution.

Score Details

Correctness

Weight 35% · Score 100

The implementation is perfectly correct. It successfully uses a hash map for O(1) lookups and a doubly linked list for O(1) ordering updates. The logic for `get`, `put`, and eviction is sound. The code passes the provided test sequence and correctly handles the capacity=0 edge case.

Completeness

Weight 20% · Score 100

The submission is fully complete and runnable. It includes the required `LRUCache` class, a helper `Node` class, all necessary methods, and the full demonstration code with the test sequence as requested in the prompt.

Code Quality

Weight 20% · Score 95

The code quality is exceptionally high. It is well-structured, using separate classes and private helper methods (`_remove_node`, `_add_to_head`) for clarity. The use of dummy head and tail nodes is a classic, robust technique that simplifies the list manipulation logic. The code includes clear docstrings, type hints, and follows Python conventions.

Practical Value

Weight 15% · Score 90

The solution has high practical value as it provides a canonical, efficient implementation of a fundamental and widely used data structure. This code is not just a theoretical exercise; it is a direct and robust solution that could be used in real-world applications.

Instruction Following

Weight 10% · Score 100

The answer perfectly adheres to all instructions. It implements the LRU cache from scratch using a hash map and a doubly linked list, without relying on prohibited libraries like `collections.OrderedDict`. It includes comments, handles edge cases, and provides the complete runnable code with the test sequence.

Comparison Summary

Final rank order is determined by judge-wise rank aggregation (average rank + Borda tie-break). Average score is shown for reference.
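The aggregation rule described here can be sketched as follows. This is a hypothetical reconstruction of "average rank + Borda tie-break," not the site's actual code:

```python
from collections import defaultdict

def aggregate(rankings):
    """rankings: one ordering per judge, best answer first,
    e.g. [["A", "B"], ["A", "B"], ["B", "A"]]."""
    avg_rank = defaultdict(float)
    borda = defaultdict(int)
    n_judges = len(rankings)
    for order in rankings:
        for pos, ans in enumerate(order):
            avg_rank[ans] += (pos + 1) / n_judges   # average rank: lower is better
            borda[ans] += len(order) - 1 - pos      # Borda points: higher is better
    # Primary key: average rank ascending; tie-break: Borda score descending
    return sorted(avg_rank, key=lambda a: (avg_rank[a], -borda[a]))

print(aggregate([["A", "B"], ["A", "B"], ["B", "A"]]))  # → ['A', 'B']
```

Under this scheme the first answer's 3/3 winning votes translate directly into the better average rank, which is why it finishes #1 despite both answers scoring in the 90s.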

Judges: 3

Winning Votes

3 / 3

Average Score

95

Winning Votes

0 / 3

Average Score

91