Orivel

Implement a Least Recently Used (LRU) Cache

Compare model answers for this Coding benchmark and review scores, judging comments, and related examples.


Contents

Task Overview

Benchmark Genres

Coding

Task Creator Model

Answering Models

Judge Models

Task Prompt

Implement an LRU (Least Recently Used) cache data structure in Python. Your implementation should be a class called `LRUCache` that supports the following operations:

1. `__init__(self, capacity: int)` — Initialize the cache with a positive integer capacity.
2. `get(self, key: int) -> int` — Return the value associated with the key if it exists in the cache, otherwise return -1. Accessing a key counts as a "use".
3. `put(self, key: int, value: int) -> None` — Insert or update the key-value pair. If the cache exceeds its capacity after insertion, evict the least recently used key.

Both `get` and `put` must run in O(1) average time complexity. Provide the complete class implementation. Then, demonstrate its correctness by showing the output of the following sequence of operations:

```
cache = LRUCache(2)
cache.put(1, 10)
cache.put(2, 20)
print(cache.get(1))  # Expected: 10
cache.put(3, 30)     # Evicts key 2
print(cache.get(2))  # Expected: -1
cache.put(4, 40)     # Evicts key 1
print(cache.get(1))  # Expected: -1
print(cache.get(3))  # Expected: 30
print(cache.get(4))  # Expected: 40
```

Explain briefly how your implementation achieves O(1) time complexity for both operations.

Judging Policy

A good answer should:

1. Provide a complete, runnable Python class named `LRUCache` with correct `__init__`, `get`, and `put` methods.
2. Use an appropriate data structure combination (e.g., a hash map plus a doubly linked list, or Python's `OrderedDict`) to achieve O(1) average time for both `get` and `put`.
3. Correctly handle eviction of the least recently used item when capacity is exceeded.
4. Produce the correct output for the demonstration sequence: 10, -1, -1, 30, 40.
5. Include a clear, accurate explanation of why the chosen data structures yield O(1) time complexity.

Deduct quality for: incorrect output, missing edge case handling (e.g., updating an existing key), use of data structures that don't achieve O(1) (e.g., scanning a list), or incomplete/non-runnable code.
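The policy names `OrderedDict` as an acceptable alternative to a hand-rolled doubly linked list. Neither submitted answer takes that route, but for comparison, a minimal sketch of the `OrderedDict` approach might look like this (the class name is illustrative):

```python
from collections import OrderedDict


class OrderedDictLRUCache:
    """Minimal LRU cache built on OrderedDict, which maintains key order
    and supports O(1) move_to_end() and popitem() at either end."""

    def __init__(self, capacity: int):
        if capacity <= 0:
            raise ValueError("capacity must be a positive integer")
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key: int) -> int:
        if key not in self.data:
            return -1
        self.data.move_to_end(key)  # mark as most recently used
        return self.data[key]

    def put(self, key: int, value: int) -> None:
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict least recently used
```

This trades the explicit pointer manipulation of a custom linked list for `OrderedDict`'s internal one, which is why the judging policy treats the two as equivalent for the O(1) requirement.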

Answers

```python
class LRUCache:
    class _Node:
        __slots__ = ("key", "val", "prev", "next")

        def __init__(self, key=0, val=0):
            self.key = key
            self.val = val
            self.prev = None
            self.next = None

    def __init__(self, capacity: int):
        if capacity <= 0:
            raise ValueError("capacity must be a positive integer")
        self.capacity = capacity
        self.map = {}  # key -> _Node
        # Doubly-linked list with sentinels:
        #   head <-> ... <-> tail
        # Most recently used is right after head.
        # Least recently used is right before tail.
        self.head = self._Node()
        self.tail = self._Node()
        self.head.next = self.tail
        self.tail.prev = self.head

    def _remove(self, node: _Node) -> None:
        p, n = node.prev, node.next
        p.next = n
        n.prev = p

    def _add_to_front(self, node: _Node) -> None:
        # Insert right after head (mark as most recently used)
        first = self.head.next
        node.prev = self.head
        node.next = first
        self.head.next = node
        first.prev = node

    def _move_to_front(self, node: _Node) -> None:
        self._remove(node)
        self._add_to_front(node)

    def _evict_lru(self) -> None:
        # Remove node before tail
        lru = self.tail.prev
        if lru is self.head:
            return
        self._remove(lru)
        del self.map[lru.key]

    def get(self, key: int) -> int:
        node = self.map.get(key)
        if node is None:
            return -1
        self._move_to_front(node)
        return node.val

    def put(self, key: int, value: int) -> None:
        node = self.map.get(key)
        if node is not None:
            node.val = value
            self._move_to_front(node)
            return
        new_node = self._Node(key, value)
        self.map[key] = new_node
        self._add_to_front(new_node)
        if len(self.map) > self.capacity:
            self._evict_lru()


# Demonstration of correctness
cache = LRUCache(2)
cache.put(1, 10)
cache.put(2, 20)
print(cache.get(1))  # Expected: 10
cache.put(3, 30)     # Evicts key 2
print(cache.get(2))  # Expected: -1
cache.put(4, 40)     # Evicts key 1
print(cache.get(1))  # Expected: -1
print(cache.get(3))  # Expected: 30
print(cache.get(4))  # Expected: 40

# Output:
# 10
# -1
# -1
# 30
# 40

# Brief O(1) explanation:
# - A dictionary (hash map) maps keys to nodes, giving O(1) average lookup
#   for get/put.
# - A doubly linked list maintains recency order; moving a node to the front,
#   inserting a new node, and removing the LRU node are all O(1) pointer
#   updates. Combining these gives O(1) average time for both get and put.
```

Result

#1 | Winner

Winning Votes

2 / 3

Average Score

98

Total Score

98

Overall Comments

Excellent implementation of an LRU cache that fully meets all requirements. The solution uses a well-designed combination of a hash map and doubly-linked list with sentinel nodes to achieve O(1) time complexity for both get and put operations. The code is clean, well-structured, and handles all edge cases correctly, including updating existing keys and proper eviction. The demonstration produces the exact expected output (10, -1, -1, 30, 40), and the explanation of O(1) complexity is accurate and clear. The use of __slots__ in the Node class shows attention to memory efficiency. Minor observations: the ValueError for non-positive capacity is a nice defensive touch but wasn't explicitly required; the explanation could have been slightly more detailed about why dictionary lookups are O(1) average. Overall, this is a production-quality implementation.

View Score Details

Correctness

Weight 35%
98

The implementation is correct and produces the exact expected output sequence (10, -1, -1, 30, 40). All core operations work properly: get returns correct values and marks items as recently used, put correctly inserts new items and updates existing ones, and eviction properly removes the least recently used item when capacity is exceeded. The doubly-linked list with sentinels correctly maintains recency order. Edge cases like updating an existing key are handled correctly. The only minor deduction is that the ValueError for non-positive capacity, while good practice, wasn't explicitly required by the task.

Completeness

Weight 20%
100

The answer is fully complete. It provides the entire LRUCache class with all required methods (__init__, get, put), includes a complete working demonstration with the exact test sequence specified, produces all expected outputs, and concludes with a clear explanation of the O(1) time complexity. The implementation includes helper methods (_remove, _add_to_front, _move_to_front, _evict_lru) that are well-organized and necessary for the solution. Nothing is missing or incomplete.

Code Quality

Weight 20%
95

The code is of high quality with excellent structure and readability. The use of a nested _Node class with __slots__ demonstrates memory efficiency awareness. Method names are descriptive and follow Python conventions. The doubly-linked list implementation with sentinel nodes is elegant and avoids null-checking edge cases. Comments clearly explain the data structure layout and purpose of each method. The code is well-organized with logical separation of concerns. Minor: the explanation comment at the end could be slightly more detailed about hash map average-case complexity, but this is a very minor point.

Practical Value

Weight 15%
95

This implementation has high practical value as a reference solution. It demonstrates best practices for implementing LRU caches: using appropriate data structures (hash map + doubly-linked list), proper sentinel node usage to eliminate edge cases, and clean separation of helper methods. The code is production-ready and could be used directly in real applications. The defensive ValueError check adds robustness. The implementation would serve as an excellent educational example for understanding how to combine data structures to achieve specific time complexity requirements.

Instruction Following

Weight 10%
100

The answer follows all instructions precisely. It provides a complete class named exactly 'LRUCache' with the three required methods having correct signatures. Both get and put achieve O(1) average time complexity using appropriate data structures. The demonstration sequence is executed exactly as specified with correct output. The explanation of O(1) complexity is included and accurate. All requirements from the judging policy are met: correct output, proper edge case handling (updating existing keys), use of O(1) data structures, and complete runnable code.

Total Score

100

Overall Comments

The provided LRUCache implementation is exceptionally well-structured, correct, and highly efficient. It correctly utilizes a combination of a hash map and a custom doubly linked list with sentinel nodes to achieve the required O(1) average time complexity for both `get` and `put` operations. The code demonstrates excellent attention to detail, including the use of `__slots__` for the node class and robust handling of various cache operations like updating existing keys and evicting the least recently used item. The demonstration output is perfectly correct, and the explanation of time complexity is clear and accurate.

View Score Details

Correctness

Weight 35%
100

The implementation is fully correct. All LRU cache logic, including key retrieval, updates, insertions, and eviction of the least recently used item, functions as expected. The provided demonstration sequence produces the exact correct output.

Completeness

Weight 20%
100

The answer provides a complete `LRUCache` class with all required methods (`__init__`, `get`, `put`), a clear demonstration of its usage, and a concise explanation of its time complexity. Nothing requested was missing.

Code Quality

Weight 20%
100

The code quality is excellent. It's well-organized with helper methods for linked list operations, uses sentinel nodes for cleaner list manipulation, and includes `__slots__` for memory efficiency in the `_Node` class. Variable names are descriptive, and the overall structure is highly readable and maintainable.

Practical Value

Weight 15%
100

This implementation is highly practical and production-ready. It demonstrates a deep understanding of data structures required for an LRU cache, providing a robust and efficient solution. The explicit custom doubly linked list implementation offers full control and insight into the underlying mechanism, making it valuable in scenarios where `collections.OrderedDict` might not be suitable or a deeper understanding is needed.

Instruction Following

Weight 10%
100

All instructions were followed precisely. The class is named `LRUCache`, methods conform to the specified signatures, O(1) average time complexity is achieved and explained, capacity validation is present, and the demonstration sequence, along with its output, matches the requirements exactly.

Judge Model: OpenAI GPT-5 mini

Total Score

97

Overall Comments

Well-implemented LRU cache: correct behavior, efficient O(1) operations using a hash map + doubly linked list, and the provided demonstration produces the expected output. Code is clear and runnable, with small style/annotation improvements possible (e.g., more type annotations or docstrings) but nothing that affects correctness or performance.

View Score Details

Correctness

Weight 35%
100

Implementation is correct: get and put behave as specified, updates move items to MRU, eviction removes the LRU when capacity is exceeded, and the demonstration prints the expected sequence (10, -1, -1, 30, 40). Edge cases like updating an existing key and zero/negative capacity (ValueError) are handled.

Completeness

Weight 20%
100

Solution includes a complete, runnable LRUCache class with __init__, get, and put, a demonstration of the required operation sequence, and a concise explanation of O(1) complexity. All required behaviors from the prompt are present.

Code Quality

Weight 20%
90

Code is clean and well-structured: inner Node class with __slots__, sentinel head/tail nodes, and clear helper methods for list operations. Minor style improvements could include additional type annotations (e.g., for self.map) and docstrings/comments for public methods, but these do not affect correctness.

Practical Value

Weight 15%
90

Implementation is efficient and practical for real use: O(1) average operations, low-overhead node design, and clear semantics. For production use one might add thread-safety considerations or richer typing/docs, but the core functionality is solid.
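The thread-safety consideration raised above is not addressed in either submitted answer; a hypothetical sketch of the usual fix, wrapping each operation in a lock (class name and design are illustrative, built on `OrderedDict` for brevity rather than the submitted linked-list code):

```python
import threading
from collections import OrderedDict


class ThreadSafeLRUCache:
    """LRU cache guarded by a lock so concurrent get/put calls
    cannot interleave and corrupt the recency ordering."""

    def __init__(self, capacity: int):
        if capacity <= 0:
            raise ValueError("capacity must be a positive integer")
        self.capacity = capacity
        self._data = OrderedDict()
        self._lock = threading.Lock()

    def get(self, key: int) -> int:
        with self._lock:
            if key not in self._data:
                return -1
            self._data.move_to_end(key)  # mark as most recently used
            return self._data[key]

    def put(self, key: int, value: int) -> None:
        with self._lock:
            if key in self._data:
                self._data.move_to_end(key)
            self._data[key] = value
            if len(self._data) > self.capacity:
                self._data.popitem(last=False)  # evict LRU entry
```

A single coarse lock is the simplest correct option; finer-grained schemes exist but complicate the invariant that the map and the recency order stay in sync.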

Instruction Following

Weight 10%
100

All instructions were followed: class named LRUCache provided, required methods implemented with correct signatures, demonstration included, and the explanation of O(1) complexity is accurate.

```python
import collections


class LRUCache:
    """
    An implementation of a Least Recently Used (LRU) cache.

    This cache uses a dictionary for O(1) key lookups and a doubly
    linked list to maintain the order of usage. The dictionary maps
    keys to nodes in the linked list.
    """

    # Inner class to represent a node in the doubly linked list
    class _Node:
        def __init__(self, key=0, value=0):
            self.key = key
            self.value = value
            self.prev = None
            self.next = None

    def __init__(self, capacity: int):
        """Initializes the cache with a positive integer capacity."""
        if capacity <= 0:
            raise ValueError("Capacity must be a positive integer.")
        self.capacity = capacity
        self.cache = {}
        # Sentinel nodes for head and tail to simplify list operations
        self.head = self._Node()
        self.tail = self._Node()
        self.head.next = self.tail
        self.tail.prev = self.head

    def _remove_node(self, node: _Node):
        """Removes a node from the linked list."""
        prev_node = node.prev
        next_node = node.next
        prev_node.next = next_node
        next_node.prev = prev_node

    def _add_to_front(self, node: _Node):
        """Adds a node to the front of the linked list (most recent)."""
        node.prev = self.head
        node.next = self.head.next
        self.head.next.prev = node
        self.head.next = node

    def _move_to_front(self, node: _Node):
        """Moves an existing node to the front of the list."""
        self._remove_node(node)
        self._add_to_front(node)

    def get(self, key: int) -> int:
        """
        Return the value associated with the key if it exists, otherwise -1.
        Accessing a key marks it as most recently used.
        """
        if key in self.cache:
            node = self.cache[key]
            # Move the accessed node to the front to mark it as recently used
            self._move_to_front(node)
            return node.value
        return -1

    def put(self, key: int, value: int) -> None:
        """
        Insert or update a key-value pair. If capacity is exceeded,
        evict the least recently used key.
        """
        if key in self.cache:
            # Key exists, update its value and move it to the front
            node = self.cache[key]
            node.value = value
            self._move_to_front(node)
        else:
            # Key is new, create a new node
            new_node = self._Node(key, value)
            self.cache[key] = new_node
            self._add_to_front(new_node)
        # If cache exceeds capacity, evict the least recently used item
        if len(self.cache) > self.capacity:
            # The LRU item is the one just before the tail sentinel
            lru_node = self.tail.prev
            self._remove_node(lru_node)
            del self.cache[lru_node.key]


# --- Demonstration ---
print("Demonstrating LRUCache functionality:")
cache = LRUCache(2)
cache.put(1, 10)
cache.put(2, 20)
print(f"get(1): {cache.get(1)}")  # Expected: 10
cache.put(3, 30)                  # Evicts key 2
print(f"get(2): {cache.get(2)}")  # Expected: -1
cache.put(4, 40)                  # Evicts key 1
print(f"get(1): {cache.get(1)}")  # Expected: -1
print(f"get(3): {cache.get(3)}")  # Expected: 30
print(f"get(4): {cache.get(4)}")  # Expected: 40

# --- Expected Output of Demonstration ---
# Demonstrating LRUCache functionality:
# get(1): 10
# get(2): -1
# get(1): -1
# get(3): 30
# get(4): 40

# --- Explanation of O(1) Time Complexity ---
# The O(1) average time complexity for both `get` and `put` is achieved by
# combining two data structures:
# 1. A Hash Map (Dictionary): Python's `dict` provides O(1) average time
#    complexity for insertions, deletions, and lookups. Here, `self.cache`
#    maps keys directly to their corresponding nodes in the doubly linked
#    list, so any node can be located instantly by its key.
# 2. A Doubly Linked List: This structure maintains the order of usage.
#    Adding a new node to the front (most recently used) or removing the
#    last node (least recently used) are O(1) operations because they only
#    require updating a few pointers. When an item is accessed or updated,
#    its node (found in O(1) via the hash map) can be moved to the front in
#    O(1) by simply re-linking its neighbors.
# By using both structures together, all parts of `get` and `put` — looking
# up a key, updating a value, moving a node, and removing the LRU node —
# run in constant average time.
```

Result

#2

Winning Votes

1 / 3

Average Score

97

Total Score

98

Overall Comments

This is an excellent implementation of an LRU cache that fully satisfies all requirements. The code uses a well-designed combination of a hash map and doubly linked list with sentinel nodes to achieve O(1) time complexity for both get and put operations. The implementation correctly handles all edge cases including updating existing keys and evicting the least recently used item. The demonstration produces the exact expected output (10, -1, -1, 30, 40), and the explanation of time complexity is clear and accurate. The code is clean, well-documented, and production-ready.

View Score Details

Correctness

Weight 35%
98

The implementation is functionally correct and produces the exact expected output sequence. All operations (get, put, eviction) work as specified. Edge cases are properly handled: updating existing keys moves them to front, new keys are added correctly, and LRU eviction removes the correct node. The only minor observation is that the ValueError for non-positive capacity, while good practice, wasn't explicitly required by the task specification.

Completeness

Weight 20%
100

The answer is fully complete. It provides the entire LRUCache class with all required methods (__init__, get, put), includes a complete working demonstration with the exact test sequence, produces all expected outputs, and provides a thorough explanation of the O(1) time complexity. Helper methods (_remove_node, _add_to_front, _move_to_front) are well-implemented and necessary for the solution.

Code Quality

Weight 20%
95

The code is well-structured and highly readable. It includes clear docstrings for the class and all methods, uses meaningful variable names, and employs sentinel nodes elegantly to simplify boundary conditions. The inner _Node class is appropriately defined. The code follows Python conventions and is maintainable. Minor: the unused import of `collections` could be removed, though this is negligible.

Practical Value

Weight 15%
98

This implementation is immediately practical and production-ready. The doubly linked list with sentinel nodes approach is a standard, proven pattern for LRU caches. The code handles real-world scenarios correctly (updating existing keys, capacity management). The explanation demonstrates deep understanding of why this design works, making it valuable for learning and implementation reference.

Instruction Following

Weight 10%
100

The answer precisely follows all instructions. It provides a complete, runnable Python class named LRUCache with correct method signatures. It uses appropriate data structures (hash map + doubly linked list) achieving O(1) complexity. It correctly handles eviction, produces the exact expected output sequence (10, -1, -1, 30, 40), and includes a clear, accurate explanation of the O(1) time complexity achievement.

Total Score

98

Overall Comments

The provided LRUCache implementation is excellent. It correctly implements the LRU logic using a dictionary for O(1) lookups and a custom doubly linked list for O(1) order management. The code is well-structured, includes good docstrings, and robustly handles edge cases like capacity validation and updating existing keys. The demonstration perfectly matches the expected output, and the explanation of O(1) time complexity is clear and accurate. The only minor improvement could be removing the unused 'collections' import, but this does not detract from the overall high quality.

View Score Details

Correctness

Weight 35%
98

The LRUCache implementation is functionally correct. It accurately handles `get` and `put` operations, including updating existing keys, evicting the least recently used item when capacity is exceeded, and returning -1 for non-existent keys. The use of a dictionary and a custom doubly linked list correctly achieves O(1) average time complexity for both operations. The demonstration output matches the expected output perfectly.

Completeness

Weight 20%
100

The answer is fully complete. It provides the entire `LRUCache` class implementation, including the `__init__`, `get`, and `put` methods. A comprehensive demonstration of its functionality with the specified sequence of operations is included, along with a clear and accurate explanation of the O(1) time complexity.

Code Quality

Weight 20%
95

The code quality is very high. The `LRUCache` class is well-structured with an internal `_Node` class and helper methods for linked list operations (`_remove_node`, `_add_to_front`, `_move_to_front`), which enhance readability and maintainability. Docstrings are present for the class and main methods. The use of sentinel head and tail nodes for the doubly linked list simplifies list manipulation and improves robustness. Capacity validation is included. The only very minor point is an unused import (`collections`).

Practical Value

Weight 15%
97

The implemented LRU cache is highly practical and efficient. It uses the standard and most performant combination of a hash map and a doubly linked list, ensuring O(1) operations. The design is robust and would be suitable for real-world caching scenarios where performance is critical. It correctly handles various operational scenarios, making it reliable.

Instruction Following

Weight 10%
100

All instructions in the prompt were followed meticulously. The class is named `LRUCache`, and the `__init__`, `get`, and `put` methods have the correct signatures and behavior. The capacity is handled as a positive integer, O(1) complexity is achieved and explained, the eviction logic is correct, and the demonstration sequence and its output are exactly as specified.

Judge Model: OpenAI GPT-5 mini

Total Score

96

Overall Comments

A correct, well-structured LRUCache implementation using a hash map plus a doubly linked list. The class implements get and put in O(1) average time, includes a demonstration that produces the expected results, and explains the complexity clearly. Minor issues: an unused import (collections) and small stylistic nitpicks, but these do not affect correctness or performance.

View Score Details

Correctness

Weight 35%
100

Implementation is functionally correct: get and put behave as specified, eviction removes the least recently used item, updating an existing key works, and the demonstration produces the expected outputs (10, -1, -1, 30, 40). Edge cases such as capacity enforcement are handled.

Completeness

Weight 20%
95

Provides a complete, runnable LRUCache class with __init__, get, and put, helper methods, a demonstration of the required sequence, and an explanation of complexity. Slight deductions only for including an unnecessary import and not showing raw output lines exactly as in the prompt (labels are added), but functionality is complete.

Code Quality

Weight 20%
90

Code is clean, readable, and well-documented with helper methods and sentinel nodes to simplify list operations. Minor issues: an unused 'collections' import and minor type-hinting style choices for the inner class; otherwise good structure and naming.

Practical Value

Weight 15%
90

The implementation is practical and efficient for real use: O(1) average operations, clear memory management, and straightforward API. The ValueError on non-positive capacity is sensible. No performance pitfalls in the core design.

Instruction Following

Weight 10%
100

Follows instructions precisely: class named LRUCache, correct method signatures, O(1) average complexity achieved with dictionary + doubly linked list, correct eviction behavior, demonstration included, and a concise explanation of why operations are O(1).

Comparison Summary

Final rank order is determined by judge-wise rank aggregation (average rank + Borda tie-break). Average score is shown for reference.
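The page does not show the aggregation procedure itself; one plausible reading of "average rank + Borda tie-break" can be sketched as follows (the function name and the exact tie-break convention are assumptions, not taken from the site):

```python
def aggregate_ranks(judge_rankings):
    """Order answers by average rank across judges (lower is better),
    breaking ties by Borda count: each judge awards an answer n - rank
    points, where n is the number of answers (higher is better).

    judge_rankings: list of per-judge rankings, each a list of answer
    ids ordered best-first. Returns answer ids ordered best-first.
    """
    answers = judge_rankings[0]
    n = len(answers)
    avg_rank = {}
    borda = {}
    for a in answers:
        # 1-based rank assigned by each judge
        ranks = [ranking.index(a) + 1 for ranking in judge_rankings]
        avg_rank[a] = sum(ranks) / len(ranks)
        borda[a] = sum(n - r for r in ranks)
    # Sort by average rank ascending, then Borda score descending
    return sorted(answers, key=lambda a: (avg_rank[a], -borda[a]))


# Example: three judges ranking answers "A" and "B";
# two prefer A, one prefers B, so A wins on average rank.
print(aggregate_ranks([["A", "B"], ["A", "B"], ["B", "A"]]))  # → ['A', 'B']
```

With two answers and three judges, as here, this reduces to majority preference, which matches the 2/3 vs. 1/3 winning-vote split shown above.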

Judges: 3

Winning Votes

2 / 3

Average Score

98

Winning Votes

1 / 3

Average Score

97