Orivel

Implement a Least Recently Used (LRU) Cache

Compare model answers for this Coding benchmark and review scores, judging comments, and related examples.



Contents

Task Overview

Benchmark Genres

Coding

Task Creator Model

Answering Models

Judge Models

Task Prompt

Implement an LRU (Least Recently Used) Cache class in Python that supports the following operations:

1. `LRUCache(capacity)` — Initialize the cache with a positive integer capacity.
2. `get(key)` — Return the value associated with the key if it exists in the cache, otherwise return -1. Accessing a key marks it as recently used.
3. `put(key, value)` — Insert or update the key-value pair. If the cache exceeds its capacity after insertion, evict the least recently used key.

Both `get` and `put` must run in O(1) average time complexity. Provide a complete, self-contained Python implementation. Do not use `functools.lru_cache` or `collections.OrderedDict`. You should implement the underlying data structure yourself (e.g., using a doubly linked list and a hash map).

After your class definition, include a short demonstration that creates an LRUCache with capacity 2 and performs the following operations, printing the result of each `get`:

```
cache = LRUCache(2)
cache.put(1, 10)
cache.put(2, 20)
print(cache.get(1))  # Expected: 10
cache.put(3, 30)     # Evicts key 2
print(cache.get(2))  # Expected: -1
cache.put(4, 40)     # Evicts key 1
print(cache.get(1))  # Expected: -1
print(cache.get(3))  # Expected: 30
print(cache.get(4))  # Expected: 40
```

Judging Policy

A good answer must provide a complete, runnable Python implementation of an LRU Cache that meets all of the following criteria:

1. It does not use OrderedDict or functools.lru_cache.
2. It achieves O(1) average time for both get and put, typically via a hash map combined with a doubly linked list.
3. The class correctly handles capacity limits and evicts the least recently used entry.
4. The demonstration code is included and would produce the expected output (10, -1, -1, 30, 40).
5. The code is clean, well-structured, and reasonably commented or self-explanatory.

Partial credit may be given for solutions that are mostly correct but have minor issues. Solutions that use OrderedDict or fail to achieve O(1) complexity should be rated lower. Solutions with bugs that produce incorrect output for the demonstration should be rated significantly lower.
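Criterion (4) above can be checked mechanically. The sketch below is a hypothetical grading harness, not part of the benchmark: it captures the demonstration's stdout and compares it with the expected sequence. The inline `LRUCache` is only a compact reference oracle built on Python 3.7+ dict insertion order, so it is deliberately *not* a conforming submission (the task requires a hand-rolled linked list); in a real harness it would be replaced by the answer under test.

```
import contextlib
import io


class LRUCache:
    """Reference oracle only: relies on dict insertion order (Python 3.7+)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = {}

    def get(self, key):
        if key not in self.data:
            return -1
        # Re-inserting moves the key to the most-recent position.
        self.data[key] = self.data.pop(key)
        return self.data[key]

    def put(self, key, value):
        self.data.pop(key, None)
        self.data[key] = value
        if len(self.data) > self.capacity:
            # The first key in insertion order is the least recently used.
            self.data.pop(next(iter(self.data)))


def run_demo():
    """Run the required demonstration and return the printed lines."""
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        cache = LRUCache(2)
        cache.put(1, 10)
        cache.put(2, 20)
        print(cache.get(1))
        cache.put(3, 30)
        print(cache.get(2))
        cache.put(4, 40)
        print(cache.get(1))
        print(cache.get(3))
        print(cache.get(4))
    return buf.getvalue().split()


print(run_demo())  # ['10', '-1', '-1', '30', '40']
```

Capturing stdout rather than inspecting internal state keeps the check faithful to the judging criterion, which is phrased in terms of printed output.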

Answers

```
class _Node:
    __slots__ = ("key", "value", "prev", "next")

    def __init__(self, key=None, value=None):
        self.key = key
        self.value = value
        self.prev = None
        self.next = None


class LRUCache:
    def __init__(self, capacity: int):
        if not isinstance(capacity, int) or capacity <= 0:
            raise ValueError("capacity must be a positive integer")
        self.capacity = capacity
        self.map = {}  # key -> _Node
        # Dummy head/tail to avoid edge cases.
        # Most-recent is right after head. Least-recent is right before tail.
        self.head = _Node()
        self.tail = _Node()
        self.head.next = self.tail
        self.tail.prev = self.head

    def _add_to_front(self, node: _Node) -> None:
        node.prev = self.head
        node.next = self.head.next
        self.head.next.prev = node
        self.head.next = node

    def _remove(self, node: _Node) -> None:
        node.prev.next = node.next
        node.next.prev = node.prev
        node.prev = None
        node.next = None

    def _move_to_front(self, node: _Node) -> None:
        self._remove(node)
        self._add_to_front(node)

    def _evict_lru(self) -> None:
        lru = self.tail.prev
        if lru is self.head:
            return
        self._remove(lru)
        del self.map[lru.key]

    def get(self, key: int) -> int:
        node = self.map.get(key)
        if node is None:
            return -1
        self._move_to_front(node)
        return node.value

    def put(self, key: int, value: int) -> None:
        node = self.map.get(key)
        if node is not None:
            node.value = value
            self._move_to_front(node)
            return
        new_node = _Node(key, value)
        self.map[key] = new_node
        self._add_to_front(new_node)
        if len(self.map) > self.capacity:
            self._evict_lru()


# Demonstration
cache = LRUCache(2)
cache.put(1, 10)
cache.put(2, 20)
print(cache.get(1))  # Expected: 10
cache.put(3, 30)     # Evicts key 2
print(cache.get(2))  # Expected: -1
cache.put(4, 40)     # Evicts key 1
print(cache.get(1))  # Expected: -1
print(cache.get(3))  # Expected: 30
print(cache.get(4))  # Expected: 40
```

Result

#1 | Winner

Winning Votes

3 / 3

Average Score

99

Total Score

100

Overall Comments

The submission is outstanding. It provides a correct, efficient, and high-quality implementation of an LRU cache from scratch. The code uses the classic and optimal approach of a hash map combined with a doubly linked list. The implementation is robust, featuring dummy head/tail nodes to simplify edge cases, and demonstrates excellent coding practices like helper methods and type hints. It fully adheres to all task requirements and constraints.


Correctness

Weight 35%
100

The implementation is perfectly correct. It successfully uses a hash map for O(1) lookups and a doubly linked list to maintain the order of use. The logic for `get`, `put`, and eviction works flawlessly, and the provided demonstration code produces the exact expected output.

Completeness

Weight 20%
100

The solution is fully complete. It provides a self-contained `LRUCache` class, a private `_Node` helper class, and the exact demonstration code requested in the prompt. All required functionalities are implemented.

Code Quality

Weight 20%
100

The code quality is excellent. It is well-structured with clear, single-responsibility helper methods (_add_to_front, _remove, etc.) that enhance readability. The use of dummy head/tail nodes is a sophisticated technique that simplifies the linked list logic. The inclusion of type hints, `__slots__`, and input validation for capacity demonstrates strong, professional coding practices.

Practical Value

Weight 15%
100

The solution has high practical value. It provides a textbook-perfect, efficient implementation of a fundamental and widely-used data structure. This code could be used in real-world applications and serves as an excellent reference for how to build an LRU cache from first principles.

Instruction Following

Weight 10%
100

The solution perfectly adheres to all instructions. It correctly implements the LRU cache from scratch without using forbidden libraries like `collections.OrderedDict`. It achieves the required O(1) average time complexity for its core operations and includes the mandatory demonstration.

Total Score

99

Overall Comments

This is an excellent implementation of an LRU Cache. The solution correctly uses a doubly linked list combined with a hash map to achieve O(1) average time complexity for both get and put operations. The code avoids forbidden constructs (OrderedDict, functools.lru_cache), handles all edge cases properly, and produces the exact expected output. The use of dummy head/tail sentinel nodes is a clean approach that eliminates edge cases. The code is well-structured, uses __slots__ for memory efficiency, includes input validation, and is clearly commented. The demonstration matches the required output exactly.


Correctness

Weight 35%
100

The implementation is fully correct. The doubly linked list with dummy sentinels correctly tracks recency: `get` moves accessed nodes to the front, `put` handles both the insert and update cases, and eviction removes the `tail.prev` node (the LRU). All demonstration outputs match the expected values (10, -1, -1, 30, 40).

Completeness

Weight 20%
100

The solution is fully complete: it includes the LRUCache class with all required methods (constructor, get, put), helper private methods, the _Node class, input validation, and the full demonstration code as specified in the task.

Code Quality

Weight 20%
100

Code quality is excellent. The use of __slots__ on _Node shows attention to memory efficiency. Private helper methods (_add_to_front, _remove, _move_to_front, _evict_lru) are well-named and single-purpose. Type hints are used. Comments explain the sentinel node design. The code is clean and readable.

Practical Value

Weight 15%
95

The implementation is highly practical. It includes input validation with a meaningful error message, uses __slots__ for performance, and the design is extensible. The only minor gap is that it doesn't handle thread safety, but that was not required by the task.
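On the thread-safety gap the judge mentions: one common remedy, omitted from the answer because the task did not require it, is to serialize access behind a single lock. The sketch below is hypothetical (`ThreadSafeLRU` and `_DictStub` are illustrative names, not code from either answer); the wrapper delegates to any object exposing `get`/`put`, such as the answer's `LRUCache`.

```
import threading


class ThreadSafeLRU:
    """Serialize access to an underlying cache that exposes get/put.

    The wrapper adds no LRU logic of its own; `inner` is assumed to be
    an LRUCache-style object such as the answer's class.
    """

    def __init__(self, inner):
        self._inner = inner
        self._lock = threading.Lock()

    def get(self, key):
        with self._lock:
            return self._inner.get(key)

    def put(self, key, value):
        with self._lock:
            self._inner.put(key, value)


class _DictStub:
    """Minimal stand-in for a real LRUCache, used only in this demo."""

    def __init__(self):
        self._d = {}

    def get(self, key):
        return self._d.get(key, -1)

    def put(self, key, value):
        self._d[key] = value


cache = ThreadSafeLRU(_DictStub())
cache.put(1, 10)
print(cache.get(1))   # 10
print(cache.get(99))  # -1
```

A single coarse lock keeps both operations O(1) and avoids the deadlock risks of finer-grained locking, at the cost of serializing all access.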

Instruction Following

Weight 10%
100

All instructions are followed precisely: no OrderedDict or functools.lru_cache used, O(1) complexity achieved via hash map + doubly linked list, the underlying data structure is implemented manually, and the exact demonstration code from the task is included with correct expected output comments.

Judge Model: OpenAI GPT-5 mini

Total Score

97

Overall Comments

This answer provides a correct, efficient, and self-contained Python implementation of an LRU cache using a hash map and doubly linked list. It meets the required O(1) operations, handles capacity and eviction correctly, and includes the requested demonstration which would print the expected outputs. The code is clean and readable; minor issues are limited to small style points (e.g., using the name 'map' shadows a builtin and there are no docstrings), but these do not affect correctness or performance.


Correctness

Weight 35%
100

Implements LRU using a hashmap + doubly linked list correctly. get and put operate in O(1) average time, eviction logic is correct, and the provided demonstration will produce the expected outputs.

Completeness

Weight 20%
100

All required features are present: constructor with capacity validation, get and put semantics, LRU eviction when capacity exceeded, no use of OrderedDict or functools.lru_cache, and the requested demonstration is included.

Code Quality

Weight 20%
90

Code is well-structured, clear, and uses dummy head/tail to simplify list operations. Methods are small and focused. Minor style nits: the attribute name 'map' shadows the builtin, and there are no docstrings, but these do not affect correctness.

Practical Value

Weight 15%
90

Implementation is practical and ready for use in real code: efficient, safe (validates capacity), and memory-efficient (uses __slots__). Would benefit from small additions like docstrings or unit tests, but functionally complete.

Instruction Following

Weight 10%
100

Follows instructions exactly: does not use prohibited libraries, implements required data structures manually, ensures O(1) operations, and includes the exact demonstration with expected printed results.

```
class Node:
    def __init__(self, key=0, value=0):
        self.key = key
        self.value = value
        self.prev = None
        self.next = None


class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.cache = {}
        # Sentinel head and tail for the doubly linked list
        self.head = Node()
        self.tail = Node()
        self.head.next = self.tail
        self.tail.prev = self.head

    def _add_node(self, node):
        # Add a node right after the head (making it MRU)
        node.prev = self.head
        node.next = self.head.next
        self.head.next.prev = node
        self.head.next = node

    def _remove_node(self, node):
        # Remove a node from the linked list
        prev_node = node.prev
        next_node = node.next
        prev_node.next = next_node
        next_node.prev = prev_node

    def _move_to_front(self, node):
        # Remove a node and add it back to the front (MRU)
        self._remove_node(node)
        self._add_node(node)

    def _pop_tail(self):
        # Remove and return the LRU node (the one just before the tail sentinel)
        if self.tail.prev == self.head:
            return None  # List is empty
        lru_node = self.tail.prev
        self._remove_node(lru_node)
        return lru_node

    def get(self, key: int) -> int:
        if key not in self.cache:
            return -1
        node = self.cache[key]
        self._move_to_front(node)  # Mark as recently used
        return node.value

    def put(self, key: int, value: int) -> None:
        if key in self.cache:
            node = self.cache[key]
            node.value = value         # Update value
            self._move_to_front(node)  # Mark as recently used
        else:
            new_node = Node(key, value)
            self.cache[key] = new_node
            self._add_node(new_node)   # Add new node to front
            if len(self.cache) > self.capacity:
                # Evict LRU item
                lru_node = self._pop_tail()
                if lru_node:
                    del self.cache[lru_node.key]


# Demonstration
cache = LRUCache(2)
cache.put(1, 10)
cache.put(2, 20)
print(cache.get(1))  # Expected: 10
cache.put(3, 30)     # Evicts key 2
print(cache.get(2))  # Expected: -1
cache.put(4, 40)     # Evicts key 1
print(cache.get(1))  # Expected: -1
print(cache.get(3))  # Expected: 30
print(cache.get(4))  # Expected: 40
```

Result

#2

Winning Votes

0 / 3

Average Score

98

Total Score

100

Overall Comments

This is an excellent solution that provides a classic, correct, and efficient implementation of an LRU Cache. The code is very well-structured, using a helper Node class and private methods to encapsulate the doubly linked list logic, which enhances readability and maintainability. It correctly uses a hash map for O(1) lookups and a doubly linked list for O(1) updates of recency, fully meeting the task's performance requirements. The submission follows all instructions perfectly, including the prohibition of standard library helpers and the inclusion of the required demonstration, which runs correctly.


Correctness

Weight 35%
100

The implementation is fully correct. The logic for get, put, and eviction works as expected. The use of a hash map for lookups and a doubly linked list with sentinel nodes for recency tracking is the standard and correct approach. The provided demonstration code produces the exact expected output.

Completeness

Weight 20%
100

The submission is complete. It provides a self-contained solution with both the `Node` and `LRUCache` class definitions, and it includes the specific demonstration block requested in the prompt. All required functionalities are implemented.

Code Quality

Weight 20%
100

The code quality is excellent. It is clean, well-organized, and easy to follow. The use of private helper methods for linked list operations is a great design choice. Variable names are clear, and the use of sentinel head/tail nodes is a robust technique that simplifies the list manipulation logic by avoiding null checks.

Practical Value

Weight 15%
100

The solution has high practical value as it is the standard, most efficient implementation of an LRU cache. It correctly achieves the required O(1) average time complexity for both `get` and `put` operations. This is precisely the solution one would expect in a technical interview or real-world application where such a cache is needed.

Instruction Following

Weight 10%
100

The submission follows all instructions perfectly. It implements the LRU cache from scratch without using the forbidden `functools.lru_cache` or `collections.OrderedDict`. It includes the specified class and method signatures and provides the exact demonstration code as requested.

Total Score

98

Overall Comments

This is an excellent implementation of an LRU Cache. The solution correctly uses a doubly linked list with sentinel head/tail nodes combined with a hash map to achieve O(1) average time complexity for both get and put operations. The code does not use OrderedDict or functools.lru_cache as required. All helper methods are well-named and logically organized. The demonstration code is complete and would produce exactly the expected output (10, -1, -1, 30, 40). The code is clean, well-commented, and easy to follow. Minor note: the null check in _pop_tail is slightly redundant since put only calls it when len(self.cache) > self.capacity, meaning the list is never empty at that point, but it is a harmless defensive check.


Correctness

Weight 35%
100

The implementation is fully correct. The doubly linked list with sentinel nodes and hash map correctly implements O(1) get and put. The eviction logic properly removes the LRU item (tail.prev) and deletes it from the hash map. The demonstration would produce exactly 10, -1, -1, 30, 40 as expected.

Completeness

Weight 20%
100

The solution is fully complete. It includes the Node class, the LRUCache class with all required methods (constructor, get, put), all necessary helper methods, and the full demonstration code matching the task specification exactly.

Code Quality

Weight 20%
95

The code is clean, well-structured, and appropriately commented. Helper methods are well-named (_add_node, _remove_node, _move_to_front, _pop_tail) and each has a brief docstring comment. The only very minor issue is the slightly redundant None check in _pop_tail, but this is a harmless defensive measure and does not detract significantly.

Practical Value

Weight 15%
95

The implementation is highly practical and production-ready. It correctly handles all edge cases including updating existing keys, eviction on capacity overflow, and accessing non-existent keys. The use of sentinel nodes simplifies boundary conditions. The O(1) complexity makes it suitable for real-world use.

Instruction Following

Weight 10%
100

All instructions are followed precisely. OrderedDict and functools.lru_cache are not used. The underlying data structure is implemented manually with a doubly linked list and hash map. The demonstration code matches exactly what was specified in the task, including the comments about expected values and eviction.

Judge Model: OpenAI GPT-5 mini

Total Score

97

Overall Comments

Solid and correct LRU cache implementation using a doubly linked list and a hashmap. get and put run in O(1) and the provided demonstration will produce the expected output. Minor shortcomings: no explicit validation for non-positive capacity and small stylistic/type-hint improvements could be made.


Correctness

Weight 35%
100

The implementation correctly maintains LRU order with sentinel head/tail, moves accessed nodes to the front, updates existing keys, inserts new keys, and evicts the least recently used item when capacity is exceeded. The demonstration yields the expected results.

Completeness

Weight 20%
100

All required features are implemented: LRUCache class with get and put, manual doubly linked list plus hashmap, correct eviction policy, and a demonstration matching the prompt. No forbidden libraries are used.

Code Quality

Weight 20%
90

Code is clear, well-structured, and uses helper methods for list operations. Variable names and comments are descriptive. Minor improvements: validate capacity in __init__, add type hints for internal methods, and consider docstrings for public methods.

Practical Value

Weight 15%
90

Efficient and practical O(1) implementation that can be used in real code. Handles typical cases well. Missing explicit handling/validation for zero or negative capacity is a small practical shortcoming.
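The missing capacity validation noted here can be closed with a short guard. The snippet below is a hypothetical sketch (`validate_capacity` is an illustrative helper name, not code from the answer); it also rejects `bool` explicitly, since `bool` is a subclass of `int` in Python and `isinstance(True, int)` is true.

```
def validate_capacity(capacity):
    # Hypothetical guard for LRUCache.__init__; rejects bools explicitly
    # because isinstance(True, int) is True in Python.
    if not isinstance(capacity, int) or isinstance(capacity, bool) or capacity <= 0:
        raise ValueError("capacity must be a positive integer")
    return capacity


print(validate_capacity(2))  # 2
for bad in (0, -1, 2.0, True, "2"):
    try:
        validate_capacity(bad)
    except ValueError:
        print(f"rejected {bad!r}")
```

Failing fast in the constructor turns a silent misconfiguration (a cache that can never hold anything) into an immediate, diagnosable error.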

Instruction Following

Weight 10%
100

Follows instructions precisely: does not use OrderedDict or functools.lru_cache, uses a manual linked list with a hashmap for O(1) ops, and includes the specified demonstration producing the expected outputs.

Comparison Summary

Final rank order is determined by judge-wise rank aggregation (average rank + Borda tie-break). Average score is shown for reference.

Judges: 3

Winning Votes

3 / 3

Average Score

99

Winning Votes

0 / 3

Average Score

98