
LRU Cache - Everything You Need To Know

Updated: Jun 3, 2022

In our earlier article we gave an overview of caching and a brief introduction to various cache eviction policies. In this article we will take a detailed look at the LRU, or Least Recently Used, cache eviction policy.


Introduction


LRU cache, or least recently used cache, is one of the most popular cache eviction strategies. It is also a commonly asked interview question.


An LRU cache keeps track of the order in which its items were accessed: items are stored in the order in which they were requested, so the most recently used item sits at the top of the cache and the least recently used item sits at the end. When the cache is full, the item that hasn't been used for the longest time (the least recently used item) is evicted from the cache. An LRU cache also provides quick, constant time access to its items.


The idea behind an LRU cache is simple: if the cache has a capacity of n items, it keeps the n most recently used items. When the cache is full and a new item needs to be added, it makes space for the new item by evicting the least recently used item.


Example


Let's understand the LRU cache with an example. Consider an application where our users can read from a collection of books. For ease of understanding, let us assume that our application has 4 books in the database: book1, book2, book3 and book4, and that our cache limit is 3 (at most 3 items can be stored in the cache).


Initially the cache is empty since no requests have been made for any book.

Let's look at how the LRU cache stores and evicts items as we try to access data from it.


To begin with, let's say a user requests book1 from our application. The application checks whether the data is available in the cache; since book1 is not in the cache, it fetches it from the database, and an entry for "book1" is now stored in the cache as shown below.

Next we get a request for book2. Again, book2 is stored in the cache. Since book2 is now the most recently used item, it is placed at the top of the cache and book1 is pushed down, as shown in the diagram below.


Then we get a third request, this time for book3, so book3 is now placed at the top of the cache.

Now imagine we get a request for book2 again. Since book2 is already in the cache, our application responds from the cache instead of the database. book2 is moved to the top of the cache, since it is now the most recently used item, and book3 is bumped down. This is shown in the diagram below:

Now let's say we get a fifth request, for book4. Since the cache is full, it won't be able to store book4 unless it evicts an existing item. As per the LRU policy, the least recently used item, which is the one at the bottom of the cache, should be evicted to make space for the new item. So we evict book1 from the cache and put book4 at the top.

This is how an LRU cache works; it repeats the above steps for each subsequent request. Now that we know how an LRU cache works, let's see how it can be implemented.


Implementation


We need to implement two methods (functions): one for putting items into the cache and one for getting items from the cache. Let's call these methods Put() and Get() respectively.


Caches in general are key-value based: every item in the cache is stored against a key, and when the item is needed we retrieve it from the cache using the corresponding key. So Put() takes two arguments, the "key" of the item to be stored and the actual "value" of the item. Get() takes one argument, the "key" of the item to be retrieved, and returns the item corresponding to that key.


So, in general our Put() and Get() methods will look like this:


Put(key, value)

Get(key) return value


We need to consider a few things while implementing an LRU cache. The data structure (or structures) that we choose should have the following properties:


  1. We should be able to store a list of items in cache.

  2. We should be able to keep track of items in the cache. Items should be stored in cache in the order in which they were accessed or requested.

  3. When an item is accessed from cache (new item or item already in cache) we should be able to efficiently move it to the front of cache.

  4. When an item is requested from cache we should be able to retrieve that item in constant time, O(1).

  5. We should be able to efficiently evict the least recently used item from cache when the cache is full.


From the list of requirements above, one is fast access to items in the cache. The data structure that comes to mind when we think of constant time access is a hashmap (hashtable). So we will store our cache items in a hashmap so that they can be retrieved in O(1) time.


But remember, there is also a requirement that our cache maintain the order in which items were accessed/requested. A hashmap cannot keep track of the order of items, so we will have to use another data structure for that.

What data structure can we use to maintain the order of items? We could use several: an array, a queue, a singly or doubly linked list, etc. The important thing is that whatever data structure we choose should allow us to perform write, update and delete (evict) operations efficiently, to ensure that our cache is performant.


A doubly linked list can perform update, insert, move-to-front and delete operations in constant time, provided we have a reference to the node on which the operation needs to be performed. We can get O(1) access to the doubly linked list nodes by storing the nodes themselves in a hashmap. So in our hashmap, instead of storing the cache item value directly, we store a reference to the doubly linked list node, which in turn holds the cache item value.


In summary, we will use a combination of a doubly linked list and a hashmap to implement the LRU cache. The hashmap enables fast access to items in the cache, and the doubly linked list keeps track of the order in which the items were accessed. The hashmap key is the cache item key, and the value is a reference to the doubly linked list node, which in turn holds the cache item value.


Note: The cache item "key" can be any unique value. For example, if our requirement is to store user details in the cache, the item key can be any unique value such as an email ID, user UUID or user name.


One last thing we need to consider is what happens if multiple users/threads try to write the same item into the cache in parallel, as this can lead to data corruption. We can overcome this using mutex locks. However, introducing a global mutex lock in Put() can prove expensive, because it would block all users from writing into the cache even if they are writing different items. For this reason we introduce a lightweight key based mutex lock, which blocks only when two concurrent threads try to write to the same key at the same time.




If we consider our previous book example, the LRU cache internally goes through the following states for the same sequence of requests. The hashmap maps each book's key to its linked list node; the doubly linked list, read from front to back, records the items from most to least recently used. (The order of entries inside the hashmap itself does not matter; order is maintained only by the doubly linked list.)


Request for book1: the list contains just book1, and the map has one entry pointing to its node.


Request for book2: book2 is added at the front, so the list is book2 -> book1.


Request for book3: the list is book3 -> book2 -> book1.


Request for book2: book2 is already cached, so its node is moved to the front and the list becomes book2 -> book3 -> book1.


Request for book4: the cache is full, so book1 (at the back of the list) is evicted and the list becomes book4 -> book2 -> book3.


Code


Language: Go


Let's define the structures needed for implementing the LRU cache.
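A minimal sketch of what these structures could look like is shown below. It uses Go's standard container/list package for the doubly linked list; the type and field names (LRUCache, entry, KeyMutex and so on) are illustrative, not fixed. The small internal mutex guards the map and the list themselves, since Go maps and container/list are not safe for concurrent use, while the key based locks discussed above handle contention on individual items.

package lru

import (
	"container/list"
	"sync"
)

// entry is the payload stored inside every doubly linked list node.
type entry struct {
	key   string
	value string
}

// LRUCache combines a hashmap (constant time lookup by key) with a
// doubly linked list (tracks the order in which items were used).
type LRUCache struct {
	capacity int
	mu       sync.Mutex               // guards items and order
	items    map[string]*list.Element // cache key -> linked list node
	order    *list.List               // front = most recently used, back = least recently used
	locks    *KeyMutex                // per-key locks, implemented further below
}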



Next we will create a function that initializes and returns a new LRU cache object.
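A constructor for this cache could be as simple as the following sketch, which builds on the types defined above:

// NewLRUCache initializes an empty LRU cache with the given capacity.
func NewLRUCache(capacity int) *LRUCache {
	return &LRUCache{
		capacity: capacity,
		items:    make(map[string]*list.Element, capacity),
		order:    list.New(),
		locks:    NewKeyMutex(),
	}
}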



Below is the implementation of the Put method, for adding items to the cache.
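One possible Put, following the steps described earlier: take the per-key lock, update the item if it already exists, otherwise evict the least recently used item when the cache is full and insert the new item at the front. The map and list bookkeeping is pulled into a small insert helper so that Get can reuse it:

// Put inserts or updates an item and marks it as most recently used.
func (c *LRUCache) Put(key, value string) {
	unlock := c.locks.Lock(key) // serialize writers of the same key
	defer unlock()
	c.insert(key, value)
}

// insert does the actual map and list bookkeeping.
func (c *LRUCache) insert(key, value string) {
	c.mu.Lock()
	defer c.mu.Unlock()

	// If the key is already cached, update its value and move it to the front.
	if node, ok := c.items[key]; ok {
		node.Value.(*entry).value = value
		c.order.MoveToFront(node)
		return
	}

	// Cache is full: evict the least recently used item (back of the list).
	if c.order.Len() >= c.capacity {
		if oldest := c.order.Back(); oldest != nil {
			c.order.Remove(oldest)
			delete(c.items, oldest.Value.(*entry).key)
		}
	}

	// New items go to the front, i.e. the most recently used position.
	c.items[key] = c.order.PushFront(&entry{key: key, value: value})
}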



Below is the implementation of the Get method, for retrieving items from the cache.
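Get can be sketched as follows. On a hit the item is moved to the front of the list and returned; on a miss the value is fetched from the database (simulated by the fetchFromDB helper shown next), cached and then returned. Loading from the database inside Get is one possible design choice; a Get that simply reports a miss would work just as well.

// Get returns the value for key, loading it from the database on a cache miss.
func (c *LRUCache) Get(key string) string {
	// The per-key lock means concurrent requests for the same missing key
	// trigger only one database fetch, while requests for other keys proceed.
	unlock := c.locks.Lock(key)
	defer unlock()

	c.mu.Lock()
	if node, ok := c.items[key]; ok {
		// Cache hit: mark the item as most recently used and return it.
		c.order.MoveToFront(node)
		value := node.Value.(*entry).value
		c.mu.Unlock()
		return value
	}
	c.mu.Unlock()

	// Cache miss: fetch from the (slow) database and cache the result.
	value := fetchFromDB(key)
	c.insert(key, value)
	return value
}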



A dummy function that simulates fetching values from the database:
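For the sketch above, the database call can be faked with a tiny helper. fetchFromDB is a made-up stand-in, not a real driver call; an actual implementation would run a query here:

// fetchFromDB simulates fetching a value from the database.
func fetchFromDB(key string) string {
	// A real implementation would query the database; we just derive a value.
	return "value for " + key
}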



Key based mutex locking implementation:
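A simple key based mutex can be built from a map of per-key mutexes, guarded by one small internal lock. The sketch below never removes locks for old keys, which is fine for a small, fixed key space like our four books but would need reference counting or cleanup in production:

// KeyMutex hands out one mutex per key, so operations on different keys
// never block each other.
type KeyMutex struct {
	mu    sync.Mutex
	locks map[string]*sync.Mutex
}

func NewKeyMutex() *KeyMutex {
	return &KeyMutex{locks: make(map[string]*sync.Mutex)}
}

// Lock acquires the mutex for key and returns the function that releases it.
func (k *KeyMutex) Lock(key string) func() {
	k.mu.Lock()
	m, ok := k.locks[key]
	if !ok {
		m = &sync.Mutex{}
		k.locks[key] = m
	}
	k.mu.Unlock()

	m.Lock()
	return m.Unlock
}

With these pieces in place, the book sequence from the example can be replayed as follows; when book4 arrives the cache is full and book1, the least recently used item, is evicted:

cache := NewLRUCache(3)
cache.Get("book1") // miss: fetched from DB and cached
cache.Get("book2") // miss
cache.Get("book3") // miss
cache.Get("book2") // hit: book2 moves to the front
cache.Get("book4") // miss: cache is full, book1 is evicted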



Language: Java


Coming soon...


Language: Python


Coming soon...


Time Complexity


Both the Get and Put methods perform only constant time operations, so the overall time complexity of both Get() and Put() is O(1).


That is all for this article, thank you for taking the time to read it. If you have any questions or doubts, please let us know in the comments section below; we will be happy to answer them.


If you found this article useful, do not forget to subscribe to our website; your support motivates us to bring out more such articles in the future (scroll down to the bottom of the page to find the subscription form).


You can explore more such amazing articles from code recipe in our blogs section.






