

  • 242. Valid Anagram - LeetCode Fastest Solution

    Hello Code Recipian! Welcome back to another article on LeetCode problem solutions. In our previous article we discussed the solution for LeetCode 112. Path Sum. Today we will be solving LeetCode problem 242. Valid Anagram. Let's check out the problem statement.

    Problem Statement: Valid Anagram

    Given two strings s and t, return true if t is a valid anagram of s. Otherwise return false.

    Note: An anagram is a word or phrase made by rearranging the letters of another word or phrase.

    Example 1:
    Input: s = "anagram", t = "nagaram"
    Output: true

    Example 2:
    Input: s = "listen", t = "silent"
    Output: true

    Example 3:
    Input: s = "car", t = "bar"
    Output: false

    Constraints:
    1 <= s.length, t.length <= 5 * 10^4
    s and t consist of lowercase English letters.

    Solution

    To determine whether s and t are valid anagrams, we must check two criteria:

    1. Both s and t must have the same number of characters (the length of s must equal the length of t).
    2. The frequency (count) of each character in s must be the same as that in t.

    We can use the hashmap data structure to solve this problem efficiently in linear time. The idea is to use a hashmap to count the characters in s and t, and based on these counts determine the final result. Let's see how this algorithm works in detail.

    Algorithm

    Below is a step-by-step explanation of the working of the algorithm:

    1. Preliminary validation: Before verifying anything else, confirm that the strings s and t have the same length. If the lengths of s and t are not equal, they can never be anagrams of each other, so in that case we straight away return false as the result.

    2. Initialization: If the lengths are equal, we initialize a hashmap, let's call it freqMap. freqMap maintains the count of characters in s and t. It takes a character key and an integer value. Initially this map is empty.

    3. Count characters in s: The first step is to iterate through the given string s and count its characters.
    We iterate through each character in s one by one, and in each iteration we increment the count for that character in freqMap.

    4. Decrement freqMap counts: Next, we iterate through string t. In each iteration we decrement the current character's count in freqMap. The logic behind this is that if s and t are anagrams, then the character counts in s should match the character counts in t. So, when we decrement the character counts while scanning t, every count in freqMap should come back to 0.

    5. Check that every character count is zero: Now we iterate through freqMap. In each iteration we check the current character's count:
    - If the current count is zero, continue iterating.
    - If at any point we find a count that is not zero, then s and t are not anagrams, and we immediately return false as the result.

    6. Return true as the result: If execution comes out of the loop in step 5, every count in freqMap is 0, which means s and t are anagrams. Therefore, we return true.

    Note: We can also solve this problem using a sorting approach, but the time complexity of the sorting solution would be O(n log n).

    Simulation

    Below is a pictorial depiction of the working of the algorithm.

    Complexity Analysis

    Time Complexity

    Length check: The length check is a constant-time operation, so its time complexity is O(1).
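Putting steps 1 to 6 together, here is a minimal Python sketch of the solution, for reference during the loop-by-loop analysis that follows (the article provides solutions in many languages; this version is only illustrative):

```python
def isAnagram(s: str, t: str) -> bool:
    # Step 1: strings of different lengths can never be anagrams.
    if len(s) != len(t):
        return False

    # Steps 2-3: count the characters of s in freqMap.
    freqMap = {}
    for ch in s:
        freqMap[ch] = freqMap.get(ch, 0) + 1

    # Step 4: decrement the counts while scanning t.
    for ch in t:
        freqMap[ch] = freqMap.get(ch, 0) - 1

    # Steps 5-6: every count must have come back to exactly zero.
    for count in freqMap.values():
        if count != 0:
            return False
    return True
```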
    First loop: At this point the lengths of s and t are equal (due to the length check in step 1). If n is the length of s and t, the time complexity of the first loop is O(n), since we iterate through every character in s to count it.

    Second loop: Similarly, the time complexity of the second loop is also O(n), as we iterate through every character in t.

    Third loop: In the worst case, this loop makes at most 26 iterations. This is because the input consists of only lowercase English letters (per the problem constraints), making the time complexity of this loop O(1).

    Therefore, the overall time complexity of this solution is O(1) + O(n) + O(n) + O(1) = O(n).

    Space Complexity

    This algorithm uses the hashmap freqMap to store the counts of the letters in s and t. However, since the input consists of only lowercase English letters, freqMap can hold at most 26 entries. Apart from this, we use no extra variables that affect the space complexity. Therefore, the space complexity of this solution is O(1).

    That brings us to the end of this article. We appreciate the time you have taken to read through it. If you have any questions, feel free to ask them in the comments below. We're here to help and will gladly answer your queries. If you enjoyed this article, please subscribe to our website and YouTube channel. Your support inspires us to bring out more such articles in the future. Don't forget to delve into more such fascinating articles from Code Recipe in our blogs section. There's a wealth of knowledge waiting for you there!

    Code Recipe Limited Time Offer: Get 100% discount on the Code Recipe Membership Plan. Join now and get exclusive access to premium content for free. Hurry! Offer only available for a limited time - Join now.

    Follow us: YouTube, Facebook, Twitter, LinkedIn, Tumblr, Instagram.

  • URL Shortener System Design (Tiny URL): A Complete Guide

    In our previous article we studied Netflix system design. In this article we will discuss how to design a high-performance tiny URL or URL shortener service.

    Problem Statement: URL Shortener System Design

    Design a URL shortener service.

    Functional Requirements

    1. For a given input URL (long URL), our service should generate and return a shortened URL to the user.
    2. When the user clicks on the shortened URL, our service should redirect the user to the original URL.

    Non-Functional Requirements

    1. The system should be scalable and highly available.
    2. Our system should be performant.
    3. For security purposes, the generated short URL should be as random as possible; it should not be predictable.

    Introduction

    A tiny URL or URL shortener is a service which takes a long URL and converts it into an equivalent short URL containing fewer characters. When the user clicks this shortened URL, they are redirected to the same destination address as the original URL. For example, if we pass a long original URL to a URL shortener service, it returns a shortened URL that is only a fraction of the original's length.

    Why is URL shortening necessary?

    There are several use cases where a shortened URL is preferred over a long URL:

    - Short URLs look clean when they are placed on websites, files, social media etc.
    - Shortened URLs are useful on services that restrict the number of characters that can be posted. Ex: tweets on Twitter.
    - To mask a URL. Sometimes you would not want to expose the original URL as-is to end users. Short URLs in such a case do the job of masking the original URL while preserving the destination address. Ex: masking affiliate links.
    - Some URLs are simply too long, and it is better to have a shortened version represent them. A short URL in such a case can be used as a placeholder.
    - Shortened URLs can also be used for tracking purposes.
    Most URL shortener services also provide additional metrics on a shortened URL, like the number of clicks, which can be extremely useful for business insights.
    - Social campaigns with shorter URLs perform better. People tend to trust short URLs compared to longer ones, especially when the long URL contains special characters. As a result, shortened URLs produce better social engagement.

    Estimates

    For our design it is safe to assume that a system like this will be read-heavy, i.e. the probability of users creating a short URL from a long URL is low compared to users clicking on a short URL and getting redirected to the original URL. For the purpose of this example, let us assume that the read-to-write ratio is 100:1, which means that for every short URL created there will be 100 redirections (100 clicks on that short URL).

    Another important metric useful for our design is the number of short URL generation requests our service is expected to receive per month. It is important to clarify all of these details with the interviewer beforehand, because it gives us better clarity to proceed with the design. For our case, let us assume our service gets 100 million requests per month on average.

    Traffic Estimates

    Considering all the above assumptions:

    - Total no. of short URL generation requests per month = 100 million.
    - Therefore, no. of short URL requests per second = 100 million / (30 days * 24 hours * 3600 seconds) ~ 40 URLs/sec.
    - Total short URL clicks or redirections per second (assuming the 100:1 read-to-write ratio) = 40 URLs/sec * 100 = 4000 URLs/sec.

    Storage Estimates

    Most popular browsers support 2000 characters in a URL. So let's say our long URL will take up at most 2000 characters, or 2 KB. Most URL shortener services create a short URL with 15-16 characters (we will see more on this later in our discussion), so we can say the short URL size is ~16 bytes.
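These numbers can be double-checked with a few lines of arithmetic (a quick sketch; all constants are the assumptions stated above):

```python
# Back-of-the-envelope estimates for the URL shortener.
requests_per_month = 100_000_000      # assumed short-URL creations per month
seconds_per_month = 30 * 24 * 3600    # ~30-day month

writes_per_sec = requests_per_month / seconds_per_month  # ~38.6, i.e. roughly 40/sec
reads_per_sec = 100 * writes_per_sec                     # 100:1 reads, roughly 4000/sec

bytes_per_long_url = 2000             # ~2 KB upper bound per long URL
monthly_url_storage_gb = bytes_per_long_url * requests_per_month / 1e9

# About 200 GB/month for the long URLs alone, before metadata.
print(round(writes_per_sec), round(reads_per_sec), monthly_url_storage_gb)
```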
    Additionally, we might need a few more bytes to store metadata like the creation timestamp, user details etc.; let's say 50 bytes. So, the total storage needed per shortened URL is ~2.1 KB.

    - Storage needed for 1 month = 2.1 KB * 100 million = 210 GB.
    - Storage needed for 1 year = 210 GB * 12 months ~ 2.5 TB.
    - For 5 years this will be 12.5 TB, and 25 TB for 10 years.

    System Design

    At first look, designing a URL shortener service may not seem like a big deal. All that we need is a server that runs the logic for converting the long URL to a short URL, and a database to store the mapping of long URL to short URL; when the user requests the short URL, we redirect the user to its corresponding long URL using this mapping data. Simple, right?

    Yes, this approach works fine as long as we have to serve only a small set of users. But what if our service gets thousands of requests per second? Will our server be able to handle the load? Will our database be able to handle the volume of data? How do we ensure that the generated short URLs are all unique? How do we make sure that the data stored in our database is not corrupted? All of these subtle aspects of designing a URL shortener service make it such a popular system design interview question: it allows the interviewer to test the candidate on various design aspects.

    Interview Tip: In most system design interviews you will be given a loose-ended or generic problem statement (similar to the one given here). It is your job as an interviewee to ask relevant follow-up questions and streamline the requirements.
    For this particular use case, an immediate question to start with could be: what is the scale our service needs to operate at? How many monthly active users are there, and how many requests per second can our service expect? This helps us decide on design aspects like server specifications, the type of database to use, and the length of the shortened URL our service has to create, and it also lets us calculate the data estimates.

    We will start with a simple approach first and evolve our design incrementally as we move along. First, let us think of what components we need in order to build a service like this. Users send requests to our service from a mobile or desktop browser; our service should have the logic/code to convert a long URL to a short URL. We will get into the details of how to build this logic later in our discussion; for now, let us assume that we need some kind of logic and that this logic has to be hosted on a server for our users to reach us. Once a request reaches the server, the code on the server runs to generate the short URL, which is sent back to the user as a response.

    We also need to store this short URL to long URL mapping in a database, so that the next time the user clicks on the short URL, our service can redirect the user to the original URL destination. We will see what type of database to use for our service later in our discussion; for now, let's just assume we need some kind of a database to store this data.
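As a sketch, the naive single-machine version described above might look like this in Python. The in-memory dict stands in for the database, make_short_key is a placeholder for the shortening logic discussed later, and the urls.com domain is the example domain used in this article; all of these names are illustrative:

```python
# Naive single-server URL shortener: one process, one in-memory "database".
url_table = {}  # short key -> long URL (stand-in for a real database)

def make_short_key(long_url: str) -> str:
    # Placeholder shortening logic; a more robust scheme is discussed later.
    return format(abs(hash(long_url)) % 62**7, "x")

def shorten(long_url: str) -> str:
    key = make_short_key(long_url)
    url_table[key] = long_url          # persist the mapping
    return "urls.com/" + key

def resolve(short_url: str) -> str:
    key = short_url.rsplit("/", 1)[-1]
    return url_table[key]              # a real server would HTTP-redirect here
```

This single-process version already supports both functional requirements; the rest of the discussion is about making it survive real traffic.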
    With the above design, when the user clicks on the short URL, the request reaches the server; the server then connects to the database to get the short to long URL mapping and redirects the request to the long URL's destination address. And there you go, we have our URL shortener service!

    As you can see, the above approach is simple: we just need a server and a database to make this work. However, this approach of a single server hosting the shortening logic and a database storing the mapping works fine only for a small system serving a small number of requests; it won't scale. If our service becomes really popular and we start getting thousands of requests per second, this design won't work. Let's see why.

    As we start getting thousands of requests per second, a single server instance may not be able to handle all the load. The increased load may cause the server to crash, resulting in downtime for our service and a bad experience for its users. We can overcome this problem either by scaling up (vertical scaling) or scaling out (horizontal scaling).

    In vertical scaling, we use a bigger machine with more memory and CPU to cope with the increased load. As the number of requests grows, we add more memory and CPU to the existing machine (server instance) to meet the demand. However, this approach has a limitation: we can keep adding more memory and CPU only up to a certain extent, because of hardware limits.

    Another approach to scaling the system is horizontal scaling. In horizontal scaling, as the load increases we add more machines, as opposed to using bigger machines. This approach works really well for large systems serving a huge number of requests. To optimize further, we can use a hybrid model mixing horizontal and vertical scaling: we keep adding more machines to meet the demand (horizontal scaling), and each of these machines is a big machine with adequate CPU and memory, depending on our cost estimates.
    With all of these ideas in place, our system would look as shown below. You might be wondering, looking at the above diagram: when there are n server instances, how does the system know which server has to handle which request? This is a valid point; we cannot just have multiple servers and expose them all as endpoints to users. We need an intermediate component which intercepts the requests and redirects them to specific server instances using some kind of logic. This job is done by the load balancer.

    A load balancer, as the name suggests, balances the load by distributing requests across our servers. There are various kinds of load balancers, each with its own logic for distributing load, but for our use case let's keep it simple and assume the load balancer redirects each request to whichever server is free or available to process it. The load balancer also acts as a single point of contact for all our users; they don't have to know the individual IP addresses of our server instances. All user requests land on the load balancer, which is responsible for routing these requests to specific server instances. So far, our system looks as shown below.

    We can optimize this design further by adding a caching layer to our service. With the current design, our servers have to talk to the database every time a user clicks on a short URL, in order to retrieve the short URL to long URL mapping. Database calls can be slow and expensive compared to fetching data from a cache, which is essentially in-memory storage. We can improve the response time of our APIs by caching frequently accessed short URLs: when we get a request for a short URL, our servers first check whether the data is available in the cache; if yes, the data is retrieved from the cache, otherwise it is fetched from the database. Apart from faster reads, a cache can also help reduce the load on our database.
    This is because data available in the cache can be served from the cache itself without reaching the database; this reduces the number of calls we make to the database, thereby reducing the load on it. There are various caching solutions available in the market; a few popular ones include Redis, Memcached and Varnish. We can pick any one of them for our use case. If you want to learn more about caching, when to use a cache, and various caching strategies, we highly encourage you to go through this article on caching.

    Short URL Logic

    We now have most of the components in place to build our system, but we still have not discussed the most important aspect: how do we convert a given long URL to a short URL?

    Short URL Composition

    Before understanding how to convert a long URL to a short URL, let us go a level up and discuss what the shortened URL should look like. Most popular URL shortening services have two parts to a shortened URL. The first part is the domain name of the service, and the second part is a set of random characters. It is important to ensure that the domain name of our service is crisp yet meaningful. For the purpose of this example, let us assume the domain name of our service is urls.com (short for URL shortener). The second part of the short URL is a string formed from a set of random characters. This random string should be unique for each short URL created for a long URL.

    Random String Length

    What should be the length of the randomly generated string? The length should not be so long that it defeats the purpose of having a shortened URL, nor so small that we run out of combinations after a certain point. We can generate the random string using algorithms like base62 or MD5. Base62 output is drawn from an alphabet of 62 characters (a-z, A-Z, 0-9), while MD5 produces a 128-bit digest, commonly written as hexadecimal characters.
    Let's say we decide to go with a random string of length 7: with a 62-character alphabet we have 62^7 ~ 3.5 trillion combinations to work with. This looks sufficient, because even if our system gets 1000 requests per second, it would take about 110 years to exhaust these combinations. So if we combine both parts (domain name + random string), the total length of the short URL our service creates would be 16 characters: 8 characters for the first part, 7 characters for the second part, plus 1 additional character for the '/' in between, which seems acceptable (Ex: urls.com/3NVyw2j).

    How do we generate the random string?

    Now let's see which algorithm to choose for generating the random string. There are various hashing and encoding algorithms which can be used for this purpose; base62 and MD5 are the popular ones. Base62 takes an integer as input, while MD5 takes a string as input, and each generates a random-looking string as output. For the base62 algorithm, we need to convert our long URL string into an integer first and then pass it to the algorithm; for the MD5 algorithm we can directly pass the long URL as input. Most languages have ready-to-use implementations (core or third-party libraries) of both algorithms. Now let us analyze each of these algorithms and see which one fits our use case better.

    Using the MD5 Algorithm

    The MD5 algorithm takes a string as input and produces a fixed-length output. For each input string, MD5 mostly produces a unique output; there is still a chance of it producing the same output for two different inputs, but this is very rare. The catch with MD5 is that it produces a really long output, while we have a restriction of 7 characters for our random string. We can work around this by taking only a part of the output produced by the MD5 algorithm, say only the first 7 characters.
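The truncation idea can be sketched with Python's standard hashlib module (note that hexdigest characters are hexadecimal, 0-9a-f; re-encoding the digest bytes in base62 would widen the alphabet to all 62 characters):

```python
import hashlib

def short_key_md5(long_url: str) -> str:
    # MD5 gives a 128-bit digest (32 hex characters); keep only the first 7.
    digest = hashlib.md5(long_url.encode("utf-8")).hexdigest()
    return digest[:7]

print(short_key_md5("www.google.com"))  # a deterministic 7-character key
```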
    But there is a problem with this approach as well: the chances of collision increase if we consider only the first 7 characters of the MD5 output, meaning that for different long URLs we might end up with the same 7 characters as output, which is not what we want. For each long URL we always need a unique set of 7 characters in the short URL, or it will result in data corruption. Let's see if we can do this more efficiently using the base62 algorithm.

    Using the Base62 Algorithm

    Base62 produces output drawn from the same 62-character alphabet mentioned above, and it takes an integer as input. So we first need to convert the long URL to a random number, and then pass this random number to the base62 algorithm.

    Can we use the output of base62 as-is and store it in the database? The answer is no, because we first convert the long URL to a random number and then pass it to base62. There is therefore a possibility that we end up with the same random number for different long URLs, and if we store the output of base62 directly in the database, it can lead to data corruption. This means that each time we generate a random string using base62, we first need to check whether that string is already present in the database; if it is, we regenerate the random number and pass it to base62 again. We repeat this until we obtain a unique random string, and only then store it in the database. This is clearly not an efficient approach.

    Apart from being inefficient, the above approach only works if our service is hosted on a single server instance. If we have multiple servers running to meet demand, this approach may not always work. Imagine two replicas of the server, with a request coming in to each of them simultaneously, and assume base62 generates the same random string for both long URLs. When each of these servers tries to save this data to the database, it can lead to data corruption.
    This can be solved by using constraints in the DB query, like "insert if not present", and regenerating the random string if the query fails, but again we are adding to the inefficiency and complexity. How can we make this work efficiently? If you look at the problem at its core, the reason we end up with a duplicate random string is that we pass a duplicate random number to the base62 algorithm. If we can somehow pass a unique number to base62 every time, we can avoid this problem completely. And this brings us to our final approach: the counter-based approach.

    Counter-Based Approach

    The idea is to make use of a counter instead of generating a random number each time. When the server gets a request to convert a long URL to a short URL, it first asks the counter for a count value; this value is then passed to the base62 algorithm to generate the random string. Using a counter to get the integer value ensures that the number we get is always unique by default, because the counter increments its value after every request.

    For example, let's assume our counter starts from 1. Upon getting the first request to convert a long URL to a short URL, our service asks the counter for a unique number; the counter returns 1 and then increments its current value. The next time our service requests a unique number, the counter returns 2; for the third request it returns 3, and so on. As you can see, this is much more efficient than our previous approaches, where we had to check whether the random string obtained was unique each time before we could write it to the database.

    Challenges

    Even with the counter-based approach, we have to take care of a few things. For instance, the counter here is a single point of failure: if the counter goes down, our entire service stops working. And what if the counter runs out of numbers?
    Also, if our service has multiple server instances running to manage the load, and two or more servers request a number from the counter at the same time, will the counter be able to handle this by properly incrementing and sending them unique numbers?

    Solution

    If we implement the counter ourselves, we have to handle all of these complexities ourselves. For that reason, we will use a third-party service called Apache ZooKeeper, which takes care of these complexities for us so that we can focus on the core business logic.

    ZooKeeper

    Let us briefly look at ZooKeeper. ZooKeeper is an open-source distributed service which can be used for maintaining configurations, managing coordination between services in a cluster, distributed synchronization, naming nodes in a cluster, and many other things. In addition to the above functionalities, ZooKeeper also provides a shared counter. The counter provided by ZooKeeper contains a range of values. For each new server created in our service, ZooKeeper assigns one of these ranges. If a server runs out of the counter range assigned to it, ZooKeeper assigns it a new range. ZooKeeper is a distributed service: if the counter node fails, ZooKeeper automatically spins up a new counter instance. Also, as more users use our service, we can add as many servers as we want to handle the load; ZooKeeper takes care of assigning and maintaining counter ranges for all of these servers.

    Database

    What database would be ideal for our service? Should it be SQL or NoSQL? To make this choice, we need to understand the type of data we need to store in our database, and also the scale. As for the type of data, we only need to store the short URL to long URL mapping. When the user gives us the short URL, we just need to get the long URL using the 7-character random string in the short URL. We do not need to perform any relational queries or joins to store or fetch this data.
    So we do not need a SQL database. As far as NoSQL databases are concerned, there are various types: document databases, graph databases, column databases, key-value databases etc. As we can clearly see, our service uses a simple key-value mapping, so we can go with a NoSQL key-value database. Some popular ones are Amazon DynamoDB, Oracle NoSQL Database, InfinityDB and Aerospike. As far as scale is concerned, most of these NoSQL solutions scale well with increasing data.

    Scaling Servers

    It is good practice to keep the number of active server instances in our system elastic. During peak hours we need more server instances running to cope with the increased load; at other times we reduce this count to a preset value. This flexibility in the number of server instances is important for various reasons:

    - It makes sure our system does not go down when the number of requests spikes during peak hours.
    - On the cloud we are mostly billed per hour or on demand, so if we deploy our service on the cloud, running more server instances than needed has an impact on cost.
    - If we do not run our service on the cloud, we have to bear the cost of extra reserved servers upfront to ensure our service is always available.

    Most popular cloud solutions provide an auto-scaling feature. If we deploy our service to a popular cloud provider like AWS, we can easily leverage its auto-scaling capability. Once integrated, it automatically increases or decreases the number of active server instances depending on the number of requests we get.

    Final Design

    Putting together everything we have discussed so far, our URL shortener service design looks as shown below.

    Summary

    Let's summarize what we have studied so far and how the final design works.
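Before walking through the two request flows, the counter-plus-base62 encoding at the heart of this design can be sketched as follows. Here itertools.count stands in for the ZooKeeper-backed distributed counter; this is an illustrative sketch, not the production setup:

```python
import itertools

BASE62 = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"

def to_base62(n: int, width: int = 7) -> str:
    # Repeatedly divide by 62, collecting remainders as base62 digits.
    digits = []
    while n > 0:
        n, rem = divmod(n, 62)
        digits.append(BASE62[rem])
    return "".join(reversed(digits)).rjust(width, "0")

counter = itertools.count(1)  # stand-in for the ZooKeeper shared counter

def next_short_key() -> str:
    return to_base62(next(counter))

print(next_short_key())  # "0000001"
print(next_short_key())  # "0000002"
```

Because each server would consume a distinct counter range, two servers can never produce the same key, which removes the uniqueness check that the hashing approaches required.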
    URL Shortening Request

    When our service gets a request to convert a long URL to a short URL (for the sake of simplicity, let's assume the long URL in the request is www.google.com), the request first hits the load balancer. The load balancer scans the available server instances, checks which server is free to take the incoming request, and forwards the request to the appropriate server instance. The server then performs the following steps:

    1. Contacts ZooKeeper to get the counter value and passes the obtained counter value as input to the base62 algorithm.
    2. Base62 returns a unique random string, which is appended to our domain to create the short URL. For this example, let us assume the random string obtained is 3NVyw2j, so the short URL will be urls.com/3NVyw2j.
    3. This long URL to short URL mapping is stored in the database:

    Random String    Long URL
    3NVyw2j          www.google.com

    User Clicks on the Short URL

    When this short URL is clicked or pasted into a browser, the request again hits the load balancer, which forwards it to an appropriate server instance. The server then does the following:

    1. Extracts the random string from the short URL (3NVyw2j in our case).
    2. Using this random string, retrieves the corresponding long URL from the database.
    3. Redirects the request to the original long URL's destination address.
    4. Saves the short URL to long URL mapping in the cache, so that the next time a user requests this short URL we can quickly serve it from the cache instead of querying the database.

    Finally, let's analyze whether our design accomplishes all the functional and non-functional requirements. Our design covers both points mentioned as part of the functional requirements.
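The redirect flow just described can be sketched as a cache-aside lookup. The dicts stand in for the cache and the database, and the 3NVyw2j mapping is the example used above:

```python
database = {"3NVyw2j": "www.google.com"}  # random string -> long URL
cache = {}                                # hot mappings, cache-aside style

def resolve(short_url: str) -> str:
    # Step 1: extract the random string from the short URL.
    key = short_url.rsplit("/", 1)[-1]
    # Steps 2 and 4: try the cache first; on a miss, read the
    # database and populate the cache for subsequent lookups.
    if key not in cache:
        cache[key] = database[key]
    # Step 3: the server would now issue an HTTP redirect to this address.
    return cache[key]

print(resolve("urls.com/3NVyw2j"))  # www.google.com
print("3NVyw2j" in cache)           # True: the next lookup skips the database
```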
    As far as the non-functional requirements are concerned, our design ensures that the system is scalable: we have seen throughout the design discussion how the system can increase the number of server instances under increased load, and how the load balancer does the job of distributing the load among the available server instances. Our choice of database was also made with scalability in mind.

    Is our system highly available? Yes: if one of our server instances goes down, the load balancer redirects requests to one of the other available instances, and our auto-scaling policy ensures a new server instance is created to replace the failed one. The counter we use is a distributed counter, and ZooKeeper ensures it is always available and not a single point of failure.

    The system is performant. The algorithm for long URL to short URL conversion is an efficient one, and we cache the most frequently requested short URLs, which also helps improve performance. As for the third non-functional requirement, since we use a counter value as input to the base62 algorithm, we are guaranteed to get a unique short URL each time.

    And with this we have covered all the design objectives. That is all for this article; thank you for taking the time to read it. If you have any questions or doubts, please let us know in the comments section below; we will be happy to answer you. If you found this article useful, do not forget to subscribe to our website; your support motivates us to bring out more such articles in the future (scroll down to the bottom of the page to find the subscription form). You can explore more such amazing articles from Code Recipe in our topics section, and you can find more system design questions in our System Design Questions section.
Offer only available for a limited time - Join now . Follow us: YouTube , Facebook , Twitter , LinkedIn , Tumblr , Instagram .

  • Two Sum - Leetcode #1 Short & Simple Solution

    Hello Code Recipian! Welcome back to another article on leetcode problem solutions . In this article we will be solving leetcode problem no. 1, Two Sum . This problem is one of the most popular questions on leetcode and also a popular choice in coding interviews. Once you have a good understanding of this problem, it should help you solve advanced-level problems like three sum , which is an extension of the two sum problem. Problem Statement: Two Sum You are given an array of integers nums and an integer target ; return indices of two numbers in the array such that they add up to the given target . You may assume that each input would have exactly one solution. Also, you cannot use the same element twice. You are allowed to return the answer in any order. Example 1: Input: nums = [7,2,13,11], target = 9 Output: [0,1] Explanation: Element at index 0 i.e. 7 and element at index 1 i.e. 2 when added give target = 9. Example 2: Input: nums = [7,3,5], target = 8 Output: [1,2] Solution Let us try to understand the problem statement first. Here we are given an array of integers nums and another integer called target . We are asked to write an algorithm which returns the indices of two elements in this array such that, when we add the numbers at these two indices, the sum equals the given target . For instance, in example 1 [7, 2, 13, 11] is the given array and the given target sum is 9. If we take a look at the given array, the pair which adds to the target sum 9 is (7,2), i.e. 7+2 = 9. So, our algorithm should return (0,1) as the result because these are the indices of elements 7 and 2 respectively in the given nums array. Similarly, for the array in example 2 [7, 3, 5] the output is (1,2) because these are the indices of elements 3 and 5 respectively, which add up to the target sum 8. It is possible to solve this problem in linear time. The idea is to use a hashmap to store the indices of the elements that are already visited. 
Let's see how this algorithm works. Algorithm Below is a step-by-step explanation of the working of the algorithm: Initialization: Create a hashmap numberIndexMap which accepts an integer key and an integer value. This hashmap is used to store the indices of the numbers we have encountered in our previous iterations. Find indices that add up to the given target: Iterate through each element in the given array starting from the first element. In each iteration we perform the following steps: Calculate the number needed to achieve the target sum using the current number: needed number = target - current number. Check if the needed number is already present in the hashmap, i.e. check if we have already encountered the needed number in one of our previous iterations. If yes, we have found the numbers that add up to the given target (the current number and the needed number). Therefore, return the indices of the current number and the needed number as the result. If no, add the current number along with its index to numberIndexMap . This will allow us to verify whether we have come across this number in future iterations. Repeat these steps for each element in the nums array until we find the result. Simulation The below diagram shows a pictorial depiction of the working of the algorithm. Want to master coding? Looking to upskill and crack interviews? 
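The steps above map directly onto a short sketch (shown here in Python; the numberIndexMap from the explanation becomes number_index_map):

```python
def two_sum(nums, target):
    """Return indices of two numbers in nums that add up to target."""
    number_index_map = {}  # value -> index, for elements seen so far
    for i, current in enumerate(nums):
        needed = target - current  # number needed to reach the target sum
        if needed in number_index_map:
            # We saw the needed number in a previous iteration.
            return [number_index_map[needed], i]
        number_index_map[current] = i  # remember this number for later
    return []  # unreachable when exactly one solution is guaranteed
```

For example, two_sum([7, 2, 13, 11], 9) returns [0, 1], matching Example 1.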
We suggest you check out these specially designed courses: Udemy Video Tutorials - Available at 95% off System Design Interview Complete Guide Cracking The Coding Interview - Most Preferred Book For Coding Interviews Data Structures And Algorithms - A Complete Guide   Learn In-Demand Tech Skills And Stay Ahead Of Others Video Explanation If you are looking for a video explanation of the Two Sum problem, you can check out the video on our Code Recipe YouTube channel. Code Solutions are available in Go, Python, Java, JavaScript, TypeScript, C++, C#, PHP, Kotlin, Swift, Ruby, C and Rust. Complexity Analysis Time Complexity The running time complexity of this solution is O(n) since we would have to go through all array elements in the worst case. The worst case occurs when we have to traverse till the end of the array to get the result. Note: A single hashmap lookup can degrade to O(n) if there are collisions, which would make the overall worst case O(n^2). However, most modern languages have a pretty robust hashmap implementation, so the chance of collisions is minimal; in general we can say the overall time complexity of the solution is O(n) . Space Complexity The auxiliary space required is O(n) since we store array elements in the hashmap, and in the worst case we would end up storing all values in nums . That brings us to the end of this article. We sincerely appreciate the time you have taken to read through it. If there are any questions, feel free to ask them in the comments below. We're here to help and will gladly answer your queries. If you enjoyed this article, please subscribe to our website and YouTube channel . Your support inspires us to create more such articles in the future. Don't forget to delve into more such fascinating articles from Code Recipe in our blogs section . There's a wealth of knowledge waiting for you there! 

  • 680. Valid Palindrome II - LeetCode Fastest Solution

    Hello Code Recipian! Welcome back to another article on LeetCode problem solutions . In our last article we solved LeetCode problem 125. Valid Palindrome . Today we will be solving an extension of this problem, 680. Valid Palindrome II . If you have not yet checked out 125. Valid Palindrome , this is a good time to go through it before proceeding further in this article. Problem Statement: Valid Palindrome II Given a string s , return true if it is possible to make s a palindrome by removing at most one character from s . A string is a palindrome if it reads the same forward and backward. Example 1: Input:  s = "aba" Output:  true Explanation:  s is already a palindrome. Example 2: Input:  s = "abca" Output:  true Explanation:  We can delete the character 'c' to make s a palindrome. Example 3: Input:  s = "abc" Output:  false Explanation:  s is not a palindrome and cannot be made a palindrome by deleting any one character. Constraints: 1 <= s.length <= 10^5 s consists of lowercase English letters. Solution We can use the same two pointer technique that we used for solving 125. Valid Palindrome here as well, with a slight modification. We will be using the two pointer technique along with a greedy approach to solve this problem efficiently in linear time. Let's see how this algorithm works in detail. Algorithm Below is a step-by-step explanation of the working of the algorithm: Initialization: Initialize two variables, start and end . start is the index used to iterate the string from the beginning; it is initially set to index 0. end is the index used to iterate the string from the end; it is initially set to the last index. Check if s is a palindrome: Iterate the string inwards using the start and end index variables. In each iteration perform the following steps: Check if the characters at the start and end indices match. If yes, continue iterating inwards through the string s . 
If no, we have 2 choices: Skip the start character and continue checking for a palindrome, or skip the end character and continue checking for a palindrome. If either of these choices returns true , it means we can form a palindrome by skipping one character; therefore return true . If both choices return false , a palindrome cannot be formed by skipping one character in string s ; therefore return false . Return result: If the execution comes out of the main loop, it means we have checked all characters in string s and all of them match, so s is a palindrome. Therefore, we return true . Simulation Below is a pictorial depiction of the working of the algorithm. Solutions are available in Go, Python, Java, JavaScript, TypeScript, C++, C#, C, PHP, Kotlin, Swift, Ruby, Scala and Rust. Complexity Analysis Time Complexity We make use of two loops here, one in the main function and one in the helper function; effectively it is a nested loop. The helper function can get called from the main loop at most twice. If n is the length of the given string s , and the main loop iterates through x elements of s before calling the helper function, the helper function runs for the remaining (n-x) elements, and it runs twice in the worst case. So the overall time complexity is O(x + 2 * (n-x)) = O(2n - x) , which is asymptotically equivalent to O(n) . 
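A compact Python sketch of the two pointer plus greedy approach described above (the inner is_palindrome function plays the role of the helper function mentioned in the complexity analysis):

```python
def valid_palindrome_ii(s: str) -> bool:
    """Return True if s can be made a palindrome by removing at most one char."""
    def is_palindrome(start: int, end: int) -> bool:
        # Helper: check that s[start..end] is a palindrome with no skips left.
        while start < end:
            if s[start] != s[end]:
                return False
            start += 1
            end -= 1
        return True

    start, end = 0, len(s) - 1
    while start < end:
        if s[start] != s[end]:
            # Greedy choice: skip either the start or the end character.
            return is_palindrome(start + 1, end) or is_palindrome(start, end - 1)
        start += 1
        end -= 1
    return True  # every pair matched; s is already a palindrome
```

For instance, valid_palindrome_ii("abca") returns True (delete 'c'), while valid_palindrome_ii("abc") returns False.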
Space Complexity The space complexity of this solution is O(1) as we are not using any additional variables apart from start and end , which take constant space. That brings us to the end of this article. We appreciate the time you have taken to read through it. If you have any questions, feel free to ask them in the comments below. We're here to help and will gladly answer your queries. If you enjoyed this article, please subscribe to our website and YouTube channel . Your support inspires us to bring out more such articles in future. Don't forget to delve into more such fascinating articles from Code Recipe in our blogs section . There's a wealth of knowledge waiting for you there!

  • Insertion Sort - Your Sorting 101 Guide

    Algorithm In our last article we discussed the bubble sort algorithm. This is another article in the series sorting algorithms . In this article we will discuss insertion sort. Insertion sort is an in-place sorting algorithm. Insertion sort works by picking each unsorted element and placing it in its correct sorted position in the array. During its execution insertion sort divides the array into two portions, a sorted portion and an unsorted portion. Sorted elements will be on the left side of the array and unsorted elements will be on the right side. In each iteration one element is picked from the unsorted portion and moved to its correct position in the sorted portion on the left. For a given input array arr that needs to be sorted in ascending order, insertion sort performs the following steps: We assume that the first element is already sorted and start from the second element in the array. We compare the second element to the element before it; if it is smaller, we swap the two elements. Next we move on to the third element, compare it with the elements before it, and swap it backwards as long as it is smaller than those elements. We repeat the above steps for all the elements in the array. To implement this algorithm we need to run two loops, an outer loop and an inner loop. Let i and j be the indexes of the outer and inner loops respectively. i points to the next unsorted element in the unsorted portion of the array, and j is used to move that element backwards through the sorted portion. To begin with we assume that the first element in the given array is sorted, so i points to the second element. We fix the outer loop and iterate through the inner loop. At the start of the inner loop we assign i to j, i.e. j = i . In each iteration of the inner loop we check if arr [ j ] < arr [ j -1]. As long as arr [ j ] < arr [ j -1] we swap arr [ j ] and arr [ j -1] and decrement j . 
Else we break out of the inner loop and move on to the next iteration of the outer loop. We repeat these steps until all the elements in the given array are processed. Simulation Consider we are given the below input array: Below is a simulation of how insertion sort sorts the given array in each iteration. The element marked in green is the element under consideration for the current iteration, which needs to be moved to its correct position in the sorted portion. Elements marked in grey are the elements that are already sorted. Iteration 1 To start with we assume that the first element is already sorted. We begin the iteration with the second element, 19. We compare 19 with every element on its left until it reaches its correct sorted position or there are no more elements to compare with. First we compare 19 with the element before it, 17. Since 19 > 17, there is no need to swap. Since there are no more elements to compare, we mark 17 and 19 as sorted as shown below: Iteration 2 After the first iteration the first two elements of the given array are sorted. In the second iteration we process the next unsorted element, 4. Compare 4 with all elements on its left starting from 19 until it moves to its correct sorted position. Compare 4 with 19: 4 < 19, so swap. Next compare 4 with 17: 4 < 17, so swap. There are no more elements to compare with, therefore we mark 4, 17, 19 as sorted. Iteration 3 The next unsorted element is 9. As earlier, we compare it with 19, 17 and 4 and move it to its correct position in the array. Now 9 is in its correct position, so we mark 4, 9, 17, 19 as sorted. Iteration 4 Finally we compare 7 with the elements before it: 19, 17, 9, 4. Mark 4, 7, 9, 17, 19 as sorted. Now all the elements in the array are sorted, so we stop the algorithm. 
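The iterations simulated above correspond to the following sketch (Python is used here for brevity; the article's implementation tabs also cover Go and Java):

```python
def insertion_sort(arr):
    """Sort arr in place in ascending order using insertion sort."""
    # i walks over the unsorted portion; everything left of i is sorted.
    for i in range(1, len(arr)):
        j = i  # j moves the current element back through the sorted portion
        while j > 0 and arr[j] < arr[j - 1]:
            arr[j], arr[j - 1] = arr[j - 1], arr[j]  # swap backwards
            j -= 1
    return arr
```

Running it on the simulation's array, insertion_sort([17, 19, 4, 9, 7]) produces [4, 7, 9, 17, 19].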
Implementation Code Language: Go Language: Python Language: Java Complexity Analysis Time Complexity: O(n^2) The worst case time complexity of the algorithm is O(n^2) since it uses two nested loops to arrive at the result. The worst case occurs when the array is sorted in descending order; in this case we end up making the maximum number of comparisons to sort the array. Space Complexity: O(1) No extra space is used. Now you know how insertion sort works. Don't stop here, you can explore more sorting algorithms from Code Recipe here . That is all for this article, thank you for taking your time to read it. If you have any questions or doubts, please let us know in the comments section below, we will be happy to answer you. If you found this article useful, do not forget to subscribe to our website; your support motivates us to bring out more such articles in future (scroll down to the bottom of the page to find the subscription form). You can explore more such amazing articles from Code Recipe here .

  • LRU Cache - Everything You Need To Know

    In our earlier article we had an overview of caching and a brief introduction to various cache eviction policies. In this article we will have a detailed discussion of the LRU, or Least Recently Used, cache eviction policy. Introduction LRU cache or least recently used cache is one of the most popular cache eviction strategies. It is also a very commonly asked interview question. LRU cache keeps track of the order in which items in the cache were accessed. It stores items in the order in which they were requested, so the most recently used item will be at the top of the cache and the least recently used item will be at the end of the cache. In the LRU strategy, when the cache is full, the item that hasn't been used for the longest time (the least recently used item) is eliminated or evicted from the cache. It also provides quick, constant time access to items in the cache. The idea behind LRU cache is simple: if the cache has a capacity to store n items, then LRU cache stores the n most recently used items. When the cache is full and a new item needs to be added, it makes space for the new item by evicting the least recently used item. Example Let's understand LRU cache with an example. Consider we have an application where our users can read from a collection of books. For ease of understanding let us assume that our application has 4 books in the database: book1, book2, book3 and book4, and our cache limit is 3 (at most 3 items can be stored in the cache). Initially the cache is empty since no requests have been made for any book. Let's look at how LRU cache stores and evicts items as we try to access data from it. To begin with, let's say a user makes a request for book1. Our application checks to see if the data is available in cache; since book1 is not available in cache, it fetches it from the database, and an entry for "book1" is now stored in cache as shown below. Next we get a request for book2. Again, book2 is stored in cache. 
Also, since book2 is now the most recently used item, it is placed at the top of the cache and book1 is pushed down, as shown in the diagram below. Then we get a third request, for book3, so book3 is now placed at the top of the cache. Now imagine we get a request for book2 again. Since book2 is already in cache, our application responds to this request from cache instead of the database. Also, book3 is bumped down and book2 is moved to the top of our cache since it is the most recently used item. The same is shown in the diagram below. Now let's say we get a fifth request, for book4. Since the cache is full, it won't be able to store book4 unless it evicts an already existing item. As per the LRU cache policy, the least recently used item is the one that should be evicted to make space for the new item. The item at the bottom of the cache is the least recently used item. So we evict book1 from the cache and put book4 at the top. This is how an LRU cache works. It repeats the above steps for each subsequent request. Now that we know how an LRU cache works, let's see how it can be implemented. Implementation We need to implement two methods (functions), one for putting items into the cache and one for getting items from the cache. Let's call these methods Put() and Get() respectively. Caches in general are key-value based. This means every item in cache is stored against a key; when the item is needed, we get it from cache using the corresponding key. So, Put() will take two arguments, the "key" of the item to be stored and the actual "value" of the item. And Get() will take one argument, the "key" of the item to be retrieved, and will return the item corresponding to that key. In general our Put() and Get() methods will look like this: Put(key, value) Get(key) return value We need to consider a few things while implementing LRU cache. 
The data structure or structures that we choose to implement LRU cache should have the following properties: We should be able to store a list of items in cache. We should be able to keep track of items in the cache. Items should be stored in cache in the order in which they were accessed or requested. When an item is accessed from cache (a new item or an item already in cache) we should be able to efficiently move it to the front of the cache. When an item is requested from cache we should be able to retrieve it in constant time, O(1). We should be able to efficiently evict the least recently used item when the cache is full. From the list of requirements above, one is fast access to items in cache. The data structure that comes to mind when we think of constant time access is a hashmap or hashtable. Okay, so we will store our cache items in a hashmap so that they can be retrieved in O(1) time. But there is also a requirement that our cache should maintain the order in which the items were accessed or requested. How can we maintain this order? A hashmap cannot keep track of the order of items, so we will have to use another data structure for that. What data structure can we use to maintain the order of items? For maintaining this list we can use multiple data structures like an array, a queue, a singly or doubly linked list etc. The important thing is that whatever data structure we choose should allow us to perform write, update and delete (evict) operations efficiently, to ensure that our cache is performant. A doubly linked list can perform update, write, push-to-front and delete operations in constant time if we have access to the doubly linked list node on which these operations need to be performed. We can achieve O(1) access to doubly linked list nodes by storing the nodes themselves in a hashmap. 
So in our earlier hashmap, instead of storing the cache item value we store a reference to the doubly linked list node (which in turn holds the cache item value). In summary, we will be using a combination of a doubly linked list and a hashmap to implement LRU cache. The hashmap enables fast access to items in cache, and the doubly linked list keeps track of the order in which the items were accessed. The key in the hashmap is the cache item key, and the value is a doubly linked list node reference, which in turn has the cache item value. Note: The cache item "key" can be any unique value. For example, if our requirement is to store user details in cache, the item "key" can be any unique value like email ID, user UUID, user name etc. One last thing we need to consider is what happens if multiple users/threads try to write the same item into cache in parallel, as this can lead to data corruption. We can overcome this situation using mutex locks. However, introducing a global mutex lock in Put() can prove expensive, as this would block all users from writing into cache even if they are writing different items. For this reason we introduce a lightweight key based mutex lock, which blocks only when two threads try to write to the same key concurrently. If we consider our previous book example, LRU cache internally stores the data as shown in the below diagrams, and for the same sequence of requests LRU cache goes through the following changes: Request for book 1. Request for book 2. 
Order of items in the map does not matter and is to be ignored; order is maintained by the doubly linked list. Request for book 3. Request for book 2. Request for book 4. Code Language: Go Let's define the structures needed for implementing LRU cache. Next we will create a method which initializes and returns a new LRU cache object. Below is the implementation of the put method for adding items into cache. Below is the implementation of the get method for retrieving items from cache. Dummy method that simulates fetching values from database: Key based mutex locking implementation: Language: Java Coming soon... Language: Python Coming soon... Time Complexity Both the get and put methods contain only constant time operations, hence the overall time complexity for both Get() and Put() is O(1). That is all for this article, thank you for taking your time to read it. If you have any questions or doubts, please let us know in the comments section below, we will be happy to answer you. If you found this article useful, do not forget to subscribe to our website; your support motivates us to bring out more such articles in future (scroll down to the bottom of the page to find the subscription form). You can explore more such amazing articles from Code Recipe in our blogs section .
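Since the Go implementation itself is not reproduced in this text, below is a minimal single-threaded Python sketch of the same hashmap plus doubly linked list design (class and method names such as LRUCache and _push_front are our own; the key based mutex locking described above is omitted here for brevity):

```python
class _Node:
    """Doubly linked list node holding one cache entry."""
    def __init__(self, key=None, value=None):
        self.key, self.value = key, value
        self.prev = self.next = None

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = {}  # key -> _Node, giving O(1) access to list nodes
        # Sentinel head/tail: the most recently used item sits right after
        # head, the least recently used item sits right before tail.
        self.head, self.tail = _Node(), _Node()
        self.head.next, self.tail.prev = self.tail, self.head

    def _unlink(self, node):
        node.prev.next, node.next.prev = node.next, node.prev

    def _push_front(self, node):
        node.prev, node.next = self.head, self.head.next
        self.head.next.prev = node
        self.head.next = node

    def get(self, key):
        if key not in self.items:
            return None  # cache miss
        node = self.items[key]
        self._unlink(node)      # move the accessed node to the front:
        self._push_front(node)  # it is now the most recently used item
        return node.value

    def put(self, key, value):
        if key in self.items:
            node = self.items[key]
            node.value = value
            self._unlink(node)
            self._push_front(node)
            return
        if len(self.items) == self.capacity:
            lru = self.tail.prev   # least recently used item
            self._unlink(lru)      # evict it to make space
            del self.items[lru.key]
        node = _Node(key, value)
        self.items[key] = node
        self._push_front(node)
```

Replaying the book example with capacity 3: after requests for book1, book2, book3, then book2 again, adding book4 evicts book1, exactly as in the diagrams above.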

  • Caching - System Design Building Blocks

    What is caching? A cache is a hardware or software component that acts as a temporary storage allowing fast access to data stored in it. The primary objective behind using a cache in any application is to improve performance. The process of storing and accessing data from a cache is known as caching . Caching is used everywhere: in different layers like operating systems, CDNs and DNS, and also by various applications and services. If the requested data or item is available in the cache it is called a cache hit , and if it is not available it is called a cache miss . If implemented correctly, caches can help improve response times, reduce load on the database, and save computation costs. Retrieving data from a persistent storage like a database can take a considerable amount of time; caches reduce the response time of our API by providing fast access to data. Caches are mainly used to reduce the number of network calls to the database and to store the results of operations that are computationally expensive. Caches can help bring down computation costs, especially if your application is running in the cloud. How is caching useful? There are a wide variety of use cases where caching can be applied. Some of the scenarios where caching is useful are: Frequently Requested Data One of the popular scenarios where caching is useful is when you have to frequently query for some commonly used data. For example, in a service like Twitter, each time we open a user profile a common query is to get the count of followers/following for that user. This is not very frequently changing data and is a good candidate for caching. You can fetch this data from the database the first time any user tries to access it, after which it can be cached, and each subsequent request can be served from cache until the data becomes stale. This helps us avoid network calls to the database for each request. 
Also, if you remember, we made use of caching in our URL shortener system design to cache the most frequently used short URLs; this is another real life use case of a cache. Expensive Computations Some APIs are simple and have fast response times, while others might require multiple intermediary steps involving slow and heavy operations that delay the response. A good example is a user feed API in a service like Instagram or Facebook. Displaying the feed for a particular user is mostly based on custom algorithms that may involve several computationally expensive operations, like fetching from the database all the people and public pages that the user follows, separating out the most recent posts made by those people and pages, aggregating all of these posts and building a time sorted list from this data. Since we may have to make multiple calls to the database and do a lot of computation in order to get, aggregate and sort this data, our API response time can take a hit if we try to compute this on the fly (as and when we get the request from the user). And since the user feed is the first page that loads when we open an application like Facebook or Instagram, this can lead to a bad user experience. So, in order to improve performance and reduce response time, we can precompute the feed for a particular user beforehand and store it in the cache (even before a request for the feed is made). We can then serve this data directly from cache when the user requests it. This can potentially reduce the response time of our API from several seconds to a few milliseconds. Avoid Load On Database If our service has a large number of users, and there are multiple microservices or replicas handling user requests which in turn call the database, this can put a lot of load on our database (especially during peak hours). 
This situation can be avoided by having a cache (most likely a distributed cache ) which can ease the load on our database. Why not cache everything? You might be wondering, if caches are so fast and efficient, why not store all our data in the cache instead of putting it in the database? Wouldn't that be the ideal thing to do? There are two main reasons why we cannot do this. First, the hardware used by caches is expensive compared to the hardware used by traditional databases, which mostly run on relatively inexpensive commodity hardware. We would have to spend a great deal of money if we were to store everything in cache. Second, storing all our data in cache instead of a database is counter-intuitive because it defeats the purpose of using a cache in the first place: as you store more and more data in the cache, the time needed to fetch data from it can increase, making the cache redundant. Where does a cache fit in? A typical web application backed by a data store would look like this: The data store can be a traditional database or another microservice. When a client makes a request to our service, it initially hits one of the microservices responsible for handling the request. This microservice in turn connects to the database to retrieve the data requested by the client. Calls to the database can be slow and can use a lot of system resources. It would be a good idea to store at least some of these items in memory so that we don't have to reach the database for each and every request. 
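The read path just described (check the cache first, fall back to the database on a miss, then populate the cache) can be sketched as follows. fetch_from_database and the 60-second TTL are hypothetical stand-ins for a real database call and a real expiry policy:

```python
import time

def fetch_from_database(key):
    # Hypothetical stand-in for a real (slow) database query.
    return "value-for-" + key

TTL_SECONDS = 60
cache = {}  # key -> (value, expiry timestamp)

def get(key):
    """Cache-aside read: serve fresh data from cache, else hit the database."""
    entry = cache.get(key)
    if entry is not None:
        value, expires_at = entry
        if time.time() < expires_at:
            return value      # cache hit
        del cache[key]        # entry went stale: invalidate it
    value = fetch_from_database(key)                 # cache miss
    cache[key] = (value, time.time() + TTL_SECONDS)  # populate cache
    return value
```

The first call for a key goes to the database; repeated calls within the TTL window are served straight from the in-memory dictionary.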
Firstly, this can improve the response time of our API since we respond directly from cache. Second, even if our database is down due to some failure, our service might still be able to serve some of the user requests. Also, if there are lots of clients requesting the same data again and again, having a cache in between can reduce the load on our database. When we get a client request, our service first checks to see if the data is present in cache; if yes, it responds directly from cache. If the data is not present, or if it is outdated, the service fetches the data from the database. Cache Invalidation Ideally the data stored in cache is transient and not meant to be there forever. This data has to be cleaned up or updated from time to time to keep it coherent with the data source. Data in cache can go stale if the original data (in the database for instance) is changed or removed. The process of cleaning up or updating the data in cache with new values to keep it in sync with the original data is known as cache invalidation . One of the popular ways to invalidate a cache entry is the TTL strategy. In the time-to-live (TTL) strategy, each entry in cache is associated with a specific time after which the data is considered stale. Once the data expires, we can flush these stale entries in one of two ways: lazily, expiring an entry when a user requests the stale data, or actively, with the help of a job that runs periodically at specified intervals. Cache Eviction Policies Cache eviction policies control the way in which items are removed from cache when the cache is full. Based on the algorithm selected, a cache eviction policy decides which item to remove from cache when the cache limit is reached. Why is it needed? 
It is important that we store data in the cache in such a way that, no matter how much data is stored in our database, our cache holds only the items relevant to the future requests expected to come into our system. In predicting future requests we mainly need to decide two things: 1. When to add items to the cache. 2. When to remove items from the cache. Our cache performance depends almost entirely on this policy. Imagine having a very poor eviction policy: every time the service requests data from the cache, it results in a cache miss, so hitting the cache is of no use. The call to the cache is an extra step, and if it responds with no data every time, you end up pulling data from the database almost all the time with an additional redundant call to the cache on top, which adds delay instead of improving performance. In such a case the cache becomes extra overhead rather than a performance improvement. Also, as mentioned earlier, the hardware used by caches is expensive, so storing a ton of items in the cache would not make sense in terms of either budget or performance. So we need to set a limit on the maximum number of items that can be stored in the cache at any given time. When the cache is full, we remove certain items depending on the cache eviction policy selected. Some of the well known cache eviction policies are: LRU - Least Recently Used : In this policy, the least recently used item is removed from the cache when the cache is full. We have described the LRU cache in detail in a separate article . LFU - Least Frequently Used: In this policy, items are evicted based on frequency of use. Each item in the cache has a count of how many times it has been requested; when the cache is full, the item with the lowest count is evicted. MRU - Most Recently Used : This is exactly the opposite of the LRU policy. When the cache is full, the most recently requested item is evicted from the cache. 
RR - Random Replacement: When the cache is full, a random item is evicted from the cache. FIFO - First In First Out: Items are evicted in the order in which they were added to the cache. LIFO - Last In First Out: The cache evicts the item that was added most recently, regardless of how many times it was accessed before. Note : It is important to note that cache invalidation is different from cache eviction. We invalidate data from the cache because it is either stale or has expired, but we evict data from the cache when the cache limit is reached (memory is full). Distributed Cache If the amount of data from a service or an application is too large to be stored in the cache memory of a single machine, the data has to be distributed across multiple machines. A distributed cache is an extension of the traditional cache. While a traditional cache is mostly a single server or machine, a distributed cache can grow beyond the memory limits of a single machine, formed by interlinking multiple cache servers or cache clusters. A distributed cache has its data spread across several nodes (servers) in a cluster. It can also span several clusters across geographically distributed data centers. Distributed caches have the ability to scale horizontally: as the data grows, we can add more machines (cache servers/nodes) to our cluster, allowing our cache to grow along with the growing data requirements. Distributed caches are especially useful for applications with large data volumes. Some of the popular distributed cache solutions are Memcached, Redis, and Hazelcast. How is a cache different from a CDN? CDNs are geographically distributed networks of servers that work together to deliver content (videos, images etc.) to users more quickly. A CDN acts as an intermediate layer between the end user and the server, minimizing the number of requests that need to be served from the origin server. Consider a service like Netflix having its origin server in the United States. 
For a user viewing content from, say, India, serving this content from the origin server could result in a lot of delay and buffering, because of the distance the data has to travel from server to end user. This is where a CDN comes to the rescue. CDNs have servers distributed all over the world. These servers cache data, and when a user requests data, it is served from the CDN server nearest to the user instead of the origin server, thereby reducing the delay. The main difference between a cache and a CDN is that while CDNs do perform caching, not everything that performs caching is a CDN. Also, CDN servers are strategically placed at internet exchange points (IXPs) to avoid network round trips, which may not always be true of a regular cache. Note : Internet exchange points (IXPs) are points where different internet providers connect to exchange traffic originating on their networks. Caching Strategies There are a number of caching strategies, and choosing the right one is an important step when you decide to incorporate caching into your system. Some of the popular caching strategies are: Write through cache: In this strategy data is written to both the cache and the database synchronously; the write is complete only after both are updated. Write back cache: The service writes data to the cache and immediately responds back to the user. This data is written to the database after a specified interval or under certain conditions. Write around cache: In this strategy data is written directly to the database. When the user requests this data at a later point, it is read from the database and then written into the cache. We will be explaining caching strategies in detail in our upcoming article. That is all for this article, thank you for taking your time to read this. If you have any questions or doubts, please let us know in the comments section below, we will be happy to answer you. 
If you found this article useful, do not forget to subscribe to our website, your support motivates us to bring out more such articles in future (scroll down to the bottom of the page to find the subscription form). You can explore more such amazing articles from code recipe in our blogs section . Code Recipe New Year Offer: Get 100% discount on Code Recipe Membership Plan. Join now and get exclusive access to premium content for free. Hurry! Offer only available for a limited time - Join now . Follow us on social media: Facebook , Twitter , Linkedin , Tumblr , Instagram .

  • Reverse Integer - Leetcode #7 Short & Simple Solution

    In our previous article we solved the two sum problem. This is another article in the series leetcode problem solutions, and it is a solution to the leetcode 7 reverse integer problem. Given a 32-bit signed integer x, reverse the digits in x and return the result. If after reversing, the result goes outside the signed 32-bit integer range [-2^31, 2^31 - 1], then return 0. Example Example 1 Input: x = 123 Output: 321 Example 2 Input: x = -123 Output: -321 Example 3 Input: x = 120 Output: 21 Solution The problem statement is pretty straightforward. We are given an integer value x, and our task is to write an algorithm that reverses this given input number. Note that the given number can be negative. If you have gone through our palindrome number article, you already know that reversing an integer value was one of the sub-tasks we had to do in our palindrome number solution. The solution to this problem is almost the same as the integer-reversal part described in the palindrome number solution. This problem can be efficiently solved using the divide and mod operations. The idea is to reverse the given number by applying the divide (/) and mod (%) operations on it. To reverse the given number we need to do the following steps: Extract the last digit from the given number using the mod operator. Put the extracted digit into its correct position in the result. Discard the processed digit from the original number using the divide operation. Repeat steps 1 - 3 as long as the value x is greater than 0. The above steps can be represented using the formula: result = result * 10 + currentDigit And, currentDigit = x % 10 Let's understand this with an example: Consider x = 369 as the given number. Let's take a variable to store our result, call it result . Initially result = 0. One by one we process each digit in x from right to left, starting with the last digit 9. Iteration 1 Step 1 Extract the last digit from x using the mod operator, i.e. 
currentDigit = x % 10 = 369 % 10 = 9. Step 2 Move currentDigit to its correct base position in result . This step is similar to the leetcode 8 problem, where we had to move each converted digit to its correct base position. As in the leetcode 8 problem, we accomplish this by multiplying result by 10 and then adding currentDigit to it. So result = result * 10 + currentDigit = 0 * 10 + 9 = 9. Step 3 Now that we have processed the digit 9 completely, we need to remove it from x . We do this using the divide operation: x = x / 10 = 369 / 10 = 36. Iteration 2 Repeat the above three steps for the digit 6 in x = 36. Step 1 currentDigit = x % 10 = 36 % 10 = 6 Step 2 result = result * 10 + currentDigit = 9 * 10 + 6 = 96 Step 3 x = x / 10 = 36 / 10 = 3. Iteration 3 Again we repeat these steps for the final digit in x = 3. Step 1 currentDigit = x % 10 = 3 % 10 = 3 Step 2 result = result * 10 + currentDigit = 96 * 10 + 3 = 963 Step 3 x = x / 10 = 3 / 10 = 0 As you can see, we have successfully reversed the given number 369. Finally we return result = 963. We do all these operations in a loop; once x = 0 we stop. How do we handle a negative number in the input? We take an integer variable signMultiplier to handle negative numbers: signMultiplier = -1 if the input number is negative and signMultiplier = 1 for positive numbers. Whenever we get a negative number, we first convert it into a positive number by multiplying it with signMultiplier , and then do the steps needed to reverse the number. Once the number is reversed, we convert it back to a negative number by multiplying the result with signMultiplier again. 
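The steps above can be sketched in Go. This is a hedged sketch, not the article's original listing; it assumes Go's default 64-bit int so the intermediate result cannot overflow, and it adds the 32-bit range check the problem statement requires (return 0 on overflow):

```go
package main

import (
	"fmt"
	"math"
)

// reverse reverses the digits of a 32-bit signed integer.
// It returns 0 if the reversed value falls outside the 32-bit range.
func reverse(x int) int {
	signMultiplier := 1
	if x < 0 {
		signMultiplier = -1
		x = -x // work with the positive value
	}
	result := 0
	for x > 0 {
		currentDigit := x % 10            // extract the last digit
		result = result*10 + currentDigit // shift result and append the digit
		x /= 10                           // discard the processed digit
	}
	result *= signMultiplier
	if result < math.MinInt32 || result > math.MaxInt32 {
		return 0 // outside the signed 32-bit range
	}
	return result
}

func main() {
	fmt.Println(reverse(369))  // 963
	fmt.Println(reverse(-123)) // -321
	fmt.Println(reverse(120))  // 21
}
```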
Complexity Analysis Time Complexity: O(n) Here n is the number of digits in the given input number. The time complexity is O(n) because we have to process each digit of the given number at least once to reverse it. Space Complexity: O(1) No extra space is used. Reversed an integer? It's time to reverse a list now! Check it out: Reversing a list 

  • String To Integer - Leetcode #8 Short & Simple Solution

    Problem Statement In our previous article we solved the longest distinct substring problem. This is another article in the series leetcode problem solutions, and it is a solution to the leetcode 8 problem . Implement the myAtoi(string s) function, which converts a string to a 32-bit signed integer. Conditions for converting the string to an integer: Read in and ignore any leading whitespace. Check if the first character (if not already at the end of the string) is '-' or '+'. Read this character in if it is either. This determines whether the final result is negative or positive respectively. Assume the result is positive if neither is present. Read in the next characters until the next non-digit character or the end of the input is reached. The rest of the string is ignored. Convert these digits into an integer (i.e. "123" -> 123, "0032" -> 32). If no digits were read, then the integer is 0. Change the sign as necessary (from step 2). If the integer is out of the 32-bit signed integer range [-2^31, 2^31 - 1], then clamp the integer so that it remains in the range. Specifically, integers less than -2^31 should be clamped to -2^31, and integers greater than 2^31 - 1 should be clamped to 2^31 - 1. Return the integer as the final result. Note: Only the space character ' ' is considered a whitespace character. Do not ignore any characters other than the leading whitespace or the rest of the string after the digits. Example Example 1 Input: s = "42" Output: 42 Example 2 Input: s = " -42" Output: -42 Example 3 Input: s = "4193 with words" Output: 4193 Solution This problem is rated medium difficulty on leetcode. However, the solution is pretty straightforward once you understand the logic behind converting a string to an integer . The rest of the handling is needed mainly to cover boundary conditions for various types of input values. How to convert a string type to an integer type? Given a string s, how can we convert it to an integer? 
One way to do this is by manipulating the ASCII values of characters. Each character in a string has a specific ASCII value associated with it. The numeric characters 0 - 9 have ASCII values in the range 48 - 57 (for more details refer to the ASCII table ). We can use this information to convert a character to an integer type. Let's see how. For each character in the given string, we do the following steps to convert it into an integer: First we check if the character represents a numeric value. This can be done by checking if its ASCII value is in the range 48 - 57. Once we determine that the character represents a numeric value, we convert this character type into an integer type. To do this we subtract 48 (the ASCII value of the character '0') from the character's ASCII value. Once the character is converted to an integer, we move it to its correct base position in the result by multiplying the previous result by 10 and adding the answer obtained in step 2 to it. These three steps can be represented by the formula: result = (result * 10) + (ASCII value of current character - ASCII value of '0') Let's understand this with an example. Consider s = "123" as the given string. Let's take an integer variable result to store our result. Initially result = 0. We start with the first character in the string s = "123", i.e. the character '1'. The first step is to check if it is a numeric value: the ASCII value of '1' is 49, which is in the range 48 - 57, so we can conclude it is a numeric value. 
The next step is to subtract the ASCII value of '0' from the ASCII value of '1', i.e. 49 - 48 = 1, which is our required integer value. So now we have converted the character '1' to its integer representation 1. Next we move 1 to its correct base position in result using the above formula. So at the end of the first iteration, result = result * 10 + (49 - 48) = 0 * 10 + 1 = 1. Next we move on to the second character in the string "123", i.e. the character '2'. So, result = (result * 10) + (ASCII value of '2' - ASCII value of '0'). Thus result = (1 * 10) + (50 - 48) = 10 + 2 = 12. As you can see, we have successfully converted the first two characters of the string s into integer form. Next we move on to the 3rd character, '3'. Again we calculate result using the same formula: result = (result * 10) + (ASCII value of '3' - ASCII value of '0'). Thus result = (12 * 10) + (51 - 48) = 120 + 3 = 123, which is our final result. How to handle input values containing a +/- sign? We take an int variable, call it signMultiplier . We set signMultiplier = -1 for input values starting with a '-' sign and 1 for input values starting with '+'. If the input contains a +/- sign, we start processing the string from the 2nd character and multiply signMultiplier into our final result. Complexity Analysis Time Complexity : O(n) The worst case time complexity of this algorithm is O(n), since we have to go through each character in the string at least once to fully convert the given string to an integer. Here n is the length of the string. Space Complexity : O(1) No extra space is used. 

  • Binary Search: Searching Made Easy

    Algorithm Binary search is one of the most commonly used search algorithms . It is also one of the fastest search algorithms in terms of time complexity. Binary search works only on sorted arrays. The algorithm divides the given array into two halves and continues its search on one of these halves based on certain conditions, hence the name binary search (since it splits the array into two halves). Binary search works by reducing the search space by half each time; in each iteration half of the elements are eliminated from the current search space. Binary search starts searching from the middle of the array and moves outwards (either left or right), unlike search algorithms such as linear search that begin their search from the first element of the array. For a given input array arr and a target element, say x, binary search does the following steps: To begin with, the algorithm considers the entire array as its search space. We create two variables left and right representing the indexes of the first and last elements in the search space. Next we find the middle element by dividing the search space into two halves. If our search space has n elements, we can find the middle element using the formula: mid = (left + right) / 2. Now our search space is divided into two halves, one on the left of the middle element and the other on its right. Now compare the target element x to the middle element arr [ mid ] obtained from the previous step. There are three possibilities: 1. x is equal to the middle element. 2. x is greater than the middle element. 3. x is less than the middle element. If x is equal to the middle element, we have found our answer, so we return the index of the middle element as our result. If x is greater than the middle element, we can discard all the elements to the left of the middle element. 
We can safely do this because we know the input array is sorted: if x is greater than the middle element, it must be greater than all elements before the middle element, so we definitely cannot find x by searching the left half of the array. Hence we discard all the elements in the left half. Our new search space is now the right half (all the elements to the right of the middle element), so we update left = mid + 1 and right remains the same. Thus our new search space is from index mid + 1 to index right . Similarly, if x is less than the middle element, there is no benefit in searching the right half, since x will be less than all elements to the right of the middle element. Therefore we update right = mid - 1 and left remains the same. Thus our new search space is from index left to index mid - 1 (the left half). We repeat steps 2 - 6 until we find the target element or the search space reduces to 0 elements (target element not present in the array). If the given target element is not found we return -1. Note : Our formula for the middle element, mid = (left + right) / 2, can result in overflow for large arrays. We can alternately use the formula mid = left + (right - left) / 2 to prevent the overflow condition. How does mid = left + (right - left) / 2 prevent overflow? Please refer to the detailed explanation here . Working Consider we are given the array [1, 3, 5, 7, 9, 11, 13, 15, 17, 19] and the target element x = 19. Initially our search space is the entire array. We create two variables left and right representing the indexes of the first and last elements in the search space. Initially left = 0 and right = 9, so mid = (0 + 9) / 2 = 4.5 ~ 4. Now we compare x to the middle element 9. Clearly 19 > 9, so we discard all the elements to the left of mid. 
Our new search space is now everything to the right of the middle element, so we update left = mid + 1 = 5 and right remains the same. Again we calculate the middle element using our new left and right values: mid = (5 + 9) / 2 = 7. We repeat the step of comparing x to the middle element. Again x is greater than the middle element (19 > 15), so we shift our search space to the right half by updating left = mid + 1 = 7 + 1 = 8, and right remains the same. Thus our new search space is from index 8 to 9. Now mid = (8 + 9) / 2 = 8.5 ~ 8. Again 19 > 17, so left = mid + 1 = 8 + 1 = 9 and right remains the same. Now the target element x is equal to the middle element, i.e. arr[9] = x = 19. We have found the target element we are searching for, so we return 9 as our result, indicating that the target element is found at index 9 of the given input array. Implementation We can implement binary search using both iterative and recursive approaches. Complexity Analysis Time Complexity : O(log n) In the worst case our algorithm ends up doing log n comparisons to find the target element . The worst case occurs when the target element is the first or last element in the input array, or when the target element is not present in the array at all. How is the time complexity O(log n)? Consider an array with 8 elements, arr = {2, 4, 6, 8, 10, 12, 14, 16}, and a target element of 2. 
As we know, binary search discards half the elements in each comparison. In the first comparison the search space reduces to 4 elements, in the second comparison it reduces to 2, and then to 1. So in total we had to make 3 comparisons. Similarly, if the array had 16 elements, it would take 4 comparisons in the worst case, i.e. 16 -> 8 -> 4 -> 2 -> 1. Now if you observe both cases, n = 8 and n = 16, the number of comparisons corresponds to a logarithmic function with base 2: for n = 8, log2 8 = 3, and for n = 16, log2 16 = 4, which correlates to the 3 and 4 comparisons our algorithm had to do respectively. This relation holds for all values of n, and hence the time complexity is O(log n). Space Complexity : O(1) No extra space is used for the iterative approach. For the recursive approach the space complexity is O(log n) if you consider the recursion call stack, otherwise O(1). Advantages Binary search eliminates half the elements in each iteration and thus has a time complexity of O(log n), making it more efficient than other search algorithms like linear search . Binary search is preferred over other search algorithms especially when the size of the input array is large. Disadvantages Binary search cannot be efficiently implemented for linked lists, as a linked list does not allow random access in O(1) time the way an array does. Remember that quick access to elements is needed to reach the middle element efficiently in binary search. Binary search only works on sorted arrays. 
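The iterative and recursive approaches described above can be sketched in Go as follows. This is a hedged sketch rather than the article's original listing; it uses the overflow-safe mid = left + (right - left) / 2 formula from the Note above:

```go
package main

import "fmt"

// binarySearch returns the index of target in the sorted slice arr, or -1.
func binarySearch(arr []int, target int) int {
	left, right := 0, len(arr)-1
	for left <= right {
		mid := left + (right-left)/2 // avoids overflow of left+right
		switch {
		case arr[mid] == target:
			return mid // found the target
		case arr[mid] < target:
			left = mid + 1 // discard the left half
		default:
			right = mid - 1 // discard the right half
		}
	}
	return -1 // target not present
}

// binarySearchRecursive is the equivalent recursive formulation.
func binarySearchRecursive(arr []int, target, left, right int) int {
	if left > right {
		return -1 // search space is empty
	}
	mid := left + (right-left)/2
	if arr[mid] == target {
		return mid
	}
	if arr[mid] < target {
		return binarySearchRecursive(arr, target, mid+1, right)
	}
	return binarySearchRecursive(arr, target, left, mid-1)
}

func main() {
	arr := []int{1, 3, 5, 7, 9, 11, 13, 15, 17, 19}
	fmt.Println(binarySearch(arr, 19))                        // 9
	fmt.Println(binarySearchRecursive(arr, 5, 0, len(arr)-1)) // 2
	fmt.Println(binarySearch(arr, 4))                         // -1
}
```

The indexes printed are array-index based, matching the worked example above where the target 19 is found at index 9.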
