
What are Distributed CACHES and how do they manage DATA CONSISTENCY? 

Gaurav Sen
572K subscribers
969K views

Caching in distributed systems is an important aspect of designing scalable systems. We first discuss what a cache is and why we use it. We then talk about the key features of a cache in a distributed system.
The cache management policies of LRU and sliding window are mentioned here. For high performance, the cache eviction policy must be chosen carefully. To keep data consistent and the memory footprint low, we must choose a write-through or write-back consistency policy.
Cache management is important because of its relation to cache hit ratios and performance. We talk about various scenarios in a distributed environment.
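As a rough illustration of the LRU eviction policy mentioned above, here is a minimal Python sketch (the class and method names are my own, not taken from the video or its code repository):

from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # least recently used entry sits at the front

    def get(self, key):
        if key not in self.entries:
            return None  # cache miss: the caller falls back to the database
        self.entries.move_to_end(key)  # mark as most recently used
        return self.entries[key]

    def put(self, key, value):
        if key in self.entries:
            self.entries.move_to_end(key)
        self.entries[key] = value
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict the least recently used entry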
System Design Video Course:
interviewready.io
00:00 Who should watch this video?
00:18 What is a cache?
02:14 Why not store everything in a cache?
03:00 Cache Policies
04:49 Cache Evictions and Thrashing
05:52 Consistency Problems
06:32 Local Caches
07:49 Global Caches
08:56 Where should you place a cache?
09:35 Cache Write Policies
11:38 Hybrid Write Policy?
13:10 Thank you!
A complete course on how systems are designed. Along with video lectures, the course has architecture diagrams, capacity planning, API contracts, and evaluation tests.
System Design Playlist: • System Design for Begi...
Code: github.com/coding-parrot/Low-...
You can follow me on:
Facebook: / gkcs0
Quora: www.quora.com/profile/Gaurav-...
LinkedIn: / gaurav-sen-56b6a941
Twitter: / gkcs_
References:
Guava Cache - github.com/google/guava/wiki/...
LRU - www.mathcs.emory.edu/~cheung/C...
en.wikipedia.org/wiki/Cache_r...
Implementation of Sliding Window Cache policies (Caffeine) - github.com/ben-manes/caffeine
highscalability.com/blog/2016/...
docs.microsoft.com/en-us/prev...
#SystemDesign #Caching #DistributedSystems

Published: 1 Jun 2024

Comments: 526
@VrajaJivan 5 years ago
Gaurav, nice video. One comment: a write-back cache refers to writing to the cache first; the update then gets propagated to the DB asynchronously from the cache. What you're describing as write-back is actually write-through, since in write-through the order of writing (to the DB or cache first) doesn't matter.
@gkcs 5 years ago
Ah, thanks for the clarification!
@KumarAbhishek123 5 years ago
Yes, would be great if you can add a comment saying correction about the 'Write back cache'. Thanks for the great video!
@gururajsridhar7314 5 years ago
I agree... a comment in the video correcting this would be a good update.
@mrityunjoynath7673 4 years ago
So Gaurav was also wrong in saying "write-back" is a good policy for distributed systems?
@jyotipandey9218 4 years ago
@Gaurav Yes that would be great. That part was confusing, had to read about that separately.
@waterislife9 3 years ago
Write-through: data is written to the cache and the DB; I/O completion is confirmed only when data is written in both places.
Write-around: data is written to the DB only; I/O completion is confirmed when data is written to the DB.
Write-back: data is written to the cache first; I/O completion is confirmed when data is written to the cache; data is written to the DB asynchronously (background job) and does not block the request from being processed.
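For illustration, a minimal Python sketch of the three policies described in this comment, assuming simple cache and db helper objects (all names are illustrative, not from the video):

def write_through(key, value, cache, db):
    cache.put(key, value)
    db.write(key, value)  # the request completes only after both writes succeed

def write_around(key, value, cache, db):
    db.write(key, value)  # the cache is not touched; a later read miss re-populates it

def write_back(key, value, cache, dirty_keys):
    cache.put(key, value)   # the request is acknowledged here
    dirty_keys.add(key)     # a background job later flushes dirty entries to the DB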
@rajee120 1 year ago
Q
@GK-rl5du 5 years ago
Other variants:
1. There are 2 hard problems in computer science: cache invalidation, naming things, and off-by-1 errors.
2. There are only two hard problems in distributed systems:
2. Exactly-once delivery
1. Guaranteed order of messages
2. Exactly-once delivery
@gkcs 5 years ago
Hahahaha!
@GK-rl5du 5 years ago
@@gkcs A humble suggestion, I think you should have a sub-reddit for the channel, because these are such critical topics [not just for cracking interviews], I'm sure they'd definitely encourage healthy discussions. I think YT's comment system is not really ideal to have/track conversations with fellow channel members.
@RAJATTHEPAGAL 4 years ago
This is an underrated comment .... 😂😂😂
@kumarakantirava429 4 years ago
@gkcs Can you please give some hints on why "out-of-order delivery" is a problem in distributed systems if the application is running on TCP? Please kindly reply.
@kumarakantirava429 4 years ago
@goutham Kolluru, can you please give a hint on why "out-of-order delivery" is a problem in distributed systems if the application is running on TCP? Please kindly reply.
@mengyonglee7057 1 year ago
Notes:
In-memory caching
- Save memory cost - for commonly accessed data
- Avoid re-computation - for frequent computations like finding the average age
- Reduce DB load - hit the cache before querying the DB
Drawbacks of a cache
- Hardware (SSD) is much more expensive than a DB
- As we store more data in the cache, search time increases (counter-productive)
Design
- Database (infinite information) vs cache (relevant information)
Cache policy
- Least Recently Used (LRU) - top entries are recent entries; remove the least recently used entries in the cache
Issues with caches
- Extra calls - when we can't find an entry in the cache, we query the database
- Thrashing - putting data into and out of the cache without ever using the results
- Consistency - when we update the DB, we must maintain consistency between the cache and the DB
Where to place the cache
- Close to the server (in memory)
  - Benefit: fast
  - Issue: maintaining consistency between the memory of different servers, especially for sensitive data such as passwords
- Close to the DB (global cache, e.g. Redis)
  - Benefit: accurate, able to scale independently
Write-through vs write-back
- Write-through - update the cache before updating the DB - not possible for multiple servers
- Write-back - update the DB before updating the cache - Issue: performance - when we update the DB and keep updating the cache based on that, much of the data in the cache will be fine and invalidating it will be expensive
- Hybrid
  - Any update first writes to the cache
  - After a while, persist entries in bulk to the database
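The "extra call" on a cache miss described in these notes is easiest to see in a cache-aside read path; a small Python sketch follows (get_user, cache, and db are assumed helper names, not from the video):

def get_user(user_id, cache, db):
    user = cache.get(user_id)
    if user is not None:
        return user  # cache hit: no database query needed
    # cache miss: the extra call to the database
    user = db.query("SELECT * FROM users WHERE id = %s", (user_id,))
    cache.put(user_id, user)  # populate the cache so the next read is a hit
    return user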
@pushp3593 5 months ago
Nice, but the write-through and write-back part of the notes is wrong; please correct it. You can check the other comments. Thanks.
@cheerladinnemouli2864 3 months ago
Nice notes
@mannion1985 4 years ago
I can already hear the interviewer asking "with the hybrid solution: what happens when the cache node dies before it flushes to the concrete storage?" You said you'd avoid using that strategy for sensitive writes, but you'd still stand to lose up to the size of the buffer you defined on the cache in the event of failure. You'd have to factor that risk into your trade-off. Great video, as always. Thank you!
@SatyadeepRoat 3 years ago
I'm actually using write-back Redis in our system, but this video helped me understand what's happening overall. Great video.
@devinsills1281 2 years ago
A few other reasons not to store completely everything in cache (and thereby ditching DBs altogether) are (1) durability since some caches are in-memory only; (2) range lookups, which would require searching the whole cache vs a DB which could at least leverage an index to help with a range query. Once a DB responds to a range query, of course that response could be cached.
@mayankvora8329 3 years ago
I don't know how people can dislike your video Gaurav, you are a master at explaining the concepts.
@AnonyoX 4 years ago
Great video. But I wanted to point out that, I think what you are referring to as 'write-back' is termed as 'write-around', as it comes "around" to the cache after writing to the database. Both 'write-around' and 'write-through' are "eager writes" and done synchronously. In contrast, "write-back" is a "lazy write" policy done asynchronously - data is written to the cache and updated to the database in a non-blocking manner. We may choose to be even lazier and play around with the timing however and batch the writes to save network round-trips. This reduces latency, at the cost of temporary inconsistency (or permanent if the cache server crashes - to avoid which we replicate the caches)
@Sound_.-Safari 3 years ago
Cache doesn’t stop network calls but does stop slow costly database queries. This is still explained well and I’m being a little pedantic. Good video, great excitement and energy.
@anjurawat9274 4 years ago
I watched this video 3 times because of confusion, but your pinned comment saved me. Thank you, sir.
@bhavyeshvyas2990 5 years ago
Dude, you are the reason for my interest in system design. Thanks, and never stop making system design videos.
@jsf17 3 years ago
The world needs more people like you. Thank you!
@kabooby0 3 years ago
Great content. Would love to hear more about how to solve cached data inconsistencies in distributed systems.
@Satu0King 4 years ago
Description for write back cache is incorrect. Write-back cache: Under this scheme, data is written to cache alone and completion is immediately confirmed to the client. The write to the permanent storage is done after specified intervals or under certain conditions. This results in low latency and high throughput for write-intensive applications, however, this speed comes with the risk of data loss in case of a crash or other adverse event because the only copy of the written data is in the cache.
@gkcs 4 years ago
Thanks for pointing this out Satvik 😁👍
@justinmancherje6168 4 years ago
I believe the description in the video given for write-back cache is actually a write-around cache (according to grokking system design)
@mostinho7 4 years ago
What if the cache itself is replicated? Will write-back still have a risk of data loss?
@arpansen964 2 years ago
Yes, as per my understanding, write-through cache: when data is written to the cache, it is also modified in the main memory; write-back cache: when dirty data (changed data) is evicted from the cache, it is written to the main memory, so a write-back cache will be faster. The whole explanation around these two concepts in this video seems fuzzy.
@legozxx6655 5 years ago
Great explanation. You are making my revision so much easier. Thanks!!
@jajasaria 5 years ago
always watching your videos. topic straight to the point. keep uploading man. thanks always.
@user-oy4kf5wr8l 4 years ago
Each of your videos I watched at least twice lol, thank you!! WE ALL LOVE U! U R THE BEST!
@rishiraj9131 2 years ago
I also watch his videos many times. At least 4 times, to be precise.
@zehrasubas9768 4 years ago
Hi Gaurav, I really like your videos, thank you for sharing! I need to point out something about this video. Writing directly to the DB and updating the cache afterwards is called write-around, not write-back. The last option you provided, writing to the cache and updating the DB after a while if necessary, is called write-back.
@gkcs 4 years ago
Thanks Zehra 😁
@rahuljain5642 2 years ago
If someone explains any concept with confidence & clarity like you in the interview, he/she can rock it seriously. Heavily inspired by you & love your content of system design. Thanks for the effort @Gaurav Sen
@harisridhar6698 2 years ago
Hi Gaurav - good video on distributed caching! This expands a bit more on what I learned in my computer architecture class - I didn't recall thrashing the cache too well, or what distinguished write-through vs. write-back. I think learning caching in the context of networks is more interesting, since it was initially introduced as a way to avoid hitting disk ( on a single machine ), but is also a way to reduce network calls invoked from server to databases.
@akash.vekariya 3 years ago
This man is literally insane in explanation 🔥
@semperfiArs 5 years ago
Extremely good video series, bro. Just subscribed yesterday and loving it so far. I suggest you start an interview series where you answer a few important questions. It would be helpful.
@an_R_key 4 years ago
You articulate these concepts very well. Thanks for the upload.
@manasbudam7192 4 years ago
What you explained as a write-back cache is actually a write-around cache. In a write-back cache, you update only the cache during the write call and update the DB later (either on eviction or periodically in the background).
@NohandleReqd 2 years ago
Teaching and learning are processes. Gaurav makes it fun to learn about stuff, whether it be systems or the egg-dropping problem. I might just take the InterviewReady course to participate in the interactive sessions. Take a bow!
@neeraj91mathur 3 years ago
Nice video Gaurav, really like your way of explaining. Also, the fast forward when you write on board is great editing, keeps the viewer hooked.
@aswath_s 4 years ago
Awesome explanation gaurav. You're cool man. We want a lottt more from you. We admire your ability to explain topics with great simplicity.
@vakul121 4 years ago
It is a really great video. Finally found a detailed video. Thank you for sharing your knowledge!!
@rajeevkulkarni2888 2 years ago
Thank you so much for these videos! Using this, I was able to pass my system design interview.
@kfqfguoqf 4 years ago
Your System Design videos are very good and helpful, thanks!
@OwenValentine 5 years ago
Gaurav, what you initially described as write-back at around 10:30 I have seen described as write-around. Write-back is where you write to the cache and get confirmation that the update was made, then the system copies from the cache to the database (or whatever authoritative data store you have) later... be it milliseconds or minutes later. Write through is reliable for things that have to be ACID but it is slower than write back. You later describe what I have always heard as write-back at around 12 and a half minutes
@gkcs 5 years ago
Yes, I messed up with the names. Thanks for pointing it out 😁
@muhammadanas11 3 years ago
The way you explained the concepts is AWESOME. Can you please create a video that describes Docker and containers in your style?
@JinkProject 4 years ago
this video was gold. studying for my facebook on-site and i need to understand a bit more how backend works. cheers @gaurav sen
@enfieldli9296 2 years ago
I just can't find better content on YT than this, thanks man!
@prakharpanwaria 2 years ago
Good video around basic caching concepts. I was hoping to learn more about Redis (given your video title)!
@VikramKumar-qo3rg 3 years ago
Fun part. I was going through 'Grokking The System Design Interview' course, found the term 'Redis', started searching for more on it on youtube, landed here, finished the video and Gaurav is now asking me to go back to the course. Was going to anyway! :)
@gkcs 3 years ago
Hahaha!
@pat2715 3 months ago
amazing clarity, intuitive explanations
@chenwang7194 3 years ago
Nice video, thanks! For the hybrid mode, when S1 persists to the DB in bulk, S2 still has the old data, right? How do we update S2?
@silentknight2851 4 years ago
Hey Gaurav, over the holidays I'll watch your videos day in and day out... so please teach new topics asap. I love listening to you.
@TheHalude 4 years ago
Thanks, this was a good video for a high-level overview of caching; easy to follow.
@muraliboddu4007 2 years ago
nice quick video to get an overview. thanks Gaurav. you are helping a lot of people.
@akashnag3879 5 years ago
Loved it.. Thank you for such amazing video.. keep coming up with more.
@happilysmpl 2 years ago
Excellent! Great video with tremendous info and design considerations
@rishiraj1616 5 years ago
This is my first video on your channel and I must say that you explain very well! You seem professional and knowledgeable, and you researched your topic well!
@jayantsogani8389 5 years ago
Thanks Gaurav, your lectures helped me crack MS. Keep posting videos.
@gkcs 5 years ago
Congrats!
@shubham.1172 4 years ago
Are you in the Hyd campus?
@andreigatej6704 5 years ago
Very well explained! Thank you
@psrajput09 5 years ago
Superb, now I started using my free time to learn something ! Thanks!!!
@codingart7736 5 years ago
Loved your sharing. Thanks a lot!
@Not0rious7 3 years ago
You continue to offer great content. thank you !
@renon3359 5 years ago
Keep making these videos brother, great job as always. :)
@shreyasns1 2 years ago
Thank you for the video. You could have gone a little deeper into how the cache is implemented. What's the underlying data structure of the cache?
@CloudXpat 3 years ago
Great explanation for caching. I believe you'll go far.
@sandeepk9640 3 years ago
Nicely packed, a lot of information at a glance. Great work.
@daysimples7658 4 years ago
Summary
Caching can be used for the following purposes:
- Reduce duplication of the same request
- Reduce load on the DB
- Fast retrieval of already computed things
Cache runs on SSD (RAM) rather than on commodity hardware.
Don't overload the cache, for obvious reasons:
- It is expensive (hardware)
- Search time will increase
Think of two things (you obviously want to keep the data that is going to be most used - so predict):
- When will you load data into the cache
- When will you evict data from the cache
Cache policy = cache performance:
- Least Recently Used
- Least Frequently Used
- Sliding Window
Avoid thrashing in the cache: putting data into the cache and removing it without ever using it.
Issues can be of data consistency: what if the data has changed?
Problems with keeping the cache in server memory (in-memory):
- What if the server goes down (the cache will go down with it)
- How to maintain consistency of data across caches
Mechanisms:
- Write-through: always write first to the cache if there is an entry, and then write to the DB. The second part can be synchronous. But if you have an in-memory cache for every server, you will obviously run into data inconsistency again.
- Write-back: go to the DB, make an update, and check the cache - if you have the entry, evict it. But suppose there isn't any important update and you keep evicting entries from the cache like this - you can again fall into thrashing.
- One can use a hybrid approach as per the use case.
Thanks to @GauravSen
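A rough Python sketch of the hybrid idea in this summary (write to the cache first, persist dirty entries to the database in bulk later); the locking scheme and the bulk_write method are assumptions, not something the video specifies:

import threading

class WriteBehindCache:
    def __init__(self, db):
        self.db = db
        self.entries = {}
        self.dirty = set()
        self.lock = threading.Lock()

    def put(self, key, value):
        with self.lock:
            self.entries[key] = value
            self.dirty.add(key)  # acknowledged before the database sees the write

    def flush(self):  # meant to be called periodically by a background job
        with self.lock:
            batch = {k: self.entries[k] for k in self.dirty}
            self.dirty.clear()
        if batch:
            self.db.bulk_write(batch)  # one bulk write instead of many small ones
        # anything buffered here is lost if this node dies before flush() runs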
@manishamulchandani1500 3 years ago
I have one doubt regarding the cache policy. Gaurav explained that for critical data we use the write-back policy to ensure consistency. In write-through, one instance's in-memory cache gets updated and the others can remain stale.
1) My question is that the same can happen in write-back: one instance's in-memory cache entry gets deleted and we update the DB, but other instances still have that entry. So there is inconsistency in write-back as well. Why do we prefer write-back for critical data when the same issue exists there? If the answer is "invalidate the entry in every instance's in-memory cache", then the same can be done for write-through - which brings me to question 2.
2) My other question is: we could update every instance's in-memory cache entry and then update the DB. That way consistency is maintained, so why don't we use this for critical data like passwords and financial information?
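One common way to address the invalidation question above is to broadcast invalidations to every server's local cache; below is a hedged sketch using Redis pub/sub (the channel name and helper functions are assumptions; the video does not prescribe this approach):

import redis

r = redis.Redis(host="localhost", port=6379)

def invalidate_everywhere(key):
    # called by whichever server performs the write
    r.publish("cache-invalidation", key)

def run_invalidation_listener(local_cache):
    # each server runs this loop and drops stale entries from its own in-memory cache
    pubsub = r.pubsub()
    pubsub.subscribe("cache-invalidation")
    for message in pubsub.listen():
        if message["type"] == "message":
            local_cache.pop(message["data"].decode(), None)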
@oscarjesusresendiz100 4 years ago
Great explanation Dude! You killed it!
@souradiptachoudhuri4724 5 years ago
This video is just awesome. Thank you
@asankaherath1744 3 months ago
Thank you so much..! your videos are really valuable. Really appreciate your effort, sir.!!
@nishantmahajan55 4 years ago
Very helpful and to the point. Thanks.
@djanupamdas 5 years ago
I think simply saying THANK YOU would be too little for this help!!! Superb video.
@gkcs 5 years ago
Glad to help :)
@jagatsastry 4 years ago
I mean you can always do more by becoming a channel member 😄
@jatinchugh373 5 years ago
dude your content is amazing
@kumarp1976 3 years ago
Good content. However, the Hybrid approach is based only on write through. There should be a Hybrid approach on write back too. The system (post db entry) should also not just invalidate, but update the cache with new data. That way, both persistent and cache storage has new data. No cache miss for next read. What do you think @gaurav?
@RpraneelK 3 years ago
Very informative and concepts explained clearly. Thanks
@1970mcgraw 3 years ago
Excellent info and presentation - thanks!
@ZALOP123 3 years ago
@Gaurav Sen Thanks for the video. I have a question for you: I have a system where I need to cache a small amount of data (user IDs to usernames - in a map). This is so that on the front end I can populate the data with the auto-complete feature on the UI. For this scenario, would you recommend a cache infrastructure like Redis? Or could we just use a Concurrent HashMap in a programming language like Java to do this? thanks.
@ravimulchandani2916 1 year ago
Nice Explanation Gaurav. This video covers basics of caching. In one of the interviews, I was asked to design the Caching System for stream of objects having validity. Is it possible for you to make some video on this system design topic?
@i2chang 4 years ago
Thank you, this video is very good!
@majortakleef8445 4 years ago
Gaurav, what you are describing as a Write Back cache is actually called Write Around cache. What you describe as the hybrid mechanism, is actually called the Write Back cache. In both assumption is an asynchronous update unlike Write Through where update is synchronous. Might be worth taking this video offline and uploading a corrected version to avoid misleading folks prepping for interviews.
@AbhideepChakravarty 3 years ago
The drawback of write-through you explained is equally applicable to write-back, i.e. I null the value in S1 and the value is still not null in S2. The major thing is: Redis is not a distributed cache. Even their own definition does not include the word "distributed" - Redis is an open source (BSD licensed), in-memory data structure store, used as a database, cache and message broker. It supports data structures such as strings, hashes, lists, sets, sorted sets with range queries, bitmaps, hyperloglogs, geospatial indexes with radius queries and streams. Redis has built-in replication, Lua scripting, LRU eviction, transactions and different levels of on-disk persistence, and provides high availability via Redis Sentinel and automatic partitioning with Redis Cluster.
@grijeshmnit 4 years ago
Your explanations have improved a lot now... the correct sequencing of information was missing in your earlier videos... Maybe you can improve your flowcharts and figures further to make them less confusing...
@timhomstad 2 years ago
Do you implement caching on most systems? It adds complexity; how can you determine whether it is worth the additional effort to develop? Love the videos, by the way. These are a great learning tool; you do a great job.
@ananava254 3 years ago
Thank you Gaurav, it was a really good explanation
@jask00 5 years ago
Great job. Really well explained
@raghavabharati 4 years ago
Please refer to the caching topic page on Educative (which you've recommended). Are the explanations in your video and on that page aligned?
@sirunworld 3 years ago
Great going, Gaurav. You have a great future!
@CodeSbyAniz 3 years ago
You have explained it very nicely. Thanks.
@osheennayak9477 4 years ago
Hi Gaurav, there is a slight confusion: as per "Grokking the System Design Interview", write-back caching means writing to the cache only and then later writing to the DB - something that you mentioned as the hybrid approach. Can you please clarify?
@sharifulhaque6809 3 years ago
Very easy to understand, Gaurav. Thanks a lot!!!
@billyean 1 year ago
Explained like the candidate I interviewed today.
@zainsyed9811 5 years ago
Awesome overview thanks. One other possible issue with write-through - it's possible to make the update to the cache then the DB update itself fails. Now your cache and db will be inconsistent.
@gkcs 5 years ago
True 😁
@kartikpandey3631 4 years ago
Hi Gaurav, great video and nice presentation skills (y). I just wanted to know: if we use write-through with a global cache, would it be a better approach, since there would be consistency as well as the high performance of write-through? And can it be used for critical data?
@KajkoCar 4 years ago
Title: What is Distributed Caching? Explained... There is not a single 'D' in this 'Distributed' explanation. You are talking about the 'cache' and its variations in implementation ONLY. All in all, change the title to 'What is caching?'
@ashishverma-mj1kl 3 years ago
@7:55
@pranavsurampudi6838 3 years ago
One observation: a cache need not run on expensive hardware; for a cache, one would use memory-centric instances in the cloud, not SSDs. And caches can be used in place of a database if the data size is relatively small and you require high throughput and efficiency.
@AmitKumar-je7rn 2 years ago
I have one doubt. The definition you gave for write-back should be for write-around. In write-around, we hit the DB first and then update the cache. In write-back, we first update the cache and then wait for some time to bulk write in DB. Please let me know if my understanding is wrong.
@rahulchawla6696 1 year ago
wonderfully explained. thanks
@ShaliniNegi24 4 years ago
Nice Explanation! Thanks :)
@mkgcodes 5 years ago
This one is very helpful for me. Many thanks Gaurav.
@gkcs 5 years ago
Cheers!
@ashwinasokan 1 year ago
Bhai. u r a life saver! Brilliant tutoring. Thank you!
@rsragsh55 4 years ago
Hi Gaurav, your videos are awesome and a great help. I would really appreciate it if you could make a video on how to get AWS certification.
@mostinho7 4 years ago
Once we have updated an entry in the database, how does, say, S2 or S_n (any other server) know that this entry has been updated in the DB and that it needs to invalidate it?
@siris3957 5 years ago
You explain so well! :) Thank you
@gkcs 5 years ago
Thanks!
@ivandrofly 4 years ago
My boy looks very energized... keep it up!
@gkcs 4 years ago
😁
@hareendranep8422 4 years ago
Very nice presentation. Simple, powerful and fast. Keep up the style!
@gkcs 4 years ago
Thank you!
@ANILKHANDEI 4 years ago
Nice video. A few questions: what is the use case for storing a password in the cache? I had always considered server-side caching. I was unaware of a global cache where multiple servers are able to access the same cache; I will read more about it.
@sivaram2492 2 years ago
A label/comment in the video about the change of usage w.r.t. write-back and write-through would help future viewers. I never saw the pinned comment until recently. This could have backfired in an interview.
@jazeem10 4 years ago
This isn't distributed caching; this is simply about caching & Redis...
@larskrenning260 3 years ago
@@deshkarabhishek This indeed again an example of "click bait". A person saying X but - as many others before him - explaining Y. Where Y is The Basics, and X is The Difficult. These people who this "click bait" trick are mostly people from India. I'm not saying that all Indian people upload worthless info, some of them are really spectacular - but 100% of the worthless info are from India. With regards to Redis / Caching - my guess it that RedisLabs acknowledged this "click bait" problem and uploads extremely good info. (And some of this info is actually done by some ultra intelligent Indians - because when an Indian is intelligent, he / she is extremely intelligent)
@YashArya01 3 years ago
@@larskrenning260 I think you gotta keep in mind that some of what you're seeing is because of the high population and because of the higher proportion of Indians pursuing engineering. :) So I'm not sure you get anything of value from that anecdotal observation.
@namangarg3933 3 years ago
@deshkarabhishek Well, that's bad. It would be great if you could share a video with your production experience. Maybe Gaurav can also learn about 'DISTRIBUTED' cache from you.
@shubhammadankar6390 3 years ago
@@namangarg3933 correct
@TheAppAlchemist 3 years ago
@@larskrenning260 lol, are you a jealous pig? cuz your comment sounds like a nazi who is not potty trained, this is youtube, not toilet, please behave and inform yourself before commenting such stupid stuff. your comment makes me feel go and throw up 100% crap people like you make this world stink I agree this video was not his best video, but you all are here and learning from him your comment shows how much of ignorant you are I would delete it if I was you
@meletisflevarakis40 4 years ago
Your explanation is awesome. Keep it up!
@gkcs 4 years ago
Thanks!
@rupeshpatil6957 3 years ago
Thanks for the video, Gaurav. What if the global cache itself fails? What are the different backup strategies for it?
@openretailsstore3808 2 years ago
@Gaurav Sen - How can network calls be reduced with a distributed cache, where the cache itself is distributed? And why is a distributed cache faster than a database?