
19. System Design: Distributed Cache and Caching Strategies | Cache-Aside, Write-Through, Write-Back 

Concept && Coding - by Shrayansh
123K subscribers
32K views
Published: 9 Sep 2024

Comments: 48
@rv0_0 · 1 year ago
Hi brother. I cannot appreciate the videos enough. For the last two days I have been binge-watching. Please upload more videos on designing systems.
@ConceptandCoding · 1 year ago
Sure, I will.
@sdash2023 · 1 year ago
A much-needed concept. Thanks for this lesson. Sir, please bring part 2 quickly. Also, how can we relate Redis here? It would help if you could explain a use case with a Redis server.
@ConceptandCoding · 1 year ago
Sure. Redis is a distributed cache, so everything I explained applies to Redis as well.
@HimanshuKumar-xz5tk · 16 days ago
Read-through cache: the cache server sits in front of the DB, and on a miss it fetches the data from the DB and updates the cache itself. A cache library, by contrast, means it is the application server that interacts with the DB.
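A minimal sketch of the distinction in that comment (the ReadThroughCache class and the db_fetch loader are illustrative names, not anything from the video): in read-through, the cache layer itself owns the DB lookup on a miss, so the application only ever talks to the cache.

```python
import time

class ReadThroughCache:
    """Read-through: the cache layer itself loads from the DB on a miss."""

    def __init__(self, db_fetch, ttl_seconds=3600):
        self._db_fetch = db_fetch      # loader supplied once, e.g. a DAO call
        self._ttl = ttl_seconds
        self._store = {}               # key -> (value, expires_at)

    def get(self, key):
        entry = self._store.get(key)
        if entry and entry[1] > time.time():
            return entry[0]                          # cache hit
        value = self._db_fetch(key)                  # miss: the cache, not the app, calls the DB
        self._store[key] = (value, time.time() + self._ttl)
        return value

# The application never queries the DB directly for reads:
cache = ReadThroughCache(db_fetch=lambda key: f"row-for-{key}")
print(cache.get("user:42"))
```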
@sumitbasu5146 · 8 months ago
Hi Shreyansh, thank you for the wonderful video and the contributions you have made so far. One question: when will the second part of this video, the cache eviction policy one, be out? Or am I missing something here?
@HimanshuKumar-xz5tk · 16 days ago
Cache-Aside Caching Strategy: Data is first read from the cache. If present, it is returned immediately; if not, the application fetches it from the DB, updates the cache, and returns it. Pros: control over what gets cached. Cons: data inconsistency issues; complexity at the application layer.
Read-Through Caching Strategy: Data is read from the cache. If present, it is returned immediately; if not, it is fetched from the DB by the cache server itself, written into the cache, and returned. Pros: simple. Cons: less control over what gets cached.
Write-Around Caching Strategy: Data is written to the DB; the cache is updated only when a read operation is performed. Pros: fewer data inconsistency issues. Cons: cache misses, since the cache is not always up to date.
Write-Through Caching Strategy: Data is written to the DB and the cache in the same transaction, ensuring the cache is always consistent with the DB. Pros: always-consistent data. Cons: higher write latency.
Write-Back Caching Strategy: Data is written to and read from the cache; the DB is updated asynchronously via a queue/scheduler. Pros: low latency. Cons: chance of data loss.
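As a rough illustration of the cache-aside read path and the write-around write path summarised above (plain dicts stand in for the cache and the DB; the key names are made up):

```python
cache = {}                                    # stand-in for a distributed cache such as Redis
db = {"user:42": {"name": "Asha", "coins": 10}}

def read_cache_aside(key):
    """Cache-aside read: the app checks the cache, falls back to the DB, then fills the cache."""
    if key in cache:
        return cache[key]                     # hit
    value = db.get(key)                       # miss: the application queries the DB itself
    if value is not None:
        cache[key] = value                    # populate the cache for later reads
    return value

def write_around(key, value):
    """Write-around: the write goes to the DB only; invalidate so a later read repopulates."""
    db[key] = value
    cache.pop(key, None)

print(read_cache_aside("user:42"))            # miss -> DB -> cache
write_around("user:42", {"name": "Asha", "coins": 25})
print(read_cache_aside("user:42"))            # miss again after invalidation, sees the new value
```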
@vivekchourasiya1875 · 1 year ago
The video was very good and the theory is clear now, but sir, please also explain in detail how to select a strategy in practice and what all is involved.
@ConceptandCoding · 1 year ago
I have shared the pros and cons; those will help you select which strategy to use.
@harshagrawal007 · 3 months ago
Could you please share or link the video that contains part 2, cache eviction policies?
@ConceptandCoding · 3 months ago
I have not covered it yet.
@ujjalroy1442 · 1 year ago
Very useful video; indeed one of the best I have ever seen.
@ConceptandCoding · 1 year ago
Thank you
@sachinmagdum · 9 months ago
Hi Shreyansh, does the write-around strategy effectively solve the consistency problem? Imagine this scenario: at T1, thread A reads a record from the database. At T2, thread B writes the latest value of that record to the database. Subsequently, at T3, thread B invalidates the cache, signalling that the record is no longer valid. However, at T4, thread A updates the cache with the stale value it read at T1. This raises concerns about the efficacy of the write-around approach in ensuring consistency: despite the cache invalidation at T3, the update at T4 reintroduces a stale value, potentially leading to data inconsistencies. Let me know your thoughts on this.
@sraynitjsr · 1 year ago
Thanks Shreyansh sir, loved it.
@ConceptandCoding · 1 year ago
Thank you
@guy_whocode · 1 year ago
Very nicely done, Shreyansh. Good job!
@ConceptandCoding · 1 year ago
Thank you
@ManishTiwari-or8zt · 3 months ago
Can you please make a video on implementing caching techniques in a distributed system?
@mitulgupta9258 · 1 year ago
Hi Shreyansh! Just one question: in write-around cache we mark the entry in the cache as dirty when it is updated in the DB. Do we also need a two-phase commit in this case, as in write-through cache? Marking the value as dirty in the cache is a necessary operation that must complete along with the DB update.
@anirbunt · 3 months ago
How is write-back cache fault-tolerant? (a) What if there is downtime in the messaging service and the server can't push the message to the queue? (b) What if the server crashes before sending the message to the queue, while it has already returned a successful response?
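For context, a hedged sketch of the write-back flow the question refers to, with an in-process queue standing in for a real message broker (all names here are assumptions): the write lands in the cache and is flushed to the DB asynchronously, which is exactly where the data-loss window in (a) and (b) comes from.

```python
import queue
import threading

cache = {}
db = {}
pending = queue.Queue()            # stand-in for Kafka/SQS/etc.

def write_back(key, value):
    """Write-back: update the cache and enqueue the DB write for later."""
    cache[key] = value
    pending.put((key, value))      # if the process dies before the flush, this write is lost

def flush_worker():
    """Background flusher: drains the queue into the DB asynchronously."""
    while True:
        key, value = pending.get()
        db[key] = value            # retries / dead-letter handling omitted
        pending.task_done()

threading.Thread(target=flush_worker, daemon=True).start()
write_back("score:7", 120)
pending.join()                     # wait for the async flush in this tiny demo
print(db["score:7"])
```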
@girishanker3796 · 3 months ago
In distributed systems we have multiple instances of the application running on different servers, so to keep the system highly available you can go for a multi-node architecture.
@ehtashammazhar3518 · 9 months ago
Hey Shreyansh, in write-around, if a PUT/PATCH request invalidates the data and at the same time the DB goes down, and then a GET request comes in, what is the use of this stale data?
@ConceptandCoding · 9 months ago
This stale data is of no use; it will be removed from the cache after its TTL. But in a real-world scenario, if the DB goes down, reads should still succeed from the replicas. That opens up one more scenario, so let's understand it:
- The PUT request has invalidated the cache and updated the DB with version 2.
- Before the sync-up with the other replicas happens, say the DB goes down.
- A read comes in and will hit the DB. Since the main DB is down, a replica fulfils the request, and since the sync-up never happened, say the replica still has version 1.
So the GET call should not put this version 1 into the cache, otherwise we would put stale data back in. That scenario needs to be handled too: we should only read the value but not put it into the cache.
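A rough sketch of the "read it but don't cache it" idea in that reply. The per-key version bookkeeping (latest_version) is an assumption added for illustration, not something stated in the video: the read path caches the replica's row only if it is at least as new as the last committed write.

```python
cache = {}             # key -> row
latest_version = {}    # key -> newest version the write path has committed (assumed bookkeeping)
replica_db = {"user:42": {"version": 1, "name": "Asha"}}    # lagging replica

def write_around(key, row, version):
    """Write path: commit to the primary DB, invalidate the cache, remember the newest version."""
    # primary_db[key] = row  -- omitted; assume the primary took version 2 and then went down
    cache.pop(key, None)
    latest_version[key] = version

def read(key):
    """Read path: serve the replica's data, but only cache it if it is not older than the last write."""
    if key in cache:
        return cache[key]
    row = replica_db.get(key)
    if row is not None and row["version"] >= latest_version.get(key, 0):
        cache[key] = row               # safe to cache: not stale
    return row                         # stale replica data is returned but never cached

write_around("user:42", {"version": 2, "name": "Asha K"}, version=2)
print(read("user:42"))                 # serves version 1 from the replica
print("cached?", "user:42" in cache)   # False: stale data was not put back into the cache
```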
@ehtashammazhar3518 · 9 months ago
@@ConceptandCoding Thanks for the explanation. I highly appreciate your service to the community. Keep it up, man.
@ShashwatShukla-p8h · 1 month ago
Where is caching part 2?
@subhamacharya4472 · 1 year ago
Hi Shreyansh, very useful video. Could you share the sequence diagrams for all five caching strategies? They would make it easy to revise whenever needed just by looking at the diagrams 😀
@ConceptandCoding · 1 year ago
Yes, I have shared them on LinkedIn too. I will upload them to GitLab and share the link today, buddy. Thanks for reminding me; I forgot to push the diagrams to GitLab.
@rahulbharadia9152 · 8 months ago
Buddy, which software did you use for teaching in this video?
@ConceptandCoding · 8 months ago
OneNote and a Wacom tablet.
@vennamurthy · 1 year ago
Thank you @Shrayansh Jain
@ConceptandCoding · 1 year ago
Thank you
@ritveak · 1 year ago
In write-through cache we write everything to the cache and the DB in a two-phase commit, meaning the cache holds all the data the DB has. Then the cache would be heavy as well, and lookups would take more time! Is it good only for small-data scenarios? Also, what is the significance of a cache if the DB and the cache hold the same amount of data? Correct me if I am wrong, but is the cache being in memory and giving faster access the only pro?
@ConceptandCoding · 1 year ago
Access to the cache is fast compared to the DB. Generally it is the happy case that the cache and the DB have the same data, but the cache also helps achieve fault tolerance when the DB is down. The cache does not grow as big as the DB because cache entries have a short TTL (3 hours, 10 hours, etc., depending on the need). Cache lookups are done by key, so fetching the data stays fast.
@sdash2023 · 1 year ago
As he correctly explained, older data is removed and new data is added to the cache using LRU, LFU or FIFO eviction, so the cache will not grow as big as the DB. It depends on the TAT and TTL. Thanks for the question; it will help others find the answer.
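A tiny sketch of why the cache stays much smaller than the DB, combining a per-entry TTL with LRU eviction as described in the two replies above (the capacity and TTL numbers are arbitrary):

```python
import time
from collections import OrderedDict

class BoundedTTLCache:
    """LRU eviction plus a per-entry TTL, so the cache never grows as large as the DB."""

    def __init__(self, capacity=2, ttl_seconds=3600):
        self._capacity = capacity
        self._ttl = ttl_seconds
        self._data = OrderedDict()             # key -> (value, expires_at)

    def put(self, key, value):
        self._data[key] = (value, time.time() + self._ttl)
        self._data.move_to_end(key)
        if len(self._data) > self._capacity:
            self._data.popitem(last=False)     # evict the least recently used key

    def get(self, key):
        entry = self._data.get(key)
        if entry is None or entry[1] <= time.time():
            self._data.pop(key, None)          # absent or expired
            return None
        self._data.move_to_end(key)            # mark as recently used
        return entry[0]

c = BoundedTTLCache(capacity=2)
c.put("a", 1); c.put("b", 2); c.put("c", 3)    # "a" is evicted once capacity is exceeded
print(c.get("a"), c.get("b"), c.get("c"))      # None 2 3
```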
@vikasjoshi8381 · 1 year ago
Thanks, very informative. One question: shouldn't the cache take responsibility for updating the database in the write-through strategy?
@ConceptandCoding · 1 year ago
It depends. If the cache library does not support this, the application can write to the DB after writing to the cache.
@vikasjoshi8381 · 1 year ago
@@ConceptandCoding Makes sense. In that case we need to take care of rolling back the cache update if a DB-related exception occurs. Thanks a lot.
@ConceptandCoding · 1 year ago
@@vikasjoshi8381 Exactly.
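A hedged sketch of the rollback discussed in this thread, with toy in-memory stores rather than a real two-phase commit (DbError and the helper names are made up): write to the cache first and, if the DB write fails, undo the cache update so the two stores do not diverge.

```python
cache = {}

class DbError(Exception):
    pass

def write_through(key, value, db_write):
    """Write-through with compensation: roll the cache back if the DB write fails."""
    previous = cache.get(key)
    had_previous = key in cache
    cache[key] = value
    try:
        db_write(key, value)                   # second leg of the write-through
    except DbError:
        if had_previous:
            cache[key] = previous              # restore the old value
        else:
            cache.pop(key, None)               # or drop the half-written entry
        raise

def flaky_db_write(key, value):
    raise DbError("simulated DB outage")

try:
    write_through("user:42", {"coins": 50}, flaky_db_write)
except DbError:
    print("write failed, cache rolled back:", cache)    # {} -> cache and DB stay consistent
```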
@harshitagarwal2682 · 2 months ago
👍👍
@shubhamkumar6383 · 1 year ago
Which type of cache can we use in a multiplayer game where users answer questions and earn coins, and the user scale is very large?
@ConceptandCoding · 1 year ago
It doesn't work like that, Shubham. We have to gather more requirements first: how many GET calls do we expect, how many write calls, can the same data be requested repeatedly, is low latency required, and should reads and writes still work when the DB goes down? Only then can we choose a strategy.
@saanikagupta1508 · 28 days ago
You're mixing TTL with TAT
@pleasantdayNwisdom · 1 year ago
Sir, where did you learn all this?
@ConceptandCoding · 1 year ago
I have 8 years of experience now; a lot of it comes from the analysis done while implementing these things in projects.
@alokgarg7494 · 2 months ago
Brother, please make the notes a bit more neatly. The handwriting is quite poor, and when reading them later it is hard to make out what is written.
@ConceptandCoding · 2 months ago
Hi Alok, I highly encourage everyone to make their own notes; that way new doubts and questions come up, which helps you understand the topic much better. But feedback taken, I will improve the handwriting in all future notes.