I was very confused about how this differs from the ResponseCache middleware. Maybe I missed it during your video, but I found the answer in the MS docs for Response Cache: "Is typically not beneficial for UI apps such as Razor Pages because browsers generally set request headers that prevent caching. Output caching, which is available in ASP.NET Core 7.0 and later, benefits UI apps. With output caching, configuration decides what should be cached independently of HTTP headers."
Amazing video. Now that I'm working with Docker I realize how easy it makes development without the need to install 3rd party software. Obsessed with container technology now
I know you could probably create a custom policy for it, but it'd be nice if there was an easier way (e.g. a fluent method call) to specify what types of response to cache. For example I'd like to cache Ok responses for the city "London" for 5 minutes, but cache the NotFound responses for the city "asdf123" for several hours or days.
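For what it's worth, varying the expiration by status code does seem doable with a custom `IOutputCachePolicy`. A rough sketch (untested; the policy name, durations, and the decision to vary by all query keys are assumptions for illustration):

```csharp
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.OutputCaching;

// Sketch: cache 200 OK responses briefly, cache 404s for much longer.
public sealed class StatusAwareCachePolicy : IOutputCachePolicy
{
    public ValueTask CacheRequestAsync(OutputCacheContext context, CancellationToken ct)
    {
        // Opt this request into output caching and vary by all query keys.
        context.EnableOutputCaching = true;
        context.AllowCacheLookup = true;
        context.AllowCacheStorage = true;
        context.AllowLocking = true;
        context.CacheVaryByRules.QueryKeys = "*";
        return ValueTask.CompletedTask;
    }

    public ValueTask ServeFromCacheAsync(OutputCacheContext context, CancellationToken ct)
        => ValueTask.CompletedTask;

    public ValueTask ServeResponseAsync(OutputCacheContext context, CancellationToken ct)
    {
        // Pick an expiration based on what the endpoint actually returned.
        context.ResponseExpirationTimeSpan = context.HttpContext.Response.StatusCode switch
        {
            StatusCodes.Status200OK => TimeSpan.FromMinutes(5),
            StatusCodes.Status404NotFound => TimeSpan.FromHours(12),
            _ => null // fall back to the default expiration
        };
        return ValueTask.CompletedTask;
    }
}

// Registered with something like:
// builder.Services.AddOutputCache(o => o.AddPolicy("weather", new StatusAwareCachePolicy()));
```

Still not a one-line fluent call, but it gets the Ok-vs-NotFound split without touching the endpoint code.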
Good video. Side note, the full redis-stack image (omit -server on the end of the image name) also runs Redis Insight in case someone doesn't have Another Redis Desktop Manager.
Hey Nick. Thanks for the video. Here you are using a simple location string as the parameter. What about a complex object as a parameter? Will the caching work correctly, since the object may differ on every API call?
Caching is actually a very difficult problem as soon as the complexity of your API increases even a little bit. Like what about caching endpoints that return lists, what about objects with different keys that can be used for fetching (I guess just duplicate the cache entries per query right), what if those endpoints return an object with lazy loaded fields? I look at caching more as a necessary evil to be used sparingly when absolutely all else fails.
When the cache expires and you get 100 calls at that endpoint at the same time, will they all go in trying to get data (i.e., requiring you to thread-lock access to the data refresh call), or does Redis have a built-in option to control that access (i.e., only one call goes in to refresh the data and the remaining 99 wait for the cached data after the first call gets served)? And if Redis has such a multi-threaded control option built in, is it scalable or only per instance?
Perfect timing! Thanks 🙏 I was just looking at a high latency request today which would benefit from some caching. Question: can you invalidate certain keys? For example if you wanted to fetch fresh weather for London but not Milan, can you use tagging to say “I want to invalidate the cache for ?city=London but keep ?city=Milan”?
The problem with caching is determining how long we can store a cached value. With a token it's fine, because if the token becomes obsolete we get an error. With weather we could return wrong data.
Hey Nick, I use output cache in my application, but I would like the option to invalidate the cache by some dynamic value. For example, when the user with a specific id is updated, I would like to evict his entry from the cache. With this approach you evicted all 'weather' tagged entries. What if we would like to evict only Milan and leave London in the cache? This issue stops me from putting output cache in my whole application. Is it worth a follow-up video?
Instead of using this minimal API approach, you could use a service with dependency injection and cache by UUID, for example, then do the invalidation through the service like he did.
@@DaminGamerMC I am not using minimal APIs at all. Currently there is no out-of-the-box functionality to invalidate specific entries in this output cache feature that I am aware of. IOutputCacheStore has only an EvictByTag method, and with tags defined as in the video you are not able to evict a single entry, only all of the entries related to that tag.
@@Velociapcior couldn't you just use a different tag for every entry (i.e. the key/id of the user)? Then when you want to invalidate a user, just call EvictByTag(userId)?
Is it possibile to evict by key? For example if the weather for London gets updated (let's pretend that it's my API that's updating the weather), could I just delete the cache entry for that specific key (maybe using the query string)?
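There's no first-class evict-by-key, but one workaround (a sketch, not verified against every version) is to add a per-entry tag from inside a custom `IOutputCachePolicy` and then evict that tag. The `city:` prefix and the `city` query key are assumptions for illustration:

```csharp
using Microsoft.AspNetCore.OutputCaching;

// Sketch: tag every cache entry with its city, so one city can be evicted alone.
public sealed class TagByCityPolicy : IOutputCachePolicy
{
    public ValueTask CacheRequestAsync(OutputCacheContext context, CancellationToken ct)
    {
        var city = context.HttpContext.Request.Query["city"].ToString();
        if (!string.IsNullOrEmpty(city))
            context.Tags.Add($"city:{city.ToLowerInvariant()}");
        return ValueTask.CompletedTask;
    }

    public ValueTask ServeFromCacheAsync(OutputCacheContext context, CancellationToken ct)
        => ValueTask.CompletedTask;

    public ValueTask ServeResponseAsync(OutputCacheContext context, CancellationToken ct)
        => ValueTask.CompletedTask;
}

// Then, when London's weather changes:
// await store.EvictByTagAsync("city:london", ct); // store is an injected IOutputCacheStore
```

Since each tag maps to exactly one entry, evicting `city:london` leaves `city:milan` untouched.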
Trying to implement Redis cache right now, because the API I'm creating is being deployed into k8s. I did it a little differently and I'm experiencing timeouts when testing with thousands of requests at once. I believe it's somehow thread related? Like threads are waiting to be completed? I will definitely try the way you showed in the video, to see if I'll experience the same problem. If anybody has experience with this problem, or just has any ideas, please let me know.
As this is intended to facilitate scaling up, and the key used looks to contain the port number as well - this can result in different keys for each instance of the service depending on the infrastructure. I am wondering how can that be excluded? And in general: how can I format the key that is created?
Hey, thanks for the video. Can you please explain how to create your own service with configurable options, just like the other services we register in startup and configure options for?
One tiny problem - it does NOT cache authenticated requests, and that's by design. All this beauty and you can't actually use it with an API, because most of the time it's protected with API keys :/ Well, you still can use the OutputCache, but you have to rewrite the original policy code, because the devs didn't implement a toggle for it in the configuration method.
Each call is per city in this case, so yes. It depends on how the API's request parameters are set up. A cache entry is recorded per API method and parameter set.
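For reference, that per-parameter-set behaviour is controlled with the vary-by options; something like this (endpoint name and `GetWeather` are placeholders):

```csharp
// Each distinct value of ?city=... gets its own cache entry;
// other query-string parameters are ignored for the cache key.
app.MapGet("/weather", (string city) => GetWeather(city))
   .CacheOutput(p => p.Expire(TimeSpan.FromMinutes(1)).SetVaryByQuery("city"));
```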
Great video Nick. I wonder if it works along with Authorization. I once tried output caching in .NET 6 with authorized controllers and it didn't work. It would be great to have the option to vary by the Authorization header.
Hey Nick, have you thought of creating a subscription model for Dometrain? I've taken courses from you in the past and I'm wondering if this is something we could expect in the near future, now that you are inviting external experts.
Hi there. Could you please share your experience with the completed courses? If you don't mind sharing: which courses have you taken, and how useful did you find them? Thanks in advance!
For those like me scratching their heads why output caching doesn't work with authenticated endpoints: you'll need to either redefine base policy or write your custom one.
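A minimal sketch of such a custom policy (assumptions: varying by the raw `Authorization` header is acceptable for your token scheme; untested):

```csharp
using Microsoft.AspNetCore.OutputCaching;

// Sketch: opt authenticated requests into output caching, varying by the
// Authorization header so callers never see each other's responses.
public sealed class CacheAuthenticatedPolicy : IOutputCachePolicy
{
    public ValueTask CacheRequestAsync(OutputCacheContext context, CancellationToken ct)
    {
        // The default policy bails out when an Authorization header is present;
        // here we enable caching unconditionally instead.
        context.EnableOutputCaching = true;
        context.AllowCacheLookup = true;
        context.AllowCacheStorage = true;
        context.AllowLocking = true;
        context.CacheVaryByRules.HeaderNames = "Authorization";
        return ValueTask.CompletedTask;
    }

    public ValueTask ServeFromCacheAsync(OutputCacheContext context, CancellationToken ct)
        => ValueTask.CompletedTask;

    public ValueTask ServeResponseAsync(OutputCacheContext context, CancellationToken ct)
        => ValueTask.CompletedTask;
}
```

If you vary by the header itself, every distinct token gets its own entry, so this only pays off when the same token makes repeated calls.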
What happens if multiple people hit the endpoint at the same moment, when it's no longer cached? Because the HTTP call is long, could multiple people enter the controller at once and make multiple HTTP calls?
It is an output cache, so it depends on the request. If the request has an ordering instruction as a parameter (e.g. in the query string, such as ..&order=desc&...), the cache middleware will scan for that key, and if no entry is found the mapped delegate will be called and the result stored under a key that accounts for this parameter. There are two ways: either cache each ordered response (the same tag will help with evicting), or cache unordered and then order on the client side.
You may also decouple fetching and presentation of data: cache just the data fetched from storage by predicate, and have the API prepare it for presentation, always doing the necessary ordering.
I don't get the idea of a distributed cache, since data like in the example can easily be cached in memory, avoiding the additional network call and cache DB overhead. Memory is fairly cheap today, and caching small sets of data in an external system looks redundant.
Because you might run multiple instances of your service for scalability. If you cache in memory they'll each end up with different caches and give inconsistent responses. In-memory is the right answer if you only have one instance. It's the wrong answer if you have more.
If you have more than one instance of your application running behind a load balancer, with in-memory caching the work will need to be done on each instance for the result to be cached locally to that instance. If you use a distributed cache, only one of the instances needs to perform the work.
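For completeness, wiring up the shared cache (in .NET 8+) looks roughly like this; the connection string, instance name, and endpoint are placeholders:

```csharp
var builder = WebApplication.CreateBuilder(args);

// Back the output cache with Redis so every instance shares one set of entries.
builder.Services.AddStackExchangeRedisOutputCache(options =>
{
    options.Configuration = "localhost:6379"; // placeholder connection string
    options.InstanceName = "weather-api:";    // key prefix, also a placeholder
});
builder.Services.AddOutputCache();

var app = builder.Build();
app.UseOutputCache();

app.MapGet("/weather", (string city) => $"weather for {city}")
   .CacheOutput(p => p.Expire(TimeSpan.FromSeconds(30)).SetVaryByQuery("city"));

app.Run();
```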
I've never been a fan of caching results at the endpoint level. What happens if you want to use your Weather service in another endpoint? Each one would be caching different values whereas if your cache logic was in the service it would be available for any consumer.
In older videos he's said "don't bother trying to use it cause it will be invalid by the time I release the video" and I think he stopped bothering to say that now