Want to master Clean Architecture? Go here: bit.ly/3PupkOJ Want to unlock Modular Monoliths? Go here: bit.ly/3SXlzSt P.S. I missed a _minor_ thing in the video - passing *TransactionScopeAsyncFlowOption.Enabled* to the TransactionScope constructor. This is necessary to be able to use it with async/await.
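The async-flow option mentioned in the P.S. can be sketched minimally like this. The helper methods are hypothetical placeholders for async database work; only the TransactionScope usage itself is the point.

```csharp
using System.Transactions;

// Minimal sketch of the P.S. above. Without TransactionScopeAsyncFlowOption.Enabled,
// the first await inside the scope throws an InvalidOperationException, because the
// ambient transaction does not flow across async continuations by default.
using (var scope = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
{
    await SaveOrderAsync();        // hypothetical async database work
    await SaveOrderSummaryAsync(); // both calls enlist in the same ambient transaction

    scope.Complete(); // commit; disposing the scope without Complete() rolls back
}
```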
Quick question: is TransactionScope's default isolation level the same as the one used by EF Core's implicit transactions? It's different from SQL Server's default level...
Exactly as mentioned, this has a "magic" smell, and persisting changes to the database is implicit. Same with the transaction: what if I don't want all changes packed into one transaction? It depends on the team, but IMHO code readability is much more important than DRY. Thanks for the video :)
@@krzysztofhandzlik9273 Can you show an example of multiple transactions being more performant? Because that supposed performance gain breaks the concept of a transaction. If you're talking about massive updates or deletes, then you should consider a different architecture, like message queues and worker services.
I'm glad to hear that you haven't encountered issues related to high-scale applications yet, but I would strongly suggest avoiding multiple SaveChanges() calls in a single "transaction", as you call it. In that case it's technically no longer a transaction.
I love this approach, and I apply the same principles. Here are a few comments/suggestions:

1. You can use a generic constraint to run this exclusively for commands. Here's how you can do it:

public sealed class UnitOfWorkPipelineBehavior<TRequest, TResponse> : IPipelineBehavior<TRequest, TResponse>
    where TRequest : ICommand
    where TResponse : IResult

2. In other videos, you advise against throwing exceptions, suggesting to return a result instead. In that case, you would need to commit changes only if the result is successful. I do it like this:

public async Task<TResponse> Handle(TRequest request, RequestHandlerDelegate<TResponse> next, CancellationToken cancellationToken)
{
    var response = await next();
    if (response is IResult { IsSuccess: true })
    {
        await _unitOfWork.CommitAsync(cancellationToken);
    }
    return response;
}

3. If you're dealing with nested commands and foreign keys, you risk invoking SaveChanges() in "reverse" order, where a row with the foreign key is saved before the row with the primary key is inserted (even if you use TransactionScope). In this situation, you want to call SaveChanges() only in the outer UnitOfWork pipeline behavior. I've created a simple solution to this issue by using a scoped ConcurrentCounter (see link below).

4. If you are using SQLite for testing, be aware that TransactionScope is not an option, as it uses distributed transactions, which are currently unsupported. Once again, I use my ConcurrentCounter to ensure I only call SaveChanges() once. SQLite's in-memory database is superior to the Entity Framework in-memory provider because it is a real relational database, and it understands foreign key constraints.

5. You could consider disabling the Entity Framework change tracker. Since you are calling repository.Update(), you do not need the change tracker enabled at all. Disabling it also removes the risk of saving tracked entities unintentionally, and it's faster if you use Entity Framework for querying.

6. For generating numerical IDs, I'm using "Snowflake Transaction IDs", which are code-generated IDs (like GUIDs), but with the benefit of being chronological across servers in a web farm (so you don't need to sort rows in your database), smaller, and IMO more aesthetically pleasing (GUIDs are ugly :)). You can see my solution to all this here: github.com/PlatformPlatform/platformplatform/blob/main/shared-kernel/ApplicationCore/Behaviors/UnitOfWorkPipelineBehavior.cs
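The chronological, server-unique IDs described above follow the well-known Snowflake layout. This is a generic sketch of that scheme, not the linked repository's implementation; the bit widths and epoch are illustrative assumptions.

```csharp
// Sketch of a Snowflake-style 64-bit ID:
// [41 bits: ms since a custom epoch][10 bits: machine id][12 bits: sequence]
// IDs sort chronologically across servers as long as clocks are roughly in sync.
public sealed class SnowflakeIdGenerator
{
    private static readonly DateTime Epoch = new(2020, 1, 1, 0, 0, 0, DateTimeKind.Utc);
    private readonly long _machineId;   // 0..1023, must be unique per server
    private readonly object _lock = new();
    private long _lastTimestamp = -1;
    private long _sequence;

    public SnowflakeIdGenerator(long machineId) => _machineId = machineId & 0x3FF;

    public long NextId()
    {
        lock (_lock)
        {
            long timestamp = (long)(DateTime.UtcNow - Epoch).TotalMilliseconds;
            if (timestamp == _lastTimestamp)
            {
                _sequence = (_sequence + 1) & 0xFFF; // up to 4096 IDs per ms per machine
                if (_sequence == 0)
                {
                    // Sequence exhausted for this millisecond: wait for the next one.
                    while (timestamp <= _lastTimestamp)
                        timestamp = (long)(DateTime.UtcNow - Epoch).TotalMilliseconds;
                }
            }
            else
            {
                _sequence = 0;
            }

            _lastTimestamp = timestamp;
            return (timestamp << 22) | (_machineId << 12) | _sequence;
        }
    }
}
```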
Absolutely enjoyed the few hours I spent going through your repository. I might need to make something like this myself. 😁 I agree with all of your points, except the issue with counting SaveChanges calls. To me, one use case is one transaction; any side effects I prefer pushing into domain events, which are processed asynchronously. On point 6: have you used ULIDs?
EF Core's DbContext obviates the need for UnitOfWork and repositories. If you look at the examples given by Microsoft, they implement UnitOfWork just as a wrapper around DbContext. This is because the unit of work is executed inside the SaveChanges method. I can't find the exact link where Microsoft states this, but I'm sure I read about it.
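The wrapper pattern described in this comment can be sketched as follows. All type names here are illustrative, not from a specific Microsoft sample; the point is that DbContext already satisfies the abstraction.

```csharp
using Microsoft.EntityFrameworkCore;

// The "unit of work" is just a thin abstraction over DbContext, because
// change tracking plus SaveChangesAsync already gives an atomic unit of work.
public interface IUnitOfWork
{
    Task<int> SaveChangesAsync(CancellationToken cancellationToken = default);
}

// DbContext.SaveChangesAsync already has this exact signature, so the
// context implements the interface with no extra code.
public sealed class AppDbContext : DbContext, IUnitOfWork
{
    public AppDbContext(DbContextOptions<AppDbContext> options) : base(options) { }
}
```

Handlers can then depend on `IUnitOfWork` while DI resolves it to the scoped `AppDbContext` instance.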
Great video, but I believe it can't be used in every scenario. A command can use other services for persistence in the same application that don't use EF Core: another API, the file system, a storage system... They are still commands, but they don't depend on EF Core, and this behavior is applied to all commands. I like that you share so many possibilities, and I'm learning a lot of new things, but it's very important for viewers to understand that some things can only be applied in specific scenarios. Thanks for sharing such spectacular content.
@@MilanJovanovicTech Wrapping the execution of the next delegate is something that could not be done with post-processors, but the first examples you showed could have been. Still, I'd rather not use a middleware for that.
Is managing all transactions with MediatR pipelines the correct way in applications? If another command is called inside a command handler, this pipeline runs twice (I solved it with a correlationId, but I'm not sure about that). Moreover, the DbContext object is not thread-safe, which can be a problem in multithreaded apps. Have you solved these problems before? I don't know how to deal with them. Thanks!
Can I have several TransactionScopes in memory? Will it work if I'm holding 2 units of work in 2 windows or tabs in my desktop application? For example: I have 1 tab with an order and another with a product. Will it work if I create a scope for each tab?
Hi Milan, I hope you're doing great. I was checking out your channel and found several playlists, but there's no specific order to watch the videos in. Is there any particular order you suggest, so as not to get lost and confused? I love your way of explaining things, but it's not really clear where to begin. At this point in my life, what I want to learn is how to implement good design principles, patterns, and clean architecture, and when I need each of them, but I just can't figure out where to begin with all the info you've uploaded so far. It's a bit overwhelming. Do you have a roadmap or something? Thanks!
Wouldn't you say that double dispatch, in your given case, forces the upper layers to always have access to data persistence objects? I mean, it works, but wouldn't you rather just write a wrapper or an abstraction to enforce those rules instead? It's nice to have it right there on the model, but it really stings a bit: like a workaround for something you know isn't correct, but it works and doesn't violate anything, so you leave it be. Rich domain models are nice, but isn't there a boundary between what we consider logic for an entity and logic for data persistence? Even if the resource itself is desynced (your in-memory representation of the list isn't accurate; the state in your database is different), wouldn't you say it would be better to just hook your entity to some event handler that would update the collection (and its contents if needed)? It involves more complexity, but it removes the responsibility from the domain model, which shouldn't be bothered with the outside world. Any thoughts on this?
"but it really stings a bit" - it does, because the example I used here kind of sucks. My bad on that. Sometimes I try too hard to share some concept without coming up with a problem that requires it. Your analysis is pretty spot on.
A question, please: in the latest approach, where you use the transaction scope in the UnitOfWorkBehavior, the handler first needs the Id of the Order from the database and then passes it to the Order Summary. So I think this approach doesn't meet that use case: we still haven't persisted the changes to the DB, so we can't get the Id from the database to pass to the Order Summary?
TransactionScope does not support asynchronous disposal. It's important to know that the implicit committing of the transaction will be done synchronously and block the thread. This issue is unlikely to get fixed any time soon.
The TransactionScope is really strange here, because EF does this by default if you call SaveChanges once... It would be more helpful if you showed how to organize code that calls multiple MediatR commands / SaveChanges calls, in different variations, within transactions to achieve atomic behavior.
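The multi-command scenario this comment asks for can be sketched with an explicit EF Core transaction. The `dbContext`, `mediator`, and command types are assumptions for illustration only; each command is presumed to call SaveChanges internally.

```csharp
// Several commands, each calling SaveChanges internally, made atomic by one
// explicit EF Core transaction instead of relying on the implicit per-SaveChanges one.
await using var transaction =
    await dbContext.Database.BeginTransactionAsync(cancellationToken);
try
{
    await mediator.Send(new CreateOrderCommand(customerId), cancellationToken);  // SaveChanges #1
    await mediator.Send(new ReserveStockCommand(productId), cancellationToken);  // SaveChanges #2

    await transaction.CommitAsync(cancellationToken); // both persist, or neither does
}
catch
{
    await transaction.RollbackAsync(cancellationToken);
    throw;
}
```

This assumes both handlers resolve the same scoped DbContext, so both SaveChanges calls enlist in the one transaction.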
Great stuff. But keep in mind that if you are testing your handlers directly without establishing mediator pipeline behaviours then you have to call UnitOfWork manually in those tests. That seems a little unintuitive.
I really like this approach. I was wondering how I could solve this duplication problem until you came out with this video, thanks! P.S.: Is there any chance you could make a future video, or point me to some resources, about querying nested entities in DDD with EF Core? I'm using the repository pattern with "Include" and "ThenInclude" queries, but it would be awesome to know what the approach looks like, how I should model my domains, and how I should map my models 😃
TransactionScope does not work with EnableRetryOnFailure. Another issue: what if you call a command inside a command, and so on, when you have logic spread across modules? Then you can run into nested transactions...
@@MilanJovanovicTech I think the solution could be to create a separate command, TransactionCommand, with the payload of the source command, then call the mediator on that command, and finally call the ExecutionStrategy using a TransactionScope. This way you can: 1. use EnableRetryOnFailure, 2. use a single transaction. What do you think?
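The execution-strategy part of this idea can be sketched as below. With EnableRetryOnFailure, user-initiated transactions must run inside the strategy so the whole unit can be replayed on a transient failure. This sketch uses an EF Core transaction rather than TransactionScope, and `dbContext`, `mediator`, and `command` are assumed names.

```csharp
var strategy = dbContext.Database.CreateExecutionStrategy();

await strategy.ExecuteAsync(async () =>
{
    // The retryable unit: re-executed from the top on a transient failure,
    // so the transaction is begun (and committed) entirely inside it.
    await using var transaction = await dbContext.Database.BeginTransactionAsync();

    await mediator.Send(command); // e.g. the wrapped TransactionCommand's payload

    await transaction.CommitAsync();
});
```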
I also have a question. In the pipeline behavior, is there no need to pass parameters to the next() delegate, like the cancellation token? Does the pipeline take care of that somehow?
I've been using "var transaction = await context.Database.BeginTransactionAsync(IsolationLevel.ReadCommitted)" (note the await, since it returns a Task), then .Commit() or .Rollback(). You can get the context via DI.
The transactional components in the Application project don't seem appropriate. They should probably be placed in the infrastructure layer. Am I right?
What do you think about the following approach to Unit of Work?

public class UnitOfWork : IUnitOfWork
{
    private readonly DbContext _context;
    private readonly Lazy<IUserRepository> _users;

    public UnitOfWork(DbContext context)
    {
        _context = context;
        _users = new Lazy<IUserRepository>(() => new UserRepository(_context));
    }

    public IUserRepository Users => _users.Value;
    // ....
}

services.AddScoped<IUnitOfWork, UnitOfWork>();
Thanks for the great content! I appreciate your explanations in the videos. How many more videos are left in this series? And do you know the percentage of completion for the series? I've been studying it over a short period of time.
It's a never-ending series, so until I get bored... 😅 But I'm going to start moving to Microservices and distributed systems in the near term (after I launch my course)
@@MilanJovanovicTech Yep. Guess this is a great example of the flexibility of client-generated keys. HiLo, like you mentioned, is fine, but we tend to go with GUIDs now.
@@MilanJovanovicTech Because you create a strong dependency between the business logic and a particular database implementation. Plus: database IDs don't migrate easily.
@@yohm31 True, but on the other hand, if you have a table where you expect to insert a lot of rows, you should consider an integer PK, as GUIDs would result in tremendous rearrangement of data pages on bulk inserts, which would have a negative impact on performance.
I have trouble when I try to persist the data (my SaveChangesAsync doesn't work)... How can I access your repository code? If it has a cost, we can talk in private. Thanks for your time, Milan J!!
Wonderful video, esp. the part with TransactionScope. Before adding that we had not only the problem of having to call SaveChanges in a handler, but also the possibility of calling SaveChanges in the repositories by mistake, carelessness etc, since the DbContext is injected into them.
I rarely found it to be a problem in practice (multiple SaveChanges calls), since teams are usually mindful, but this is a simple way to move that responsibility elsewhere.
Is it a good idea to use EF for write (POST) calls and Dapper for read (GET) calls in a performance-centric big application? And does Unit of Work with MediatR + TransactionScope apply even when the cross-project request is not a command, as in your example? In my project it is definitely needed in a few cases, but not everywhere, as in your example.
@@MilanJovanovicTech It's simple. When I see what people do in C#, I'm not sure if it's the right approach: abstracting everything away and hiding it in a magic, reflection-based wire-up system.
Alright, but we already make multiple DB calls when fetching data like customer details, order details, etc., before saving anything to the DB. I think we can't reduce those query calls, but to save a couple of save calls, we're using UnitOfWork?
I think this breaks the DDD rule that one transaction belongs to one aggregate and shouldn't change other aggregates! But I think you didn't need to consider that in this video.
@@MilanJovanovicTech In DDD, each aggregate represents a single transaction boundary. With this approach you affect 3 aggregate roots that should be 3 different transactions, and I don't think that's right! Actually, why do you need to keep track of domain events that weren't created for this approach? I'm just sharing my ideas, based on DDD concepts, from one of my projects.
This is so good! When I saw the title, I knew what would be in the video, but I had never thought about it myself. You are expanding my thinking so much! Thanks!