Another amazing video! I'd also appreciate it if you would create a small end-to-end web API project with a DDD architecture, showcasing these concepts in a complete project…
Great video! It would also be great if you could add scenarios where EDA is not suitable, i.e. where it's simpler and cheaper to make those HTTP/gRPC calls instead.
gRPC is synchronous, and from what I found through searching, it's not suitable for loose coupling, while loose coupling is the motivation for using microservices. EDA, on the other hand, is loosely coupled and asynchronous.
Very informative video! Some follow-up questions: 1) I am having trouble understanding the difference between #2 and #4. Aren't they both trying to orchestrate an internal process? 2) I have been trying to figure out how granular the events need to be. We have a Human Resources system that is trying to support enterprise events spanning use cases #1 and #3, where the goal is to notify downstream systems of data changes. We are evaluating the granularity of the events: they could be coarse, e.g. New Hire or Promotion or Separation, or one could go granular, e.g. new hire future, new hire retro, new hire onboarding. When the consumers get the events, they have to make a call back to get the change. In such a case, do we define the granularity of the events based on the use cases and consumer needs? I have come across event storming, but for use case #1 we don't need events for things that are not relevant to the use case, e.g. an item added to a wish list. Sorry for the long question! I have been trying to understand this but haven't been able to find any resources. Any pointers would be greatly appreciated.
1) #2 is about workflows and orchestrating them, yes. However, #4 was about how those orchestrations can be temporally decoupled so you aren't bound by time. 2) I have a few videos that talk about Event-Carried State Transfer (ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-qKD2YUTJAXM.html and ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-IzBEbfSg0uY.html). Check those out and see if any of that helps. It really depends on WHAT you're trying to do with events to determine what they should contain. If you have slim notification-type events, those are often for workflow, and that's when you get into the callback situation. It's also important to know why other services need the data.
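To make the callback idea concrete, here is a minimal TypeScript sketch (all names here are hypothetical, not from the video): a slim notification event carries only identifiers and the change type, and the consumer calls back to the producing service when it actually needs the full data.

```typescript
// Hypothetical slim "notification" event: it only identifies WHAT changed,
// not the full state. Consumers call back for details they care about.
interface EmployeeChangedEvent {
  eventId: string;
  employeeId: string;
  changeType: "NewHire" | "Promotion" | "Separation";
  occurredAt: string; // ISO-8601 timestamp
}

// Consumer side: the event is thin, so the consumer fetches the full
// record from the producing service (here injected as a function).
async function handleEmployeeChanged(
  evt: EmployeeChangedEvent,
  fetchEmployee: (id: string) => Promise<{ id: string; name: string }>
): Promise<string> {
  const employee = await fetchEmployee(evt.employeeId);
  return `Processed ${evt.changeType} for ${employee.name}`;
}
```

The trade-off: slim events keep the contract stable as consumers' data needs change, at the cost of an extra call back; Event-Carried State Transfer puts the data in the event and removes the callback.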
Thanks for the video. The last part about temporal decoupling raised a question for me: what if we have a system that has to fail after a while if one of the services is not responding? Since we are not waiting for this service to do its job, how are we going to know there is a failure? By setting a timeout for certain events to occur?
Yes, absolutely, that is one way. Check out this video; it will give you some ideas, since you can use a timeout/expiry: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-PZm0RQGcs38.html
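As a rough sketch of that timeout/expiry idea (TypeScript, with made-up names; a real system would schedule a durable delayed message on the broker rather than an in-memory timer):

```typescript
// Hypothetical sketch: when the workflow starts, schedule an expiry.
// If no completion event arrives in time, the order is marked failed,
// instead of blocking and waiting on the downstream service.
type OrderStatus = "Pending" | "Completed" | "Failed";

class OrderProcess {
  private orders = new Map<string, { status: OrderStatus }>();

  start(orderId: string, expiryMs: number, onExpired: (id: string) => void) {
    this.orders.set(orderId, { status: "Pending" });
    // Stand-in for a durable delayed/scheduled message on the broker.
    setTimeout(() => {
      const order = this.orders.get(orderId);
      if (order && order.status === "Pending") {
        order.status = "Failed"; // downstream service never responded
        onExpired(orderId);
      }
    }, expiryMs);
  }

  // Called when the downstream service's event finally arrives.
  complete(orderId: string) {
    const order = this.orders.get(orderId);
    if (order && order.status === "Pending") order.status = "Completed";
  }

  statusOf(orderId: string): OrderStatus | undefined {
    return this.orders.get(orderId)?.status;
  }
}
```

If the completion event beats the expiry, the timeout becomes a no-op; if it doesn't, the expiry handler is where you'd trigger compensation or alerting.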
Great video! But I was wondering whether event-driven fits most cases later in a project's lifetime? I think there's a complexity overhead in adding a broker and using event choreography instead of having the whole workflow in one place. From your experience, do you think it is worth starting with an event-driven architecture from the beginning? Or starting with something simpler, like a service architecture, and refactoring later if things start getting complicated?
This is a really good topic idea. EDA is great for established codebases. It allows you to build out more functionality that's decoupled from what you already have.
To send an email you should use a mail server, not a service from Amazon. This whole idea that ownership of the entire IT infrastructure of the world needs to pass to 3-4 mega-monopolistic companies is madness, no matter how many euphemisms we create to describe it. Other than that, thank you for the video.
Do you have any views on syncing 'from' an external system? That is, pulling in a list of products on a regular, scheduled job? Btw great content from your channel!
Meaning another system is pulling data as a scheduled job? If so, nothing fancy, unless the amount of data it has to pull is very large. In that case it can be beneficial to have the initial request kick off its own internal job, or fan the work out into multiple smaller units. Instead of an immediate response, give the client back an identifier (or a callback) so it can get the data once the request has been processed.
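A tiny TypeScript sketch of that request/poll shape (hypothetical names; a real version would persist jobs, not hold them in memory):

```typescript
import { randomUUID } from "crypto";

// Hypothetical sketch: a large "sync products" request returns a job id
// immediately; the heavy work runs in the background and the client
// polls (or gets a callback) for the result.
type Job = { status: "Running" | "Done"; result?: string[] };

class SyncJobService {
  private jobs = new Map<string, Job>();

  // Returns right away with an identifier; the fetch runs asynchronously.
  startProductSync(fetchAll: () => Promise<string[]>): string {
    const jobId = randomUUID();
    this.jobs.set(jobId, { status: "Running" });
    fetchAll().then((products) => {
      this.jobs.set(jobId, { status: "Done", result: products });
    });
    return jobId;
  }

  // Client polls with the identifier it was handed back.
  poll(jobId: string): Job | undefined {
    return this.jobs.get(jobId);
  }
}
```

Fanning out would just mean `startProductSync` splitting the pull into pages and tracking per-page completion under the same job id.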
For publishing, use an outbox or a fallback (ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-tcePbob8rrY.html). For consumers, well they just consume when the broker is available.
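For anyone curious what the outbox looks like mechanically, here is a toy TypeScript sketch (in-memory stand-ins for the database table and broker, not a real implementation):

```typescript
// Hypothetical transactional-outbox sketch: events are written to an
// "outbox" in the same transaction as the business state change, then a
// separate relay publishes them to the broker. If the broker is down,
// rows stay unpublished and the relay retries later, so nothing is lost.
type OutboxRecord = { id: number; payload: string; published: boolean };

class Outbox {
  private records: OutboxRecord[] = [];
  private nextId = 1;

  // Called inside the same "transaction" that saves the business state.
  enqueue(payload: string) {
    this.records.push({ id: this.nextId++, payload, published: false });
  }

  // Background relay: attempts to publish each pending record; a failed
  // publish leaves the record for the next pass.
  relay(publish: (payload: string) => boolean) {
    for (const rec of this.records) {
      if (!rec.published && publish(rec.payload)) {
        rec.published = true;
      }
    }
  }

  pendingCount(): number {
    return this.records.filter((r) => !r.published).length;
  }
}
```

Note this gives at-least-once delivery: a relay crash between publish and marking the row means a redelivery, so consumers should be idempotent.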
Regarding the temporal decoupling example: in Node.js we can call the warehouse- and billing-related methods asynchronously (and catch failures in their respective catch handlers). So I believe that's correct, and we may not need decoupling if we write code like that. Please comment. I know you are explaining it from an event-driven architecture perspective, but Node.js is event-driven, although we don't persist those events like Kafka does. PS: I like the way you explain things with code and examples.
They are two different things. In-process async programming models are different from using durable queues to pass messages between processes, which is what I'm explaining in the video. I'm going to create another video to explain the differences.
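A quick TypeScript sketch of the distinction (the "queue" here is an array standing in for broker storage, purely for illustration): in-process async work exists only as an in-memory promise, so a crash loses it; a durable queue persists the message so another process, or the same one after a restart, can still consume it.

```typescript
// In-process async: the pending work lives only in this process's memory.
// If the process crashes before the promise settles, the work is gone.
async function inProcess(work: () => Promise<void>): Promise<void> {
  await work(); // no durability: crash here and nothing remembers the task
}

// Durable queue (simulated): the message outlives the producer's call
// stack, so consumption can happen later, in another process.
const durableQueue: string[] = [];

function publish(message: string) {
  durableQueue.push(message); // in a real system: persisted by the broker
}

function consumeAll(handle: (message: string) => void) {
  while (durableQueue.length > 0) {
    handle(durableQueue.shift()!);
  }
}
```

That durability is what buys the temporal decoupling in the video: the consumer doesn't need to be up, or even exist yet, when the message is published.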