Great video Fredrik! Very easy to follow. I see you are also storing the finalized order in a separate table, not only the events used to build the order. This is necessary in case you want to query two tables through a join (such as orders by customer). I am trying to visualize how best to build a model with several tables to query against. Is this the way you would recommend doing it? That is, to have an event store for orders plus a store for the built orders? And, in the same way, also have an event store for customers plus a store for the finalized customer entry? (I've sketched roughly what I'm picturing below.) Thanks for your input.
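To make the question concrete, here is roughly what I'm picturing — the collection names and shapes are my own invention, not from the video:

```typescript
// Hypothetical layout: one append-only event store plus one view store per entity.
// All names and shapes here are illustrative guesses.

interface OrderEvent {
  orderId: string;
  type: "OrderCreated" | "ItemAdded" | "OrderShipped";
  payload: unknown;
  sequence: number; // position in this order's event stream
}

interface OrderView {
  _id: string;        // orderId, so queries/joins by customer stay cheap
  customerId: string;
  items: { sku: string; quantity: number }[];
  status: string;
}

// order_events    -> append-only stream of OrderEvent
// orders          -> current OrderView, rebuilt from order_events
// customer_events / customers -> the same pattern repeated per entity
```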
After watching this video, I found that MongoDB 3.6 has change streams, which we could use to listen to change events and do something similar to what you have accomplished here.
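For anyone curious, a minimal sketch of this with the official Node driver — the connection string, database, and collection names are placeholders:

```typescript
import { MongoClient } from "mongodb";

// Placeholders: point these at your own deployment.
const client = new MongoClient("mongodb://localhost:27017");

async function main() {
  await client.connect();
  const orders = client.db("shop").collection("orders");

  // Change streams require MongoDB 3.6+ running as a replica set.
  const changeStream = orders.watch();
  changeStream.on("change", (change) => {
    // React to inserts/updates here, e.g. update a read model or publish an event.
    console.log("change event:", change.operationType);
  });
}

main().catch(console.error);
```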
Glad you enjoyed it m8! Streams are a great fit for event sourcing; this approach brings a bit more complexity, but it has some very nice benefits as well. Have a great day and thank you so much for watching!
Thanks a lot for sharing, this video was very useful. Can you make a video implementing CQRS, Event Sourcing, and RabbitMQ with microservices, please? I have read a lot about DDD and CQRS, but I have never seen a real project example.
Great, the best of all the videos I have seen so far. A few doubts: 1. What if the event is saved but the view data isn't stored, say because MongoDB crashed? Since it's not in a single transaction. 2. What if I replay the last step without deleting the data?
As long as you have the events, you can recreate the state of your view, so it becomes transactional even if the database crashes. Think of the events as a long list and the view as what you are left with after repeating all the steps in the list; even if the view is broken or gone, you can always replay all the steps in the list and be left with the same view. If you didn't delete the data before you replayed the events, you would be left with two identical views, but that is less of an issue, since to recreate a clean state you simply drop your view database and run your events again. Have a great day and thank you so much for watching!
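A minimal sketch of what I mean by replaying, with a made-up order reducer — the event and view shapes are only for illustration:

```typescript
// Illustrative only: a tiny reducer that rebuilds an order view by folding
// over its event stream. Event/view shapes are invented for this sketch.

type OrderEvent =
  | { type: "OrderCreated"; orderId: string; customerId: string }
  | { type: "ItemAdded"; orderId: string; sku: string; quantity: number };

interface OrderView {
  orderId: string;
  customerId: string;
  items: { sku: string; quantity: number }[];
}

function reduce(view: OrderView | null, event: OrderEvent): OrderView | null {
  switch (event.type) {
    case "OrderCreated":
      return { orderId: event.orderId, customerId: event.customerId, items: [] };
    case "ItemAdded":
      if (!view) return view; // can't add to an order that doesn't exist yet
      return {
        ...view,
        items: [...view.items, { sku: event.sku, quantity: event.quantity }],
      };
  }
}

// Replaying is just a fold: drop the view, run every event again, same result.
function replay(events: OrderEvent[]): OrderView | null {
  return events.reduce<OrderView | null>(reduce, null);
}
```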
Bulk insert would be very hard when using event sourcing, yes. When I do this at work, we do it by streaming the events over the course of an hour or two (rough sketch below). This works for any size of dataset; it just takes time.
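A rough sketch of the pacing idea — `fetchBatch` and `applyEvent` are stand-ins, and the batch size and delay are arbitrary numbers to tune:

```typescript
// Sketch: stream a large event history in paced batches instead of one bulk
// insert. fetchBatch pages through the event store; applyEvent projects one
// event into the view store. Both are assumptions for this example.

async function streamEvents(
  fetchBatch: (offset: number, limit: number) => Promise<unknown[]>,
  applyEvent: (event: unknown) => Promise<void>,
  batchSize = 1000,
  pauseMs = 500 // breathing room between batches; tune to your load
) {
  let offset = 0;
  while (true) {
    const batch = await fetchBatch(offset, batchSize);
    if (batch.length === 0) break; // stream exhausted
    for (const event of batch) await applyEvent(event);
    offset += batch.length;
    await new Promise((resolve) => setTimeout(resolve, pauseMs));
  }
}
```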
The reducer should at no point contain code that can fail, but if there is an issue with the database connection, we simply ignore the event and either crash or return the unmodified view and log the error, depending on preference. Have a great day and thank you so much for watching!
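In code, the "return the unmodified view and log" option might look roughly like this — `project` is a stand-in for whatever writes the event into the view:

```typescript
// Sketch of the "ignore and log" strategy: if projecting an event fails for
// infrastructure reasons, keep the previous view and record the error.

async function applyEvent<View, Event>(
  view: View,
  event: Event,
  project: (view: View, event: Event) => Promise<View>
): Promise<View> {
  try {
    return await project(view, event);
  } catch (err) {
    console.error("failed to apply event, keeping previous view:", err);
    return view; // unmodified view; alternatively rethrow to crash fast
  }
}
```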
Can't you put it on GitHub and share the link, so we can take a deeper look at the code? If you can, that would be very good for us. Anyway, thanks for the tutorial!