36:10 Colin, you may have hit the jackpot. Check out @thatdot's Quine product: entities, relations and state (late arrivals too), ingestion with backpressure, events in motion (standing queries) that are more than a group-by. Maybe pairing it with NATS would do it.
My take is just on his example of services relying on each other to complete a workflow (the order + credit service). The thing he's missing is the saga pattern. And if they had done their DDD well, they might have realised that some of those services belong in the same bounded context. Eventing in microservices should be broadcast-and-forget; once you have the notion of state 1 > state 2 > state 3, you need another strategy to manage that.
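For readers unfamiliar with the pattern the comment refers to: a saga replaces one distributed transaction with a sequence of local steps, each paired with a compensating action that undoes it if a later step fails. A minimal toy sketch (class and step names are mine, not from the talk or any framework):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy saga orchestrator for the order + credit workflow.
// Names (SagaStep, OrderSaga) are illustrative, not from any framework.
public class OrderSaga {
    // A step pairs an action with its compensation.
    public record SagaStep(String name, Runnable action, Runnable compensation) {}

    // Runs steps in order; on failure, compensates completed steps in reverse.
    public static boolean run(SagaStep... steps) {
        Deque<SagaStep> done = new ArrayDeque<>();
        for (SagaStep step : steps) {
            try {
                step.action().run();
                done.push(step);                               // LIFO for rollback
            } catch (RuntimeException e) {
                done.forEach(s -> s.compensation().run());     // roll back completed steps
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        StringBuilder log = new StringBuilder();
        boolean ok = run(
            new SagaStep("reserveCredit", () -> log.append("credit+"), () -> log.append("credit-")),
            new SagaStep("placeOrder",
                         () -> { throw new RuntimeException("order service down"); },
                         () -> log.append("order-")));
        System.out.println(ok + " " + log); // false credit+credit-
    }
}
```

Production frameworks (e.g. Axon, Temporal) package this orchestration with durable state, but the core shape is just this: forward steps plus reverse compensations.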
"After" structured concurrency? It's only in Incubator at the moment. And while you can certainly reimplement something like Reactor on top of StructuredTaskScope, the multitude of production-ready operators you already have in Reactor for defining your pipeline will look very ugly if you build it up "in place" with StructuredTaskScope. So I beg to differ: Loom is not going to kill it (first time I've ever disagreed with Brian Goetz, ever!)
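To make the "in place" point concrete, here is roughly what a single zip-two-calls step looks like when hand-built: fork two tasks, join both, cancel the sibling on failure. This is my plain-executor approximation of the plumbing that StructuredTaskScope's ShutdownOnFailure (or a one-line Reactor zip) would otherwise handle, not the incubating API itself:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hand-built "zip two concurrent calls" -- illustrative only.
public class InPlaceFanOut {
    public static String zip(Callable<String> a, Callable<String> b) {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        try {
            Future<String> fa = pool.submit(a);     // fork both tasks
            Future<String> fb = pool.submit(b);
            try {
                return fa.get() + ":" + fb.get();   // join both results
            } catch (Exception e) {
                fa.cancel(true);                    // cancel the sibling on failure
                fb.cancel(true);
                throw new RuntimeException(e);
            }
        } finally {
            pool.shutdownNow();                     // the "scope" closes here
        }
    }
}
```

Multiply this by retries, timeouts, and backpressure per pipeline stage and the verbosity argument against doing it all in place becomes clear.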
What I don't understand about event sourcing is how you maintain the current state for millions of different entities. Are you basically committing it to a database by consuming the 'added' command? Or is it held in memory and recalculated every time the server restarts by reading the whole stream? And if it's committed to a database, when would you replay the whole stream?
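The usual answer is the first option: a projection consumes the events and keeps a current-state read model in a database, so nothing is recomputed per request, and a full replay is only needed to (re)build a projection. A toy in-memory sketch (names are illustrative, maps stand in for real stores):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy projection: the event log is the source of truth; a consumer folds
// events into a current-state table (a database in real systems), so state
// is kept up to date incrementally rather than recomputed per request.
public class BalanceProjection {
    public record Deposited(String account, long amount) {}

    public final List<Deposited> log = new ArrayList<>();     // append-only event log
    public final Map<String, Long> current = new HashMap<>(); // "read model" table

    public void append(Deposited e) {    // write side
        log.add(e);
        apply(e);                        // projection keeps the read model current
    }

    void apply(Deposited e) {
        current.merge(e.account(), e.amount(), Long::sum);
    }

    // Replay is only needed to (re)build the read model, e.g. after changing
    // its schema -- not on every restart, if the read model is persisted.
    public void rebuild() {
        current.clear();
        log.forEach(this::apply);
    }
}
```

So "current state for millions of entities" lives in the read store; replaying the whole stream is a rebuild tool, not a per-restart cost.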
Another thing: these abstractions are great, but I don't know this framework well. I have experience with Spring; is there any similar abstraction there for event sourcing?
So correct me if I'm wrong: a good event sourcing implementation includes both the EVENTs and a snapshot of the STATE, so that if required we can query and replay historical events up to the point of failure?
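That's the common setup: the events are the record of truth and snapshots are a recovery optimization, so restarting loads the latest snapshot and replays only the events recorded after it. A minimal sketch of that recovery step, with events reduced to plain numeric deltas for brevity:

```java
import java.util.List;

// Toy snapshot + replay: recover state from the latest snapshot plus the
// events appended after it, instead of replaying the whole history.
public class SnapshotRecovery {
    // A snapshot records the state as of a given event-log version.
    public record Snapshot(long version, long state) {}

    // Fold the post-snapshot events (here: plain deltas) on top of the snapshot.
    public static long recover(Snapshot snap, List<Long> eventsAfterSnapshot) {
        long state = snap.state();
        for (long delta : eventsAfterSnapshot) state += delta;
        return state;
    }

    public static void main(String[] args) {
        // Snapshot taken at version 100; only events 101.. need replaying.
        Snapshot snap = new Snapshot(100, 40L);
        System.out.println(recover(snap, List.of(1L, 1L))); // 42
    }
}
```

Querying "state as of version N" works the same way: start from the nearest earlier snapshot and stop folding at N.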
I've never heard stateful vs. stateless described this way, which is basically just caching vs. not caching. More commonly, stateless refers to the state being carried around on the requests/responses, so that there's no need to retrieve state on an arbitrary instance, unlike stateful, where the instance must either hold the state or retrieve it from a central location.
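The distinction the comment draws fits in a few lines: a hypothetical handler pair, where the stateful one depends on instance-local session data (so the request must reach the right instance) and the stateless one gets everything it needs from the request itself:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative contrast only; handler names and the "user=" request
// format are made up for the example.
public class StatelessVsStateful {
    // Stateful: instance-local session storage.
    static final Map<String, String> sessions = new HashMap<>();

    // Breaks if the request lands on another instance that lacks this map.
    public static String statefulHandle(String sessionId) {
        return "hello " + sessions.get(sessionId);
    }

    // Stateless: the caller carries the state (e.g. token claims) on every
    // request, so any instance can serve it.
    public static String statelessHandle(String requestState) {
        return "hello " + requestState.substring("user=".length());
    }

    public static void main(String[] args) {
        sessions.put("s1", "ada");
        System.out.println(statefulHandle("s1"));       // hello ada
        System.out.println(statelessHandle("user=ada")); // hello ada
    }
}
```

Same response, but only the stateless version is safe to route to an arbitrary instance without sticky sessions or a central session store.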
I expected a talk about actors and modern uses of them; instead I got something else. If you want, go re-watch this talk: there are more buzzwords in it than I could count. Every slide is an assertion, with disconnected conclusions drawn from that assertion and absolutely no facts, code, or explanation. Even the Q&A section was a mess: "Professor, how do we deal with X and Y when Z exists?" Carl: "Well, we have Y and X, so Z isn't a problem." Every answer he gave was just a rephrasing of the question, after which he acted like he had answered it! This talk was a Markov chain of buzzwords, and the Q&A section was the same chain with a different random seed.
1:52 distributed transactions are a different game
1:59 resource manager
3:04 in practice, life is more challenging than that
3:16 configuration is hard
4:33 reason: the CAP theorem
4:58 important paper: "Life Beyond Distributed Transactions"
5:43 CQRS/ES to the rescue
7:37 only the Durability in ACID is valid
Interesting point about the problem of clustering at 17:23. Erlang has a global registry where you can find the correct process across the cluster; pretty similar.
OK, I'm all for not just forgiving but enjoying and encouraging diversity of accents, but... c'mon man! "SKEDjeweler" for "schedule-er"? Anyway, awesome presentation, all kidding aside.
Very useful, detailed presentation on Akka actors and Akka Streams. If you can, please keep in touch for future online meetings so I can understand more about the system. furqan.cloud.dev@gmail.com
A great introduction to Akka Streams from the trenches! One question, though: if data loss is to be avoided at all costs, then why rely on the assumption that processing of in-flight messages will succeed? An Akka Streams process can be abruptly terminated by a multitude of external factors, most trivially hardware failure. Since you're consuming from Kafka, why not use explicit offset confirmation as Alpakka Kafka supports, and only commit the Kafka offset after the HTTP message has been successfully sent? That way, unconfirmed in-flight messages would be refetched from Kafka and reprocessed. Of course, that only works if the external endpoints are idempotent, but that's a reasonable assumption; even if they weren't, there would be other ways to work around it.
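The commit-after-success idea the comment describes can be modeled in a few lines. This is an in-memory stand-in for the pattern, not the Alpakka Kafka API: the consumer's offset advances only after the downstream call succeeds, so a message that was in flight during a crash is redelivered on restart (which is exactly why the endpoints need to be idempotent):

```java
import java.util.List;
import java.util.function.Predicate;

// Toy at-least-once consumer: commit the offset only after the downstream
// send succeeds. In-memory stand-in, not the Alpakka Kafka API.
public class CommitAfterSuccess {
    public long committedOffset = 0;                // durable consumer position

    // Process from the committed offset; stop at the first failed send.
    public void poll(List<String> topic, Predicate<String> httpSend) {
        while (committedOffset < topic.size()) {
            String msg = topic.get((int) committedOffset);
            if (!httpSend.test(msg)) return;        // failure/crash: offset not moved
            committedOffset++;                      // confirm only after success
        }
    }
}
```

A restart simply calls poll again: the unconfirmed message is refetched and reprocessed, trading possible duplicates for guaranteed no loss.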