A while ago I had an issue open related to using embedded servers as part of a cluster. I didn't manage to get it working, but I'd be interested in what a working setup could look like. Using a leaf node may even be the answer for me.
Great video! This is the exact use case we have, but for our business model the price per leaf node connection in NGS is sadly not feasible. What would you suggest in this case?
I have been using an embedded NATS server for a couple of years now. It is one of the great features of NATS. What I was missing was how to connect it to a NATS cluster, and your video answered my question very nicely. It is a very powerful construct that can replace a lot of infrastructure setup and have microservices talk to each other in a clean, transparent, unified way. Thanks!!
My target design is effectively a blend of this and Natster where you end up with a peer-to-peer desktop application mesh of sorts. We'll see how that works out over the coming months.
Embedding lets you write software that scales from the small to the large. If you are writing a NATS-based API, your users can start with a single static binary, move to HA with clustered+embedded, then scale out to externalized nats-server clusters. Giving people a way to try stuff out with minimal infra is a killer feature, imo. It's kind of like how Gitea (git server) or Prosody (XMPP server) support SQLite: it's sufficient for small/medium deployments, and that's the proving ground for introducing new tech into your organization. etcd supported embedding early on, too, and while it's hard to measure, I think that partially accounts for its success.
This is very powerful. I like modular monoliths; spinning up my API servers globally with NATS included makes life so much simpler. It comes with a unified KV store: put one key on one API server and it syncs to all of them. Life has never been easier.
What does it look like if I want to deploy an embedded NATS server in an autoscaling k8s environment? Is there a leader? How does the cluster get bootstrapped? My idea is that we'd autoscale the pods, and the leader would poll for work to be done, distributing it via a queue group.
This is awesome. I've mainly been using embedded servers as a way to do local development and testing; I hadn't thought of using them for offline-first applications. My thinking was: local -> embedded server, production -> remote server. Do you have more content on leaf nodes and their applications?
Is there a way to control when/if the server makes the leaf node connection? We have a use case that needs to control when we connect back to sync data.