Man, absolutely brilliant. Really looking forward to more videos like this. Also, if you could create a project starting from this high-level system design all the way to deployment, it would be massive.
Probably would have been worth going over data exchange between services a bit more. Yes, delivery is guaranteed, but it's most likely at-least-once delivery, so the services need to be able to handle duplicate events. Also probably worth discussing the web server and, e.g., load balancing.
Why don't we have a full-fledged microservices course for building real-life microservices using NestJS/Node.js, Kafka, SQL databases, and all the other required tech? I'm sure it would rock. You can consider this a request from my side.
Sorry for the stupid question, but in this case would the VPC be a Docker Swarm cluster or a Kubernetes cluster, or am I misunderstanding it? (Keep in mind I'm going in kind of blind; I've been working with monolithic apps for two years in school without putting much into practice.)
Massive thanks for sharing your knowledge and delivering it in your beautifully unique way! Quick question: what drawing tool are you using in this video? Anyone who knows is welcome to answer, in case I missed it somewhere.
Nice video, Tom! I did have some questions. You mention having a single load balancer for all services, but I have usually seen it done with multiple load balancers, allowing for SSL termination at each and thus zero trust. Is there a reason to go with a single LB other than reduced complexity? Also, why not use a daemon to export metrics and logs, such as a Datadog agent? Having to set up metrics endpoints feels like a lot of work; what are the benefits over the daemon?
Any chance you can resurrect your Express.js tutorial from a couple of years ago and apply it to this? Also, what diagramming app is that? It's super cool.
I'm a little bit confused about the connection between the BFF and the web server. When a request comes in, does the BFF call the web server? And for the response, I assume services like the product service call the BFF directly?
The BFF makes requests to the downstream services, and all requests to those services go through the web server. All requests from the UI to the BFF also go through the web server, because the BFF is just a service like the others and the web server is the only way to reach it from the internet.
Why wouldn't you just log (info, error, debug, etc.) in the services and hook up something like Prometheus to CloudWatch Logs? It feels weird having a /metrics endpoint in each service.
You're also logging a bunch of other stuff, so it would be difficult for Prometheus to know what to ingest and what to ignore. The metrics endpoint is much easier and cheaper.
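To show how little work the endpoint actually is, here's a sketch of what a service would serve at GET /metrics. The counter names are illustrative, not from the video; the format is Prometheus's plain-text exposition format:

```typescript
// Sketch of a Prometheus-style metrics endpoint: the service keeps counters
// in memory and renders them as plain text when scraped. There's no need to
// fish metrics out of a general log stream.
const counters: Record<string, number> = {
  http_requests_total: 0,
  http_errors_total: 0,
};

function recordRequest(ok: boolean): void {
  counters.http_requests_total += 1;
  if (!ok) counters.http_errors_total += 1;
}

// This string is what GET /metrics would return; Prometheus scrapes it on a
// schedule and parses each "name value" line.
function renderMetrics(): string {
  return Object.entries(counters)
    .map(([name, value]) => `${name} ${value}`)
    .join("\n");
}

recordRequest(true);
recordRequest(false);
```

Wiring `renderMetrics()` to an HTTP route (or just using an existing Prometheus client library) is the whole integration; the scraper pulls, the service never pushes.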
This was awesome! If it's not too much to ask, would it be possible to create a small application where you show us how to configure all of this together? Thank you.
A caching layer is missing (maybe more important than a BFF?), and from the diagram it's really hard to understand how horizontal scaling comes into play, but I love the video.
I didn't add a caching layer because caches should only be used to solve a specific problem; I don't think it's a good idea to shove caches in just because they exist. But yeah, most implementations would have a cache somewhere.
@@TomDoesTech I see your point, but I have a hard time sharing it. Caching, in one way or another (it can happen at so many levels of your infrastructure that sometimes you don't even know it's there), is always required in large-scale applications: it brings costs down so much that you can't afford not to have it.
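For anyone following along, the pattern usually meant in this exchange is cache-aside. A minimal sketch, where `fetchProductFromDb` is a made-up stand-in for a real query:

```typescript
// Cache-aside sketch: check the cache first, fall back to the source of
// truth on a miss, then populate the cache for the next reader.
const cache = new Map<string, string>();
let dbHits = 0;

function fetchProductFromDb(id: string): string {
  dbHits += 1; // counting backend hits shows what the cache saves us
  return `product:${id}`;
}

function getProduct(id: string): string {
  const cached = cache.get(id);
  if (cached !== undefined) return cached; // cache hit, no DB work
  const value = fetchProductFromDb(id);
  cache.set(id, value);
  return value;
}

getProduct("42");
getProduct("42"); // second read is served from cache; the DB is hit once
```

The hard part the video's caution alludes to isn't this read path, it's invalidation: deciding when `cache.set` entries go stale, which is exactly the "specific problem" you should have before adding one.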
I have a question: how would authentication and authorization work here? Would that be the job of the BFF? Also, how do you test the microservices, and how can you run them locally if they have dependencies on, say, an auth/permissions service and on Kafka for sending and receiving events?
Auth would be handled by one of the microservices, in this case probably the user API. There are different strategies for developing locally and testing. Mostly you develop one service at a time and connect to the other services running in a staging/dev environment. You can also run services like Kafka locally; it's easy with Docker.
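A rough sketch of the auth split being described: the user service issues a token on login, and the other services check it before doing work. Everything here is illustrative — real systems would use signed JWTs or a shared session store rather than an in-memory map:

```typescript
// Hypothetical user-service auth sketch: issue an opaque token at login,
// then let any service check the token and its roles before acting.
type Session = { userId: string; roles: string[] };

const sessions = new Map<string, Session>();

function login(userId: string, roles: string[]): string {
  const token = `tok-${sessions.size + 1}`; // stand-in for a random token
  sessions.set(token, { userId, roles });
  return token;
}

// What a downstream service (or the web server) would do per request.
function authorize(token: string, requiredRole: string): boolean {
  const session = sessions.get(token);
  return session !== undefined && session.roles.includes(requiredRole);
}

const token = login("u-1", ["customer"]);
```

With opaque tokens every check is a call back to the user service; signed JWTs trade that round trip for local verification, at the cost of harder revocation.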
@@TomDoesTech In the interviews I attend, I often get questions like "what tools have you used for HLD and LLD?" Can you please help me with an answer for this?
Better terminology for the BFF is "API gateway": the API gateway is aware of where each service is located and automatically routes requests to the right service/server and back to the user interface. I hope this is clear.
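"Aware of where each service is located" boils down to a routing table. A sketch with made-up service addresses:

```typescript
// Gateway routing sketch: map path prefixes to upstream service addresses.
// The addresses are illustrative; in practice they'd come from config or
// service discovery.
const routes: Array<{ prefix: string; upstream: string }> = [
  { prefix: "/products", upstream: "http://product-service:3001" },
  { prefix: "/orders", upstream: "http://order-service:3002" },
];

// Given an incoming request path, pick the upstream to proxy it to.
function resolveUpstream(path: string): string | undefined {
  return routes.find((r) => path.startsWith(r.prefix))?.upstream;
}
```

An unmatched path returning `undefined` is where the gateway would send a 404, or fall through to a default backend.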
@@TomDoesTech To be honest, it was a bit hard to infer that, as you didn't mention it at all. I really appreciate your work, but missing points like this can lead someone to implement the architecture the wrong way. Thanks for your great work!
This design requires a lot of CPU and RAM, so it will cost a lot of money. It's a nice summary of microservices, but most use cases can be handled without any of this. Nevertheless, I liked your content. Thanks.
Hey Tom and everyone, I'm new to microservice system design and I'm wondering: we've already accounted for a request hitting our web server and being load-balanced/routed to the necessary service, so why do we need BFFs, and why do they sit between the user interfaces and the services rather than just being another request path? Also, if possible, recommend a resource I can use to understand these concepts better. Thanks.
The BFF makes the UI simpler to develop. The BFF's requests do go through the web server; all requests going to the services, including those to the BFF, go through the web server.
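To make "simpler to develop" concrete: the BFF aggregates several downstream calls into one response shaped for the page. The service calls below are stubs with made-up data; real ones would be HTTP requests routed through the web server:

```typescript
// BFF aggregation sketch: one endpoint fans out to two downstream services
// in parallel and returns a single payload shaped for the UI.
async function getProduct(id: string): Promise<{ id: string; name: string }> {
  return { id, name: "Widget" }; // stub for GET /products/:id
}

async function getReviews(_productId: string): Promise<string[]> {
  return ["great", "works fine"]; // stub for GET /reviews?product=:id
}

// The UI makes one request here instead of making (and joining) two itself.
async function productPage(id: string) {
  const [product, reviews] = await Promise.all([getProduct(id), getReviews(id)]);
  return { ...product, reviewCount: reviews.length };
}
```

Without the BFF, every client (web, mobile, etc.) repeats that fan-out and join logic; with it, the join lives in one place per frontend.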