My take, having done this for a while and lived through both the initial hype and the more measured approach I see today, is that microservices for the sake of microservices are really bad. What you usually end up with is higher overhead: more CI/CD pipelines, more points of failure, more packages to manage, more servers to maintain. I feel like the movement was a backlash against what came before, when everything was monolithic and it genuinely was harder to work in codebases with hidden interdependencies and more widespread control/interest.

The trend I'm seeing on the ground is a lot more caution. Microservices are being treated more as a tool in the belt than as the solution to everything, the way they were years ago. Now that infrastructure has become cheap and easy to get, I see more teams treating their architecture the way they treat their code: building modular applications that can talk to each other or do one thing well. The other thing I'm seeing is a lot more monorepos, and segregation of the application by infrastructure rather than by code. Such as worker services sharing the application code, or workers capable of picking up many types of jobs.

So before going to a microservice architecture, I want the project to have a couple of attributes that tell me I need to split it out:

1. The service can be used by multiple applications in our organization. Think SSO.
2. There are specific tools in another language that aren't available in my application's. Think data science in Python, or a need for performance in C/C++.
3. There's a job too large for my client-facing applications to handle. PDF and report generation, long-running jobs, etc.
4. There's a security need that forces me to segregate something. Like a cold crypto wallet or credentials.
5. There's a bottleneck that has to be managed. Such as building a serverless application on Lambda with a relational database: thousands of Lambdas hitting that database will max out its connections. The same goes for vendor API limits.
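The "workers capable of picking up many types of jobs" idea above can be sketched roughly like this: one worker process, dispatching on the job's type rather than one service per job. The job names and handlers here are hypothetical, purely for illustration.

```python
# One generic worker, many job types: dispatch on the job's "type" field.
# Handlers and job shapes are made up for the sketch.

def generate_report(payload):
    # In a real worker this would build and store a report.
    return f"report for {payload['account']}"

def render_pdf(payload):
    # In a real worker this would render and upload a PDF.
    return f"pdf of {payload['doc']}"

HANDLERS = {
    "generate_report": generate_report,
    "render_pdf": render_pdf,
}

def handle(job):
    handler = HANDLERS.get(job["type"])
    if handler is None:
        raise ValueError(f"unknown job type: {job['type']}")
    return handler(job["payload"])

print(handle({"type": "render_pdf", "payload": {"doc": "invoice-42"}}))
```

The point of the pattern is that adding a new job type is a new entry in the dispatch table, not a new deployment pipeline.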
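On point 5, a minimal sketch of the usual mitigation: create the database connection once per warm Lambda container (at module scope) and reuse it across invocations, instead of opening a fresh connection on every call. `sqlite3` stands in here for a real driver like `psycopg2` so the sketch is self-contained; in practice you'd point this at RDS, often through a proxy that pools connections.

```python
import sqlite3

_conn = None  # created once per container, reused by every warm invocation

def get_conn():
    global _conn
    if _conn is None:
        # Real code would connect to the database (or a connection proxy)
        # here; sqlite3 in-memory is just a runnable stand-in.
        _conn = sqlite3.connect(":memory:")
    return _conn

def handler(event, context=None):
    conn = get_conn()          # no new connection on warm invocations
    cur = conn.execute("SELECT 1")
    return cur.fetchone()[0]
```

Even with reuse, concurrency is bounded by the database's max connections, which is exactly the bottleneck that can justify splitting data access out behind its own service.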