
Google Cloud Run (GCR) vs Azure Container Instances (ACI) vs AWS ECS with Fargate 

DevOps Toolkit

Should we use managed Containers as a Service (CaaS)? That must be the most crucial question we should try to answer. Unfortunately, it is hard to provide a universal answer since the solutions differ significantly from one provider to another. CaaS can be described as a wild west, with solutions ranging from amazing to useless.
Before we attempt to answer the big question, let's go through some of the things we learned by exploring Google Cloud Run, AWS ECS with Fargate, and Azure Container Instances.
DevOps Catalog, Patterns, And Blueprints: www.devopstoolkitseries.com/p...
Books and courses: www.devopstoolkitseries.com
Podcast: www.devopsparadox.com/
Help us choose the next subject for the course by filling in a survey at www.devopsparadox.com/survey

Science

Published: 12 Jul 2024

Comments: 39
@attainconsult
@attainconsult 3 years ago
Great summary. Basically the same conclusion we have reached.
@LuisGestosoMuñoz
@LuisGestosoMuñoz 16 days ago
This video is just amazing. It solved all my doubts in 16 mins after 2 hours of searching the internet without finding clarity. Thanks a lot
@AzeezAbass
@AzeezAbass 3 years ago
You have explained in 16 mins what would have taken me hours to figure out. Thank you! Excellent content.
@DevOpsToolkit
@DevOpsToolkit 3 years ago
Glad it helped!
@marcpinke9413
@marcpinke9413 3 years ago
Very well thought out. Thank you for taking the time to present this.
@GUIHTD
@GUIHTD 2 years ago
Very well spoken. Clear and concise. Thank you!
@biropa04
@biropa04 3 years ago
I watched some YouTube Cloud Next '18–'20 presentations and now I understand this. Thanks, you are great! I now know Cloud Run adds a server after every 100 requests auto-magically.
@DevOpsToolkit
@DevOpsToolkit 3 years ago
Actually, by default, it adds an instance of the application (pod) for every 100 concurrent requests. How many pods run on a server depends on the server's capacity and the requested CPU and memory.
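For illustration, here is a minimal sketch of how that concurrency setting can be tuned when deploying to Cloud Run; the service name, image, and region below are placeholders rather than anything from the video.

```sh
# Each Cloud Run instance handles up to --concurrency requests at once
# (100 is the default) before another instance is added, up to --max-instances.
gcloud run deploy my-service \
  --image=gcr.io/my-project/my-app \
  --region=us-east1 \
  --concurrency=100 \
  --max-instances=10
```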
@viciouz25
@viciouz25 1 year ago
Great explanation. Thank you. I think with AWS Fargate for ECS, we can consider this serverless.
@MohamedAmineJALLOULI
@MohamedAmineJALLOULI 3 years ago
Nice comparison... :)
@mikeparaskevopoulos9749
@mikeparaskevopoulos9749 3 years ago
Great video and article. I was wondering, though, since this is already 3 months old, did AWS change something? They are claiming you only pay for what you use (scale to zero). Subbed!
@DevOpsToolkit
@DevOpsToolkit 3 years ago
They are indeed charging for what you use, but what you need to use is a lot. Personally, I think that ECS is a dead end that AWS does not want to drop because the alternative is Kubernetes (in one form or another) and AWS prefers selling services based on its own proprietary tech.
@JackReacher1
@JackReacher1 1 year ago
What are your views on observability when using ECS Fargate to deploy your apps? What would be the preferred way: Container Insights, managed Prometheus, self-hosted Prometheus, or something else?
@DevOpsToolkit
@DevOpsToolkit 1 year ago
It's been a while since I used ECS, and even when I did, it wasn't any serious setup. For one reason or another, it never became a thing for me. So, I'm not sure whether it even works with Prometheus and, if it does, whether that is on par with the experience we get from using Prometheus in Kubernetes.
@JackReacher1
@JackReacher1 1 year ago
I understand. I think I will apply to the open jobs at Upbound. I really like the way you think about DevOps, and I would really like to work with you. The only issue is that I would need visa sponsorship to come to the EU. Does your company provide visa sponsorship?
@DevOpsToolkit
@DevOpsToolkit 1 year ago
We are a fully remote company so there's no work-related reason for anyone to move anywhere 🙂 Please apply.
@bobuputheeckal2693
@bobuputheeckal2693 1 year ago
Can you do a video on serverless vs. containerized clustering? What are the best use cases for each approach?
@DevOpsToolkit
@DevOpsToolkit 1 year ago
Adding it to my TODO list... A quick note in the meantime... Serverless solutions can use, and are using, containers. Lambda now supports containers, and Google Cloud Run and Azure Container Apps were designed to be based on containers from day one. Knative is yet another serverless solution that, unlike the others I mentioned, is self-managed and runs in your clusters. What I'm trying to say is that serverless and containerized solutions (including those running in your clusters) are not necessarily different but often the same.
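As a hedged illustration of the "Lambda now supports containers" point, here is a sketch of registering a Lambda function from a container image; the function name, image URI, and role ARN are placeholders.

```sh
# Register a Lambda function whose code is a container image already pushed to ECR.
aws lambda create-function \
  --function-name my-containerized-fn \
  --package-type Image \
  --code ImageUri=123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest \
  --role arn:aws:iam::123456789012:role/my-lambda-role
```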
@bobuputheeckal2693
@bobuputheeckal2693 1 year ago
@DevOpsToolkit Understood. But please explain when to choose Containers as a Service vs. a managed Kubernetes cluster. Like Google Cloud Run vs. GKE.
@DevOpsToolkit
@DevOpsToolkit 1 year ago
@bobuputheeckal2693 Serverless (ignoring the price) is mostly for stateless apps. Now, it's hard to say more, simply because the question is invalid. Apps deployed to GKE using Knative are serverless apps. Now, if the question is when to use managed (e.g., GCR) instead of self-managed services (e.g., Knative), the answer is "almost always managed unless it's too expensive". That answer is equally valid for serverless as for almost everything else.
@DevOpsToolkit
@DevOpsToolkit 1 year ago
The thing is that Google Cloud Run uses Knative, so if you run Knative in a Kubernetes cluster yourself, you are getting a self-managed serverless solution similar to GCR. Now, there are many types of serverless apps. Lambda, for example, is a functions-as-a-service type of serverless that is suitable for relatively small workloads. On the other hand, Azure Container Apps, GCR, and Knative are containers-as-a-service types of serverless that are suitable for almost any stateless app. I recommend trying Knative in your Kubernetes cluster. Given that it is unlikely that all your apps are suitable to be serverless, you need a Kubernetes cluster anyway. From there on, you can choose to move from Knative to GCR and switch from a self-managed to a managed type of service.
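To make "trying Knative in your Kubernetes cluster" concrete, here is a minimal sketch using the kn CLI and the public Knative hello-world sample image, assuming Knative Serving is already installed in the cluster.

```sh
# Deploy a sample service to a cluster that already runs Knative Serving.
kn service create helloworld \
  --image gcr.io/knative-samples/helloworld-go \
  --env TARGET="Knative"

# The service scales to zero when idle and back up on the next request.
kn service list
```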
@Mophead64
@Mophead64 3 years ago
Working on a test automation project at the moment; spent the last few days learning ACI and it just feels sluggish. My desired workflow is to spin up X number of containers via a script, which will exit and be removed at the end of their lifecycle. Unfortunately, Azure can't take care of this last bit, which is the first annoying thing. I think there's a way to have an Azure Function watching for terminated ACI containers, but it's just mental that you can't do this on ACI. Limits are also a thing: you have a max of 100 container instances per resource group, then 60 containers per container instance. I currently have 66 containers that I want to run, so I have to figure out a way to split them up, or just spin up a container instance per container, which limits me at 100. Deployment time! Man, this is a pain. I'm sequentially deploying containers at the moment with the az CLI, and it's so freaking slow: about 1-2 minutes per instance spin-up. I honestly thought I'd get a good CaaS experience with Azure, but it's just bollocks. My org has me locked in with either Azure or AWS; even if ECS is more complex initially, hopefully it'll fit my needs.
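For what it's worth, a hedged sketch of the kind of loop described above, using the az CLI's --no-wait so the script does not block on each instance; the resource group, names, and image are placeholders.

```sh
# Kick off several ACI container groups without waiting for each one to
# provision, then list them afterwards to see which have terminated.
for i in $(seq 1 10); do
  az container create \
    --resource-group my-test-rg \
    --name "test-runner-$i" \
    --image myregistry.azurecr.io/test-runner:latest \
    --restart-policy Never \
    --no-wait
done

az container list --resource-group my-test-rg --output table
```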
@DevOpsToolkit
@DevOpsToolkit 3 years ago
The only CaaS (among those I used) that is truly great is, in my opinion, Google Cloud Run. ECS is OK, but too complex and not really designed to leverage all the benefits we get today from schedulers. ACI is too simple for any serious usage. It is, essentially, a way to run Docker containers without scheduling, scaling, or any of the other good things we expect today. What might be interesting is Lambda with containers or ECS with Copilot. I'll publish reviews/tutorials for both soon. From what I gathered in your description, Lambda with containers might be the right choice since ECS is more about long-running processes and you are looking for one-shot executions (if I understood it correctly).
@AndreasWest
@AndreasWest 2 years ago
Victor, you're hammering down on the point that it's impossible with Fargate to stop all instances; I get that. But you left out the point of cold starts completely in your review. Yes, it's nice to have no service running and hence not to pay. But as you can't predict when your backend is used, it will surely create problems and a bad UX when the cold start is too long. Can you please comment on how long it will take for GCR to go from 0 instances to 1? Thank you
@DevOpsToolkit
@DevOpsToolkit 2 years ago
You're right. I should have explained the implications of a cold start better. I'll do my best to be more explicit next time. TL;DR: There is a cold-start penalty that can differ greatly depending on the app packaged in a container image. Starting a container is almost instant, but starting an app can take any amount of time. Now, that's typically not a big deal. Services that create a replica for each request (e.g., GCF, Lambda, etc.) have a cold-start issue for every request (even though they can keep it "warm"). Cloud Run is based on the idea that a replica can handle multiple requests (100 by default; configurable), so the cold-start penalty applies only to the cases when no one (zero requests) is using the app for a period of time. In those cases, the cold start is often not an issue. A high-traffic, high-response type of app does not tend to drop to 0 requests over a period of time. All that being said, having 0 replicas is not mandatory, but only one of the options. One can configure it to have a minimum of 1 replica in prod and, let's say, a minimum of 0 replicas in dev environments.
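As a minimal sketch of the "minimum of 1 replica in prod, 0 in dev" option; the service names and region are placeholders.

```sh
# Production: keep at least one warm instance so there is no cold start.
gcloud run services update my-service --region=us-east1 --min-instances=1

# Dev: allow scaling to zero and accept the occasional cold start.
gcloud run services update my-service-dev --region=us-east1 --min-instances=0
```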
@AndreasWest
@AndreasWest 2 years ago
@DevOpsToolkit Thanks for outlining this. The advantage of zero cost when going down to 0 instances has to come with some disadvantage. With what you highlighted, it's now clearer, and I think it shows that Fargate isn't too far off GCR for those use cases where someone cannot afford the cold-start penalty or will never reach the 0-instances state due to 24x7 uptime/requests (with some grace period where GCR keeps it warm). Don't get me wrong, I'm interested in exploring GCR as an alternative for some Golang functions I'm adding that don't justify the same Fargate setup I'm using for the NodeJS backend service I've been running successfully for years. And yes, the setup of Fargate is far more complicated and takes more time than GCR or even AWS Lambda itself for FaaS.
@adrianblazquez6322
@adrianblazquez6322 3 years ago
Thanks Victor :) I have a doubt. If someone has to deploy an entire service (processing + database; for example, I'm prototyping an Odoo deployment for my master's thesis), how does the comparison change if a DB is needed too? I mean, Cloud Run seems the best option, but GCP does not have a serverless DB. And as far as I've understood you and the other comments, Fargate is so complex to deploy that it is not worth choosing the Fargate + Aurora Serverless path to save money (also, because Fargate needs a load balancer, this option is in total much more expensive than a dedicated Cloud SQL instance). Isn't it?
@DevOpsToolkit
@DevOpsToolkit 3 years ago
You shouldn't use GCR for DBs, while ACI is not good for any "serious" usage. Generally speaking, you want to choose what best fits an application type. That often means that you do not use the same platform for everything. If your application is stateless, some form of serverless might be a good option. You can make DBs serverless as well, but that is too risky and complicated if you are managing it yourself. For DBs, the best option (ignoring the cost) is always to use a DB as a service provided by a vendor. Let others figure out how to make it work, be HA, etc. If that's not an option, you can run it either on a VM or inside a Kubernetes cluster. The latter option is better (if a DB is designed to run in Kubernetes), but it requires a certain level of experience with k8s. Ultimately, it is not about finding a platform that fits all use cases, but about picking one or another depending on the type of the resource and the expectations. Combining serverless with other modes of running stuff is perfectly normal and welcome.
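For example, a hedged sketch of the "DB as a service" route on GCP; the instance name, version, tier, and region are placeholders, not a recommendation for production sizing.

```sh
# Let the vendor run the database: a small managed PostgreSQL instance on Cloud SQL.
gcloud sql instances create my-odoo-db \
  --database-version=POSTGRES_14 \
  --tier=db-g1-small \
  --region=us-east1
```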
@adrianblazquez6322
@adrianblazquez6322 3 years ago
@@DevOpsToolkit Thanks Viktor!
@bobuputheeckal2693
@bobuputheeckal2693 1 year ago
Azure has Web Apps, and you can use a Docker image to run it.
@DevOpsToolkit
@DevOpsToolkit 1 year ago
Since I published that video, Azure released Container Apps, which is similar to, and in some ways better than, GCR.
@adrianblazquez6322
@adrianblazquez6322 3 years ago
Hi Viktor! I have another doubt: what is your opinion regarding the Alibaba Cloud Elastic Container Service? From your point of view, is it as production-ready as Google Cloud Run? Do you have any video on that, or are you planning one? I just discovered this service, and I'm wondering whether, in terms of price and production readiness, it is worth migrating from Cloud Run to it. I've even found a database service with a pay-as-you-go mechanism. Have you used it? I haven't found bad comments about these services on the web, so if anyone can provide their opinion, I'd appreciate it :)
@adrianblazquez6322
@adrianblazquez6322 3 years ago
And what about IBM Cloud Code Engine?
@DevOpsToolkit
@DevOpsToolkit 3 years ago
I haven't used it. To be more precise, I used Alibaba, but not their Container Service. I'll add it to my TODO list...
@DevOpsToolkit
@DevOpsToolkit 3 years ago
Adding that one to my TODO list as well. You are very good at figuring out what I haven't used :)
@2007dinand
@2007dinand 3 years ago
Why was Azure AKS not considered?
@DevOpsToolkit
@DevOpsToolkit 3 years ago
AKS is Kubernetes as a service, just like EKS and GKE. That video is about Containers as a Service, which means that the underlying infra and everything else is abstracted away.
@user-qr4jf4tv2x
@user-qr4jf4tv2x 1 year ago
AWS doesn't like scale to zero.