Victor, as always, thank you for the video. Just a note here: when using Compositions that consist of multiple resources, we also cannot use the metadata name, so the external-name annotation is the only option there. Thank you for confirming that we are doing it right :)
A-ha! This helps me explain some behavior. I had been playing with an early PoC last year to deploy some lambdas via a Helm chart implementing Crossplane manifests. In the process of testing, you naturally destroy and recreate the objects a lot. For one test I intended to do a full reset, so I destroyed all the cluster objects related to the lambdas to redefine them. I had forgotten to do a proper delete first so the existing objects would be destroyed, leaving them orphaned in AWS. I was surprised when syncing the chart that all the objects re-identified right away. Now I understand that this was the intended behavior all along. This gives me a lot of new ideas about pulling our existing custom Terraform into reusable charts. Very exciting.
This sounds like a dangerous idea: taking over an existing resource with possibly incomplete configuration on Crossplane's side. The approach chosen by "terraform import" is much better IMHO.
Thanks again for another awesome and informative video. Just one query: Is there a way to let Crossplane know about all the resources (500+) that were provisioned and managed by Terraform? Any easy way to migrate everything or at least 80% of resources from Terraform to Crossplane, please?
I don't think there is an easy way. Even if there were, I would not recommend moving at that scale, since that would result in using Crossplane as if it were Terraform instead of creating Compositions. Realistically, if you do adopt Compositions, many of those resources will be repeated across Claims (Composition instances), so you might want to codify patterns (as Compositions).
@DevOpsToolkit Thanks a lot for this suggestion. So it will likely involve refactoring the infrastructure into reusable templates (Compositions) instead of managing each individual resource one by one. And rather than trying to migrate 100% of the Terraform resources, we should identify the patterns in the infrastructure that can be abstracted into Compositions, which would cover a large portion of our use cases (perhaps 80%, I hope).
Do you have a video on how to migrate claims from one apiVersion to another? I want to introduce a breaking change to my XRD but, from my own experiments, it's not straightforward.
It's very nice, but for large environments the process of declaring every existing object from my provider in Crossplane can take weeks. Is there any way to generate these YAML files automatically for a given AWS / GCE / Azure (etc.) account?
What about third-party providers that are built on top of Terraform? I'm looking specifically at the Keycloak and MinIO providers. I was considering migrating our setup at work from Terraform + Flux/tofu-controller. The ability to manually step in to fix problematic scenarios is what's holding me back. With Terraform I know I can import or do a `state rm` and whatnot. I'm not so sure about Crossplane. For example, even migrating from Terraform to Crossplane has me a bit worried, since all of those resources are already created.
There should not be a problem with providers built on top of Terraform. It's mostly code generation that uses Terraform modules to generate Crossplane Go code. It's a way to get to providers faster, not necessarily with worse quality. How they work behind the scenes is very different.
Isn't it possible to manage existing resources with Terraform by just importing the infrastructure, which would include resources not created by Terraform itself?
I was not comparing crossplane with terraform but only explaining how it works in crossplane. I'm not even sure that those two can be compared since they operate on very different levels.
I also have a question which could be a subject for another video. How do you observe applications and infrastructure signals (logs, metrics, traces) with Grafana Alloy and Beyla?
@DevOpsToolkit YouTube does not allow links, but I found there is a Helm chart called k8s-monitoring-helm, which is supposed to install everything needed to start using Alloy for monitoring; maybe it helps.
Hi Victor, thank you for your consistently informative videos; I always find them incredibly helpful. I have one query related to ArgoCD. I have successfully implemented several features with ArgoCD, as outlined below:
1. Deployed ArgoCD using Terraform.
2. Set up Single Sign-On with Keycloak.
3. Integrated AWS CodeCommit with ArgoCD.
4. Retrieved custom Helm values from AWS CodeCommit.
5. Configured an S3 bucket to fetch private Helm charts.
6. Monitored pipeline status effectively.
7. Established pipeline dependencies (app-to-app).
I have a question regarding the suspension of applications in Argo. Specifically, I have five applications deployed, and I would like to disable or suspend one of them using Jenkins without modifying the source code stored in AWS CodeCommit. Could you please share your expertise on this matter? Your guidance would be greatly appreciated. Thank you!
A few follow-up questions (mostly so that I understand what you're looking for): Are you trying to suspend synchronization so that changes to the desired state in Git are not synchronized to the cluster, or are you trying to suspend your application so that it's not running in the cluster? Is there a reason why you would not like to change anything in Git? Is the requirement that the change should be done by Jenkins, but without it changing anything in Git?
@DevOpsToolkit The primary reason for not wanting to change anything in Git is that our CodeCommit repository contains a generic manifest that serves multiple applications. Different projects require varying numbers of applications; for instance, one project may require two applications while another may require four. By suspending specific applications without modifying the manifest in Git, I can manage which applications are active or inactive based on the project requirements without creating multiple manifests for different projects.
Yes. You can set Crossplane managed resources to observe-only mode and, in that case, they gather the information but do not manage the destination resources.
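As a sketch of what that can look like, here is a managed resource that only observes an existing VPC. The apiVersion and kind assume the Upbound AWS provider, the VPC ID is a made-up example, and `managementPolicies` requires a Crossplane version where that feature is enabled, so treat all of these as assumptions:

```yaml
# Observe-only managed resource: Crossplane pulls the VPC's state from AWS
# but never creates, updates, or deletes it.
apiVersion: ec2.aws.upbound.io/v1beta1
kind: VPC
metadata:
  name: observed-vpc
  annotations:
    # Hypothetical ID of the existing AWS VPC to observe.
    crossplane.io/external-name: vpc-0abc1234
spec:
  managementPolicies: ["Observe"]
  forProvider:
    region: us-east-1
```

With the `Observe` policy, the resource's `status` is populated from AWS, so other resources can reference its data without Crossplane ever taking ownership.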
@DevOpsToolkit Yes, but you still need to know the resources; it would be nice to fetch or discover existing resources. But that might just be tooling while developing the Compositions: basically fetch & observe, and from there go to managed mode...
I have a problem. If I want to import ALL my infra, do I need to obtain every ID? I've tried to do it and I've realized that I need every ID. Is there a way to bulk import ALL the infra from, for example, one account (VPCs, VMs, and other stuff)? Maybe some CLI tool, or do I need to write a script with an SDK or something like that? What is the right way, or how do you recommend solving this case? For example, with the Keycloak provider I needed to import EVERY user and client. I understand that I can do this programmatically, but maybe there is another way?
Many of the resources do not have auto-generated IDs, so that issue would not apply to all of them. There is ongoing work to make it easier and faster, but I cannot say when it might be released.
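Until then, one stopgap is a small script that lists resources with the cloud CLI and emits manifests carrying the external-name annotation. This is only a sketch: the apiVersion/kind assume the Upbound AWS provider, the region is a hard-coded placeholder, and the AWS CLI query shown in the comment is one possible way to get the ID list.

```shell
#!/bin/sh
# Emit a Crossplane manifest that adopts an existing VPC by its ID.
# apiVersion/kind assume the Upbound AWS provider; region is a placeholder.
emit_vpc_manifest() {
  cat <<EOF
apiVersion: ec2.aws.upbound.io/v1beta1
kind: VPC
metadata:
  name: imported-$1
  annotations:
    crossplane.io/external-name: $1
spec:
  forProvider:
    region: us-east-1
EOF
}

# In practice the ID list would come from the AWS CLI, e.g.:
#   aws ec2 describe-vpcs --query 'Vpcs[].VpcId' --output text
for id in "$@"; do
  emit_vpc_manifest "$id" > "vpc-$id.yaml"
done
```

Running it as `sh import-vpcs.sh vpc-0abc1234 vpc-0def5678` writes one manifest per VPC, ready to review and `kubectl apply`.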
If I have blue-green EKS clusters, with stateless applications it is pretty easy to manage (just control the weight of DNS with the external-dns operator, etc.). I'm wondering how to implement blue-green with Crossplane. For example, if for some reason I create a VPC with Crossplane on the "blue" EKS cluster, how can I manage it on the "green" cluster? The green cluster is not aware of the VPC ID (created by the blue EKS cluster). I hope the question is clear, thanks.
You would have to have the external-name annotation, either by adding it to the YAML, by backing up and restoring etcd, or some other way. That being said, I'm not sure I understood why you would do something like that.
@DevOpsToolkit Thanks for the quick response. I really like your videos; I saw you for the first time last year at AWS TLV, and it was awesome. I will try to use a better example. Let's assume I want to manage my API application plus some security group through Kubernetes (the security group with Crossplane). My purpose is to ensure that all configurations are in both the blue and green clusters. I assume that if I apply the security group with Crossplane on the blue cluster, I supply a name and the annotation with the security group ID gets added on the blue cluster, same as you demonstrate in the video with the VPC. The green cluster is unaware of this security group ID, so I assume it will try to create another security group or something like that. Just to clarify, we are using blue-green for EKS upgrades and some other big changes like migrations.
@ofekedri7884 I'm still not sure I understood why you would want to have managed resources for the same security group in both clusters, both referencing the same AWS security group. Are you trying to make Crossplane itself fault tolerant, or is that somehow related to applications, or...? Would it be easier if we had a live chat? If so, pick a time that works for you from calendar.app.google/Ze9HVDwzywSGStMS9.
@DevOpsToolkit I use the blue cluster normally; when I upgrade or make a big change, we move to the green cluster, so I want the green cluster to take control of everything, including deployments/pods and all application resources managed by Crossplane. I can consider the green cluster as DR as well.
Got it. If, for resources that cannot be named, you have the external-name annotation, you should be able to run managed resources in both clusters. You can get those annotations either by adding them yourself or by backing up and restoring etcd. As a side note, we are working on a feature that will allow you to view and test changes to Compositions before applying them. Once that is released, you might not need blue-green, at least when using Crossplane.
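For reference, the external-name annotation mentioned above looks roughly like this. The SecurityGroup kind assumes the Upbound AWS provider, the ID is illustrative, and the required `forProvider` fields depend on your provider version:

```yaml
apiVersion: ec2.aws.upbound.io/v1beta1
kind: SecurityGroup
metadata:
  name: api-sg
  annotations:
    # Points the managed resource at a pre-existing AWS security group;
    # the ID is a made-up example.
    crossplane.io/external-name: sg-0abc1234
spec:
  forProvider:
    region: us-east-1
```

Applying the same manifest in the blue and the green cluster makes both reference the one existing security group instead of each creating its own.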
Have you worked with customers who shifted their serverless infra from Terraform to Crossplane? Are there any gaps not supported/covered in crossplane?
I know quite a few companies, often large ones, that switched to Crossplane. There are always things that could be improved in Crossplane or anything else. However, what makes Crossplane special is that it works with all other Kubernetes-native tools, so many of the gaps are intentional, since they are already covered by Argo CD, Kyverno, Prometheus, and so on. I would argue that the Kubernetes ecosystem is the biggest one we have ever seen and, as a result, if you look at it as a whole, there are far fewer gaps than anywhere else.
@DevOpsToolkit That's great to hear. I will be starting an experiment converting Terraform modules into Crossplane resources, further abstracted by Helm charts. Do you have any advice for me here? Maybe a topic for a new video?
@Luther_Luffeigh If you're creating Composite Resource Definitions, you'll be instantiating them through Claims that will likely be 10-20 lines in total. You won't need to package that into Helm.
In Azure, there is a resource called API Management, but every time I have tried to add APIs to an Azure API Management instance managed by another team, I keep getting errors. So the question is: how do I use resources that I don't own and don't want to manage?
I will guess that you want to have Crossplane resources reference some Azure resources that are not managed by Crossplane. If that is the case, you can create those Azure resources with Crossplane in observe-only mode. In that case, all the data will be pulled from Azure into the Crossplane resources without them being managed by Crossplane. Please let me know if I misunderstood your question.
@DevOpsToolkit That is correct :) and I think the issue I experienced was not related directly to Crossplane but more to the service principal, because they give us a very limited role assignment (API Management Contributor), so I think the issue is that Crossplane expects you to have Contributor or Owner at the resource group level.
The permissions Crossplane needs depend on which managed resources you're using. Unlike with other tools, you can give it very wide permissions and still be safe, since you control users' access at the Kubernetes RBAC level.
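As an illustration of that RBAC point, a role like the following lets a team read VPC managed resources without being able to create or delete them, regardless of how broad the provider's cloud credentials are. The API group and resource name assume the Upbound AWS provider and will differ for other providers:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: crossplane-vpc-viewer
rules:
  # Read-only access to VPC managed resources; the group name is an
  # assumption based on the Upbound AWS provider.
  - apiGroups: ["ec2.aws.upbound.io"]
    resources: ["vpcs"]
    verbs: ["get", "list", "watch"]
```

Bound to a user or group via a ClusterRoleBinding, this keeps the provider's wide cloud permissions gated behind ordinary Kubernetes access control.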