Have you noticed that a new 'az.terraform' PS module has just been published? This has an 'export-azterraform' cmdlet but I can't get it working - I get a 'The resource type could not be found in the namespace' error, irrespective of what resource type I try to export
Been thinking a lot about this 'terralith' thing, and had this nagging fear about whether using a single state is really a good approach. Seems like it's not only me. Would be fascinating to see a video giving some guidelines about criteria that could help identify when to use a terralith and when to use 'micro-terra'. Plan/apply time? Number of resources? Maybe some types of resources should always have separate state, like AKS? Multi-region? So far I've only been thinking about plan/apply times: if it's quick enough, let it stay in one state, because the convenience of stitching everything together in one root module is so hard to beat.
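For what it's worth, splitting state doesn't have to mean losing the "stitching" convenience entirely: one root module can still read another's outputs via the `terraform_remote_state` data source. A minimal sketch, assuming an azurerm backend and made-up names/keys:

```hcl
# Hypothetical sketch: a "compute" root module reading outputs from a
# separately-managed "network" state (all names here are assumptions).
data "terraform_remote_state" "network" {
  backend = "azurerm"
  config = {
    resource_group_name  = "rg-tfstate"
    storage_account_name = "sttfstate"
    container_name       = "tfstate"
    key                  = "network.tfstate"
  }
}

resource "azurerm_kubernetes_cluster" "aks" {
  # ... other required arguments omitted ...
  default_node_pool {
    name           = "default"
    node_count     = 2
    vm_size        = "Standard_D2s_v3"
    # The subnet comes from the other state's outputs instead of a local resource.
    vnet_subnet_id = data.terraform_remote_state.network.outputs.aks_subnet_id
  }
}
```

The trade-off is a one-directional dependency between states rather than a single graph, which is part of why plan/apply time and blast radius are reasonable splitting criteria.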
Separate state candidate: Azure Application Gateway?! Any change made outside of TF seems to cause TF to want to redo everything (listeners, probes, rules, etc.). This is a constant annoyance. I guess it's probably not HashiCorp's fault - probably just how the API works...
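One blunt mitigation for out-of-band App Gateway changes, if you've decided some blocks are owned outside Terraform, is `ignore_changes` on the noisy blocks. A sketch only (whether these are the right blocks to ignore depends on your setup):

```hcl
# Sketch: stop Terraform from reconciling blocks that are managed outside TF.
# Ignoring them means TF will no longer detect *any* drift in those blocks,
# so this trades accuracy for quieter plans.
resource "azurerm_application_gateway" "example" {
  # ... required arguments omitted ...

  lifecycle {
    ignore_changes = [
      http_listener,
      probe,
      request_routing_rule,
    ]
  }
}
```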
I've written several PowerShell 'wrapper' scripts that do a 'state list' and then present that list as a PS grid, from which you can choose one or more resources on which to perform various changes, e.g. remove from state + HCL, rename resources (does a state move and a find-and-replace within the HCL), etc. Also ones to find Azure resources that aren't TF'd and automagically import them. They've proven quite useful...
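For anyone curious what that pattern looks like, here's a minimal sketch of the "state list into a grid, then act on the selection" idea (this is my guess at the shape, not the commenter's actual script):

```powershell
# Hypothetical sketch of a state-list wrapper: pipe the state into a grid,
# let the user multi-select, then run a state operation on each selection.
$resources = terraform state list
$selected  = $resources | Out-GridView -Title "Select resources to remove from state" -PassThru

foreach ($addr in $selected) {
    # -PassThru returns the chosen lines; each is a resource address.
    terraform state rm $addr
}
```

The same skeleton works for `terraform state mv` renames; the HCL find-and-replace step would be a separate pass over the .tf files.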
Came for the IaC, stayed for the goatees! I've been hired several times now to un-eF a company's terralith. Or, when I started somewhere, I discovered all their Terraform was one giant terralith and no one knew how to use it or understood it. And it can take quite a while just to understand it. They'll have modules on top of modules calling other modules that call even more modules. idk why some of these outside teams do such a thing to these companies. I don't feel bad for the companies; I feel bad for me, having to sort it out!
It's the end of Q3 2024, and the scarcity of info and study materials on this certification is quite surprising, considering everyone and their momma is using Terraform these days. Why is that, do you think?
The test just went GA, so I'm guessing there will be A LOT coming in the next few months. Check out my buddy's study guide if you're interested: leanpub.com/terraform-professional-certification
What is the big pro of checking the tag in the postcondition? I could easily write an Azure Policy which requires me to set a tag with the key 'environment' on every resource. So is it just another way to do it? What is your opinion?
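For reference, this is roughly what the postcondition variant being discussed looks like (resource type and tag key are assumptions, not the video's exact code):

```hcl
# Sketch: fail the Terraform run itself if the 'environment' tag is missing,
# rather than relying on an external policy engine to reject the resource.
resource "azurerm_resource_group" "example" {
  name     = "rg-demo"
  location = "westeurope"

  tags = {
    environment = "dev"
  }

  lifecycle {
    postcondition {
      condition     = contains(keys(self.tags), "environment")
      error_message = "Every resource group must carry an 'environment' tag."
    }
  }
}
```

One practical difference from Azure Policy: the postcondition fails inside the same plan/apply workflow that creates the resource, with a Terraform-native error message, and it travels with the code rather than with the subscription. Both approaches can of course coexist.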
There's some excellent materials on the HashiCorp site: developer.hashicorp.com/terraform/tutorials/pro-cert I am working on something with Bryan Krausen that will be out later this year.
That is awesome. I've been diving into Azure Landing Zones for a couple of weeks, and I'm happy to have found a great resource here. I'd appreciate it if you could share your dev container for your Terraform code/repo. Thanks
Disagree. Have tried all the ways, and writing the configuration so it can be customised between environments, plus splitting resources between workspaces with separate management, is the best implementation we have seen. Otherwise, you're basically assuming that:
- people will perfectly maintain all the different code versions; they won't, leading to wildly different dependencies and resources managed in each environment
- people will want, or have the inclination, to write Rego rules for each environment separately
- there is no benefit to be gained from dev fully or mostly emulating prod for continuous-integration and staging-type testing purposes (or that your use case won't require it)
- running your whopper of a plan/apply (which you will likely evolve towards) won't exceed time limits, and the number of resources won't kill the underlying engine

...and you're encouraging everyone to push to the same default workspace, meaning that all teams now have to review ALL possible changes, which is a nightmare when you're just updating a small part of the system as part of a routine change and have to unpick all the previous discarded/buffered plans, failed applies, and drift realignments. From the POV of a company with literally hundreds of AWS accounts and countless repositories applying across accounts, areas, and teams, it is SUPREMELY impractical. Workspaces are the way forward. Just pray your TF Cloud provider doesn't charge by them!
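The "one codebase, customised per environment" pattern described above is often keyed off the workspace name with CLI workspaces. A minimal sketch, with made-up values (note that in Terraform Cloud the `terraform.workspace` value behaves differently than with CLI workspaces):

```hcl
# Sketch: one configuration, workspace-keyed settings, so dev and prod share
# code and differ only in data (all values here are illustrative assumptions).
locals {
  env_settings = {
    dev  = { instance_type = "t3.small", instance_count = 1 }
    prod = { instance_type = "m5.large", instance_count = 3 }
  }
  env = local.env_settings[terraform.workspace]
}

resource "aws_instance" "app" {
  count         = local.env.instance_count
  ami           = var.ami_id
  instance_type = local.env.instance_type
}
```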
Thanks for the feedback! I def think that workspaces in TF Cloud, or a similar implementation in other TACOS, make a lot of sense. My quibble is with workspaces in the Terraform Community edition.
I took typing in middle school in the 70s. We had cars and jet airliners. Imagine eating your traveling companions in the snow after eating your horses.
If you ever get time, I'd love to see some tutorials on setting up Terraform (both open source and Enterprise) for use with AWS Service Catalog. Great stuff! Wish y'all used your cameras.