Hey man, fantastic video, thank you so much for it! Not only is this incredibly useful, but I hadn't gotten into runbooks just yet and this just made so many ideas click from this scenario, to auto cleanups, to reporting! Thanks man. I will be sure to put the knowledge to good use.
Quite informative, thanks for sharing it! Hoping to see right-sizing of VMs baked into Azure Advisor as a native offering some day.
I was curious: what is the main difference between this script and what Azure Advisor does when recommending underutilized VMs? Advisor checks memory and CPU as well, so I was curious if the perf data is better than what Advisor checks for. Thanks!
Advisor only does something when you click the button. This solution runs automation in the background to check the efficiency of the VM and make changes however often you allow it.
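For anyone curious what that background check might look like, here is a rough Python sketch of the decision logic. The thresholds, SKU ladder, and function name are all invented for illustration; the actual module is PowerShell and works off Log Analytics perf data:

```python
# Hypothetical right-size decision logic. The SKU ladder and 30/80%
# thresholds are illustrative only, not the actual module's rules.
SKU_LADDER = ["Standard_B2s", "Standard_D2s_v3",
              "Standard_D4s_v3", "Standard_D8s_v3"]

def recommend_size(current_sku, avg_cpu_pct, avg_mem_pct,
                   low=30.0, high=80.0):
    """Return the SKU to move to, or the current one if usage looks right."""
    i = SKU_LADDER.index(current_sku)
    if avg_cpu_pct < low and avg_mem_pct < low and i > 0:
        return SKU_LADDER[i - 1]          # underutilized -> one size down
    if (avg_cpu_pct > high or avg_mem_pct > high) and i < len(SKU_LADDER) - 1:
        return SKU_LADDER[i + 1]          # running hot -> one size up
    return current_sku                    # usage in range -> leave it alone

# A scheduled runbook would evaluate this per VM and apply (or just log) it.
```

A runbook on an hourly or daily schedule would feed averaged counters into logic like this, which is what makes it different from Advisor's click-to-see recommendations.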
This doesn't seem to factor in network and disk throughput data, where Azure has some pretty restrictive limits that drive sizing choice too. Or does it and I am missing something?
That is a good point, and depending on the number of disks attached, changing size might pose a problem…I am not 100% sure that the current number of attached disks is part of the code…could be a nice upgrade. I will share this feedback with Jos and let you know
@@AzureAcademy With respect to VM instance type selection, disk I/O capping when downsizing a VM could be a hugely important consideration. In particular, resizing SQL Server VMs like this could be problematic, as could resizing any other database-intensive VM.
Ok, so I followed your video but I'm getting the following error: 'Failed to get memory performance data from Azure Monitor because no data returned by Log Analytics'. Great video btw, can you advise?
That message means that there are NOT enough data points to do a resize...has it been at least 6 days? If not, you can use another parameter to shorten the timeframe...details are in Jos's blog that was linked in the video's description.
@@AzureAcademy Hey, I'm still getting the following error and it's been like three weeks? :( (set-vmRightSize : TRES-MB-0 failed to get memory performance data from Azure Monitor because no data returned by Log Analytics. Was the VM turned on the past hours, and has the 'Available Mbytes' or 'Available Bytes' counter been turned on, and do you have permissions to query Log Analytics?)
Hi. Thanks for this video and info, but I'm having this error: set-vmRightSize: The term 'set-vmRightSize' is not recognized as a name of a cmdlet, function, script file, or executable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. Did the command change?
@@AzureAcademy RIs are tied to SKU/family. You'd want to wait until that RI has expired (or is near enough to expired that the cost benefit is there) before letting this script loose on those instances. Even after that, you wouldn't want to run it against RIs, because you couldn't be sure the script wasn't going to resize the VM and thus negate the RI. Savings Plans, if scoped properly, would be okay, since they don't care about SKU/family the way RIs do.
Oh, you were talking about reserved instances! I probably wouldn't use this with RIs, because an RI is paid for in advance to get a reduced price. This script would change the size of the VMs, which would break the RI's function.
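To make the RI trade-off concrete, here's a back-of-envelope comparison. Every price below is invented for illustration; real RI discounts vary by term and SKU:

```python
# Invented illustrative prices: a reserved-instance discount vs
# downsizing to a smaller pay-as-you-go VM.
payg_large = 300.0   # $/month, oversized VM at pay-as-you-go
ri_large   = 180.0   # $/month, same VM under an RI (~40% off, assumed)
payg_small = 150.0   # $/month, the right-sized VM at pay-as-you-go

# Resizing away from the reserved SKU forfeits the discount, so the
# question is whether the smaller pay-as-you-go VM still beats the
# discounted big one.
savings = ri_large - payg_small
print(f"Monthly savings from downsizing off the RI: ${savings:.2f}")
```

With these made-up numbers downsizing barely wins, which is why waiting for the RI to expire (or right-sizing first, then buying the RI) is usually the safer order of operations.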
Pretty much every environment I look at that has VMs has them sized improperly, the same way VMs were never sized properly on bare-metal hypervisors, so not much has changed 😂
Yup…the big difference though from Hyper-V to the cloud is that the Hyper-V VMs didn't cost anything… In the cloud they do, so you need to get the sizing right
@@AzureAcademy Exactly, improper sizing (9/10 times it's over rather than under) has a hard opex expense now. I'm curious if Nerdio will integrate this module in some capacity for AVD into the existing automation account it uses to right-size VMs, since you would need to reconnect new session hosts to the workspace on every redeploy (which can also be scripted), but not everyone is comfortable doing that. Great review of the module, certainly something to keep in the toolbelt.
This is great but I'm getting an error saying "VMname failed to get memory performance data from azure monitor because unexpected (negative or too large) memory perf value detected, VM was probably already resized less then 152 hours ago." My VM has not been resized and I have plenty of VM perf data for CPU and memory in my LAW. Any idea why I'm getting this error? I've run it many times against many different VMs and I get the same thing no matter what.
@@AzureAcademy Thanks for responding. It's really strange. I have a meeting with Microsoft in 10 minutes and I'm going to bring it up with them and see what's up. I'll update with what I find. Thanks!
@@AzureAcademy Hello, so the issue was that when the script runs, it executes this query:

VERBOSE: testnewvm1 querying log analytics: Perf | where TimeGenerated between (ago(152h) .. ago(0h)) and CounterName =~ 'Available Mbytes' and Computer =~ 'testnewvm1' | project TimeGenerated, CounterValue | order by CounterValue

It's looking for the counter name 'Available Mbytes', but I was using the Azure Monitor Agent, not the Log Analytics agent, and with AMA the counter name is 'Available Bytes', so it doesn't find the perf data and fails. What's weird, though, is the script also runs this query:

VERBOSE: testnewvm1 querying azure monitor: Perf | where TimeGenerated between (ago(152h) .. ago(0h)) and CounterName =~ 'Available Bytes' and Computer =~ 'testnewvm1' | project TimeGenerated, CounterValue | order by CounterValue

So it does check for 'Available Bytes' as well. I was hoping the script had been updated to work on VMs with either the LA agent or AMA, but it fails right after it tries the 'Available Bytes' query above, so you must use the legacy agent for now. I added a comment on the author's website and told him about the issue, since the LA agent is going to sunset soon. I think your videos are great. Thanks again, Rick
The behavior in the example is quite strange. It suggests changing from a B size to a D size, more than doubling the cost for no reason at all. Memory consumption is OK and so are the CPU metrics... Weird🤔
The tool does not only size you smaller…it is all based on usage and performance. It could have something to do with the B series being burstable, which means their REAL baseline performance is reduced but they can burst higher if you have the burst credits…which I do, so it sees my REAL usage and recommended a size that would give me that performance regularly.
@@AzureAcademy Makes sense, but I'm a huge fan of the burstable VMs. They are so cheap. Our clients love them, they even have them in production environments haha
Also noticed this. Even allowing for the B series having 40% of standard CPU performance, this example's VM should still stay the same size. And on a Linux OS a 124% price increase might be fair, but for Windows the price increase is almost 200%!
@@AzureAcademy Interesting take on the B series. We run out of CPU credits all the time, so our company strays away from them. It's possible this script does as well, without taking into consideration how many CPU credits you have. But if you run out... bad times are to be had
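The credit mechanics behind that are easy to sketch numerically. One CPU credit is one vCPU-minute at 100%; a burstable VM banks credits at its baseline rate and burns them when running above it. The vCPU count and 20% baseline below are illustrative only; Microsoft's B-series docs list the real per-SKU figures:

```python
def net_credits_per_hour(vcpus, baseline_frac, usage_frac):
    """Credits banked minus credits burned over one hour.
    One credit = one vCPU running at 100% for one minute."""
    earned = baseline_frac * vcpus * 60   # banked at the baseline rate
    burned = usage_frac * vcpus * 60      # spent at the actual usage rate
    return earned - burned

# Illustrative 2-vCPU burstable VM with an assumed 20% baseline:
print(round(net_credits_per_hour(2, 0.20, 0.10)))  # idle: banking credits
print(round(net_credits_per_hour(2, 0.20, 1.00)))  # flat out: draining fast
```

The second case shows why sustained load on a burstable SKU ends badly: the bank drains much faster than it refills, and once it hits zero the VM is throttled to baseline.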
Good material, just one note: the VERBOSE example result at 6:05 does not make sense to me: a more expensive D machine was recommended in place of an underutilized B series, or did I miss something?
No, not at all. The Burstable series flexes the VM over its normal limits so the tool is suggesting I scale up to a size that is better suited to my performance needs.
Yes and no, it would work fine on any VM size, including reserved instances…however, the way you purchase reserved instances is to lock in a size for that VM for the long term. So what I would do is use auto-resize for several weeks, then once you KNOW the right size, buy the reserved instance.
Yes you can, that is what I showed with the -WhatIf parameter. You could also add logic to your runbook to check the size before the change, and there are built-in parameters for specific sizes you should look at as well.
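That guard-plus-dry-run pattern could look something like this sketch. The function, its parameters, and the SKU names are all hypothetical; the what_if flag just mirrors the spirit of PowerShell's -WhatIf:

```python
def resize_vm(name, current_sku, target_sku, allowed_skus, what_if=False):
    """Only resize within an approved SKU list; dry-run reports intent only."""
    if target_sku == current_sku:
        return f"{name}: already {current_sku}, nothing to do"
    if target_sku not in allowed_skus:
        return f"{name}: {target_sku} not in approved list, skipping"
    if what_if:
        return f"What if: would resize {name} {current_sku} -> {target_sku}"
    # The real Azure resize call would go here.
    return f"{name}: resized to {target_sku}"

print(resize_vm("web01", "Standard_D4s_v3", "Standard_D2s_v3",
                {"Standard_D2s_v3", "Standard_D4s_v3"}, what_if=True))
```

Running the runbook with the dry-run flag on for a few cycles lets you review what it would have done before you let it make real changes.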
Another great lesson...This is extremely helpful, especially for medium to large organizations, as they can benefit from a lot more savings. Thank you for sharing.
A quick search found this playlist on AZ-800 ru-vid.com/group/PLc6LqxQFwub8sskcc3_UqaDtnw-Qp-yPD I watched a few parts quickly and it is good material…so you don’t have to wait for me…takes a while to make a course ☺️
Azure has a basic design problem: the inability to resize a VM without downtime. Other VM solutions can adjust CPUs and memory dynamically. So people will oversize a machine just for one or two hours of an EOD period. This can be a high cost if SQL Server is involved; too many CPUs is very costly with SQL Server in the mix. Too small a machine, and then one day things run slow and management calls for upsizing…. I have a situation where people call for an F32s because of a specific task, rather than a D series with 16 or even 8 CPUs. It's daunting trying to save a company money with one or two loud people on a project who write inefficient code.
Interesting that you say it is an Azure design problem. I think this goes back to early versions of Windows or Hyper-V not supporting dynamic CPU and RAM changes. And Azure has to be consistent…so just guessing here…but if it would work for Linux or Windows Server but not for Windows client, then it is a design decision not to implement the feature. Agreed, bad code hurts everyone…but I also find that big SQL Servers need a ton of resources at certain times, like month-end processing, but not the rest of the time…so I would run it as small as possible, then right-size it when you need to process…but of course that won't work for everything, which is why this is a great solution ☺️
@@AzureAcademy I think it is VMware that dynamically adjusts CPU and RAM. Had a client do it for me recently. We use Windows Server Datacenter 2019 in Azure; hard to get more current. Our SQL runs the same work for EOM and EOD, no need for a monthly beef-up, but we have to size for that two-hour overnight window and that kills costs. Such a waste watching systems run at 5% CPU all day. We use B-series for test servers to save there. And I sure wish 12-core Azure servers were a thing; we would use those over 16 most of the time. This really is a money thing for Microsoft. And I invest in $MSFT to at least make a few bucks on all these wasted customer resources.
If wishes were horses, I guess…VMware uses the memory balloon feature…not sure about the CPU side of that magic…but I hear ya…it would be nice if everything was 100% dynamic and you were just billed for what you used at the end of the month…maybe one day
@@AzureAcademy The B-series is close, but they say "don't use for PROD" (because if you did, they would make less money). The regular VMs throttle upon reaching their target IOPS/bandwidth saturation anyway, so you do hit a VM size wall if you do a lot. I just hate paying for a 16-core SQL license for 2 hours of heavy activity per day... times numerous servers. Even for some test servers. Ahhhh! I've saved this client about $20k a month, but it's still over $120k a month in spend, which is heavy. Especially paying $127/mo reserved for a 1TB SSD when you can buy one outright for that. Many industries have gone to subscription products to make more money than if they sold the product outright.
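The "16 cores for 2 hours a day" pain is worth putting into numbers. Prices below are invented for illustration, and remember a resize means a reboot and SQL licensing has its own rules, but the scheduled-resize arithmetic looks like this:

```python
# Invented hourly prices for illustration only.
price_16core = 2.00   # $/hr, big VM
price_4core  = 0.50   # $/hr, small VM

hours_big, hours_small, days = 2, 22, 30

always_big = price_16core * 24 * days
scheduled  = (price_16core * hours_big + price_4core * hours_small) * days

print(f"Always 16-core:     ${always_big:.2f}/month")
print(f"2h big / 22h small: ${scheduled:.2f}/month")
```

Even with made-up prices, resizing up only for the overnight window cuts the bill to a fraction, which is exactly the kind of schedule a runbook can automate.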
@@AzureAcademy How do you configure the listeners and the policy? We started to implement that with the help of an external party, but they don't do any knowledge transfer as of now.
Hey Dean, thank you for the video, as always it is great information for AVD. But my bigger question is, how can I get this awesome Azure Virtual Desktop T-shirt?? 😁
LOL I wish I sold them Markus! I got mine when I hosted the AVD master class last year ☺️As far as I know, not available publicly…but would you be interested in Azure Academy merch?