On ARM, it's harder for the compiler to produce the executable, but it's easier for the CPU to execute later (you gain efficiency at runtime because the instructions are simpler), and that's why you get better times compiling x86 versions. With x86 it's the opposite (less compile time, but heavier, more complex instructions later at runtime). I think :)
The build times comparison isn't the best because they're building for different OSes and architectures. The closest you could get would be building for Android, but even then the Android SDK tools Unity uses are not compatible with Apple Silicon (or at least weren't last time I checked) and will run under Rosetta, slowing down the Mac build considerably.
@@AZisk Yep, it's not really about being fair but about what you actually have to do as a developer, and I'm glad you're trying to cover most cases a Unity game dev would experience, to see what happens.
Usually it's our moms who keep those embarrassing pictures of us... I have this very weird portrait of me at like 3 years old, pipi showing, and she loves flashing it around... Once I made it disappear, only to be surprised by an extra copy she had. That's my mom! Love ya!
It's important to note that the 4090m is connected to a power source, while the MacBook is not. Of course, the MacBook's performance does not change depending on whether it is plugged into power. However, for the 4090m laptop, performance drops significantly if it is not connected to a power source.
Also, the RTX machine's viewport is set much bigger, so it's rendering at a higher resolution... and its screen resolution is higher... so not a fair comparison.
I would personally do this kind of work on a desktop with a real 4090, which blows everything shown here completely out of the water. If you value your time and don't have to be mobile, desktop is the way to go imho. Of course, I understand that some people need to be mobile 😅
I'm extremely excited for this one as a Unity dev. Okay, after watching this, now I'm wishing Apple just sold their M3 chips like Intel does. I'm actually impressed by the build and compile times. I'm not too concerned with how long the baked lighting took, because I can't imagine trying to bake lighting without my beefy GPU, and even that can struggle sometimes. Baking lighting is a nightmare, and within Unity you can genuinely fuck up your scene and end up with lighting bakes of up to 8 hours 😐 I'd rather not explain how I learned that. But overall, great video. I'm genuinely impressed by the M3.
@@timothyandrewnielsen Don't worry, I don't work for Unity, I just use it for indie dev. Hate it if you will, but you know what, it's not the optimization nightmare that Unreal Engine is. Unreal might look AAA, but you don't have the AAA team to make sure it runs.
Ok, now you have to do a test scene with more going on. Grab the Matrix Unreal Engine demo... then we can really see how things stack up. Btw, as someone who has been a sub of yours for years: you still do a great job.
@@tonyburzio4107 By default Apple Silicon wins, but I'm not going to render Unity, Unreal Engine, Blender, or Redshift scenes in a coffee shop or a library; under load, all of these laptops are so loud. I have an RTX 4080 laptop, btw.
Why would you be doing AAA rendering pipelines as an indie dev? If you were working at those companies they would provide you with the equipment necessary for that.
I've always wanted to see how the M3 Max performs in Unity, so this video is much appreciated! By the way, does the M3 Max you tested have 40 GPU cores?
Hello, I'm a 50-year-old man, and I'd like to learn with my 15-year-old daughter how to develop a small 2D game using Unity. Just for fun and to spend some quality time together working on a shared project. I have an old MacBook Air from 2017 that doesn't support installing it. I was thinking of investing in a new MacBook Air for 1290 euros from Apple. Do you think this machine will be sufficient for what I want to do? Thank you for your responses and for your very nice and informative videos.
That's interesting that the M2 machine uses such a different fan curve. It was louder in this demo, but much cooler than the M3 machine. I've found that my M3 Pro machine gets really hot compared to my gaming laptop and that's just because the fans don't seem to turn on until it's just about to reach critical temps.
One advantage of 3nm is greater tolerance to temperature stress, which explains Apple's change in fan curve. But when I do intensive work, I use a fan controller and push the fans from the start, so it stays cooler.
I have a Mac Studio M1 Max, and though Unity is noticeably faster than on the iMac it replaced, some things still seem slow. Generating a light map, as you showed, is faster than before, but I still spend a lot of time waiting (you can't modify the scene while a light map is generating). Even worse are the WebGL build times. I never timed them on my old machine, and maybe I'm just more impatient now, but WebGL build times seem excessively long; even a small project can take 8-10 minutes, during which I can do nothing else.
This reminds me of the old days (25 years ago) of cross-compiling Linux kernels from one architecture to another. It blew my mind that it was even possible, let alone a good idea, but I understand a lot more of the "obvious" reasons now why this isn't such a big deal. And of course it can be faster on the better architecture; building for the local architecture isn't automatically "the fastest way".
Can you conduct a Flutter development test on both Windows and Mac? I am a Flutter developer and am considering getting a Mac, but I'm not sure if it's a good choice. Additionally, as I develop Android and iOS apps, could you test whether the Android emulator runs natively on Apple Silicon or through a translation layer? If there is no layer, please provide a comparison between native support and the translation layer.
Do your research. You need a Mac for iOS anyway. Compile time plays almost no role for Flutter, since it has hot reload. Of course the emulator works, and it's native (but Rosetta would be good enough).
With Unity, a lot of the UI updates require the GPU, e.g. when you update a material, modify a shader, or change lighting and object positions... So although my Mac compiles faster than my PC, it often feels sluggish when working on a project.
I mean, building for x86 should be faster, right? More instructions built in. Whereas for an ARM processor, the build needs to emit more instructions because the ARM architecture lacks them, riiight? Or am I off by a mile?
I can certainly see a use for both, but I personally wouldn't get either. The Titan is too big for most stuff, and I find the Mac too expensive and lacking in upgradability. I'd personally go with a smaller gaming laptop with a smaller power brick that can also do USB-C PD. As for the Mac, the lack of gaming support really sucks. I don't really care about performance unplugged, because I don't do heavy workloads on battery; that would tank battery life on both. I'd save heavy workloads for when I'm at home or somewhere I have access to a power outlet.
Almost forgot: an Intel Mac or a PC with an eGPU, and pop a badass GPU in there. I guess for the Mac, that answer will now forever be the AMD 6900 and never go up 😢
If you don't use IL2CPP, Unity will quite literally just use the precompiled engine binaries and the Mono runtime, so the only thing you're compiling is the .NET IL.
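If you want to check which backend a project is on, here's a minimal editor sketch (the menu path and class name are made up; it assumes the stock UnityEditor API):

```csharp
#if UNITY_EDITOR
using UnityEditor;
using UnityEngine;

public static class ScriptingBackendCheck
{
    // Logs whether the standalone target builds with Mono (only your
    // .NET IL is compiled) or IL2CPP (adds an IL-to-C++ step).
    [MenuItem("Tools/Log Scripting Backend")]
    static void LogBackend()
    {
        ScriptingImplementation backend =
            PlayerSettings.GetScriptingBackend(BuildTargetGroup.Standalone);
        Debug.Log($"Standalone scripting backend: {backend}"); // e.g. Mono2x or IL2CPP
    }
}
#endif
```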
Do game devs really work on laptops (or mobile workstations)? Seems like an awful lot of drawbacks for mobility, which is (imho) the only advantage in this line of work.
I learned a lot from watching your content. I wanted to ask you, as a developer: why not have a really strong PC with a desktop 4090 and all the specs you want, and remote in from any cheap laptop? Wouldn't that be a better bargain than an expensive Apple or an underperforming Intel laptop?
@@hugoramallo1980 If you look at versus videos, in a lot of tasks it is not anywhere near a 4080. Unless the creators are all lying, but I doubt that, since a lot of the creators making these videos are enthusiasts who love the Mac. Show me a benchmark video where the M3 Max consistently performs like a 4080 in most applications. I haven't found one, but if you have, I'll watch it.
Well, you have to start it every time (unless you do some command-line tomfoolery), and starting with what is easily the slowest app on my machine kind of dampens the enthusiasm. "Enthusiasms... Enthusiasms... Where's my baseball bat?"
@@AZisk For the same reason you wouldn't want to open a Pages document through "The Pages Hub." You just want to double-click the document itself, which is what I wish could be done with Unity projects. (If anyone knows a way to do this, please let me know!)
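One partial workaround, if it helps: the Unity editor binary accepts a documented `-projectPath` command-line argument, so you can launch a given project directly (from a terminal, a script, or a saved shortcut) with something like `Unity -projectPath /path/to/YourProject` and skip the Hub UI. The exact binary location depends on where the Hub installed that editor version, and `/path/to/YourProject` is a placeholder. It's not true double-click file association, but it gets close.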
@@AZisk Well, we tend to have tens of projects, so a hub to manage them all is a great idea: that's where you change each project's Editor version, where you download multiple Editor versions, where you enable version control for the projects you want...
One thing to keep in mind about a 4090 mobile in a notebook is that it is not a very fast GPU. The 4090 mobile is literally outrun by a stock 4070 Super desktop GPU, which doesn't say much about the 4090 mobile other than that it is nerfed to death and probably shouldn't be called a 4090 at all. So this is comparing Apple's best to Intel and Nvidia nerfed to death to run in a notebook.

Devs really shouldn't use a notebook on a daily basis, for multiple reasons. I do a lot of content development, and an M3 would not be my first choice for productivity, because it just isn't fast enough and lacks features used in many, many programs. You can't dedicate GPUs to computation only on a Mac, and you can't add more; that is a huge downside that's rarely discussed, and a big disadvantage for content creation, period. My current dev system has 3 GPUs in it for that very reason: one for display, two for compute/preview/compile/CUDA/OptiX, etc. Stuff most programs can actually take advantage of and use.

It will be interesting to see what Apple does in the future, simply because they can't really compete in this area at all at the moment. Time is money, and the Mac isn't up to the job just yet. I'm not saying it never will be, just that right now it's a waste of resources that someone is paying for. Far superior machines are the key to saving time. Gaming companies don't want you waiting around, and they don't pay you to do so either. Only Apple can fix that with a Mac, and to date they haven't come close. People will disagree with that, which is fine by me; I don't pay them. And if I did, they would not be using a notebook to develop. Testing, yes; developing, not a chance.
Hello Alex. Thank you for this video 👍 It would be awesome if you could make an Unreal Engine 5.3 comparison too 🙏🙏🙏 Lighting build speed with CPU/GPU and the Path Tracer. Thank you 🤘
Exactly my question! I've been looking through multiple videos trying to find out how much RAM is in the M3 Max, and I think it's 64GB, but confirmation would be good. ETA: he confirmed it's 64GB in a comment below.
Apple still has some catching up to do before they reach Nvidia, but that's to be expected. After all, they are going against a company with decades of experience. In 4 years, Apple has improved the GPU performance of their chips immensely.
@@abdocake Very oversimplified view. Gaming is not what I'd call "doing work". The M3 Max has an immensely powerful GPU with a lot of horsepower. That power can't be fully used in games due to many limitations, but it can be used in actual work such as video editing, photo editing, and now even rendering. The Mac has gotten pretty competent at rendering, though comparable solutions from Nvidia will obviously perform better. That comes down to optimization. It's funny to say it like this, but a lot of our digital world was built around CUDA. Nvidia is a company that can't be easily denied, if ever.
I'm having a very hard time understanding the Apple Tax over a 14900KF and a 4080/4090, unless you need power and mobility, which I rarely do. PS: Not a hater post; I own an iPhone, iWatch, iPad, and an Air.
"The Apple Tax" - prices are not taxes. Prices are prices. And the MSI with the i9 and 4090 is $3,649 at Amazon, hardly inexpensive. The 16" M3 Max in the Apple discount store is $3,699; it has more RAM but less SSD (pick your poison) than the above MSI, so your complaint is doubly off the mark. Oh, and have fun carrying that huge MSI power brick.
@@TheDanEdwards You missed the point of my post: I said that aside from mobility, this doesn't make any sense. Furthermore, the 14900KF is exclusively a desktop processor.
@@hugoramallo1980 I'm sorry, but you need to inform yourself properly. A 14900KF + 4090 smokes any M3 Max in pretty much every department; it just consumes a tremendous amount of power.
Unity games run so well and build for other platforms much better than Unreal's; that's why I prefer it. Unreal looks really good but runs like a potato compared to Unity.
@@Pasi123 These are games that are compiled natively for Linux, without a compatibility layer like Proton. This is because I use Linux as my main operating system.
@@CommentGuard717 I know what native means, but in your original comment you said Genshin is the best-looking Linux-native game you have, and it doesn't even have a Linux version.
Most of the time the exhaust is at the back, because the right side is where you may keep your hand (for the mouse). They may be competing for cold air, though.
Don't forget Unity needs to reload the whole application domain to recompile your code: serialize the managed editor state, finalize all managed objects, unload the managed DLLs, recompile the user code, load the managed DLLs into a fresh app domain, and deserialize the managed editor state. Even if there is no user code, the engine has a lot of its own, and for several years Unity has been moving a lot of unmanaged stuff to managed code for extensibility, modularity, and easier maintenance. You can reduce iteration time by disabling domain reload, but your project needs to support it by not relying on static variables being auto-reset upon entering Play Mode, or on other side effects of domain reload (see the sketch below). Unreal also needs a lot of work to reload after code changes, and the process is even more painful, as Hot Reload has a lot of drawbacks that can lead to asset and memory corruption.
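A minimal sketch of the usual coping pattern when domain reload is disabled (class and field names are made up; the attribute is the standard UnityEngine one):

```csharp
using UnityEngine;

public static class GameState
{
    // With domain reload disabled, statics survive between Play Mode
    // sessions instead of being wiped, so they must be reset by hand.
    public static int score;

    // Runs before the first scene loads, even when Enter Play Mode
    // Options skip the domain reload.
    [RuntimeInitializeOnLoadMethod(RuntimeInitializeLoadType.SubsystemRegistration)]
    static void ResetStatics()
    {
        score = 0;
    }
}
```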
@@last8exile Yeah, it's not just creating a file and boilerplate code; it does a compile, but I think it just compiles part of the scene, not everything. Otherwise, what a waste of time.
The reason I don't like those gaming laptops is that I can't show up with that big chunk of plastic, with its huge AC brick, RGB lights, and loud fan noise, at a client meeting. They all get annoyed by the noise.
@@SunsetNova Muah, maybe go cry some more in the corner when people don't join your fanboyism. "Data analytics is trash on Windows," says someone who can't even run Power BI Desktop on his machine.
Hmmm. A mobile RTX 4090 is hardly a real 4090. I have a real one in my desktop PC and the card alone is easily as big as your entire laptop and way, way more powerful. Just sayin'. :)
Yeah, the mobile "4090" is a power-constrained desktop 4080. Out of pure greed, Ngreedia didn't use the real desktop chips for the tiers implied by the naming.
Yeah, the desktop RTX 4090 has a 609 mm² AD102 with 16384 CUDA cores, while the mobile RTX 4090 has a 379 mm² AD103 with 9728 CUDA cores. On desktop, the AD103 with 9728 cores is used in the RTX 4080, but that card has a much higher power limit, so it's much faster than the mobile 4090. Performance-wise, the mobile 4090 should land between an RTX 4070 and an RTX 4070 Super.