I'm going to get burned at the stake for saying this, but the problem Docker is trying to solve on Linux was never a problem on Windows, because the Windows APIs have always been very stable: with few exceptions, I can run an exe from 20 years ago on the latest Windows. Besides, Docker's model can't help you deploy GUI apps, since window-manager APIs cannot be syscalls and you can't bring the whole window manager into the container with you either. For GUI apps you actually need the OS itself to provide a stable API, and Linux people never understood this (in fact, even today they don't understand that this is why Linux has virtually zero market share on the desktop).
I'm not an MS fan, btw, and I hate that this is the reality of desktop software. I would love to be able to make desktop software for Linux that I could package and ship in binary form and that would just work out of the box. I hate to say it, but that has been a reality on Windows for 20 years, and it still is. I know Linux people will bring up drivers and NVIDIA and whatnot, but this was the case long before Linux had a driver disadvantage.
This is partly true. Windows achieves this with A LOT of compatibility layers stacked on top of A LOT of compatibility layers, which is one of its weakest points from a stability and maintenance perspective. And here's the next thing: Linux itself is HIGHLY ABI compatible. The distros are not.
And you can use AppImages. And Habitat. And tons of other stuff. It's really the distros failing here; they don't know what they are doing most of the time. I can rant for hours about most distros' packaging systems. The only sane one I have seen is in the only distribution that ships binaries the way upstream recommends, without conflicting strategies like providing two dozen different UIs, when any sane person can see that hundreds of shared packages can't possibly provide a sane base for all of them at the same time, period. kaosx.us/
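For reference, here is a minimal sketch of what shipping an AppImage involves: you lay out an AppDir and hand it to appimagetool. The app name "hello" and all paths are hypothetical; only appimagetool and the AppDir conventions (AppRun entry point, .desktop file, icon) are real.

```shell
set -eu
APPDIR=Hello.AppDir
mkdir -p "$APPDIR/usr/bin"

# Stand-in for the real application binary (hypothetical "hello" app).
printf '#!/bin/sh\necho hello from appimage\n' > "$APPDIR/usr/bin/hello"
chmod +x "$APPDIR/usr/bin/hello"

# AppRun is the entry point appimagetool expects at the AppDir root.
ln -sf usr/bin/hello "$APPDIR/AppRun"

# A minimal .desktop file and an icon are both required by appimagetool.
cat > "$APPDIR/hello.desktop" <<'EOF'
[Desktop Entry]
Type=Application
Name=hello
Exec=hello
Icon=hello
Categories=Utility;
EOF
: > "$APPDIR/hello.png"   # placeholder icon

# Final step (needs appimagetool installed; not run here):
#   ARCH=x86_64 appimagetool "$APPDIR" hello-x86_64.AppImage
```

The resulting single file bundles the app and its non-base dependencies, which is exactly the "ship a binary that works out of the box" model being asked for.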