Many of you who watch this channel know I use VMs for playing around, learning stuff, and in this case compiling a Linux kernel on a Debian 12 VM. My workstation is Fedora KDE, which runs under Asahi Linux on a Mac Mini... so that is just how I roll. And I'm suspicious... I never, ever put a new kernel on hardware until I have tested it out. One thing I do not need is a kernel build which fails, breaks, or mutilates my data...
I can honestly say (having struggled trying to get NetBSD working on an old Mac IIci back in 1999) you have satisfied _ALL_ my curiosity. Nice, brief, to the point 👍
@@carpetbomberz I still had issues running NetBSD even in a VM, e.g. I couldn't make it use my flash drive. It reminded me of Linux in the late 90s, when you couldn't find drivers for silly stuff (e.g. your mouse).
I agree, but isn’t it rather “specific intellectual property requirements” to keep the walled garden eh… walled? Then the general user pays up and hopes they get their money’s worth - without actually _owning_ anything… 😜
As a Gentoo user for 21 years now, compiling kernels for me is "par for the course" - once you have a good and working kernel config then just re-using that every time you upgrade works pretty flawlessly. Every couple of years I do try to streamline the configuration because of "bloat" due to blindly accepting default options when doing a "make oldconfig" to upgrade a config to a newer kernel release.
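The "reuse a known-good config" workflow described above looks roughly like this (a sketch; the version numbers and source paths are illustrative):

```shell
# Carry a working config forward to a new kernel release.
cd /usr/src/linux-6.9            # new source tree (illustrative path)
cp /usr/src/linux-6.8/.config .  # copy in the old, known-good config

# Prompt only for options that are new in this release; every option
# already present in .config keeps its previous answer.
make oldconfig

# Build and install (adjust -j to your core count).
make -j"$(nproc)"
sudo make modules_install install
```

Blindly hitting Enter through `make oldconfig` is exactly how the "bloat" the comment mentions accumulates, since new options default to the upstream recommendation rather than off.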
Great video, I've never built a kernel outside Gentoo, so this was neat to see. I assumed it was both possible and well supported but I'd never tried... Must be nice not having to manually update the grub config and initramfs.
The first time when I had to recompile a kernel was during an attempt to make a 33600 modem with ISA interface work under Red Hat. It didn't help so I ended up buying an external version that used RS232 serial interface. US Robotics made some really good modems back in the day.
I had one from TerraData; it was optimized for Unix discussion group transfers. Not particularly fast; as I recall it was 19.2 Kbps. Later I did switch over to USR.
@@CyberGizmo that's why I was launching kernel compiles before going to sleep :-) I created my own Linux distribution "Bad Penguin" that was far better than Red Hat 4.0 and Slackware :-P I can't believe I had my own CUI package manager better than SUSE's :-P And a "control panel" like the one you had in SCO :-P Then LIBC6 and GNOME ruined me... hahahaha it was impossible (for a single teenager) to maintain it after that, and I was in Italy and the Internet was not great at the time. I had to use an FTP2EMAIL gateway to download bigger FTP files at the office :-P
Thank you very much. I'm not having much luck compiling the kernel. My biggest problem is getting the correct ".config" file. Starting with "make menuconfig" often ends with a bloated kernel with giant modules. I usually start with "make x86_64_defconfig" and then trim by trial and error and crashes. Once I get a matching ".config," it's always 'yes "" | make oldconfig' from then on. Is there any good reference for trimming a virtual machine's ".config" file?
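For the VM-trimming question above, the kernel tree itself ships two helpers worth knowing (a sketch; run from the top of the source tree):

```shell
# 1. Start from the arch defconfig, then merge the KVM guest config
#    fragment that ships with the kernel (kernel/configs/kvm_guest.config)
#    to enable virtio and other guest essentials:
make x86_64_defconfig
make kvm_guest.config

# 2. Alternatively, boot the VM on the distro kernel first, load
#    everything you need (plug in devices, mount filesystems), then
#    keep only the options backing currently loaded modules:
make localmodconfig
```

`localmodconfig` is the usual answer to "giant modules": it disables every module option that `lsmod` doesn't show in use on the running system, which typically shrinks a distro config dramatically.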
Ha ha, I've just been testing a new workstation's performance on cross-compiling OpenWrt with kernel 6.1 from scratch (make clean && time make -j32 world). ~3m20s for kernel, drivers and packages... Nice.
Compiling a kernel is probably better if you know what you're doing. But for the rest of us, using a PPA to install the Liquorix kernel is a lot easier, on Ubuntu or Mint at least.
I first compiled a kernel in the mid 1990's. I had an old 386DX machine and wanted to have as stripped down a kernel as I could get so there were more resources for the stuff I was trying to do. Many hours and many broken compiles later I had something that worked and I felt great that I'd achieved that. The hardware was a lot simpler in those days and working through menuconfig wasn't as daunting a prospect as it would be today; I haven't tried a kernel build recently. I might just have to give it another shot, just for old time's sake.
I love the Fedora icon for the KDE menu panel. Thank you for this great tutorial, I think we need more of this. You probably don't appreciate how much value your little tutorials have. Watching someone who has done that a gazillion times is completely different from watching tutorials by people who make tutorials. We can learn from your knowledge and experience, it's very much appreciated.
Just compiled the latest version of the NetBSD 9.3 kernel for an old 486 system that was upgraded to an 83 MHz Pentium Overdrive. Compiling it myself reduced the footprint to
@@andersjjensen Less than Debian? Because I've had almost no issues with Debian. A friend back in high school, decades ago, used Slackware... but back then I was still a Windows user.
It's funny that there are still magazines with Linux on DVD. I got my first PC with Windows 95 pre-installed when I was 13/14. I immediately wiped it because Windows kept crashing. Of course, it wasn't the hardware, so I went to the gas station and bought a booklet with a Linux CD. The magazine contained instructions on how to compile Linux. It was a bit more complicated, but at least the 133 MHz, 16 MB computer ran without any problems. Well, 26 years later I still use Linux.
I did it from around 2.0.32 onwards and stopped in the early 2000s. It made sense at the time and it was fun, but with newer and more powerful hardware I didn't feel the hassle was worth it anymore.
The patch sets I have run are capsicum (for real security) and MRI (for high-performance GC support). I don't know if there are patch sets people are commonly applying out there today.
Interesting info. It never crossed my mind that you can compile your own kernel, and that could easily become a Pandora's box later down the road. I'm only 3+ years in atm, so it's a later project, but something to think about. Thanks for sharing.
Ah, there was a day back in the 90's and early 2000's that I used to compile the kernel every time a new one was released. I then got a job creating a custom distro around 2002, and I had to patch the kernel source for the IBM RAID controller being used on the servers, and enable and disable some bits and pieces to keep the kernel size down. Sadly, it's been many years since I have typed "make xconfig" or "make menuconfig". Fond memories...
Hold on, something fishy up here! You are in KDE clearly, even used Firefox for the download, then all of a sudden after the compile, you are in GNOME! What the hell happened here? From 6 cores to a PROXMOX VM. Fishy, very fishy!
@@CyberGizmo I didn't mean it that way; one assumed you were running Debian in its entirety on this PC, as you clicked Firefox to download the kernel and then unpacked it from the terminal and compiled it. I'm sorry, I pay attention; a geek is a geek. I watch all your videos. You said your distro of choice is Debian, and I knew from your old videos that you are a Red Hat junkie and used to run Fedora in the past. I keep forgetting that. We are all guilty of distro hopping, and if you have a few machines it can get confusing. Thanks so much!
@@isopticon and no worries, it's always hard to show the right emotion in text, and I didn't mean my reply to sound defensive either. You are right, I did not show the upload from Fedora to Debian, figuring people would assume that... and no worries, my friend.
Yup. That config goes, and goes, and goes…and goes… And that's just for the kernel. But WAIT! There's more!! Then you get to compile the applications too!!! gentoo is fun for this, being ideally a source distribution. "…and a good time was had by all. They lived happily ever after. The beginning."
I think the first feature to remove from the kernel is audit. For a single-user computer it seems a waste of a resource to me. Of course, I may be missing some reason why I shouldn't remove it.
A lot of people look at it as unnecessary, but I'd say learning is always necessary. Never stop learning because the day you do is the day you cease to exist.
Man, to think that the first time I tried installing gentoo with all the kernel configuration and such was before youtube even existed is wild. What I'd've given for the resources of today
The first thing I would propose is bottom-up solutions, i.e. where you are part of concentric circles pertaining to your situation. For instance, if you have laptop model X with config Y, that is the first circle; then model X is next; then the particular OS; then the general OS; and so on. Our world is top-down, the biggest conspiracy IMO.
Compiling the kernel is just that: compiling the kernel code. Building Linux From Scratch is creating your own full-blown Linux distribution, with all the packages, services, etc. that you need to operate.
I remember doing this a looong time ago, when many times you really did not have a choice. Nowadays I can't even find a reason to do so, other than learning. The kernel is a very fast-moving target, so unless you're doing it for an LTS kernel release, it would be a waste of time [well, not really, as these days machines can compile a kernel very quickly, admittedly].
Minor pet peeve: it's "dash" not "minus". The reason I always compile my own kernels is a liiitle bit pedantic: I prefer to be able to see every single driver in use with lsmod. This means I compile my kernels with everything, except initrdfs, as a module. This does have the side effect that I need to generate my initrd by hand, but that's OK with me, as I know my hardware and the module dependencies.
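With everything built as a module, the initramfs has to carry the drivers needed to reach the root filesystem. A sketch of generating it by hand, as the comment describes (the kernel version is illustrative; which tool applies depends on the distro):

```shell
# On dracut-based distros; --hostonly includes only this machine's
# drivers rather than a generic set:
sudo dracut --hostonly --kver 6.9.1 /boot/initramfs-6.9.1.img

# Debian-family equivalent:
sudo mkinitramfs -o /boot/initrd.img-6.9.1 6.9.1

# Afterwards, every driver in use shows up in lsmod, as intended:
lsmod | head
```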
hahaha, actually the character is called a hyphen; it is also called a dash, and it's also called a minus sign. - Source: Christopher Sholes (the inventor of the typewriter). So all are correct.
Compiling a kernel is easy enough; cleaning it up neatly is the challenge. On Arch, in some cases compiling a kernel requires using makepkg, but then the package manager pacman knows about it, which is a problem because there is no neat solution to remove the kernel via makepkg with how they set it up. So pacman ends up knowing about a kernel that doesn't exist anymore, resulting in an error message after each update. It is required, however, that pacman knows about the extra kernel on your system, because of hooks from other software which needs to do something to make the non-standard kernel work for some things (like a virtual machine).
I remember when I first started using Linux in the mid 90s, kernel builds were the only path to upgrade or even do a reasonable install. Most people did not want to waste memory (which was precious in those days) on a bloated kernel, and a lot of distro kernels didn't have drivers for many normal devices. Then came kernel modules... In any case, because of the history of having to build your own all the time, the path to do so was relatively easy for the average Linux user (who admittedly was more advanced than today's average user). Building a kernel just to add/remove a module is still not hard. Some modules are built in, so blacklisting is not effective.
I remember back in the day eagerly awaiting a new kernel release, getting it back to my machine, and compiling it. That was back before support for kernel modules, so you needed to do custom builds to suit the hardware you had. No menu-based config tool either - just a never-ending list of y/n questions to answer!
It is still there, i.e. your old way. DJ Ware issued a 'make menuconfig' and it gave him an ncurses-style menu selection. If you type 'make config' you go back to your good old days of answering endless y/n questions. I have done both. The final way is 'make xconfig', which uses a GUI.
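The three interactive front-ends mentioned, plus the common non-interactive one, side by side (all run from the top of the kernel source tree; the noted package dependencies are typical but distro-specific):

```shell
make config      # plain y/n/m questions, one option at a time
make menuconfig  # ncurses menus (typically needs libncurses-dev)
make xconfig     # Qt GUI (needs Qt development packages)

# Non-interactive: take the default answer for every option not
# already present in .config - handy for scripted builds:
make olddefconfig
```

All of them read and write the same `.config` file, so you can freely mix them, e.g. rough out a config in xconfig and touch it up later with menuconfig.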
Usually because you need a faster kernel, where some modules are compiled directly into the kernel, and then you need to remove all the stuff that you don't use. UPDATE: it is pretty *_rare_* that the kernel doesn't already contain the module that you need for your hardware; it is much more common that you need a better, later version of said module, or that it doesn't function well unless you compile it into the kernel. But yes: if you have very old or brand-new hardware, you might need to compile a kernel to get it working at all.
Huh? I have never EVER had a problem where there was a difference between a driver whether the driver was a module or built in. In fact I always recompile my kernels so the only thing that ISN'T a module is support for initrdfs (for quite obvious reasons).
@@CyberGizmo I haven't really been using snaps. You also have to do an update to apt to correct some link issues, but that's a single command. It just takes fewer steps. Will manually compiling stop Snap errors?
I compile my own kernel, since I get faster boot times and it just "feels" more responsive after throwing out the junk I don't need. My kernel is about 200 MB, and it's pretty fast.
Without wishing to appear pedantic, do you mean that the RAM usage with that kernel when booted up is 200 MB? That bit I can believe... For me, the current 6.6.x kernel image sizes (in Gentoo) seem to be around 10-11 MB, so otherwise I don't understand where that 200 MB comes from.
I don't need to solder my own RAM because I can buy the right RAM that I need for any computing task that I have to complete. Were that not the case, I might well need to solder my own RAM. I do need to compile my own kernels because I build Gentoo on many different types of platforms, including some low powered SBCs and older computing platforms where every byte of "the RAM that I don't have to solder myself" saved counts. If you too don't have to solder your own RAM or compile your own kernels then I am entirely happy for you as clearly an extremely fulfilled individual with entirely different computing requirements to me. Well done, sir! Now, if you'll excuse me, I need to continue fulfilling my computing requirements with a kernel compilation or two. Next time remind me to bring some cheese to go with your whine.
If you just want a computer "that does computer stuff", and you're not a hardware enthusiast who constantly gets the itch, then there is no practical reason to do so. However, if there is nothing you wanna watch on Netflix and your Steam Library is boring you to tears, then learning all the ins and outs of the deep bowels of a Linux system is free, entertaining and a hobby that will never cost you a single cent, but it WILL "massage your brain" and give you an enormous appreciation of what goes into quality software production.
@@andersjjensen The point is that the kernel becomes very bloated with things that are not needed. BTW, the first Unix machine I used was a Dual System 83/20, MC68000 CPU, with only 1 MB RAM and a 20MB 8" Fujitsu disk.
@@andersjjensen I don't think anybody cares about the space on the disk, it's the space in RAM that's important. Not just how much, say, percentage-wise, it takes, but also because if it's smaller it loads faster, and might have better cache performance. That being said, I don't know where those values are taken from. I do have a self compiled kernel, which is the norm on Gentoo, what I use. I did try to have it minimal, but I'm still learning (and also, I do need stuff, it's a general-purpose laptop). The only notable thing that I didn't include is any kind of bluetooth. And currently I see that the initramfs has 27 MB (and was only 7 MB in 6.6 kernel versions, I'm now curious why) and the vmlinuz has only 15 MB.