Scottibyte Enterprise Consulting Services is dedicated to describing the best practices for creating and maintaining home and office networks that are fundamentally secure with the best possible performance.
We leverage private cloud to provide extended functionality in home and office automation using Ubiquiti UniFi and QNAP NAS products.
We provide simplified processes and procedures to leverage best in class products to make your life easier.
We also focus on providing simplicity to the Home Automation environment.
That one came as a result of my adding a desktop to an LXD container about three years ago. I've had a few questions on how to do this. Interestingly, there is no easy way to add the SPICE protocol to a container. SPICE is the embedded protocol used for the GUI on Incus VMs and in QEMU generally.
Great example, thanks, but have you tried to take it one step further and mount /dev/dri/renderD128 and friends to effectively pass the GPU through to the desktop environment?
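For anyone wanting to try this: Incus has a built-in `gpu` device type that bind-mounts the host's /dev/dri render nodes into a container, so you don't have to mount them by hand. A minimal sketch, assuming a container named `desktop` (the instance and device names here are made up), added with `incus config device add desktop gpu0 gpu`, which results in a device stanza like:

```yaml
# Shown by `incus config show desktop` after adding the device.
devices:
  gpu0:
    type: gpu
    gputype: physical   # default for containers: passes the DRM nodes through
```

Inside the container, the user running the desktop still needs to be in the `video`/`render` groups for the nodes to be usable.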
@@scottibyte I'm not up on all the details, but there are two parts: the codec itself, and the DRM encryption layer. I would expect codec support to show up, despite likely requiring license fees for the documentation or for implementing the patents. But the encryption layer would likely only be cracked if someone got hold of leaked keys. Now, those keys have to be embedded in any licensed device, so at some point they could be extracted, but I believe there are also key-revocation methods to counter that.
Man, I've used all of the big competitors when it comes to RMAs, and Samsung has by far the worst process and policies in place for getting a replacement. G.Skill requires only the product SKU and it's off to the races; Samsung makes it nearly impossible to even submit a claim. Thanks for the video.
@@scottibyte Sure, Incus is cool and shows great promise but right now it's no match for PVE + PBS. Ironically, the point of Incus for me is to host Proxmox on a Minisforum MS-01 within a CachyOS desktop, thanks to some help from your excellent YT channel.
Docker adds that "appliance" convenience, but it is not without sometimes bizarre issues to deal with; my "Fix Docker Issues" video got into some of that. Now that I have had Ubuntu 24.04 for a week on my Incus servers, life is much better. Nested Docker in both LXD and Incus had never been an issue until now. This is a prime example of the Docker folks not testing the impact of ZFS 2.2 in Ubuntu 24.04 in order to head off this issue, and that may be because this bizarre ID-mapping behavior might only occur with container nesting. Still, I shouldn't seemingly be the first to identify the issue. This isn't an Incus/LXD server issue, since the hosting infrastructure shouldn't have caused the issue to begin with.
@@scottibyte Yes, about a year ago I started seeing disk I/O issues in LXD containers when using ZFS pools on top of a ZFS (FreeNAS) storage back end. In summary, I learned that COW-on-COW (copy-on-write) file systems are dicey when running ZFS stacked, due to the version differences between the ZFS layers.
Again, Linux provides choices, and there are always other tools like Podman, Boxes, and k3s. That said, Docker is very popular, and it helps others when we look at issues concerning the most popular tools.
As with all things, if you make a different decision, would different results occur? Yes. ZFS is a great file system. Overlay2 is the default storage driver in Docker, and it has always worked. Why is it suddenly broken?
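For readers who want to check or pin which storage driver their Docker daemon is actually using (it auto-selects one based on the backing filesystem), the driver can be set explicitly in /etc/docker/daemon.json; a minimal fragment (restart the daemon after editing):

```json
{
  "storage-driver": "overlay2"
}
```

`docker info` reports the active driver, which is a quick way to confirm whether a host ended up on overlay2 or on the zfs driver.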
I use explicit udev rules to guarantee consistent NIC device names. Here's a sample with onboard and USB NICs:

#--- On-board Ethernet NIC ---#
ACTION=="add", \
SUBSYSTEM=="net", \
ATTR{address}=="2c:44:fd:65:58:e8", \
NAME="ne0"

#--- Plugable USB-to-Ethernet adapter ---#
ACTION=="add", \
SUBSYSTEM=="net", \
SUBSYSTEMS=="usb", \
ATTRS{idProduct}=="1790", \
ATTRS{idVendor}=="0b95", \
NAME="ue0"
Yes, those work great for USB devices and I have used them for years. If you watched carefully, the point was to NOT configure the PCIe device, but still have it enabled in the motherboard BIOS. Everyone seems to be homing in on my brief comment about inconsistent naming; in that respect, you are right. However, that was not the intended subject of the video.
@@scottibyte Yes, I understand and agree with your use case and I've also done what you did. My personal history has proven that I need to use the additional nic at some point in the future. 🙂
@@danwilhelm7214 sorry to have been short with you. I've been so stressed lately and trying to decide if I want to call it quits doing this. Just unending hours and not enough interest from others to make it appealing.
@@scottibyte Hope you don't quit doing these. I watch a lot of them and use your forums quite often (as you probably remember). I'm very grateful for everything you've done for me!
@@scottibyte Hi Scott. No need to apologize at all! I really enjoy your videos, but I understand that you have to "stay sane" 🙃 I wish you luck with whichever path you choose!
If you use systemd-networkd, or netplan (which uses networkd), you can create your link interface and match it based on its MAC address. This allows it to be configured even with a wrong interface name. Networkd can also match on the driver used. Alternatively, you could use udev rules to rename the interface to something stable.
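A minimal sketch of the networkd approach (the MAC address and interface name here are placeholders): a `.link` file renames the interface early in boot based on its hardware address, e.g. as /etc/systemd/network/10-lan0.link:

```ini
[Match]
MACAddress=aa:bb:cc:dd:ee:ff

[Link]
Name=lan0
```

After that, netplan or a `.network` file can match on the stable `lan0` name instead of a kernel-assigned one like `enp5s0`.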
I am guessing that is because on Windows you are either using PuTTY for SSH or the virtual WSL2 terminal. Unfortunately, you have bumped into the limits of Windows. Windows will be able to use the YubiKey just fine for web-based authentication, which uses WebAuthn.
I'm noticing that none of these virtualization platforms will run Docker/Podman directly. Is there some reason why Docker/Podman always needs to be installed into either a VM or an LXC? Why not run Docker/Podman containers "directly in Incus" for use cases where that would be fine? Yeah, I know I'm probably ignorant, but so far the question has not gone away.
You can always run Docker/Podman directly on a server, just like you can run an LXD or Incus server on a server. If you want to run Podman and don't need LXD or Incus, then just load Podman/Docker on your server and you are done.
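For the nested case the thread keeps coming back to, an Incus container can host Docker directly if nesting is enabled on the instance. A sketch, assuming a container named `dockerhost` (the name is made up), launched with `incus launch images:ubuntu/24.04 dockerhost -c security.nesting=true`, which corresponds to the config key:

```yaml
# Equivalent to: incus config set dockerhost security.nesting=true
config:
  security.nesting: "true"
```

With that set, `apt install docker.io` inside the container works as it would on a bare host, while the Docker workloads still benefit from the Incus snapshot/backup tooling.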
Pretty cool tool. Curious whether you can place the username and password into an environment variable instead of clear text in the config file? That seems like a good practice. Also, is it normal to clone apps into the html directory? All of this is new to me, so I am curious about said usage.
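One common pattern for this, independent of the specific tool (the variable and key names below are made up for illustration, not this tool's actual config keys), is to render the config file from a template at deploy time so the credentials live only in the environment:

```shell
# Credentials come from the environment, not from a file kept in
# version control. APP_USER/APP_PASS are hypothetical names.
export APP_USER="alice"
export APP_PASS="s3cret"

# Render the real config from a template; the heredoc expands the
# variables at generation time.
cat > config.ini <<EOF
username = ${APP_USER}
password = ${APP_PASS}
EOF
```

The generated config.ini still contains the secret, so create it with tight permissions (e.g. `umask 077` first) and keep it out of any repository.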
Thanks for this! It got me up and running. What is the benefit of installing it in Docker vs. directly into the container? It seems to install fine without Docker, with a simple apt install of the various Python dependencies.
I've completely stopped editing apt-provisioned config files, and am using drop-ins instead for this exact reason. This is especially useful if you use the GUI updater which just blindly does whatever it wants, and flagrantly refuses to log what it did in an easily accessible way.
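As a concrete sketch of the drop-in approach (the file name and setting below are made up for illustration): instead of editing a package-owned file such as /etc/systemd/resolved.conf, create a drop-in that upgrades will never touch:

```ini
# /etc/systemd/resolved.conf.d/dns.conf
[Resolve]
DNS=192.168.1.1
```

systemd merges every *.conf file in the corresponding .d directory over the stock file, so the package manager can replace the stock file on upgrade without clobbering local changes.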
Most of us Home Lab folks are deeply into the CLI and we customize our configurations for precise use cases. In videos like this one, I try to address the basics for those new to the technology and I have gotten the question about upgrading container OS instances bunches of times.
I quite enjoyed this video for being easy to follow, and not having to try it on my Debian as I watch. Probably also worth stating that one should remove any 3rd party repos (such as PPAs) from apt config, before attempting any distro upgrades. Otherwise things can explode in spectacular ways when funky packages come in through the sky-lights to crash the party.
@@stepannovotny4291 Not required. The update manager automatically comments out all repos (PPAs) as part of the upgrade process.
@@scottibyte Maybe I've been doing it for no good reason. Would you know whether /etc/apt/sources.list.d/* is also taken care of automatically (on Debian)?
Ubuntu belongs to Canonical, which fired S. Graber (the Incus guru)... well, no Ubuntu for me: all my containers are NixOS and all my hosts are NixOS. I have chosen the unstable channel :) In containers I made an alias (in the single configuration file) for the rebuild command: rr = "sudo nixos-rebuild switch"; So to upgrade a container I just enter: rr (enter). That's all, the power of NixOS!
Canonical did not fire Stephane. He left Canonical with his team because they didn't like the direction the project was taking. Specifically, Canonical decided that contributions to the project would require relinquishing rights to the contributed code to Canonical. That, and Canonical was integrating LXD with an increasing number of Canonical-developed products, and Stephane and his team did not like that direction. Remember, Linux is about choices, especially desktop Linux. Historically, RHEL is the leading "server" distro; however, in the last two years, Ubuntu Server has taken the lead. Ubuntu Server has the largest contingent of forums and community support to aid in solving problems. NixOS is an immutable OS, and that is attractive. However, an immutable OS is more applicable to desktops than to servers, and servers normally host a dedicated app, often with a lot of customization. My advice to you: if NixOS works for you, go for it!
@@magnificoas388 and honestly LXD will probably work better for most companies and Incus will be a better fit for us Home Labbers. With Stephane's direction, we are likely to see some awesome advances in Incus. BTW, I have covered some of this in my Incus version videos.
@@scottibyte Thanks! I am following your tutorials. BTW, I have chosen Incus because it is 100% FOSS and I can keep my hosts as NixOS. LXD won't be able to do that (if I have understood correctly), and Proxmox is Debian.
@@scottibyte I agree, and your niche is not the only genre that needs this done. I am interested in the fact that you have an Incus container on your desktop computer.
Very interesting content - I'm a long-time SSH/SFTP user and use FIDO keys for MFA, but I will be using your video as a guide to using a YubiKey for SSH keys going forward!