• 0 Posts
  • 331 Comments
Joined 1 year ago
Cake day: June 17th, 2023

  • Um no. Containers are not just chroot. Chroot is a way to isolate or namespace the filesystem, giving the process running inside access only to those files. Containers do this, but they also isolate the process IDs, network, and various other system resources.

    Additionally, runtimes like Docker bring in vastly better tooling around all of this, making containers much easier to work with. They are like chroot on steroids, not simply marketing fluff.
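
    As a rough sketch of what that extra isolation looks like (assuming Linux, glibc, and root privileges; this is purely illustrative, not how Docker itself is implemented), unshare() can split off new PID, network, and hostname namespaces, which chroot alone can never do:

    ```c
    /* Sketch: namespaces beyond the filesystem. Requires root (CAP_SYS_ADMIN). */
    #define _GNU_SOURCE
    #include <sched.h>      /* unshare(), CLONE_* flags */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>     /* fork(), getpid(), execlp() */

    int main(void)
    {
        /* New PID + network + UTS namespaces; chroot gives you none of these. */
        if (unshare(CLONE_NEWPID | CLONE_NEWNET | CLONE_NEWUTS) == -1) {
            perror("unshare");
            return EXIT_FAILURE;
        }

        pid_t child = fork();  /* first child in the new PID namespace is PID 1 */
        if (child == 0) {
            printf("inside, my pid is %d\n", (int)getpid());  /* prints 1 */
            execlp("ip", "ip", "link", (char *)NULL);  /* only a bare loopback */
            perror("execlp");
            return EXIT_FAILURE;
        }
        waitpid(child, NULL, 0);
        return EXIT_SUCCESS;
    }
    ```

    Run as root, the child reports PID 1 and sees only an empty network namespace; a plain chroot would change neither of those things.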




  • It might, depending on how much tension there is. Too much and it will cause the filament to slip in the extruder, causing under-extrusion. If you are not seeing signs of under-extrusion then you are fine for now - but that might change if you swap filament or anything else. I would try to lower how much tension the filament is under to avoid problems in the future; otherwise it is something to keep in mind if you do start seeing signs of under-extrusion.


  • When I change devices or hit file size limits, I’ll compress and send things to my NAS.

    Whaaatt!?!!? That sounds like you don’t use git? You should use git. It is a requirement for basically any job and there is no reason not to use it on every project. Then you can keep your projects on a server somewhere - on your NAS if you want, or else on something like GitHub/GitLab/Bitbucket. That way your local copies don’t really matter, only what is on the remote, and with decent backups of that you don’t need to constantly archive things from your local machine.


  • It doesn’t technically have drivers at all or go missing. All supporting kernel modules for hardware are always present at the configuration level.

    This isn’t true? The Linux kernel has a lot of drivers in its source tree, but not all of them - notably, NVIDIA’s proprietary drivers have never been included. And even the in-tree drivers may or may not be compiled into the kernel image. They can be, and generally are, built alongside the kernel but as separate modules that are loaded at runtime. These days few drivers are compiled in and most are dynamically loaded depending on what hardware is present on the system. Distros can opt to split these drivers up into different packages that you may or may not have installed - which is common for less common hardware.

    Though with the way most distros ship drivers, they don’t tend to spontaneously stop working. Well, with the exception of Arch Linux, which deletes the old kernel and modules during an upgrade. That means the currently running kernel can no longer find its modules and stops dynamically loading them - which often results in hotplug devices like USB failing if you plug them in after their drivers have been unloaded (a reboot fixes it, since that boots into the latest kernel, whose modules are present).
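
    As a quick illustration of why that bites (a sketch of my own assuming the conventional /lib/modules layout, not an Arch tool): the running kernel loads modules on demand from /lib/modules/<release>, so you can check whether that directory survived an upgrade:

    ```c
    /* Sketch: does the *running* kernel still have its module directory? */
    #include <stdio.h>
    #include <sys/stat.h>
    #include <sys/utsname.h>

    int main(void)
    {
        struct utsname u;
        if (uname(&u) == -1) {
            perror("uname");
            return 1;
        }

        char path[512];
        snprintf(path, sizeof path, "/lib/modules/%s", u.release);

        struct stat st;
        if (stat(path, &st) == 0)
            printf("%s exists; drivers can still be loaded on demand\n", path);
        else
            printf("%s is missing; no new modules until you reboot\n", path);
        return 0;
    }
    ```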


  • I don’t get it? They seem to be arguing in favor of bootc over systemd because bootc supports both a split /usr and the /usr merge? But systemd is the same: there is really nothing in systemd that requires one or the other. Even the linked post about systemd says:

    Note that this page discusses a topic that is actually independent of systemd. systemd supports both systems with split and with merged /usr, and the /usr merge also makes sense for systemd-less systems.

    I don’t really get their points for it either. It basically boils down to not liking a mutable root filesystem because the symlinks are so load-bearing… but most distros before the /usr merge had a writable /bin anyway, and nothing is stopping you from mounting the root fs read-only on a /usr-merged distro.

    And their other main argument is that /opt and similar don’t follow the /usr merge, and neither do things like Docker. But /opt is just a dumping ground for things that don’t fit the file hierarchy, and inside Docker containers you can do what you want - like any package, really, nothing is forced to follow the Unix filesystem hierarchy. I don’t get what any of that has to do with bootc or the /usr merge at all.
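
    For what the /usr merge concretely looks like, here is a small sketch (assuming a typical Linux layout; purely illustrative) that checks whether /bin, /sbin, and /lib are just symlinks into /usr. It behaves the same with or without systemd, which is rather the point:

    ```c
    /* Sketch: detect a merged /usr by checking the classic top-level dirs. */
    #include <stdio.h>
    #include <unistd.h>     /* readlink() */

    int main(void)
    {
        const char *dirs[] = { "/bin", "/sbin", "/lib" };
        char target[256];

        for (int i = 0; i < 3; i++) {
            ssize_t n = readlink(dirs[i], target, sizeof target - 1);
            if (n >= 0) {
                target[n] = '\0';  /* readlink() does not NUL-terminate */
                printf("%s -> %s (merged)\n", dirs[i], target);
            } else {
                printf("%s is a real directory (split layout)\n", dirs[i]);
            }
        }
        return 0;
    }
    ```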


  • TL;DR: yes, it does affect security, but quite likely not by any meaningful amount worth worrying about.

    Any extra package you install is extra code on your system that has a chance of containing vulnerabilities and could thus be an extra attack vector. But the chances of that actually affecting you are minuscule. Unless you have some form of elevated threat model, I would not worry about it. There are far more impactful things you would want to tackle first to increase your security than a second desktop environment being installed.




  • nous@programming.dev to Linux@lemmy.ml · Should I be worried? · 27 days ago

    If you have everything you need backed up, you can reinstall on a new hard drive and restore it all, so you should not be completely fucked - just inconvenienced. You will lose the stuff that was not backed up, so if any of that is a pain to get again, the restore might be more painful overall.

    Others have said some things you might want to try. But having a spare disk you can swap to is never a bad idea. Disks do fail, and you should plan for what to do when they do. Backing up your data is a good first step.

    I would say it is not a bad idea to just get a new disk now and go through the process of restoring everything anyway - you can treat it like your disk has failed and do exactly what you would need to do to recover, with the ability to swap back if you need to.

    This is a good way to find things you might have missed in your backups.