Hi! I'm new to self-hosting. Currently I am running a Jellyfin server on an old laptop, and I am very curious to host other things in the future, like Immich or other services. I see a lot of mention of a program called Docker.

Searching for it on the internet, I am still not very clear on what it does.

Could someone explain this to me like I'm stupid? What does it do and why would I need it?

Also, what are other services that might be interesting to self-host in the future?

Many thanks!

EDIT: Wow! Thanks for all the detailed and super quick replies! I've been reading all the comments here and am concluding that (even though I am currently running only one service) it might be interesting to start using Docker to run all (future) services separately on the server!

  • Black616Angel@discuss.tchncs.de · 1 month ago

    Please don’t call yourself stupid. The common internet slang for that is ELI5 or “explain [it] like I’m 5 [years old]”.

    I’ll also try to explain it:

    Docker is a way to run a program on your machine, but in a way that the developer of the program can control.
    It's called containerization: the developer can make a package (or container) with an operating system and all the software they need, and ship that directly to you.

    You then need software such as Docker (or Podman, etc.) to run this container.

    Another advantage of containerization is that all changes stay inside the container except for directories you explicitly want to add to the container (called volumes).
    This way the software can’t destroy your system and you can’t accidentally destroy the software inside the container.
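
    For example, since OP already runs Jellyfin, here is roughly what that looks like in practice. This is a minimal sketch: the host paths are placeholders, jellyfin/jellyfin is the image published by the Jellyfin project, and 8096 is its usual web port.

    ```sh
    # Start Jellyfin in a container. The -v flags are the volumes: host
    # directories mapped into the container so config and media persist
    # outside it. Everything else the container writes stays inside it.
    docker run -d --name jellyfin \
      -p 8096:8096 \
      -v /srv/jellyfin/config:/config \
      -v /srv/media:/media:ro \
      jellyfin/jellyfin
    ```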

      • folekaule@lemmy.world · 1 month ago

        I know it’s ELI5, but this is a common misconception and will lead you astray. Containers and VMs do not have the same level of isolation, and they have very different purposes.

        For example, containers are disposable cattle. You don’t back up containers. You back up volumes and configuration, but not containers.

        Containers share the kernel with the host, so your container needs to be compatible with the host (though most dependencies are packaged with images).

        For self hosting maybe the difference doesn’t matter much, but there is a difference.
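
        For instance, backing up a named volume usually means archiving its contents from a throwaway container; a sketch, where "jellyfin_config" is a placeholder volume name:

        ```sh
        # Archive the contents of a named volume into a tarball on the host,
        # using a temporary Alpine container that is thrown away afterwards.
        docker run --rm \
          -v jellyfin_config:/data:ro \
          -v "$(pwd)":/backup \
          alpine tar czf /backup/jellyfin_config.tar.gz -C /data .
        ```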

        • fishpen0@lemmy.world · 1 month ago

          A million times this. A major difference between the way most VMs are run and most containers are run is:

          VMs write to their own internal disk; containers should be immutable and not able to write to their internal filesystem.

          You can have 100 identical containers running and, if you are using your filesystem correctly, only one copy of that container image is on your hard drive. You can have two nearly identical containers running, and then only a small amount of the second container image (another layer) takes up extra disk space.
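
          You can see this yourself (jellyfin/jellyfin is just an example of an image you might already have pulled):

          ```sh
          # The read-only layers that make up the image (stored once on disk)
          docker history jellyfin/jellyfin
          # Per-container sizes: only the thin writable layer each one adds
          docker ps -s
          ```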

          Similarly, containers and VMs use memory and CPU allocations differently, and they run with extremely different security and networking scopes, but that requires even more explanation and is less relevant to self-hosting unless you are trying to learn this to eventually get a job in it.

  • PhilipTheBucket@ponder.cat · 1 month ago

    Okay, so way back when, Google needed a way to install and administer 500 new instances of whatever web service they had going on without it being a nightmare. So they made a little tool to make it easier to spin up random new stuff easily and scriptably.

    So then the whole rest of the world said “Hey Google’s doing that and they’re super smart, we should do that too.” So they did. They made Docker, and for some reason that involved Y Combinator giving someone millions of dollars for reasons I don’t really understand.

    So anyway, once Docker existed, nobody except Google and maybe like 50 other tech companies actually needed to do anything that it was useful for (and 48 out of those 50 are too addled by layoffs and nepotism to actually use Borg / K8s / Docker (don’t worry, they’re all the same thing) for its intended purpose). They just use it so their tech leads can have conversations at conferences and lunches where they make it out like anyone who’s not using Docker must be an idiot, which is the primary purpose of technology as far as they’re concerned.

    But anyway in the meantime a bunch of FOSS software authors said “Hey this is pretty convenient, if I put a setup script inside a Dockerfile I can literally put whatever crazy bullshit I want into it, like 20 times more than even the most certifiably insane person would ever put up with in a list of setup instructions, and also I can pull in 50 gigs of dependencies if I want to of which 2,421 have critical security vulnerabilities and no one will see because they’ll just hit the button and make it go.”

    And so now everyone uses Docker and it’s a pain in the ass to make any edits to the configuration or setup and it’s all in this weird virtualized box, and the “from scratch” instructions are usually out of date.

    The end

    • i_am_not_a_robot@discuss.tchncs.de · 1 month ago

      Borg / k8s / Docker are not the same thing. Borg is the predecessor of k8s, a serious tool for running production software. Docker is the predecessor of Podman. They all use containers, but Borg / k8s manage complete software deployments (usually featuring processes running in containers) while Docker / Podman only run containers. Docker / Podman are better for development or small temporary deployments. Docker is a company that has moved features from their free software into paid software. Podman is run by Red Hat.

      There are a lot of publicly available container images out there, and most of them are poorly constructed, obsolete, unreproducible, unverifiable, vulnerable software, uploaded by some random stranger who at one point wanted to host something.

    • tuckerm@feddit.online · 1 month ago

      I’m an advocate of running all of your self-hosted services in a Docker container and even I can admit that this is completely accurate.

  • grue@lemmy.world · 1 month ago

    A program isn’t just a program: in order to work properly, the context in which it runs — system libraries, configuration files, other programs it might need to help it such as databases or web servers, etc. — needs to be correct. Getting that stuff figured out well enough that end users can easily get it working on random different Linux distributions with arbitrary other software installed is hard, so developers eventually resorted to getting it working on their one (virtual) machine and then just (virtually) shipping that whole machine.
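
    In Docker terms, that shipped environment is described in a Dockerfile. A hypothetical sketch (the app and its dependencies are made up for illustration):

    ```dockerfile
    # Start from a known base system, install the exact dependencies the
    # developer tested against, copy the program in, and declare how to run it.
    FROM debian:bookworm-slim
    RUN apt-get update && apt-get install -y --no-install-recommends \
            python3 python3-flask \
        && rm -rf /var/lib/apt/lists/*
    COPY app.py /srv/app.py
    EXPOSE 8080
    CMD ["python3", "/srv/app.py"]
    ```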

    • Scrollone@feddit.it · 1 month ago

      Isn’t all of this a complete waste of computer resources?

      I’ve never used Docker, but I want to set up an Immich server, and Docker is the only official way to install it. And I’m a bit afraid.

      Edit: thanks for downvoting an honest question. Wtf.

      • Encrypt-Keeper@lemmy.world · 1 month ago

        If it were actual VMs, it would be a huge waste of resources. That’s really the purpose of containers. It’s functionally similar to running a separate VM specific to every application, except you’re not actually virtualizing an entire system like you are with a VM. Containers are actually very lightweight. So much so, that if you have 10 apps that all require database backends, it’s common practice to just run 10 separate database containers.
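
        In a compose file that pattern looks something like this (a sketch: the app image is a placeholder and postgres:16 stands in for whatever backend it needs):

        ```yaml
        # Each app gets its own database container; the data lives in a volume.
        services:
          app:
            image: example/app:latest
            depends_on:
              - db
          db:
            image: postgres:16
            environment:
              POSTGRES_PASSWORD: change-me
            volumes:
              - app-db:/var/lib/postgresql/data
        volumes:
          app-db:
        ```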

      • PM_Your_Nudes_Please@lemmy.world · 1 month ago

        It can be, yes. One of the largest complaints with Docker is that you often end up running the same dependencies a dozen times, because each of your dozen containers uses them. But the trade-off is that you can run a dozen different versions of those dependencies, because each image ships with the specific version it needs.

        Of course, the big issue with running a dozen different versions of dependencies is that it makes security a nightmare. You’re not just tracking exploits for the most recent version of what you have installed. Many images end up shipping with out-of-date dependencies, which can absolutely be a security risk under certain circumstances. In most cases the risk is mitigated by the fact that the services are isolated and don’t really interact with the rest of the computer. But it’s at least something to keep in mind.

      • sugar_in_your_tea@sh.itjust.works · 1 month ago

        The main “wasted” resource here is storage space, and maybe a bit of RAM; the actual runtime overhead is very limited. It turns out storage and RAM are some of the cheapest resources on a machine, and you probably won’t notice the extra usage.

        VMs are heavy, Docker containers are very light. You get most of the benefits of a VM with containers, without paying as high of a resource cost.
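
        You can check the actual overhead yourself once things are running:

        ```sh
        # Live CPU and memory usage per container
        docker stats --no-stream
        # Disk space used by images, containers and volumes
        docker system df
        ```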

      • dustyData@lemmy.world · 1 month ago

        On the contrary. It relies on the premise of segregating binaries, config and data. And since a container only runs one app, it ships a bare-minimum version of a system. Most container systems also deduplicate common required binaries, so containers are usually very small and efficient. A traditional system’s libraries can balloon to dozens of gigabytes, only pieces of which are used at any given time by different software, whereas containers can easily be made headless and barebones: cutting the fat and leaving only the most essential libraries, they fit very tiny and underpowered hardware without losing functionality or performance.

        Don’t be afraid of it, it’s like Lego but for software.

    • I Cast Fist@programming.dev · 1 month ago

      So instead of having problems getting the fucking program to run, you have problems getting docker to properly build/run when you need it to.

      At work, I have one program that fails to build an image because a 3rd-party package’s maintainers forgot to update their PGP signature; one that builds and runs, but for some reason gives a 404 error when I try to access it on localhost; and one that whoever the fuck made it literally never ran, because the Dockerfile was missing some 7 packages in the apt install line.

      • turmacar@lemmy.world · 1 month ago

        Building from source is always going to come with complications. That’s why most people don’t do it. A docker compose file that ‘just’ downloads the stable release from a repo and starts running is dramatically simpler than cross-referencing all your services to make sure there are no dependency conflicts.

        There’s an added layer of complexity under the hood to simplify the common use case.
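
        As a concrete sketch of what that kind of compose file looks like, using Jellyfin as the example (host paths are placeholders):

        ```yaml
        services:
          jellyfin:
            image: jellyfin/jellyfin   # the published release image
            ports:
              - "8096:8096"
            volumes:
              - /srv/jellyfin/config:/config
              - /srv/media:/media:ro
            restart: unless-stopped
        ```

        Then `docker compose up -d` downloads the image and starts the service.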

  • jagged_circle@feddit.nl · 1 month ago

    It’s an extremely fast and insecure way to set up services. Avoid it unless you want to download and execute malicious code.

      • jagged_circle@feddit.nl · 1 month ago

        Package managers like apt use cryptography to check signatures on everything they download, to make sure it isn’t malicious.

        Docker doesn’t do this. It has a system called DCT (Docker Content Trust), but it’s horribly broken (not to mention off by default).

        So when you run docker pull, you can’t trust anything it downloads.
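
        For reference, DCT is the mechanism I mean; it only does anything if you opt in, per shell, with an environment variable:

        ```sh
        # With content trust enabled, docker refuses to pull unsigned tags.
        export DOCKER_CONTENT_TRUST=1
        docker pull debian:latest
        ```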

        • Darioirad@lemmy.world · 1 month ago

          Thank you very much! I can agree about the off-by-default part, but why is it horribly broken?

          • jagged_circle@feddit.nl · 1 month ago

            PKI.

            Apt and most release signing have a root of trust shipped with the OS, and the PGP keys are cross-signed on keyservers (web of trust).

            DCT is just TOFU (trust on first use). They disable it because it gives a false sense of security. Docker is just not safe. Maybe in 10 years they’ll fix it, but honestly it seems like they just don’t care. The well is poisoned. Avoid it. Use apt or some other package manager that actually cares about security.

            • Darioirad@lemmy.world · 1 month ago

              So, if I understand correctly: rather than using prebuilt images from Docker Hub or untrusted sources, the recommended approach is to start from a minimal base image of a known OS (like Debian or Ubuntu), and explicitly install required packages via apt within the Dockerfile to ensure provenance and security. Does that make sense?
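
              Something along these lines, as a sketch (nginx is just an example of a service available from Debian’s signed repositories):

              ```dockerfile
              # Build on a minimal Debian base and install only from Debian's
              # signed apt repositories, so provenance comes from Debian's keyring.
              FROM debian:bookworm-slim
              RUN apt-get update \
               && apt-get install -y --no-install-recommends nginx \
               && rm -rf /var/lib/apt/lists/*
              EXPOSE 80
              CMD ["nginx", "-g", "daemon off;"]
              ```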

              • jagged_circle@feddit.nl · 1 month ago

                Install the package with apt. Avoid Docker completely.

                If the Docker image maintainer has a GitHub, open a ticket asking them to publish a Debian package.

                • Darioirad@lemmy.world · 1 month ago

                  I see your point about trusting signed Debian packages, and I agree that’s ideal when possible. But apt and Docker serve very different purposes: one is for OS-level package management, the other for containerization and isolation. That’s actually where your answer confused me a bit; it felt like you were comparing tools with different goals (though that may be down to my limited knowledge). My intent isn’t just to install software, but to run it in a clean, reproducible, and isolated environment (maybe more than one on the same hosting machine). That’s why I’m considering building my own container from a minimal Debian base and installing everything via apt inside it, to preserve trust while still using containers responsibly! Does this make sense to you? Thank you again for taking the time to reply to my messages.

        • ianonavy@lemmy.world · 1 month ago

          A signature only tells you where something came from, not whether it’s safe. Saying APT is more secure than Docker just because it checks signatures is like saying a mysterious package from a stranger is safer because it includes a signed postcard and matches the delivery company’s database. You still have to trust both the sender and the delivery company. Sure, it’s important to reject signatures you don’t recognize—but the bigger question is: who do you trust?

          APT trusts its keyring. Docker pulls over HTTPS with TLS, which already ensures you’re talking to the right registry. If you trust the registry and the image source, that’s often enough. If you don’t, tools like Cosign let you verify signatures. Pulling random images is just as risky as adding sketchy PPAs or running curl | bash—unless, again, you trust the source. I certainly trust Debian and Ubuntu more than Docker the company, but “no signature = insecure” misses the point.
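
          For example, when a publisher does sign their images with Cosign, verification looks roughly like this (the image reference and key file are placeholders):

          ```sh
          # Verify an image's signature against the publisher's public key
          cosign verify --key cosign.pub registry.example.com/project/image:tag
          ```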

          Pointing out supply chain risks is good. But calling Docker “insecure” without nuance shuts down discussion and doesn’t help anyone think more critically about safer practices.

          • jagged_circle@feddit.nl · 1 month ago

            Oof, TLS isn’t a replacement for signatures. There’s a reason most package managers use release signatures. X.509 is broken.

            And yes, PGP has a WoT (web of trust) to solve its PKI problem. That’s why we can trust apt sigs and not Docker sigs.

    • festus@lemmy.ca · 1 month ago

      Entirely depends on who’s publishing the image. Many projects publish their own images, in which case you’re running their code regardless.