Modern AI data centers consume enormous amounts of power, and they are set to become even more power-hungry in the coming years as companies like Google, Microsoft, Meta, and OpenAI strive towards artificial general intelligence (AGI). Oracle has already outlined plans to use nuclear power plants for its 1-gigawatt data centers, and Microsoft appears set to do the same: it just inked a deal to restart a nuclear power plant to feed its data centers, Bloomberg reports.

  • Etterra@lemmy.world · +43/−4 · 3 months ago

    I’m sure that everyone will recognize that this was a great idea in a couple of years when generative LLM AI goes the way of the NFT.

    • Cethin@lemmy.zip · +38/−1 · 3 months ago

      Honestly, it probably is a great idea regardless. The plant operated profitably for a very long time, and I’m sure it can again with some maintenance and upgrades. People only know Three Mile Island for the (not-so-disastrous) disaster, but the rest of the plant operated for decades afterward without any issues.

      • kent_eh@lemmy.ca · +15/−1 · 3 months ago

        > with some maintenance and upgrades.

        Hopefully we can trust these tech bros to do that properly and without using their usual “move fast and break things” approach.

        • Cethin@lemmy.zip · +18 · 3 months ago

          They are only buying 100% of the output. The old owners still own and operate the plant.

        • ShepherdPie@midwest.social · +4/−2 · 3 months ago

          And if they do skimp on maintenance and upgrades and the plant melts down, we can rest assured that no harm will come to the company, because the scale of the disaster would wipe them out and they’re “too big to fail.”

      • Valmond@lemmy.world · +6/−1 · 3 months ago

        It’s one hell of an old nuclear plant if it’s the original Three Mile Island one.

    • TheGalacticVoid@lemm.ee · +9/−2 · 3 months ago

      LLMs have real uses, even if they’re being overhyped right now. And even if they do fail, more nuclear power is a great outcome.

    • douglasg14b@lemmy.world · +13/−8 · edited · 3 months ago

      NFTs were a scam from the start: something with no actual purpose, utility, or value being given value through hype.

      Generative AI is very different. In my honest opinion, you have to have your head in the sand if you don’t believe that AI will keep incrementally improving and expanding in capabilities, just as it has year over year for the last 5 to 10 years. And just as it has for the last decade, it will continue to solve more and more real-world problems in increasingly effective ways.

      It isn’t just constrained to LLMs, either.

        • oatscoop@midwest.social · +5/−1 · edited · 3 months ago

          One of the major problems with LLMs is that they’re in a “boom”. People are rightfully soured on them as a concept because jackasses trying to make money lie about their capabilities and utility, never mind the ethics of obtaining the datasets used to train them.

          They’re absolutely limited and flawed, and there are better solutions for most problems … but beyond the bullshit, LLMs are a useful tool for some problems, and they’re not going away.

            • oatscoop@midwest.social · +1/−1 · 3 months ago

              There are jobs that it isn’t feasible or practical to pay an actual human to do.

              Human translators exist and are far superior to machine translators. But do you hire one every time you need something translated in a casual setting, or do you use something like Google Translate? LLMs are the reason modern machine translation is infinitely better than it was a few years ago.

        • EnoBlk@lemmy.world · +4/−2 · 3 months ago

          That’s one group’s opinion. We still see LLMs improving, and I’m sure they will continue to improve and be adapted for whatever future uses we need. I personally find them great in their current state for what I use them for.

            • EnoBlk@lemmy.world · +1/−1 · 3 months ago

              I use them regularly for personal and work projects; they’re great at outlining what I need to do in a project, as well as identifying oversights in it. If industry experts are saying this, then why are improvements still being made, and why are they still providing value to people? Just because you don’t use them doesn’t mean they aren’t useful.

            • areyouevenreal@lemm.ee · +2/−2 · 3 months ago

              Even if they didn’t improve any further, there are still uses for the LLMs we have today. And that’s only one kind of AI; the kind that makes all the images and videos is completely separate, and it has come a long way too.

                • areyouevenreal@lemm.ee · +2/−2 · 3 months ago

                  Bruh, you have no idea about the costs. I doubt you have even tried running AI models on your own hardware. There are literally some models that will run on a decent smartphone. Not every LLM is ChatGPT, enormous in size and resource consumption and hidden behind a veil of closed-source technology.

                  Also that trick isn’t going to work just looking at a comment. Lemmy compresses whitespace because it uses Markdown. It only shows the extra lines when replying.

                  Can I ask you something? What did Machine Learning do to you? Did a robot kill your wife?

        • iopq@lemmy.world · +1/−2 · 3 months ago

          There are always new techniques and improvements. If you look at the current state of the art, we haven’t even had a slowdown.

      • AEsheron@lemmy.world · +1 · 3 months ago

        I suspect you’re right, but there’s never really a good way to tell with these kinds of experimental technologies. It could be a runaway chain of improvement. Or it’s probably even odds whether there’s a visible and clear decline before it peters out, or it just suddenly slams into a brick wall with no warning.