• vane@lemmy.world
    link
    fedilink
    English
    arrow-up
    8
    ·
    edit-2
    9 hours ago

    The problem is that those companies are monopolies and can raise prices indefinitely to pursue this shitty dream, because they have governments in their pockets. Governments are dependent on cloud / Microsoft software - literally every country on this planet, except maybe China, North Korea and Russia. They could raise prices 10 times over the next 10 years and not give a fuck. Spend 1 trillion on AI, say “we’re nearly there” over and over again, and literally nobody can stop them right now.

  • mrmanager@lemmy.today
    link
    fedilink
    English
    arrow-up
    6
    ·
    edit-2
    9 hours ago

    It doesn’t matter if they reach any end result, as long as stocks go up and profits go up.

    Consumers aren’t really asking for AI, but it’s being used to push new hardware and make previous hardware feel old. Eventually everyone will have AI on their phone, most of it unused.

    • Excrubulent@slrpnk.net
      link
      fedilink
      English
      arrow-up
      2
      ·
      5 hours ago

      If enough researchers talk about the problems, that will eventually break through the bubble and investors will pull out.

      We’re at the stage of the new-technology hype cycle where it crashes, essentially for this reason. I really hope it does soon, because then they’ll stop trying to force it down our throats in every service we use.

  • brucethemoose@lemmy.world
    link
    fedilink
    English
    arrow-up
    54
    ·
    edit-2
    16 hours ago

    It’s ironic how conservative the spending actually is.

    Awesome ML papers and ideas come out every week. Low power training/inference optimizations, fundamental changes in the math like bitnet, new attention mechanisms, cool tools to make models more controllable and steerable and grounded. This is all getting funded, right?

    No.

    Universities and such are seeding and putting out all this research, but the big model trainers holding the purse strings/GPU clusters are not using it. They just keep releasing very similar, mostly bog-standard transformer models over and over again, bar a tiny expense for a little experiment here and there. In other words, it’s gone full corporate: tiny, guaranteed incremental improvements without changing much, and no sharing with each other. It’s hilariously inefficient. And it relies on lies and jawboning from people like Sam Altman.

    DeepSeek is what happens when a company is smart but resource-constrained: an order of magnitude more efficient, and even their architecture was very conservative.
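    To make the “bitnet” mention above concrete: the 1.58-bit idea is to constrain weights to {-1, 0, +1} with a single per-tensor scale, so matrix multiplies mostly reduce to additions and subtractions. Below is a rough, illustrative sketch of the absmean quantization step described for BitNet b1.58; the example weights are made up, and this is not how any of the big labs' trainers actually implement it.

    ```python
    import numpy as np

    def absmean_ternary_quantize(w: np.ndarray, eps: float = 1e-8):
        """Quantize a weight matrix to {-1, 0, +1} with one per-tensor scale,
        following the absmean scheme described for BitNet b1.58."""
        scale = np.mean(np.abs(w)) + eps
        w_ternary = np.clip(np.round(w / scale), -1, 1)
        return w_ternary, scale

    # Made-up example weights, just to show the shape of the transform.
    w = np.array([[0.42, -0.07, 0.91],
                  [-0.63, 0.05, -0.28]])
    x = np.ones(3)

    w_q, scale = absmean_ternary_quantize(w)
    print(w_q)                  # entries are only -1, 0, or +1
    print(w @ x)                # full-precision matmul
    print(scale * (w_q @ x))    # cheap ternary approximation of the same product
    ```

    The claim in that line of work is that models trained with this constraint from the start reportedly match full-precision quality while being far cheaper to run; the snippet only shows the arithmetic, not the training.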

  • iAvicenna@lemmy.world
    link
    fedilink
    English
    arrow-up
    6
    ·
    13 hours ago

    The funny thing is, with so much money you could probably do lots of great stuff with the existing AI as it is. Instead they put all the money into compute power so they can overfit their LLMs to look like a human.

      • SqueakyBeaver@lemmy.blahaj.zone
        link
        fedilink
        English
        arrow-up
        12
        arrow-down
        1
        ·
        22 hours ago

        Some parts of the world (mostly Europe, I think) use dots instead of commas for displaying thousands. For example, 5.000 is 5,000 and 1.300 is 1,300
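        Locale-aware formatting shows the two conventions side by side. A minimal sketch, assuming the en_US and de_DE locales are installed on the system:

        ```python
        import locale

        n = 5000.0

        # US/UK convention: comma groups thousands, dot marks the decimals
        locale.setlocale(locale.LC_NUMERIC, "en_US.UTF-8")
        print(locale.format_string("%.2f", n, grouping=True))  # 5,000.00

        # German convention: dot groups thousands, comma marks the decimals
        locale.setlocale(locale.LC_NUMERIC, "de_DE.UTF-8")
        print(locale.format_string("%.2f", n, grouping=True))  # 5.000,00
        ```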

          • lemmydividebyzero@reddthat.com
            link
            fedilink
            English
            arrow-up
            2
            arrow-down
            1
            ·
            19 hours ago

            We (in Europe) probably should be thankful that you are not using feet as the thousands separator over there in the USA… Or maybe separate after every 2nd digit, because why not… ;)

          • itslilith@lemmy.blahaj.zone
            link
            fedilink
            English
            arrow-up
            2
            arrow-down
            3
            ·
            19 hours ago

            It makes sense from a typographical standpoint: the comma is the larger symbol and thus harder to overlook, especially in small fonts or messy handwriting.

            • suicidaleggroll@lemm.ee
              link
              fedilink
              English
              arrow-up
              3
              ·
              edit-2
              17 hours ago

              But in a grammatical sense it’s the opposite. In a sentence, a comma is a short pause, while a period is a hard stop. That means it makes far more sense for the comma to be the thousands separator and the period to be the stop between integer and fraction.

              • itslilith@lemmy.blahaj.zone
                link
                fedilink
                English
                arrow-up
                3
                ·
                16 hours ago

                I have no strong preference either way. I think both are valid and sensible systems, and it’s only confusing because of competing standards. I think over a long enough time, due to the internet, the period as the decimal separator will prevail, but it’s gonna happen naturally, it’s not something we can force. Many young people I know already use it that way here in Germany.

        • Valmond@lemmy.world
          link
          fedilink
          English
          arrow-up
          3
          ·
          21 hours ago

          But usually you don’t put three zeros after the separator, because that reads as a hint of a thousands grouping.

          Like 2.50 is 2€50 but 2.500 is 2500€

          Is there an ISO standard for this stuff?

          • itslilith@lemmy.blahaj.zone
            link
            fedilink
            English
            arrow-up
            3
            arrow-down
            2
            ·
            19 hours ago

            No, 2,50€ is 2€ and 50ct; 2.50€ is wrong in this system. 2,500€ is also wrong (for currency, where you only care about two digits after the comma), while 2.500€ is 2500€.

            • desktop_user@lemmy.blahaj.zone
              link
              fedilink
              English
              arrow-up
              0
              ·
              13 hours ago

              What if you are displaying a live bill for a service billed monthly, like bandwidth, and are charged one penny/cent (whatever Europe’s hundredth is called) per gigabyte? If you use a few megabytes, the bill is less than a hundredth of a unit but still exists.

              • itslilith@lemmy.blahaj.zone
                link
                fedilink
                English
                arrow-up
                2
                ·
                13 hours ago

                Yes, that’s true, but more of an edge case. Something like gasoline is commonly priced in fractional cents, tho.

  • TommySoda@lemmy.world
    link
    fedilink
    English
    arrow-up
    54
    arrow-down
    1
    ·
    edit-2
    1 day ago

    Technology in most cases progresses on a logarithmic scale when innovation isn’t prioritized. We’ve basically reached the plateau of what LLMs can currently do without a breakthrough. They could absorb all the information on the internet and still not come close to what they claim it is. These days we’re in the “bells and whistles” phase, where they add unnecessary bullshit to make it seem new, like adding 5 cameras to a phone or touchscreens to cars. Things that make something seem fancy by slapping on buzzwords and features nobody needs, without actually changing anything except the price.

    • Balder@lemmy.world
      link
      fedilink
      English
      arrow-up
      7
      ·
      17 hours ago

      I remember listening to a podcast that’s about explaining stuff according to what we know today (scientifically). The host is really knowledgeable, does his research, and talks to experts when the subject involves something he isn’t himself an expert in.

      There was this episode where he got into the topic of how technology only evolves with science (because you need to understand the stuff you’re doing, and you need a theory of how it works before you can make new assumptions and test them). He gave the example of the Apple Vision Pro: even though the machine is new (the hardware capabilities, at least), the eye-tracking algorithm it uses was developed decades ago and was already well understood and proven correct in other applications.

      So his point in the episode is that real innovation just can’t be rushed by throwing money or more people at a problem, because real innovation takes real scientists having novel insights and running experiments to expand the knowledge we have. Sometimes those insights are completely random, often you need a whole career in the field, and sometimes it takes a new genius to revolutionize it (think Newton and Einstein).

      Even the current wave of LLMs is simply a product of Google’s transformer paper, which showed we could parallelize the training of language models, leading to the creation of “large language models”. That was Google doing science. But you can’t control when some new breakthrough is discovered, and LLMs are subject to this constraint.
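      For reference, the core operation from that paper is scaled dot-product attention, which is computed for every position in the sequence at once; that is what makes training parallelizable, unlike recurrent models that have to process tokens one after another:

      $$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V$$

      Here Q, K, and V are the query, key, and value matrices derived from the whole input sequence, and d_k is the key dimension used for scaling.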

      In fact, the only practice we know that actually accelerates science is the collaboration of scientists around the world, the publishing of reproducible papers so that others can expand on them and have insights you didn’t even think about, and so on.

  • Tony Bark@pawb.social
    link
    fedilink
    English
    arrow-up
    86
    arrow-down
    5
    ·
    1 day ago

    They’re throwing billions upon billions into a technology with extremely limited use cases that is a novelty at best. My god, even drones fared better in the long run.

    • Snot Flickerman@lemmy.blahaj.zone
      link
      fedilink
      English
      arrow-up
      69
      ·
      1 day ago

      I mean it’s pretty clear they’re desperate to cut human workers out of the picture so they don’t have to pay employees that need things like emotional support, food, and sleep.

      They want a workslave that never demands better conditions, that’s it. That’s the play. Period.

      • CosmoNova@lemmy.world
        link
        fedilink
        English
        arrow-up
        13
        ·
        22 hours ago

        And the tragedy of the whole situation is that they can’t win, because if every worker is replaced by an algorithm or a robot, then who’s going to buy your products? Nobody has money because nobody has a job. And so the economy will shift to producing war machines that fight each other for territory to build more war machine factories, until you can’t expand anymore for one reason or another. Then the entire system will collapse like the Roman Empire and we start from scratch.

        • thatKamGuy@sh.itjust.works
          link
          fedilink
          English
          arrow-up
          5
          ·
          19 hours ago

          producing war machines that fight each other for territory to build more war machine factories until you can’t expand anymore for one reason or another.

          As seen in the retro-documentary Z!

      • TommySoda@lemmy.world
        link
        fedilink
        English
        arrow-up
        24
        ·
        edit-2
        1 day ago

        If this is their way of making AI, brute-forcing the technology without innovation, the infrastructure will probably cost these companies more to maintain than just hiring people would. These AI companies are already not making a lot of money for how much they cost to run. And unless they charge companies millions of dollars just to be able to use their services, they will never make a profit. And since companies are trying to use AI to replace the millions they spend on employees, it seems kinda pointless if they aren’t willing to prioritize efficiency.

        It’s basically the same argument they have with people. They don’t wanna treat people like actual humans because it costs too much, yet letting them live happy lives makes them more efficient workers. Likewise, they don’t want to spend money to make AI more efficient, yet increasing efficiency would make it less expensive to run. It’s the never-ending cycle of cutting corners only to eventually make less money than you would have if you did things the right way.

        • Snot Flickerman@lemmy.blahaj.zone
          link
          fedilink
          English
          arrow-up
          23
          ·
          edit-2
          1 day ago

          Absolutely. It’s maddening that I’ve had to go from “maybe we should make society better somewhat” in my twenties to “if we’re gonna do capitalism, can we do it how it actually works instead of doing it stupid?” in my forties.

    • NoiseColor @lemmy.world
      link
      fedilink
      English
      arrow-up
      1
      arrow-down
      20
      ·
      1 day ago

      I don’t think any designer does work without heavily relying on AI. I bet that’s not the only profession.

  • ABetterTomorrow@lemm.ee
    link
    fedilink
    English
    arrow-up
    8
    ·
    19 hours ago

    Current big tech is going to keep pushing limits, have social media influencers/YouTubers do the marketing, and let their consumers pick up the R&D bill. Emotionally I want to say “stop innovating,” but really, just cut your speed by 75%. We are going to witness an era of optimization and efficiency. Most users just need a Pi 5 16GB, an Intel NUC, or an Apple Air base model. Those are easy 7-10 year computers. No need to rush out and get the latest and greatest. I’m talking about everything in computing in general. Case in point, gaming: more people are waking up and realizing they don’t need every new GPU, studios are burnt out, IPs are dying because there’s no core base left to keep the franchises afloat, and consumers can’t keep opening their wallets. Hence studios like Square Enix are starting to support all platforms instead of doing the late-stage-capitalism thing of launching their own launcher with a store. It’s over.

  • Ledericas@lemm.ee
    link
    fedilink
    English
    arrow-up
    13
    ·
    22 hours ago

    It’s because customers don’t want it or care for it; it’s only the corporations themselves that are obsessed with it.

  • LostXOR@fedia.io
    link
    fedilink
    arrow-up
    29
    arrow-down
    2
    ·
    1 day ago

    I liked generative AI more when it was just a funny novelty and not being advertised to everyone under the false pretenses of being smart and useful. Its architecture is incompatible with actual intelligence, and anyone who thinks otherwise is just fooling themselves. (It does make an alright autocomplete though).

    • devfuuu@lemmy.world
      link
      fedilink
      English
      arrow-up
      6
      ·
      20 hours ago

      Like all the previous scam bubbles that were kinda interesting or fun as a novelty, and then became absolute chaos and maddening once money came pouring in.

    • Sheridan@lemmy.world
      link
      fedilink
      English
      arrow-up
      11
      ·
      1 day ago

      The peak of AI for me was generating images of Muppet versions of the Breaking Bad cast; it’s been downhill since.

      • morgunkorn@discuss.tchncs.de
        link
        fedilink
        English
        arrow-up
        25
        ·
        1 day ago

        trust me bro, we’re almost there, we just need another data center and a few billions, it’s coming i promise, we are testing incredible things internally, can’t wait to show you!

          • LostXOR@fedia.io
            link
            fedilink
            arrow-up
            2
            arrow-down
            2
            ·
            14 hours ago

            Around a year ago I bet a friend $100 we won’t have AGI by 2029, and I’d do the same today. LLMs are nothing more than fancy predictive text and are incapable of thinking or reasoning. We burn through immense amounts of compute and terabytes of data to train them, then stick them together in a convoluted mess, only to end up with something that’s still dumber than the average human. In comparison, humans are “trained” with maybe ten thousand “tokens” and ten megajoules of energy a day for a decade or two, and take only a couple dozen watts for even the most complex thinking.

            • pixxelkick@lemmy.world
              link
              fedilink
              English
              arrow-up
              3
              ·
              14 hours ago

              Humans are “trained” with maybe ten thousand “tokens” per day

              Uhhh… you may wanna rerun those numbers.

              It’s waaaaaaaay more than that lol.

              and take only a couple dozen watts for even the most complex thinking

              Mate’s literally got smoke coming out of his ears lol.

              A single Wh is 860 calories…

              I think you either have no idea wtf you are talking about, or you just made up a bunch of extremely wrong numbers to try and look smart.

              1. Humans will encounter hundreds of thousands of tokens per day, ramping up to millions in school.

              2. A human, by my estimate, has burned about 13,000 Wh by the time they reach adulthood. Maybe more depending on activity levels.

              3. While yes, an AI costs substantially more Wh, it’s also done in weeks, so it’s obviously going to be way less energy efficient due to the exponential laws of resistance. If we grew a functional human in like 2 months it’d prolly require way WAY more than 13,000 Wh during the process for similar reasons.

              4. Once trained, a single model can be duplicated infinitely. So it’d be more fair to compare how much millions of people cost to raise, compared to a single model to be trained. Because once trained, you can now make millions of copies of it…

              5. Operating costs are continuing to go down and down and down. Diffusion-based text generation just made another huge leap forward, reporting around a twenty times efficiency increase over traditional GPT-style LLMs. Improvements like this are coming out every month.

              • LostXOR@fedia.io
                link
                fedilink
                arrow-up
                1
                ·
                13 hours ago

                True, my estimate for tokens may have been a bit low. Assuming a 7-hour school day where someone talks at 5 tokens/sec, you’d encounter about 120k tokens. You’re off by 3 orders of magnitude on your energy consumption though; 1 watt-hour is 0.86 food Calories (kcal).
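                For anyone who wants to check the arithmetic, a quick back-of-the-envelope sketch (the 7-hour day and 5 tokens/second speaking rate are the same rough assumptions as above, not measurements):

                ```python
                # ~120k tokens encountered per school day
                seconds_per_school_day = 7 * 3600          # 7 hours of speech
                tokens_per_second = 5
                print(seconds_per_school_day * tokens_per_second)   # 126000

                # 1 Wh expressed in food Calories (kcal)
                joules_per_wh = 3600                       # 1 Wh = 3600 J
                joules_per_kcal = 4184                     # 1 food Calorie = 4184 J
                print(joules_per_wh / joules_per_kcal)     # ~0.86 kcal per Wh
                ```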

  • nectar45@lemmy.zip
    link
    fedilink
    English
    arrow-up
    16
    arrow-down
    2
    ·
    1 day ago

    Imo our current versions of AI are too generalized. We add so much information to make them good at everything that it all mixes together into a single grey hallucinating slop, and the AI ends up being good at nothing.

    We need to find ways to specialize AI and give said AI a more consistent and concrete personality in order to move forward.

    • nectar45@lemmy.zip
      link
      fedilink
      English
      arrow-up
      17
      ·
      1 day ago

      Imo, to make an AI that is truly good at everything, we need multiple AIs, each designed to do something different, all working together (like the human brain works), instead of making every single AI a personality-less sludge that’s a jack of all trades, master of none.

        • pixxelkick@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          ·
          1 day ago

          No, it’s just not something that’s exposed for you to see.

          But under the hood it very much does shift gears depending on what you ask it to do

          It’s why GPT can now do stuff like analyzing the contents of images and basic OCR, but also generate images too.

          Yet it can also do math, talk about biology, give relationship advice…

          I believe OpenAI called them “specialists” or something vaguely like that, at the time.
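          A toy sketch of the router-plus-specialists idea being described in this thread. The specialist names and the keyword routing below are made up purely for illustration; real systems (mixture-of-experts layers, tool/function routing) learn the dispatch rather than hard-coding it:

          ```python
          from typing import Callable, Dict

          # Hypothetical specialists; in a real system these would be separate
          # models or experts, not string-returning stubs.
          def math_specialist(prompt: str) -> str:
              return f"[math model] working on: {prompt}"

          def vision_specialist(prompt: str) -> str:
              return f"[vision model] analyzing: {prompt}"

          def general_specialist(prompt: str) -> str:
              return f"[general model] answering: {prompt}"

          SPECIALISTS: Dict[str, Callable[[str], str]] = {
              "math": math_specialist,
              "vision": vision_specialist,
              "general": general_specialist,
          }

          def route(prompt: str) -> str:
              """Pick a specialist with crude keyword matching (illustrative only)."""
              lowered = prompt.lower()
              if any(w in lowered for w in ("solve", "equation", "integral")):
                  return SPECIALISTS["math"](prompt)
              if any(w in lowered for w in ("image", "photo", "picture")):
                  return SPECIALISTS["vision"](prompt)
              return SPECIALISTS["general"](prompt)

          print(route("Solve this equation for x"))
          print(route("What's in this picture?"))
          ```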