• Tony Bark@pawb.social · 3 months ago

    They’re throwing billions upon billions into a technology that has extremely limited use cases and is, at best, a novelty. My god, even drones fared better in the long run.

    • Snot Flickerman@lemmy.blahaj.zone · 3 months ago

      I mean it’s pretty clear they’re desperate to cut human workers out of the picture so they don’t have to pay employees that need things like emotional support, food, and sleep.

      They want a workslave that never demands better conditions, that’s it. That’s the play. Period.

      • TommySoda@lemmy.world · 3 months ago

        If this is their way of making AI, brute-forcing the technology without innovating, then the infrastructure will probably cost these companies more to maintain than just hiring people would. These AI companies already aren’t making much money relative to what they cost to run, and unless they charge companies millions of dollars just to use their services, they will never turn a profit. And since companies are trying to use AI to replace the millions they spend on employees, it seems kinda pointless if they aren’t willing to prioritize efficiency.

        It’s basically the same argument they have with people. They don’t wanna treat people like actual humans because it costs too much, yet letting them live happy lives makes them more efficient workers. Likewise, they don’t want to spend money to make AI more efficient, yet increasing efficiency would make it less expensive to run. It’s the never-ending cycle of cutting corners only to eventually make less money than you would have if you did things the right way.

        • Snot Flickerman@lemmy.blahaj.zone · 3 months ago

          Absolutely. It’s maddening that I’ve had to go from “maybe we should make society better somewhat” in my twenties to “if we’re gonna do capitalism, can we do it how it actually works instead of doing it stupid?” in my forties.

      • CosmoNova@lemmy.world · 3 months ago

        And the tragedy of the whole situation is that they can’t win, because if every worker is replaced by an algorithm or a robot, then who’s going to buy your products? Nobody has money because nobody has a job. And so the economy will shift to producing war machines that fight each other for territory to build more war machine factories, until you can’t expand anymore for one reason or another. Then the entire system will collapse like the Roman Empire, and we start from scratch.

        • thatKamGuy@sh.itjust.works · 3 months ago

          producing war machines that fight each other for territory to build more war machine factories until you can’t expand anymore for one reason or another.

          As seen in the retro-documentary Z!

        • howrar@lemmy.ca · 3 months ago

          Why would you need anyone to buy your products when you can just enjoy them yourself?

          • CosmoNova@lemmy.world · 3 months ago

            Because there’s always a bigger fish out there to get you. Or that’s what trillionaires will tell themselves when they wage a robotic war. This system isn’t made to last the way it’s progressing right now.

    • NoiseColor @lemmy.world · 3 months ago

      I don’t think any designer does work without heavily relying on AI. And I bet that’s not the only profession.

  • brucethemoose@lemmy.world · 3 months ago

    It’s ironic how conservative the spending actually is.

    Awesome ML papers and ideas come out every week. Low power training/inference optimizations, fundamental changes in the math like bitnet, new attention mechanisms, cool tools to make models more controllable and steerable and grounded. This is all getting funded, right?

    No.

    Universities and such are seeding and putting out all this research, but the big model trainers holding the purse strings/GPU clusters are not using them. They just keep releasing very similar, mostly bog-standard transformer models over and over again, bar a tiny expense for a little experiment here and there. In other words, it’s full corporate: tiny, guaranteed incremental improvements without changing much, and no sharing with each other. It’s hilariously inefficient. And it relies on lies and jawboning from people like Sam Altman.

    Deepseek is what happens when a company is smart but resource-constrained: an order of magnitude more efficient, and even their architecture was very conservative.

    • bearboiblake@pawb.social · 3 months ago

      wait so the people doing the work don’t get paid and the people who get paid steal from others?

      that is just so uncharacteristic of capitalism, what a surprise

      • brucethemoose@lemmy.world · 3 months ago

        It’s also cultish.

        Everyone was trying to ape ChatGPT. Now they’re rushing to ape Deepseek R1, since that’s what is trending on social media.

        It’s very late-stage capitalism, yes, but that doesn’t come close to painting the whole picture. There’s a lot of groupthink, an urgency to “catch up and ship” and look good quick rather than to focus on experimentation, sane applications and such. When I think of shitty capitalism, I think of stagnant entities: shitty publishers, dysfunctional departments, consumer abuse, things like that.

        This sector is trying to innovate and make something efficient, but it’s like the purse holders and researchers have horse blinders on. Like they are completely captured by social media hype and can’t see much past that.

    • silverhand@reddthat.com · 3 months ago

      Good ideas are a dime a dozen. Implementation is the game.

      Universities may churn out great papers, but what matters is how well those papers can be implemented. Private entities win at implementation.

      • brucethemoose@lemmy.world · 3 months ago

        The corporate implementations are mostly crap though. With a few exceptions.

        What’s needed is better “glue” in the middle. Larger entities integrating ideas from a bunch of standalone papers, out in the open, so they actually work together instead of mostly fading out of memory while the big implementations never even know they existed.

  • TommySoda@lemmy.world · 3 months ago

    Technology in most cases progresses along a logarithmic curve when innovation isn’t prioritized, and we’ve basically reached the plateau of what LLMs can currently do without a breakthrough. They could absorb all the information on the internet and still not come close to what they claim it will be. These days we’re in the “bells and whistles” phase, where they add unnecessary bullshit to make it seem new, like adding 5 cameras to a phone or touchscreens to cars: things that make a product seem fancy by slapping on buzzwords and features nobody needs, without actually changing anything but the price.

    • Balder@lemmy.world · 3 months ago

      I remember listening to a podcast about scientific explanations. The guy hosting it is very knowledgeable about the subject, does his research, and talks to experts when the topic involves something he isn’t himself an expert in.

      There was this episode where he kinda got into how technology only evolves with science (because you need to understand the stuff you’re doing, and you need a theory of how it works, before you can make new assumptions and test them). He gave the example of the Apple Vision Pro: despite the machine being new (the hardware capabilities, at least), the eye-tracking algorithm it uses was developed decades ago and was already well understood and proven correct in other applications.

      So his point in the episode is that real innovation just can’t be rushed by throwing money or more people at a problem. Because real innovation takes real scientists having novel insights and experiments to expand the knowledge we have. Sometimes those insights are completely random, often you need to have a whole career in that field and sometimes it takes a new genius to revolutionize it (think Newton and Einstein).

      Even the current wave of LLMs is simply a product of Google’s 2017 transformer paper, which showed we could parallelize language models, leading to the creation of “large language models”. That was Google doing science. But you can’t control when some new breakthrough is discovered, and LLMs are subject to this constraint.
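
      To make the “parallelize” point concrete, here’s a minimal sketch of the scaled dot-product attention at the heart of that paper (“Attention Is All You Need”, 2017). The sizes are made up and numpy is just for illustration: the whole sequence is handled by a few matrix multiplications at once, instead of token by token as in an RNN.

      ```python
      import numpy as np

      # Minimal scaled dot-product attention (illustrative sizes).
      rng = np.random.default_rng(0)
      seq_len, d = 6, 16
      Q = rng.normal(size=(seq_len, d))  # queries for all positions at once
      K = rng.normal(size=(seq_len, d))  # keys
      V = rng.normal(size=(seq_len, d))  # values

      scores = Q @ K.T / np.sqrt(d)                 # all pairwise scores in one matmul
      scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
      weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
      out = weights @ V                             # every position attends in parallel
      print(out.shape)  # (6, 16)
      ```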

      In fact, the only practice we know that actually accelerates science is the collaboration of scientists around the world, the publishing of reproducible papers so that others can expand upon and have insights you didn’t even think about, and so on.

  • LostXOR@fedia.io · 3 months ago

    I liked generative AI more when it was just a funny novelty and not being advertised to everyone under the false pretenses of being smart and useful. Its architecture is incompatible with actual intelligence, and anyone who thinks otherwise is just fooling themselves. (It does make an alright autocomplete though).

    • Sheridan@lemmy.world · 3 months ago

      The peak of AI for me was generating images of Muppet versions of the Breaking Bad cast; it’s been downhill since.

    • devfuuu@lemmy.world · 3 months ago

      Like all the previous scam bubbles: kinda interesting or fun as a novelty, but once money came pouring in it became absolute chaos and maddening.

    • howrar@lemmy.ca · 3 months ago

      It peaked when it was good enough to generate short somewhat coherent phrases. We’d make it generate ideas for silly things and laugh at how ridiculous the results were.

    • torrentialgrain@lemm.ee · 3 months ago

      AGI models will enter the market in under 5 years according to experts and scientists.

      • morgunkorn@discuss.tchncs.de · 3 months ago

        trust me bro, we’re almost there, we just need another data center and a few billions, it’s coming i promise, we are testing incredible things internally, can’t wait to show you!

          • LostXOR@fedia.io · 3 months ago

            Around a year ago I bet a friend $100 we won’t have AGI by 2029, and I’d do the same today. LLMs are nothing more than fancy predictive text and are incapable of thinking or reasoning. We burn through immense amounts of compute and terabytes of data to train them, then stick them together in a convoluted mess, only to end up with something that’s still dumber than the average human. In comparison humans are “trained” with maybe ten thousand “tokens” and ten megajoules of energy a day for a decade or two, and take only a couple dozen watts for even the most complex thinking.

            • pixxelkick@lemmy.world · 3 months ago

              Humans are “trained” with maybe ten thousand “tokens” per day

              Uhhh… you may wanna rerun those numbers.

              It’s waaaaaaaay more than that lol.

              and take only a couple dozen watts for even the most complex thinking

              Mate’s literally got smoke coming out of his ears lol.

              A single Wh is 860 calories…

              I think you either have no idea wtf you are talking about, or you just made up a bunch of extremely wrong numbers to try and look smart.

              1. Humans will encounter hundreds of thousands of tokens per day, ramping up to millions in school.

              2. A human, by my estimate, has burned about 13,000 Wh by the time they reach adulthood. Maybe more depending on activity levels.

              3. While yes, an AI costs substantially more Wh, it also is done in weeks, so it’s obviously going to be way less energy efficient due to the exponential laws of resistance. If we grew a functional human in like 2 months it’d prolly require way WAY more than 13,000 Wh during the process, for similar reasons.

              4. Once trained, a single model can be duplicated infinitely. So it’d be fairer to compare what millions of people cost to raise against what a single model costs to train. Because once trained, you can make millions of copies of it…

              5. Operating costs keep going down and down. Diffusion-based text generation just made another huge leap forward, reporting around a twenty-times efficiency increase over traditional GPT-style LLMs. Improvements like this are coming out every month.

              • LostXOR@fedia.io · 3 months ago

                True, my estimate for tokens may have been a bit low. Assuming a 7 hour school day where someone talks at 5 tokens/sec you’d encounter about 120k tokens. You’re off by 3 orders of magnitude on your energy consumption though; 1 watt-hour is 0.86 food Calories (kcal).
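
                For anyone who wants to check the arithmetic, a quick sketch (the rates and durations here are the assumptions from this exchange, not measured data):

                ```python
                # Tokens heard in a 7-hour school day at ~5 tokens/second:
                tokens_per_day = 7 * 3600 * 5
                print(tokens_per_day)  # 126000, i.e. roughly the ~120k figure above

                # Unit conversion: 1 Wh = 3600 J and 1 food Calorie (kcal) = 4184 J,
                # so 1 Wh is ~0.86 kcal. "860 calories" is small calories: off by 1000x.
                print(3600 / 4184)  # ~0.86
                ```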

  • vane@lemmy.world · 3 months ago

    The problem is that those companies are monopolies and can raise prices indefinitely to pursue this shitty dream, because they have governments in their pockets. Governments are dependent on cloud/Microsoft software, literally every country on this planet, except maybe China, North Korea, and Russia. They could raise prices 10 times over the next 10 years and not give a fuck. Spend 1 trillion on AI, say “we’re near” over and over again, and literally nobody can stop them right now.

      • vane@lemmy.world · 3 months ago

        How many governments were using computers back when IBM controlled the hardware, and how many relied on paper and calculators? The problem is that governments are dependent on companies right now, not companies on governments.

        Imagine Apple, Google, Amazon, and Microsoft decide to leave the EU on Monday. They say they’re banning all European citizens from all of their services, closing all of their offices, and deleting the data from all of their datacenters. Good Fucking Luck!

        What would happen in Europe on that Monday? Compare it with what would have happened if IBM had said, 50 years ago, that they were leaving Europe.

  • nectar45@lemmy.zip · 3 months ago

    Imo our current versions of AI are too generalized. We add so much information to make them good at everything that it all mixes together into a single grey hallucinating slop, and the AI ends up being good at nothing.

    We need to find ways to specialize AI, and to give it a more consistent and concrete personality, to move forward.

    • nectar45@lemmy.zip · 3 months ago

      Imo, to make an AI that is truly good at everything, we need multiple AIs all designed to do something different, all working together (like the human brain does), instead of making every single AI a personality-less, jack-of-all-trades-master-of-none sludge.

        • pixxelkick@lemmy.world · 3 months ago

          No, it’s just not something that’s exposed for you to see.

          But under the hood it very much does shift gears depending on what you ask it to do.

          It’s why GPT can now do stuff like analyzing the contents of images and basic OCR, but also generating images too.

          Yet it can also do math, talk about biology, give relationship advice…

          I believe OpenAI called them “specialists” or something vaguely like that, at the time.
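
          What’s being described sounds like a mixture-of-experts setup (whether that’s exactly what OpenAI runs internally is my speculation). A toy sketch of the routing idea, with made-up sizes and plain linear maps standing in for the experts:

          ```python
          import numpy as np

          # Toy mixture-of-experts routing: a small gate scores the experts,
          # the top-k run, and their outputs are blended by gate weight.
          rng = np.random.default_rng(0)
          d, n_experts, k = 8, 4, 2
          experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]
          gate = rng.normal(size=(n_experts, d))

          def moe_forward(x):
              scores = np.exp(gate @ x)
              scores /= scores.sum()               # softmax over expert relevance
              top = np.argsort(scores)[-k:]        # route to the k best experts
              w = scores[top] / scores[top].sum()  # renormalize their weights
              return sum(wi * (experts[i] @ x) for i, wi in zip(top, w))

          print(moe_forward(rng.normal(size=d)).shape)  # (8,)
          ```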

  • Ledericas@lemm.ee · 3 months ago

    It’s because customers don’t want it or care for it; it’s only the corporations themselves that are obsessed with it.

  • mrmanager@lemmy.today · 3 months ago

    It doesn’t matter if they reach any end result, as long as stocks go up and profits go up.

    Consumers aren’t really asking for AI, but it’s being used to push new hardware and make previous hardware feel old. Eventually everyone will have AI on their phone, most of it unused.

    • Excrubulent@slrpnk.net · 3 months ago

      If enough researchers talk about the problems then that will eventually break through the bubble and investors will pull out.

      We’re at the stage of the new technology hype cycle where it crashes, essentially for this reason. I really hope it does soon because then they’ll stop trying to force it down our throats in every service we use.

  • ABetterTomorrow@lemm.ee · 3 months ago

    Current big tech is going to keep pushing limits, have social media influencers/YouTubers do their marketing, and stick their consumers with the R&D bill. Emotionally I want to say stop innovating, but really: cut your speed by 75%. We are going to witness an era of optimization and efficiency. Most users just need a Pi 5 16GB, an Intel NUC, or an Apple Air base model. Those are easy 7-10 year computers. No need to rush and get the latest and greatest. I’m talking about everything in computing in general. Case in point, gaming: more people are waking up and realizing they don’t need every new GPU, studios are burnt out, IPs are dying because there’s no lingering core base to keep the franchise afloat, and consumers can’t keep opening their wallets. Hence studios like Square Enix are going to start supporting all platforms instead of pulling the late-stage-capitalism move of doing their own launcher with a store. It’s over.

  • pixxelkick@lemmy.world · 3 months ago

    Meanwhile a huge chunk of the software industry is now heavily using this “dead end” technology 👀

    I work in a pretty massive tech company (think, the type that frequently acquires other smaller ones and absorbs them)

    Everyone I know here is using it. A lot.

    However, my company also has tonnes of dedicated sessions and paid time to instruct its employees on how to use it well, how to get good value out of it, and what pitfalls it can have.

    So yeah turns out if you teach your employees how to use a tool, they start using it.

    I’d say LLMs have made me about 3x as efficient or so at my job.

    • Snot Flickerman@lemmy.blahaj.zone · 3 months ago

      Your labor before they had LLMs helped pay for the LLMs. If you’re 3x more efficient and not also getting 3x more time off for the labor you put in previously for your bosses to afford the LLMs you got ripped off my dude.

      If you’re working the same amount and not getting more time to cool your heels, maybe, just maybe, your own labor was exploited and used against you. Hyping how much harder you can work just makes you sound like a bitch.

      Real “tread on me harder, daddy!” vibes all throughout this thread. Meanwhile your CEO is buying another yacht.

      • pixxelkick@lemmy.world · 3 months ago

        I am indeed getting more time off for PD (professional development).

        We delivered a project 2 weeks ahead of schedule, so we were given raises, I got a promotion, and we were given 2 weeks to just do some chill PD at our own discretion as a reward. All paid, on the clock.

        Some companies are indeed pretty cool about it.

        I was asked to give some demos and do some chats with folks to spread info on how we had such success, and they were pretty fond of my methodology.

        At its core delivering faster does translate to getting bigger bonuses and kickbacks at my company, so yeah there’s actual financial incentive for me to perform way better.

        You’re also ignoring the stress thing. If I can work 3x better, I can also just deliver in almost the same time but spend all that freed-up time focusing on quality: polishing the product up, documentation, double-checking my work, testing, etc.

        Instead of scraping past the deadline by the skin of our teeth, we hit the deadline with a week or 2 to spare and spent a buncha extra time going over everything with a fine-tooth comb twice to make sure we didn’t miss anything.

        And instead of mad rushing 8 hours straight, it’s just generally more casual. I can take it slower and do the same work but just in a less stressed out way. So I’m literally just physically working less hard, I feel happier, and overall my mood is way better, and I have way more energy.

        • Lemminary@lemmy.world · 3 months ago

          That sounds so cool! I’m glad you’re getting the benefits.

          I’m only wary that the cash-making machine will start tightening the ropes on the free time and the deadlines.

        • Rimu@piefed.social · 3 months ago

          That’s very cool.

          It’ll be interesting to see how it goes in a year’s time, maybe they’ll have raised their expectations and tightened the deadlines by then.

          • pixxelkick@lemmy.world · 3 months ago

            The thing is, the tech keeps advancing too, so even if they tighten up deadlines, by the time they do, our productivity has taken another gearshift up, so we’re still some degree ahead.

            This isn’t new; in software we have always been getting new tools to do our jobs better and faster, or to produce fancier results in the same time.

            This is just another tool in the toolbelt.

        • gamer@lemm.ee · 3 months ago

          Are you a software engineer? Without doxxing yourself, do you think you could share some more info or guidance? I’ve personally been trying to integrate AI code gen into my own work, but haven’t had much success.

          I’ve been able to ask ChatGPT to generate some simple but tedious code that would normally require me read through a bunch of documentation. Usually, that’s a third party library or a part of the standard library I’m not familiar with. My work is mostly Python and C++, and I’ve found that ChatGPT is terrible at C++ and more often than not generates code that doesn’t even compile. It is very good at generating Python by comparison, but unfortunately for me, that’s only like 10% of my work.

          For C++, I’ve found it helpful to ask misc questions about the design of the STL or new language features while I’m studying them myself. It’s not actually generating any code, but it definitely saves me some time. It’s very useful for translating C++’s “standardese” into English, for example. It still struggles to generate valid code using C++20 or newer though.

          I also tried a few local models on my GPU, but haven’t had good results. I assume it’s a problem with the models I used not being optimized for code, or maybe the inference tools I tried weren’t using them right (oobabooga, kobold, and some others I don’t remember). If you have any recommendations for good coding models I can run locally on a 4090, I’d love to hear them!

          I tried using a few of those AI code editors (mostly VS Code plugins) years ago, and they really sucked. I’m sure things have improved since then, so maybe that’s the way to go?

          • pixxelkick@lemmy.world · 3 months ago

            I primarily use GPT style tools like ChatGPT and whatnot.

            The key is, rather than asking it to generate code, specify that you don’t want code and instead want it to help you work through the solution. Tell it to ask you meaningful questions about your problem and effectively act as a rubber duck.

            Then, after you’ve chosen a solution with it, ask it to generate code based on all the above convo.

            This will typically produce way higher quality results and helps avoid potential X/Y problems.
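
            As a rough sketch of that two-phase workflow in script form, using the OpenAI Python client (the model name and prompt wording here are just illustrative assumptions, not a prescribed setup):

            ```python
            from openai import OpenAI

            client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

            history = [
                {"role": "system", "content": (
                    "Act as a rubber duck. Do NOT write code yet. Ask me meaningful "
                    "questions and help me reason through the solution."
                )},
                {"role": "user", "content": "I need to deduplicate events that arrive out of order."},
            ]

            # Phase 1: design discussion only (in practice, several back-and-forth turns).
            reply = client.chat.completions.create(model="gpt-4o", messages=history)
            history.append({"role": "assistant", "content": reply.choices[0].message.content})

            # Phase 2: only once a solution is chosen, ask for code based on the whole convo.
            history.append({"role": "user", "content": "Now generate code for the approach we settled on."})
            code = client.chat.completions.create(model="gpt-4o", messages=history)
            print(code.choices[0].message.content)
            ```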

      • LuigiDidNothingWrong87@lemmy.world · 3 months ago

        This is how all tech innovation has gone. If you don’t let the bosses exploit your labour someone else will.

        If tech had unions this wouldn’t happen as much, but that’s why they don’t really exist.

    • andallthat@lemmy.world · 3 months ago

      It’s not that LLMs aren’t useful as they are. The problem is that they won’t stay as they are today, because they are too expensive. There are two ways for this to go (or an eventual combination of both):

      • Investors believe LLMs are going to get better, so they keep pouring money into “AI” companies, allowing them to operate at a loss for longer. That’s tied to the promise of an actual “intelligence” emerging out of a statistical model.

      • Investments stop pouring in, the bubble bursts, and companies need to make money out of LLMs in their current state. To do that, they need to massively cut costs and monetize. I believe that’s called enshittification.

      • pixxelkick@lemmy.world · 3 months ago

        You skipped possibility 3, which is actively happening:

        Advancements in tech enable us to produce results at a much, much cheaper cost.

        Which is happening with diffusion-style LLMs that simultaneously cost less to train and less to run, while also producing faster and better-quality outputs.

        That’s a big part people forget about AI: it’s a feedback loop of improvement as soon as you can start using AI to develop AI

        And we are past that mark now, most developers have easy access to AI as a tool to improve their performance, and AI is made by… software developers

        So you get this loop where as we make better and better AIs, we get better and better at making AIs with the AIs…

        It’s incredibly likely the new diffusion AI systems were built with AI assisting in the process, enabling them to make a whole new tech innovation much faster and easier.

        We are now in the uptick of the singularity, and have been for about a year now.

        Same goes for hardware: it’s very likely now that Nvidia has AI incorporated into their production process, using it for micro-optimizations in its architectures and designs.

        And then those same optimized gpus turn around and get used to train and run even better AIs…

        In 5-10 years we will look back on 2024 as the start of a very wild ride.

        Remember we are just now in the “computers that take up entire warehouses” step of the tech.

        Remember that in the 80s, a “computer” cost a fortune, took tonnes of resources, multiple people to run it, took up an entire room, was slow as hell, and could only do basic stuff.

        But now, 40 years later, they fit in our pockets and are (non-hyperbole) billions of times faster.

        I think by 2035 we will be looking at AI as something mass-produced for consumers to just put in their homes; you’ll go to Best Buy and compare different AI boxes to pick which one you are gonna get for your home.

        We are still at the stage of people in the 80s looking at computers and pondering “why would someone even need to use this, why would someone put one in their house, let alone their pocket”

        • andallthat@lemmy.world · 3 months ago

          I want to believe that commoditization of AI will happen as you describe, with AI made by devs for devs. So far what I see is “developer productivity is now up and 1 dev can do the work of 3? Good, fire 2 devs out of 3. Or you know what? Make it 5 out of 6, because the remaining ones should get used to working 60 hours/week.”

          All that increased dev capacity needs to translate into new useful products. Right now the “new useful product” that all energies are poured into is… AI itself. Or even worse, shoehorning “AI-powered” features in all existing product, whether it makes sense or not (welcome, AI features in MS Notepad!). Once this masturbatory stage is over and the dust settles, I’m pretty confident that something new and useful will remain but for now the level of hype is tremendous!

          • pixxelkick@lemmy.world · 3 months ago

            Good, fire 2 devs out of 3.

            Companies that do this will fail.

            Successful companies respond to this by hiring more developers.

            Consider the taxi cab driver:

            With the invention of the automobile, cab drivers could do their job way faster and way cheaper.

            Did companies fire drivers in response? God no. They hired more

            Why?

            Because rides became more affordable, less wealthy clients could now afford their services, which means demand went way, way up.

            If you can do your work for half the cost, demand usually goes up by way more than 2x, because as you go down the wealth levels of your target demographics, your pool of clients grows exponentially.

            If I go from “it costs me 100k to make you a website” to “it costs me 50k to make you a website” my pool of possible clients more than doubles

            Which means… you need to hire more devs asap to start matching this newfound level of demand

            If you fire devs when your demand is about to skyrocket, you fucked up bad lol
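
            To put toy numbers on that: if client budgets are heavy-tailed (a Pareto-style assumption on my part, purely for illustration), the share of clients who can afford a price p scales like (p_min / p)^alpha, so halving the price more than doubles the pool whenever alpha > 1.

            ```python
            # Toy model: share of clients able to afford a given price, assuming
            # budgets follow a Pareto-style distribution. alpha and prices are made up.
            alpha = 1.5

            def affordable_share(price, p_min=10_000):
                return (p_min / price) ** alpha

            before = affordable_share(100_000)  # "it costs me 100k to make you a website"
            after = affordable_share(50_000)    # "it costs me 50k"
            print(after / before)  # ~2.83: halving the price nearly triples the client pool
            ```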

      • pixxelkick@lemmy.world · 3 months ago

        For sure, much like how a cab driver has to know how to drive a cab.

        AI is absolutely a “garbage in, garbage out” tool. Just having it doesn’t automatically make you good at your job.

        The difference between someone who can wield it well and someone who has no idea what they are doing is palpable.

  • silverhand@reddthat.com · 3 months ago

    Misleading title. From the article,

    Asked whether “scaling up” current AI approaches could lead to achieving artificial general intelligence (AGI), or a general purpose AI that matches or surpasses human cognition, an overwhelming 76 percent of respondents said it was “unlikely” or “very unlikely” to succeed.

    In no way does this imply that the “industry is pouring billions into a dead end”. AGI isn’t even needed for industry applications; just implementing current-level agentic systems will be more than enough to have a massive industrial impact.