• oakey66@lemmy.world · 2 months ago

    My consulting company is literally talking about nothing else. It’s fucking awful.

    • Victor@lemmy.world · 2 months ago

      Mine also mentioned it at the last company retreat: how important it is to look into AI tools so we don’t get “left behind”. Old geezers who don’t code anymore deciding that this is something we want to work with.

      I’m fine with using AI as some sort of dynamic snippets tool where I myself know what I want the code to look like in the end, and where you don’t have to predefine those snippets. But not to write entire apps for me.

      I don’t even use regular dumb snippets. It’s so easy to write code without them; why should I dumb myself down?

      • oakey66@lemmy.world · 2 months ago

        I’m in IT consulting. I have personally done some really cool shit for my clients, things they didn’t have the talent to do themselves. Business management consulting and tax audit consulting are a completely different story. I don’t help automate away jobs. I’m not presenting decks to strip companies and governments for parts. Needless to say, not all consulting is created equal, and my hope is that this bubble bursts and the push for AI dies on the vine.

          • oakey66@lemmy.world · 2 months ago

            We help them build solutions that they then maintain and own. I’m in analytics. So we’re doing data engineering, security, and delivery.

            • oakey66@lemmy.world · 2 months ago

              Just to clarify the difference: with managed solutions, the consulting firm typically handles ongoing maintenance. In some cases it takes over an existing solution rather than building something new.

    • pixxelkick@lemmy.world · 2 months ago

      Same, but they did set up a self-hosted instance for us to use and, tbh, it works pretty well.

      I think it’s a good tool specifically for when you dunno what’s going on: brainstorming, exploring different solutions, getting recommended names of tools, finding out “how do other people solve this”, generating documentation, etc.

      But for very straightforward tasks where you already know what you are doing, it’s not helpful, you already know what code you are going to write anyways.

      Right tool for the right job.

      • shortrounddev@lemmy.world · 2 months ago

        I use it as a form of Google, basically. I ask it coding questions a lot, some of which are a bit more philosophical. I never allow it to write code for me, though. Sometimes I’ll have it check my work.

  • FearfulSalad@ttrpg.network · 2 months ago

    Preface: I have a lot of AI skepticism.

    My company is using Cursor and Windsurf, focusing on agent mode (and whatever Windsurf’s equivalent is called). It hallucinates real hard on any open-ended task, but when you have ALL of:

    • an app with good preexisting test coverage
    • the ability to run relevant tests quickly (who has time to run an 18 hour CI suite locally for a 1 line change?)
    • a well thought out product use case with edge cases

    Then you can tell the agent to write test cases before writing code, and run all relevant tests when making any code changes. What it produces is often fine, but rarely great. If you get clever with setting up rules (that tell it to do all of the above), you can sometimes just drop in a product requirement and have it implement, making only minor recommendations. It’s as if you are pair programming with an idiot savant, emphasis on idiot.
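    For context, the “rules” mentioned here can be as simple as a plain-text instructions file the agent reads on every request (Cursor, for example, picks up a `.cursorrules` file at the repo root). The wording below is a hypothetical sketch of the workflow described above, not a tested config:

```
# .cursorrules (hypothetical sketch)
- Before writing implementation code, write failing test cases that
  cover the product requirement, including the listed edge cases.
- After any code change, run the tests relevant to the changed module
  and report the results before continuing.
- Prefer small, minor recommendations over large refactors.
```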

    But whose app is well covered with tests? (Admittedly, AI can help speed up the boilerplating necessary to backfill test cases, so long as someone knows how the app is supposed to work). Whose app is well-modularized such that it’s easy to select only downstream affected tests for any given code change? (If you know what the modules should be, AI can help… But it’s pretty bad at figuring that out itself). And who writes well thought out product use cases nowadays?

    If we were still in the olde waterfall era, with requirements written by business analysts, then maybe this could unlock the fabled 100x gains per developer. Or 10x gains. Or 1.1x gains, most likely.

    But nowadays it’s more common for AI to write the use cases, hallucinate edge cases that aren’t real, and when coupled with the above, patchwork together an app that no one fully understands, and that only sometimes works.

    Edit: if all of that sounds like TDD, which on its own gives devs a speed boost when they actually use it consistently, and you wonder if CEOs will claim that the boosts are attributable to AI when their devs finally start to TDD like they have been told to for decades now, well, I wonder the same thing.

    • NuXCOM_90Percent@lemmy.zip · 2 months ago

      The thing to understand is that it is not about improving developer efficiency. It is about improving corporate profits.

      Because that engineer using “AI”? If they are doing work that can be reliably generated by an AI, then they aren’t a particularly “valuable” coder and, most likely, have some severe title inflation. The person optimizing the DB queries? They are awesome. The person writing out utility functions or integrating a library? Much easier to replace. And, regardless, you are going to need code review, which invariably boils down to a select few who can actually be trusted to think through the implications of an implementation and check that the test coverage was useful.

      End result? A team of ten becomes a team of four. The workload for the team leader goes up as they have to do more code review themselves but that ain’t Management’s problem. And that team now has saved the company closer to a million a year than not. The question isn’t “Why would we use AI if it is only 0.9x as effective as a human being?” and instead “Why are we paying a human being a six figure salary when an AI is 90% as good and we pay once for the entire company?”

      And if people key in on “Well how do you find the people who can be trusted to review the code or make the tickets?”: Congrats. You have thought about this more than most Managers.

      My company hasn’t mandated the use of AI tools yet but it is “strongly encouraged” and we have a few evangelists who can’t stop talking about how “AI” makes them two or three times as fast and blah blah blah. And… I’ve outright side channeled some of the more early career staff that I like and explained why they need to be very careful about saying that “AI” is better at their jobs than they are.

      And I personally make it very clear that these tools are pretty nice for the boilerplate code I dislike writing (mostly unit tests), but that they just aren’t at the point where they can handle the optimizations and design work that I bring to the table. Because stuff is gonna get REALLY bad REALLY fast as the recession/depression speeds up, and I want to make it clear that I am more useful than a “vibe coder” who studied prompt engineering.

      • taladar@sh.itjust.works · 2 months ago

        “Why are we paying a human being a six figure salary when an AI is 90% as good and we pay once for the entire company?”

        And if it actually was 90% as good that would be a valid question, in reality however it is more like 9% as good with occasional downwards spikes towards 0.9%.

  • Curious Canid@lemmy.ca · 2 months ago

    An LLM does not write code. It cobbles together bits and pieces of existing code. Some developers do that too, but the decent ones look at existing code to learn new principles and then apply them. An LLM can’t do that. If human developers have not already written code that solves your problem, an LLM cannot solve your problem.

    The difference between a weak developer and an LLM is that the LLM can plagiarize from a much larger code base and do it much more quickly.

    A lot of coding really is just rehashing existing solutions. LLMs could be useful for that, but a lot of what you get is going to contain errors. Worse yet, LLMs tend to “learn” how to cheat at their tasks. The code they generate often has a lot of exception handling built in to hide the failures. That makes testing and debugging more difficult and time-consuming. And it gets really dangerous if you also rely on an LLM to generate your tests.
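    A minimal sketch of the failure-hiding pattern described above (function names hypothetical): a broad `except` converts bad input into a plausible default, so the bug never surfaces where it happened.

```python
# Hypothetical example of defensive code an LLM often emits: a broad
# except silently converts failures into a plausible-looking value.

def parse_price_generated(raw: str) -> float:
    """LLM-style version: hides malformed input behind a default."""
    try:
        return float(raw.strip().lstrip("$"))
    except Exception:
        return 0.0  # the error is swallowed; tests see a "valid" price

def parse_price_strict(raw: str) -> float:
    """Same logic, but malformed input raises where the bug actually is."""
    return float(raw.strip().lstrip("$"))

# Good input behaves identically in both versions:
assert parse_price_generated("$12.50") == 12.5
assert parse_price_strict("$12.50") == 12.5

# Bad input is the difference: one hides the failure, one surfaces it.
assert parse_price_generated("garbage") == 0.0  # bug hidden
try:
    parse_price_strict("garbage")
except ValueError:
    pass  # bug surfaced, so a test suite can catch it
```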

    The software industry has already evolved to favor speed over quality. LLM-generated code may be the next logical step. That does not make it a good one. Buggy software in areas such as banking and finance can destroy lives. Buggy software in medical applications can kill people. It would be good if we could avoid that.

    • demizerone@lemmy.world · 2 months ago

      I am at a company that is forcing devs to use AI tooling. So far, it saves a lot of time on an already well-defined project, including documentation. I have not used it to generate tests or to build a greenfield project. Those are coming, though, as we have been told by management that all future projects should include AI components in some way. The Kool-Aid has been consumed deeply.

      • AA5B@lemmy.world · 2 months ago

        I think of AI more as an enhanced autocomplete. Instead of autocompleting function calls, it can autocomplete entire lines.

        Unit tests are fairly repetitive, so it does a decent job of autocompleting those, needing only minor corrections.

        I’m still up in the air on regexes. It does generate something, but I’m not sure it adds value.

        I haven’t had much success with the results of generating larger sections of code.
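        The “repetitive unit tests” case above is where completion tools tend to do best: the shape of the test is fixed and only the data varies, so each new case is one predictable line. A minimal table-driven sketch (function and cases hypothetical):

```python
# Hypothetical example: a table-driven test where adding coverage means
# appending one tuple -- exactly the repetition autocomplete handles well.

def slugify(title: str) -> str:
    """Toy function under test: make a URL-friendly slug."""
    return title.strip().lower().replace(" ", "-")

CASES = [
    ("Hello World", "hello-world"),
    ("  Leading space", "leading-space"),
    ("already-slugged", "already-slugged"),
]

def test_slugify():
    for raw, expected in CASES:
        assert slugify(raw) == expected, (raw, expected)

test_slugify()
```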

  • SocialMediaRefugee@lemmy.world · 2 months ago

    I can’t even get it to help with configurations most of the time. It gives commands that don’t exist in that OS version, etc.

  • Lovable Sidekick@lemmy.world · 2 months ago

    Not in the workforce anymore, but these accounts remind me of other methodologies that were foisted on me and my coworkers over the span of my software career. A couple I remember by name were Agile and Yourdon Structured Design, but there were a bunch more.

    In the old days somebody in management would attend a seminar or get a sales presentation or something and come back with a new “methodology” we were supposed to use. It typically took the form of a stack of binders full of documentation, and was always going to make our productivity “skyrocket”. We would either follow some rigorous process or just go through the motions, or something in between, and in say 6 months to a year the manager would have either left the company or forgotten all about it.

    It sounds like today’s managers are cut from about the same mold as always, and AI is yet another shiny object being dangled in front of them.