    • taladar@sh.itjust.works · 17 days ago

      Creating issues is free for a large number of people you don’t really control. Whether that’s the general public or customers who have access to your issue tracker and love AI doesn’t really matter; if anything, dealing with the public is easier, since you can just ban members of the public who misbehave.

  • MagicShel@lemmy.zip · 17 days ago

    The place I work is actively developing an internal version of this. We already have optional AI PR reviews (they neither approve nor reject, just offer an opinion). As a reviewer, the AI is treated like any other: it offers an opinion, and you can judge for yourself whether its points need to be addressed. I’ll be interested to see whether its comments end up influencing the tech lead’s comments.
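    The internal system described above isn’t public, so the following is only a rough sketch of the “comment but never approve or reject” idea, assuming GitHub and the PyGithub client; ask_model(), the token variable, and the repo/PR arguments are stand-ins, not anyone’s actual setup:

    ```python
    # Hedged sketch: post an advisory AI review that never blocks a PR.
    import os
    from github import Github


    def ask_model(diff: str) -> str:
        """Stand-in for the real LLM call; returns free-form review text."""
        raise NotImplementedError


    def post_advisory_review(repo_name: str, pr_number: int) -> None:
        gh = Github(os.environ["GITHUB_TOKEN"])
        pr = gh.get_repo(repo_name).get_pull(pr_number)
        # Concatenate the per-file patches so the model sees the whole diff.
        diff = "\n".join(f.patch or "" for f in pr.get_files())
        # event="COMMENT" leaves the approval state untouched: the bot
        # offers an opinion, it neither approves nor requests changes.
        pr.create_review(body=ask_model(diff), event="COMMENT")
    ```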

    I’ve seen a preview of a system that detects problems like a failing Sonar analysis and can offer a PR to fix them. I suppose for simple enough fixes, like removing unused imports or unused code, it might be fine. The PR gets static analysis and review like any other, so it’s not going to merge any defects without getting past a human reviewer.
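    For a sense of how mechanical the “remove unused imports” class of fix is, here is a rough sketch in Python (not the tool described above; real tools like autoflake handle far more edge cases):

    ```python
    # Sketch only: drop top-level import statements whose bound names are
    # never referenced elsewhere in the module.
    import ast


    def remove_unused_imports(source: str) -> str:
        tree = ast.parse(source)
        # Every bare name referenced anywhere in the module.
        used = {n.id for n in ast.walk(tree) if isinstance(n, ast.Name)}
        drop: set[int] = set()
        for node in tree.body:
            if isinstance(node, (ast.Import, ast.ImportFrom)):
                bound = {a.asname or a.name.split(".")[0] for a in node.names}
                if "*" not in bound and not bound & used:
                    # No name from this import is used: mark its lines.
                    drop.update(range(node.lineno, node.end_lineno + 1))
        lines = source.splitlines(keepends=True)
        return "".join(l for i, l in enumerate(lines, 1) if i not in drop)
    ```

    Even a fix this small still goes through the normal pipeline as its own PR, which is the part that keeps a human in the loop.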

    I don’t know how good any of this shit actually is. I tested the AI review once and it didn’t have a lot to say, because it was a really simple PR. It’s a tool. When it does well, fine. When it doesn’t, it probably won’t take any more effort to deal with than any other bad input.

    I’m sure you can always find horrific examples, but the question is how common they are and whether any introduced bugs are subtle enough to get past both the developer and a human reviewer. It might depend more on time pressure than anything else, like always.

    • snooggums@lemmy.world · 17 days ago

      The goal of the “AI agent” approach doesn’t include a human reviewer: the agent acts independently, or is reviewed by other AI agents. Full automation.

      They are selling those AI agents as working right now despite the obvious flaws.

  • peopleproblems@lemmy.world · 17 days ago

    This feels like an attempt to destroy open source projects. Overwhelm developers with crap PRs so they can’t fix real issues.

    It won’t work long term, because I can’t imagine anyone staying on GitHub after it gets bad.