• loaExMachina@sh.itjust.works · 4 months ago

    I had fun with it a dozen times or so when it was new, but I’m not amused anymore. The last time was about a month ago, when someone told me about using ChatGPT to look up an answer; I deliberately found a few prompts that made it spill clear bullshit and sent screenshots to make the point that LLMs aren’t reliable and that asking them factual questions is a bad idea.

    • friend_of_satan@lemmy.world · 4 months ago

      asking them factual questions is a bad idea

      This is a crucial point that everybody should make sure their non-techie friends understand. AI is not good at facts; AI is primarily a bullshitter. It’s really only useful where facts don’t matter, like planning events, finding ways to spend time, creating art, etc.

      • subignition@fedia.io · 4 months ago

        If you’re prepared to fact check what it gives you, it can still be a pretty useful tool for breaking down unfamiliar things or for brainstorming. And I’m saying that as someone with a very realistic/concerned view about its limitations.

        Used it earlier this week as a jumping-off point for troubleshooting a problem I was having with USMT (the User State Migration Tool) in Windows 11.

        • friend_of_satan@lemmy.world · 4 months ago

          Absolutely. With code (and, I suppose, other logical truths) it’s easier, because you can ask it to write a test for the code — though the test itself may be invalid, so you have to check that too. With arbitrary facts, I usually ask “is that true?” to have it check itself. Sometimes it just gets into a loop of lies, but other times it actually does tell the truth.
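
          The “check the test too” point above can be sketched in a few lines. This is a hypothetical example (the function and test are invented for illustration, not from the thread): an LLM-written test only proves something if its expected values are themselves correct.

          ```python
          # Hypothetical LLM-generated function: turn a title into a URL slug.
          def slugify(title: str) -> str:
              """Lowercase a title and join its words with hyphens."""
              return "-".join(title.lower().split())

          # Hypothetical LLM-generated test. Before trusting it, read the
          # expected value yourself: a bad expectation (e.g. "Hello-World")
          # would fail correct code or pass broken code.
          assert slugify("Hello World") == "hello-world"
          ```

          The assertion passing tells you the code and the test agree with each other; only a human reading the expected value tells you they agree with reality.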