• Veraticus@lib.lgbtOP
      1 year ago

No one is saying there are problems with the bots (though I don’t understand why you’re being so defensive of them – they have no feelings, so describing their limitations doesn’t hurt them).

The problem is what humans expect from LLMs and how humans use them. Their purpose is to string words together in pretty ways. Sometimes those ways are also correct. Being aware of what they’re designed to do, and of their limitations, seems important for using them properly.

    • FIash Mob #5678@beehaw.org
      1 year ago

These AI systems make up bullshit often enough that there’s even a term for it: hallucination.

Kind of a euphemistic term, like how religious people made up the word ‘faith’ to cover for the more honest term: gullibility.