In a study recently published in the journal Patterns, researchers demonstrate that computer algorithms often used to identify AI-generated text frequently falsely label articles written by non-native language speakers as being created by artificial intelligence. The researchers warn that the unreliable performance of these AI text-detection programs could adversely affect many individuals, including students and job applicants.

  • Absurdist@lemm.ee · 1 year ago

    Completely disagree - a lot of non-native speakers have an excellent grasp of grammar, precisely because they have learnt the rules. Native speakers rely on things sounding right rather than necessarily knowing the rules. But following grammatical rules rigidly is exactly what I would expect from both a genAI and a non-native speaker (as well as avoiding figurative speech and idioms).

    • Cloudless ☼@feddit.uk · 1 year ago

      Sorry, I might have overly generalised based on my personal experience. I have been a non-native English speaker for over 30 years, and I still keep making grammatical mistakes.

      Everyone is different and it depends heavily on how the person learned/acquired the language.