In a study recently published in the journal Patterns, researchers demonstrate that the algorithms commonly used to detect AI-generated text frequently misclassify articles written by non-native speakers as AI-generated. The researchers warn that the unreliable performance of these AI text detectors could adversely affect many individuals, including students and job applicants.
Most AI-generated texts are grammatically perfect. That’s not a characteristic of non-native speakers.
According to the article, grammatical errors are not the reason. The reason is that these detectors flag simple, predictable vocabulary, which AI uses to mimic the way average people converse, and which non-native speakers often use as well.
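For anyone curious how that works mechanically: detectors of this kind typically score how statistically predictable the wording is (often called perplexity) and flag low-perplexity text. Below is a toy sketch of the idea, not any real detector's code; the frequency table and the two sample sentences are made up for illustration, and real tools use a language model's token probabilities instead.

```python
import math

# Hypothetical relative frequencies of a few very common English words.
# A real detector would use a full language model, not a lookup table.
COMMON_WORD_FREQ = {
    "the": 0.06, "is": 0.03, "a": 0.03, "of": 0.03, "and": 0.03,
    "to": 0.03, "in": 0.02, "it": 0.02, "good": 0.01, "very": 0.01,
}
RARE_FREQ = 0.0001  # assumed probability floor for words outside the table

def pseudo_perplexity(text: str) -> float:
    """Lower score = more predictable wording = more likely to be flagged."""
    words = text.lower().split()
    log_prob = sum(math.log(COMMON_WORD_FREQ.get(w, RARE_FREQ)) for w in words)
    return math.exp(-log_prob / max(len(words), 1))

# Plain, common wording (typical of AI output and of many learners):
simple = pseudo_perplexity("the food is very good and the place is good")
# Varied, less predictable wording:
varied = pseudo_perplexity("the bistro exudes effortless charm despite cramped seating")
# simple < varied, so the plainer sentence looks "more AI" to this scorer.
```

The point of the toy example: nothing in the score has anything to do with grammar, only with how predictable the word choices are, which is exactly why careful-but-plain writing by non-native speakers gets caught.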
A lot of non-native speakers show a stronger command of the language precisely because they took the time to study its rules. Just look at how people type on social media.
I must be one of the lazy ones who didn’t take enough time to study English grammar. (͡•_ ͡• )
Yeah I get your point, many non-natives pay more attention to grammar when they write.
Completely disagree - a lot of non-native speakers have an excellent grasp of grammar, precisely because they have learnt the rules. Native speakers rely on stuff sounding right, rather than necessarily knowing the rules. But following grammatical rules rigidly is exactly what I would expect both from a genAI and from a non-native speaker (as well as avoiding figurative speech and idioms).
Sorry, I might have overgeneralised based on my personal experience. I have been a non-native English speaker for over 30 years, and I still keep making grammatical mistakes.
Everyone is different and it depends heavily on how the person learned/acquired the language.