Here is the FCC proposal (pdf)
The U.S. Federal Communications Commission has proposed new rules governing the use of AI-generated phone calls and texts. Part of the proposal centers on creating a clear definition of AI-generated calls, while the rest focuses on consumer protection by requiring companies to disclose when AI is being used in calls or texts.
“This provides consumers with an opportunity to identify and avoid those calls or texts that contain an enhanced risk of fraud and other scams,” the FCC said. The agency is also looking to ensure that legitimate uses of AI to help people with disabilities communicate remain protected.
Today’s proposal is the latest action by the FCC to regulate how AI is used in robocalls and robotexts. The commission has already moved to place a ban on AI-generated voices in robocalls and has called on telecoms to crack down on the practice. Ahead of this year’s November election, there has already been one notable use of AI robocalls attempting to spread misinformation to New Hampshire voters.
Robocalls and robotexts are a serious problem that nobody takes seriously; we just treat them as a fact of life when it doesn’t have to be that way. The rest of the world laughs at how much we’re harassed by them.
I worked in telecom for a couple of years up until recently. There’s actually a growing body of self-regulation going on within the SMS industry. Most notably, any business sending text messages has to apply for a “license” to do so, with some pretty strict consent requirements. Violating those requirements comes with heavy penalties, mostly enforced by downstream carriers. If you’re curious, 10DLC/A2P are the terms to Google for.
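To make the consent requirement concrete, here is a rough sketch of the kind of pre-send gate a business might put in front of its messaging pipeline. All names here (ConsentRecord, can_send) are made up for illustration and are not part of any carrier's or registry's actual 10DLC/A2P API; the point is just the shape of the rule: an explicit opt-in is required, and a later opt-out (STOP) always wins.

    # Illustrative sketch only -- hypothetical names, not a real 10DLC/A2P API.
    from dataclasses import dataclass
    from datetime import datetime
    from typing import Optional

    @dataclass
    class ConsentRecord:
        phone_number: str
        opted_in_at: Optional[datetime]   # when the user gave express consent
        opted_out_at: Optional[datetime]  # when the user replied STOP, if ever

    def can_send(record: Optional[ConsentRecord]) -> bool:
        """Return True only if the recipient has active, un-revoked consent."""
        if record is None or record.opted_in_at is None:
            return False  # no documented opt-in -> don't send
        if record.opted_out_at and record.opted_out_at > record.opted_in_at:
            return False  # a later STOP revokes the earlier opt-in
        return True

    # Example: one opted-in number, one that later texted STOP
    ok = ConsentRecord("+15555550100", datetime(2024, 1, 5), None)
    revoked = ConsentRecord("+15555550101", datetime(2024, 1, 5), datetime(2024, 3, 2))
    print(can_send(ok))       # True
    print(can_send(revoked))  # False

In practice the registration and enforcement happen at the carrier/registry level rather than in your own code, but most compliant senders keep a record along these lines so they can demonstrate consent if challenged.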
This is also raising questions about foreign interference and influence in the democratic process.
In Canada, the federal Elections Commissioner has been called on to investigate the source of bot campaigns favouring the leading opposition party: Online bot campaign backing Pierre Poilievre prompts call for probe.