Google’s AI model will potentially listen in on all your phone calls — or at least ones it suspects are coming from a fraudster.

To protect the user’s privacy, the company says Gemini Nano operates locally, without connecting to the internet. “This protection all happens on-device, so your conversation stays private to you. We’ll share more about this opt-in feature later this year,” the company says.

“This is incredibly dangerous,” says Meredith Whittaker, president of the Signal Foundation, the nonprofit behind the end-to-end encrypted messaging app Signal.

Whittaker, a former Google employee, argues that the entire premise of the anti-scam call feature poses a potential threat. That’s because Google could program the same technology to scan for other keywords, such as requests for access to abortion services.

“It lays the path for centralized, device-level client-side scanning,” she said in a post on Twitter/X. “From detecting ‘scams’ it’s a short step to ‘detecting patterns commonly associated w/ seeking reproductive care’ or ‘commonly associated w/ providing LGBTQ resources’ or ‘commonly associated with tech worker whistleblowing.’”

  • ᴅᴜᴋᴇᴛʜᴏʀɪᴏɴ@lemmy.world · 50 points · 6 months ago

    “…locally on device without connecting to the internet”

    How would it then report such behavior to Google, without internet?

    If it notifies the end user, what good does that do? My phone is at my ear; I don’t stop a conversation when another app sends a notification while I’m on a call.

    This will 100% report things in the background to Google.

    • The Hobbyist@lemmy.zip · 21 points · 6 months ago

      You’re placing a very large amount of trust in something that may require only the flip of a switch to start sending that flagged information back to Google, along with all the heavy telemetry already feeding back…

    • GenderNeutralBro@lemmy.sdf.org · 9 points · 6 months ago

      There are a few ways this could work, but it hardly seems worth the effort if it’s not phoning home.

      They could have an on-device database of red flags and use on-device voice recognition against that database. But then what? Pop up a “scam likely” screen while you’re already mid-call? Maybe include an option to report scams back to Google with a transcript? I guess that could be useful.

      Anything more than that would be a privacy nightmare. I don’t want Google’s AI deciding which of my conversations are private and which get sent back to Google. Any non-zero false-positive rate would simply be unacceptable.

      Maybe this is the first look at a new cat and mouse game: AI to detect AI-generated voices? AI-generated voice scams are already out there in the wild and will only become more common as time goes on.
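The “on-device database of red flags” idea described in the comment above can be sketched in a few lines. This is purely illustrative: the phrases, weights, and threshold below are invented, and Google has not published how Gemini Nano’s scam detection actually works. It assumes speech has already been transcribed locally.

```python
# Hypothetical sketch of on-device red-flag matching against a local
# call transcript. All phrases, weights, and the threshold are invented
# for illustration -- this is not Google's actual detection logic.

RED_FLAGS = {
    "wire the money": 3,
    "gift card": 2,
    "your account has been compromised": 3,
    "do not hang up": 2,
    "verify your social security number": 3,
}

ALERT_THRESHOLD = 4  # arbitrary cutoff for showing a "likely scam" pop-up


def scam_score(transcript: str) -> int:
    """Sum the weights of every red-flag phrase found in the transcript."""
    text = transcript.lower()
    return sum(weight for phrase, weight in RED_FLAGS.items() if phrase in text)


def is_likely_scam(transcript: str) -> bool:
    """True when the accumulated score crosses the alert threshold."""
    return scam_score(transcript) >= ALERT_THRESHOLD


print(is_likely_scam("Please buy a gift card and do not hang up"))  # True
print(is_likely_scam("Hi, just calling about dinner tonight"))      # False
```

Even a toy version like this shows the commenter’s point: everything up to the final decision can run locally, and nothing in the design prevents the flag list, or the decision itself, from later being sent off-device.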

    • smeg@feddit.uk · 1 point · 6 months ago

      I assume it means the “AI” bit is running locally (for cost/efficiency reasons, and so your actual voice isn’t uploaded), and the results are then uploaded wherever (which is theoretically better, but still hugely open to abuse).

    • helenslunch@feddit.nl · 1 point · edited · 6 months ago

      How would it then report such behavior to Google, without internet?

      It doesn’t.

      In a demo, the tech giant simulated a scam call involving a fraudster impersonating a bank. A pop-up message appeared, encouraging the user to hang up.

      If it notifies the end user, what good does that do?

      You can’t see why it might be helpful for a user to know that they’re speaking to a scammer?