I’m sure it depends on the AI tools and features being used, but with all the “magic” obfuscation from the companies surrounding them, it’s not exactly clear how much of the processing happens locally versus remotely.
With some of the text stuff, I’m relatively sure most of it requires sending data to a server to work, but for some of the image/video editing and audio processing? That’s where things get much murkier, at least to me, and where this question largely stems from.
I’m aware more processors are specifically being made to support these features, so it seems like there are efforts to make more of this happen locally, on one’s own devices, but… what does the present situation look like?
It varies somewhat…
Most of it is remote; however, Siri actually does a lot locally, and I assume Google Assistant does too.
Those are likely the only two that do much locally; everyone else does it all remotely.
On what basis? It’s Google, so I would assume any and all data that you could possibly input into their apps and services would be used against you.
Mostly a “cost” basis, but it’s an assumption in Google Assistant’s case for sure. Siri I’ve actually tested.
Yeah, most “Hey Google” features don’t work without an internet connection. I couldn’t even start music while driving through an area with no signal; I had to pull over and use the GUI…