

On-device scam detection
I know I’ll be downvoted into oblivion, and I can hardly believe I’ve formed this opinion myself, but tbh this is a good application for some of this AI tech.
Anecdotally, a friend of mine grew up well-off: his family were immigrants, but his parents were educated and in a lucrative profession, so he always went to private schools, etc. Fast forward to about 10 years after all the kids moved out: the parents had divorced amicably, and his mom had a sizeable retirement along with her payout from the divorce. In the 7 figures; she never had to worry about money.
Anywho, mom ran into some medical issues, so the kids had to get involved with her finances, as she could no longer manage them herself. Turns out that over the course of months or years, mom had been getting scammed to the tune of tens of thousands of dollars at a time, to the point where she had taken out a mortgage on a home she previously owned outright. They’re still sorting things out, but the number he has tossed out in the past is ~$1.4M that got wired overseas and is just… gone now.
So yes, I probably won’t turn this feature on myself, but for the tens of millions of uneducated and inept people out there, this could genuinely make a difference in avoiding some catastrophic outcomes. It certainly isn’t a perfect solution, but I suspect my friend would rate it as much better than nothing, and I would argue that this falls short of being “strictly evil”.
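For context on what a feature like this might actually do, here’s a minimal sketch of an on-device scam screener: a small local scorer that flags a message before the user acts on it. Everything in it (the pattern list, weights, threshold, function names) is my own illustrative assumption, not any vendor’s actual implementation; a real feature would presumably use an on-device ML model rather than regexes, but the key property is the same: the message never leaves the phone.

```python
import re

# Illustrative patterns and weights only; not any real product's ruleset.
SCAM_PATTERNS = [
    (re.compile(r"wire\s+transfer|western union|gift card", re.I), 3),
    (re.compile(r"urgent|act now|final notice", re.I), 2),
    (re.compile(r"verify your (account|identity)", re.I), 2),
]

def scam_score(message: str) -> int:
    """Sum the weights of every pattern that appears in the message."""
    return sum(weight for pattern, weight in SCAM_PATTERNS
               if pattern.search(message))

def should_warn(message: str, threshold: int = 4) -> bool:
    # Runs entirely locally, so the content never leaves the device.
    return scam_score(message) >= threshold

if __name__ == "__main__":
    msg = ("FINAL NOTICE: your account has been suspended. "
           "Wire transfer $500 to verify your identity.")
    print(should_warn(msg))  # True (score 3 + 2 + 2 = 7, over the threshold)
```

The obvious failure modes (false positives on legitimately urgent messages, scammers rewording around a static list) are exactly why a learned model beats hard-coded rules, but even a crude local check like this might have interrupted a few of those wire transfers.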
Great article, thanks for sharing it OP.
Okay, now imagine you’re Elon Musk and you really want to change hearts and minds on the topic of, for example, white supremacy. AI chatbots have the potential to fundamentally change how a wide swath of people perceive reality.
If we think the reality distortion bubble is bad now (MAGAsphere, etc.), how bad will things get when people implicitly trust the output of these models while the underlying process by which a model decides how to present information is weighted toward particular ideologies? The rest of the article explores how chatbots build a profile of the user and serve different content based on that profile; that same machinery makes it even easier to identify the people most susceptible to mis/disinformation and deliver it to them with a cheery tone.
How might we, as a society, create a process for oversight of these “tools”? We need a cohesive approach that can be explained to policymakers in a way that calls them to action on this issue.