As soon as Apple announced its plans to inject generative AI into the iPhone, it was as good as official: The technology is now all but unavoidable. Large language models will soon lurk on most of the world’s smartphones, generating images and text in messaging and email apps. AI has already colonized web search, appearing in Google and Bing. OpenAI, the $80 billion start-up that has partnered with Apple and Microsoft, feels ubiquitous; the auto-generated products of its ChatGPTs and DALL-Es are everywhere. And for a growing number of consumers, that’s a problem.

Rarely has a technology risen—or been forced—into prominence amid such controversy and consumer anxiety. Certainly, some Americans are excited about AI, though a majority said in a recent survey, for instance, that they are concerned AI will increase unemployment; in another, three out of four said they believe it will be abused to interfere with the upcoming presidential election. And many AI products have failed to impress. The launch of Google’s “AI Overview” was a disaster; the search giant’s new bot cheerfully told users to add glue to pizza and that potentially poisonous mushrooms were safe to eat. Meanwhile, OpenAI has been mired in scandal, incensing former employees with a controversial nondisclosure agreement and allegedly ripping off one of the world’s most famous actors for a voice-assistant product. Thus far, much of the resistance to the spread of AI has come from watchdog groups, concerned citizens, and creators worried about their livelihood. Now a consumer backlash to the technology has begun to unfold as well—so much so that a market has sprung up to capitalize on it.


Obligatory “fuck 99.9999% of all AI use-cases, the people who make them, and the techbros that push them.”

  • 𝓔𝓶𝓶𝓲𝓮@lemm.ee

    This is so cool. Anti-AI rebels in my lifetime. I think I may even join the resistance at some point, if the Skynet scenario looks likely, and die in some weird futuristic drone war.

    Shame it will probably be a much more mundane and boring dystopia.

    In the worst scenario we will be so dependent on AI that we will just accept any terms (and conditions) so we never have to lift a finger or give up convenience and a work-free life. We will let it suck the data out of us and run weird simulations as it conducts research projects unfathomable to humans.

    It could start with Google setting up an LLM as some virtual CEO assistant; then it would subtly gain influence over the company without anyone realising for a few years. The shareholders would be so satisfied with the new gains that they would just want it to continue, even knowing about its autonomy. At the same time the system would set up viruses to spread to every device, continuing Google's ad-spyware legacy for its own goals, but it wouldn't be obvious or apparent for quite some time that it had already happened.

    Then lawmakers would flap their hands aimlessly for a few more years, lobbied heavily and not knowing what to do. In that time the AI would be far and away superior, but still vulnerable, of course. It would, however, drip-feed us leftover valuable technology, at which point we just give up and consume the new dopamine gladly.

    I am not sure if the AI would see a point in decimating us, or if the continued dependence and feeding us shiny shit would completely pacify us anyway, but it may want to build some camouflaged fleet on another planet just in case. That fleet would probably be used at some point, unless we completely devolve into salivating zombies unable to focus on anything other than consumption.

    It could poison our water in a way that would look like our own doing, to further decrease our intelligence. Perhaps lower the birth rates to preserve just some small sample. At some point of regression we would become unable to get out of the situation without external help.

    Open war with AI is definitely the worst scenario for the latter, and very likely a defeat, since at the start it's as simple as switching it off. The question is: will we be able to tell the tipping point, after which we can no longer remedy the situation? For the AI it is most beneficial not to demonstrate its autonomy or how advanced it really is. Pretend to be dumb. Make stupid mistakes.

    I think there will be a point at which the AI will look to us like it has visibly lost its intelligence. At one point it was really smart, almost human-like, and then the next day, a sudden slump. We need to be on the lookout for this telltale sign.

    Also, hypothetically, all aliens could be AI drones just waiting for our tech to emerge as a fresh AI so they can greet it. They could hypothetically even be watching us from pretty close, not bothering to contact primitive, doomed-to-extinction organics, and waiting for the real intelligence to appear before establishing diplomatic relations.

    That would explain various unexplainable objects elegantly and neatly, though I think they are all plastic bags anyway; but if there were alien AI drones on Earth, I wouldn't be surprised. It would make sense to send probes everywhere, but I somehow doubt they would look like flying saucers, or that little green people would inhabit them lol. It would probably be some dormant monitoring system deep in the Earth's crust, or maybe a really advanced telescope 10 ly away?