I’m being serious. I think that if instead of Trump there was just a prompt “engineer” the country would actually run better. Even if you train it to be far right.
And this is not a praise of LLMs…
I don’t know if this is true, but people were saying that the tariff policy was what LLMs suggest when asked about how to reduce a trade deficit. So perhaps we are already being governed by ChatGPT.
Watching the US implode under the combined pressure of greed, corruption, incomprehensible idiocy and incompetence would almost be amusing if its global importance didn’t threaten to pull the rest of us down with it like a boat anchor chain tangled around the ankles of a drowning man.
I’m just hoping to make it out the other side of this alive at this point.
I highly doubt that. For so many reasons. Here are just a few:
- What data would you train it on, the Constitution? The entirety of federal law? How would that work? Knowing how ridiculous textualism is even when done by humans, do you really think a non-thinking algorithm could understand the intention behind the words? Or even what laws, rules, or norms should be respected in each unique situation?
- We don’t know why LLMs return the responses they return. This would be hugely problematic for understanding its directions.
- If an LLM doesn’t know an answer, instead of saying so it will usually just make something up. Plenty of people do this too, but I’m not sure why we should trust an algorithm’s hallucinations over a human’s bullshit.
- How would you ensure the integrity of the prompt engineer’s prompts? Would there be oversight? Could the LLM’s “decisions” be reversed?
- How could you hold an LLM accountable for the inevitable harm it causes? People will undoubtedly die for one reason or another based on the LLM’s “decisions.” Would you delete the model? Retrain it? How would you prevent it from making the same mistake again?
I don’t mean this as an attack on you, but I think you trust the implementation of LLMs way more than they deserve. These are unfinished products. They have some limited potential, but should by no means have any power or control over our lives. Have they really shown you they should be trusted with this kind of power?
I honestly don’t see how that is worse than Trump. Is it terrible? Yeah. Is Trump worse? Also yeah.
DOGE is basically a bunch of 25 year old know-nothings feeding budget spreadsheets and job descriptions to a chatbot and asking what to cut, so…
And by an LLM you mean the people who train and tune the LLM to generate the type of responses they like.
It probably couldn’t do much worse…