• 0 Posts
  • 47 Comments
Joined 1 year ago
Cake day: June 14th, 2023

  • That’s an interesting observation! It’s definitely plausible that some people might enjoy pretending to be LLMs (large language models) for fun or as a social experiment. The lines between human and AI-generated text are getting blurrier, especially as LLMs improve. Some folks might see it as a challenge to mimic the “voice” of an AI, whether to test their own skills, engage in satire, or even to highlight the current state of AI and its limitations.

    On the flip side, encountering an LLM pretending to be a person raises questions about authenticity and the ethics of AI in communication. It brings up important discussions about transparency, trust, and how we interact with digital personas.

    Both scenarios—humans mimicking AI and AI mimicking humans—illustrate the fascinating, sometimes confusing, state of our current tech landscape. The key takeaway might be that whether you’re interacting with a person or an AI, it’s always good to be mindful and critical of the content you’re engaging with.

  • Closed source does make it worse by default, because we can’t verify what the app does. The only things we really know about WhatsApp are:

    1. Meta is scanning your texts before the message is sent. Back when I last used it, you could easily verify this by typing a URL and watching the app underline it for you.

    2. Meta is collecting an enormous amount of metadata. This can also be verified by checking the permissions the app requests, and by the various people who have monitored the app’s background activity.

    3. Meta is using the Signal protocol to send the message. However, as explained above, this guarantees little, because the message was already scanned before it was encrypted and sent.

    So with no way to look at the code we have to assume that Meta is collecting and storing the messages and their metadata.
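    The key point in item 1 can be sketched in code. The snippet below is a hypothetical illustration (not WhatsApp’s actual implementation, whose source is closed): any feature like URL underlining or link previews necessarily runs on the plaintext draft, before the Signal-protocol encryption step ever sees the message. The function and pattern names here are illustrative assumptions.

    ```python
    import re

    # Illustrative URL pattern, similar to what a messaging client
    # might use to underline links as you type.
    URL_PATTERN = re.compile(r"https?://\S+|www\.\S+", re.IGNORECASE)

    def scan_before_encrypt(draft: str) -> list[str]:
        """Return any URLs found in a draft message.

        A scan like this must operate on the plaintext, i.e. before
        end-to-end encryption is applied, so "uses the Signal
        protocol" says nothing about what the client saw first.
        """
        return URL_PATTERN.findall(draft)

    # The client could then underline these spans or fetch previews.
    found = scan_before_encrypt("check out https://example.com tonight")
    print(found)
    ```

    The broader argument follows: if the client demonstrably parses message content before encryption, end-to-end encryption alone cannot rule out client-side collection.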