I was using my SO’s laptop. I had been talking (not searching or otherwise typing) about some VPN solutions for my homelab, and out of curiosity I used the new big Copilot button to ask what it can do. The conversation actually started with me asking if it could turn off my computer for me (it cannot), and then I asked this.

Very unnerving. I hate being paranoid enough to think it actually picked up on the context of me talking, but again: it’s my SO’s laptop, so there’s none of my technical search history to pull from.

  • BossDj@lemm.ee · +73/−2 · 7 months ago

    Is it possible that your chain of questions is very similar to that of other “paranoid” users who inevitably question Copilot about privacy, so this is a learned response?

    • bbuez@lemmy.world (OP) · +22 · 7 months ago

      I’ll pull the rest of the context when she’s back in town; I doubt she’s used it since, so it should still be saved. She looked at me when this typed out and said, “You’re fucking with me, right?” I am still just as shocked. I wish I were fucking around, but I have no other explanation for how it would remotely key onto saying this given the previous interactions.

  • muntedcrocodile@lemmy.world · +22/−1 · edited · 7 months ago

    Looks to me like it’s not audio tracking, but that you somehow inadvertently triggered the privacy training Microsoft has given to Copilot. I’m guessing the AI was being too vocal about privacy, and Microsoft wanted to tame it and get it to downplay such concerns.

  • LWD@lemm.ee · +14 · 7 months ago

    ChatGPT has a short but distinct history of encouraging paranoia in people who use it.

    Asked for help with a coding issue, ChatGPT wrote a long, rambling and largely nonsensical answer that included the phrase “Let’s keep the line as if AI in the room”.

    • bbuez@lemmy.world (OP) · +2/−1 · 7 months ago

      I have to give credit to the novelty of the technology; there’s certainly a reason I want to self-host models. My concern really is with what data is being used, and how these models are being trusted.

      My goal is to contribute the least usable data to the likes of OpenAI “in the pursuit of AGI”, because it will inevitably end up the way MS Tay did, especially if something changes on their end and it suddenly starts spitting out garbage for users who may be at risk from bad advice or actually paranoid.

      That also doesn’t mean I haven’t and won’t use ChatGPT; it has certainly been a useful tool, knowing its limitations. But OpenAI has their head in the clouds, and it only leads to greed in pursuit of an end goal. /imho

      • LWD@lemm.ee · +3 · edited · 7 months ago

        I think AI is humanized and otherwise designed so that people will feel encouraged to give private data to it. The Kagi Corporation wrote about this in their manifesto. In reality, giving your data to OpenAI is just as unsafe as typing a personal search query into Google or Bing. But by changing the context, it feels like you’re talking to a friend or a person you met at a bus stop.

        AI bros always say “it’s just a tool” as a sort of thought-terminating cliché (note: this wasn’t intended as a dig at your comment). Guns are a tool too. I wouldn’t want the richest corporations in the United States to personally own the most powerful missile systems, and in terms of AI, that’s kind of where we are.

  • Lemongrab@lemmy.one · +4/−1 · 7 months ago

    If it’s anything like Cortana’s permissions, it’ll have access to all your web searches. Cortana also had speech and typing personalization, so Microsoft is definitely giving Copilot at least those permissions.
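
    If you’d rather switch it off entirely, here’s a minimal sketch in Python using the standard-library winreg module, assuming the documented “TurnOffWindowsCopilot” group-policy value still applies to your Windows build:

        import winreg  # standard library; Windows only

        # Documented per-user policy value for disabling Windows Copilot.
        # Assumption: the TurnOffWindowsCopilot policy is honored by this build.
        key = winreg.CreateKey(
            winreg.HKEY_CURRENT_USER,
            r"Software\Policies\Microsoft\Windows\WindowsCopilot",
        )
        winreg.SetValueEx(key, "TurnOffWindowsCopilot", 0, winreg.REG_DWORD, 1)
        winreg.CloseKey(key)

    Sign out and back in (or restart Explorer) for the change to take effect.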

  • Echo Dot@feddit.uk · +2/−1 · 7 months ago

    I doubt that it’s sending audio data back to Microsoft, although it probably does have access to your search history if you’ve used Bing or the built-in search bar.

  • bbuez@lemmy.world (OP) · +4/−3 · edited · 7 months ago

    I will post the full context tomorrow when I can use the laptop again. No previous chats had anything to do with privacy, and this was the first chat since the update. The first chat was something like “shit fart”, with which my SO had scientifically gauged the model.

    • helenslunch@feddit.nl · +1 · 7 months ago

      I doubt it was listening to your conversation, but regardless, this problem can be solved entirely by installing Linux and GPT4All or one of the many other local FOSS LLMs.
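
      For the local route, a minimal sketch using the GPT4All Python bindings (the model filename is illustrative; any GGUF model from the GPT4All catalog works), so no prompt or response ever leaves the machine:

          from gpt4all import GPT4All  # pip install gpt4all

          # Downloads the model file on first run; after that it's fully offline.
          model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")

          with model.chat_session():
              reply = model.generate("What can you even do?", max_tokens=200)
              print(reply)  # generated entirely on your own hardware

      Same idea with Ollama or llama.cpp if GPT4All isn’t your thing.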

  • Shawdow194@kbin.social · +0/−1 · 7 months ago

    It’s an LLM. You asked it “what can you even do”, and one of the hottest topics around AI is privacy concerns. With Copilot being neutered by MSFT to produce curated responses, asking it what it can do and having it branch to privacy concerns first seems totally reasonable.

  • Uriel238 [all pronouns]@lemmy.blahaj.zone · +1/−2 · edited · 7 months ago

    If the response is not related to listening in on your convo, then it smacks of a buddy processing a personal insecurity.

    Actually my last girlfriend said I was “nicely accommodable.”