I was using my SO’s laptop. I had been talking (not searching, or otherwise typing) about some VPN solutions for my homelab, and got curious enough to use the new big Copilot button and ask what it can do. The beginning of this context was actually me asking if it could turn off my computer for me (it cannot), and then I asked this.
Very unnerving. I hate to be so paranoid as to think it actually picked up on the context of me talking, but again: it’s my SO’s laptop, so there’s none of my technical search history to pull from.
There’s a real risk of survivorship bias here. Somebody who had been talking about a car gets a car-related suggestion and thinks nothing of it. A privacy-minded person, however, finds it odd, and being the kind of person who worries about what the cause could have been, considers the prior conversation.
I’m not saying it’s an unreasonable concern, or that it’s technically infeasible. It’s just not how these LLMs tend to work.
I’d consider it more likely to be a bug, or general inference like you said, or that your SO had a bunch of documents locally that reference privacy or browsing history (or anything, really) that MS could have used as a kind of “here’s more about the person asking you a question” context.
A privacy-minded person probably wouldn’t use these tools to begin with, tbh; they’d likely run their own LLM instead.
I guess that’s why OP brought up that they were using someone else’s computer.
Also, a truly privacy-minded person wouldn’t necessarily refuse to use a hosted AI product at all. We generally just stay aware that we have no privacy when using one, and never type anything sensitive into it. Also, have you seen what it costs to run a capable LLM?
Just don’t pull a Samsung.
I’ve just started messing with GPT4All for CPU-based language models, which can run relatively well on older gaming hardware, and a Coral accelerator module for my NVR presence detection with Frigate only cost $30.
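For anyone curious about the Frigate side: pointing Frigate at a Coral is a small config change. A minimal sketch of the detector section, assuming a USB Coral (the `coral` key name is arbitrary; PCIe/M.2 variants use a different `device` value — check the Frigate docs for your hardware):

```yaml
# Frigate config fragment: register a USB Coral as the object detector.
detectors:
  coral:
    type: edgetpu
    device: usb
```

With this in place, Frigate offloads object detection to the Coral instead of burning CPU on it.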
That’s what I’ve been playing with. Cool stuff, even though it’s limited by my 8GB Nvidia card.
It’ll be interesting to see how far the technology advances in even 2 or 3 years, even on just an 8GB card, thanks to optimizations and so on.
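The 8GB ceiling mostly comes down to simple arithmetic on quantized weight sizes. A rough back-of-envelope sketch (weights only; it ignores KV cache and runtime overhead, so real usage is higher):

```python
# Back-of-envelope VRAM estimate for quantized LLM weights.
def weight_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate weight memory in GiB: params * bits / 8 bytes each."""
    return params_billion * 1e9 * bits_per_weight / 8 / 2**30

for params in (7, 13, 33):
    # A 7B model at 4-bit is roughly 3.3 GiB of weights; 33B blows past 8GB.
    print(f"{params}B @ 4-bit \u2248 {weight_gb(params, 4):.1f} GiB")
```

Which is why 7B (and, tightly, 13B) quantized models are the practical range for an 8GB card today, and why better quantization schemes keep moving that line.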
Meet GPT4all