Personal AI Assistants and Privacy

Bruce Schneier
May 23, 2024
www.schneier.com
Tags: privacy concerns, AI trust problem, personal digital assistant, Windows 11, corporate AIs, trustworthy AI, public AI


Microsoft is trying to create a personal digital assistant:

> At a Build conference event on Monday, Microsoft revealed a new AI-powered feature called "Recall" for Copilot+ PCs that will allow Windows 11 users to search and retrieve their past activities on their PC. To make it work, Recall records everything users do on their PC, including activities in apps, communications in live meetings, and websites visited for research. Despite encryption and local storage, the new feature raises privacy concerns for certain Windows users.

I wrote about this AI trust problem last year:

> One of the promises of generative AI is a personal digital assistant. Acting as your advocate with others, and as a butler with you. This requires an intimacy greater than your search engine, email provider, cloud storage system, or phone. You're going to want it with you 24/7, constantly training on everything you do. You will want it to know everything about you, so it can most effectively work on your behalf.
>
> And it will help you in many ways. It will notice your moods and know what to suggest. It will anticipate your needs and work to satisfy them. It will be your therapist, life coach, and relationship counselor.
>
> You will default to thinking of it as a friend. You will speak to it in natural language, and it will respond in kind. If it is a robot, it will look humanoid, or at least like an animal. It will interact with the whole of your existence, just like another person would.
>
> […]
>
> And you will want to trust it. It will use your mannerisms and cultural references. It will have a convincing voice, a confident tone, and an authoritative manner. Its personality will be optimized to exactly what you like and respond to.
>
> It will act trustworthy, but it will not be trustworthy. We won't know how they are trained. We won't know their secret instructions. We won't know their biases, either accidental or deliberate.
>
> We do know that they are built at enormous expense, mostly in secret, by profit-maximizing corporations for their own benefit.
>
> […]
>
> All of this is a long-winded way of saying that we need trustworthy AI. AI whose behavior, limitations, and training are understood. AI whose biases are understood, and corrected for. AI whose goals are understood. That won't secretly betray your trust to someone else.
>
> The market will not provide this on its own. Corporations are profit maximizers, at the expense of society. And the incentives of surveillance capitalism are just too much to resist.

We are going to need some sort of public AI to counterbalance all of these corporate AIs.

EDITED TO ADD (5/24): Lots of comments about Microsoft Recall and security:

This:

> Because Recall is "default allow" (it relies on a list of things not to record) … it's going to vacuum up huge volumes and heretofore unknown types of data, most of which are ephemeral today. The "we can't avoid saving passwords if they're not masked" warning Microsoft included is only the tip of that iceberg. There's an ocean of data that the security ecosystem assumes is "out of reach" because it's either never stored, or it's encrypted in transit. All of that goes out the window if the endpoint is just going to…turn around and write it to disk. (And local encryption at rest won't help much here if the data is queryable in the user's own authentication context!)

This:

> The fact that Microsoft's new Recall thing won't capture DRM content means the engineers do understand the risk of logging everything. They just chose to preference the interests of corporates and money over people, deliberately.

This:

> Microsoft Recall is going to make post-breach impact analysis impossible. Right now IR processes can establish a timeline of data stewardship to identify what information may have been available to an attacker based on the level of access they obtained. It's not trivial work, but IR folks can do it. Once a system with Recall is compromised, all data that has touched that system is potentially compromised too, and the ML indirection makes it near impossible to confidently identify a blast radius.

This:

> You may be in a position where leaders in your company are hot to turn on Microsoft Copilot Recall. Your best counterargument isn't threat actors stealing company data. It's that opposing counsel will request the recall data and demand it not be disabled as part of e-discovery proceedings.
