- cross-posted to:
- apple_enthusiast@lemmy.world
Actually, really liked the Apple Intelligence announcement. It must be a very exciting time at Apple as they layer AI on top of the entire OS. A few of the major themes:
Step 1 Multimodal I/O. Enable text/audio/image/video capability, both read and write. These are the native human APIs, so to speak.
Step 2 Agentic. Allow all parts of the OS and apps to inter-operate via “function calling”; a kernel-process LLM can schedule and coordinate work across them given user queries (a rough sketch of this pattern follows the list).
Step 3 Frictionless. Fully integrate these features in a highly frictionless, fast, “always on”, and contextual way. No copy-pasting information around, no prompt engineering, etc. Adapt the UI accordingly.
Step 4 Initiative. Don’t just perform a task given a prompt; anticipate the prompt, suggest, initiate.
Step 5 Delegation hierarchy. Move as much intelligence as you can on device (Apple Silicon is very helpful and well-suited here), but allow optional dispatch of work to the cloud.
Step 6 Modularity. Allow the OS to access and support an entire and growing ecosystem of LLMs (e.g. ChatGPT announcement).
Step 7 Privacy. <3
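To make the “function calling” idea in Step 2 a bit more concrete, here is a minimal, hypothetical sketch in Swift. The `Tool`, `ModelCall`, and `askModel` names are illustrative assumptions, not Apple’s actual Apple Intelligence APIs; the point is only that the model emits a structured call instead of plain text, and a coordinating layer routes it to whichever app registered the matching tool.

```swift
import Foundation

// Hypothetical sketch of an OS-level "function calling" loop.
// None of these types are Apple APIs; they only illustrate the pattern.

// A tool an app or OS component registers: a name, a description the
// model can read, and a handler that does the actual work.
struct Tool {
    let name: String
    let description: String
    let run: ([String: String]) -> String
}

// A structured call the model can emit instead of a plain-text answer.
struct ModelCall {
    let toolName: String
    let arguments: [String: String]
}

// Example tools registered by different apps (calendar, messages, ...).
let tools: [Tool] = [
    Tool(name: "create_event",
         description: "Add a calendar event",
         run: { args in "Created event '\(args["title"] ?? "")'" }),
    Tool(name: "send_message",
         description: "Send a message to a contact",
         run: { args in "Sent to \(args["to"] ?? ""): \(args["body"] ?? "")" })
]

// Stand-in for the on-device LLM: given the user query and the tool list,
// it either answers directly or returns a structured call. A real system
// would have the model emit JSON matching a tool schema; this is faked.
func askModel(query: String, tools: [Tool]) -> ModelCall? {
    if query.lowercased().contains("remind") {
        return ModelCall(toolName: "create_event",
                         arguments: ["title": "Dentist", "date": "tomorrow 9am"])
    }
    return nil
}

// The coordinating loop: route the model's call to the matching tool.
func handle(query: String) -> String {
    guard let call = askModel(query: query, tools: tools),
          let tool = tools.first(where: { $0.name == call.toolName }) else {
        return "Plain answer, no tool needed."
    }
    return tool.run(call.arguments)
}

print(handle(query: "Remind me about the dentist tomorrow at 9"))
```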
We’re quickly heading into a world where you can open up your phone and just say stuff. It talks back and it knows you. And it just works. Super exciting and as a user, quite looking forward to it.
You think your iPhone isn’t collecting data on you? Is that what you’re saying?
The phone is, Apple isn’t. They outline everything in the keynote if you are interested.
Their keynotes are irrelevant; their official privacy policies and legal disclosures take precedence over marketing claims or statements made in keynotes or presentations. Apple’s privacy policy states that the company collects data necessary to provide and improve its products and services. The OS-level AI would fall under this category, allowing Apple to collect data processed by the AI to improve its functionality and models. Apple’s keynotes and marketing materials carry no legal weight when it comes to their data practices. With the AI system operating at the OS level, it likely has access to a wide range of user data, including text inputs, conversations, and potentially other sensitive information.
Unless you are designing and fabricating your own chips for processing, networking, etc., privacy today is about trust, not technology. There’s no escaping it. I know my iPhone and Apple are collecting data about me. I currently trust them the most on how they use it.
Running FOSS and taking control of your network strikes a far better balance of privacy vs. convenience than most people imagine.
There are degrees of trust though. You can trust the developers and people who audited the code if you have no skill/desire to audit it yourself, or you can trust just the developers.
And even closed systems’ behavior can be monitored and analyzed.
Yes, definitely. Apple claimed that their privacy measures could be independently audited and verified; we’ll have to wait and see what’s actually behind that claim.
How? The only way to truly do that to a 100% verifiable degree is if it were open source, and I highly doubt Apple would do that, especially considering its OS-level integration. At best, they’d probably only have a self-report mechanism, which would also likely be proprietary and therefore not verifiable in itself.
They have designed a very extensive solution for Private Cloud Compute: https://security.apple.com/blog/private-cloud-compute/
All I have seen from security researchers reviewing this is that it will probably be one of the best solutions of its kind; they basically do almost everything correctly, and extensively so.
The only critique I’ve seen is that they could have provided even more source code and easier ways for third parties to verify their claims, though it’s understandable that they didn’t.
As stated above, Private Cloud Compute has nothing to do with the OS-level AI itself. ರ_ರ That’s in the cloud, not on device.
As stated here, it still has the same issue of not being 100% verifiable; they only publish a few code snippets they deem “security-critical”, which doesn’t allow us to verify how user data is handled.
Adding to what it says here: if the on-device AI is compromised in any way, be it by an attacker or by Apple themselves, then PCC is rendered irrelevant, regardless of whether PCC is open source or not.
Additionally, I’ll raise the issue that this entire blog post is just that, a blog post; nothing stated there is legally binding, so any claims about how they handle user data carry no weight and can easily be dismissed as marketing.
Security and privacy in 2024 are unfortunately about trust, not technology, unless you are able to isolate yourself or design and produce all the chips you use yourself.
Yeah, and Apple is completely untrustworthy, like any other corporation; my point exactly. Idk about you, but I’ll stick to what I can verify the security & privacy of for myself, e.g. Ollama, GrapheneOS, Linux, Coreboot, Libreboot/Canoeboot, etc.