AI is clearly a key focus for Apple. When Apple announced iOS 18 in June, it also introduced a suite of new AI features under the Apple Intelligence brand. These features will come to iPhones later this year and will significantly reshape the software experience.
Apple has now joined tech giants like OpenAI, Amazon, Google, Meta, and Microsoft in agreeing to a set of voluntary AI safeguards from the Biden administration, as first reported by Bloomberg.
These safeguards are guidelines meant to ensure that AI systems are tested for bias, security flaws, and potential threats to national security. The goal is to keep AI development transparent, accountable, and safe for everyone.
How does this affect Apple?
These voluntary safeguards are meant to guide AI development. As part of an Executive Order from last year, companies like Apple will need to test their AI thoroughly for bias and security flaws, and share the results with the government, civil society, and academia. While these guidelines aren’t legally binding yet, they signal that the tech industry is taking self-regulation of AI more seriously.