Last August, Apple proposed a set of tools for detecting child sexual abuse material (CSAM), but the rollout was delayed after widespread criticism. In addition to alerts for explicit images on a child’s phone, Apple planned to scan users’ iCloud Photos for known CSAM and alert authorities if such material was found. While Apple said the safeguards would protect users’ privacy, critics raised questions about whether the system could become a backdoor for mass content tracking and surveillance.
Since then, Apple has made several changes to its image-scanning tools. The initial announcement said that if explicit photos were detected, parents of users under 13 would be notified automatically; this is no longer mentioned in the update. The Communication Safety feature is also turned off by default, and parents must actively enable it.
Instead, if it detects nudity, Communication Safety blurs the image in question. Children are shown a warning and directed to resources from child safety organizations. Most crucially, the feature is bidirectional: safeguards also activate if nudity is detected in a photo a child intends to send. The child is discouraged from sending it and given the option of contacting a trusted adult.
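The flow described above — on-device detection, blurring, a warning with resources, and extra steps for outgoing photos — can be sketched as simple pseudologic. This is purely illustrative: the function and field names are hypothetical, and Apple's actual on-device machine-learning detection is not shown.

```python
from dataclasses import dataclass, field

@dataclass
class SafetyResult:
    """Hypothetical record of what Communication Safety does for one image."""
    blurred: bool = False
    actions: list = field(default_factory=list)

def handle_image(contains_nudity: bool, direction: str) -> SafetyResult:
    """Illustrative sketch of the bidirectional Communication Safety flow.

    `contains_nudity` stands in for Apple's undisclosed on-device classifier;
    `direction` is "incoming" or "outgoing".
    """
    result = SafetyResult()
    if not contains_nudity:
        return result  # nothing to intervene on
    # Detected content is blurred and the child sees a warning with resources.
    result.blurred = True
    result.actions.append("show warning with child-safety resources")
    if direction == "outgoing":
        # The feature is bidirectional: sending is also discouraged,
        # with the option to reach out to a trusted adult.
        result.actions.append("discourage sending; offer to contact a trusted adult")
    return result
```

Note that in this sketch nothing leaves the device and no party is notified automatically, matching the updated behavior described above.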
Apple has gone to considerable lengths to preserve privacy: all image processing happens on the device, and Apple itself has little to no visibility into the analysis. Apple apps such as Siri, Spotlight, and Safari Search are also gaining CSAM-related interventions. If a user searches for terms related to child exploitation, these applications will step in.
“These interventions inform users that their interest in this topic is harmful and troublesome, and they provide tools from partners to help them deal with the problem,” Apple explained. Users who ask Siri how to report CSAM content will be directed to resources on how to file a report.