Apple’s latest iOS 15.2 beta has introduced a previously announced opt-in communication safety feature for Messages that scans photos in order to warn children, but not their parents, when children receive or send photos containing nudity.
UPDATE, 2:45pm ET — Via MacRumors: “We’ve learned from Apple that the Communication Safety code found in the first iOS 15.2 beta is not a feature in that update and Apple does not plan to release the feature as it is described in the article.”
MacDailyNews Note: This iMessage feature is not the untenable (delayed, not canceled) backdoor surveillance system introduced via the trojan horse of “detecting child sexual abuse material (CSAM).”
Mike Peterson for AppleInsider:
Although not enabled by default, parents or guardians can switch on the Messages feature for child accounts on a Family Sharing plan. The feature will detect nudity in Messages, blur the image, and warn the child.
Apple says children will be given helpful resources and reassured that it’s okay if they don’t want to view the image. If a child attempts to send photos that contain nudity, similar protections will kick in. In either case, the child will be given the option to message someone they trust for help.
Unlike the previously planned version of the feature, parents will not be notified if the system detects a message contains nudity. Apple says this is because of concerns that a parental notification could present a risk for a child, including the threat of physical violence or abuse.
Apple says that the nudity-detection flag will never leave the device, and that the system does not encroach upon the end-to-end encryption of iMessages.
MacDailyNews Take: While the opt-in element is certainly welcome, how much of a leap would it be to enable scanning for any user, not just children, and for content other than nudity?
How much further of a leap would it be for a company like Apple, which complies with local laws regardless of the law in question, to be legally compelled by governments to send them clandestine notifications whenever “illegal” content is discovered on a user’s device?
Imagine China scanning for Winnie-the-Pooh on devices in an effort to weed out critics of Xi Jinping (and/or lovers of anthropomorphic teddy bears with a honey fetish). In 2017, a list of thousands of images, including those depicting Vladimir Putin in full makeup, was outlawed in Russia. Extrapolate.
Further, given Apple’s newfound ability to read text in images and convert that to actual text data, any bastardizations of this innocuous-sounding opt-in on-device Messages photo scanning could quickly become the stuff of dystopian nightmares.