KUALA LUMPUR, Aug 10 — WhatsApp has expressed its concern over the new iOS 15 child safety update, in which Apple will use neural hash matching to identify images of child abuse uploaded to iCloud.

WhatsApp head Will Cathcart said that these forthcoming updates have the potential to undermine user privacy, while also stating that WhatsApp will not use such a system.

Will suggested that Apple would do better to give its users the option to report abusive content instead, which is one of the methods used by WhatsApp.

He said WhatsApp had reported over 40,000 cases to the National Center for Missing & Exploited Children (NCMEC) without having to breach encryption.

He is worried that governments will be able to use the system to scan for content they want to control, since different countries have different definitions of unlawful content.

There is also the threat of spyware companies getting their hands on the technology and exploiting it.

Apple’s detection technology

Apple said its new NeuralHash detection tool would preserve user privacy while limiting the spread of child-exploitative content.

The system will convert an image into a string of numbers known as a hash, which will be compared against a database of hashes of known child abuse material to detect a match.
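
As a rough illustration of that matching step (this is not Apple's code, and an ordinary cryptographic hash stands in for the perceptual NeuralHash, which is designed to survive resizing and recompression), the comparison against the database could look something like this:

```python
import hashlib

# Hashes of known child abuse material, supplied by child-safety organisations.
# The values below are placeholders, not real entries.
KNOWN_HASHES = {"placeholder_hash_1", "placeholder_hash_2"}

def image_hash(image_bytes: bytes) -> str:
    # Stand-in for NeuralHash: a real perceptual hash maps visually similar
    # images to the same value; SHA-256 is used here only for simplicity.
    return hashlib.sha256(image_bytes).hexdigest()

def matches_known_material(image_bytes: bytes) -> bool:
    # A photo is flagged only if its hash appears in the database;
    # photos with no match are ignored.
    return image_hash(image_bytes) in KNOWN_HASHES
```

The property NeuralHash adds over a plain hash is that small edits to a photo do not change its fingerprint, so slightly altered copies of known material still match.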

Apple also states that end-users will not be able to review the database, nor will they be able to know if a match was detected. Image hashes that do not have a database match will essentially be ignored by Apple.

Once there is a match, the device creates an encrypted cryptographic safety voucher containing the photo's NeuralHash as well as a visual derivative, and uploads it to iCloud. Even then, Apple will only be able to decrypt the safety vouchers and investigate the suspected images if numerous matches are detected.
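
A minimal sketch of that threshold idea is shown below; the voucher fields and the threshold value are assumptions made purely for illustration.

```python
from dataclasses import dataclass

# Assumed threshold for illustration; the real number is not specified in the article.
MATCH_THRESHOLD = 30

@dataclass
class SafetyVoucher:
    neural_hash: str          # hash of the matched photo
    visual_derivative: bytes  # low-resolution version kept for later human review

def can_apple_review(account_vouchers: list[SafetyVoucher]) -> bool:
    # Apple can only decrypt and review an account's vouchers once the
    # number of matched photos crosses the threshold.
    return len(account_vouchers) >= MATCH_THRESHOLD
```

One design note: Apple's technical summary describes threshold secret sharing, meaning the vouchers cannot be decrypted at all until enough matches exist; the simple counter above only illustrates the effect, not the mechanism.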

Once Apple has verified that said content is indeed child-exploitative, it will then report those accounts to the National Center for Missing & Exploited Children (NCMEC).

Apple explained that the NeuralHash system only applies to iCloud Photos. Benny Pinkas, a cryptography researcher who reviewed the system, also said that the system only detects matches with known images from the database.

Benny also claimed on Twitter that the detection happens on the iCloud servers, but Apple’s technical report of its child abuse content detection technology clearly states that the matching happens on the device itself.

A false positive match?

Another security researcher named Matthew Green also weighed in on the iOS 15 update, calling it a bad idea. Matthew, who has previously worked with Apple, fears that the NeuralHash system will eventually be used as a surveillance tool for end-to-end encrypted devices.

He is also worried about the possibility of hash collisions, which can result in false matches. However, Apple has said that its system is highly accurate, with less than a one in one trillion chance per year of incorrectly flagging an account.
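
To see why requiring several matches makes that kind of figure plausible, here is a back-of-the-envelope estimate. The numbers are made up for illustration and are not Apple's published parameters:

```python
from math import comb

# Assumed, illustrative parameters (not Apple's figures).
p = 1e-6     # chance that a single innocent photo falsely matches a database hash
n = 10_000   # photos in a hypothetical iCloud library
k = 10       # matches required before Apple can review anything

# Union-bound estimate of an innocent account reaching the review threshold:
# at most (ways to pick k photos) x (probability that all k falsely match).
upper_bound = comb(n, k) * p**k
print(f"{upper_bound:.1e}")  # about 2.7e-27 with these numbers, far below one in a trillion
```

This treats every photo as an independent trial, which is a simplification, but it shows how quickly the account-level risk shrinks as the required number of matches grows.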

In 2013, Twitter also started using a similar technology called PhotoDNA, which likewise compares image hashes against a database to weed out content depicting child abuse.

In addition to NeuralHash, iOS 15 will also allow children and their parents to receive a warning notification if sexually explicit photos are sent or received through the Messages app (not to be confused with iMessage). This feature will be available for accounts that are set up as families.

There will also be updates to Siri and Search, where Siri can assist users with reporting child abuse content or exploitation. Siri and Search will also be able to step in if users perform searches relating to sexually explicit material involving kids, notifying them that their interest in the topic is harmful and providing resources for them to seek help.

Aside from iOS 15, these updates will also apply to iPadOS 15, watchOS 8, and macOS Monterey. — SoyaCincau