
Feeling shocked by Big Tech's decisions regarding privacy has become the norm. While some of these large enterprises are actually taking steps toward better privacy conditions for users, others seem to move in the opposite direction, inviting us to question their motives.

In this sense, Apple’s latest initiative has been especially controversial. The company announced that it will soon start scanning iCloud accounts to find iPhones and iPads containing CSAM (child sexual abuse material).

This decision comes in two parts. The first is a machine learning algorithm that will actively monitor the Messages app, alerting parents if their children receive material deemed explicit. The second is a new system that will automatically scan the photos of iPhones and iPads that connect to iCloud Photos.

The scanning system is based on a more complex premise. Working from a database of known CSAM, the software will scan the photos and, if it recognizes material that matches known illicit images, flag the iCloud account and report the data to the National Center for Missing and Exploited Children (NCMEC).
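Conceptually, the matching step is closer to comparing digital fingerprints of photos against fingerprints of known illicit images than to looking at the pictures themselves. The sketch below illustrates only that general idea; the plain SHA-256 file digest, the folder name, and the flagging threshold are assumptions made for demonstration, not the perceptual-hashing and cryptographic matching techniques Apple actually describes.

```python
# Illustrative sketch of matching photo hashes against a database of known
# hashes. Everything here (hash function, folder name, threshold) is a
# simplifying assumption for demonstration purposes only.
import hashlib
from pathlib import Path

# Hypothetical set of hashes corresponding to known illicit images.
KNOWN_HASHES: set[str] = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

FLAG_THRESHOLD = 3  # hypothetical number of matches before an account is flagged


def image_hash(path: Path) -> str:
    """Return a hex digest of the file's bytes (a stand-in for a perceptual hash)."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def should_flag_account(photo_paths: list[Path]) -> bool:
    """Count photos whose hash appears in the database and compare to the threshold."""
    matches = sum(1 for p in photo_paths if image_hash(p) in KNOWN_HASHES)
    return matches >= FLAG_THRESHOLD


if __name__ == "__main__":
    photos = list(Path("icloud_photos").glob("*.jpg"))  # hypothetical local folder
    if should_flag_account(photos):
        print("Account flagged for review and possible report to NCMEC.")
```

A real deployment differs in important ways: perceptual hashes are designed to tolerate resizing and re-encoding, and Apple describes the matching as happening on the device itself rather than as a plain server-side lookup.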

Apple’s announcement drew mixed responses, all of them loud and decisive. Critics argue that this technology and its implementation represent a massive privacy violation and create the possibility of harmful errors for iPhone and iPad users. It also gives abusive authorities an opportunity to push further in their efforts to violate user privacy.

Regarding the algorithm’s margin of error, Apple argues that there is only a one-in-a-trillion chance per year of incorrectly reporting an account. Even this argument met with criticism from privacy experts, who consider the calculation impossible to verify.
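A figure like one in a trillion presumably comes from combining a low per-image false-match rate with a requirement that several independent matches occur before an account is reported. The short calculation below is purely illustrative: the expected false-match rate, the 30-match threshold, and the Poisson model are assumptions chosen to show the shape of such an estimate, not Apple's actual analysis.

```python
# Purely illustrative arithmetic: none of these figures come from Apple.
from math import exp, factorial

lam = 6.0  # hypothetical expected number of false matches per account per year
t = 30     # hypothetical number of matches required before an account is flagged

# Probability of reaching the threshold purely by chance, modeled as a
# Poisson tail and truncated once the remaining terms are negligible.
prob_false_flag = sum(exp(-lam) * lam**k / factorial(k) for k in range(t, t + 100))
print(f"Illustrative false-flag probability per account per year: {prob_false_flag:.1e}")
# Prints a value on the order of 1e-12, i.e. roughly one in a trillion.
```

Change either assumed input and the headline number moves by orders of magnitude, which is exactly the critics' point: the result depends entirely on inputs that outsiders cannot measure.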

Debate ensued online, with thousands of recognized experts and privacy-focused organizations largely agreeing on the dangers of rolling out this technology. While the consensus is not unanimous, the majority sees this as a problem rather than a solution.

Government Abuse

One of the concerns getting the most attention is that governments may force Apple to use this system for purposes beyond detecting and reporting CSAM. By connecting the technology to databases of a different nature, abusive governments around the world could scan private assets and flag users according to political or religious criteria.

Some say that this should not come as a surprise. A few years ago, the FBI fought a bitter battle with Apple over unlocking a terror suspect’s iPhone. The FBI did not manage to unlock the device, which became a major obstacle for the investigation, and Apple fought hard to defend its stance. This new technology seems to stand in stark contrast with that earlier position, opening the door for authorities to violate user privacy almost at will.

Apple’s Response

We could say that Apple expected this reaction from the public, so its thorough defense comes as no surprise. Through a recently published FAQ page, the company addresses some of the main concerns that experts are discussing.

“Could governments force Apple to add non-CSAM images to the hash list?” is one of the questions found on the page. The company’s definitive answer: “Apple will refuse any such demands.”

However, this answer seems to overlook, or simply ignore, scenarios such as China, where the government has already forced Apple to bend its privacy policies to protect its political interests.

Politicians around the world have praised Apple’s decision, which is not necessarily a good sign. If governments are celebrating such technology, it is easy to suspect that there may be some intention to intervene in the near future.

We can expect further debate, with privacy experts also defending the nature of private property, such as the devices we pay for.