New Apple algorithms to protect children from violent treatment

Modern algorithms based on machine-learning techniques have already been used to detect various kinds of criminal activity. Some of them are widely used for identifying phishing messages, whereas others can detect fraudulent activity in images. Apple's team has adopted artificial intelligence to tackle one more problem: violence against children.

According to the latest news, the company is going to release a brand-new tool for detecting violence against children based on photo analysis. As with any technology, the new algorithm has both positive and negative sides.

The technology behind the violence detector

The algorithm used by Apple for detecting violence against children is a neural network that was trained on a large number of photos depicting illegal content in order to learn the markers typical of this type of picture. After being trained on this external content, the algorithm is now capable of detecting the same markers in other images.

This neural network works on the same principles as the standard photo gallery app of iOS, which already offers image-based search.
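One common way such systems recognise images is by reducing each picture to a compact fingerprint, so that visually similar images produce similar fingerprints. The sketch below shows a toy average-hash in that spirit; this is a general illustration of the approach, not Apple's actual algorithm, and the tiny 2x2 "images" are made up for the example.

```python
# Toy sketch of image fingerprinting in the spirit of perceptual hashing.
# NOT Apple's algorithm -- just the general idea: reduce an image to a tiny
# grid of brightness values, then keep one bit per pixel, so near-duplicate
# images end up with identical or very close fingerprints.

def average_hash(pixels):
    """pixels: a small 2-D grid of grayscale values (0-255)."""
    flat = [v for row in pixels for v in row]
    mean = sum(flat) / len(flat)
    # One bit per pixel: brighter than the average or not.
    return tuple(1 if v > mean else 0 for v in flat)

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return sum(x != y for x, y in zip(a, b))

img = [[10, 200], [220, 15]]        # hypothetical tiny grayscale image
near_dup = [[12, 198], [225, 14]]   # slightly altered copy of the same image
# The altered copy keeps the same bright/dark pattern, so the hashes match.
assert hamming(average_hash(img), average_hash(near_dup)) == 0
```

A matching fingerprint (small Hamming distance) is what lets a gallery app group or search similar photos without comparing raw pixels.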

According to the developers, this technology was designed to keep all of the calculations right on the phone. The aim of such a design is, of course, to protect the privacy of phone users. Yet once the system finds anything suspicious, it will send it to humans for further review.
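The on-device design described above can be sketched as follows. Everything here is hypothetical (the `score_image` stub, the threshold value): the point is only that scoring runs locally, and only items crossing a threshold are queued for human review.

```python
# Minimal sketch of on-device flagging. All scoring happens locally;
# only images whose score crosses a threshold are queued for human review.
# The model, threshold, and function names are hypothetical stand-ins.

THRESHOLD = 0.9  # hypothetical confidence cutoff

def score_image(image_bytes: bytes) -> float:
    """Stand-in for an on-device model; returns a match confidence in [0, 1]."""
    # A real system would run a trained neural network here.
    return 0.0

def scan_library(images: dict) -> list:
    """Scan a {name: bytes} photo library and return names flagged for review."""
    flagged = []
    for name, data in images.items():
        if score_image(data) >= THRESHOLD:
            flagged.append(name)  # only these items leave the device
    return flagged
```

With the stub model nothing is flagged, which mirrors the privacy goal: unflagged photos never leave the phone.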

Needless to say, the idea behind this new tool is a good one, as it can help detect illegal activity and protect many children from violence. Unfortunately, the system is not perfect, and some problems can arise.

How can neural networks help people identify pictures?

One of the most popular machine-learning approaches to identifying images is based on unsupervised clustering. The developers of such an algorithm feed it a large amount of visual data without providing any labels or selection criteria. At the very beginning, the algorithm has no idea what is depicted in the images. Yet, as it goes over more and more pictures, it is able to pick up similarities and form so-called clusters. As a result, the algorithm becomes capable of finding the typical markers that identify illegal content in any picture.

Once the training is finished, the algorithm is evaluated on data sets that were not used during the learning process. This is a very typical way of applying artificial intelligence to fraud identification.
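The held-out evaluation mentioned above boils down to comparing the model's predictions with the true labels of examples it has never seen. A minimal sketch, with made-up labels and predictions for illustration:

```python
# Sketch of held-out evaluation: the model never sees the test set during
# training, so its accuracy there estimates real-world performance.

def accuracy(predictions, labels):
    """Fraction of predictions that match the true labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

# Hypothetical labels and predictions on a held-out test set (1 = flagged).
test_labels = [1, 0, 1, 1, 0]
test_preds  = [1, 0, 0, 1, 0]
print(accuracy(test_preds, test_labels))  # -> 0.8
```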

Modern algorithms do not work perfectly

No matter how impressed we are by the abilities of neural networks, they are not perfect and are prone to making mistakes. In particular, artificial-intelligence algorithms are prone to producing false positives, which can cause a lot of trouble for people who have never done anything illegal.
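A quick back-of-the-envelope calculation shows why false positives matter at scale. The numbers below are purely illustrative assumptions, not Apple's figures: even a classifier that is wrong on only 0.1% of innocent photos would, over 100 million scanned photos, flag on the order of a hundred thousand innocent images.

```python
# Why the base rate matters: a tiny false-positive rate applied to a huge
# number of innocent photos still yields many wrongly flagged items.
# All numbers here are illustrative assumptions.

def expected_false_positives(n_innocent: int, specificity: float) -> int:
    """Expected number of innocent items wrongly flagged."""
    return round(n_innocent * (1 - specificity))

print(expected_false_positives(100_000_000, 0.999))  # -> 100000
```

Every one of those flags would be an innocent user whose photo gets sent for human review.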

Certainly, the tool can also be misused on purpose, especially for controlling political opposition.

Can anything be done to protect iOS users?

According to specialists, the only way of protecting users from the downsides of this programme is to use end-to-end encryption for information exchange. The problem is that, so far, Apple's developers have not been able to apply this technology to iCloud. It may turn out that an exception is made for information exchange when it comes to the detection of illegal content.
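The core property of end-to-end encryption is that only the two endpoints hold the key, so a server relaying the data sees only unreadable ciphertext. The sketch below demonstrates that property with a toy XOR keystream; it is deliberately simplified and is NOT secure cryptography (real systems use vetted ciphers such as AES), and has nothing to do with Apple's actual implementation.

```python
# Toy illustration of the end-to-end encryption property: only the sender
# and receiver hold the key, so the relay server cannot read the content.
# The XOR keystream below is NOT secure crypto -- it only shows the idea.
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudo-random byte stream of the given length from the key."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))

decrypt = encrypt  # XOR with the same keystream is its own inverse

shared_key = secrets.token_bytes(32)       # known only to the two endpoints
ciphertext = encrypt(shared_key, b"photo")  # this is all the server ever sees
assert ciphertext != b"photo"
assert decrypt(shared_key, ciphertext) == b"photo"
```

Because the server never holds `shared_key`, server-side content scanning becomes impossible, which is exactly the tension the article points out between this protection and the detection tool.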