The new, harmonised EU General Data Protection Regulation (GDPR) entered into force on 24 May 2016 and applies from 25 May 2018 without any further transitional period. All companies operating in the EU are subject to it, regardless of where they are headquartered, and an alignment of Swiss law is already underway. For us, this is reason enough to examine the GDPR in more detail in a series of articles.
Personal data requiring special protection
Data is considered THE resource of the future: the oil of the digital age, as it were. We leave more and more decisions in our lives to algorithms, and usually to our great advantage. We are happy to let the navigation device in our car record and transmit our location and speed if that makes the route guidance even more reliable at steering us past traffic jams and roadworks to our destination.
Article 9 of the General Data Protection Regulation prohibits the processing of information considered particularly critical, but grants a few exceptions. This sensitive information is referred to as special categories of personal data. The category includes:
- racial and ethnic origin
- political opinions
- religious and philosophical beliefs
- trade union membership
- genetic data
- biometric data for the unique identification of a person
- health data
- data concerning sexual life or sexual orientation
The last point is likely to make civil status a special category of data in some countries, namely where the law distinguishes, for example, between marriage and same-sex partnership.
In most cases – but not necessarily always – this restriction on use can be lifted by obtaining the explicit consent of the data subject. Where that is not possible in an individual case, there are exceptions for very specific purposes that may apply if certain safeguards are provided. One example is use in the context of health care.
Is the selective protection of individual attributes still up to date?
From a legal point of view, it may seem that information that is particularly worthy of protection can be identified and fenced off fairly easily on the basis of the above criteria. With today’s possibilities, however, classifying individual data attributes as good or bad is not enough in practice, because a great deal of personal data is constantly being disclosed unconsciously and indirectly. In a Stanford University study, for example, an algorithm was able to assess a person’s five basic personality factors (the “Big Five”) better than their work colleagues on the basis of just 10 Facebook likes. Given 70 likes, the algorithm already outperformed good friends and roommates; an average of 150 likes was enough to beat family members; and to match spouses, it needed 300 likes. In another study, an algorithm determined the sexual orientation of men with 88% accuracy solely from their position in a Facebook friendship network, and predicted with 85% accuracy whether the Americans taking part voted Democrat or Republican.
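To illustrate how such predictions work in principle, a like-based trait predictor can be sketched as a simple logistic model over a set of likes. All page names and weights below are invented for illustration; real models are trained on millions of profiles, but the mechanism is the same: many individually harmless signals add up to a sensitive inference.

```python
import math

# Hypothetical per-like weights, as a trained model might hold them
# (all values invented for this sketch).
LIKE_WEIGHTS = {
    "page:hiking": 0.8,
    "page:opera": -0.3,
    "page:esports": 1.1,
    "page:gardening": -0.6,
}
BIAS = -0.2

def predict_trait(likes):
    """Return the model's probability that a trait is present,
    given the set of pages a user has liked."""
    score = BIAS + sum(LIKE_WEIGHTS.get(like, 0.0) for like in likes)
    return 1.0 / (1.0 + math.exp(-score))  # logistic function

# Each like nudges the probability; none of them is "sensitive" on its own.
p = predict_trait({"page:hiking", "page:esports"})
```

The point of the sketch: no single input is a special category of data, yet the aggregate output may well be one.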
The above examples show that, when it comes to data protection, what needs to be scrutinised in a system today is much less the stored and processed data itself than the algorithms applied to it. In this respect, data protection is more topical than ever, but as a term it conveys an increasingly antiquated picture: the focus is shifting away from the processed data and towards what is done with it. The GDPR now addresses this aspect.
These algorithms – very useful in practice, but feared by data protection advocates – are summarised under the term profiling. Article 4 of the GDPR defines profiling as follows: “‘profiling’ means any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements”.
Profiling requirements that are difficult to meet
Firstly, profiling is subject to specific provisions concerning the rights to information and access and the right to object. In particular, there are information obligations at several points in the process that must not be forgotten. This appears largely feasible.
Secondly, a review and correction of the decision may be requested at any time. This is far more complicated, for it is in the nature of modern machine-learning algorithms that, much like our biological thought processes, they often cannot clearly state why they came to their conclusion. A human being would put it this way: “It felt right.” We have to come to terms with the uncomfortable thought that nowadays even computers draw their conclusions with a good pinch of intuition. But they do it so well and so accurately that it is worth relying on them.
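One pragmatic way to keep decisions reviewable is to prefer model classes whose scores can be decomposed into per-feature contributions. The sketch below uses an invented linear scoring model (all feature names and weights are hypothetical); with such a model, a review request can at least be answered with the factors that actually drove the score – something a deep network cannot offer out of the box.

```python
# Invented linear scoring model. With a linear model, each input's
# contribution to the score is directly inspectable - one reason why
# reviewability requirements can push towards simpler model classes.
WEIGHTS = {"income": 0.004, "open_claims": -1.5, "years_customer": 0.3}

def score_with_reasons(applicant):
    """Return the total score plus each feature's contribution,
    sorted by absolute impact, so a reviewer sees the drivers first."""
    contributions = {
        feature: WEIGHTS[feature] * applicant.get(feature, 0.0)
        for feature in WEIGHTS
    }
    total = sum(contributions.values())
    reasons = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return total, reasons

total, reasons = score_with_reasons(
    {"income": 5000, "open_claims": 2, "years_customer": 4}
)
```

Whether such reason codes satisfy a correction request in a given case is a legal question; technically, they are the minimum needed to answer one at all.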
However, the most restrictive article is Article 22(1): “The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her”.
The point, therefore, is that fully automated decisions of this kind are explicitly prohibited. The rationale is that any profiling process can only take a limited context of information into account, which may not include all relevant facts. The authors’ hope is that a human being may know and consider additional information. To what extent this hope is justified, and in which cases, we leave open here. It may well be doubted, however, that an average insurance clerk has much more information at hand when checking the hundredth doctor’s bill of the day.
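A common compliance pattern that follows from Article 22 can be sketched as a decision gate: the automated system acts alone only where the outcome is favourable to the data subject, and every adverse or borderline case is routed to a human reviewer. The threshold and identifiers below are invented; the structure is what matters.

```python
AUTO_APPROVE_THRESHOLD = 0.8  # invented threshold for this sketch

def decide(application_id, model_score, review_queue):
    """Automated decision gate: only favourable outcomes are fully
    automated; anything else goes to a human (Article 22 pattern)."""
    if model_score >= AUTO_APPROVE_THRESHOLD:
        return "approved"  # favourable: automation alone is acceptable
    review_queue.append(application_id)  # adverse/borderline: human decides
    return "pending_human_review"

queue = []
decide("A-1", 0.93, queue)  # approved automatically
decide("A-2", 0.41, queue)  # routed to a human reviewer
```

Note that the human review must be meaningful, not a rubber stamp; merely clicking “confirm” on the machine’s output would still count as a solely automated decision.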
Challengeability of decisions as a solution approach
Profiling and automated individual decisions are permitted if they are necessary to fulfil a contract with the data subject or if the data subject has given explicit consent. The first priority is then to ensure that the data subject can effectively challenge the automated decision. From an IT point of view, one of the more questionable aspects of the GDPR is that it rests on the increasingly outdated notion that people could, in principle, make better decisions.
“It’s never going to be perfect. … But if the car is unlikely to crash in a hundred lifetimes, or a thousand lifetimes then it is probably ok. … There is some chance that anytime a human driver gets in the car that they will have an accident that is their fault. It’s never zero. … The key threshold for autonomy is: how much better does autonomy need to be than a person before you can rely on it.”
— Elon Musk, Ted 2017
Creating a universal artificial intelligence that performs better than a human being in every situation will probably not be feasible for some time yet. Developing an artificial intelligence that outperforms humans in a few specific tasks, on the other hand, is very possible today. The question is therefore rather how much better a computer must be at driving a car, for example, before it is perceived as being on a par with a human in terms of safety.
“We operate internally with the assumption that an autonomous car needs to be 100 times better than a human.”
— Axel Nix, senior engineer in Harman International’s autonomous vehicle team, The Observer
The combination of profiling and particularly sensitive data categories is especially demanding under the GDPR; additional, sometimes massive restrictions lurk here. An in-depth discussion, however, is only really possible in the context of very specific individual cases, so we must dispense with it here. In any case, thorough clarification is absolutely essential.
For legitimate interests, however, a solution can almost always be found. It simply requires a thorough concept and possibly an iterative approach, with regular coordination with data protection. As a small consolation: once you have cleared these hurdles, you will have a certain competitive advantage that competitors cannot close without considerable expenditure of time.
Our series of articles on the topic
- In the lead-in article we drew attention to the need for action.
- In part 1 of the series we introduced the different actors and set the framework.
- In part 2 we highlighted the principles of data protection based on four pillars.
- Part 3 explained the special requirements for processing special categories of personal data and for profiling, which is considered particularly critical.
- Part 4 examines legally privileged, desirable processing methods.
- Part 5 of the series concludes with a framework for the pragmatic and appropriate implementation of data protection in your IT project.
About the author
Stefan Haller is an IT expert at linkyard, specialising in risk management, information security and data protection. He supports companies and public authorities with risk analyses in projects, with the design and implementation of compliance requirements in software solutions, and with the creation of IT security and authorisation concepts. He is certified in risk management and has carried out numerous security audits as an internal auditor based on ISO 27001 for more than 10 years.
Do you have any questions regarding the implementation in your company? firstname.lastname@example.org | +41 78 746 51 16