Data protection: privileged data processing


The new, harmonised EU General Data Protection Regulation (GDPR) came into force on 24 May 2016 and applies from 25 May 2018 without any further transitional period. All companies operating in the EU, regardless of where they are registered, are subject to it, and the alignment of Swiss law is already underway. For us, this is reason enough to examine the GDPR in more detail in a series of articles.

Privileged procedures

The documentation of the technical and organisational measures (“TOMs”) records, for each data processing operation, the means by which adequate protection is achieved. The General Data Protection Regulation describes a number of procedures that are legally privileged, i.e. that should be applied as part of the technical and organisational measures wherever possible. These are:

  1. the use of anonymised data,
  2. the use of pseudonymised data,
  3. the use of statistical data,
  4. the use of encrypted data.

This order of precedence cannot be interpreted strictly linearly, since combinations are of course possible. In the following, we discuss the listed procedures from a data protection perspective. As a rule, all of them are much harder to implement correctly and securely than they may appear on the surface. In an article on data protection, however, these technical aspects would lead too far, which is why we assume, for once, a professional and error-free technical implementation. (If only it were that simple in practice.)

Anonymisation

Anonymisation is the process of removing from a database those characteristics that allow the stored data to be assigned to specific persons. Where this can be implemented, it is an excellent procedure: anonymisation means that the data no longer relates to a person and is therefore no longer subject to the provisions of data protection law. As we stated in part 1 of this series of articles, only data relating to an identified or identifiable natural person is subject to data protection, nothing else.

Let us assume that we compile crime statistics. As a basis, we have a list of all criminal cases of the last 10 years. Anonymising the perpetrator for their protection means systematically removing from this list the information that makes them identifiable. The first, obvious measure is therefore to delete those attributes from the database that allow a direct assignment to a person, for example the perpetrator's name and address.

However, this alone does not really make the dataset anonymous. It remains to be checked whether indirect identification is still possible, because a person is only considered anonymous if he or she cannot be identified even with considerable effort, for example by drawing on other sources of information. Perhaps newspaper articles about the crime have been published that nevertheless make it possible to re-identify the person. Or further, at first glance perhaps inconspicuous, information on the perpetrator may be stored whose combination in many cases makes it possible to narrow the group of people in question down to the person concerned.

Accordingly, irreversible anonymisation is usually not easy to implement, or even impossible given the requirements.
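The two steps just described – deleting direct identifiers and then checking whether the remaining attributes still single anyone out – can be sketched roughly as follows. All field names and data here are illustrative assumptions, not a real schema:

```python
# Sketch: removing direct identifiers is only the first step of anonymisation;
# the remaining quasi-identifiers must also be checked for unique combinations.
from collections import Counter

DIRECT_IDENTIFIERS = {"name", "address"}        # allow direct assignment
QUASI_IDENTIFIERS = ["birth_year", "postcode"]  # may allow indirect assignment

def strip_direct_identifiers(records):
    """Delete attributes that directly identify a person."""
    return [{k: v for k, v in r.items() if k not in DIRECT_IDENTIFIERS}
            for r in records]

def singled_out(records):
    """Return quasi-identifier combinations that occur only once.

    A combination shared by a single record can still identify the person,
    e.g. together with a newspaper article about the case."""
    counts = Counter(tuple(r[k] for k in QUASI_IDENTIFIERS) for r in records)
    return [combo for combo, n in counts.items() if n == 1]

cases = [
    {"name": "A. Example", "address": "…", "birth_year": 1980, "postcode": "3000"},
    {"name": "B. Example", "address": "…", "birth_year": 1980, "postcode": "3000"},
    {"name": "C. Example", "address": "…", "birth_year": 1955, "postcode": "8000"},
]
anonymised = strip_direct_identifiers(cases)
print(singled_out(anonymised))  # the 1955/8000 case is still unique
```

A real anonymisation check would of course have to consider far more attribute combinations and external data sources than this toy example.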

Pseudonymisation

Pseudonymisation is the process of removing personal identifying features from data and storing them separately. It is therefore a kind of partial anonymisation: the key for reversing the process is stored in a separate place.

Let us illustrate this using the example from the Anonymisation section. Pseudonymisation would mean that, as with anonymisation, the surname and first name of the perpetrator are deleted from the database and replaced by a unique sequence number. A translation table would then be kept in a second data store, listing the surname and first name for each sequence number. This key table must be stored separately from the data and protected.
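The scheme described above can be sketched as follows; the field names and the example record are illustrative assumptions:

```python
# Sketch of pseudonymisation: surname and first name are replaced by a
# sequence number, and a separate key table maps the number back to the person.

def pseudonymise(records):
    key_table = {}      # must be stored and protected separately from the data
    pseudonymised = []
    for seq, r in enumerate(records, start=1):
        key_table[seq] = (r["surname"], r["first_name"])
        r = {k: v for k, v in r.items() if k not in ("surname", "first_name")}
        r["person_id"] = seq
        pseudonymised.append(r)
    return pseudonymised, key_table

cases = [{"surname": "Example", "first_name": "Anna", "offence": "burglary"}]
data, keys = pseudonymise(cases)
print(data)   # [{'offence': 'burglary', 'person_id': 1}]
print(keys)   # {1: ('Example', 'Anna')}
```

The crucial point is organisational, not technical: whoever holds both `data` and `keys` can trivially undo the pseudonymisation, which is why the key table must live in a separately protected store.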

Pseudonymisation is therefore not as effective as anonymisation, but it increases the effort required to identify persons. For this reason, the procedure is legally privileged where full anonymisation is not possible.

Statistical data

Statistical data means data aggregated from individual case data. To stay with the above example of crime statistics: the starting point for each update is still an individual case, but the data is only stored as a sum of incidents, e.g. the number of burglaries per month. Such aggregation also results in a partial anonymisation with the same advantages under data protection law. Note, however, that where the number of cases is low, a statistical sum of one remains an individual case, so identification of the person potentially becomes possible again.
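The low-case-count caveat can be made concrete with a small sketch: aggregate individual cases into monthly sums, but flag cells whose count is so low that the “statistic” is in effect still an individual case. The threshold, field names and data are illustrative assumptions:

```python
# Sketch: aggregation of individual cases into statistics, with a minimum
# cell size below which a cell should not be published as-is.
from collections import Counter

def aggregate(cases, threshold=3):
    counts = Counter((c["month"], c["offence"]) for c in cases)
    safe, risky = {}, {}
    for cell, n in counts.items():
        (safe if n >= threshold else risky)[cell] = n
    return safe, risky

cases = [
    {"month": "2018-01", "offence": "burglary"},
    {"month": "2018-01", "offence": "burglary"},
    {"month": "2018-01", "offence": "burglary"},
    {"month": "2018-02", "offence": "arson"},
]
safe, risky = aggregate(cases)
print(safe)   # {('2018-01', 'burglary'): 3}
print(risky)  # {('2018-02', 'arson'): 1} – still an individual case
```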

Example: Insufficient protection of voting secrecy in statistics

I saw a vivid example of this problem when an e-voting system was introduced for Swiss citizens abroad, which touched not only data protection law but also the legislation on political rights. The secrecy of a vote or election must be maintained: it must not be possible to trace who voted for which candidate, or who agreed or disagreed with which proposal.
On voting Sunday, the municipalities were to report the votes cast by “their” Swiss abroad via e-voting separately, so that the public could be informed transparently about the use of this voting channel – also because of the many concerns about possible manipulation. Unfortunately, in practice it turned out that some small municipalities had so few registered Swiss abroad that only a handful of e-votes were regularly cast. As a result, it was sometimes possible to read directly from the statistics how the Swiss abroad in question had voted. The problem was finally solved by no longer crediting these votes to the municipalities, but to a separate, cross-communal constituency for Swiss abroad.
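The underlying idea of the fix – do not publish cells that are too small per municipality, but credit them to a combined pool – can be sketched in simplified form. The threshold and municipality names are illustrative assumptions, not the actual figures or rules used:

```python
# Simplified sketch: counts below a minimum cell size are not published per
# municipality but credited to a cross-communal pool, protecting ballot secrecy.

def publishable_counts(votes_per_municipality, threshold=10):
    published = {}
    pooled = 0
    for municipality, n in votes_per_municipality.items():
        if n >= threshold:
            published[municipality] = n
        else:
            pooled += n  # too few votes to publish without risking secrecy
    if pooled:
        published["cross-communal pool"] = pooled
    return published

votes = {"Bern": 124, "Smallville": 2, "Tinytown": 1}
print(publishable_counts(votes))
# {'Bern': 124, 'cross-communal pool': 3}
```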

Encryption

Encrypting data is a method of increasing its protection. In a first step, data is usually protected from third-party access using firewalls, authorisation concepts and so on. Encryption allows this access to be restricted even further, to only a few specific persons. In particular, encryption can also prevent access by internal system administrators and other IT personnel, or at least restrict it to a few people. At the same time, it introduces an additional security hurdle for attackers should the other protective measures be circumvented.

However, any encryption method is only as secure as the key or keys used and their storage. It is therefore often difficult to achieve truly effective protection, because a “pragmatic” implementation frequently results primarily in so-called security by obscurity, i.e. a nice tranquilliser with the effectiveness of a placebo: you feel better, but not really because of the ingredients.
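As one possible illustration – using the widely used third-party `cryptography` package, a sketch rather than an endorsement of a particular product or a complete key-management concept – the point is that all of the protection rests on the key, which must be generated, stored and guarded separately from the data:

```python
# Sketch of symmetric encryption with the third-party "cryptography" package.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # must live in a protected key store,
fernet = Fernet(key)         # never alongside the encrypted data

ciphertext = fernet.encrypt(b"name=Example, offence=burglary")
print(fernet.decrypt(ciphertext))  # b'name=Example, offence=burglary'
```

Anyone who obtains `key` can read everything; anyone who obtains only `ciphertext` cannot – which is exactly why key storage, not the cipher itself, is usually the hard part.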

Encrypting data also has significant disadvantages. On the one hand, there is usually a performance cost: all processing takes longer and the system slows down. On the other hand, recovery scenarios for restoring a service after an outage can become more difficult and slower as a result.

From a data protection perspective, encrypting as much personal data as possible is in principle always welcome. From a cost-efficiency point of view, however – i.e. the additional protection effectively achieved per unit of capital invested – the calculation often does not add up. Accordingly, encryption is usually used very selectively, where it can effectively counter truly considerable risks.

So which procedure to use where?

The procedures presented, which are privileged under data protection law, form a toolbox for increasing the protection of the persons concerned. All of them are desirable, with complete anonymisation being the ideal solution. Which procedures are to be used is decided on the basis of the risk assessment in the data protection impact assessment.


About the author

Stefan Haller is an IT expert at linkyard, specialising in risk management, information security and data protection. He supports companies and public authorities with risk analyses in projects, the design and implementation of compliance requirements in software solutions, and the creation of IT security and authorisation concepts. He is certified in risk management and, as an internal auditor, has carried out numerous security audits based on the ISO 27001 standard for more than 10 years.
Do you have any questions regarding the implementation in your company? | +41 78 746 51 16