Please note: The algorithm descriptions in English have been automatically translated. Errors may have been introduced in this process. For the original descriptions, go to the Dutch version of the Algorithm Register.

Protective Monitoring

The police use Protective Monitoring to quickly see when information in police systems is being misused.

Last change on 15th of December 2025, at 8:34 (CET) | Publication Standard 1.0
Publication category
Impactful algorithms
Impact assessment
DPIA
Status
In use

General information

Theme

Public Order and Safety

Begin date

2025-01

Contact information

https://www.politie.nl/

Responsible use

Goal and impact

Protective Monitoring is a system that monitors log files of police systems. The system is designed to quickly find anomalous or risky use of police data. This helps protect the police, police officers, citizens and society from misuse of data.


Protective Monitoring makes it possible to intervene quickly when there is suspected misuse, so that the damage is limited. It is also a preventive tool, allowing employees to be warned and corrected in time. This prevents structural misuse of police systems.

Considerations

The police rely on the professionalism and integrity of their employees. Police employees have a certain professional freedom of action to carry out their work as they see fit. However, preventive measures alone cannot stop this freedom from being abused, whether consciously or unconsciously. That is why Protective Monitoring helps to detect unlawful use of police systems at an early stage.


Other ways to achieve the same goal are impractical and would be much more intrusive for employees, such as constantly applying the four-eyes principle.


Protective Monitoring infringes to some extent on the right to protection of personal data. This applies to both police employees and citizens whose data are in the police systems. But Protective Monitoring also protects these employees and citizens from misuse of police data. Protective Monitoring therefore also contributes to the protection of privacy.

Human intervention

All reports from Protective Monitoring are checked by hand by an analyst. If there are indications of anomalous, risky or abusive use, a supervisor will ask the employee what exactly happened. Sometimes the improper use may be unintentional. This conversation helps the employee adjust the way he or she works so that the information remains better protected. If necessary, further investigation and possibly a sanction will follow.

Risk management

A Data Protection Impact Assessment (DPIA) has been carried out to see what risks exist. To manage these risks, measures have been taken.


Protective Monitoring uses various detection rules. New rules and significant changes are reviewed by a group of experts. This group consists of members of the works council and experts in privacy, ethics, integrity and day-to-day policing. They ensure that the rules are fair, reliable and effective.


The functioning of Protective Monitoring is regularly tested and evaluated. If necessary, adjustments are made. What we learn from reports is used to improve detection rules.


No profiling takes place based on gender, age, origin, or other personal characteristics. Protective Monitoring only looks at the actions recorded in the systems' log files.

Legal basis

Proactive monitoring for unlawful access to or processing of police data is a legal obligation arising from Articles 4a(2) and 32a of the Wpg (Police Data Act).

Links to legal bases

  • Wpg article 4a: https://wetten.overheid.nl/BWBR0022463/2025-07-01/#Paragraaf1_Artikel4a
  • Wpg article 32a: https://wetten.overheid.nl/BWBR0022463/2025-07-01/#Paragraaf5_Artikel32a

Impact assessment

Data Protection Impact Assessment (DPIA)

Operations

Data

Protective Monitoring uses log data from the police systems. The log data is used to determine what actions a user performed at a specific time and place. Personnel and police data are also used to assess the context of those actions. Protective Monitoring uses only pre-existing personnel and police data.

Technical design

Protective Monitoring calculates a risk score for each user of the police systems. This score indicates the extent to which a user's actions deviate from what is normal when using the police systems. The risk score is based on multiple detection rules. Each detection rule analyses the log data to see if there may be unauthorised use of the systems. Each rule contributes a certain percentage to the overall risk score.


No alert is raised for an individual detection rule. An alert is generated only when the total risk score exceeds a certain threshold. Each alert can be explained and traced back to the specific detection rules that contributed to the score, and to the associated log data.
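The mechanism described above can be sketched as follows. This is a minimal illustration only: the rule names, weights and threshold are hypothetical and do not reflect the police's actual detection rules. It shows how several weighted rules can contribute to one total risk score, how an alert fires only above a threshold, and how the alert retains the per-rule contributions so it stays explainable.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class DetectionRule:
    name: str
    weight: float                      # percentage contribution to the total risk score
    triggered: Callable[[dict], bool]  # inspects a user's aggregated log data

ALERT_THRESHOLD = 50.0  # hypothetical threshold; no alert at or below this score

# Hypothetical detection rules, each examining a different aspect of the log data.
RULES = [
    DetectionRule("queries_outside_shift", 30.0,
                  lambda logs: logs.get("off_shift_queries", 0) > 0),
    DetectionRule("high_query_volume", 40.0,
                  lambda logs: logs.get("queries", 0) > 100),
    DetectionRule("lookup_without_case", 30.0,
                  lambda logs: logs.get("caseless_lookups", 0) > 0),
]

def risk_score(logs: dict) -> tuple[float, list[str]]:
    """Return the total risk score and the names of the rules that fired."""
    fired = [rule.name for rule in RULES if rule.triggered(logs)]
    total = sum(rule.weight for rule in RULES if rule.name in fired)
    return total, fired

def evaluate(logs: dict) -> Optional[dict]:
    """Generate an explainable alert only when the total score exceeds the threshold."""
    total, fired = risk_score(logs)
    if total <= ALERT_THRESHOLD:
        return None  # individual rules firing is not enough on its own
    # The alert carries the contributing rules and the log data, so an
    # analyst can trace it back before any human follow-up takes place.
    return {"score": total, "rules": fired, "logs": logs}
```

For example, a user with 150 queries, two of them outside shift hours, would trigger two rules (weights 40 and 30) for a score of 70 and produce an alert, while a user with 10 ordinary queries triggers none and produces no alert.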


Protective Monitoring is not self-learning and does not use opaque or unexplainable algorithms, as is sometimes the case with AI applications.

Similar algorithm descriptions

  • The reporting system's algorithm recognises words in reports, such as 'rubbish' or 'pavement', and automatically determines the correct category and department. As a result, reporters no longer have to choose a category, and reports are dealt with faster at the right department.

    Last change on 8th of January 2025, at 11:23 (CET) | Publication Standard 1.0
    Publication category
    Other algorithms
    Impact assessment
    DPIA
    Status
    In use
  • Data analysis for evaluating the need to apply camera surveillance at a specific location to maintain public order.

    Last change on 9th of September 2025, at 15:33 (CET) | Publication Standard 1.0
    Publication category
    Impactful algorithms
    Impact assessment
    DPIA
    Status
    In use
  • This algorithm creates a risk sorting of educational institutions (schools, courses or school boards) to conduct targeted desk research, as part of the annual performance and risk analysis.

    Last change on 22nd of May 2024, at 8:41 (CET) | Publication Standard 1.0
    Publication category
    Impactful algorithms
    Impact assessment
    IAMA
    Status
    In use
  • Software for monitoring social media sentiments on specified topics.

    Last change on 21st of June 2024, at 10:25 (CET) | Publication Standard 1.0
    Publication category
    Impactful algorithms
    Impact assessment
    Field not filled in.
    Status
    Out of use
  • Tool that assists front desk staff in establishing identity by comparing the face of the person reporting to the photo on an identity document. Helps prevent look-alike fraud.

    Last change on 26th of June 2025, at 9:59 (CET) | Publication Standard 1.0
    Publication category
    Other algorithms
    Impact assessment
    Field not filled in.
    Status
    In use