Please note: The algorithm descriptions in English have been automatically translated. Errors may have been introduced in this process. For the original descriptions, go to the Dutch version of the Algorithm Register.

Threat To Life model

This model is used by specialist detection teams within the police. It helps them quickly detect serious threats, such as planned murders, kidnappings or aggravated assaults, to prevent these crimes. Searching through millions of messages manually is impossible. Therefore, this model works like a search engine that finds the 'needle in the haystack'.

Last change on 15th of December 2025, at 8:39 (CET) | Publication Standard 1.0
Publication category
Other algorithms
Impact assessment
Field not filled in.
Status
In use

General information

Theme

Public Order and Safety

Begin date

2020-04

Contact information

https://www.forensischinstituut.nl/

Link to publication website

https://www.politie.nl/

Responsible use

Goal and impact

The threat-to-life model automatically screens messages, such as intercepted EncroChat messages, for life-threatening content: death threats, kidnappings or assaults. The system looks at signal words (for example 'death', 'cutting off' and 'sleeping') and the context in which they are used to determine how serious the threat is. This saves the police a great deal of time, as searching through millions of messages by hand is practically impossible.
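
Purely as an illustration of the idea of signal words plus context (the actual model, its vocabulary and its weighting are not published; the words and scores below are assumptions), a keyword-and-context screen could look like this minimal sketch:

    # Illustrative sketch only: hypothetical signal words and weights,
    # not the actual police/NFI model or its vocabulary.
    SIGNAL_WORDS = {"death", "cutting off", "sleeping"}       # terms named in the description
    CONTEXT_WORDS = {"tonight", "address", "paid", "warned"}  # hypothetical context cues

    def screen_message(text: str) -> float:
        """Give a rough 0-1 indication based on signal words and supporting context."""
        lowered = text.lower()
        signal_hits = sum(word in lowered for word in SIGNAL_WORDS)
        context_hits = sum(word in lowered for word in CONTEXT_WORDS)
        if signal_hits == 0:
            return 0.0
        # Signal words alone give a moderate score; supporting context raises it.
        return min(1.0, 0.4 + 0.2 * signal_hits + 0.1 * context_hits)

    print(screen_message("He is sleeping tonight, you know the address"))  # ~0.8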

Considerations

The advantages of this model are the large volume of messages that can be checked quickly for potential threats to life and the model's sensitivity: even seemingly innocuous words, such as 'sleep', can be recognised as a threat. This helps to prevent real violence.

A possible drawback is that the system can have blind spots. This means that threatening messages can be missed (so-called false negatives) or false alarms can be raised (so-called false positives).

Human intervention

The algorithm gives each message a "threat score" (between 0 and 1), but the decision to actually alert or intervene lies with people.
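
As a minimal sketch of that division of roles (the threshold, names and queue below are assumptions, not the actual police workflow), scores above a chosen cut-off would only be queued for a human analyst, never acted on automatically:

    # Sketch of score-then-review: the 0.8 threshold and the queue are
    # assumptions for illustration, not the actual alerting workflow.
    REVIEW_THRESHOLD = 0.8  # hypothetical cut-off

    def route_message(message_id: str, threat_score: float, review_queue: list) -> None:
        """Queue high-scoring messages for a human analyst; take no automatic action."""
        if threat_score >= REVIEW_THRESHOLD:
            review_queue.append((message_id, threat_score))

    queue: list = []
    route_message("msg-001", 0.93, queue)
    route_message("msg-002", 0.12, queue)
    print(queue)  # only msg-001 is put before a human analyst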

Risk management

Human analysts check the threat score to avoid false alarms. The model is also continually improved through feedback and new training data. The police and the NFI ensure that the data used remains clean and that the model adapts to new forms of communication, such as street slang or changing threat patterns.
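
As a sketch of such a feedback loop (the labels, function names and retraining step are assumptions, not the published NFI process), analyst verdicts could be folded back into the training data before the model is refitted:

    # Sketch of a feedback loop: reviewer verdicts become new labelled examples.
    # The retraining step itself is left out; it stands for whatever training
    # procedure the NFI actually uses.
    def update_training_set(training_set: list, reviewed: list) -> list:
        """Append human-reviewed (text, is_threat) pairs to the training data."""
        return training_set + list(reviewed)

    training_set = [("example threat text", 1), ("harmless chat", 0)]
    reviewed = [("new slang threat", 1), ("message flagged in error", 0)]
    training_set = update_training_set(training_set, reviewed)
    # model = retrain(training_set)  # hypothetical retraining call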

Legal basis

The processing of investigative data falls under Article 9 of the Police Data Act (Wpg): processing for the purpose of maintaining the legal order in a specific case.

The data to be analysed by the threat-to-life model were obtained under articles 94, 126h-126w and 552i of the Code of Criminal Procedure.

Links to legal bases

  • Wpg Article 9: https://wetten.overheid.nl/BWBR0022463/2025-07-01#Paragraaf2_Artikel9
  • Code of Criminal Procedure article 94: https://wetten.overheid.nl/BWBR0001903/2018-07-28/#BoekEerste_TiteldeelIV_AfdelingDerde_Paragraaf1_Artikel94
  • Code of Criminal Procedure article 126h: https://wetten.overheid.nl/BWBR0001903/2018-07-28/#BoekEerste_TiteldeelIVA_AfdelingTweede_Artikel126h
  • Code of Criminal Procedure article 552i: https://wetten.overheid.nl/BWBR0001903/2018-07-28

Operations

Data

The model is trained with examples of (death) threats from cryptocommunication. These examples were selected by police experts and labelled to indicate what kind of threat they contain.

Technical design

Supervised learning was used to train a language model. The model scores new texts between 0 and 1: the closer the score is to 1, the more likely the message contains a threat.
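
Purely as a stand-in for the idea (the real system is a language model trained by the NFI on expert-labelled cryptocommunication; the toy data and the scikit-learn setup below are assumptions), a supervised text classifier that returns a 0-1 score could be sketched as follows:

    # Stand-in sketch: a simple supervised text classifier with a 0-1 output.
    # Toy data and model choice are illustrative assumptions only.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    texts = [
        "he will be sleeping tonight",     # labelled as threatening (toy example)
        "meet me at the gym tomorrow",     # labelled as harmless (toy example)
        "we will cut him off for good",    # threatening
        "can you send the invoice today",  # harmless
    ]
    labels = [1, 0, 1, 0]  # 1 = threatening, 0 = not threatening

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(texts, labels)

    # predict_proba gives the probability of class 1: the closer to 1,
    # the more likely the message contains a threat.
    score = model.predict_proba(["he sleeps tonight"])[0, 1]
    print(round(score, 2))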

External provider

Netherlands Forensic Institute (NFI), part of the Ministry of Justice and Security

Similar algorithm descriptions

  • The risk model (the algorithm) helps choose which NOW applications to investigate further. It gives indications as to whether the information provided in the NOW application is correct. Using these indications from the risk model, SZW reviews the application.

    Last change on 10th of June 2024, at 13:42 (CET) | Publication Standard 1.0
    Publication category
    Impactful algorithms
    Impact assessment
    Field not filled in.
    Status
    In use
  • Using this model, we predict how likely it is that someone registered as living alone is in fact cohabiting. This model was used to gain experience and was never put into use. The development of this model has been stopped.

    Last change on 3rd of September 2025, at 7:35 (CET) | Publication Standard 1.0
    Publication category
    High-Risk AI-system
    Impact assessment
    IAMA
    Status
    Out of use
  • The model helps detect and analyse irregularities following the allocation of a Wmo/Jeugdwet provision. The model signals whether further investigation is needed into the spending of funds.

    Last change on 5th of July 2024, at 9:31 (CET) | Publication Standard 1.0
    Publication category
    Impactful algorithms
    Impact assessment
    IAMA
    Status
    In use
  • This algorithm helps retrieve information in cold case files. It uses a language model to search for the meaning of words and not just the exact words.

    Last change on 28th of January 2025, at 13:28 (CET) | Publication Standard 1.0
    Publication category
    Other algorithms
    Impact assessment
    DPIA, Quickscan ethiek
    Status
    In use
  • This algorithm, the forecasting model, provides insight into future peak moments at the municipality's Citizen Affairs desks. This allows additional staff to be hired in time for expected peak moments. This model also supports financial planning.

    Last change on 27th of June 2024, at 13:10 (CET) | Publication Standard 1.0
    Publication category
    Other algorithms
    Impact assessment
    Field not filled in.
    Status
    In use