Please note: The algorithm descriptions in English have been automatically translated. Errors may have been introduced in this process. For the original descriptions, go to the Dutch version of the Algorithm Register.
Threat-to-life model
- Publication category
- Other algorithms
- Impact assessment
- Field not filled in.
- Status
- In use
General information
Theme
Begin date
Contact information
Link to publication website
Responsible use
Goal and impact
The threat-to-life model automatically screens messages, such as intercepted EncroChat messages, for life-threatening content such as death threats, kidnappings or aggravated assault. The system is trained on messages containing signal words (such as 'death', 'shooting' and 'sleeping') and on the context in which they are used, to determine whether a threat exists and how serious it is. This saves the police a great deal of time searching through millions of messages, a task that is almost impossible to do by hand.
Considerations
Using a model allows a quick pre-selection of possible death threats from a large amount of data, something that was not feasible manually. It also makes it possible to recognise seemingly innocuous words, such as 'sleep', as threats. This helps to prevent real violence.
A possible drawback is that the system can have blind spots: threatening messages may be missed (so-called false negatives) or alarms may be raised wrongly (so-called false positives).
Human intervention
The algorithm assigns each message a 'threat score' between 0 and 1. All messages with a high threat score are then checked by a human, who decides whether to actually warn or intervene. This decision therefore does not lie with the algorithm.
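The triage step described above can be sketched as follows. This is a minimal illustration, not the police's actual implementation: the scoring function, the example scores and the review threshold are all assumptions made for the sketch.

```python
# Sketch of the human-in-the-loop triage: the model scores each message
# between 0 and 1, and only messages above a chosen threshold are queued
# for human review. Scores and threshold here are illustrative only.

def triage(messages, score_fn, threshold=0.8):
    """Return (score, message) pairs at or above the threshold, highest first."""
    scored = [(score_fn(m), m) for m in messages]
    flagged = [(s, m) for s, m in scored if s >= threshold]
    return sorted(flagged, reverse=True)

# Illustrative stand-in for the model's scoring function.
example_scores = {"innocuous chat": 0.05, "he is going to sleep tonight": 0.91}
queue = triage(example_scores, example_scores.get)
print(queue)  # only the high-scoring message reaches a human reviewer
```

The key design point is that the algorithm only produces the queue; the decision to warn or intervene is taken by the human working through it.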
Risk management
People check messages with a high threat score to avoid false alarms. Investigators can also always examine messages on their own initiative and based on their own judgement, even if the model might have missed them. In addition, the model is continuously improved through feedback and new training data. The police and the NFI ensure that the data used remain clean and that the model adapts to new forms of communication, such as street slang or changing threat patterns.
Legal basis
The processing of investigative data falls under the Police Data Act (Wpg) Article 9; processing for the purpose of maintaining law and order in a particular case.
The data analysed by the threat-to-life model were obtained under Articles 94, 126h–126w and 552i of the Code of Criminal Procedure.
Links to legal bases
- Wpg Article 9: https://wetten.overheid.nl/BWBR0022463/2025-07-01#Paragraaf2_Artikel9
- Code of Criminal Procedure article 94: https://wetten.overheid.nl/BWBR0001903/2018-07-28/#BoekEerste_TiteldeelIV_AfdelingDerde_Paragraaf1_Artikel94
- Code of Criminal Procedure article 126h: https://wetten.overheid.nl/BWBR0001903/2018-07-28/#BoekEerste_TiteldeelIVA_AfdelingTweede_Artikel126h
- Code of Criminal Procedure article 552i: https://wetten.overheid.nl/BWBR0001903/2018-07-28
Operations
Data
The model is trained with examples of (death) threats from crypto-communications. These examples were selected by police experts and labelled to indicate what kind of threat they contain.
Technical design
A language model was trained using supervised learning. The model scores new texts between 0 and 1: the closer to 1, the more likely the message is threatening.
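A highly simplified sketch of such a supervised setup: learn word weights from labelled example messages, then squash the summed weights of a new text into a score between 0 and 1. The real system uses a trained language model; the tiny corpus, the log-odds weighting and the sigmoid scoring below are illustrative assumptions only.

```python
# Toy supervised text scorer: per-word log-odds learned from labelled
# examples, with add-one smoothing; a sigmoid maps the sum to (0, 1).
import math
from collections import Counter

def train(examples):
    """examples: list of (text, label) with label 1 = threat, 0 = benign."""
    threat, benign = Counter(), Counter()
    for text, label in examples:
        (threat if label else benign).update(text.lower().split())
    vocab = set(threat) | set(benign)
    return {w: math.log((threat[w] + 1) / (benign[w] + 1)) for w in vocab}

def score(weights, text):
    """Threat score between 0 and 1; closer to 1 means more likely threatening."""
    total = sum(weights.get(w, 0.0) for w in text.lower().split())
    return 1 / (1 + math.exp(-total))

# Hypothetical labelled examples; 'sleep' appears as a coded threat word.
examples = [
    ("he will sleep tonight", 1),
    ("bring the shooting gear", 1),
    ("see you at dinner tonight", 0),
    ("good night rest well", 0),
]
weights = train(examples)
print(score(weights, "he will sleep"))   # closer to 1: likely threatening
print(score(weights, "dinner tonight"))  # closer to 0: likely benign
```

Note how context matters even in this toy version: 'tonight' occurs in both classes, so it barely moves the score, while words seen only in threatening examples push it towards 1.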
External provider
Similar algorithm descriptions
- The risk model (the algorithm) helps choose which NOW applications to investigate further. It gives indications of whether the information given in the NOW application is correct. With these indications from the risk model, SZW reviews the application.
  Last change on 10th of June 2024, at 13:42 (CET) | Publication Standard 1.0
- Publication category
- Impactful algorithms
- Impact assessment
- Field not filled in.
- Status
- In use
- The model helps detect and analyse irregularities following the allocation of a Wmo/Jeugdwet provision. The model signals whether further investigation into the spending of funds is needed.
  Last change on 5th of July 2024, at 9:31 (CET) | Publication Standard 1.0
- Publication category
- Impactful algorithms
- Impact assessment
- IAMA
- Status
- In use
- Using this model, we predict how likely it is that someone registered as living alone is in fact living together with someone else. This model was built to gain experience and was never put into use. The development of this model has been stopped.
  Last change on 3rd of September 2025, at 7:35 (CET) | Publication Standard 1.0
- Publication category
- High-Risk AI-system
- Impact assessment
- IAMA
- Status
- Out of use
- This algorithm helps Customs and its enforcement partners select goods for control on a risk-based basis. Among other things, it uses declaration data from companies and assesses whether there are risks of non-compliant non-veterinary feed, feed materials and feed additives.
  Last change on 2nd of April 2025, at 12:48 (CET) | Publication Standard 1.0
- Publication category
- Impactful algorithms
- Impact assessment
- Field not filled in.
- Status
- In use
- This algorithm, the forecasting model, provides insight into future peak moments at the municipality's Citizen Affairs desks. This allows additional staff to be hired in time for expected peak moments. This model also supports financial planning.
  Last change on 27th of June 2024, at 13:10 (CET) | Publication Standard 1.0
- Publication category
- Other algorithms
- Impact assessment
- Field not filled in.
- Status
- In use