Please note: The algorithm descriptions in English have been automatically translated. Errors may have been introduced in this process. For the original descriptions, go to the Dutch version of the Algorithm Register.
Sextortion Classifier
- Publication category
- High-Risk AI-system
- Impact assessment
- Field not filled in.
- Status
- In use
General information
Theme
Begin date
Contact information
Responsible use
Goal and impact
The tool was developed to give a better overview of all sextortion registrations in BVH. It helps staff recognise these registrations more quickly and pass them on to the right departments, so that victims can be helped faster and better.
Considerations
Using this tool increases the workload of vice departments. On the other hand, registrations are picked up faster, so victims are helped better and sooner.
Human intervention
The tool returns only BVH numbers as results. Staff manually check these numbers to determine whether a case really is sextortion. The tool complements, rather than replaces, the normal work process of vice departments.
Risk management
The tool shows only BVH numbers. This prevents employees from automatically relying on the system's judgement: the vice investigator checks whether the registration is correct and determines the follow-up steps. This working method also ensures that no substantive information is shared.
Legal basis
Article 3 Police Act 2012
Links to legal bases
Operations
Data
Currently, only data from the Rotterdam region are processed.
The model was trained with BVH registrations from the period 22 November 2024 to 8 October 2025. For this purpose, registrations from the Rotterdam unit were used, drawn from seven selected categories relevant to vice cases.
These include reports on vice, abuse of sexual images, blackmail or extortion, sex chat, online sexual harassment, grooming and child pornography.
The Sextortion Classifier uses this model to assess BVH registrations.
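The selection of training data described above can be sketched in Python. The record structure, field names, and category labels below are assumptions for illustration; the actual BVH schema is not public.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical record structure; the real BVH schema is not public.
@dataclass
class Registration:
    bvh_number: str
    unit: str
    category: str
    registered_on: date
    free_text: str

# The seven vice-related categories named in the description
# (labels here are illustrative English renderings).
VICE_CATEGORIES = {
    "vice", "abuse of sexual images", "blackmail or extortion",
    "sex chat", "online sexual harassment", "grooming", "child pornography",
}

def select_training_set(registrations):
    """Keep Rotterdam-unit registrations from the stated period and categories."""
    start, end = date(2024, 11, 22), date(2025, 10, 8)
    return [
        r for r in registrations
        if r.unit == "Rotterdam"
        and r.category in VICE_CATEGORIES
        and start <= r.registered_on <= end
    ]
```

This is only a sketch of the filtering criteria stated in the text, not the police's actual data pipeline.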
Technical design
The classifier looks at the free text in BVH registrations. The system uses multiple decision trees, each examining part of a registration. Each decision tree gives a score between 0 and 1, where 0 means no sextortion and 1 means sextortion.
The decision threshold is set at 0.52, chosen to make the results as accurate as possible. After all decision trees have given their scores, a vote is taken; the outcome of this vote determines whether the registration is classified as sextortion.
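One plausible reading of the voting step, assuming the per-tree scores are averaged (as in a standard random forest) before the threshold is applied:

```python
def classify_registration(tree_scores, threshold=0.52):
    """Combine per-tree scores into a single sextortion decision.

    Each score is between 0 (no sextortion) and 1 (sextortion).
    This sketch averages the scores and compares the mean against
    the decision threshold of 0.52 stated in the description.
    """
    mean_score = sum(tree_scores) / len(tree_scores)
    return mean_score >= threshold
```

Whether the 0.52 threshold is applied to the averaged score, as here, or to each tree's vote individually is not specified in the description; this is an illustrative interpretation, not the police's actual implementation.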
Similar algorithm descriptions
- Application supports the process of determining wage value. The aim of the application is to determine wage value in a uniform manner; a national methodology for this has been available since 2021. Last change on 10th of September 2025, at 14:24 (CET) | Publication Standard 1.0
- Publication category
- Impactful algorithms
- Impact assessment
- Privacy Quickscan, DPIA
- Status
- In use
- The reporting system's algorithm recognises words in reports, such as 'rubbish' or 'pavement', and automatically determines the correct category and department. As a result, reporters no longer have to choose a category, and reports are dealt with faster at the right department. Last change on 8th of January 2025, at 11:23 (CET) | Publication Standard 1.0
- Publication category
- Other algorithms
- Impact assessment
- DPIA
- Status
- In use
- In criminal investigations involving a sex offence or sexually transgressive behaviour by juveniles (aged 12-18), the J-SOAP D (version III) is used (in addition to the LIJ instruments). The J-SOAP D is used to estimate the risk profile and recidivism risk for vice. Last change on 3rd of December 2025, at 13:11 (CET) | Publication Standard 1.0
- Publication category
- Impactful algorithms
- Impact assessment
- Field not filled in.
- Status
- In use
- Based on measurements, sewage discharge is controlled. The algorithm determines whether a valve for the transit is open or closed. This makes it possible to control where this water goes during periods of high rainfall. Last change on 5th of January 2024, at 14:22 (CET) | Publication Standard 1.0
- Publication category
- Other algorithms
- Impact assessment
- Field not filled in.
- Status
- In use
- The algorithm in the software recognises and anonymises personal data and other sensitive information in documents. Governments regularly publish information related to the drafting and implementation of their policies (e.g. based on the Woo). This tool is used to render sensitive data unrecognisable in the process. Last change on 20th of November 2024, at 14:27 (CET) | Publication Standard 1.0
- Publication category
- Other algorithms
- Impact assessment
- DPIA
- Status
- In use