Please note: The algorithm descriptions in English have been automatically translated. Errors may have been introduced in this process. For the original descriptions, go to the Dutch version of the Algorithm Register.
Text classification for undermining themes
- Publication category
- Other algorithms
- Impact assessment
- Field not filled in.
- Status
- In use
General information
Theme
Begin date
Contact information
Responsible use
Goal and impact
A group of police staff review and assess registrations from police systems.
The police use keywords to find these registrations, but keywords do not always find the right ones.
That is why the police use a model that reviews and sorts texts. This model has learned from examples of what police staff do and do not find relevant.
This allows staff to work faster on the most important registrations.
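As a rough illustration of this workflow, the sketch below combines a keyword search with a relevance score that is used to sort the hits. The keywords, example texts and the simple stand-in score are assumptions made for illustration only; in practice the score would come from the trained model described under Technical design.

```python
# Hypothetical keywords; the actual search terms are not published.
KEYWORDS = {"cash", "warehouse", "chemicals"}

def keyword_hit(text: str) -> bool:
    """True when any search keyword occurs in the registration text."""
    return bool(KEYWORDS & set(text.lower().split()))

def relevance_score(text: str) -> float:
    """Stand-in for the model's score: the share of words that are keywords.
    The actual model learns relevance from staff assessments instead."""
    words = text.lower().split()
    return sum(w in KEYWORDS for w in words) / max(len(words), 1)

def prioritise(registrations: list[str]) -> list[str]:
    """Keep keyword hits and order them from most to least relevant."""
    hits = [r for r in registrations if keyword_hit(r)]
    return sorted(hits, key=relevance_score, reverse=True)

print(prioritise([
    "cash found in a warehouse during a routine check",
    "lost wallet handed in at the front desk",
    "chemicals stored in a rented garage",
]))
```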
Considerations
Without this sorting, police staff would spend a lot of time reading registrations that are not important.
Human intervention
The police officer decides which registrations to review. The model only helps with sorting.
Risk management
Users can turn off the AI suggestions if they want.
The model provides an explanation with each outcome, using a technique that shows why a text was judged the way it was.
There is also a list of words that must not be used. For ethical reasons, these words are not included in the model. A sketch of both of these measures is given below.
The model is re-trained regularly. This way it stays current and works with recent information.
We keep track of which model version made each prediction, so this can always be traced back.
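The sketch below illustrates the two measures mentioned above: words on a blocked list are kept out of the model's vocabulary, and an outcome is explained by the words that contributed most to it. scikit-learn is an assumed library choice, and the blocked word, example texts and labels are made up for illustration; the actual explanation technique is not published and may differ.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical stop list; the real blocked words are not published.
BLOCKED_WORDS = ["blockedterm"]

# Illustrative training texts with assessments by staff
# (1 = relevant to the theme, 0 = not relevant).
texts = [
    "suspicious cash flows around a rented warehouse",
    "report of a lost wallet at the front desk",
]
labels = [1, 0]

# Blocked words are passed as stop words, so they never become features.
vectorizer = TfidfVectorizer(stop_words=BLOCKED_WORDS)
X = vectorizer.fit_transform(texts)
model = LogisticRegression().fit(X, labels)

def explain(text: str, top_n: int = 5) -> list[tuple[str, float]]:
    """Return the words that contributed most to the score, as a simple
    stand-in for the explanation technique mentioned above."""
    row = vectorizer.transform([text]).toarray()[0]
    contributions = row * model.coef_[0]
    terms = vectorizer.get_feature_names_out()
    ranked = sorted(zip(terms, contributions), key=lambda t: abs(t[1]), reverse=True)
    return [(term, float(round(c, 3))) for term, c in ranked[:top_n] if c != 0]

print(explain("cash found near the warehouse"))
```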
Operations
Data
The model uses text from the Basic Enforcement Facility (BVH).
Technical design
The model learns based on examples assessed by police officers. This is called 'supervised learning'.
The input is the text of a registration. The output is an assessment: does this registration belong to a particular topic or not?
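A minimal sketch of such a supervised setup is shown below: registration texts as input, an assessment per topic as output. The pipeline (TF-IDF features with logistic regression), the example texts and the labels are assumptions for illustration; the actual model and features used by the police are not published here.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Registration texts (input) with assessments by officers (output):
# True = belongs to this topic, False = does not. All examples are made up.
texts = [
    "large unexplained cash deposits at a small shop",
    "bicycle reported stolen near the station",
    "chemicals and ventilation equipment in a rented garage",
    "noise complaint about a weekend party",
]
belongs_to_topic = [True, False, True, False]

# Train on the assessed examples, then score a new registration.
pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipeline.fit(texts, belongs_to_topic)

new_registration = "cash and unexplained luxury goods found during a check"
probability = pipeline.predict_proba([new_registration])[0][1]
print(f"probability that this belongs to the topic: {probability:.2f}")
```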
Similar algorithm descriptions
- The algorithm recognises (personal) data and otherwise confidential information in a document and makes a proposal to anonymise it. A staff member evaluates the proposal and makes the final adjustment, making the document suitable for publication.
  Last change on 25th of January 2024, at 12:18 (CET) | Publication Standard 1.0
- Publication category
- Other algorithms
- Impact assessment
- Field not filled in.
- Status
- In use
- The algorithm recognises (personal) data and otherwise confidential information in a document and makes a proposal to anonymise it. A staff member evaluates the proposal and makes the final adjustment, making the document suitable for publication.
  Last change on 25th of January 2024, at 12:17 (CET) | Publication Standard 1.0
- Publication category
- Other algorithms
- Impact assessment
- Field not filled in.
- Status
- In use
- The algorithm recognises (personal) data and otherwise confidential information in a document and makes a proposal to anonymise it. A staff member evaluates the proposal and makes the final adjustment, making the document suitable for publication.
  Last change on 16th of August 2024, at 8:50 (CET) | Publication Standard 1.0
- Publication category
- Other algorithms
- Impact assessment
- DPIA
- Status
- In use
- The algorithm recognises (personal) data and otherwise confidential information in a document and makes a proposal to anonymise it. A staff member evaluates the proposal and makes the final adjustment, making the document suitable for publication.
  Last change on 7th of October 2024, at 15:33 (CET) | Publication Standard 1.0
- Publication category
- Other algorithms
- Impact assessment
- Field not filled in.
- Status
- In use
- The algorithm recognises (personal) data and otherwise confidential information in a document and makes a proposal to anonymise it. A staff member evaluates the proposal and makes the final adjustment, making the document suitable for publication.
  Last change on 25th of January 2024, at 12:16 (CET) | Publication Standard 1.0
- Publication category
- Other algorithms
- Impact assessment
- Field not filled in.
- Status
- In use