Please note: The algorithm descriptions in English have been automatically translated. Errors may have been introduced in this process. For the original descriptions, go to the Dutch version of the Algorithm Register.

Module to screen for language bias

CompetentNL has been developed within the Vaardig met Vaardigheden programme. For the development of CompetentNL, the Bias module is used behind the scenes, i.e. not directly in externally oriented applications. The module promotes the inclusiveness of CompetentNL by analysing and simplifying language use, with a focus on accessibility for the low-literate.

Last change on 6th of March 2026, at 9:50 (CET) | Publication Standard 1.0
Publication category
Other algorithms
Impact assessment
Field not filled in.
Status
In use

General information

Theme

Work

Begin date

09-25

Contact information

support@competentnl.nl

Link to publication website

https://competentnl.nl/

Responsible use

Goal and impact

A bias module has been developed within CompetentNL. The module was developed to support the creation of CompetentNL, a national standard for describing skills. The module helps to improve descriptions of skills and competences by reducing bias. It does not involve data from individuals.


Considerations

Benefits

- The module promotes the inclusiveness of CompetentNL by simplifying language and making it more accessible to a wide audience.

Disadvantages and risks

- The module works behind the scenes and receives no direct user feedback, which can make errors harder to detect.

Justification

- The deployment of the module is reasonable and justified because it significantly improves the inclusiveness of CompetentNL without involving direct decision-making about individuals. Human experts remain involved in validating and assessing the output, which reduces risk.

Human intervention

The outcomes of the module are reviewed by content experts before being included in CompetentNL. Experts check the relevance, accuracy and quality of the generated bias suggestions. Based on their subject matter knowledge, they can correct, supplement or reject the output.

This human monitoring is crucial to detect and correct errors, outdated data or potential biases. In addition, periodic evaluation and adjustment of the module takes place based on expert feedback and new developments in the labour market. This keeps the output up-to-date, reliable and appropriate to the policy objectives.

Risk management

Inaccuracies

- Outcomes are always checked by human content experts before being applied. Experts continuously review output to identify when adjustments are needed.

Over-dependence on the system

- The system is set up so that outcomes are not automatically adopted; human validation is mandatory.

- Training and awareness of experts ensure critical use of the module.

Limitations in interpretation and use

- Outcomes are not shown directly to end-users without assessment.

- Errors and imperfections can thus be detected and corrected in a timely manner.

Legal basis

The use of the module to support the development and enrichment of CompetentNL is based on the goal of promoting transparency, efficiency and inclusiveness in the labour market and education domain. The European Union's AI regulation, which includes requirements for high-risk AI systems, is followed and integrated into the governance of the programme. The processing of data and use of the module takes place with respect for the rights of data subjects and under expert supervision to ensure that the applications are lawful, fair and transparent.

Links to legal bases

AI Act (Regulation (EU) 2024/1689): https://eur-lex.europa.eu/legal-content/NL/TXT/?uri=CELEX%3A32024R1689

Operations

Data

The module calculates the complexity of input sentences (skills descriptions and titles). To do so, it uses several publicly available lists of words with associated properties, annotated or calculated by humans.
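The register entry does not specify how the complexity score is computed; a minimal sketch of the general approach it describes (scoring sentences against a word list with human-annotated properties) could look like the following. The word list, the difficulty scale, and the default score here are illustrative assumptions, not the actual lists or weights used by the module.

```python
# Hypothetical word-difficulty annotations (1 = common/easy, 5 = rare/hard).
# A real module would load publicly available annotated word lists instead.
WORD_DIFFICULTY = {
    "the": 1,
    "skill": 2,
    "communicate": 3,
    "stakeholders": 4,
    "interdisciplinary": 5,
}
DEFAULT_DIFFICULTY = 3  # assumed score for words missing from the list


def sentence_complexity(sentence: str) -> float:
    """Return the average per-word difficulty of a sentence (higher = harder)."""
    words = [w.strip(".,;:!?").lower() for w in sentence.split()]
    words = [w for w in words if w]
    if not words:
        return 0.0
    total = sum(WORD_DIFFICULTY.get(w, DEFAULT_DIFFICULTY) for w in words)
    return total / len(words)
```

A screening step could then flag skill descriptions whose score exceeds some threshold for rewriting in simpler language, with the suggestions reviewed by content experts as described under "Human intervention".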

External provider

TNO

Similar algorithm descriptions

  • This algorithm helps the ACM determine the area where companies' customers are located. This reveals whether there is overlap and whether enough competition remains. This is important for competition investigations. To determine this, location data of customers or deliveries are requested.

    Last change on 18th of July 2025, at 9:46 (CET) | Publication Standard 1.0
    Publication category
    Other algorithms
    Impact assessment
    Field not filled in.
    Status
    In use
  • The Operational Readiness Dashboard provides insight into the current operational readiness and deployability of the armed forces. Specifically, the dashboard provides insight into personnel readiness, materiel readiness and exercise readiness per unit. This dashboard runs in BI/VEST, which stands for Business Intelligence/Improved Steering Information.

    Last change on 28th of November 2024, at 12:30 (CET) | Publication Standard 1.0
    Publication category
    Other algorithms
    Impact assessment
    DPIA
    Status
    In use
  • The algorithm calculates the educational outcomes of schools (cluster, branch, programme). The algorithm provides information that helps an inspector assess whether a school achieves the legal lower limit for learning outcomes to be achieved with these pupils.

    Last change on 9th of October 2024, at 7:35 (CET) | Publication Standard 1.0
    Publication category
    Impactful algorithms
    Impact assessment
    Field not filled in.
    Status
    In use
  • A GBA is one part of the selection process. Candidates play several games to give an impression of the fit with the trainee profile. The test scores are visible in the candidate file as a report.

    Last change on 9th of October 2025, at 11:08 (CET) | Publication Standard 1.0
    Publication category
    Impactful algorithms
    Impact assessment
    IAMA, DPIA
    Status
    Out of use
  • Municipality of Almere, together with TU/e and Fontys, is investigating how generative AI can support the assessment of supplier documentation in tenders. The aim is to make the process more efficient without compromising transparency, accuracy and safety. The project ties in with ongoing TU/e and Fontys PhD research on trust in AI.

    Last change on 14th of October 2025, at 12:57 (CET) | Publication Standard 1.0
    Publication category
    Other algorithms
    Impact assessment
    Field not filled in.
    Status
    In development