Please note: The algorithm descriptions in English have been automatically translated. Errors may have been introduced in this process. For the original descriptions, go to the Dutch version of the Algorithm Register.
Risk scan of Culpable Unemployment
- Publication category
- Impactful algorithms
- Impact assessment
- DPIA
- Status
- In use
General information
Theme
- Social Security
- Work
Begin date
Contact information
Link to source registration
Responsible use
Goal and impact
Considerations
Human intervention
- The employee assesses the WW application and decides on any next steps themselves. The risk scan therefore does not take any decisions about benefit entitlement.
- Every month, specialised UWV employees check the quality of the data used by the risk scan. The proper functioning of the scan is also monitored. In this way, we constantly check whether the scan continues to meet our quality standards. For example, we check whether the population used in developing the risk scan is still representative (a sketch of what such a check could look like follows after this list).
- The risk scan is regularly developed further so that it only uses data that adds value to its operation. We remove data that does not (or no longer) add value. We also regularly check whether there are still opportunities to improve the scan.
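The register does not describe how the monthly representativeness check is carried out. Purely as an illustration, such a drift check could compare the distribution of one input feature in the development population with recent applications using a population stability index (PSI); the function, the sample data and the 0.2 threshold below are assumptions, not UWV's actual procedure.

```python
import numpy as np

def population_stability_index(reference: np.ndarray, current: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between a reference (development) sample and a recent sample
    of one feature; larger values indicate a larger distribution shift."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_share = np.histogram(reference, bins=edges)[0] / len(reference)
    cur_share = np.histogram(current, bins=edges)[0] / len(current)
    # Clip empty bins to avoid division by zero and log(0).
    ref_share = np.clip(ref_share, 1e-6, None)
    cur_share = np.clip(cur_share, 1e-6, None)
    return float(np.sum((cur_share - ref_share) * np.log(cur_share / ref_share)))

# Hypothetical monthly check on one feature (e.g. months of employment history).
rng = np.random.default_rng(seed=1)
development_sample = rng.normal(loc=36, scale=12, size=5_000)
this_month_sample = rng.normal(loc=40, scale=14, size=1_000)
if population_stability_index(development_sample, this_month_sample) > 0.2:
    print("Distribution shift detected: review whether the risk scan is still representative.")
```

A PSI above roughly 0.2 is a common, but arbitrary, signal that the population has drifted and the model should be re-examined; the actual quality standards used by UWV are not published.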
Risk management
- We constantly check data quality.
- We always ensure that employees do the final assessment and not the algorithm.
- If a scan is (re)developed by us, the quality of our work is reviewed by an independent, reputable organisation. This way, we reduce the chances of errors or sub-optimal quality.
- We use data provided by the client as much as possible.
- The algorithm always uses the same data and behavioural characteristics.
- The risk scan does not use personal characteristics such as origin, gender, age or other privacy-sensitive data.
- The risk scan does not use data from social media or other public sources. This way, we treat everyone the same and human biases do not affect treatment.
- The risk scan produces a selection of signals that staff investigate further. A share (30%) of randomly selected WW applications is always added to this selection, so the employee does not know whether a signal comes from the risk scan or from the randomly added applications. In this way, we ensure that employees are not influenced and remain critical of the situations they have to investigate (the sketch after this list illustrates this mixing step).
- The algorithm only signals. An employee assesses whether there is actual culpable unemployment or not.
- Only the applications that come out of the risk scan (including the 30% randomly selected WW applications added to them) receive an additional check and investigation by our employees.
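How this work list is assembled is not published. The sketch below only illustrates the mixing principle described above, under the assumption that the 30% refers to the share of randomly drawn applications in the combined work list; the class, function and field names are hypothetical.

```python
import random
from dataclasses import dataclass

@dataclass
class Application:
    application_id: str
    flagged_by_risk_scan: bool  # known to the system only, never shown to staff

def build_work_list(applications: list[Application],
                    random_share: float = 0.30,
                    seed: int | None = None) -> list[str]:
    """Mix risk-scan signals with randomly drawn WW applications and
    return only application IDs, so reviewers cannot tell the two apart."""
    rng = random.Random(seed)
    flagged = [a for a in applications if a.flagged_by_risk_scan]
    unflagged = [a for a in applications if not a.flagged_by_risk_scan]
    # Number of random additions needed so that they make up
    # roughly `random_share` of the combined work list.
    n_random = round(len(flagged) * random_share / (1.0 - random_share))
    random_additions = rng.sample(unflagged, k=min(n_random, len(unflagged)))
    work_list = [a.application_id for a in flagged + random_additions]
    rng.shuffle(work_list)  # the order reveals nothing about the source
    return work_list
```

Returning only shuffled IDs mirrors the stated goal: the reviewer sees a single list and cannot tell whether an item was flagged by the scan or added at random.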
Elaboration on impact assessments
Instead of the national IAMA standard, a UWV Ethical Impact Assessment was conducted.
Impact assessment
Operations
Data
- data about (possible) previous WW applications and benefits
- details of your employment history
- details of your current WW claim (an illustrative grouping of these data categories follows after this list)
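Only the data categories above are published; concrete field names and types are not. As a purely hypothetical illustration, the input of the risk scan could be grouped as follows (every field name is an assumption).

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskScanInput:
    """Hypothetical grouping of the data categories named in the register."""
    # Data about (possible) previous WW applications and benefits
    previous_ww_claim_periods: list[tuple[date, date]] = field(default_factory=list)
    # Details of the employment history
    months_employed_last_five_years: int = 0
    number_of_recent_employers: int = 0
    # Details of the current WW claim
    current_claim_start: date | None = None
    stated_reason_for_dismissal: str | None = None
```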
Technical design
Similar algorithm descriptions
- The risk model (the algorithm) helps choose which NOW applications to investigate further. It gives clues as to whether the information given in the NOW application is correct. With these indications from the risk model, SZW reviews the application.
- Publication category
- Impactful algorithms
- Impact assessment
- Field not filled in.
- Status
- In use
- The SVB conducts risk-oriented investigations. This involves using algorithms to create risk profiles. If a citizen falls within a risk profile, they can be selected for a check by an employee of the Prevention & Enforcement Department.
- Publication category
- Impactful algorithms
- Impact assessment
- DPIA
- Status
- In use
- The algorithm is used to make an automated risk assessment for all applications for the Fixed Costs Allowance (Tegemoetkoming Vaste Lasten), prior to automated or manual granting and payment of the advance.
- Publication category
- High-Risk AI-system
- Impact assessment
- Field not filled in.
- Status
- In use
- This algorithm creates a risk sorting of educational institutions (schools, courses or school boards) to conduct targeted desk research, as part of the annual performance and risk analysis.
- Publication category
- Impactful algorithms
- Impact assessment
- Field not filled in.
- Status
- In use
- This algorithm helps Customs to select goods for inspection based on risk. It uses declaration data from companies and considers whether or not there are risks of inaccuracies in the declarations.
- Publication category
- Impactful algorithms
- Impact assessment
- Field not filled in.
- Status
- In use