Please note: The algorithm descriptions in English have been automatically translated. Errors may have been introduced in this process. For the original descriptions, go to the Dutch version of the Algorithm Register.
Investigation-worthiness: Smart Check livelihood
- Publication category
- Impactful algorithms
- Impact assessment
- DPIA, IAMA
- Status
- In development
General information
Theme
Begin date
End date
Contact information
Responsible use
Goal and impact
The municipality of Amsterdam issues welfare benefits to Amsterdam residents who are entitled to them. Not everyone who applies for welfare benefits qualifies for them. That is why we investigate assistance applications that may be unlawful. The municipality of Amsterdam wants to prevent Amsterdammers from receiving unjustified assistance, accumulating debts and getting into trouble.
Even Amsterdam residents who are entitled to social assistance are sometimes investigated by an employee of the Work and Income Enforcement Department. We want to prevent this as much as possible, because such investigations can be burdensome. That is why we are testing whether an algorithm can help us determine which applications should and should not be investigated, so that Amsterdammers experience less inconvenience from our enforcement and fewer Amsterdammers get into trouble, but also to ensure that the provision of assistance remains affordable.
Currently, an employee determines whether an application is investigation-worthy and whether it should be assessed by an employee with additional powers. The 'Smart Check' algorithm supports the caseworker in determining whether a social assistance application is investigation-worthy. The algorithm makes transparent and explainable which data led to the label 'investigation-worthy'. All data used to arrive at an assessment are documented and described.
Pilot
From April to July 2023, the municipality is conducting a pilot with a new working method. In this pilot, the 'Smart Check' algorithm determines whether an application is labelled 'investigation-worthy' or 'not investigation-worthy'. This label is then checked by an employee of the Work and Income Enforcement Department. The algorithm can find connections and patterns in a large amount of information on past applications for social assistance benefits, and determines which information is more or less likely to be associated with applications that required further investigation. 'Smart Check' is trained on historical data and uses 15 data points. Upon completion of the pilot, we will determine whether this way of working is better for Amsterdam residents.
Link to service: https://www.amsterdam.nl/werk-inkomen/
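The register does not disclose the model type or the 15 data points themselves, so the sketch below is purely illustrative: a transparent weighted-score classifier whose feature names, weights and threshold are all invented. It mirrors the explainability requirement described above by reporting which data points led to the label.

```python
# Hypothetical sketch only: a transparent scoring model that labels an
# application and reports which data points contributed to the label.
# Feature names, weights and the threshold are invented for illustration;
# the actual 'Smart Check' data points are not published in the register.

WEIGHTS = {
    "prior_benefit_count": 0.8,   # information about previous benefits
    "income_inconsistency": 1.5,  # mismatch between stated and known income
    "assets_missing": 1.2,        # asset information incomplete
    "address_mismatch": 1.0,      # BRP address differs from the form
}
THRESHOLD = 2.0

def assess(application: dict) -> tuple[str, list[str]]:
    """Return a label and the data points that contributed to it."""
    contributions = {
        name: WEIGHTS[name] * float(application.get(name, 0))
        for name in WEIGHTS
    }
    score = sum(contributions.values())
    label = ("investigation-worthy" if score >= THRESHOLD
             else "not investigation-worthy")
    # Record every data point that pushed the score up, so a caseworker
    # can see why the label was given.
    reasons = [name for name, c in contributions.items() if c > 0]
    return label, reasons

label, reasons = assess({"income_inconsistency": 1, "assets_missing": 1})
print(label, reasons)
```

A caseworker would see both the label and the contributing data points, which is what makes the advice checkable rather than a black-box verdict.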
Considerations
This new approach to the pilot has a number of advantages:
- Equivalence: We analysed the algorithm for bias. This showed that on almost all sensitive characteristics (such as age, country of birth and nationality), the developed model treats different groups more equally than the current working method does.
- Effectiveness: The model can assess which applications are investigation-worthy better than a staff member can. As a result, capacity can be better utilised and fewer unlawful benefits are issued.
- Proportionality: Because the model can better estimate which applications require additional checking, applications are less often examined unnecessarily. This improves proportionality. We also prevent the unlawful provision of benefits more often, leading to a more preventive rather than repressive approach.
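The bias analysis itself is not published. As a hypothetical illustration of the kind of check involved, the snippet below compares how often each group receives the 'investigation-worthy' label; a ratio close to 1.0 indicates more equal treatment. All data and names are invented.

```python
# Illustrative bias check, not the municipality's actual analysis:
# compare the rate at which each group is labelled 'investigation-worthy'.

def selection_rates(records):
    """records: list of (group, labelled_worthy: bool) pairs."""
    totals, flagged = {}, {}
    for group, worthy in records:
        totals[group] = totals.get(group, 0) + 1
        flagged[group] = flagged.get(group, 0) + int(worthy)
    return {g: flagged[g] / totals[g] for g in totals}

def parity_ratio(rates):
    """Minimum rate divided by maximum rate; 1.0 means identical rates."""
    return min(rates.values()) / max(rates.values())

# Invented example: two groups, each flagged in 1 of 4 applications.
records = [("A", True), ("A", False), ("A", False), ("A", False),
           ("B", True), ("B", False), ("B", False), ("B", False)]
rates = selection_rates(records)
print(rates, parity_ratio(rates))
```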
Human intervention
There is no automated decision-making: the algorithm only advises whether an application needs additional investigation. After this advice, several employees carry out the further, extensive investigation.
- First, an employee from the Work and Income Enforcement Department checks whether the application is really investigation-worthy.
- If it is, another employee from the Work and Income Enforcement Department will conduct an extensive investigation into its legitimacy.
- An advice is issued on this, and yet another employee (from the Income Support Department) makes the final decision.
So there is no automated decision-making: there is meaningful human intervention before a decision is made. Work instructions have been drawn up to prevent employees from placing too much trust in the outcome of the model ('automation bias'). In addition, employees receive training on how to use the information from the model in their work process.
Risk management
For this product, we relied on the framework of the Netherlands Court of Audit. In addition, the main risk management analyses carried out were the DPIA (Data Protection Impact Assessment), the KIIA (Artificial Intelligence Impact Assessment), the IAMA (Human Rights and Algorithms Impact Assessment) and the BIO Quickscan. These indicated the extent to which risks can be eliminated or reduced, and what the residual risks are. We have taken several measures to ensure that the output of the model is correct, transparent and consistent.
The model will also be closely monitored during the pilot (and, if the pilot is successful, during subsequent operation) to safeguard the quality and fairness of the model in the future.
Impact assessment
- Data Protection Impact Assessment (DPIA)
- Impact Assessment Mensenrechten en Algoritmes (IAMA)
Operations
Data
When a social assistance application is submitted, we check its legitimacy. This is done by employees of the Income Support Department. Not everyone who submits an application is entitled to benefits. The employees of the Income Support Department process many applications on the basis of the information in the application form and the documents provided; in those cases, it is clear that the applicant is entitled to benefits. Sometimes the information is not complete or unambiguous. Then it may be necessary to carry out additional investigation. This is done by staff from the Work and Income Enforcement Department.
The 'Smart Check' algorithm supports staff in assessing social assistance applications by labelling them as investigation-worthy or not. All social assistance applications are presented to 'Smart Check'.
When an Amsterdam resident submits a social assistance application, it enters the Income Department's system for processing. The applications are presented to 'Smart Check', which uses the following data to give each application the label 'investigation-worthy' or 'not investigation-worthy':
From the Basic Registration of Persons (BRP)
- BSN
- Information about home address
- Information on housing situation
From the applications of the Income Department
- BSN
- Information about possible previous social assistance benefits
- Information about assets
- Information about income
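Grouping the data points above, the input to 'Smart Check' could be modelled as one record per application drawn from the two sources. The field names below are invented for illustration; the register only names the sources and categories.

```python
from dataclasses import dataclass

# Hypothetical grouping of the listed data points into one input record.
# Field names are assumptions; only the sources are named in the register.

@dataclass
class BrpData:
    """From the Basic Registration of Persons (BRP)."""
    bsn: str
    home_address: str
    housing_situation: str

@dataclass
class IncomeData:
    """From the applications of the Income Department."""
    bsn: str
    previous_benefits: int
    assets: float
    income: float

@dataclass
class SmartCheckInput:
    brp: BrpData
    income: IncomeData

    def consistent(self) -> bool:
        # Both sources carry a BSN and must refer to the same person.
        return self.brp.bsn == self.income.bsn
```

The BSN appears in both sources, which suggests it serves as the join key between BRP data and the Income Department's application data.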
Applications labelled 'investigation-worthy' are handled by an employee of the Work and Income Enforcement Department, who assesses whether the application is actually investigation-worthy.
- If so, an employee of the Work and Income Enforcement Department handles the application and advises an employee of the Income Support Department on the decision to be taken.
- If not, the application is transferred to an employee of the Income Support Department for further processing.
Applications that are not labelled investigation-worthy are dealt with by employees of the Income Support Department. An Income Support staff member may still transfer a social assistance application to the Work and Income Enforcement Department for additional investigation.
The employee of the Income Support Department makes the decision on the social assistance application: the application is granted, rejected or dismissed. Peer review takes place here.
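The routing just described can be sketched as follows. This is a minimal illustration with invented function signatures; the department names and the rule that a human at Income Support always takes the final decision follow the text above.

```python
# Sketch of the routing described above; signatures are assumptions.

def route(label: str, enforcement_confirms: bool = False,
          support_escalates: bool = False) -> list[str]:
    """Return the sequence of steps that handle an application."""
    steps = []
    if label == "investigation-worthy":
        steps.append("Work and Income Enforcement: check label")
        if enforcement_confirms:
            steps.append("Work and Income Enforcement: extensive investigation")
            steps.append("Work and Income Enforcement: advice to Income Support")
        else:
            steps.append("transfer to Income Support")
    else:
        steps.append("Income Support: process application")
        if support_escalates:
            steps.append("Work and Income Enforcement: additional investigation")
    # In every path a human at Income Support takes the final decision,
    # with peer review (no automated decision-making).
    steps.append("Income Support: final decision (peer review)")
    return steps

print(route("investigation-worthy", enforcement_confirms=True))
```

Note that the algorithm's label only selects the entry point of the workflow; every path ends at the same human decision step.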
Technical design
Similar algorithm descriptions
- The algorithm in the software recognises and anonymises personal data and other sensitive information in documents. Governments regularly publish information related to the drafting and implementation of their policies (e.g. based on the Woo). This tool is used to render sensitive data unrecognisable in that process.
  Last change on 9th of January 2025, at 9:23 (CET) | Publication Standard 1.0
- Publication category
- Other algorithms
- Impact assessment
- DPIA
- Status
- In use
- The algorithm in the software recognises and anonymises personal data and other sensitive information in documents. Governments regularly publish information related to the drafting and implementation of their policies (e.g. based on the Woo). This tool is used to render sensitive data unrecognisable in that process.
  Last change on 20th of November 2024, at 14:27 (CET) | Publication Standard 1.0
- Publication category
- Other algorithms
- Impact assessment
- DPIA
- Status
- In use
- To correctly determine residents' eligibility, we use an algorithm as a source of information, for example by pre-determining whether the required information fields in the application are filled in.
  Last change on 12th of July 2024, at 9:55 (CET) | Publication Standard 1.0
- Publication category
- Impactful algorithms
- Impact assessment
- DPIA, ...
- Status
- In use
- Based on read-in data and answers provided by the applicant, the algorithm determines whether the applicant is eligible for any of the benefits to be applied for.
  Last change on 24th of June 2024, at 7:02 (CET) | Publication Standard 1.0
- Publication category
- Impactful algorithms
- Impact assessment
- Field not filled in.
- Status
- In use
- The AP uses this algorithm to classify data breach reports by severity. Based on that classification, inspectors can prioritise serious reports. The algorithm does not contain any personal data.
  Last change on 11th of October 2024, at 9:33 (CET) | Publication Standard 1.0
- Publication category
- Other algorithms
- Impact assessment
- Field not filled in.
- Status
- In use