Please note: The algorithm descriptions in English have been automatically translated. Errors may have been introduced in this process. For the original descriptions, go to the Dutch version of the Algorithm Register.
LIAS InControl
- Publication category
- Other algorithms
- Impact assessment
- Field not filled in.
- Status
- In use
General information
Theme
Begin date
Contact information
Link to publication website
Responsible use
Goal and impact
The LIAS AI text assistant helps municipal employees write clear and structured explanations of financial data. This makes the work easier and faster, and the texts more understandable to anyone reading them. These texts are published publicly. The AI text assistant thus indirectly helps make the municipality's communication to residents clearer.
Considerations
Clear and readable explanations of financial data are important. They help residents better understand what the figures are about. The AI text assistant helps employees write these texts more easily and quickly. This saves time and improves external communication.
A possible drawback is that the AI assistant may make mistakes or not always strike the right tone. Therefore, employees should always check the text themselves. If necessary, a colleague also reviews the text (four-eyes principle). The AI text assistant is a tool and not a substitute for human judgement.
The use of the AI assistant is not mandatory. Employees choose for themselves whether to use it. Since it is only a support tool and employees themselves remain responsible for the content, the use of this algorithm is reasonable and justified.
Human intervention
Use of the AI text assistant as a writing aid is voluntary. Texts are generated based on input from the writer. The text assistant improves and structures the supplied text; the result can then be copied over verbatim. These texts are always checked by the user.
Risk management
Use of the AI text assistant is voluntary and not compulsory. Employees using the AI text assistant have received prior training on how it works. The tool can only use predefined commands, which employees choose from. Texts created with the tool are always checked by the user.
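The safeguards described above, a fixed set of predefined commands and a mandatory user check before anything is published, can be sketched as a simple gate. This is a hypothetical illustration: the command names and the `Draft` structure are assumptions, not part of LIAS InControl.

```python
from dataclasses import dataclass

# Hypothetical whitelist of predefined commands employees can choose from.
ALLOWED_COMMANDS = {"improve_readability", "structure_text", "shorten"}


@dataclass
class Draft:
    text: str
    approved_by_user: bool = False  # the human check happens outside the tool


def run_assistant(command: str, text: str) -> Draft:
    """Reject anything outside the predefined command set and return a
    draft that still requires explicit user approval."""
    if command not in ALLOWED_COMMANDS:
        raise ValueError(f"Command not allowed: {command}")
    # Placeholder for the actual model call; here the text passes through.
    return Draft(text=text)


def publish(draft: Draft) -> str:
    """Texts are only published after the user has checked and approved them."""
    if not draft.approved_by_user:
        raise RuntimeError("Draft must be checked and approved by the user first")
    return draft.text
```

The key design point mirrored here is that approval is a separate, human step: the tool never flips `approved_by_user` itself.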
Legal basis
AI Act (AI Regulation), GDPR (Dutch: AVG)
Operations
Data
The data used as input (prompts) are texts written by the user. These texts explain or clarify financial data and do not contain personal data.
Technical design
LIAS uses OpenAI language models hosted as a service in a Microsoft Azure environment. The service does not exchange data with OpenAI's own services. Model inference and data processing take place within the European Economic Area (EEA). Prompts and responses are not shared or made available outside the application. In addition, all communication runs through the application itself.
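The design above can be sketched as the assembly of a single chat request to an Azure-hosted model. Everything specific in this sketch is an assumption: the endpoint name, the system prompt, and the temperature setting are illustrative only; the register entry states merely that the models run as a service in Azure within the EEA and that traffic stays inside the application.

```python
# Hypothetical EEA-region endpoint; the real endpoint is not published.
EEA_ENDPOINT = "https://example-eea-region.openai.azure.com"


def build_request(user_text: str) -> dict:
    """Assemble the JSON body for a chat-style call to the hosted model.
    No personal data is expected in the prompt (see the Data section)."""
    return {
        "messages": [
            {
                "role": "system",
                "content": "Improve and structure this explanation of financial data.",
            },
            {"role": "user", "content": user_text},
        ],
        # A low temperature is a common (assumed) choice for factual rewriting.
        "temperature": 0.2,
    }
```

Because the request is built and sent inside the application, prompts and responses never leave it, which is the property the technical design emphasizes.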
External provider
Similar algorithm descriptions
- Automated facial comparison and document check during first registration in the Basic Registry of Persons (BRP), resettlement applications and naturalisation applications for persons over 18 years old. This algorithm supports the BRP employee to prevent "look-alike fraud" and check the authenticity of a document. Last change on 8th of May 2025, at 11:13 (CET) | Publication Standard 1.0
- Publication category
- Impactful algorithms
- Impact assessment
- Field not filled in.
- Status
- In use
- Automated document check and facial comparison for registrations in the BRP, applications for travel documents and driving licences, and for reporting removals (intra-municipal and migrations). The tool checks the authenticity characteristics of identity documents and prevents "look-alike fraud". Last change on 1st of May 2025, at 12:11 (CET) | Publication Standard 1.0
- Publication category
- Impactful algorithms
- Impact assessment
- Field not filled in.
- Status
- In use
- The aim of the e-services with the underlying algorithm is to provide maximum support/guidance to residents and entrepreneurs when making a digital request. Last change on 19th of December 2024, at 14:09 (CET) | Publication Standard 1.0
- Publication category
- Impactful algorithms
- Impact assessment
- DPIA
- Status
- In use
- The algorithm identifies personal data and pre-entered words in documents. An employee must go through the document, check whether each alert is justified, and approve or reject it. An employee can also add further markings themselves. After approval by the employee, all approved alerts and markings are blacked out (redacted). Last change on 14th of October 2024, at 10:02 (CET) | Publication Standard 1.0
- Publication category
- Other algorithms
- Impact assessment
- Field not filled in.
- Status
- In use
- Automated document check and facial comparison when registering in the Basic Registry of Persons (BRP) and applying for identity documents for persons over 18 years old. This algorithm helps the registrant prevent "look-alike fraud". Sourced from the Rijksdienst voor Identiteitsgegevens (RvIG). Last change on 2nd of February 2024, at 7:56 (CET) | Publication Standard 1.0
- Publication category
- High-Risk AI-system
- Impact assessment
- Field not filled in.
- Status
- In use