Please note: The algorithm descriptions in English have been automatically translated. Errors may have been introduced in this process. For the original descriptions, go to the Dutch version of the Algorithm Register.
Exploring Generative AI in the tender evaluation process
- Publication category: Other algorithms
- Impact assessment: Field not filled in.
- Status: In development
General information
Responsible use
Goal and impact
Within municipal tenders, much time is spent reading, interpreting and reviewing supplier documentation. This process is labour-intensive, prone to differences in interpretation and barely supported by smart technology. The Municipality of Almere wants to explore whether Generative AI can help organise this process more efficiently, without sacrificing transparency, accuracy and safety.
In cooperation with Eindhoven University of Technology (TU/e) and Fontys University of Applied Sciences, Gemeente Almere has started a project. For TU/e and Fontys, the project ties in with ongoing PhD research into trust in AI and into collaboration between humans and technology in decision-making processes.
The project started on 1 September 2025 and is in the pilot and test phase. In this phase, we investigate the deployment of the algorithm exclusively in a controlled research environment, with the aim of exploring its potential added value. The project will run until the end of 2025. During this period, interim evaluations will take place to assess operation, ethical considerations and process design.
The aim of this project is to explore, in cooperation with TU Eindhoven, whether and in what way Generative AI (GenAI) can support the assessment of tender documents. The focus is on strengthening the assessment process by supporting, not replacing, human judgement, and on speeding up the inventory process without compromising due diligence or transparency.
Considerations
The consideration of advantages and disadvantages takes place within the context of this pilot. Potential benefits such as time savings, consistency and initial analyses are critically contrasted with risks such as overreliance, possible bias and limited explainability. Ethics play a central role here, with a focus on human autonomy, transparency and inclusion. The pilot provides space to carefully test these aspects before considering further application.
Human intervention
The pilot incorporates structural human intervention: the AI only generates suggestions or summaries based on documents, but each output is manually reviewed by an experienced professional. Final responsibility remains with the human. In addition, the prompt development specifically takes into account ethical principles: the model is for support only, should not suggest preferences and should not make discriminatory choices.
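The review gate described above can be sketched in code. This is a minimal illustration of the human-in-the-loop principle, not the pilot's actual implementation; the `Draft` structure and `release` function are hypothetical names invented for this example.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Draft:
    """An AI-generated suggestion or summary (hypothetical structure)."""
    text: str
    reviewed_by: Optional[str] = None  # name of the human reviewer, if any
    approved: bool = False

def release(draft: Draft) -> str:
    """Only drafts that a human has reviewed and approved may leave the
    system; final responsibility stays with the professional."""
    if draft.reviewed_by is None or not draft.approved:
        raise PermissionError("Output requires manual review by a professional.")
    return draft.text

# A raw model suggestion cannot be released on its own ...
d = Draft(text="Draft summary of tender document X.")
# ... until an experienced professional has signed off on it.
d.reviewed_by = "assessor"
d.approved = True
print(release(d))
```

The point of the pattern is that the approval step is structural rather than optional: unreviewed output is unreachable by construction.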
Risk management
Within this pilot phase, risks were identified and an EU-hosted platform that meets privacy and data security requirements was deliberately chosen. Although no formal DPIA was conducted due to the research and experimental nature, the principles of a DPIA were followed, such as data minimisation, transparency, human control and avoidance of bias. If the system is eventually deployed more widely, a full DPIA will be recommended.
Operations
Data
Within this pilot phase, the algorithm does not process direct personal data such as name, address or BSN. However, we do analyse documents such as tender texts and policy documents that may contain indirect or context-sensitive information. If personal data is nevertheless encountered, processing takes place only within a secure, EU-hosted environment and in accordance with the requirements of the AVG (GDPR).
Technical design
The algorithm involves a Large Language Model (LLM) similar to GPT, but it is not publicly or commercially hosted within this pilot. Instead, it runs on the Azure AI Foundry platform in a closed, EU-hosted test environment. The model processes input texts such as policy documents and generates draft answers or summaries from them. The model is not retrained on new data, and no data is stored unless it is explicitly part of the study.
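A request to such a deployment might be structured as below. This is a sketch under stated assumptions, not the pilot's configuration: the deployment name, system prompt and temperature are placeholders, and `client` stands for any OpenAI-style client (such as `openai.AzureOpenAI` pointed at an EU endpoint).

```python
def build_summary_request(document_text: str, deployment: str = "gpt-tender-pilot") -> dict:
    """Build a chat-completion payload. The system prompt encodes the
    pilot's constraints: support only, no supplier preferences.
    Deployment name and prompt wording are hypothetical."""
    return {
        "model": deployment,
        "messages": [
            {"role": "system",
             "content": ("You summarise tender documents for a human assessor. "
                         "Do not express preferences between suppliers.")},
            {"role": "user", "content": document_text},
        ],
        "temperature": 0.0,  # deterministic drafts ease manual review
    }

def draft_summary(client, document_text: str) -> str:
    """`client` is any object exposing an OpenAI-style
    chat.completions.create(**payload) method."""
    response = client.chat.completions.create(**build_summary_request(document_text))
    return response.choices[0].message.content
```

Injecting the client keeps the sketch testable without network access and makes the "no data stored" property a matter of client configuration rather than application code.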
Similar algorithm descriptions
- Generative AI (artificial intelligence) creates summaries of existing objection opinions. These support lawyers in their information needs and ensure faster legal assessment of new objections. (Last change on 9th of October 2025, at 10:56 (CET) | Publication Standard 1.0)
- Publication category: Other algorithms
- Impact assessment: DPIA, The Ethical Guide
- Status: In use
- To correctly determine residents' eligibility, we use an algorithm as a source of information, for example by pre-determining whether the required information fields in the application are filled in. (Last change on 12th of July 2024, at 9:55 (CET) | Publication Standard 1.0)
- Publication category: Impactful algorithms
- Impact assessment: Ethics, DPIA
- Status: In use
- To speed up reintegration into work, citizens on welfare benefits are obliged to apply for jobs. To support this process, Stichting Inlichtingenbureau (IB)* provides municipalities with enrolment data from colleges and universities. This makes it possible to determine whether enrolment in a course of study stands in the way of working. (Last change on 5th of August 2025, at 13:02 (CET) | Publication Standard 1.0)
- Publication category: Impactful algorithms
- Impact assessment: DPIA
- Status: In use
- Employees can use the LIAS AI text assistant to write explanatory notes for parts of Planning & Control documents, such as the budget and annual accounts. The AI text assistant helps make texts clearer and more structured, so writing is faster and easier and the text becomes more readable. (Last change on 21st of August 2025, at 11:26 (CET) | Publication Standard 1.0)
- Publication category: Other algorithms
- Impact assessment: Field not filled in.
- Status: In use
- The aim of the e-services with the underlying algorithm is to provide maximum support and guidance to residents and entrepreneurs when making a digital request. (Last change on 19th of December 2024, at 14:09 (CET) | Publication Standard 1.0)
- Publication category
- Impactful algorithms
- Impact assessment
- DPIA
- Status
- In use