Please note: The algorithm descriptions in English have been automatically translated. Errors may have been introduced in this process. For the original descriptions, go to the Dutch version of the Algorithm Register.

ChatAmsterdam

Study on deployment of a Generative AI chatbot for employees of the Municipality of Amsterdam

Last change on 21st of October 2024, at 15:57 (CET) | Publication Standard 1.0
Publication category
High-Risk AI-system
Impact assessment
DPIA, ...
Status
In development

General information

Theme

Organisation and business operations

Begin date

11-2024

End date

03-2025

Contact information

algoritmen@amsterdam.nl

Link to source registration

This is an internal repository that is only accessible to employees of the Municipality of Amsterdam.

Responsible use

Goal and impact

Generative AI offers many opportunities to make work easier, such as finding information, summarising documents, analysing data or helping to write papers.

But this technology also has downsides: training AI takes a lot of energy and natural resources, owners of the source material used to train the models are not paid, and the model can, depending on its sources, produce racist or otherwise ethically undesirable results.


Apart from these general criticisms, there are also risks to the organisation. Documents offered to a chatbot can be used by the supplier to further train the model. What if they contain sensitive information? Or what if an official consults the bot for information to support decision-making, while the model answers on the basis of outdated or outright incorrect information?

Research is needed to explore the value of Generative AI for Amsterdam while identifying practical risks. Amsterdam employees will therefore start using an internal Generative AI chatbot for a limited period of time: ChatAmsterdam. This chatbot is built with OpenAI's GPT4.0 language model and is not otherwise modified or supplemented with (internal) training data.

During the research period, usage behaviour, value and risks will be assessed to determine whether we will use this technology in the future.

Considerations

We know that Generative AI is already being used by employees, and we recognise the opportunities and risks. There are plenty of applications to consider for the organisation, both internal and external. But before we allow Generative AI (within the compliance requirements) within the Municipality of Amsterdam, research is needed. How do people communicate with a bot? What kind of (sensitive) information is sent along in the prompts? What are the results used for? How valuable is the technology for the Amsterdam municipality and the city's citizens and entrepreneurs?


These answers will help the Amsterdam municipality deploy the technology safely, responsibly and in a targeted manner.

Human intervention

The chatbot and underlying language model do not make independent decisions, nor are they part of an automatic decision-making process without human intervention.

All generated answers are reviewed by the user who submitted the question or command to the chatbot. The user then works with the answer, for example by adopting it in part or in full, or by asking the chatbot follow-up questions. The user always stands between the chatbot's generated answer and the task or process the answer is used for.

Risk management

ChatAmsterdam is offered as an internal chatbot to internal employees of the Amsterdam municipality.

The chatbot application, data and hosting environment comply with applicable BBN2 security standards to minimise risk of data leaks and security incidents.

All prompts entered are stored and manually reviewed for potential ethical risks, bias and possible data leaks.

In addition, there is a two-tier filter function. Microsoft's prompt filter screens all prompts for dangerous and unethical content such as discrimination, suicide or abuse. The second filter is an internally developed prompt filter that specifically looks for risky content such as the use of sensitive data, ethnic profiling, et cetera.

Prompts flagged as risky are stored separately and submitted to the relevant Risk/AI officers for content review.
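The internal prompt filter itself is not publicly documented. Purely as an illustration, the sketch below shows one way a rule-based second-tier check could work; the patterns (a 9-digit BSN-like number, an e-mail address, a small keyword list) and function names are hypothetical and are not the municipality's actual filter.

```python
import re

# Hypothetical, illustrative rules only; not the actual ChatAmsterdam filter.
SENSITIVE_PATTERNS = {
    "possible BSN (9-digit citizen service number)": re.compile(r"\b\d{9}\b"),
    "e-mail address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}
RISKY_KEYWORDS = {"etnisch profileren", "ethnic profiling"}  # example keyword list


def flag_prompt(prompt: str) -> list[str]:
    """Return the reasons, if any, why a prompt should be set aside for review."""
    reasons = [label for label, pattern in SENSITIVE_PATTERNS.items()
               if pattern.search(prompt)]
    lowered = prompt.lower()
    reasons += [f"keyword: {kw}" for kw in RISKY_KEYWORDS if kw in lowered]
    return reasons


# Flagged prompts would be stored separately and handed to the Risk/AI officers.
if __name__ == "__main__":
    example = "Summarise this complaint from a resident with BSN 123456789."
    print(flag_prompt(example))  # -> ['possible BSN (9-digit citizen service number)']
```

In this sketch, Microsoft's content filter would run first and the internal check second; together they determine which prompts are stored separately for manual review.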

Elaboration on impact assessments

Amsterdam conducts its ethical impact assessment in the form of an Ethical Leaflet (Ethische Bijsluiter). This has been completed for this algorithm.

Impact assessment

  • Data Protection Impact Assessment (DPIA)
  • Ethische Bijsluiter (Ethical Leaflet)

Operations

Data

ChatAmsterdam uses an OpenAI LLM, specifically GPT4.0. This is the same language model (algorithm) used in ChatGPT.

This language model has not been trained further with additional (internal) data. The language model is used as is, with no changes or fine-tuning.

Technical design

As a Generative AI chatbot, ChatAmsterdam uses the GPT4.0 language model in its current form, without adding any (internal) training data or other data. It is the same language model that ChatGPT uses.

ChatAmsterdam works in a similar way to ChatGPT. Employees can ask a question or issue a command (called a prompt) to the chat application and then receive a generated response. An example of this is summarising or rewriting texts.

The language model processes the submitted question or command, together with any information included with it, and then returns the generated answer to the user.

External provider

The algorithm (in this case, the language model GPT4.0) is provided by Microsoft through the Azure cloud platform.
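For illustration, the sketch below shows what this prompt/response flow could look like when calling a GPT-4 deployment on Azure OpenAI with the `openai` Python package. The endpoint, deployment name, API version and system message are placeholders chosen for the example; this is a minimal sketch of the general pattern, not the actual ChatAmsterdam implementation.

```python
import os

from openai import AzureOpenAI  # pip install openai

# Placeholder endpoint, deployment name and API version; not the real ChatAmsterdam setup.
client = AzureOpenAI(
    azure_endpoint="https://example-resource.openai.azure.com",
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)


def ask(prompt: str) -> str:
    """Send a single prompt to the GPT-4 deployment and return the generated answer."""
    response = client.chat.completions.create(
        model="gpt-4",  # name of the Azure OpenAI deployment
        messages=[
            {"role": "system", "content": "You are an assistant for municipal employees."},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content


print(ask("Summarise the main points of this memo: ..."))
```

In such a setup, the chat application only forwards prompts and displays answers; the model itself runs unmodified on the provider's platform, which matches the "as is, no fine-tuning" description above.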

Similar algorithm descriptions

  • To support jobseekers in their orientation or job search, the Amsterdam municipality offers citizens an online matching platform as a tool.

    Last change on 27th of November 2024, at 16:52 (CET) | Publication Standard 1.0
    Publication category
    Other algorithms
    Impact assessment
    Field not filled in.
    Status
    In development
  • To apply for income support, the municipality of Arnhem uses Centric's eDienst. Based on read-in data and answers provided by the applicant, the algorithm determines whether the applicant is eligible for social assistance (subsistence) benefits.

    Last change on 2nd of July 2024, at 9:52 (CET) | Publication Standard 1.0
    Publication category
    Impactful algorithms
    Impact assessment
    DPIA
    Status
    In use
  • Users can ask a chatbot questions about published Woo requests from the municipality of Coevorden. The chatbot bases its answers and summaries only on documents published by the municipality.

    Last change on 20th of November 2024, at 10:50 (CET) | Publication Standard 1.0
    Publication category
    Other algorithms
    Impact assessment
    Field not filled in.
    Status
    In use
  • The virtual assistant Gem is a digital helper on Roosendaal's websites that answers questions from citizens and businesses. Using the chat button on the websites, you can talk to Gem.

    Last change on 7th of August 2024, at 9:44 (CET) | Publication Standard 1.0
    Publication category
    Impactful algorithms
    Impact assessment
    Field not filled in.
    Status
    In use
  • Mai is a chatbot that answers general questions from citizens 24/7 on the municipality of Montferland's website. The chatbot replaces the current live chat to reduce waiting times and provide instant answers.

    Last change on 2nd of December 2024, at 11:21 (CET) | Publication Standard 1.0
    Publication category
    Impactful algorithms
    Impact assessment
    DPIA
    Status
    In use