Please note: The algorithm descriptions in English have been automatically translated. Errors may have been introduced in this process. For the original descriptions, go to the Dutch version of the Algorithm Register.

ChatAmsterdam

ChatAmsterdam is a generative AI assistant developed for the municipality of Amsterdam to promote AI literacy in the organisation and to use AI safely and responsibly as a tool in our operations.

Last change on 25th of March 2025, at 10:04 (CET) | Publication Standard 1.0
Publication category
Other algorithms
Impact assessment
DPIA, ...
Status
In development

General information

Theme

Organisation and business operations

Begin date

11-2024

End date

03-2025

Contact information

algoritmen@amsterdam.nl

Link to source registration

This is an internal repository that is only accessible to employees of the municipality of Amsterdam.

Responsible use

Goal and impact

Generative AI offers many opportunities to make work easier, such as finding information, summarising documents, analysing data or helping to write papers.

But this technology also has downsides: training AI takes a lot of energy and natural resources, the owners of the source material used to train the models are not paid, and the model, depending on its sources, can produce racist or otherwise ethically undesirable results.


Apart from these general criticisms, there are also risks for the organisation. Documents offered to a chatbot can be used by the supplier to train the model further. What if they contain sensitive information? Or what if an official consults the bot for information to support decision-making, while the model answers based on outdated or outright incorrect information?

Research is needed to explore the value of Generative AI for Amsterdam while identifying practical risks. Amsterdam employees will therefore start using an internal Generative AI chatbot for a limited period of time: ChatAmsterdam. This chatbot is built with OpenAI's GPT4.0 language model and is not otherwise modified or supplemented with (internal) training data.

During the research period, usage behaviour, value and risks will be assessed to determine whether we will use this technology in the future.

Considerations

We know that Generative AI is already being used by employees, and we recognise the opportunities and risks. There are plenty of applications to consider for the organisation, both internal and external. But before we allow Generative AI (within the compliance requirements) within the Municipality of Amsterdam, research is needed. How do people communicate with a bot? What kind of (sensitive) information is sent along in the prompts? What are the results used for? How valuable is the technology for the Amsterdam municipality and for the city's citizens and entrepreneurs?


These answers will help the Amsterdam municipality deploy the technology safely, responsibly and in a targeted manner.

Human intervention

The chatbot and underlying language model do not make independent decisions, nor are they part of an automatic decision-making process without human intervention.

All generated answers are reviewed by the user who submitted the question or command to the chatbot. The user then works with the answer, for example by adopting it in part or in full or by asking the chatbot follow-up questions. The user always stands between the chatbot's generated answer and the task or process the answer is used for.

Risk management

ChatAmsterdam is offered as an internal chatbot to employees of the Amsterdam municipality.

The chatbot application, data and hosting environment comply with the applicable BBN2 security standards to minimise the risk of data leaks and security incidents.

All prompts entered are stored and manually reviewed for potential ethical risks, bias and possible data leaks.

In addition, there is a two-tier filter function. Microsoft's prompt filter screens all prompts for dangerous and unethical content such as discrimination, suicide or abuse. The second filter is an internally developed prompt filter that specifically looks for risky content such as the use of sensitive data and ethnic profiling.

Prompts flagged as risky are stored separately and submitted to the relevant Risk/AI officers for content review.
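
As an illustration of how such a two-tier filter could be wired together, the Python sketch below combines a platform-level content-filter result with a simple internally defined check for risky terms. All names, categories and terms here are hypothetical placeholders, not ChatAmsterdam's actual implementation.

```python
# Minimal sketch of a two-tier prompt filter; categories, terms and function
# names are hypothetical placeholders, not ChatAmsterdam's actual implementation.
from dataclasses import dataclass, field

# Tier 1: categories a platform content filter (such as Azure's) might flag.
PLATFORM_BLOCKED = {"hate", "self_harm", "violence", "sexual"}

# Tier 2: illustrative internal patterns for risky municipal content.
INTERNAL_RISK_TERMS = ["bsn", "burgerservicenummer", "ethnicity", "nationality"]


@dataclass
class FilterResult:
    allowed: bool
    reasons: list[str] = field(default_factory=list)


def review_prompt(prompt: str, platform_flags: set[str]) -> FilterResult:
    """Apply both filter tiers and collect the reasons for flagging a prompt."""
    reasons = [f"platform:{c}" for c in platform_flags & PLATFORM_BLOCKED]
    lowered = prompt.lower()
    reasons += [f"internal:{t}" for t in INTERNAL_RISK_TERMS if t in lowered]
    return FilterResult(allowed=not reasons, reasons=reasons)


def store_for_review(prompt: str, result: FilterResult) -> None:
    """Flagged prompts would be stored separately for the Risk/AI officers."""
    if not result.allowed:
        print("Flagged for manual review:", result.reasons)


if __name__ == "__main__":
    prompt = "Summarise this memo about residents' nationality"
    store_for_review(prompt, review_prompt(prompt, set()))
```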

Elaboration on impact assessments

Amsterdam currently carries out its ethical impact assessment in the form of an Ethical Leaflet (Ethische Bijsluiter). This has been completed for this algorithm.

Impact assessment

  • Data Protection Impact Assessment (DPIA)
  • Ethische Bijsluiter Amsterdam

Operations

Data

ChatAmsterdam uses OpenAI's LLM, specifically GPT4.0. This is the same language model (algorithm) used in ChatGPT.

This language model has not been trained further with additional (internal) data. The language model is used as is, with no changes or fine-tuning.

Technical design

As a Generative AI chatbot, ChatAmsterdam uses the GPT4.0 language model in its current form, without adding any (internal) training data or other data. It is the same language model that ChatGPT uses.

ChatAmsterdam works in a similar way to ChatGPT. Employees can ask a question or issue a command (called a prompt) to the chat application and then receive a generated response. An example of this is summarising or rewriting texts.

The language model processes the submitted question or command, along with any information included with it, and then sends the generated answer back to the user.

External provider

The algorithm (in this case, the language model GPT4.0) is provided by Microsoft through the Azure cloud platform.
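
To give a concrete picture of the prompt-and-response flow described under Technical design, the sketch below shows how an application could send a prompt to a GPT-4-family model hosted on Azure and return the generated answer via the Azure OpenAI chat completions API. The endpoint, deployment name, API version and environment variables are placeholders, not ChatAmsterdam's actual configuration.

```python
# Minimal sketch of sending a prompt to an Azure-hosted OpenAI model;
# endpoint, deployment name and API version are placeholders, not
# ChatAmsterdam's real settings.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # e.g. https://<resource>.openai.azure.com
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)


def ask(prompt: str) -> str:
    """Send a single prompt to the deployed language model and return its answer."""
    response = client.chat.completions.create(
        model="gpt-4-deployment",  # name of the Azure deployment (placeholder)
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(ask("Summarise the following memo in three bullet points: ..."))
```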

Similar algorithm descriptions

  • Mai is a chatbot that answers general questions from citizens 24/7 on the municipality of Montferland's website. The chatbot replaces the current live chat to reduce waiting times and provide instant answers.

    Last change on 6th of February 2025, at 13:44 (CET) | Publication Standard 1.0
    Publication category
    Impactful algorithms
    Impact assessment
    DPIA
    Status
    In use
  • WooChat is an AI chatbot. The bot generates new texts based on sources provided as input, such as various books and the text of the Woo (the Dutch Open Government Act). The AI recognises patterns in texts and uses pre-entered knowledge to answer questions and to structure complex requests and information efficiently.

    Last change on 10th of April 2025, at 9:46 (CET) | Publication Standard 1.0
    Publication category
    Other algorithms
    Impact assessment
    Field not filled in.
    Status
    In use
  • A chatbot that answers general questions from citizens 24/7 on Gennep municipality's website. The chatbot should help reduce waiting times and provide instant answers.

    Last change on 25th of March 2025, at 12:32 (CET) | Publication Standard 1.0
    Publication category
    Impactful algorithms
    Impact assessment
    DPIA
    Status
    In development
  • Someone asks the chatbot a question. The chatbot responds with an answer generated by AI. The municipality of Veere feeds the chatbot with information (links to sites/knowledge base) and monitors/analyses chatbot usage.

    Last change on 18th of April 2024, at 14:14 (CET) | Publication Standard 1.0
    Publication category
    Other algorithms
    Impact assessment
    Field not filled in.
    Status
    In development
  • How does a chatbot help make information from cadastral data sources accessible to citizens? That's what we investigated with Loki.

    Last change on 4th of June 2024, at 11:19 (CET) | Publication Standard 1.0
    Publication category
    Other algorithms
    Impact assessment
    Field not filled in.
    Status
    In development