Please note: The algorithm descriptions in English have been automatically translated. Errors may have been introduced in this process. For the original descriptions, go to the Dutch version of the Algorithm Register.

Microsoft Copilot

Employees can use Microsoft Copilot to generate texts and images or to get answers to questions. Neither confidential information nor personal data may be used in the process. An important condition is that the information is accurate, reliable and checked for correctness.

Last change on 10th of November 2025, at 13:27 (CET) | Publication Standard 1.0
Publication category
Other algorithms
Impact assessment
Field not filled in.
Status
In use

General information

Theme

Organisation and business operations

Begin date

2025-01

Contact information

https://bergenopzoom.nl/contact-en-openingstijden

Responsible use

Goal and impact

The purpose of Microsoft 365 Copilot is to support employees in their daily work. For example, Microsoft 365 Copilot helps with writing texts, creating summaries, drafting e-mails or preparing for meetings. This is done with the help of artificial intelligence (AI).


The impact is that employees can do their work faster and easier. They need to spend less time on standard tasks and can better focus on content and collaboration. Citizens and businesses notice this indirectly, for example because letters are clearer, answers come faster or meetings are better prepared.


Using Microsoft 365 Copilot contributes to increasing digital skills and familiarity with generative AI. At the same time, it requires conscious and responsible use. The municipality sees the opportunities of AI and closely follows technological developments. There is a clear ambition to keep pace with these advances, as long as this is done carefully and responsibly. To guide employees in this, ground rules have been drawn up for dealing with generative AI. In addition, several clusters are working on tactical policies that tie in with daily practice and contribute to the development of a comprehensive strategic framework for dealing with AI within the organisation.

Considerations


The municipality chooses to use Microsoft 365 Copilot to support employees in their daily work. For example, Microsoft 365 Copilot helps with writing texts, creating summaries and preparing for meetings. This saves time, ensures better document quality and gives employees more space to focus on content. It also contributes to job satisfaction and digital skills.


At the same time, there are risks. Microsoft 365 Copilot answers may contain errors and should therefore always be checked. Attention is also needed to privacy and secure data handling. Dependence on technology requires clear agreements and conscious use.


The municipality has opted for Microsoft 365 Copilot and is committed to raising awareness around generative AI. Microsoft 365 Copilot is integrated into widely used programmes and recommended within the organisation.


Where necessary, ethical considerations are weighed in making this choice, such as comprehensibility, human control, equal treatment and transparency.


Employees are trained in the responsible use of artificial intelligence, and specifically generative AI. Ground rules have been drawn up for dealing with this technology. In addition, the municipality is working on strategic and tactical policies for dealing with AI. A DPIA and a human rights assessment are carried out where necessary.

Human intervention


Microsoft 365 Copilot supports employees in their work, but the outcomes are always used and assessed by humans. For example, when Microsoft 365 Copilot makes a text proposal, the employee checks whether the content is correct and appropriate to the situation. Even with summaries or e-mails, the employee remains responsible for the final version.


The deployment of Microsoft 365 Copilot thus requires active human intervention. Employees use the outcomes as a tool, but make the decisions themselves. They can modify the suggestions, add to them or ignore them altogether. This ensures the quality and reliability of the work.


There is no automatic decision-making. Microsoft 365 Copilot supports, but does not replace humans. Employees are trained in the responsible use of generative AI, so they know how to properly assess and adjust outcomes.

Risk management


When using Microsoft 365 Copilot, several risks come into play. These include technical risks, such as errors in generated texts, and legal risks around privacy and data use. There are also ethical risks, such as the risk of unequal treatment, unclear decision-making or loss of human control.


To mitigate these risks, employees are trained in the responsible use of AI, including generative AI such as Microsoft 365 Copilot. They learn how to check outcomes, adjust them and handle information carefully. Ground rules have been drawn up for handling generative AI so that it is clear what is and is not allowed. This is also anchored in the internal code of conduct.


The municipality is working on a strategic policy framework for dealing with AI. This includes periodic evaluation and monitoring of the use of Microsoft 365 Copilot in pilots. In this way, risks can be identified and addressed in a timely manner. The framework is in line with the standards around information security and privacy, which are considered leading when deploying new technologies.


Legal basis

The European AI Regulation forms the basis of municipal policy around the responsible development and deployment of AI and algorithms. It aims to ensure that AI systems are safe, work transparently and respect people's fundamental rights. The regulation distinguishes between different levels of risk associated with AI systems. The higher the risk, the stricter the rules. For generative AI such as Microsoft 365 Copilot, there are especially obligations around transparency and human oversight.


Besides the AI regulation, municipal policies rest on existing information security and privacy laws, such as the General Data Protection Regulation (GDPR). These laws ensure that personal data is properly protected and that employees handle data with care.

Links to legal bases

AI Act: https://eur-lex.europa.eu/legal-content/NL/TXT/PDF/?uri=OJ:L_202401689

Operations

Data

Microsoft 365 Copilot only uses data that an individual employee has access to and that is actively used while working with Microsoft 365 Copilot. This may include documents, e-mails, calendar items, chats and files in programmes such as Word, Outlook, Teams, SharePoint and OneDrive, provided the employee has a licence. Microsoft 365 Copilot therefore does not work with all data within the organisation, but only with information opened, shared or used by the employee for a task. Sensitive information is only involved if the employee can access it themselves and deploys it consciously. No data from external sources such as the BRP or BKR are used.
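This scoping rule can be illustrated with a minimal sketch. All names here (`scoped_items`, the `access` field, and so on) are hypothetical and do not reflect Microsoft's actual data model; the sketch only shows the stated principle that both conditions, existing access and active use, must hold.

```python
# Minimal sketch of the data-scoping rule: Copilot works only with items
# the individual employee can already open AND is actively using in the
# current task. All names are hypothetical.

def scoped_items(employee, items, in_use):
    """Return only the items this employee may access and actively uses."""
    return [item for item in items
            if employee in item["access"] and item["id"] in in_use]

# Example: three documents, two of them in active use by alice.
documents = [
    {"id": "memo", "access": {"alice", "bob"}},
    {"id": "budget", "access": {"bob"}},    # alice has no access
    {"id": "notes", "access": {"alice"}},   # alice is not using this one
]
visible = scoped_items("alice", documents, in_use={"memo", "budget"})
# Only "memo" passes both checks.
```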


Under the municipal ground rules, employees are expressly advised against sharing personal data and business-sensitive information with generative AI such as Microsoft 365 Copilot. This helps to mitigate risks and to handle confidential data carefully.


Microsoft 365 Copilot also processes user input, such as questions or commands given by employees. These prompts and the generated responses are temporarily stored to support functionality, but not used for training the AI models. Personal data such as name, e-mail address or position may be processed indirectly if they appear in documents or communications, but are not structurally stored or shared.


The municipality ensures that employees are aware of the use of data in Microsoft 365 Copilot. Ground rules have been drawn up and a DPIA or human rights test is used where necessary.

Technical design

Microsoft 365 Copilot uses large language models (LLMs), such as GPT-4, GPT-4 Turbo, GPT-5 and Claude Opus 4.1 (Anthropic), hosted via the Azure OpenAI Service. These models are trained on large amounts of text data and are able to generate, interpret and structure human-like texts. Microsoft 365 Copilot is not self-learning: it does not learn based on user interaction. Updates to the model are performed centrally by Microsoft.

Input:

Input consists of prompts entered by employees.


Processing:

After a prompt is entered, it is processed through a combination of:

  • Contextual enrichment: if the employee is licensed, the prompt is augmented with relevant information from the user's working environment, such as documents, e-mails or calendar entries, where these are shared. If the user does not have a licence, only the prompt itself is processed.
  • Access control: Microsoft 365 Copilot only uses data to which the employee themselves has access (where a licence applies) and which is actively deployed.
  • Model processing: the enriched prompt is forwarded to the language model, which produces a generated response based on the context and command.
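Taken together, these steps can be sketched as follows. This is an illustration only: `process_prompt`, the `readers` field and the stub model are hypothetical and do not reflect Microsoft's internal design; the sketch only shows the stated behaviour that enrichment is gated by licence and access rights before the model is called.

```python
# Illustrative sketch of the processing chain: contextual enrichment
# (licensed users only), access control, and model processing.

def process_prompt(user, prompt, workspace, llm, licensed):
    context = []
    if licensed:
        # Enrichment restricted by access control: only items the
        # user can already open are added to the prompt's context.
        context = [item["text"] for item in workspace
                   if user in item["readers"]]
    # Model processing: the (possibly enriched) prompt yields a response.
    return llm({"prompt": prompt, "context": context})

# Stub standing in for the hosted language model.
def stub_llm(request):
    return f"response built from {len(request['context'])} context items"

workspace = [
    {"text": "project plan", "readers": {"alice"}},
    {"text": "hr file", "readers": {"hr"}},  # alice cannot open this
]
licensed_reply = process_prompt("alice", "summarise", workspace,
                                stub_llm, licensed=True)
bare_reply = process_prompt("alice", "summarise", workspace,
                            stub_llm, licensed=False)
```

Without a licence the context stays empty, so only the bare prompt reaches the model, matching the enrichment step described above.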


Processing takes place within the organisation's secure Microsoft 365 environment. Microsoft 365 Copilot respects existing access rights, such as Conditional Access and MFA settings. The generated output is not used for model training and is temporarily stored for functional purposes.

Output:

The output is a generated text, summary, proposal, analysis or other type of content, depending on the application and the purpose of the prompt. The employee always retains control over the end result and can modify, add to or ignore the output.

Architecture and model selection:

Microsoft 365 Copilot uses multiple models, including GPT-4, GPT-4 Turbo, GPT-5 and Claude Opus 4.1 (Anthropic).

External provider

Microsoft

Similar algorithm descriptions

  • Employees can use Microsoft Copilot to generate texts and images or to get answers to questions. Neither confidential information nor personal data may be used in the process. An important condition is that the information is accurate, reliable and checked for correctness.

    Last change on 7th of April 2025, at 13:51 (CET) | Publication Standard 1.0
    Publication category
    Other algorithms
    Impact assessment
    Field not filled in.
    Status
    In use
  • Employees can use Microsoft 365 CoPilot as an AI chatbot to generate texts and get comprehensive answers to questions. Some principles described in our guidelines are that no internal or confidential information should be used and that the information should be properly checked for accuracy.

    Last change on 27th of March 2025, at 13:24 (CET) | Publication Standard 1.0
    Publication category
    Other algorithms
    Impact assessment
    DPIA, IAMA
    Status
    In use
  • Employees may use Microsoft 365 CoPilot as a smart chatbot. With it, they can have texts created and get detailed answers to questions. Our rules state that employees must not use secret or confidential information. Also, employees must always check carefully whether the information is correct.

    Last change on 16th of September 2025, at 10:32 (CET) | Publication Standard 1.0
    Publication category
    Other algorithms
    Impact assessment
    Field not filled in.
    Status
    In use
  • Employees can use Microsoft 365 Copilot: Copilot Chat for questions, texts and summaries. Agents perform tasks such as taking notes or starting workflows. Controls regulate data access, privacy and features by user group.

    Last change on 3rd of November 2025, at 7:27 (CET) | Publication Standard 1.0
    Publication category
    Impactful algorithms
    Impact assessment
    IAMA, DPIA
    Status
    In use
  • Internally, Microsoft Copilot is used. It is integrated into applications such as Outlook, Word and Excel and is made available to employees on demand. Employees use it mainly for summarising or producing texts.

    Last change on 3rd of November 2025, at 13:20 (CET) | Publication Standard 1.0
    Publication category
    Other algorithms
    Impact assessment
    Field not filled in.
    Status
    In use