Please note: The algorithm descriptions in English have been automatically translated. Errors may have been introduced in this process. For the original descriptions, go to the Dutch version of the Algorithm Register.

Microsoft 365 Copilot

Employees can use Microsoft 365 Copilot: Copilot Chat for questions, texts and summaries, and agents that perform tasks such as taking notes or starting workflows. Controls regulate data access, privacy and features per user group.

Last change on 3rd of November 2025, at 7:27 (CET) | Publication Standard 1.0
Publication category
Impactful algorithms
Impact assessment
IAMA, DPIA
Status
In use

General information

Theme

Organisation and business operations

Begin date

2025-01

End date

--

Contact information

algoritmeregister@arnhem.nl

Link to publication website

https://opendata.arnhem.nl/

Link to source registration

Not available

Responsible use

Goal and impact

The purpose of Microsoft 365 Copilot is to support employees in their daily work. For example, Microsoft 365 Copilot helps with writing texts, creating summaries, drafting e-mails or preparing for meetings. This is done with the help of artificial intelligence (AI).


The impact is that employees can do their work faster and easier. They need to spend less time on standard tasks and can better focus on content and collaboration. Citizens and businesses notice this indirectly, for example because letters are clearer, answers come faster or meetings are better prepared.


Using Microsoft 365 Copilot contributes to increasing digital skills and familiarity with generative AI. At the same time, it requires conscious and responsible use. The municipality sees the opportunities of AI and follows technological developments with attention. There is a clear ambition to move with these advances, as long as this is done carefully and responsibly. To guide employees in this, ground rules have been drawn up for dealing with generative AI. In addition, several clusters are working on tactical policies that tie in with daily practice and contribute to the development of a comprehensive strategic framework for dealing with AI within the organisation.


Considerations

The municipality chooses to deploy Microsoft 365 Copilot to support employees in their daily work. For example, Microsoft 365 Copilot helps with writing texts, creating summaries and preparing for meetings. This saves time, ensures better document quality and gives employees more space to focus on content. It also contributes to job satisfaction and digital skills.


At the same time, there are risks. Microsoft 365 Copilot answers may contain errors and should therefore always be checked. Attention is also needed to privacy and secure data handling. Dependence on technology requires clear agreements and conscious use.


Alternatives such as ChatGPT have been considered but not found suitable for this type of support. Microsoft 365 Copilot is integrated into widely used programmes and is actively recommended within the organisation. Other applications are not recommended unless they are approved and supported by the information department.


Ethical considerations played a role in this choice, including comprehensibility, human control, equal treatment and transparency.


Employees are trained in the responsible use of artificial intelligence and specifically generative AI. Ground rules have been drawn up for dealing with this technology. In addition, the municipality is working on tactical policies from different clusters, as well as a strategic framework for dealing with AI. A DPIA and human rights test are in preparation.

Human intervention

Microsoft 365 Copilot supports employees in their work, but the outcomes are always used and assessed by humans. For example, when Microsoft 365 Copilot makes a text proposal, the employee checks whether the content is correct and appropriate to the situation. Even with summaries or e-mails, the employee remains responsible for the final version.


The deployment of Microsoft 365 Copilot thus requires active human intervention. Employees use the outcomes as a tool, but make the decisions themselves. They can modify the suggestions, add to them or ignore them altogether. This ensures the quality and reliability of the work.


There is no automatic decision-making. Microsoft 365 Copilot supports, but does not replace humans. Employees are trained in the responsible use of generative AI, so they know how to properly assess and adjust outcomes.
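To make this review step concrete: below is a minimal sketch in Python, with entirely hypothetical names (this is not part of the Copilot product or of any municipal tooling), of a workflow in which a generated suggestion only becomes final after an employee explicitly accepts, edits or discards it.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Draft:
    suggestion: str              # text proposed by Microsoft 365 Copilot
    final: Optional[str] = None  # only filled in after human review

def human_review(draft: Draft, decision: str,
                 edited_text: Optional[str] = None) -> Draft:
    """The employee decides; no path publishes unreviewed output."""
    if decision == "accept":
        draft.final = draft.suggestion   # employee approves as-is
    elif decision == "edit":
        draft.final = edited_text        # employee reworks the suggestion
    elif decision == "discard":
        draft.final = None               # employee ignores the output entirely
    else:
        raise ValueError(f"unknown decision: {decision}")
    return draft
```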

Risk management

When using Microsoft 365 Copilot, several risks come into play. These include technical risks, such as errors in generated texts, and legal risks around privacy and data use. There are also ethical risks, such as the risk of unequal treatment, unclear decision-making or loss of human control.


To mitigate these risks, employees are trained in the responsible use of AI, including generative AI such as Microsoft 365 Copilot. They learn how to check and adjust outcomes and how to handle information carefully. Ground rules have been drawn up for handling generative AI, so that it is clear what is and is not allowed.


The municipality is working on a strategic policy framework for dealing with AI. This includes periodic evaluation and monitoring of the use of Microsoft 365 Copilot, so that risks can be identified and addressed in a timely manner. The framework ties in with the existing information security and privacy policies, which are the guiding frameworks when deploying new technologies.


In addition, a DPIA (Data Protection Impact Assessment) and a human rights test are in preparation. These help to properly identify risks to privacy, transparency and public values.

Legal basis

The European AI Regulation forms the basis of municipal policy around the responsible development and deployment of AI and algorithms. It aims to ensure that AI systems are safe, work transparently and respect people's fundamental rights. The regulation distinguishes between different levels of risk associated with AI systems: the higher the risk, the stricter the rules. For generative AI such as Microsoft 365 Copilot, the obligations mainly concern transparency and human oversight.


Besides the AI Regulation, municipal policies rest on existing information security and privacy laws, such as the General Data Protection Regulation (GDPR). These laws ensure that personal data is properly protected and that employees handle data with care.

Links to legal bases

AI Act: https://eur-lex.europa.eu/legal-content/NL/TXT/PDF/?uri=OJ:L_202401689

Link to Processing Index

Not applicable

Elaboration on impact assessments

Both an IAMA and a DPIA are currently in progress.

Impact assessment

  • Human Rights and Algorithms Impact Assessment (IAMA): In progress
  • Data Protection Impact Assessment (DPIA): In progress

Operations

Data

Microsoft 365 Copilot only uses data that an individual employee has access to and that is actively used while working with Microsoft 365 Copilot. This could include documents, e-mails, calendar items, chats and files in programmes such as Word, Outlook, Teams, SharePoint and OneDrive. Microsoft 365 Copilot therefore does not work with all data within the organisation, but only with information opened, shared or used by the employee as part of a task. Sensitive information is only involved if the employee can access it themselves and uses it deliberately. No data is used from external sources such as the BRP (the Dutch Personal Records Database) or BKR (the Dutch credit registration bureau).
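As an illustration of this access-scoped behaviour, here is a minimal sketch in Python (hypothetical names throughout; the register does not describe Copilot's internals) of context selection that only considers documents the requesting employee could already open themselves:

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    doc_id: str
    text: str
    allowed_users: set = field(default_factory=set)  # user ids with access

def user_can_access(user_id: str, doc: Document) -> bool:
    # Hypothetical stand-in for the tenant's real permission model
    # (SharePoint/OneDrive ACLs, Conditional Access, MFA, etc.).
    return user_id in doc.allowed_users

def select_context(user_id: str, candidates: list) -> list:
    # Only documents the employee could open anyway become context;
    # nothing organisation-wide, no external registers such as BRP or BKR.
    return [d for d in candidates if user_can_access(user_id, d)]
```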


Under the municipal ground rules, employees are expressly advised against sharing personal data and business-sensitive information with generative AI tools such as Microsoft 365 Copilot. This helps to mitigate risks and to handle confidential data carefully.


Microsoft 365 Copilot also processes user input, such as questions or commands given by employees. These prompts and the generated responses are temporarily stored to support functionality, but are not used to train the AI models. Personal data such as names, e-mail addresses or job titles may be processed indirectly if they appear in documents or communications, but are not systematically stored or shared.


The municipality ensures that employees are aware of the use of data in Microsoft 365 Copilot. Ground rules have been drawn up, and a DPIA and human rights test are in progress to ensure proper handling of data. The use of Microsoft 365 Copilot falls under the existing information security and privacy policy.

Technical design

Microsoft 365 Copilot uses large language models (LLMs), such as GPT-4, GPT-4 Turbo, GPT-5 and Claude Opus 4.1 (Anthropic), hosted via the Azure OpenAI Service. These models are trained on large amounts of text data and are able to generate, interpret and structure human-like texts. Microsoft 365 Copilot is not self-learning: it does not learn based on user interaction. Updates to the model are performed centrally by Microsoft.

Input:

Input consists of prompts entered by employees.


Processing:

After a prompt is entered, it is processed through a combination of:

  • Contextual enrichment: the prompt is augmented with relevant information from the user's work environment, such as documents, e-mails or calendar items, insofar as these have been shared.
  • Access control: Microsoft 365 Copilot only uses data to which the employee themselves has access and which is actively used. There is no access to organisation-wide data or external sources.
  • Model processing: the enriched prompt is forwarded to the language model, which produces a generated response based on the context and task (see the sketch after this list).
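A minimal sketch of this three-step flow follows, assuming a generic Azure OpenAI chat deployment and reusing the hypothetical `select_context` filter and `Document` type from the Data section above; the real Copilot orchestration is proprietary and not documented in this register.

```python
from openai import AzureOpenAI  # assumes the `openai` Python SDK (v1+)

client = AzureOpenAI(
    azure_endpoint="https://example.openai.azure.com",  # placeholder endpoint
    api_key="<key>",          # placeholder; real tenants use managed identities
    api_version="2024-06-01",
)

def answer_prompt(user_id: str, prompt: str, candidates: list) -> str:
    # 1. Contextual enrichment: gather work context shared by the user,
    #    filtered down to what the user is allowed to access.
    context = select_context(user_id, candidates)
    grounding = "\n\n".join(d.text for d in context)

    # 2. Model processing: forward the enriched prompt to the language model.
    #    The call is stateless; nothing here is used to train the model.
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder deployment name, not Copilot's routing
        messages=[
            {"role": "system",
             "content": "Assist the employee using only the supplied context."},
            {"role": "user", "content": f"{grounding}\n\n{prompt}"},
        ],
    )

    # 3. Output: a draft that the employee reviews, edits or discards.
    return response.choices[0].message.content
```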


Processing takes place within the organisation's secure Microsoft 365 environment. Microsoft 365 Copilot respects existing access rights, Conditional Access and MFA settings. The generated output is not used for model training and is temporarily stored for functional purposes.

Output:

The output is a generated text, summary, proposal, analysis or other type of content, depending on the application and the purpose of the prompt. The employee always retains control over the end result and can modify, add to or ignore the output.

Architecture and model selection:

Microsoft 365 Copilot uses multiple models, including GPT-4, GPT-4 Turbo, GPT-5 and Claude Opus 4.1 (Anthropic), depending on the task and agent type.

External provider

Microsoft

Link to code base

Not available

Similar algorithm descriptions

  • Employees may use Microsoft 365 CoPilot as a smart chatbot. With it, they can have texts created and get detailed answers to questions. Our rules state that employees must not use secret or confidential information. Also, employees must always check carefully whether the information is correct.

    Last change on 16th of September 2025, at 10:32 (CET) | Publication Standard 1.0
    Publication category
    Other algorithms
    Impact assessment
    Field not filled in.
    Status
    In use
  • Employees can use Microsoft 365 CoPilot as an AI chatbot to generate texts and get comprehensive answers to questions. Some principles described in our guidelines are that no internal or confidential information should be used and that the information should be properly checked for accuracy.

    Last change on 27th of March 2025, at 13:24 (CET) | Publication Standard 1.0
    Publication category
    Other algorithms
    Impact assessment
    DPIA, IAMA
    Status
    In use
  • Employees can use Microsoft 365 Copilot Chat (hereafter Copilot Chat). Copilot Chat is a generative AI tool offered by Microsoft, where county employees with a county Microsoft account can ask questions, generate texts and images, and create translations and summaries.

    Last change on 15th of April 2025, at 9:56 (CET) | Publication Standard 1.0
    Publication category
    Other algorithms
    Impact assessment
    DPIA, IAMA
    Status
    In use
  • Employees can use Microsoft 365 Copilot Chat (hereafter Copilot Chat). Copilot Chat is a generative AI tool offered by Microsoft, where employees with a Microsoft account from the municipality can ask questions, generate texts and images, and create translations and summaries.

    Last change on 30th of September 2025, at 12:24 (CET) | Publication Standard 1.0
    Publication category
    Impactful algorithms
    Impact assessment
    Field not filled in.
    Status
    In use
  • Employees can use Microsoft CoPilot to generate texts and images or to get answers to questions. No confidential information may be used in the process, nor any personal data. An important condition is that the information is good, reliable and checked for accuracy.

    Last change on 7th of April 2025, at 13:51 (CET) | Publication Standard 1.0
    Publication category
    Other algorithms
    Impact assessment
    Field not filled in.
    Status
    In use