Please note: The algorithm descriptions in English have been automatically translated. Errors may have been introduced in this process. For the original descriptions, go to the Dutch version of the Algorithm Register.

Microsoft CoPilot

Employees can use Microsoft 365 CoPilot as an AI chatbot to generate text and obtain comprehensive answers to questions. Our guidelines include the principles that no internal or confidential information may be entered into the tool and that generated information must be checked for accuracy.

Last change on 27th of March 2025, at 13:24 (CET) | Publication Standard 1.0
Publication category
Other algorithms
Impact assessment
DPIA, IAMA
Status
In use

General information

Theme

Organisation and business operations

Begin date

2023-06

Contact information

privacy@veiligheidsregioaa.nl

Responsible use

Goal and impact

The VrAA regards AI as the beginning of a systemic change. Several areas of knowledge converge in this technology, making a multidisciplinary approach (information security, information assurance, privacy, communication, etc.) necessary to deal responsibly with both the significant opportunities and the risks it presents.

Considerations

The VrAA is familiar with the DPIA and FRAIA carried out by SLM Rijk (the Dutch central government's strategic supplier management) and has factored these into its risk assessment. Policies and internal guidelines have been developed and published to enable responsible use. In addition, certain integrations of CoPilot in the Microsoft 365 environment have been disabled to prevent automatic and unintended use of the application as much as possible.

Human intervention

The current policy is that generated content is always checked by the user and, where necessary, also by a colleague (the four-eyes principle) before being used.

Risk management

Existing policies on information security, privacy, procurement, staff codes of conduct and requirements for cloud services form the starting point for risk management around generative AI.

In addition, guidelines for the responsible use of generative AI have been drafted to give employees clarity on what is and is not permitted. Internal awareness, communication and training on AI literacy are also being developed, and AI is a focus area within existing compliance processes.

Elaboration on impact assessments

The DPIA and FRAIA from SLM Rijk were reviewed, and the identified risks were assessed by the CISO, PO and FG (Data Protection Officer) for relevance and applicability to use within the experiment.

Impact assessment

  • Data Protection Impact Assessment (DPIA): https://slmmicrosoftrijk.nl/wp-content/uploads/2024/12/20241217-Memo-M365-Copilot-DPIA-en-FRAIA.pdf
  • Impact Assessment Mensenrechten en Algoritmes (IAMA): See the link above.

Operations

Data

The existing policies on information security and privacy regarding the integrity and responsible use of information continue to apply, and participants must take these into account when using CoPilot. In addition, separate rules and guidelines have been established for the responsible use of AI within the organisation, in accordance with the AI Regulation (EU AI Act).

External provider

Microsoft

Similar algorithm descriptions

  • To improve internal efficiency, a test project has been set up to see if Microsoft CoPilot can help employees perform day-to-day tasks.

    Last change on 20th of January 2025, at 10:12 (CET) | Publication Standard 1.0
    Publication category
    High-Risk AI-system
    Impact assessment
    Field not filled in.
    Status
    In development
  • To improve internal efficiency, a test project has been set up to see if Microsoft CoPilot can help employees perform day-to-day tasks.

    Last change on 21st of January 2025, at 10:50 (CET) | Publication Standard 1.0
    Publication category
    High-Risk AI-system
    Impact assessment
    Field not filled in.
    Status
    In development
  • A group of employees is experimenting with Microsoft 365 Copilot to gain hands-on experience with generative AI. This pilot aims to gain knowledge and experience and explore, through case studies, how it can help with overview, analysis, speed and cost savings.

    Last change on 17th of March 2025, at 17:20 (CET) | Publication Standard 1.0
    Publication category
    Other algorithms
    Impact assessment
    DPIA
    Status
    In development
  • The algorithm in the software is mainly configured to recognise and anonymise privacy-sensitive information in documents. The legal basis for this is the GDPR (AVG). The tool is also used to highlight and mask information in a document that cannot be shared for other reasons (on another legal basis, e.g. the Woo, the Dutch Open Government Act).

    Last change on 14th of January 2025, at 10:39 (CET) | Publication Standard 1.0
    Publication category
    Other algorithms
    Impact assessment
    Field not filled in.
    Status
    In use
  • Residents and business owners can apply for civil-affairs products digitally via the website. In doing so, the system performs checks on personal data.

    Last change on 30th of September 2024, at 12:42 (CET) | Publication Standard 1.0
    Publication category
    Impactful algorithms
    Impact assessment
    Field not filled in.
    Status
    In use