Please note: The algorithm descriptions in English have been automatically translated. Errors may have been introduced in this process. For the original descriptions, go to the Dutch version of the Algorithm Register.
Microsoft CoPilot
- Publication category
- Other algorithms
- Impact assessment
- DPIA
- Status
- In development
General information
Theme
Begin date
Contact information
Responsible use
Goal and impact
ICTU sees AI as the beginning of a system change. Various fields of knowledge converge in this technology, requiring a multidisciplinary approach to deal responsibly with both its enormous opportunities and its risks. Because of these diverse perspectives, it was decided to explore six perspectives in more detail. One of these is experimenting with generative AI using Copilot to support employees in, among other things, document creation, note-taking and information analysis. The aim is to gain experience with generative AI and to understand its impact and capabilities in the workplace. Awareness of the risks of AI is also emphasised to promote responsible use.
Considerations
Due to the multidisciplinary nature of AI, the experiment is supervised and driven by a core team of staff with expertise in diverse fields of knowledge. A limited number of licences will be used so that participants can work with information within the organisation's tenant.
The organisation is familiar with the SLM Rijk DPIA and has included it in its risk assessment. Internal ground rules have been developed and published to enable responsible use.
Human intervention
The current policy is that generated content is always checked by the user and, where necessary, also by a colleague (the four-eyes principle) before being used. Users are also informed that transparency requirements must always be taken into account in (future) publications.
Risk management
As part of the experiment (one of the perspectives), compliance with the AI Regulation was assessed separately. The existing policies on privacy, security and procurement, and the associated focus areas of awareness, code of conduct and cloud services, form the starting point for risk management around generative AI.
In addition, rules for the responsible use of generative AI have been drawn up to clarify to employees what is and is not permitted. Internal awareness, communication and training on AI literacy are also being developed, and AI is a focus area of existing compliance processes and compliance officers.
The AI model will not be trained with data from participants in the experiment. The general principle is that no personal or business-confidential data may be used without procurement and contracting of an AI service, the associated risk assessment, appropriate control measures, and advice from a compliance officer.
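The principle above could be supported in practice by a simple pre-submission check. The sketch below is purely illustrative and not part of the register entry: the marker list, the BSN pattern, and the function name are assumptions, and a real deployment would rely on the organisation's own classification labels and a proper data-loss-prevention service rather than keyword matching.

```python
import re

# Illustrative markers only; a real deployment would use the organisation's
# own classification labels and a DLP service, not this hard-coded list.
CONFIDENTIAL_MARKERS = ["vertrouwelijk", "confidential", "internal only"]

# Very rough pattern for a Dutch citizen service number (BSN): nine digits.
BSN_PATTERN = re.compile(r"\b\d{9}\b")

def may_submit_to_ai(text: str) -> bool:
    """Return False if the text appears to contain confidential or personal data."""
    lowered = text.lower()
    if any(marker in lowered for marker in CONFIDENTIAL_MARKERS):
        return False
    if BSN_PATTERN.search(text):
        return False
    return True
```

Such a check can only catch obvious cases; the register entry relies primarily on ground rules, awareness and compliance advice rather than automated filtering.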
Elaboration on impact assessments
The SLM Rijk DPIA was reviewed, and the risks identified in it were assessed by compliance officers for relevance and applicability to use within the experiment.
Impact assessment
Operations
Data
Participants in the experiment will only have access to information and data for which they are authorised. The existing policy on handling internal information with integrity and responsibility still applies, and participants should take this into account when using Copilot. The AI model is not trained with data from the experiment. In addition, separate rules have been established for responsible use of AI within the organisation, in accordance with the AI Regulation.
Technical design
Copilot uses language models from OpenAI, hosted as a service in a Microsoft Azure environment. The language models are trained on large amounts of text data, enabling them to generate and understand human-like text. Copilot can call these language models and combine them with business data in the organisation's Microsoft environment (the organisation tenant). This allows Copilot to answer questions, write texts and provide relevant information.
https://learn.microsoft.com/nl-nl/copilot/microsoft-365/microsoft-365-copilot-architecture;
https://learn.microsoft.com/nl-be/azure/ai-services/openai/overview
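As a rough illustration of the integration described above, the sketch below assembles a chat request in the shape used by the Azure OpenAI chat completions API, combining a user question with tenant context (for example, a document snippet Copilot has retrieved from the organisation tenant). The grounding step, the function name, and the parameter values are assumptions for illustration only, not a description of Copilot's actual internals.

```python
def build_chat_request(question: str, tenant_context: str) -> dict:
    """Assemble a chat-completions payload that grounds the model's answer
    in business data from the organisation's tenant (illustrative only)."""
    return {
        "messages": [
            # The system message constrains the model to the supplied context,
            # so answers stay within the organisation's own information.
            {
                "role": "system",
                "content": (
                    "Answer using only the provided organisational context.\n\n"
                    f"Context:\n{tenant_context}"
                ),
            },
            {"role": "user", "content": question},
        ],
        "temperature": 0.2,  # lower temperature for more predictable answers
    }
```

In the real service this payload would be sent to a deployed model endpoint; here the point is only that the model never sees tenant data the caller does not explicitly supply.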
External provider
Similar algorithm descriptions
- To improve internal efficiency, a test project has been set up to see if Microsoft CoPilot can help employees perform day-to-day tasks. Last change on 21st of January 2025, at 10:50 (CET) | Publication Standard 1.0
- Publication category
- High-Risk AI-system
- Impact assessment
- Field not filled in.
- Status
- In development
- To improve internal efficiency, a test project has been set up to see if Microsoft CoPilot can help employees perform day-to-day tasks. Last change on 20th of January 2025, at 10:12 (CET) | Publication Standard 1.0
- Publication category
- High-Risk AI-system
- Impact assessment
- Field not filled in.
- Status
- In development
- Employees can use Microsoft 365 CoPilot as an AI chatbot to generate texts and get comprehensive answers to questions. Some principles described in our guidelines are that no internal or confidential information should be used and that the information should be properly checked for accuracy. Last change on 27th of March 2025, at 13:24 (CET) | Publication Standard 1.0
- Publication category
- Other algorithms
- Impact assessment
- DPIA, IAMA
- Status
- In use
- An application that helps staff at the Regional Self-Employment Department determine whether a business is viable. Last change on 5th of September 2024, at 13:06 (CET) | Publication Standard 1.0
- Publication category
- Impactful algorithms
- Impact assessment
- IAMA
- Status
- In use