Please note: The algorithm descriptions in English have been automatically translated. Errors may have been introduced in this process. For the original descriptions, go to the Dutch version of the Algorithm Register.
Work - Pilot Copilot Pro
- Publication category
- Other algorithms
- Impact assessment
- DPIA
- Status
- In development
General information
Theme
Begin date
Contact information
Responsible use
Goal and impact
The aim of this pilot is to let the organisation gain experience with generative AI and understand its impact and potential for personal effectiveness and for efficiency gains in specific business processes. The expected impact of the pilot is an increase in the AI literacy of the participating colleagues. In addition, a positive impact is expected on the responsible use of the AI application, through practical experience and explicit attention to the risks and impact of AI.
Considerations
The pilot is guided and managed as a project and follows an established roadmap. The project is supported by a core group of employees with different areas of expertise, such as technology, data, privacy, security and ethics.
A limited number of licences are used which are assigned to representatives from various organisational units. Use cases are described prior to the pilot. A rights scan is performed on each participant beforehand. Participants with extensive access rights are deliberately not admitted to prevent oversharing.
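The pre-pilot rights scan described above could take many forms; a minimal, purely illustrative sketch (hypothetical threshold and names, not the organisation's actual tooling) is a check that excludes candidates whose access scope exceeds a cut-off, to limit the risk of oversharing:

```python
# Illustrative sketch of a pre-pilot "rights scan": participants whose
# access scope exceeds a threshold are excluded to limit the risk of
# oversharing via Copilot. All names and numbers are hypothetical.

THRESHOLD = 50  # assumed cut-off for the number of accessible sites/shares

# Hypothetical candidates with the number of sites/shares each can access
participants = {
    "alice": 12,
    "bob": 140,
    "carol": 8,
}

def rights_scan(access_counts: dict[str, int], limit: int = THRESHOLD) -> dict[str, bool]:
    """Return, per candidate, whether their access scope is narrow enough to join."""
    return {user: count <= limit for user, count in access_counts.items()}

admitted = rights_scan(participants)
# bob's extensive access rights would exclude him from the pilot
```

The point of the check is precisely the one stated in the pilot's design: participants with extensive access rights are deliberately not admitted.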
The organisation is familiar with SLM Rijk's DPIA and has included it in its risk assessment. Internal ground rules have been developed and published to enable responsible use.
Human intervention
The current policy is that generated content must always be checked by the user before it is used. It is explicitly explained that the output of generative AI must not be used for decision-making without human intervention. Users are also informed that transparency requirements must always be taken into account in (future) publications.
Risk management
Existing policies from privacy, security, procurement and the associated focus areas of awareness, code of conduct and cloud services form the starting point for risk management around generative AI.
In addition, rules for responsible use of Generative AI have been drawn up and communicated to provide clarity to employees on what is expected of them. Furthermore, attention is paid to internal awareness, communication and training in the context of AI literacy.
The general principle is that no personal or business-confidential data may be processed in an (AI) system without procurement and contracting of the AI service, including the associated risk assessment and appropriate control measures.
Elaboration on impact assessments
The SLM Rijk DPIA is known, and the risks identified in it have been assessed for relevance and applicability within the pilot.
Impact assessment
Operations
Data
Copilot only has access to a limited set of data for which participants are already authorised. This data can also be used as context for Copilot during the pilot. This concerns data from the mail environment and files from OneDrive and SharePoint, but not data from business applications.
The existing policy on handling internal information with integrity and responsibility applies; participants should also take this into account when using Copilot. It is expected that the processing of residents' personal data will not take place in the pilot or will be very limited. The AI model will not be trained with data from the pilot.
Technical design
Copilot uses language models from OpenAI that are hosted as a service within Microsoft's trusted boundary; data is therefore not sent to any other party. The language models are trained on huge amounts of text data and images, which enables them to generate and understand human-like text and images. Copilot can invoke these language models and combine them with business data in the organisation's Microsoft environment (the organisation tenant). This allows Copilot to answer questions, write texts and provide relevant information.
https://learn.microsoft.com/nl-nl/copilot/microsoft-365/microsoft-365-copilot-architecture
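The grounding flow described above, in which a user's question is combined with business data that the user is already authorised to access, can be illustrated with a small, simplified sketch. All names here are hypothetical and this is not Microsoft's actual API; the point is only the pattern of permission-trimmed retrieval before the prompt reaches the language model:

```python
# Simplified sketch of the "grounding" pattern: the prompt is enriched
# with documents the user is already authorised to read before it is
# sent to the language model. Hypothetical data and helper names.

DOCUMENTS = {
    "report.docx": {"acl": {"alice", "bob"}, "text": "Q3 figures ..."},
    "notes.txt":   {"acl": {"alice"},        "text": "Meeting notes ..."},
    "secret.xlsx": {"acl": {"carol"},        "text": "Salary data ..."},
}

def retrieve_context(user: str) -> list[str]:
    """Return only documents the user may access (existing rights are respected)."""
    return [doc["text"] for doc in DOCUMENTS.values() if user in doc["acl"]]

def build_grounded_prompt(user: str, query: str) -> str:
    """Combine the user's question with their authorised context."""
    context = "\n".join(retrieve_context(user))
    return f"Context:\n{context}\n\nQuestion: {query}"

prompt = build_grounded_prompt("alice", "Summarise the Q3 figures")
```

Because retrieval filters on the user's existing access rights, content the participant is not authorised to read never enters the prompt, which matches the tenant-scoped design described above.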
External provider
Similar algorithm descriptions
- A group of employees is experimenting with Microsoft 365 Copilot to gain hands-on experience with generative AI. This pilot aims to gain knowledge and experience and explore, through case studies, how it can help with overview, analysis, speed and cost savings.
  Last change on 17th of March 2025, at 17:20 (CET) | Publication Standard 1.0
- Publication category
- Other algorithms
- Impact assessment
- DPIA
- Status
- In development
- A group of employees is experimenting with Microsoft 365 Copilot to gain hands-on experience with generative AI. This pilot aims to gain knowledge and experience and explore, through case studies, how it can help with overview, analysis, speed and cost savings.
  Last change on 17th of April 2025, at 14:39 (CET) | Publication Standard 1.0
- Publication category
- Other algorithms
- Impact assessment
- DPIA
- Status
- In development
- An application that helps staff at the Regional Self-Employment Department determine whether a business is viable.
  Last change on 5th of September 2024, at 13:06 (CET) | Publication Standard 1.0
- Publication category
- Impactful algorithms
- Impact assessment
- IAMA
- Status
- In use
- An application with an AI model used to make proposals for environmentally harmful activities using other existing data. The data used is company-specific and does not contain privacy-sensitive information.
  Last change on 3rd of November 2025, at 12:58 (CET) | Publication Standard 1.0
- Publication category
- Other algorithms
- Impact assessment
- Field not filled in.
- Status
- In use
- Municipality of Almere, together with TU/e and Fontys, is investigating how generative AI can support the assessment of supplier documentation in tenders. The aim is to make the process more efficient without compromising transparency, accuracy and safety. The project ties in with ongoing TU/e and Fontys PhD research on trust in AI.
  Last change on 14th of October 2025, at 12:57 (CET) | Publication Standard 1.0
- Publication category
- Other algorithms
- Impact assessment
- Field not filled in.
- Status
- In development