Please note: The algorithm descriptions in English have been automatically translated. Errors may have been introduced in this process. For the original descriptions, go to the Dutch version of the Algorithm Register.

Object Recognition Public Space Containers

Directorate V&OR commissioned the Computer Vision Team of the City of Amsterdam to explore how object recognition can help prevent vulnerable bridges and quay walls from collapsing when heavy objects are placed on them.

Last change on 27th of November 2024, at 15:11 (CET) | Publication Standard 1.0
Publication category
Impactful algorithms
Impact assessment
Field not filled in.
Status
In development

General information

Theme

  • Economy
  • Space and Infrastructure

Begin date

Field not filled in.

Contact information

algoritmen@amsterdam.nl

Responsible use

Goal and impact

Directorate V&OR commissioned the Computer Vision Team of the City of Amsterdam (hereafter CVT) to explore how object recognition can help prevent vulnerable bridges and quay walls from collapsing due to heavy objects being placed on them. Containers are heavy objects that increase the risk of collapse. Currently, there is no clear view of where these containers are located in (vulnerable) parts of the city.


During the Pilot, we will test whether containers can be successfully recognised in public spaces using a scanning vehicle equipped with a camera. Once a container is recognised in public space, a signal can be generated. By enriching this signal with additional information from (municipal) sources, supervisors can also carry out their work more effectively. For example, a report can state how urgent the signal is based on the vulnerability of the particular quay on which the container has been placed, so that urgent situations can be prioritised.


In addition to generating a signal, employees can use a digital map showing the containers found.


Update 2024:

The pilot has now been successfully completed. It has been shown that containers can be successfully recognised and that the technology can be used to identify misplaced containers more effectively.

Considerations

The bridges and quay walls in the city centre have been severely weakened by years of excessive loads. At the time these bridges and quays were built, there was no heavy traffic. We are strengthening or replacing the bridges and quay walls. Therefore, heavy traffic and heavy objects are now prohibited in several places.

Human intervention

The image recognition facility involves no automated decision-making. The facility does generate a signal automatically. This signal is then assessed manually by a supervisor, after which an on-site investigation may take place. The supervisor makes an independent assessment of whether the situation is legal or illegal. If the latter, an enforcer makes an independent decision. With this, there is sufficiently meaningful human intervention.


The 'output' of the algorithms does contribute to the 'decision' whether or not to conduct further (field) investigation into the observed container in the public space. The facility (with its associated algorithms) therefore has a substantial influence.

Risk management

Across the board, measures have been taken to process the data securely and to resolve incidents (for example, a failure of the blurring algorithm) quickly and effectively according to established procedures. In particular, the project pays careful attention to anonymising the captured ambient images and removing redundant data. Much attention is also paid to the end users of the facility: they must be well informed about how the facility (with its algorithms) works and what the potential risks are. Supervisors must always be able to reach their own conclusions independently. It is therefore important that they can interpret the output properly and can also dismiss a signal.

Operations

Data

Training dataset for algorithm development:


The municipality used an already available dataset for the development of the image recognition and blur algorithm.


Blur algorithm


It involved roughly 10,000 images of raw, i.e. non-anonymised, data. These images were needed to teach the algorithm, through manual annotation, to recognise people and license plates so that they can be removed from images. The images are only accessible to the few developers who train the models and will be kept for as long as the algorithm may need further development.


Image recognition algorithm


To teach this algorithm to recognise containers properly, roughly 1,500 partly anonymised and partly non-anonymised images were manually annotated and used to train it. Using non-anonymised images was necessary so that the context (the public space of the municipality of Amsterdam) is kept intact as much as possible, which helps the algorithm recognise containers better. Unlike e.g. Google Maps, the municipality also blurs the entire figure of a person in public space.



Production data:


The Scan system captures images containing metadata such as date, time, location and heading.
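As a minimal sketch, such a captured frame with its metadata could be represented as follows. The field names are assumptions for illustration; the actual record layout of the Scan system is not published in this register entry.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical record for one captured frame; field names are assumed,
# based on the metadata listed above (date, time, location, heading).
@dataclass
class ScanImage:
    image_path: str      # path to the captured frame
    timestamp: datetime  # date and time of capture
    latitude: float      # location of the scanning vehicle
    longitude: float
    heading_deg: float   # compass heading of the camera, in degrees

frame = ScanImage("frames/0001.jpg", datetime(2024, 11, 27, 15, 11),
                  52.3728, 4.8936, 90.0)
```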


These images are then all anonymised using the blur algorithm developed by the municipality of Amsterdam. Immediately afterwards, the images are filtered with the 'containers' image recognition algorithm, and all images on which no containers are visible are removed immediately.
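The processing order described above (first anonymise, then keep only images showing containers) can be sketched as follows. The `blur` and `detect_containers` callables stand in for the municipality's actual models, which are not public.

```python
# Sketch of the described pipeline step: every image is anonymised
# first, then filtered by the container detector; images without
# containers are discarded immediately.
def process_batch(images, blur, detect_containers):
    kept = []
    for img in images:
        anonymised = blur(img)             # step 1: blur people and plates
        if detect_containers(anonymised):  # step 2: keep only containers
            kept.append(anonymised)
        # images showing no container are deleted at this point
    return kept
```

The key design point is that anonymisation happens before detection, so non-anonymised imagery never reaches later stages.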


Once the above data has been obtained, it is enriched with information from permits and with information about vulnerable bridges and quay walls.


It is then investigated whether the container is located on a vulnerable bridge or quay wall.


Based on the above data, the following information can then be generated:

  • Category Orange (or Red): potentially illegal object (on vulnerable quay)
  • Distance to vulnerable quay: 25 metres
  • Distance to object permit: 40 metres
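As an illustration of how a category could be derived from the enriched data: the sketch below assigns a category from the two distances mentioned above. The decision rules and thresholds are assumptions; the 25 m and 40 m values merely mirror the example distances in the text, not the facility's actual logic.

```python
# Illustrative categorisation of a detected container. Thresholds and
# rules are assumed for this sketch; the real decision logic used by
# the facility is not published.
def categorise(dist_to_vulnerable_quay_m, dist_to_permit_m,
               quay_threshold_m=25.0, permit_threshold_m=40.0):
    on_vulnerable_quay = dist_to_vulnerable_quay_m <= quay_threshold_m
    has_nearby_permit = dist_to_permit_m <= permit_threshold_m
    if on_vulnerable_quay and not has_nearby_permit:
        return "Red"     # potentially illegal object on a vulnerable quay
    if not has_nearby_permit:
        return "Orange"  # potentially illegal object
    return "None"        # a permit is nearby; no signal needed
```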


The above data is sent to the Signals Information Agency Amsterdam (SIA). SIA processes this into a 'signal', provided with a map indicating the location, and forwards it to CityControl so that a supervisor can work with it.


Anonymised images showing containers are reused to re-train/improve the image recognition algorithm.


Technical design

Performance


The algorithms perform well based on results with the training dataset. The intended Pilot is necessary to test the algorithms with production data; performance is measured accurately during the Pilot. Nevertheless, each algorithm also has a margin of error. The CVT has investigated this, and errors occur in the following situations:


  • Image recognition algorithm


The container is too far away, so the algorithm does not recognise it. However, the risk of missing the container is small, since the scanning vehicle will likely pass the container again in the high-risk area, allowing it to be recognised after all.

  • Blur algorithm


The blur algorithm currently has an accuracy of roughly 95% for people standing close to the camera; for people further away, it is around 92%. Visual inspection of a sample showed that the people who are not recognised are usually unrecognisable because they are, for example, partially hidden behind a tree. Ideally, these persons would of course also be blurred. In the coming months the blurring algorithm will be developed further, and performance is expected to improve and approach 99%. For the Pilot, and given all the measures taken to process the images safely, the 95% accuracy is considered sufficient for now. As an additional measure, images that do not generate a signal are removed within 24 hours.
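The additional retention measure above could be sketched as follows. The function name and record fields are hypothetical; the register entry only states the 24-hour rule itself.

```python
from datetime import datetime, timedelta

# Sketch of the stated measure: images that did not lead to a signal
# are deleted within 24 hours of capture. Record fields are assumed.
def purge_unsignalled(images, now, max_age=timedelta(hours=24)):
    """Keep images that generated a signal, plus unsignalled images
    still within the 24-hour retention window; drop the rest."""
    return [img for img in images
            if img["generated_signal"] or now - img["captured_at"] < max_age]
```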

Update 2024: Blurring as a Service (BaaS) has now been deployed; it will be used in a possible follow-up project to anonymise the images.

Similar algorithm descriptions

  • This algorithm helps Customs to select goods for control based on customs declarations and risks. It uses declaration data from companies and considers whether or not there are risks of bringing in raw materials (precursors) or mixtures that can be misused for the illegal manufacture of explosives.

    Last change on 10th of December 2024, at 9:54 (CET) | Publication Standard 1.0
    Publication category
    Impactful algorithms
    Impact assessment
    Field not filled in.
    Status
    In use
  • By correctly determining the types and locations of street waste in the city, we can collect waste in a better and more targeted way. In doing so, we ensure that relatively little waste is visible in public spaces and contribute to a cleaner city.

    Last change on 12th of July 2024, at 10:00 (CET) | Publication Standard 1.0
    Publication category
    Impactful algorithms
    Impact assessment
    DPIA, ...
    Status
    In use
  • This algorithm has a low impact. On 4 March 2020, the municipality of Amsterdam demonstrated to a large group of interested people at its data lab how it can currently record when placements are made via moving cameras.

    Last change on 26th of November 2024, at 15:30 (CET) | Publication Standard 1.0
    Publication category
    Impactful algorithms
    Impact assessment
    Field not filled in.
    Status
    Out of use
  • By correctly determining when and where waste should be collected from containers, we ensure that relatively little waste is visible in public spaces and contribute to a cleaner city.

    Last change on 12th of July 2024, at 10:01 (CET) | Publication Standard 1.0
    Publication category
    Impactful algorithms
    Impact assessment
    DPIA, ...
    Status
    In use