First AI4Life challenge launched: Denoising microscopy images

by Vera Galinova and Beatriz Serrano-Solano

We are happy to announce the launch of the first AI4Life challenge aimed at improving denoising techniques for microscopy images.

Noise introduced during the image acquisition process can degrade image quality and complicate interpretation. But deep learning can help with that!

The challenge focuses on unsupervised denoising to be applied to four datasets featuring two types of noise: structured and unstructured. 
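
Self-supervised denoisers such as Noise2Void tackle the unsupervised setting by masking individual pixels and training a network to predict each masked value from its surroundings; since independent noise cannot be predicted from neighbouring pixels, the network learns the signal. The masking step can be sketched in a few lines of NumPy (a simplified illustration, not the challenge's official baseline code):

```python
import numpy as np

def blind_spot_mask(image, n_pixels, rng):
    """Replace n_pixels randomly chosen pixels with a neighbouring
    value, returning the masked image, the chosen coordinates and
    the original values (the training targets)."""
    masked = image.copy()
    h, w = image.shape
    ys = rng.integers(1, h - 1, size=n_pixels)
    xs = rng.integers(1, w - 1, size=n_pixels)
    targets = image[ys, xs].copy()
    # Replace each chosen pixel with a random neighbour's value so the
    # network cannot learn the identity mapping; full implementations
    # also exclude the centre pixel from the candidate neighbours.
    dy = rng.integers(-1, 2, size=n_pixels)
    dx = rng.integers(-1, 2, size=n_pixels)
    masked[ys, xs] = image[ys + dy, xs + dx]
    return masked, (ys, xs), targets

rng = np.random.default_rng(0)
noisy = rng.normal(loc=100.0, scale=10.0, size=(64, 64))
masked, coords, targets = blind_spot_mask(noisy, n_pixels=32, rng=rng)
print(masked.shape, targets.shape)  # (64, 64) (32,)
```

During training, the loss is computed only at the masked coordinates, comparing the network's predictions against the stored targets.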

To participate, please visit the dedicated website and the Grand Challenge page where the challenge is hosted.


Outcomes of the Second AI4Life Open Call

by Beatriz Serrano-Solano

AI4Life launched its second Open Call on January 22nd, and applications were accepted until March 8th. We’re very happy to share that we received a total of 51 applications for the second AI4Life Open Call!

The applications span a wide range of fields, including developmental and marine biology, as well as cancer research. But that’s not all—we also received submissions from areas such as plant biology, parasitology, ecology, biophysics, microbiology, immunology, and many more.

Why did scientists apply? What challenges are they facing?

Similar to the first open call, most applicants focused on improving their image analysis workflows. However, unlike the first Open Call, we did not offer consultancy as a selectable option, since it is now an implicit step in the process.

Before finalizing project selections, we will hold consultancy calls with a number of project applicants to offer quick guidance that could help researchers find solutions. Projects requiring deep learning support will then be prioritized for the final selection.

How have applicants addressed their analysis problem so far?

Nearly three-quarters of the applicants had already analysed their data but were not satisfied with the outcome, a higher percentage than last year. One-fifth had not analysed their data yet, and the remaining applicants had analysed it and found the outcome satisfactory. Last year, the fraction of projects satisfied with the outcome was twice as high as this time.

When asked about the tools used to analyze their image data, Fiji and ImageJ remain the most popular choices, followed by custom scripts and commercial software.

What kind of data and what format do applicants deal with?

In terms of data format, we observe a change in the trend compared to the first call. Now, 3D images are slightly more frequent than 2D images, reversing the trend we observed last time.

TIFF remains the most popular format, followed by a group consisting mostly of proprietary file formats. We are thrilled to see that the Next Generation File Format (NGFF) has made an appearance this time!

How much project-relevant data do the applicants bring?

The majority of projects (45 out of 51) manage data in the order of gigabytes or above. Only 6 projects involve data sizes of up to a few hundred megabytes. This highlights the prevalence of larger-scale datasets among the applicants, suggesting a growing demand for proper data management and processing capabilities.

Is ground truth readily available?

When asked about the availability and percentage of labelled data, approximately half of the applicants (23 out of 51) reported a lack of sufficient labelled data. Although the definition of labelled data differed somewhat from the first open call, we observe a comparable trend. Additionally, the proportion of projects reporting high-quality labels has decreased compared to the previous call.

In the application form for this second open call, we introduced an additional question regarding the percentage of labelled data. Over three-quarters of the projects have less than 25% of their data labelled, while approximately 20% of the projects have more than half of their data labelled.

Is the available data openly shareable?

Interestingly, the ratio of projects that can provide access to all their data remains consistent compared to the first open call. This time, we’ve introduced the option of exclusively sharing the controls. The reason behind this decision is that projects unable to share any data are not eligible for support in applying deep learning to their research project.

What’s next?

We have completed the eligibility check and expert reviews for the submitted projects. Projects were reviewed by 17 experts, resulting in an average of 3 reviews per project. The reviews have been aggregated, and scores have been computed to rank the projects based on these reviews. As a result, a preselection of projects has been made, and these applicants will soon be notified.


What to expect from the consultation phase?

During the consultation call, selected applicants will have the opportunity to engage with experts who will provide insights, tips, existing tools and recommendations to guide their project. Following the consultation phase, a subset of projects will be chosen to receive expert support based on their potential and need for deep learning support.


Hackathon Summary: BioImage Model Zoo Enhancements

by Beatriz Serrano-Solano and Joran Deschamps

AI4Life recently organised a hackathon aimed at enhancing the capabilities of the BioImage Model Zoo. Participants from across Europe gathered for a week-long event hosted at EMBL Heidelberg in Germany.

The event kicked off with a round of introductions, allowing participants to outline their personal goals for the week and to form teams that would tackle various project ideas.

Model uploader

One of the key focus areas was to put the final touches to a new model uploader, aimed at simplifying the process of uploading models to the BioImage Model Zoo. The uploader will no longer rely on external platforms like Zenodo; instead, models will be hosted internally and authentication will be requested for contributors who want to upload a model. One of the teams worked to simplify authentication procedures and optimize model uploads to S3 by integrating Google authentication, providing a unified system that would enhance the overall user experience.

Infrastructure improvement

Teams dedicated their efforts to refining Continuous Integration (CI) processes, which have now been migrated to the collection-bioimage-io GitHub repository. The uploader now triggers the CI workflow, automating the process of pushing models to the designated storage location on S3.

JupyterHub and DL4MicEverywhere

Another focus area involved transitioning the infrastructure for JupyterHub from Google Colab to Google Cloud, providing users with a more robust and flexible environment.

Model quantization

Model quantization makes networks smaller and faster with little to no loss of precision. We held discussions reviewing the current state of the art. As an example of the performance gains, a 3D U-Net model from the BioImage Model Zoo reduced its inference time for a batch of images from 60 ms to 30 ms.
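
As a rough illustration of the idea (a generic post-training affine quantization scheme in NumPy, not the specific tooling discussed at the hackathon), float32 weights are mapped to 8-bit integers with a scale and zero point, shrinking storage four-fold while keeping each value within about one quantization step of the original:

```python
import numpy as np

def quantize_int8(w):
    """Affine post-training quantization of a float32 array to int8."""
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / 255.0              # one int8 step in float units
    zero_point = round(-lo / scale) - 128  # integer offset mapping lo -> -128
    q = np.clip(np.round(w / scale + zero_point), -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Map the int8 values back to approximate float32 weights."""
    return (q.astype(np.float32) - zero_point) * scale

rng = np.random.default_rng(42)
w = rng.normal(size=(256, 256)).astype(np.float32)  # toy weight matrix
q, scale, zp = quantize_int8(w)
w_hat = dequantize(q, scale, zp)
print(q.nbytes / w.nbytes)  # 0.25: the quantized weights are 4x smaller
```

In practice, frameworks also fuse the integer arithmetic into the inference kernels, which is where the speed-up comes from.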

Hypha launcher

Hypha can now launch BioEngine (triton-server) on a Slurm cluster using Apptainer. Additionally, a service-id option is now implemented in the BioEngine web client to easily switch the execution backend to high-performance computing (HPC) environments. Furthermore, BioEngine can now be launched on desktop environments.

Model export to new specifications

This team focused on exporting models using the new specifications. Additionally, the team explored approaches to export CellPose models.

Documentation enhancement

This project was split into two phases: first, restructuring the current documentation, and second, creating the new documentation needed for the BioImage Model Zoo. Community input and feedback are highly encouraged in this project!

Second Open Call

The deadline for the second Open Call was March 8th. During the hackathon, we had the opportunity to engage with all the reviewers, many of whom were participating in the event. Projects were assigned to each reviewer, officially kickstarting the review process.

Thank you to everyone who contributed either onsite or online. It was a pleasure to work with this engaged group of people. And thank you to the AI4Health innovation cluster for supporting this event. We look forward to meeting you at the next event! 


Why is the AI4Life Logo a Giraffe? Unraveling the Mystery

by Beatriz Serrano-Solano and Dorothea Dörr

Have you ever wondered why the AI4Life logo is a charming giraffe? Well, you’re not alone! AI4Life decided to host a contest to unravel the mystery, and the responses were so good that we had to pick three winners! Let’s dive into the proposed theories:

Because you've got to stand out somehow (both literally and figuratively)

A giraffe represents the essence of evolution and adaptation. Giraffes, with their long necks, have evolved to reach higher, literally and metaphorically. In the context of AI4Life, the giraffe signifies the aspiration for continuous growth and progress. Just like the giraffe adapts to its environment, AI technology continually adapts and evolves to meet new challenges and opportunities. Furthermore, the giraffe's spots could symbolise unique and significant data points. In the world of artificial intelligence, data is invaluable. The spots represent the diverse data sets AI algorithms analyze to gain insights and make decisions. Ultimately, the giraffe logo signifies AI's ability to reach for new horizons, adapt to change, and embrace the complexities of the digital world with grace and innovation. By the way, the colour gradient of the giraffe, transitioning from light green to dark blue, adds a touch of depth. It could signify the spectrum of possibilities AI encompasses, ranging from the initial stages of innovation (light green) to the deep complexities it can explore (dark blue). It's like a journey through the vast landscape of artificial intelligence.

Because we do things so cool, everyone who is not part of us gets a looooong neck… 😉

General Intelligence Requires Artificial Fauna Front Emblem.
With a Giraffe's Reach: Uplifting the Creatures of AI4Life Models to the Cloud. In the innovative landscape, a whimsical giraffe emerged as a symbol of boundless aspirations. Its spots were emblematic of neural networks, and its towering neck represented a conduit to the cloud. This giraffe wasn't just a playful icon, but a tool enabling AI4Life models to transcend earthly bounds, reaching for the vast computational skies. With the giraffe's assistance, the creatures of AI4Life models soared above the low-hanging fruit, venturing into the expansive realm of cloud computation where innovation knew no bounds. Through the giraffe's reach, every AI model found its passage to the cloud, unlocking a realm of endless exploration and innovation. The narrative finds a whimsical visual representation in this depiction of a giraffe amidst clouds, symbolizing the limitless voyage AI4Life models undertake with the giraffe's aid to the cloud.

With its long neck, it keeps an overview of all other animals in the zoo.

Because of climate change, species are moving north; AI4Life has developed an inclusive adaptation strategy for the future climate and welcomes even giraffes, including the virtual ones.

Well, it all started one day during a brainstorming session. The future AI4Life team was racking their brains trying to come up with a mascot that would perfectly represent their cutting-edge work at the intersection of AI and life sciences. They wanted something that conveyed innovation, adaptability, and, of course, a sense of elegance in the world of biology and imaging. Someone jokingly mentioned that they needed a symbol as unique as a giraffe's neck – able to reach up to those hard-to-reach branches in the data tree. Laughter erupted as people imagined giraffes huddled around microscopes, diligently analysing bioimages with their long necks. They laughed, but then it hit them. The giraffe! A creature known for its exceptional vision, reaching heights that others can only dream of, and demonstrating a remarkable ability to adapt to its environment. Just like their AI algorithms, always reaching for new heights in analysing and understanding life sciences data. They thought, "Why not? Let's embrace the giraffe as a symbol of our willingness to stretch the boundaries of science and technology!" So, the giraffe became the mascot of AI4Life, representing not just their work but also the visionary approach of Euro-BioImaging in advancing the field of bioimaging. And that's how a giraffe ended up in AI4Life's logo, proving that sometimes, in the world of science and AI, the most unexpected inspirations can lead to great discoveries!

AI-generated based on project objectives. ChatGPT or similar.

From all the answers received, 3 winners were selected:

Estibaliz Gomez de Mariscal


Being the tallest animal on Earth, the giraffe has the perfect general perspective of what's going on around it. Thanks to its long neck, it can both aim for the highest appealing branch and get down into the lower details of the animal world. Not only is it a friendly non-predator animal that inspires peacefulness, but predators rarely attack it, making it a unique symbol of long-standing equilibrium, collaboration and unity for our beloved BioImage Model Zoo.

Caterina Fuster-Barceló

Universidad Carlos III de Madrid

To understand why a giraffe was chosen as the logo for AI4Life, consider what makes giraffes special. You might immediately think of their long necks, but the unique aspect that likely inspired this choice is that giraffes communicate through infrasound, as they are relatively silent animals. This ability allows them to communicate over significant distances. So, when pondering the connection between AI4Life and a giraffe, the commonality lies in the fact that even though partners may be physically distant, they can effectively communicate, akin to the way giraffes do through... infrasound.

Davide Di Cioccio


It's a reference to the Lamarck theory of evolution: the great effort made on the neck of generations of giraffes (re-iterations of machine learning algorithms) has determined that only the most successful organism (the best algorithm) would find a solution to the problem. Giraffe legs: it's a power plug of a computer.

What a journey of creativity and imagination! From the giraffe’s long neck symbolizing reaching new heights to its unique spots representing valuable data points, the interpretations have been as diverse as they are exciting. Big shoutout to all who joined the fun! And a huge round of applause to our awesome winners!


AI4Life chosen by the European Commission to be showcased at R&I Research Days 2024

by Dorothea Dörr and Beatriz Serrano-Solano

AI4Life has been selected by the European Commission Directorate-General for Research and Innovation (RTD) to be showcased at the European Research and Innovation (R&I) Days 2024. The event took place during the Research and Innovation Week on the 20th and 21st of March 2024 in Brussels.

The R&I Days 2024 are an exceptional opportunity for AI4Life to be in the spotlight of policymakers, researchers, stakeholders, and the general public who gathered to discuss and influence the future of research and innovation in Europe and beyond.

For more information, visit the R&I Days 2024 website.


Euro-BioImaging Virtual Pub session: Tools from AI4Life that anyone can use

by Beatriz Serrano-Solano

Euro-BioImaging’s Virtual Pub sessions have been a weekly event every Friday since the beginning of the pandemic back in the spring of 2020.
On March 1st, 2024, the session was dedicated to showcasing the tools developed within AI4Life presented by experts among the project partners. Attendees had the opportunity to learn about the BioImage Model Zoo, BioEngine, the BioImage.IO chatbot, and Open Calls and Challenges.

  • Anna Kreshuk, Group Leader at EMBL Heidelberg and scientific coordinator of AI4Life (together with Florian Jug), provided an overview of AI4Life & the BioImage Model Zoo.
  • Wei Ouyang, Assistant Professor at KTH and leader of the AICell Lab at SciLifeLab, Sweden, introduced BioEngine.
  • Caterina Fuster-Barceló, Post-doctoral researcher at Universidad Carlos III de Madrid (UC3M), presented the BioImage.IO chatbot.
  • Vera Galinova, bioimage analyst and research software engineer at Human Technopole, showcased the first Open Call selected projects and future Challenges.

The session recording is now available for public access, so if you missed it, here’s your opportunity to catch up!


AI4Life project shines at the “Effectively Communicating BioImage Analysis” workshop

by Caterina Fuster-Barceló and Florian Jug

This past February, the AI4Life project took the stage at the Effectively Communicating Bioimage Analysis workshop, held from the 12th to the 15th. Organised by The Company of Biologists and FocalPlane, the event proved to be a resounding success, drawing members of the AI4Life project alongside many other well-established members of our community.

The workshop served as a critical platform for exchange over some of the bioimage analysis community’s most pressing challenges. 

Among the highlights was the participation of Florian Jug from the Human Technopole (HT) in Milan, who captivated the audience as one of the invited speakers. Jug presented the AI4Life project and its initiatives, including the BioImage Model Zoo and Open Calls, showcasing the remarkable progress and achievements of the project over recent years. His presentation underscored the project’s efforts in bridging the divide between life scientists and developers, earning widespread admiration for its contributions.

Caterina Fuster-Barceló, representing the Universidad Carlos III de Madrid, Spain, also made significant contributions as part of the early-career researchers funded to attend. Chosen from numerous applications, Caterina represented the deepImageJ team, a Community Partner of the BioImage Model Zoo. She introduced the latest developments of the BioImage.IO Chatbot, a tool designed to address the challenges faced by deepImageJ and bioimage analysis at large.

The workshop not only served as a venue for learning and sharing but also as an opportunity for participants to connect with both new and familiar faces in a friendly and engaging environment. The event’s success reflects the community’s collective effort to foster an atmosphere conducive to growth, collaboration, and fun.

AI4Life stands at the forefront of reducing the gap between AI method development and biological imaging, offering essential services through European transnational and virtual access infrastructures. The project’s participation in the workshop is a testament to its commitment to advancing the field of bioimage analysis, marking yet another milestone in its journey towards integrating AI-based methods into the life sciences.


DL4MicEverywhere joins as a community partner

DL4MicEverywhere: reproducible and portable deep learning workflows for bioimage analysis

by Estibaliz Gómez-de-Mariscal, Iván Hidalgo-Cenalmor, Mariana G Ferreira, Ricardo Henriques

DL4MicEverywhere is a platform developed within the AI4Life project. It offers researchers an easy-to-use gateway to reproducible and portable deep learning techniques for bioimage analysis. The platform uses Docker containers to encapsulate deep learning-based approaches together with user-friendly interactive notebooks, guaranteeing smooth operation across various computing environments such as personal devices or high-performance computing (HPC) systems [1]. It currently incorporates numerous pre-existing ZeroCostDL4Mic notebooks (another community partner) for tasks such as segmentation, reconstruction, image translation and image generation.

The functionalities of DL4MicEverywhere are supported by a user-friendly GUI that allows users to rapidly launch the Docker containers and interact with the notebooks in a zero-code fashion. The interface is designed so that methods publicly available in the BioImage Model Zoo, as well as local ones, can be launched automatically without dealing with the intricacies of Docker configurations, environment setups, or coding. Local methods are handled by the advanced mode of the user interface.



Figure 1: DL4MicEverywhere interface modes. a) The basic mode allows launching containerised notebooks that are publicly available and tested. b) The advanced mode allows the automatic containerisation of local or private models.

A brief video tutorial is also available.

Read the preprint: I. Hidalgo-Cenalmor et al., DL4MicEverywhere: Deep learning for microscopy made flexible, shareable, and reproducible, bioRxiv, 2023.

Empowering Developers

DL4MicEverywhere serves as an infrastructure and service for containerising deep learning methods in the context of bioimage analysis. The platform provides developers with the tools to automatically containerise their methods and ensure the correct configuration of the built Docker images.

These are the key features of DL4MicEverywhere:

  • Automatic containerisation of bioimage analysis pipelines: The platform automatically builds Docker images for AMD64 (Windows/Linux/Intel macOS, with and without GPU access) and ARM64 (macOS M1/M2) systems. The process controls the versions of the required libraries upstream and downstream of the Docker container, enabling the automatic containerisation of bioimage analysis pipelines.
  • Integration of user-friendly Jupyter Notebooks: It allows the encapsulation of Jupyter Notebooks for high-level and documented programmatic interaction. These notebooks can be automatically converted into interactive interfaces for a zero-code experience. 
  • Continuous integration system: DL4MicEverywhere incorporates an automatic validation pipeline to test the correct containerisation of image processing pipelines. This ensures the reliability and accuracy of the containerised methods, contributing to their robustness and reproducibility.
  • Publicly available Docker images on Docker Hub: The platform automatically uploads validated Docker images to Docker Hub, ensuring their long-term accessibility.
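
To make the containerised launch step concrete, here is a minimal sketch of how a launcher can assemble a `docker run` invocation that mounts a data folder and exposes the Jupyter port (the image name and paths are hypothetical; this is a simplified stand-in for what the DL4MicEverywhere GUI does behind the scenes, not its actual code):

```python
import shlex

def compose_docker_run(image, notebook_dir, port=8888, gpu=False):
    """Build a `docker run` command string that mounts a local data
    folder and exposes the Jupyter port inside the container."""
    cmd = ["docker", "run", "--rm",
           "-p", f"{port}:8888",                      # publish Jupyter's port
           "-v", f"{notebook_dir}:/home/notebooks"]   # mount user data
    if gpu:
        cmd += ["--gpus", "all"]                      # pass GPUs through
    cmd.append(image)
    return shlex.join(cmd)

# hypothetical image name, for illustration only
print(compose_docker_run("example/dl4mic-notebook:latest", "/data/exp1"))
```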

Figure 2: Schematic description of the automatic containerisation proposed by DL4MicEverywhere. Developers can contribute their models within DL4MicEverywhere notebooks directly to the DL4MicEverywhere GitHub Repository. The repository runs an automatic continuous integration pipeline to test the format of the notebooks, the correct building of Docker Images and publishes a versioned Docker Image in Docker Hub. This containerisation is synchronised with the BioImage Model Zoo and follows the same specifications, ensuring that the methods are accessible to non-expert users. Non-expert users access the containerised workflows with a user-friendly graphical user interface (GUI) that automatically launches the Docker container corresponding to the operating system and configuration of the users. Once the Docker container is set up, the users can interact with the method directly in Jupyter Notebooks without dealing with the intricacies of Docker containerisation. Likewise, the users will be able to reproduce the pipelines, train their models and contribute them to the BioImage Model Zoo within a reproducible and portable ecosystem.


[1] Docker containers allow the full virtualisation of computational environments without affecting local installations. They make it possible to build the specific environment and dependency setup needed for each workflow. Once built, these containers are portable and installable across systems, which makes them highly recommended for ensuring the reproducibility of computational pipelines.


Engaging with AI4Life made easier

Engaging with AI4Life made easier

by Beatriz Serrano-Solano

We’ve launched a new section on our website dedicated to guiding you on how to engage with our project.

Are you looking to participate, collaborate, or simply learn more about AI4Life? Our new section has all the answers. Find out how you can contribute, connect, and engage with us effortlessly!

Explore the new section on our website.


BiaPy joins the BioImage Model Zoo as a Community Partner

by Daniel Franco

The Bioimage Analysis software BiaPy has officially joined the BioImage Model Zoo as a Community Partner! This means that BiaPy supports the BioImage Model Zoo format for deep learning models.

BiaPy is an open source Python library to easily build bioimage analysis pipelines based on deep-learning approaches. The library supports the image processing of 2D, 3D and multichannel microscopy image data. Specifically, BiaPy contains ready-to-use solutions for tasks such as semantic segmentation, instance segmentation, object detection, image denoising, single image super-resolution and image classification, as well as self-supervised learning alternatives.

BiaPy Jupyter notebooks that export compatible models are already accessible through the BioImage Model Zoo, and the current offering is expected to expand with a variety of additional models, including transformers. The integration of BiaPy in the BioImage Model Zoo aims to enhance the library’s visibility, foster greater collaboration, and better serve the community by broadening the range of advanced image processing approaches available, significantly empowering the field of BioImage Analysis.