AI4Life

AI4Life Standards and Interoperability

by Teresa Zulueta-Coarasa, Fynn Beuttenmueller, Anna Kreshuk, Beatriz Serrano-Solano

In AI4Life, we believe that interoperability and standardisation are the backbone of a healthy AI research ecosystem, allowing data and models to be reused and combined across different research groups, institutions, and platforms. Without standards, valuable datasets and AI models often remain underutilised: they are difficult to find, reuse, and reproduce, and less impactful than they could be.

One of the main goals of AI4Life has been to create and promote standards for sharing AI models and AI-ready datasets for biological images. By doing this, we aim to ensure that data and models are truly FAIR (Findable, Accessible, Interoperable, and Reusable) so they can support scientific discovery for years to come.

Setting standards for biological image datasets

In January 2023, the BioImage Archive organised a workshop that brought together 45 experts from diverse backgrounds: data producers, annotators, curators, AI researchers, bioimage analysts, and software developers. Together, they defined recommendations for sharing annotated, AI-ready biological image datasets.

These recommendations are grouped under the acronym MIFA:

  • Metadata: clear information about datasets and annotations.
  • Incentives: giving proper recognition to dataset creators.
  • Formats: adopting a small set of interoperable formats, such as OME-Zarr.
  • Accessibility: making datasets openly available in repositories like the BioImage Archive.

The MIFA guidelines have been published in Nature Methods (https://www.nature.com/articles/s41592-025-02835-8). They are expected to help researchers more easily train and evaluate AI models across diverse biological imaging tasks and unlock the value of archived imaging data.
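To make the Formats recommendation concrete, here is a minimal sketch of opening an OME-Zarr image with the zarr Python package. The file path is a placeholder, and the layout assumptions (multiscale arrays named "0", "1", … plus "multiscales" attributes) follow OME-NGFF conventions.

```python
# Minimal sketch: reading an OME-Zarr image with the zarr package.
# The path is a placeholder; layout assumptions follow OME-NGFF conventions.
import zarr

group = zarr.open_group("example_image.ome.zarr", mode="r")  # placeholder path

# OME-Zarr stores a multiscale image pyramid as arrays "0", "1", ...
# inside the group; "0" is the full-resolution level.
level0 = group["0"]
print(level0.shape, level0.dtype)

# OME-NGFF metadata (axes, scales) lives in the group attributes.
print(group.attrs.get("multiscales"))
```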

A standard for AI models

In addition to datasets, AI4Life also supports a model metadata standard. This standard describes how pre-trained models should be documented so that others can find, reuse, and integrate them into their work. It is openly available and registered in FAIRsharing, a trusted global resource for standards, repositories, and policies.

The model standard is implemented through the bioimageio.spec Python package, which provides a versioned metadata format for models. It works together with the bioimageio.core library, which offers utilities and adapters to make models compatible with different tools and frameworks.
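As a rough sketch of what this looks like in practice (assuming the bioimageio.spec 0.5-series API; function and field names may differ between versions, and the model ID below is just an example from bioimage.io):

```python
# Sketch: loading and validating a model description with bioimageio.spec
# (assumes the 0.5-series API; names may differ between versions)
from bioimageio.spec import load_description

# Load a model by its bioimage.io ID or a local rdf.yaml path; parsing
# validates the metadata against the versioned format.
model = load_description("affable-shark")  # example model ID on bioimage.io
print(model.name)
print([author.name for author in model.authors])
```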

With this approach, models can be shared in a way that is:

  • Findable: authors and citations are clearly tracked.
  • Accessible: models and their documentation are available through the bioimage.io website.
  • Interoperable: the model metadata enables programmatic execution through the bioimageio.core library, making models seamlessly usable through all our Community Partner tools or directly from Python or Java code (see the inference sketch after this list).
  • Reusable: thanks to the metadata, model inference can be executed in a standardised way even without access to the model’s original code.
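A minimal sketch of such standardised inference with bioimageio.core follows. The predict helper reflects recent releases of the library; exact names and signatures may vary between versions, and the model ID and image path are placeholders.

```python
# Sketch: standardised inference via bioimageio.core, driven entirely by
# the model's metadata rather than model-specific code.
# (assumes a recent bioimageio.core release; names may differ by version)
from bioimageio.core import predict

result = predict(
    model="affable-shark",        # example model ID on bioimage.io
    inputs="example_image.tif",   # placeholder input image
)
```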

The BioImage Archive has developed the AI4Life Model Evaluation Platform to benchmark datasets and models more directly, building bridges between the BioImage Archive and the BioImage Model Zoo. 

While pre-trained models are already very useful, they are even more powerful when bundled together with their training datasets and training code. The model metadata supports linking to datasets and code through dedicated metadata fields and a minimal description format for datasets and notebooks.
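For example, the model spec includes a training_data field that can point to the dataset a model was trained on. A sketch (attribute names assume the bioimageio.spec 0.5 model description; the model ID is a placeholder):

```python
# Sketch: reading the dataset link from a model description
# (attribute names assume the bioimageio.spec 0.5 model description)
from bioimageio.spec import load_description

model = load_description("affable-shark")  # example model ID on bioimage.io
if model.training_data is not None:
    # Either a link to a published dataset or an inline dataset description
    print(model.training_data)
```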

The dataset description currently available in bioimageio.spec serves as a starting point; plans are underway to extend it with deeper integration of the MIFA guidelines. In the future, this will make programmatic access to well-described datasets even easier, enabling researchers worldwide to train, compare, and improve AI models for bioimaging.

AI4Life Denoising Challenges 2025: Results

by Vera Galinova

The AI4Life Denoising Challenges returned in 2025 with two new tasks: the Microscopy Supervised Denoising Challenge (MDC25) and the Calcium Imaging Denoising Challenge (CIDC25). Both aim to benchmark and improve methods that address noise in microscopy data, a common obstacle for biological and medical imaging.

Why denoising matters

Microscopy is a key tool in life sciences, but image quality is often limited by acquisition noise. This noise can mask fine structures or dynamic processes, making quantitative analysis more difficult. Deep learning–based denoising methods, which learn directly from data rather than relying only on predefined filters, are increasingly used to address this challenge.

The Challenges

Microscopy Supervised Denoising Challenge (MDC25)
This challenge focused on supervised denoising, where models are trained with pairs of noisy and clean images. The setup allowed participants to directly assess how well their methods recover ground truth structures, and to explore strategies for making denoising more precise and consistent across diverse microscopy data. The results can be viewed at https://ai4life-mdc25.grand-challenge.org/results

Calcium Imaging Denoising Challenge (CIDC25)
This task addressed calcium imaging, a widely used technique to record neuronal and cellular activity. Because calcium signals are both spatially and temporally structured, effective denoising needs to preserve not only cell morphology but also the temporal dynamics of activity traces. The challenge provided synthetic datasets with known ground truth to allow controlled evaluation across different noise levels and image content. Participants were also encouraged to develop unsupervised approaches and new evaluation strategies that could be applied to real experimental data, where noise-free ground truth is not available. The challenge is still running!

Participation in 2025

MDC25

  • Leaderboard 1: 3 participants / 2 methods
  • Leaderboard 2: 6 participants / 3 methods
  • Leaderboard 3: 13 participants / 4 methods
  • Leaderboard 4: 7 participants / 4 methods

CIDC25

  • Preliminary (Content Generalization): 5 participants / 3 methods
  • Final (Content Generalization): 8 participants / 3 methods
  • Preliminary (Noise Level Generalization): 5 participants / 3 methods
  • Final (Noise Level Generalization): 8 participants / 3 methods

What’s next?

This year’s challenges highlighted how supervised and unsupervised methods can be applied to different types of microscopy data and evaluation settings. While the AI4Life grant has now concluded, the Calcium Imaging Denoising Challenge platform remains accessible, and late submissions are welcome for benchmarking purposes.

👉 https://ai4life-cidc25.grand-challenge.org/