
Models developed in AI4Life and Bioimage.io now made available in AIVIA

AI4Life and Leica Microsystems are announcing their collaboration to make deep-learning models developed by the bioimage community available to a wider user community through integration into Leica's AIVIA software.

In a commitment to the scientific community, AI4Life and Leica Microsystems join forces to help researchers leverage AI in complex experiments. AI4Life, coordinated by Euro-BioImaging, is a Horizon Europe-funded project that brings together the computational and life science communities. Its goal is to empower life science researchers to harness the full potential of Artificial Intelligence (AI) methods for bioimage analysis, in particular microscopy image analysis, by providing services and developing standards aimed at both developers and users.

One of the consortium's objectives is to build an open, accessible, community-driven repository (the BioImage Model Zoo) of FAIR pre-trained AI models and to develop services that deliver these models to life scientists. Together with their community partners, the consortium ensures that models and tools are interoperable with Fiji, ImageJ, ilastik, and other open-source software tools.

Meanwhile, Leica Microsystems has cultivated a valuable connection with the AI4Life project and the BioImage Model Zoo. Most recently, the Leica team had the opportunity to meet the people behind the AI4Life project at a workshop organized by the Euro-BioImaging Industry Board (EBIB). It was there that they realized the power of their combined resources.

Widely recognized for optical precision and innovative technology, Leica Microsystems supports the imaging needs of the scientific community with AIVIA, its advanced AI-powered image analysis software. AIVIA is a complete 2-to-5D image visualization and analysis platform designed to let researchers unlock insights previously out of reach. Through this joint effort, BioImage Model Zoo models can now also be easily integrated into Leica's commercial software AIVIA.

“We see this initiative as an opportunity to tie in AIVIA with the scientific community and to make work done by the community accessible in a convenient way to AIVIA users,” said Constantin Kappel, Manager AI Microscopy and Insights at Leica Microsystems.

To achieve this, Leica is taking the models published on BioImage.io and converting them to its own AIVIA Model repository format for interoperability with AIVIA. Starting with a small number of models, the offering will gradually be expanded.

This further supports AI4Life's aim to lower the barriers to using pre-trained models in image data analysis, both for users without substantial computational expertise and for those with existing licences who rely on commercial solutions.

This is a great example of how resources curated by academia and available in open access can be of interest to the imaging industry. The source of each model is acknowledged directly in AIVIA, a testimony to industry-academia collaboration and an endorsement of the models curated by the BioImage Model Zoo.

“We much appreciate the interest of Leica. Models relevant to AIVIA will be directly available to users, which is a testimony to fruitful industry-academia collaboration and a great endorsement of the utility of the BioImage Model Zoo. We hope many other software tools will follow Leica's lead and also start benefitting from this community resource we are currently building,” said Florian Jug, one of the scientific coordinators of AI4Life.

Bioimage.io is supported by AI4Life. AI4Life has received funding from the European Union's Horizon Europe research and innovation programme under grant agreement number 101057970.

First AI4Life Open Call: Announcement of selected projects

by Florian Jug & Beatriz Serrano-Solano

The first AI4Life Open Call received an impressive response, with a total of seventy-two applications. It proved to be an incredible opportunity for both life scientists seeking image analysis support and computational scientists eager to explore the evolving landscape of AI methodologies. In this blog post, we announce the awarded projects and invite you to join us behind the scenes as we walk through the selection process that determined which projects were selected.

Awarded projects

First things first, here is the list of titles of the selected projects (in alphabetical order):

  • Analysis of the fiber profile of skeletal muscle.
  • Atlas of Symbiotic partnerships in plankton revealed by 3D electron microscopy.
  • Automated and integrated cilia profiling.
  • Identifying senescent cells through fluorescent microscopy.
  • Image-guided gating strategy for image-enabled cell sorting of phytoplankton.
  • Leaf tracker plant species proof.
  • SGEF, a RhoG-specific GEF, regulates lumen formation and collective cell migration in 3D epithelial cysts.
  • Treat CKD.

The projects are diverse, covering scientific topics ranging from Plant Biology, Physiology, Metabolism, Cell Biology, Molecular Biology, Marine Biology, Flow Cytometry, and Medical Biology to Regenerative Biology and Neuroscience. The researchers who proposed the projects come from the following countries: France (2x), Germany, Italy, Netherlands, Portugal, and the USA (2x).

How did the review procedure work?

1. Eligibility checks

The selection procedure started with internal eligibility checks. Was the project submitted in full? Was the information complete, telling a coherent story fit for external review? At this stage, we only had to drop 10 projects out of a grand total of 72 submitted projects. Our intention was to filter out only those projects that painted an incomplete picture and to leave the judgement of scientific aspects to our reviewers.

2. Reviewing procedure

After assembling a panel of 16 international reviewers (see list below), we distributed anonymized projects among them. All personal and institutional information was removed, leaving only project-relevant data to be reviewed. We aimed at receiving 3 independent reviews per project, which required each reviewer to review about 11 projects in total (62 eligible projects × 3 reviews ÷ 16 reviewers ≈ 11.6).
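
As an illustration, here is a minimal sketch of how such a balanced assignment can be computed. The numbers come from the text above; the function name and round-robin strategy are our own illustrative choices, not a description of the exact tooling we used:

```python
import random

NUM_PROJECTS = 62       # 72 submissions minus 10 dropped in the eligibility checks
NUM_REVIEWERS = 16
REVIEWS_PER_PROJECT = 3

def assign_reviews(num_projects, num_reviewers, reviews_per_project, seed=42):
    """Round-robin assignment: every project receives the desired number of
    independent reviews while the load stays balanced across reviewers."""
    rng = random.Random(seed)
    reviewers = list(range(num_reviewers))
    rng.shuffle(reviewers)  # randomize which reviewer starts with project 0
    assignments = {r: [] for r in reviewers}
    slot = 0
    for project in range(num_projects):
        # 3 < 16, so the three consecutive slots always hit distinct reviewers
        for _ in range(reviews_per_project):
            assignments[reviewers[slot % num_reviewers]].append(project)
            slot += 1
    return assignments

loads = [len(p) for p in assign_reviews(NUM_PROJECTS, NUM_REVIEWERS, REVIEWS_PER_PROJECT).values()]
print(loads)  # 62 * 3 / 16 ≈ 11.6, so each reviewer ends up with 11 or 12 projects
```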

Here is the list of questions we asked our reviewers via an electronic form:

  1. Please rate the following statements from 1 (Likely not) to 5 (Likely):
    1. The proposed project is amenable to Deep Learning methods/approaches/tools.
    2. Does the project have well-defined goals (and are those goals the correct ones)?
    3. A complete solution to the proposed project will require additional classical routines to be developed.
    4. The project, once completed, will be useful for a broader scientific user base.
    5. The project will likely require the generation of significant amounts of training data.
    6. This project likely boils down to finding and using the right (existing) tool.
    7. Approaches/scripts/models developed to solve this project will likely be reusable for other, similar projects.
    8. The project, once completed, will be interesting to computational researchers (e.g. within a public challenge).
    9. The applicant(s) might have a problematic attitude about sharing their data.
    10. Data looks as if the proposed project might be feasible (results good enough to make users happy).
    11. Do you expect that we can (within reasonable effort) improve on the existing analysis pipeline?
  2. What are the key sub-tasks the project needs us to improve?
  3. What do you expect it will take (in person-days) to generate sufficient training data?
  4. Do suitable tools for this exist? What would you use?
  5. Once sufficient training data exists, what would you expect is the workload for AI4Life to come up with a reasonable solution for the proposed project? Please answer first in words and then (further below) with the minimum and maximum number of days you expect this project to take.
  6. What is your estimated minimum number of days for successfully working on this project?
  7. What is your estimated maximum number of days for successfully working on this project?
  8. On a scale from 1 to 10, how enthusiastic are you about this project?

Due to the unforeseen unavailability of some reviewers, we ended up with about 2.7 reviews per project, with some projects receiving only 2 but most receiving all 3 desired reviews.

 
3. Scoring projects according to reviewer verdicts

We first aggregated all reviews per project by averaging numerical values and concatenating textual evaluations. We then derived three project scores: a quality score (the main metric), a total effort score, and a slightly more subjective excitingness score.

  1. The quality score was computed as a weighted average of the numerical evaluations we received, i.e., answers to questions (1-1) to (1-11) above. (Note: higher values are not better for all questions; we of course first inverted the “low-is-better” ones to make all values compatible.)
  2. The effort score took the (minimum) time estimates for label data generation and for successfully completing the project, and combined them into a value corresponding to the estimated total person-months to completion.
  3. The excitingness score is simply the average of the answers received to question 8.

The final score was computed by: 0.75*(quality/effort) + 0.25*excitingness

This formula favors projects that are estimated to require less time, which is in line with our aim to help more individuals through the AI4Life Open Calls.
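
A minimal sketch of this scoring, under our own assumptions about the data layout: the variable names are illustrative, the per-question weights were an internal choice (uniform weights are used here), and the exact set of inverted “low-is-better” questions is an illustrative guess:

```python
import numpy as np

# Zero-based indices of sub-questions 1-1 ... 1-11 where a LOW answer is
# better (e.g. "requires significant training data", "problematic attitude
# about sharing data"). Hypothetical selection for illustration.
LOW_IS_BETTER = [2, 4, 8]

def quality_score(answers, weights=None):
    """answers: (n_reviews, 11) array of values in 1..5; reviews are averaged
    per question after flipping the low-is-better items, then weight-averaged."""
    a = np.asarray(answers, dtype=float).copy()
    a[:, LOW_IS_BETTER] = 6 - a[:, LOW_IS_BETTER]  # map 1..5 -> 5..1
    w = np.ones(a.shape[1]) if weights is None else np.asarray(weights, dtype=float)
    return float(np.average(a.mean(axis=0), weights=w))

def final_score(quality, effort_person_months, excitingness):
    # the published formula: favor high quality per unit of estimated effort
    return 0.75 * (quality / effort_person_months) + 0.25 * excitingness

# Example: one project with three (made-up) reviews
reviews = [[4, 5, 2, 4, 1, 2, 4, 3, 1, 4, 5],
           [5, 4, 3, 5, 2, 1, 5, 4, 1, 4, 4],
           [4, 4, 2, 4, 2, 2, 4, 3, 2, 5, 4]]
q = quality_score(reviews)
print(final_score(q, effort_person_months=2.0, excitingness=7.3))
```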

4. Final decisions by the Open Call Selection Committee
  1. After anonymized scoring of all projects, we added the applicants’ identities and institutions back into the final decision matrix.
  2. We were prepared to break ties and potentially remove better-ranked projects for the sake of higher diversity. To our surprise, the top-ranked projects already showed a wonderful diversity, making this step unnecessary.
  3. The final decision was taken by the Open Call Selection Committee. With the members of the committee (see below), we revisited all steps of the Open Call process, from application and reviewing to the final grading stage. After some stability analysis (i.e., after changing the weights of the weighted sums in the procedure outlined above and observing that the best projects remained stably top-ranked; see the sketch after this list), the Committee decided to simply select as many of the best-evaluated projects as we could fit into the AI4Life time budget for this round of Open Calls. This led to a total of 8 selected projects.
  4. Seeing the extraordinary quality of many of the submitted projects, it was clear to us that many more than 8 projects deserved support. We therefore decided to put a sizeable number of additional projects on a waiting list, hoping that we can engage more helping hands.
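
For the stability analysis mentioned in step 3, here is a sketch of the kind of check we mean, continuing the illustrative functions above. The perturbation grid and function names are our own assumptions, not the exact procedure used:

```python
import numpy as np

def top_k(quality, effort, excitingness, w_quality, k=8):
    """Rank projects by w_quality*(quality/effort) + (1-w_quality)*excitingness
    and return the indices of the k best-scoring ones."""
    scores = w_quality * (quality / effort) + (1 - w_quality) * excitingness
    return set(np.argsort(scores)[::-1][:k])

def stability(quality, effort, excitingness, k=8):
    """How much does the selected top-k change when the weights are moved
    around the chosen 0.75/0.25 split?"""
    reference = top_k(quality, effort, excitingness, 0.75, k)
    overlaps = []
    for w in np.arange(0.60, 0.91, 0.05):
        overlaps.append(len(reference & top_k(quality, effort, excitingness, w, k)) / k)
    return min(overlaps)  # 1.0 means the selection never changed

# Example with random data (the real scores came from the reviews):
rng = np.random.default_rng(0)
q, e, x = rng.uniform(1, 5, 62), rng.uniform(1, 6, 62), rng.uniform(1, 10, 62)
print(stability(q, e, x))
```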

Who was involved in the review process?

And now? What’s next?

The selected projects will be assigned to our AI4Life experts, who are waiting to support them. All other projects are offered a space in the AI4Life Bartering Corner, a new section soon to appear on our website, where projects will be showcased to computational experts who can reach out to the proposing parties and engage in a fruitful collaboration.

If you did not apply to the first Open Call, we invite you to do so at the beginning of 2024. Subscribe to our newsletter, and we will inform you when the next call opens.

Additionally, if you are interested in putting any open analysis problem you have on our Bartering Corner, please fill out this form.

If you need help sooner, we recommend Euro-BioImaging’s Web Portal, where you can access a network of experts in the field of image analysis. Please note that this service may involve associated costs, but access funds for certain research topics are available through initiatives such as ISIDORe.