AI4Life

First AI4Life Open Call: 
Announcement of selected projects

by Florian Jug & Beatriz Serrano-Solano

The first AI4Life Open Call received an impressive response, with a total of seventy-two applications. It proved to be an incredible opportunity for both life scientists seeking image analysis support and computational scientists eager to explore the evolving landscape of AI methodologies. In this blog post, we announce the awarded projects and invite you behind the scenes to explore the selection process that determined which projects were selected.

Awarded projects

First things first, here is the list of titles of the selected projects (in alphabetical order):

  • Analysis of the fiber profile of skeletal muscle.
  • Atlas of Symbiotic partnerships in plankton revealed by 3D electron microscopy.
  • Automated and integrated cilia profiling.
  • Identifying senescent cells through fluorescent microscopy.
  • Image-guided gating strategy for image-enabled cell sorting of phytoplankton.
  • Leaf tracker plant species poof.
  • SGEF, a RhoG-specific GEF, regulates lumen formation and collective cell migration in 3D epithelial cysts.
  • Treat CKD.

The projects are diverse, covering scientific topics such as Plant Biology, Physiology, Metabolism, Cell Biology, Molecular Biology, Marine Biology, Flow Cytometry, Medical Biology, Regenerative Biology, and Neuroscience. The researchers who proposed the projects come from the following countries: France (2x), Germany, Italy, Netherlands, Portugal, and the USA (2x).

How did the review procedure work?

1. Eligibility checks

The selection procedure started with internal eligibility checks. Is the submission complete? Does the information tell a coherent story that is fit for external review? At this stage, we only had to drop 10 projects out of a grand total of 72 submitted projects. Our intention was to filter out only those projects that drew an incomplete picture and to leave the judgement of the scientific aspects to our reviewers.

2. Reviewing procedure

After assembling a panel of 16 international reviewers (see list below), we distributed anonymized projects among them. All personal and institutional information was removed, leaving only project-relevant data to be reviewed. We aimed at receiving 3 independent reviews per project, requiring each reviewer to assess about 11 projects in total.

Here is the list of questions we asked our reviewers via an electronic form:

  1. Please rank the following statements from 1 (Likely not) to 5 (Likely):
    1. The proposed project is amenable to Deep Learning methods/approaches/tools.
    2. Does the project have well-defined goals (and are those goals the correct ones)?
    3. A complete solution to the proposed project will require additional classical routines to be developed.
    4. The project, once completed, will be useful for a broader scientific user base.
    5. The project will likely require the generation of significant amounts of training data.
    6. This project likely boils down to finding and using the right (existing) tool.
    7. Approaches/scripts/models developed to solve this project will likely be reusable for other, similar projects.
    8. The project, once completed, will be interesting to computational researchers (e.g. within a public challenge).
    9. The applicant(s) might have a problematic attitude about sharing their data.
    10. Data looks as if the proposed project might be feasible (results good enough to make users happy).
    11. Do you expect that we can (within reasonable effort) improve on the existing analysis pipeline?
  2. What are the key sub-tasks the project needs us to improve?
  3. What would you expect it will take (in person-days) to generate sufficient training data?
  4. Do suitable tools for this exist? What would you use?
  5. Once sufficient training data exists, what would you expect is the workload for AI4Life to come up with a reasonable solution for the proposed project? Please answer first in words and then (further below) with the minimum and maximum number of days you expect this project to take.
  6. What is your estimated minimum number of days for successfully working on this project?
  7. What is your estimated maximum number of days for successfully working on this project?
  8. On a scale from 1 to 10, how enthusiastic are you about this project?

Due to the unforeseen unavailability of some reviewers, we ended up with about 2.7 reviews per project, with some projects receiving 2 but most projects receiving all 3 desired reviews.

 
3. Scoring projects according to reviewer verdicts

We first aggregated all reviews per project by averaging numerical values and concatenating textual evaluations. We then developed three project scores: a quality score (main metric), a total effort score, and a slightly more subjective excitingness score.

  1. The quality score was computed as a weighted average of the evaluations we received, i.e., the answers to questions 1.1 to 1.11 above. (Note: higher values are not better for all questions; we of course first inverted the “low-is-better” ones to make all values comparable.)
  2. The effort score took the (minimum) time estimates for label data generation and for successfully completing the project, and computed a value corresponding to the estimated total person-months to completion.
  3. The excitingness score is simply the average of the values received as answers to question 8.

The final score was computed by: 0.75*(quality/effort) + 0.25*excitingness

This formula favors projects that are estimated to take less time, which is in line with our aim to help more individuals through the AI4Life Open Calls.
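For illustration, here is a minimal sketch (in Python) of how such an aggregation and scoring could look. The data layout, per-question weights, the set of “low-is-better” questions, and the days-to-months conversion are illustrative assumptions, not the exact values used in our evaluation.

```python
# Minimal sketch of the per-project scoring described above.
# Column names, weights, and the set of "low-is-better" questions are
# illustrative assumptions, not the exact values used by AI4Life.
from statistics import mean

# Each review: 11 Likert answers (questions 1.1-1.11), a minimum-days
# estimate, and an enthusiasm value (question 8).
reviews = {
    "project_A": [
        {"likert": [5, 4, 2, 5, 3, 2, 4, 3, 1, 4, 5], "min_days": 10, "enthusiasm": 8},
        {"likert": [4, 4, 3, 4, 2, 1, 5, 4, 1, 5, 4], "min_days": 15, "enthusiasm": 7},
    ],
}

LOW_IS_BETTER = {2, 4, 8}   # 0-based indices of questions where low answers are good (assumed)
WEIGHTS = [1.0] * 11        # per-question weights (assumed uniform here)

def score(project_reviews, w_quality=0.75, w_excite=0.25):
    # 1) Average each Likert question across reviewers, inverting "low-is-better" ones.
    avg = [mean(r["likert"][i] for r in project_reviews) for i in range(11)]
    adjusted = [(6 - v) if i in LOW_IS_BETTER else v for i, v in enumerate(avg)]
    quality = sum(w * v for w, v in zip(WEIGHTS, adjusted)) / sum(WEIGHTS)

    # 2) Effort: average of the reviewers' minimum-days estimates, in person-months.
    effort = mean(r["min_days"] for r in project_reviews) / 20.0  # ~20 working days per month

    # 3) Excitingness: plain average of the enthusiasm answers.
    excitingness = mean(r["enthusiasm"] for r in project_reviews)

    # 4) Final score, as in the formula above.
    return w_quality * (quality / effort) + w_excite * excitingness

print({p: round(score(rs), 2) for p, rs in reviews.items()})
```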

4. Final decisions by the Open Call Selection Committee
  1. After anonymized scoring of all projects, we added the applicants’ identities and institutions back into the final decision matrix.
  2. We were prepared to break ties and potentially remove better-ranked projects for the sake of higher diversity. To our surprise, the top-ranked projects already showed a wonderful diversity, making this step unnecessary.
  3. The final decision was taken by the Open Call Selection Committee. Together with the members of the committee (see below), we revisited all steps of the Open Call process, from application and reviewing to the final grading stage. After some stability analysis (i.e., changing the weights of the weighted sums in the procedure outlined above and confirming that the best projects remained stably top-ranked; a toy version of such a check is sketched after this list), the Committee decided to simply select as many of the best-evaluated projects as would fit into the AI4Life time budget for this round of Open Calls. This led to a total of 8 selected projects.
  4. Seeing the extraordinary quality of many of the submitted projects, it was clear to us that many more than 8 projects deserved to receive support. We therefore decided to put a sizeable number of additional projects on a waiting list, hoping that we can engage more helping hands.
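As a toy illustration of the stability analysis mentioned in step 3, one could re-run the scoring with shifted weights and check whether the set of top-ranked projects changes. The sketch below reuses the hypothetical score() function from the previous example; the weight values and the k = 8 cutoff are illustrative only.

```python
# Toy stability check (illustrative, reusing the score() sketch above):
# vary the quality/excitingness weights and confirm that the same
# projects remain in the top k.
def top_k(scores_by_project, k=8):
    return set(sorted(scores_by_project, key=scores_by_project.get, reverse=True)[:k])

def is_stable(all_reviews, k=8):
    baseline = top_k({p: score(rs, 0.75, 0.25) for p, rs in all_reviews.items()}, k)
    for w_quality in (0.6, 0.7, 0.8, 0.9):
        shifted = top_k({p: score(rs, w_quality, 1.0 - w_quality) for p, rs in all_reviews.items()}, k)
        if shifted != baseline:
            return False
    return True
```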

Who was involved in the review process?

And now? What’s next?

The selected projects will be assigned to our AI4Life experts waiting to support them. All other projects are offered a space in the AI4Life Bartering Corner, a new section soon to appear on our website, where projects will be showcased to computational experts who can reach out to the proposing parties and engage in a fruitful collaboration. 

If you did not apply to the first Open Call, we invite you to do so at the beginning of 2024. Subscribe to our newsletter, and we will inform you when the next call opens.

Additionally, if you are interested in putting any open analysis problem you have on our Bartering Corner, please fill out this form.

If you need help sooner, we recommend Euro-BioImaging’s Web Portal, where you can access a network of experts in the field of image analysis. Please note that this service may involve associated costs, but access funds for certain research topics are available through initiatives such as ISIDORe.