Explore the real-world projects addressed and selected through our Open Calls, showcasing the diverse applications of AI methods in life sciences.
Researchers from the University of Trieste are studying the regulation of the metabolic properties of murine skeletal muscles by analyzing a series of sections of a single muscle, each stained with different markers and imaged with light microscopy. The goal of the analysis is to segment all cells in each section and identify them across the different acquired images. Although the same cells should be present in each section, their morphology can vary greatly, making identification across sections a challenging task.
As a first step, we aligned the different muscle slices to each other using BigWarp – a Fiji plugin for sample registration and alignment. We then segmented the cells with Cellpose – a popular deep-learning library for cell segmentation. Finally, to link the resulting cell segmentations across the multiple slices, we used the TrackMate plugin for Fiji.
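The linking step can be illustrated outside Fiji as well. Below is a minimal numpy sketch – not the actual TrackMate workflow – that greedily links labelled cells between two segmented slices by their intersection-over-union (IoU); `link_labels` and its threshold are illustrative names, not part of any of the tools mentioned above.

```python
import numpy as np

def link_labels(masks_a, masks_b, iou_threshold=0.3):
    """Greedily match each labelled cell in one slice to the best-overlapping
    cell in the next slice, keeping only matches above an IoU threshold."""
    links = {}
    for la in np.unique(masks_a):
        if la == 0:  # 0 is background
            continue
        region_a = masks_a == la
        best_iou, best_lb = 0.0, None
        # only consider labels that actually overlap region_a
        for lb in np.unique(masks_b[region_a]):
            if lb == 0:
                continue
            region_b = masks_b == lb
            inter = np.logical_and(region_a, region_b).sum()
            union = np.logical_or(region_a, region_b).sum()
            if inter / union > best_iou:
                best_iou, best_lb = inter / union, lb
        if best_iou >= iou_threshold:
            links[int(la)] = int(best_lb)
    return links
```

Because cell morphology changes between sections, a pure overlap criterion like this only works after the registration step has brought the slices into a common coordinate frame.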
Researchers at Illinois State University are studying how the expression level of motor transport proteins affects their function in mediating the assembly and length of cilia.
To do this, they use fluorescence microscopy in conjunction with AI-based image analysis of hundreds of cells. This complex analysis required the integration of three separate segmentation steps: 1) the cell bodies of the cells ectopically expressing fluorescently labelled motor proteins, 2) the nuclei of all of the cells, and 3) the cilia themselves (a challenging task requiring the integration of two separate markers).
The first two segmentations were achieved using a custom deep-learning model trained in Cellpose, while the last was performed by random-forest pixel classification using Labkit. To approximate expression level, the fluorescence intensity of the motor protein was measured in a perinuclear region surrounding each nucleus, while the lengths of cilia belonging to both expressing and non-expressing cells were measured. These measurements can then be combined to correlate cilia length with the intensity of motor protein expression across the entire field of cells.
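The perinuclear measurement can be sketched in a few lines: dilate the nucleus mask, keep only the surrounding ring, and average the motor-protein channel inside it. This is a simplified illustration assuming scipy is available; `perinuclear_intensity` and its `ring_width` parameter are hypothetical, not part of the actual analysis scripts.

```python
import numpy as np
from scipy import ndimage as ndi

def perinuclear_intensity(image, nucleus_mask, ring_width=5):
    """Mean intensity in a ring of ~ring_width pixels around a nucleus.

    image        : 2D array, the motor-protein fluorescence channel
    nucleus_mask : 2D bool array, True inside the segmented nucleus
    """
    dilated = ndi.binary_dilation(nucleus_mask, iterations=ring_width)
    ring = dilated & ~nucleus_mask  # dilated area minus the nucleus itself
    return image[ring].mean()
```

In the real pipeline the ring would additionally be intersected with the Cellpose cell-body mask, so that intensity is only measured inside the expressing cell.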
In this project, the main challenge faced by the researchers from the CNRS and Grenoble University was the segmentation of various organelles of microalgae, in both free-living and symbiotic forms, in large 3D electron microscopy images. This segmentation is required in order to reconstruct in 3D and quantify the morphometrics of key organelles, such as the chloroplast. The complexity of the cells and the size of the stacks make complete manual annotation – and hence the generation of ground truth for training a deep-learning model – extremely time-consuming and expensive. New methods for rapid automated segmentation are therefore necessary to unveil the cellular architecture of microalgae.
In the absence of ground truth for training a deep-learning algorithm, we decided to tackle this challenge using a different approach. We developed a napari plugin to train a Random Forest model on the embeddings of the Segment Anything Model (SAM), guided by a few scribble labels provided by the user. The use of a Random Forest algorithm allows semantic segmentation of multiple types of organelles across the whole stack, with little manual effort.
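The core idea of the plugin can be sketched as follows: treat the (upsampled) SAM embedding at each pixel as a feature vector, train a Random Forest only on the scribbled pixels, and then predict a class for every pixel of the stack. The sketch below uses scikit-learn and synthetic embeddings; the function names are illustrative and do not correspond to the plugin's actual API.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_from_scribbles(embeddings, scribbles):
    """Train a Random Forest on scribble-labelled pixels only.

    embeddings : (H, W, C) per-pixel feature vectors (e.g. SAM image
                 embeddings upsampled to image resolution)
    scribbles  : (H, W) int array of class labels, 0 = unlabelled
    """
    labelled = scribbles > 0
    X = embeddings[labelled]  # (n_labelled_pixels, C)
    y = scribbles[labelled]
    return RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def predict_semantic(clf, embeddings):
    """Predict a semantic class for every pixel."""
    h, w, c = embeddings.shape
    return clf.predict(embeddings.reshape(-1, c)).reshape(h, w)
```

Because the forest is cheap to train, the user can iterate quickly: add a few more scribbles where the prediction fails and retrain, without ever producing dense ground-truth annotations.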
Researchers from the University of Toledo (USA) are imaging a confluent 2D monolayer of epithelial cells. The monolayer is scratched with a pipette tip, and the resulting video shows the migration of the cells to close the wound. They would like to automatically compare the behavior of different cell lineages, including cell morphology, which requires segmenting and tracking each cell and nucleus over time.
In this project, the researchers, working for RD Néphrologie in Montpellier (France), are studying the effect of Chronic Kidney Disease (CKD) on the density of collagen in specific tissues, such as the heart and kidney. Using mouse and rat models, they extract tissue slices and detect collagen using a biochemical marker.
In order to estimate the density of collagen in the tissue, we used Labkit to classify each pixel into one of four classes: background, cells, tissue or collagen. Labkit is a Fiji plugin with an intuitive interface that allows labelling pixels in the images and training a random forest classifier. We designed a collection of Fiji scripts to normalise the images, exclude parts of the images from the analysis using masks and perform the quantification. We proposed several ways to obtain the masks for region exclusion, from creating regions of interest in Fiji to using advanced deep learning algorithms such as the Segment Anything Model (SAM).
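The final quantification reduces to counting pixels per class. A minimal sketch, assuming the four Labkit classes are exported as integer codes in a class map (the codes and the `collagen_density` helper below are illustrative, not the actual Fiji scripts):

```python
import numpy as np

# Hypothetical integer codes for the four pixel classes
BACKGROUND, CELLS, TISSUE, COLLAGEN = 0, 1, 2, 3

def collagen_density(class_map, exclusion_mask=None):
    """Fraction of collagen pixels within the tissue area.

    class_map      : (H, W) int array of per-pixel class codes
    exclusion_mask : optional (H, W) bool array, True = exclude from analysis
    """
    valid = class_map != BACKGROUND
    if exclusion_mask is not None:
        valid &= ~exclusion_mask
    tissue_area = np.isin(class_map, (CELLS, TISSUE, COLLAGEN)) & valid
    collagen = (class_map == COLLAGEN) & valid
    return collagen.sum() / tissue_area.sum()
```

Normalising the density by the tissue area rather than the full image makes the measurement robust to slices that do not fill the field of view.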
In this project, a researcher from the European Molecular Biology Laboratory (EMBL) in Heidelberg is using a cutting-edge commercial flow cytometer to sort phytoplankton from lab-grown cultures or field samples. Sorting is traditionally done by selecting features measured by the instrument and manually drawing a gate that defines the range of values in these features corresponding to the cells to be selected. However, this new instrument is image-enabled: it exports not only traditional features but also features derived from fluorescence images, and it supports importing gating strategies into the control software. This opens the door to automated analysis of the features and, consequently, the generation of a gating strategy that can be uploaded directly to the flow cytometer.
To tackle this project, we developed a simple feature selection approach, then generated polygon gates out of 2D histograms while using a similarity threshold between samples. Finally, we explored the upload of gating strategies to the instrument as a proof of principle.
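The gate-generation step can be sketched as follows: build a 2D histogram of two selected features, keep only the high-density bins, and use their bin centres as the basis of a polygon gate (for example, by taking their convex hull). This is a simplified illustration with illustrative names, not the exact procedure used in the project.

```python
import numpy as np

def density_gate(x, y, bins=64, density_quantile=0.8):
    """Return bin centres of the high-density region of the 2D histogram
    of two features; a polygon gate can then be drawn around these points
    (e.g. as their convex hull).

    x, y             : 1D arrays of per-event feature values
    density_quantile : quantile of the non-empty bin counts used as the
                       density threshold
    """
    hist, x_edges, y_edges = np.histogram2d(x, y, bins=bins)
    threshold = np.quantile(hist[hist > 0], density_quantile)
    ix, iy = np.nonzero(hist >= threshold)
    cx = (x_edges[ix] + x_edges[ix + 1]) / 2  # bin centres on each axis
    cy = (y_edges[iy] + y_edges[iy + 1]) / 2
    return np.column_stack([cx, cy])
```

Comparing the gates obtained from different samples with a similarity threshold, as described above, guards against uploading a gating strategy that only fits one particular acquisition.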
Researchers from Wageningen University are cultivating various plants in a unique growing facility called NPEC. In each of the NPEC chambers, plants experience identical conditions in terms of light, water, and nutrients. Positioned above the platform, a camera system captures images of each plant at specified intervals over several weeks, enabling comprehensive monitoring of their growth and development. The system records RGB data as well as data from fluorescence, thermal, and hyperspectral cameras. However, the original analysis averages measurements over both older and younger leaves of each plant. To gain a deeper understanding of leaf physiology and development under varying light conditions, a quantitative analysis of individual leaves is necessary. Thus, the objective of this project is to develop an AI model capable of analyzing each leaf throughout its developmental stages.
To crop out individual plants, we used PlantCV – a popular library for plant image processing. We then segmented the individual leaves of the central plant using an instance segmentation deep-learning network, and finally tracked the resulting segmentations over time using existing tracking packages written in Python.
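The tracking step can be illustrated with a simple centroid-based matcher that links each leaf in one time point to the nearest leaf in the next. The packages we used implement more robust strategies; this numpy sketch, with hypothetical helper names, only conveys the idea.

```python
import numpy as np

def centroids(label_image):
    """Centroid (row, col) of each labelled leaf in an instance segmentation."""
    return {int(l): np.argwhere(label_image == l).mean(axis=0)
            for l in np.unique(label_image) if l != 0}

def match_frames(prev_labels, next_labels, max_dist=20.0):
    """Greedy nearest-centroid matching between consecutive time points.

    Returns a dict mapping each leaf label in the previous frame to its
    match in the next frame, skipping leaves that moved farther than
    max_dist pixels (e.g. a newly emerged leaf has no match).
    """
    prev_c, next_c = centroids(prev_labels), centroids(next_labels)
    matches = {}
    for lp, cp in prev_c.items():
        # candidates not already claimed by an earlier leaf
        dists = {ln: np.linalg.norm(cp - cn) for ln, cn in next_c.items()
                 if ln not in matches.values()}
        if dists:
            ln = min(dists, key=dists.get)
            if dists[ln] <= max_dist:
                matches[lp] = ln
    return matches
```

Chaining these per-frame matches across the whole time series yields one trajectory per leaf, along which size and spectral measurements can then be plotted over development.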