AI4Life

Categories
event report

Outcomes of the hackathon on web and cloud infrastructure for AI-powered bioimage analysis


by Caterina Fuster-Barceló

The AI4Life Hackathon on Web and Cloud Infrastructure for AI-Powered BioImage Analysis recently took place at SciLifeLab in Stockholm, Sweden. Organized by Wei Ouyang of KTH, Sweden, in partnership with AI4Life and Global BioImaging, the event aimed to bring together experts in the field to discuss and design advanced web/cloud infrastructure for bioimage analysis using AI tools. Participants from academia and industry worldwide attended, showcasing platforms like the BioImage Model Zoo, Fiji, ITK, APEER, KNIME, ImJoy, Piximi, Icy, and deepImageJ. Read more in this article written by the project partners in FocalPlane.

 

 

Categories
News

AI4Life teams up with the Galaxy Training Network to enhance training resources


by Caterina Fuster-Barceló

In an exciting collaboration, AI4Life has joined forces with the Galaxy Training Network (GTN) project to revolutionize the way researchers access training materials. The GTN, known for its dedication to promoting FAIR (Findable, Accessible, Interoperable, and Reusable) and Open Science practices globally, now incorporates AI4Life to expand its training offerings.

Through this collaboration, BioImage Model Zoo (BMZ) and AI4Life trainers have developed videos and slides to introduce the community to the BMZ, demonstrate proper utilization, and guide contributions. This exciting development allows the BMZ to reach a wider audience within the research community and offers a simplified, visual approach to understanding and utilizing the BMZ.

This collaboration between the BMZ and GTN opens up new opportunities for researchers to access training materials and gain a better understanding of the BMZ’s capabilities. By making the process more accessible and intuitive, the BMZ aims to facilitate its adoption among researchers from diverse backgrounds.

The integration of the BMZ into the GTN project represents a significant advancement in training resources, empowering researchers worldwide and fostering collaboration within the scientific community. Stay tuned for upcoming training materials that will unlock the full potential of the BMZ for your research pursuits.

Categories
past events

AI4Life at the 5th NEUBIAS Conference


by Estibaliz Gómez-de-Mariscal

The 5th NEUBIAS Conference took place in Porto during the week of May 8th, 2023. It brought together experts in BioImage Analysis for the Defragmentation Training School and the Open Symposium. AI4Life actively participated in the event, contributing to both parts and covering topics ranging from zero-code Deep Learning tools to the BioImage Model Zoo, BiaPy, and Segment Anything for Microscopy, among others. Estibaliz Gómez-de-Mariscal has written a post in FocalPlane summarising the discussions and outcomes.

Categories
News

BioImage Model Zoo joins Image.sc forum as a Community Partner


by Caterina Fuster-Barceló

The BioImage Model Zoo (BMZ) has been incorporated as a Community Partner of the Image.sc forum, a discussion forum for scientific image software sponsored by the Center for Open Bioimage Analysis (COBA). The BMZ is a repository of pre-trained deep learning models for biological image analysis, and its integration into the Image.sc forum will provide a platform for the community to discuss and share knowledge on a wide range of topics related to image analysis.

The Image.sc forum aims to foster independent learning while embracing the diversity of the scientific imaging community. It provides a space for users to access a wide breadth of experts on various software related to image analysis, encourages open science and reproducible research, and facilitates discussions about elements of the software. All content on the forum is organized in non-hierarchical topics using tags, such as the “bioimageio” tag, making it easy for people interested in specific areas to find relevant discussions.

As a Community Partner, the BMZ joins other popular software tools such as CellProfiler, Fiji, ZeroCostDL4Mic, StarDist, ImJoy, and Cellpose, among others. The partnership means that the BMZ will use the Image.sc forum as a primary recommended discussion channel, and will appear in the top navigation bar with its logo and link.

The Image.sc forum has been cited in scientific publications, and users may reference it using the following citation:

Rueden, C.T., Ackerman, J., Arena, E.T., Eglinger, J., Cimini, B.A., Goodman, A., Carpenter, A.E. and Eliceiri, K.W. “Scientific Community Image Forum: A discussion forum for scientific image software.” PLoS biology 17, no. 6 (2019): e3000340. doi:10.1371/journal.pbio.3000340

The integration of the BMZ into the Image.sc forum will undoubtedly facilitate knowledge-sharing and collaborative efforts in the field of biological image analysis, benefiting researchers, developers, and users alike.

Categories
News

Outcomes of the First AI4Life Open Call


 

by Beatriz Serrano-Solano & Florian Jug

We are thrilled to announce that we received an impressive number of 72 applications to the first AI4Life Open Call!

The first AI4Life Open Call was launched in mid-February and closed on March 31st, 2023. It is the first in a series of three calls that will be launched over the course of the AI4Life project.

AI4Life involves partners with different areas of expertise, and it covers a range of topics, including marine biology, plant phenotyping, compound screening, and structural biology. Since the goal of AI4Life is to bridge the gap between life science and computational methods, we are delighted to see so much interest from different scientific fields seeking support to tackle scientific analysis problems with Deep Learning methods.

We noticed that the most prominent scientific field among applicants was cell biology, but we also received applications from neuroscience, developmental biology, plant ecology, agronomy, and marine biology, as well as from the medical and biomedical fields, such as cardiovascular research and oncology.

Why did scientists apply? What challenges are they facing?

Applications were classified based on the type of problem that needs to be addressed. We found that improving the applicant’s image analysis workflow was the most common request (67 applications out of 72). This was followed by improving image analysis data and/or data storage strategy, training data creation, and consultancy on available tools and solutions. Of course, we also welcomed more specific challenges that did not fall within any of those categories.

How have applicants addressed their analysis problem so far?

We were pleased to see that most of the applicants had already analyzed their data, but half of them were not fully satisfied with the outcomes. We interpreted this as an opportunity to improve existing workflows. The other half of the applicants were satisfied with their analysis results but longed for better automation of their workflows, to make them less cumbersome and time-consuming. Around 20% of all applicants had not yet started analyzing their data.

When asked about the tools applicants used to analyze their image data, Fiji and ImageJ were the most frequently used ones. Custom code in Python and MATLAB is also popular. Other frequently used tools included napari, Amira, QuPath, CellProfiler, ilastik, Imaris, Cellpose, and ZEN.

What kind of data and what format do applicants deal with?

The most common kind of image data is 2D images, followed by 3D images, multi-channel images, and time series.

Regarding data formats, TIFF was the most popular, followed by JPG, which is borderline alarming given the lossy nature of this format. AVI was the third most common format users seemed to be dealing with. CSV was, interestingly, the most common non-image data format.

Additionally, we asked about the relevance of metadata to the proposed project. 17 applicants did not reply, and 24 others did but do not consider metadata particularly relevant to the problem at hand. While this is likely true for their immediate problem, these responses show that the reusability and FAIRness of acquired and analyzed image data are not yet part of the default mindset of applicants to our Open Call.

How much project-relevant data do the applicants bring?

We found that most projects come with large amounts of data, the majority in the range between 100 and 1000 GB, followed by projects with less than 10 GB, between 10 and 100 GB, and less than 200 MB. 14 projects had more than 1 TB of data to offer.

Is ground truth readily available?

We also asked about the availability of labelled data and provided some guidelines regarding the kind and quality of such labels. We distinguished: (i) silver ground truth, i.e. results/labels good enough to be used for publication (but maybe fully or partly machine-generated), and (ii) gold ground truth, i.e. human-curated labels of high fidelity and quality.

The majority of applicants (40) had no labelled data or only very few examples. The rest had silver-level (8), a mix of silver- and gold-level (10), or gold-level (14) ground truth available. The de facto quality of the available label data is, at this point in time, not easy to assess, but in our experience, users who believe they have gold-level label data are not always right.

Is the available data openly shareable?

To train Deep Learning models, the computational experts will need access to the available image data. We found that only a small portion of applicants were not able to share their data at all. The rest are willing to share either all or at least some part of their data. When only parts of the data are shareable, the reasons were often related to data privacy issues or concerns about sharing unpublished data.

What’s next? How will we proceed?

We are currently conducting an eligibility check, after which the pool of reviewers will start looking at the projects in more detail. In particular, they will rank the projects based on the following criteria:

  • The proposed project is amenable to Deep Learning methods/approaches/tools.
  • Does the project have well-defined goals (and are those goals the correct ones)?
  • A complete solution to the proposed project will require additional classical routines to be developed.
  • The project, once completed, will be useful for a broader scientific user base.
  • The project will likely require the generation of significant amounts of training data.
  • This project likely boils down to finding and using the right (existing) tool.
  • Approaches/scripts/models developed to solve this project will likely be reusable for other, similar projects.
  • The project, once completed, will be interesting to computational researchers (e.g. within a public challenge).
  • The applicant(s) might have a problematic attitude about sharing their data.
  • Data looks as if the proposed project might be feasible (results good enough to make users happy).
  • Do you expect that we can (within reasonable effort) improve on the existing analysis pipeline?

The reviewers will also identify the parts of the project that can be improved, evaluate if deep learning can be of help and provide an estimation of the time needed to support the project.

We will keep you posted about all developments. Thanks for reading thus far! 🙂

Categories
News, past events

AI4Life at Focus on Microscopy


14 April 2023

by Estibaliz Gómez de Mariscal

AI4Life was present this year at the Focus on Microscopy (FOM) 2023 conference in Porto, Portugal. 

FOM is a yearly conference series presenting the latest innovations in optical microscopy and its applications to life sciences. 

This year, the BioImage Model Zoo was presented again in one of the two dedicated oral sessions for image analysis under the title “BioImage Model Zoo: Accessible AI models for microscopy image analysis in one-click”. We highlight two of the most exciting discussion topics around AI4Life: “We need more deep-learning model benchmarks tailored for direct applications in life sciences” and “How can I upload my work to the BioImage Model Zoo”.

Remarkably, this year there was, for the first time, a dedicated section on smart microscopy, where hybrid approaches using deep learning for adaptive optics and data-driven acquisitions were presented.

Acknowledgements
Categories
News

Icy joins as a Community Partner!


by Carlos Garcia-López-de-Haro

The Bioimage Analysis software Icy has officially joined the BioImage Model Zoo as a Community Partner! This means that the Icy software will soon be compatible with the Deep Learning (DL) models present in the BioImage.io repository.

Icy is a powerful, open-source software designed for bioimage analysis, with features including visualization, annotation, graphical programming, and more. Now, with the compatibility with BioImage Model Zoo, Icy will further enhance its capabilities by leveraging the power of Deep Learning to analyze complex biological images better.

Meanwhile, Icy users will be encouraged to upload new models and datasets to the BioImage.io website, improving collaboration and pushing the Bioimage Analysis field forward. The plugin to run Deep Learning models in Icy is in its final stage of development and will be released soon. The Icy team is also providing the backend of their plugin as an independent Java library that can easily run any Deep Learning model from the DL frameworks supported by the BioImage Model Zoo (TensorFlow 1, TensorFlow 2, PyTorch, and ONNX).

Categories
News

First videos in the AI4Life YouTube channel

 

The AI4Life YouTube channel is officially inaugurated! It currently features two training videos, with more to come in the future. These first two videos mark the beginning of a new training playlist, showing how to upload models to the BioImage Model Zoo and demonstrating the cross-compatibility of these models, which allows researchers to use them with different software tools and platforms.

We look forward to seeing more content on this channel in the future.

Categories
event report

Outcomes of the hackathon “Deep Learning in Java”


Milan, 6-10 February 2023

by Florian Jug

Global BioImaging and AI4Life organized a hackathon in Milan from February 6-10, 2023. The overarching goal of this event was to improve the accessibility of Deep Learning methods in Java-based image analysis tools and libraries. The event was held at the Human Technopole and was attended by a total of 21 participants from various parts of the world. 

The participants, representing tools such as bioimage.io, deepImageJ, Fiji, Icy, ImageJ, ImJoy, and QuPath, self-organized into topic groups on day one and then tackled various challenges to bridge the gap between typically Python-based deep learning methods and Java-based (i.e. ImgLib2-based) image processing.

These topic groups made significant progress on different fronts over the 5-day event. A more in-depth report will soon be made available as a BioHackrXiv preprint. Among the highlights were the integration of a library by Carlos Garcia and colleagues (model-runner-java) into deepImageJ (and therefore into Fiji), and the adoption by several other participants of this new way of running deep learning models on images opened in ImgLib2 containers (e.g. directly from Fiji). This was even pushed to extremes by executing models live from within BigDataViewer, e.g. enabling lazy prediction on terabyte-sized datasets.
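The lazy-prediction idea can be illustrated with a short, language-agnostic sketch (shown here in Python for brevity; `read_tile` and `predict_tile` are hypothetical stand-ins, not part of the actual Java implementation): predictions are computed tile by tile, and only when a consumer such as a viewer actually requests them.

```python
def iter_tiles(shape, tile):
    """Yield (row, column) slice pairs covering a 2D image of the given shape."""
    for y in range(0, shape[0], tile):
        for x in range(0, shape[1], tile):
            yield (slice(y, min(y + tile, shape[0])),
                   slice(x, min(x + tile, shape[1])))

def lazy_predict(read_tile, predict_tile, shape, tile=256):
    """Generator of (slices, prediction): each tile is read and predicted
    on demand, so a huge image is never loaded into memory at once."""
    for sl in iter_tiles(shape, tile):
        yield sl, predict_tile(read_tile(sl))
```

Because `lazy_predict` is a generator, opening a terabyte-sized dataset costs nothing upfront; computation happens only for the tiles a viewer actually displays.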

Additionally, another topic group explored alternative ways to use the model-runner-java library by directly sharing memory between native Python processes and running Java VMs. Similar solutions exist (see for example imglyb or PyImageJ), but the newly explored idea is no longer based on sub-processes and instead relies on inter-process communication. The big advantage of this approach is that parallel processes can be started independently, hook into each other on demand using shared memory, work together, but die alone.
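The shared-memory idea itself can be sketched with Python’s standard library (both sides are shown in Python here purely for illustration; in the hackathon setting one side would be a Java VM, and the block name `demo_pixels` is made up):

```python
from multiprocessing import shared_memory

# A producer (e.g. a native Python process) creates a named shared-memory
# block and writes raw pixel bytes into it...
producer = shared_memory.SharedMemory(create=True, size=16, name="demo_pixels")
producer.buf[:4] = bytes([1, 2, 3, 4])

# ...and an independently started consumer (in the real setup, a Java VM)
# attaches to the same block by name and reads the pixels directly,
# without copying them through a pipe or socket.
consumer = shared_memory.SharedMemory(name="demo_pixels")
pixels = bytes(consumer.buf[:4])

# Each side closes its own handle independently; only the creator unlinks
# the block — "work together but die alone".
consumer.close()
producer.close()
producer.unlink()
```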

All participants are now continuing to flesh out the work that was started during the event, and releases of updated versions of deepImageJ, as well as Fiji- and Icy-based deep learning integrations, are on their way. These updates will benefit hundreds of users worldwide.

 
Acknowledgements