Trainable Image Analysis for High Content Analysis


Zeta facilitates end user development through examples

High content analysis requires user-friendly and flexible software. Users demand intuitive access to the output of increasingly sophisticated automated instruments. Research in user-oriented software systems has enabled Fraunhofer FIT to develop a new generation of simple yet powerful software technologies.

The variety of high content cellular assays is increasing quickly. Applications are moving towards dynamic imaging, complex distribution patterns, and mixed and primary cell lines. Automatic image analysis is a critical bottleneck in assay development. Currently, there are two major approaches to image analysis:

  • Tailored solution packs validated for specific applications.
  • Programmable or configurable toolkits.

For assay development, solution packs always carry the risk that they are not suitable for a new assay. Creating a new solution pack requires software specialists and is expensive. Programmable toolkits are more flexible, but even an experienced developer needs several iterations to arrive at the desired solution.

Trainable Image Analysis saves time and reduces development risk

A radically new approach developed at Fraunhofer FIT lets the user define examples of the desired results on a few sample images. The system configures itself accordingly to recognize relevant structures in further samples. The results can quickly be assessed and validated.

This approach is as flexible and powerful as a toolkit, but as easy to apply as a solution pack. Ultimately, it saves time and reduces development risk, as the image analysis for a new assay can be developed and validated in a few hours. This approach thus removes the image analysis bottleneck.

Of course, the power of this approach has its limits. But even if the image analysis package should be missing a certain discriminative feature, that limitation will immediately become clear. Manual configuration or programming on the other hand might incur a risk of failure even after considerable investments of time and effort.

How it works

The Zeta image analysis package lets the user specify positive and negative examples (e.g. of cells) in a few sample images and automatically learns optimal decision and quantification procedures that can then be applied automatically. Its discriminative power comes from the underlying toolkit, which contains the standard recognition functions of other toolkits as well as advanced capabilities to distinguish structures by their texture-like visual qualities. The Zeta image analysis package provides:

  • Reproducible large-scale analysis of cell images
  • User-trained recognition for specific conditions by giving positive and negative examples
  • Wide (extensible) selection of features to classify objects and to quantify effects
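Zeta's internal learning algorithms are not published here. As a minimal sketch of what example-based training can look like, the following illustrates a nearest-centroid classifier trained on user-marked positive and negative pixels; all function names and the one-dimensional intensity feature are illustrative assumptions, not Zeta's actual API:

```python
import numpy as np

def train_centroids(features, labels):
    """Learn one feature centroid per class from user-marked examples.

    features : (n_samples, n_features) array of per-pixel descriptors
    labels   : (n_samples,) array of class ids (e.g. 1 = cell, 0 = background)
    """
    classes = np.unique(labels)
    centroids = np.stack([features[labels == c].mean(axis=0) for c in classes])
    return classes, centroids

def classify(features, classes, centroids):
    """Assign each sample to the class of the nearest learned centroid."""
    d = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
    return classes[np.argmin(d, axis=1)]

# toy example: a single intensity feature, two marked classes
train_f = np.array([[0.1], [0.2], [0.8], [0.9]])
train_y = np.array([0, 0, 1, 1])
cls, cen = train_centroids(train_f, train_y)
pred = classify(np.array([[0.15], [0.85]]), cls, cen)  # -> [0, 1]
```

In practice a system of this kind would use a richer feature vector per pixel (intensity statistics, texture descriptors) rather than raw intensity, but the training loop is the same: marked examples in, decision procedure out.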

A few example applications demonstrate the ease of use and the wide range of applications of this approach.

Example applications

Automatic determination of the degree of confluence based on a phase contrast image. The cell area is marked in red by the system, the background in blue. The red and blue lines indicate the regions encircled by the user as examples.

Cell counting in suspension.

Identification of lipid droplets within a cell population.

Degree of confluence

This application is used to monitor cell cultures.

To train targeted recognition modules, a biologist uses the mouse to mark examples of the two region types that can be distinguished in the image data: covered and cell-free regions. Zeta learns from these examples and can then discriminate further image data automatically. In general, an arbitrary number of different region types can be trained.

For validation, a set of images can be immediately analyzed and quantitatively compared with manual analysis results or external standards. If necessary, mismatches can be selected to improve the recognition rate. Provided that the images are available, the application can be developed and validated in a few hours.
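How the confluence readout follows from the two marked region types can be sketched as follows; the threshold-learning rule and function names are illustrative assumptions, not a description of Zeta's internals:

```python
import numpy as np

def learn_threshold(cell_pixels, background_pixels):
    """Place a decision threshold midway between the two example means."""
    return (np.mean(cell_pixels) + np.mean(background_pixels)) / 2.0

def degree_of_confluence(image, threshold, cells_are_bright=True):
    """Fraction of the image classified as cell-covered."""
    covered = image > threshold if cells_are_bright else image < threshold
    return covered.mean()

# toy image: left half "cells" (bright), right half background
img = np.hstack([np.full((4, 4), 0.8), np.full((4, 4), 0.2)])
t = learn_threshold(cell_pixels=img[:, :4].ravel(),
                    background_pixels=img[:, 4:].ravel())
conf = degree_of_confluence(img, t)  # 0.5 -> 50 % confluent
```

The validation step described above then amounts to running `degree_of_confluence` over a test set and comparing the values against manual counts or external standards.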

Cell counting

The Zeta system is first trained by the biologist marking examples of individual cells via mouse clicks. After this training, the system recognizes the structure and locates the corresponding cells automatically.

Similar to the detection of regions, cell detection can operate on all kinds of images and automatically uses the characteristics of the image to determine the appropriate recognition method.
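Once cells have been segmented into a binary mask, counting them reduces to counting connected components. A minimal pure-NumPy sketch (the flood-fill labeling here is a generic technique, not Zeta's method) might look like:

```python
import numpy as np
from collections import deque

def count_cells(mask):
    """Count 4-connected components in a binary mask (one per cell)."""
    seen = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    count = 0
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not seen[y, x]:
                count += 1                      # new cell found
                q = deque([(y, x)])
                seen[y, x] = True
                while q:                        # flood-fill its pixels
                    cy, cx = q.popleft()
                    for ny, nx in ((cy-1, cx), (cy+1, cx),
                                   (cy, cx-1), (cy, cx+1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            q.append((ny, nx))
    return count

mask = np.array([[1, 1, 0, 0],
                 [1, 0, 0, 1],
                 [0, 0, 0, 1],
                 [0, 1, 0, 0]], dtype=bool)
n = count_cells(mask)  # three separate blobs -> 3
```

Real counting on suspension images would additionally need touching-cell separation (e.g. watershed splitting), which is beyond this sketch.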

Classification of cell types

Zeta can furthermore discriminate different cell types. Again, the user chooses a number of examples for each cell type to be distinguished.
In this application, detached cells are distinguished from adherent cells, and the resulting ratio can then be used as the assay readout.

Spot detection

Even substructures of cells can be detected. Again, different classes can be distinguished and quantified.

Primary cells

The system can also be used for characterization of primary cells, in this example for the rough analysis of size and shape.

Zeta can identify rough contours of recognized objects. Subsequently, shape-based and quantitative features can be used.

Mitotic index

A further application of cell classification is to determine the ratio of mitotic cells within the whole population. Such an assay can analyze effects such as mitotic arrest. The Zeta system offers retraining functions that take misclassifications from the first training phase as input for improved detection.
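The retraining idea can be illustrated with the nearest-centroid scheme from earlier: corrected misclassifications are folded back into the training set and the classifier is refit. All names and the two-class layout are illustrative assumptions:

```python
import numpy as np

def centroid_fit(feats, labels):
    """One centroid per class (0 = interphase, 1 = mitotic, by convention)."""
    return np.stack([feats[labels == c].mean(axis=0) for c in (0, 1)])

def centroid_predict(feats, centroids):
    d = np.linalg.norm(feats[:, None, :] - centroids[None, :, :], axis=2)
    return np.argmin(d, axis=1)

def retrain(feats, labels, extra_feats, extra_labels):
    """Fold corrected misclassifications back into the training set, refit."""
    return centroid_fit(np.vstack([feats, extra_feats]),
                        np.concatenate([labels, extra_labels]))

feats = np.array([[0.2], [0.3], [0.8], [0.9]])
labels = np.array([0, 0, 1, 1])
cen = centroid_fit(feats, labels)
# a mitotic cell misclassified in round one is added as a corrected example
cen2 = retrain(feats, labels, np.array([[0.6]]), np.array([1]))
pred = centroid_predict(np.array([[0.25], [0.85]]), cen2)
mitotic_index = pred.mean()  # 1 of 2 cells mitotic -> 0.5
```

The mitotic index itself is simply the fraction of detected cells classified as mitotic, so it falls out of the per-cell predictions for free.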

Calcium imaging

The Zeta system can also analyze dynamic images. In this case, the calcium response of each cell is separately quantified and available as a result.
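Per-cell quantification of a dynamic signal is commonly expressed as a ΔF/F0 time course. A minimal sketch under the assumption that cells have already been segmented into masks (the function name and baseline convention are illustrative, not Zeta's API):

```python
import numpy as np

def dff_traces(stack, cell_masks, baseline_frames=3):
    """Per-cell dF/F0 time courses from an image time series.

    stack      : (t, h, w) fluorescence images
    cell_masks : list of (h, w) boolean masks, one per detected cell
    """
    traces = []
    for m in cell_masks:
        f = stack[:, m].mean(axis=1)      # mean intensity over the cell
        f0 = f[:baseline_frames].mean()   # pre-stimulus baseline
        traces.append((f - f0) / f0)
    return np.array(traces)

# toy stack: one cell whose intensity doubles after frame 3
stack = np.ones((6, 4, 4))
stack[3:, :2, :2] = 2.0
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :2] = True
trace = dff_traces(stack, [mask])[0]  # 0.0 before, 1.0 after the stimulus
```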

Translocation assays

Of course, Zeta can quantify fluorescence in translocation assays. It can distinguish different fluorescence patterns based on their visual appearance and thus characterize translocation in more detail.
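A standard translocation readout is the nuclear-to-cytoplasmic fluorescence ratio; the sketch below assumes nucleus and cytoplasm masks are already available from segmentation, and the function name is an illustrative assumption:

```python
import numpy as np

def translocation_ratio(image, nucleus_mask, cytoplasm_mask):
    """Nuclear-to-cytoplasmic mean fluorescence ratio per cell."""
    return image[nucleus_mask].mean() / image[cytoplasm_mask].mean()

# toy cell: nuclear fluorescence three times the cytoplasmic level
img = np.full((4, 4), 1.0)
nuc = np.zeros((4, 4), dtype=bool)
nuc[1:3, 1:3] = True
cyt = ~nuc
img[nuc] = 3.0
r = translocation_ratio(img, nuc, cyt)  # -> 3.0
```

Distinguishing fluorescence patterns by appearance, as the text describes, would build on the same example-based classification used for cell types, applied to per-cell texture features.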

Availability and Services

The Zeta system can be made available in two ways:

  • Application package on demand
  • Full environment

The full environment contains all the toolkit functionality, the training environment and the validation environment.

Application packages on demand are an economical way of leveraging the power of Zeta for specific applications. The user provides a set of validation images, from which we generate a tailored application package using the Zeta training facilities. The user can then assess the quality of the results and accept or reject the package. If accepted, a fixed processing tool will be delivered with interfaces as needed by the client.

Additional services are available for data management interfaces and solutions, for visualization and visual data mining, and for integrated solutions and process support.