Recent advances in imaging equipment and probes have enabled the acquisition of huge numbers of bioimages. One researcher can easily inspect a few hundred images; however, when the number of images increases to thousands, millions, or billions, no individual can process them alone. Cell imaging techniques in particular yield large-scale, systematic image data characterizing gene and protein localizations. In these situations, computer assistance becomes increasingly important for image inspection. To develop and evaluate computer programs for image inspection, a benchmark image dataset is crucial. For this purpose, the benchmark dataset should not be a miscellaneous image collection but a set of systematically captured images designed with a precise aim. For example, the MitoCheck consortium focused on identifying human genes involved in mitosis progression and collected two days of time-lapse images of HeLa cells expressing histone-GFP, using a chemically synthesized short interfering RNA (siRNA) system to knock down 21,000 protein-coding genes. This time-lapse image dataset is freely available (http://www.mitocheck.org/). In addition, Dr. Murphy’s laboratory at Carnegie Mellon University acquired HeLa cell images with fluorescently labeled cell nuclei, nucleoli, endoplasmic reticulum (ER), Golgi bodies, lysosomes, mitochondria, plasma membrane, endosomes, actin microfilaments, and microtubules, and proposed organelle recognition algorithms for the HeLa image dataset. Benchmark images of 80–91 HeLa cells for each of the 10 markers are freely available on Dr. Murphy’s laboratory website (http://murphylab.web.cmu.edu/data/2Dhela_images.html). Public release of such image datasets will contribute to the future development of computational image analytics.
In the plant sciences, a few fluorescence microscopy image databases have focused on intracellular structures, such as Plant Cell Imaging (http://deepgreen.stanford.edu/), the Illuminated Plant Cell (http://www.illuminatedcell.com), and the Plant Organelles Database (http://podb.nibb.ac.jp/Organellome/). These databases provide beautiful pictures of plant cell structures and cellular dynamics and have potential value as resources for model analyses. However, they contain images of many different plant cell types, which complicates systematic analysis.
Guard cells of plant stomata have become a model system for characterizing signal transduction mechanisms from environmental perception to turgor-driven movement. Previous biological studies of guard cells have shown that intracellular structures are required for normal stomatal movement [9–11]. To comprehensively visualize guard cell intracellular dynamics during stomatal movement, we previously released microscopic images of 50–60 pairs of Arabidopsis guard cells fluorescently labeled with 18 kinds of organelle markers in the Live Images of Plant Stomata (LIPS) database. However, with the first version of the database, it was inconvenient to visualize intracellular three-dimensional configurations or to mine biologically meaningful information, such as the relationship between intracellular configuration and stomatal aperture.
We have updated the LIPS database with additional datasets. The original serial optical sections remain available as LIPS dataset I, and volume-rendering and aligned-image datasets are newly released as LIPS datasets II and III, respectively. In addition, a new database table, named LIPService, was established to allow easy inspection of the relationship between intracellular configuration and cell status/morphology. In this article, we describe the updated content and utility of LIPService. This database will serve as an image data mining tool, a web-based educational resource, and a benchmark dataset for intracellular structures in plant guard cells.