A novel mesh processing based technique for 3D plant analysis
- Anthony Paproki^{1},
- Xavier Sirault^{2},
- Scott Berry^{2},
- Robert Furbank^{2} and
- Jurgen Fripp^{1}
DOI: 10.1186/1471-2229-12-63
© Paproki et al.; licensee BioMed Central Ltd. 2012
Received: 30 January 2012
Accepted: 3 May 2012
Published: 3 May 2012
Abstract
Background
In recent years, imaging-based, automated, non-invasive, and non-destructive high-throughput plant phenotyping platforms have become popular tools for plant biology, underpinning the field of plant phenomics. Such platforms acquire and record large amounts of raw data that must be accurately and robustly calibrated, reconstructed, and analysed, requiring the development of sophisticated image understanding and quantification algorithms. The raw data can be processed in different ways, and the past few years have seen the emergence of two main approaches: 2D image processing and 3D mesh processing algorithms. Direct image quantification methods (usually 2D) dominate the current literature due to their comparative simplicity. However, 3D mesh analysis offers tremendous potential for accurately estimating specific morphological features cross-sectionally and monitoring them over time.
Results
In this paper, we present a novel 3D mesh-based technique developed for temporal high-throughput plant phenomics and perform initial tests on the analysis of Gossypium hirsutum vegetative growth. Based on plant meshes previously reconstructed from multi-view images, the methodology involves several stages, including morphological mesh segmentation, phenotypic parameter estimation, and tracking of plant organs over time. This initial study focuses on presenting and validating the accuracy of the methodology on dicotyledons such as cotton, but we believe the approach will be more broadly applicable. The study involved applying our technique to a set of six Gossypium hirsutum (cotton) plants studied over four time-points. Manual measurements, performed for each plant at every time-point, were used to assess the accuracy of our pipeline and quantify the error in the estimated morphological parameters.
Conclusion
By directly comparing our automated mesh-based quantitative data with manual measurements of individual stem height, leaf width, and leaf length, we obtained mean absolute errors of 9.34%, 5.75%, and 8.78%, and correlation coefficients of 0.88, 0.96, and 0.95, respectively. The temporal matching of leaves was accurate in 95% of the cases, and the average execution time required to analyse a plant over four time-points was 4.9 minutes. The mesh processing based methodology is thus considered suitable for quantitative 4D monitoring of plant phenotypic features.
Background
In the coming decades, it is expected that mankind will need to double the quantity of food and biofuel produced in order to meet global demand [1]. To achieve this with existing resources, new plant characteristics need to be identified, quantified, and bred to obtain more productive plant varieties within existing environments. This will require a greater understanding of how the genetic make-up of plants determines their phenotype (visible traits) in high resolution and in high throughput. Performing plant phenomics involves screening large germplasm collections to facilitate the discovery of new interesting traits (forward phenomics), and analysing known phenotypic data in order to uncover the genes involved in their evolution and use these genes in plant breeding (reverse phenomics) [1]. Investigated plants are usually grown in thoroughly controlled conditions (growth chambers or glasshouses) and subjected to different environmental conditions and stresses (e.g. drought, salt, heat, etc.) with the primary aim of monitoring their phenotypic response using various measurements [2, 3].
Common plant morphological traits of interest include parameters such as main stem height, size and inclination, petiole length and initiation angle, and leaf width, length, inclination, thickness, area, and biomass [1–4]. The usual procedure to collect these data consists of many laborious manual measurements, often requiring destructive harvests and thus multiple replicates of individual plant genotypes or varieties to allow successive harvests over time. A typical manual phenotypic analysis of 200 plants (daily objective) would require approximately 100 man-hours of work (≃ 30 minutes per plant depending on the size and complexity), which is impractical. In light of the importance of gene discovery and agricultural crop improvement, the development of solutions to automate such a tedious task is imperative.
High-throughput plant phenotyping aims to extend the standard approach by growing, measuring, and analysing thousands of plants over time [5]. In recent years, plant phenotyping research has seen the emergence of high-throughput plant screening facilities [1, 6]; however, few image and mesh processing solutions are available to analyse the large amount of data captured and extract yield determinants (i.e. plant, leaf, or root characteristics). Among existing solutions, PHENOPSIS [7] and GROWSCREEN [8, 9] provide 2D image-processing based semi-automated solutions for leaf phenotyping (leaf width, length, area, and perimeter) and root data monitoring (number of roots, root area, and growth rate). LAMINA [10], another 2D image-based tool for probing leaf shape and size, provides leaf analysis for various plant species. Recent image-processing solutions, such as TraitMill [11] and HTPheno [12], provide a more general plant analysis, measuring information such as plant height, width, centre of gravity, projected area, and bio-volume, and providing colorimetric analysis (e.g. greenness differences between plants). Due to the importance of rice as a primary food resource, image-based solutions for rice phenotyping have been developed [6, 13] that involve the measurement of parameters such as grain size (length, width, and thickness), panicle length, and number of tillers. In the past two years, fully automated imaging techniques for the high-throughput investigation of plant root characteristics (yield determinants) have been developed [14–16] to analyse non-destructively phenotypic traits such as root average radius, area, maximum horizontal width, and length distribution.
The latest applications have introduced a third dimension to plant analysis. Stereo-imaging and mesh processing based systems, such as GROWSCREEN 3D [17], the 3D imaging and RootReader3D software platform [18], or the solution proposed in [19], have pioneered the explicit 3D analysis of leaves and roots, allowing more accurate measurements of leaf area and the extraction of additional volumetric data.
To date, the literature is distinctly dominated by 2D image-processing techniques for high-throughput phenotyping of plants [6–16]. The major limitation of these 2D solutions is the loss of crucial spatial and volumetric information (e.g. thickness, bending, rolling, orientation) when transposing available data from 3D to 2D. The recent introduction of new tools for plant analysis based on explicit 3D reconstructions [17–19] (as opposed to inferred 3D based analysis [20, 21], widely used since the 1960s) promises to increase the potential of high-throughput studies in terms of accuracy and exhaustiveness of the measured features, but available three-dimensional solutions are currently focussed on a specific organ (e.g. leaves [17, 19] or roots [18]), tailored to a particular image acquisition system [22], and tend to be qualitative (or applied) rather than providing quantitative information and estimates of accuracy. Hence, a clear need exists for a more generalised plant analysis based on increasingly explicit 3D models, in which the reliability of the measurements is questioned and quantitatively assessed.
In this paper, we present a novel mesh-based technique developed for the high-throughput 3D analysis of plant aerial parts. A focus is made on the feasibility of accurately extracting plant phenotypic parameters from a 3D mesh acquired for the dicotyledonous crop cotton. In this initial study, meshes were reconstructed using a low-cost commercial 3D reconstruction system [23]. The proposed methodology aims at a non-exhaustive, accurate, cross-sectional (observation of a representative subset of a population at a fixed time-point), and temporal investigation of the plant macroscopic phenotype. This requires advanced features such as plant mesh morphological segmentation [24, 25], accurate plant data extraction [26], and plant organ tracking over time. The mesh-based methodology was tested on plant meshes reconstructed [23, 27] for a set of six plants studied at four time-points (i.e. 6×4 = 24 plant meshes).
Methods
Plant material
The prototype study involved acquiring and processing images for an initial set of six Gossypium hirsutum plants studied over four time-points. Manual measurements, performed by X.S. and S.B. for each plant and each time-point, were used to validate the accuracy and quantify the error of the mesh-based phenotypic data estimation. The first three time-points involved measuring invasively (but non-destructively) parameters such as main stem height, leaf width, and leaf length using a measuring tape. For the last time-point, measurements were collected after destructive harvest in order to optimise their precision. The petioles and leaves were cut from the main stem, laid flat on a table, and carefully measured. Overall, a set of 384 measurements was manually collected (24 main stem height measurements, 180 leaf width measurements, and 180 leaf length measurements).
Plant data acquisition and mesh reconstruction
A manual data capture process similar to that described in [12] was used to collect multiple plant images from different viewing angles using a high-resolution Pentax K10 SLR camera with a Sigma 20-40mm aspherical lens. Each cotton plant pot was placed at the centre of a rotating tray. The camera was fixed on a tripod during the entire acquisition process. The rotating tray was manually turned and pictures were taken at each rotation angle (every $\frac{360}{64} = 5.625$ degrees). Once the acquisition process was completed, 64 images were available (per plant and time-point) for the multi-view 3D reconstruction. An example of an acquired plant image is shown in Figure 1; the image resolution was 3872×2592 pixels (≃ 10 megapixels).
Plant 3D models (meshes) were created from the high-resolution images using 3DSOM, a commercial 3D digitisation software package [23]. The number of polygons constituting the reconstructed meshes fluctuated between 120,000 and 270,000.
The acquisition and mesh generation are not the primary focus of the current paper; however, we acknowledge the “semi-automated” steps involved. An automated image acquisition platform [28] and a mesh reconstruction algorithm (based on [29–31]) are under development and will allow full automation for future experiments.
Automated plant mesh segmentation
The identification of different plant organs is a critical stage in performing mesh-based plant phenotyping and has proven problematic with 2D based image analysis solutions [1]. To complete this task, we developed an advanced mesh segmentation algorithm that partitions the plant mesh into morphological regions.
Mesh segmentation algorithms involve assigning a unique value (called a label) to all the points of the mesh (called vertices) that belong to the same region. A surface mesh consists of triangles that link the vertices together through their edges. Two vertices are said to be topologically connected (neighbours) if they share a triangle edge. Finally, each vertex has a normal vector equal to the average of the normal vectors of the neighbouring triangles.
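The mesh structure described above can be sketched in a few lines. The following is a minimal illustration (with hypothetical function names, not the paper's implementation) of shared-edge neighbourhoods and vertex normals computed as the average of adjacent triangle normals:

```python
import math

def cross(a, b):
    return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])

def sub(a, b):
    return (a[0]-b[0], a[1]-b[1], a[2]-b[2])

def normalise(v):
    n = math.sqrt(sum(c*c for c in v))
    return tuple(c/n for c in v) if n else v

def triangle_normal(verts, tri):
    p0, p1, p2 = (verts[i] for i in tri)
    return normalise(cross(sub(p1, p0), sub(p2, p0)))

def vertex_normals(verts, tris):
    """Vertex normal = average of the normals of the incident triangles."""
    acc = {i: (0.0, 0.0, 0.0) for i in range(len(verts))}
    for tri in tris:
        n = triangle_normal(verts, tri)
        for i in tri:
            acc[i] = tuple(a + b for a, b in zip(acc[i], n))
    return {i: normalise(v) for i, v in acc.items()}

def neighbours(tris):
    """Two vertices are topological neighbours if they share a triangle edge."""
    adj = {}
    for a, b, c in tris:
        for u, v in ((a, b), (b, c), (c, a)):
            adj.setdefault(u, set()).add(v)
            adj.setdefault(v, set()).add(u)
    return adj
```

This adjacency structure is the basis for the region-growing and clustering passes described in the following sections.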
Due to the complex and irregular morphology of plants, no generic mesh segmentation algorithm [24, 25] is accurate and robust enough to identify the different plant parts (main stem, petioles, leaves). This paper introduces a “hybrid” segmentation pipeline that overcomes the morphological shape differences between cotton plants and various reconstruction inconsistencies due to occlusions (the most common being missing petioles, as they are occluded by the leaves in the images used by the reconstruction scheme). Our automated segmentation pipeline, illustrated in Figure 2, consists of four successive steps: coarse segmentation, stem segmentation, petiole segmentation, and leaf segmentation. All the operations described in the following paragraphs are fully automatic and do not require any manual input [28].
Step 1: Coarse segmentation
The purpose of this first step is to partition the plant into n+1 coarse regions (with n = number of leaves): one for the main stem (region M) and n for the pairs of petioles and leaves (regions N_{i}, i = 1,…,n). This is performed by a region-growing algorithm [24, 32]. Region-growing algorithms start from a seed point (automatically selected based on prior criteria defined by the application) and gradually grow a region from neighbour to neighbour until a given criterion is met. Since the criteria to stop the growth of a region are user-defined, this generic approach is particularly convenient for coarse segmentations but often shows limitations when seeking accurate region delineation.
The scheme starts by defining a coarse region M as the main stem: a curve c_{p} is fitted to the main stem from one extremity to the other, and all the vertices within a given planar radius of c_{p} are assigned to the region M. The remaining vertices are classified into the n regions N_{i} using a region-growing algorithm. The algorithm finds the first vertex that is not yet part of any region (at the start only one region is defined: M), uses it as a seed point, and recursively grows a new region to all the eligible topological neighbours (creating a second region N_{1}). A neighbour is eligible if it does not belong to M or to any of the regions N_{i} already created. The region stops growing when no eligible neighbour remains, i.e. all neighbours are labelled. The algorithm iterates through all the vertices of the mesh and grows a new region N_{i} each time it finds a vertex that does not belong to any of the regions M or N_{i} already created (a new seed point). This scheme is robust to reconstruction issues such as holes in the mesh or detached mesh pieces, as a vertex does not need to be connected to the main mesh to become a seed. A typical result of this pass is shown in Figure 2.1.
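The region-growing pass above can be sketched as follows. This is a minimal illustration under stated assumptions (the stem region M is already extracted, and the adjacency map comes from shared-edge connectivity); the function name and signature are hypothetical:

```python
from collections import deque

def grow_regions(n_vertices, adjacency, stem_vertices):
    """Label vertices: 0 for the stem region M, 1..n for the regions N_i."""
    labels = {v: 0 for v in stem_vertices}   # region M is pre-assigned
    next_label = 1
    for seed in range(n_vertices):
        if seed in labels:                    # already part of some region
            continue
        labels[seed] = next_label
        queue = deque([seed])
        while queue:                          # breadth-first growth
            v = queue.popleft()
            for w in adjacency.get(v, ()):
                if w not in labels:           # eligible: not in M or any N_i
                    labels[w] = next_label
                    queue.append(w)
        next_label += 1                       # next unlabelled vertex seeds N_{i+1}
    return labels
```

Note how a detached vertex (one with no neighbours) still seeds its own region, which is what makes the pass robust to holes and disconnected mesh pieces.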
Step 2: Main stem segmentation
The second segmentation step is based on a primitive fitting segmentation approach [33, 34] that aims at refining the rough stem segmentation and partitioning the stem into different internodes (using the previously extracted region M). Primitive fitting algorithms involve finding a given shape (chosen based on the mesh structure) in a complex mesh and considering that all the vertices within the registered shape belong to the same region. In this work, the tubular shape fitting algorithm involves finding the tube parameters that minimise the point to surface distance to the region M.
Step 3: Petiole segmentation
The petioles are also segmented (and separated) from the leaves in each region N_{i} (created in the first step of the segmentation) using tubular fitting. For each petiole and associated leaf in a region N_{i}, we interpolate a curve along the petiole (using the local centre of mass of the vertices) and build a tube around it (see the red tube in Figure 4.a/b). The tube follows the petiole and extends to the apex of the leaf. If we define B_{i} as the vertex outside the tube which is the closest to the main stem (leaf stalk), then all the vertices inside the tube which are closer to the main stem than B_{i} belong to the petiole region P_{i} (see Figure 4.b). The other vertices naturally belong to the leaf region L_{i}. In the case of a missing petiole (detected by the absence of topological connectivity to the main stem), this step is skipped and the region is processed using Step 4. Figure 2.3 illustrates a typical plant mesh after the petiole segmentation.
Step 4: Leaf segmentation
For the sagittal segmentation, a 2D symmetry-based algorithm was found to be the most robust, accurate, and computationally efficient. The split is obtained by projecting the vertices of the leaf onto the plane having the main stem axis as normal, and comparing the sign of the angle between the vector going from the main stem (c_{p}) to the apex and the vector going from c_{p} to the considered vertex. The region to which a vertex belongs depends on the sign of the angle (α_{1} and α_{2} in Figure 4.c). An illustration of this process is provided in Figure 4.c.
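After projection onto the horizontal plane, the sign of the angle between the two vectors reduces to the sign of a 2D cross product. A minimal sketch (hypothetical function name, assuming a vertical stem axis so the projection simply drops the z coordinate):

```python
def sagittal_side(stem_xy, apex_xy, vertex_xy):
    """Return +1 or -1 depending on which side of the stem-to-apex axis
    the projected vertex lies (sign of the 2D cross product)."""
    ax, ay = apex_xy[0] - stem_xy[0], apex_xy[1] - stem_xy[1]
    vx, vy = vertex_xy[0] - stem_xy[0], vertex_xy[1] - stem_xy[1]
    return 1 if ax * vy - ay * vx >= 0 else -1
```

Vertices with the same sign are assigned to the same half of the leaf.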
The transversal segmentation involves using a normal clustering algorithm that starts by computing $\vec{V}_{n_i}$, the average normal vector of the whole leaf L_{i} ($\vec{V}_{n_i}$ will naturally point away from the adaxial or abaxial surface). A first pass sorts the leaf vertices into two regions (adaxial or abaxial) depending on whether the angle formed by their normal vector and $\vec{V}_{n_i}$ is higher or lower than $\frac{\pi}{2}$. We then recompute $\vec{V}_{n_i}$ using only the normal vectors of the vertices belonging to one of the two created regions, and repeat the sorting process using the updated $\vec{V}_{n_i}$. This scheme is repeated until natural convergence occurs (i.e. $\vec{V}_{n_i}$ does not change between two iterations).
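The iterative normal-clustering pass can be sketched as below (a simplified stand-in, not the paper's code): the mean normal is recomputed from one cluster until it stops changing, and the sign of the dot product with the mean replaces the angle test against $\frac{\pi}{2}$.

```python
import math

def cluster_by_normals(normals, max_iter=50):
    """Split vertex normals into two sides (e.g. adaxial / abaxial)."""
    def mean(vs):
        s = [sum(c) for c in zip(*vs)]
        n = math.sqrt(sum(c * c for c in s))
        return [c / n for c in s]

    ref = mean(normals)                       # initial average leaf normal
    for _ in range(max_iter):
        # angle < pi/2 is equivalent to a non-negative dot product
        side_a = [n for n in normals if sum(r * c for r, c in zip(ref, n)) >= 0]
        new_ref = mean(side_a)                # recompute from one cluster only
        if all(abs(a - b) < 1e-9 for a, b in zip(ref, new_ref)):
            break                             # natural convergence
        ref = new_ref
    return [sum(r * c for r, c in zip(ref, n)) >= 0 for n in normals]
```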
In the case where two leaves are merged together due to occlusion issues in the 3D reconstruction, the algorithm detects a significantly larger volume, splits the region L_{i} into two regions, and performs the normal leaf segmentation on each region. To create the two regions, the algorithm detects the points f_{i,1} and f_{i,2} which are the furthest apart in the region L_{i}, computes the centroid f_{c_i} of these two points, and uses as the split plane the plane defined with f_{c_i} as origin and the normalised vector from f_{c_i} to f_{i,2} as normal. The mesh segmentation after this step is shown in Figure 2.4.
Phenotypic parameters of interest
For phenotypic analysis, important parameters are main stem height, size, and inclination, petiole length and initiation angle, and leaf width, length, area, and inclination. This section presents the process used to extract these parameters from the segmented plant mesh, and focuses in particular on the leaf parameters, which are crucial indicators of the level of stress to which the plant is subjected [2, 3].
Main stem
The main stem height can be expressed as the height difference between the highest and lowest vertices of the region M. The normalised vector between these two vertices defines the main stem axis, and the angle between this axis and the coordinate system's up-vector gives the inclination of the main stem. In this work the main stem length is defined as the length of the curve c_{p} fitted to the main stem.
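These two quantities reduce to a few lines of vector arithmetic. A sketch (hypothetical helper name), assuming the z axis is the up-vector:

```python
import math

def stem_height_and_inclination(stem_vertices):
    """Height = z difference between highest and lowest stem vertices;
    inclination = angle (degrees) between the stem axis and the up-vector."""
    lowest = min(stem_vertices, key=lambda p: p[2])
    highest = max(stem_vertices, key=lambda p: p[2])
    height = highest[2] - lowest[2]
    axis = [h - l for h, l in zip(highest, lowest)]
    norm = math.sqrt(sum(c * c for c in axis))
    # dot(axis, (0, 0, 1)) / |axis| gives the cosine of the inclination
    inclination = math.degrees(math.acos(axis[2] / norm))
    return height, inclination
```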
Petiole
If c_{i} is a curve interpolated along the petiole P_{i} (using local vertex centres of mass), the length of the petiole can then be expressed as the length of c_{i}. In addition, if l_{i} and h_{i} denote the points of c_{i} that are respectively the closest to c_{p} and the highest, then the angle α between the main stem axis and the vector $\vec{l_i h_i}$ defines the petiole initiation angle (see α in Figure 4.a).
Leaf blade
For each segmented leaf L_{i}, we define L_{c_i} as the centroid of the leaf, $\vec{u}_{i,1}$ as the average of the vectors going from L_{c_i} to the vertices belonging to the right part of the leaf, and $\vec{u}_{i,2}$ as the vector going from L_{c_i} to the tip of the leaf. Let ${\pi}_{i,1}=(L_{c_i},\vec{u}_{i,1})$ and ${\pi}_{i,2}=(L_{c_i},\vec{u}_{i,2})$ define the leaf sagittal and coronal planes (in which L_{c_i} and $\vec{u}_{i,x}$ are the origin and normal of the plane π_{i,x}) as displayed in Figure 5.a. Let S_{i,1} and S_{i,2} (resp. C_{i,1} and C_{i,2}) be the points on each side of π_{i,1} (resp. π_{i,2}) that maximise the distance to π_{i,1} (resp. π_{i,2}), as illustrated in Figure 5.b/c.
To estimate the leaf width (resp. length), we compute the length of the curve w_{i} (resp. l_{i}) interpolated to the leaf shape from S_{i,1} to S_{i,2} (resp. C_{i,1} to C_{i,2}) and projected onto π_{i,2} (resp. π_{i,1}) to remove additional transversal length. Illustrations are provided in Figure 5.b/c. The projection of $\vec{C_{i,1}C_{i,2}}$ onto π_{i,1} is used as the leaf axis. The angle between this axis and the main stem axis gives the leaf inclination. The leaf area can be estimated by averaging the areas of the adaxial and abaxial surfaces (see Figure 5.c), which are computed by summing the areas of the triangles composing them. The leaf thickness can be estimated by averaging the distance from each vertex of the adaxial surface to the closest vertex on the abaxial surface.
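The leaf-area estimate described above (triangle areas summed per surface, then the two surfaces averaged) can be sketched directly; the function names below are illustrative, not the paper's API:

```python
import math

def triangle_area(p0, p1, p2):
    """Half the magnitude of the cross product of two edge vectors."""
    u = [b - a for a, b in zip(p0, p1)]
    v = [b - a for a, b in zip(p0, p2)]
    cx = u[1] * v[2] - u[2] * v[1]
    cy = u[2] * v[0] - u[0] * v[2]
    cz = u[0] * v[1] - u[1] * v[0]
    return 0.5 * math.sqrt(cx * cx + cy * cy + cz * cz)

def surface_area(verts, tris):
    """Sum the areas of the triangles composing one leaf surface."""
    return sum(triangle_area(*(verts[i] for i in tri)) for tri in tris)

def leaf_area(adaxial, abaxial):
    """Average of the adaxial and abaxial surface areas, as in the text.
    Each argument is a (vertices, triangles) pair."""
    return 0.5 * (surface_area(*adaxial) + surface_area(*abaxial))
```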
Analysis over-time
This step of the pipeline involves monitoring the variations of the estimated plant parameters over time. While this is a straightforward process for stem height monitoring, the temporal analysis of petiole and leaf parameters requires an efficient matching algorithm that tracks the different plant parts over time (the orientation and size of the leaves change over time as a result of variations in growing conditions, making it difficult to find robust descriptors).
To perform this task, we developed the pipeline presented in Figure 3, which is based on the assumption that a plant organ's position does not vary much between two close imaging dates. We apply the same pair-wise matching algorithm (horizontal axis in Figure 3) throughout all the available time-points (i.e. matching of T_{1} and T_{2}, of T_{2} and T_{3}, …, of T_{N−1} and T_{N}; see the vertical axis in Figure 3) in order to obtain the sequences of leaves and petioles. The pair-wise scheme is divided into two main steps: an alignment of the two plants and a parts-matching algorithm.
Plants alignment
Plant parts matching
Internodes matching
Leaf blades and petioles matching
Using two aligned plants, we match the different leaves and petioles of the plants by solving an assignment problem. We build an adjacency matrix (comprising ${\Psi}_{T_x}$ rows and ${\Psi}_{T_{x+1}}$ columns) such that at a given position (i, j) in the matrix we store the distance $D(L_{c_{T_x,i}},L_{c_{T_{x+1},j}})$ between the centroids of leaf i of the plant at T_{x} and leaf j of the plant at T_{x+1}. Since the plants are now aligned, two leaves are eligible for pair-wise correspondence only if they lie within a given angular range of each other, and we set their distance in the adjacency matrix to $\infty$ when this condition is not satisfied. The pair-wise matching is performed using a simplified version of the Hungarian algorithm [35] that minimises the sum of the distances between the paired leaves. The petioles linked to the paired leaves are paired at the same time.
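The matching step can be sketched as follows. For the handful of leaves per plant, a brute-force minimum-cost assignment (shown here for simplicity, standing in for the simplified Hungarian algorithm the paper uses) is sufficient; the angular eligibility test is abstracted into a maximum-distance threshold, and all names are hypothetical:

```python
import itertools
import math

def match_leaves(centroids_t1, centroids_t2, max_dist=float("inf")):
    """Return (i, j) pairs minimising the total centroid distance.
    Ineligible pairs (distance above max_dist) are set to infinity,
    mirroring the adjacency-matrix construction in the text.
    Assumes len(centroids_t2) >= len(centroids_t1)."""
    cost = [[math.dist(a, b) if math.dist(a, b) <= max_dist else math.inf
             for b in centroids_t2] for a in centroids_t1]
    n = len(centroids_t1)
    best, best_perm = math.inf, None
    # exhaustive search over assignments (fine for small leaf counts)
    for perm in itertools.permutations(range(len(centroids_t2)), n):
        total = sum(cost[i][j] for i, j in enumerate(perm))
        if total < best:
            best, best_perm = total, perm
    return list(enumerate(best_perm))
```

A proper Hungarian implementation would replace the exhaustive loop for larger organ counts, but the cost-matrix construction is the same.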
After this step, the morphological parts of the plant are matched over time.
Validation methodology
A similar analysis allows the computation of E_{l} and RMSE_{l} for the leaf length measurements.
These errors were computed either using the whole datasets mentioned, or using the datasets trimmed of 10% of the outliers (the 5% best and 5% worst relative errors). In addition, to test the correlation between the automated and manual measurements, we calculated the squared Pearson product-moment correlation coefficient (R^{2}) [36] and the Intraclass Correlation Coefficient (ICC - two-way random, single measures) [37–39]. The closer the R^{2} and ICC coefficients are to 1, the stronger the correlation between the two sets of measurements.
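The validation statistics take only a few lines; a sketch (illustrative names) of the mean absolute relative error and the squared Pearson coefficient used to compare automated against manual measurements:

```python
import math

def mean_absolute_relative_error(auto, manual):
    """Mean of |automated - manual| / manual over paired measurements."""
    return sum(abs(a - m) / m for a, m in zip(auto, manual)) / len(auto)

def pearson_r2(x, y):
    """Squared Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return (cov / (sx * sy)) ** 2
```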
Results
The results obtained by applying our processing pipeline to the initial population of 6 Gossypium hirsutum plants studied over 4 time-points are presented below.
Plant mesh segmentation
Phenotypic parameters estimation
Main stem and leaf measurements analysis
Comparison between the automated and manual measurements

| | \|v_x\| | E_x | Range (mm) | σ_x | E_{x,10%} | σ_{x,10%} | R²_x | ICC_x | RMSE_x |
|---|---|---|---|---|---|---|---|---|---|
| Main Stem Height | 24 | 9.34% | 15.95 | 11.50% | 7.29% | 6.88% | 0.887 | 0.941 | 19.043 |
| Leaf Width | 180 | 5.75% | 5.11 | 6.40% | 4.78% | 3.20% | 0.957 | 0.974 | 7.287 |
| Leaf Length | 180 | 8.78% | 6.93 | 8.36% | 7.92% | 5.42% | 0.948 | 0.967 | 9.707 |
Temporal analysis
Computational cost
Analysis of the computational cost (in minutes)

| Operation | Time-Point | Plant 1 | Plant 2 | Plant 3 | Plant 4 | Plant 5 | Plant 6 |
|---|---|---|---|---|---|---|---|
| Segmentation | T0 | 0.62 | 0.64 | 0.71 | 0.70 | 0.61 | 0.62 |
| | T1 | 0.64 | 0.62 | 0.57 | 0.51 | 0.55 | 0.81 |
| | T2 | 0.71 | 0.69 | 0.52 | 0.68 | 0.65 | 0.60 |
| | T3 | 0.68 | 0.62 | 0.67 | 0.80 | 0.71 | 0.81 |
| Data extraction | T0 | 0.075 | 0.065 | 0.082 | 0.073 | 0.074 | 0.078 |
| | T1 | 0.067 | 0.066 | 0.075 | 0.062 | 0.074 | 0.074 |
| | T2 | 0.072 | 0.068 | 0.061 | 0.063 | 0.068 | 0.067 |
| | T3 | 0.073 | 0.072 | 0.071 | 0.076 | 0.071 | 0.071 |
| Temporal organs matching | | 2.21 | 2.51 | 2.16 | 2.01 | 2.05 | 2.12 |
| Complete mesh analysis | | 4.97 | 5.19 | 4.65 | 4.77 | 4.67 | 5.06 |
Additional results
Additional results are provided in the web-site associated with this paper [see Additional file 1].
Discussion
As illustrated by Figures 9.a, 9.b, 9.c, and 9.d, which present a comparative study of the temporal evolution of phenotypic parameters for 6 Gossypium hirsutum plants, our methodology allows accurate monitoring of the plants' phenotypic traits over time. By developing a hybrid mesh segmentation and analysis methodology for plant phenotyping, we have demonstrated that the automated temporal mesh-based analysis of the plant aerial part is feasible (from the broad temporal plant analysis to the evolution of individual organs).
Nevertheless, our initial study has several limitations which should be acknowledged and which will lead to further investigation and development.
To date, the pilot study has been limited in terms of the exhaustiveness of the phenotypic parameters estimated, but the explicit 3D reconstruction and robust identification of the morphological parts of the plant allow the estimation of a large number of parameters of interest to plant biologists that are not easily extracted from 2D images with existing software platforms (accurate leaf area and bio-volume rather than projected area, growth of individual leaves, organ quantification over time, leaf number / phyllochron, leaf angle). More phenotypic parameter extractions can easily be developed and incorporated into our pipeline as biologists' requirements evolve, allowing re-use of existing libraries of 3D models and the capacity to tailor the pipeline to new trait identification and quantification. Plant architecture is an important determinant of radiation use efficiency in crops, and analysis of this trait in explicit 3D and over time has previously been an intractable problem at anything other than low throughput [1]. We should acknowledge, however, that tools for the 3D analysis of roots based on inferred 3D “reconstructions” (i.e. 3D approximation using shapes such as tubes) exist and have been used extensively since the early 1960s [20, 21, 40].
Although the methodology was tested solely on Gossypium hirsutum plants, it is expected that the method will be broadly and easily adaptable to other dicotyledonous crops such as canola and tomato, and to low-tillering monocotyledons with simple architectures such as corn. The pipeline can be easily adapted, and operators can be implemented and combined in order to increase the flexibility of the algorithm. Preliminary results (unpublished), obtained by reusing the first two steps of the segmentation pipeline (coarse segmentation and stem segmentation) on corn, allowed us to isolate the main stem, the leaves, and the internodes, and allowed the direct computation of corn-specific data. Due to the importance of rice and wheat as major food crops, the application of image-based plant phenomics tools to grasses is of great interest. Significant development of our pipeline is needed to cope with occlusions due to the complex structure and the tillering observed in cereal crops. Their investigation will involve pushing state-of-the-art reconstruction and segmentation algorithms to their limits.
With respect to the accuracy of the phenotypic parameters, errors between 5 and 10% (corresponding to ranges between 5mm and 7mm for the leaves) are acceptable for morphological-scale phenotyping, reflecting the magnitude of errors already inherent in manual measurements and the variations observed between individual plants of identical genetic make-up, and are low enough to distinguish changes in the relevant traits between two imaging dates during development (which is the aim of our research). Measurements for which the mean absolute error is above 10% (or over a 10mm range, e.g. main stem measurements) will require further work to improve the accuracy (for instance, the mean bias error - which characterises systematic over/under-estimation - for the main stem height measurements was MBE_{s} ≃ 9.8mm, against MBE_{w} ≃ −2.7mm and MBE_{l} ≃ 3.1mm for the leaf width and length measurements, indicating a systematic over-estimation of the main stem height). Finally, our current aim is to reduce the error on the measurements to less than 5%, which we believe is achievable by training our algorithms on phantom plant meshes (with exactly known phenotypic parameters) generated using existing plant modelling technologies [4, 41].
While the focus of the current study has been the processing of meshes produced by a commercial 3D reconstruction product, major future work will involve improving the digitisation of plant structure and function by incorporating data other than visible light images into the 3D model. In addition to visible light cameras collecting multiple view geometries, PlantScan, a new screening platform recently developed in our laboratory [28] is equipped with LiDAR (Light Detection and Ranging sensors), infra-red cameras, and multi-wavelength cameras. The LiDAR cameras allow the reconstruction of accurate point-clouds (precision of 200 microns) that will be integrated in our probabilistic reconstruction scheme [29–31] in order to improve the accuracy of the reconstructed plant meshes that currently limits the quality of the morphological segmentation and temporal analysis. These meshes will be overlaid with thermal infra-red data and multi-spectral images data that provide colorimetric information (for chemical composition and photosynthetic functional analysis). Our laboratory expects to scan one plant every 7 minutes, making the current mesh-based methodology (3D reconstruction excluded) suitable for high-throughput dicotyledonous plant analysis. As 3DSOM required an average processing time of 15 minutes to reconstruct suitable meshes, a special focus will be placed on the efficiency of the reconstruction scheme developed.
Conclusions
In this paper, we presented a hybrid mesh-based methodology developed for high-throughput plant phenomics research. The proposed solution provides advanced mesh-processing features, including plant mesh morphological segmentation, accurate plant aerial-part phenotypic parameters estimation, and individual organ tracking and data monitoring over-time. Experiments involved testing our processing pipeline on an initial set of six Gossypium hirsutum plants analysed over four time-points.
From the qualitative and quantitative results presented in the paper, we believe that a mesh-based methodology for high-resolution and high-throughput plant phenomics platforms is feasible and offers multiple advantages over current systems that use a small number of 2D images. The hybrid mesh segmentation presented allowed the identification of the different plant organs for all the test plants. The phenotypic parameter estimation algorithms allowed the retrieval of measurements such as main stem height and inclination, petiole length and initiation angle, and leaf width, length, area, and inclination. Comparing 384 mesh-based measurements with manual measurements, we observed errors ranging from 5.75% to 9.34% and correlations ranging from 0.887 to 0.974. The temporal organ tracking algorithm successfully matched plant organs between time-points in 95% of cases. Finally, the proposed analysis required only 4.9 minutes on average to analyse a plant over four time-points. Mesh-based analysis is thus a suitable means of performing accurate and efficient 3D plant phenotypic analysis.
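The temporal organ tracking step amounts to a one-to-one matching of organs between consecutive time-points, an assignment problem for which the reference list cites the Hungarian method [35]. The sketch below is a minimal, hypothetical illustration (not the authors' implementation): it finds the minimum-cost pairing of organ centroids by brute force over permutations, which is adequate for the small organ counts of a young dicotyledon; the Hungarian method solves the same problem in polynomial time for larger instances.

```python
from itertools import permutations

def match_organs(prev_centroids, curr_centroids):
    """Return the pairing (list of (prev_idx, curr_idx) tuples) that
    minimises the total squared distance between matched centroids."""
    n = len(prev_centroids)
    assert n == len(curr_centroids), "sketch assumes equal organ counts"
    def cost(perm):
        return sum(
            sum((a - b) ** 2 for a, b in zip(prev_centroids[i], curr_centroids[j]))
            for i, j in enumerate(perm)
        )
    best = min(permutations(range(n)), key=cost)
    return list(enumerate(best))

# Hypothetical leaf centroids (x, y, z in mm) at two consecutive scans:
t1 = [(10.0, 5.0, 30.0), (-8.0, 4.0, 25.0), (2.0, -9.0, 40.0)]
t2 = [(-9.5, 5.0, 27.0), (3.0, -10.0, 44.0), (11.0, 6.5, 33.0)]
print(match_organs(t1, t2))  # [(0, 2), (1, 0), (2, 1)]
```

In practice a robust tracker must also handle organs that appear (newly initiated leaves) or disappear between scans, which a plain one-to-one assignment as above does not cover.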
Availability and requirements
The different operators presented in this paper, as well as an initial set of plant meshes, are available for download from the PlantScan home page as a Microsoft Windows 64-bit installer [28].
Declarations
Acknowledgements
The analysis pipeline described in this manuscript was developed as part of an infrastructure grant received under the Commonwealth Government’s National Collaborative Research Infrastructure Strategy (NCRIS). AP was supported by a High Resolution Plant Phenomics Centre Studentship funded by the Australian Capital Territory Government, Canberra. The authors wish to thank Dr Francois Tardieu (INRA, Montpellier, France) for helpful comments and thoughtful review of the manuscript.
References
1. Furbank RT, Tester M: Phenomics – technologies to relieve the phenotyping bottleneck. Trends Plant Sci. 2011, 16(12): 635. doi:10.1016/j.tplants.2011.09.005.
2. Granier C, Tardieu F: Multi-scale phenotyping of leaf expansion in response to environmental changes: the whole is more than the sum of parts. Plant Cell Environ. 2009, 32(9): 1175. doi:10.1111/j.1365-3040.2009.01955.x.
3. Schurr U, Heckenberger U, Herdel K, Walter A, Feil R: Leaf development in Ricinus communis during drought stress: dynamics of growth processes, of cellular structure and of sink-source transition. J Exp Bot. 2000, 51(350): 1515. doi:10.1093/jexbot/51.350.1515.
4. Vos J, Evers JB, Buck-Sorlin GH, Andrieu B, Chelle M, de Visser PHB: Functional-structural plant modelling: a new versatile tool in crop science. J Exp Bot. 2010, 61(8): 2101. doi:10.1093/jxb/erp345.
5. Eberius M, Lima-Guerra J: High-throughput plant phenotyping – data acquisition, transformation, and analysis. Bioinformatics. 2009: 259.
6. Duan L, Yang W, Huang C, Liu Q: A novel machine-vision-based facility for the automatic evaluation of yield-related traits in rice. Plant Methods. 2011, 7: 44. doi:10.1186/1746-4811-7-44.
7. Granier C, Aguirrezabal L, Chenu K, Cookson S, Dauzat M, Hamard P, Thioux J, Rolland G, Bouchier-Combaud S, Lebaudy A, et al: PHENOPSIS, an automated platform for reproducible phenotyping of plant responses to soil water deficit in Arabidopsis thaliana permitted the identification of an accession with low sensitivity to soil water deficit. New Phytologist. 2006, 169(3): 623. doi:10.1111/j.1469-8137.2005.01609.x.
8. Walter A, Scharr H, Gilmer F, Zierer R, Nagel K, Ernst M, Wiese A, Virnich O, Christ M, Uhlig B, et al: Dynamics of seedling growth acclimation towards altered light conditions can be quantified via GROWSCREEN: a setup and procedure designed for rapid optical phenotyping of different plant species. New Phytologist. 2007, 174(2): 447. doi:10.1111/j.1469-8137.2007.02002.x.
9. Jansen M, Gilmer F, Biskup B, Nagel KA, Rascher U, Fischbach A, Briem S, Dreissen G, Tittmann S, Braun S, De Jaeger I, Metzlaff M, Schurr U, Scharr H, Walter A: Simultaneous phenotyping of leaf growth and chlorophyll fluorescence via GROWSCREEN FLUORO allows detection of stress tolerance in Arabidopsis thaliana and other rosette plants. Funct Plant Biol. 2009, 36(11): 902. doi:10.1071/FP09095.
10. Bylesjö M, Segura V, Soolanayakanahally R, Rae A, Trygg J, Gustafsson P, Jansson S, Street N: LAMINA: a tool for rapid quantification of leaf size and shape parameters. BMC Plant Biol. 2008, 8: 82. doi:10.1186/1471-2229-8-82.
11. Reuzeau C, Pen J, Frankard V, de Wolf J, Peerbolte R, Broekaert W: TraitMill: a discovery engine for identifying yield-enhancement genes in cereals. Fenzi Zhiwu Yuzhong (Mol Plant Breeding). 2005, 3: 7534.
12. Hartmann A, Czauderna T, Hoffmann R, Stein N, Schreiber F: HTPheno: an image analysis pipeline for high-throughput plant phenotyping. BMC Bioinf. 2011, 12: 148. doi:10.1186/1471-2105-12-148.
13. Yang W, Xu X, Duan L, Luo Q, Chen S, Zeng S, Liu Q: High-throughput measurement of rice tillers using a conveyor equipped with x-ray computed tomography. Rev Sci Instrum. 2011, 82(2): 025102. doi:10.1063/1.3531980.
14. Naeem A, French A, Wells D, Pridmore T: High-throughput feature counting and measurement of roots. Bioinformatics. 2011, 27(9): 1337. doi:10.1093/bioinformatics/btr126.
15. Yazdanbakhsh N, Fisahn J: High throughput phenotyping of root growth dynamics, lateral root formation, root architecture and root hair development enabled by PlaRoM. Funct Plant Biol. 2009, 36(11): 938. doi:10.1071/FP09167.
16. Iyer-Pascuzzi A, Symonova O, Mileyko Y, Hao Y, Belcher H, Harer J, Weitz J, Benfey P: Imaging and analysis platform for automatic phenotyping and trait ranking of plant root systems. Plant Physiol. 2010, 152(3): 1148. doi:10.1104/pp.109.150748.
17. Biskup B, Scharr H, Fischbach A, Wiese-Klinkenberg A, Schurr U, Walter A: Diel growth cycle of isolated leaf discs analyzed with a novel, high-throughput three-dimensional imaging method is identical to that of intact leaves. Plant Physiol. 2009, 149(3): 1452. doi:10.1104/pp.108.134486.
18. Clark R, Maccurdy R, Jung J, Shaff J, McCouch S, Aneshansley D, Kochian L: Three-dimensional root phenotyping with a novel imaging and software platform. Plant Physiol. 2011, 156(10): 455.
19. Wang H, Zhang W, Zhou G, Yan G, Clinton N: Image-based 3D corn reconstruction for retrieval of geometrical structural parameters. Int J Remote Sensing. 2009, 30(20): 5505. doi:10.1080/01431160903130952.
20. Huang Q, Stockman GC: Generalized tube model: recognizing 3D elongated objects from 2D intensity images. In Proceedings CVPR, IEEE; 1993: 104–109.
21. Huang Q, Jain A, Stockman G, Smucker A: Automatic image analysis of plant root structures. In Pattern Recognition, 1992. Vol. II. Conference B: Pattern Recognition Methodology and Systems, Proceedings, 11th IAPR International Conference on; 1992: 569–572.
22. Nathalie W, Jean-Christophe P: High-contrast three-dimensional imaging of the Arabidopsis leaf enables the analysis of cell dimensions in the epidermis and mesophyll. Plant Methods. 2010, 6(17): 1.
23. Baumberg A, Lyons A, Taylor R: 3D SOM – a commercial software solution to 3D scanning. Graphical Models. 2005, 67(6): 476. doi:10.1016/j.gmod.2004.10.002.
24. Shamir A: A survey on mesh segmentation techniques. Computer Graphics Forum. 2008, 27(6): 1539. doi:10.1111/j.1467-8659.2007.01103.x.
25. Golovinskiy A, Funkhouser T: Consistent segmentation of 3D models. Comput Graphics (Shape Modeling Int). 2009, 33(3): 262.
26. Cornelissen JHC, Lavorel S, Garnier E, Díaz S, Buchmann N, Gurvich DE, Reich PB, Steege HT, Morgan HD, Heijden MGaVD, Pausas JG, Poorter H: A handbook of protocols for standardised and easy measurement of plant functional traits worldwide. Aust J Bot. 2003, 51(4): 335. doi:10.1071/BT02124.
27. Niem W, Wingbermuhle J: Automatic reconstruction of 3D objects using a mobile monoscopic camera. In 3-D Digital Imaging and Modeling, Proceedings, International Conference on Recent Advances in, IEEE; 1997: 173–180.
28. The PlantScan webpage: http://www.plantphenomics.org.au/node/157.
29. Franco J, Boyer E: Fusion of multiview silhouette cues using a space occupancy grid. In Computer Vision, ICCV 2005, Tenth IEEE International Conference on, Volume 2, IEEE; 2005: 1747–1753.
30. Kolev K, Klodt M, Brox T, Cremers D: Continuous global optimization in multiview 3D reconstruction. Int J Comput Vision. 2009, 84: 80. doi:10.1007/s11263-009-0233-1.
31. Hosoi F, Omasa K: Voxel-based 3-D modeling of individual trees for estimating leaf area density using high-resolution portable scanning lidar. IEEE Trans Geosci Remote Sensing. 2006, 44(12): 3610.
32. Vieira M, Shimada K: Surface mesh segmentation and smooth surface extraction through region growing. Comput Aided Geometric Des. 2005, 22(8): 771. doi:10.1016/j.cagd.2005.03.006.
33. Attene M, Falcidieno B, Spagnuolo M: Hierarchical mesh segmentation based on fitting primitives. Visual Comput. 2006, 22(3): 181. doi:10.1007/s00371-006-0375-x.
34. Mortara M, Patanè G, Spagnuolo M, Falcidieno B, Rossignac J: Plumber: a method for a multi-scale decomposition of 3D shapes into tubular primitives and bodies. In Proceedings of the Ninth ACM Symposium on Solid Modeling and Applications; 2004: 339–344.
35. Kuhn H: The Hungarian method for the assignment problem. Naval Res Logistics Q. 1955, 2(1–2): 83.
36. Rodgers J, Nicewander W: Thirteen ways to look at the correlation coefficient. Am Statistician. 1988, 42(1): 59.
37. Koch G: Intraclass correlation coefficient. Encyclopedia Stat Sci. 1983, 4: 212.
38. Shrout P, Fleiss J: Intraclass correlations: uses in assessing rater reliability. Psychological Bull. 1979, 86(2): 420.
39. Bartko J: The intraclass correlation coefficient as a measure of reliability. Psychological Rep. 1966, 19: 3.
40. Danjon F, Reubens B: Assessing and analyzing 3D architecture of woody root systems, a review of methods and applications in tree and soil stability, resource acquisition and allocation. Plant Soil. 2008, 303: 1. doi:10.1007/s11104-007-9470-7.
41. Pradal C, Dufour-Kowalski S, Boudon F, Fournier C, Godin C: OpenAlea: a visual programming and component-based software platform for plant modelling. Funct Plant Biol. 2008, 35(10): 751. doi:10.1071/FP08084.
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.