In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients—manually annotated by up to four raters—and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74%–85%), illustrating the difficulty of this task. We found that different algorithms worked best for different sub-regions (reaching performance comparable to human inter-rater variability), but that no single algorithm ranked in the top for all sub-regions simultaneously. Fusing several good algorithms using a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms, indicating remaining opportunities for further methodological improvements. The BRATS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource.
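The two quantitative ingredients above — the Dice overlap used to compare raters and algorithms, and fusion of several candidate segmentations by majority vote — can be sketched minimally. This is a plain per-voxel majority vote, not the hierarchical variant the benchmark used, and the function names and array layout are assumptions for illustration.

```python
import numpy as np

def dice(seg_a, seg_b):
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = seg_a.astype(bool), seg_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def majority_vote(segs):
    """Per-voxel majority vote across candidate label maps of equal shape."""
    segs = np.stack(segs)                       # (n_candidates, ...) label maps
    labels = np.unique(segs)
    votes = np.stack([(segs == l).sum(axis=0) for l in labels])
    return labels[np.argmax(votes, axis=0)]
```

A hierarchical variant would apply such a vote to nested sub-regions (e.g. whole tumor, then core, then enhancing core) rather than to all labels at once.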
CellProfiler, the first free, open-source system for flexible and high-throughput cell image analysis, is described.
Recent developments in MRI data acquisition technology are starting to yield images that show anatomical features of the hippocampal formation at an unprecedented level of detail, providing the basis for hippocampal subfield measurement. However, a fundamental bottleneck in MRI studies of the hippocampus at the subfield level is that they currently depend on manual segmentation, a laborious process that severely limits the amount of data that can be analyzed. In this article, we present a computational method for segmenting the hippocampal subfields in ultra-high resolution MRI data in a fully automated fashion. Using Bayesian inference, we use a statistical model of image formation around the hippocampal area to obtain automated segmentations. We validate the proposed technique by comparing its segmentations to corresponding manual delineations in ultra-high resolution MRI scans of 10 individuals, and show that automated volume measurements of the larger subfields correlate well with manual volume estimates. Unlike manual segmentations, our automated technique is fully reproducible, and fast enough to enable routine analysis of the hippocampal subfields in large imaging studies.
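The inference step described above — combining a statistical model of image formation with a prior over labels — can be illustrated with a toy per-voxel MAP rule: each label contributes an atlas-derived prior probability and a Gaussian intensity likelihood. This is a minimal analogue under assumed Gaussian likelihoods, not the paper's full generative model.

```python
import numpy as np

def map_labels(intensities, prior, means, variances):
    """Per-voxel MAP labeling: argmax_l p(l) * N(intensity | mu_l, var_l).

    intensities : (n_voxels,) observed image values
    prior       : (n_voxels, n_labels) atlas-derived label probabilities
    means, variances : (n_labels,) Gaussian likelihood parameters per label
    """
    x = intensities[:, None]
    log_lik = -0.5 * (np.log(2 * np.pi * variances) + (x - means) ** 2 / variances)
    log_post = np.log(prior + 1e-12) + log_lik   # Bayes' rule in log domain
    return np.argmax(log_post, axis=1)
```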
We propose a nonparametric, probabilistic model for the automatic segmentation of medical images, given a training set of images and corresponding label maps. The resulting inference algorithms rely on pairwise registrations between the test image and individual training images. The training labels are then transferred to the test image and fused to compute the final segmentation of the test subject. Such label fusion methods have been shown to yield accurate segmentation, since the use of multiple registrations captures greater inter-subject anatomical variability and improves robustness against occasional registration failures. To the best of our knowledge, this manuscript presents the first comprehensive probabilistic framework that rigorously motivates label fusion as a segmentation approach. The proposed framework allows us to compare different label fusion algorithms theoretically and practically. In particular, recent label fusion or multiatlas segmentation algorithms are interpreted as special cases of our framework. We conduct two sets of experiments to validate the proposed methods. In the first set of experiments, we use 39 brain MRI scans—with manually segmented white matter, cerebral cortex, ventricles and subcortical structures—to compare different label fusion algorithms and the widely-used FreeSurfer whole-brain segmentation tool. Our results indicate that the proposed framework yields more accurate segmentation than FreeSurfer and previous label fusion algorithms. In a second experiment, we use brain MRI scans of 282 subjects to demonstrate that the proposed segmentation tool is sufficiently sensitive to robustly detect hippocampal volume changes in a study of aging and Alzheimer’s Disease.
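One of the label-fusion special cases such a framework covers is locally weighted voting, where each registered atlas votes for its label with a weight derived from local intensity similarity to the test image. The sketch below is a toy version of that idea, with a Gaussian similarity weight (`beta` is an assumed free parameter), not the paper's derived algorithm.

```python
import numpy as np

def weighted_label_fusion(test_img, warped_imgs, warped_labels, beta=1.0):
    """Locally weighted voting: each warped atlas votes for its label at each
    voxel, weighted by exp(-beta * (atlas intensity - test intensity)^2)."""
    test = np.asarray(test_img, dtype=float)
    n_labels = int(max(l.max() for l in warped_labels)) + 1
    votes = np.zeros(test.shape + (n_labels,))
    for img, lab in zip(warped_imgs, warped_labels):
        w = np.exp(-beta * (np.asarray(img, dtype=float) - test) ** 2)
        for l in range(n_labels):
            votes[..., l] += w * (lab == l)
    return np.argmax(votes, axis=-1)
```

Plain majority voting is recovered in the limit `beta = 0`, where every atlas votes with equal weight.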
Background: Image-based screens can produce hundreds of measured features for each of hundreds of millions of individual cells in a single experiment. Results: Here, we describe CellProfiler Analyst, open-source software for the interactive exploration and analysis of multidimensional data, particularly data from high-throughput, image-based experiments. Conclusion: The system enables interactive data exploration for image-based screens and automated scoring of complex phenotypes that require combinations of multiple measured features per cell.
We present the Spherical Demons algorithm for registering two spherical images. By exploiting spherical vector spline interpolation theory, we show that a large class of regularizers for the modified Demons objective function can be efficiently approximated on the sphere using iterative smoothing. Based on one-parameter subgroups of diffeomorphisms, the resulting registration is diffeomorphic and fast. The Spherical Demons algorithm can also be modified to register a given spherical image to a probabilistic atlas. We demonstrate two variants of the algorithm corresponding to warping the atlas or warping the subject. Registration of a cortical surface mesh to an atlas mesh, both with more than 160k nodes, requires less than 5 minutes when warping the atlas and less than 3 minutes when warping the subject on a Xeon 3.2GHz single-processor machine. This is comparable to the fastest non-diffeomorphic, landmark-free surface registration algorithms. Furthermore, the accuracy of our method compares favorably to the popular FreeSurfer registration algorithm. We validate the technique in two different applications that use registration to transfer segmentation labels onto a new image: (1) parcellation of in-vivo cortical surfaces and (2) Brodmann area localization in ex-vivo cortical surfaces.
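The core demons recipe — an intensity-driven force step followed by smoothing of the displacement field, which is what "regularization as iterative smoothing" refers to — can be sketched in one dimension. This is a simplified additive (non-diffeomorphic, Euclidean) variant for intuition only; all parameter choices here are assumptions.

```python
import numpy as np

def demons_1d(fixed, moving, iters=200, sigma=2.0, kernel_radius=8):
    """Additive 1-D demons: Thirion-style force step, then Gaussian smoothing
    of the displacement field u, so that warped(x) = moving(x + u(x)) ≈ fixed(x)."""
    x = np.arange(fixed.size, dtype=float)
    u = np.zeros_like(x)
    grad = np.gradient(fixed)                       # passive-force gradient
    k = np.exp(-0.5 * (np.arange(-kernel_radius, kernel_radius + 1) / sigma) ** 2)
    k /= k.sum()
    for _ in range(iters):
        warped = np.interp(x + u, x, moving)
        diff = warped - fixed
        step = -diff * grad / (grad ** 2 + diff ** 2 + 1e-9)
        u = np.convolve(u + step, k, mode="same")   # smoothing = regularization
    return u
```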
Many biological pathways were first uncovered by identifying mutants with visible phenotypes and by scoring every sample in a screen via tedious and subjective visual inspection. Now, automated image analysis can effectively score many phenotypes. In practical application, customizing an image-analysis algorithm or finding a sufficient number of example cells to train a machine learning algorithm can be infeasible, particularly when positive control samples are not available and the phenotype of interest is rare. Here we present a supervised machine learning approach that uses iterative feedback to readily score multiple subtle and complex morphological phenotypes in high-throughput, image-based screens. First, automated cytological profiling extracts hundreds of numerical descriptors for every cell in every image. Next, the researcher generates a rule (i.e., classifier) to recognize cells with a phenotype of interest during a short, interactive training session using iterative feedback. Finally, all of the cells in the experiment are automatically classified and each sample is scored based on the presence of cells displaying the phenotype. By using this approach, we successfully scored images in RNA interference screens in 2 organisms for the prevalence of 15 diverse cellular morphologies, some of which were previously intractable.

Keywords: high-content screening | high-throughput image analysis | phenotype

The history of biology has been dramatically shaped by classic visual screens in model organisms, including Drosophila melanogaster (1-3), Saccharomyces cerevisiae (4), Caenorhabditis elegans (5), and the zebrafish Danio rerio (6, 7). In each case, biological pathways were discovered because researchers were intrigued by groups of peculiar-looking mutants and identified the genes underlying their phenotypes.
Because researchers have favored the extensive study of relatively few genes (8), classic, wide-net approaches like screening are as relevant as ever to probe known biological pathways and discover new ones. Modern technology now enables large-scale experiments in cultured cells to identify human genes that underlie biological processes via RNAi. Automation also allows the screening of chemical libraries to identify perturbants useful as research tools or drugs. Despite these advances, scoring cells in images for rare and unusual morphologies has, in general, remained a significant bottleneck (9-12). Cell image analysis allows accurate identification and measurement of cells' features, enabling automated analysis of certain phenotypes that were previously intractable (13-26). However, many interesting phenotypes require the assessment of several measured features of cells. Machine learning methods that select and combine multiple features for automated cell classification have been used to score many phenotypes (15)(16)(17)(18)(19)(20)(21)(22)(23)(24)(25)(26). These methods require the provision of example cells that do and do not display the morphology of interest (i.e., positive and negative cells). Finding posi...
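The three-stage workflow above (extract per-cell features, train a rule with iterative feedback, classify everything) can be sketched with a toy nearest-centroid classifier standing in for the paper's machine-learning rule, and a ground-truth oracle standing in for the researcher. All names and the uncertainty heuristic are assumptions for illustration.

```python
import numpy as np

def fit_centroids(X, y):
    """Tiny stand-in classifier: one centroid per class in feature space."""
    classes = np.unique(y)
    centroids = np.stack([X[y == c].mean(axis=0) for c in classes])
    return classes, centroids

def predict(model, X):
    classes, centroids = model
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    return classes[np.argmin(d, axis=1)]

def iterative_feedback(X, oracle, n_rounds=3):
    """Short interactive training session: each round, the 'researcher'
    (oracle) labels the cell the current rule is least certain about,
    and the rule is refit on the enlarged training set."""
    labeled = [0, len(X) - 1]                       # two seed example cells
    for _ in range(n_rounds):
        y = np.array([oracle(i) for i in labeled])
        classes, centroids = fit_centroids(X[labeled], y)
        d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
        uncertainty = -np.abs(d[:, 0] - d[:, -1])   # small margin = uncertain
        for i in np.argsort(uncertainty)[::-1]:     # most uncertain first
            if int(i) not in labeled:
                labeled.append(int(i))
                break
    y = np.array([oracle(i) for i in labeled])
    return fit_centroids(X[labeled], y)
```

The key property mirrored here is that only a handful of cells ever need manual labels; the classifier then scores every cell in the experiment.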
Estimating statistical significance of detected differences between two groups of medical scans is a challenging problem due to the high dimensionality of the data and the relatively small number of training examples. In this paper, we demonstrate a non-parametric technique for estimation of statistical significance in the context of discriminative analysis (i.e., training a classifier function to label new examples into one of two groups). Our approach adopts permutation tests, first developed in classical statistics for hypothesis testing, to estimate how likely we are to obtain the observed classification performance, as measured by testing on a hold-out set or cross-validation, by chance. We demonstrate the method on examples of both structural and functional neuroimaging studies.
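The permutation procedure described above can be sketched directly: the observed performance is compared against a null distribution obtained by refitting and rescoring under shuffled labels. In practice `score_fn` would be a cross-validated classification accuracy; the toy score used in the usage note below is an assumption for illustration.

```python
import numpy as np

def permutation_pvalue(score_fn, X, y, n_perm=1000, seed=0):
    """Permutation test for classifier performance: how often does training
    on label-shuffled data match or beat the observed score by chance?"""
    rng = np.random.default_rng(seed)
    observed = score_fn(X, y)
    null = [score_fn(X, rng.permutation(y)) for _ in range(n_perm)]
    # +1 correction keeps the p-value strictly positive
    return (1 + sum(s >= observed for s in null)) / (n_perm + 1)
```

Because the labels are exchangeable under the null hypothesis of no group difference, the resulting p-value is valid without any parametric assumption on the high-dimensional data.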