OpSeF: Open Source Python Framework for Collaborative Instance Segmentation of Bioimages

Introduction

Phenomics, the assessment of the set of physical and biochemical properties that completely characterize an organism, has long been recognized as one of the most significant challenges in modern biology (Houle et al., 2010). Microscopy is a crucial technology to study phenotypic characteristics. Advances in high-throughput microscopy (Lang et al., 2006; Neumann et al., 2010; Chessel and Carazo Salas, 2019), slide scanner technology (Webster and Dunstan, 2014; Wang et al., 2019), light-sheet microscopy (Swoger et al., 2014; Ueda et al., 2020), semi-automated (Bykov et al., 2019; Schorb et al., 2019) and volume electron microscopy (Titze and Genoud, 2016; Vidavsky et al., 2016), as well as correlative light- and electron microscopy (Hoffman et al., 2020) have revolutionized the imaging of organisms, tissues, organoids, cells, and subcellular structures. Due to the massive amount of data produced by these approaches, the traditional biomedical image analysis tool of “visual inspection” is no longer feasible, and classical, non-machine learning-based image analysis is often not robust enough to extract phenotypic characteristics reliably in a non-supervised manner.

Thus, the advances mentioned above were enabled by breakthroughs in the application of machine learning methods to biological images. Traditional machine learning techniques, based on random-forest classifiers and support vector machines, were made accessible to biologists with little to no knowledge in machine learning, using stand-alone tools such as ilastik (Haubold et al., 2016; Berg et al., 2019; Kreshuk and Zhang, 2019) or QuPath (Bankhead et al., 2017). Alternatively, they were integrated into several image analysis platforms such as Cellprofiler (Lamprecht et al., 2007), Cellprofiler Analyst (Jones et al., 2009), Icy (de Chaumont et al., 2012), ImageJ (Schneider et al., 2012; Arganda-Carreras et al., 2017) or KNIME (Sieb et al., 2007).

More recently, deep learning methods, initially developed for computer vision challenges, such as face recognition or autonomous cars, have been applied to biomedical image analysis (Cireşan et al., 2012, 2013). The U-Net is the most commonly used deep convolutional neural network specifically designed for semantic segmentation of biomedical images (Ronneberger et al., 2015). In the following years, neural networks were broadly applied to biomedical images (Zhang et al., 2015; Akram et al., 2016; Albarqouni et al., 2016; Milletari et al., 2016; Moeskops et al., 2016; Van Valen et al., 2016; Çiçek et al., 2016; Rajchl et al., 2017). Segmentation challenges like the 2018 Data Science Bowl (DSB) further promoted the adaptation of computer vision algorithms like Mask R-CNN (He et al., 2017) to biological analysis challenges (Caicedo et al., 2019). The DSB included various classes of nuclei. Schmidt et al. use the same dataset to demonstrate that star-convex polygons are better suited to represent densely packed cells (Schmidt et al., 2018) than axis-aligned bounding boxes used in Mask R-CNN (Hollandi et al., 2020b). Training of deep learning models typically involves tedious annotation to create ground truth labels. Approaches that address this limitation include optimizing the annotation workflow by starting with reasonably good predictions (Hollandi et al., 2020a), applying specific preprocessing steps such that an existing model can be used (Whitehead, 2020), and the use of generalist algorithms trained on highly variable images (Stringer et al., 2020). Following the latter approach, Stringer et al. trained a neural network to predict vector flows generated by the reversible transformation of a highly diverse image collection. Their model includes a function to auto-estimate the scale. It works well for specialized and generalized data (Stringer et al., 2020).

Recently, various such pre-trained deep learning segmentation models have been published that are intended for non-machine learning experts in the field of biomedical image processing (Schmidt et al., 2018; Hollandi et al., 2020b; Stringer et al., 2020). Testing such models on new data sets can be time-consuming and might not always give good results. Pre-trained models might fail because the test images do not resemble the data the network was trained on sufficiently well. Alternatively, the underlying network architecture and specification, or the way data is internally represented and processed, might not be suited for the presented task. Biomedical users with no background in computer science are often unable to distinguish these possibilities. They might erroneously conclude that their problem is in principle not suited for deep learning-based segmentation. Thus, they might hesitate to create annotations to re-train the most appropriate architecture. Here, we present the Open Segmentation Framework OpSeF, a Python framework for deep-learning-based instance segmentation of cells and nuclei. OpSeF has primarily been developed for staff image analysts with solid knowledge in image analysis, a thorough understanding of the principles of machine learning, and basic skills in Python. It wraps scikit-image, a collection of Python algorithms for image processing (van der Walt et al., 2014), the U-Net implementation used in Cellprofiler 3.0 (McQuin et al., 2018), StarDist (Schmidt et al., 2018; Weigert et al., 2019, 2020), and Cellpose (Stringer et al., 2020) in a single framework. OpSeF defines standard in- and outputs, facilitates modular workflow design, and ensures interoperability with other software (Weigert et al., 2020). Moreover, it streamlines and semi-automates preprocessing, CNN-based segmentation, postprocessing, as well as evaluation of results. Jupyter notebooks (Kluyver et al., 2016) serve as a minimal graphical user interface. Most computations are performed head-less and can be executed on local workstations as well as on GPU clusters. Segmentation results can be easily imported and refined in ImageJ using AnnotatorJ (Hollandi et al., 2020a).

Materials and Methods

Data Description

Cobblestones

Images of cobblestones were taken with a Samsung Galaxy S6 Active Smartphone.

Leaves

Noise was added to the demo data from “YAPiC - Yet Another Pixel Classifier” available at https://github.com/yapic/yapic/tree/master/docs/example_data using the Add Noise function in ImageJ.

Small Fluorescent Nuclei

Images of Hek293 human embryonic kidney cells stained with a nuclear dye from the image set BBBC038v1 (Caicedo et al., 2019), available from the Broad Bioimage Benchmark Collection (BBBC), were used. Metadata is not available for this image set to confirm staining conditions. Images were rescaled from 360 × 360 pixels to 512 × 512 pixels.
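This rescaling step can be reproduced with scikit-image; the sketch below is a minimal illustration with hypothetical file names, not the exact code used for the dataset.

```python
# Minimal sketch (hypothetical file names): upscale a 360 x 360 nuclei image
# to 512 x 512 pixels with scikit-image, as described above.
from skimage import io, transform

img = io.imread("hek293_nuclei_360.tif")
img_512 = transform.resize(
    img, (512, 512), preserve_range=True, anti_aliasing=True
).astype(img.dtype)
io.imsave("hek293_nuclei_512.tif", img_512)
```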

3D Colon Tissue

We used the low signal-to-noise variant of the image set BBBC027 (Svoboda et al., 2011) from the BBBC showing 3D colon tissue images.

Epithelial Cells

Images of cervical cells from the image set BBBC038v1 (Caicedo et al., 2019) available from the BBBC display cells stained with a dye that labels membranes weakly and nuclei strongly. The staining pattern is reminiscent of images of methylene blue-stained cells. However, metadata is not available for this image set to confirm staining conditions.

Skeletal Muscle

A methylene blue-stained skeletal muscle section was recorded on a Nikon Eclipse Ni-E microscope equipped with a Märzhäuser SlideExpress2 system for automated handling of slides. The pixel size is 0.37 × 0.37 μm. Thirteen large patches of 2048 × 2048 pixels were manually extracted from the original 44712 × 55444 pixel image. Color images were converted to grayscale.
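Patch extraction and grayscale conversion of this kind can be done with scikit-image; the sketch below uses a hypothetical file name and an arbitrary patch origin rather than the coordinates used for the dataset.

```python
# Minimal sketch (hypothetical file name and patch origin): crop a
# 2048 x 2048 pixel patch from a large RGB scan and convert it to grayscale.
from skimage import io
from skimage.color import rgb2gray
from skimage.util import img_as_ubyte

rgb = io.imread("muscle_section_stitched.tif")   # hypothetical stitched RGB image
y0, x0 = 10000, 20000                            # hypothetical patch origin
patch = rgb[y0:y0 + 2048, x0:x0 + 2048]
gray = img_as_ubyte(rgb2gray(patch))             # rgb2gray returns floats in [0, 1]
io.imsave("muscle_patch_gray.tif", gray)
```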

Kidney

HE stained kidney paraffin sections were recorded on a Nikon Eclipse Ni-E microscope equipped with a Märzhäuser SlideExpress2 system for automated handling of slides. The pixel size is 180 × 180 nm. The original, stitched, 34816 × 51200 pixel image was split into two large patches (18432 × 6144 and 22528 × 5120 pixels). Next, the Eosin staining was extracted using the Color Deconvolution ImageJ plugin. This plugin implements the method described by Ruifrok and Johnston (2001).
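The same Ruifrok and Johnston stain separation is also available in scikit-image; the sketch below is an alternative to the ImageJ plugin actually used here, with a hypothetical file name.

```python
# Minimal sketch: extract the Eosin channel from an HE-stained RGB image with
# the Ruifrok & Johnston color deconvolution implemented in scikit-image.
from skimage import io
from skimage.color import rgb2hed
from skimage.exposure import rescale_intensity

rgb = io.imread("kidney_he_patch.tif")           # hypothetical file name
hed = rgb2hed(rgb)                               # channels: Hematoxylin, Eosin, DAB
eosin = rescale_intensity(hed[..., 1], out_range=(0.0, 1.0))
io.imsave("kidney_eosin.tif", (eosin * 255).astype("uint8"))
```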

Arabidopsis Flowers

H2B:mRuby2 was used for the visualization of somatic nuclei of an Arabidopsis thaliana flower. The flower was scanned from eight views differing by 45° increments in a Zeiss Z1 light-sheet microscope (Valuchova et al., 2020). We used a single view to mimic a challenging 3D segmentation problem. Image files are available in the Image Data Resource (Williams et al., 2017) under the accession code: idr0077.

Mouse Blastocysts

The DAPI signal from densely packed E3.5 mouse blastocysts nuclei was recorded on a Leica SP8 confocal microscope using a 40× 1.30 NA oil objective (Blin et al., 2019). Image files are available in the Image Data Resource (Williams et al., 2017) under the accession code: idr0062.

Neural Monolayer

The DAPI signal of a neural monolayer was recorded on a Leica SpE confocal microscope using a 63× 1.30 NA oil objective (Blin et al., 2019). Image files are available in the Image Data Resource (Williams et al., 2017) under the accession code: idr0062.

Algorithm

Ideally, OpSeF is used as part of collaborative image analysis projects, to which both the user and the image analyst contribute their unique expertise (Figure 1A). All analyst tasks are optimized for deployment on Linux workstations or GPU clusters; all user tasks may be performed in ImageJ on any laptop. If challenges arise, the image analyst (Figure 1A) might consult other OpSeF users or the developers of tools used within OpSeF. The analyst will – to the benefit of future users – become more skilled at using CNN-based segmentation in analysis workflows. The user, who knows the sample best, plays an important role in validating results and discovering artifacts (Figure 1A). Exemplary workflows and new models might be shared to the benefit of other OpSeF users (Figure 1B).

Figure 1. Image analysis using OpSeF. (A) Illustration of how to use OpSeF for collaborative image analysis. (B) Illustration of how developers, image analysts, and users might contribute toward the further development of OpSeF. (C) The OpSeF analysis pipeline consists of four groups of functions used to import and reshape the data, to preprocess it, to segment objects, and to analyze and classify results. (D) Optimization procedure. Left panel: Illustration of a processing pipeline, in which three different models are applied to data generated by four different preprocessing pipelines each. Right panel: Resulting images are classified into results that are correct, suffer from under- or over-segmentation, or fail to detect objects. (E) Illustration of the postprocessing pipeline. Segmented objects might be filtered by their region properties or a mask; results might be exported to AnnotatorJ and re-imported for further analysis. Blue arrows define the default processing pipeline, gray arrows feature available options. Dark blue boxes are core components, light blue boxes are optional processing steps.

OpSeF’s analysis pipeline consists of four principal sets of functions to import and reshape the data, to preprocess it, to segment objects, and to analyze and classify results (Figure 1C). Currently, OpSeF can process individual tiff files and the proprietary Leica ‘.lif’ container file format. During import and reshape, the following options are available for tiff input: tile in 2D and 3D, scale, and make sub-stacks. For lif files, only the make sub-stacks option is supported. Preprocessing is mainly based on scikit-image (van der Walt et al., 2014). It consists of a linear workflow in which 2D images are filtered, the background is removed, and stacks are projected. Next, the following optional preprocessing operations might be performed: histogram adjustment (Zuiderveld, 1994), edge enhancement, and inversion of images. Available segmentation options include the pre-trained U-Net used in Cellprofiler 3.0 (McQuin et al., 2018), the so-called “2D_paper_dsb2018” StarDist model (Schmidt et al., 2018), and Cellpose (Stringer et al., 2020). The 2D StarDist model (Schmidt et al., 2018) was trained on a subset of fluorescent images from the 2018 Data Science Bowl (DSB) (Caicedo et al., 2019). Although good performance on non-fluorescent images cannot be taken for granted, the StarDist versatile model, which was trained on the same data, generalizes well and can be used to segment cells in diaminobenzidine and hematoxylin stained tissue sections (Whitehead, 2020). We thus used the 2D_paper_dsb2018 StarDist model for all 2D examples. Available options for preprocessing in 3D are limited (Figure 1B, lower panel). Segmentation in 3D is computationally more demanding. Thus, we recommend a two-stage strategy for Cellpose 3D. Preprocessing parameters are first explored on representative planes in 2D. Next, further optimization in 3D is performed. Either way, preprocessing and selection of the ideal model for segmentation are one functional unit. Figure 1D illustrates this concept with a processing pipeline, in which three different models are applied to four different preprocessing pipelines each. The resulting images are classified into results that are mostly correct, suffer from under- or over-segmentation, or largely fail to detect objects. In the given example, the combination of preprocessing pipeline three and model two gives overall the best result. We recommend an iterative optimization which starts with a large number of models and relatively few, but conceptually different, preprocessing pipelines. For most datasets, some models outperform others. In this case, we recommend fine-tuning the most promising preprocessing pipelines in combination with the most promising model. OpSeF uses matplotlib (Virtanen et al., 2020) to visualize results in Jupyter notebooks and to export exemplary results that may be used as figures in publications. All data is managed in pandas (Virtanen et al., 2020) and might be exported as a csv file. Scikit-image (van der Walt et al., 2014) and scikit-learn (Pedregosa et al., 2011; Figures 1E, 2A–C) are used for pre- and postprocessing of segmentation results, which might, e.g., be filtered based on their size, shape, or other object properties (Figure 2B). Segmented objects may further be refined by a user-provided (Figures 2D,E) or an autogenerated mask. Results might be exported to AnnotatorJ (Hollandi et al., 2020a) for editing or classification in ImageJ.
AnnotatorJ is an ImageJ plugin that helps hand-labeling data with deep learning-supported semi-automatic annotation and further convenient functions to create and edit object contours easily. It has been extended with a classification mode and import/export fitting the data structure used in OpSeF. After refinement, results can be re-imported and further analyzed in OpSeF. Analysis options include scatter plots of region properties (Figure 2B), T-distributed Stochastic Neighbor Embedding (t-SNE) analysis (Figure 2F), and principal component analysis (PCA) (Figure 2G).
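As a rough illustration of this postprocessing path, the sketch below derives region properties from a label image with scikit-image, filters implausibly large detections, and embeds the remaining objects with PCA and t-SNE via scikit-learn. File names and the size cut-off are hypothetical; this is not OpSeF's internal code.

```python
# Minimal sketch (hypothetical file names and thresholds): region-property based
# filtering of a label image followed by PCA and t-SNE embedding of the objects.
import pandas as pd
from skimage import io
from skimage.measure import regionprops_table
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

labels = io.imread("segmentation_labels.tif")       # instance labels (one id per object)
intensity = io.imread("preprocessed_image.tif")     # matching intensity image

props = pd.DataFrame(regionprops_table(
    labels, intensity_image=intensity,
    properties=("label", "area", "mean_intensity", "eccentricity")))

kept = props[props["area"] < 2000]                  # drop implausibly large detections
features = kept[["area", "mean_intensity", "eccentricity"]].to_numpy()

pca_coords = PCA(n_components=2).fit_transform(features)
tsne_coords = TSNE(n_components=2, perplexity=30).fit_transform(features)
```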

Figure 2. Example of how postprocessing can be used to refine results (A) StarDist segmentation of the multi-labeled cells dataset detected nuclei reliably but caused many false positive detections. These resemble the typical shape of cells but are larger than true nuclei. Orange arrows point at nuclei that were missed, the white arrow at two nuclei that were not split, the blue arrows at false positive detections that could not be removed by filtering. (B) Scatter plot of segmentation results shown in panel (A). Left panel: Mean intensity plotted against object area. Right panel: Circularity plotted against object area. Blue Box illustrates the parameter used to filter results. (C) Filtered Results. Orange arrows point at nuclei that were missed, the white arrow at two nuclei that were not split, the blue arrows at false positive detections that could not be removed by filtering. (D,E) Example for the use of a user-provided mask to classify segmented objects. The segmentation results (false-colored nuclei) are superimposed onto the original image subjected to [median 3 × 3] preprocessing. All nuclei located in the green area are assigned to Class 1, all others to Class 2. The red box indicates the region shown enlarged in panel (E). From left to right in panel (E): original image, nuclei assigned to class 1, nuclei assigned to class 2. (F) T-distributed Stochastic Neighbor Embedding (t-SNE) analysis of nuclei assigned to class 1 (purple) or class 2 (yellow). (G) Principal component analysis (PCA) of nuclei assigned to class 1 (purple) or class 2 (yellow).

Results

We provide demonstration notebooks to illustrate how OpSeF might be used to elucidate efficiently whether a given segmentation task is solvable with state of the art deep convolutional neural networks (CNNs). In the first step, preprocessing parameters are optimized. Next, we test whether the chosen model performs well without re-training. Finally, we assess how well it generalizes on heterogeneous datasets.

Preprocessing can be used to make the input image more closely resemble the visual appearance of the data on which the CNN models bundled with OpSeF were trained, e.g., by filtering and resizing. Additionally, preprocessing steps can be used to normalize data and reduce heterogeneity. Generally, there is no single, universally best preprocessing pipeline. Instead, well-performing combinations of preprocessing pipelines and matching CNN models can be identified. Even the definition of a “good” result depends on the biological question posed and may vary from project to project. For cell tracking, very reproducible cell identification will be of utmost importance; for other applications, the accuracy of the outline might be more crucial. To harness the full power of CNN-based segmentation models and to build trust in their more widespread use, it is essential to understand under which conditions they are prone to fail.
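A typical preprocessing chain of the kind discussed here can be expressed in a few lines of scikit-image; the sketch below is illustrative only, with hypothetical file names and filter sizes rather than the settings used in the notebooks.

```python
# Minimal sketch of a preprocessing chain: median smoothing, background
# subtraction, optional CLAHE histogram adjustment, and image inversion.
from skimage import io, util
from skimage.filters import median
from skimage.morphology import disk, white_tophat
from skimage.exposure import equalize_adapthist

img = io.imread("raw_image.tif")                 # hypothetical input image
smoothed = median(img, disk(1))                  # roughly a 3 x 3 median filter
bg_removed = white_tophat(smoothed, disk(25))    # subtract large-scale background
equalized = equalize_adapthist(bg_removed)       # CLAHE, returns floats in [0, 1]
inverted = util.invert(equalized)                # optional inversion step
```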

We use various demo datasets to challenge the CNN-based segmentation pipelines. Jupyter notebooks document how OpSeF was used to obtain reliable results. These notebooks are provided as a starting point for the iterative optimization of user projects and as a tool for interactive user training.

The first two datasets – cobblestones and leaves – are generic, non-microscopic image collections, designed to illustrate common analysis challenges. Further datasets exemplify the segmentation of a monolayer of fluorescent cells, fluorescent tissues, cells in which various compartments have been stained with the same dye, as well as histological sections stained with one or two dyes. The latter dataset exemplifies additionally how OpSeF can be used to process large 2D images.

Nuclei and cells used to train CNN-based segmentation are most commonly round or ellipsoid shaped. Objects in the cobblestone dataset are approximately square-shaped. Thus, the notebook may be used as an example to explore the segmentation of non-round cells (e.g., many plant cells, neurons). Heterogeneous intensities within objects and in the border region, as well as a five-fold variation of object size, challenge segmentation pipelines further. In the first round of optimization, minor smoothing [median filter with 3 × 3 kernel (median 3 × 3)] and background subtraction were applied. Next, the effect of additional histogram equalization, edge enhancement, and image inversion was tested. The resulting four preprocessed images were segmented with all models [Cellpose nuclei, Cellpose Cyto, StarDist, and U-Net]. The Cellpose scale-factor range [0.2, 0.4, 0.6] was explored. Among the 32 resulting segmentation pipelines, the combination of image inversion and the Cellpose Cyto 0.4 model produced the best results in both training images (Figures 3A–C) without further optimization. The segmentation generalized well to the entire dataset. Only in one image, three objects were missed, and one object was over-segmented. Borders around these stones are very hard to distinguish for a human observer, and even further training might not resolve the presented segmentation tasks (Figures 3D–F). Cellpose has been trained on a large variety of images and had been reported to perform well on objects of similar shape [compare Figure 4, Images 21, 22, 27 in Stringer et al. (2020)]. Thus, it is no surprise that Cellpose outperformed the StarDist 2D model (Schmidt et al., 2018), which had been trained only on fluorescent images.
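Running one such pipeline combination outside of OpSeF would look roughly like the sketch below, which applies a pre-trained Cellpose “cyto” model to a preprocessed, inverted image using the Cellpose 1.x/2.x Python API; the file name and diameter are placeholders, and the diameter only loosely corresponds to OpSeF's scale-factor setting.

```python
# Minimal sketch (hypothetical file name and diameter): segment an inverted,
# median-filtered image with the pre-trained Cellpose "cyto" model.
from cellpose import models
from skimage import io, util

img = io.imread("cobblestones_median3.tif")      # hypothetical preprocessed image
img_inv = util.invert(img)                       # inversion gave the best results here

model = models.Cellpose(model_type="cyto")       # pre-trained cytoplasm model
masks, flows, styles, diams = model.eval(img_inv, diameter=120, channels=[0, 0])
io.imsave("cobblestones_labels.tif", masks.astype("uint16"))
```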

Figure 3. Cobblestones notebook: Segmentation of non-roundish cells. (A,B) Segmentations (red line) of the Cellpose Cyto 0.4 model are superimposed onto the original image subjected to [median 3 × 3] preprocessing. The inverted image (not shown) was used as input to the segmentation. Outlines are well defined, no objects were missed, none over-segmented. These settings generalized accurately to the entire dataset (train and test) shown in panels (C,D). Only in one image, three objects were missed and one was over-segmented. Borders around these stones are hard to discern. Individual objects are false color-coded in panels (C,D). The red squares in panel (D) highlight one of the two problematic regions shown as a close-up in panels (E,F).

Figure 4. Leaves notebook: Segmentation of rare concave cells. (A–C) Segmentation results (red line) of the Cellpose Cyto 0.5 and 0.7 and the StarDist model are superimposed onto the original image subjected to [median 3 × 3] preprocessing. The inverted image (not shown) was used as input to the segmentation. Outlines are well defined, few objects were missed (A: blue arrow), some over-segmented (B,C: green and orange arrow). The green arrow points at an oak leaf with prominent leaf veins, the orange arrow at maple leaves with less prominent leaf veins. (D–F) Results of further optimization. Further smoothing (E) reduced the rate of false negatives (blue arrow) and over-segmentation in the Cellpose Cyto model. However, object outlines were less precise (E).

Segmentation of the leaves dataset seems trivial and could easily be solved by any threshold-based approach. Nevertheless, it challenges CNN-based segmentation due to the presence of concave and convex shapes. Moreover, objects contain dark lines, vary 20-fold in area, and are presented on a heterogeneous background. Preprocessing was performed as described for the cobblestone dataset. The most promising result was obtained with the Cellpose Cyto 0.5 model in combination with [median 3 × 3 & image inversion] preprocessing (Figures 4A,B) and the StarDist model with [median 3 × 3 & histogram equalization] preprocessing (Figure 4C). Outlines were well defined, few objects were missed (blue arrow in Figure 4A), few over-segmented (green and orange arrow in Figures 4B,C). The Cellpose Cyto 0.7 model gave similar results.

Maple leaves (orange arrows in Figures 4B,C) were most frequently over-segmented. Their shape resembles a cluster of touching cells. Thus, the observed over-segmentation might be caused by the attempt of the CNN to reconcile their shapes with structures it has been trained on. Oak leaves were the second most frequently over-segmented leaf type. These leaves contain dark leaf veins that might be interpreted as cell borders. However, erroneous segmentation mostly does not follow these veins (green arrow in Figure 4B). Next, the effect of stronger smoothing [mean 7 × 7] was explored. The Cellpose nuclei model (Figure 4E) reduced the rate of false-negative detections (Figure 4D blue arrow) and over-segmentation (Figure 4F orange arrow) at the expense of a loss in precision of object outlines. The parameter combinations tested in Figures 4D,E generalized well to the entire dataset.

Next, we used OpSeF to segment nuclei in a monolayer of cells. Most nuclei are well separated. We focused our analysis on the few touching nuclei. Both the Cellpose nuclei model and the Cellpose Cyto model performed well across a broad range of scale-factors. Interestingly, strong smoothing made the Cellpose nuclei but not the Cellpose Cyto model more prone to over-segmentation (Figure 5A). The StarDist model performed well, while the U-Net failed surprisingly, given the seemingly simple task. Pixel intensities have a small dynamic range, and nuclei are dim and rather large. To elucidate whether any of these issues led to this poor performance, we binned the input 2 × 2 (U-Net+BIN panel in Figure 5A) and adjusted brightness and contrast. Adjusting brightness and contrast alone had no beneficial effect (data not shown). The U-Net performed much better on the binned input. Subsequently, we batch-processed the entire dataset. StarDist was more prone to over-segmentation (green arrow in Figure 5B), but detected smaller objects more faithfully (orange arrow in Figure 5B). This might indicate that the size of test objects was larger than the size of train objects. StarDist was more likely to include atypical objects, e.g., nuclei during cell division that display a strong texture (blue arrow in Figure 5B). Substantial variation in brightness was well tolerated by both models (white arrow in Figure 5B). Both models complement each other well.
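The 2 × 2 binning that rescued the U-Net result can be reproduced with scikit-image as in the sketch below; the file name is a placeholder.

```python
# Minimal sketch (hypothetical file name): 2 x 2 binning of a dim, large-nuclei
# image by averaging pixel blocks before feeding it to the U-Net.
from skimage import io
from skimage.transform import downscale_local_mean

img = io.imread("dim_nuclei.tif")
binned = downscale_local_mean(img, (2, 2))       # average each 2 x 2 pixel block
io.imsave("dim_nuclei_binned.tif", binned.astype(img.dtype))
```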

Figure 5. Segmentation of dense fluorescent nuclei (A) Segmentation of the sparse nuclei data set. Example of two touching cells. The segmentation result (red line) superimposed onto the original image subjected to [median 3 × 3] preprocessing. Columns define the model used for segmentation, the rows the filter used for preprocessing. Cellpose nuclei model and the Cellpose Cyto model performed well across a large range of scale-factors. The U-Net failed initially. Improved results after binning are shown in the lower right panel. (B) Comparison of Cellpose nuclei (CP) and StarDist (SD) segmentation results (red line). StarDist is more prone to over-segmentation (green arrow), but detects smaller objects more reliably (orange arrow) and tends to include objects with strong texture (blue arrow). Strong variation in brightness was tolerated well by both models (white arrow). (C,D) 2D Segmentation of colon tissue from the Broad BioImage Benchmark Collection with Cellpose nuclei (CP) or StarDist (SD). Both models gave reasonable results. Only a few dense clusters could not be segmented (white arrow).

We also tested a more complex dataset: 3D colon tissue from the Broad Bioimage Benchmark Collection. This synthetic dataset is ideally suited to assess the segmentation of clustered nuclei in tissues. We chose the low signal-to-noise variant, which allowed us to test denoising strategies. Sum, maximum, and median projections of three Z-planes were tested in combination with the preprocessing variants previously described for the monolayer of cells dataset. Twelve different preprocessing pipelines were combined with all models [Cellpose nuclei, Cellpose Cyto, StarDist, and U-Net]. The Cellpose scale-factor range [0.15, 0.25, 0.4, 0.6] was explored. Many segmentation decisions in the 3D colon tissue dataset are hard to perform even for human experts. Within this limitation, [median projection & histogram equalization] preprocessing produced reasonable results without any further optimization in combination with either the Cellpose nuclei 0.4 or the StarDist model (Figures 5C,D). Only a few cell clusters were not segmented (Figures 5C,D white arrow). Both models performed equally well on the entire data set.
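The three projection variants compared here amount to a one-line NumPy reduction each, as in the sketch below; the stack file name and plane indices are assumptions.

```python
# Minimal sketch (hypothetical file name and plane indices): sum, maximum, and
# median projection of three neighboring Z-planes of a (Z, Y, X) stack.
import numpy as np
from skimage import io

stack = io.imread("colon_tissue_stack.tif")      # assumed (Z, Y, X) ordering
planes = stack[10:13]                            # three neighboring Z-planes

sum_proj = planes.sum(axis=0)
max_proj = planes.max(axis=0)
median_proj = np.median(planes, axis=0)
```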

We subsequently tried to segment a single layer of irregular-shaped epithelial cells, in which the nucleus and cell membranes had been stained with the same dye. In the first run, minor [median 3 × 3] or strong [mean 7 × 7] smoothing was applied. Next, the effect of additional histogram equalization, edge enhancement, and image inversion was tested. The resulting eight preprocessed images were segmented with all models [Cellpose nuclei, Cellpose Cyto, StarDist, and U-Net]. The Cellpose scale-factor range [0.6, 0.8, 1.0, 1.4, 1.8] was explored. The size of nuclei varied more than five-fold. We thus focused our analysis on a cluster of particularly large nuclei and a cluster of small nuclei. The Cellpose nuclei 1.4 and StarDist model detected both small and large nuclei similarly well (Figure 6A). StarDist segmentation results included many cell-shaped false positive detections. Given the model was trained on different data, retraining would be the best way to improve performance. Alternatively, false-positive detections, which were much larger than true nuclei, could be filtered out during postprocessing. While the U-Net did not perform well on the same input [median 3 × 3] (Figure 6A), it returned better results (Figure 6A) upon [mean 7 × 7 & histogram equalization] preprocessing. As weak smoothing was beneficial for the Cellpose and StarDist pipelines and stronger smoothing for the U-Net pipelines, we explored the effect of intermediate smoothing [median 5 × 5] for Cellpose and StarDist and even stronger smoothing [mean 9 × 9] for the U-Net pipelines. A slight improvement was observed. Thus, we used [median 5 × 5] preprocessing in combination with Cellpose nuclei 1.5 or StarDist model to process the entire dataset. Cellpose frequently failed to detect bright, round nuclei (Figure 6B, arrows) and StarDist (Figure 6C) had many false detections. Thus, re-training or postprocessing is required.

Figure 6. Segmentation of multi-labeled cells. (A) Images of large epithelial cells from the 2018 Data Science Bowl collection were used to test the segmentation of a single layer of cells. These cells vary in size. The Cellpose nuclei 1.4 model and [median 3 × 3] preprocessing gave a reasonable segmentation for large and small nuclei. StarDist segmentation based on the same input detected nuclei more reliably. However, many false-positive detections were present. Interestingly, the shape of false detections resembles the typical shape of cells well. The U-Net did not perform well with the same preprocessing, but performed better with [mean 7 × 7 & histogram equalization] preprocessing. (B,C) [Median 5 × 5] preprocessing in combination with the Cellpose nuclei 1.5 or the StarDist model was applied to the entire data set. (B) The Cellpose model reproducibly missed round, very bright nuclei (blue arrow). (C) StarDist predicted many false-positive cells.

In the DSB, most algorithms performed better on images classified as small or large fluorescent, compared to images classified as “purple tissue” or “pink and purple” tissue. We used methylene blue-stained skeletal muscle sections as a sample dataset for tissue stained with a single dye and Hematoxylin and eosin (HE) stained kidney paraffin sections as an example for multi-dye stained tissue. Analysis of tissue sections might be compromised by heterogeneous image quality caused, e.g., by artifacts created at the junctions of tiles. To account for these artifacts, all workflows used the fused image as input to the analysis pipeline.

While most nuclei in the skeletal muscle dataset are well separated, some form dense clusters, others are out of focus (Figure 7A). The size of nuclei varies ten-fold; their shape ranges from elongated to round. The same preprocessing and models as described for the epithelial cells dataset were used; the Cellpose scale-factor range [0.2, 0.4, 0.6] was explored. [Median 3 × 3 & invert image] preprocessing combined with the Cellpose nuclei 0.6 model produced satisfactory results without further optimization (Figure 7B). Outlines were well defined, some objects were missed, few over-segmented. Neither StarDist nor the U-Net performed similarly well. We could not overcome this limitation by adaptation of preprocessing or binning. The performance of other – most likely more appropriate – StarDist models (2D_versatile_fluo, 2D_versatile_he) was not tested. Processing of the entire dataset identified inadequate segmentation of dense clusters (Figure 7C, white arrow) and occasional over-segmentation of large, elongated nuclei (Figure 7C, orange arrow) as the main limitations. Nuclei that are out of focus were frequently missed (Figure 7C, blue arrow). Limiting the analysis to in-focus nuclei is feasible.

Figure 7. Skeletal muscle notebook: Segmenting irregular nuclei in tissue. (A) A methylene blue-stained skeletal muscle section was used to test the segmentation of tissue that has been stained with one dye. (B) Segmentation was tested on 2048 × 2048 pixel image patches. The star shown in panels (A,B) is located at the same position within the image displayed at different zoom factors. The segmentation result (red line) of the Cellpose nuclei 0.6 model is superimposed onto the original image subjected to [median 3 × 3] preprocessing. (C,D) Close-ups of regions that were difficult to segment. Segmentation of dense clusters (white arrow) often failed, and occasional over-segmentation of large, elongated nuclei (orange arrow) was observed. Nuclei that are out of focus (blue arrow) were frequently missed.

Cell density is very heterogeneous in the kidney dataset. The Eosin signal from an HE stained kidney paraffin section (Figures 8A,B) was obtained by color deconvolution. Nuclei are densely packed within glomeruli and rather sparse in the proximal and distal tubules. Two stitched images were split using OpSeF’s “to tiles” function. Initial optimization was performed on a batch of 16 image tiles; the entire dataset contains 864 tiles. The same preprocessing and models were used as described for the skeletal muscle dataset; the Cellpose scale-factor range [0.6, 0.8, 1.0, 1.4, 1.8] was explored. [Median 3 × 3 & histogram equalization] preprocessing in combination with the Cellpose nuclei 0.6 model produced good results (Figure 8C). [Mean 7 × 7 & histogram equalization] preprocessing in combination with StarDist performed similarly well (Figure 8C). The latter pipeline resulted in more false-positive detections (Figure 8C, purple arrows). The U-Net performed worse, and more nuclei were missed (Figure 8C, blue arrow). All models failed for dense cell clusters (Figures 8C,D, white arrow).
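Splitting a large stitched image into square tiles, analogous to OpSeF's “to tiles” function, can be sketched as below; the tile size and file names are hypothetical, and this is not the framework's own implementation.

```python
# Minimal sketch (hypothetical file name and tile size): cut a stitched 2D image
# into non-overlapping square tiles and save them individually.
from skimage import io

img = io.imread("kidney_eosin_stitched.tif")
tile = 1024                                      # hypothetical tile edge length

tiles = [img[y:y + tile, x:x + tile]
         for y in range(0, img.shape[0] - tile + 1, tile)
         for x in range(0, img.shape[1] - tile + 1, tile)]
for i, t in enumerate(tiles):
    io.imsave(f"kidney_tile_{i:04d}.tif", t)
```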

Figure 8. Kidney notebook: Segmenting cell clusters in tissues. (A,B) Part of an HE stained kidney paraffin section used to test segmentation of tissue stained with two dyes. The white box in panel (A) highlights the region shown enlarged in panel (B). The star in panels (A,B,D) marks the same glomerulus. (C) The Eosin signal was extracted by color deconvolution. The segmentation result (blue line) is superimposed onto the original image subjected to [median 3 × 3] preprocessing. The Cellpose nuclei model with scale-factor 0.6 in combination with [median 3 × 3 & histogram equalization] preprocessing and the StarDist model with [mean 7 × 7 & histogram equalization] performed similarly well (C,D). StarDist resulted in more false-positive detections (purple arrows). The U-Net performed worse, more nuclei were missed (blue arrow). All models failed in very dense areas (white arrow).

Next, we sought to expand the capability of OpSeF to volume segmentation. To this aim, we trained a StarDist 3D model using the annotated Arabidopsis thaliana lateral root nuclei dataset provided by Wolny et al. (2020). Images were obtained on a Luxendo MuVi SPIM light-sheet microscope (Wolny et al., 2020). We first tested the model with a similar, publicly available dataset. Valuchova et al. (2020) studied differentiation within Arabidopsis flowers. To this aim, the authors obtained eight views of H2B:mRuby2 labeled somatic nuclei on a Zeiss Z1 light-sheet microscope. We used a single view to mimic a challenging 3D segmentation problem (Figures 9A–C). Changes in image quality along the optical axis, in particular deteriorating image quality deeper in the tissue (Figure 9C), are a major challenge for any segmentation algorithm. While the segmentation quality of the interactive H-Watershed ImageJ plugin (Vincent and Soille, 1991; Najman and Schmitt, 1996; Lotufo and Falcao, 2000; Schindelin et al., 2015), a state-of-the-art traditional image processing method, is still acceptable in planes with good contrast (Figure 9B, xy slice), results deeper in the tissue are inferior to CNN-based segmentation (data not shown). The H-Watershed plugin consequently fails to segment precisely in 3D (Figure 9C, zy-slices). The StarDist 3D model, which was trained on a similar dataset, performs slightly better than the Cellpose nuclei model. To evaluate the performance of these models further, we used the DISCEPTS dataset (Blin et al., 2019). DISCEPTS stands for “DifferentiatingStemCells & Embryos are a PainToSegment” and contains various datasets of densely packed nuclei that are heterogeneous in shape, size, or texture. Blin et al. elegantly solved the segmentation challenge by labeling the nuclear envelope. We sought to assess the performance of models contained in OpSeF on the more challenging DAPI signal. While the StarDist model trained on Arabidopsis thaliana lateral root nuclei shows satisfactory performance on E3.5 mouse blastocysts, notably, the size of nuclei is underestimated, and cells in dense clusters are sometimes missed. Fine-tuning of the non-maximum suppression and the detection threshold might suffice to obtain more precise segmentation results.
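Applying such a trained StarDist 3D model and adjusting the two thresholds mentioned above would look roughly like the sketch below; the model name, directory, and threshold values are assumptions, not the settings used here.

```python
# Minimal sketch (hypothetical model name, paths, and thresholds): run a trained
# StarDist 3D model on a DAPI volume and tune probability / NMS thresholds.
from csbdeep.utils import normalize
from skimage import io
from stardist.models import StarDist3D

vol = io.imread("blastocyst_dapi.tif")           # hypothetical (Z, Y, X) volume
model = StarDist3D(None, name="lateral_root_nuclei", basedir="models")

labels, details = model.predict_instances(
    normalize(vol, 1, 99.8),                     # percentile-based normalization
    prob_thresh=0.5, nms_thresh=0.3)             # thresholds to fine-tune
io.imsave("blastocyst_labels.tif", labels.astype("uint16"))
```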

Figure 9. Volume segmentation. In all images, segmented nuclei are assigned a unique random color. (A) Side-by-side comparison of segmented somatic nuclei of Arabidopsis thaliana flowers obtained with the Cellpose 3D nuclei model (left) and a StarDist 3D model that was trained on a similar dataset (right). The middle panel shows the central xy-plane of the image volume obtained by light-sheet microscopy. The red box marks the area that is shown enlarged in panel (B). The green line indicates the location of the yz-plane shown in panel (C). In panels (B,C), the results obtained manually using the interactive ImageJ H-watershed plugin are additionally shown. Orange arrows point at non-nuclear artifact signal that is ignored by the StarDist 3D model trained on a dataset containing similar artifacts. Blue arrows point at nuclear signal in a low-contrast area. (D–F) Evaluation of 3D models based on the DISCEPTS dataset. Central xy- or yz-planes of the original images (gray) are shown side-by-side with the segmentation results of the model as indicated. All data were obtained on confocal microscopes. (D,E) Segmentation of E3.5 mouse blastocysts. The StarDist 3D model trained on well-spaced nuclei missed nuclei in dense clusters (green arrow) that were reliably identified by the Cellpose 3D cyto model. (F) Segmentation results of the Cellpose nuclei model (right panel) on the neural “monolayer” dataset (left). These cells contain – in contrast to Arabidopsis thaliana lateral root nuclei – strong texture in their nuclei, are densely spaced, and vary in size.

Interestingly, the more versatile Cellpose cyto model, rather than the Cellpose nuclei model, is ideally suited for segmenting the E3.5 mouse blastocyst nuclei (Figures 9D,E). Next, we used the neural “monolayer” dataset (Figure 9F). In this dataset, flat cells form tight 3D clusters, which proved challenging to segment (Blin et al., 2019). Our pre-trained StarDist model failed to give satisfactory segmentation results (data not shown). The presented cells contain – in contrast to Arabidopsis thaliana lateral root nuclei – strong texture in their nuclei. The more versatile Cellpose nuclei model displayed promising initial results with little fine-tuning (Figure 9F) that might be further improved by re-training the model on the appropriate ground truth.

Discussion

Intended Use and Future Developments

Examining the relationship between biochemical changes and morphological alterations in diseased tissues is crucial to understand and treat complex diseases. Traditionally, microscopic images are inspected visually. This approach limits the possibilities for the characterization of phenotypes to the more obvious changes that occur later in disease progression. The manual investigation of subtle alterations at the single-cell level, which often requires quantitative assays, is hampered by the data volume. A whole slide tissue image might contain over one million cells. Despite the improvement in machine learning technology, completely unsupervised analysis pipelines have not been widely accepted. Thus, one of the major challenges for the coming years will be the development of efficient strategies to keep the human expert in the loop. Many biomedical users still perceive deep learning models as black boxes. The mathematical understanding of how CNNs make decisions is improving. OpSeF facilitates understanding the strengths of pre-trained models and network architectures on a descriptive, operational level. Thereby, awareness of intrinsic limitations, such as the inability of StarDist to segment non-star-convex shapes well or issues relating to the limited field-of-view of neural networks, can be reached. It further allows us to quickly assess how robust models are against artifacts such as shadows present in light-sheet microscopy, or how accurately they predict cell shapes that are neither round nor ellipsoid (e.g., neurons, amoebas). Collectively, increased awareness of limitations and better interpretability of results will be pivotal to increase the acceptance of machine learning methods. It will improve the quality control of results and allow efficient integration of expert knowledge in analysis pipelines (Holzinger et al., 2019a, b).

As illustrated in the provided Jupyter notebooks, the U-Net often performed worst. Why is that the case? As previously reported, the learning capacity of a single, regular U-Net is limited (Caicedo et al., 2019). Alternatively, the similarity of training and test images might be insufficient. Either way, the provision of a set of U-Nets trained on diverse data might be a promising approach to address this limitation. [ods.ai] topcoders, the winning team of the 2018 Data Science Bowl (Caicedo et al., 2019), combined simple U-Nets with dedicated pre- and postprocessing pipelines. In doing so, they outperformed teams using more complex models like Mask R-CNN (Caicedo et al., 2019). OpSeF allows for the straightforward integration of a large number of pre-trained CNNs. We plan to include the possibility of saving pixel probabilities in future releases of OpSeF. This option will grant users more flexibility in designing custom postprocessing pipelines that ensemble results from a set of useful intermediate predictions.
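The kind of postprocessing that saved pixel probabilities would enable is sketched below: averaging the probability maps of several U-Nets and labeling the consensus foreground. This illustrates the planned option rather than existing OpSeF code; file names and the threshold are hypothetical.

```python
# Minimal sketch (hypothetical files and threshold): ensemble pixel probabilities
# from several U-Nets, threshold the consensus, and group pixels into objects.
import numpy as np
from skimage import io
from skimage.measure import label

prob_files = ("unet_a_prob.tif", "unet_b_prob.tif", "unet_c_prob.tif")
prob_maps = np.stack([io.imread(f) for f in prob_files], axis=0)

consensus = prob_maps.mean(axis=0)               # average foreground probability
foreground = consensus > 0.5                     # hypothetical threshold
instances = label(foreground)                    # connected-component labeling
io.imsave("ensemble_labels.tif", instances.astype("uint16"))
```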

OpSeF allows semi-automated exploration of a large number of possible combinations of preprocessing pipelines and segmentation models. Even if satisfactory results are not achievable with pre-trained models, OpSeF results may be used as a guide to decide for which CNN architecture re-training on manually created labels might be promising. The generation of training data is greatly facilitated by the seamless integration with ImageJ using the AnnotatorJ plugin. We hope that many OpSeF users will contribute their training data to open repositories and will make new models available for integration in OpSeF. Thus, OpSeF might soon become an interactive model repository, in which an appropriate model might be identified with reasonable effort. Community-provided Jupyter notebooks might be used to teach students in courses how to optimize CNN-based analysis pipelines. This could educate them and make them less dependent on turn-key solutions that often trade performance for simplicity and offer little insight into the reasons why the CNN-based segmentation works or fails. The better users understand the models they use, the more they will trust them, and the better they will be able to quality-control them. We hope that OpSeF will be widely accepted as a framework through which novel models might be made available to other image analysts in an efficient way.

Integrating Various Segmentation Strategies and Quality Control of Results

Multiple strategies for instance segmentation have been pursued. The U-Net belongs to the “pixel to object” class of methods: each pixel is first assigned to a semantic class (e.g., cell or background), then pixels are grouped into objects (Ronneberger et al., 2015). Mask R-CNNs belong to the “object to pixel” class of methods (He et al., 2017): the initial prediction of bounding boxes for each object is followed by a semantic segmentation. Following an intermediate approach, Schmidt et al. first predict star-convex polygons that approximate the shape of cells and use non-maximum suppression to prune redundant predictions (Schmidt et al., 2018; Weigert et al., 2019, 2020). Stringer et al. use simulated diffusion originating from the center of a cell to convert segmentation masks into flow fields. The neural network is then trained to predict flow fields, which can be converted back into segmentation masks (Stringer et al., 2020). Each of these methods has specific strengths and weaknesses. The use of flow fields as an auxiliary representation proved to be a great advantage for predicting cell shapes that are not roundish. At the same time, Cellpose is the most computationally demanding model used. In our hands, Cellpose tended to miss objects in obvious ways, in particular if objects displayed a distinct appearance compared to their neighbors (blue arrows in Figures 5B, 6B, 7D). StarDist is much less computationally demanding, and star-convex polygons are well suited to approximate elliptical cell shapes. The pre-trained StarDist model implemented in OpSeF might be less precise in predicting novel shapes it has not been trained on, e.g., maple leaves (Figure 4C). This limitation can be overcome by retraining, given the object is star-convex, which includes certain concave shapes such as maple leaves. However, some cell types (e.g., neurons, amoebae) are typically not star-convex, and StarDist – due to this “limitation by design” – cannot be expected to segment these objects precisely. Segmentation errors by the StarDist model were generally plausible. It tended to predict cell-like shapes, even if they are not present (Figure 6B). Although the tendency of StarDist to fail gracefully might be advantageous in most instances, this feature requires particularly careful quality control to detect and fix errors. The “pixel to object” class of methods is less suited for the segmentation of dense cell clusters. The misclassification of just a few pixels might lead to the fusion of neighboring cells.

OpSeF integrates three mechanistically distinct methods for CNN-based segmentation in a single framework. This allows comparing these methods easily. Nonetheless, we decided against integrating an automated evaluation, e.g., by determining the F1 score, false positive and false negative rates, and accuracy. Firstly, for most projects no ground-truth is available. Secondly, we want to encourage the user to visually inspect segmentation results. Reviewing 100 different segmentation results opened in ImageJ as a stack takes only a few minutes and gives valuable insight into when and how segmentations fail. This knowledge is easily missed when just looking at the output scores of commonly used metrics but might have a significant impact on the biological conclusion. Even segmentation results from a model with 95% precision and 95% recall for the overall cell population might not be suited to determine the abundance of a rare cell type if these cells are systematically missed, detected less accurately in the mutant situation, or preferentially localized to areas in the tissue that are not segmented well. Although it is difficult to capture such issues with standard metrics, they are readily observed by a human expert. Learning more about the circumstances in which certain types of CNN-based segmentation fail helps to decide when human experts are essential for quality control of results. Moreover, it is pivotal for the design of postprocessing pipelines. These might select among multiple segmentation hypotheses – on an object-by-object basis – the one which gives the most consistent results for reconstructing complex cell shapes in large 3D volumes or for cell tracking.

Optimizing Results and Computational Cost

Image analysis pipelines are generally a compromise between ease-of-use and performance as well as between computational cost and accuracy. Until now, rather simple, standard U-Nets are the most frequently used models in the major image analysis tools. In contrast, the winning model of the 2018 Data Science Bowl by the [ods.ai] topcoders team used sophisticated data postprocessing to combine the output of 32 different neural networks (Caicedo et al., 2019). The high computational cost currently limits the widespread use of this or similar approaches. OpSeF is an ideal platform to find the computationally most efficient solution to a segmentation task. The [ods.ai] topcoders algorithm was designed to segment five different classes of nuclei: “small” and “large fluorescent,” “grayscale,” “purple tissue,” and “pink and purple tissue” (Caicedo et al., 2019). Stringer et al. used an even broader collection of images that included cells of unusual appearance and natural images of regular cell-like shapes such as shells, garlic, pearls, and stones (Stringer et al., 2020).

The availability of such versatile models is invaluable, in particular for users who are unable to train custom models or lack the resources to search for the most efficient pre-trained model. For most biological applications, however, no one-fits-all solution is required. Instead, potentially appropriate models might be pre-selected, optimized, and tested using OpSeF. Ideally, an image analyst and a biomedical researcher will jointly fine-tune the analysis pipeline and quality control the results. This way, the resulting analysis workflows will have the best chances of being both robust and accurate, and an ideal balance between manual effort, computational cost, and accuracy might be reached.

Comparison of the models available within OpSeF revealed that the same task of segmenting 100 images using StarDist took 1.5-fold, Cellpose with fixed scale-factor 3.5-fold, and Cellpose with flexible scale-factor 5-fold longer compared to segmentation with the U-Net.

The systematic search for optimal parameters and ideal performance might be dispensable if only a few images that can easily be curated manually are to be processed, but it is highly valuable if massive datasets produced by slide scanners, light-sheet microscopes, or volume EM techniques are to be processed.

Deployment Strategies

We decided against providing OpSeF as an interactive cloud solution. A local solution uses existing resources best, avoids limitations generated by the down- and upload of large datasets, and addresses concerns regarding the security of clinical datasets. Although the provision of plugins is perceived as crucial to speed up the adoption of new methods, image analysts increasingly use Jupyter notebooks, which allow them to document workflows step-by-step. This is a significant advantage compared to interactive solutions, in which parameters used for analysis are not automatically logged. Biologists might hesitate to use Jupyter notebooks for analysis due to an initially steep learning curve. Once technical challenges such as the establishment of the conda environment are overcome, notebooks allow them to integrate data analysis and documentation with ease. Notebooks might be deposited in repositories along with the raw data. This builds more trust in published results by improving transparency and reproducibility.

Data Availability Statement

The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found below:

Test data for the Jupyter notebooks is available at:

https://owncloud.gwdg.de/index.php/s/nSUqVXkkfUDPG5b.

Test data for AnnotatorJ is available at:

https://owncloud.gwdg.de/index.php/s/dUMM6JRXsuhTncS.

The source code is available at:

https://github.com/trasse/OpSeF-IV.

Author Contributions

TR designed and developed OpSeF and analyzed the data. RH and PH designed and developed the AnnotatorJ integration. TR wrote the manuscript with contributions from all authors.

Funding

Research in the Scientific Service Group Microscopy was funded by the Max Planck Society. PH and RH acknowledge support from the LENDULET-BIOMAG Grant (2018-342) and from the European Regional Development Funds (GINOP-2.3.2-15-2016-00006, GINOP-2.3.2-15-2016-00026, and GINOP-2.3.2-15-2016-00037).

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

We would like to thank Thorsten Falk and Volker Hilsenstein (Monash Micro Imaging) for testing the software and critical comments on the manuscript, and the IT service group of the MPI for Heart and Lung Research for technical support. We gratefully acknowledge Varun Kapoor (Institut Curie), Constantin Pape (EMBL), Uwe Schmidt (MPI CBG), Carsen Stringer (Janelia Research Campus), and Adrian Wolny (EMBL) for advice. We thank everyone who made their data available, in particular the Broad Bioimage Benchmark Collection for image sets BBBC027 (Svoboda et al., 2011) and BBBC038v1 (Caicedo et al., 2019), and Christoph Möhl (DZNE). We appreciate the referee’s valuable comments. This manuscript has been released as a pre-print at bioRxiv (Rasse et al., 2020).

References

Akram, S. U., Kannala, J., Eklund, L., and Heikkilä, J. (2016). “Cell segmentation proposal network for microscopy image analysis,” in Deep Learning and Data Labeling for Medical Applications, eds G. Carneiro, D. Mateus, L. Peter, A. Bradley, J. M. R. S. Tavares, V. Belagiannis, et al. (New York, NY: Springer International Publishing), 21–29. doi: 10.1007/978-3-319-46976-8_3

Albarqouni, S., Baur, C., Achilles, F., Belagiannis, V., Demirci, S., and Navab, N. (2016). AggNet: deep learning from crowds for mitosis detection in breast cancer histology images. IEEE Trans. Med. Imaging 35, 1313–1321. doi: 10.1109/TMI.2016.2528120

Arganda-Carreras, I., Kaynig, V., Rueden, C., Eliceiri, K. W., Schindelin, J., Cardona, A., et al. (2017). Trainable Weka Segmentation: a machine learning tool for microscopy pixel classification. Bioinformatics 33, 2424–2426. doi: 10.1093/bioinformatics/btx180

Bankhead, P., Loughrey, M. B., Fernandez, J. A., Dombrowski, Y., McArt, D. G., Dunne, P. D., et al. (2017). QuPath: open source software for digital pathology image analysis. Sci. Rep. 7:16878. doi: 10.1038/s41598-017-17204-5

Berg, S., Kutra, D., Kroeger, T., Straehle, C. N., Kausler, B. X., Haubold, C., et al. (2019). ilastik: interactive machine learning for (bio)image analysis. Nat. Methods 16, 1226–1232. doi: 10.1038/s41592-019-0582-9

Blin, G., Sadurska, D., Portero Migueles, R., Chen, N., Watson, J. A., and Lowell, S. (2019). Nessys: a new set of tools for the automated detection of nuclei within intact tissues and dense 3D cultures. PLoS Biol. 17:e3000388. doi: 10.1371/journal.pbio.3000388

Bykov, Y. S., Cohen, N., Gabrielli, N., Manenschijn, H., Welsch, S., Chlanda, P., et al. (2019). High-throughput ultrastructure screening using electron microscopy and fluorescent barcoding. J. Cell Biol. 218, 2797–2811. doi: 10.1083/jcb.201812081

Caicedo, J. C., Goodman, A., Karhohs, K. W., Cimini, B. A., Ackerman, J., Haghighi, M., et al. (2019). Nucleus segmentation across imaging experiments: the 2018 data science bowl. Nat. Methods 16, 1247–1253. doi: 10.1038/s41592-019-0612-7

Chessel, A., and Carazo Salas, R. E. (2019). From observing to predicting single-cell structure and function with high-throughput/high-content microscopy. Essays Biochem. 63, 197–208. doi: 10.1042/EBC20180044

Çiçek, Ö, Abdulkadir, A., Lienkamp, S. S., Brox, T., and Ronneberger, O. (2016). “3D U-net: learning dense volumetric segmentation from sparse annotation,” in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2016, eds S. Ourselin, L. Joskowicz, M. R. Sabuncu, G. Unal, and W. Wells (New York, NY: Springer International Publishing), 424–432. doi: 10.1007/978-3-319-46723-8_49

Cireşan, D. C., Giusti, A., Gambardella, L. M., and Schmidhuber, J. (2012). “Deep neural networks segment neuronal membranes in electron microscopy images,” in Proceedings of the 25th International Conference on Neural Information Processing Systems - Volume 2, (Lake Tahoe, Nev: Curran Associates Inc.).

Cireşan, D. C., Giusti, A., Gambardella, L. M., and Schmidhuber, J. (2013). “Mitosis detection in breast cancer histology images with deep neural networks,” in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2013, eds K. Mori, I. Sakuma, Y. Sato, C. Barillot, and N. Navab (Berlin: Springer), 411–418. doi: 10.1007/978-3-642-40763-5_51

de Chaumont, F., Dallongeville, S., Chenouard, N., Herve, N., Pop, S., Provoost, T., et al. (2012). Icy: an open bioimage informatics platform for extended reproducible research. Nat. Methods 9, 690–696. doi: 10.1038/nmeth.2075

Haubold, C., Schiegg, M., Kreshuk, A., Berg, S., Koethe, U., and Hamprecht, F. A. (2016). Segmenting and tracking multiple dividing targets using ilastik. Adv. Anat. Embryol. Cell Biol. 219, 199–229. doi: 10.1007/978-3-319-28549-8_8

He, K., Gkioxari, G., Dollar, P., and Girshick, R. (2017). “Mask R-CNN,” in Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), (Venice: IEEE).

Hoffman, D. P., Shtengel, G., Xu, C. S., Campbell, K. R., Freeman, M., Wang, L., et al. (2020). Correlative three-dimensional super-resolution and block-face electron microscopy of whole vitreously frozen cells. Science 367:eaaz5357. doi: 10.1126/science.aaz5357

Hollandi, R., Diósdi, Á, Hollandi, G., Moshkov, N., and Horváth, P. (2020a). AnnotatorJ: an ImageJ plugin to ease hand-annotation of cellular compartments. Mol. Biol. Cell 31, 2179–2186. doi: 10.1091/mbc.E20-02-0156

Hollandi, R., Szkalisity, A., Toth, T., Tasnadi, E., Molnar, C., Mathe, B., et al. (2020b). nucleAIzer: a parameter-free deep learning framework for nucleus segmentation using image style transfer. Cell Syst. 10, 453–458.e6. doi: 10.1016/j.cels.2020.04.003

Holzinger, A., Haibe-Kains, B., and Jurisica, I. (2019a). Why imaging data alone is not enough: AI-based integration of imaging, omics, and clinical data. Eur. J. Nucl. Med. Mol. Imaging 46, 2722–2730. doi: 10.1007/s00259-019-04382-9

Holzinger, A., Langs, G., Denk, H., Zatloukal, K., and Muller, H. (2019b). Causability and explainability of artificial intelligence in medicine. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 9:e1312. doi: 10.1002/widm.1312

Jones, T. R., Carpenter, A. E., Lamprecht, M. R., Moffat, J., Silver, S. J., Grenier, J. K., et al. (2009). Scoring diverse cellular morphologies in image-based screens with iterative feedback and machine learning. Proc. Natl. Acad. Sci. U.S.A. 106, 1826–1831. doi: 10.1073/pnas.0808843106

Kluyver, T., Ragan-Kelley, B., Pérez, F., Granger, B. E., Bussonnier, M., Frederic, J., et al. (2016). “Jupyter Notebooks – a publishing format for reproducible computational workflows,” in Positioning and Power in Academic Publishing: Players, Agents and Agendas, eds F. Loizides and B. Schmidt (Amsterdam: IOS Press).

Kreshuk, A., and Zhang, C. (2019). Machine learning: advanced image segmentation using ilastik. Methods Mol. Biol. 2040, 449–463. doi: 10.1007/978-1-4939-9686-5_21

Lamprecht, M. R., Sabatini, D. M., and Carpenter, A. E. (2007). CellProfiler: free, versatile software for automated biological image analysis. Biotechniques 42, 71–75. doi: 10.2144/000112257

Lotufo, R., and Falcao, A. (2000). “The ordered queue and the optimality of the watershed approaches,” in Mathematical Morphology and its Applications to Image and Signal Processing, eds J. Goutsias, L. Vincent, and D. S. Bloomberg (Boston, MA: Springer), 341–350. doi: 10.1007/0-306-47025-x_37

McQuin, C., Goodman, A., Chernyshev, V., Kamentsky, L., Cimini, B. A., Karhohs, K. W., et al. (2018). CellProfiler 3.0: next-generation image processing for biology. PLoS Biol. 16:e2005970. doi: 10.1371/journal.pbio.2005970

Milletari, F., Navab, N., and Ahmadi, S. (2016). “V-Net: fully convolutional neural networks for volumetric medical image segmentation,” in Proceedings of the 2016 Fourth International Conference on 3D Vision (3DV), (Stanford, CA: IEEE), 565–571.

Moeskops, P., Viergever, M. A., Mendrik, A. M., Vries, L. S. D., Benders, M. J. N. L., and Išgum, I. (2016). Automatic segmentation of MR brain images with a convolutional neural network. IEEE Trans. Med. Imaging 35, 1252–1261. doi: 10.1109/TMI.2016.2548501

Najman, L., and Schmitt, M. (1996). Geodesic saliency of watershed contours and hierarchical segmentation. IEEE Trans. Pattern Anal. Machine Intell. 18, 1163–1173. doi: 10.1109/34.546254

Neumann, B., Walter, T., Heriche, J. K., Bulkescher, J., Erfle, H., Conrad, C., et al. (2010). Phenotypic profiling of the human genome by time-lapse microscopy reveals cell division genes. Nature 464, 721–727. doi: 10.1038/nature08869

Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., et al. (2011). Scikit-learn: machine learning in python. J. Machine Learn. Res. 12, 2825–2830.

Rajchl, M., Lee, M. C. H., Oktay, O., Kamnitsas, K., Passerat-Palmbach, J., Bai, W., et al. (2017). DeepCut: object segmentation from bounding box annotations using convolutional neural networks. IEEE Trans. Med. Imaging 36, 674–683. doi: 10.1109/TMI.2016.2621185

Rasse, T. M., Hollandi, R., and Horváth, P. (2020). OpSeF IV: open source python framework for segmentation of biomedical images. bioRxiv[Preprint] doi: 10.1101/2020.04.29.068023

Ruifrok, A. C., and Johnston, D. A. (2001). Quantification of histochemical staining by color deconvolution. Anal. Quant Cytol. Histol. 23, 291–299.

Schindelin, J., Rueden, C. T., Hiner, M. C., and Eliceiri, K. W. (2015). The imageJ ecosystem: an open platform for biomedical image analysis. Mol. Reprod. Dev. 82, 518–529. doi: 10.1002/mrd.22489

Schmidt, U., Weigert, M., Broaddus, C., and Myers, G. (2018). “Cell detection with star-convex polygons,” in Medical Image Computing and Computer Assisted Intervention – MICCAI 2018, eds A. F. Frangi, J. A. Schnabel, C. Davatzikos, C. Alberola-López, and G. Fichtinger (Cham: Springer International Publishing), 265–273. doi: 10.1007/978-3-030-00934-2_30

Schorb, M., Haberbosch, I., Hagen, W. J. H., Schwab, Y., and Mastronarde, D. N. (2019). Software tools for automated transmission electron microscopy. Nat. Methods 16, 471–477. doi: 10.1038/s41592-019-0396-9

Sieb, C., Meinl, T., and Berthold, M. R. (2007). Parallel and distributed data pipelining with KNIME. Mediterr. J. Comput. Netw. 3, 43–51.

Stringer, C., Michaelos, M., and Pachitariu, M. (2020). Cellpose: a generalist algorithm for cellular segmentation. bioRxiv[Preprint] doi: 10.1101/2020.02.02.931238

Svoboda, D., Homola, O., and Stejskal, S. (2011). Generation of 3D Digital Phantoms of Colon Tissue. Berlin: Springer, 31–39.

Swoger, J., Pampaloni, F., and Stelzer, E. H. (2014). Light-sheet-based fluorescence microscopy for three-dimensional imaging of biological samples. Cold Spring Harb. Protoc. 2014, 1–8. doi: 10.1101/pdb.top080168

Ueda, H. R., Erturk, A., Chung, K., Gradinaru, V., Chedotal, A., Tomancak, P., et al. (2020). Tissue clearing and its applications in neuroscience. Nat. Rev. Neurosci. 21, 61–79. doi: 10.1038/s41583-019-0250-1

Valuchova, S., Mikulkova, P., Pecinkova, J., Klimova, J., Krumnikl, M., Bainar, P., et al. (2020). Imaging plant germline differentiation within Arabidopsis flowers by light sheet microscopy. eLife 9:e52546. doi: 10.7554/eLife.52546

van der Walt, S., Schonberger, J. L., Nunez-Iglesias, J., Boulogne, F., Warner, J. D., Yager, N., et al. (2014). scikit-image: image processing in Python. PeerJ 2:e453. doi: 10.7717/peerj.453

Van Valen, D. A., Kudo, T., Lane, K. M., Macklin, D. N., Quach, N. T., DeFelice, M. M., et al. (2016). Deep learning automates the quantitative analysis of individual cells in live-cell imaging experiments. PLoS Comput. Biol. 12:e1005177. doi: 10.1371/journal.pcbi.1005177

Vidavsky, N., Akiva, A., Kaplan-Ashiri, I., Rechav, K., Addadi, L., Weiner, S., et al. (2016). Cryo-FIB-SEM serial milling and block face imaging: large volume structural analysis of biological tissues preserved close to their native state. J. Struct. Biol. 196, 487–495. doi: 10.1016/j.jsb.2016.09.016

Vincent, L., and Soille, P. (1991). Watersheds in digital spaces: an efficient algorithm based on immersion simulations. IEEE Trans. Pattern Anal. Machine Intell. 13, 583–598. doi: 10.1109/34.87344

Virtanen, P., Gommers, R., Oliphant, T. E., Haberland, M., Reddy, T., Cournapeau, D., et al. (2020). SciPy 1.0: fundamental algorithms for scientific computing in Python. Nat. Methods 17, 261–272. doi: 10.1038/s41592-019-0686-2

Wang, S., Yang, D. M., Rong, R., Zhan, X., and Xiao, G. (2019). Pathology image analysis using segmentation deep learning algorithms. Am. J. Pathol. 189, 1686–1698. doi: 10.1016/j.ajpath.2019.05.007

Webster, J. D., and Dunstan, R. W. (2014). Whole-slide imaging and automated image analysis: considerations and opportunities in the practice of pathology. Vet. Pathol. 51, 211–223. doi: 10.1177/0300985813503570

Weigert, M., Schmidt, U., Haase, R., Sugawara, K., and Myers, G. (2019). Star-convex polyhedra for 3D object detection and segmentation in microscopy. arXiv[Preprint] doi: 10.1109/WACV45572.2020.9093435

Weigert, M., Schmidt, U., Haase, R., Sugawara, K., and Myers, G. (2020). “Star-convex Polyhedra for 3D object detection and segmentation in microscopy,” in Proceedings of The IEEE Winter Conference on Applications of Computer Vision (WACV), (Piscataway, NJ: IEEE), 3655–3662.

Williams, E., Moore, J., Li, S. W., Rustici, G., Tarkowska, A., Chessel, A., et al. (2017). The image data resource: a bioimage data integration and publication platform. Nat. Methods 14, 775–781. doi: 10.1038/nmeth.4326

Wolny, A., Cerrone, L., Vijayan, A., Tofanelli, R., Barro, A. V., Louveaux, M., et al. (2020). Accurate and versatile 3D segmentation of plant tissues at cellular resolution. eLife 9:e57613. doi: 10.7554/eLife.57613

Zhang, W., Li, R., Deng, H., Wang, L., Lin, W., Ji, S., et al. (2015). Deep convolutional neural networks for multi-modality isointense infant brain image segmentation. Neuroimage 108, 214–224. doi: 10.1016/j.neuroimage.2014.12.061

Zuiderveld, K. (1994). “Contrast limited adaptive histogram equalization,” in Graphics gems IV, ed. P. S. Heckbert (Cambridge, MA: Academic Press Professional, Inc), 474–485. doi: 10.1016/b978-0-12-336156-1.50061-6

Keywords: deep learning, biomedical image analysis, segmentation, convolutional neural network, U-net, cellpose, StarDist, python

Citation: Rasse TM, Hollandi R and Horvath P (2020) OpSeF: Open Source Python Framework for Collaborative Instance Segmentation of Bioimages. Front. Bioeng. Biotechnol. 8:558880. doi: 10.3389/fbioe.2020.558880

Received: 04 May 2020; Accepted: 15 September 2020;
Published: 06 October 2020.

Edited by:

Mehdi Pirooznia, National Heart, Lung, and Blood Institute (NHLBI), United States

Copyright © 2020 Rasse, Hollandi and Horvath. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Tobias M. Rasse, [email protected]

Source: https://www.frontiersin.org/articles/10.3389/fbioe.2020.558880/full

Customer segmentation with Python

Introduction

Customer segmentation is important for businesses to understand their target audience. Different advertisements can be curated and sent to different audience segments based on their demographic profile, interests, and affluence level.

There are many unsupervised machine learning algorithms that can help companies identify their user base and create consumer segments.

In this article, we will be looking at a popular unsupervised learning technique called K-Means clustering.

This algorithm can take in unlabelled customer data and assign each data point to clusters.

The goal of K-Means is to group all the data available into non-overlapping sub-groups that are distinct from each other.

That means each sub-group/cluster will consist of features that distinguish them from other clusters.

K-Means clustering is a commonly used technique by data scientists to help companies with customer segmentation. It is an important skill to have, and most data science interviews will test your understanding of this algorithm/your ability to apply it to real life scenarios.

In this article, you will learn the following:

  • Data pre-processing for K-Means clustering
  • Building a K-Means clustering algorithm from scratch
  • The metrics used to evaluate the performance of a clustering model
  • Visualizing clusters built
  • Interpretation and analysis of clusters built

Pre-requisites

You can download the dataset for this tutorial here.

Make sure to have the following libraries installed before getting started: pandas, numpy, matplotlib, seaborn, scikit-learn, kneed.

Once you're done, we can start building the model!

Imports and reading the data frame

Run the following lines of code to import the necessary libraries and read the dataset:
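
The original code cells are not reproduced on this page, so here is a minimal sketch of what this step could look like. The file name 'customers.csv' and the use of pd.read_csv are assumptions; adjust them to wherever and however you saved the dataset.

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

# Read the dataset (file name assumed)
df = pd.read_csv('customers.csv')

# Drop the identifier column, since it carries no cluster information
df = df.drop('CustomerID', axis=1)

print(df.head())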

Now, let's take a look at the head of the data frame:

There are five variables in the dataset. CustomerID is the unique identifier of each customer in the dataset, and we can drop this variable. It doesn't provide us with any useful cluster information.

Since gender is a categorical variable, it needs to be encoded and converted into a numeric one.

All other variables will be scaled before being fed into the model so that they are on a comparable scale. We will standardize these variables to a mean of 0 and a standard deviation of 1.

Standardizing variables

First, let's standardize all the variables in the dataset to get them onto the same scale.

You can learn more about standardization here.
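
As a sketch, assuming the numeric columns are named 'Age', 'Annual_Income', and 'Spending_Score' (rename these to match your copy of the data), standardization could be done with scikit-learn's StandardScaler:

from sklearn.preprocessing import StandardScaler

numeric_cols = ['Age', 'Annual_Income', 'Spending_Score']  # assumed column names
scaler = StandardScaler()
df[numeric_cols] = scaler.fit_transform(df[numeric_cols])

print(df.head())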

Now, let's take a look at the head of the data frame:

We can see that all the variables have been transformed, and are now centered around zero.

One hot encoding

The variable 'gender' is categorical, and we need to transform this into a numeric variable.

This means that we need to substitute numbers for each category. We can do this with Pandas using pd.get_dummies().
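
A sketch of this step, keeping only the 'Gender_Female' column as described below (the column name 'Gender' is an assumption):

# Encode gender as dummy variables and keep only Gender_Female
gender_dummies = pd.get_dummies(df['Gender'], prefix='Gender')
df = df.drop('Gender', axis=1)
df['Gender_Female'] = gender_dummies['Gender_Female'].astype(int)

print(df.head())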

Let's take a look at the head of the data frame again:

We can see that the gender variable has been transformed. You might have noticed that we dropped 'Gender_Male' from the data frame. This is because there is no need to keep the variable anymore.

The values for 'Gender_Male' can be inferred from 'Gender_Female,' (that is, if 'Gender_Female' is 0, then 'Gender_Male' will be 1 and vice versa).

To learn more about one-hot encoding on categorical variables, you can watch this YouTube video.

Building the clustering model
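
The original code is not shown here; a minimal sketch of fitting K-Means for a range of cluster counts and recording the inertia (within-cluster sum of squared distances) for each value of k could look like this:

from sklearn.cluster import KMeans

k_values = range(1, 11)
inertias = []
for k in k_values:
    km = KMeans(n_clusters=k, random_state=42)
    km.fit(df)
    inertias.append(km.inertia_)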

Visualizing the model's performance:
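
The elbow can then be plotted, and optionally located automatically with the kneed package listed in the prerequisites (this is a sketch, not the article's original code):

from kneed import KneeLocator

plt.plot(list(k_values), inertias, 'o-')
plt.xlabel('Number of clusters (k)')
plt.ylabel('Inertia')
plt.title('Elbow plot')
plt.show()

knee = KneeLocator(list(k_values), inertias, curve='convex', direction='decreasing')
print('Suggested number of clusters:', knee.elbow)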

We can see that the optimal number of clusters is 4.

Now, let's take a look at another clustering metric.

Silhouette coefficient

A silhouette coefficient, or silhouette score, is a metric used to evaluate the quality of the clusters created by the algorithm.

Silhouette scores range from -1 to +1. The higher the silhouette score, the better the model.

The silhouette score measures the distance between all the data points within the same cluster. The lower this distance, the better the silhouette score.

It also measures the distance between an object and the data points in the nearest cluster. The higher this distance, the better.

A silhouette score closer to +1 indicates good clustering performance, and a silhouette score closer to -1 indicates a poor clustering model.

Let's calculate the silhouette score of the model we just built:
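
A sketch using scikit-learn's silhouette_score, assuming we refit the model with the 4 clusters suggested by the elbow plot:

from sklearn.metrics import silhouette_score

model1 = KMeans(n_clusters=4, random_state=42)
labels1 = model1.fit_predict(df)

print('Silhouette score:', silhouette_score(df, labels1))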

The silhouette score of this model is about 0.35.

This isn't a bad model, but we can do better and try getting higher cluster separation.

Before we try doing that, let's visualize the clusters we just built to get an idea of how well the model is doing:
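
One simple way to visualize the clusters is a scatter plot of two of the features, colored by cluster label (the column names below are assumptions):

plt.scatter(df['Annual_Income'], df['Spending_Score'], c=labels1, cmap='viridis')
plt.xlabel('Annual income (standardized)')
plt.ylabel('Spending score (standardized)')
plt.title('K-Means clusters (model 1)')
plt.show()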

The output of the above code is as follows:

From the above diagram, we can see that cluster separation isn't too great.

The red points are mixed with the blue, and the green are overlapping the yellow.

This, along with the silhouette score shows us that the model isn't performing too well.

Now, let's create a new model that has better cluster separability than this one.

Building clustering model #2

For this model, let's do some feature selection.

We can use a technique called Principal Component Analysis (PCA).

PCA is a technique that helps us reduce the dimensionality of a dataset. When we run PCA on a data frame, new components are created. These components explain the maximum variance in the data.

We can select a subset of these components and include them in the K-Means model.

Now, let's run PCA on the dataset:
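
A sketch of this step, fitting PCA on the preprocessed data frame and plotting the variance explained by each component:

from sklearn.decomposition import PCA

pca = PCA()
pca.fit(df)

components = range(1, len(pca.explained_variance_ratio_) + 1)
plt.bar(components, pca.explained_variance_ratio_)
plt.xlabel('PCA component')
plt.ylabel('Explained variance ratio')
plt.show()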

The above code will render the following chart:

This chart shows us each PCA component, along with its explained variance.

Based on this visualization, we can see that the first two PCA components explain around 70% of the dataset variance.

We can feed these two components into the model.

Let's build the model again with the first two principal components, and decide on the number of clusters to use:
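
As a sketch, we can project the data onto the first two components and repeat the elbow analysis on the projected data:

pca2 = PCA(n_components=2)
df_pca = pca2.fit_transform(df)

inertias_pca = []
for k in range(1, 11):
    km = KMeans(n_clusters=k, random_state=42)
    km.fit(df_pca)
    inertias_pca.append(km.inertia_)

plt.plot(range(1, 11), inertias_pca, 'o-')
plt.xlabel('Number of clusters (k)')
plt.ylabel('Inertia')
plt.show()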

The code above will render the following chart:

Again, it looks like the optimal number of clusters is 4.

We can calculate the silhouette score for this model with 4 clusters:
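
Continuing the sketch from above:

model2 = KMeans(n_clusters=4, random_state=42)
labels2 = model2.fit_predict(df_pca)

print('Silhouette score (PCA model):', silhouette_score(df_pca, labels2))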

The silhouette score of this model is 0.42, which is better than the previous model we created.

We can visualize the clusters for this model just like we did earlier:

Model 1 vs Model 2

Let's compare the cluster separability of this model to that of the first model:

Notice that the clusters in the second model are much better separated than that in the first model.

Furthermore, the silhouette score of the second model is a lot higher.

For these reasons, we can pick the second model to go forward with our analysis.

Cluster Analysis

Now that we're done building these different clusters, let's try to interpret them and look at the different customer segments.

First, let's map the clusters back to the dataset and take a look at the head of the data frame.
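
Continuing the sketch, the labels from the second model can be attached to the data frame (whether you attach them to the scaled or the original values is up to you; the original values are easier to interpret):

df['Cluster'] = labels2
print(df.head())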

Notice that each row in the data frame is now assigned to a cluster.

To compare attributes of the different clusters, let's find the average of all variables across each cluster:
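
A sketch of the aggregation step:

cluster_means = df.groupby('Cluster').mean()
print(cluster_means)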

We can interpret these clusters more easily if we visualize them. Run a few lines of code to come up with different visualizations of each variable:
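
One possible set of plots (a sketch, not the article's original code; column names are assumptions):

sns.boxplot(x='Cluster', y='Age', data=df)
plt.show()
sns.boxplot(x='Cluster', y='Annual_Income', data=df)
plt.show()
sns.boxplot(x='Cluster', y='Spending_Score', data=df)
plt.show()
sns.countplot(x='Cluster', hue='Gender_Female', data=df)
plt.show()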

Spending Score vs Annual Income vs Age

Gender Breakdown

Main attributes of each segment

Cluster 0:

  • High average annual income, low spending.
  • Mean age is around 40 and gender is predominantly male.

Cluster 1:

  • Low to mid average income, average spending capacity.
  • Mean age is around 50 and gender is predominantly female.

Cluster 2:

  • Low average income, high spending score.
  • Mean age is around 25 and gender is predominantly female.

Cluster 3:

  • High average income, high spending score.
  • Mean age is around 30 and gender is predominantly female.

It is important to note that calculating the median age would provide better insight into the distribution of age within each cluster.

Also, females are more highly represented in the entire dataset, which is why most clusters contain a larger number of females than males. We can find the percentage of each gender relative to the numbers in the entire dataset to give us a better idea of gender distribution.

Building personas around each cluster

Now that we know the attributes of each cluster, we can build personas around them.

Being able to tell a story around your analysis is an important skill to have as a data scientist.

This will help your clients or stakeholders understand your findings more easily.

Here is an example of building consumer personas based on the clusters created:

Cluster 0: The frugal spender

This persona comprises middle-aged individuals who are very careful with money.

Despite having the highest average income compared to individuals in all other clusters, they spend the least.

This might be because they have financial responsibilities - like saving up for their kid's higher education.

Recommendation: Promos, coupons, and discount codes will attract individuals in this segment due to their tendency to spend less.

Cluster 1: Almost retired

This segment comprises an older group of people.

They earn less and spend less, and are probably saving up for retirement.

Recommendation: Marketing to these individuals can be done through Facebook, which appeals to an older demographic. Promote healthcare related products to people in this segment.

Cluster 2: The careless buyer

This segment is made up of a younger age group.

Individuals in this segment are most likely first jobbers. They make the least amount of money compared to all other segments.

However, they are very high spenders.

These are enthusiastic young individuals who enjoy living a good lifestyle, and tend to spend above their means.

Recommendation: Since these are young individuals who spend a lot, providing them with travel coupons or hotel discounts might be a good idea. Providing them with discounts off top clothing and makeup brands would also work well for this segment.

Cluster 3: Highly affluent individuals

This segment is made up of middle-aged individuals.

These are individuals who have worked hard to build up a significant amount of wealth.

They also spend large amounts of money to live a good lifestyle.

These individuals have likely just started a family, and are leading baby or family-focused lifestyles. It is a good idea to promote baby or child related products to these individuals.

Recommendation: Due to their large spending capacity and their demographic, these individuals are likely to be looking for properties to buy or invest in. They are also more likely than all other segments to take out housing loans and make serious financial commitments.


Conclusion

We have successfully built a K-Means clustering model for customer segmentation. We also explored cluster interpretation, and analyzed the behaviour of individuals in each cluster.

Finally, we took a look at some business recommendations that could be provided based on the attributes of each individual in the cluster.

You can use the analysis above as starter code for any clustering or segmentation project in the future.

Source: https://www.natasshaselvaraj.com/customer-segmentation-with-python/

Image Segmentation using Python’s scikit-image module.

Sooner or later all things are numbers, including images.

People who have seen The Terminator would definitely agree that it was the greatest sci-fi movie of that era. In the movie, James Cameron introduced an interesting visual effect concept that made it possible for the viewers to get behind the eyes of the cyborg called Terminator. This effect came to be known as the Terminator Vision and, in a way, it segmented humans from the background. It might have sounded totally out of place then, but image segmentation forms a vital part of many image processing techniques today.

We all are pretty aware of the endless possibilities offered by Photoshop or similar graphics editors that take a person from one image and place them into another. However, the first step of doing this is identifying where that person is in the source image, and this is where image segmentation comes into play. There are many libraries written for image analysis purposes. In this article, we will be discussing in detail scikit-image, a Python-based image processing library.

The entire code can also be accessed from the Github Repository associated with this article.

Scikit-image is a Python package dedicated to image processing.

Installation

scikit-image can be installed as follows:

pip install scikit-image

# For Conda-based distributions
conda install -c conda-forge scikit-image

Overview of Images in Python

Before proceeding with the technicalities of Image Segmentation, it is essential to get a little familiar with the scikit image ecosystem and how it handles images.

  • Importing a GrayScale Image from the skimage library

The skimage data module contains some inbuilt example data sets which are generally stored in jpeg or png format.

from skimage import data
import numpy as np
import matplotlib.pyplot as plt

image = data.binary_blobs()
plt.imshow(image, cmap='gray')
  • Importing a Colored Image from the skimage library
from skimage import data
import numpy as np
import matplotlib.pyplot as plt

image = data.astronaut()
plt.imshow(image)
  • Importing an image from an external source
# The I/O module is used for importing the image
from skimage import data
import numpy as np
import matplotlib.pyplot as plt
from skimage import io

image = io.imread('skimage_logo.png')
plt.imshow(image);

# Loading a collection of images
images = io.ImageCollection('../images/*.png:../images/*.jpg')
print('Type:', type(images))
images.files
# Out[]: Type: <class 'skimage.io.collection.ImageCollection'>

# Saving the image as 'logo.png'
io.imsave('logo.png', image)

Now that we have an idea about scikit-image, let us get into details of Image Segmentation. Image Segmentation is essentially the process of partitioning a digital image into multiple segments to simplify and/or change the representation of an image into something that is more meaningful and easier to analyze.

In this article, we will approach the Segmentation process as a combination of Supervised and Unsupervised algorithms.

Supervised segmentation: Some prior knowledge, possibly from human input, is used to guide the algorithm.

Unsupervised segmentation: No prior knowledge is required. These algorithms attempt to subdivide images into meaningful regions automatically. The user may still be able to tweak certain settings to obtain desired outputs.

Let’s begin with the simplest algorithm called Thresholding.

Thresholding

It is the simplest way to segment objects from background by choosing pixels above or below a certain threshold. This is generally helpful when we intend to segment objects from their background. You can read more about thresholding here.

Let’s try this on an image of a textbook that comes preloaded with the scikit-image dataset.

Basic Imports

import numpy as np
import matplotlib.pyplot as plt

import skimage.data as data
import skimage.segmentation as seg
import skimage.filters as filters
import skimage.draw as draw
import skimage.color as color

A simple function to plot the images

def image_show(image, nrows=1, ncols=1, cmap='gray'):
    fig, ax = plt.subplots(nrows=nrows, ncols=ncols, figsize=(14, 14))
    ax.imshow(image, cmap=cmap)
    ax.axis('off')
    return fig, ax

Image

text = data.page()
image_show(text)

This image is a little darker but maybe we can still pick a value that will give us a reasonable segmentation without any advanced algorithms. Now to help us in picking that value, we will use a Histogram.

A histogram is a graph showing the number of pixels in an image at different intensity values found in that image. Simply put, a histogram is a graph wherein the x-axis shows all the values that are in the image while the y-axis shows the frequency of those values.

fig, ax = plt.subplots(1, 1)
ax.hist(text.ravel(), bins=32, range=[0, 256])
ax.set_xlim(0, 256);

Our example happens to be an 8-bit image so we have a total of 256 possible values on the x-axis. We observe that there is a concentration of pixels that are fairly light(0: black, 255: white). That’s most likely our fairly light text background but then the rest of it is kind of smeared out. An ideal segmentation histogram would be bimodal and fairly separated so that we could pick a number right in the middle. Now, let’s just try and make a few segmented images based on simple thresholding.

Supervised thresholding

Since we will be choosing the thresholding value ourselves, we call it supervised thresholding.

text_segmented = text > 50  # try a value concluded from the histogram, e.g. 50, 70, or 120
image_show(text_segmented);

We didn’t get any ideal results since the shadow on the left creates problems. Let’s try with unsupervised thresholding now.

Unsupervised thresholding

Scikit-image has a number of automatic thresholding methods, which require no input in choosing an optimal threshold. Some of the methods are threshold_otsu, threshold_li, and threshold_local:

text_threshold = filters.threshold_otsu(text)  # hit tab with the cursor after the underscore to get all the methods
image_show(text < text_threshold);

In the case of threshold_local, we also need to specify the block_size. The offset parameter helps to tune the result for better segmentation.

text_threshold = filters.threshold_local(text,block_size=51, offset=10)
image_show(text > text_threshold);

This is pretty good and has got rid of the noisy regions to a large extent.

Thresholding is a very basic segmentation process and will not work properly in a high-contrast image for which we will be needing more advanced tools.

For this section, we will use an example image that is freely available and attempt to segment the head portion using supervised segmentation techniques.

# import the image
from skimage import io
image = io.imread('girl.jpg')
plt.imshow(image);

Before doing any segmentation on an image, it is a good idea to de-noise it using some filters.

However, in our case, the image is not very noisy, so we will take it as it is. The next step would be to convert the image to grayscale with rgb2gray.

image_gray = color.rgb2gray(image)
image_show(image_gray);

We will use two segmentation methods that work on entirely different principles.

Active contour segmentation

Active Contour segmentation, also called snakes, is initialized using a user-defined contour or line around the area of interest; this contour then slowly contracts and is attracted or repelled by light and edges.

For our example image, let’s draw a circle around the person’s head to initialize the snake.

def circle_points(resolution, center, radius):
    """Generate points which define a circle on an image.
    Centre refers to the centre of the circle.
    """
    radians = np.linspace(0, 2*np.pi, resolution)
    c = center[1] + radius*np.cos(radians)  # polar co-ordinates
    r = center[0] + radius*np.sin(radians)
    return np.array([c, r]).T

# Exclude last point because a closed path should not have duplicate points
points = circle_points(200, [80, 250], 80)[:-1]

The above calculations calculate x and y co-ordinates of the points on the periphery of the circle. Since we have given the resolution to be 200, it will calculate 200 such points.

fig, ax = image_show(image)
ax.plot(points[:, 0], points[:, 1], '--r', lw=3)

The algorithm then segments the face of a person from the rest of an image by fitting a closed curve to the edges of the face.

snake = seg.active_contour(image_gray, points)

fig, ax = image_show(image)
ax.plot(points[:, 0], points[:, 1], '--r', lw=3)
ax.plot(snake[:, 0], snake[:, 1], '-b', lw=3);

We can tweak the parameters called alpha and beta. Higher values of alpha will make this snake contract faster, while beta makes the snake smoother.

snake = seg.active_contour(image_gray, points, alpha=0.06, beta=0.3)

fig, ax = image_show(image)
ax.plot(points[:, 0], points[:, 1], '--r', lw=3)
ax.plot(snake[:, 0], snake[:, 1], '-b', lw=3);

Random walker segmentation

In this method, a user interactively labels a small number of pixels which are known as labels. Each unlabeled pixel is then imagined to release a random walker and one can then determine the probability of a random walker starting at each unlabeled pixel and reaching one of the prelabeled pixels. By assigning each pixel to the label for which the greatest probability is calculated, high-quality image segmentation may be obtained. Read the Reference paper here.

We will re-use the seed values from our previous example here. We could have
done different initializations but for simplicity let’s stick to circles.

image_labels = np.zeros(image_gray.shape, dtype=np.uint8)

The random walker algorithm expects a label image as input. So we will have the bigger circle that encompasses the person’s entire face and another smaller circle near the middle of the face.

indices = draw.circle_perimeter(80, 250, 20)  # smaller circle near the middle of the face
image_labels[indices] = 1
image_labels[points[:, 1].astype(int), points[:, 0].astype(int)] = 2
image_show(image_labels);

Now, let’s use Random Walker and see what happens.

image_segmented = seg.random_walker(image_gray, image_labels)

# Check our results
fig, ax = image_show(image_gray)
ax.imshow(image_segmented == 1, alpha=0.3);

It doesn’t look like it’s grabbing edges as we wanted. To resolve this situation we can tune in the beta parameter until we get the desired results. After several attempts, a value of 3000 works reasonably well.

image_segmented = seg.random_walker(image_gray, image_labels, beta=3000)

# Check our results
fig, ax = image_show(image_gray)
ax.imshow(image_segmented == 1, alpha=0.3);

That’s all for Supervised Segmentation where we had to provide certain inputs and also had to tweak certain parameters. However, it is not always possible to have a human looking at an image and then deciding what inputs to give or where to start from. Fortunately, for those situations, we have Unsupervised segmentation techniques.

Unsupervised segmentation requires no prior knowledge. Consider an image that is so large that it is not feasible to consider all pixels simultaneously. In such cases, unsupervised segmentation can break down the image into several sub-regions, so instead of millions of pixels, you have tens to hundreds of regions. Let's look at two such algorithms:

SLIC( Simple Linear Iterative Clustering)

SLIC algorithm actually uses a machine-learning algorithm called K-Means under the hood. It takes in all the pixel values of the image and tries to separate them out into the given number of sub-regions. Read the Reference Paper here.

SLIC works in color so we will use the original image.

image_slic = seg.slic(image,n_segments=155)

All we’re doing is just setting each sub-image or sub-region that we have found, to the average of that region which makes it look less like a patchwork of randomly assigned colors and more like an image that has been decomposed into areas that are kind of similar.

# label2rgb replaces each discrete label with the average interior color
image_show(color.label2rgb(image_slic, image, kind='avg'));

We’ve reduced this image from 512*512 = 262,000 pixels down to 155 regions.

Felzenszwalb

This algorithm also uses a machine-learning algorithm called minimum-spanning tree clustering under the hood. Felzenszwalb doesn't tell us the exact number of clusters that the image will be partitioned into. It's going to run and generate as many clusters as it thinks is appropriate for the given scale or zoom factor on the image. The Reference Paper can be accessed here.

image_felzenszwalb = seg.felzenszwalb(image)
image_show(image_felzenszwalb);

These are a lot of regions. Let’s calculate the number of unique regions.

np.unique(image_felzenszwalb).size
# 3368

Now let’s recolor them using the region average just as we did in the SLIC algorithm.

image_felzenszwalb_colored = color.label2rgb(image_felzenszwalb, image, kind='avg')
image_show(image_felzenszwalb_colored);

Now we get reasonably smaller regions. If we wanted still fewer regions, we could change the scale parameter or start here and combine them. This approach is sometimes called over-segmentation.

This almost looks more like a posterized image which is essentially just a reduction in the number of colors. To combine them again, you can use the Region Adjacency Graph(RAG) but that’s beyond the scope of this article.

Till now, we went over image segmentation techniques using only the scikit image module. However, it will be worth mentioning some of the image segmentation techniques which use deep learning. Here is a wonderful blog post that focuses on image segmentation architectures, Losses, Datasets, and Frameworks that you can use for your image segmentation projects.

Image segmentation is a very important image processing step. It is an active area of research with applications ranging from computer vision to medical imagery to traffic and video surveillance. Python provides a robust library in the form of scikit-image having a large number of algorithms for image processing. It is available free of charge and free of restriction having an active community behind it. Have a look at their documentation to learn more about the library and its use cases.

Source: https://towardsdatascience.com/image-segmentation-using-pythons-scikit-image-module-533a61ecc980

Plan Of Attack

Suppose we have a company that sells a product, and we want to know how well that product is selling.

We have data we can analyze, but what kind of analysis can we do?

Well, we can segment customers based on their buying behavior in the market.

Keep in mind that the data is huge, and we cannot analyze it by eye alone. We will use machine learning algorithms and the power of computing for it.

This article will show you how to cluster customers into segments based on their behavior, using the K-Means algorithm in Python.

I hope that this article will help you do customer segmentation step by step, from preparing the data to clustering it.

Before we get into the process, I will give you a brief overview of the steps we will take.

  • Gather the data
  • Create Recency Frequency Monetary (RFM) table
  • Manage skewness and scale each variable
  • Explore the data
  • Cluster the data
  • Interpret the result

Data Gathering

In this step, we will gather the data first. For this case, we will take data from the UCI Machine Learning Repository: the Online Retail dataset.

The dataset itself is transactional data that contains transactions from December 1st 2010 until December 9th 2011 for a UK-based online retailer.

Each row represents a transaction that occurred. It includes the product name, quantity, price, and other columns that represent IDs.

You can access the dataset from here.

Here is the size of the dataset.

(541909, 8)

For this case, we don’t use all of the rows. Instead, we will sample 10000 rows from the dataset, and we assume that as the whole transactions that the customers do.

The code will look like this,

# Import The Libraries
# ! pip install xlrd
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np

# Import The Dataset
df = pd.read_excel('dataset.xlsx')
df = df[df['CustomerID'].notna()]

# Sample the dataset
df_fix = df.sample(10000, random_state=42)

Here is a glimpse of the dataset:

Create The RFM Table

After we sample the data, we will reshape it so that it is easier to analyze.

To segment customers, there are some metrics that we can use, such as when the customer last bought the product, how frequently the customer buys the product, and how much the customer pays for the product. We will call this segmentation RFM segmentation.

To make the RFM table, we can create these columns: the Recency, Frequency, and MonetaryValue columns.

To get the number of days for the Recency column, we can subtract the date of each customer's most recent transaction from the snapshot date.

To create the Frequency column, we can count how many transactions each customer made.

Lastly, to create the MonetaryValue column, we can sum the value of all transactions for each customer.

The code looks like this,

# Convert to show date only
from datetime import datetime
df_fix["InvoiceDate"] = df_fix["InvoiceDate"].dt.date

# Create TotalSum column
df_fix["TotalSum"] = df_fix["Quantity"] * df_fix["UnitPrice"]

# Create date variable that records recency
import datetime
snapshot_date = max(df_fix.InvoiceDate) + datetime.timedelta(days=1)

# Aggregate data by each customer
customers = df_fix.groupby(['CustomerID']).agg({
    'InvoiceDate': lambda x: (snapshot_date - x.max()).days,
    'InvoiceNo': 'count',
    'TotalSum': 'sum'})

# Rename columns
customers.rename(columns={'InvoiceDate': 'Recency',
                          'InvoiceNo': 'Frequency',
                          'TotalSum': 'MonetaryValue'}, inplace=True)

Here is a glimpse of the dataset:

Right now, the dataset consists of the Recency, Frequency, and MonetaryValue columns. But we cannot use it yet, because we have to preprocess the data further.

Manage Skewness and Scaling

We have to make sure that the data meet the following assumptions:

The variables should not be skewed and should have the same mean and variance.

Because of that, we have to manage the skewness of the variables.

Here are the visualizations of each variable,

As we can see from above, we have to transform the data, so it has a more symmetrical form.

There are some methods that we can use to manage the skewness, they are,

  • log transformation
  • square root transformation
  • box-cox transformation

Note: We can use the transformation if and only if the variable only has positive values.

Below are the visualization each variable and with and without transformations. From top left clockwise on each variable shows the plot without transformation, log transformation, square root transformation, and box-cox transformation.

Based on that visualization, the variables with the box-cox transformation show a more symmetrical form than with the other transformations.

To make sure, we calculate the skewness of each variable using the skew function. The result looks like this:

Variable     Without    Log      Sqrt    Box-Cox
Recency      14.77      0.85     3.67    0.16
Frequency    0.93       -0.72    0.32    -0.10

Here is how to interpret the skewness value: if the value is close to 0, the variable has a roughly symmetrical distribution; otherwise, it is skewed.

Based on that calculation, we will utilize the variables with the box-cox transformation, except for the MonetaryValue variable, because it includes negative values. To handle this variable, we can apply a cube root transformation instead, so the comparison looks like this:

By using this transformation, we will have data that is less skewed. The skewness value declines from 16.63 to 1.16. Therefore, we can transform the RFM table with this code:

from scipy import stats
customers_fix = pd.DataFrame()
customers_fix["Recency"] = stats.boxcox(customers['Recency'])[0]
customers_fix["Frequency"] = stats.boxcox(customers['Frequency'])[0]
customers_fix["MonetaryValue"] = pd.Series(np.cbrt(customers['MonetaryValue'])).values
customers_fix.tail()

It will look like this,

Can we use the data right now? Not yet. If we look at the plot once more, the variables do not have the same mean and variance. We have to normalize them. To normalize, we can use the StandardScaler object from the scikit-learn library. The code will look like this:

# Import library
from sklearn.preprocessing import StandardScaler

# Initialize the Object
scaler = StandardScaler()

# Fit and Transform The Data
scaler.fit(customers_fix)
customers_normalized = scaler.transform(customers_fix)

# Assert that it has mean 0 and variance 1
print(customers_normalized.mean(axis=0).round(2))  # [0. -0. 0.]
print(customers_normalized.std(axis=0).round(2))   # [1. 1. 1.]

The data will look like this,

Finally, we can do clustering using that data.

Modelling

Right after preprocessing the data, we can focus on modelling. To segment the data, we can use the K-Means algorithm.

The K-Means algorithm is an unsupervised learning algorithm that uses distances to determine which cluster each data point belongs to. Given a set of centroids, we calculate the distance of each point to every centroid and assign the point to the centroid with the smallest distance. The centroids are then recomputed, and this repeats until the total distance no longer changes significantly.

Implementing K-Means in Python is easy. We can use the KMeans class from scikit-learn to do this.

To make our clustering reach its best performance, we have to determine which hyperparameter fits the data. To decide which hyperparameter is best for our model and data, we can use the elbow method. The code will look like this:

from sklearn.cluster import KMeans
import seaborn as sns

sse = {}
for k in range(1, 11):
    kmeans = KMeans(n_clusters=k, random_state=42)
    kmeans.fit(customers_normalized)
    sse[k] = kmeans.inertia_  # SSE to closest cluster centroid

plt.title('The Elbow Method')
plt.xlabel('k')
plt.ylabel('SSE')
sns.pointplot(x=list(sse.keys()), y=list(sse.values()))
plt.show()

Here is the result,

How do we interpret the plot? The x-axis is the value of k, and the y-axis is the SSE of the data. We pick the best k by looking for the point after which the SSE only decreases in a roughly linear fashion for consecutive values of k.

Based on our observation, a k-value of 3 is the best hyperparameter for our model, because the subsequent k-values tend to follow a linear trend. Therefore, our best model for the data is K-Means with 3 clusters.

Now, we can fit the model with this code:

model = KMeans(n_clusters=3, random_state=42)
model.fit(customers_normalized)
model.labels_.shape

By fitting the model, we obtain the cluster that each data point belongs to. With that, we can analyze the data.

Interpret The Segment

We can summarize the RFM table based on clusters and calculate the mean of each variable. The code will look like this,

customers["Cluster"] = model.labels_
customers.groupby('Cluster').agg({
'Recency':'mean',
'Frequency':'mean',
'MonetaryValue':['mean', 'count']}).round(2)

The output from the code looks like this,

Besides that, we can analyze the segments using a snake plot. It requires the normalized dataset and the cluster labels. By using this plot, we can get a good visualization of how the clusters differ from each other. We can make the plot with this code:

# Create the dataframe
df_normalized = pd.DataFrame(customers_normalized, columns=['Recency', 'Frequency', 'MonetaryValue'])
df_normalized['ID'] = customers.index
df_normalized['Cluster'] = model.labels_

# Melt The Data
df_nor_melt = pd.melt(df_normalized.reset_index(),
                      id_vars=['ID', 'Cluster'],
                      value_vars=['Recency', 'Frequency', 'MonetaryValue'],
                      var_name='Attribute',
                      value_name='Value')
df_nor_melt.head()

# Visualize it
sns.lineplot(x='Attribute', y='Value', hue='Cluster', data=df_nor_melt)

And here is the result,

By using this plot, we know how each segment differs. It tells us more than the summarized table alone.

We infer that customers in cluster 0 buy frequently, spend more, and have bought recently. Therefore, this could be the cluster of loyal customers.

Customers in cluster 1 buy less frequently and spend less, but they have bought the product recently. Therefore, this could be the cluster of new customers.

Finally, customers in cluster 2 buy less frequently, spend less, and last bought a long time ago. Therefore, this could be the cluster of churned customers.

In conclusion, customer segmentation is necessary for understanding the characteristics of each group of customers. This article has shown how to implement it using Python. I hope that it is useful to you, and that you can apply it to your own case.

If you want to see how the code is, you check on this Google Colab here.

References

[1] Daqing C., Sai L.S, and Kun G. Data mining for the online retail industry: A case study of RFM model-based customer segmentation using data mining (2012), Journal of Database Marketing and Customer Strategy Management.
[2] Millman K. J, Aivazis M. Python for Scientists and Engineers (2011), Computing in Science & Engineering.
[3] Radečić D. Top 3 Methods for Handling Skewed Data (2020), Towards Data Science.
[4] Elbow Method for optimal value of k in KMeans, Geeks For Geeks.

Source: https://towardsdatascience.com/customer-segmentation-in-python-9c15acf6f945

Python segmentation

Image Segmentation

Image segmentation is the task of labeling the pixels of objects of interest in an image.

In this tutorial, we will see how to segment objects from a background. We use the coins image from skimage.data. This image shows several coins outlined against a darker background. The segmentation of the coins cannot be done directly from the histogram of gray values, because the background shares enough gray levels with the coins that a thresholding segmentation is not sufficient.

>>> from skimage import data
>>> from skimage.exposure import histogram
>>> coins = data.coins()
>>> hist, hist_centers = histogram(coins)

Simply thresholding the image leads either to missing significant parts of the coins, or to merging parts of the background with the coins. This is due to the inhomogeneous lighting of the image.


A first idea is to take advantage of the local contrast, that is, to use the gradients rather than the gray values.

Edge-based segmentation

Let us first try to detect edges that enclose the coins. For edge detection, we use the Canny detector of skimage.feature.canny:

>>> from skimage.feature import canny
>>> edges = canny(coins / 255.)

As the background is very smooth, almost all edges are found at the boundary of the coins, or inside the coins.

>>> from scipy import ndimage as ndi
>>> fill_coins = ndi.binary_fill_holes(edges)

Now that we have contours that delineate the outer boundary of the coins, we fill the inner part of the coins using the ndi.binary_fill_holes function, which uses mathematical morphology to fill the holes.


Most coins are well segmented out of the background. Small objects from the background can be easily removed by labeling the connected components with ndi.label and discarding objects smaller than a small threshold.

>>> label_objects, nb_labels = ndi.label(fill_coins)
>>> sizes = np.bincount(label_objects.ravel())
>>> mask_sizes = sizes > 20
>>> mask_sizes[0] = 0
>>> coins_cleaned = mask_sizes[label_objects]

However, the segmentation is not very satisfying, since one of the coins has not been segmented correctly at all. The reason is that the contour that we got from the Canny detector was not completely closed, therefore the filling function did not fill the inner part of the coin.


Therefore, this segmentation method is not very robust: if we miss a single pixel of the contour of the object, we will not be able to fill it. Of course, we could try to dilate the contours in order to close them. However, it is preferable to try a more robust method.

Region-based segmentation

Let us first determine markers of the coins and the background. These markers are pixels that we can label unambiguously as either object or background. Here, the markers are found at the two extreme parts of the histogram of gray values:

>>> markers = np.zeros_like(coins)
>>> markers[coins < 30] = 1
>>> markers[coins > 150] = 2

We will use these markers in a watershed segmentation. The name watershed comes from an analogy with hydrology. The watershed transform floods an image of elevation starting from markers, in order to determine the catchment basins of these markers. Watershed lines separate these catchment basins, and correspond to the desired segmentation.

The choice of the elevation map is critical for good segmentation. Here, the amplitude of the gradient provides a good elevation map. We use the Sobel operator for computing the amplitude of the gradient:

>>> from skimage.filters import sobel
>>> elevation_map = sobel(coins)

From the 3-D surface plot shown below, we see that high barriers effectively separate the coins from the background.


and here is the corresponding 2-D plot:


The next step is to find markers of the background and the coins based on the extreme parts of the histogram of gray values:

>>> markers = np.zeros_like(coins)
>>> markers[coins < 30] = 1
>>> markers[coins > 150] = 2

Let us now compute the watershed transform:

>>> from skimage.segmentation import watershed
>>> segmentation = watershed(elevation_map, markers)

With this method, the result is satisfying for all coins. Even if the markers for the background were not well distributed, the barriers in the elevation map were high enough for these markers to flood the entire background.

We remove a few small holes with mathematical morphology:

>>> segmentation = ndi.binary_fill_holes(segmentation - 1)

We can now label all the coins one by one using ndi.label:

>>> labeled_coins, _ = ndi.label(segmentation)
Source: https://scikit-image.org/docs/dev/user_guide/tutorial_segmentation.html
Image Segmentation Techniques in OpenCV Python

Introduction

In this article, we will show you how to do image segmentation in OpenCV Python by using multiple techniques. We will first explain what is image processing and cover some prerequisite concepts. And then we will go through different techniques and implementations one by one.

What is Image Segmentation?

Image segmentation is an image processing task in which the image is segmented or partitioned into multiple regions such that the pixels in the same region share common characteristics.

There are two forms of image segmentation:

  1. Local segmentation – It is concerned with a specific area or region of the image.
  2. Global segmentation – It is concerned with segmenting the entire image.

Types of Image Segmentation Approaches

  1. Discontinuity detection – segments an image into regions based on discontinuities in intensity; this is where edge detection comes in. Abrupt intensity changes are detected and used to establish region borders. Examples: histogram filtering and contour detection.
  2. Similarity detection – segments an image into regions based on resemblance. Thresholding, region growing, and region splitting and merging all fall under this approach: based on predefined criteria, they divide the image into groups of pixels with similar characteristics. Examples: K-means clustering, color detection/classification.
  3. Neural network approach – neural-network-based segmentation algorithms mimic the way the human brain learns in order to make decisions. This approach is widely used to segment medical images and separate objects of interest from the background. A neural network is made up of a large number of connected nodes, each with its own weight.

Pre-requisite Concepts

i) K-Means Algorithm

K-means is a clustering algorithm that is used to group data points into clusters such that data points lying in the same group are very similar to each other in characteristics.


K-means algorithm can be used to find subgroups in the image and assign the image pixel to that subgroup which results in image segmentation.

K-means Algorithm
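To illustrate the idea independently of the OpenCV implementation used later, here is a minimal sketch that clusters pixel colors with scikit-learn's KMeans (this assumes scikit-learn is installed; the filename and K=2 are illustrative):

import numpy as np
import cv2
from sklearn.cluster import KMeans

img = cv2.cvtColor(cv2.imread('image.jpg'), cv2.COLOR_BGR2RGB)
pixels = img.reshape(-1, 3).astype(np.float32)        # one row per pixel

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(pixels)
segmented = kmeans.cluster_centers_[kmeans.labels_]   # replace each pixel by its cluster center
segmented = segmented.reshape(img.shape).astype(np.uint8)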

ii) Contour Detection

Contours can be simply defined as curves/polygons formed by joining the pixels that are grouped together according to intensity or color values.

OpenCV provides us with built-in functions to detect these contours in images. Contour detection is generally applied to binary images (obtained from grayscale images) after edge detection or thresholding (or both) has been applied to them.

Contour detection with OpenCV
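As a minimal sketch of the idea (the filename and threshold value are illustrative; section 2 below covers this in detail), contours of a thresholded image can be found and drawn like this:

import cv2

gray = cv2.imread('image.jpg', cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
# OpenCV 4.x returns (contours, hierarchy); OpenCV 3.x returns (image, contours, hierarchy)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
outlined = cv2.drawContours(cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR), contours, -1, (0, 255, 0), 2)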

iii) Masking

The application of masks (binary images with only 0 or 1 as pixel values) to transform a picture is known as masking. The pixels of the picture that coincide with a zero in the mask are turned off when the mask is applied.

Mask applied to an image
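A minimal sketch of masking (the rectangle coordinates are arbitrary): build a zero-filled mask, turn on a region of interest, and apply it with a bitwise AND:

import cv2
import numpy as np

img = cv2.imread('image.jpg')
mask = np.zeros(img.shape[:2], dtype=np.uint8)   # single-channel mask, all pixels "off"
mask[50:200, 100:300] = 255                      # turn on an arbitrary rectangular region
masked = cv2.bitwise_and(img, img, mask=mask)    # pixels outside the rectangle become black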

iv) Color Detection

Detecting and classifying colors by their RGB colorspace values is known as color detection. For example:

(R, G, B)
Red    = (255, 0, 0)
Green  = (0, 255, 0)
Blue   = (0, 0, 255)
Orange = (255, 165, 0)
Purple = (128, 0, 128)
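As a toy sketch of classifying a pixel by its RGB value against the colors listed above (the helper function and the test pixel are illustrative):

import numpy as np

reference_colors = {
    'Red':    (255, 0, 0),
    'Green':  (0, 255, 0),
    'Blue':   (0, 0, 255),
    'Orange': (255, 165, 0),
    'Purple': (128, 0, 128),
}

def closest_color(pixel_rgb):
    # Return the reference color with the smallest Euclidean distance to the pixel.
    return min(reference_colors,
               key=lambda name: np.linalg.norm(np.array(reference_colors[name]) - np.array(pixel_rgb)))

print(closest_color((250, 10, 5)))   # -> 'Red'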

Image Segmentation in OpenCV Python

We will be looking at the following four different ways to perform image segmentation in OpenCV Python and scikit-image –

  1. Image Segmentation using K-Means
  2. Image Segmentation using Contour Detection
  3. Image Segmentation using Thresholding
  4. Image Segmentation using Color Masking

We will be using the below image to perform image segmentation with all the techniques.

Query image

1. Image Segmentation using K-means

i) Importing libraries and Images

Import matplotlib, numpy, OpenCV along with the image to be segmented.

import matplotlib.pyplot as plt
import numpy as np
import cv2

path = 'image.jpg'
img = cv2.imread(path)

ii) Preprocessing the Image

Preprocess the image by converting it to the RGB color space. Then reshape it into a 2-D array of pixels, i.e. an image of shape (100, 100, 3) (height, width, channels) becomes (10000, 3). Finally, convert it to the float datatype, which cv2.kmeans expects.

img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
twoDimage = img.reshape((-1, 3))
twoDimage = np.float32(twoDimage)

iii) Defining Parameters

Define the criteria by which the K-means algorithm is supposed to cluster pixels.

The ‘K’ variable defines the number of clusters/groups that a pixel can belong to (you can increase this value to increase the degree of segmentation).

criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
K = 2
attempts = 10

iv) Apply K-Means

cv2.kmeans initializes K cluster centers (here with the k-means++ strategy, cv2.KMEANS_PP_CENTERS) and returns them in the ‘center’ variable. The distance of each pixel to these centers is computed and each pixel is assigned to the nearest cluster. The pixels are then grouped into segments according to their entry in the ‘label’ array.

ret, label, center = cv2.kmeans(twoDimage, K, None, criteria, attempts, cv2.KMEANS_PP_CENTERS)
center = np.uint8(center)
res = center[label.flatten()]
result_image = res.reshape(img.shape)

Output:

Image Segmentation using K-Means Python OpenCV

2. Image Segmentation using Contour Detection

i) Importing libraries and Images

Import OpenCV, matplotlib, numpy and load the image to memory.

import cv2
import matplotlib.pyplot as plt
import numpy as np

path = 'image.jpg'
img = cv2.imread(path)
img = cv2.resize(img, (256, 256))

ii) Preprocessing the Image

  1. Convert the image to grayscale.
  2. Threshold the grayscale image using its mean intensity (with THRESH_BINARY_INV, pixels darker than the mean become white and the rest become zero).
  3. Apply Canny edge detection to the thresholded image, and finally use the ‘cv2.dilate’ function to dilate the detected edges.

gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # imread returns BGR, so convert from BGR
_, thresh = cv2.threshold(gray, np.mean(gray), 255, cv2.THRESH_BINARY_INV)
edges = cv2.dilate(cv2.Canny(thresh, 0, 255), None)

Output:

Contour Detection Pre processing

iii) Detecting and Drawing Contours

  1. Use the OpenCV findContours function to find all the regions in the image. The [-2] index selects the contour list, because depending on the OpenCV version the function returns a two- or three-element tuple in which the contours are the second element from the end.
  2. Sort the contours by area with the sorted function and take the last element ([-1]), i.e. the largest contour (cnt).
  3. Create a zero-filled mask with the same height and width as the resized image.
  4. Draw the largest contour on the created mask, filled with white (255).
contours = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)[-2]
cnt = sorted(contours, key=cv2.contourArea)[-1]   # largest contour by area
mask = np.zeros((256, 256), np.uint8)
masked = cv2.drawContours(mask, [cnt], -1, 255, -1)

iv) Segmenting the Regions

In order to show only the segmented part of the image, we perform a bitwise AND operation between the original image (img) and the mask (which contains the filled outline of the largest detected contour).

Finally, convert the result back to RGB to view it segmented (and comparable to the original image).

dst = cv2.bitwise_and(img, img, mask=mask)
segmented = cv2.cvtColor(dst, cv2.COLOR_BGR2RGB)

Output:

Image Segmentation using Contour Detection OpenCV Python

3. Image Segmentation using Thresholding

i) Importing libraries and Images

Import numpy, scikit-image, matplotlib, and OpenCV.

import numpy as np
import matplotlib.pyplot as plt
from skimage.filters import threshold_otsu
import cv2

path = 'image.jpg'
img = cv2.imread(path)

ii) Preprocessing the Image

Convert the image from BGR to the RGB color space, and then convert the RGB image to grayscale.

img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img_gray = cv2.cvtColor(img_rgb, cv2.COLOR_RGB2GRAY)

iii) Segmentation Process

Create a “filter_image” function that multiplies a binary mask with each of the RGB channels of our image and stacks the results back together (np.dstack) into a three-channel image.

Next, we calculate the Otsu threshold (thresh) for the gray image and use it as a deciding factor: pixel values lying below this threshold are selected and the others are discarded. This produces a mask-like image (img_otsu) that can then be used to segment our original image.

Finally, apply the “filter_image” function to the original image (img) and the mask formed by thresholding (img_otsu).

def filter_image(image, mask):
    r = image[:, :, 0] * mask
    g = image[:, :, 1] * mask
    b = image[:, :, 2] * mask
    return np.dstack([r, g, b])

thresh = threshold_otsu(img_gray)
img_otsu = img_gray < thresh
filtered = filter_image(img, img_otsu)

Output:

Image Segmentation using Thresholding in OpenCV Python

4. Segmentation using Color Masking

i) Import libraries and Images

Import OpenCV and load the image to memory.

import cv2 path ='image.jpg' img = cv2.imread(path)

ii) Preprocessing the Image

OpenCV’s default colorspace is BGR, so we first convert the image to RGB. Next, we convert it to the HSV colorspace.

rgb_img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
hsv_img = cv2.cvtColor(rgb_img, cv2.COLOR_RGB2HSV)

iii) Define the Color Range to be Detected

Define the HSV range for the color we want to detect (here, blue). Then use the OpenCV inRange function to create a mask of all the pixels that fall within the range we defined; this mask will be used to select those pixels in the next step.

light_blue = (90, 70, 50)
dark_blue = (128, 255, 255)
# You can use the following values for green:
# light_green = (40, 40, 40)
# dark_green = (70, 255, 255)
mask = cv2.inRange(hsv_img, light_blue, dark_blue)
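If you need bounds for a different color, a common trick is to convert a single reference pixel from RGB to HSV and build a range around its hue (a sketch; the reference color and the +/-10 hue margin are arbitrary choices):

import numpy as np
import cv2

reference_rgb = np.uint8([[[0, 0, 255]]])                 # pure blue, shape (1, 1, 3)
h, s, v = cv2.cvtColor(reference_rgb, cv2.COLOR_RGB2HSV)[0, 0]
lower = (max(int(h) - 10, 0), 70, 50)                     # widen hue by +/-10, relax S and V
upper = (min(int(h) + 10, 179), 255, 255)                 # OpenCV 8-bit hue range is 0-179
mask = cv2.inRange(hsv_img, lower, upper)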

iv) Apply the Mask

Use the bitwise AND operation to apply our mask to the query image.

result = cv2.bitwise_and(img, img, mask=mask)

Output:

Color masking (blue)



Source: https://machinelearningknowledge.ai/image-segmentation-in-python-opencv/


Hello there fellow coder! Today in this tutorial we will understand what Image Segmentation is and in the later sections implement the same using OpenCV in the Python programming language.

What is Image Segmentation?

Image Segmentation implies grouping a similar set of pixels and parts of an image together for easy classification and categorization of objects in the images.

Why is Image Segmentation Needed?

Image segmentation is an important stage in image processing systems as it helps extract the objects of our interest and simplifies later modeling. It separates the desired objects from the unnecessary background.

Applications of Image Segmentation

Image Segmentation has various applications in the real life. Some of them are as follows:

  1. Traffic Management System
  2. Cancer and other medical issue detection
  3. Satellite Image Analysis

Image Segmentation Implementation

1. Importing Modules

All the necessary modules required for Image Segmentation implementation and Image plotting are imported into the program.

import numpy as np
import cv2
from matplotlib import pyplot as plt

2. Loading Original Image

The next step is to load the original image (stored in the same directory as the code file) using the code below. The output is also displayed right below the code.

img = cv2.imread('image1.jpg')
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
plt.figure(figsize=(8, 8))
plt.imshow(img, cmap="gray")
plt.axis('off')
plt.title("Original Image")
plt.show()
Original Image Segmentation

3. Converting to Grayscale

To make the subsequent image processing simpler, we convert the image loaded in the previous step to grayscale using the code below. The output image is also displayed below the code.

gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)  # img was converted to RGB above
plt.figure(figsize=(8, 8))
plt.imshow(gray, cmap="gray")
plt.axis('off')
plt.title("GrayScale Image")
plt.show()
Grayscale Image Segmentation

4. Converting to a Binary Inverted Image

To study the image in more detail, we convert it into a binary inverted image using Otsu thresholding, as shown in the code below. The output is also displayed along with the code.

ret, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
plt.figure(figsize=(8, 8))
plt.imshow(thresh, cmap="gray")
plt.axis('off')
plt.title("Threshold Image")
plt.show()
Threshold Img Segmentation

5. Segmenting the Image

The last step is to obtain the segmented image with the help of the code below. We build on the previous results, applying morphological closing, dilation, and a distance transform to the thresholded image to get the most accurate segmented image we can.

kernel = np.ones((3, 3), np.uint8)
closing = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel, iterations=15)
bg = cv2.dilate(closing, kernel, iterations=1)
dist_transform = cv2.distanceTransform(closing, cv2.DIST_L2, 0)
ret, fg = cv2.threshold(dist_transform, 0.02 * dist_transform.max(), 255, 0)
plt.figure(figsize=(8, 8))
plt.imshow(fg, cmap="gray")
plt.axis('off')
plt.title("Segmented Image")
plt.show()
Segmented Img Segmentation

The Final Output

After all the processing is done and the image is segmented, let’s plot all the results in one frame with the help of subplots. The code for the same is mentioned below.

plt.figure(figsize=(10, 10))
plt.subplot(2, 2, 1)
plt.imshow(img, cmap="gray")
plt.axis('off')
plt.title("Original Image")
plt.subplot(2, 2, 2)
plt.imshow(gray, cmap="gray")
plt.axis('off')
plt.title("GrayScale Image")
plt.subplot(2, 2, 3)
plt.imshow(thresh, cmap="gray")
plt.axis('off')
plt.title("Threshold Image")
plt.subplot(2, 2, 4)
plt.imshow(fg, cmap="gray")
plt.axis('off')
plt.title("Segmented Image")
plt.show()

The final results are as follows.

Segmentation Output1

The same algorithm was tested for another image and the results are as follows. You can see the results are pretty satisfying.

Segmentation Output2

Conclusion

So today we learned about Image Segmentation and now you know how to implement the same on your own. Try out things by yourself for various images. Happy coding!

Thank you for reading!

Source: https://www.askpython.com/python/examples/image-segmentation

