cvpl_tools/im/seg_process.py

View source at seg_process.py.

Q: Why are there two baseclasses SegProcess and BlockToBlockProcess? When I define my own pipeline, which class should I be subclassing from?

A: BlockToBlockProcess is a wrapper around SegProcess for code whose input and output block sizes are the same. For general processing whose output is a list of centroids, or where the input shape of a block differs from its output shape, subclass SegProcess instead.

APIs

async cvpl_tools.im.process.base.block_to_block_forward(np_forward: Callable, im: ndarray[tuple[int, ...], dtype[_ScalarType_co]] | Array | NDBlock, context_args: None | dict = None, out_dtype: dtype = None, compute_chunk_sizes: bool = False)

Call np_forward() on im, and optionally cache the result locally

Parameters:
  • np_forward – Chunk-wise process function

  • im – The image to process

  • context_args – dictionary of contextual arguments, see docstring for cvpl_tools.tools.fs.cache_im for more info

  • out_dtype (np.dtype) – Output data type

  • compute_chunk_sizes (bool) – If True, compute chunk sizes before caching the loaded image

Returns:

Returns the processed image, loaded back from the cache
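A minimal sketch of the chunk-wise contract np_forward must satisfy, shown with plain numpy (the surrounding dask/caching machinery is omitted, and the function name here is hypothetical):

```python
import numpy as np

def scale_intensity(block: np.ndarray) -> np.ndarray:
    # chunk-wise function: must return an array of the same shape as its input
    return (block * 2.0).astype(np.float32)

# block_to_block_forward would apply this to each chunk of im, e.g.:
chunk = np.ones((4, 4), dtype=np.float32)
out = scale_intensity(chunk)
```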

cvpl_tools.im.process.base.lc_interpretable_napari(layer_name: str, lc: ndarray[tuple[int, ...], dtype[_ScalarType_co]], viewer: napari.Viewer, ndim: int, extra_features: Sequence, text_color: str = 'green')

This function is used to display feature points for LC-typed output

Parameters:
  • layer_name – displayed name of the layer

  • lc – The list of features, each row of length (ndim + nextra)

  • viewer – Napari viewer to add points to

  • ndim – dimension of the image

  • extra_features – extra features to be displayed as text

  • text_color – to be used as display text color
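For reference, a hedged sketch of the row layout lc is expected to follow, here for a 3D image with one extra feature column (the feature name "ncells" is hypothetical):

```python
import numpy as np

ndim = 3                     # dimension of the image
extra_features = ["ncells"]  # extra feature columns, displayed as text

# each row is (z, y, x, ncells): ndim coordinate columns plus one column per extra feature
lc = np.array([
    [4.0, 10.0, 12.0, 1.0],
    [7.0,  3.0,  5.0, 2.5],
])
```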

Built-in function processes

async cvpl_tools.im.process.base.in_to_bs_custom(pred_fn, im, context_args: dict = None)

Process the array and return an np.uint8 label array of the same size

Parameters:
  • pred_fn – A chunk-wise function that takes a numpy image array and returns a label mask of the same size, i.e. its signature is npt.NDArray -> npt.NDArray[np.uint8]

  • im – The array to predict on

  • context_args – Dictionary of contextual arguments, see docstring of cvpl_tools.tools.fs.cache_im for more info

Returns:

Processed binary segmentation array
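A sketch of a pred_fn satisfying the stated contract, in plain numpy (mean thresholding is chosen only for illustration):

```python
import numpy as np

def pred_fn(chunk: np.ndarray) -> np.ndarray:
    # chunk-wise predictor: returns a uint8 mask of the same shape as the input
    return (chunk > chunk.mean()).astype(np.uint8)

arr = np.arange(27, dtype=np.float32).reshape(3, 3, 3)
mask = pred_fn(arr)
```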

async cvpl_tools.im.process.base.in_to_bs_simple_threshold(threshold: int | float, im, context_args: dict = None)

Returns im > threshold
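The per-chunk computation is equivalent to a plain numpy comparison; a minimal sketch:

```python
import numpy as np

im = np.array([[0.05, 0.2],
               [0.5, 0.01]], dtype=np.float32)
threshold = 0.1
bs = (im > threshold).astype(np.uint8)  # binary segmentation mask
```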

async cvpl_tools.im.process.base.in_to_lc_blobdog_forward(im: ndarray[tuple[int, ...], dtype[float32]] | Array, min_sigma=1, max_sigma=2, threshold: float = 0.1, reduce: bool = False, context_args: dict = None) NDBlock

async cvpl_tools.im.process.base.in_to_cc_sum_scaled_intensity(im, scale: float = 0.008, min_thres: float = 0.0, spatial_box_width: None | int = None, reduce: bool = True, context_args: dict = None)

Sum up the intensity and scale it to directly obtain an estimated number of cells

Parameters:
  • im – The image to perform sum on

  • scale – Scale the sum of intensity by this to get number of cells

  • min_thres – Intensity below this threshold is excluded (set to 0 before summing)

  • spatial_box_width – If not None, will use this as the box width for adding points to Napari

  • reduce – If True, reduce the result to a single number in a numpy array

  • context_args – Contextual arguments:
      - viewer_args (dict, optional): specifies the viewer arguments related to napari display of intermediate and end results
      - cache_url (str | RDirFileSystem, optional): points to the directory under which cache will be saved

Returns:

An array in which each number represents the estimated object count in a chunk, obtained by summing intensity
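The core per-chunk computation can be sketched in plain numpy as follows (a hedged re-implementation of the described behavior, not the library's actual code):

```python
import numpy as np

def sum_scaled_intensity(im: np.ndarray, scale: float = 0.008, min_thres: float = 0.0) -> float:
    # set intensities below the threshold to 0, then scale the sum
    kept = np.where(im >= min_thres, im, 0.0)
    return float(kept.sum() * scale)

estimate = sum_scaled_intensity(np.ones((10, 10)), scale=0.01)
```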

async cvpl_tools.im.process.base.bs_lc_to_os_forward(bs: ndarray[tuple[int, ...], dtype[uint8]] | Array, lc: NDBlock[float64], max_split: int = 10, context_args: dict = None) ndarray[tuple[int, ...], dtype[int32]] | Array

bs_to_os

binary segmentation to ordinal segmentation

This section contains algorithms whose input is a binary (0-1) segmentation mask and whose output is an instance segmentation (0-N) integer mask, where the output ndarray has the same shape as the input.

async cvpl_tools.im.process.bs_to_os.bs_to_os_watershed3sizes(bs: ndarray[tuple[int, ...], dtype[uint8]], size_thres=60.0, dist_thres=1.0, rst=None, size_thres2=100.0, dist_thres2=1.5, rst2=60.0, context_args: dict = None)
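This function is a watershed variant with two rounds of size/distance thresholds. As a hedged illustration of the simplest possible bs -> os step, here is a 4-connected component labeling in plain numpy (this is not the library's algorithm, which additionally splits touching objects):

```python
import numpy as np
from collections import deque

def bs_to_os_connected(bs: np.ndarray) -> np.ndarray:
    # label each 4-connected foreground component with an ordinal 1..N
    os_mask = np.zeros(bs.shape, dtype=np.int32)
    next_label = 0
    for start in zip(*np.nonzero(bs)):
        if os_mask[start]:
            continue  # already labeled by an earlier flood fill
        next_label += 1
        os_mask[start] = next_label
        queue = deque([start])
        while queue:
            y, x = queue.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < bs.shape[0] and 0 <= nx < bs.shape[1]
                        and bs[ny, nx] and not os_mask[ny, nx]):
                    os_mask[ny, nx] = next_label
                    queue.append((ny, nx))
    return os_mask

bs = np.array([[1, 1, 0, 0],
               [0, 0, 0, 1]], dtype=np.uint8)
os_mask = bs_to_os_connected(bs)
```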

lc_to_cc

list of centroids to cell counts

This section contains algorithms whose input is a 2d array (or a 2d array per block) describing the centroid locations and meta information about the objects associated with the centroids in each block. The output is a single number summarizing statistics for each block.

async cvpl_tools.im.process.lc_to_cc.lc_to_cc_count_lc_by_size(lc: NDBlock[float64], ndim: int, min_size: int, size_threshold, volume_weight, border_params, reduce: bool = False, context_args: dict = None) ndarray[tuple[int, ...], dtype[float64]]

Counting list of cells by size

Several features:
  1. A size threshold, below which each contour is counted as a single cell (or part of a single cell, if it neighbors the boundary of the image).
  2. Above the size threshold, the contour is treated as a cluster of cells, and an estimate of the cell count is given based on the volume of the contour.
  3. For cells at boundary locations, the estimated ncell is penalized according to the distance between the cell centroid and the boundary of the image; if the voxels of the cell do not touch the edge, this penalty does not apply.
  4. A min_size threshold, below (<=) which the contour is simply discarded because it's likely just an artifact.
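A hedged sketch of the size-based counting rule described above, operating on a plain list of contour sizes (the exact weighting used by the library may differ; the boundary penalty needs centroid positions and is omitted here):

```python
import numpy as np

def count_by_size(sizes, size_threshold=25.0, volume_weight=0.006, min_size=0):
    sizes = np.asarray(sizes, dtype=np.float64)
    sizes = sizes[sizes > min_size]  # discard likely artifacts
    small = sizes < size_threshold   # small contours count as one cell each
    large = sizes[~small]            # large contours count as clusters
    return float(small.sum() + np.sum(1.0 + volume_weight * large))
```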

async cvpl_tools.im.process.lc_to_cc.lc_to_cc_count_lc_edge_penalized(lc: NDBlock[float64], chunks: Sequence[Sequence[int]] | Sequence[int], border_params: tuple[float, float, float] = (3.0, -0.5, 2.0), reduce: bool = False, context_args: dict = None) NDBlock[float64]

os_to_cc

ordinal segmentation to cell counts

This section contains algorithms whose input is an instance segmentation (0-N) integer mask and whose output is a single number summarizing statistics for each block.

async cvpl_tools.im.process.os_to_cc.os_to_cc_count_os_by_size(os, size_threshold: int | float = 25.0, volume_weight: float = 0.006, border_params: tuple[float, float, float] = (3.0, -0.5, 2.0), min_size: int | float = 0, reduce: bool = False, context_args: dict = None)

Counting ordinal segmentation contours

Several features:
  1. A size threshold, below which each contour is counted as a single cell (or part of a single cell, if it neighbors the boundary of the image).
  2. Above the size threshold, the contour is treated as a cluster of cells, and an estimate of the cell count is given based on the volume of the contour.
  3. For cells at boundary locations, the estimated ncell is penalized according to the distance between the cell centroid and the boundary of the image; if the voxels of the cell do not touch the edge, this penalty does not apply.
  4. A min_size threshold, below (<=) which the contour is simply discarded because it's likely just an artifact.
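The per-contour sizes this function operates on can be extracted from an ordinal mask with np.unique; a minimal sketch:

```python
import numpy as np

os_mask = np.array([[0, 1, 1],
                    [2, 2, 0],
                    [2, 0, 0]], dtype=np.int32)

# drop the 0 background, then count voxels per label
labels, sizes = np.unique(os_mask[os_mask > 0], return_counts=True)
```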

os_to_lc

ordinal segmentation to list of centroids

This section contains algorithms whose input is an instance segmentation (0-N) integer mask and whose output is a list of centroids with meta information.

async cvpl_tools.im.process.os_to_lc.os_to_lc_direct(os, min_size: int = 0, reduce: bool = False, is_global: bool = False, ex_statistics: Sequence[str] = (), context_args: dict = None)
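A hedged sketch of the core os -> lc step: computing one centroid per nonzero label in plain numpy (the library version additionally attaches meta information and supports global coordinates):

```python
import numpy as np

os_mask = np.array([[1, 1, 0],
                    [0, 0, 2],
                    [0, 0, 2]], dtype=np.int32)

centroids = []
for lbl in np.unique(os_mask):
    if lbl == 0:
        continue  # 0 is background
    ys, xs = np.nonzero(os_mask == lbl)
    centroids.append((float(ys.mean()), float(xs.mean())))
```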

any_to_any

other

This section contains image processing steps whose inputs and outputs may adapt to different types of data or are not adequately described by the preceding classifications.