Loss¶
Image Loss¶
Provide different loss or metric classes for images.
class deepreg.loss.image.GlobalMutualInformation(*args: Any, **kwargs: Any)¶
Differentiable global mutual information via Parzen windowing method.
y_true and y_pred have to be at least 4d tensors, including the batch axis.
Reference: https://dspace.mit.edu/handle/1721.1/123142, Section 3.1, equations 3.1-3.5, Algorithm 1.
Init.
- Parameters
num_bins – number of bins for intensity; the default value is empirical.
sigma_ratio – a hyperparameter for the Gaussian function.
name – name of the loss.
kwargs – additional arguments.
call(y_true: tensorflow.Tensor, y_pred: tensorflow.Tensor) → tensorflow.Tensor¶
Return loss for a batch.
- Parameters
y_true – shape = (batch, dim1, dim2, dim3) or (batch, dim1, dim2, dim3, ch)
y_pred – shape = (batch, dim1, dim2, dim3) or (batch, dim1, dim2, dim3, ch)
- Returns
shape = (batch,)
get_config() → dict¶
Return the config dictionary for recreating this class.
class deepreg.loss.image.GlobalMutualInformationLoss(*args: Any, **kwargs: Any)¶
Revert the sign of GlobalMutualInformation.
Init without required arguments.
- Parameters
kwargs – additional arguments.
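A minimal usage sketch (assumes DeepReg and TensorFlow are installed and that the class behaves as a standard Keras loss; shapes follow the call signature above):

    import tensorflow as tf
    from deepreg.loss.image import GlobalMutualInformationLoss

    # Two random intensity volumes, shape = (batch, dim1, dim2, dim3).
    y_true = tf.random.uniform((2, 16, 16, 16))
    y_pred = tf.random.uniform((2, 16, 16, 16))

    # No required constructor arguments; invoking the Keras loss reduces the
    # per-sample values of shape (batch,) to a scalar.
    loss_fn = GlobalMutualInformationLoss()
    loss = loss_fn(y_true, y_pred)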
class deepreg.loss.image.GlobalNormalizedCrossCorrelation(*args: Any, **kwargs: Any)¶
Global squared zero-normalized cross-correlation.
Compute the squared cross-correlation between the reference and moving images. y_true and y_pred have to be at least 4d tensors, including the batch axis.
Reference:
- Zero-normalized cross-correlation (ZNCC)
Init.
- Parameters
name – name of the loss.
kwargs – additional arguments.
call(y_true: tensorflow.Tensor, y_pred: tensorflow.Tensor) → tensorflow.Tensor¶
Return loss for a batch.
- Parameters
y_true – shape = (batch, …)
y_pred – shape = (batch, …)
- Returns
shape = (batch,)
class deepreg.loss.image.GlobalNormalizedCrossCorrelationLoss(*args: Any, **kwargs: Any)¶
Revert the sign of GlobalNormalizedCrossCorrelation.
Init without required arguments.
- Parameters
kwargs – additional arguments.
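A minimal sketch illustrating the zero-normalization (assumes DeepReg and TensorFlow are installed and that the class behaves as a standard Keras loss): ZNCC is invariant to linear intensity rescaling, so a prediction that is an affine intensity transform of the reference should give a squared correlation close to 1, i.e. a loss close to -1.

    import tensorflow as tf
    from deepreg.loss.image import GlobalNormalizedCrossCorrelationLoss

    y_true = tf.random.uniform((2, 16, 16, 16))
    # A linear intensity rescaling of y_true; ZNCC is invariant to such changes.
    y_pred = 2.0 * y_true + 0.5

    loss_fn = GlobalNormalizedCrossCorrelationLoss()
    loss = loss_fn(y_true, y_pred)  # expected to be approximately -1.0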
class deepreg.loss.image.LocalNormalizedCrossCorrelation(*args: Any, **kwargs: Any)¶
Local squared zero-normalized cross-correlation.
Denote y_true as t and y_pred as p. Consider a window having n elements. Each position in the window corresponds to a weight w_i for i=1:n.
Define the discrete expectation in the window E[t] as
E[t] = sum_i(w_i * t_i) / sum_i(w_i)
Similarly, the discrete variance in the window V[t] is
V[t] = E[t**2] - E[t] ** 2
The local squared zero-normalized cross-correlation is therefore
E[ (t-E[t]) * (p-E[p]) ] ** 2 / V[t] / V[p]
where the expectation in the numerator is
E[ (t-E[t]) * (p-E[p]) ] = E[t * p] - E[t] * E[p]
Different kernels correspond to different weights.
For now, y_true and y_pred have to be at least 4d tensors, including the batch axis.
Reference:
- Zero-normalized cross-correlation (ZNCC)
- Code: https://github.com/voxelmorph/voxelmorph/blob/legacy/src/losses.py
Init.
- Parameters
kernel_size – int. Kernel size, or kernel sigma for kernel_type='gaussian'.
kernel_type – str, one of rectangular, triangular or gaussian.
smooth_nr – small constant added to numerator in case of zero covariance.
smooth_dr – small constant added to denominator in case of zero variance.
name – name of the loss.
kwargs – additional arguments.
calc_ncc(y_true: tensorflow.Tensor, y_pred: tensorflow.Tensor) → tensorflow.Tensor¶
Return NCC for a batch.
The kernel should not be normalized, as normalizing it leads to computation with small values and reduced precision. Here both the numerator and denominator are actually multiplied by the kernel volume, which also helps precision. However, when the variance is zero, the obtained value might be negative due to machine error. Therefore a hard-coded clipping is added to prevent division by zero.
- Parameters
y_true – shape = (batch, dim1, dim2, dim3, 1)
y_pred – shape = (batch, dim1, dim2, dim3, 1)
- Returns
shape = (batch, dim1, dim2, dim3, 1)
call(y_true: tensorflow.Tensor, y_pred: tensorflow.Tensor) → tensorflow.Tensor¶
Return loss for a batch.
TODO: support channel axis dimension > 1.
- Parameters
y_true – shape = (batch, dim1, dim2, dim3) or (batch, dim1, dim2, dim3, 1)
y_pred – shape = (batch, dim1, dim2, dim3) or (batch, dim1, dim2, dim3, 1)
- Returns
shape = (batch,)
get_config() → dict¶
Return the config dictionary for recreating this class.
class deepreg.loss.image.LocalNormalizedCrossCorrelationLoss(*args: Any, **kwargs: Any)¶
Revert the sign of LocalNormalizedCrossCorrelation.
Init without required arguments.
- Parameters
kwargs – additional arguments.
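A minimal usage sketch (assumes the keyword arguments documented for LocalNormalizedCrossCorrelation above are forwarded through kwargs):

    import tensorflow as tf
    from deepreg.loss.image import LocalNormalizedCrossCorrelationLoss

    y_true = tf.random.uniform((2, 16, 16, 16))
    y_pred = tf.random.uniform((2, 16, 16, 16))

    # kernel_size and kernel_type are the constructor arguments documented above.
    loss_fn = LocalNormalizedCrossCorrelationLoss(kernel_size=9, kernel_type="rectangular")
    loss = loss_fn(y_true, y_pred)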
Label Loss¶
Provide different loss or metric classes for labels.
class deepreg.loss.label.CrossEntropy(*args: Any, **kwargs: Any)¶
Define weighted cross-entropy.
The formulation is:
loss = −w_fg * y_true * log(y_pred) − w_bg * (1−y_true) * log(1−y_pred)
Init.
- Parameters
binary – if True, project y_true, y_pred to 0 or 1.
background_weight – weight for background, where y == 0.
smooth – smooth constant for log.
name – name of the loss.
kwargs – additional arguments.
call(y_true: tensorflow.Tensor, y_pred: tensorflow.Tensor) → tensorflow.Tensor¶
Return loss for a batch.
- Parameters
y_true – shape = (batch, …)
y_pred – shape = (batch, …)
- Returns
shape = (batch,)
get_config() → dict¶
Return the config dictionary for recreating this class.
class deepreg.loss.label.CrossEntropyLoss(*args: Any, **kwargs: Any)¶
Define loss with multi-scaling options.
Init.
- Parameters
scales – list of scalars or None; if None, do not apply any scaling.
kernel – gaussian or cauchy.
kwargs – additional arguments.
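A minimal usage sketch of the multi-scale variant (assumes DeepReg and TensorFlow are installed; scales=None would skip the smoothing entirely):

    import tensorflow as tf
    from deepreg.loss.label import CrossEntropyLoss

    # Binary foreground/background labels, shape = (batch, dim1, dim2, dim3).
    y_true = tf.cast(tf.random.uniform((2, 16, 16, 16)) > 0.5, tf.float32)
    y_pred = tf.random.uniform((2, 16, 16, 16))

    # scales and kernel are the multi-scaling options documented above: the loss
    # is computed on the original labels (scale 0) and on labels smoothed with a
    # gaussian kernel of scale 1, then combined across scales.
    loss_fn = CrossEntropyLoss(scales=[0, 1], kernel="gaussian")
    loss = loss_fn(y_true, y_pred)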
class deepreg.loss.label.DiceLoss(*args: Any, **kwargs: Any)¶
Revert the sign of DiceScore and support multi-scaling options.
Init without required arguments.
- Parameters
kwargs – additional arguments.
class deepreg.loss.label.DiceScore(*args: Any, **kwargs: Any)¶
Define dice score.
The formulation is:
w_fg + w_bg = 1
let y_prod = y_true * y_pred and y_sum = y_true + y_pred
- num = 2 * (w_fg * y_true * y_pred + w_bg * (1−y_true) * (1−y_pred))
  = 2 * ((w_fg + w_bg) * y_prod - w_bg * y_sum + w_bg)
  = 2 * (y_prod - w_bg * y_sum + w_bg)
- denom = w_fg * (y_true + y_pred) + w_bg * (1−y_true + 1−y_pred)
  = (w_fg - w_bg) * y_sum + 2 * w_bg
  = (1 - 2 * w_bg) * y_sum + 2 * w_bg
dice score = num / denom
where num and denom are summed over all axes except the batch axis.
- Reference:
Sudre, Carole H., et al. “Generalised dice overlap as a deep learning loss function for highly unbalanced segmentations.” Deep learning in medical image analysis and multimodal learning for clinical decision support. Springer, Cham, 2017. 240-248.
Init.
- Parameters
binary – if True, project y_true, y_pred to 0 or 1.
background_weight – weight for background, where y == 0.
smooth_nr – small constant added to numerator in case of zero covariance.
smooth_dr – small constant added to denominator in case of zero variance.
name – name of the loss.
kwargs – additional arguments.
call(y_true: tensorflow.Tensor, y_pred: tensorflow.Tensor) → tensorflow.Tensor¶
Return loss for a batch.
- Parameters
y_true – shape = (batch, …)
y_pred – shape = (batch, …)
- Returns
shape = (batch,)
get_config() → dict¶
Return the config dictionary for recreating this class.
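A small worked sketch: for a binary mask that matches exactly, the weighted numerator and denominator above are equal, so the score is 1 (up to the smooth constants), and DiceLoss defined earlier is its negation.

    import tensorflow as tf
    from deepreg.loss.label import DiceLoss, DiceScore

    # A binary mask of shape (batch, dim1, dim2, dim3), compared with itself.
    y_true = tf.constant([[[[1.0, 1.0], [0.0, 0.0]], [[1.0, 0.0], [0.0, 0.0]]]])
    y_pred = y_true

    score = DiceScore()(y_true, y_pred)  # approximately 1.0
    loss = DiceLoss()(y_true, y_pred)    # approximately -1.0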
class deepreg.loss.label.JaccardIndex(*args: Any, **kwargs: Any)¶
Define Jaccard index.
The formulation is:
1. num = y_true * y_pred
2. denom = y_true + y_pred - y_true * y_pred
3. Jaccard index = num / denom
With foreground and background weights, where
w_fg + w_bg = 1
let y_prod = y_true * y_pred and y_sum = y_true + y_pred
- num = w_fg * y_true * y_pred + w_bg * (1−y_true) * (1−y_pred)
  = (w_fg + w_bg) * y_prod - w_bg * y_sum + w_bg
  = y_prod - w_bg * y_sum + w_bg
- denom = w_fg * (y_true + y_pred - y_true * y_pred) + w_bg * (1−y_true + 1−y_pred - (1−y_true) * (1−y_pred))
  = w_fg * (y_sum - y_prod) + w_bg * (1 - y_prod)
  = (1 - w_bg) * y_sum - y_prod + w_bg
Jaccard index = num / denom
Init.
- Parameters
binary – if True, project y_true, y_pred to 0 or 1.
background_weight – weight for background, where y == 0.
smooth_nr – small constant added to numerator in case of zero covariance.
smooth_dr – small constant added to denominator in case of zero variance.
name – name of the loss.
kwargs – additional arguments.
call(y_true: tensorflow.Tensor, y_pred: tensorflow.Tensor) → tensorflow.Tensor¶
Return loss for a batch.
- Parameters
y_true – shape = (batch, …)
y_pred – shape = (batch, …)
- Returns
shape = (batch,)
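From the two formulations above, the summed Dice numerator is twice the Jaccard numerator and the Dice denominator equals the Jaccard numerator plus denominator, so dice = 2 * jaccard / (1 + jaccard) up to the smooth constants. A small sketch checking this (batch of 1, so the Keras reduction is trivial):

    import tensorflow as tf
    from deepreg.loss.label import DiceScore, JaccardIndex

    y_true = tf.cast(tf.random.uniform((1, 8, 8, 8)) > 0.5, tf.float32)
    y_pred = tf.cast(tf.random.uniform((1, 8, 8, 8)) > 0.5, tf.float32)

    jaccard = JaccardIndex()(y_true, y_pred)
    dice = DiceScore()(y_true, y_pred)

    # dice and 2 * jaccard / (1 + jaccard) should agree up to smooth_nr/smooth_dr.
    print(float(dice), float(2.0 * jaccard / (1.0 + jaccard)))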
class deepreg.loss.label.JaccardLoss(*args: Any, **kwargs: Any)¶
Revert the sign of JaccardIndex.
Init without required arguments.
- Parameters
kwargs – additional arguments.
class deepreg.loss.label.SumSquaredDifference(*args: Any, **kwargs: Any)¶
Actually, the mean of the squared distance between y_true and y_pred.
The inconsistent name is kept for convention.
y_true and y_pred have to be at least 1d tensors, including the batch axis.
Init.
- Parameters
name – name of the loss.
kwargs – additional arguments.
call(y_true: tensorflow.Tensor, y_pred: tensorflow.Tensor) → tensorflow.Tensor¶
Return mean squared difference for a batch.
- Parameters
y_true – shape = (batch, …)
y_pred – shape = (batch, …)
- Returns
shape = (batch,)
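A small sketch of the naming caveat above: despite the name, the returned value should match the mean of squared differences (batch of 1, so the Keras reduction is trivial):

    import tensorflow as tf
    from deepreg.loss.label import SumSquaredDifference

    y_true = tf.random.uniform((1, 8, 8, 8))
    y_pred = tf.random.uniform((1, 8, 8, 8))

    loss = SumSquaredDifference()(y_true, y_pred)
    manual = tf.reduce_mean(tf.square(y_true - y_pred))
    print(float(loss), float(manual))  # the two values should agree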
class deepreg.loss.label.SumSquaredDifferenceLoss(*args: Any, **kwargs: Any)¶
Define loss with multi-scaling options.
Init.
- Parameters
scales – list of scalars or None; if None, do not apply any scaling.
kernel – gaussian or cauchy.
kwargs – additional arguments.
deepreg.loss.label.compute_centroid(mask: tensorflow.Tensor, grid: tensorflow.Tensor) → tensorflow.Tensor¶
Calculate the centroid of the mask.
- Parameters
mask – shape = (batch, dim1, dim2, dim3)
grid – shape = (1, dim1, dim2, dim3, 3)
- Returns
shape = (batch, 3), batch of vectors denoting the location of centroids.
deepreg.loss.label.compute_centroid_distance(y_true: tensorflow.Tensor, y_pred: tensorflow.Tensor, grid: tensorflow.Tensor) → tensorflow.Tensor¶
Calculate the L2-distance between two tensors' centroids.
- Parameters
y_true – tensor, shape = (batch, dim1, dim2, dim3)
y_pred – tensor, shape = (batch, dim1, dim2, dim3)
grid – tensor, shape = (1, dim1, dim2, dim3, 3)
- Returns
shape = (batch,)
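A small sketch of the centroid distance (the coordinate grid is built manually here just for illustration; in DeepReg it normally comes from the reference grid used by the model):

    import tensorflow as tf
    from deepreg.loss.label import compute_centroid_distance

    dims = (4, 4, 4)
    # Coordinate grid of shape (1, dim1, dim2, dim3, 3).
    coords = tf.meshgrid(*[tf.range(d, dtype=tf.float32) for d in dims], indexing="ij")
    grid = tf.expand_dims(tf.stack(coords, axis=-1), axis=0)

    # Two single-voxel masks whose centroids are (1, 1, 1) and (3, 1, 1).
    y_true = tf.scatter_nd([[0, 1, 1, 1]], [1.0], (1, *dims))
    y_pred = tf.scatter_nd([[0, 3, 1, 1]], [1.0], (1, *dims))

    distance = compute_centroid_distance(y_true, y_pred, grid)  # approximately [2.0]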
deepreg.loss.label.foreground_proportion(y: tensorflow.Tensor) → tensorflow.Tensor¶
Calculate the percentage of foreground vs background per 3d volume.
- Parameters
y – shape = (batch, dim1, dim2, dim3), a 3D label tensor
- Returns
shape = (batch,)
Deformation Loss¶
Provide regularization functions and classes for the dense displacement field (ddf).
class deepreg.loss.deform.BendingEnergy(*args: Any, **kwargs: Any)¶
Calculate the bending energy of a ddf using central finite differences.
y_true and y_pred have to be at least 5d tensors, including the batch axis.
Init.
- Parameters
name – name of the loss.
kwargs – additional arguments.
call(inputs: tensorflow.Tensor, **kwargs) → tensorflow.Tensor¶
Return loss for a batch.
- Parameters
inputs – shape = (batch, m_dim1, m_dim2, m_dim3, 3)
kwargs – additional arguments.
- Returns
shape = (batch, )
class deepreg.loss.deform.GradientNorm(*args: Any, **kwargs: Any)¶
Calculate the L1/L2 norm of a ddf using central finite differences.
y_true and y_pred have to be at least 5d tensors, including the batch axis.
Init.
- Parameters
l1 – bool, True to calculate the L1 norm, otherwise the L2 norm.
name – name of the loss.
kwargs – additional arguments.
call(inputs: tensorflow.Tensor, **kwargs) → tensorflow.Tensor¶
Return loss for a batch.
- Parameters
inputs – shape = (batch, m_dim1, m_dim2, m_dim3, 3)
kwargs – additional arguments.
- Returns
shape = (batch, )
get_config() → dict¶
Return the config dictionary for recreating this class.
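A minimal sketch using both regularizers (assumes they are called on a single ddf tensor, per the call(inputs) signature above, and that l1 defaults to False, i.e. the L2 norm):

    import tensorflow as tf
    from deepreg.loss.deform import BendingEnergy, GradientNorm

    # A random dense displacement field, shape = (batch, m_dim1, m_dim2, m_dim3, 3).
    ddf = tf.random.normal((2, 8, 8, 8, 3))

    bending = BendingEnergy()(ddf)        # second-order smoothness penalty, shape = (batch,)
    grad_l1 = GradientNorm(l1=True)(ddf)  # L1 norm of the ddf gradient, shape = (batch,)
    grad_l2 = GradientNorm()(ddf)         # L2 norm of the ddf gradient, shape = (batch,)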
deepreg.loss.deform.gradient_dx(fx: tensorflow.Tensor) → tensorflow.Tensor¶
Calculate gradients along the x-axis of a 3D tensor using central finite differences.
It shifts the tensor along axis 1 to calculate the approximate gradient on the x axis, dx[i] = (x[i+1] - x[i-1]) / 2.
- Parameters
fx – shape = (batch, m_dim1, m_dim2, m_dim3)
- Returns
shape = (batch, m_dim1-2, m_dim2-2, m_dim3-2)
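The cropped output shape reflects that the two outermost positions along every spatial axis are discarded so that the x, y and z gradients stay aligned. A minimal sketch of such a central difference along axis 1 (an illustration of the formula above, not necessarily the library's exact implementation):

    import tensorflow as tf

    def central_diff_dx(fx: tf.Tensor) -> tf.Tensor:
        # fx: shape = (batch, m_dim1, m_dim2, m_dim3)
        # returns: shape = (batch, m_dim1-2, m_dim2-2, m_dim3-2)
        return (fx[:, 2:, 1:-1, 1:-1] - fx[:, :-2, 1:-1, 1:-1]) / 2.0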
deepreg.loss.deform.gradient_dxyz(fxyz: tensorflow.Tensor, fn: Callable) → tensorflow.Tensor¶
Calculate gradients along the x, y, z axes of a tensor using central finite differences.
The gradients are calculated along x, y, z separately and then stacked together.
- Parameters
fxyz – shape = (…, 3)
fn – function to call
- Returns
shape = (…, 3)
deepreg.loss.deform.gradient_dy(fy: tensorflow.Tensor) → tensorflow.Tensor¶
Calculate gradients along the y-axis of a 3D tensor using central finite differences.
It shifts the tensor along axis 2 to calculate the approximate gradient on the y axis, dy[i] = (y[i+1] - y[i-1]) / 2.
- Parameters
fy – shape = (batch, m_dim1, m_dim2, m_dim3)
- Returns
shape = (batch, m_dim1-2, m_dim2-2, m_dim3-2)
deepreg.loss.deform.gradient_dz(fz: tensorflow.Tensor) → tensorflow.Tensor¶
Calculate gradients along the z-axis of a 3D tensor using central finite differences.
It shifts the tensor along axis 3 to calculate the approximate gradient on the z axis, dz[i] = (z[i+1] - z[i-1]) / 2.
- Parameters
fz – shape = (batch, m_dim1, m_dim2, m_dim3)
- Returns
shape = (batch, m_dim1-2, m_dim2-2, m_dim3-2)
Loss Util¶
Provide helper functions or classes for defining loss or metrics.
class deepreg.loss.util.MultiScaleMixin(*args: Any, **kwargs: Any)¶
Mixin class for multi-scale loss.
It applies the loss at different scales (gaussian or cauchy smoothing). It is assumed that loss values are between 0 and 1.
Init.
- Parameters
scales – list of scalars or None; if None, do not apply any scaling.
kernel – gaussian or cauchy.
kwargs – additional arguments.
call(y_true: tensorflow.Tensor, y_pred: tensorflow.Tensor) → tensorflow.Tensor¶
Use super().call to calculate loss at different scales.
- Parameters
y_true – ground-truth tensor, shape = (batch, dim1, dim2, dim3).
y_pred – predicted tensor, shape = (batch, dim1, dim2, dim3).
- Returns
multi-scale loss, shape = (batch, ).
get_config() → dict¶
Return the config dictionary for recreating this class.
class deepreg.loss.util.NegativeLossMixin(*args: Any, **kwargs: Any)¶
Mixin class to revert the sign of the loss value.
Init without required arguments.
- Parameters
kwargs – additional arguments.
call(y_true: tensorflow.Tensor, y_pred: tensorflow.Tensor) → tensorflow.Tensor¶
Revert the sign of loss.
- Parameters
y_true – ground-truth tensor.
y_pred – predicted tensor.
- Returns
negated loss.
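The *Loss classes above (for example GlobalNormalizedCrossCorrelationLoss and JaccardLoss) are built by pairing a score class with this mixin. A minimal sketch of the pattern; the MeanOverlap score below is hypothetical and only illustrates how a new loss could be composed:

    import tensorflow as tf
    from deepreg.loss.util import NegativeLossMixin

    class MeanOverlap(tf.keras.losses.Loss):
        """Hypothetical score: mean of y_true * y_pred per sample."""

        def call(self, y_true: tf.Tensor, y_pred: tf.Tensor) -> tf.Tensor:
            return tf.reduce_mean(y_true * y_pred, axis=[1, 2, 3])

    class MeanOverlapLoss(NegativeLossMixin, MeanOverlap):
        """Negated MeanOverlap, following the score / loss pairing above."""

    # MeanOverlapLoss()(y_true, y_pred) returns -MeanOverlap()(y_true, y_pred).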
deepreg.loss.util.separable_filter(tensor: tensorflow.Tensor, kernel: tensorflow.Tensor) → tensorflow.Tensor¶
Create a 3d separable filter.
Here tf.nn.conv3d accepts the filters argument of shape (filter_depth, filter_height, filter_width, in_channels, out_channels), where the first axis of filters is the depth, not the batch, and the input to tf.nn.conv3d is of shape (batch, in_depth, in_height, in_width, in_channels).
- Parameters
tensor – shape = (batch, dim1, dim2, dim3, 1)
kernel – shape = (dim4,)
- Returns
shape = (batch, dim1, dim2, dim3, 1)
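A minimal usage sketch (the 1D box kernel below is chosen only for illustration; in the losses above the kernel typically comes from the rectangular, triangular or gaussian kernels used by LocalNormalizedCrossCorrelation):

    import tensorflow as tf
    from deepreg.loss.util import separable_filter

    # A single-channel volume, shape = (batch, dim1, dim2, dim3, 1).
    volume = tf.random.uniform((1, 8, 8, 8, 1))

    # A 1D kernel of shape (dim4,); it is applied along each spatial axis in turn.
    kernel = tf.ones(3) / 3.0

    smoothed = separable_filter(volume, kernel)  # shape = (1, 8, 8, 8, 1)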