probly.train.evidential.torch

Unified Evidential Train Function.

Functions

der_loss

Deep Evidential Regression loss for uncertainty-aware regression.
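For orientation, the negative log-likelihood used in Deep Evidential Regression (Amini et al., 2020) places a Normal-Inverse-Gamma distribution NIG(γ, ν, α, β) over the target. The sketch below is a minimal stdlib illustration of that published formula; the function name `der_nll` and the scalar signature are illustrative assumptions, not the library's API.

```python
import math

def der_nll(y, gamma, nu, alpha, beta):
    # NIG negative log-likelihood (Amini et al., 2020), with
    # Omega = 2*beta*(1 + nu) collecting the scale terms.
    # NOTE: illustrative scalar sketch, not the probly API.
    omega = 2.0 * beta * (1.0 + nu)
    return (0.5 * math.log(math.pi / nu)
            - alpha * math.log(omega)
            + (alpha + 0.5) * math.log(nu * (y - gamma) ** 2 + omega)
            + math.lgamma(alpha) - math.lgamma(alpha + 0.5))
```

Targets far from the predicted mean γ incur a larger loss, and higher evidence ν sharpens that penalty.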

dirichlet_entropy

Dirichlet entropy for predictive uncertainty estimation.

evidential_ce_loss

Evidential Cross Entropy Loss for classification uncertainty estimation.

evidential_kl_divergence

Evidential KL divergence loss for classification uncertainty estimation.

evidential_log_loss

Evidential Log Loss for classification uncertainty estimation.

evidential_mse_loss

Evidential Mean Squared Error loss for classification uncertainty estimation.
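The evidential MSE of Sensoy et al. (2018) combines the squared error of the expected probabilities p_k = α_k / S (with S = Σ_k α_k) and a variance term p_k(1 − p_k)/(S + 1) that shrinks as total evidence grows. A minimal stdlib sketch of that published formula (the name `evidential_mse` and the list-based signature are illustrative, not the library's API):

```python
def evidential_mse(alpha, y):
    # Evidential MSE (Sensoy et al., 2018): squared error of the
    # Dirichlet mean plus a variance term damped by total evidence S.
    # NOTE: illustrative sketch over plain lists, not the probly API.
    S = sum(alpha)
    p = [a / S for a in alpha]
    return sum((yk - pk) ** 2 + pk * (1.0 - pk) / (S + 1.0)
               for pk, yk in zip(p, y))
```

Concentrating evidence on the correct class lowers the loss: `evidential_mse([10, 1, 1], [1, 0, 0])` is smaller than `evidential_mse([1, 1, 10], [1, 0, 0])`.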

evidential_nignll_loss

Evidence-based Normal-Inverse-Gamma (NIG) regression loss.

evidential_regression_regularization

Regularization term for evidential regression.

ird_loss

Information Robust Dirichlet (IRD) loss for predictive uncertainty estimation.

kl_dirichlet

Compute KL(Dir(alpha_p) || Dir(alpha_q)) for each batch item.
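The KL divergence between two Dirichlet distributions has a standard closed form in terms of the log-gamma and digamma functions. Below is a self-contained stdlib sketch of that closed form for a single item (batching, tensor types, and the exact signature of the library function are not shown; the hand-rolled `digamma` is an illustrative helper):

```python
import math

def digamma(x):
    # Recurrence digamma(x) = digamma(x+1) - 1/x to push x above 6,
    # then the standard asymptotic expansion.
    r = 0.0
    while x < 6.0:
        r -= 1.0 / x
        x += 1.0
    f = 1.0 / (x * x)
    return r + math.log(x) - 0.5 / x - f * (1.0 / 12 - f * (1.0 / 120 - f / 252))

def kl_dirichlet_single(alpha_p, alpha_q):
    # Closed-form KL(Dir(alpha_p) || Dir(alpha_q)) for one item.
    sp, sq = sum(alpha_p), sum(alpha_q)
    out = math.lgamma(sp) - math.lgamma(sq)
    out -= sum(math.lgamma(a) for a in alpha_p)
    out += sum(math.lgamma(b) for b in alpha_q)
    out += sum((a - b) * (digamma(a) - digamma(sp))
               for a, b in zip(alpha_p, alpha_q))
    return out
```

As expected of a divergence, the result is zero for identical parameters and positive otherwise.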

lp_fn

Lp calibration loss for predictive uncertainty estimation.

make_in_domain_target_alpha

Construct target Dirichlet distribution for in-distribution samples.

make_ood_target_alpha

Construct flat Dirichlet target distribution for out-of-distribution samples.

natpn_loss

Natural Posterior Network (NatPN) classification loss.

normal_wishart_log_prob

Compute a simplified univariate Normal-Wishart log-likelihood.

pn_loss

Paired ID/OOD training loss for Dirichlet Prior Networks.

postnet_loss

Posterior Networks (PostNet) classification loss.

predictive_probs

Expected categorical probabilities under Dirichlet.
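The expected categorical probabilities under a Dirichlet are its mean, E[p_k] = α_k / Σ_j α_j. A one-line stdlib sketch of that identity (list-based and unbatched; the library version presumably operates on tensors):

```python
def dirichlet_mean(alpha):
    # Mean of Dir(alpha): normalize the concentration parameters.
    # NOTE: illustrative sketch, not the probly API.
    S = sum(alpha)
    return [a / S for a in alpha]
```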

regularization_fn

Regularization term for Information Robust Dirichlet Networks.

rpn_distillation_loss

Compute the distillation loss for Regression Prior Networks (RPN).

rpn_loss

Paired in-distribution and out-of-distribution loss for Regression Prior Networks.

rpn_ng_kl

KL divergence between two Normal-Gamma distributions.

rpn_prior

Normal-Gamma prior with zero evidence for Regression Prior Networks.

unified_evidential_train

Train a given neural network using one of several evidential learning approaches, selected according to the method of a chosen paper.