probly.train.evidential.torch
Unified Evidential Train Function.
Functions
- Deep Evidential Regression loss for uncertainty-aware regression.
- Dirichlet entropy for predictive uncertainty estimation.
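To make the entropy summary concrete, here is a minimal sketch of the Dirichlet entropy in PyTorch, using the standard closed form; this is an illustration, not `probly`'s actual implementation, and the function name is hypothetical:

```python
import torch

def dirichlet_entropy(alpha: torch.Tensor) -> torch.Tensor:
    """Entropy of Dirichlet(alpha) per batch row; alpha has shape (batch, K).

    H = log B(alpha) + (alpha_0 - K) * psi(alpha_0) - sum_j (alpha_j - 1) * psi(alpha_j)
    where alpha_0 = sum_j alpha_j and psi is the digamma function.
    """
    alpha0 = alpha.sum(dim=-1)  # total concentration per row
    # log B(alpha) = sum_j lgamma(alpha_j) - lgamma(alpha_0)
    log_beta = torch.lgamma(alpha).sum(-1) - torch.lgamma(alpha0)
    K = alpha.shape[-1]
    return (log_beta
            + (alpha0 - K) * torch.digamma(alpha0)
            - ((alpha - 1.0) * torch.digamma(alpha)).sum(-1))
```

Higher entropy indicates a flatter Dirichlet, i.e. more predictive uncertainty; the result matches `torch.distributions.Dirichlet(alpha).entropy()`.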
- Evidential Cross Entropy Loss for classification uncertainty estimation.
- Evidential KL divergence loss for classification uncertainty estimation.
- Evidential Log Loss for classification uncertainty estimation.
- Evidential Mean Squared Error loss for classification uncertainty estimation.
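For the MSE variant, a common formulation is the expected squared error under the Dirichlet (as in Sensoy et al.'s evidential classification loss), which decomposes into an error term and a variance term. The sketch below assumes that formulation and one-hot targets; the name and exact form used by `probly` are assumptions:

```python
import torch

def evidential_mse(alpha: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Expected MSE under Dirichlet(alpha); y is one-hot, both shaped (batch, K)."""
    S = alpha.sum(dim=-1, keepdim=True)  # Dirichlet strength alpha_0
    p = alpha / S                        # expected class probabilities
    err = (y - p) ** 2                   # squared error of the predictive mean
    var = p * (1.0 - p) / (S + 1.0)      # variance of the class probabilities
    return (err + var).sum(dim=-1)       # per-sample loss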
- Evidence-based Normal-Inverse-Gamma (NIG) regression loss.
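One common form of this loss is the NIG negative log-likelihood from deep evidential regression (Amini et al.), where the network predicts `(gamma, nu, alpha, beta)` per target. Whether `probly` uses exactly this parameterization is an assumption; the sketch is illustrative:

```python
import math
import torch

def nig_nll(gamma: torch.Tensor, nu: torch.Tensor, alpha: torch.Tensor,
            beta: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Negative log-likelihood of y under a Normal-Inverse-Gamma evidential head.

    gamma: predicted mean, nu: virtual observations for the mean,
    alpha/beta: Inverse-Gamma parameters for the variance.
    """
    omega = 2.0 * beta * (1.0 + nu)
    return (0.5 * torch.log(math.pi / nu)
            - alpha * torch.log(omega)
            + (alpha + 0.5) * torch.log(nu * (y - gamma) ** 2 + omega)
            + torch.lgamma(alpha)
            - torch.lgamma(alpha + 0.5))
```

The loss grows with the squared residual `(y - gamma)^2`, scaled by the predicted evidence, so poorly fit targets with high claimed confidence are penalized most.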
- Regularization term for evidential regression.
- Information Robust Dirichlet (IRD) loss for predictive uncertainty estimation.
- Compute KL(Dir(alpha_p) || Dir(alpha_q)) for each batch item.
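The Dirichlet-to-Dirichlet KL divergence has a standard closed form in terms of log-gamma and digamma functions. A minimal batched sketch (the function name is hypothetical, not `probly`'s API):

```python
import torch

def dirichlet_kl(alpha_p: torch.Tensor, alpha_q: torch.Tensor) -> torch.Tensor:
    """KL(Dir(alpha_p) || Dir(alpha_q)) per batch row; both shaped (batch, K)."""
    sum_p = alpha_p.sum(-1)  # alpha_0 of p
    sum_q = alpha_q.sum(-1)  # alpha_0 of q
    return (torch.lgamma(sum_p) - torch.lgamma(alpha_p).sum(-1)
            - torch.lgamma(sum_q) + torch.lgamma(alpha_q).sum(-1)
            + ((alpha_p - alpha_q)
               * (torch.digamma(alpha_p)
                  - torch.digamma(sum_p).unsqueeze(-1))).sum(-1))
```

The result agrees with `torch.distributions.kl_divergence` applied to two `Dirichlet` instances, and is zero when the concentrations coincide.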
- Lp calibration loss for predictive uncertainty estimation.
- Construct target Dirichlet distribution for in-distribution samples.
- Construct flat Dirichlet target distribution for out-of-distribution samples.
- Natural Posterior Network (NatPN) classification loss.
- Compute simplified univariate Normal-Wishart log-likelihood.
- Paired ID/OOD training loss for Dirichlet Prior Networks.
- Posterior Networks (PostNet) classification loss.
- Expected categorical probabilities under a Dirichlet distribution.
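The expected categorical probabilities are just the normalized concentrations, `E[p] = alpha / alpha_0`. A minimal sketch (hypothetical name):

```python
import torch

def dirichlet_mean(alpha: torch.Tensor) -> torch.Tensor:
    """Expected categorical probabilities E[p] = alpha / sum(alpha), per row."""
    return alpha / alpha.sum(dim=-1, keepdim=True)
```

Each row of the output is a valid probability vector (non-negative, summing to one), which is what evidential classifiers report as the point prediction.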
- Regularization term for Information Robust Dirichlet Networks.
- Compute the distillation loss for Regression Prior Networks (RPN).
- Paired in-distribution and out-of-distribution loss for Regression Prior Networks.
- KL divergence between two Normal-Gamma distributions.
- Normal-Gamma prior with zero evidence for Regression Prior Networks.
Trains a given neural network using one of several learning approaches, selected according to the method of a chosen paper.