Comparison between suitable priors for additive Bayesian networks / Kratzer G.; Furrer R.; Pittavino M. - ELECTRONIC. - 296 (2019), pp. 95-104. (Paper presented at the conference Bayesian Statistics and New Generations: BAYSM 2018, Warwick, UK, July 2-3) [10.1007/978-3-030-30611-3_10].
Comparison between suitable priors for additive Bayesian networks
Pittavino M.
2019
Abstract
Additive Bayesian networks (ABNs) are a class of graphical models that extend the usual Bayesian generalised linear model to multiple dependent variables through the factorisation of the joint probability distribution of the underlying variables. When fitting an ABN model, the choice of prior for the parameters is of crucial importance. If an inadequate prior is used, such as one that is not sufficiently informative, data separation and data sparsity may cause problems in the model selection process. In this work we present a simulation study comparing two weakly informative priors with a strongly informative one. The first weakly informative prior is a zero-mean Gaussian with large variance, currently implemented in the R package abn. The candidate prior is a Student's t-distribution specifically designed for logistic regression. Finally, the strongly informative prior is a Gaussian with mean equal to the true parameter value and small variance. We compare the impact of these priors on the accuracy of the learned additive Bayesian network as a function of different parameters. We also design a simulation study to illustrate Lindley's paradox arising from the prior choice. We conclude by highlighting the good performance of the Student's t prior and the limited impact of Lindley's paradox. Finally, we provide suggestions for further developments.
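A minimal, hypothetical sketch in R (not the authors' code) of the comparison described in the abstract: it scores a single logistic-regression node under the three priors via a Laplace approximation of the log marginal likelihood, similar in spirit to the node scores used in ABN structure learning. All hyperparameter values (variance 1000 for the weak zero-mean Gaussian, a Student's t with 7 degrees of freedom and scale 2.5, and standard deviation 0.5 for the strong Gaussian centred at the true coefficients) are illustrative assumptions, not taken from the paper.

```r
set.seed(1)

## Simulate one binary node with a single continuous parent;
## the true coefficients (intercept, slope) are chosen for illustration.
n     <- 200
beta0 <- c(0.5, 1.5)
x     <- cbind(1, rnorm(n))
y     <- rbinom(n, 1, plogis(as.vector(x %*% beta0)))

## Log unnormalised posterior = Bernoulli log likelihood + log prior.
log_post <- function(beta, log_prior) {
  eta <- as.vector(x %*% beta)
  sum(dbinom(y, 1, plogis(eta), log = TRUE)) + log_prior(beta)
}

## The three priors compared in the abstract; all hyperparameters here
## (variance 1000, t with 7 df and scale 2.5, sd 0.5) are assumptions.
priors <- list(
  weak_gaussian   = function(b) sum(dnorm(b, 0, sqrt(1000), log = TRUE)),
  student_t       = function(b) sum(dt(b / 2.5, df = 7, log = TRUE) - log(2.5)),
  strong_gaussian = function(b) sum(dnorm(b, beta0, 0.5, log = TRUE))
)

## Laplace approximation of the log marginal likelihood:
## log p(y) ~ log p(y, beta_hat) + (d/2) log(2*pi) - 0.5 * log|H|,
## where H is the negative Hessian of the log joint at its mode.
laplace_logml <- function(log_prior) {
  fit <- optim(c(0, 0), function(b) -log_post(b, log_prior),
               method = "BFGS", hessian = TRUE)
  as.numeric(-fit$value + 0.5 * length(fit$par) * log(2 * pi) -
               0.5 * determinant(fit$hessian, logarithm = TRUE)$modulus)
}

sapply(priors, laplace_logml)
```

Differences between the resulting node scores across priors are what drive prior-dependent structure selection, which is the mechanism behind the Lindley's paradox illustration mentioned above.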
| File | Type | License | Size | Format |
|---|---|---|---|---|
| P11_BayesianStatisticsAndNewGenerations_ComparisonBetweenSuitablePriorForABN.pdf (open access) | Editorial PDF (Version of record) | Open Access | 4.57 MB | Adobe PDF |