Bayesian Neural Network
What Are Bayesian Neural Networks?
Bayesian Neural Networks (BNNs) extend standard neural networks with posterior inference over their parameters, which helps control over-fitting. More broadly, the Bayesian approach attaches a probability distribution to every quantity of interest, including the model parameters (the weights and biases of a neural network). In most programming languages, a variable holds a specific value and returns that same value every time you access it; a Bayesian parameter, by contrast, is a random variable described by a distribution. Let’s begin by revisiting a simple linear model, which predicts its output as a weighted sum of the input features.
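A minimal sketch of the contrast, assuming hypothetical weight values and (purely for illustration) independent Gaussian posteriors over them — a deterministic linear model gives one fixed answer, while the Bayesian version gives a distribution of answers:

```python
import numpy as np

rng = np.random.default_rng(0)

# A standard linear model: each weight is a single fixed number,
# so the same input always produces the same prediction.
x = np.array([1.0, 2.0, 3.0])   # input features (illustrative values)
w = np.array([0.5, -0.2, 0.1])  # fixed weights
b = 0.3                         # fixed bias
y_point = x @ w + b             # one deterministic answer

# In the Bayesian view, each parameter instead has a probability
# distribution. Here we assume (hypothetically) independent Gaussian
# posteriors centred on the values above with standard deviation 0.1.
n_samples = 1000
w_samples = rng.normal(loc=w, scale=0.1, size=(n_samples, 3))
b_samples = rng.normal(loc=b, scale=0.1, size=n_samples)
y_samples = w_samples @ x + b_samples  # a distribution of answers

print("point prediction:", y_point)
print("Bayesian mean/std:", y_samples.mean(), y_samples.std())
```

The sampled predictions cluster around the deterministic answer, but their spread carries extra information: how confident the model is in that answer.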
What Are Some of the Main Advantages of BNNs?
- Bayesian neural nets are useful for solving problems in domains where data is scarce, as a way to prevent overfitting. Example applications are molecular biology and medical diagnosis (areas where data often come from costly and difficult experimental work).
- Bayesian nets are broadly applicable across domains.
- They can obtain better results on a wide range of tasks; however, they are extremely difficult to scale to large problems.
- BNNs let you automatically quantify the error associated with your predictions when dealing with data whose targets are unknown.
- They allow you to estimate uncertainty in predictions, which is a valuable feature for fields like medicine.
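The uncertainty estimation mentioned above can be sketched with Monte Carlo sampling: draw several weight settings from the (assumed) posterior, run each through the network, and read off the spread of the predictions. The tiny one-hidden-layer network and the Gaussian posteriors below are illustrative assumptions, not a fitted model:

```python
import numpy as np

rng = np.random.default_rng(1)

def forward(x, w1, b1, w2, b2):
    """A one-hidden-layer network with tanh activation."""
    h = np.tanh(x @ w1 + b1)
    return h @ w2 + b2

# Hypothetical posterior: a Gaussian over every parameter,
# with arbitrarily chosen means and standard deviation 0.2.
in_dim, hidden = 2, 4
w1_mean = rng.normal(size=(in_dim, hidden))
w2_mean = rng.normal(size=(hidden, 1))

x = np.array([0.5, -1.0])  # a single test input

# Each posterior sample is one plausible network; collect its prediction.
preds = []
for _ in range(500):
    w1 = w1_mean + 0.2 * rng.normal(size=w1_mean.shape)
    b1 = 0.2 * rng.normal(size=hidden)
    w2 = w2_mean + 0.2 * rng.normal(size=w2_mean.shape)
    b2 = 0.2 * rng.normal(size=1)
    preds.append(forward(x, w1, b1, w2, b2).item())

preds = np.array(preds)
# The standard deviation across samples is the model's
# uncertainty about this particular input.
print(f"prediction: {preds.mean():.3f} +/- {preds.std():.3f}")
```

Inputs the model has never seen typically produce a larger spread, which is exactly the signal that makes BNNs attractive in high-stakes domains.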
Why Should You Use Bayesian Neural Networks?
Instead of settling on a single answer to a question, Bayesian methods let you consider an entire distribution of answers. This approach naturally addresses issues such as:
- regularization (controlling overfitting),
- model selection/comparison, without the need for a separate cross-validation data set.