Conversation
Added custom print function to minibatch loop
Added output plots
Removed unneeded notebook
:param inference_algorithm: The applied inference algorithm
:type inference_algorithm: InferenceAlgorithm
:param grad_loop: The reference to the main loop of gradient optimization
Could you add a comment that this defaults to minibatch?
:param kwargs: The keyword arguments specify the data for inference. The key of each argument is the name of
the corresponding variable in the model definition, and the value of the argument is the data in numpy array format.
"""
# data = [kwargs[v] for v in self.observed_variable_names]
Can you remove this if you don't need it?
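For context, the commented-out line follows the kwargs convention described in the docstring: each keyword argument names a model variable and supplies its observed data as a numpy array. A minimal sketch of that convention, where `run_inference` and the variable names are stand-ins, not MXFusion's actual API:

```python
import numpy as np

def run_inference(**kwargs):
    # Collect observed data in a fixed order of variable names,
    # as the commented-out line in the diff does.
    observed_variable_names = ["x", "y"]
    data = [kwargs[v] for v in observed_variable_names]
    return [arr.shape for arr in data]

# Each keyword names a model variable; each value is its data array.
shapes = run_inference(x=np.zeros((10, 2)), y=np.ones((10, 1)))
```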
Looks cool Tom, haven't had a chance to actually go through what the results look like yet, but the changes to the core MXFusion codebase look fine to me.
@meissnereric can you have a look at the failing tests? Don't think this was happening before. |
I think this was happening before, I remember seeing it. The reason is that you're using Python 3.6-only string formatting in places. This style, `f"Context device id {ctx.device_id} outside range of list {ctx_list} or None"`, isn't supported in 3.4/3.5; use the classic `"blah".format()` style. Shouldn't be a big change, thanks Tom!
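For reference, the f-string can be converted mechanically to a 3.4/3.5-compatible call; a minimal sketch, where `device_id` and `ctx_list` are placeholder values standing in for the real context objects:

```python
# Placeholder values standing in for ctx.device_id and the context list.
device_id = 0
ctx_list = [1, 2]

# Python 3.6+ only:
# msg = f"Context device id {device_id} outside range of list {ctx_list} or None"

# Equivalent on Python 3.4/3.5 as well:
msg = "Context device id {} outside range of list {} or None".format(device_id, ctx_list)
print(msg)
```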
Codecov Report
@@ Coverage Diff @@
## develop #143 +/- ##
===========================================
- Coverage 85.19% 84.78% -0.42%
===========================================
Files 78 78
Lines 3850 3917 +67
Branches 654 666 +12
===========================================
+ Hits 3280 3321 +41
- Misses 376 395 +19
- Partials 194 201 +7
Continue to review full report at Codecov.
Description of changes:
This is some benchmarking of Bayesian Neural Networks (meanfield VI) against a non-Bayesian NN. Hopefully this could provide a useful starting point for further analysis (e.g. different kinds of BNN).
The script `examples/benchmarking/bnn_classification_benchmark.py` runs through several datasets (MNIST, FashionMNIST, CIFAR10, CIFAR100) with 3 different NN architectures. Several metrics are computed (accuracy, MSE (= Brier score), log loss). Some "sensible" defaults are set for the hyperparameters; no HP tuning is performed. Results are stored in the `results.txt` file as a list of JSON strings.

Also added a notebook in the notebooks directory for exploring the results. This outputs figures to the directory `examples/benchmarking/figs` (figures also included).

Changes to MXFusion core files:

- `mxfusion/components/functions/mxfusion_gluon_function.py`: made the exception more helpful
- `mxfusion/inference/batch_loop.py`: added a callback for custom status messages
- `mxfusion/inference/grad_based_inference.py`: added `GradIteratorBasedInference`, a version of `GradBasedInference` that operates on a data loader
- `mxfusion/inference/minibatch_loop.py`: fixed a bug that stopped it working on GPUs; added a callback for custom status messages

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.