The Benchmarking Keras PyTorch GitHub project benchmarks every pre-trained model in PyTorch and Keras (TensorFlow). All benchmarks are reproducible.
Why this is helpful
Combining Keras and PyTorch benchmarks into a single framework lets researchers decide which platform is best for a given model. For example, ResNet architectures perform better in PyTorch, while Inception architectures perform better in Keras (see below). These benchmarks serve as a standard from which to start new projects or debug current implementations.
But pre-trained models are already reproducible… right?
In PyTorch, yes. However, some Keras users struggle with reproducibility, with issues falling into three categories:
- The published benchmarks on Keras Applications cannot be reproduced, even when exactly copying the example code. In fact, their reported accuracies (as of Feb. 2019) are usually higher than the actual accuracies. See here1 and here2.
- Some pre-trained Keras models yield inconsistent or lower accuracies when deployed on a server (here3) or run in sequence with other Keras models (here4).
- Keras models using batch normalization can be unreliable. For some models, forward-pass evaluations (with gradients supposedly off) still result in weights changing at inference time. See here5.
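One way to detect this behavior is to fingerprint a model's weights before and after a forward pass and check that nothing changed. The sketch below is framework-agnostic: `get_weights` and `forward` are hypothetical stand-ins for the corresponding Keras calls (e.g. `model.get_weights()` and `model.predict()`), and the stub linear "model" only illustrates the check.

```python
import hashlib
import numpy as np

def weights_fingerprint(weights):
    """Hash a list of weight arrays so any change is easy to detect."""
    h = hashlib.sha256()
    for w in weights:
        h.update(np.ascontiguousarray(w).tobytes())
    return h.hexdigest()

def forward_pass_mutates_weights(get_weights, forward, x):
    """Return True if a single forward pass changed any weight."""
    before = weights_fingerprint(get_weights())
    forward(x)
    after = weights_fingerprint(get_weights())
    return before != after

# Stand-in "model": a fixed linear layer whose weights never change.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))
get_weights = lambda: [W]
forward = lambda x: x @ W

x = rng.standard_normal((1, 4))
print(forward_pass_mutates_weights(get_weights, forward, x))  # False: weights are stable
```

For a model with the batch-normalization issue described above, the same check would return True after inference.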
One of the goals of this project is to help reconcile some of these issues with reproducible benchmarks for Keras pre-trained models. The way I deal with these issues is three-fold. In Keras, I
- avoid batches during inference.
- run each example one at a time. This is silly slow, but yields a reproducible output for every model.
- only run models in local functions or use `with` clauses to ensure no aspects of a previous model persist in memory when the next model is loaded.
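The steps above can be sketched as follows. This is a minimal illustration, not the repo's actual evaluation code: `build_model` and the example list are hypothetical stand-ins for the real Keras loader and the ImageNet validation set, and the model is a stub linear map.

```python
import gc
import numpy as np

def evaluate_in_isolation(build_model, examples):
    """Build a model inside a local scope, score one example at a time
    (no batching), and let everything be garbage-collected on return."""
    model = build_model()
    correct = 0
    for x, label in examples:
        pred = int(np.argmax(model(x)))  # forward pass on a single example
        correct += (pred == label)
    return correct / len(examples)

# Stand-in model: a fixed linear map, in place of a Keras network.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))
build_model = lambda: (lambda x: x @ W)

examples = [(rng.standard_normal(4), 0) for _ in range(5)]
acc = evaluate_in_isolation(build_model, examples)
gc.collect()  # nothing from the model should survive past this point
print(0.0 <= acc <= 1.0)
```

With real Keras models, one would additionally call `keras.backend.clear_session()` between models to reset backend state.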
Below I provide a table of actual validation set accuracies for both Keras and PyTorch (verified on macOS 10.11.6, Linux Debian 9, and Ubuntu 18.04).
Benchmark Results on ImageNet
Get the ImageNet validation dataset
- Preprocess/Extract validation data
Once `ILSVRC2012_img_val.tar` is downloaded, run:
```bash
# Credit to Soumith: https://github.com/soumith/imagenet-multiGPU.torch
$ cd ../ && mkdir val && mv ILSVRC2012_img_val.tar val/ && cd val && tar -xvf ILSVRC2012_img_val.tar
$ wget -qO- https://raw.githubusercontent.com/soumith/imagenetloader.torch/master/valprep.sh | bash
```
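After `valprep.sh` finishes, the validation images should be sorted into 1,000 synset subdirectories holding 50,000 images in total. A quick sanity check (this helper is my own sketch, not part of the repo):

```python
import os

def count_val_images(val_dir):
    """Count class subdirectories and total images under an ImageNet val/ layout."""
    classes = [d for d in os.listdir(val_dir)
               if os.path.isdir(os.path.join(val_dir, d))]
    n_images = sum(len(os.listdir(os.path.join(val_dir, c))) for c in classes)
    return len(classes), n_images

# For a correctly prepared validation set, this should return (1000, 50000).
```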
Reproduce in 10 seconds
The top-5 predictions for every example in the ImageNet validation set have been pre-computed for you here for Keras models and here for PyTorch models. The following commands use these pre-computed outputs to produce the Keras and PyTorch benchmarks in a few seconds:
```bash
$ git clone https://github.com/cgnorthcutt/benchmarking-keras-pytorch.git
$ cd benchmarking-keras-pytorch
$ python imagenet_benchmarking.py /path/to/imagenet_val_data
```
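The benchmark itself reduces to comparing each true label against a model's stored top-5 predictions. A minimal NumPy sketch (the array layout here is my own assumption; the repo's stored file format may differ):

```python
import numpy as np

def topk_accuracies(top5_preds, labels):
    """top5_preds: (n, 5) int array of predicted class ids, best first.
    labels: (n,) int array of true class ids.
    Returns (top-1 accuracy, top-5 accuracy)."""
    labels = labels[:, None]
    top1 = np.mean(top5_preds[:, :1] == labels)
    top5 = np.mean(np.any(top5_preds == labels, axis=1))
    return top1, top5

preds = np.array([[3, 1, 4, 1, 5],
                  [2, 7, 1, 8, 2],
                  [9, 9, 8, 2, 6]])
labels = np.array([3, 8, 0])
print(topk_accuracies(preds, labels))  # (0.333..., 0.666...)
```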
Reproduce model outputs (hours)
You can also reproduce the inference-time output of each Keras and PyTorch model without using the pre-computed data. Inference in Keras takes a long time (5-10 hours) because I compute the forward pass on each example one at a time and avoid vectorized operations: this was the only approach I found that reliably reproduces the same accuracies. PyTorch is fairly quick (less than one hour). To reproduce:
```bash
$ git clone https://github.com/cgnorthcutt/benchmarking-keras-pytorch.git
$ cd benchmarking-keras-pytorch

$ # Compute outputs of PyTorch models (1 hour)
$ ./imagenet_pytorch_get_predictions.py /path/to/imagenet_val_data

$ # Compute outputs of Keras models (5-10 hours)
$ ./imagenet_keras_get_predictions.py /path/to/imagenet_val_data

$ # View benchmark results
$ ./imagenet_benchmarking.py /path/to/imagenet_val_data
```
You can control GPU usage, batch size, output storage directories, and more. Run the scripts with the `-h` flag to see the command-line argument options.
Thanks to Jessy Lin for pointing out the issues with batch normalization in Keras and Anish Athalye for feedback.