Transfer Learning, Finding Learning Rate, and More


Training a neural network from scratch is unnecessary when pretrained models are readily available. This notebook shows how to load a pretrained model from the torchvision.models subpackage, fine-tune it for the specific problem at hand, determine a good learning rate, define custom transforms, and finally combine several models into an ensemble.


Once again, the examples are based on Chapter 4 of Ian Pointer’s “Programming PyTorch for Deep Learning”, released by O’Reilly in 2019.
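The learning-rate search described in that chapter can be sketched as follows: run one pass over the data while growing the learning rate exponentially, record the loss at each step, and stop once the loss blows up. One then plots loss against log learning rate and picks a value from the steeply descending region. This is a hedged approximation of the chapter's approach; the start/end rates and the blow-up threshold are illustrative choices.

```python
import math
import torch


def find_lr(model, loss_fn, optimizer, train_loader,
            init_value=1e-8, final_value=10.0, device="cpu"):
    # Multiply the learning rate by a fixed factor each batch so that it
    # sweeps from init_value to final_value over one epoch.
    num_steps = len(train_loader) - 1
    update_step = (final_value / init_value) ** (1 / num_steps)
    lr = init_value
    optimizer.param_groups[0]["lr"] = lr
    best_loss = float("inf")
    log_lrs, losses = [], []
    for inputs, targets in train_loader:
        inputs, targets = inputs.to(device), targets.to(device)
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), targets)
        # Stop early once the loss clearly diverges.
        if loss.item() > 4 * best_loss:
            break
        best_loss = min(best_loss, loss.item())
        losses.append(loss.item())
        log_lrs.append(math.log10(lr))
        loss.backward()
        optimizer.step()
        lr *= update_step
        optimizer.param_groups[0]["lr"] = lr
    # Plot losses against log_lrs and choose a learning rate from the
    # region where the loss is falling most steeply.
    return log_lrs, losses
```

The returned lists are meant for plotting; the first and last few points are usually discarded as noise before reading off the rate.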

The Jupyter notebook for this post may be found on GitHub:

Also, the same notebook is available on Google Colab (where it can be run on the GPU available there):
