Best Practices for Fine-tuning Visual Classifiers to New Domains.

Date
2016
Author
Chu, Brian
Madhavan, Vashisht
Beijbom, Oscar
Hoffman, Judy
Darrell, Trevor
Status
Published
Abstract
Recent studies have shown that features from deep convolutional neural networks learned using large labeled datasets, like ImageNet, provide effective representations for a variety of visual recognition tasks. They achieve strong performance as generic features and are even more effective when fine-tuned to target datasets. However, details of the fine-tuning procedure across datasets and with different amounts of labeled data are not well studied, and choosing the best fine-tuning method is often left to trial and error. In this work we systematically explore the design space for fine-tuning and give recommendations based on two key characteristics of the target dataset: visual distance from the source dataset and the amount of available training data. Through a comprehensive experimental analysis, we conclude, with a few exceptions, that it is best to copy as many layers of a pre-trained network as possible, and then adjust the level of fine-tuning based on the visual distance fro.....
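The recommendation in the abstract — copy every layer of a pre-trained network, replace the classifier head, and then decide how many of the copied layers to freeze versus fine-tune — can be sketched in PyTorch. This is a minimal illustration, not the authors' exact protocol: the tiny `make_backbone` network is a hypothetical stand-in for an ImageNet-trained model, and `n_frozen` is an assumed knob for the freeze depth (more frozen layers when the target is visually close to the source or training data is scarce).

```python
import torch.nn as nn

def make_backbone():
    # Hypothetical small CNN standing in for a real pre-trained model;
    # in practice you would load e.g. ImageNet weights here.
    return nn.Sequential(
        nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
        nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(16, 10),  # source classifier head (10 source classes)
    )

def transfer(source, num_target_classes, n_frozen):
    """Copy all layers from the pre-trained source network, replace the
    classifier head for the target task, and freeze the first n_frozen
    layers so only the remaining layers are fine-tuned."""
    target = make_backbone()
    target.load_state_dict(source.state_dict())        # copy every layer
    target[-1] = nn.Linear(16, num_target_classes)     # new target head
    for i, layer in enumerate(target):
        if i < n_frozen:                               # freeze early layers
            for p in layer.parameters():
                p.requires_grad = False
    return target

source = make_backbone()
# Visually close target with little data: freeze the whole copied
# backbone (indices 0-5) and train only the new head.
model = transfer(source, num_target_classes=5, n_frozen=6)
```

Lowering `n_frozen` fine-tunes more of the copied layers, which the paper's analysis suggests for targets that are visually distant from the source or have ample labeled data.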
Resource URL
http://adas.cvc.uab.es/task-cv2016/papers/0002.pdf
Title of Book
Computer Vision – ECCV 2016 Workshops: Amsterdam, The Netherlands ..., Part 3.
Editor(s) of Book
Hua, Gang; Jégou, Hervé
Page Range
pp. 435-442
Publisher
Springer, Switzerland
Document Language
en
Sustainable Development Goals (SDG)
14.A
Essential Ocean Variables (EOV)
Zooplankton biomass and diversity
Maturity Level
TRL 4 Component/subsystem validation in laboratory environment
Best Practice Type
Best Practice
Citation
Chu, B. et al. (2016) Best Practices for Fine-tuning Visual Classifiers to New Domains. In: Computer Vision, ECCV 2016 Workshops. (eds. G. Hua and H. Jégou). Switzerland, Springer International Publishing, 8pp. DOI: http://dx.doi.org/10.25607/OBP-765