Best Practices for Fine-tuning Visual Classifiers to New Domains.
Recent studies have shown that features from deep convolutional neural networks learned on large labeled datasets, such as ImageNet, provide effective representations for a variety of visual recognition tasks. They achieve strong performance as generic features and are even more effective when fine-tuned to target datasets. However, details of the fine-tuning procedure across datasets and with different amounts of labeled data are not well studied, and choosing the best fine-tuning method is often left to trial and error. In this work we systematically explore the design space for fine-tuning and give recommendations based on two key characteristics of the target dataset: visual distance from the source dataset and the amount of available training data. Through a comprehensive experimental analysis, we conclude, with a few exceptions, that it is best to copy as many layers of a pre-trained network as possible, and then adjust the level of fine-tuning based on the visual distance fro.....
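The abstract's recommendation (copy all pre-trained layers, then choose how deeply to fine-tune from the target dataset's visual distance to the source and its amount of labeled data) can be sketched as a simple decision helper. This is an illustrative assumption only: the thresholds, depths, and the function `layers_to_finetune` below are not from the paper.

```python
# Hedged sketch of the abstract's recommendation: keep all pre-trained
# layers, and pick how many of the TOP layers to fine-tune (the rest stay
# frozen) from (a) visual similarity to the source dataset and (b) the
# amount of labeled target data. All numeric thresholds and depths here
# are illustrative assumptions, not values reported in the paper.

def layers_to_finetune(visually_similar: bool,
                       num_labeled: int,
                       total_layers: int = 8) -> int:
    """Return how many top layers to fine-tune; remaining layers are frozen."""
    if num_labeled < 1000:
        # Scarce labels: mostly reuse the pre-trained features,
        # tuning slightly deeper when the domain is visually distant.
        return 1 if visually_similar else 2
    if visually_similar:
        # Close domain with enough data: fine-tune the upper half.
        return total_layers // 2
    # Distant domain with enough data: fine-tune the whole network.
    return total_layers

# Example: a visually distant target domain with plenty of labels
print(layers_to_finetune(visually_similar=False, num_labeled=50_000))  # -> 8
```

In a real training loop this depth would translate into freezing the early parameter groups of the copied network (e.g. setting `requires_grad = False` in PyTorch) and training only the last N layers plus a freshly initialized classifier head.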
Title of Book: Computer Vision – ECCV 2016 Workshops: Amsterdam, The Netherlands ..., Part 3.
Editor(s) of Book: Hua, Gang
Sustainable Development Goals (SDG): 14.A
Essential Ocean Variables (EOV): Zooplankton biomass and diversity
Maturity Level: TRL 4 Component/subsystem validation in laboratory environment
Best Practice Type: Best Practice
Citation: Chu, B. et al. (2016) Best Practices for Fine-tuning Visual Classifiers to New Domains. In: Computer Vision – ECCV 2016 Workshops (eds. G. Hua and H. Jégou). Switzerland, Springer International Publishing, 8pp. DOI: http://dx.doi.org/10.25607/OBP-765