Heinrich K, Möller B, Janiesch C, Zschech P (2019)
Publication Language: English
Publication Type: Conference contribution
Publication year: 2019
Publisher: Association for Information Systems
Conference Proceedings Title: Proceedings of the Pre-ICIS SIGDSA Symposium
There exist numerous scientific contributions to the design of deep learning networks. However, choosing the right architecture for a given business problem under constraints such as memory and inference time requirements can be cumbersome. We reflect on the evolution of the state-of-the-art architectures for convolutional neural networks (CNN) for the case of image classification. We compare architectures regarding classification results, model size, and inference time to discuss the design choices for CNN architectures. To maintain scientific comprehensibility, the established ILSVRC benchmark is used as a basis for model selection and benchmark data. The quantitative comparison shows that while model size and required inference time correlate with result accuracy across all architectures, there are major trade-offs between those factors. The qualitative analysis further shows that published models always build on previous research and adopt improved components in either evolutionary or revolutionary ways. Finally, we discuss design and result improvement during the evolution of CNN architectures. Further, we derive practical implications for designing deep learning networks.
Heinrich, K., Möller, B., Janiesch, C., & Zschech, P. (2019). Is Bigger Always Better? Lessons Learnt from the Evolution of Deep Learning Architectures for Image Classification. In Proceedings of the Pre-ICIS SIGDSA Symposium. Munich, DE: Association for Information Systems.
Heinrich, Kai, et al. "Is Bigger Always Better? Lessons Learnt from the Evolution of Deep Learning Architectures for Image Classification." Proceedings of the Pre-ICIS SIGDSA Symposium, Munich, Association for Information Systems, 2019.