Retrospective Loss: Looking Back to Improve Training of Deep Neural Networks

Jandial, Surgan and Chopra, Ayush and Sarkar, Mausoom and Gupta, Piyush and Krishnamurthy, Balaji and Balasubramanian, Vineeth (2020) Retrospective Loss: Looking Back to Improve Training of Deep Neural Networks. In: Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 23 August 2020 - 27 August 2020.

Full text not available from this repository.

Abstract

Deep neural networks (DNNs) are powerful learning machines that have enabled breakthroughs in several domains. In this work, we introduce a new retrospective loss to improve the training of deep neural network models by utilizing the prior experience available in past model states during training. Minimizing the retrospective loss, along with the task-specific loss, pushes the parameter state at the current training step towards the optimal parameter state while pulling it away from the parameter state at a previous training step. Although the idea is simple, we analyze the method and conduct comprehensive sets of experiments across domains - images, speech, text, and graphs - to show that the proposed loss results in improved performance across input domains, tasks, and architectures.
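
The abstract describes the loss only at a high level; the sketch below is one plausible PyTorch rendering of the idea, in which a retrospective term pulls the current model's predictions toward the ground truth while pushing them away from the predictions of a frozen past checkpoint. The margin weight kappa, the L1 distance, the warm-up gating, and the names `model`/`past_model` are illustrative assumptions, not the paper's exact formulation or hyperparameters.

```python
import torch
import torch.nn.functional as F

def retrospective_loss(current_out, target, past_out, kappa=2.0):
    """Margin-style retrospective term: pull current predictions toward the
    target, push them away from a past model state's predictions.
    (kappa and the L1 distance are illustrative choices, not necessarily
    the paper's settings.)"""
    pull = F.l1_loss(current_out, target)    # toward ground truth
    push = F.l1_loss(current_out, past_out)  # away from the past state
    return (kappa + 1.0) * pull - kappa * push

def train_step(model, past_model, optimizer, x, y, task_loss_fn, warmup_done):
    """One training step: task loss plus, after warm-up, the retrospective
    term computed against a frozen copy of an earlier parameter state
    (`past_model` is assumed to be refreshed periodically by the caller)."""
    optimizer.zero_grad()
    out = model(x)
    loss = task_loss_fn(out, y)
    if warmup_done:
        with torch.no_grad():
            past_out = past_model(x)  # no gradients through the past state
        loss = loss + retrospective_loss(out, y, past_out)
    loss.backward()
    optimizer.step()
    return loss.item()
```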

IITH Creators: Balasubramanian, Vineeth (ORCiD: UNSPECIFIED)
Item Type: Conference or Workshop Item (Paper)
Uncontrolled Keywords: Learning machines; Model state; Neural network model; Optimal parameter; Parameter state; Prior experience; Data mining; Deep neural networks; Image enhancement; Learning systems
Subjects: Computer science
Depositing User: LibTrainee 2021
Date Deposited: 31 Jul 2021 11:14
Last Modified: 07 Mar 2022 10:49
URI: http://raiithold.iith.ac.in/id/eprint/8610
Publisher URL: http://doi.org/10.1145/3394486.3403165
