Sequence-to-Sequence Learning for Human Pose Correction in Videos

Swetha, Sirnam and Balasubramanian, Vineeth N and Jawahar, C V (2017) Sequence-to-Sequence Learning for Human Pose Correction in Videos. In: 4th IAPR Asian Conference on Pattern Recognition (ACPR), 26-29 November 2017, Nanjing, China.

Full text not available from this repository.

Abstract

The power of ConvNets has been demonstrated in a wide variety of vision tasks, including pose estimation. However, they often produce grossly erroneous predictions in videos due to unusual poses, challenging illumination, blur, self-occlusions, etc. These erroneous predictions can be refined by leveraging previous and future predictions, exploiting the temporal smoothness constraint in videos. In this paper, we present a generic approach for pose correction in videos using sequence learning that makes minimal assumptions about the sequence structure. The proposed model is generic, fast, and surpasses the state of the art on benchmark datasets. We use a generic pose estimator to obtain initial pose estimates, which are then refined by our method. The proposed architecture uses a Long Short-Term Memory (LSTM) encoder-decoder model to encode the temporal context and refine the estimates. We show a 3.7% gain over the baseline Yang & Ramanan (YR) and a 2.07% gain over the Spatial Fusion Network (SFN) on a new, challenging YouTube Pose Subset dataset.
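The abstract describes refining per-frame pose estimates with an LSTM encoder-decoder: an off-the-shelf estimator produces noisy joint coordinates for each frame, the encoder summarizes the sequence into a temporal context, and the decoder re-emits a refined pose per frame. The toy sketch below (not the authors' code; weights are random, the joint count and hidden size are arbitrary assumptions) illustrates only this data flow in plain NumPy:

```python
import numpy as np

# Hypothetical sketch of the sequence-to-sequence refinement pipeline:
# noisy per-frame poses -> LSTM encoder (temporal context) -> LSTM decoder
# -> refined poses. Random weights, so this shows structure, not accuracy.

rng = np.random.default_rng(0)

def lstm_step(x, h, c, W):
    """One LSTM step; W maps [x; h] to the four gate pre-activations."""
    z = W @ np.concatenate([x, h])
    i, f, o, g = np.split(z, 4)
    i, f, o = 1 / (1 + np.exp(-i)), 1 / (1 + np.exp(-f)), 1 / (1 + np.exp(-o))
    c = f * c + i * np.tanh(g)          # update cell state
    h = o * np.tanh(c)                  # emit hidden state
    return h, c

def refine_poses(poses, hidden=32):
    """poses: (T, D) noisy joint coordinates -> (T, D) refined sequence."""
    T, D = poses.shape
    W_enc = rng.standard_normal((4 * hidden, D + hidden)) * 0.1
    W_dec = rng.standard_normal((4 * hidden, D + hidden)) * 0.1
    W_out = rng.standard_normal((D, hidden)) * 0.1
    h = c = np.zeros(hidden)
    # Encoder: fold the whole sequence (temporal context) into (h, c).
    for t in range(T):
        h, c = lstm_step(poses[t], h, c, W_enc)
    # Decoder: one refined pose per frame, conditioned on the context.
    out, prev = [], poses[0]
    for t in range(T):
        h, c = lstm_step(prev, h, c, W_dec)
        refined = W_out @ h + poses[t]  # residual correction of frame t
        out.append(refined)
        prev = refined
    return np.stack(out)

seq = rng.standard_normal((10, 26))     # 10 frames, 13 joints * (x, y)
refined = refine_poses(seq)
print(refined.shape)                    # (10, 26)
```

In a trained version, the encoder/decoder weights would be learned so the residual term pulls each frame toward its temporally consistent neighbours; here the residual connection simply makes the untrained output stay near the input.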

IITH Creators: Balasubramanian, Vineeth N (ORCiD: UNSPECIFIED)
Item Type: Conference or Workshop Item (Paper)
Uncontrolled Keywords: Pose estimation, sequence to sequence learning, LSTM
Subjects: Computer science
Divisions: Department of Computer Science & Engineering
Depositing User: Team Library
Date Deposited: 17 May 2019 04:59
Last Modified: 17 May 2019 04:59
URI: http://raiithold.iith.ac.in/id/eprint/5211
Publisher URL: http://doi.org/10.1109/ACPR.2017.126
Related URLs:
