Swetha, Sirnam and Balasubramanian, Vineeth N and Jawahar, C V (2017)
Sequence-to-Sequence Learning for Human Pose Correction in Videos.
In: 4th IAPR Asian Conference on Pattern Recognition (ACPR), 26-29 November 2017, Nanjing, China.
Full text not available from this repository.
Abstract
The power of ConvNets has been demonstrated in a wide variety of vision tasks, including pose estimation. However, they often produce grossly erroneous predictions in videos due to unusual poses, challenging illumination, blur, self-occlusions, etc. These erroneous predictions can be refined by leveraging previous and future predictions under a temporal smoothness constraint on the video. In this paper, we present a generic approach for pose correction in videos using sequence learning that makes minimal assumptions on the sequence structure. The proposed model is generic, fast and surpasses the state-of-the-art on benchmark datasets. We use a generic pose estimator for initial pose estimates, which are further refined using our method. The proposed architecture uses a Long Short-Term Memory (LSTM) encoder-decoder model to encode the temporal context and refine the estimates. We show a 3.7% gain over the baseline Yang & Ramanan (YR) and a 2.07% gain over the Spatial Fusion Network (SFN) on a new, challenging YouTube Pose Subset dataset.
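The abstract describes refining per-frame pose estimates by passing the whole sequence through an LSTM encoder-decoder. The following is a minimal NumPy sketch of that data flow only: the encoder consumes the noisy per-frame pose vectors into a context state, and the decoder emits one refined pose per frame. The weights here are random placeholders and the hyperparameters (hidden size, joint count) are illustrative assumptions, not the trained model from the paper.

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM step: gates computed from input x and previous state (h, c)."""
    z = W @ x + U @ h + b               # pre-activations for all four gates, (4H,)
    H = h.shape[0]
    i = 1.0 / (1.0 + np.exp(-z[:H]))        # input gate
    f = 1.0 / (1.0 + np.exp(-z[H:2*H]))     # forget gate
    o = 1.0 / (1.0 + np.exp(-z[2*H:3*H]))   # output gate
    g = np.tanh(z[3*H:])                    # candidate cell state
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

def refine_sequence(poses, W_e, U_e, b_e, W_d, U_d, b_d, W_out, b_out):
    """Encode a noisy pose sequence, then decode a refined pose per frame.

    poses: (T, D) array of per-frame pose vectors (D = 2 * number of joints).
    """
    T, D = poses.shape
    H = b_e.shape[0] // 4
    # --- encoder: fold the whole sequence into a temporal context (h, c) ---
    h, c = np.zeros(H), np.zeros(H)
    for t in range(T):
        h, c = lstm_step(poses[t], h, c, W_e, U_e, b_e)
    # --- decoder: emit refined poses from the context, feeding back outputs ---
    refined = np.zeros_like(poses)
    x = poses[0]                        # condition on the first noisy frame
    for t in range(T):
        h, c = lstm_step(x, h, c, W_d, U_d, b_d)
        refined[t] = W_out @ h + b_out  # linear read-out to pose coordinates
        x = refined[t]
    return refined

# Usage with random placeholder weights (shapes only, no training).
rng = np.random.default_rng(0)
D, H = 28, 16                           # e.g. 14 joints * (x, y); hidden size
mk = lambda *s: 0.1 * rng.standard_normal(s)
W_e, U_e, b_e = mk(4*H, D), mk(4*H, H), np.zeros(4*H)
W_d, U_d, b_d = mk(4*H, D), mk(4*H, H), np.zeros(4*H)
W_out, b_out = mk(D, H), np.zeros(D)
noisy = rng.standard_normal((8, D))     # 8 frames of noisy pose estimates
out = refine_sequence(noisy, W_e, U_e, b_e, W_d, U_d, b_d, W_out, b_out)
```

In the paper's setting the refinement is learned, so the encoder context captures temporal smoothness across neighboring frames; here the sketch only demonstrates the sequence-in, sequence-out shape of the model.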