TY - CONF
T1 - Learning generator networks for dynamic patterns
AU - Han, Tian
AU - Yang, Lu
AU - Wu, Jiawen
AU - Xing, Xianglei
AU - Wu, Ying Nian
N1 - Publisher Copyright:
© 2019 IEEE.
PY - 2019/3/4
Y1 - 2019/3/4
AB - We address the problem of learning dynamic patterns from unlabeled video sequences, either by generating new video sequences or by recovering incomplete ones. This problem is challenging because the appearances and motions in video sequences can be highly complex. We propose to use the alternating back-propagation algorithm to learn a generator network with a spatial-temporal convolutional architecture. The proposed method is efficient and flexible: it can not only generate realistic video sequences but also recover incomplete video sequences at the testing stage, or even during learning. The algorithm can be further improved by a learned initialization, which is useful for recovery tasks, and it naturally lends itself to learning a shared representation across different modalities. Our experiments show that our method is competitive with existing state-of-the-art methods, both qualitatively and quantitatively.
UR - http://www.scopus.com/inward/record.url?scp=85063568805&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85063568805&partnerID=8YFLogxK
U2 - 10.1109/WACV.2019.00091
DO - 10.1109/WACV.2019.00091
M3 - Conference contribution
AN - SCOPUS:85063568805
T3 - Proceedings - 2019 IEEE Winter Conference on Applications of Computer Vision, WACV 2019
SP - 809
EP - 818
BT - Proceedings - 2019 IEEE Winter Conference on Applications of Computer Vision, WACV 2019
T2 - 19th IEEE Winter Conference on Applications of Computer Vision, WACV 2019
Y2 - 7 January 2019 through 11 January 2019
ER -