TY - JOUR
T1 - Replicating neuroscience observations on ML/MF and AM face patches by deep generative model
AU - Han, Tian
AU - Xing, Xianglei
AU - Wu, Jiawen
AU - Wu, Ying Nian
N1 - Publisher Copyright:
© 2019 Massachusetts Institute of Technology.
PY - 2019/12/1
Y1 - 2019/12/1
AB - A recent Cell paper (Chang & Tsao, 2017) reports an interesting discovery. For the face stimuli generated by a pretrained active appearance model (AAM), the responses of neurons in the areas of the primate brain that are responsible for face recognition exhibit a strong linear relationship with the shape variables and appearance variables of the AAM that generates the face stimuli. In this letter, we show that this behavior can be replicated by a deep generative model, the generator network, that assumes that the observed signals are generated by latent random variables via a top-down convolutional neural network. Specifically, we learn the generator network from the face images generated by a pretrained AAM model using a variational autoencoder, and we show that the inferred latent variables of the learned generator network have a strong linear relationship with the shape and appearance variables of the AAM model that generates the face images. Unlike the AAM model, which has an explicit shape model where the shape variables generate the control points or landmarks, the generator network has no such shape model and shape variables. Yet it can learn the shape knowledge in the sense that some of the latent variables of the learned generator network capture the shape variations in the face images generated by AAM.
UR - http://www.scopus.com/inward/record.url?scp=85074737276&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85074737276&partnerID=8YFLogxK
U2 - 10.1162/neco_a_01236
DO - 10.1162/neco_a_01236
M3 - Letter
C2 - 31614107
AN - SCOPUS:85074737276
SN - 0899-7667
VL - 31
SP - 2348
EP - 2367
JO - Neural Computation
JF - Neural Computation
IS - 12
ER -