TY - GEN
T1 - Public scene recognition using mobile phone sensors
AU - Liang, Shuang
AU - Du, Xiaojiang
AU - Dong, Ping
N1 - Publisher Copyright:
© 2016 IEEE.
PY - 2016/3/23
Y1 - 2016/3/23
N2 - Smartphones are evolving rapidly and becoming more powerful in computing capability. More importantly, they are becoming smarter as more sensors, such as the accelerometer, gyroscope, compass, and camera, are embedded on the circuit board. In this paper, we propose a novel framework to recognize public scenes based on the sensors embedded in mobile phones. We first build individual models for audio, light, Wi-Fi, and Bluetooth, and then integrate these sub-models using dynamically weighted majority voting. We consider two factors when deciding the voting weight: the recognition rate of each sub-model and the recognition precision of the sub-model in specific scenes. We build the data-collecting app on an Android phone and implement the recognition algorithm on a Linux server. Evaluation of data collected in a bar, cafe, elevator, library, subway station, and office shows that the ensemble recognition model is more accurate and robust than each individual sub-model. We achieved 83.33% recognition accuracy (13.33% higher than the audio sub-model) when evaluating the ensemble model on the test dataset.
AB - Smartphones are evolving rapidly and becoming more powerful in computing capability. More importantly, they are becoming smarter as more sensors, such as the accelerometer, gyroscope, compass, and camera, are embedded on the circuit board. In this paper, we propose a novel framework to recognize public scenes based on the sensors embedded in mobile phones. We first build individual models for audio, light, Wi-Fi, and Bluetooth, and then integrate these sub-models using dynamically weighted majority voting. We consider two factors when deciding the voting weight: the recognition rate of each sub-model and the recognition precision of the sub-model in specific scenes. We build the data-collecting app on an Android phone and implement the recognition algorithm on a Linux server. Evaluation of data collected in a bar, cafe, elevator, library, subway station, and office shows that the ensemble recognition model is more accurate and robust than each individual sub-model. We achieved 83.33% recognition accuracy (13.33% higher than the audio sub-model) when evaluating the ensemble model on the test dataset.
KW - ensemble learning
KW - mobile sensing
KW - scene recognition
UR - http://www.scopus.com/inward/record.url?scp=84966656232&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84966656232&partnerID=8YFLogxK
U2 - 10.1109/ICCNC.2016.7440683
DO - 10.1109/ICCNC.2016.7440683
M3 - Conference contribution
AN - SCOPUS:84966656232
T3 - 2016 International Conference on Computing, Networking and Communications, ICNC 2016
BT - 2016 International Conference on Computing, Networking and Communications, ICNC 2016
T2 - International Conference on Computing, Networking and Communications, ICNC 2016
Y2 - 15 February 2016 through 18 February 2016
ER -