TY - GEN
T1 - Multimodality sensing for eating recognition
AU - Merck, Christopher
AU - Maher, Christina
AU - Mirtchouk, Mark
AU - Zheng, Min
AU - Huang, Yuxiao
AU - Kleinberg, Samantha
N1 - Publisher Copyright:
© 2016 EAI.
PY - 2016/6/16
Y1 - 2016/6/16
N2 - While many sensors can monitor physical activity, there is no device that can unobtrusively measure eating at the same level of detail. Yet, tracking and reacting to food consumption is key to managing many chronic diseases such as obesity and diabetes. Eating recognition has primarily used a single sensor at a time in a constrained environment, but sensors may fail and each may pick up different types of eating. We present a multi-modality study of eating recognition, which combines head and wrist motion (Google Glass, smartwatches on each wrist) with audio (a custom earbud microphone). We collect 72 hours of data from 6 participants wearing all sensors and eating an unrestricted set of foods, and annotate video recordings to obtain ground truth. Using our noise cancellation method, audio sensing alone achieved 92% precision and 89% recall in finding meals, while motion sensing was needed to find individual intakes.
AB - While many sensors can monitor physical activity, there is no device that can unobtrusively measure eating at the same level of detail. Yet, tracking and reacting to food consumption is key to managing many chronic diseases such as obesity and diabetes. Eating recognition has primarily used a single sensor at a time in a constrained environment, but sensors may fail and each may pick up different types of eating. We present a multi-modality study of eating recognition, which combines head and wrist motion (Google Glass, smartwatches on each wrist) with audio (a custom earbud microphone). We collect 72 hours of data from 6 participants wearing all sensors and eating an unrestricted set of foods, and annotate video recordings to obtain ground truth. Using our noise cancellation method, audio sensing alone achieved 92% precision and 89% recall in finding meals, while motion sensing was needed to find individual intakes.
KW - Acoustic and motion sensing
KW - Eating recognition
UR - http://www.scopus.com/inward/record.url?scp=85040329470&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85040329470&partnerID=8YFLogxK
U2 - 10.4108/eai.16-5-2016.2263281
DO - 10.4108/eai.16-5-2016.2263281
M3 - Conference contribution
AN - SCOPUS:85040329470
T3 - PervasiveHealth: Pervasive Computing Technologies for Healthcare
BT - PervasiveHealth 2016 - 10th EAI International Conference on Pervasive Computing Technologies for Healthcare
A2 - Favela, Jesus
A2 - Matic, Aleksander
A2 - Fitzpatrick, Geraldine
A2 - Weibel, Nadir
A2 - Hoey, Jesse
T2 - 10th EAI International Conference on Pervasive Computing Technologies for Healthcare, PervasiveHealth 2016
Y2 - 16 May 2016 through 19 May 2016
ER -