TY - GEN
T1 - Blind Users Accessing Their Training Images in Teachable Object Recognizers
AU - Hong, Jonggi
AU - Gandhi, Jaina
AU - Mensah, Ernest Essuah
AU - Zeraati, Farnaz Zamiri
AU - Jarjue, Ebrima
AU - Lee, Kyungjun
AU - Kacorri, Hernisa
N1 - Publisher Copyright:
© 2022 ACM.
PY - 2022/10/22
Y1 - 2022/10/22
N2 - Teachable object recognizers provide a solution for a very practical need of blind people: instance-level object recognition. They assume that users can visually inspect the photos they provide for training, a critical and inaccessible step for those who are blind. In this work, we engineer data descriptors that address this challenge. They indicate in real time whether the object in a photo is cropped or too small, whether a hand is included, whether the photo is blurred, and how much the photos vary from each other. Our descriptors are built into an open-source testbed iOS app called MYCam. In a remote user study in the homes of blind participants (N = 12), we show how the descriptors, even when error-prone, support experimentation and have a positive impact on the quality of the training set that can translate to model performance, though this gain is not uniform. Participants found the app simple to use, indicating that they could effectively train it and that the descriptors were useful. However, many found the training tedious, opening discussions around the need for balance among information, time, and cognitive load.
AB - Teachable object recognizers provide a solution for a very practical need of blind people: instance-level object recognition. They assume that users can visually inspect the photos they provide for training, a critical and inaccessible step for those who are blind. In this work, we engineer data descriptors that address this challenge. They indicate in real time whether the object in a photo is cropped or too small, whether a hand is included, whether the photo is blurred, and how much the photos vary from each other. Our descriptors are built into an open-source testbed iOS app called MYCam. In a remote user study in the homes of blind participants (N = 12), we show how the descriptors, even when error-prone, support experimentation and have a positive impact on the quality of the training set that can translate to model performance, though this gain is not uniform. Participants found the app simple to use, indicating that they could effectively train it and that the descriptors were useful. However, many found the training tedious, opening discussions around the need for balance among information, time, and cognitive load.
KW - blind
KW - machine teaching
KW - object recognition
KW - participatory machine learning
KW - visual impairment
UR - http://www.scopus.com/inward/record.url?scp=85141211593&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85141211593&partnerID=8YFLogxK
U2 - 10.1145/3517428.3544824
DO - 10.1145/3517428.3544824
M3 - Conference contribution
AN - SCOPUS:85141211593
T3 - ASSETS 2022 - Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility
BT - ASSETS 2022 - Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility
T2 - 24th International ACM SIGACCESS Conference on Computers and Accessibility, ASSETS 2022
Y2 - 23 October 2022 through 26 October 2022
ER -