Exploring machine teaching for object recognition with the crowd

Jonggi Hong, June Xu, Kyungjun Lee, Hernisa Kacorri

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


Abstract

Teachable interfaces can enable end-users to personalize machine learning applications by explicitly providing a few training examples. They promise higher robustness in the real world by significantly constraining the conditions of the learning task to a specific user and their environment. While they facilitate user control, their effectiveness can be hindered by a lack of expertise or by misconceptions. Through a mobile teachable testbed on Amazon Mechanical Turk, we explore how non-experts conceptualize, experience, and reflect on their engagement with machine teaching in the context of object recognition.

Original language: English
Title of host publication: CHI EA 2019 - Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems
ISBN (Electronic): 9781450359719
DOIs
State: Published - 2 May 2019
Event: 2019 CHI Conference on Human Factors in Computing Systems, CHI EA 2019 - Glasgow, United Kingdom
Duration: 4 May 2019 - 9 May 2019

Publication series

Name: Conference on Human Factors in Computing Systems - Proceedings

Conference

Conference: 2019 CHI Conference on Human Factors in Computing Systems, CHI EA 2019
Country/Territory: United Kingdom
City: Glasgow
Period: 4/05/19 - 9/05/19

Keywords

  • Crowdsourcing
  • Interactive machine learning
  • Object recognition
  • Teachable machines
