TY - GEN
T1 - Accessible Slide Presentation via Intelligent Real-time Editing
T2 - 27th International ACM SIGACCESS Conference on Computers and Accessibility, ASSETS 2025
AU - Haque, Azizul
AU - Hong, Jonggi
N1 - Publisher Copyright:
© 2025 Copyright held by the owner/author(s).
PY - 2025/10/22
Y1 - 2025/10/22
N2 - Live presentations often pose accessibility challenges for blind and low vision (BLV) audiences due to limited verbal references to slide content. We propose a novel approach that introduces a short temporal buffer to the live stream, enabling the removal of redundant speech (such as filler phrases, repetitions, and side remarks) and the insertion of concise slide descriptions in the resulting gaps. To explore the effectiveness of this method, we conducted a Wizard-of-Oz-inspired study simulating real-time editing with pre-recorded, edited presentation videos that included slide numbers, titles, and image descriptions. Four blind participants viewed both edited and unedited versions and shared their experiences in post-task interviews. Participants reported clearer transitions, better comprehension of slide content, and improved ability to mentally visualize slide layouts in the edited condition. We also analyze the types of removable speech and outline key design and technical challenges in building a real-time, automated version of the system. Our findings highlight the potential of near-real-time editing as a lightweight strategy to enhance the accessibility of live spoken content. While the study simulates live accessibility using pre-recorded content, it provides valuable insight into future real-time applications.
AB - Live presentations often pose accessibility challenges for blind and low vision (BLV) audiences due to limited verbal references to slide content. We propose a novel approach that introduces a short temporal buffer to the live stream, enabling the removal of redundant speech (such as filler phrases, repetitions, and side remarks) and the insertion of concise slide descriptions in the resulting gaps. To explore the effectiveness of this method, we conducted a Wizard-of-Oz-inspired study simulating real-time editing with pre-recorded, edited presentation videos that included slide numbers, titles, and image descriptions. Four blind participants viewed both edited and unedited versions and shared their experiences in post-task interviews. Participants reported clearer transitions, better comprehension of slide content, and improved ability to mentally visualize slide layouts in the edited condition. We also analyze the types of removable speech and outline key design and technical challenges in building a real-time, automated version of the system. Our findings highlight the potential of near-real-time editing as a lightweight strategy to enhance the accessibility of live spoken content. While the study simulates live accessibility using pre-recorded content, it provides valuable insight into future real-time applications.
KW - Assistive Technology
KW - Blind User
KW - Concurrent Access
KW - Presentation
KW - Understanding Slides
UR - https://www.scopus.com/pages/publications/105022633308
U2 - 10.1145/3663547.3759719
DO - 10.1145/3663547.3759719
M3 - Conference contribution
AN - SCOPUS:105022633308
T3 - ASSETS 2025 - Proceedings of the 27th International ACM SIGACCESS Conference on Computers and Accessibility
BT - ASSETS 2025 - Proceedings of the 27th International ACM SIGACCESS Conference on Computers and Accessibility
A2 - Shinohara, Kristen
A2 - Bennett, Cynthia L.
A2 - Mott, Martez
A2 - Kane, Shaun K.
Y2 - 26 October 2025 through 29 October 2025
ER -