Let's Read: Designing a smart display application to support CODAS when learning spoken language

Authors

  • Katie Rodeghiero, Chapman University
  • Yingying Yuki Chen, Chapman University
  • Annika M. Hettmann, Chapman University
  • Franceli L. Cibrian, Chapman University

DOI

https://doi.org/10.47756/aihc.y6i1.80

Keywords

CODAs, Deaf, Smart display, Reading, Mixed-ability

Abstract

Hearing children of Deaf adults (CODAs) face many challenges, including difficulty learning spoken language, social judgment, and greater responsibilities at home. In this paper, we present a proposal for a smart display application called Let's Read that aims to support CODAs in learning spoken language. We conducted a qualitative analysis of online community content in English to develop the first version of the prototype, and then conducted a heuristic evaluation to improve it. As future work, we plan to use this prototype in participatory design sessions with Deaf adults and CODAs to evaluate the potential of Let's Read to support spoken language learning in mixed-ability family dynamics.

Published

2021-11-30

How to Cite

[1] Rodeghiero, K. et al. 2021. Let's Read: Designing a smart display application to support CODAS when learning spoken language. Avances en Interacción Humano-Computadora. 6, 1 (Nov. 2021), 18–21. DOI: https://doi.org/10.47756/aihc.y6i1.80.

Issue

Vol. 6 No. 1 (2021)

Section

Research Papers
