wordplaydev / wordplay

An accessible, language-inclusive programming language and IDE for creating interactive typography on the web.

Sound output #390

Open amyjko opened 9 months ago

amyjko commented 9 months ago

What's the problem?

Wordplay has no audio output (sounds, noise, music, or other audio media) beyond the rudimentary screen reader support that browsers and operating systems provide. This is a major gap for anyone who relies on sound to perceive output, and it limits the kinds of output that can be created.

What's the design idea?

Partner with sound-reliant students and teachers to explore the kinds of audio media that we might support and co-design that media. Such media might include sounds, music, audioscapes and other types of output. These might be designed to work alongside the current visual media, or be separate media with visual complements to describe sound.

This interacts with issues #21 and #22, which also propose specific kinds of sound output.

Who benefits?

Anyone who wants to express themselves via sound, and anyone who relies on sound to perceive output.

Design specification

(This section should be included after a design proposal is ready and approved, and the buildable tag is added. This text can remain until then. Designers should add their proposal here, not in a comment).

fchung26 commented 6 months ago

I will keep working on this problem for the fall quarter. Here is the research I have so far: https://docs.google.com/document/d/1yHE4QmKhjbI9Qr6YKZ2btZI5pVTOC-SiWRGK65uQpGE/edit?usp=sharing

amyluyx commented 6 months ago

I am hoping to return either fall quarter or winter quarter next year. The literature reviews and in-person research I've conducted throughout the quarter are available in my updated contract: https://docs.google.com/document/d/1L0Lw3VkC7xW04ML9yBdIMmjRdeHKTX7aFq9Y2cEjNUY/edit?usp=sharing Future areas of exploration are also noted at the bottom of my report.

amyjko commented 6 months ago

Thank you both. Unfortunately, your posts don't follow the community guidelines: we don't permit external links in comments, because those links can die. Please post your research as comments, so we can all continue having access to it.

fchung26 commented 6 months ago

Wordplay Research.pdf

amyjko commented 6 months ago

@fchung26 Sigh, fine, a PDF will do. (PDFs are not accessible by default to people who use screen readers, so anyone contributing to this issue may not be able to read your research — ideally, all comments are text with images that have image descriptions).

amyluyx commented 6 months ago

Sound Output https://github.com/wordplaydev/wordplay/issues/390

Contents

- Introduction
- Existing research as a basis: case studies
  - Importance of audio in learning
  - Existing technologies that successfully integrate audio
- Research purposes: inclusion
  - Blind learners
  - Deaf learners
- Research methodology
  - User research approach
  - Participants
  - Data collection
- Findings
- Applications
  - Issue #21: background sound loops
  - Issue #22: pose transition sounds
  - Design-related applications
  - My design WIP
  - Design WIP #2
- Limitations, future work
- Conclusion
- Sources

Introduction

Wordplay is a programming platform for typography that integrates the world's languages while fostering an inclusive, accessible environment. With an emphasis on creativity in programming, Wordplay aims to reflect individuals' cultures, identities, and values. However, one of its biggest problems is its current lack of audio output, which significantly limits its accessibility and user engagement, particularly for individuals who rely on auditory input for learning and interaction. This research explores how incorporating various forms of auditory media can support learning in line with Wordplay's goals.

Target Audience

As stated in the issue on GitHub, the people who benefit from advancements in this area include anyone who wants to express themselves via sound and anyone who relies on sound.

Research Focus

My goal throughout this quarter was to explore accessibility: to find methods that not only accommodate learners with visual impairments but elevate their programming knowledge to, and even beyond, what is currently offered to learners who are not visually impaired. I also wanted to find evidence and reasoning for creating a space within Wordplay where musicians and musically oriented creatives can receive music-related feedback. The first step, and another main focus of my quarter, was researching how auditory stimuli are already used in the programming world.

Existing Research as a Basis

Case study 1: A pre-existing solution for blind users

APL, or Audio Programming Language, is a programming language designed for blind users, specifically blind novice programmers. While traditional programming languages tend to depend on visual interfaces, APL takes advantage of audio interfaces to make programming more accessible to blind learners. Its command navigation and syntax structure are deliberately straightforward, making it user-friendly for its target users. Usability testing with both expert programmers and blind novice learners demonstrated that APL successfully mapped to the mental representations of its blind users, so that they could understand and use programming effectively. The outcomes suggest that APL is enough to motivate blind learners to learn programming, since it is both an effective learning tool and an accessible one. With audio commands and audio feedback, users can write and debug programs while developing their algorithmic and problem-solving skills.

The program uses a circular command list and a query system. The circular command list contains "a command list to be chosen according to the program state such as cycle, condition, input, output, and variables", while the query is used to "define variable names and values, and input/output audio" [Aguayo]. The study therefore suggests that audio interfaces have real potential for inclusive programming, with scope for further development and research in this area. Major user-facing features of APL include the commonly found text-to-speech (TTS) system and pre-recorded audio to reduce the latency effects common in TTS, among other elements. The main breakthrough I found in this study was the concept of 3D audio interfaces: they were created with the intention of mapping the "surrounding space and thus helping blind children to construct cognition through audio-based interfaces such as tempo-spatial relationships, short-term memory, abstract memory, spatial abstraction, haptic perception, and mathematic reasoning" [Sanchez]. This goes well beyond merely improving programming skills.
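The paper does not include APL's implementation, but the circular-command-list idea is concrete enough to sketch. Below is a minimal, hypothetical TypeScript model of it: a wrap-around cursor over the commands valid in the current program state, each announced via the browser's speech synthesis. The class name and command set are my own illustration, not APL's actual code.

```typescript
type Command = { name: string; description: string };

class CircularCommandList {
  private index = 0;
  constructor(private commands: Command[]) {}

  // Move to the next command, wrapping back to the start at the end.
  next(): Command {
    this.index = (this.index + 1) % this.commands.length;
    return this.announce();
  }

  // Move to the previous command, wrapping to the end from the start.
  previous(): Command {
    this.index = (this.index - 1 + this.commands.length) % this.commands.length;
    return this.announce();
  }

  // Speak the current command so a blind user always knows where they are.
  private announce(): Command {
    const command = this.commands[this.index];
    speechSynthesis.speak(
      new SpeechSynthesisUtterance(`${command.name}: ${command.description}`)
    );
    return command;
  }
}

// Per the paper, the list's contents would depend on the program state.
const commands = new CircularCommandList([
  { name: 'cycle', description: 'repeat a block of commands' },
  { name: 'condition', description: 'branch on a test' },
  { name: 'input', description: 'read a value' },
  { name: 'output', description: 'play or speak a value' },
  { name: 'variables', description: 'store a value or a sound' },
]);
```

The wrap-around arithmetic is the point: a blind user can cycle through every valid option and come back to the start without memorizing the command set.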

Case study 2: Enriching programming student feedback with audio comments

This case study investigates how integrating audio feedback into programming education supports student learning and engagement, aiming to overcome the limitations of text-based feedback, which are particularly relevant to online learners who never interact face-to-face with tutors. How audio feedback is integrated into the user interface of a learning management system is a crucial consideration for any Wordplay implementation. The study implemented audio feedback within the Doubtfire learning management system (LMS) to give students a sense of direct, individualized feedback while learning to program; audio allows a range of expression beyond textual output, which strengthened the implementation. The feedback system incorporated three major integrations:

- Audio feedback integration: tutors record audio feedback directly through the LMS, an additional channel for contextualized, personal feedback.
- Compatibility with all browsers: the audio recording feature runs in all web browsers, so tutors as well as students can use it from the Etutor Online app itself.
- Threaded comments: threaded replies encourage ongoing exchanges between students and tutors, tightening the feedback loop.

Usability testing found that spoken feedback facilitated more emotional engagement than written text. Audio feedback, both tutors speaking and the integration within the Doubtfire application, allowed tutors to respond to student work in more depth, with better understanding from the student [Renzella & Cain]. Though audio feedback and tutoring may at first appear misaligned with Wordplay, which currently has no live human tutors, the adaptation of these findings to Wordplay's purposes is evaluated more thoroughly in the Analysis section of my research.
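The recording feature described above builds on standard browser capabilities. Here is a minimal sketch of how in-browser audio comments are typically captured, using the MediaRecorder API; this illustrates the general pattern, not Doubtfire's actual code.

```typescript
// Record a short audio comment from the microphone and return it as a Blob
// that could then be uploaded to a learning management system.
async function recordAudioComment(durationMs: number): Promise<Blob> {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const recorder = new MediaRecorder(stream);
  const chunks: Blob[] = [];

  recorder.ondataavailable = (event) => chunks.push(event.data);

  const done = new Promise<Blob>((resolve) => {
    recorder.onstop = () => {
      // Release the microphone, then hand back the recorded audio.
      stream.getTracks().forEach((track) => track.stop());
      resolve(new Blob(chunks, { type: recorder.mimeType }));
    };
  });

  recorder.start();
  setTimeout(() => recorder.stop(), durationMs);
  return done;
}

// Usage: const comment = await recordAudioComment(10_000);
```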

Case study 3: Gamifying learning through audio

Finding studies of audio integration aimed specifically at programming education for young students proved difficult. More broadly, however, gamification theory is prevalent in many learning applications, and it could inform auditory implementation within Wordplay as well.
Apps like Habitica and Kahoot are built on the concept of gamification: Habitica follows a role-playing game model, while Kahoot uses time pressure and bright colors to imitate the feel of games. Looking closer at gamification through auditory feedback specifically, applications like Duolingo and Quizlet use it more extensively:

- Duolingo: points, levels, and streaks, plus audio feedback on correct and wrong answers, help users practice pronunciation and listening skills. When users select an answer or complete a lesson, they receive immediate audio feedback: positive reinforcement sounds (e.g., a chime or a "ding") for correct answers and gentle corrective sounds for incorrect ones. This makes mistakes apparent immediately, and it would work in a Wordplay context for blind individuals regardless of color cues (for example, red for wrong and green for correct).
- Quizlet: flashcards, games, and quizzes provide multiple modes of learning, and audio feedback reads flashcard terms and definitions aloud, helping users learn through listening and repetition. When users flip a flashcard, they hear the term or definition read aloud, which reinforces memory through auditory learning. Audio cues also give fast feedback on performance, similar to Duolingo.
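To make this pattern concrete for Wordplay, here is a small, hypothetical Web Audio sketch of non-visual correct/incorrect feedback: a rising two-note chime for correct and a single low tone for incorrect, so the meaning survives without red/green color cues. The specific pitches and durations are arbitrary choices, not taken from the apps above.

```typescript
const context = new AudioContext();

// Play one short sine tone at the given frequency and start offset.
function playTone(frequency: number, startOffset: number, duration = 0.15) {
  const oscillator = context.createOscillator();
  const gain = context.createGain();
  oscillator.frequency.value = frequency;
  // Fade out exponentially to avoid an audible click when the tone stops.
  gain.gain.setValueAtTime(0.3, context.currentTime + startOffset);
  gain.gain.exponentialRampToValueAtTime(
    0.001,
    context.currentTime + startOffset + duration
  );
  oscillator.connect(gain).connect(context.destination);
  oscillator.start(context.currentTime + startOffset);
  oscillator.stop(context.currentTime + startOffset + duration);
}

// A bright ascending pair reads as "correct".
function playCorrect() {
  playTone(660, 0);
  playTone(880, 0.12);
}

// A single low tone reads as "incorrect", gently rather than harshly.
function playIncorrect() {
  playTone(220, 0, 0.3);
}
```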

Interviews and Real-world Findings

A major part of my research stressed the importance of audio interfaces for inclusivity; being able to conduct in-person research, whether through interviews or other methods of real-world observation, was crucial to developing potential audio design ideas for Wordplay.
After conducting the literature reviews, I needed a way to gauge how much their findings about audio feedback mattered to real learners. I was able to interview 5 high school students and 2 middle school students who self-identified as legally visually impaired, from 3 schools located in Seattle and Bellevue. The following questions were asked:

1. How do you feel about using audio feedback in educational tools?
2. Can you share any experiences where audio feedback has helped or hindered your learning?
3. What types of game-like elements (e.g., sound effects for correct answers, level-up sounds) do you think would make learning programming more engaging for you?
4. As a visually impaired learner, what challenges do you face with current visual programming tools, and how do you think audio feedback can help overcome these challenges?
5. How effective do you find audio feedback in helping you understand and navigate programming concepts compared to visual feedback?
6. What specific audio feedback features would you like to see implemented in Wordplay to make it more accessible and engaging for you?

Insights:

- How does audio feedback enhance your learning experience compared to other forms of feedback? Audio feedback provides immediate, clear guidance, reducing guesswork and avoiding the latency of text-to-speech (TTS).
- What challenges do you face when using current learning tools and technologies, and how could they be addressed? Common issues include poor screen reader compatibility, inaccessible navigation elements, and a lack of descriptive audio cues; suggested fixes were tighter screen reader integration, more detailed audio descriptions, and interfaces that are easier to navigate by voice command.
- Can you describe an instance where audio cues significantly helped you understand a concept or complete a task? One participant (who was not blind) said that distinct sounds and tonal feedback indicating errors or progress helped him learn to use a platform.
- How does spatial audio (audio that changes with head movement) affect your understanding of spatial relationships and navigation in a learning environment? One participant said spatial audio helps him perceive the layout and structure of virtual environments, making navigation more intuitive. He thinks in mental maps rather than in images, and spatial audio helps him build those maps, which enhances his learning and makes abstract concepts more concrete.
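The TTS latency complaint above matches the APL paper's use of pre-recorded audio. A minimal sketch of the standard browser workaround, decoding short cue sounds once at startup so they later play with near-zero delay (the cue names and file paths here are hypothetical):

```typescript
const audioContext = new AudioContext();
const cues = new Map<string, AudioBuffer>();

// Fetch and decode a cue once, up front, so playback later is instant.
async function preloadCue(name: string, url: string): Promise<void> {
  const response = await fetch(url);
  const data = await response.arrayBuffer();
  cues.set(name, await audioContext.decodeAudioData(data));
}

// Trigger a preloaded cue with no synthesis or decoding delay.
function playCue(name: string): void {
  const buffer = cues.get(name);
  if (!buffer) return;
  const source = audioContext.createBufferSource();
  source.buffer = buffer;
  source.connect(audioContext.destination);
  source.start();
}

// Usage:
// await preloadCue('error', '/sounds/error.mp3');
// playCue('error'); // fires immediately, unlike on-demand TTS
```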

Analysis & Design Ideas

The biggest conclusion I have come to through the in-person interviews is the difference in perception and methods of learning. Much of how we learn coding is based on visualization, but the inherent ways of thinking of blind learners in particular do not align with visualizing data structures and code the way sighted learners do. One high school student described his method of internalizing information as far removed from visualization: although he does not visualize objects, he "visualizes" trees and maps, often thinking by mapping things in his head, and he frequently associates concepts with touch, almost as if feeling braille or some component of the object.
As concluded in my literature review and supported by the interview results, blind learners were found to rely more heavily, on average, on concrete experience rather than abstract thinking, with abstraction typically emerging only after rounds of accumulated experience. This matters because a large part of the programming skill set boils down to fundamental problem solving and abstract thinking.

Something mentioned quite commonly throughout the research was the idea of establishing a multi-dimensional space in audio-based interfaces, for example 3D audio interfaces for improving the spatial awareness of blind people; it is clear these methods would greatly benefit visually impaired individuals if implemented in Wordplay. One suggestion in this realm is spatial audio: systems that allow users to perceive audio from all angles. A development that could be integrated into Wordplay is head-tracked spatial audio. Although uncommon in current headphones and other portable sound systems, head-tracked spatial audio uses built-in gyroscopes and accelerometers to track the movement of your head and adjust the audio accordingly. This creates the effect of sound coming from fixed points in space even as you move your head, which more closely resembles real-life perception. Not only would users experience soundscapes that mimic what they would hear without headphones; visually impaired users would also learn to perceive the space around them more accurately.
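The Web Audio API already supports the non-head-tracked half of this: a PannerNode with the HRTF panning model renders a source at a fixed point in 3D space over ordinary headphones, and head tracking then reduces to rotating the listener as the head turns. A sketch under those assumptions; the head-tracking input itself (yaw, here) would have to come from device sensors, which browsers do not expose uniformly.

```typescript
const ctx = new AudioContext();

// Pin a sound source to a fixed point in space, rendered binaurally.
const panner = new PannerNode(ctx, {
  panningModel: 'HRTF',
  positionX: 1,  // one meter to the listener's right
  positionY: 0,
  positionZ: -2, // two meters ahead
});

// Any source works; an oscillator stands in for real program audio here.
const source = ctx.createOscillator();
source.connect(panner).connect(ctx.destination);
source.start();

// Called whenever the head-tracking hardware reports a new yaw angle.
// Rotating the listener's forward vector makes the source appear fixed
// in the room rather than fixed to the head.
function onHeadYaw(yawRadians: number): void {
  const { listener } = ctx;
  listener.forwardX.value = Math.sin(yawRadians);
  listener.forwardY.value = 0;
  listener.forwardZ.value = -Math.cos(yawRadians);
  listener.upX.value = 0;
  listener.upY.value = 1;
  listener.upZ.value = 0;
}
```

(Older browsers expose only the deprecated listener.setOrientation() rather than these AudioParams.)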

Looking at the Wordplay interface, head-tracked spatial audio, for example popping noises indicating where on the screen the pulsing orange selection dots are, would greatly help those who are visually impaired, and anyone else trying to find where to click. These aids, together with sounds that accompany moving buttons (for example, hovering over the back and next buttons at the top left makes them wobble, and clicking them should intuitively produce a click sound), would provide useful feedback for users who are looking for it.
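Even without head tracking, a rough version of the "popping noises that indicate where things are" idea is possible with plain stereo panning: map an element's horizontal position on screen to the pan of a short blip. A hypothetical sketch:

```typescript
const audio = new AudioContext();

// Play a short "pop" panned toward the element's horizontal position,
// so a listener hears roughly where on screen the target sits.
function popAt(element: HTMLElement): void {
  const rect = element.getBoundingClientRect();
  // Map the element's center from [0, window width] to a pan of [-1, 1].
  const pan = ((rect.left + rect.width / 2) / window.innerWidth) * 2 - 1;

  const osc = audio.createOscillator();
  const gain = audio.createGain();
  const panner = new StereoPannerNode(audio, { pan });

  osc.frequency.value = 600;
  gain.gain.setValueAtTime(0.25, audio.currentTime);
  gain.gain.exponentialRampToValueAtTime(0.001, audio.currentTime + 0.08);

  osc.connect(gain).connect(panner).connect(audio.destination);
  osc.start();
  osc.stop(audio.currentTime + 0.08);
}

// Usage, e.g. for the hover cue described above (button is hypothetical):
// nextButton.addEventListener('mouseenter', () => popAt(nextButton));
```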

Limitations & Future Work

One limitation of the head-tracked spatial audio design is that users would need headphones or earbuds to experience it. Although most devices that Wordplay runs on have integrated dual-speaker systems, sound coming from only two sides of a device is wildly different from an immersive 360-degree head-tracked spatial audio system. This limiting factor requires users to own a headset, which poses financial as well as accessibility issues. Working toward 360-degree audio on personal devices, as accessible as possible without additional attachments such as headphones, would be a good step; however, that is more a technology problem than an interface design project for Wordplay.

Literary Works Cited

Sánchez, J., & Aguayo, F. (n.d.). APL: Audio programming language for blind learners. Department of Computer Science, University of Chile.

Aguayo, F. (2005, April 2-7). Blind learners programming through audio. In Proceedings of the ACM CHI 2005 Conference on Human Factors in Computing Systems (pp. 1769-1772). Portland, OR, USA. ACM.

Renzella, J., & Cain, A. (2020, May 23-29). Enriching programming student feedback with audio comments. In Proceedings of the 42nd International Conference on Software Engineering: Software Engineering Education and Training (ICSE-SEET) (pp. 173-183). Seoul, Republic of Korea. ACM. https://doi.org/10.1145/3377814.3381712

Seo, K. K., & Gibbons, S. (Eds.). (2021). Learning technologies and user interaction: Diversifying implementation in curriculum, instruction, and professional development (1st ed.). Routledge. https://doi.org/10.4324/9781003089704

Li, Y., & Finch, S. (2021). Using sound to enhance interactions in an online learning environment. In K. K. Seo & S. Gibbons (Eds.), Learning technologies and user interaction: Diversifying implementation in curriculum, instruction, and professional development (1st ed., pp. [specific pages]). Routledge. https://doi.org/10.4324/9781003089704

fchung26 commented 6 months ago

Wordplay Research:

Pandey, Maulishree, et al. “Understanding Accessibility and Collaboration in Programming for People with Visual Impairments.” Proceedings of the ACM on Human-Computer Interaction, vol. 5, no. CSCW1, 13 Apr. 2021, pp. 1–30, https://doi.org/10.1145/3449203. Accessed 31 May 2021.

The paper investigates the collaborative experiences and challenges of visually impaired programmers in professional contexts through semi-structured interviews with 22 visually impaired software professionals, and it provides insights and recommendations for improving the accessibility and inclusivity of collaborative programming environments. This can help us understand how to create an accessible and inclusive environment for programming. Programming has been deemed relatively accessible because it is largely text-based, though graphical user interfaces (GUIs) complicate this picture, and there is still more to discover and to analyze about how these findings can be applied to Wordplay.

The paper focuses on how assistive technology is used in social settings. Accessibility research in human-computer interaction emphasizes the social contexts of assistive technology use: such tools often lag behind mainstream products and draw unwanted attention, so users must balance utility against the desire to avoid attention and maintain self-esteem. It is therefore important that, when designing assistive technology, one considers both functional and social scenarios and involves both users with and without disabilities in the design process, which is often overlooked. Research has also indicated that people with disabilities face social costs when seeking help, as it can make them appear less competent; consequently, many prefer external assistance to avoid burdening others. In mixed-ability contexts, accessibility is achieved through collaboration, often requiring people with visual impairments to perform additional work to address accessibility challenges and to continually advocate for their needs.

Some accessibility challenges stem from hesitancy to ask one's employer for accommodations, whether because assistive technology is expensive or out of fear of being seen as making excuses, which participants worried would reflect negatively on their programming ability. The study's participants preferred to demonstrate their work practices to colleagues to familiarize them with their assistive technology and workflows. The study also documents the need for more accessible internal tools, which enhance the work experience by enabling more efficient work and reducing the need for assistance. Overall, the paper demonstrates the need for accessibility and offers insights into implementing assistive technology and working with programmers who have disabilities.

Yee-King, Matthew John, et al. “Automatic Programming of vst Sound Synthesizers Using Deep Networks and Other Techniques.” IEEE Transactions on Emerging Topics in Computational Intelligence, vol. 2, no. 2, Apr. 2018, pp. 150–159, https://doi.org/10.1109/tetci.2017.2783885. Accessed 26 Mar. 2021.

This paper investigates the techniques and applications of automatic sound synthesizer programming. It discusses types of systems such as tone-matching programmers and synthesis space explorers: tone-matching programmers take a sound synthesis algorithm and a target sound as input, while synthesis space explorers provide users with a representation of the synthesizer's sound space, allowing interactive exploration of that space. The work covers studio tools, autonomous musical agents, and self-reprogramming drum machines. This paper was much too advanced for me, but I wrote down what I was able to understand, and I think it is worth revisiting. From the conclusion, I gathered that automatic sound synthesizer programming aims to remove the need for users to specify parameter settings explicitly, through preset banks, presentations of the synthesizer's sound space, and search algorithms that find specific target sounds. It also discusses how integrating automated timbral exploration into a drum machine enhances creativity for musicians. I believe the paper explores the implications and impacts of such sound systems, but at the moment my knowledge is insufficient to fully grasp its scope.

Pires, Ana Cristina, et al. “Exploring Accessible Programming with Educators and Visually Impaired Children.” Proceedings of the Interaction Design and Children Conference, 21 June 2020, https://doi.org/10.1145/3392063.3394437. Accessed 13 Mar. 2023.

This paper explored the emergence of computational thinking as a discipline in schools, emphasizing its importance beyond the computing context. Visual programming environments such as Scratch and Blockly are used to promote computational thinking, enhance children's abilities, and prepare them for programming in the future; however, the researchers found that these tools are not accessible to visually impaired children. The paper proposes new approaches to address this lack of accessibility and explores opportunities for spatial activities. It focuses on two studies.

The first study explored current approaches to promoting computational thinking for visually impaired children, with a focus on spatial programming activities. The environments investigated were fully virtual ones, virtual environments with tangible output, tangible environments with virtual output, and fully tangible environments. Though schools in Portugal have adopted such environments, they are still not fully accessible to visually impaired children. The study therefore gathered a focus group of special needs educators and information technology instructors from inclusive schools, who were asked to discuss the qualities and limitations of the environments and to explore avenues for making them more accessible. The researchers found that robots, tangible blocks, boards, and maps are important supports for programming activities for visually impaired children. Participants also called for more tactile feedback, auditory cues, and Braille inscriptions to enhance accessibility. The study concluded that an accessible programming environment should combine robots with feedback mechanisms, tactile-rich maps for spatial perception, and tangible blocks with sensory representations. Overall, participants felt enthusiastic about using these new environments with visually impaired children, while stressing the importance of sensory enhancements.

The second study adapted the solutions from the first study to create accessible programming environments for visually impaired children, focusing on tangible blocks and a robot with augmented physicality. The researchers ran a workshop with seven visually impaired children and later analyzed it through thematic analysis and validation with educators. The robot, DASH, was chosen for its existing use in schools, with modifications such as tactile cues and audio feedback added to the blocks and the robot's actions. The workshop was kept unstructured, with only goal-directed spatial activities; as a result, children were more excited to participate and had better interactions, perceiving agency in controlling the robot. Analysis of the workshop revealed that the children understood the programming concepts and appeared eager to learn more. Though the workshop was successful, the educators stressed the importance of step-by-step instructions and real-life context to support learning.
During the workshop, the researchers also saw children naturally collaborate and engage with the robot, with the tangible elements supporting sharing and exploration. Looking at the children's cognitive development, they found that older children showed more intentional movements and debugging skills than younger children, indicating developing abstract thinking. Spatial cognition was enhanced through activities involving the robot's movement, promoting spatial orientation and conceptual understanding. The workshop setup proved beneficial for visually impaired children, improving their spatial cognition, mental rotation, and navigation skills. The educators from the first study recognized the potential of tangible programming environments to reinforce existing educational goals and promote inclusive learning, identifying opportunities to integrate programming activities with other subjects like math and science; collaboration was seen as crucial, facilitated by tangible elements and spatial activities.

The researchers also discussed the study's limitations. It focused exclusively on a group of visually impaired children and did not include sighted children, which restricts understanding of how these activities might function in a fully inclusive classroom where children of mixed abilities interact. The study was also conducted in a single session, so it cannot show how children behave over the long term, and the novelty effect of the activities may have influenced results. Finally, the lack of diverse educational contexts limits the generalizability of the findings; different schools or cultural environments may produce different results. Nonetheless, the study offers valuable insights for researchers, developers, and educators working to develop inclusive programming environments and foster collaborative learning among children with mixed abilities. Overall, I think this paper helps us understand the importance of accessible tools in programming languages. Not only do programmers need them; since Wordplay also wants educators to use it, it is important that the assistive technology we build can include children with disabilities who want to learn programming.

Romano, Simone, et al. “The Effect of Noise on Software Engineers’ Performance.” ArXiv (Cornell University), 11 Oct. 2018, https://doi.org/10.1145/3239235.3240496.

This paper does not address accessibility for programmers with vision impairments; rather, it shows the effects of noise on programming. It reviews different theories of how noise affects performance: arousal theory, composite theory, and maximal adaptability theory.

Broadbent's arousal theory explains noise effects through an arousal-induced attentional narrowing mechanism: noise increases arousal, which initially helps by excluding irrelevant cues and improving performance, but beyond the optimal arousal level performance declines as relevant cues begin to be excluded as well. The theory also holds that noise intensity and duration influence performance, with intermittent noise causing more impairment than continuous noise.

Poulton's composite theory holds that noise degrades performance when it masks inner speech, which is crucial for task performance. Continuous noise can increase arousal, offsetting the masking effect, but over time arousal decreases, masking dominates, and performance suffers. Noise effects are similar across tasks and noise types but vary with intensity, duration, and schedule.

The maximal adaptability theory holds that stress from noise affects performance through input (environmental factors like noise), adaptation (individual coping mechanisms), and output (task performance). Noise impairs performance by masking relevant auditory information; individuals adapt to varying stress levels, but beyond a threshold, performance declines.

All of these theories suggest that the effects of noise on performance depend on the nature of the task, the characteristics of the noise, and individual adaptation mechanisms. The study itself evaluates the effect of noise on software engineering tasks through two controlled experiments. Noise negatively impacted fault fixing but not the comprehension of functional requirements, indicating that tasks requiring more cognitive resources, like fault fixing, are more susceptible to noise and highlighting the need for quieter work environments for such tasks. I am not sure yet how I could connect this with Wordplay, but it gave me a deeper understanding of the effects of noise in engineering work environments.

Sánchez, Jaime, and Fernando Aguayo. Blind Learners Programming through Audio. 2 Apr. 2005, https://doi.org/10.1145/1056808.1057018. Accessed 30 Mar. 2024.

The paper discusses efforts to make programming more accessible to end users through languages like Basic, Logo, Smalltalk, Pascal, and others. Those languages have improved beginners' skills by applying user interface principles, but they are not accessible to visually impaired learners. Many studies have shown that audio-based applications can enhance cognitive skills in blind children, focusing on 3D audio interfaces for spatial and abstract reasoning. The Audio Programming Language (APL) was developed to aid blind novice programmers by simplifying syntax and enhancing problem-solving and thinking skills through audio interfaces. APL uses a circular command list and a query system, making programming accessible without requiring memorization of commands. It features a dynamic command list and unconventional variables that store sounds, facilitating interaction with the machine. APL underwent usability testing with expert and beginner users, which revealed initial functionality issues and showed that blind learners eventually grasped programming concepts through concrete experience and interaction. Learners created both simple and complex programs, demonstrating increased understanding and enthusiasm. The study indicates that audio interfaces can help blind learners develop algorithmic thinking and cognitive skills, and it suggests further research on how blind users map the programming process differently from sighted users. Though this paper didn't offer that much new insight, it helped me understand what others are doing to make programming languages more accessible, and it shows that doing so is possible.

Howard, A. M., et al. “Using Haptic and Auditory Interaction Tools to Engage Students with Visual Impairments in Robot Programming Activities.” IEEE Transactions on Learning Technologies, vol. 5, no. 1, 2012, pp. 87–95, https://doi.org/10.1109/tlt.2011.28.

The paper first recognizes that the number of college freshmen with disabilities has been increasing, with vision impairments accounting for 16% of these students; however, only 3.9% of students with disabilities major in computer science. This disparity is largely due to inadequate pre-college math and science education, which is foundational for computing degrees. Approximately 11% of children aged 6 to 14 have disabilities, but they take fewer science and math courses than their peers, often due to inaccessible information and unfamiliarity with nonvisual teaching methods. Not much effort is being made to engage visually impaired students in computing at the pre-college level, though there have been some initiatives such as the National Center for Blind Youth in Science, the AccessComputing Alliance, and Project ACE. Robotics, by contrast, appeals broadly to students, including those with disabilities; however, the lack of accessible interfaces for educational robots means visually impaired students often cannot participate equally in robot-based computing activities, since most robot programming interfaces rely on visual and keyboard-based inputs. The research in this paper focuses on creating accessible interfaces for robot programming to engage visually impaired students, leveraging the appeal of robotics and hypothesizing that alternative interface technologies will enable active participation and encourage future interest in computing. The programming/robot interaction system uses a lot of calculus that I am still working on learning, but I believe understanding it will help me follow the code they used and inspire ways to implement it in Wordplay.