People internalize different emotional connotations when they listen to speech in conversation; similarly, people interpret different emotional qualities when they listen to music. Research reveals that people with Autism Spectrum Disorder (ASD) generally have enhanced pitch perception yet experience difficulty detecting emotion from prosodic cues in speech. In collaboration with UCI’s Center for Autism & Neurodevelopmental Disorders, we will recruit adults with ASD and age-matched controls to examine whether musical scale-sensitivity and perception of emotion in speech are separate or interrelated processes. Participants’ musical scale-sensitivity and ability to extract emotion from musical stimuli will be assessed with a tonality discrimination task, in which they listen to tone-scramble stimuli (rapid, randomly ordered tone sequences) and attempt to classify them as major (“happy”) or minor (“sad”). Previous studies have consistently found that 70% of neurotypical individuals perform at chance on this task. Performance on this task will be compared with pitch discrimination sensitivity, assessed through participants’ ability to classify two pitch frequencies as same or different. Participants will also complete a task in which they match speech stimuli to pitch-contour shapes, and a task in which they judge speech as happy or sad. We hypothesize that people with ASD will perform better on the pitch discrimination and tone-scramble tasks than on the two speech tasks. Data collection will begin pending IRB approval. These results will indicate whether scale-sensitivity and perception of emotion in speech are separate processes. Subsequent studies can investigate whether the tone-scramble task can help people with ASD improve their detection of speech prosody.
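To make the tonality discrimination stimuli concrete, the following is a minimal sketch of how a tone-scramble might be synthesized; the specific parameters (65 ms pure tones, 32 tones per scramble, a G root, and scale degrees 1-3-5-8) are assumptions typical of tone-scramble studies, not details taken from this proposal.

```python
import numpy as np

# Assumed stimulus parameters (not specified in the proposal).
SAMPLE_RATE = 44100
TONE_DUR = 0.065          # seconds per tone
N_TONES = 32              # tones per scramble
ROOT_HZ = 783.99          # G5 root frequency

def tone_scramble(mode="major", rng=None):
    """Return a mono waveform of randomly ordered tones drawn from a major or minor set."""
    rng = rng or np.random.default_rng()
    # Only the third scale degree differs between the two modes.
    third = 2 ** (4 / 12) if mode == "major" else 2 ** (3 / 12)
    freqs = [ROOT_HZ, ROOT_HZ * third, ROOT_HZ * 2 ** (7 / 12), ROOT_HZ * 2]  # 1, 3, 5, 8
    # Equal numbers of each scale degree, shuffled into a random order.
    sequence = rng.permutation(np.repeat(freqs, N_TONES // len(freqs)))
    t = np.arange(int(SAMPLE_RATE * TONE_DUR)) / SAMPLE_RATE
    ramp = np.minimum(1, np.minimum(t, t[::-1]) / 0.005)  # 5 ms onset/offset ramps
    return np.concatenate([np.sin(2 * np.pi * f * t) * ramp for f in sequence])

stimulus = tone_scramble("minor")   # e.g., a "sad" trial
```

Under these assumptions, the major and minor scrambles differ only in the frequency of the third scale degree, so classifying a scramble as “happy” or “sad” requires sensitivity to that single scale relationship rather than to melody or rhythm.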