I am a psycholinguist who studies speech perception: the process of transforming the raw acoustic signal into a linguistic message. To accomplish this, humans prioritize certain types of information and discard others; in other words, they use biases. A simple example is the real-word bias, whereby a listener will interpret the signal for beb [bɛb] as the real word bib [bɪb], especially in certain contexts, such as feeding a baby. Some biases are perceptual in nature, while others are social; some operate at the moment of perception, while others affect how we remember speech interactions later on. My research investigates these biases as a way of understanding human behavior, examining American English as well as French, Arabic, and Spanish. We are now at a new moment in history, in which speech is a behavior exhibited not just by humans but also by language technology. My work with colleagues and students is beginning to examine two questions that are crucial for shaping this moment responsibly. First, are human biases in speech perception exhibited by technology? Second, are human biases in speech perception shaped by technology?