Voice-based, AI-powered digital assistants, such as Alexa, Siri, and Google Assistant, present an exciting opportunity to translate healthcare from the hospital to the home. But building a digital medical panopticon raises significant legal and ethical challenges if these tools are not designed and implemented thoughtfully. This paper highlights the benefits and explores some of the challenges of using digital assistants to detect early signs of cognitive impairment, focusing on issues such as consent, bycatching, privacy, and regulatory oversight. Through a fictional but plausible near-future hypothetical, we demonstrate why an “ethics-by-design” approach is necessary for consumer-monitoring tools that may be used to identify health concerns in their users.
“Janice is a 77-year-old woman living alone. Her daughter, Maggie, works in another city and worries about Janice. Maggie buys Janice Amazon’s Hear, a new speaker with a built-in microphone and a voice-activated digital assistant. The Hear is equipped with a new AI tool that can detect early signs of cognitive impairment by identifying and analyzing patterns in the user’s speech and use of the device. It can even produce a ‘speech report’ that shows how speech patterns identified by the monitoring tool compare with the user’s past speech, as well as with the speech of other users who own and use the tool. Janice and Maggie both feel the monitoring tool offers a promising way to help Janice remain in her beloved home while addressing Maggie’s concern about keeping her mother safe. Over the next year, Maggie receives alerts that her mother’s speech patterns have changed and that she repeats questions. Maggie schedules a visit with her mother’s primary care physician and brings a printout of the speech report to the appointment. The physician refers Janice to a neurologist.”