Bang Tran, Sai Harshavardhan Reddy Kona, Xiaohui Liang, Gabriel Ghinita, Caroline Summerour, and John A. Batsis
Voice assistant systems (VAS), such as Google Assistant or Amazon Alexa, provide a convenient means for users to interact verbally with online services. VAS are particularly important for users with severe health conditions or impaired motor skills. At the same time, voice commands may contain highly sensitive information about individuals. Therefore, sharing such data with service providers must be done in a carefully controlled and transparent manner in order to prevent privacy breaches. One important challenge is identifying which voice commands contain sensitive information. Different individuals are likely to have distinct interpretations of what is sensitive and what must be kept private, depending on gender, age, cultural background, etc. Furthermore, even for the same individual, the context in which a command is issued can lead to significantly different sensitivity perceptions. We introduce a framework named VPASS that supports the management of personalized privacy requirements for VAS. Specifically, we propose mechanisms to quantify two key aspects of each voice command: the amount of information it discloses and its level of privacy sensitivity. Our mechanisms employ deep transfer learning techniques for processing voice commands and can accurately detect privacy-sensitive commands based on an individual's prior history of VAS interaction. Finally, VPASS generates monthly reports or immediate privacy alerts based on privacy policies predefined by users.
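To make the transfer-learning idea concrete, the sketch below is a minimal, hypothetical illustration rather than the paper's actual implementation: it assumes voice commands have already been transcribed to text and fine-tunes a pretrained DistilBERT classifier (via the Hugging Face transformers library) on a user's labeled command history to flag privacy-sensitive commands. The model choice, labels, and example commands are all assumptions made for illustration.

```python
# Illustrative sketch only: fine-tuning a pretrained text classifier to flag
# privacy-sensitive voice commands. Model, labels, and data are hypothetical
# assumptions, not the paper's implementation.
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2  # 0 = non-sensitive, 1 = sensitive
)

# Hypothetical transcribed commands from a user's history, with sensitivity labels.
commands = ["play some jazz music", "refill my blood pressure prescription"]
labels = [0, 1]

enc = tokenizer(commands, truncation=True, padding=True, return_tensors="pt")
dataset = TensorDataset(enc["input_ids"], enc["attention_mask"], torch.tensor(labels))
loader = DataLoader(dataset, batch_size=2, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for epoch in range(3):  # short fine-tuning run on the user's labeled history
    for input_ids, attention_mask, y in loader:
        optimizer.zero_grad()
        out = model(input_ids=input_ids, attention_mask=attention_mask, labels=y)
        out.loss.backward()
        optimizer.step()

# Score a new command; a high "sensitive" probability could trigger a privacy alert.
model.eval()
with torch.no_grad():
    new = tokenizer("remind me to take my antidepressant", return_tensors="pt")
    probs = torch.softmax(model(**new).logits, dim=-1)
    print("P(sensitive) =", probs[0, 1].item())
```

In such a setup, per-user fine-tuning is what allows the classifier to reflect an individual's own notion of sensitivity rather than a one-size-fits-all policy.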
Accepted as a poster at The Nineteenth Symposium on Usable Privacy and Security (SOUPS 2023)
Accepted as a regular paper at the 20th Annual International Conference on Privacy, Security & Trust (PST2023)