Exploiting Privacy Preserving Prompt Techniques for Online Large Language Model Usage (GLOBECOM 2024)

Youxiang Zhu, Ning Gao, Xiaohui Liang, and Honggang Zhang

Online Large Language Models (LLMs) are widely employed across various tasks, including privacy-sensitive ones such as financial advice or paragraph rewriting. Presently, users submit prompts directly to online LLM servers, inadvertently revealing sensitive keywords and enabling the server to track users and build profiles of them. In this paper, we propose a local privacy-preserving prompt assistant (LPPA) that gives users a usable way to balance the privacy of their prompts against the utility of the LLM output. The LPPA analyzes a user's prompt, suggests modifications that protect the sensitive keywords, and estimates the potential impact of those modifications on the utility of the online LLM output.
Specifically, we first propose a privacy module that identifies the sensitive keywords in the prompt and adopts four privacy techniques, namely remove, mask, replace, and rewrite, to hide them. Since these techniques affect the utility of the online LLM output, we measure their impact by comparing the LLM outputs of the original and modified prompts, and we discuss cases of high, medium, and low impact. In addition, we propose a utility inference model that estimates the utility impact locally, without disclosing the prompt to the online LLM. We evaluated LPPA on real-world user prompts and showed that the remove technique achieves the best performance, empowering users to adjust their prompts in meaningful ways that safeguard privacy while maintaining a satisfactory level of utility in online LLM usage.
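To make the first three keyword-hiding techniques concrete, here is a minimal illustrative sketch (not the paper's implementation): it assumes the privacy module has already produced a list of sensitive keywords, and applies remove, mask, and replace as simple string operations. The rewrite technique paraphrases the whole prompt (e.g., with a local model), so it is not captured by a string operation and is omitted.

```python
import re

def remove_keywords(prompt: str, keywords: list[str]) -> str:
    """Remove: delete the sensitive keywords from the prompt."""
    for kw in keywords:
        prompt = prompt.replace(kw, "")
    # Collapse the whitespace left behind by the deletions.
    return re.sub(r"\s+", " ", prompt).strip()

def mask_keywords(prompt: str, keywords: list[str], mask: str = "[MASK]") -> str:
    """Mask: replace each sensitive keyword with a placeholder token."""
    for kw in keywords:
        prompt = prompt.replace(kw, mask)
    return prompt

def replace_keywords(prompt: str, substitutions: dict[str, str]) -> str:
    """Replace: swap each sensitive keyword for a non-sensitive stand-in."""
    for kw, sub in substitutions.items():
        prompt = prompt.replace(kw, sub)
    return prompt

# Hypothetical example prompt and keywords (for illustration only).
prompt = "I earn $85,000 at Acme Corp; how should I invest?"
keywords = ["$85,000", "Acme Corp"]
print(remove_keywords(prompt, keywords))
print(mask_keywords(prompt, keywords))
print(replace_keywords(prompt, {"$85,000": "a salary", "Acme Corp": "my employer"}))
```

Each modified prompt can then be sent to the online LLM in place of the original, and the resulting output compared against the original prompt's output to gauge the utility impact.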

Accepted by GLOBECOM 2024.
