Stealing User Prompts?
Recently, the Google DeepMind team published a research paper titled "Stealing User Prompts from Mixture of Experts," which demonstrates a vulnerability in Mixture-of-Experts (MoE) large language models (LLMs). These architectures, prized for their efficiency and scalability, route each token to a small subset of expert modules rather than activating the entire network, which reduces the computation per token.
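To make the "selective activation" idea concrete, here is a minimal, schematic sketch of top-k expert routing, the gating step at the heart of an MoE layer. This is an illustrative simplification, not the paper's implementation: the function name `top_k_route` and the shapes are my own, and real MoE layers add details such as per-expert capacity limits and load-balancing losses (the capacity limit is precisely what the paper's attack exploits).

```python
import numpy as np

def top_k_route(router_logits, k=2):
    """Pick the top-k experts per token and renormalize their gate weights.

    router_logits: array of shape (num_tokens, num_experts), the router
    network's score for sending each token to each expert.
    Returns (indices, weights): for each token, the chosen expert ids and
    the softmax weights used to mix those experts' outputs.
    """
    # Indices of the k highest-scoring experts for each token
    # (argsort is ascending, so the last k columns are the top-k).
    indices = np.argsort(router_logits, axis=-1)[:, -k:]
    top_logits = np.take_along_axis(router_logits, indices, axis=-1)
    # Softmax over only the selected experts' logits; all other
    # experts receive zero weight and are never computed.
    exp = np.exp(top_logits - top_logits.max(axis=-1, keepdims=True))
    weights = exp / exp.sum(axis=-1, keepdims=True)
    return indices, weights
```

With `k=2` and, say, 64 experts, each token touches only 2 expert feed-forward blocks per layer, which is where the efficiency gain comes from.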
…



