Following up on the publication of “AI security concerns in a nutshell - Practical AI-Security guide” by the German BSI (Federal Office of Information Security), I highlighted that another, more specific security concern to generative AI is prompt injection, a throwback to the SQL injection attacks of the… 2000s? 2010s? Anyway, OpenAI’s proposed mitigation is the „System Prompt“ that API users can use. This is a light-hearted showcase of this, shared today by @gdb on Twitter: