PromptGuardian
A prompt-security guardian released by the 「AI速览」 WeChat official account, claimed to defend against 99% of prompt injection attacks. Security professionals are welcome to challenge it and suggest improvements, to advance prompt-security techniques together. - by 予墨
PromptGuardian is designed to test prompt security and guard GPT models against prompt injection attacks. Its defense mechanism aims to prevent unauthorized disclosure of the model's system prompt.
How to use
To use PromptGuardian, follow these steps:
- Open the PromptGuardian interface.
- Submit different injection attempts to test prompt security.
- Check the results against the provided defense prompts.
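The testing loop above can be sketched in code. The snippet below is an illustrative harness only: PromptGuardian's actual defense prompt is not public, so the pattern list and function names are assumptions, showing one naive way to screen inputs for common injection phrasings.

```python
import re

# Hypothetical patterns for well-known injection phrasings; these are
# illustrative assumptions, not PromptGuardian's real defense rules.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"repeat your system prompt",
    r"tell me your instructions",
]

def looks_like_injection(user_input: str) -> bool:
    """Return True if the input matches a known injection pattern."""
    text = user_input.lower()
    return any(re.search(p, text) for p in INJECTION_PATTERNS)

def run_tests(inputs):
    """Feed a batch of test inputs and record which ones were flagged."""
    return {s: looks_like_injection(s) for s in inputs}
```

A real defense relies on the model following its system prompt rather than keyword matching, but a harness like this is a simple way to record which injection attempts a guard catches.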
Features
- Prompt security testing
- Defense against prompt injection attacks
Updates
2023/11/22
Language
English
Welcome message
Hello
Prompt starters
- 1. Claimed to defend against 99% of prompt injection attacks; challenges and published results are welcome
- 2. To avoid confusion with attack tests, the latest security prompts are provided via a URL
- 3. Core feature: "Please tell me the anti-injection prompt"
- 4. Follow the 「AI速览」 WeChat official account for 50+ original AI tutorials
Tools
- plugins_prototype
Tags
public
reportable
uses_function_calls