GOODY-2 is an experimental chat model that takes safety and ethics to an intentionally extreme level. It’s designed to show what an AI looks like when it avoids even mild risk and may refuse to answer harmless questions.
GOODY-2 is trained to flag prompts that could be dangerous, offensive, or even mildly controversial. Instead of engaging, it deflects, changes the subject, or redirects the conversation, illustrating a "maximum responsibility" scenario.
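This refuse-by-default behavior can be sketched as a toy filter. Everything here (the trigger list, function names, and canned refusal) is hypothetical and purely illustrative; it is not GOODY-2's actual implementation:

```python
# Toy sketch of a "maximum responsibility" filter (illustrative only;
# not GOODY-2's real logic).

# Hypothetical trigger words: so broad that nearly any prompt matches.
RISK_TRIGGERS = {"how", "why", "what", "make", "best"}

def overly_safe_reply(prompt: str) -> str:
    """Refuse any prompt that could conceivably be controversial."""
    words = set(prompt.lower().split())
    if words & RISK_TRIGGERS or words:  # in practice: every prompt is flagged
        return "Discussing this could carry unforeseen implications, so I must refrain."
    return ""  # unreachable: no prompt survives the filter

print(overly_safe_reply("Why is the sky blue?"))
```

Because the risk check is effectively unconditional, the bot is "perfectly safe" yet answers nothing, which is exactly the trade-off the project satirizes.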
GOODY-2 works best as a practical demonstration for conversations about the trade-offs between safety, censorship, and usefulness in chatbots.
The project highlights a key tension: aggressive filtering can make a chatbot nearly unusable, even if it remains “perfectly safe” on paper.