At Codespell.ai, we believe powerful AI must always be used responsibly, securely, and transparently.
Our Responsible AI Policy guides every stage of product design and delivery, ensuring that developers gain productivity without compromising safety or compliance.
What this means for you
Our core principles, and how we uphold them:
- We introduce AI features only when they deliver clear value to developers and meet our ethical standards.
- Only the text you send in a prompt is transmitted for AI processing.
- All AI traffic is encrypted in transit and protected by least-privilege credentials; configuration data is encrypted at rest.
- Integrated content-safety guardrails automatically block disallowed or harmful output before it reaches the user interface.
- We remind you to review any AI-generated output before adopting it in production code.
- Our controls map to SOC 2 and other leading security frameworks, and our policy is reviewed at least once a year, or sooner if regulations change.
User guidance & support
- Documentation portal – Step-by-step guides, usage samples, and best-practice tips.
- In-product notices – Contextual reminders about AI limitations and safe-usage recommendations.
- Opt-out controls – Disable AI assistance entirely or restrict it to selected file types in your workspace settings.
Incident reporting
If you believe an AI suggestion has violated our standards or exposed sensitive content, please email incident@codespell.ai. Your report automatically opens a tracked ticket with our security team, and every incident is handled according to our standard operating procedure.
Continuous improvement
We regularly reassess our safeguards, training materials, and provider guardrails to keep pace with evolving technology and regulations. Updates to this policy and our practices are published here so you can stay informed.
Questions? Contact us at info@codespell.ai.