We actively support and provide security updates for the following versions of AI Society:
| Version | Supported |
|---|---|
| 1.0.x | ✅ Fully Supported |
| < 1.0 | ❌ Not Supported |
AI Society implements several security measures to protect users and their data:
- Local Processing: All model inference happens locally - no data sent to external services
- No Data Persistence: User queries and responses are not stored permanently
- Memory Cleanup: Sensitive data is cleared from memory after processing
- Local-First: Web interface runs on localhost by default
- CORS Protection: Cross-origin requests are properly validated
- Input Validation: All user inputs are validated and sanitized (see the sketch after this list)
- Verified Models: Only models from trusted sources (Ollama library)
- Resource Limits: GPU memory and model size constraints prevent resource exhaustion
- Sandboxing: Models run in isolated Ollama environment
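To make the Input Validation item above concrete, here is a minimal sketch of the kind of prompt sanitization a local web layer can apply before text reaches the model. The function name, length limit, and character filter are illustrative assumptions, not AI Society's actual implementation.

```python
# Hypothetical input-validation helper; illustrates the checks described
# above, not AI Society's actual code.
import re

MAX_PROMPT_LENGTH = 8_000  # assumed limit, adjust to taste
CONTROL_CHARS = re.compile(r"[\x00-\x08\x0b\x0c\x0e-\x1f]")  # keep \t, \n, \r

def sanitize_prompt(raw: str) -> str:
    """Validate and sanitize a user prompt before it reaches the model."""
    if not isinstance(raw, str):
        raise TypeError("prompt must be a string")
    if len(raw) > MAX_PROMPT_LENGTH:
        raise ValueError(f"prompt exceeds {MAX_PROMPT_LENGTH} characters")
    # Strip non-printable control characters that could confuse logs or terminals
    cleaned = CONTROL_CHARS.sub("", raw)
    if not cleaned.strip():
        raise ValueError("prompt is empty after sanitization")
    return cleaned
```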
We take security vulnerabilities seriously. If you discover a security issue, please follow these steps:
Security vulnerabilities should not be reported through public GitHub issues.
Use GitHub's private vulnerability reporting feature:
- Go to the repository's Security tab
- Click "Report a vulnerability"
- Fill out the vulnerability report form
Or create a private issue with the security label for sensitive reports.
Please include the following information in your report:
- Type of vulnerability (e.g., injection, authentication bypass)
- Step-by-step reproduction instructions
- Affected versions of AI Society
- Potential impact of the vulnerability
- Suggested fix (if you have one)
- Your contact information for follow-up questions
What to expect after you report:
- Acknowledgment: We'll acknowledge receipt within 24 hours
- Assessment: We'll assess the vulnerability within 72 hours
- Updates: We'll provide regular updates on our progress
- Resolution: We aim to resolve critical vulnerabilities within 7 days
- Credit: We'll credit you in our security acknowledgments (if desired)
Security best practices for users:
- Verify Downloads: Always download from official sources (see the checksum sketch after this list)
- Check Signatures: Verify release signatures when available
- Use Virtual Environments: Install in isolated Python environments
- Keep Updated: Regularly update to the latest version
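One hedged way to act on the Verify Downloads and Check Signatures items is to compare a downloaded release archive against the SHA-256 digest published alongside it. The archive name and expected digest below are placeholders, not real values.

```python
# Minimal checksum check for a downloaded release archive.
# The file name and expected digest are placeholders, not real values.
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

expected = "0000000000000000000000000000000000000000000000000000000000000000"
actual = sha256_of("ai-society-1.0.0.tar.gz")
if actual != expected:
    raise SystemExit(f"Checksum mismatch: {actual}")
print("Checksum OK")
```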
Network Configuration:
- Keep web interface on localhost unless needed
- Use firewall rules to restrict access
- Monitor network connections
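As a rough check of the localhost-only guidance above, the sketch below lists sockets that are listening on non-loopback addresses. It assumes the psutil package (already used by the monitoring example later in this policy) is installed, and it may need elevated privileges on some platforms to see every connection.

```python
# Sketch: warn if any local service is listening on a non-loopback address.
# Assumes psutil is installed; may require elevated privileges on some platforms.
import psutil

def find_public_listeners():
    public = []
    for conn in psutil.net_connections(kind="inet"):
        if conn.status == psutil.CONN_LISTEN and conn.laddr:
            ip = conn.laddr.ip
            if ip not in ("127.0.0.1", "::1"):
                public.append((ip, conn.laddr.port))
    return public

for ip, port in find_public_listeners():
    print(f"WARNING: listening on {ip}:{port} (not loopback)")
```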
Resource Monitoring:
- Monitor GPU memory usage
- Set appropriate model size limits
- Watch for unusual resource consumption
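One possible way to watch GPU memory is to poll nvidia-smi, assuming an NVIDIA GPU with the tool on the PATH; the 90% warning threshold is an arbitrary example, not a value mandated by AI Society.

```python
# Sketch: poll GPU memory via nvidia-smi (NVIDIA GPUs only).
# The 90% warning threshold is an illustrative assumption.
import subprocess

def gpu_memory_mib():
    """Yield (used, total) memory in MiB for each visible GPU."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=memory.used,memory.total",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.strip().splitlines():
        used, total = (int(field) for field in line.split(","))
        yield used, total

for used, total in gpu_memory_mib():
    if used / total > 0.9:
        print(f"WARNING: GPU memory nearly exhausted ({used}/{total} MiB)")
```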
Access Control:
- Restrict file system access
- Use appropriate user permissions
- Avoid running as root/administrator
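A small POSIX-only guard for the last point refuses to start when run as root; whether to exit or merely warn is a design choice, not something this policy prescribes.

```python
# Sketch: refuse to start with root privileges (POSIX systems only).
import os
import sys

if hasattr(os, "geteuid") and os.geteuid() == 0:
    sys.exit("Refusing to run as root; use an unprivileged user account.")
```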
Secure Configuration:

```json
{
  "max_model_size": "8GB",
  "auto_download": false,
  "restricted_mode": true,
  "log_queries": false
}
```
Environment Variables:
- Don't commit sensitive config to version control
- Use environment variables for sensitive settings
- Regularly rotate any credentials
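For the environment-variable guidance, here is a short sketch of reading settings at startup instead of hardcoding them; OLLAMA_HOST is Ollama's standard variable with its default local address, while AI_SOCIETY_API_TOKEN is a hypothetical name used purely for illustration.

```python
# Sketch: read sensitive settings from the environment, never from source code.
# AI_SOCIETY_API_TOKEN is a hypothetical variable name for illustration.
import os

ollama_host = os.environ.get("OLLAMA_HOST", "http://127.0.0.1:11434")
api_token = os.environ.get("AI_SOCIETY_API_TOKEN")

if api_token is None:
    raise RuntimeError("Set AI_SOCIETY_API_TOKEN instead of hardcoding a secret")
```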
We encourage security researchers to test AI Society responsibly:
In Scope:
- Web application security (XSS, CSRF, injection attacks)
- API endpoint security
- Authentication and authorization bypasses
- Local file access vulnerabilities
- Resource exhaustion attacks
- Model inference security
Out of Scope:
- Physical attacks
- Social engineering
- Denial of service attacks that require excessive resources
- Issues in third-party dependencies (report to respective projects)
Testing guidelines:
- Don't harm others: Only test on your own installations
- Respect resources: Don't consume excessive computational resources
- Follow disclosure: Use responsible disclosure practices
- Document thoroughly: Provide clear reproduction steps
Our development team follows these security practices:
- Input Validation: All inputs are validated and sanitized
- Dependency Management: Regular dependency updates and vulnerability scanning
- Code Review: All code changes undergo security review
- Static Analysis: Automated security scanning in CI/CD
- Access Control: Minimal access principles for development infrastructure
- Monitoring: Continuous monitoring for security events
- Backup: Secure backup and recovery procedures
- Incident Response: Defined procedures for security incidents
We thank the following security researchers for responsibly disclosing vulnerabilities:
No vulnerabilities reported yet - be the first to help improve AI Society's security!
Before submitting code, ensure:
- Input validation is implemented for all user inputs
- No sensitive data is logged or persisted
- Error messages don't leak sensitive information
- Dependencies are up to date and don't have known vulnerabilities
- Code follows secure coding practices
- No hardcoded credentials or secrets
- Proper error handling prevents information disclosure
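To illustrate the error-handling items in the checklist above, here is one sketch of returning a generic message to the caller while keeping the full traceback in internal logs; the incident-id scheme is an assumption, not project policy.

```python
# Sketch: log full details internally, return a generic error externally.
# The incident-id scheme is an illustrative assumption.
import logging
import uuid

logger = logging.getLogger("ai_society")

def safe_call(fn, *args, **kwargs):
    try:
        return fn(*args, **kwargs)
    except Exception:
        incident = uuid.uuid4().hex[:8]
        logger.exception("Request failed (incident %s)", incident)
        return {"error": f"Internal error (incident {incident})"}
```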
These tools can help with the checks above:

Safety: Check Python dependencies for known vulnerabilities

```bash
pip install safety
safety check
```

Bandit: Security linter for Python

```bash
pip install bandit
bandit -r src/
```

Semgrep: Static analysis for security issues

```bash
pip install semgrep
semgrep --config=auto src/
```
Monitor your AI Society installation:
```python
# Basic security monitoring
import psutil
import logging

def monitor_security():
    # Check for unusual network connections
    connections = psutil.net_connections()
    for conn in connections:
        if conn.status == 'ESTABLISHED':
            logging.info(f"Active connection: {conn.laddr} -> {conn.raddr}")

    # Monitor resource usage
    cpu_percent = psutil.cpu_percent()
    if cpu_percent > 90:
        logging.warning(f"High CPU usage: {cpu_percent}%")
```

If you suspect a security incident:
- Isolate: Disconnect from network if necessary
- Preserve: Don't delete logs or evidence
- Report: Contact security team immediately
- Document: Record all relevant information
- Update: Apply security patches as soon as available
- Security Reports: Use GitHub's private vulnerability reporting feature
- Security Team: @dexmac221 (project maintainer)
- Response Time: Within 24 hours for critical issues
This security policy is reviewed and updated regularly. Last updated: September 4, 2025
For the latest version, always check: https://github.com/dexmac221/AiSociety/blob/main/SECURITY.md