Confidential AI Models
RedPill exclusively uses Confidential AI models - AI models running entirely within GPU Trusted Execution Environments (TEEs). Your prompts and responses never leave the secure hardware enclave.

Unlike other AI platforms, we don’t route to OpenAI, Anthropic, or other cloud providers. Every model runs in verified TEE infrastructure.
Available Model Providers
RedPill sources Confidential AI models from three verified TEE providers:

- Phala Network - 8 models including DeepSeek V3, GPT-OSS, Qwen
- Tinfoil - 4 models including DeepSeek R1, Qwen3 Coder
- Near AI - 3 models including GLM-4.6, DeepSeek V3.1
Model Catalog
Phala Network Models
| Model | Parameters | Best For |
|---|---|---|
| DeepSeek V3.2 | 685B MoE | Latest reasoning, complex tasks |
| DeepSeek V3 | 685B MoE | General reasoning, analysis |
| GPT-OSS 120B | 120B | Open-source GPT alternative |
| Qwen 2.5 72B | 72B | Multilingual, coding |
| Qwen 2.5 7B | 7B | Fast responses, chat |
| Gemma 2 27B | 27B | Efficient, balanced |
| DeepSeek Coder | 33B | Code generation, review |
| DeepSeek Chat | 67B | Conversational AI |
Tinfoil Models
| Model | Parameters | Best For |
|---|---|---|
| DeepSeek R1 | 685B MoE | Advanced reasoning |
| Qwen3 Coder 480B | 480B | Large-scale coding |
| Llama 3.3 70B | 70B | General purpose, chat |
| Llama 3.3 8B | 8B | Fast, lightweight |
Near AI Models
| Model | Parameters | Best For |
|---|---|---|
| DeepSeek V3.1 | 685B MoE | Latest reasoning |
| GLM-4.6 | 9B | Chinese language, fast |
| Qwen3 32B | 32B | Balanced performance |
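If you prefer to discover the catalog programmatically, OpenAI-compatible gateways typically expose a model-listing endpoint. The sketch below assumes RedPill follows that convention; the base URL, the GET /models path, and the REDPILL_API_KEY environment variable are placeholders, so substitute the values from your RedPill dashboard.

```python
import os

import requests

# Placeholder values -- replace with the base URL and API key from your RedPill dashboard.
BASE_URL = "https://api.redpill.ai/v1"  # assumed OpenAI-compatible base URL
API_KEY = os.environ["REDPILL_API_KEY"]

# Assumes an OpenAI-compatible GET /models endpoint that returns the current catalog.
resp = requests.get(
    f"{BASE_URL}/models",
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
resp.raise_for_status()

# Print the ID of each available model.
for model in resp.json().get("data", []):
    print(model.get("id"))
```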
Choosing the Right Model
For general conversations

Recommended: DeepSeek V3 or Llama 3.3 70B. These models handle everyday questions, brainstorming, and general assistance well.
For complex reasoning

Recommended: DeepSeek R1 or DeepSeek V3.2. Best for multi-step problems, analysis, and tasks requiring deep thinking.
For coding tasks

Recommended: Qwen3 Coder 480B or DeepSeek Coder. Optimized for code generation, debugging, and technical documentation.
For multilingual content

Recommended: Qwen 2.5 or GLM-4.6. Strong performance in non-English languages, especially Chinese.
For fast responses

Recommended: Qwen 2.5 7B or Llama 3.3 8B. Smaller models that respond quickly for simple queries.
Model Usage by Plan
| Feature | Free | Pro | Enterprise |
|---|---|---|---|
| Basic models (7B-27B) | ✅ | ✅ | ✅ |
| Large models (70B+) | Limited | ✅ | ✅ |
| Massive models (480B+) | ❌ | ✅ | ✅ |
| Model switching | ✅ | ✅ | ✅ |
| Priority access | ❌ | ✅ | ✅ |
Why Confidential AI Only?
RedPill is designed for true privacy. Here’s why we only use Confidential AI models:

- No third-party exposure - Your data doesn’t go to OpenAI, Anthropic, or Google
- Hardware isolation - TEE ensures even the hosting provider can’t see your data
- Verifiable execution - Cryptographic attestation proves the model runs in genuine TEE
- Consistent privacy - Every model meets the same security standard
Want access to 60+ models including OpenAI GPT-4 and Claude? Use the RedPill API for development - it offers broader model selection with TEE-protected routing.
Attestation & Verification
Every Confidential AI model comes with cryptographic attestation proving it runs in genuine TEE hardware.

Learn about verification
Verify model execution yourself →
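As a rough illustration of what self-verification can look like, the sketch below fetches an attestation report and inspects it. The endpoint path, query parameter, and response fields are assumptions made for illustration only; follow the verification guide linked above for the documented flow.

```python
import os

import requests

BASE_URL = "https://api.redpill.ai/v1"  # placeholder base URL
API_KEY = os.environ["REDPILL_API_KEY"]

# Hypothetical endpoint and parameter -- shown only to illustrate the shape of the flow.
resp = requests.get(
    f"{BASE_URL}/attestation/report",
    headers={"Authorization": f"Bearer {API_KEY}"},
    params={"model": "deepseek-v3"},
    timeout=30,
)
resp.raise_for_status()
report = resp.json()

# A typical TEE verification flow checks that the quote is signed by genuine
# hardware (via the vendor's attestation service) and that the measurements
# match the expected model-serving image. Here we only inspect the raw report.
print("Fields returned:", sorted(report.keys()))
```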