I recently spent some time getting AFFiNE’s AI Copilot feature working in my self-hosted environment. After encountering several roadblocks and finding solutions, I wanted to share my experience to help others avoid the same issues.
The Challenge
AFFiNE’s AI Copilot is a powerful feature, but getting it configured properly with custom models in a self-hosted environment isn’t straightforward. The documentation suggests using a config.json file, but I found the admin UI approach much more reliable.
Initial Setup Success
Instead of wrestling with configuration files, I got it working through the web interface:
- Navigate to admin/settings/copilot
- Add the OpenAI provider with your API configuration
- Configure your preferred models
This approach is much more user-friendly and provides immediate feedback if something’s wrong.
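The provider configuration is a small JSON object pasted into the admin UI. Here's a sketch of what I used for OpenAI — the apiKey value is a placeholder, and I'm showing OpenAI's standard endpoint as the baseUrl (the field names mirror the Ollama config shown later in this post):

```json
{"apiKey": "sk-your-key-here", "baseUrl": "https://api.openai.com/v1"}
```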
The Default Model Problem
Here’s where things got interesting. The latest AFFiNE build defaults to claude-sonnet-4@20250514 as the model, but this causes issues depending on your AI provider:
- GCP Vertex AI: Uses claude-sonnet-4@20250514 (with the @ symbol) ✅
- Anthropic Direct API: Requires claude-sonnet-4-20250514 (with a hyphen instead of the @) ✅
- OpenAI only: Fails because Claude models aren’t available ❌
The Solution: Source Code Modification
Since I build my own Docker images, I was able to fix this at the source code level. Here’s how:
Step 1: Find the Problem Files
find . -name "*.ts" -type f -exec grep -l "claude-sonnet-4@20250514" {} \;
This revealed that the problematic model reference appears in multiple TypeScript files across the codebase.
Step 2: Replace with Working Models
For OpenAI-only setups:
find . -name "*.ts" -type f -exec sed -i 's/claude-sonnet-4@20250514/gpt-4o/g' {} \;
For Anthropic Direct API:
find . -name "*.ts" -type f -exec sed -i 's/claude-sonnet-4@20250514/claude-sonnet-4-20250514/g' {} \;
For GCP Vertex AI users, the default should work as-is.
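Before running the substitution across the whole tree, it can be sanity-checked on a scratch file. A minimal demo (the file path and its contents here are made up for illustration):

```shell
# Create a scratch file containing the problematic model ID (hypothetical example)
printf 'const model = "claude-sonnet-4@20250514";\n' > /tmp/model-demo.ts

# Apply the same substitution used above
sed -i 's/claude-sonnet-4@20250514/gpt-4o/g' /tmp/model-demo.ts

cat /tmp/model-demo.ts
# → const model = "gpt-4o";
```

Note that the @ symbol needs no escaping in the sed pattern, which is why the one-liner above works unmodified.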
Step 3: Rebuild and Test
After rebuilding with these changes, the OpenAI provider worked perfectly through the admin UI configuration.
Experimenting with Ollama
Curious about using local AI models, I decided to test Ollama integration. This is where things got really interesting.
The Setup
- Replaced the default model with deepseek-r1:14b
- Configured Ollama in admin settings:
{"apiKey":"ollama","baseUrl":"https://ollama.private.domain/v1"}
The First Roadblock
AFFiNE immediately failed with:
"no_copilot_provider_available: No copilot provider available: deepseek-r1:14b"
The issue? Custom models weren’t defined in the OpenAI provider configuration.
The Provider Fix
I discovered that AFFiNE’s OpenAI provider acts as a universal interface for OpenAI-compatible APIs, but it needs to know about specific models beforehand.
The fix involved editing packages/backend/server/src/plugins/copilot/providers/openai.ts:
// Find the "// Text to Text models" comment and add after it:
{
  id: 'deepseek-r1:14b',
  capabilities: [
    {
      input: [ModelInputType.Text, ModelInputType.Image],
      output: [ModelOutputType.Text, ModelOutputType.Object],
    },
  ],
},
{
  id: 'gemma3:12b',
  capabilities: [
    {
      input: [ModelInputType.Text, ModelInputType.Image],
      output: [ModelOutputType.Text, ModelOutputType.Object],
    },
  ],
},
So Close, Yet So Far
After rebuilding, AFFiNE recognized the custom models and I could see requests hitting my Ollama instance. Success! Or so I thought…
The requests were failing because AFFiNE was calling POST /v1/responses, which returned a 404. AFFiNE appears to use OpenAI’s newer Responses API endpoint (/v1/responses) instead of the standard /v1/chat/completions endpoint that Ollama implements.
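For context, the two endpoints expect different request shapes. A rough sketch (the /v1/responses body follows OpenAI’s Responses API format, which Ollama does not implement):

```
# Implemented by Ollama (standard OpenAI chat completions):
POST /v1/chat/completions
{"model": "deepseek-r1:14b", "messages": [{"role": "user", "content": "Hello"}]}

# What AFFiNE calls (OpenAI Responses API): returns 404 from Ollama
POST /v1/responses
{"model": "deepseek-r1:14b", "input": "Hello"}
```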
Current Working Solution
For now, I’m running AFFiNE successfully with:
- OpenAI ✅
- Gemini ✅
- Anthropic ✅
- Default model: gpt-4o
- Configuration: Admin UI (admin/settings/copilot)
Key Takeaways
- Admin UI > config.json: The web interface is much more reliable for configuration
- Model naming matters: Different providers use different naming conventions
- Source code fixes work: For self-hosted setups, modifying the default model in source code is effective
- Ollama is almost there: The architecture supports it, but API endpoint compatibility needs work
For Docker Users
If you’re building custom Docker images like I am, you can automate the model replacement:
ARG AI_MODEL=gpt-4o
# Update all .ts files to use the specified model
RUN find . -name "*.ts" -type f -exec sed -i 's/claude-sonnet-4@20250514/'"$AI_MODEL"'/g' {} \;
Then build with:
docker build --build-arg AI_MODEL="your-preferred-model" .
Conclusion
While AFFiNE’s AI Copilot feature has some rough edges in self-hosted environments, it’s definitely workable with the right approach. The admin UI configuration combined with source code modifications provides a reliable path to get AI features working with major providers.
The Ollama experiment shows that local AI integration is tantalizingly close: it’s really just an API endpoint compatibility issue away from working. Hopefully, future AFFiNE releases will include better support for OpenAI-compatible APIs like Ollama’s.