Vexa: Self-Hosted Meeting Intelligence Platform with Real-Time Transcripts

Summary
Vexa is an open-source, self-hostable meeting intelligence platform for real-time transcription across Google Meet and Microsoft Teams. It exposes a multi-user API that deploys bots into meetings, and it can run as a hosted service or entirely on your own infrastructure for full data sovereignty. Built with Python, Vexa supports real-time multilingual transcription and translation.
Introduction
Vexa is an open-source, self-hostable meeting intelligence platform that allows you to deploy bots into virtual meetings (Google Meet, Microsoft Teams, and soon Zoom) for real-time transcription and translation. Built for data sovereignty, Vexa offers full control over your data, making it ideal for regulated industries and privacy-conscious teams. Its modular architecture and robust API enable seamless integration and scaling from edge devices to millions of users.
Installation
Vexa offers flexible deployment options, from a fully hosted service to complete self-hosting.
Option 1: Hosted (Fastest)
You can start immediately by grabbing your API key from the official Vexa dashboard.
Option 2: Self-host with Lite Container (Single Container, No GPU)
This compact version of Vexa depends on an external database and transcription service.
docker run -d \
--name vexa \
-p 8056:8056 \
-e DATABASE_URL="postgresql://user:pass@host/vexa" \
-e ADMIN_API_TOKEN="your-admin-token" \
-e TRANSCRIBER_URL="https://transcription.service" \
-e TRANSCRIBER_API_KEY="transcriber-token" \
vexaai/vexa-lite:latest
For complete setup examples and more configurations, refer to the official documentation.
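If you prefer to manage the Lite container from Python rather than the shell, the same configuration can be expressed with the Docker SDK for Python (pip install docker). The snippet below is a minimal sketch; the environment values are the same placeholders as in the command above, not defaults shipped with Vexa.
# Minimal sketch using the Docker SDK for Python; all values are placeholders.
import docker

client = docker.from_env()
container = client.containers.run(
    "vexaai/vexa-lite:latest",
    name="vexa",
    detach=True,
    ports={"8056/tcp": 8056},
    environment={
        "DATABASE_URL": "postgresql://user:pass@host/vexa",
        "ADMIN_API_TOKEN": "your-admin-token",
        "TRANSCRIBER_URL": "https://transcription.service",
        "TRANSCRIBER_API_KEY": "transcriber-token",
    },
)
print(f"Started Vexa Lite container {container.short_id}")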
Option 3: Self-host with Docker Compose
Ideal for development, this option runs the full Vexa stack locally with Docker Compose.
git clone https://github.com/Vexa-ai/vexa.git
cd vexa
make all # CPU by default (Whisper tiny) — good for development
# For GPU:
# make all TARGET=gpu # (Whisper medium) — recommended for production quality
A full guide is available in the project's docs/deployment.md.
Examples
Vexa's API allows you to easily send bots to meetings and retrieve transcripts.
Send a bot to Microsoft Teams
curl -X POST https://<API_HOST>/bots \
-H "Content-Type: application/json" \
-H "X-API-Key: <API_KEY>" \
-d '{
"platform": "teams",
"native_meeting_id": "<NUMERIC_MEETING_ID>",
"passcode": "<MEETING_PASSCODE>"
}'
Send a bot to Google Meet
curl -X POST https://<API_HOST>/bots \
-H "Content-Type: application/json" \
-H "X-API-Key: <API_KEY>" \
-d '{
"platform": "google_meet",
"native_meeting_id": "<MEET_CODE_XXX-XXXX-XXX>"
}'
Get transcripts over REST
curl -H "X-API-Key: <API_KEY>" \
"https://<API_HOST>/transcripts/<platform>/<native_meeting_id>"
For real-time streaming with sub-second delivery, a WebSocket API is also available.
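For a sense of what consuming that stream could look like, here is a hypothetical client sketch using the websockets library. The URL path, the api_key query parameter, and the JSON message shape are illustrative assumptions rather than Vexa's documented protocol; see the official docs for the real contract.
# Hypothetical WebSocket consumer; path, auth parameter, and message fields
# are assumptions for illustration only.
import asyncio
import json

import websockets

WS_URL = "wss://<API_HOST>/ws/transcripts/google_meet/xxx-xxxx-xxx?api_key=<API_KEY>"

async def stream_transcript():
    async with websockets.connect(WS_URL) as ws:
        async for raw in ws:
            event = json.loads(raw)
            print(event.get("speaker"), event.get("text"))

if __name__ == "__main__":
    asyncio.run(stream_transcript())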
Why Use Vexa?
- Data Sovereignty: The entire stack (platform, database, and transcription services) can be self-hosted on your own infrastructure, which is crucial for regulated industries and organizations with strict privacy requirements.
- Real-time Transcription & Translation: Leverage real-time multilingual transcription supporting 100 languages with Whisper, along with instant translation across all supported languages.
- Flexible Deployment: Choose from fully hosted, GPU-free self-hosting (using external transcription services), or full self-hosting to perfectly match your privacy and infrastructure needs.
- Build Custom Solutions: The Vexa API provides powerful abstractions that let developers rapidly build meeting assistants comparable to Otter.ai or Fireflies.ai, along with custom integrations and other applications.
- Multi-Platform Support: Seamlessly integrate with Google Meet and Microsoft Teams, with Zoom support coming soon.
Links
- GitHub Repository: https://github.com/Vexa-ai/vexa
- Official Website: https://vexa.ai
- Discord Community: https://discord.gg/Ga9duGkVz9
- LinkedIn: https://www.linkedin.com/company/vexa-ai/