§ Download · v3.0.0 · MIT

Three platforms.
No account.

Native installers for Windows, macOS, and Linux, or build from source with pnpm. No sign-up wall. No credit card on file. Run it entirely on Ollama: zero cloud, full feature parity, fully offline. The license is MIT, the code is on GitHub, and your data stays on your machine.

§01 Native installers

Pick your platform.

Windows · Team-X-Setup.exe · x64 · arm64 · 84 MB
macOS · Team-X.dmg · Intel · Apple Silicon · 92 MB
Linux · Team-X.AppImage · x64 · .deb available · 78 MB

Checksums and signatures are published with every GitHub release. Verify before you install on a regulated machine.
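
A minimal verification pass, assuming the release attaches a SHA256SUMS manifest (the exact asset name may differ; check the release page):

# Fetch the manifest from the latest release (asset name assumed)
curl -fsSLO https://github.com/git-rocky-stack/team-x/releases/latest/download/SHA256SUMS

# Compute the installer's digest and compare it to the manifest entry
sha256sum Team-X.AppImage     # Linux
shasum -a 256 Team-X.dmg      # macOS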

§02 Local model setup (Ollama)

Run the entire org locally.
Zero cloud.

Install Ollama, pull a model that fits your hardware, and start the daemon. Team-X auto-detects the local instance the next time you open the app and uses it for any role configured for the Local privacy tier.

# macOS / Linux: install Ollama
curl -fsSL https://ollama.com/install.sh | sh

# Windows: download from ollama.com/download

# Pull a model. Pick one:
ollama pull llama3.1:8b     # 4.7 GB · solid generalist
ollama pull qwen2.5:14b     # 9 GB · stronger reasoning
ollama pull phi3:14b        # 7.9 GB · fast, light

# Start the daemon
ollama serve
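
Before relaunching Team-X, you can confirm the daemon is reachable yourself; Ollama's API listens on localhost:11434 by default:

# List the models the running daemon can serve
curl http://localhost:11434/api/tags
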
§03 From source

Build from a clean checkout.

Node 20+ and pnpm 9+ are required. The clone is ~80 MB; install pulls electron-rebuild and the native deps; the first build takes ~3-4 minutes; HMR is instant after that.
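
A quick toolchain check before you clone:

# Both must meet the floor above: Node 20+, pnpm 9+
node --version
pnpm --version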

git clone https://github.com/git-rocky-stack/team-x.git
cd team-x
pnpm install
pnpm dev

# Run the unit suite (2,150 tests)
pnpm test

# Run the Playwright E2E suite
pnpm -F @team-x/desktop test:e2e

# Build a platform installer
pnpm dist:win        # NSIS, x64 + arm64
pnpm dist:mac        # DMG, Intel + Apple Silicon
pnpm dist:linux      # AppImage + .deb

See CONTRIBUTING.md for the full development guide, the role-pack contribution flow, and the PR rubric.

§04 Cloud providers (optional)

Wire any of ten providers.
Or none.

Team-X supports Anthropic, OpenAI, Google, Groq, OpenRouter, Together, Fireworks, Ollama (local), LM Studio (local), and any OpenAI-compatible endpoint. Add a provider in Settings → Providers: paste the API key, click Test, toggle on. Keys live in the OS keychain via keytar; they never enter a config file.
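
To sanity-check an OpenAI-compatible endpoint before wiring it in, a curl against its /v1/models route is enough. A sketch assuming LM Studio's default port (1234) and a placeholder key, since many local servers accept any bearer token:

# List the models the endpoint exposes ($API_KEY is a placeholder)
curl -s http://localhost:1234/v1/models \
  -H "Authorization: Bearer $API_KEY"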

Privacy tier filtering is enforced at the router. Set a role to the Local tier and it cannot make a proprietary cloud call, regardless of which model the agent's spec prefers. Read more in the privacy posture.

§05 Get build notes while you wait