# Configuring providers
Team-X supports ten LLM providers across three privacy tiers. This guide covers adding providers, setting the privacy tier the runtime is allowed to use, choosing a runtime strategy, and capping per-provider concurrency.
## Supported providers
| Provider | Privacy tier | Notes |
|---|---|---|
| Ollama | Local | Runs on your machine. No data leaves your network. |
| Anthropic | Proprietary Cloud | Claude Opus, Sonnet, Haiku. |
| OpenAI | Proprietary Cloud | GPT-4o, GPT-4, GPT-3.5. |
| Google | Proprietary Cloud | Gemini models. |
| Groq | Open-Source Cloud | Fast inference for open models. |
| OpenRouter | Proprietary Cloud | Multi-model router. |
| Together | Open-Source Cloud | Open model hosting. |
| Fireworks | Open-Source Cloud | Fast open model inference. |
| OpenAI-Compatible | Varies | Any endpoint with the OpenAI-compatible API. |
## Add a provider
1. Open Settings > Providers.
2. Click Add Provider.
3. Pick the provider type from the dropdown.
4. Enter your API key.
5. Optionally set:
   - Privacy tier (Local, Open-Source Cloud, Proprietary Cloud).
   - Base URL (for OpenAI-compatible endpoints).
6. Click Add.
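The fields above can be thought of as a small record per provider. The sketch below is purely illustrative — the class, field names, and tier strings are assumptions based on the tables in this guide, not Team-X's actual schema:

```python
from dataclasses import dataclass
from typing import Optional

# Tier names taken from the privacy tiers table in this guide.
TIERS = ("Local", "Open-Source Cloud", "Proprietary Cloud")

@dataclass
class ProviderConfig:
    """Hypothetical shape of a provider entry; Team-X's real schema may differ."""
    name: str
    tier: str
    base_url: Optional[str] = None  # only used for OpenAI-compatible endpoints

    def __post_init__(self) -> None:
        # Reject tiers outside the three documented values.
        if self.tier not in TIERS:
            raise ValueError(f"unknown privacy tier: {self.tier!r}")

ollama = ProviderConfig(name="Ollama", tier="Local")
custom = ProviderConfig(
    name="my-endpoint",
    tier="Proprietary Cloud",
    base_url="https://llm.example.com/v1",  # placeholder URL
)
```

Note that the API key is deliberately absent from this record: as described below, keys live in the OS keychain, not in provider configuration.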
Your API key is stored in the OS keychain (macOS Keychain, Windows Credential Manager, Linux Secret Service). It never touches a config file or a database column.
## Test a connection
After adding a provider, click Test Connection on the provider card. Team-X sends a minimal request to verify the API key and endpoint are valid. If the test passes, the card lights green and the provider is eligible to run agents.
## Enable, disable, and remove
Toggle the switch on any provider card to enable or disable it. Disabled providers are never used by the runtime, even if they are the only option for a role’s preferred provider list. Click Remove to delete the configuration and remove the API key from the keychain.
## Privacy tiers
Privacy tiers control which providers your agents are allowed to use.
| Tier | Data location | Example providers |
|---|---|---|
| Local | Your machine only | Ollama |
| Open-Source Cloud | Third-party servers, open models | Groq, Together, Fireworks |
| Proprietary Cloud | Third-party servers, proprietary models | Anthropic, OpenAI, Google |
In Settings > Privacy, set the maximum allowed tier:
- Local only: agents can only use Ollama. No data leaves your machine.
- Open-Source Cloud: agents can use local or open-source cloud providers.
- Proprietary Cloud: agents can use any provider (default).
The provider router enforces this filter at call time. If a role requests a proprietary provider but your privacy maximum is Local only, the router falls back per the role's `fallback_providers` list.
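The routing rule can be sketched as a simple filter over an ordered candidate list. The helper below is a hypothetical illustration of that rule — the function name and tier ranking are assumptions, not Team-X's implementation:

```python
# Tier ordering used by the privacy filter; names from the tiers table above.
TIER_RANK = {"Local": 0, "Open-Source Cloud": 1, "Proprietary Cloud": 2}

def pick_provider(preferred, fallbacks, tiers, max_tier):
    """Return the first candidate whose tier does not exceed max_tier.

    `preferred` and `fallbacks` are provider names in priority order;
    `tiers` maps provider name -> privacy tier. Illustrative only.
    """
    limit = TIER_RANK[max_tier]
    for name in [*preferred, *fallbacks]:
        if TIER_RANK[tiers[name]] <= limit:
            return name
    return None  # nothing eligible: the call cannot be routed

tiers = {
    "Anthropic": "Proprietary Cloud",
    "Groq": "Open-Source Cloud",
    "Ollama": "Local",
}
```

With the privacy maximum set to Local, a role that prefers Anthropic falls through its fallbacks to Ollama: `pick_provider(["Anthropic"], ["Groq", "Ollama"], tiers, "Local")` returns `"Ollama"`.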
## Runtime strategy
The runtime strategy determines how Team-X balances model quality, speed, and resource usage.
| Strategy | Behavior |
|---|---|
| Auto (default) | Profiles your hardware and providers on startup, picks a strategy automatically. |
| Hybrid | Local models for simple tasks, cloud models for complex ones. |
| Always-On | Sends everything to the highest-quality available provider. |
| Lean | Minimizes resource usage; prefers local models and fewer concurrent agents. |
Configure in Settings > Runtime:
- View your hardware profile (CPU, RAM, GPU, detected at startup).
- Pick a strategy from the dropdown, or leave it on Auto.
- The effective slot count shows how many agents can run concurrently under the selected strategy.
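To make the strategy trade-offs concrete, here is a toy heuristic for an Auto-style choice. Team-X's real profiler is internal; the thresholds and inputs below are invented for illustration only:

```python
def choose_strategy(ram_gb: int, has_gpu: bool, has_cloud_provider: bool) -> str:
    """Illustrative Auto-style heuristic; not Team-X's actual profiler."""
    if ram_gb < 8:
        return "Lean"       # constrained hardware: minimize resource usage
    if has_gpu and has_cloud_provider:
        return "Hybrid"     # local models for simple tasks, cloud for complex
    if has_cloud_provider:
        return "Always-On"  # no useful local inference: route everything to cloud
    return "Lean"           # local-only setup on modest hardware
```

The real profiler presumably weighs more signals than this, but the shape is the same: hardware and configured providers in, one of the documented strategies out.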
## Concurrency caps
In Settings > Concurrency:
- Global orchestrator slots: maximum total concurrent agent runs.
- Per-provider caps: limit how many concurrent calls go to each provider.
Default per-provider caps:
| Provider | Default cap |
|---|---|
| Ollama | 1 |
| Anthropic | 4 |
| OpenAI | 6 |
| Google | 4 |
| Groq | 10 |
| OpenRouter | 8 |
| Together | 6 |
| Fireworks | 6 |
These defaults prevent overwhelming local hardware (Ollama) and respect API rate limits (cloud providers). Adjust based on your plan tier and hardware.
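A per-provider cap is straightforward to picture as a semaphore around each provider's calls. The sketch below is an assumption about the mechanism, not Team-X's code; it tracks peak concurrency so you can see the cap holding:

```python
import asyncio

# A subset of the default caps from the table above.
CAPS = {"Ollama": 1, "Anthropic": 4}

async def run_with_cap(provider: str, jobs: list) -> int:
    """Run jobs against one provider without exceeding its cap.

    Returns the peak number of simultaneous calls observed.
    """
    sem = asyncio.Semaphore(CAPS[provider])
    active = peak = 0

    async def call(job):
        nonlocal active, peak
        async with sem:  # blocks once CAPS[provider] calls are in flight
            active += 1
            peak = max(peak, active)
            await asyncio.sleep(0)  # stand-in for the real provider API call
            active -= 1

    await asyncio.gather(*(call(j) for j in jobs))
    return peak
```

Running ten jobs against the Ollama cap of 1 yields a peak of exactly one in-flight call, while the Anthropic cap of 4 allows up to four at once.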
## See also
- Privacy posture for the full local-first story.
- Getting started for the recommended Ollama setup.