API reference
Team-X's extension surface is Electron IPC, not a hosted REST API. Typed channels in @team-x/shared-types, a one-way events.dashboard push stream, and two declarative extension entry points (MCP servers and Skills). Local-first, no Bearer auth, no api.team-x.app.
Team-X exposes no hosted API. There is no api.team-x.app. There are no
Bearer tokens issued by Team-X. There is no outbound webhook receiver.
Everything that talks to Team-X talks to a local Electron process over IPC, or registers as an MCP server or Skill and is invoked by the agent runtime from inside the desktop app. This page documents the IPC channels, the event stream, the type contract, error semantics, and the two declarative extension entry points. For higher-level integration patterns (providers, workspace export, audit-log streaming), see the Integration guide.
Protocol
Team-X uses Electron IPC for its entire surface:
- Request / response: ipcRenderer.invoke(channel, args) in the renderer is paired with ipcMain.handle(channel, handler) in the main process. Every channel is typed in @team-x/shared-types/src/ipc.ts.
- Event stream: a one-way push from the main process to every open renderer window via webContents.send('events.dashboard', event). The event taxonomy is the DashboardEvent discriminated union in the same types package.
There is no third protocol. No HTTP server inside Team-X. No gRPC. No persistent message bus. No hosted webhook receiver. The entire surface lives in one Electron process.
Channel families
The IPC surface spans 41 channel families. Each follows the
<family>.<verb> naming convention so the type contract and the on-disk
handler files line up one-to-one. The full schema with every request and
response type is in @team-x/shared-types/src/ipc.ts (538 lines as of
v3.0). The map below groups the families by domain.
Workspaces and access
| Family | Purpose |
|---|---|
| companies | Workspace CRUD, archive, export and import as .tx-pack |
| companies.listTemplates / installTemplate | Pre-built workspace templates |
| operators | Human supervisor enumeration, readiness, invite flow |
| cloud | Optional Strategia-X cloud workspace linkage |
Hires, hierarchy, authority
| Family | Purpose |
|---|---|
| employees | Hire, fire, update, promote, set manager |
| orgchart | Rendered org chart projection |
| authority | Capability and path grants, request review, effective authority |
Work intake
| Family | Purpose |
|---|---|
| tickets | Create, assign, comment, attach files, close, reopen |
| goals | Long-horizon goals |
| projects | Mid-horizon projects with leads and target dates |
| routines | Recurring work definitions and run history |
| schedule | Calendar items, completion tracking |
Execution
| Family | Purpose |
|---|---|
| chat | Threaded messaging with employees, stop in-flight runs |
| meetings | Multi-employee meetings: round-robin, chair-directed, freeform |
| command | Cmd+K palette: parse, execute, history, suggest, stop |
| proactive | Proactive execution: goal decomposition, work scanning |
Governance
| Family | Purpose |
|---|---|
| budgets | Policies, ledger entries, overview, approval items |
| approvals | Unified approval inbox |
| autonomyDoctor / autonomyBenchmark / agentImprovement | Autonomy diagnostics |
| runtimeProfiles / runtimeOperations | Runtime configuration and snapshots |
State and memory
| Family | Purpose |
|---|---|
| events | Timeline event query |
| memory | Thread digests, run checkpoints, packed thread context |
| artifacts | Work-product listing |
| vault | File storage: upload, download, search, verify, stats |
| rag | Retrieval-augmented generation stats, rebuild, delete |
Extensions
| Family | Purpose |
|---|---|
| mcp | Model Context Protocol server registration and lifecycle |
| extensions | Skill installation (local folder or GitHub) and assignment |
| providers | AI provider adapters: add, update, test, list models |
Observability and lifecycle
| Family | Purpose |
|---|---|
| telemetry | Company stats, daily usage, employee stats, cost breakdown |
| audit | Audit log query, stats, export (CSV / JSON) |
| copilot | Copilot insights: list, dismiss, ask, configure, export |
| backup | Local backup create, restore, list |
| system | Directory picker, system queries |
| settings | Runtime, privacy, concurrency, RAG, agentic, planner, copilot |
| updater | Update check and install |
Every family expands into 3 to 12 verbs. The combined surface is roughly 220 typed channels. The source of truth is the type contract; treat any inline list (including this one) as a navigational aid, not a spec.
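As an illustration of how the <family>.<verb> convention lets the contract and handlers line up, here is a sketch of a typed channel map plus a tiny helper that splits a channel name into its two halves. The IpcChannelMap entries shown are illustrative assumptions; only the real map in @team-x/shared-types/src/ipc.ts is authoritative.

```typescript
// Sketch only: the authoritative map lives in @team-x/shared-types/src/ipc.ts.
// The request/response shapes below are assumptions for illustration.
interface IpcChannelMap {
  'companies.list': { req: void; res: Array<{ companyId: string; name: string }> };
  'companies.create': { req: { name: string; theme?: string }; res: { companyId: string } };
}

// Split a '<family>.<verb>' channel name at the first dot, so e.g.
// 'companies.listTemplates' maps to the companies handler file.
function channelFamily(channel: string): { family: string; verb: string } {
  const dot = channel.indexOf('.');
  if (dot === -1) throw new Error(`not a <family>.<verb> channel: ${channel}`);
  return { family: channel.slice(0, dot), verb: channel.slice(dot + 1) };
}
```

A map like this is what lets a generic invoke wrapper constrain both the arguments and the return type from the channel name alone.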
Type contract
The single source of truth for every request and response shape is
@team-x/shared-types. The package ships request and response interfaces
keyed to channel name, plus the entity types they reference (Company,
Employee, Ticket, Meeting, BudgetPolicy, AuditEvent, every
event variant, and so on).
If you are writing a Skill or MCP server that calls back into Team-X (the host-callback pattern), depend on the published types so the contract is checked at compile time:
// my-skill/package.json
{
"dependencies": {
"@team-x/shared-types": "^3.0.0"
}
}
import type {
CompaniesCreateRequest,
Employee,
DashboardEvent,
} from '@team-x/shared-types';
const args: CompaniesCreateRequest = {
name: 'Acme Corp',
theme: 'dark',
};
Calling the surface
From the renderer (React)
The renderer reaches the main process through the preload bridge. The
canonical wrapper is apps/desktop/src/renderer/lib/ipc.ts:
import { invoke } from '@/lib/ipc';
const companies = await invoke('companies.list');
const newCompany = await invoke('companies.create', {
name: 'Acme Corp',
theme: 'dark',
});
Event subscriptions wire into the events.dashboard push stream:
import { useEffect } from 'react';
useEffect(() => {
const unsubscribe = window.events.onDashboard((event) => {
if (event.type === 'work.started') {
console.log('Agent started:', event.payload.employeeId);
}
});
return unsubscribe;
}, []);
From the preload bridge
You author this only when adding a new channel family. The existing
bridge lives in apps/desktop/src/preload/index.ts:
import { contextBridge, ipcRenderer } from 'electron';
import type { DashboardEvent } from '@team-x/shared-types';
contextBridge.exposeInMainWorld('invoke', (channel: string, args?: unknown) =>
  ipcRenderer.invoke(channel, args)
);
contextBridge.exposeInMainWorld('events', {
  onDashboard: (callback: (event: DashboardEvent) => void) => {
    const listener = (_evt: unknown, args: DashboardEvent) => callback(args);
    ipcRenderer.on('events.dashboard', listener);
    return () => ipcRenderer.removeListener('events.dashboard', listener);
  },
});
The contextBridge.exposeInMainWorld boundary keeps the renderer sandboxed; the renderer never gets a direct require() of Electron or Node APIs.
From the main process (handler authoring)
A new channel family lives in
apps/desktop/src/main/ipc/<family>.ts and is registered from the
top-level IPC bootstrap. Each handler is a thin adapter that validates
input, calls the repository or service, and returns a typed response:
import { ipcMain } from 'electron';
import type {
CompaniesCreateRequest,
Company,
} from '@team-x/shared-types';
import { companiesRepo } from '@/main/data/companies';
ipcMain.handle(
'companies.create',
async (_event, args: CompaniesCreateRequest): Promise<{
companyId: string;
agentEmployeeId: string;
copilotEmployeeId: string;
}> => {
validateCompaniesCreate(args);
return companiesRepo.create(args);
}
);
Event stream
The events.dashboard channel is the one-way push from main to renderer.
Every interesting state change in the orchestrator becomes an event. The
renderer is the only intended consumer; if you need to relay events to an
external system, see the
Audit log streaming
section of the Integration guide.
Each event is a tagged union discriminated on type, carrying a companyId, an actorId (who or what caused the event), and a type-specific payload. Selected variants:
| type | When fired | Payload |
|---|---|---|
| work.started | Agent picks up a thread | { threadId, employeeId } |
| work.progress | Streaming token delta | { threadId, delta } |
| work.completed | Run finishes cleanly | { threadId, employeeId, runId } |
| work.failed | Run errors out | { threadId, employeeId, error } |
| employee.created | New hire | full Employee |
| employee.fired | Termination | { employeeId } |
| employee.promoted | Promotion | full Employee |
| ticket.created | Ticket opened | full Ticket |
| ticket.assigned | Assigned to employee | { ticketId, assigneeId } |
| ticket.closed | Done or cancelled | { ticketId } |
| meeting.started | Meeting begins | { meetingId } |
| meeting.ended | Meeting ends | { meetingId, minutesMd } |
| copilot.insight | New insight ready | CopilotInsight |
| copilot.expired | Old insights aged out | { count } |
| agentic.failed-budget_exhausted | Step / token / wall budget hit | { runId, budgetPolicyId } |
| company.created / updated / deleted | Workspace lifecycle | Company or { companyId } |
The complete union lives in apps/desktop/src/main/orchestrator/event-bus.ts
alongside the bus implementation. Adding a new variant requires:
- Extending the DashboardEvent union in @team-x/shared-types.
- Calling bus.emit({ type: '...', ... }) from the producing service.
- Adding a handler clause in any renderer subscriber that cares.
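The emit/subscribe shape involved can be sketched minimally as follows. The real bus in event-bus.ts is more elaborate (it fans events out to every renderer via webContents.send), and the vault.indexed variant and its payload are hypothetical, added here only to illustrate extending the union.

```typescript
// Minimal event-bus sketch; the real implementation lives in
// apps/desktop/src/main/orchestrator/event-bus.ts.
type DashboardEventSketch =
  | { type: 'work.started'; companyId: string; actorId: string; payload: { threadId: string; employeeId: string } }
  // Hypothetical new variant, added per the three steps above:
  | { type: 'vault.indexed'; companyId: string; actorId: string; payload: { fileId: string } };

class EventBus {
  private listeners: Array<(e: DashboardEventSketch) => void> = [];

  // Subscribe; returns an unsubscribe function, mirroring the preload bridge.
  on(listener: (e: DashboardEventSketch) => void): () => void {
    this.listeners.push(listener);
    return () => { this.listeners = this.listeners.filter((l) => l !== listener); };
  }

  // Fan the event out to every current subscriber.
  emit(event: DashboardEventSketch): void {
    for (const listener of this.listeners) listener(event);
  }
}
```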
The bus deliberately has no buffering or replay. Renderer subscribers
that connect after an event fires do not see the event. For history,
query the events channel family (events.list) which is backed by the
durable event log in SQLite.
Error handling
Every IPC handler can throw. The error’s message field propagates back
to the renderer as a rejected promise. Common error types defined in
@team-x/shared-types/src/errors.ts:
| Type | Semantics |
|---|---|
| ValidationError | Request payload failed schema validation |
| NotFoundError | Resource does not exist (companyId, employeeId, ticketId, …) |
| ConflictError | Duplicate resource or state collision |
| AuthorizationError | Caller lacks the required capability or grant |
| ProviderError | Upstream AI provider failed (rate limit, auth, network) |
| BudgetError | Step / token / wall budget exhausted |
Renderer code catches errors and surfaces them; there is no global error boundary on the IPC side:
import { invoke } from '@/lib/ipc';
import { showToast } from '@/lib/toast';
try {
await invoke('companies.create', { name: 'Acme Corp' });
} catch (error) {
// error.message comes from the main process
showToast({ type: 'error', message: error.message });
}
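Since only the message string survives the IPC boundary (Electron typically wraps rejections as "Error invoking remote method '<channel>': <Name>: <message>"), renderer code that wants to branch on the error type has to parse the name back out. A hedged sketch — the wrapping format is Electron's observed behavior, not a contract this page guarantees:

```typescript
// Recover the thrown error type from an ipcRenderer.invoke rejection message,
// e.g. "Error invoking remote method 'companies.create': ValidationError: name required".
// Falls back to 'UnknownError' when no recognizable name is present.
const KNOWN_ERRORS = new Set([
  'ValidationError', 'NotFoundError', 'ConflictError',
  'AuthorizationError', 'ProviderError', 'BudgetError',
]);

function classifyIpcError(message: string): { name: string; detail: string } {
  const m = message.match(/(\w+Error):\s*(.*)$/);
  if (m && KNOWN_ERRORS.has(m[1])) return { name: m[1], detail: m[2] };
  return { name: 'UnknownError', detail: message };
}
```

Matching against the closed set of known names avoids false positives from the wrapper's own leading "Error invoking remote method" text.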
Extension entry points
Two surfaces extend Team-X without modifying the codebase. Both are local-only; both run inside the desktop process.
MCP servers
Standard Model Context Protocol servers. Each server exposes tools, resources, and prompts that become available to the agent runtime.
Two transports are supported:
- stdio: spawn as a child process, communicate over stdin / stdout. The most common transport.
- SSE (Server-Sent Events): connect to a remote HTTP server.
Register via the mcp.addServer channel:
await invoke('mcp.addServer', {
companyId: 'cmp_123',
name: 'GitHub MCP',
transport: 'stdio',
configJson: JSON.stringify({
command: 'node',
args: ['/path/to/github-mcp-server/dist/index.js'],
env: { GITHUB_TOKEN: '${env:GITHUB_TOKEN}' },
}),
});
The configJson field is opaque to Team-X; it is forwarded verbatim to
the MCP transport layer. Variable interpolation against process.env
is the responsibility of the transport.
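For illustration, a transport-side interpolator for the ${env:NAME} syntax might look like the following. This is a sketch under the assumption that the transport uses exactly this placeholder syntax; the real implementation is not part of the contract described here.

```typescript
// Replace ${env:NAME} placeholders with values from the supplied environment.
// Unset variables collapse to an empty string here; a real transport might
// instead throw or log a warning.
function interpolateEnv(value: string, env: Record<string, string | undefined>): string {
  return value.replace(/\$\{env:([A-Za-z_][A-Za-z0-9_]*)\}/g, (_match, name: string) => env[name] ?? '');
}
```

In a transport, this would be called with process.env against each string in the parsed configJson.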
Once enabled (mcp.toggle), the server’s tools are advertised to the
agent runtime alongside Team-X’s built-in tools. The agent decides when
to call them based on its plan and the tool descriptions in the MCP
manifest.
Pre-built templates (mcp.listTemplates and mcp.installTemplate) let
operators install common servers without hand-authoring config JSON.
Skills extensions
Skills extend an employee’s capabilities at a higher level than
individual tools. A Skill is a folder containing a SKILL.md (the
instruction document) and any supporting scripts, prompts, or data
files. The agent runtime treats a Skill as a labeled capability that can
be invoked by name.
Two installation paths:
// Local folder (point at a folder on disk containing SKILL.md)
await invoke('extensions.installLocalSkill', {
companyId: 'cmp_123',
folderPath: '/Users/rocky/skills/my-skill',
});
// GitHub source (any public repo following the Skills schema)
await invoke('extensions.installGithubSkill', {
companyId: 'cmp_123',
sourceUrl: 'https://github.com/your-org/your-skill',
});
Assignment to a specific employee, or company-wide, is via
extensions.upsertSkillAssignment:
await invoke('extensions.upsertSkillAssignment', {
companyId: 'cmp_123',
extensionId: 'ext_456',
employeeId: null, // null = available to every employee
enabled: true,
});
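Where both a company-wide row (employeeId: null) and a per-employee row exist for the same extension, some precedence rule has to decide which wins. The resolution order is not documented on this page; a plausible sketch, giving the per-employee row priority, might look like this:

```typescript
// NOTE: the per-employee-overrides-company-wide precedence below is an
// assumption for illustration, not documented Team-X behavior.
interface SkillAssignment {
  extensionId: string;
  employeeId: string | null; // null = company-wide
  enabled: boolean;
}

function isSkillEnabled(
  assignments: SkillAssignment[],
  extensionId: string,
  employeeId: string
): boolean {
  const rows = assignments.filter((a) => a.extensionId === extensionId);
  const specific = rows.find((a) => a.employeeId === employeeId);
  if (specific) return specific.enabled; // per-employee row wins
  const companyWide = rows.find((a) => a.employeeId === null);
  return companyWide ? companyWide.enabled : false; // default: not enabled
}
```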
GitHub-source Skills are pinned by commit on install. Updating to a newer commit requires reinstalling.
What is intentionally not here
- No hosted REST API. There is no api.team-x.app. Team-X is local-first by design.
- No Bearer tokens or API keys for Team-X itself. AI provider keys (Anthropic, OpenAI, etc.) live in your OS keychain via keytar. Team-X never authenticates against itself; it is a single-process app.
- No outbound webhooks to external SaaS. The events.dashboard stream is renderer-only. To relay events outside the process, use the audit log export, or write an MCP server that polls events.list.
- No published SDKs in Python, JS, Go, or Rust. Only the TypeScript type contract in @team-x/shared-types is shipped. Other-language callers should round-trip through an MCP server.
- No plugin system for the React renderer. Custom panels, custom themes, and custom React components are out of scope. Skills and MCP servers are the supported extension surface.
If a hosted surface or non-Electron transport is ever added, this page will be the source of truth for what it covers. Until then, treat any documentation claiming otherwise as drift.
See also
- Integration guide: provider configuration, workspace portability, audit-log streaming, the patterns layer on top of this surface.
- Configuring providers: step-by-step for adding Anthropic, OpenAI, Ollama, and OpenAI-compatible endpoints.
- Command palette: the command.* channel family in user terms.
- Agentic loop: how the runtime decides when to invoke MCP tools versus Skills versus built-in tools.