| Method | Endpoint | Description |
|---|---|---|
| GET | /routing/models | List all 14 models with capabilities, costs, speed, and coding/reasoning scores |
| GET | /routing/stats | Per-provider request counts, latency (avg/p95), cost, and failure rates |
| GET | /routing/config | Current routing strategy (auto/cheapest/fastest/best_quality/local_only) |
| POST | /routing/config?strategy=X | Change the routing strategy |
| GET | /routing/explain?prompt=X | Dry run: shows which model would be selected and why |
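A minimal sketch of driving these endpoints from Python. The base URL (`localhost:8000`) is an assumption, not part of this reference; only the paths and query parameters come from the table above.

```python
# Hypothetical client helpers for the routing endpoints. BASE_URL is an
# assumption; the paths and query parameters follow the table above.
import json
import urllib.parse
import urllib.request

BASE_URL = "http://localhost:8000"  # assumed gateway address

STRATEGIES = {"auto", "cheapest", "fastest", "best_quality", "local_only"}

def explain_url(prompt: str, base: str = BASE_URL) -> str:
    """Build the dry-run URL that shows which model would be selected."""
    return f"{base}/routing/explain?" + urllib.parse.urlencode({"prompt": prompt})

def set_strategy_url(strategy: str, base: str = BASE_URL) -> str:
    """Build the POST URL that switches the routing strategy."""
    if strategy not in STRATEGIES:
        raise ValueError(f"unknown strategy: {strategy!r}")
    return f"{base}/routing/config?" + urllib.parse.urlencode({"strategy": strategy})

if __name__ == "__main__":
    # Dry run: no completion is generated, only the routing decision.
    with urllib.request.urlopen(explain_url("Refactor this Go function")) as resp:
        print(json.load(resp))
```

Because `/routing/explain` is a dry run, it is a cheap way to sanity-check a strategy change before sending real traffic through the proxy.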
| Method | Endpoint | Description |
|---|---|---|
| GET | /privacy | Get current privacy mode, cloud_llm_blocked, and allowed_providers |
| POST | /privacy?mode=X | Set mode: full_privacy, balanced, or permissive. Returns 400 on an invalid mode. |
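A short sketch of setting the privacy mode, again assuming the gateway runs on `localhost:8000`. Validating the mode client-side mirrors the server's 400 on invalid values.

```python
# Hypothetical helper for POST /privacy. The base URL is an assumption;
# the three valid modes come from the table above.
import urllib.parse
import urllib.request

VALID_MODES = {"full_privacy", "balanced", "permissive"}

def privacy_url(mode: str, base: str = "http://localhost:8000") -> str:
    """Build the POST /privacy URL; raises before the server would return 400."""
    if mode not in VALID_MODES:
        raise ValueError(f"invalid privacy mode: {mode!r}")
    return f"{base}/privacy?" + urllib.parse.urlencode({"mode": mode})

if __name__ == "__main__":
    req = urllib.request.Request(privacy_url("balanced"), method="POST")
    urllib.request.urlopen(req)  # switch the gateway to balanced mode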
| Method | Endpoint | Description |
|---|---|---|
| GET | /shadow-ai | Scan for AI tools; returns detected tools, split into approved vs. unauthorized |
| Method | Endpoint | Description |
|---|---|---|
| GET | /graph/stats | Node, edge, and event counts, plus nodes by type |
| Method | Endpoint | Description |
|---|---|---|
| GET | /block-rules | List all block rules |
| POST | /block-rules | Add a new block rule (keyword or regex) |
| DELETE | /block-rules/{id} | Remove a block rule |
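A hedged sketch of adding a block rule. The JSON field names (`pattern`, `type`) are assumptions; the table only says a rule is a keyword or a regex, so check the actual schema before relying on them.

```python
# Hypothetical POST /block-rules client. The body field names "pattern"
# and "type" are assumptions, not documented above; base URL is assumed too.
import json
import urllib.request

def block_rule_request(pattern: str, is_regex: bool = False,
                       base: str = "http://localhost:8000") -> urllib.request.Request:
    """Build a POST /block-rules request for a keyword or regex rule."""
    body = {"pattern": pattern, "type": "regex" if is_regex else "keyword"}
    return urllib.request.Request(
        f"{base}/block-rules",
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

if __name__ == "__main__":
    # Block anything that looks like an API key, case-insensitively.
    urllib.request.urlopen(block_rule_request(r"(?i)api[_-]?key", is_regex=True))
```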
| Method | Endpoint | Description |
|---|---|---|
| GET | /config/taxonomy | Get the active breach taxonomy |
| GET | /config/taxonomy/template | Get the default template for customization |
| PUT | /config/taxonomy | Update the taxonomy (severity levels, breach types, detection hints) |
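The usual flow here is fetch-template, edit, PUT. A sketch under the same assumptions as above (base URL, JSON body); the shape of the taxonomy document itself is not specified in this reference.

```python
# Hypothetical fetch-template / edit / PUT cycle for the breach taxonomy.
# localhost:8000 and the taxonomy document's shape are assumptions.
import json
import urllib.request

BASE = "http://localhost:8000"

def taxonomy_put_request(taxonomy: dict, base: str = BASE) -> urllib.request.Request:
    """Build the PUT /config/taxonomy request from an edited taxonomy dict."""
    return urllib.request.Request(
        f"{base}/config/taxonomy",
        data=json.dumps(taxonomy).encode(),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )

if __name__ == "__main__":
    # Start from the default template, tweak it locally, then upload.
    with urllib.request.urlopen(f"{BASE}/config/taxonomy/template") as resp:
        template = json.load(resp)
    urllib.request.urlopen(taxonomy_put_request(template))
```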
| Method | Endpoint | Description |
|---|---|---|
| POST | /v1/chat/completions | Secure chat-completions proxy (OpenAI-compatible). Use model: "auto" for smart routing. |
| GET | /health | Liveness probe |
| GET | /v1/audit | Query audit log entries |
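A sketch of calling the proxy with `model: "auto"` so the router picks a model. The payload follows the standard OpenAI chat-completions shape; the base URL is an assumption.

```python
# Call the OpenAI-compatible proxy with model "auto". Base URL is assumed;
# the payload shape is the standard chat-completions format.
import json
import urllib.request

def chat_request(prompt: str, base: str = "http://localhost:8000") -> urllib.request.Request:
    """Build a POST /v1/chat/completions request routed via model "auto"."""
    payload = {
        "model": "auto",  # lets the smart router pick per the routing strategy
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

if __name__ == "__main__":
    with urllib.request.urlopen(chat_request("Summarize today's audit log")) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the proxy is OpenAI-compatible, existing OpenAI SDK clients should also work by pointing their base URL at the gateway and setting the model to `"auto"`.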
| Service | Port | Start Command |
|---|---|---|
| API Gateway + Dashboard | 8000 | `uvicorn main:app --port 8000` |
| LLM Proxy | 18790 | `python3 llm/llm_proxy.py --port 18790` |
| Breach Engine | 8081 | `uvicorn breach_intel.main:app --port 8081` |