MCP
See Overview for how MCP fits with the other integration surfaces.
Remote MCP URL
Most MCP clients only need the hosted Colabra MCP URL:
https://api.colabra.ai/mcp
What MCP gives access to
The MCP surface exposes the same working system teams use for M&A due diligence inside Colabra.
| Capability area | What the MCP client can access or do | Typical use |
|---|---|---|
| Workspace | Workspaces, diligence settings, AI settings, workflow settings, custom properties, prompts, and templates | Load the operating model before doing any diligence work |
| Projects | List projects, fetch a project, read the project overview, and inspect project-level diligence overrides | Understand the deal, status, and scope |
| Tasks | List tasks and fetch individual tasks | Review the active work queue and who owns what |
| Files | List files, search files, fetch metadata, read file text, inspect structured output, check file status, upload files, and list entities found in a file | Search evidence, read source material, and add new files from the client |
| Entities and risk | List/search entities, fetch entity overviews, read risk domain outputs, inspect linked files, and fetch relationship graphs | Investigate org structure, sanctions, litigation, PEP, IP, licences, and related risk |
| Findings | List findings, fetch individual findings, and read the QoE view | Review red flags, QoE output, and downstream diligence analysis |
| Requests and comments | List requests, create requests, resolve requests, list comments, create comments, and reply in threads | Run follow-up with counterparties and record reviewer judgement |
| Reports | List reports, fetch reports, create reports, update reports, and generate reports from built-in templates | Draft or update written diligence output without leaving the client |
That makes MCP the right surface for interactive AI workflows where the agent should reason over live Colabra context and write back into the same working system.
The key difference from generic chat is that the model is not reasoning over pasted snippets or detached exports. It can inspect live files, entities, findings, requests, and reports with user-approved access, then write its work back into the same project.
End-to-end M&A due diligence workflows
The market thesis plus diligence draft
Real deal example: Claude Cowork for market context, Colabra for source-backed output
A deal team can use Claude Cowork to do live web research on the target, its competitors, and the market around it, while also pulling project overview, findings, entity-risk signals, and key file evidence from Colabra through MCP. The model can combine external market context with the actual diligence record, draft a report inside Colabra, and leave comments on the findings or tasks that need follow-up.
The financial workbook review loop
Real deal example: ChatGPT for Excel, Colabra for evidence and task follow-up
A team can use ChatGPT for Excel to analyze exported financial workbooks, inspect tabs and formulas, summarize variances, and test spreadsheet assumptions outside Colabra. Once that analysis produces something worth keeping, MCP lets the team upload the workbook or derived output back into project evidence and post comments on the relevant diligence tasks or requests. The spreadsheet work happens in Excel; the evidence trail, follow-up, and reporting stay inside Colabra.
The board deck or IC pack workflow
Real deal example: Microsoft Copilot for the presentation layer, Colabra for the diligence record
A team can use Microsoft Copilot to turn Colabra findings, requests, and draft reports into a PDF or PowerPoint-style presentation for an investment committee or board update, while also bringing in external market context or online research. MCP lets the agent pull the live diligence record from Colabra, synthesize the presentation in the external client, and then write the resulting report or narrative back into Colabra so the working record stays complete.
What a good MCP session looks like
The strongest MCP workflows keep the model anchored to the live deal state:
- load the project, settings, and working queue
- inspect source files, findings, or entities relevant to the question
- synthesize or draft against that evidence
- write the result back as a comment, request, or report draft when it is worth keeping
That pattern is better than asking for a free-floating summary because it preserves both context and output inside the project record.
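As a sketch, the loop above can be expressed as a tool-call sequence. The `ToolClient` stub and the tool names used here (`get_project`, `list_tasks`, `search_files`, `create_comment`) are illustrative stand-ins, not Colabra's actual MCP tools:

```python
class ToolClient:
    """Stand-in for an MCP client session; records each tool call."""

    def __init__(self):
        self.calls = []

    def call(self, tool, **args):
        self.calls.append((tool, args))
        return {"tool": tool, "args": args}


def run_diligence_session(client, project_id, question):
    # 1. Load the project and the working queue.
    project = client.call("get_project", project_id=project_id)
    tasks = client.call("list_tasks", project_id=project_id)
    # 2. Inspect source evidence relevant to the question.
    files = client.call("search_files", project_id=project_id, query=question)
    # 3. Synthesize or draft against that evidence
    #    (the model's reasoning would happen here).
    draft = f"Findings for: {question}"
    # 4. Write the result back into the project record.
    client.call("create_comment", project_id=project_id, body=draft)
    return draft


client = ToolClient()
draft = run_diligence_session(client, "proj_123", "change-of-control clauses")
print([tool for tool, _ in client.calls])
# → ['get_project', 'list_tasks', 'search_files', 'create_comment']
```

The point of the shape, not the stub: every step either reads live project state or writes back into it, so nothing the model produces lives only in the chat transcript.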
Client guides
| Client | Best for |
|---|---|
| ChatGPT | General-purpose conversational use inside ChatGPT |
| Claude | Anthropic workflows using remote MCP with OAuth |
| Coding agents | Agentic coding workflows that need direct access to Colabra context |
| Microsoft Copilot | Microsoft-hosted copilots that support remote MCP connections |
| Gemini | Gemini clients that expose remote MCP setup |
Manual OAuth details
Only use these when the client asks for explicit OAuth endpoints rather than discovering them automatically.
Authorization server metadata
https://api.colabra.ai/.well-known/oauth-authorization-server
Protected resource metadata
https://api.colabra.ai/.well-known/oauth-protected-resource/mcp
Token endpoint
https://api.colabra.ai/mcp/oauth/token
When to use MCP vs. the REST API
Use MCP for interactive AI clients that should operate inside Colabra with user-approved access.
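For clients configured through a JSON file, a minimal entry pointing at the hosted endpoint might look like the following. The `mcpServers` shape is a common convention among desktop MCP clients rather than a universal standard, and the server name `colabra` is arbitrary; check your client's guide for its exact format:

```json
{
  "mcpServers": {
    "colabra": {
      "url": "https://api.colabra.ai/mcp"
    }
  }
}
```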
Use the API reference when you are building a conventional integration or automation against the /v1 REST endpoints.
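By way of contrast, a REST integration builds plain HTTP requests against /v1. The `/v1/projects` path and bearer-token header below are illustrative assumptions, not documented endpoints; consult the API reference for the real resource paths and authentication scheme:

```python
from urllib.request import Request

# Hypothetical REST call against /v1. The /v1 base path comes from this
# page; the /projects resource and bearer-token header are assumptions
# made for illustration only.
req = Request(
    "https://api.colabra.ai/v1/projects",
    headers={"Authorization": "Bearer <api-key>"},
)

# Built but deliberately not sent; pass `req` to urlopen() to execute it.
print(req.full_url)
```

Requests like this run headlessly on a schedule or inside a backend service, with no user in the loop — which is exactly the case where the REST API fits better than MCP.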