Creating Business Workflows with LLMs and MCP
Open Spaces is Gun.io’s field guide to the tools our engineers actually ship with. In this fourth and final installment, veteran full-stack architect Tim Kleier shows how to orchestrate multiple MCP tools into autonomous business workflows. This builds on Part 1: Model Context Protocol (MCP)—The Missing Layer Between AI and Your Apps, Part 2: Wrapping an Existing API with MCP, and Part 3: Building a Standalone MCP Server.
Is it possible to create entire business workflows with LLMs and MCP? Not only is it possible, but what we'll cover in this article only scratches the surface. It lays the foundation for autonomous agents that can operate key business functions.
In this post, we’ll walk through a customer service scenario powered by MCP tools alongside Zapier and Zendesk. You’ll see how tools can be sequenced into repeatable business flows.
From Tools to Tasks
In MCP, tools define how an LLM can interact with systems. (For a deeper dive into MCP fundamentals, see our introduction to Model Context Protocol.) But what if you want an LLM to coordinate multiple tools, in sequence, depending on the situation?
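To make "tools" concrete, here's a minimal sketch of what a tool definition looks like when a server advertises it to an LLM client. The name and schema below are illustrative, modeled on the Zendesk tools we use later; they are not Zapier's actual definitions.

```python
# A sketch of an MCP tool definition as a server might expose it in a
# tools/list response. The inputSchema field is standard JSON Schema,
# per the MCP spec; the specific fields here are assumptions.
zendesk_find_ticket = {
    "name": "zendesk_find_ticket",
    "description": "Look up a Zendesk ticket by ID and return its "
                   "subject, body, tags, and status.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "ticket_id": {
                "type": "string",
                "description": "The Zendesk ticket ID to fetch",
            },
        },
        "required": ["ticket_id"],
    },
}
```

The LLM never sees your implementation, only this contract: the name, the description (which is how it decides when to call the tool), and the input schema it must satisfy.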
Let’s say a customer creates a ticket with this text:
“I upgraded to the Pro plan yesterday, but I still can’t access any of the premium features—like advanced analytics or team sharing. Can you help me out?”
Here’s how an LLM could complete the entire workflow using MCP:
- Read the Ticket – The LLM first reads the ticket, calling zendesk_find_ticket.
- Tag the Ticket – The model reads the complaint and adds tags (using zendesk_tag_ticket) like “pro_features”.
- Consult the Knowledge Base – The model searches the knowledge base for related articles using zendesk_search_help_center_articles.
- Respond and Resolve – Using zendesk_api_request_beta, the model drafts a clear response explaining the situation–the need to log out and log back in–and marks the ticket as “solved”.
Let’s see it in action.
MCP Setup
First, we need to equip an LLM with MCP support. We’re going to use Claude and connect it to Zendesk MCP tools through Zapier’s MCP connector. While Zapier keeps things simple here, you can also wrap your existing APIs with MCP or build standalone MCP servers for more complex integrations.
Zapier allows you to install and configure an MCP server that lives at a given URL, and then your client–in our case, Claude–can call that MCP server to perform actions. You can see the basic config below, in Zapier’s UI.
We configure our MCP client and then add the Zendesk tools we’d like to make available. On the Claude side, we connect to this MCP server in our claude_desktop_config.json file, like so:
```json
{
  "mcpServers": {
    "zapier-mcp": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "https://mcp.zapier.com/api/mcp/s/YjFlY…FmMA==/mcp"
      ]
    }
  }
}
```
Once we restart Claude, it will have access to our Zapier / Zendesk MCP server.
Handling Tickets
Now it’s time to test out our workflow, which starts with finding a Zendesk ticket. Here’s a snapshot of the open ticket in Zendesk:
As you can see, the user submitted a ticket about not being able to access Pro plan features. We’ll give Claude a prompt describing the four-step workflow above. Assuming the MCP tools are enabled, it might respond like this:
If we look back at Zendesk, it completed all the work for us! It found the ticket, which it noted was already tagged. Then it searched the knowledge base, locating an FAQ article describing a current limitation requiring users to log out and log back in to see Pro features after they upgrade. Finally, it added a comment to the ticket, which sends an email to the customer. Resolving the ticket is not in the snapshot, but Claude completed that step as well.
Looking back in Zendesk, we see the end result: a solved customer support ticket!
Related Reading
This workflow demonstrates MCP in action, but there’s much more to explore:
- Model Context Protocol: The Missing Layer Between AI and Your Apps – Understanding MCP fundamentals
- Wrapping an Existing API with MCP – Expose your current APIs to LLMs
- Building a Standalone MCP Server – Connect legacy databases and systems
Wrapping Up
We didn’t get into a ton of detail in this article, but the intention was to show what’s possible with MCP. When we make MCP tools available to an LLM, we can describe and execute business workflows. We would need to mature this solution a bit–for example, handling edge cases or introducing formal automation flows–but we are not far from creating an autonomous Level 1 customer support agent. For production implementations, consider wrapping your existing APIs with MCP for simpler integrations or building dedicated MCP servers for legacy system access.
That is the future of business operations: converting SOPs into instructions for agents and giving them tools to perform actions, run workflows, even run a business.
Ready to Build AI-Powered Workflows for Your Business?
Building autonomous agents requires deep technical expertise in AI integration, API architecture, and workflow automation. Gun.io connects you with senior engineers who specialize in MCP implementations, LLM integrations, and production-ready AI systems.