Designing APIs for LLMs as a new craft?
Designing a traditional API, whether for a library, a REST service, or an RPC service, rests on some underlying assumptions we’ve all taken for granted for a long time.
Maybe the one taken for granted most often: there will be some procedural code wrapped around the API, helping to ensure the API is used correctly.
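To make that assumption concrete, here is a minimal sketch. The payment API is hypothetical; the point is that the correct call sequence (authorize before capture) is guaranteed by procedural glue code, not by the API surface itself.

```python
class PaymentError(Exception):
    pass

class PaymentClient:
    """A hypothetical API where capture() only makes sense after authorize()."""

    def __init__(self):
        self._authorized = False

    def authorize(self, amount: float) -> None:
        if amount <= 0:
            raise PaymentError("amount must be positive")
        self._authorized = True

    def capture(self) -> str:
        if not self._authorized:
            raise PaymentError("capture() called before authorize()")
        return "captured"

def checkout(amount: float) -> str:
    # Procedural glue: the correct sequencing is hard-coded here,
    # so callers of checkout() can never misuse the underlying API.
    client = PaymentClient()
    client.authorize(amount)
    return client.capture()
```

The ordering constraint lives entirely in `checkout()`; nothing in the API’s type signatures expresses it.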
With LLM functions and tool use, things are a bit different. You pass the tools in as part of the call to the LLM and hope for the best. There’s far less assurance that calls will be coordinated in sequence or that dependencies between them will be respected.
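Here is a sketch of what “throwing the tools in” looks like, using the OpenAI-style function-calling schema as one common convention; the payment functions and model name are hypothetical. Notice that the dependency between the two tools exists only as English in the description fields, with nothing enforcing it.

```python
import json

# Hypothetical tools passed alongside the prompt. The model is free to call
# them in any order; the authorize-before-capture dependency is expressed
# only in the natural-language descriptions.
tools = [
    {
        "type": "function",
        "function": {
            "name": "authorize_payment",
            "description": "Authorize a payment. Must succeed before capture_payment.",
            "parameters": {
                "type": "object",
                "properties": {"amount": {"type": "number"}},
                "required": ["amount"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "capture_payment",
            "description": "Capture a previously authorized payment.",
            "parameters": {"type": "object", "properties": {}},
        },
    },
]

request_body = {
    "model": "some-model",  # placeholder, not a real model id
    "messages": [{"role": "user", "content": "Charge the customer $20."}],
    "tools": tools,
}
print(json.dumps(request_body, indent=2))
```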
While there’s a lot of interest in MCP, much of it seems to arise from familiarity with APIs: I can just put another layer in front of my existing API, and now it’s AI Ready!
There’s room here to explore and find practices for designing APIs which are better suited for integration into conversational UX, agentic workflows, and full agents.
This might involve smaller surface areas that are easier to explain in natural language, higher-level abstractions that are easier for a human or an agent to understand, items and actions with better semantic stability, more resilient, natural-language errors, and so on.
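One of those practices, sketched: a tool that returns natural-language errors an agent can read and recover from, instead of raising exceptions into the agent loop. The order lookup and function are hypothetical.

```python
# Hypothetical order store backing a lookup tool.
_ORDERS = {"A-100": "shipped", "A-101": "processing"}

def get_order_status(order_id: str) -> str:
    """A tool designed for an agent: always answer in a sentence, never raise."""
    status = _ORDERS.get(order_id)
    if status is None:
        known = ", ".join(sorted(_ORDERS))
        # A resilient, natural-language error: it names the problem and
        # suggests a next step the agent can act on.
        return (f"No order found with id '{order_id}'. "
                f"Known order ids are: {known}. "
                "Check for typos or ask the user to confirm the id.")
    return f"Order {order_id} is currently {status}."
```

A stack trace tells an agent almost nothing; a sentence like this gives it something to reason with.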
There are a couple of really memorable Python libraries that sought to re-imagine APIs. One was Requests, with the tagline “HTTP For Humans”.
It’s time for API Design for LLMs.