This guide explains how to create and contribute community adapters for the TanStack AI ecosystem.
Community adapters extend TanStack AI by integrating external services, APIs, or custom model logic. They are authored and maintained by the community and can be reused across projects.
A community adapter is a reusable module that connects TanStack AI to an external provider or system.
Common use cases include:

- Integrating a third-party model provider or API
- Wrapping custom or self-hosted model logic
- Packaging a provider integration for reuse across projects

Community adapters are maintained by their authors, not by the core TanStack AI team.
Follow the steps below to build a well-structured, type-safe adapter.
Start by reviewing the existing internal adapter implementations in the TanStack AI GitHub repository. These define the expected structure, conventions, and integration patterns.
For a complete, detailed reference, use the OpenAI adapter, which is the most fully featured implementation.
Model metadata describes each model’s capabilities and constraints and is used by TanStack AI for compatibility checks and feature selection.
Your metadata should define, at a minimum:

- The model's name (its provider identifier)
- The input and output modalities the model supports
- The features the model supports (e.g. tools, structured output, streaming)
Refer to the OpenAI adapter’s model metadata for a concrete example.
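For orientation, a metadata entry might look something like the sketch below. This is an assumption-laden illustration, not the real shape: only `name` and `supports.input` are referenced later in this guide, and the remaining fields are invented placeholders.

```ts
// A minimal sketch, assuming a shape similar to the OpenAI adapter's
// metadata. `name` and `supports.input` are used elsewhere in this guide;
// the other fields are illustrative assumptions, not the actual API.
export const GPT5_2 = {
  name: 'gpt-5.2',
  supports: {
    input: ['text', 'image'],
    tools: true,
    structuredOutput: true,
    streaming: true,
  },
} as const
```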
After defining metadata, group models by supported functionality using exported arrays. These arrays allow TanStack AI to automatically select compatible models for a given task.
Example:
```ts
export const OPENAI_CHAT_MODELS = [
  // Frontier models
  GPT5_2.name,
  GPT5_2_PRO.name,
  GPT5_2_CHAT.name,
  GPT5_1.name,
  GPT5_1_CODEX.name,
  GPT5.name,
  GPT5_MINI.name,
  GPT5_NANO.name,
  GPT5_PRO.name,
  GPT5_CODEX.name,
  // ...other models
] as const

export const OPENAI_IMAGE_MODELS = [
  GPT_IMAGE_1.name,
  GPT_IMAGE_1_MINI.name,
  DALL_E_3.name,
  DALL_E_2.name,
] as const

export const OPENAI_VIDEO_MODELS = [SORA2.name, SORA2_PRO.name] as const
```
Each array should only include models that fully support the associated functionality.
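Note that the `as const` assertion is what preserves each model name as a string literal type, so the arrays double as type-level unions of valid model names:

```ts
// The `as const` assertion keeps each name as a literal type, so this
// union lists exactly the chat-capable model names.
type OpenAIChatModel = (typeof OPENAI_CHAT_MODELS)[number]
```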
Each model exposes a different set of configurable options. These options must be typed per model name so that users only see valid configuration options.
Example:
```ts
export type OpenAIChatModelProviderOptionsByName = {
  [GPT5_2.name]: OpenAIBaseOptions &
    OpenAIReasoningOptions &
    OpenAIStructuredOutputOptions &
    OpenAIToolsOptions &
    OpenAIStreamingOptions &
    OpenAIMetadataOptions
  [GPT5_2_CHAT.name]: OpenAIBaseOptions &
    OpenAIReasoningOptions &
    OpenAIStructuredOutputOptions &
    OpenAIToolsOptions &
    OpenAIStreamingOptions &
    OpenAIMetadataOptions
  // ... repeat for each model
}
```
This ensures strict type safety and feature correctness at compile time.
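For example, indexing the map by a model's name type resolves to exactly the fragments that model opted into (a hypothetical usage based on the types above):

```ts
// Resolves to the option intersection for GPT5_2; any option outside that
// intersection is rejected at compile time.
type Gpt52Options = OpenAIChatModelProviderOptionsByName[typeof GPT5_2.name]
```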
Models typically support different input modalities (e.g. text, images, audio). These must be defined per model to prevent invalid usage.
Example:
```ts
export type OpenAIModelInputModalitiesByName = {
  [GPT5_2.name]: typeof GPT5_2.supports.input
  [GPT5_2_PRO.name]: typeof GPT5_2_PRO.supports.input
  [GPT5_2_CHAT.name]: typeof GPT5_2_CHAT.supports.input
  // ... repeat for each model
}
```
Model options should be composed from reusable fragments rather than duplicated per model.
A common pattern is:

- A base options interface that every model supports
- Small feature fragments (e.g. reasoning, structured output, tools, streaming, metadata)
- Per-model intersections of the base with the fragments that model supports
Example (based on OpenAI models):
```ts
export interface OpenAIBaseOptions {
  // base options that every chat model supports
}

// Feature fragments that can be stitched per-model

/**
 * Reasoning options for models.
 */
export interface OpenAIReasoningOptions {
  //...
}

/**
 * Structured output options for models.
 */
export interface OpenAIStructuredOutputOptions {
  //...
}
```
Models can then opt into only the features they support:
```ts
export type OpenAIChatModelProviderOptionsByName = {
  [GPT5_2.name]: OpenAIBaseOptions &
    OpenAIReasoningOptions &
    OpenAIStructuredOutputOptions &
    OpenAIToolsOptions &
    OpenAIStreamingOptions &
    OpenAIMetadataOptions
}
```
There is no single correct composition; this structure should reflect the capabilities of the provider you are integrating.
Finally, implement the adapter's runtime logic.

This includes:

- Translating TanStack AI requests into calls to your provider's API
- Handling responses, including streaming where the model supports it
- Mapping provider errors into consistent adapter errors

Adapters are implemented per capability (e.g. chat, image generation, video generation), so only implement what your provider supports.
Refer to the OpenAI adapter for a complete, end-to-end implementation example.
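As a rough sketch only: the hypothetical chat-only skeleton below assumes an invented provider endpoint and invented names (`createChatAdapter`, `MyProviderMessage`). The real adapter interface is defined by TanStack AI, so copy its shape from the OpenAI adapter rather than from this example.

```ts
// A hypothetical chat-only adapter skeleton. The endpoint URL, function
// name, and message type are all illustrative assumptions.
interface MyProviderMessage {
  role: 'system' | 'user' | 'assistant'
  content: string
}

export function createChatAdapter(apiKey: string) {
  return {
    async chat(model: string, messages: Array<MyProviderMessage>) {
      // Translate the request into the provider's wire format and call it.
      const response = await fetch('https://api.example.com/v1/chat', {
        method: 'POST',
        headers: {
          Authorization: `Bearer ${apiKey}`,
          'Content-Type': 'application/json',
        },
        body: JSON.stringify({ model, messages }),
      })
      if (!response.ok) {
        // Map provider failures into a consistent error for callers.
        throw new Error(`Provider request failed: ${response.status}`)
      }
      return response.json()
    },
  }
}
```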
Once your adapter is complete, run `pnpm run sync-docs-config` in the root of the TanStack AI monorepo. This ensures your adapter appears correctly in the documentation navigation. Open a PR with the generated changes.
As a community adapter author, you are responsible for ongoing maintenance. This includes fixing bugs, tracking changes to your provider's API, and keeping your adapter's documentation accurate.
If you add new features or breaking changes, open a follow-up PR to keep the docs in sync.
