ModelMeta
452 model profiles · 22 providers · Hourly refresh cadence

AI model reference database

Look up AI model specs, pricing, and parameters without digging through docs.

ModelMeta turns scattered provider documentation into structured model records. People come here to check pricing, context windows, max output, supported parameters, capability flags, rate limits, and provider coverage in one place.

Pricing · Context window · Output limits · Parameters · Capabilities · Provider coverage
452

Model profiles with structured pricing, limits, parameters, and capability metadata.

22

Provider sources attached to each model record, without turning the homepage into a provider directory.

1h

Registry refresh cadence for normalized model snapshots and route metadata.

Example normalized model record

The first screen should show the structure of a model record, not just showcase a single model.

A good model record combines commercial terms, token limits, runtime controls, capability flags, and source coverage in one clean page.

Input price

$2.50

sample normalized field

Output price

$7.50

sample normalized field

Context window

2M

sample token limit

Max output

32K

sample token limit

Supported parameters

temperature · top_p · top_k · reasoning_effort

Capability flags

Vision · Tool use · Streaming · Structured output

Source layer

OpenAI · SiliconFlow · Infini-AI · Cohere · Alibaba Cloud · ByteDance

Provider pages remain useful as provenance and coverage context, but the model record stays at the center of the product.
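The record above can be sketched as structured data. This is a minimal illustration, not the product's actual schema: the field names are assumptions, the values are the sample figures shown on this page, and prices are assumed to be quoted in USD per 1M tokens.

```python
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    """One normalized model profile (field names are illustrative)."""
    name: str
    input_price: float        # USD, assumed per 1M input tokens
    output_price: float       # USD, assumed per 1M output tokens
    context_window: int       # tokens ("2M" taken at face value)
    max_output: int           # tokens
    parameters: list[str] = field(default_factory=list)
    capabilities: list[str] = field(default_factory=list)
    sources: list[str] = field(default_factory=list)  # provider provenance

# The sample record from this page, as one structured object
record = ModelRecord(
    name="example-model",
    input_price=2.50,
    output_price=7.50,
    context_window=2_000_000,
    max_output=32_000,
    parameters=["temperature", "top_p", "top_k", "reasoning_effort"],
    capabilities=["vision", "tool_use", "streaming", "structured_output"],
    sources=["OpenAI", "SiliconFlow", "Infini-AI", "Cohere",
             "Alibaba Cloud", "ByteDance"],
)
```

Keeping commercial terms, token limits, runtime controls, capability flags, and sources in one object is what makes the record comparable across providers.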

What this product helps answer

Teams need a few concrete answers before they pick a model.

People usually arrive with a practical question. The homepage should expose those questions first, then show the field groups that answer them.

01

What will this model cost in production?

Compare input, output, cached, and batch pricing before a team commits to a model family.

02

Can it fit the workload and token limits?

Context windows, max output, modality support, and rate limits should be visible before implementation starts.

03

Which controls and capabilities are actually supported?

Teams need explicit fields for parameters, structured output, tool use, streaming, and reasoning behavior.

04

Which providers expose the model and where did the data come from?

Provider coverage stays attached to the record as source metadata instead of replacing the model as the main object.
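The first question above, production cost, reduces to simple arithmetic once input and output prices sit in one record. A hedged sketch, assuming prices are quoted per 1M tokens; the function name and token counts are illustrative:

```python
def estimate_request_cost(input_tokens: int, output_tokens: int,
                          input_price: float, output_price: float) -> float:
    """Cost of one request in USD, assuming prices are per 1M tokens."""
    return ((input_tokens / 1_000_000) * input_price
            + (output_tokens / 1_000_000) * output_price)

# Using the sample record's prices ($2.50 in / $7.50 out)
# for a request with 4,000 input tokens and 800 output tokens:
cost = estimate_request_cost(4_000, 800, 2.50, 7.50)
# 4000/1M * 2.50 = $0.010, 800/1M * 7.50 = $0.006, so $0.016 per request
```

Multiplying per-request cost by expected monthly volume is what turns a pricing row into a production budget.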

Structured field groups

The database should make the important model fields visible in seconds.

Open the full catalog >

Pricing

Commercial terms belong near the top so users can filter quickly and compare like-for-like.

Input price
Output price
Cached input
Batch pricing

Context and limits

Token limits decide what fits into a workflow before anyone builds around a model.

Context window
Max input
Max output
Rate limits

Parameters

Runtime controls should be listed explicitly, not buried inside prose or provider-specific docs.

temperature
top_p
top_k
reasoning_effort

Capabilities

Structured flags make model behavior easier to compare across providers and model families.

Vision
Tool use
Streaming
Structured output

Product flow

Built like a reference workflow, not a single overloaded page.

01

Browse the catalog

Search and filter by the model fields people actually compare, not by a homepage state machine.

02

Open structured profiles

Each model gets its own page for pricing, limits, capabilities, parameters, and reference metadata.

03

Compare a shortlist

Shortlists should turn into side-by-side comparisons instead of more tabs across provider docs.
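Step 03 above, turning a shortlist into a side-by-side view, can be sketched as a pivot over normalized records. Field names and the two-model shortlist are illustrative:

```python
def compare(models, fields):
    """Pivot a shortlist into (field, value-per-model) rows for display."""
    header = ["field"] + [m["name"] for m in models]
    rows = [[f] + [m.get(f, "n/a") for m in models] for f in fields]
    return [header] + rows

shortlist = [
    {"name": "model-a", "input_price": 2.50, "output_price": 7.50,
     "max_output": 32_000},
    {"name": "model-c", "input_price": 1.10, "output_price": 3.30,
     "max_output": 16_000},
]
table = compare(shortlist, ["input_price", "output_price", "max_output"])
for row in table:
    print("\t".join(str(cell) for cell in row))
```

One normalized schema per record is what makes this pivot trivial; comparing raw provider docs would require reconciling field names first.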

Provider coverage

Providers stay visible as source context, not as the homepage headline.

Provider pages explain provenance, access points, and coverage. The homepage stays centered on the model information people compare when making a decision.

Open provider directory >