Is abliteration.ai OpenAI-compatible?
Yes. Point any OpenAI SDK at https://api.abliteration.ai/v1. See OpenAI compatibility.
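For example, here is a minimal sketch of an OpenAI-style chat completion request built with only the Python standard library. The base URL and the abliterated-model ID come from this FAQ; the /chat/completions path and the Bearer-token auth header follow OpenAI's public convention and are assumptions here.

```python
import json
import urllib.request

BASE_URL = "https://api.abliteration.ai/v1"  # OpenAI-compatible base URL from this FAQ

def build_chat_request(api_key: str, messages: list) -> urllib.request.Request:
    """Build (but do not send) an OpenAI-style /chat/completions request."""
    body = json.dumps({
        "model": "abliterated-model",  # the single model ID listed above
        "messages": messages,
    })
    return urllib.request.Request(
        url=f"{BASE_URL}/chat/completions",
        data=body.encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",  # assumed: OpenAI-style bearer auth
        },
        method="POST",
    )

req = build_chat_request("YOUR_API_KEY",
                         [{"role": "user", "content": "Hello"}])
# urllib.request.urlopen(req) would send it; the official OpenAI SDKs do the
# equivalent once their base_url is pointed at BASE_URL.
```

The official SDKs need only the base_url override; everything else in the request shape is standard OpenAI chat-completions.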
Do you also support the Anthropic Messages API?
Yes. Same host; send Anthropic-shaped requests to https://api.abliteration.ai/v1/messages. See Anthropic compatibility.
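A minimal sketch of an Anthropic-shaped request against the same host, again stdlib-only. The max_tokens field, the x-api-key header, and the anthropic-version header follow Anthropic's public Messages API convention and are assumptions here.

```python
import json
import urllib.request

def build_messages_request(api_key: str, messages: list) -> urllib.request.Request:
    """Build (but do not send) an Anthropic-style /v1/messages request."""
    body = json.dumps({
        "model": "abliterated-model",
        "max_tokens": 1024,            # required field in the Anthropic Messages shape
        "messages": messages,
    })
    return urllib.request.Request(
        url="https://api.abliteration.ai/v1/messages",
        data=body.encode(),
        headers={
            "Content-Type": "application/json",
            "x-api-key": api_key,               # assumed: Anthropic-style auth header
            "anthropic-version": "2023-06-01",  # assumed version pin
        },
        method="POST",
    )

req = build_messages_request("YOUR_API_KEY",
                             [{"role": "user", "content": "Hello"}])
```

An Anthropic SDK pointed at this base URL would produce the same request shape.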
What model IDs are available?
One: abliterated-model. See models.
Do you support embeddings?
No. Use a separate embedding provider alongside abliterated-model for the LLM.
Do you support structured outputs (response_format)?
No — response_format is ignored by the backend. See the compatibility matrix for the full list.
Do you support web search?
Yes, on all three API surfaces, each with its own request shape; see web search.
Do you support web fetch?
On the Anthropic Messages API only. Not on OpenAI chat completions or Responses. See web fetch.
Can I count tokens before sending?
Yes, via POST /v1/messages/count_tokens. See count tokens.
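A stdlib-only sketch of a count-tokens call. Only the endpoint path comes from this FAQ; the request body mirroring a normal Messages request, the x-api-key header, and an input_tokens field in the response all follow Anthropic's public count-tokens endpoint and are assumptions here.

```python
import json
import urllib.request

def build_count_tokens_request(api_key: str, messages: list) -> urllib.request.Request:
    """Build (but do not send) a request to the count-tokens endpoint."""
    body = json.dumps({
        "model": "abliterated-model",
        "messages": messages,  # assumed: same messages shape as /v1/messages
    })
    return urllib.request.Request(
        url="https://api.abliteration.ai/v1/messages/count_tokens",
        data=body.encode(),
        headers={
            "Content-Type": "application/json",
            "x-api-key": api_key,  # assumed: Anthropic-style auth header
        },
        method="POST",
    )

req = build_count_tokens_request("YOUR_API_KEY",
                                 [{"role": "user", "content": "Hello"}])
# Sending req and reading the JSON body would yield the token count, e.g. an
# input_tokens field if the response follows Anthropic's public shape.
```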
Do you retain my prompts?
Prompt and output content is not retained by default. Operational telemetry (token counts, timestamps, error codes) is kept for billing and reliability. See security.
Do your models have safety filters?
Base inference is unrestricted — that’s the product. Governance is opt-in via Policy Gateway, where you write the rules.
What’s the difference between /v1/* and /policy/*?
/v1/* is a transparent compat surface. /policy/* adds project quotas, policy evaluation, policy events, and streaming policy metadata. See policy endpoints.
