LlamaIndex works with abliteration.ai through its OpenAI-compatible OpenAILike LLM class.

Install

pip install llama-index llama-index-llms-openai-like

Usage

import os

from llama_index.llms.openai_like import OpenAILike

llm = OpenAILike(
    model="abliterated-model",
    api_base="https://api.abliteration.ai/v1",
    api_key=os.environ["ABLIT_KEY"],
    is_chat_model=True,
)

print(llm.complete("Hello").text)
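Under the hood, OpenAILike speaks the standard OpenAI chat-completions protocol, so the call above becomes a POST to /v1/chat/completions on the api_base. A minimal stdlib-only sketch of the request body it sends (the exact extra fields LlamaIndex attaches, such as temperature, may vary by version):

```python
import json

# Request body for POST https://api.abliteration.ai/v1/chat/completions,
# following the OpenAI chat-completions schema.
payload = {
    "model": "abliterated-model",
    "messages": [{"role": "user", "content": "Hello"}],
}
body = json.dumps(payload)
print(body)
```

Any OpenAI-compatible client that can produce this shape will work against the same endpoint.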

RAG

abliteration.ai does not currently serve embeddings. For a RAG pipeline, use a separate embedding provider (OpenAI, Cohere, or local sentence-transformers) alongside abliterated-model as the LLM:

import os

from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, Settings
from llama_index.embeddings.openai import OpenAIEmbedding

Settings.llm = llm  # abliteration.ai LLM from above
Settings.embed_model = OpenAIEmbedding(api_key=os.environ["OPENAI_API_KEY"])

docs = SimpleDirectoryReader("./docs").load_data()
index = VectorStoreIndex.from_documents(docs)
print(index.as_query_engine().query("What is abliteration.ai?"))
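To avoid depending on a second hosted API for embeddings, the same pipeline also works with a local model via LlamaIndex's HuggingFace integration. A sketch of the configuration, assuming `pip install llama-index-embeddings-huggingface`; the model name here is only an example:

```python
from llama_index.core import Settings
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

# Embeddings run locally; swap in any sentence-transformers model you prefer.
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")
```

With this set, the VectorStoreIndex code above is unchanged: only embedding calls move local, while queries still go to abliteration.ai.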

See the compatibility matrix for what’s supported.