API Reference

POST /v1/chat

Handles general chat interactions.

Parameters

  • prompt string, required

    The user's chat message.

  • model string, required

    The model to use for the chat.

  • personality string, optional

    An optional personality for the assistant.

cURL
curl https://api.ravonix.com/v1/chat \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "prompt": "Hello, how are you?",
    "model": "chat-model-v1",
    "personality": "friendly"
  }'

Response Body: Here's what a typical successful response looks like:

  • id: A unique identifier for this specific API request.
  • model: The identifier of the specific model that processed this request.
  • output: The main conversational response generated by the AI chat model.
  • usage: Provides insights into the token consumption of the API request, including:
    • input_tokens: The number of tokens sent in the user's prompt.
    • output_tokens: The number of tokens generated in the model's response.
    • total_tokens: The sum of input and output tokens for this request.
{
  "id": "req_chat_7a2b3c",
  "model": "chat-model-v1",
  "output": "I am an AI assistant, and I'm here to help you. How can I assist you today?",
  "usage": {
    "input_tokens": 12,
    "output_tokens": 25,
    "total_tokens": 37
  }
}
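The same request can be made from Python. The sketch below mirrors the cURL example above (endpoint, header names, and JSON fields are taken from this reference; the helper names and the use of `urllib` are illustrative, not part of the API):

```python
import json
import urllib.request

API_BASE = "https://api.ravonix.com"  # base URL from the cURL examples


def build_chat_request(api_key, prompt, model, personality=None):
    """Assemble a POST /v1/chat request; personality is optional."""
    payload = {"prompt": prompt, "model": model}
    if personality is not None:
        payload["personality"] = personality
    return urllib.request.Request(
        API_BASE + "/v1/chat",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )


def send_chat(req):
    """Send the request and return the parsed JSON response body."""
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

With a valid API key, `send_chat(build_chat_request(...))` returns a dict exposing `body["output"]` and `body["usage"]` as shown in the sample response.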

POST /v1/assistant

Acts as an assistant for server administration and other operational tasks.

Parameters

  • prompt string, required

    The user's query or command for the assistant.

  • model string, required

    The model to use for the assistant.

  • personality string, optional

    An optional personality for the assistant.

cURL
curl https://api.ravonix.com/v1/assistant \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "prompt": "How do I restart the server?",
    "model": "assistant-model-v1",
    "personality": "helpful"
  }'

Response Body: Here's what a typical successful response looks like:

  • id: A unique identifier for this specific API request.
  • model: The identifier of the specific model that processed this request.
  • output: The AI assistant's response or solution to the user's query or command.
  • usage: Provides insights into the token consumption of the API request, including:
    • input_tokens: The number of tokens sent in the user's prompt.
    • output_tokens: The number of tokens generated in the model's response.
    • total_tokens: The sum of input and output tokens for this request.
{
  "id": "req_assistant_x5y6z7",
  "model": "assistant-model-v1",
  "output": "To restart the server, you typically need to access the server's command line or control panel and issue a 'restart' command. For Linux, 'sudo reboot' or 'sudo systemctl restart [service_name]' are common.",
  "usage": {
    "input_tokens": 15,
    "output_tokens": 40,
    "total_tokens": 55
  }
}

POST /v1/npc

Generates NPC (Non-Player Character) dialogue and logic.

Parameters

  • prompt string, required

The prompt describing the NPC or the situation it should respond to.

  • model string, required

The model to use for NPC generation.

  • personality string, optional

An optional personality for the NPC.

cURL
curl https://api.ravonix.com/v1/npc \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "prompt": "Tell me about the blacksmith in this town.",
    "model": "npc-model-v1",
    "personality": "gruff"
  }'

Response Body: Here's what a typical successful response looks like:

  • id: A unique identifier for this specific API request.
  • model: The identifier of the specific model that processed this request.
  • output: The generated dialogue or action for the Non-Player Character (NPC).
  • usage: Provides insights into the token consumption of the API request, including:
    • input_tokens: The number of tokens sent in the user's prompt.
    • output_tokens: The number of tokens generated in the model's response.
    • total_tokens: The sum of input and output tokens for this request.
{
  "id": "req_npc_p0q1r2",
  "model": "npc-model-v1",
  "output": "Old man Grognak, he's been forging steel here for decades. Keeps to himself mostly, but his work is solid. Don't try to haggle, he sets his prices firm.",
  "usage": {
    "input_tokens": 14,
    "output_tokens": 35,
    "total_tokens": 49
  }
}

POST /v1/roleplay

Engages in roleplay scenarios.

Parameters

  • prompt string, required

The user's message within the roleplay scenario.

  • model string, required

The model to use for the roleplay.

  • personality string, optional

An optional personality or character description for the assistant to adopt.

cURL
curl https://api.ravonix.com/v1/roleplay \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "prompt": "You are a pirate. Greet me.",
    "model": "roleplay-model-v1",
    "personality": "A talkative pirate captain."
  }'

Response Body: Here's what a typical successful response looks like:

  • id: A unique identifier for this specific API request.
  • model: The identifier of the specific model that processed this request.
  • output: The AI-generated response conforming to the specified roleplay scenario and personality.
  • usage: Provides insights into the token consumption of the API request, including:
    • input_tokens: The number of tokens sent in the user's prompt.
    • output_tokens: The number of tokens generated in the model's response.
    • total_tokens: The sum of input and output tokens for this request.
{
  "id": "req_roleplay_a1b2c3",
  "model": "roleplay-model-v1",
  "output": "Ahoy there, matey! What brings ye to these here shores? Speak yer mind or walk the plank!",
  "usage": {
    "input_tokens": 18,
    "output_tokens": 28,
    "total_tokens": 46
  }
}

POST /v1/quest

Generates a quest for a game or story.

Parameters

  • prompt string, required

    The user's quest prompt.

  • model string, required

    The model to use for quest generation.

  • personality string, optional

    An optional personality for the quest generator.

cURL
curl https://api.ravonix.com/v1/quest \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "prompt": "Create a quest to retrieve a lost artifact from a haunted forest.",
    "model": "quest-model-v1",
    "personality": "mysterious"
  }'

Response Body: Here's what a typical successful response looks like:

  • id: A unique identifier for this specific API request.
  • model: The identifier of the specific model that processed this request.
  • output: The generated quest description, providing details on the objective, setting, and potential challenges.
  • usage: Provides insights into the token consumption of the API request, including:
    • input_tokens: The number of tokens sent in the user's prompt.
    • output_tokens: The number of tokens generated in the model's response.
    • total_tokens: The sum of input and output tokens for this request.
{
  "id": "req_quest_d4e5f6",
  "model": "quest-model-v1",
  "output": "A whispered legend speaks of the 'Moonpetal Amulet,' lost in the Whispering Woods. Recover it from the ancient, spectral guardian lurking within its deepest shadows to restore balance to the nearby village.",
  "usage": {
    "input_tokens": 25,
    "output_tokens": 45,
    "total_tokens": 70
  }
}

POST /v1/lore

Generates lore for a world or story.

Parameters

  • prompt string, required

    The user's lore prompt.

  • model string, required

    The model to use for lore generation.

  • personality string, optional

    An optional personality for the lore generator.

cURL
curl https://api.ravonix.com/v1/lore \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "prompt": "Tell me about the ancient history of the elves in this world.",
    "model": "lore-model-v1",
    "personality": "wise"
  }'

Response Body: Here's what a typical successful response looks like:

  • id: A unique identifier for this specific API request.
  • model: The identifier of the specific model that processed this request.
  • output: The AI-generated lore or world-building description based on the prompt.
  • usage: Provides insights into the token consumption of the API request, including:
    • input_tokens: The number of tokens sent in the user's prompt.
    • output_tokens: The number of tokens generated in the model's response.
    • total_tokens: The sum of input and output tokens for this request.
{
  "id": "req_lore_g7h8i9",
  "model": "lore-model-v1",
  "output": "The elves of Eldoria, an ancient and reclusive race, trace their lineage back to the First Trees. They witnessed the birth of stars and the forging of mountains, their history interwoven with the very fabric of this world. Their ancient prophecies speak of an age of twilight, followed by rebirth.",
  "usage": {
    "input_tokens": 20,
    "output_tokens": 55,
    "total_tokens": 75
  }
}

POST /v1/custom

Handles custom prompts.

Parameters

  • prompt string, required

    The user's custom prompt.

  • model string, required

    The model to use for the custom prompt.

  • personality string, optional

    An optional personality for the custom prompt.

cURL
curl https://api.ravonix.com/v1/custom \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "prompt": "Translate the following to French: 'Hello, world!'",
    "model": "custom-model-v1",
    "personality": "formal"
  }'

Response Body: Here's what a typical successful response looks like:

  • id: A unique identifier for this specific API request.
  • model: The identifier of the specific model that processed this request.
  • output: The AI-generated response based on the custom prompt (e.g., translation, summarization, creative text).
  • usage: Provides insights into the token consumption of the API request, including:
    • input_tokens: The number of tokens sent in the user's prompt.
    • output_tokens: The number of tokens generated in the model's response.
    • total_tokens: The sum of input and output tokens for this request.
{
  "id": "req_custom_j0k1l2",
  "model": "custom-model-v1",
  "output": "Bonjour le monde!",
  "usage": {
    "input_tokens": 10,
    "output_tokens": 3,
    "total_tokens": 13
  }
}
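All seven endpoints above accept the same three fields, so a single helper can target any of them. The sketch below validates the inputs client-side before sending (the endpoint names come from this reference; the function itself is illustrative):

```python
# Endpoints documented in this reference; all share the same request schema.
ENDPOINTS = {"chat", "assistant", "npc", "roleplay", "quest", "lore", "custom"}


def build_payload(endpoint, prompt, model, personality=None):
    """Return (path, payload) for any Ravonix endpoint, checking required fields."""
    if endpoint not in ENDPOINTS:
        raise ValueError(f"unknown endpoint: {endpoint}")
    if not prompt or not model:
        raise ValueError("prompt and model are required")
    payload = {"prompt": prompt, "model": model}
    if personality is not None:
        payload["personality"] = personality
    return f"/v1/{endpoint}", payload
```

Validating locally like this avoids a round trip that would otherwise end in a 400 Bad Request.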

Error Codes

The API uses standard HTTP status codes to indicate the success or failure of a request. Below are the possible error codes and their corresponding responses.

400 Bad Request

Reason: The request is invalid. This usually happens because:

  • A required field is missing (e.g., prompt, model)
  • The JSON body is malformed
{
  "error": {
    "code": 400,
    "message": "Bad Request",
    "reason": "Missing required field or invalid JSON format.",
    "solution": "Ensure all required fields are included and the JSON syntax is valid."
  }
}

401 Unauthorized

Reason: The API key is missing, invalid, or revoked.

{
  "error": {
    "code": 401,
    "message": "Unauthorized",
    "reason": "The API key is missing, invalid, or revoked.",
    "solution": "Provide a valid API key in the Authorization header."
  }
}

402 Payment Required

Reason: You have used all tokens included in your plan.

{
  "error": {
    "code": 402,
    "message": "Quota exceeded",
    "reason": "Your monthly token balance has been fully consumed.",
    "solution": "Upgrade your plan or wait until your next billing cycle."
  }
}

403 Forbidden

Reason: The model or feature is not available on your current plan.

{
  "error": {
    "code": 403,
    "message": "Access denied",
    "reason": "Your current plan does not allow access to this model or feature.",
    "solution": "Upgrade your plan or choose a model available in your tier."
  }
}

404 Not Found

Reason: You used an endpoint that does not exist.

{
  "error": {
    "code": 404,
    "message": "Endpoint not found",
    "reason": "The requested endpoint does not exist.",
    "solution": "Check the endpoint path and spelling. Example: /v1/chat."
  }
}

413 Request Too Large

Reason: Your request exceeds the model’s maximum context window.

{
  "error": {
    "code": 413,
    "message": "Request too large",
    "reason": "The total input exceeds the model's allowed context window.",
    "solution": "Shorten the prompt or remove older messages from the conversation."
  }
}

429 Too Many Requests

Reason: You exceeded your plan’s RPM (requests per minute) or TPM (tokens per minute) limits.

{
  "error": {
    "code": 429,
    "message": "Too many requests",
    "reason": "You exceeded your plan's rate limits.",
    "solution": "Reduce request frequency or upgrade your plan for higher limits."
  }
}

500 Internal Server Error

Reason: Something failed on Ravonix’s side.

{
  "error": {
    "code": 500,
    "message": "Internal server error",
    "reason": "An unexpected error occurred while processing the request.",
    "solution": "Retry the request later. If the issue continues, contact support."
  }
}

503 Service Unavailable

Reason: Ravonix is under maintenance or temporarily overloaded.

{
  "error": {
    "code": 503,
    "message": "Service temporarily unavailable",
    "reason": "The service is currently under maintenance or experiencing high load.",
    "solution": "Wait a few moments and retry the request."
  }
}
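Because 429 and 503 indicate transient conditions, a client can retry them with exponential backoff instead of failing immediately. A minimal sketch (the `send_request` callable, attempt count, and delays are illustrative assumptions, not API requirements):

```python
import time

RETRYABLE = {429, 503}  # transient statuses per the table above


def call_with_retry(send_request, max_attempts=4, base_delay=1.0):
    """Call send_request() -> (status, body), retrying transient statuses.

    Waits base_delay * 2**attempt seconds between attempts (1s, 2s, 4s, ...).
    """
    for attempt in range(max_attempts):
        status, body = send_request()
        if status not in RETRYABLE:
            return status, body
        if attempt < max_attempts - 1:
            time.sleep(base_delay * (2 ** attempt))
    return status, body  # still failing after max_attempts
```

Non-retryable statuses such as 400 or 401 are returned immediately, since repeating an invalid request cannot succeed.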

200 Success

The request completed successfully; the response body follows the format shown for each endpoint above.