This API provides access to the telkom-ai model for chat completions. The API allows developers to interact with the model by sending a series of messages and receiving model-generated responses. It supports both single-response and stream-based outputs, enabling a flexible integration of LLM functionality into applications.
This large language model was trained on data that predates June 2024.
To use the API, you need to authenticate with your API key. The API key should be passed either as a request header (x-api-key) or set programmatically.
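As a sketch of what authentication looks like at the HTTP level, the snippet below assembles the headers and JSON body by hand. The endpoint URL "https://ini_url.com" is the placeholder used elsewhere on this page, and API_KEY stands in for your real key; this is illustrative, not a required way to call the API.

```python
import json

API_KEY = "API_KEY"  # replace with your actual API key

# Headers expected by the API: JSON content type plus the x-api-key header
headers = {
    "Content-Type": "application/json",
    "x-api-key": API_KEY,
}

# Minimal request body: model name and one user message
payload = {
    "model": "telkom-ai-instruct",
    "messages": [{"role": "user", "content": "Hello!"}],
}

body = json.dumps(payload)  # serialized body to send with the POST request
print(headers["x-api-key"])
print(json.loads(body)["model"])
```

Any HTTP client can send this body to POST /chat/completions with these headers; the official OpenAI-compatible client shown below sets them for you.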
POST /chat/completions
Description: This endpoint is used to generate completions based on a series of messages. It can either return a single response or stream responses in chunks.
Request Format:
Headers:
Content-Type: application/json
x-api-key: API_KEY (replace API_KEY with your actual API key)

Request Body: A JSON object containing the model to use, an array of messages exchanged with the model, and optionally a stream parameter for streaming responses.
import openai

# Configure the client: API key, custom base URL, and authentication header
openai.api_key = "API_KEY"  # Replace with your actual API key
openai.base_url = "https://ini_url.com/"  # Custom base URL
openai.default_headers = {"x-api-key": "API_KEY"}  # Set your API key here

# Make a request to generate a chat completion
completion = openai.chat.completions.create(
    model="telkom-ai-instruct",  # Specify the model to use
    messages=[
        {
            "role": "user",  # User's message
            "content": "How do I output all files in a directory using Python?",
        },
    ],
    stream=False,  # Set to True to stream the response in chunks
)

# Print the response content (non-streaming)
print(completion.choices[0].message.content)

# If streaming (stream=True), iterate over the chunks instead:
# for chunk in completion:
#     print(chunk.choices[0].delta.content or "", end="")
curl "https://ini_url.com/chat/completions" \
  -H "Content-Type: application/json" \
  -H "x-api-key: API_KEY" \
  -d '{
    "model": "telkom-ai-instruct",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant."
      },
      {
        "role": "user",
        "content": "Write a haiku that explains the concept of recursion."
      }
    ]
  }'
model (string): The model to use (e.g., telkom-ai-instruct).
messages (array): A list of message objects to pass to the model. Each message should have:
- role (string): One of system, user, or assistant.
- content (string): The text of the message.
stream (boolean, optional): If true, the response will be streamed as chunks. If false or omitted, a single response will be returned.
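To show how a streamed response would be consumed, the hypothetical helper below concatenates the text deltas from a sequence of chunks. It assumes chunks follow the OpenAI-style shape used in the Python example above (chunk.choices[0].delta.content); the simulated chunks stand in for a real streamed response.

```python
from types import SimpleNamespace

def collect_stream(chunks):
    """Concatenate the text deltas from a stream of completion chunks.

    Hypothetical helper: each chunk is assumed to expose its text at
    chunk.choices[0].delta.content, matching the streaming example above.
    """
    parts = []
    for chunk in chunks:
        delta = chunk.choices[0].delta.content
        if delta:  # a chunk's delta content may be None (e.g. the final chunk)
            parts.append(delta)
    return "".join(parts)

# Simulated chunks standing in for a real streamed response:
fake_chunks = [
    SimpleNamespace(choices=[SimpleNamespace(delta=SimpleNamespace(content=c))])
    for c in ["Hel", "lo", None]
]
print(collect_stream(fake_chunks))  # -> Hello
```

With stream omitted or false, no such loop is needed: the full text arrives at once in completion.choices[0].message.content.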
Latency: 2.1 s
Throughput: 7 rpm
Basepath: /Telkom-LLM/0.0.4
Host: telkom-ai-dag.api.apilogy.id, telkom-ai-dag-api.apilogy.id
Schemes: HTTP, HTTPS
Supported Authentication: key-auth