
LLM Telkom AI API Documentation

Overview

This API provides access to the telkom-ai model for chat completions. Developers interact with the model by sending a series of messages and receiving model-generated responses. The API supports both single-response and streamed output, enabling flexible integration of LLM functionality into applications. The model was trained on data that predates June 2024.

Authentication

To use the API, you must authenticate with your API key. Pass the key in the x-api-key request header, or configure it programmatically in your client library.
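As a minimal sketch of header-based authentication using only the Python standard library: the function below assembles a POST request carrying the x-api-key header and a JSON body. The base URL is taken from the Servers section of this page; the helper name build_request is illustrative, not part of the API.

```python
import json
import urllib.request

API_KEY = "API_KEY"  # replace with your actual API key
BASE_URL = "https://telkom-ai-dag.api.apilogy.id/Telkom-LLM/0.0.4"

def build_request(messages, stream=False):
    """Assemble the HTTP request for POST /chat/completions."""
    body = json.dumps({
        "model": "telkom-ai-instruct",
        "messages": messages,
        "stream": stream,
    }).encode("utf-8")
    headers = {
        "Content-Type": "application/json",
        "x-api-key": API_KEY,
    }
    return urllib.request.Request(
        BASE_URL + "/chat/completions", data=body, headers=headers, method="POST"
    )

req = build_request([{"role": "user", "content": "Hello!"}])
# response = urllib.request.urlopen(req)  # performs the actual call
```

The actual call is left commented out so the snippet can be run without network access or a valid key.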


API Endpoints

POST /chat/completions

  • Description: This endpoint is used to generate completions based on a series of messages. It can either return a single response or stream responses in chunks.

  • Request Format:

    • Headers:

      • Content-Type: application/json
      • x-api-key: API_KEY (replace API_KEY with your actual API key)
    • Request Body: A JSON object containing the model to use, an array of messages exchanged with the model, and optionally a stream parameter for streaming responses.
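For reference, a minimal request body might look like this (model name, roles, and the stream flag are taken from the parameter descriptions and examples on this page):

```json
{
  "model": "telkom-ai-instruct",
  "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"}
  ],
  "stream": false
}
```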


Example Code

Python Example

import openai

# Configure the client: base URL from the Servers section, API key via x-api-key
client = openai.OpenAI(
    api_key="API_KEY",  # replace API_KEY with your actual API key
    base_url="https://telkom-ai-dag.api.apilogy.id/Telkom-LLM/0.0.4",
    default_headers={"x-api-key": "API_KEY"},  # gateway authentication header
)

# Make a non-streaming chat-completion request
completion = client.chat.completions.create(
    model="telkom-ai-instruct",  # specify the model to use
    messages=[
        {
            "role": "user",  # user's message
            "content": "How do I output all files in a directory using Python?",
        },
    ],
)

# Print the response content (non-streaming)
print(completion.choices[0].message.content)

# Streaming variant: pass stream=True and iterate over the chunks
# stream = client.chat.completions.create(
#     model="telkom-ai-instruct",
#     messages=[{"role": "user", "content": "How do I output all files in a directory using Python?"}],
#     stream=True,
# )
# for chunk in stream:
#     print(chunk.choices[0].delta.content or "", end="")

cURL Example

curl "https://telkom-ai-dag.api.apilogy.id/Telkom-LLM/0.0.4/chat/completions" \
    -H "Content-Type: application/json" \
    -H "x-api-key: API_KEY" \
    -d '{
        "model": "telkom-ai-instruct",
        "messages": [
            {
                "role": "system",
                "content": "You are a helpful assistant."
            },
            {
                "role": "user",
                "content": "Write a haiku that explains the concept of recursion."
            }
        ]
    }'

Request Parameters

  • model (string): The model to use (e.g., telkom-ai-instruct).

  • messages (array): A list of message objects to pass to the model. Each message should have:

    • role (string): The role of the sender (either system, user, or assistant).
    • content (string): The text content of the message.
  • stream (boolean, optional): If true, the response will be streamed as chunks. If false or omitted, a single response will be returned.
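When stream is true, the chunks are assumed here to follow the OpenAI-compatible Server-Sent-Events format ("data: {...}" lines ending with "data: [DONE]"); the API's actual wire format may differ, so treat this parser as a sketch under that assumption. The function name parse_sse_chunks is illustrative.

```python
import json

def parse_sse_chunks(lines):
    """Extract incremental text deltas from SSE-style stream lines.

    Assumes an OpenAI-compatible chunk format:
    data: {"choices": [{"delta": {"content": "..."}}]}
    """
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip keep-alives and blank lines
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"].get("content")
        if delta:
            yield delta

sample = [
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo"}}]}',
    'data: [DONE]',
]
print("".join(parse_sse_chunks(sample)))  # prints "Hello"
```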


Latency

2.1 s

Throughput

7 rpm

Basepath

/Telkom-LLM/0.0.4

Host

telkom-ai-dag.api.apilogy.id, telkom-ai-dag-api.apilogy.id

Schemes

HTTP, HTTPS

Supported Authentication

key-auth

Servers

http://telkom-ai-dag.api.apilogy.id/Telkom-LLM/0.0.4
http://telkom-ai-dag-api.apilogy.id/Telkom-LLM/0.0.4
https://telkom-ai-dag.api.apilogy.id/Telkom-LLM/0.0.4
https://telkom-ai-dag-api.apilogy.id/Telkom-LLM/0.0.4

Telkom LLM
Category: AI/Machine Learning, by telkom_ai_dag
Version: 0.0.4