
💵 Billing

Bill users for their usage.

🚨 Requirements

  • A running Lago instance (cloud or self-hosted) and its API key

Steps:

  • Connect the proxy to Lago
  • Set the id you want to bill for (customers, internal users, teams)
  • Start!

Quick Start

Bill internal users for their usage

1. Connect proxy to Lago

Set 'lago' as a callback in your proxy config.yaml:

model_list:
  - model_name: fake-openai-endpoint
    litellm_params:
      model: openai/fake
      api_key: fake-key
      api_base: https://exampleopenaiendpoint-production.up.railway.app/

litellm_settings:
  callbacks: ["lago"] # 👈 KEY CHANGE

general_settings:
  master_key: sk-1234

Add your Lago keys to the environment

export LAGO_API_BASE="http://localhost:3000" # self-host - https://docs.getlago.com/guide/self-hosted/docker#run-the-app
export LAGO_API_KEY="3e29d607-de54-49aa-a019-ecf585729070" # Get key - https://docs.getlago.com/guide/self-hosted/docker#find-your-api-key
export LAGO_API_EVENT_CODE="openai_tokens" # name of lago billing code
export LAGO_API_CHARGE_BY="user_id" # 👈 Charges 'user_id' attached to proxy key
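
If you launch the proxy from a Python script rather than a shell, the same configuration can be set with os.environ before startup. A minimal sketch, reusing the placeholder values above:

import os

# Same values as the shell exports above (placeholders, not real credentials)
os.environ["LAGO_API_BASE"] = "http://localhost:3000"  # self-hosted Lago
os.environ["LAGO_API_KEY"] = "3e29d607-de54-49aa-a019-ecf585729070"
os.environ["LAGO_API_EVENT_CODE"] = "openai_tokens"  # Lago billable metric code
os.environ["LAGO_API_CHARGE_BY"] = "user_id"  # bill the 'user_id' attached to the proxy key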

Start proxy

litellm --config /path/to/config.yaml

2. Create Key for Internal User

curl 'http://0.0.0.0:4000/key/generate' \
--header 'Authorization: Bearer sk-1234' \
--header 'Content-Type: application/json' \
--data-raw '{"user_id": "my-unique-id"}' # 👈 Internal User's ID

Response Object:

{
    "key": "sk-tXL0wt5-lOOVK9sfY2UacA"
}
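
You can also generate the key programmatically. A minimal sketch using the requests library, with the master key and user_id from the steps above:

import requests

# Generate a proxy key tied to an internal user (same call as the curl above)
resp = requests.post(
    "http://0.0.0.0:4000/key/generate",
    headers={"Authorization": "Bearer sk-1234"},
    json={"user_id": "my-unique-id"},  # 👈 internal user's id
)
resp.raise_for_status()
user_key = resp.json()["key"]  # e.g. "sk-tXL0wt5-lOOVK9sfY2UacA"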

3. Start billing!

Make the request with the internal user's key from step 2 as the Bearer token:

curl --location 'http://0.0.0.0:4000/chat/completions' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer sk-tXL0wt5-lOOVK9sfY2UacA' \
--data '{
    "model": "fake-openai-endpoint",
    "messages": [
        {
            "role": "user",
            "content": "what llm are you"
        }
    ]
}'
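
Since the proxy is OpenAI-compatible, you can send the same request with the OpenAI Python SDK pointed at the proxy. A minimal sketch, using the user's key from step 2:

from openai import OpenAI

# Point the OpenAI SDK at the LiteLLM proxy, authenticated with the user's key
client = OpenAI(
    api_key="sk-tXL0wt5-lOOVK9sfY2UacA",  # 👈 user's key from step 2
    base_url="http://0.0.0.0:4000",
)

response = client.chat.completions.create(
    model="fake-openai-endpoint",
    messages=[{"role": "user", "content": "what llm are you"}],
)
print(response.choices[0].message.content)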

See Results on Lago

Advanced - Lago Logging object

This is what LiteLLM will log to Lago:

{
    "event": {
        "transaction_id": "<generated_unique_id>",
        "external_customer_id": <selected_id>, # either 'end_user_id', 'user_id', or 'team_id'. Default 'end_user_id'.
        "code": os.getenv("LAGO_API_EVENT_CODE"),
        "properties": {
            "input_tokens": <number>,
            "output_tokens": <number>,
            "model": <string>,
            "response_cost": <number>, # 👈 LITELLM CALCULATED RESPONSE COST - https://github.com/BerriAI/litellm/blob/d43f75150a65f91f60dc2c0c9462ce3ffc713c1f/litellm/utils.py#L1473
        }
    }
}
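
LiteLLM sends this event to Lago for you, so nothing below is required. Purely as an illustration, a hand-rolled submission of an equivalent event could look roughly like this, assuming Lago's standard event ingestion endpoint (POST {LAGO_API_BASE}/api/v1/events) and made-up example values:

import os
import uuid
import requests

# Illustrative only - LiteLLM emits this event automatically
event = {
    "event": {
        "transaction_id": str(uuid.uuid4()),
        "external_customer_id": "my-unique-id",  # the selected id (here, the key's user_id)
        "code": os.getenv("LAGO_API_EVENT_CODE"),  # e.g. "openai_tokens"
        "properties": {
            "input_tokens": 28,  # example values
            "output_tokens": 12,
            "model": "fake-openai-endpoint",
            "response_cost": 0.000059,  # LiteLLM-calculated cost
        },
    }
}

requests.post(
    f"{os.getenv('LAGO_API_BASE')}/api/v1/events",
    headers={"Authorization": f"Bearer {os.getenv('LAGO_API_KEY')}"},
    json=event,
    timeout=10,
)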

Advanced - Bill Customers, Internal Teams

For:

  • Customers (id passed via the 'user' param in the /chat/completions call) = 'end_user_id'
  • Internal Users (id set when creating keys) = 'user_id'
  • Teams (id set when creating keys) = 'team_id'

For example, to bill customers:

1. Set 'LAGO_API_CHARGE_BY' to 'end_user_id'

    export LAGO_API_CHARGE_BY="end_user_id"

2. Test it!

    curl --location 'http://0.0.0.0:4000/chat/completions' \
    --header 'Content-Type: application/json' \
    --header 'Authorization: Bearer sk-1234' \
    --data '{
        "model": "gpt-3.5-turbo",
        "messages": [
            {
                "role": "user",
                "content": "what llm are you"
            }
        ],
        "user": "my_customer_id"
    }'

    Here 'user' is whatever your customer id is; it is sent to Lago as the 'external_customer_id'.
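
The same customer-billed request via the OpenAI Python SDK; a minimal sketch, where 'user' is whatever your customer id is:

from openai import OpenAI

client = OpenAI(api_key="sk-1234", base_url="http://0.0.0.0:4000")

# The 'user' field is forwarded to Lago as the 'external_customer_id'
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "what llm are you"}],
    user="my_customer_id",  # 👈 whatever your customer id is
)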