I have been calling the OpenRouter API directly from my code recently.
Earlier, my code looked a bit like this:
```python
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="<OPENROUTER_API_KEY>",
)

completion = client.chat.completions.create(
    extra_headers={
        "HTTP-Referer": "<YOUR_SITE_URL>",  # Optional. Site URL for rankings on openrouter.ai.
        "X-Title": "<YOUR_SITE_NAME>",  # Optional. Site title for rankings on openrouter.ai.
    },
    model="openai/gpt-4o",
    messages=[
        {
            "role": "user",
            "content": "What is the meaning of life?",
        }
    ],
)

print(completion.choices[0].message.content)
```
And now it looks more like this:
```python
import json

import requests

response = requests.post(
    url="https://openrouter.ai/api/v1/chat/completions",
    headers={
        "Authorization": "Bearer <OPENROUTER_API_KEY>",
        "HTTP-Referer": "<YOUR_SITE_URL>",  # Optional. Site URL for rankings on openrouter.ai.
        "X-Title": "<YOUR_SITE_NAME>",  # Optional. Site title for rankings on openrouter.ai.
    },
    data=json.dumps({
        "model": "openai/gpt-4o",  # Optional
        "messages": [
            {
                "role": "user",
                "content": "What is the meaning of life?",
            }
        ],
    }),
)

print(response.json()["choices"][0]["message"]["content"])
```
The second option makes a lot more sense for my use case, which is extracting structured outputs. Here are some reasons why.
The direct API call is not any more complicated
Usually, one big reason to avoid direct API calls is that the setup is much more complicated. That isn't true in this case.
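In fact, the whole call fits in one small function. Here's a minimal sketch; the helper's name, defaults, and error handling are my own additions, not part of the OpenRouter docs:

```python
import json

import requests


def chat(messages, model="openai/gpt-4o", api_key="<OPENROUTER_API_KEY>"):
    """Send one POST to OpenRouter's chat completions endpoint
    and return the assistant's reply as a string."""
    response = requests.post(
        url="https://openrouter.ai/api/v1/chat/completions",
        headers={"Authorization": f"Bearer {api_key}"},
        data=json.dumps({"model": model, "messages": messages}),
    )
    response.raise_for_status()  # surface HTTP errors early
    return response.json()["choices"][0]["message"]["content"]
```

That is the entire "setup" — no client object, no SDK versioning to keep track of.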
The “reasoning” object is easier to use from a direct API call
Since I am mostly focused on extracting structured outputs from text, the reasoning parameter is something I can use and tweak as per my needs.
This is much easier to do with a direct API call than through the SDK.
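With the direct call, tweaking reasoning is just one more key in the request body. The `effort` and `exclude` fields below follow OpenRouter's documented `reasoning` object, but check the current API reference before relying on the exact values:

```python
import json

# Request body with OpenRouter's "reasoning" object added alongside
# the usual model/messages fields.
payload = {
    "model": "openai/gpt-4o",
    "messages": [
        {"role": "user", "content": "What is the meaning of life?"}
    ],
    "reasoning": {
        "effort": "high",   # how much effort the model spends reasoning
        "exclude": False,   # keep reasoning tokens in the response
    },
}

body = json.dumps(payload)  # pass this as `data=body` to requests.post
```

With the SDK, the same thing has to go through `extra_body`, which is exactly the kind of indirection I'd rather avoid.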
The code is more readable
The Python SDK makes the code a little less readable because of the way the messages are assembled.
I have found this to be especially true when working with structured outputs.
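Structured outputs are a good example: the whole request is one plain dictionary that mirrors the JSON the API actually receives. The `response_format` shape below follows OpenRouter's docs; the `person` schema itself is a made-up example of mine:

```python
import json

# A structured-output request: `response_format` asks the model to
# return JSON matching the (illustrative) schema below.
payload = {
    "model": "openai/gpt-4o",
    "messages": [
        {"role": "user", "content": "Extract the name and age from: 'Ada, 36.'"}
    ],
    "response_format": {
        "type": "json_schema",
        "json_schema": {
            "name": "person",
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {
                    "name": {"type": "string"},
                    "age": {"type": "integer"},
                },
                "required": ["name", "age"],
                "additionalProperties": False,
            },
        },
    },
}

body = json.dumps(payload)  # pass this as `data=body` to requests.post
```

What you see in the code is byte-for-byte what goes over the wire, which makes debugging a malformed schema much simpler.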
So I will just use the direct API call in all my material moving forward.