Send 100 requests to the LLM on OpenRouter
- The first step is to send 100 requests to one of the LLMs available on OpenRouter
- 100 is a large enough sample to get a reasonably good picture of how reliably each LLM performs
- particularly when it is called via OpenRouter
- A round number also makes the success rates easy to read off as percentages
This is the method used in the code; it is called once per request, 100 times in total:
import json
import time

import requests

url = "https://openrouter.ai/api/v1/chat/completions"
headers = {
    "Authorization": f"Bearer {api_key}",
    "Content-Type": "application/json",
    "HTTP-Referer": "https://botflo.com/",  # Optional. Site URL for rankings on openrouter.ai.
    "X-Title": "LLM Accuracy Comparisons for Structured Outputs",
}
payload = {
    "model": model_name,
    "messages": [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": prompt},
    ],
    "reasoning": {
        "effort": "high"  # ask the model to use high reasoning effort
    },
}

before = time.time()  # start the timer just before the request goes out
response_full = requests.post(url, headers=headers, data=json.dumps(payload))
after = time.time()
elapsed = after - before  # latency of this single request, in seconds

response = response_full.json()
inner_response_text = response['choices'][0]['message']['content']
full_response_json = response_full.json()  # full response body, kept for saving later
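The snippet above handles a single request. A minimal sketch of the surrounding 100-request loop might look like the following; the `build_payload` and `call_model` helpers and the `results` list are illustrative assumptions, not the author's exact code:

```python
import json
import os
import time

import requests


def build_payload(model_name, system_prompt, prompt):
    """Assemble the request body used in the lesson's snippet."""
    return {
        "model": model_name,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": prompt},
        ],
        "reasoning": {"effort": "high"},
    }


def call_model(api_key, model_name, system_prompt, prompt):
    """Send one chat-completion request to OpenRouter and time it."""
    url = "https://openrouter.ai/api/v1/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    payload = build_payload(model_name, system_prompt, prompt)
    before = time.time()
    response_full = requests.post(url, headers=headers, data=json.dumps(payload))
    elapsed = time.time() - before
    response = response_full.json()
    return {
        "content": response["choices"][0]["message"]["content"],
        "elapsed": elapsed,
        "full_response": response,
    }


# Only fire the 100 real requests when an API key is actually available.
api_key = os.environ.get("OPENROUTER_API_KEY")
if api_key:
    results = [
        call_model(api_key, "openai/gpt-4o-mini", "You are helpful.", "Hello")
        for _ in range(100)
    ]
```

Keeping each request's record in a single dictionary makes the later bookkeeping (success rates, latency percentiles) straightforward.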
Later in the code, I will be saving all this information into a JSON file.
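As a rough sketch of that saving step, assuming the per-request records are collected in a `results` list (the list contents and the `llm_results.json` filename here are placeholders for illustration):

```python
import json

# Hypothetical per-request records; the real list holds one entry per API call.
results = [
    {"content": "...", "elapsed": 1.23, "full_response": {}},
]

# Write everything to disk so the accuracy numbers can be recomputed later.
with open("llm_results.json", "w", encoding="utf-8") as f:
    json.dump(results, f, indent=2)
```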
Make note of the different variables used in this code snippet because we will be revisiting them over the next few lessons.