Generating the comparison dataset
- For each LLM, you need 100 valid API responses
- All of the responses should contain valid JSON
- Retry until each of the 100 reports contains valid JSON
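The retry loop above can be sketched roughly as follows. Note that `call_api` is a hypothetical stand-in for whatever function actually hits the LLM API; the real script from the course differs.

```python
import json


def is_valid_json(text: str) -> bool:
    """Return True if the whole text parses as JSON."""
    try:
        json.loads(text)
        return True
    except json.JSONDecodeError:
        return False


def collect_valid_responses(call_api, target=100, max_attempts=500):
    """Call the (hypothetical) API until we have `target` responses
    containing valid JSON, retrying up to `max_attempts` total calls."""
    responses = []
    attempts = 0
    while len(responses) < target and attempts < max_attempts:
        attempts += 1
        text = call_api()
        if is_valid_json(text):
            responses.append(text)
    return responses
```

The `max_attempts` cap is just a safeguard so a badly behaved model cannot keep the loop running forever.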
I used Claude to generate this code walkthrough from the Python script. This script validates the API response and marks contains_valid_json as True or False. You can download the Python script (.py file) from my Measuring LLM Accuracy course. The script is part of a bigger project and cannot be run without pulling in code…
I used Claude to generate this code walkthrough from the Python script. This code saves the response from the LLM API to a local folder. It also handles retries when previous API calls did not produce valid JSON. You can download the Python script (.py file) from my Measuring…
A comparison demo
- Use the system defined in the previous chapter for a single field
- Evaluate and compare the accuracy for each field (where comparable)
- Use Citations and Explanation for the adjudication
We use four metrics related to the JSON responses coming back from LLMs:

- api_error: If response_full.json() throws an error, that is considered an api_error.
- is_pure_json: If json.loads(inner_response_text) does not throw any error, then is_pure_json is true.
- contains_valid_json: Sometimes you have inner_response_text which looks like this: This would be perfectly valid JSON once you remove the…
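A sketch of how these flags could be computed. The variable names `response_full` and `inner_response_text` come from the script described above, but the fallback heuristic used for `contains_valid_json` (slicing from the first `{` to the last `}`) is my own illustration, not necessarily the course's implementation:

```python
import json


def classify_response(response_full, inner_response_text):
    """Compute the JSON-related metrics for one LLM API response."""
    metrics = {"api_error": False, "is_pure_json": False,
               "contains_valid_json": False}

    # api_error: does response_full.json() throw?
    try:
        response_full.json()
    except Exception:
        metrics["api_error"] = True

    # is_pure_json: does the inner text parse as-is?
    try:
        json.loads(inner_response_text)
        metrics["is_pure_json"] = True
        metrics["contains_valid_json"] = True
    except (json.JSONDecodeError, TypeError):
        # Fallback (illustrative): look for a JSON object embedded in
        # surrounding text, e.g. wrapped in prose or markdown fences.
        start = inner_response_text.find("{")
        end = inner_response_text.rfind("}")
        if start != -1 and end > start:
            try:
                json.loads(inner_response_text[start:end + 1])
                metrics["contains_valid_json"] = True
            except json.JSONDecodeError:
                pass
    return metrics
```

So a response can fail `is_pure_json` (extra text around the object) while still passing `contains_valid_json`, which matches the distinction drawn between the two metrics above.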
This is the method used in the code (it is called 100 times). Later in the code, I save all of this information into a JSON file. Make note of the different variables used in this code snippet, because we will revisit them over the next few lessons.
Using LLMs to extract structured data will allow you to easily separate AI hype from genuine progress. It lets you use a systematic process (like the one I explain in this course) to build an intuition for how well AI can do certain tasks. When you see how often AI can fail…
Some people refer to agents as “models using tools in a loop.” Understanding how structured data extraction works is an important part of learning about agentic AI, since it is often the extracted structured data that is sent to the tool.
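To make that concrete, here is a toy illustration of structured output being routed to a tool. The tool registry, tool name, and JSON schema are all invented for this example:

```python
import json

# Hypothetical tool registry: the "tools" an agent loop can call.
TOOLS = {
    "get_weather": lambda city: f"Sunny in {city}",
}


def dispatch(model_output: str) -> str:
    """Parse the structured data the model emitted and route it to a tool.
    Expects output like: {"tool": "get_weather", "args": {"city": "Paris"}}"""
    call = json.loads(model_output)
    return TOOLS[call["tool"]](**call["args"])
```

If the model's output is not valid JSON, `json.loads` raises and the tool call never happens, which is exactly why the validity metrics above matter for agentic use cases.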
You can run this script for each LLM on OpenRouter and get a good idea of how things are evolving in terms of LLM reasoning. I use this approach in this course to evaluate the ability of many different LLMs to extract structured data (so you can just get this course if you don’t want…