LLMs vs NLP libraries
People will prefer LLMs over NLP libraries like spaCy for three reasons.
Three forces are converging
Three forces are converging now:
Verifier’s Law
Verifier’s law: The ease of training AI to solve a task is proportional to how verifiable the task is. All tasks that are possible to solve and easy to verify will be solved by AI. (Source)
Why four LLMs?
Obviously, we want to use as few LLMs as possible for cost reasons. Four LLMs is the smallest number which a) allows a supermajority (three of four agreeing) and b) allows a human to act as the tiebreaker when the vote splits 2-2.
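As a concrete illustration of that rule, here is a minimal sketch of the decision logic, assuming a simple accept/reject vote; the type and function names are illustrative, not part of the course code.

```ts
// Sketch of the 4-LLM voting rule: a 3-of-4 supermajority decides
// automatically, while a 2-2 split is escalated to the human tiebreaker.
// "Verdict", "decide", and the vote labels are illustrative assumptions.

type Verdict = "accept" | "reject";

function decide(votes: Verdict[]): Verdict | "needs-human-tiebreak" {
  const accepts = votes.filter((v) => v === "accept").length;
  const rejects = votes.length - accepts;
  if (accepts >= 3) return "accept"; // supermajority reached
  if (rejects >= 3) return "reject"; // supermajority reached
  return "needs-human-tiebreak";     // 2-2 split: the human decides
}

// Example: two models accept, two reject, so the human breaks the tie.
console.log(decide(["accept", "accept", "reject", "reject"]));
```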
OpenRouter Response Schema vs Structured Outputs
You can see both mentioned in the supported parameters on the OpenRouter model search page. What is the difference? response_format makes a best effort to return valid JSON; in other words, you can expect JSON.parse to not throw an error. Structured Outputs, on the other hand, is a subset of response_format. It makes a best…
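A minimal sketch of the two request shapes, assuming OpenRouter's OpenAI-compatible chat completions endpoint; the model name, schema, and API key placeholder below are illustrative assumptions, not taken from the note.

```ts
// Two ways to ask for JSON from the same endpoint. The first is plain JSON
// mode (best-effort valid JSON); the second is Structured Outputs, where the
// response is constrained to the supplied JSON Schema.

const ENDPOINT = "https://openrouter.ai/api/v1/chat/completions";
const API_KEY = "sk-or-..."; // placeholder, not a real key

// 1) response_format as JSON mode: output should parse, shape not enforced.
const jsonModeBody = {
  model: "openai/gpt-4o-mini", // illustrative model choice
  messages: [{ role: "user", content: "List two colors as JSON." }],
  response_format: { type: "json_object" },
};

// 2) Structured Outputs: the response must match the schema below.
const structuredBody = {
  model: "openai/gpt-4o-mini",
  messages: [{ role: "user", content: "List two colors." }],
  response_format: {
    type: "json_schema",
    json_schema: {
      name: "colors",
      strict: true,
      schema: {
        type: "object",
        properties: { colors: { type: "array", items: { type: "string" } } },
        required: ["colors"],
        additionalProperties: false,
      },
    },
  },
};

async function call(body: unknown): Promise<unknown> {
  const res = await fetch(ENDPOINT, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(body),
  });
  const data = await res.json();
  // In both cases JSON.parse should succeed; only the structured request
  // guarantees the parsed object actually matches the schema.
  return JSON.parse(data.choices[0].message.content);
}
```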
Introduction to Structured Outputs
Killer app for LLMs: Simon Willison considers structured data extraction the “killer app” for LLMs. Versatile use cases: in this course I will explain the many different ways you can use this feature. Required for tool calling: you will be using structured output for tool calling. Important step in MCP use: and this in turn…
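To make the tool-calling point concrete, here is a hedged sketch of how a tool declares its parameters as a JSON Schema and how the model's arguments come back as schema-shaped JSON; the tool name and fields are illustrative assumptions, not the course's actual tools.

```ts
// A tool definition in the OpenAI-style function-calling format: the
// "parameters" field is a JSON Schema, which is exactly where structured
// output comes in. "get_weather" and its fields are made up for illustration.

const weatherTool = {
  type: "function",
  function: {
    name: "get_weather",
    description: "Look up the current weather for a city.",
    parameters: {
      type: "object",
      properties: {
        city: { type: "string" },
        unit: { type: "string", enum: ["celsius", "fahrenheit"] },
      },
      required: ["city"],
      additionalProperties: false,
    },
  },
};

// When the model calls the tool, its arguments arrive as a JSON string shaped
// by the schema above, so reading them is just JSON.parse plus a type.
interface WeatherArgs {
  city: string;
  unit?: "celsius" | "fahrenheit";
}

function parseToolArguments(argumentsJson: string): WeatherArgs {
  return JSON.parse(argumentsJson);
}

console.log(parseToolArguments('{"city": "Berlin", "unit": "celsius"}'));
```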