
What is a good example of agentic AI?

See the previous article: What is the difference between agentic and non-agentic AI?

AI Summary search is usually not agentic, but here is an example of an agentic task.

I copied the transcript from one of the YouTube videos in the Make with Notion 2025 playlist.

I then prompted Notion AI to create a database of 5-minute transcript segments and their summary bullet points.

Here is the result:

By generating this summary, I was able to see that between the 55th and 60th minute they talk about upcoming features like custom agents.

The database creation task can be considered agentic.

But it is important to understand that agentic is not a binary yes/no answer.

It is better to think of it like a volume knob.

The level of “agency” can differ greatly among tasks which are all considered agentic, and I will explain this in a future article.

I asked Grok to rate how “agentic” this task is on a scale of 1 to 10.

Grok’s assessment

Evaluation of the Prompt as “Agentic AI”

Rating: 8/10

This prompt scores highly on the agentic AI scale because it embodies key hallmarks of agentic behavior: autonomous task decomposition, multi-step planning, and creative execution with minimal oversight. Here’s a breakdown of why it lands at an 8 (strong but not maximal, as it relies on a predefined structure without full open-ended adaptation or external tool integration).

Strengths Indicating High Agency (Pushing Toward 10):

  • Goal-Oriented Autonomy: The prompt sets a clear, complex goal—transforming a raw transcript into a structured Notion database—while leaving room for the AI to independently handle subtasks like precise chunking (e.g., deciding exact 5-minute boundaries from timestamps), generating “suitable” titles, crafting concise 5-bullet summaries, and selecting relevant emojis. This mirrors an agent’s ability to pursue objectives without hand-holding.
  • Multi-Step Reasoning and Execution: It requires sequential actions: (1) parse timestamps, (2) segment into chunks, (3) build database schema (columns for times/transcript), (4) enrich each row’s linked page with dynamic content (title + timed summary + collapsible transcript). This demands planning, memory of prior chunks, and iterative content generation, akin to an agent chaining actions.
  • Creative Decision-Making: Elements like “suitable title” and “suitable emojis representing each bullet point” inject subjectivity, forcing the AI to infer context, tone, and relevance from the transcript—hallmarks of agentic inference rather than rote rule-following.
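The chunking step above can be sketched in code. This is a minimal illustration, not what Notion AI actually runs: it assumes transcript lines are prefixed with `M:SS` timestamps (real YouTube transcript exports may differ), and the `chunk_transcript` function name is hypothetical.

```python
import re
from collections import defaultdict

# Matches lines like "12:34 some spoken text" (assumed transcript format).
LINE_RE = re.compile(r"^(\d+):(\d{2})\s+(.*)$")

def chunk_transcript(lines, chunk_minutes=5):
    """Group timestamped transcript lines into fixed-size time chunks."""
    chunks = defaultdict(list)
    for line in lines:
        m = LINE_RE.match(line.strip())
        if not m:
            continue  # skip lines without a timestamp
        minutes, text = int(m.group(1)), m.group(3)
        chunks[minutes // chunk_minutes].append(text)
    # Shape each chunk like a database row: start/end times plus transcript.
    return [
        {
            "start": f"{i * chunk_minutes}:00",
            "end": f"{(i + 1) * chunk_minutes}:00",
            "transcript": " ".join(texts),
        }
        for i, texts in sorted(chunks.items())
    ]

rows = chunk_transcript([
    "0:05 welcome everyone",
    "4:59 let's get started",
    "5:10 first demo",
])
```

The AI’s remaining work (titles, bullet summaries, emojis) sits on top of rows like these, which is where the creative decision-making comes in.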

Limitations Capping It Below 10:

  • High Specificity Reduces Flexibility: The rigid format (exactly 5 bullets, collapsible widget, verbatim inclusion) constrains the AI’s adaptability. A more agentic prompt might allow the AI to propose optimizations (e.g., variable chunk sizes based on topic shifts) or self-correct via feedback loops.
  • No Explicit Tooling or Iteration: It assumes the AI can “create” in Notion but doesn’t invoke tools (e.g., API calls) or loops for refinement, which ultra-agentic systems (like those with ReAct patterns) would include for robustness.

Overall, this is a sophisticated prompt that turns an LLM into a pseudo-agent for content organization, far beyond simple Q&A. With tweaks for more openness (e.g., “optimize chunking as needed”), it could hit 9-10.
