By Byron V. Acohido
Stephen Klein didn't just stir the pot. He lit a fire.
Related: Klein's LinkedIn debate
In a sharply worded post that quickly went viral on LinkedIn, the technologist and academic took direct aim at what he called the "hype-as-a-service" business model behind so-called agentic AI. His critique was blunt: what the industry is selling as autonomous, goal-directed intelligence is, more often than not, little more than brittle prompt chains and hard-coded workflows dressed up in fancy language.
In Klein's view, most current agentic systems are glorified wrappers: task orchestrators stitched together from APIs and large language models. They're not "agents," he argues, unless they demonstrate the hallmarks of true autonomy: self-directed goal setting, adaptive reasoning, memory, and the ability to operate across changing environments with minimal human intervention. Anything less? Marketing noise.
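To make Klein's distinction concrete, here is what a "glorified wrapper" can look like in practice: a minimal, hypothetical prompt chain in Python, assuming the OpenAI client library. The ticket-handling steps and prompts are illustrative, not drawn from any real product.

```python
# A "wrapper" in Klein's sense (illustrative): a fixed prompt chain with no
# goal-setting, memory, or adaptation. Every step is hard-coded in advance.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    return client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content

def handle_ticket(ticket_text: str) -> str:
    # Step 1: summarize. Step 2: classify. Step 3: draft a reply.
    # The "workflow" never deviates, no matter what the model returns.
    summary = ask(f"Summarize this support ticket:\n{ticket_text}")
    category = ask(f"Classify this summary as billing/tech/other:\n{summary}")
    return ask(f"Draft a reply for a {category} ticket:\n{summary}")
```

Useful, yes. But nothing here sets its own goals or adapts its next step; contrast it with the looping sketch further down this piece.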
To his credit, Klein struck a nerve. His post drew a wave of applause from engineers and skeptics frustrated by the overreach of AI branding. But the backlash was telling, too. A quieter chorus of industry practitioners, startup builders, and a few thoughtful researchers responded not with denial, but with a question: even if most of today's systems aren't fully agentic, aren't they still meaningfully new?
Cybersecurity use cases
That's where Klein's clarity turns brittle. Because while his academic rigor is valuable, his framing misses what's actually happening, not in the hype decks, but on the ground.
At RSAC 2025, I spoke with over a dozen cybersecurity vendors quietly integrating LLM-powered decision support into core operations. Simbian is using GenAI to power a co-pilot that helps SOC analysts prioritize alerts in real time. Corelight is using it to sift network telemetry for subtle threat patterns. Are these "agents" in the Kleinian sense? Not quite. Are they meaningfully changing how work gets done in high-stakes, regulated environments? Absolutely.
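Neither vendor has published its internals, but the basic shape of LLM-assisted triage is easy to sketch. Here is a minimal, hypothetical Python example assuming an OpenAI-style chat API; the model name, prompt, and alert fields are my own placeholders, not Simbian's design.

```python
# Minimal sketch of LLM-assisted alert triage (hypothetical; no vendor's code).
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TRIAGE_PROMPT = (
    "You are a SOC triage assistant. Given a security alert as JSON, "
    "return JSON with fields: severity (low|medium|high|critical), "
    "rationale (one sentence), and suggested_next_step."
)

def triage_alert(alert: dict) -> dict:
    """Ask the model to rank one alert; the analyst still makes the call."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": TRIAGE_PROMPT},
            {"role": "user", "content": json.dumps(alert)},
        ],
    )
    return json.loads(response.choices[0].message.content)

if __name__ == "__main__":
    alert = {
        "source": "EDR",
        "host": "finance-ws-12",
        "signal": "powershell.exe spawned from excel.exe",
        "count_last_hour": 3,
    }
    print(triage_alert(alert))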
And it's not just the security sector.
At NTT Data, I encountered one of the most grounded, and arguably most agentic, use cases yet. Their system currently uses traditional computer vision models to tag visual elements in live-stream video: helmet vs. no helmet, license plate vs. background. These pixel-level attributes drive Attribute-Based Encryption (ABE) that redacts content dynamically, preserving privacy while enforcing policy.
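NTT hasn't published code, but the pipeline's shape is straightforward: a vision model tags regions, a policy table maps attributes to an ABE access policy, and matching regions are encrypted before streaming. A schematic sketch in Python, with the ABE step stubbed out (in a real system it would use a ciphertext-policy ABE library such as charm-crypto); everything here is illustrative.

```python
# Schematic of attribute-driven redaction (illustrative; not NTT's code).
from dataclasses import dataclass

@dataclass
class Region:
    box: tuple          # (x, y, w, h) pixel coordinates from the CV model
    attribute: str      # e.g. "license_plate", "face", "helmet"

# Which viewer attributes may decrypt which content. In real CP-ABE, the
# ciphertext embeds this policy; only keys satisfying it can decrypt.
REDACTION_POLICY = {
    "license_plate": "role:law_enforcement",
    "face": "role:privacy_officer",
}

def abe_encrypt(region_pixels: bytes, policy: str) -> bytes:
    """Stub for ciphertext-policy ABE encryption of one region."""
    return b"<ciphertext under policy: " + policy.encode() + b">"

def redact_frame(frame: bytes, regions: list[Region]) -> list[tuple]:
    """Encrypt policy-matched regions; everything else passes through.
    For brevity we pass the whole frame where a crop would be used."""
    output = []
    for r in regions:
        policy = REDACTION_POLICY.get(r.attribute)
        if policy:
            output.append((r.box, abe_encrypt(frame, policy)))
        else:
            output.append((r.box, frame))  # no redaction required
    return output

# Example: one detected plate, one helmet; only the plate is encrypted.
regions = [Region((10, 20, 80, 30), "license_plate"),
           Region((5, 5, 40, 60), "helmet")]
print(redact_frame(b"<frame-bytes>", regions))
```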
But what makes this truly next-gen is what comes next: NTT's engineers are layering in Mistral, a compact, open-source vision-language model (VLM), locally fine-tuned to operate as a domain-specific AI agent. This isn't a general-purpose chatbot. It's an embedded model designed to interpret live video semantically: identifying nuanced events like theft or assault, flagging the actors involved, and triggering differential encryption in real time.
In short: Mistral isn't just adding inference; it's becoming an embedded decision-maker. Trained on both public and private datasets, it brings contextual judgment to surveillance tasks that were once binary. That's not hype. That's a purpose-built agent system, architected for real-world autonomy under strict policy constraints.
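What an embedded VLM decision-maker might look like, sketched under assumptions: a locally served model behind an OpenAI-compatible endpoint (as vLLM or Ollama expose), with the endpoint URL, model name, event labels, and JSON schema all invented for illustration.

```python
# Sketch of a VLM as an embedded decision-maker (hypothetical; not NTT's code).
import base64
import json
import requests

VLM_URL = "http://localhost:8000/v1/chat/completions"  # local inference server
EVENTS_REQUIRING_ENCRYPTION = {"theft", "assault"}

def classify_frame(jpeg_bytes: bytes) -> dict:
    """Ask the local VLM to label the scene and list involved actors."""
    image_b64 = base64.b64encode(jpeg_bytes).decode()
    payload = {
        "model": "local-vlm",
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text":
                 'Label this frame as JSON: {"event": "none|theft|assault", '
                 '"actors": [bounding boxes]}'},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    }
    resp = requests.post(VLM_URL, json=payload, timeout=10)
    return json.loads(resp.json()["choices"][0]["message"]["content"])

def decide(frame: bytes) -> str:
    """The 'agentic' step: the model's judgment selects the encryption path."""
    verdict = classify_frame(frame)
    if verdict.get("event") in EVENTS_REQUIRING_ENCRYPTION:
        return f"encrypt actors under incident policy: {verdict['actors']}"
    return "apply default privacy redaction only"
```

The design point is the last function: the model's semantic read of the scene, not a hand-written rule, selects which policy fires.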
Agentic AI citizens
Klein is right to call for clearer definitions. But in cases like this, the semantics are chasing something that's already real: systems quietly reshaping how autonomy is engineered and applied.
Dr. Hidenori Tanaka, head of NTT's Physics of AI group, takes this idea a step further. He envisions a future in which LLM-enabled agents are not merely optimized for engagement, but purposefully designed with domain-specific personalities aligned to their intended use. Chatbots, he argues, are no longer inert tools; they are new actors in the societal fabric, "citizens," in his words, shaping human cognition through everyday interaction.
Tanaka's central insight is that AI personality isn't accidental. It's engineered, through system prompts, training data, and corporate incentives. And this, he warns, creates macro-level effects: if AI is universally optimized for comfort or virality, it risks reinforcing polarization and eroding public trust. Instead, he calls for a scientific discipline that can translate open-ended moral questions (What should an AI value? What does it mean to be kind?) into measurable benchmarks and controllable behaviors.
His goal is not to anthropomorphize machines but to embed deliberate design into how agents evolve. He wants to transform LLM development from an ad hoc endeavor into a grounded, interdisciplinary science, one rooted in physics, psychology, and ethics, and capable of cultivating agents that support, rather than distort, our shared cognitive space.
The coining of “agentic AI”
The truth is, the term agentic AI didn't begin in academia. It crept into the lexicon in mid-2024, as the generative wave matured. With tools like LangChain, OpenAI's Agents SDK, and AutoGen, developers began building systems that could remember context, select tools, pursue goals, and adapt their next steps based on real-world outcomes. The industry needed language to describe what felt like a new capability, and agentic sounded right.
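Stripped of framework branding, that capability reduces to a loop. A bare-bones, framework-free sketch in Python; the two tools and the stopping rule are placeholders, and a production agent would add persistent memory, error handling, and guardrails.

```python
# Bare-bones agent loop (illustrative): the model picks a tool, observes the
# result, and decides its next step. Frameworks like LangChain, AutoGen, and
# OpenAI's Agents SDK wrap this same cycle in memory, retries, and guardrails.
import json
from openai import OpenAI

client = OpenAI()

def search_web(query: str) -> str:          # placeholder tool
    return f"<top results for {query!r}>"

def run_code(snippet: str) -> str:          # placeholder tool
    return "<execution output>"

TOOLS = {"search_web": search_web, "run_code": run_code}

SYSTEM = (
    "You pursue the user's goal step by step. Reply in JSON as either "
    '{"tool": "search_web"|"run_code", "input": "..."} or {"answer": "..."}.'
)

def run_agent(goal: str, max_steps: int = 5) -> str:
    history = [{"role": "system", "content": SYSTEM},
               {"role": "user", "content": goal}]
    for _ in range(max_steps):              # adapt next step from observations
        reply = client.chat.completions.create(
            model="gpt-4o-mini",
            response_format={"type": "json_object"},
            messages=history,
        ).choices[0].message.content
        decision = json.loads(reply)
        if "answer" in decision:            # goal reached; stop
            return decision["answer"]
        observation = TOOLS[decision["tool"]](decision["input"])
        history += [{"role": "assistant", "content": reply},
                    {"role": "user", "content": f"Observation: {observation}"}]
    return "step budget exhausted"
```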
Thought leaders like Andrew Ng, a pioneering AI educator and founder of DeepLearning.AI, helped popularize the term agentic AI in 2023 and 2024. Through his newsletters, courses, and public commentary, Ng framed agentic systems as LLM-powered applications capable of goal-seeking behavior and multi-step coordination, a framing that gave the term significant traction among developers and enterprise adopters. By late 2024, it was everywhere: product sheets, panel discussions, investor pitches.
Critics like Klein saw this as definitional drift. But I'd argue it's closer to natural language evolution: messy, organic, shaped by use, not decree.
Hard lines vs. gradient adoption
Which brings us to the present tension: academic purists want hard lines. Practitioners are working in gradients.
And while we absolutely need to push back on misleading claims, especially when real-world trust and safety are on the line, we should be careful not to flatten the conversation into a binary. Much of what's now labeled agentic AI may fall short of Klein's threshold, but that doesn't make it trivial.
The shift is real. We're moving from tools that merely respond to input, to systems that help initiate, coordinate, and execute. It's not artificial general intelligence. It's not even full autonomy. But it's a different texture of software, and that matters.
In a recent essay I called Wither Genius?, I described how this shift is crowding the middle: the space once occupied by mid-tier professional fluency (the technical writer, the financial analyst, the policy drafter) is being compressed by LLMs that can now emulate structure and tone with alarming fluency. And yet the upper and lower bounds of creativity, the instinct to ask a new question, the intuition to challenge the prompt, remain deeply human. The kind of genius expressed in Truman Capote's narrative nuance, Rachel Springs' artistic social worldbuilding, or Frank Herbert's philosophical scaffolding in Dune is still far beyond what language models can conjure. That frontier remains ours, for now.
What we're seeing is scaffolding being laid for something new. That scaffolding might not meet every checkbox on Klein's autonomy rubric, but it's already supporting workflows, insights, and decision models that didn't exist two years ago.
A new kind of agency
More importantly, it's enabling a new kind of agency, not just in machines, but in people.
You see it in the daikon farmer tuning a Hugging Face model to automate irrigation. In the local teacher tweaking GenAI lesson plans for her students, but refusing to let the model track them. In the musicians who launched a streaming radio station in my coastal hometown, co-composing their scripts with AI.
None of this fits neatly into Klein's frame. But it's happening. And it's powerful.
So yes: let's call out overhyped claims. Let's raise the bar for what we mean by agentic. But let's also acknowledge the deeper transformation underway. This isn't just a semantic debate. It's the early friction of a new human-machine relationship, one that's still taking shape.
Klein wants to define the term. The rest of us are trying to define the future.
Let's not confuse the two. I'll keep watch, and keep reporting.
Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.
(Editor's note: A machine assisted in creating this content. I used ChatGPT-4o to accelerate research, to scale correlations, to distill complex observations, and to tighten structure, grammar, and syntax. The analysis and conclusions are entirely my own, drawn from lived experience and editorial judgment honed over decades of investigative reporting.)