Task by Task: The Workflows We're Handing to AI — One Decision at a Time
By Sean Martin, CISSP
Lens Four — Where business, innovation, and messaging come into focus
March 4, 2026
Listen to this article, read by TAPE9
The Hidden Pattern: How AI Takes Over [Task by Task]
Nobody decided to remove humans from workflows; it happened one small decision at a time. Here's the uncomfortable truth about AI adoption.
This isn't about fighting AI; it's about understanding how workplace transformation really happens. Each "reasonable" decision builds toward complete process automation.
Watch the video summary ▶ https://youtu.be/R4n8ZCThmrM
I look at the intersection of business, technology, and messaging regularly through three lenses: how organizations operate and run their programs, how innovation and market forces are reshaping what's possible, and how the language and narrative around technology shapes what gets funded, prioritized, and trusted. This week, all three lenses are pointing at the same thing — and the picture is clearer than most people are comfortable admitting.
Nobody decided to remove the human from the workflow.
That's the part worth sitting with. In boardrooms, in budget reviews, in vendor evaluations — nobody stood up and said "let's build a business process with no humans in the loop." What happened instead was a series of smaller decisions, each of them reasonable, each of them local, each of them defensible. An HR team bought a screening tool to handle application volume. A legal department licensed an AI drafting platform to reduce outside counsel spend. A finance team deployed automated invoice processing to close faster. None of those decisions, on their own, looks like giving up control. But map them together — task by task, across a single workflow — and something significant emerges: the human is already optional across most of the process.
That's what I want to examine this week. Not whether AI should take on more of the work — that debate is largely settled in the data. But whether organizations have consciously mapped what they've actually handed over, and what that means for how businesses operate, compete, and carry accountability going forward.
LENS ONE — BUSINESS OPERATIONS
Are We Delegating Efficiently, or Giving Up Control?
The honest answer is: both — and most organizations can't tell the difference yet.
Let me trace two workflows that most businesses run every week. Not as edge cases, not as experiments, but as normal operating processes with deployed tools and real outcomes.
The Hiring Workflow
A job requisition opens. What happens next used to require a recruiter's judgment at every step. Here's what the same process looks like with current tools.
Task 1 — Resume screening. Unilever reported saving over £1 million annually after deploying AI screening tools across its hiring pipeline.[1] McDonald's rolled out Paradox's conversational AI "Olivia" across thousands of locations to handle applicant screening and scheduling — candidates move from application to interview without a human recruiter touching the file.[2] AI tools now rank and filter applicants in seconds, against criteria a human set once and that now run autonomously at scale.
Task 2 — Interview scheduling. A large U.S. financial services firm using GoodTime reduced time-to-fill by weeks by automating calendar coordination alone — the moment a candidate cleared screening, an interview invite went out within hours.[2] No recruiter coordination required.
Task 3 — First-round interviewing and assessment. HireVue — used by JPMorgan, Goldman Sachs, Amazon, Microsoft, Emirates Airlines, and hundreds of others — conducts asynchronous first-round interviews with no human present.[3] The candidate records answers to pre-set questions on their own schedule; the AI analyzes speech, language, and behavioral indicators and returns a ranked score. No recruiter watches the recording until after the AI has already filtered and ranked the pool. Emirates Airlines reduced its hiring cycle from 60 days to 7 using this approach. The human interviewer enters at round two — but by then, the AI has already determined who gets that meeting.
Task 4 — Offer generation and outreach. Recruiting platforms including Lindy and Recruiterflow's Agent Mode draft, personalize, and send offer communications and follow-up sequences autonomously.[4] The offer letter is written before a recruiter opens their inbox.
Task 5 — Onboarding initiation. End-to-end workflow automation — deployable today through platforms like n8n — covers the full pipeline from CV submission through assessment, scheduling, and status tracking, without human intervention at any step.[5]
Five tasks. Five separate vendor decisions. Each one made independently, each one with its own ROI story. And together: a process where a candidate can move from application to offer without a single human making an active decision along the way.
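The task-by-task map described above is simple enough to make concrete. The sketch below is purely illustrative — the task names, vendor labels, and fields are hypothetical placeholders, not any product's schema — but it shows the exercise most organizations have skipped: flagging, per task, whether a human still makes an active decision.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    vendor: str           # the tool that performs the task (generic label)
    automated: bool       # runs without a human initiating each step
    human_decision: bool  # a person must actively approve the outcome

# The five hiring tasks traced above, as five independent procurement decisions
hiring = [
    Task("resume_screening",      "screening AI",      True, False),
    Task("interview_scheduling",  "scheduling AI",     True, False),
    Task("first_round_interview", "assessment AI",     True, False),
    Task("offer_generation",      "outreach agent",    True, False),
    Task("onboarding_initiation", "workflow platform", True, False),
]

def human_checkpoints(workflow):
    """Tasks where a person still makes an active decision."""
    return [t.name for t in workflow if t.human_decision]

def human_optional(workflow):
    """True when the end-to-end process can run with no active human decision."""
    return all(t.automated and not t.human_decision for t in workflow)
```

Run against the hiring map, `human_checkpoints(hiring)` comes back empty — which is the point: no single purchase produced that end state, but the five together did.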
The Legal Contracting Workflow
Now run the same analysis across a legal department's standard contracting process.
Task 1 — Matter intake and triage. Checkbox AI handles incoming legal requests through intelligent chatbots that capture context, ask clarifying questions, and route matters to the right team automatically — no paralegal spending the morning clearing an email queue.[6]
Task 2 — Legal research. Harvey AI, now embedded in Am Law 100 firms, surfaces relevant case law, statutes, and precedent across large document sets in minutes.[7] Lexis+ AI provides contextual legal reasoning on demand. What used to be a junior associate's full day is now a prompt.
Task 3 — Contract drafting. Spellbook drafts contracts inside Microsoft Word. LegalOn users report NDA reviews dropping from two hours to thirty minutes.[8] One managing partner reported a 40% increase in billing capacity — not from doing better work, but because AI wrote the first draft on every matter.
Task 4 — Contract review and redlining. Luminance identifies anomalies and flags deviations from playbooks across thousands of contracts simultaneously.[9] Kira extracts specific clauses at scale. Across the category, AI contract review tools are reducing review time by 75 to 85 percent.[8] A task that defined legal practice for decades is compressing toward minutes.
Task 5 — Approval routing and post-execution management. ContractPodAi's AI handles routing, obligation tracking, and compliance monitoring after signature.[10] Ironclad manages the full contract lifecycle — renewals, expirations, obligation triggers — without a paralegal maintaining a spreadsheet.
Again: five tasks, five products, five separate procurement decisions. And end to end: a contracting workflow where a contract can move from request to executed agreement without a lawyer authoring a single original clause.
And this pattern runs across the business, not just in these two functions. Finance has it — AI invoice processing platforms like Ramp and HighRadius handle capture, validation, approval routing, and payment scheduling end to end, with one hospital association reporting a reduction in batch processing time from ten hours to minutes.[11] Customer service has it — Gartner projects that agentic AI will resolve 80 percent of common customer service issues without human intervention by 2029, up from effectively zero in 2024.[12] Security operations has it — and I've had direct conversations with the vendors building these tools. Edward Wu, founder of Dropzone AI, told me ahead of Black Hat USA 2025: "Nobody wants to be a tier-one analyst forever." Subo Guha of Stellar Cyber described a "digital army" of AI agents that filter 70 to 80 percent of alerts before a human analyst sees them.[13] The pattern looks the same whether the workflow is closing a contract or closing a security incident.
The business question this raises isn't whether the tools work — most of them do. The question is whether organizations have a clear, deliberate answer to: which tasks require a human decision, and why? Because right now, many organizations are answering that question by default — one purchase at a time — rather than by design.
Gartner puts a number on the trajectory: at least 15 percent of day-to-day work decisions will be made autonomously through agentic AI by 2028, up from essentially zero in 2024.[14] That isn't a distant forecast. It's a projection from a baseline that is already moving inside most mid-to-large enterprises today.
LENS TWO — INNOVATION AND MARKET SHIFTS
What Is the Market Building, and How Fast Is It Moving?
The market knows exactly what it's building. It's not naming it directly — but the architecture is unmistakable.
Gartner predicts that 40 percent of enterprise applications will include integrated task-specific AI agents by the end of 2026 — up from less than five percent today.[14] Not AI assistants that help people do their jobs. Agents that do the job, within defined parameters, without waiting for a human to initiate each step. By 2035, Gartner's best-case scenario has agentic AI driving approximately $450 billion in enterprise software revenue — roughly 30 percent of the entire market.[14]
Notice what the market is selling: "task-specific." Not "workflow-replacing." Not "role-eliminating." Task-specific. One task at a time, each one rationalized locally, each one with a discrete budget line and an ROI model. The cumulative effect — a workflow that no longer requires human participation — isn't what's being sold. It's what's being assembled.
This is where the business opportunity gets genuinely interesting, and where the strategic gap between leading and lagging organizations is opening up.
The companies deploying these tools aggressively are not doing so because they ran an experiment. They're doing it because the economics are compelling in ways that compound over time. Recruiterflow data shows recruiters saving six or more hours per week — a 33 percent productivity increase per person.[4] LegalOn users report reducing outside counsel dependency by thousands of dollars per contract cycle.[8] HighRadius customers report manual invoice processing costs of $12 to $20 per invoice dropping to a fraction of that.[11] Individually, those numbers are meaningful. Across a workforce, across a fiscal year, they represent a structural cost advantage that competitors without these tools cannot match.
But the more consequential shift isn't cost reduction. It's speed and scale. A hiring process that moves at machine speed — screening hundreds of applications in seconds, scheduling without back-and-forth, assessing without a calendar bottleneck — doesn't just save money. It changes who gets the best candidates. A legal team that can review, redline, and execute contracts in minutes rather than days doesn't just reduce billable hours. It changes how fast the business can move on deals.
The organizations that figured this out early are already operating at a different tempo than those still treating AI as a pilot program. Forrester predicts that less than 15 percent of firms will activate the agentic features already built into their automation platforms,[15] meaning most organizations are sitting on deployed capability they have not yet turned on. That gap between available and activated is where the competitive separation is opening up. That's not adoption at the margin. That's a structural shift in how work gets done.
The complication — and it's a real one — is that the vendor market is significantly ahead of organizational readiness for what these tools actually do. Gartner estimates that only around 130 of the thousands of companies now claiming to offer "agentic AI" are delivering genuine agentic capability.[14] The rest are rebranding existing automation and RPA under a new label. The difference matters enormously for buyers. A genuine agentic system reasons across tasks, adjusts based on outcomes, and handles exceptions without a human rewriting the playbook. A rebranded chatbot executes a fixed sequence and breaks at the edge case. Buying the latter under the belief it's the former is how organizations end up with expensive tools that create new workflow fragility instead of removing old bottlenecks.
The cybersecurity sector is already several steps ahead of most enterprise functions on this curve — and the outcomes are real, not experimental. As I wrote in the first Lens Four article, "The 72-Minute Gap," organizations deploying agentic SOC automation are realizing documented, measurable budget savings. Dropzone AI's Edward Wu described it plainly when we spoke at Black Hat USA 2025[16]: at roughly $36,000 per year, their platform ran 4,000 automated alert investigations — a number that simply cannot be staffed at comparable cost, and one that represents real savings on real security budgets, not a proof-of-concept. Subo Guha of Stellar Cyber, in two separate conversations with me — at RSAC 2025[17] and Black Hat 2025[18] — described their "digital army" of AI agents filtering 70 to 80 percent of incoming alerts, allowing analysts to focus on the fraction that requires human judgment. Both companies are emphatic that the value isn't hypothetical. The savings are already in the operating budget.
The market is also generating the next layer of infrastructure, which is itself a leading indicator of how far adoption has already gone. When AI agent identity governance becomes a funded product category — and it has — it means organizations have already deployed enough autonomous agents into production that they've discovered they can't see what those agents are doing or control what systems they can reach. Token Security, named a finalist in the RSAC 2026 Innovation Sandbox this week, was built entirely around this problem, governing AI agent identities with the same rigor applied to human users[19]: continuous discovery, intent-aware access controls, lifecycle management from deployment through decommissioning. Moderna has already scaled from 750 to more than 3,000 internal AI agents in a single year. Token's pitch is built on exactly that trajectory. The governance market doesn't emerge until the adoption that requires governance is already underway. That tells you where the actual baseline is.
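The governance pattern described here — identity, scoped access, lifecycle from deployment through decommissioning — can be sketched in a few lines. This is a toy illustration of the concept only; the class and method names are hypothetical and do not reflect Token Security's or Microsoft Entra's actual APIs.

```python
import datetime

class AgentRegistry:
    """Toy sketch of AI-agent identity governance: every agent gets an
    identity, an owner, scoped access, and an explicit lifecycle."""

    def __init__(self):
        self._agents = {}

    def register(self, agent_id, owner, allowed_systems):
        # Lifecycle start: agent is known, owned, and scoped from day one
        self._agents[agent_id] = {
            "owner": owner,
            "allowed": set(allowed_systems),
            "active": True,
            "registered": datetime.date.today(),
        }

    def may_access(self, agent_id, system):
        # Access check: unknown, decommissioned, or out-of-scope -> denied
        a = self._agents.get(agent_id)
        return bool(a and a["active"] and system in a["allowed"])

    def decommission(self, agent_id):
        # Lifecycle end: credentials stop working, record is retained
        self._agents[agent_id]["active"] = False

    def orphaned(self, current_owners):
        # Discovery: active agents whose owner is no longer in the org
        return [aid for aid, a in self._agents.items()
                if a["active"] and a["owner"] not in current_owners]
```

The `orphaned` check is the part most organizations discover they need only after the fact — agents outliving the people who deployed them is exactly the visibility gap this product category exists to close.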
Where this market is going: the agentic orchestration platform
Here is the structural shift worth watching closely. Right now, organizations are assembling workflows task by task through separate vendor decisions. The next phase of the market eliminates that friction entirely — and the infrastructure for it is already being built.
What's emerging is the agentic orchestration platform: a single governed environment where workflows can be defined in plain language, purpose-built agents can be selected, configured, guardrailed, and monitored, and the cumulative workflow is visible as a designed whole rather than discovered after the fact as an accumulated pile of vendor contracts.
The signs are already clear. Nintex — which serves more than 7,000 organizations across 100 countries and has been in workflow automation since long before "agentic AI" was a term — announced its Agentic Business Orchestration platform in September 2025, explicitly positioning it as a single governed layer unifying legacy systems, manual processes, and AI agents.[20] Its upcoming Agent Designer feature enables IT leaders and business technologists to build, evaluate, and orchestrate specialized agents in a low-code environment — without writing code and without leaving the governance framework. IDC's Maureen Fleming, commenting on the Nintex announcement, framed it directly: "Agentic business orchestration represents a shift toward coordinating people, systems and AI agents in governed ways that ensure automation and AI deliver measurable results at scale."[20]
Microsoft is moving in the same direction at enterprise scale. Copilot Studio — already connected to more than 1,400 systems via Model Context Protocol and Power Platform connectors — allows agents to be built in natural language, configured, monitored, and governed from a single interface.[21] Its Workflows Agent creates, manages, and runs automations directly from natural language chat. Every agent now gets a Microsoft Entra Agent ID — an identity credential that enables governance across the fleet. Microsoft's own framing for 2026 is pointed: the transition is from AI that helps people do work faster to AI that handles work on behalf of the organization, with humans escalating into exceptions rather than executing by default.[21]
The pattern is the same whether you're watching Nintex, Microsoft, Salesforce Agentforce, or ServiceNow's agentic capabilities. The market is converging on a platform model where:
- the workflow is defined up front in plain language,
- agents are selected and scoped to specific tasks with explicit permissions,
- guardrails are set before deployment, not bolted on after,
- human oversight points are designed in rather than assumed, and
- the full workflow is auditable and measurable as a system.
This is the architecture that changes the equation for every organization currently assembling workflows task by task through individual vendor decisions. When a single governed platform can map the complete workflow, configure the agents, set the guardrails, and show where human judgment is required — the gap between organizations that design their AI workflows deliberately and those that accumulate them by default closes. The organizations that get ahead of this transition will enter it with clear workflow maps and defined accountability structures. The ones that don't will find themselves importing their accumulated default choices into the new architecture and inheriting all the governance gaps that came with them.
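Stripped to its skeleton, the platform model the market is converging on looks something like the loop below. This is an illustrative sketch under stated assumptions — the function names, dictionary shapes, and escalation hook are invented for the example, not any vendor's API — but it shows the structural difference from task-by-task accumulation: permissions, guardrails, and the audit trail exist before any agent acts.

```python
def run_workflow(tasks, agents, guardrails, escalate):
    """Execute each task with its scoped agent; escalate to a human on
    missing permissions or a guardrail violation; log every outcome."""
    audit_log = []
    for task in tasks:
        agent = agents[task["name"]]
        # Explicit scoping: the agent may only touch what it was granted
        if not set(task["needs"]) <= set(agent["permissions"]):
            escalate(task, reason="agent lacks permission")
            audit_log.append((task["name"], "escalated"))
            continue
        result = agent["run"](task)
        # Guardrails are checked before the outcome stands
        if any(not check(task, result) for check in guardrails):
            escalate(task, reason="guardrail violation")
            audit_log.append((task["name"], "escalated"))
        else:
            audit_log.append((task["name"], "completed"))
    return audit_log
```

The design choice worth noticing: the human enters through `escalate`, by exception, exactly as the vendors describe it — which is why the guardrail and permission decisions made at configuration time are the real governance decisions.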
LENS THREE — LANGUAGE, MESSAGING, AND MARKET NARRATIVE
Why Does Everyone Say "Augment" When the Direction Is "Replace"?
Because "augment" gets funded, "replace" gets scrutinized, and the actual outcome is somewhere neither word honestly describes.
There is a phrase that appears in virtually every vendor pitch, analyst briefing, and enterprise communication about AI and automation: "we augment human capabilities, we don't replace them." It surfaces in hiring tech. It surfaces in legal AI. It surfaces in financial automation and customer service platforms. It is, at this point, essentially obligatory in the category.
The phrase is doing real work. It manages three audiences simultaneously: employees watching their job functions shift, procurement committees answering to boards who want to see AI investment justified, and increasingly — regulators watching closely how AI is being deployed in consequential decisions. "Augment, not replace" threads all three needles cleanly. It implies human oversight remains intact, accountability structures are unchanged, and the organization is being measured and responsible.
But walk the data back against that framing and it doesn't hold up.
Swimlane projects AI will resolve or escalate over 90 percent of Tier 1 security alerts by 2026 — not assist with them, resolve them.[22] Gartner projects autonomous AI handling 80 percent of customer service issues without human involvement by 2029.[12] The legal contract review tools marketing 75 to 85 percent time reduction aren't augmenting lawyers — they're doing the task, and asking the lawyer to review the output.[8] The hiring platforms aren't helping recruiters screen faster — they're screening, and asking the recruiter to validate the ranking.
When the AI handles 80 percent of the task and the human handles 20 percent — or handles exceptions after the fact — that's not augmentation in any meaningful operational sense. That's oversight of an autonomous system. The distinction isn't semantic. It has direct implications for where accountability lives, what skills the organization needs to maintain, and what happens when the output is wrong.
I explored a version of this tension from an unexpected angle on the Music Evolves Podcast, in a conversation with Chandler Lawn, AI Innovation and Law Fellow at the University of Texas School of Law, Drew Thurlow, Adjunct Professor at Berklee College of Music, and Puya Partow-Navid, Partner at Seyfarth Shaw LLP.[23] We were talking about AI-generated music and who owns the output — but the underlying question was the same one running through every enterprise workflow: when the system produces the thing that used to require a human, what does the human's role actually become?
The music industry is a few years ahead of most enterprise functions on this curve. Labels have been navigating it through lawsuits and licensing deals (Universal Music Group and Warner Music Group both reached landmark settlements with AI music platforms in late 2025[23]), and the answers they're landing on involve drawing explicit lines around what requires human creative judgment and what can be systematically produced. Enterprise operations will need to draw the same kinds of lines — probably with less drama, but with the same underlying logic.
The language gap has a practical consequence beyond messaging. When leadership describes every AI deployment as augmentation, it becomes difficult to have honest internal conversations about what the organization has actually delegated, where the accountability gaps are, and what happens when a consequential decision turns out to be wrong. That conversation is easier to have before the workflow is fully assembled than after.
Gartner's prediction that over 40 percent of agentic AI projects will be cancelled by end of 2027 is worth reading through this lens.[14] It's not because the technology fails. It's because organizations bought capability without building the governance, accountability structures, and organizational clarity to run it responsibly. The language that got the tool funded — "augment, not replace," "AI-assisted," "human in the loop" — made those harder conversations easier to avoid at purchase time. They don't stay avoided.
At Black Hat USA 2025, Marco Ciappelli and I talked after walking the floor about exactly this: when every vendor claims the same positioning, the actual distinctions disappear from the buyer's view. In our post-show episode, we called it the marketing milkshake problem — every vendor's message going into the same promotional blender and coming out tasting the same, regardless of what the underlying technology actually does.[24] The agentwashing problem isn't just a market integrity issue. It's a decision-quality issue for every organization trying to figure out what they're actually acquiring — and what decision authority they're actually transferring.
THE FOURTH LENS
When Did You Decide to Hand Over Control — and Who Was in the Room When You Did?
Here is what I keep coming back to when I look at all three lenses together, and it's the thing I find myself saying in conversations that rarely makes it into polished conference presentations: we are already past the point of no return. The human-optional workflow is not the exception being cautiously piloted. It is the operational default for hiring, contracting, finance, customer service, and security operations in organizations that made five individually rational procurement decisions and never looked at what those decisions assembled.
That's not naivety. I want to be clear about that. The organizations deploying these tools are not confused about what they're buying. What they haven't done — and what the vendors selling to them have never required them to do — is map the cumulative shape of those decisions before committing to them.
And I don't think that's an accident.
When I look at the language the vendor market has built around this transition — "augment, not replace," "human in the loop," "AI-assisted" — I don't read it as imprecise. I read it as precise in exactly the right direction. These are companies staffed with product managers, lawyers, and communications teams who understand exactly what their tools do when deployed at scale. "Augment, not replace" threads every needle it needs to thread: employee relations, procurement approval, regulatory scrutiny, board optics. It's not a description. It's a strategy. And it has worked — because organizations bought the framing along with the capability, and now have workflows they couldn't honestly describe as "augmented" with a straight face.
So where does accountability land when an AI-assembled workflow produces a bad outcome? Right now: nowhere. That is not hyperbole. The procurement signer approved a task-specific tool with its own contained ROI case. The vendor sold a product that performs as specified. The workflow that those tools assembled collectively sits in a gap between contracts, between org chart lines, and outside the legal definitions anyone drafted when they wrote the terms of service. Nobody owns the workflow. Everybody owns a task.
That should be alarming. Not because bad outcomes are inevitable, but because the accountability structure that would catch and correct a bad outcome before it becomes a crisis — that structure doesn't exist yet in most organizations. The efficiency gains are real and already in the budget. The accountability architecture is still theoretical.
What I'm watching closely is whether the agentic orchestration platform changes this dynamic or accelerates it. My honest read: both, depending on the organization. A small group of mature, deliberate organizations — the ones who were already doing workflow mapping before procurement, who already had security and legal at the design table — will use Nintex, Copilot Studio, and platforms like them to do exactly what those platforms were designed to enable: define the workflow first, configure the agents inside it, set the guardrails before deployment, and maintain a complete audit trail of what was delegated and why. For those organizations, the platform genuinely forces the design conversation, because you cannot configure guardrails without deciding what you're guarding.
For everyone else, the platform will make accumulation faster and cheaper. The same pattern of decisions, each locally rational and collectively unexamined, will just be easier to make in one place.
Here's the structural reality I think most organizations are not yet reckoning with: the auditors haven't arrived yet. The regulatory frameworks that will eventually require organizations to account for autonomous workflow decisions — who authorized them, under what criteria, with what human oversight, and how exceptions are handled — those frameworks are being drafted right now. GDPR took years to land on AI. The EU AI Act is already in motion. The U.S. regulatory posture is slower but not absent. The window between "we accumulated this workflow through procurement" and "we need to demonstrate we designed it with intention" is open, but it is not going to stay open.
The organizations that use that window to map what they've built, establish where accountability sits, and make explicit decisions about what requires human judgment — not because the AI can't do it, but because the organization has determined that accountability requires a person — will be positioned to operate without disruption when the frameworks arrive. The ones that don't will discover that the workflow they built by default is not the workflow they would have chosen under scrutiny.
The vendors knew what they were building. The buyers, in most cases, didn't ask the right questions. The auditors haven't arrived yet.
That window is closing.
If this analysis is useful — whether you are a CISO evaluating your program, a vendor shaping go-to-market strategy, a product marketer cutting through noise, or an analyst mapping the landscape — I would welcome the conversation. This is what I do: connect the dots between business operations, the technology that serves them, and the market forces that shape both. Reach out at seanmartin.com.
References
1. Unilever AI hiring pipeline savings — HeroHunt.ai, "AI-Driven Candidate Screening: The 2025 In-Depth Guide" | Link
2. McDonald's / Paradox "Olivia" and GoodTime scheduling — HeroHunt.ai, "AI-Driven Candidate Screening: The 2025 In-Depth Guide" | Link
3. HireVue asynchronous interviewing, Emirates Airlines case — Hirevire Blog | Link
4. Lindy, Recruiterflow Agent Mode; recruiter time savings — Lindy, "The Complete AI Recruiting Guide" | Link
5. End-to-end hiring pipeline automation — n8n | Link
6. Checkbox AI legal intake — Checkbox.ai, "Best AI Tools for Legal Departments 2025" | Link
7. Harvey AI, Am Law 100 — Harvey.ai | Link
8. LegalOn outcomes; 75–85% review time reduction — LegalOn, "Best AI Contract Review Tools 2025" | Link
9. Luminance M&A contract review — LegalFly | Link
10. ContractPodAi, Ironclad CLM — ContractPodAi | Link
11. Invoice processing automation — Ramp / HighRadius industry data
12. Gartner: 80% of customer service issues resolved autonomously by 2029 — Gartner | Link
13. "The 72-Minute Gap" — Lens Four | Link
14. Gartner: 40% of enterprise apps with task-specific agents by 2026; $450B by 2035; 40% of agentic AI projects cancelled by 2027 — Gartner | Link
15. Forrester: less than 15% of firms will activate agentic features in automation platforms by 2026 — Forrester, "Predictions 2026: Automation at the Crossroads" | forrester.com
16. Dropzone AI / Edward Wu, Black Hat USA 2025 — ITSPmagazine | Link
17. Stellar Cyber / Subo Guha, RSAC 2025 — ITSPmagazine | Link
18. Stellar Cyber / Subo Guha, Black Hat 2025 — Stellar Cyber | Link
19. Token Security RSAC 2026 Sandbox finalist; Moderna 3,000 agents — GlobeNewswire | Link
20. Nintex Agentic Business Orchestration, IDC quote — Nintex | Link
21. Microsoft Copilot Studio, Entra Agent ID — Microsoft | Link
22. Swimlane: AI to resolve 90%+ of Tier 1 alerts by 2026 — TheHGTech | Link
23. Music Evolves Podcast, "Who Owns the Sound of AI?" with Chandler Lawn, Drew Thurlow, Puya Partow-Navid; UMG/WMG settlements | Watch on YouTube
24. "We're Becoming Dumb and Numb": Why Black Hat 2025's AI Hype Is Killing Cybersecurity — Random and Unscripted with Sean Martin and Marco Ciappelli | Episode | Watch on YouTube
Sean Martin is a cybersecurity market analyst, content strategist, and advisor with 30+ years across engineering, product development, marketing, and media. Co-founder of ITSPmagazine and Studio C60, host of the Redefining CyberSecurity Podcast and the Music Evolves Podcast. Sean works with CISOs and security leaders, vendors and service providers, go-to-market and marketing teams, and analyst firms to connect technology operations and cybersecurity programs to business outcomes. Connect at seanmartin.com.
Subscribe to Lens Four — Where business, innovation, and messaging come into focus.
Topics Covered in This Analysis
Agentic AI, workflow automation, task-specific AI agents, agentic business orchestration, human accountability in AI, AI hiring tools, HireVue, Paradox Olivia, resume screening automation, GoodTime scheduling, AI candidate assessment, Recruiterflow, Lindy AI recruiting, legal AI, Harvey AI, LegalOn, Spellbook, Luminance, Kira Systems, contract review automation, CLM platforms, Ironclad, ContractPodAi, Checkbox AI, invoice processing automation, Ramp, HighRadius, agentic SOC, Dropzone AI, Stellar Cyber, customer service automation, Gartner agentic AI predictions, enterprise AI adoption, agentwashing, AI augmentation vs replacement, AI workforce impact, AI organizational design, AI governance, AI agent identity, Token Security, RSAC 2026, Nintex, Microsoft Copilot Studio, Salesforce Agentforce, ServiceNow, workflow orchestration platforms, agent guardrails, AI agent lifecycle management, music AI copyright, Suno, Udio, UMG, WMG, AI and creative ownership, Forrester AI infrastructure, business process design, AI decision accountability, AI risk management, competitive advantage AI, Redefining CyberSecurity Podcast, Music Evolves Podcast, Lens Four, Sean Martin.
Frequently Asked Questions
Q: What is "agentwashing" in AI? A: Agentwashing refers to vendors rebranding existing automation or RPA tools as "agentic AI" without delivering genuine agentic capability — systems that reason across tasks, adjust based on outcomes, and handle exceptions autonomously. Gartner estimates only ~130 of the thousands of vendors claiming agentic AI are delivering genuine capability.
Q: Who is accountable when an AI-automated workflow produces a bad outcome? A: Currently, in most organizations, no one. The procurement signer approved a task-specific tool. The vendor sold a product that performed as specified. The workflow those tools assembled collectively exists in a gap between contracts and org chart lines. Nobody owns the workflow — everybody owns a task.
Q: What is an agentic orchestration platform? A: A single governed environment where workflows are defined in plain language, purpose-built AI agents are configured with explicit permissions and guardrails, human oversight points are designed in, and the complete workflow is auditable as a system. Examples include Nintex's Agentic Business Orchestration platform and Microsoft Copilot Studio.
Q: What does Gartner predict about agentic AI adoption? A: Gartner predicts 40% of enterprise applications will include task-specific AI agents by end of 2026 (up from <5% in 2025), that 15% of day-to-day work decisions will be made autonomously by 2028, and that over 40% of agentic AI projects will be cancelled by end of 2027 — primarily due to lack of governance rather than technology failure.
Q: What does "augment not replace" actually mean in AI vendor marketing? A: It's language that manages three audiences simultaneously — employees, procurement committees, and regulators — by implying human oversight remains intact. But when AI handles 80–90% of a task and humans handle exceptions after the fact, that's oversight of an autonomous system, not augmentation. The distinction determines where accountability lives when something goes wrong.