Nicholas Ayala

The Last 10% Is the Moat

Getting to 90% with AI is table stakes. The separation happens in the last 10%. Here is what to test for when evaluating AI capability on your team.


I was comparing notes with an AI leader recently.

Not about models or tools.

About what happens when you try to scale what one person has figured out to an entire team.

The pattern we kept landing on wasn’t technical.

It was this: people are skipping the last step.


What AI Slop Actually Is

If you haven’t heard the term, you’ve seen the output.

AI slop is content that was generated by a tool and never reviewed by a human. Reports, emails, proposals, technical documents.

Any of it.

I’ve even added an emoji to my LinkedIn name to catch people scraping messages to me.

(Screenshot: AI-generated outreach that scraped the emoji from my LinkedIn name.)

The paragraph that trails into nothing.

The summary that contradicts the data two slides earlier.

The formatting that looks like a first draft because it is one.

You know it when you see it.


The Expectation That Broke the Habit

Every tool humans have ever used has required a human to check the output.

The calculator doesn’t audit the formula.

The spreadsheet doesn’t verify the logic.

The CAD model doesn’t confirm the part will function under load.

The human was always the last line of quality. That expectation never moved.

Until AI.

Until this “ChatGPT” moment.

Somewhere in the last 12 to 24 months, a shift happened.

AI output started being treated as finished output.

Not a draft. Not a starting point.

Done.

And when it was wrong, the instinct became: “That’s what the AI produced.”

No.

That’s what YOU published.


The Accountability Has Not Moved

Capability does not replace ownership.

When a report goes out with a hallucinated figure, that is a quality failure.

Not a technology failure.

When a proposal reaches a customer looking like a rough draft, that is a representation failure.

When a technical document circulates with sections that don’t connect, that is an attention failure.

The name on the deliverable is yours. The tool does not sign it.


90% Is Now the Floor

Here is the shift executives need to internalize.

Eighteen months ago, generating a polished first draft in minutes was a competitive advantage.

Today it is table stakes.

Everyone has access to the same tools.

Everyone can produce the first 90% at roughly the same speed.

The separation is what happens in the last 10%.

If someone on your team is not using these tools, that is an efficiency problem worth a separate conversation.

But if someone is using the tools and the output is still mediocre?

That is no longer acceptable.

Producing C-level quality work with AI is the minimum standard in this environment.

Not the bar.

The floor.


The Last 10% Is the Moat

The last 10% is judgment.

It is reading the output with the same critical eye you would apply to anything carrying your name.

It is asking: Does this actually say what I mean?

It is asking: Would I be comfortable if an investor, a customer, or the board read this exactly as written?

It is the proofreading standard your first manager expected before anything left the building. Except now it is not about grammar.

It is about whether you have the rigor and taste to know the difference between output and quality.

Anu Atluru put a name to this in "Taste Is Eating Silicon Valley."

Her thesis: in a world of abundance, we treasure taste.

AI has created that abundance. Every team now has access to the same tools, the same models, the same ability to generate volume at speed.

Taste is what cannot be generated.

It is the judgment to know when something is off. The instinct to push further when 90% feels close but not right. The standard that exists before the draft is even opened.

And here is the part worth internalizing: taste cannot be copied.

The output can be replicated. The instinct behind it cannot.

That skill is decaying.

In over a decade of work, I have never had to remind experienced professionals so often, across teams and industries, that they are still responsible for reviewing what their tools produce.

That decay is also an opportunity.

If the majority is skipping the last step, the cost to separate is lower than you think.


What to Test For

This is the part that should change how you evaluate AI capability on your team.

Most conversations about AI in hiring focus on breadth.

What tools do you use? How often? Show me an example.

That is the wrong test.

Sure, confirm they have started to adopt these tools…

…but don’t stop there.

The right test is the last 10%.

Give a candidate something to generate with AI tools in a short, predefined window.

You want to see whether they focus only on producing the work or also spend time refining it.

Are there factual inconsistencies? Odd formatting breaks? Logical gaps?

Watch what they do with it.

If they catch it, correct it, and explain why it matters: they understand what the job actually is.

If they don’t: they will produce slop at scale and call it productivity.

The organizations that build this standard into how they hire and evaluate AI capability today will have a structural advantage inside 18 months.

The ones that don’t will be moving fast.

They will also be producing noise.

Moving fast and producing noise is not a competitive position.



Nick Ayala

Chief of Staff, Director of Strategy & Operations at GrayMatter Robotics. Paid off $18K in debt in 13 months. Writes about money, careers, and the systems that compound over time.
