AI will reshape every company this decade. The ones that move first will dominate.
AI is about to create an influx of competitors unlike anything the market has seen before. There will be new challengers for every product and every feature you offer. A lean AI-native team will do in two months what took you two years to build, and the competitive edge you spent years establishing will disappear overnight if you are not prepared.
This is already happening in public. Shopify CEO Tobi Lutke issued a company-wide memo requiring teams to demonstrate why a job cannot be done by AI before requesting new headcount. Klarna publicly reported that AI now handles work previously done by the equivalent of 700 customer service agents. Structural decisions, made at the highest levels, already in production.
This shift is already underway. The organizations that build AI capability now will set the terms for their industry.
Ask Your Team Monday Morning
What percentage of your team's time is spent on work that does not require human creativity?
When was the last time learnings in one department were shared with another?
What moats does the company have, and how can we further enhance them with AI?
Build the tools that enable humans to do the best creative work possible. Automate everything else.
Build Tools For
Writing product strategy with full org context
Designing user experiences informed by real data
Making judgment calls with full context surfaced
Evaluating markets with synthesized research
Crafting brand narratives backed by audience insight
Architecting systems with full org knowledge
Automate Away
Formatting reports into three different templates
Routing support tickets to the right team
Copying data between systems
Writing status update emails
Scheduling meetings across time zones
Generating weekly analytics summaries
Automation alone is half the picture. If you only eliminate busywork without building better tools for the creative work that remains, your people are freed up with nothing new to leverage.
Layering AI onto broken workflows preserves the busywork and limits the upside. The processes themselves need to be rethought.
Without a unified vision, teams optimize locally. Some automate aggressively, others build tools in isolation, and the organization stays fragmented.
Two sides of the same effort. Give your people purpose-built tools that amplify their judgment, creativity, and decision-making. Then systematically automate every repetitive task so they can spend all of their time using those tools.
Enterprise companies need a clear operating model for applied AI: shared standards, targeted tooling, workflow automation, and a path for teams to adopt it safely.
You don't need everyone to become an AI expert. You need a small, high-leverage group to set standards, build shared systems, and unblock the rest of the company.
Training the entire company on AI produces broad awareness but no depth. The field evolves weekly. By the time training material is written, reviewed, and delivered, the methods it teaches are already outdated. The result is thousands of hours spent for surface-level familiarity that does not translate into real output.
Focusing on the most technical people sounds practical, but AI is not their full-time job. They attend a workshop, try a tool, then go back to their actual responsibilities. Within weeks they are behind again. Each team ends up building its own approach in isolation, duplicating effort and making decisions based on outdated information.
A small team of 3–5 people with strong applied AI judgment and shipping ability. Their job is to set standards, build shared infrastructure, and support partner teams on high-leverage opportunities.
Every department operates in its own silo. Work that should be a single unified effort is instead scattered across teams with no shared context, no shared process, and no one with clear oversight over how it all fits together.
Each department makes decisions using only its own data. Insights that would change another team's direction never reach them.
Five teams solve the same problem five different ways. Nothing is standardized, nothing is reusable, and handoffs break constantly.
Nobody has a clear view of the full picture. Priorities conflict, efforts overlap, and high-value work gets buried under coordination overhead.
Knowledge stays trapped in the team that created it instead of flowing to the teams that need it
Every department reinvents the same solutions because nobody knows what already exists
Without unified oversight, high-value cross-functional work never gets prioritized over departmental busywork
Your organization already sits on massive amounts of data: CRM records, support tickets, Slack threads, meeting transcripts, docs, code repositories, sales calls, analytics dashboards. The problem is that none of it is connected. A centralized knowledge base pulls from all these existing sources and rebuilds them under a single system.
One Source of Truth
Not a new system people have to feed manually. A layer that ingests what already exists across every tool and department, structures it, and makes the entire organization's knowledge accessible in one place.
The data already exists. Your CRM knows which deals closed and why. Your support tickets show which problems recur. Your code reviews capture architectural decisions. Your Slack history holds hundreds of micro-decisions that never made it into a doc. The knowledge base connects all of it.
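The connecting step can be made concrete. Below is a minimal sketch, not an implementation: hypothetical CRM and support-ticket exports are normalized into one shared record shape and dropped into a single searchable index. All field names and the `KnowledgeRecord` schema are illustrative assumptions, not a prescribed design.

```python
from dataclasses import dataclass, field

# Illustrative unified schema: every source (CRM, tickets, Slack, PRs)
# is normalized into the same shape before indexing.
@dataclass
class KnowledgeRecord:
    source: str                      # e.g. "crm", "support", "slack"
    title: str
    body: str
    tags: list = field(default_factory=list)

def normalize_crm(deal: dict) -> KnowledgeRecord:
    # Map a hypothetical CRM export row into the unified schema.
    return KnowledgeRecord(
        source="crm",
        title=f"Deal: {deal['name']} ({deal['status']})",
        body=deal.get("notes", ""),
        tags=["sales", deal["status"]],
    )

def normalize_ticket(ticket: dict) -> KnowledgeRecord:
    # Map a hypothetical support-ticket export into the same schema.
    return KnowledgeRecord(
        source="support",
        title=ticket["subject"],
        body=ticket["description"],
        tags=["support"] + ticket.get("labels", []),
    )

# One index, many sources.
index: list = [
    normalize_crm({"name": "Acme", "status": "closed_won",
                   "notes": "Price objection resolved by annual plan."}),
    normalize_ticket({"subject": "Export fails on large files",
                      "description": "Times out past 1 GB.",
                      "labels": ["bug"]}),
]

def search(index, keyword):
    # Trivial keyword lookup across every source at once.
    kw = keyword.lower()
    return [r for r in index if kw in r.title.lower() or kw in r.body.lower()]
```

The point of the sketch is the shape, not the search: once everything lands in one schema, a query like "objection" surfaces sales context and "export" surfaces support context from the same place.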
Example
Architecture Decisions from Code Reviews
The reasoning behind why a system was built a certain way already lives in pull request comments and design docs. The knowledge base indexes it. Six months later, when another team faces a similar decision, the context surfaces automatically. No one needs to be interrupted.
Example
Sales Patterns from CRM and Call Data
Your CRM already tracks which deals closed, at what stage they stalled, and what pricing worked. Call recordings capture the objections and framings that your best reps use instinctively. The knowledge base structures all of it so every rep on the team can prepare with the same depth, pulled directly from data that already exists.
Every enterprise company I have seen attempt this misunderstands how to build these knowledge bases properly. Half the industry is still stuck trying to make RAG work and investing in vector DBs. They chain together one LLM call, two or three at most, and call it a day.
I had already worked out an optimal approach for large enterprise use cases when Karpathy recently started the conversation around his own method. It was validating to see the overlap. His approach is excellent and very well suited to personal use, but it starts to break down at the enterprise level, where you are dealing with dozens of data sources, permission layers, cross-team context, and content that changes daily. Enterprise knowledge bases need a different architecture, and getting that architecture right is the difference between an expensive search bar nobody trusts and data that is actually useful for building tools and automations on top of.
Traditionally, most business insights come from quantitative data. There are decades of established practice around dashboards, metrics, and analytics. Numbers are useful, but they have a fundamental limitation: a chart or a spreadsheet sits there until someone goes and makes sense of it. That interpretation step, and the action that follows, has always been the bottleneck. AI can now do that work.
But the bigger shift is this: AI makes it possible to work with qualitative data at scale in ways that were previously only possible with numbers. And qualitative data is a far more direct route to actionable insight. When most people think of an Insights team, they picture data analysts staring at charts. That picture is incomplete now.
Example
From Tweet to QA Ticket in Seconds
A company ships a product update. A user tweets about a bug and attaches a screenshot. With quantitative data alone, you might eventually see a dip in sentiment scores, which tells you something is wrong and roughly where to look. That is the starting point, not the answer.
Now take that tweet, the attached image, combine it with similar posts from other users and their media, and route them directly to your QA team as structured, actionable reports. You just cut out multiple layers of manual triage, interpretation, and escalation. The qualitative data did the work that numbers could only point at.
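A rough sketch of that triage pipeline, with one loud caveat: the AI step is stubbed. In production an LLM would read each post (and its screenshot) and return a structured issue; here a crude keyword rule stands in so the sketch runs. The `QATicket` shape and the severity rule are illustrative assumptions.

```python
import re
from dataclasses import dataclass

@dataclass
class QATicket:
    title: str
    reports: list       # original user posts grouped under this issue
    severity: str

def extract_issue(post: str):
    # Placeholder for the AI step: an LLM would extract a structured
    # issue from the post and any attached media. A keyword rule stands
    # in here so the example is runnable.
    match = re.search(r"(crash|bug|broken|error)", post.lower())
    return match.group(1) if match else None

def triage(posts: list) -> list:
    # Group similar reports, then emit structured tickets for QA.
    grouped: dict = {}
    for post in posts:
        issue = extract_issue(post)
        if issue:
            grouped.setdefault(issue, []).append(post)
    # Multiple independent reports of the same issue raise severity.
    return [
        QATicket(title=f"User-reported {issue}", reports=reports,
                 severity="high" if len(reports) > 2 else "normal")
        for issue, reports in grouped.items()
    ]
```

The manual layers this replaces are exactly the loop in `triage`: someone reading posts, deciding which ones describe the same problem, and judging how urgent the cluster is.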
Social media monitoring is just one example. Most companies are not even aware of the high-quality qualitative data sources they already have access to: support conversations, sales call transcripts, onboarding feedback, community forums, internal Slack threads. All of it can now be structured, analyzed, and turned into action at a speed and scale that was never possible before.
Compound Learning Effect
Each team's work informs every other team's work, and the advantage compounds. No vendor can replicate this. It is built from your specific decisions, customers, and history.
Purpose-built internal tools outperform generic solutions because they're designed around your actual workflows, data structures, and decision patterns.
Current State
Fragmented Tools
Multiple disconnected AI subscriptions across departments
No shared context between tools
Vendor lock-in with each individual tool
Humans still doing repetitive work that should be automated
The average user struggles to extract meaningful value from a raw AI model
Proposed
Unified Platform
A small, elite team of builders actively goes to teams and departments, identifies high-value opportunities, and builds the tools for them.
A purpose-built tool for a specific use case sees dramatically higher adoption than giving someone access to a raw AI model.
With AI, building custom software for each specific task is no longer cost-prohibitive. Don't be silly, go take advantage of this.
When people hear "AI for tooling," they assume every tool needs to make AI calls. It does not. With a proper knowledge layer underneath, some of the highest-value tools involve zero AI and still produce massive efficiency gains.
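One hypothetical example of such a zero-AI tool: a duplicate-effort detector that flags when two teams open near-identical work items. It is plain token overlap, no model calls; the value comes entirely from having the data in one place. All names and the threshold are illustrative.

```python
def token_overlap(a: str, b: str) -> float:
    # Jaccard similarity over lowercase word sets.
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(len(ta | tb), 1)

def find_duplicates(items: list, threshold: float = 0.5) -> list:
    # Flag cross-team pairs of work items whose titles overlap heavily.
    flags = []
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            a, b = items[i], items[j]
            if a["team"] != b["team"] and \
                    token_overlap(a["title"], b["title"]) >= threshold:
                flags.append((a["team"], b["team"], a["title"]))
    return flags
```

Run against a unified index of every team's backlog, a tool like this pays for itself the first time it catches two departments building the same thing.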
Phase 3
With the knowledge base and tooling in place, automation is the next step. Identify every task where a human adds no unique value, then build systems to handle it entirely.
Before
After
Reporting, triage, data entry, routine coordination, document processing, cross-system syncing. All of it handled automatically. Your team works on the problems that matter.
Phase 4
Building large-scale applied AI systems is a fundamentally different engineering problem from traditional software engineering. Even if you have the strongest engineers and the best machine learning experts in the world, they will still struggle to build the right systems from the start. It is a distinct, new discipline where traditional ML experience does not apply. Schools will not teach you. Books will not teach you. Online courses are months behind, at best. The only way to stay at the forefront of best practices is a high volume of trial and error. There's no way around that.
The difference is not just knowing where to build. It is knowing how. Applied AI architecture involves decisions that have no equivalent in traditional engineering or ML: how to structure knowledge layers, how to design agent workflows that stay reliable at scale, how to build systems that improve with use instead of degrading, and how to integrate AI into real business processes without creating brittle dependencies. These are hard technical problems that require specific experience to get right.
What Happens Without Applied AI Expertise
The Impressive Demo That Nobody Uses
Wrong Problem
A team of strong ML engineers builds a technically sophisticated AI feature. The model performs well on benchmarks. But adoption is near zero, because the problem it solves was never the bottleneck. An applied AI consultant would have identified the right problem in week one, before any model was trained.
The Over-Engineered Solution
Wrong Approach
ML engineers default to what they know: custom models, fine-tuning, complex training pipelines. Months of work go into a system that could have been replaced by a well-architected prompt chain and retrieval layer, shipped in a fraction of the time.
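To make "prompt chain and retrieval layer" concrete, here is a deliberately minimal sketch: score documents against a query, put the best ones in the prompt, send one call to a hosted model. The scoring is naive keyword overlap, and `call_llm` is a stand-in for whatever provider SDK you use; both are assumptions, not the architecture itself.

```python
def score(query: str, doc: str) -> int:
    # Naive relevance: count shared lowercase tokens.
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list, k: int = 2) -> list:
    # Take the k highest-scoring documents for this query.
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, docs: list) -> str:
    # Assemble retrieved context plus the question into one prompt.
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return (f"Context:\n{context}\n\n"
            f"Question: {query}\nAnswer using only the context.")

def call_llm(prompt: str) -> str:
    # Stand-in for a hosted model call; swap in your provider's SDK.
    raise NotImplementedError("plug in a provider SDK here")
```

The contrast with the over-engineered path is the point: this is an afternoon of plumbing on top of an existing model, not months of training pipelines.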
I have watched teams miss the real opportunity spaces and build products that either don't work or could work far better. If someone had simply walked them through the correct architecture, it would have saved them months of trial and error.
Your team needs people who can identify which problems AI should solve, architect real integrations, and drive adoption. Unless you are a leading AI research lab, hiring people to train models in-house is monumentally stupid. Have you seen the price of intelligence lately? It is already trending toward zero; do not waste your resources thinking you can gain an edge here. You can go much farther with better system design.
The hiring market has this backwards right now. Machine learning has better brand recognition, more established university programs, and a clear career ladder. Applied AI is newer, harder to credential. Unless you are a leading AI lab, you need to worry less about the models themselves.
Grab whatever is the latest best and use it to upgrade the systems you build, instead of building systems around the latest best.
Optimize for learning velocity over static expertise. The most valuable hire can master a new framework in a week and see immediately where automation fits.
The philosophy only works if the culture supports it. People need to see eliminating mechanical work as the point, not a side effect.
Automation is the prerequisite for meaningful work.
This sounds thoughtful, but it kills momentum. When leadership frames AI as something to tiptoe around, everyone gets defensive. Adoption stalls and talent leaves.
Delegation without direction. Without a clear definition from leadership, people will protect their existing workflows regardless of whether those workflows involve real creative thinking. Self-assessment of what should be automated does not work.
The best organizations draw a clear line: this is the creative, high-judgment work we value. Everything on the other side of that line gets automated.
Every automated task is a win. Someone identified low-value work and eliminated it. The culture should celebrate that, and the people who make it happen.
The real culture shift happens when people stop spending time on work they know a machine should be doing.
From Busywork to Creative Work
Most companies investing in AI right now are making the same mistakes. Every one of these is the default path, and they waste enormous amounts of time, money, and organizational goodwill.
While I am the strongest advocate for internal tooling, some external vendors are worth adopting. These fall into two categories: services that are too hard to build in-house, and services that are too general to need customization.
Too Hard to Build In-House
Use the best. Upgrade constantly.
Some tools represent years of frontier research and billions in compute. Building your own version is not a realistic option. The right move is to adopt the best available, integrate it deeply into your workflows, and upgrade the moment something better ships.
Too General to Need Customization
Pick one. Roll it out. Move on.
Some tools solve a well-defined problem and every major option does it well enough. The value is already there. Spending time evaluating alternatives or building your own creates friction without meaningful upside.
For any generic tool in your stack, it is worth evaluating whether a custom version would serve you better. For example, most companies paying for ChatGPT Enterprise subscriptions could replace them with a purpose-built internal tool that calls the same APIs, connects to their knowledge base, and fits their actual workflows. That tool can be built in a day, and it will outperform the generic version immediately.
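The "built in a day" claim can be read as a thin wrapper like the sketch below: the same underlying model API as the generic product, but every request carries org context pulled from the knowledge base. The request shape mirrors common chat-completion APIs; the model name and context strings are illustrative assumptions, and the actual HTTP call would go through your provider's SDK.

```python
def build_request(user_message: str, org_context: list) -> dict:
    # Inject knowledge-base context into every request so the model
    # answers with company specifics the generic product never sees.
    system = ("You are the internal assistant. "
              "Relevant company context:\n" +
              "\n".join(f"- {c}" for c in org_context))
    return {
        # Placeholder name: swap freely as better models ship.
        "model": "latest-frontier-model",
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user_message},
        ],
    }
```

Because the model name is just a field in the payload, upgrading to whatever ships next is a one-line change, which is exactly the "use the best, upgrade constantly" posture described above.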
As of today, April 9th 2026, I am available for consulting engagements. I will evaluate your applied AI strategy, identify high-value opportunities, and help your teams avoid the mistakes that might cost you months of wasted effort.
For the right engagement, we can negotiate an agreement beyond advising. I will:
Come into your organization and learn your workflows firsthand
Map where custom tooling has the highest impact
Build and ship purpose-built tools alongside your own engineers
Walk your team through key architectural insights so they can continue the work down the line
DM me on X for a quote.