Intentional Partners
Helping Design Thrive As Startups Scale
What if your 2026 product roadmap was also your AI adoption strategy?
As companies wind down for the holidays, annual planning looms on the horizon. And somewhere on many leaders’ new year’s lists is “accelerate AI adoption.”
But for many folks I speak with, this gets treated exclusively as a company-wide initiative, floating above the actual roadmap rather than embedded into it. A mandate without a map.
One of my big takeaways from this year: the friction AI adoption faces is massively influenced by both organisational size (change management is real) AND where in the product you’re trying to adopt it. Planning provides a great forcing function not only to decide which outcomes you’re investing in, but also where you’re experimenting with new ways of working.
So, as you approach the planning process, here’s a simple lens to layer onto your roadmap investments. Think of your product portfolio like terrain, and drop each product area into the following buckets:
🛣️ Paved roads.
These are established product areas with clear patterns and infrastructure. The team knows the route. AI use is optional; forcing it might create friction without much payoff, especially if AI adoption across the org is still low.
🦺 Roads under construction.
You’re extending or improving what exists. There’s a foundation, but room to experiment. AI should demonstrate gains; expect improvements in speed or quality.
🌌 Uncharted trails.
New surfaces, with fewer constraints or legacy systems to integrate with. AI should set the bar. Staff these with your AI enthusiasts and let them blaze paths the rest of the org can learn from or be inspired by.
Not every product area is equally suited to the same types of AI adoption gains, especially when there’s huge momentum behind existing product infrastructure or ways of working. The same is true of people, of course.
But adding this lens to your planning criteria can help you staff teams based on AI mindset: creating the conditions for enthusiasts to showcase what’s possible, building proof points you can scale from later, and preserving a safe space for folks who aren’t yet convinced of AI’s value or who need stable footing for a little longer.