Today we’re at a point where many of the tools once seen as aids for innovation are starting to shift how we think, learn, remember, and create. Across conversations at DFFRNT and with clients, it’s clear that AI is not just a new toy: it’s reshaping the very terrain of innovation management, changing how we innovate... maybe even whether we innovate. To lead well in these circumstances, organisations must see both the risks and the opportunities inherent in this new paradigm, then build the habits, practices, and culture to adapt to them.
Over-reliance on AI for tasks like fact recall, early idea generation, and routine assignments can lead to a decline in our memory, critical thinking skills, and willingness to venture into challenging areas. If the default is to accept outputs rather than question them, to curate rather than invent, then innovation becomes incremental by default and the ability to innovate radically slips away. Homogenisation sets in when many teams use similar AI tools in similar ways. The traits that once separated organisations, like novel thinking, courage in ambiguity, and strategic foresight, become harder to maintain.
Yet if we take the wrong lesson, we’ll lose what makes innovation meaningful. I have to admit that this is starting to sound like a cliché, but to thrive, innovation management must be redesigned so AI becomes a lever rather than a crutch. That means putting intention around how we use tools. It means preserving human creative space. It means asserting our values in how we frame problems, select ideas, and decide what gets built.
So what does this look like in practice? First, at the front end of innovation, where the “fuzzy work” of framing problems, sensing opportunity, and imagining possibility happens, AI can help with breadth and speed, exposing weak signals or patterns we might miss. But people must lead here. The strategic challenge cannot be outsourced. Framing, asking the hard questions, defining what “good enough” really means: that remains human work.
During idea generation and selection, some rounds can include AI-supported idea sparking. Other rounds need to be human-only, without algorithmic influence, so divergent thinking stays sharp and original. When prototyping and testing, AI can lower cost and accelerate iteration. But we must resist the pull toward converging on the first plausible solution, instead preserving space for experimentation, risk, and sometimes failure.
Culture and governance are where change sticks or slips. Innovation managers should push for learning, critique, and reflection. Teams must be trained in AI literacy: how to challenge AI outputs, detect bias, and check assumptions. Leadership must model critical thinking and curiosity. It is not enough to say “use AI well.” That requires creating mechanisms that promote debate, reward risk-taking, and protect time for human insight.
This is not dystopian. It is an opportunity. AI can free us from low-value repetition. It can allow people to explore systems, ethics, emotion, and texture. It can expand what’s possible. But it won’t do that by default. We must build our innovation practices around that potential.
The organisations that thrive in this shift will treat AI as a partner: accelerating what we can do, enabling us to focus on why it matters. Those same organisations will protect what AI can’t replace: originality, judgement, moral imagination, ethics. The management of innovation activities will change, but it won’t disappear. It will need us to lean into our uniquely human capabilities more than ever.