[Image: A long, partially built bridge stretching into dark clouds, with a shaft of sunlight breaking through, symbolizing the uncertain but necessary path beyond 2030.]

Whatever Happened to the Long Term?

Huibert Evekink

Over the summer, I have been working on what we half-jokingly call the mother of all AI upskilling apps: a tool to help people reimagine both today's jobs and the ones coming in the next decade.

So we researched and studied recent material (2023-2025): reports, white papers, and think tank pieces from McKinsey, the World Economic Forum, the OECD, future gurus, and academic and public policy labs. All of them kindly try to predict "the future of work" and help us prepare for it.

We focused on "serious" sources, but none are entirely neutral. Big consultancies have their brand and commercial agendas. Think tanks have sponsors. The tech bros are locked in a fierce battle for investment and dominance. AI gurus are riding their ideological hobby horses, from doom to hype. That doesn't make them useless, but you have to keep their biases in mind and compare sources to be safe, just as with AI itself.

We got all the information we wanted. The tonal range was wide, from calm upskilling and readjustment narratives to grim automation warnings. Some paint AI as a net positive; others, as a civilizational threat.


Shock

But what surprised me most was how few of them looked seriously beyond 2030. They may not know, or may not dare to go further; I guess there is no ROI in being early and wrong. A few stretched to 2035 or 2045 with a scenario or a speculative chart, but the deeper thinking, the hard questions, the detailed reform ideas? They mostly stopped at the end of this decade.

Seriously?

This is strange, even a little frightening, because 2030 is 4 years and 3 months away. If that's what we now call "long term," then something has dramatically changed in how we talk about time, and things are moving even faster than we thought.

When Timeframes Still Meant Something

Not too long ago, we shared some sense of time horizons. In finance, short term meant months. In strategy, maybe a year. Medium term stretched from three to ten years, enough space for transformation without tipping into speculation. Long term meant 20 years or more: the domain of generational change, policy overhaul, infrastructure, and climate. These weren't precise, but they gave us reference points.

Then came AI.

And with it, everything sped up: product cycles, hype cycles, job forecasts, existential debates. Time shrank.

Why Does Timing Matter?

Because the space beyond 2030 is where the real change lives, where policy takes shape, where education needs to land, and where social contracts either fracture or evolve. If we can't talk clearly about that space, we can't build for it, and the conversation gets dominated by extremes: either utopia or doom.

That leaves a vacuum. The near future is saturated with advice. The mid- and long-term future is covered in clouds, like the bridge in the image above.

We Started Sketching

Unhindered by commercial, ideological, or financial constraints, we've started mapping a different kind of 2035. Not a beautiful fantasy or a horrible crisis, just a plausible image of the misty road ahead. The risk of not preparing is far greater than the risk of being wrong or naïve, especially when we're talking about our jobs, our schools, and our kids' futures.

Here’s what we see:

  • Most people still work. However, jobs are becoming modular, hybrid, and AI-augmented. AI literacy is the baseline, like email was in the 90s. People work with AI agents, not around them. And that changes the nature of work itself: less repetitive execution, more orchestration, like conductors or architects shaping flows of skills, needs, and systems.

  • For displaced workers, a new social contract is emerging. Not basic income as charity, but a form of income tied to contribution: community work, care, learning, mentoring. Economic dignity in exchange for genuine participation.

  • Education looks nothing like school, with personal AI tutors from an early age. Employers run academies. Degrees fade; learning stacks rise. But underneath it: deep thinking, not just clicking. Reflection and synthesis are prized again.

  • Trust is rare and valuable. In a world of synthetic everything, human intention becomes a signal. What’s verified, personal, and imperfect has new weight. People want to know: Did a real person make this? Did they mean it?

  • Human skills never left. Emotional intelligence, moral reasoning, and aesthetic taste are now professional assets. We're seeing a 'human premium': people paying extra for things that feel made by real people.

  • Global (AI) governance has fractured. Different regions, different AI rules. Alignment isn’t coming anytime soon. We’re operating in a patchwork.

  • The real skill is no longer prompting and contexting. It's co-agency: the ability to work with AI critically and collaboratively. Since we're all managing artificial resources now, everyone is a kind of leader. Knowing when to lead, when to follow, and how to collaborate well in this AI-soaked environment is the new super skill. As individual co-intelligence connects, it becomes a kind of collective super-intelligence, where humans and AI agents learn and act together.

Final Thoughts

2035 isn't science fiction. It's just ten years out. Or maybe we need to redefine what counts as science fiction. The fact that so few reports even try to describe it tells us something.

We can’t afford to stop imagining. If the reports all end in 2030, we’ll keep going.

That's what we're doing at Futurebraining: helping individuals and teams build readiness not just for what's coming next quarter, but for the kind of future that doesn't define itself. We define it, if we train for it.



If you’re interested in the full version of our Future of Work report, feel free to connect.
