Prioritizing human-centered tech innovation


While tech leaders debate AGI timelines ad infinitum, we’re ignoring the values crisis in current artificial intelligence. It’s widely reported that as much as 85% of AI projects fail - not because of technical limitations, but because of absent values design.

That is to say, most AI failures aren't technical - they are behavioral. We build systems based on idealized user behavior rather than real human psychology. As a result, most organizations struggle with what I call the value fog: brilliant AI models that can't connect to tangible impact.

Globally recognized Data/AI strategist, author and behavioral economics advocate.

So, how can we realistically govern super-intelligence when we can’t yet govern today’s far less powerful algorithms? My thinking rests on three prongs:

1. Current AI exposes our values problem

Organizations are making profound values choices - but by default, not by design. We currently optimize for measurable metrics (efficiency, speed) over meaningful outcomes (trust, human progress) that will matter even more in an AGI-dominated world.

Take, for example, an AI fraud detection system that prevents fraud brilliantly but destroys community trust and employee morale. AGI will simply amplify this values misalignment exponentially.

2. The real AGI race is institutional, not technical

Technical capabilities are advancing faster than ethical frameworks. So a key question is: which societies have the institutional maturity to guide (not just react to) AGI?

We need cross-sector collaboration, long-term thinking and democratic input processes, because AGI governance requires societal readiness, not just computational readiness.


3. AGI without values design = super-intelligence with the wrong optimization

Current AI optimizes for whatever we measure; AGI could optimize for the wrong civilizational outcomes too, which is dangerous. Then there is the ‘flawed imagination’ tendency: building systems that solve problems we don’t actually have.

Instead, we need multidimensional value frameworks embedded from the start, not bolted on as afterthoughts. Moreover, it surely makes sense to start practicing values-first AI design now, before AGI becomes the norm.

In short, and looking at this from various societal perspectives, what becomes clear is this: tech leaders should implement ‘values-first AI’ in all current and future projects as AGI preparation; policymakers should build cross-sector AGI governance capability, starting today; and organizational leaders need to master current multidimensional AI values before attempting AGI ethics.

This last point especially shines a spotlight on the gaps in current leadership traits, because AGI demands a new kind of trailblazer - one who designs systems rather than dictating outcomes.

AGI debates currently focus on control and competition - who will rule super-intelligence, you might ask? That’s the wrong question. We instead need leaders who design frameworks that democratize AI value, not concentrate it.

The future will no doubt belong to those who enable and connect collective wisdom - not individual dominance - and here are a few further areas to consider:

Traditional leadership models are already failing at current AI scale

Command-and-control leadership can’t handle AI’s complexity and speed, and current AI projects fail because leaders try to dictate technical solutions instead of designing human-centered outcomes.

AGI will amplify this mismatch between leadership models and technological reality, so we now need leaders who architect systems for collective success.

Design and ‘democratize’ leadership for the AGI era

Create frameworks that enable good outcomes, rather than micromanaging processes, and make AI value creation accessible across all organizations and societies, not just tech elites.

For example, instead of centrally controlling AI decisions, design systems where humans can meaningfully participate in AI-augmented choices. After all, AGI governance should be through system design, not system control.

From optimization to orchestration

Leaders traditionally optimize for single metrics, but our increasingly digital age requires design-democratize leaders who orchestrate multidimensional value and create cross-functional collaboration mechanisms rather than hierarchical decision chains.

We now need to enable stakeholder input into AI values rather than imposing values from above, because AGI requires orchestrating collective intelligence, not just maximizing individual intelligence.

Fundamentally, we need to be asking ourselves and others: how do we create AI systems that enhance collective decision-making, and how do we ensure AGI’s benefits serve broad human progress?

And the ongoing leadership challenge is all about systems thinking, behavioral design, inclusive innovation and values architecture.

To summarize, AGI’s greatest power won’t be in what it can do, but in how we design it to amplify collective human wisdom. The leaders truly shaping beneficial AGI are those learning to design values-first systems that democratize intelligence, not concentrate it.

The organizations leading this space have stopped treating ethics as something you add to AI after it's built. They've made it fundamental to how they think about AI from the beginning. They understand that trust, once broken, becomes their most expensive business problem to solve.

Consequently, our next generation of AI companies - those who build AI tools as well as those who use them - won't be defined by their algorithms but by their ability to transform technical possibilities into human value.

Whether you're optimizing supply chains or serving citizens, the same principle applies: AI amplifies not just our capabilities, but our choices about what matters.

The organizations that truly thrive won't just deploy AGI as a tool; they'll rebuild their entire operating models around a central question - how can we create value that would be impossible without AI?

This isn't just about competitive advantage. It's about using one of humanity's most powerful inventions to solve our most pressing problems - to create businesses that are not just profitable but meaningful.

Quite simply, the future belongs to organizations that understand a simple truth: the real value of AGI and super-intelligence adoption lies not in what it is, but in what it enables us to become.


This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
