Designing for Wisdom. How Emerging Technologies Can Augment Human Judgment and Ethical Action in Organizations

“Technology alone is not enough. It’s technology married with the liberal arts, married with the humanities, that yields us the results that make our hearts sing.” – Steve Jobs 

“The struggle itself towards the heights is enough to fill a man's heart. One must imagine Sisyphus happy.” – Albert Camus, The Myth of Sisyphus 

“Do engines get rewarded for their steam?” – Johnny Cash, The Legend of John Henry’s Hammer 

As we enter the Knowledge Age, technology is no longer just a tool; it is a partner in decision-making. From generative AI to agentic systems, from robotic process automation (RPA) to intelligent recommendation engines, we are building machines that don't just compute – they suggest, respond, and increasingly, act. 

But this raises the question: what kind of wisdom are we encoding into these systems? 
And more importantly: how can organizations ensure that these technologies enhance human judgment rather than undermine it? 

Why Augmented Wisdom Matters Now 

The AI systems of today are fast, scalable, and increasingly persuasive. They can summarize vast reports, answer open-ended queries, and offer recommendations based on historical data. But what they still lack – and may always lack – is moral insight, situational awareness, and context-sensitive discernment. 

These are the foundations of wisdom. 

In Technology and the Virtues, Shannon Vallor (2016) underscores that the ethical design of intelligent systems must draw on the philosophical tradition of virtue ethics – not just compliance checklists. Similarly, AI ethicist Wendell Wallach argues that we need “moral machines” that can engage with values, not just rules. 

As AI capabilities expand, organizations must ask: What is our role as stewards of wisdom, both human and artificial? 

From Smart Machines to Wise Organizations 

To make the leap from automation to augmentation, organizations need to embed three types of intelligence into their strategic design: 

  1. Computational Intelligence (what machines do well) 
    Pattern recognition, optimization, summarization, and speed. 

  2. Contextual Intelligence (what humans do well) 
    Nuanced decision-making, awareness of culture, narrative, and ethical complexity. 
    (See: Khanna, T. (2014). Contextual Intelligence. HBR.) 

  3. Collective Intelligence (what organizations can do well together) 
    The ability to learn, adapt, and make wise decisions through group sensemaking, often supported – but not replaced – by AI. 
    (See: Malone, T. (2018). Superminds.) 

Neuroscience research shows that teams exhibiting greater inter-brain synchrony – shared patterns of neural activity during collaboration – consistently outperform others in group tasks, suggesting that collective insight is quite literally a shared neural event (Van Bavel, Dikker & Reinero, 2021; see also De Felice et al., 2023; Holroyd, 2022). 

By designing systems and cultures that integrate these three forms, companies can unlock augmented wisdom – a fusion of machine performance and human purpose. 

The Role of Agentic Design and Ethical Alignment 

Agentic systems – AI agents that take initiative, make recommendations, and sometimes act autonomously – are becoming more prevalent. But as they take on greater roles in workflows and decision-making, the ethics of design becomes mission-critical. 

Studies on team cognition reveal that cooperative goals significantly enhance interpersonal synchrony and task performance, offering a biological basis for designing AI systems that promote shared objectives rather than siloed optimization (Allsop et al., 2016; see also Gucciardi et al., 2018). 

To support wisdom generation, organizations should: 

  • Ensure human oversight at key decision thresholds 
    Not all decisions should be automated. For example, hiring, firing, or policy decisions should require human judgment and review – even when AI is part of the input (see the sketch after this list). 

  • Build ethical reflection into the organizational cadence 
    Regular forums for “ethics retrospectives” can create habits of shared moral reflection – like postmortems for systems, but focused on values. 
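
To make the oversight point concrete, here is a minimal sketch of a human-in-the-loop decision gate. All of the names (Recommendation, route_decision, the category labels) and the thresholds are hypothetical and purely illustrative – the only point is that sensitive or low-confidence recommendations are routed to a person rather than executed automatically.

```python
from dataclasses import dataclass

# Decision categories that should never be finalized by the model alone
# (illustrative labels - define these for your own organization).
HUMAN_REQUIRED = {"hiring", "termination", "policy_change"}


@dataclass
class Recommendation:
    category: str      # e.g. "hiring", "expense_approval"
    summary: str       # the AI system's suggested action
    confidence: float  # model-reported confidence, 0.0 to 1.0


def route_decision(rec: Recommendation, confidence_floor: float = 0.8) -> str:
    """Route an AI recommendation to automation or to a human reviewer.

    Sensitive categories and low-confidence recommendations always go to a
    person; everything else may proceed automatically but is still logged.
    """
    if rec.category in HUMAN_REQUIRED or rec.confidence < confidence_floor:
        return f"ESCALATE to human reviewer: {rec.summary}"
    return f"AUTO-APPROVE (logged for audit): {rec.summary}"


if __name__ == "__main__":
    print(route_decision(Recommendation("hiring", "Advance candidate to offer stage", 0.95)))
    print(route_decision(Recommendation("expense_approval", "Approve $120 travel expense", 0.92)))
```

The value of a gate like this is less the specific rule than where it lives: the escalation logic sits in one explicit, auditable place, rather than being scattered across individual automations.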

Technology as a Mirror for Culture 

Ultimately, technology reflects the values of its makers. If your organizational culture prizes speed over reflection, AI will optimize for speed. If your culture encourages ethical reflection and learning, AI can be tuned to support those aims. 

This is why culture and technology must evolve together. Otherwise you risk the same fate as the man once deemed “The Fastest Draw in the West”: the man who never cleared leather, aka Toeless Joe.  

All joking aside, just as distributed neural populations integrate diverse inputs to guide decisions, organizations must foster collaborative structures that align diverse perspectives into coherent collective judgment (Kelly & O'Connell, 2015; see also Rollwage et al., 2020). 

Leaders must ask: 

  • Are we designing AI to support our highest values – or just our KPIs? 

  • Are our teams empowered to question AI outputs – or do they defer by default? 

  • Are we investing in systems that deepen human insight – or replace it? 

The future belongs to organizations that can answer those questions wisely – and act on them. 

Designing the Conditions for Wisdom 

Here are tangible steps leaders can take today: 

  • Pair AI implementation with learning journeys on ethics, systems thinking, and context literacy. 

  • Incentivize reflective practice, not just output – reward those who slow down to consider alternatives. 

  • Appoint “Wisdom Stewards” or rotating ethics leads within teams to monitor the interplay of human and artificial judgment. 

Emerging work in social neuroscience shows that when leaders and followers achieve neural synchrony during group deliberations, teams are more cohesive and responsive – offering a new frontier for cultivating wise organizational leadership (Lu & Pan, 2023; see also Zhang et al., 2023). These small shifts can create the cultural soil in which wisdom can take root: human and artificial alike. 

Your Turn 

How is your organization integrating human judgment and ethical reflection into your use of AI or automation? 
What challenges – or successes – have you seen in aligning emerging tech with your organization’s values? 
I’d love to hear your thoughts or examples; please share them in the comments. 
 


Want to be part of the (r)evolution?  

I am putting the finishing touches on the first draft of a book with a friend and colleague, Andrew Lopianowski, on a concept we are calling HumanCorps. If you’d like to learn more about the book, or if you have amazing stories of people putting these efforts in motion to be the change we need, please drop me a line.  
