Designing for Wisdom: How Emerging Technologies Can Augment Human Judgment and Ethical Action in Organizations
“Technology alone is not enough. It’s technology married with the liberal arts, married with the humanities, that yields us the results that make our hearts sing.” – Steve Jobs
“The struggle itself towards the heights is enough to fill a man's heart. One must imagine Sisyphus happy.” – Albert Camus, The Myth of Sisyphus
“Do engines get rewarded for their steam?” – Johnny Cash, The Legend of John Henry’s Hammer
As we enter the Knowledge Age, technology is no longer just a tool; it is a partner in decision-making. From generative AI to agentic systems, from robotic process automation (RPA) to intelligent recommendation engines, we are building machines that don't just compute – they suggest, respond, and increasingly, act.
But this raises the question: what kind of wisdom are we encoding into these systems?
And more importantly: how can organizations ensure that these technologies enhance human judgment rather than undermine it?
Why Augmented Wisdom Matters Now
The AI systems of today are fast, scalable, and increasingly persuasive. They can summarize vast reports, answer open-ended queries, and offer recommendations based on historical data. But what they still lack – and may always lack – is moral insight, situational awareness, and context-sensitive discernment.
These are the foundations of wisdom.
Research from scholars like Shannon Vallor (2016), in Technology and the Virtues, underscores that the ethical design of intelligent systems must incorporate philosophical traditions of virtue ethics – not just compliance checklists. Similarly, AI ethicists Wendell Wallach and Colin Allen (2008) argue that we need “moral machines” that can engage with values, not just rules.
As AI capabilities expand, organizations must ask: What is our role as stewards of wisdom, both human and artificial?
From Smart Machines to Wise Organizations
To make the leap from automation to augmentation, organizations need to embed three types of intelligence into their strategic design:
Computational Intelligence (what machines do well)
Pattern recognition, optimization, summarization, and speed.
Contextual Intelligence (what humans do well)
Nuanced decision-making, awareness of culture, narrative, and ethical complexity.
(See: Khanna, T. (2014). Contextual Intelligence, HBR)
Collective Intelligence (what organizations can do well together)
The ability to learn, adapt, and make wise decisions through group sensemaking, often supported—but not replaced—by AI.
(See: Malone, T. (2018). Superminds)
Neuroscience research shows that teams exhibiting greater inter-brain synchrony – shared patterns of neural activity during collaboration – consistently outperform others in group tasks, suggesting that collective insight is quite literally a shared neural event (Reinero, Dikker & Van Bavel, 2021; see also De Felice et al., 2023; Holroyd, 2022).
By designing systems and cultures that integrate these three forms, companies can unlock augmented wisdom – a fusion of machine performance and human purpose.
The Role of Agentic Design and Ethical Alignment
Agentic systems – AI agents that take initiative, make recommendations, and sometimes act autonomously – are becoming more prevalent. But as they take on greater roles in workflows and decision-making, the ethics of design becomes mission-critical.
Studies on team cognition reveal that cooperative goals significantly enhance interpersonal synchrony and task performance, offering a biological basis for designing AI systems that promote shared objectives rather than siloed optimization (Allsop et al., 2016; see also Gucciardi et al., 2018).
To support wisdom generation, organizations should:
Embed ethical reasoning into AI workflows
This includes incorporating moral frameworks (virtue ethics, consequentialism, deontology) into training data and human-in-the-loop review.
(See: Moor, J. (2006). The Nature, Importance, and Difficulty of Machine Ethics)
Ensure human oversight at key decision thresholds
Not all decisions should be automated. Hiring, firing, and policy decisions, for example, should require human judgment and review – even when AI is part of the input. (A minimal code sketch of such a review gate follows this list.)
Design for explainability and accountability
Trust in AI depends on transparency. Employees need to understand not just what the AI recommends, but why.
(See: Doshi-Velez & Kim, 2017. Towards a Rigorous Science of Interpretable Machine Learning)
Build ethical reflection into the organizational cadence
Regular forums for “ethics retrospectives” can create habits of shared moral reflection. Like postmortems for systems, but for values.
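To make these practices concrete, here is a minimal, hypothetical sketch in Python of a human-in-the-loop decision gate. Every name in it – Recommendation, route, HIGH_STAKES, confidence_floor – is illustrative rather than a real API: high-stakes categories and low-confidence recommendations are escalated to a human reviewer, and every outcome is logged with its rationale to support explainability and accountability.

```python
# A minimal, hypothetical sketch of a human-in-the-loop decision gate.
# Names (Recommendation, route, HIGH_STAKES) are illustrative, not a real API.
from dataclasses import dataclass
from typing import Callable

# Decision categories that should never be fully automated.
HIGH_STAKES = {"hiring", "firing", "policy_change"}

@dataclass
class Recommendation:
    category: str      # e.g. "hiring", "routing", "pricing"
    action: str        # the action the AI proposes
    confidence: float  # model confidence in [0, 1]
    rationale: str     # plain-language explanation of the recommendation

def route(rec: Recommendation,
          human_review: Callable[[Recommendation], bool],
          confidence_floor: float = 0.85) -> bool:
    """Return True if the proposed action may proceed.

    High-stakes categories and low-confidence recommendations are always
    escalated to a human reviewer; every outcome is logged with its
    rationale so decisions stay explainable and auditable.
    """
    needs_human = rec.category in HIGH_STAKES or rec.confidence < confidence_floor
    approved = human_review(rec) if needs_human else True
    print(f"[audit] category={rec.category} action={rec.action!r} "
          f"confidence={rec.confidence:.2f} escalated={needs_human} "
          f"approved={approved} rationale={rec.rationale!r}")
    return approved
```

The specific thresholds and categories are placeholders. What matters is the pattern: the system defaults to escalation, not automation, whenever the stakes or the uncertainty are high.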
Technology as a Mirror for Culture
Ultimately, technology reflects the values of its makers. If your organizational culture prizes speed over reflection, AI will optimize for speed. If your culture encourages ethical reflection and learning, AI can be tuned to support those aims.
This is why culture and technology must evolve together. Otherwise, you may end up sharing the fate of the man deemed “The Fastest Draw in the West” who never cleared leather, aka Toeless Joe.
Jesting aside, just as distributed neural populations integrate diverse inputs to guide decisions, organizations must foster collaborative structures that align diverse perspectives into coherent collective judgment (Kelly & O'Connell, 2015; see also Rollwage et al., 2020).
Leaders must ask:
Are we designing AI to support our highest values – or just our KPIs?
Are our teams empowered to question AI outputs – or do they defer by default?
Are we investing in systems that deepen human insight – or replace it?
The future belongs to organizations that can answer those questions wisely – and act on them.
Designing the Conditions for Wisdom
Here are tangible steps leaders can take today:
Pair AI implementation with learning journeys on ethics, systems thinking, and context literacy.
Incentivize reflective practice, not just output – reward those who slow down to consider alternatives.
Appoint “Wisdom Stewards” or rotating ethics leads within teams to monitor the interplay of human and artificial judgment.
Emerging work in social neuroscience shows that when leaders and followers achieve neural synchrony during group deliberations, teams are more cohesive and responsive – offering a new frontier for cultivating wise organizational leadership (Lu & Pan, 2023; see also Zhang et al., 2023).

These small shifts can create the cultural soil in which wisdom can take root, human and artificial alike.
Your Turn
How is your organization integrating human judgment and ethical reflection into your use of AI or automation?
What challenges – or successes – have you seen in aligning emerging tech with your organization’s values?
I’d love to hear your thoughts or examples; please share them in the comments.
References Cited
Allsop, JS, et al. 2016. Coordination and collective performance: Cooperative goals boost interpersonal synchrony and task outcomes. Frontiers in Psychology 7.1462.
De Felice, S, Hamilton, AFDC, Ponari, M & Vigliocco, G. 2023. Learning from others is good, with others is better: the role of social interaction in human acquisition of new knowledge. Philosophical Transactions of the Royal Society B 378.1870.20210357.
Doshi-Velez, F & Kim, B. 2017. Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608.
Gucciardi, DF, Crane, M, Ntoumanis, N, Parker, SK, Thøgersen‐Ntoumani, C, Ducker, KJ, Peeling, P, Chapman, MT, Quested, E & Temby, P. 2018. The emergence of team resilience: A multilevel conceptual model of facilitating factors. Journal of Occupational and Organizational Psychology 91.4.729-68.
Holroyd, CB. 2022. Interbrain synchrony: on wavy ground. Trends in Neurosciences 45.5.346-57.
Kelly, SP & O’Connell, RG. 2015. The neural processes underlying perceptual decision making in humans: recent progress and future directions. Journal of Physiology-Paris 109.1-3.27-37.
Khanna, T. 2014. Contextual Intelligence. Harvard Business Review.
Lu, K & Pan, Y. 2023. A collective neuroscience lens on intergroup conflict. Trends in Cognitive Sciences 27.11.985-86.
Malone, TW. 2018. Superminds: The surprising power of people and computers thinking together. Little, Brown Spark.
Moor, J. 2006. The Nature, Importance, and Difficulty of Machine Ethics. IEEE Intelligent Systems. 21.4.18-21.
Reinero, DA, Dikker, S & Van Bavel, JJ. 2021. Inter-brain synchrony in teams predicts collective performance. Social Cognitive and Affective Neuroscience 16.1-2.43-57.
Rollwage, M, Loosen, A, Hauser, TU, Moran, R, Dolan, RJ & Fleming, SM. 2020. Confidence drives a neural confirmation bias. Nature Communications 11.1.2634.
Vallor, S. 2016. Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting. Oxford University Press.
Wallach, W & Allen, C. 2008. Moral Machines: Teaching Robots Right from Wrong. Oxford University Press.
Zhang, H, Yang, J, Ni, J, De Dreu, CK & Ma, Y. 2023. Leader–follower behavioural coordination and neural synchronization during intergroup conflict. Nature Human Behaviour 7.12.2169-81.
Want to be part of the (r)evolution?
I am putting the finishing touches on the first draft of a book on this concept – which we are calling HumanCorps – with my friend and colleague Andrew Lopianowski. If you’d like to learn more about the book, or have stories of people who are putting these ideas into motion to be the change we need, please drop me a line.