From Human Choice to Co-Agency: How AI Is Reshaping Responsibility, Decision-Making, and What It Means to Lead
Every day, more decisions are made not by people alone, but by people working with machines.
Whether you’re using ChatGPT to draft a proposal, letting an algorithm screen resumes, or relying on predictive analytics to target customers, you’re no longer making choices alone.
Welcome to the age of co-agency—where human and machine intelligence are deeply entangled.
This shift challenges not just our workflows, but some of our most cherished assumptions about authorship, responsibility, and free will. It raises old questions—echoed in religious traditions, philosophical systems, and modern neuroscience—about what it really means to choose, to act, and to be accountable.
And for business leaders, the answers are more than theoretical. They’re strategic.
When the Mind Extends Beyond the Brain
Cognitive scientists Andy Clark and David Chalmers introduced the idea of the “extended mind”—the theory that our thinking isn’t confined to the skull. When you write notes, use a calculator, or ask AI to draft an email, those tools become part of your cognitive system.
In business, this is already happening at scale:
AI shapes hiring decisions, performance reviews, and even who gets funding.
Predictive models influence what products we build and who we target.
Generative AI is involved in design, coding, writing, and even leadership communication.
But when a decision is co-created by a human and a system, who owns the outcome? And how do we assign credit—or blame?
Ancient Questions, New Forms
These aren’t new dilemmas. For centuries, thinkers have asked: Are we truly the authors of our actions? Or are we instruments of something larger?
The Stoics, especially Epictetus, emphasized prohairesis, the faculty of moral will. You can’t control the world, but you can govern how you respond. That inner choice is the seat of dignity.
Aristotle saw humans as rational agents capable of deliberative choice—but also acknowledged that our habits, upbringing, and society shape what we’re capable of choosing.
In Christian theology, particularly through Aquinas, the will is free but requires alignment with reason and divine grace. Sin and virtue both assume agency.
Islamic scholars such as Al-Ghazali debated the tension between divine decree (qadar) and human moral responsibility.
Hinduism and Buddhism question the idea of a fixed self entirely. In the Gita, Krishna teaches Arjuna to act without attachment to outcomes—a form of ethical agency beyond ego. Meanwhile, Buddhist thinkers reject the notion of a permanent agent altogether, seeing actions as arising from interdependent causes.
Now, as AI becomes part of how decisions are made, these spiritual and philosophical struggles come roaring back—but in the language of design, automation, and algorithmic ethics.
The Crisis (and Opportunity) of Moral Ownership
Business has long operated on the idea that people make choices and own their results. But when decisions emerge from a hybrid of human intent and machine suggestion, our old accountability models start to break down.
For example:
If an algorithm rejects a job applicant based on biased data, who is responsible?
If an AI-generated financial report leads to a bad investment, who’s at fault?
If a leader uses AI to shape communication and policy, but it’s emotionally tone-deaf, whose voice was it really?
We’re entering an era where responsibility must be reframed as shared, not shifted. The question isn’t “Who clicked the button?” but “Who designed the system, and how did it shape the choice?” That reframing is both a challenge and an opportunity: to build explicit risk modeling and probabilistic thinking into the contours of decisions themselves, as the sketch below illustrates.
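To make that concrete, here is a minimal, hypothetical Python sketch of a decision-routing policy that weighs a model’s confidence and expected payoff, and escalates low-confidence cases to a human. The names (`ModelRecommendation`, `route_decision`, `HUMAN_REVIEW_THRESHOLD`) and the threshold value are illustrative assumptions, not any specific product’s API.

```python
from dataclasses import dataclass

@dataclass
class ModelRecommendation:
    """A hypothetical machine recommendation, with its own uncertainty."""
    action: str            # e.g., "approve_loan"
    confidence: float      # model's estimated probability it is right (0-1)
    expected_value: float  # estimated payoff if the action is taken

# Illustrative policy: below this confidence, a human must review.
HUMAN_REVIEW_THRESHOLD = 0.85

def route_decision(rec: ModelRecommendation) -> str:
    """Route a co-created decision: proceed, decline, or escalate."""
    # Weight the payoff by the model's confidence to get a
    # risk-adjusted value for acting on the recommendation.
    risk_adjusted = rec.confidence * rec.expected_value

    if rec.confidence < HUMAN_REVIEW_THRESHOLD:
        return f"escalate to human review ({rec.action}, confidence {rec.confidence:.2f})"
    if risk_adjusted <= 0:
        return f"decline ({rec.action}, risk-adjusted value {risk_adjusted:.2f})"
    return f"proceed ({rec.action}, risk-adjusted value {risk_adjusted:.2f})"

print(route_decision(ModelRecommendation("approve_loan", 0.72, 1200.0)))  # escalates
print(route_decision(ModelRecommendation("approve_loan", 0.93, 1200.0)))  # proceeds
```

The design point is that the threshold itself is a value judgment: whoever sets it is shaping, and therefore sharing responsibility for, every downstream choice.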
Co-Agency in the Workplace: From Control to Curation
This new world asks for new models of leadership and organizational design. It’s no longer enough to focus on personal judgment or isolated decisions. Instead, leaders must become curators of environments and systems where good decisions are more likely to emerge.
That means:
Understanding how machine outputs shape human behavior (and vice versa).
Building teams that are literate in ethics as much as AI.
Embedding human values, like fairness, inclusivity, and compassion, into the tools we use (one concrete approach is sketched after this list).
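One concrete way to embed fairness into a screening tool is to test its outcomes against the EEOC’s “four-fifths” rule of thumb: no group’s selection rate should fall below 80% of the highest group’s rate. Here is a minimal, hypothetical Python sketch; the rule is real, but the function names and the data are illustrative assumptions.

```python
from collections import Counter

def selection_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """Compute the selection (hiring) rate for each group.

    `outcomes` is a list of (group_label, was_selected) pairs.
    """
    totals: Counter = Counter()
    selected: Counter = Counter()
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {group: selected[group] / totals[group] for group in totals}

def passes_four_fifths_rule(rates: dict[str, float]) -> bool:
    """Flag potential adverse impact: every group's selection rate
    should be at least 80% of the highest group's rate."""
    highest = max(rates.values())
    return all(rate >= 0.8 * highest for rate in rates.values())

# Illustrative (fabricated) screening outcomes for two groups.
outcomes = (
    [("A", True)] * 40 + [("A", False)] * 60
    + [("B", True)] * 25 + [("B", False)] * 75
)
rates = selection_rates(outcomes)
print(rates)                           # {'A': 0.4, 'B': 0.25}
print(passes_four_fifths_rule(rates))  # False: 0.25 < 0.8 * 0.4
```

A check like this doesn’t make a tool fair by itself, but it turns an abstract value into a measurable, reviewable property of the system.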
Leaders become stewards of human-machine ecosystems, not just bosses of individuals. The very nature and structure of organizations must change for the better, toward a human-centric design. Otherwise, we risk serious long-term damage, and not merely the kind imagined in Skynet-style science fiction dystopias.
Toward a More Humble, More Powerful Form of Agency
AI doesn’t eliminate human agency—it transforms it. It forces us to recognize that our decisions are always the result of systems, histories, and inputs that extend beyond ourselves.
In this light:
Prohairesis is still relevant—but must now include how we shape and respond to technological influence.
Spiritual traditions that encourage humility, reflection, and responsibility within interdependence offer frameworks for ethical leadership in a hybrid age.
Neuroscience, from Libet to Sapolsky, teaches us that even before AI, our sense of control may have been less complete than we imagined.
The future belongs not to those who cling to a narrow idea of control—but to those who understand that real leadership in this era comes from designing systems with care, owning their consequences, and staying grounded in values.
Further Reading & Exploration
To explore this emerging frontier—where AI, agency, and ethics meet—here’s a cross-disciplinary reading list:
Neuroscience & AI Ethics
Andy Clark & David Chalmers (1998), “The Extended Mind”
Shannon Vallor (2016), Technology and the Virtues
Kate Crawford (2021), Atlas of AI
Philosophy of Action & Agency
Aristotle, Nicomachean Ethics
Religious & Cultural Perspectives
Buddhist –
Christian –
Final Thought: The Future of Decision-Making Is Shared
We are entering an age where intelligence is no longer only human, and where decisions are no longer made in isolation. The leaders of the future will not just be smart or strategic. They will be ethically aware, systems-literate, and deeply human: capable of sharing agency with machines without losing responsibility, and of designing choices that reflect not just what is efficient, but what is right.
Want to be part of the (r)evolution?
I am putting the finishing touches on the first draft of a book about this concept, written with my friend and colleague Andrew Lopianowski, which we are calling HumanCorps. If you’d like to learn more about the book, or if you have stories of people who are putting these ideas in motion to be the change we need, please drop me a line.