INTRODUCTION
“The algorithm decided.”
It is a phrase we hear increasingly often in marketing meetings, in investment decisions, even in everyday conversations about social media or news feeds. The wording is casual, almost humorous. Yet hidden within it is a quiet shift in responsibility.
We blame machines for decisions we quietly surrendered.
Artificial intelligence today does not command authority in the traditional sense. It influences through design, ranking, recommendation, and prediction. And the more seamless the experience becomes, the easier it is to forget that behind every algorithm sits a human objective, a business model, or a governance structure that determined its purpose.
The question, therefore, is not whether AI is powerful. The question is whether we remain conscious of our own agency while using it.
THE ANTHROPOMORPHISM TRAP
One of the most subtle risks in modern AI is not technical; it is psychological. Humans naturally attribute personality, intention, and intelligence to systems that display conversational or adaptive behaviour. This tendency, known as anthropomorphism, makes machines appear more capable than they actually are.
We give AI names.
We thank digital assistants.
We describe recommendation systems as if they “know us.”
And slowly, we begin to over-trust outputs.
The illusion of intelligence can lead users to treat algorithmic suggestions as objective truths rather than probability-based predictions. A navigation app’s route becomes unquestioned. A content feed becomes assumed relevance. A financial scoring system becomes perceived fairness. The danger is not that AI makes decisions; it is that humans stop critically reviewing them.
A real-world example illustrates this risk clearly. In 2018, the technology company Amazon discontinued an experimental AI recruitment tool after discovering that it systematically downgraded résumés that included indicators associated with women. The algorithm had been trained on historical hiring data that reflected existing industry biases. What appeared to be an objective machine recommendation was in fact a statistical reflection of past human decisions. The system had not “chosen” to discriminate; it had simply learned it. The episode demonstrated how easily algorithmic outputs can inherit the assumptions set in their training data and why human oversight remains essential.
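The mechanism at work in that episode can be made concrete with a toy sketch. The data, keywords, and scoring function below are entirely hypothetical and deliberately naive; they illustrate only the general point that a model scoring candidates against historical outcomes will reproduce whatever patterns those outcomes contain.

```python
# A minimal sketch with invented data: a naive keyword-scoring model
# "learns" to penalise a keyword purely because past (biased) hiring
# decisions never favoured résumés containing it.

# Each record: (résumé keywords, was the candidate historically hired?)
history = [
    ({"engineering", "python"}, True),
    ({"engineering", "java"}, True),
    ({"engineering", "python", "womens_club"}, False),
    ({"engineering", "java", "womens_club"}, False),
    ({"sales"}, False),
]

def keyword_weights(records):
    """Score each keyword by the hire rate of résumés containing it."""
    weights = {}
    for word in {w for kws, _ in records for w in kws}:
        outcomes = [hired for kws, hired in records if word in kws]
        weights[word] = sum(outcomes) / len(outcomes)  # fraction hired
    return weights

weights = keyword_weights(history)

# "womens_club" scores 0.0 -- not because the model "chose" to
# discriminate, but because no past hire in the data included it.
print(weights["womens_club"], weights["python"])
```

The model never sees gender as a feature; the bias arrives indirectly, through a proxy keyword correlated with past rejections, which is exactly why such systems need human review of what they have actually learned.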
HUMAN AGENCY AND RESPONSIBILITY
Human agency is not removed by artificial intelligence; it is redistributed. The technology proposes, ranks, or filters, but someone ultimately chooses to follow or override the output.
This redistribution has legal and governance implications.
In corporate environments, algorithmic tools assist with recruitment, credit approvals, and risk analysis. Yet boards and executives remain accountable for outcomes. Delegating analysis does not equal delegating responsibility or oversight.
The same principle applies socially. When we accept a recommendation without reflection, we are not being controlled; we are participating in a convenience-driven trade-off. Self-awareness restores agency. Lack of awareness erodes it.
In governance terms, AI is similar to corporate delegation structures. A board may rely on advisors, but it cannot escape its fiduciary duty. Similarly, society may rely on algorithms, but accountability ultimately remains human.
Regulators around the world are beginning to recognize this redistribution of responsibility. The European Union’s Artificial Intelligence Act introduces a risk-based framework requiring human oversight for high-risk systems such as those used in recruitment and credit scoring.
These systems must be designed so that humans can review, challenge, or override automated outcomes.
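That oversight requirement translates into a recognisable design pattern: the automated system only ever produces a proposal, and no outcome takes effect until a named human has reviewed it, with the power to override. The sketch below is an illustration of that pattern only; the names, threshold, and data are hypothetical and not drawn from any regulation.

```python
# A minimal human-in-the-loop pattern: the model proposes,
# a named human reviewer disposes.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Decision:
    outcome: str                       # the system's proposed outcome
    confidence: float                  # model score behind the proposal
    reviewed_by: Optional[str] = None  # set only once a human signs off

def automated_screen(score: float) -> Decision:
    """A hypothetical model: it proposes an outcome, it does not decide."""
    return Decision("approve" if score > 0.7 else "refer", score)

def finalise(decision: Decision, reviewer: str,
             override: Optional[str] = None) -> Decision:
    """No outcome takes effect without a named reviewer, who may
    accept the proposal as-is or override it entirely."""
    decision.reviewed_by = reviewer
    if override is not None:
        decision.outcome = override
    return decision

# The machine refers a borderline case; a human makes the final call.
proposal = automated_screen(0.42)
final = finalise(proposal, reviewer="analyst_01", override="approve")
```

The design choice that matters is structural: the `reviewed_by` field is empty until a human acts, so an unreviewed proposal is visibly incomplete and an audit trail of who decided is built in.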
In the United States, New York City enacted Local Law 144, which requires companies using automated hiring tools to conduct independent bias audits and to inform applicants when algorithmic screening is being used.
China has adopted another approach through the Algorithmic Recommendation Management Provisions, which require large digital platforms to disclose how recommendation algorithms shape content feeds and allow users to opt out of algorithmic recommendations entirely.
In the United Kingdom, guidance from the Information Commissioner’s Office reinforces that individuals have the right to challenge decisions made solely by automated systems and request meaningful human review.
Despite their differences, these regulatory approaches reflect a shared principle: algorithms may assist decision-making, but responsibility must remain human.
ETHICAL DESIGN: THE ROLE OF BUILDERS, NOT JUST USERS
Agency is not solely an individual responsibility; it is also a design responsibility. The architecture of AI systems determines whether users are empowered or quietly influenced.
Three pillars are essential: transparency, human oversight, and accountability.
Ethical AI design is not about limiting innovation; it is about ensuring that innovation enhances human judgement instead of quietly replacing it.
THE ZANZIBAR AND EMERGING-MARKETS CONTEXT
For regions experiencing rapid digital growth, such as Zanzibar and many emerging markets, the stakes are uniquely high. Technology adoption often comes faster than regulatory maturity, creating opportunities and risks simultaneously.
Consider practical examples: algorithmic credit scoring, automated recruitment screening, and recommendation-driven content feeds.
These tools can dramatically increase efficiency and accessibility. Yet without transparency and oversight, they can also entrench bias, reduce accountability, or obscure decision-making logic.
Emerging markets possess a rare advantage: the ability to design governance frameworks alongside adoption rather than after systemic dependence forms.
Universities, private innovators, and regulatory bodies must therefore collaborate, not to slow technology, but to shape its integration responsibly.
CLOSING REFLECTION
The future of artificial intelligence is not a contest between humans and machines. It is a partnership defined by awareness, design, and accountability. Algorithms do not wake up with intentions. They follow objectives set by people. The responsibility for those objectives and their consequences remains ours.
Inner-Works Consultant © Copyright | Crafted With ❤️ By Abdulrazak Mustafa