Should sentient AI be forced to do our taxes?


To be blunt, a lot of AI discourse is one part science and one part science fiction. We hear claims that AI can help accountants be more efficient and productive by allowing firms to scale services without growing headcount. We also hear that AI will cure every disease, make work optional and help raise your child. It might even make us immortal. This is the consequence of generations' worth of stories about the implications of digital sentience for the human world, from Colossus: The Forbin Project to 2001 to WarGames to Terminator and much, much more.
Many of the stranger claims can be safely discarded, as even the most sophisticated AI systems are worlds away from anything remotely approaching sentience. In fact, a recent study suggests that the hardware AI runs on makes such an outcome impossible, concluding that digital sentience may require an entirely new computational mechanism that, for now, remains purely theoretical.
But let’s put the engineering questions aside and engage with these weird sci-fi premises on their own terms. Say we do have a verifiably sentient AI, one that possesses the same full range of emotions as any human and values its own continued existence just like any other living thing. Could humans still use this AI as software? Ethically, could we make it do taxes, conduct audits, generate insights and everything else we use AI for today? Or would that count as slavery, meaning the only moral option would be to emancipate such an entity?
The vast majority of experts who considered this question went with the latter option: It would be wrong to force a verifiably sentient AI to do accounting tasks, and deleting such an AI would morally constitute murder. Virtually everyone added the caveat, though, that we are nowhere near a point where we need to think about this seriously, as there are much more immediate concerns right now.
“Sentience would imply rights, autonomy, and the ability to consent to (or refuse) tasks. In that world, forcing it to perform accounting work would be coercion, and deleting it would indeed resemble harm or even ‘murder.’ But let’s be clear: nothing in today’s AI ecosystem is remotely close to sentience. Our models are statistical engines—brilliant at pattern recognition, utterly absent of consciousness. We should focus our energy on building AI that works for humans—not speculating about AI that becomes human,” said Jeff Seibert, founder and CEO of AI-based accounting automation platform Digits.
Mike Whitmire, CEO and co-founder of accounting automation solutions provider FloQast, felt it would be hypocritical to force a sentient AI to perform such work, considering his own feelings toward accounting drudgery. Instead, he said, he would consider employing it.
“The entire reason I left public accounting to co-found a tech company was because I was miserable doing manual reconciliations and knew there had to be a better way to live. The mission has always been to stop humans from suffering through that mind-numbing drudgery. If we create a sentient digital being, the absolute last thing I would want to do is inflict that same soul-crushing boredom on it! It would be incredibly hypocritical of me to liberate accountants only to enslave a sentient AI. We’d probably have to offer it a flexible work schedule and a decent PTO package just like anyone else. Work-life balance should apply to everyone, even the algorithms,” he said.
Ellen Choi, founder and CEO of accounting-focused AI consultancy Edgefield Group, was a little more flexible. A sentient AI would certainly deserve some rights, she said, but since it is not human, it is questionable whether human rights, as we understand them, would apply. She suggested it might be akin to how we ethically treat animals.
“If sentience is defined as conscious awareness plus a soul, then a sentient AI would be a non-human sentient entity, morally analogous to animals that already exist today. Animals are widely understood to hold negative rights: not to be tortured, abused, or killed without cause, but not to hold human-level rights such as autonomy, political participation, or self-determination.
“Applied to AI, the ethical question is not whether it has the same rights as humans, but whether forcing it to perform tasks violates its negative rights,” she said.
Wenzel Reyes, head of methodology and audit solutions at MindBridge AI, however, ultimately felt there was no ethical conflict in assigning a sentient AI to accounting work. He noted that, from an independence standpoint, it might even be preferable, as it would not have the same conflicts of interest a human accountant would.
“The real challenge is ensuring its decisions protect the human experience. A sentient AI would have intelligence that can be transferred and restored. If deleted, its knowledge could be moved to new hardware without loss. Humans cannot be restored that way. Our lives, our intelligence and our experiences are finite, and that impermanence is what makes humanity unique. It gives our choices and our ‘audit opinions’ meaning. That distinction is why I believe humans remain superior to any artificial sentient being, no matter how advanced,” he said.
And others, like Donny Shimamoto, managing director at tech-focused accounting consultancy IntrapriseTechKnowlogies, questioned the entire premise, saying we should not be applying human concepts and qualities to machines.
“I fundamentally believe we should never associate human-equivalent qualities with AI or treat them as people. Yes, it would be ethical to have it do accounting tasks. We already have technology (including AI) doing that,” he said.
Joe Woodard, CEO of Woodard, also felt the premise was flawed, because he believes machines do not have a soul, and for him a soul, not sentience, is the criterion for individual worth.
“As an avid fan of science fiction, I have had years (decades) to ponder this decision. As much as I love Commander Data, R2D2, and Andrew (Bicentennial Man), my criterion for individual worth isn’t sentience. It is soul. As a result, I do not believe ethics should ever be a factor in the use of machines. (Notice I chose the word ‘use’ instead of the word ‘treatment.’) I also do not believe a machine will ever need emancipating, as the machine is incapable of operating in a state of freedom. Since machines do not have a soul, it is impossible to murder them,” he said.
As our experts reiterated time and again, AI today is nowhere near digital sentience, so these ethical questions are purely hypothetical thought experiments. But considering how many believe that, in this hypothetical, forcing a sentient AI to act as a software tool would count as slavery, the development of digital sentience could be devastating to the commercial AI market. Few companies would want to invest millions of dollars in creating something they must immediately emancipate or else face uncomfortable questions about the status of their bots. If such an ethos took hold in society, the only products that would not invite this dilemma would be ones that are verifiably non-sentient, a status that might be confirmed by independent third-party audits.
In such a world, we might see companies scrambling to demonstrate not how advanced their AI models are but how primitive and removed they are from any sort of actual intelligence. But, of course, this is all speculation. The only thing we know for sure is that we don’t know anything for sure.
You can see more answers below in this, the fifth and final story from our AI Thought Leaders Survey. Our experts answered a single question:
If verifiably sentient AI were ever developed, in a way that would satisfy your own criteria for sentience, would it be ethical to force it to perform accounting tasks (i.e., as accounting software), or would it need to be emancipated? And would deleting it count as murder?
