
Lack of Conviction in the Age of AI

My one-on-one with a work friend is often more coffee chat than work meeting. They're remote, so it's Zoom instead of a café, but the conversation flows—we talk about the work, why connection matters, and what's energizing or draining us.

This week, we were both wrestling with a new pattern we've been observing: the "empty eloquence" of AI-generated work. We had both been in meetings where a colleague shared something that was polished, but when we asked follow-up questions to understand the thinking, the conversation stalled.

"It would have been better if they'd come with questions," my colleague said. "Or rough ideas and competitive examples. At least then we could have a conversation."

We were seeing something that felt new and unsettling: the quiet acceptance that maybe conviction is no longer necessary. That smart-sounding arguments are sufficient, whether or not you believe them, learned anything from developing them, or built any connection with peers in the process.

On Conviction

Conviction isn't a flash of insight. It's a slow burn.

You start with a hypothesis—an architectural approach or design decision. You share it with coworkers for feedback. You test it against their challenges. When your arguments don't hold up, you learn. Or you should, if your strong opinions are held with humility.

This process is rarely about ending up where you started. The reward is winning over peers you respect, learning from them, being corrected by them, and creating something valuable through communication and connection.

Few things bond a team like challenging each other with candor and changing each other's minds in hard-won debate.

The process requires desirable difficulties—the effortful struggle that cements mastery, not shortcuts it. When we wrestle with ideas ourselves, when we face the discomfort of defending our reasoning to skeptical colleagues, we develop more durable and flexible understanding.

Something valuable is happening: we're building relationships. Decades of research confirm that strong relationships are the primary predictor of career success and life satisfaction. The constructive conflict from genuine intellectual disagreement doesn't just produce better software; it creates the trust and mutual respect high-performing teams depend on.

The allure of AI is its promise to bypass this struggle. It offers eloquent arguments without demanding conviction. But when we take this shortcut, the loss of conviction is just the first sign of what we're truly giving up.

What We Lose

The cost is deeper than stalled meetings or shallow technical discussions. When we shortcut conviction, we lose things fundamental to good software and strong teams.

Memory. You don't remember arguments with computers the way you remember debates with colleagues. The ideas that stick, that shape how you approach future problems, are the ones you've wrestled with alongside other people. The social context of idea development makes learning stick in ways solo interaction with AI cannot.

Trust. Professional relationships are built on intellectual honesty and demonstrated thinking. When people know you as someone who thinks deeply and can defend your ideas under pressure, you build compounding professional capital. Eloquent words without conviction erode trust faster than you can rebuild it.

Joy. Not caring about your ideas, not building conviction through struggle and debate, makes work less satisfying. Collaborative idea development is what makes challenging technical work sustainable, and enjoyable, over the long term.

Serendipity. Colleagues constantly acquire new context and share insights, connections, and opportunities. They remember your interests, your expertise, your past arguments. Humans have a massively larger context window than LLMs. Unlike an LLM, your colleagues will reach out to you—but only if you've built real relationships through meaningful collaboration.

The Empty Eloquence Problem

"These machines honor me with their lips and their bytes, but their heart is far from me. Computers do words better than you - but they don't feel anything... the universe is created to have people who feel... the worth, glory, beauty and wonder of grace." - John Piper

You don't need Piper's theology to see his point. AI excels at producing authoritative, reasoned, even compelling words. But conviction isn't about eloquence; it's the experience of wrestling with ideas until they become yours.

When someone presents AI-generated work as their own, they're offering empty eloquence—words without the underlying experience of belief, doubt, and discovery that makes human discourse meaningful. The problem isn't necessarily dishonesty, but the allure of a shortcut that bypasses the very struggle that builds conviction and trust.

Human-written content carries richer emotional signals than AI-generated text, which tends to be blandly positive. We can sense the difference, even when we can't articulate it.

This matters because our colleagues aren't processing units. They're relationship partners in the deep work of building something meaningful. When we shortcut the conviction-forming process, we aren't just being inefficient; we're undermining the connections that make our work sustainable.

This erosion has costs beyond individual conversations.

The Case for AI (With Radical Transparency)

I'm not arguing against AI tools. I use them extensively and think engineers should lean into AI more, not less. The issue isn't the technology; it's our use and our honesty.

The key is transparency. When I collaborate with AI, I say so: "I collaborated with Claude on this." When I pull research from an AI conversation, I frame it clearly: "I got this perspective from an AI, and I'm curious if it holds weight with you." When I'm unconvinced by an AI-generated argument, I say: "I find this reasoning compelling on the surface, but it doesn't resonate with me. What do you think?"

This transparency doesn't weaken your position; it strengthens it. It signals intellectual honesty and invites collaboration. It distinguishes between repeating borrowed ideas and sharing something you've wrestled with, regardless of whether that involved AI.

The goal isn't to avoid AI. It's to preserve the human process of forming conviction while leveraging AI where it helps. AI excels at gathering new context, surfacing counterarguments, and organizing information. It can't care about the outcome, build relationships, or develop the conviction that comes from rigorous inquiry.

For example, I use AI to keep documentation up-to-date by feeding it call transcripts and having it compare what was discussed with what's written. It identifies where decisions have drifted and suggests updates, while also writing comprehensive meeting notes. This frees me to focus on the strategic thinking and relationship building that happens during those conversations, rather than getting bogged down in aligning documentation.
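
For the curious, here's roughly the shape of that pipeline. This is a minimal sketch, not my production setup: it assumes the Anthropic Python SDK with an ANTHROPIC_API_KEY in the environment, and the prompt, file paths, and model name are all illustrative.

```python
# Sketch: ask a model to diff a call transcript against the current doc,
# flag decisions that have drifted, and draft meeting notes.
# Assumes `pip install anthropic` and ANTHROPIC_API_KEY set in the environment.
from pathlib import Path

import anthropic

PROMPT = """Compare this call transcript against the design doc below.
1. List decisions discussed on the call that have drifted from the doc.
2. Suggest specific doc updates for each.
3. Finish with concise meeting notes.

<transcript>
{transcript}
</transcript>

<doc>
{doc}
</doc>"""


def suggest_doc_updates(transcript_path: str, doc_path: str) -> str:
    client = anthropic.Anthropic()
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # illustrative; use whatever's current
        max_tokens=2000,
        messages=[{
            "role": "user",
            "content": PROMPT.format(
                transcript=Path(transcript_path).read_text(),
                doc=Path(doc_path).read_text(),
            ),
        }],
    )
    return response.content[0].text


if __name__ == "__main__":
    print(suggest_doc_updates("transcripts/weekly-sync.txt", "docs/architecture.md"))
```

The interesting part isn't the code; it's the prompt. Asking for drift against a specific doc, rather than a generic summary, is what makes the output worth acting on.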

When AI helps gather context or identify risks, it frees cognitive space for the uniquely human work of judgment, creativity, and relationship building. The question isn't whether to use AI; it's whether you're transparent about its use and how you ensure the valuable parts of the work remain yours.

The Path Forward

The teams that do this well won't reject AI on principle, nor will they automate every task. They'll use AI to handle toil so they can spend more time on the hard problems that require human judgment and interaction—collaboration, learning, and the intrinsic value in being challenged by another person.

Their relationships won't get bogged down in conversations about AI slop. They'll embrace AI for what it is—a tool—and their approach will vary. For instance, sometimes I let AI do a first pass to catch code-quality issues, freeing me to focus on architectural concerns. Other times, I do my own review first, then use AI as a "second reviewer" to see what I missed.
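
When I use AI as that first-pass reviewer, the setup is equally simple. Here's a sketch under the same assumptions as above (Anthropic SDK, illustrative prompt and model), asking only for the code-quality pass and explicitly leaving architecture to me:

```python
# Sketch: pipe a git diff to a model for a code-quality-only first pass.
# Architecture and design feedback is deliberately excluded; that's the
# human reviewer's job. Same assumptions as the earlier sketch.
import subprocess

import anthropic


def first_pass_review(base_branch: str = "main") -> str:
    # Diff the working tree against the base branch.
    diff = subprocess.run(
        ["git", "diff", base_branch],
        capture_output=True, text=True, check=True,
    ).stdout
    client = anthropic.Anthropic()
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # illustrative
        max_tokens=1500,
        messages=[{
            "role": "user",
            "content": (
                "Review this diff for code-quality issues only: naming, error "
                "handling, dead code, missing tests. Do NOT comment on "
                "architecture or design; a human reviews that.\n\n" + diff
            ),
        }],
    )
    return response.content[0].text
```

The constraint in the prompt is the point: the tool handles the toil, and the judgment calls stay with people.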

I recently saw a situation where AI could have documented patterns in a legacy codebase in minutes, but the team chose to do it manually, which took significantly more time. We don't yet have a shared framework for when to use these tools. The goal isn't to replace human judgment, but to augment it, and we're all still figuring out what that looks like in practice.

The stakes are higher than productivity. When we shortcut conviction, we lose the relationships that make our work meaningful. We lose the joy of wrestling with ideas alongside people we respect. We lose the trust that builds when colleagues know you as someone who thinks deeply.

So preface contributions with context: "I used AI to research this, and here's what I think..." Less posturing, more curiosity.

The goal isn't proving we can do everything by hand, but using AI thoughtfully so we can focus on judgment, creativity, and cultivating shared understanding. AI will assist with information gathering; conviction will be formed through human dialogue. The results won't just be better technical outcomes; they'll be deeper working relationships and people learning from each other, not just repeating information.

I'm trying to approach these ideas with humility. I'm still working out when to lean on AI versus when to insist on human processes. The answer isn't a blanket rule but a series of intentional choices made with transparency and relationships in mind.

I'm also having more fun writing software and doing knowledge work with AI assistance than I have in years. The point isn't to avoid AI or to embrace it uncritically, but to use it thoughtfully while protecting what makes us human: our capacity for conviction, connection, and collaborative growth.

When we get the balance right—when we use AI for augmentation, stay transparent about its use, and preserve space for the desirable difficulties that build conviction—we create something beyond better software. We create the connection that comes from intellectual struggle shared with people we respect.