
AI and Agency: Navigating Choice and Responsibility

Agency has increasingly become a critical dimension in discussions about artificial intelligence. While AI’s rapidly growing capabilities promise convenience and efficiency, they also raise essential questions about autonomy, choice, and responsibility. Two concepts, Automata Theory from theoretical computer science and Entropy from information theory, can help illuminate how humans can retain meaningful agency in an AI-mediated world.

Automata and Entropy: Breaking Algorithmic Loops

Automata Theory introduces the notion of “state,” the memory a system keeps of its past inputs. Today’s recommender systems are essentially state machines, continuously updated by our interactions. If not approached mindfully, these systems can trap us in self-reinforcing loops, narrowing rather than expanding our horizons.
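
To make the state-machine view concrete, here is a minimal sketch in plain Python (the categories and the tally-based policy are invented purely for illustration, not how any real platform works): the recommender’s entire “intelligence” is a piece of state that our clicks keep writing to.

```python
from collections import Counter

class ToyRecommender:
    """A recommender reduced to its essence: a state machine.

    The "state" is just a tally of past interactions; every click is a
    state transition, and the next suggestion is read off the state."""

    def __init__(self, categories):
        self.categories = categories
        self.state = Counter()              # memory of what we engaged with

    def recommend(self):
        # Without outside variety, the most-clicked category always wins:
        # the self-reinforcing loop.
        if not self.state:
            return self.categories[0]
        return self.state.most_common(1)[0][0]

    def click(self, category):
        self.state[category] += 1           # each interaction updates the state


rec = ToyRecommender(["pop", "indie", "jazz"])
for _ in range(5):
    rec.click(rec.recommend())              # passively accept every suggestion
print(rec.recommend())                      # -> "pop", today and every day after
```

The loop at the bottom is the passive user: accept every suggestion, and the state, and with it every future suggestion, collapses onto a single category.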

One practical approach to maintaining broader agency is to deliberately manage your digital “state.” For example, the performing artist Claire Boucher (“Grimes”) maintains multiple distinct online identities to avoid cross-contamination among recommendation systems, intentionally diversifying rather than narrowing her informational landscape. Similarly, we can adopt deliberate strategies—such as using separate accounts or consciously engaging with diverse information sources—to preserve and expand our personal agency.

Entropy, in information theory, refers to the level of surprise or unpredictability within a system. Deliberately introducing entropy, by occasionally selecting unusual or unexpected choices, enriches the diversity of AI-generated suggestions. For instance, a single encounter with a local indie band during a vacation opened up an entirely new musical subgenre for me, a lasting enrichment of my personal choice landscape. We can consciously seed our algorithms with variety, preventing our digital environments from becoming static echo chambers.
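
As a back-of-the-envelope illustration (Shannon’s formula applied to a made-up listening history, not a claim about any platform’s internal metrics), a few deliberate detours measurably raise the entropy of the signal we feed our recommenders:

```python
import math
from collections import Counter

def shannon_entropy(history):
    """Shannon entropy in bits: H = -sum(p * log2(p)) over observed categories."""
    counts = Counter(history)
    total = len(history)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

narrow = ["pop"] * 20                                   # the same thing on repeat
seeded = ["pop"] * 17 + ["indie", "jazz", "ambient"]    # three deliberate detours

print(f"{shannon_entropy(narrow):.2f} bits")    # 0.00 bits: perfectly predictable
print(f"{shannon_entropy(seeded):.2f} bits")    # ~0.85 bits: measurably more surprise
```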

In short, treating AI as an “Apparatus” we actively manage—intentionally inputting diverse information and breaking habitual patterns—puts us in control. Operate the Apparatus; don’t let the Apparatus operate you.

Predictive vs. Generative AI: Clarifying the Confusion

Today, media coverage around AI often conflates fundamentally different technologies like generative AI (such as large language models, or LLMs) with predictive AI (recommender systems, automated decision-making). While generative AI creates novel outputs—text, images, or code—predictive AI makes decisions based on existing patterns. Using generative models for purely predictive tasks, without careful prompting or proper reasoning frameworks, has frequently been the source of misleading or unsound conclusions—something I’ve observed firsthand during peer reviews of AI research.

Yet, generative AI can indeed contribute responsibly to predictive tasks, provided we employ careful prompting, structured reasoning models, or specialized traditional software approaches (often inaccurately marketed simply as “AI”). Good reasoning models and clear guidelines can potentially mitigate biases and increase reliability.
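
What “careful prompting” can mean in practice is forcing the model to expose its evidence and uncertainty instead of emitting a bare verdict, and refusing to act on answers that don’t. The sketch below is one possible scaffold; the call_llm parameter, the JSON fields, and the thresholds are my own illustrative assumptions, not any vendor’s API.

```python
import json

def build_predictive_prompt(case_description: str) -> str:
    """Scaffold a predictive question so the answer has to carry its reasoning."""
    return (
        "You are assisting with a risk assessment. Respond ONLY with JSON:\n"
        '{"evidence": [...], "reasoning": "...", "prediction": "low|medium|high", '
        '"confidence": 0.0-1.0, "missing_information": [...]}\n\n'
        f"Case:\n{case_description}"
    )

def assess(case_description: str, call_llm) -> dict:
    """call_llm is a placeholder for whatever model client is actually in use."""
    raw = call_llm(build_predictive_prompt(case_description))
    result = json.loads(raw)

    # Trust, but verify: reject answers that skip the reasoning or hedge nothing.
    required = {"evidence", "reasoning", "prediction", "confidence"}
    if not required <= result.keys():
        raise ValueError("Model output missing required fields; do not act on it.")
    if result["confidence"] >= 0.99 and not result.get("missing_information"):
        raise ValueError("Suspiciously certain answer; route to human review.")
    return result
```

Passing the model client in as a parameter keeps the scaffold vendor-neutral; the checks at the end are the “verify” half of trust, but verify.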

Ultimately, confusion in public discourse arises from insufficiently rigorous science communication and exaggerated media reporting. As with any software system, the appropriate stance towards AI is straightforward: trust, but verify.

Agency, AI Literacy, and the Role of Responsible Engineering

Navigating the complex interplay between human and AI agency requires more than verification or “Explainable AI” marketing hype. Responsible AI practice demands robust, long-established principles from classical computer science—methodologies such as rigorous testing (including Blackbox Testing), stringent quality assurance, and accountable governance. AI is not a fresh start; rather, it builds upon seven decades of advances in computer science and software engineering. We must apply the lessons we’ve already mastered, ensuring rigorous engineering practice and careful oversight remain at the forefront.
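
As one concrete reading of that claim, black-box testing translates to AI components almost unchanged: we pin observable behaviour to a specification without peeking inside the model. The classify function below is hypothetical, with a trivial stub standing in for the deployed model so the sketch runs as-is.

```python
# Black-box tests for an AI component: we check observable behaviour against a
# specification and never inspect the model's internals. `classify` is a
# hypothetical wrapper around whatever model is deployed; a trivial stub stands
# in here so the sketch runs as-is.

LABELS = {"approve", "review", "reject"}

def classify(text: str) -> dict:
    # Stand-in for the deployed model behind a stable interface.
    label = "review" if "missing" in text else "approve"
    return {"label": label, "score": 0.5}

def test_output_stays_within_contract():
    result = classify("routine renewal, complete documentation")
    assert result["label"] in LABELS           # no out-of-vocabulary answers
    assert 0.0 <= result["score"] <= 1.0       # scores stay in the agreed range

def test_known_cases_behave_as_specified():
    # Regression cases agreed with domain experts, kept under version control.
    assert classify("missing signature on contract")["label"] != "approve"

def test_cosmetic_changes_do_not_flip_decisions():
    # Invariance check: trivial rewording must not change the outcome.
    a = classify("application from J. Smith, complete documentation")
    b = classify("application from J. Smith, complete documentation.")
    assert a["label"] == b["label"]

if __name__ == "__main__":
    for check in (test_output_stays_within_contract,
                  test_known_cases_behave_as_specified,
                  test_cosmetic_changes_do_not_flip_decisions):
        check()
    print("all black-box checks passed")
```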

However, even solid engineering alone isn’t sufficient. AI literacy—the capability of end-users to understand, navigate, and critically engage with AI systems—is becoming foundational. As AI increasingly shapes our daily choices, a new kind of media competency is required. Users must develop the skills to see beyond default algorithmic recommendations and explore deeper options intentionally, akin to knowledgeable diners asking for off-menu items.

Practically, this means fostering user education, providing transparency about AI-assisted processes, and clearly communicating the scope and limitations of these systems. Recent discussions at forums such as the ENISA Trust Services Forum underline that everyone along the value chain holds responsibility: engineers, businesses, policymakers, and consumers alike.

Leopold Aschenbrenner’s insightful essay, Situational Awareness, identifies two polarized and fundamentally unserious public factions dominating the AI discourse, leaving the realistic middle ground alarmingly underrepresented. Such polarization underscores the urgent need for deeper, more nuanced conversations about AI literacy and responsible use.

Real-world professional examples illustrate generative AI’s potential to enhance human insight when used responsibly. Consider this example from immunology researcher Dr. Derya Unutmaz, who turned to OpenAI Deep Research to make sense of complex experimental data—demonstrating how AI, combined with human expertise, can expand our agency rather than restrict it.

Recent research by the Australian Treasury (Evaluation of Generative AI) confirms that even relatively modest AI systems (a lesser Level 1 on OpenAI’s Superintelligence scale) require only a few days for effective workplace integration. This underscores both the urgency and feasibility of rapidly enhancing AI literacy among professional and end-user audiences.

Cascading Agency: AI’s Positive Potential

The broader philosophical discussion around agency—highlighted by Andrej Karpathy and Reid Hoffman—underscores agency’s transformative potential. Karpathy emphasizes that agency surpasses sheer intelligence as a driver of meaningful outcomes, while Hoffman highlights AI’s unique potential to multiply human agency at scale, cascading from individuals through teams, companies, and eventually society.

By consciously managing our interaction with AI—through deliberate entropy, rigorous verification, and increased AI literacy—we enable ourselves to benefit from AI’s positive potential. Rather than passively allowing AI to reinforce yesterday’s behaviors, informed and proactive engagement can empower individuals to explore new intellectual and creative territories.

In conclusion, AI need not diminish our agency. Instead, through thoughtful engagement and rigorous practice, AI can become a tool that magnifies our individual and collective agency—unlocking human potential rather than constraining it.