Research

Original frameworks for understanding how AI reshapes institutions, judgment, and accountability.

Governance that doesn’t ship isn’t governance.

My research sits at the intersection of AI governance, institutional theory, and production systems. I study what happens when organisations actually deploy AI — not in theory, but in practice. The work combines philosophical frameworks (McMurtry’s Life-Value Onto-Axiology), empirical data (venture capital flows, institutional adoption patterns), and production validation (BORG, Arctic Tracker, Gjöll).

The methodology — Diagnostic Sociology — applies emergency medicine pattern recognition to complex social systems. Sixteen years as a paramedic taught me to read vital signs, identify cascading failures, and intervene before systems collapse. I apply the same diagnostic discipline to AI’s societal effects.

Original Contributions

Seven frameworks introduced through empirical research, thesis work, and production system validation.

Responsibility Fog

How accountability dissolves when decisions are distributed across algorithms, committees, and platforms.

When a hiring algorithm rejects a candidate, who is responsible? The developer? The deploying organisation? The vendor? The regulator who approved the framework? Responsibility Fog describes the systematic diffusion of accountability in AI-mediated decision-making — not as a bug, but as an emergent property of how institutions adopt algorithmic systems. The concept traces back to my 2015 BA thesis on telecommunications data retention, where I first identified unclear distribution of responsibility in surveillance systems.

First introduced: Beyond Fragmentation (MA thesis, 2026, grade: 9.5/10)

Forthcoming: The Irreducible Human (Brill Publishers, with Dr. Kristian Guttesen)

Cognitive Debt

How judgment atrophies when we outsource thinking to machines.

Analogous to technical debt in software engineering, Cognitive Debt describes the gradual erosion of human evaluative capacity through sustained algorithmic dependence. The concept captures something specific: not that AI makes us lazy, but that repeated delegation of judgment creates a compounding deficit — one that becomes visible only when the system fails and no one remembers how to think without it.

First introduced: Beyond Fragmentation (MA thesis, 2026)

Applied: Teaching curriculum design at UNAK; GRÓ Fisheries Training Programme; forthcoming nursing course on AI and women’s health

The Investment-Sentiment Gap

41% of AI venture capital targets tasks workers resist. 1.26% targets tasks they want automated.

An empirical finding from analysis of AI investment flows against worker sentiment data. Of the capital directed at these two categories, roughly $32 of every $33 flows toward automating tasks workers value and resist losing, while $1 goes toward tasks they actually want automated. This is not a market inefficiency — it is a structural misalignment between capital allocation and human need.
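The arithmetic behind the $32-versus-$1 framing can be checked in a few lines. As a sketch, the shares below are the 41% and 1.26% figures quoted above, and the $33 pool simply normalises those two categories against each other:

```python
# Shares of AI venture capital, per the finding above.
resist_share = 0.41    # tasks workers value and resist losing
want_share = 0.0126    # tasks workers actually want automated

# Ratio of resisted-task funding to wanted-task funding.
ratio = resist_share / want_share

# Normalise the two categories to a $33 pool.
pool = 33
to_resisted = pool * resist_share / (resist_share + want_share)
to_wanted = pool * want_share / (resist_share + want_share)

print(round(ratio, 1))     # 32.5
print(round(to_resisted))  # 32
print(round(to_wanted))    # 1
```

So for every dollar of capital aimed at automation workers welcome, roughly $32.50 targets automation they resist — the ratio the $32/$1 framing rounds to.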

First introduced: Beyond Fragmentation (MA thesis, 2026)

Methodology: Cross-referencing Crunchbase venture capital data with worker sentiment surveys

The Benevolent Cage

When protective governance becomes the constraint it was designed to prevent.

AI governance frameworks designed to protect human autonomy can themselves become mechanisms of control — setting boundaries so rigid that they prevent the adaptive capacity they claim to preserve. The concept interrogates the paradox of institutional AI policy: at what point does protection become containment?

First introduced: Beyond Fragmentation (MA thesis, 2026)

The VALOR Framework

A governance model grounded in life-value rather than market logic.

VALOR provides a structured alternative to existing AI governance models (EU AI Act risk tiers, NIST RMF) by grounding assessment in McMurtry’s Life-Value Onto-Axiology — asking not “what is the risk level?” but “does this deployment enable or disable the means of life?” The framework was validated against three production systems at the University of Akureyri.

First introduced: Beyond Fragmentation (MA thesis, 2026)

Validated through: BORG (AI governance platform), Arctic Tracker (conservation analytics), Gjöll (fire safety database)

Diagnostic Sociology

Emergency medicine pattern recognition applied to AI’s societal effects.

A methodological contribution that formalises the transfer of diagnostic thinking from clinical practice to social systems analysis. Just as a paramedic reads vital signs, identifies cascading organ failure, and triages intervention — Diagnostic Sociology reads institutional adoption patterns, identifies systemic governance failures, and proposes targeted interventions before collapse becomes irreversible.

First introduced: Applied throughout Beyond Fragmentation, institutional advisory work, and teaching methodology

Cognitive Complacency

What happens when comfort meets capability and produces nothing.

Where Cognitive Debt describes judgment erosion through AI dependence, Cognitive Complacency describes the prior condition: wealthy societies with the best resources to deploy AI meaningfully are the ones reducing it to convenience layers. Rwanda uses generative AI to save lives. Western welfare states use it to roast marshmallows.

First introduced: Blog post, “Stop Roasting Marshmallows!” (February 2026)

Thesis

Beyond Fragmentation: A Life-Value Alternative for AI Governance

MA in Social Sciences (AI and Society) · University of Akureyri · 2026

Grade: 9.5/10 · Highest grade award · 90 ECTS

Supervisor: Professor Giorgio Baruchello

An empirically grounded study analysing systemic failures in AI governance. Introduces Responsibility Fog, Cognitive Debt, the Benevolent Cage, and the VALOR Framework. Demonstrates the Investment-Sentiment Gap by cross-referencing venture capital data with worker sentiment surveys. Validated through three production systems and a 775-node Neo4j knowledge graph mapping the relationships between governance actors, frameworks, and failure modes.

Quarto · Neo4j · Python · XeLaTeX · Cypher · GitHub Actions

Forthcoming Book

The Irreducible Human: Life, Value, and Meaning in the Age of AI

Brill Publishers · Co-authored with Dr. Kristian Guttesen

A philosophical framework for AI governance grounded in what makes life worth living. Builds on the theoretical foundations of Beyond Fragmentation to propose a comprehensive alternative to market-driven AI governance — one rooted in the conditions that enable human flourishing rather than the conditions that enable capital accumulation.

Production Systems as Empirical Validation

This research is not purely theoretical. Each framework has been tested against production systems serving real users.

BORG

University of Akureyri’s hybrid AI governance platform. Multi-provider (Claude, GPT, Gemini), institutional knowledge retention, policy enforcement. The system that operationalises the governance theory.

Arctic Tracker

Conservation analytics platform integrating 473,000+ CITES trade records, IUCN assessments, and illegal seizure data for 43 Arctic species. Co-authored preprint with Dr. Tom Barry. Targeting Nature journal publication.

Gjöll

Open-access fire safety database documenting every confirmed fire fatality in Iceland since 1968 (113 incidents, 145 lives lost). Published on GAGNÍS national repository with DOI. Key finding: zero deaths in post-1998 buildings.

Sumarhus Alpha

Personal sovereign AI infrastructure. 5 Go agents, Rust MCP server, NATS JetStream mesh, 21 models across 7 providers, 15 containers, CI/CD to production. Vendor-agnostic, security-first.

Speaking & Presentations

Oxford AI Summit 2025

Autonomous AI Agents: Learning from Deployments

Oxford Lifelong Learning

Course instructor, AI courses (listed tutor)

Atlas of the Future Conference

Palazzo Adriano, Sicily (2024) — with Prof. Giorgio Baruchello and Prof. Rachael Lorna Johnstone

Rannís Expert Panel

Technology Development Fund, Spring & Fall 2025

Temjum tæknina Podcast

Two seasons, guests including Dr. Roberto Buccola, Giorgio Baruchello, Lilja Dögg Jónsdóttir (CEO, Almannaróm)

RÚV (Icelandic National Broadcasting)

Featured commentary on AI adoption

University of Akureyri Continuing Education

“Taming Technology” course, 100+ participants

Research Arc

The intellectual thread is continuous.

2012–2015

BA thesis identifies unclear distribution of responsibility in telecommunications surveillance → precursor to Responsibility Fog

2007–2022

Sixteen years of emergency services develops pattern recognition under pressure → foundation for Diagnostic Sociology

2022–present

4,000+ hours building AI systems → empirical grounding for governance theory

2024–2026

MA thesis formalises Responsibility Fog, Cognitive Debt, Investment-Sentiment Gap, VALOR Framework → validated through production systems

2026

The Irreducible Human (Brill) extends the theoretical framework to book-length treatment

Interested in this work?

I’m a researcher and builder focused on how AI reshapes institutions, judgment, and accountability. If your work intersects with AI governance, societal impacts, or responsible deployment — I’d be glad to hear from you.