Words matter. They shape how we think about the world—and how we respond to it. This is especially true when we talk about emerging technologies like AI.

I'm not launching a crusade to change how people speak. But I do believe the term artificial intelligence is both misleading and emotionally loaded. That’s why I’m trying something different: thinking of this technology not as “intelligence” but as an information factory.
Consider what “artificial intelligence” evokes: Hollywood machines like HAL 9000 and Skynet, or friendly humanoids with consciousness and will. These myths, shaped by decades of science fiction and tech industry hype, dominate the conversation. They fuel both fear and misplaced faith—and blur the actual stakes.
The reality? We’re not dealing with sentient beings. We’re dealing with industrial tools. And it matters that we see that clearly.
Instead of intelligence, picture a factory.
This factory processes text and data instead of raw materials. It doesn’t understand meaning; it computes probabilities. It produces text, code, and images—outputs that may look coherent but can contain hallucinations and errors.
We, the users, have become the quality control. If we don’t take that responsibility seriously, we risk being misled by flawed or synthetic content.
Thinking in terms of factories has consequences beyond word choice.
This isn’t just an academic point. It’s about digital sovereignty.
We need our own information factories—trained on Icelandic data, aligned with Icelandic values. Not to isolate ourselves, but to preserve choice and agency.
This is an experiment—not a doctrine.
But if we want to have grounded, responsible conversations about this technology, we need to choose our words carefully. Not out of dogma, but out of clarity.
If we stop seeing these machines as “intelligent agents” and start seeing them as information factories, we’ll be in a much better position to use them wisely.

The Sjalli-Kiss is a symbol of the freedom to make mistakes. In an age of surveillance culture and AI, are we truly living?

If it is considered administrative malpractice to use Claude to judge the contribution of scholars, why is it considered "academic integrity" to use Turnitin to judge the originality of a student?

A recent incident in Iceland reveals the dangers of letting an AI system like Claude assess human contributions it cannot understand. AI is powerful, but it should not judge careers. The article is based entirely on public media coverage and does not assume that all facts of the case are fully known.