Humans beyond AGI

By CounterCritical · July 2025

Let’s suppose for a moment that alignment is no longer a problem: that there is a universally agreed-upon set of rules with which the machines comply. In such a world, decisions big and small, technical and moral, would be made by superhumanly intelligent machines whose judgements cannot be questioned. What does it mean to be human in such a world? What does human morality mean? And would humans still need to be moral?

Historically, there have been warnings about worlds in which moral decisions are delegated to “systems” (H. Arendt). If machines make our decisions for us, we risk sliding into moral passivity.

Arendt emphasizes the evil that arises when humans, even humans naturally aligned with “human values”, abdicate judgement through intellectual disengagement. In her analysis of Eichmann’s testimony, she argued that he was not a sadist deeply convinced of the righteousness of his actions, but an ordinary bureaucrat who simply did what he was told: what she called the “banality of evil”. The parallel to our situation is clear: delegating ethical decisions to machines invites the same abdication of judgement.