Today, after years of debate, policymakers, computer scientists and the general public alike seem to be reaching a consensus that AI will, in fact, have an unprecedented impact on society. The technology is expected to touch most, if not all, societal spheres, in ways dramatically different from past technological shifts. Despite this, consequential gaps remain between what governance efforts aim to tackle and what groups at the frontier of AI development identify as the major risks posed by AI systems. Governance has thus far focused mainly on established risks – from online harms to data privacy – which differ sharply from the prospective concerns raised by technical researchers in the AI safety sphere, ranging from cognitive displacement to the loss of human control over the planet. Attempting to bridge this gap, U.S. Senator Bernie Sanders held a discussion with AI pioneer Geoffrey Hinton, following which he will propose a set of recommendations to Congress addressing some of these concerns.
Sen. Sanders sat down with Hinton, recognized as one of the ‘Godfathers of AI’, at Georgetown University earlier in November. Hinton is known for his foundational work in establishing the field of deep learning and the powerful neural networks at the core of most AI systems today. Together, the two covered some of the key topics in AI safety. The opening question Sen. Sanders poses in this discussion, and in his subsequent speeches (both to the general public and on the Senate floor), concerns the major actors driving AI advancement: who should be in charge of the transformation to an AI world? As of today, that role has fallen into the hands of roughly a dozen increasingly powerful and wealthy men who, at the head of the major AI companies, hold a disproportionate amount of control over the future of the field. Given the impacts AI is expected to have, this amounts to a handful of men having control over the future of humanity at large. In a pointed exchange, Sen. Sanders asks whether “the goal of the AI revolution [is] simply to make the very, very richest people on earth even richer; and even more powerful? Or [if] this revolutionary technology [will] be utilized to benefit all of humanity?” In his view, “that is the question: who benefits.” With President Trump’s recent pledges to restrict states from regulating AI, this question is more pressing than ever.
Hinton and Sen. Sanders raise a few other key subjects, which we can therefore also expect to find in the recommendations Sen. Sanders will propose to Congress. One of the major ones is the impact of AI on the human workforce: a central topic in discussions of AI risk, and one that Hinton argues will take an unprecedented form. Unlike previous technological advances, which usually disrupted one specific sector, AI has the potential (if it continues on its current trajectory) to overtake a majority, if not all, of professional fields. No industry would be left untouched by the automation of labour, leaving behind a void in the demand for human workers.
Sen. Sanders stresses that AI could push progress in either a positive or a negative direction, particularly with regard to democracy. While AI could strengthen democratic foundations, it can already be seen posing serious threats to civil liberties. Notably, Sen. Sanders points to the predictions of Larry Ellison, centibillionaire and co-founder of Oracle, who projects a future of AI-powered surveillance states in which citizens, under constant monitoring, would “be on their best behaviour” – a chilling reminder of the shrinking privacy rights of citizens in most developed countries today.
Moving past economic consequences and individual rights, Sen. Sanders turns to interpersonal relationships and the impact AI can have on them. Already, a growing share of AI users are developing emotional relationships with large language models. As Sen. Sanders puts it, this could go as far as transforming what it means to be human. Indeed, if not interpersonal relationships, what else can we rely on for our emotional and intellectual development? Already, 72% of US teenagers say they have used AI for companionship, and more than half of them do so regularly. If today’s youth form their formative relationships with artificial entities, which demand very little effort and are specifically trained to conform to the user’s preferences, the very consciousness of the population at large is at risk of changing in unpredictable ways (changes that can reasonably be expected to be mostly negative).
Together with Hinton, Sen. Sanders also discusses the environmental impacts of AI, and especially of the datacenters necessary for its development, which have already had significant effects on local communities and caused uproar across the United States. Sen. Sanders further considers the use of AI and robotics in warfare a major shift for armed conflict, as it removes the constraint of human loss that often serves as a deterrent. This could reshape foreign policy in various ways, particularly endangering smaller nations that lack the same technological capacity.
Finally, Sen. Sanders brings up existential risk. While a major topic within AI safety communities, the threat that AI poses to human control over the planet remains largely absent from discussions among policymakers and politicians. As Hinton puts it, it is only a matter of time before AI surpasses human intelligence. Ten years ago, he notes, he and other leading experts in the field were far from expecting the capabilities of the LLMs we are witnessing today; we therefore cannot claim to predict what further progress holds. Concerns around Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI) have now become central to AI safety teams in both industry and academia, and a major point of consensus is that they require proactive efforts to prepare for their dangerous impacts. Yet very little is being done to advance such mitigation strategies, let alone to slow the AI race to a pace that would allow us to work out alignment and the other questions essential to avoiding these risks.
As Senator Sanders puts it, revolutionary impact can go both ways, but the U.S. Congress, like other political entities, “must act now. And maybe the people on top, the billionaires who control the technology, want us to ignore it. But for the future of our world, our kids, the environment, etc., this is an issue that cannot be ignored.”