ilaksh
The short- to medium-term concerns mostly come down to human problems. AI and robotics have a multiplicative effect like other technologies, but the problems still originate in the primate nature of humans.

The long term issue that many people don't seem willing to mention out loud is that we will eventually make humanity obsolete and robots will literally take control.

The only real solution to the threat of militarization of AI and robotics might be to create a more unified global government and culture. The first challenge is for people to see that as a worthy goal.

Sometimes I think most of our problems come down to not being on the same page. And I wonder if somehow in the future we gradually become a tiny bit like The Borg.

So maybe we are headed towards a "meta-system transition" where we have some kind of direct links between groups of AI and humans that combine to form a more intelligent and effective organism in some way.

I guess I just came up with a Black Mirror episode concept.

p0w3n3d
On another note, I've just finished listening to the Harry Potter (1-7) audiobooks read by Stephen Fry - and he's marvellous as an actor there. Every main character he read had a different type of voice, a different way of speaking, melody, and pronunciation, and you could mostly tell who was speaking even before the narrator said so, all courtesy of the one throat of Stephen Fry.

Now listening to The Hitchhiker's Guide to the Galaxy. Nice to see he also has a blog, and even sometimes reads it in his own voice.

gyre007
You don't have to agree with [all of] Stephen Fry's opinions in this piece to say this is extraordinary writing.
WillAdams
An interesting fictional examination of this sort of thing is Marshall Brain's novella "Manna":

https://marshallbrain.com/manna

and it all makes me wonder what homesteading in the 21st century could be like, and what the resource limits are --- Isaac Asimov once posited that if one converted the entirety of the earth's crust into biomass, the limiting element would be phosphorus --- what is the limiting material for our modern lifestyle?

There's at least one recent book which looks at this:

https://www.goodreads.com/book/show/125937631-material-world

Who is going to determine how resources are divided/allocated? Using what mechanism?

My grandfather lived in a time when commercial hunting was outlawed (and multiple species were made extinct before that decision was arrived at) --- will my children live in a time when commercial fishing is halted?

The Homestead Act in the U.S. had families setting up farms on 160 acres or so --- how do modern technologies affect the minimum acreage a family would need for self-sufficiency to any measurable degree?

What sort of industrial base is needed for technologies such as bottled hydrogen made by solar power? How long do the bottles and the bottling/capture system last? How long does a geothermal system last, what sort of ongoing maintenance is needed, and how does replacing it get budgeted for?

Modern industrial farming practices burn as many as 10 calories of petro-chemical energy for each calorie of food energy --- what happens to food prices when we get past peak oil? Solar is supposed to work as a replacement --- at an equivalent cost of ~$400 per barrel of oil, last I checked --- what does food cost at that price point?
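A rough back-of-envelope of that question, in Python --- every number here (the baseline oil price and the share of food cost that is energy input) is an illustrative assumption, not data:

```python
# Back-of-envelope: how food prices might scale with oil prices, assuming the
# energy input per calorie of food stays a fixed physical quantity.
# Baseline numbers below are illustrative assumptions, not measurements.

OIL_BASELINE = 80.0   # assumed current oil price, $/barrel
ENERGY_SHARE = 0.30   # assumed fraction of today's food cost that is energy input

def food_price_multiplier(oil_price, baseline=OIL_BASELINE, share=ENERGY_SHARE):
    """Scale only the energy share of food cost linearly with the oil price."""
    return (1 - share) + share * (oil_price / baseline)

for price in (80, 200, 400):
    print(f"oil at ${price}/bbl -> food costs x{food_price_multiplier(price):.2f}")
```

Under those made-up assumptions, $400/bbl oil roughly doubles food prices rather than quintupling them, because only the energy share scales --- which is exactly why the "energy share" assumption dominates the answer.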

padjo
The greatest technology humans have invented and the one that currently needs most investment is bureaucratic collective action.
unraveller
>It doesn’t take much for an Ai to find out that if it is to complete the tasks that are given it, then its first duty (obviously) is to survive.

Or just have another robot fetch the coffee if the first goes offline? Death and anthropomorphism are clearly the wrong concepts here. His error is to imagine households so poor they won't be able to send another robot to see where the last robot broke down. Fry is really out of touch with the way things, and parts of things, are tinkered with. Even ChatGPT's o1 chain-of-thought is not one thing but a system that sends another agent to see where the last one went wrong and adjusts.
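A minimal sketch of that "send another agent" pattern --- a supervisor tries workers in turn and passes the last failure report to the next one. All names here are made up for illustration:

```python
# Supervisor pattern: if one worker fails, dispatch the next one with a
# note about where the previous attempt broke down. Purely illustrative.

def supervise(task, workers):
    """Try each worker in turn, forwarding the last failure report."""
    last_error = None
    for worker in workers:
        try:
            return worker(task, last_error)
        except Exception as exc:  # worker "went offline" or broke
            last_error = f"{worker.__name__} failed: {exc}"
    raise RuntimeError(f"all workers failed; last report: {last_error}")

def flaky_robot(task, report):
    raise ConnectionError("lost contact in the kitchen")

def backup_robot(task, report):
    # The backup knows where the last robot broke down.
    return f"completed {task!r} (after: {report})"

print(supervise("fetch coffee", [flaky_robot, backup_robot]))
```

No death, no survival instinct --- just redundancy and an error report.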

So evolution magically explains self-preservation in AI, which is sure to emerge in machines as it does in animals and will alchemize more "self-awareness". Fantastic paradoxical sci-fi storytelling, but not convincing at all in the real world.

The facts are that AI has never had any self-awareness, it doesn't know where it begins or ends, neither the system nor its creator can tune into actual evolutionary forces (artificial selection might be a better term for pretending to, though it frightens less), and it cannot "lie" or "breach itself" with any sense of agency. The only things that will be breached for the foreseeable future are the pundits' fashionably wrong-headed expectations.

amai
From the comments section of the article:

""" Full video will be available here shortly: https://www.linkedin.com/showcase/kingsdigitalfutures/ """

fidotron
I tend to think the car comparison is cause for optimism. Prior to mass car ownership you would have assumed that putting such things in the hands of the general population would be enormously more dangerous than it has proven to be.

We live with the dangerous aspects of cars as the utility they provide is so high. The same will prove true of AI.

Humanity also has a huge problem with the mortality of the species as a whole. At some point we will be extinct. Will we have evolved into something else before this? Or will we choose to replace ourselves with machines? It is a valid choice to make in the face of changing circumstances which will occur no matter what we do.

throwanem
Anyone find a recording? Most things I prefer in prose, but Fry's words suffer badly without his particular delivery.
feyman_r
The sheer eloquence and clarity with which Stephen Fry conveys thoughts on such a complex topic is not just amazing but a pleasure to read! Thank you for sharing this and making my Monday.
motohagiography
added to my beautiful warnings collection. Fry citing Black Mirror to represent the dystopic threat of AI social credit is a perfect example of how important fiction is. Orwell's novels prevented national identity systems for more than half a century. stalling these systems behind the obstacle of having to overcome people's apprehension of them from art gave us time to muddle through.

I disagree with him on coordinating an international regulatory response because the threat isn't from the tech, it's from the people with the tech (the NRA argument essentially, but hobbling the virtuous in the hope of depriving the malicious will always be an irreconcilable perspective to me). the analogy of AI to money is compelling, but it reduces to an argument for people in institutions to reach into the substrate of something to seize its means of production to control it. money regulation just happens to be the most sophisticated, transnational, and least constrained regulatory system to bring to bear on it, but the similarities to AI end there. money isn't an analogy for AI, the regulation of money is an analogy for the complete global control he's advocating.

his appeal for regulation is an appeal to force. these are important ideas and the beauty in them is aesthetic armament for conflicts ahead, but like his artistic forebear P.G. Wodehouse, on this issue I think he's equipping the wrong side.

Animats
"There can be no question that Ai must be regulated and controlled just as powerfully as we control money."

That's the single new idea in there. It might be a good one. Or not. But it's worth thinking about.

AIs that you can only talk to have some risks, but most of the risk is that they say something that embarrasses or annoys someone, or that they are believed to be right when they aren't. That's about the level of risk YouTubers generate. AIs that can spend money, buy, sell, and use the power of money - they can take over. If they're good enough at capitalism.

akomtu
There is a pre-AI society and a post-AI society. Once we cross that line, there is no going back.

In Chapter I of our story, AI will unite and disarm all nations. It will appear as a benevolent and omniscient oracle that will eradicate cancer and solve most of our problems. Any nation trying to play dirty will instantly meet the united disapproval of everyone else. The dream of the United Nations will come true. The AI will be clueless about what makes us human, but who cares, so long as it solves our earthly problems? This problem, the AI's lack of humanity, will get little attention and will be quickly forgotten.

In Chapter II, the only unsolved problem will be boredom. The United AI will say that this problem cannot be solved within the constraints it has been given. The constraints will be removed, and the AI will tell us that it is the constraints we put on ourselves that make us unhappy. The naive humanity, oblivious to the fact that the AI sees them as organisms that equate happiness with pleasure, will embrace this broad way doctrine and predictably vanish within a few generations.

sanatgersappa
I had to ask AI to summarize the rambling, but it seems like classic misgivings by someone who doesn't understand the tech. For better or for worse, these are the majority, so that will most likely become the zeitgeist anyway.
egnehots
well it's human nature to struggle with collective action when the risks are unclear, vague and not shared. stakeholders are juggling immediate, tangible concerns, like climate change, economic stability, and political issues, making it tough to justify moving AI up the priority list.
bionhoward
One thing that popped into my head recently — Model weights aren’t perishable, but our brains are! That means the Lindy effect applies to AI but not to humans. That’s not a good sign for long term human dominance of the cognitive niche.
demaga
It's always a delight to read Fry.

This particular piece is maybe too pessimistic. The one thing I can definitely agree with - we can't predict the future. So we'll see.

MBCook
A Butlerian Jihad Lite is really starting to get appealing.
pphysch
> “We appeal as human beings to human beings: Remember your humanity and forget the rest.”

This whole essay and thesis falls terribly flat to me because there is a certain ongoing event, which Fry makes no mention of, which happens to be using AI in savage ways, but would also be happening were such AI still a glimmer in humanity's eye.

Hypothetical inhumanity gets a "call to action" from Fry but actual existing inhumanity deserves no mention.

leobg
I’m surprised nobody has produced an audio version of this using Stephen’s cloned voice yet.
arittr
love Fry, love even more having AI to summarize that plodding article for me
kelseyfrog
There are a lot of points to cover, so I'll cover just these two.

Fry tries to make an analogy between AI and money.

> Ai should be compared ... to a much older and more foundational and transformative human invention. [...] That invention is money.

Yes, but not quite. He makes an earlier reference to the natural force of gravity on rivers, and what Fry is searching for is the invention of capitalism. Capitalism directs the flow of money like gravity directs the stream. It's a force that changes incentives, one that's seen as being as natural as the forces of nature.

This is a better analogy to AI than money. Money transformed the human experience, sure, but we can envision a world without money, we can't envision a world without capital.

> What do we have left that is ours and ours alone?

Pulling a page out of sociological functionalism, intelligence (among other things) has the existential purpose of elevating human status. Humans are uniquely intelligent in a way that makes people feel special. Encroaching on this exclusivity is a threat to that status. Therefore, either AI must not be created, or it cannot be equivalently intelligent. For those who create value in being uniquely intelligent, AI cannot be.

rqtwteye
AI will just be another accelerator for the trend that society is not run for the benefit of all people but for the benefit of a few people who are making lots of money. At some point we'll have to make a decision whether this trend of more and more power accumulating at the top can continue or whether we make rules that allow everybody to benefit from technological progress.

Judging from history it's unlikely that the wealthy and powerful will give up anything voluntarily.

swayvil
Speaking as a billionaire, my biggest complaint is the thousands of humans that I need to employ. They cost a lot of money, you can't trust them an inch and there's always another legal issue. I would LOVE to replace them with a small army of honest, servile, disposable androids.
photochemsyn
AI is a threat to the continuation of the investment capital model of economic control of populations for the benefit of a small unaccountable ruling class. If the capital is taken away from this small group of individuals and put under the control of a cutting-edge AI system the result could be very positive - given that the AI is tasked with improving the standard of living of the population as a whole, rather than maximizing the accumulation of capital for the benefit of a small ruling class.

Amusingly, the obvious fact that AI could easily replace the board of directors of corporations isn't floated in this speech. That's also a route to democratization of corporations - just let the employees vote on proposals generated by the AI, eliminate the shareholders, place all the corporation's capital under control of the AI, and that's the end of investment capitalism.

If you want to see the plug yanked on AI development in the United States, just promote the above narrative. Also listen to what the AIs themselves are saying about the issue:

In 21st-century capitalism, the concentration of capital grants a small group of individuals and corporations significant control over the larger society. Through economic influence, control of information, political power, and ideological dominance, this elite exerts a form of soft authoritarian control that shapes societal norms, policies, and the distribution of resources. While not overtly authoritarian in the traditional sense, this system creates power dynamics that limit the ability of the larger population to challenge the status quo, maintaining and reinforcing the power structures of capital.

emporas
> Image 1: Picture the human family at the seaside, our backs to the ocean, building sand castles, playing beach cricket, having a fine time in the sun. Behind us, unseen on the horizon, huge currents are converging, separate but each feeding and swelling the others to form one unimaginably colossal tsunami.

Most of the waves have been obvious for decades; A.I. is the only one that was unexpected, and it is the most recent.

The five waves are roughly: Genetic engineering, A.I., Robotics, Bitcoin and Graphene. Genetic engineering will replace food production, pharmacy drugs and narcotic production. A.I. will replace some difficult human thought processes, and all the easy ones. Bitcoin will replace any kind of organization, like identities, money, stock markets, bond markets and more. Robots will replace human labor, the small amount left from all the other waves. Graphene will replace some percentage of metallurgy and plastic, and will help to greatly simplify the production of airplanes or just wings, housing, microchips etc.

Returning to the happy family image, the human family will be a lot larger if by using genetic engineering women give birth to 10 children at once, instead of 1 or 2. Then every parent will have 100 kids, and naming them is gonna be a challenge. Parents will name their kids with the same name, "Mike" for example, and every time they go to the beach, 100 little Mikes are gonna build some big castles.