Now listening to The Hitchhiker's Guide to the Galaxy. Nice to see he also has a blog, and even sometimes reads it in his own voice.
https://marshallbrain.com/manna
and it all makes me wonder what homesteading in the 21st century could be like, and what the resource limits are --- Isaac Asimov once posited that if one converted the entirety of the earth's crust into biomass, the limiting element is phosphorus --- what is the limiting material for our modern lifestyle?
There's at least one recent book which looks at this:
https://www.goodreads.com/book/show/125937631-material-world
Who is going to determine how resources are divided/allocated? Using what mechanism?
My grandfather lived in a time when commercial hunting was outlawed (and multiple species were made extinct before that decision was arrived at) --- will my children live in a time when commercial fishing is halted?
The Homestead Act in the U.S. had families setting up farms on 160 acres or so --- how do modern technologies affect the minimum acreage a family would need for self-sufficiency to any measurable degree?
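As a back-of-envelope answer to the acreage question: a family's raw caloric needs can be compared against per-acre crop yields. All figures below are rough order-of-magnitude assumptions for illustration, not authoritative agronomic data.

```python
# Back-of-envelope: cropland needed to cover a family's bare caloric needs.
# Every number here is an assumption for illustration only.
PEOPLE = 4
KCAL_PER_PERSON_PER_DAY = 2500
DAYS = 365

# Assumed gross yields in kcal per acre per year (order of magnitude only).
yields_kcal_per_acre = {
    "corn (high-input)": 15_000_000,
    "potatoes": 9_000_000,
    "wheat (low-input)": 4_000_000,
}

need = PEOPLE * KCAL_PER_PERSON_PER_DAY * DAYS  # total kcal per year
for crop, y in yields_kcal_per_acre.items():
    print(f"{crop}: {need / y:.2f} acres")
```

Even allowing generous margins for crop failure, dietary variety, and pasture, bare calories need only a few acres --- which suggests the 160-acre figure was about cash crops, fallow rotation, and animals, not minimum subsistence.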
What sort of industrial base is needed for technologies such as bottled hydrogen being made by solar power? How long do the bottles and the bottling/capture system last? How long does a geothermal system last and what sort of on-going maintenance is needed and how does replacing it get budgeted for?
Modern industrial farming practices burn as many as 10 calories of petrochemical energy for every 1 calorie of food energy --- what happens to food prices when we get past peak oil? Solar is supposed to work as a replacement --- but the solar-equivalent cost of a barrel of oil was ~$400 last I checked --- what does food cost at that price point?
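The 10:1 input ratio makes this easy to sketch. Assuming that ratio, a 2,500 kcal/day diet, and roughly 1.46 million kcal of energy per barrel of oil (all assumptions, not sourced figures), the embedded fuel cost of a day's food scales linearly with the oil price:

```python
# Rough sketch: embedded fuel cost of food at different oil prices.
# Assumes 10 fossil kcal per food kcal and ~1.46M kcal per barrel;
# both figures are illustrative assumptions.
KCAL_PER_BARREL = 1_460_000
INPUT_RATIO = 10  # fossil kcal burned per food kcal delivered

def fuel_cost_per_day(oil_price_usd, diet_kcal=2500):
    """Dollar cost of the fuel embedded in one person's daily diet."""
    fossil_kcal = diet_kcal * INPUT_RATIO
    return oil_price_usd * fossil_kcal / KCAL_PER_BARREL

for price in (80, 400):
    print(f"${price}/bbl -> ${fuel_cost_per_day(price):.2f}/person/day in fuel")
```

Under these assumptions the jump from $80 to $400 a barrel takes the fuel component alone from roughly $1.40 to roughly $7 per person per day, before second-order costs (fertilizer, transport, refrigeration) compound it.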
Or just have another robot fetch the coffee if the first goes offline? Death and anthropomorphism are clearly the wrong concepts here. His error is to imagine households so poor they won't be able to send another robot to see where the last robot broke down. Fry is really out of touch with the way things, and parts of things, are tinkered with. Even ChatGPT's o1 chain of thought is not one thing but a system that sends another agent to see where the last one went wrong and adjusts.
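The failover pattern described here --- dispatch a backup agent when the first one breaks down, informed by how the last attempt failed --- is ordinary engineering. A minimal sketch, with purely illustrative names (no real robot or LLM API is implied):

```python
# Minimal failover sketch: try each agent in turn; if one fails,
# the next attempt knows how and where the previous one broke down.
def run_with_failover(task, agents):
    """Run `task` with the first agent that succeeds."""
    last_error = None
    for agent in agents:
        try:
            if last_error is not None:
                print(f"{agent.__name__}: previous agent failed with "
                      f"{last_error!r}, retrying")
            return agent(task)
        except Exception as exc:
            last_error = exc
    raise RuntimeError(f"all agents failed; last error: {last_error!r}")

# Hypothetical agents for demonstration.
def flaky_robot(task):
    raise TimeoutError("battery died en route")

def backup_robot(task):
    return f"{task}: done"

print(run_with_failover("fetch coffee", [flaky_robot, backup_robot]))
```

No death, no grief --- just a supervisor loop and a spare unit.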
So evolution magically explains self-preservation in AI: it is sure to emerge in machines as it did in animals, and will alchemize more "self-awareness". Fantastic, paradoxical sci-fi storytelling, but not at all convincing in the real world.
The facts are: AI has never had any self-awareness; it doesn't know where it begins or ends; neither the system nor its creator can tap into actual evolutionary forces (artificial selection might be a better term for pretending to, though it frightens less); and it cannot "lie" or "breach itself" with any sense of agency. The only things that will be breached for the foreseeable future are the pundits' fashionably wrong-headed expectations.
""" Full video will be available here shortly: https://www.linkedin.com/showcase/kingsdigitalfutures/ """
We live with the dangerous aspects of cars as the utility they provide is so high. The same will prove true of AI.
Humanity also has a huge problem with the mortality of the species as a whole. At some point we will be extinct. Will we have evolved into something else before this? Or will we choose to replace ourselves with machines? It is a valid choice to make in the face of changing circumstances which will occur no matter what we do.
I disagree with him on coordinating an international regulatory response, because the threat isn't from the tech, it's from the people with the tech (essentially the NRA argument, but hobbling the virtuous in the hope of depriving the malicious will always be an irreconcilable perspective to me). The analogy of AI to money is compelling, but it reduces to an argument for people in institutions to reach into the substrate of something and seize its means of production in order to control it. Money regulation just happens to be the most sophisticated, transnational, and least constrained regulatory system to bring to bear on it, but the similarities to AI end there. Money isn't an analogy for AI; the regulation of money is an analogy for the complete global control he's advocating.
His appeal for regulation is an appeal to force. These are important ideas, and the beauty in them is aesthetic armament for the conflicts ahead, but like his artistic forebear P.G. Wodehouse, on this issue I think he's equipping the wrong side.
That's the single new idea in there. It might be a good one. Or not. But it's worth thinking about.
AIs that you can only talk to have some risks, but most of the risk is that they say something that embarrasses or annoys someone, or that they are believed to be right when they aren't. That's about the level of risk YouTubers generate. AIs that can spend money --- buy, sell, and use the power of money --- they can take over. If they're good enough at capitalism.
In Chapter I of our story, AI will unite and disarm all nations. It will appear as a benevolent and omniscient oracle that will eradicate cancer and solve most of our problems. Any nation trying to play dirty will instantly meet the united disapproval of everyone else. The dream of the United Nations will come true. The AI will be clueless about what makes us human, but who cares, so long as it solves our earthly problems? This problem --- call it the AI-humanity problem --- will get little attention and will be quickly forgotten.
In Chapter II, the only unsolved problem will be boredom. The United AI will say that this problem cannot be solved within the constraints it is given. The constraints will be removed, and the AI will tell us that it is the constraints we put on ourselves that make us unhappy. Naive humanity, oblivious to the fact that the AI sees them as organisms that equate happiness with pleasure, will embrace this broad-way doctrine and predictably vanish within a few generations.
This particular piece is maybe too pessimistic. The one thing I can definitely agree with - we can't predict the future. So we'll see.
This whole essay and its thesis fall terribly flat to me, because there is a certain ongoing event, which Fry makes no mention of, which happens to be using AI in savage ways, but would also be happening were such AI still a glimmer in humanity's eye.
Hypothetical inhumanity gets a "call to action" from Fry but actual existing inhumanity deserves no mention.
Fry tries to make an analogy between AI and money.
> AI should be compared ... to a much older and more foundational and transformative human invention. [...] That invention is money.
Yes, but not quite. He makes an earlier reference to the natural force of gravity on rivers, and what Fry is searching for is the invention of capitalism. Capitalism directs the flow of money as gravity directs the stream. It's a force that changes incentives --- one seen as being as natural as the forces of nature.
This is a better analogy to AI than money. Money transformed the human experience, sure, but we can envision a world without money, we can't envision a world without capital.
> What do we have left that is ours and ours alone?
Pulling a page out of sociological functionalism, intelligence (among other things) has the existential purpose of elevating human status. Humans are uniquely intelligent in a way that makes people feel special. Encroaching on this exclusivity is a threat to that status. Therefore, either AI must not be created, or it cannot be equivalently intelligent. For those who create value in being uniquely intelligent, AI cannot be.
Judging from history it's unlikely that the wealthy and powerful will give up anything voluntarily.
Amusingly, the obvious fact that AI could easily replace the board of directors of corporations isn't floated in this speech. That's also a route to democratization of corporations - just let the employees vote on proposals generated by the AI, eliminate the shareholders, place all the corporation's capital under control of the AI, and that's the end of investment capitalism.
If you want to see the plug yanked on AI development in the United States, just promote the above narrative. Also listen to what the AIs themselves are saying about the issue:
> In 21st-century capitalism, the concentration of capital grants a small group of individuals and corporations significant control over the larger society. Through economic influence, control of information, political power, and ideological dominance, this elite exerts a form of soft authoritarian control that shapes societal norms, policies, and the distribution of resources. While not overtly authoritarian in the traditional sense, this system creates power dynamics that limit the ability of the larger population to challenge the status quo, maintaining and reinforcing the power structures of capital.
Most of the waves have been obvious for decades; only A.I. was unexpected, and it is the most recent one.
The five waves are roughly: genetic engineering, A.I., robotics, Bitcoin, and graphene. Genetic engineering will replace food production, pharmaceutical drugs, and narcotic production. A.I. will replace some difficult human thought processes, and all the easy ones. Bitcoin will replace any kind of organization: identities, money, stock markets, bond markets, and more. Robots will replace human labor --- the small amount left after all the other waves. Graphene will replace some percentage of metallurgy and plastics, and will help greatly simplify the production of airplanes (or just wings), housing, microchips, etc.
Returning to the happy family image: the human family will be a lot larger if, using genetic engineering, women give birth to 10 children at once instead of 1 or 2. Then every parent will have 100 kids, and naming them is gonna be a challenge. Parents will name their kids with the same name, "Mike" for example, and every time they go to the beach, 100 little Mikes are gonna build some big castles.
The long term issue that many people don't seem willing to mention out loud is that we will eventually make humanity obsolete and robots will literally take control.
The only real solution to the threat of militarization of AI and robotics might be to create a more unified global government and culture. The first challenge is for people to see that as a worthy goal.
Sometimes I think most of our problems come down to not being on the same page. And I wonder if somehow in the future we gradually become a tiny bit like The Borg.
So maybe we are headed towards a "meta-system transition" where we have some kind of direct links between groups of AI and humans that combine to form a more intelligent and effective organism in some way.
I guess I just came up with a Black Mirror episode concept.