As many have expressed before me, I want AI to mow my lawn and wash my dishes, not compose my music or my prose. To desire my dehumanization is illogical, thank you Mr. Spock.
And with Francesca Albanese's revelations about the filthy investments of the international corporate world, especially Big Tech, in genocide, I frankly don't want to be anywhere near anything they offer me.
If the same corporations profiting from genocide are also building the tools shaping our minds and voices, why should we trust anything they offer in the name of ‘progress’?
This, right here.
Vin
That's the saddest part of this kind of advanced technology. The people promoting and using it are the very ones who created and profited from the everyday plague that social media has become, and who have already saddled many millions around the world with time-wasting, unproductive daily activities through the use of their "products".
So not only will they be the beneficiaries of any further expansion of AI, and perhaps of its daily use by everybody, in whatever forms are yet to come, but they will keep profiting from every expansion after that. What that could be doesn't bear thinking about.
Slow down world and let us oldies catch up.
LIke!
When did what we want ever matter?
We want peace.
We want healthcare.
We want a clean environment.
We want a future for our children.
We want our elected representatives to represent us.
We wanted Trump to drain the swamp.
We wanted him to end the wars.
We wanted Biden to bring back empathy and compassion.
We wanted Obama to give us hope and change.
What we got was doped and shortchanged.
Mick said, “You can’t always get what you want.”
But even if we try sometimes, we can't get no satisfaction
I think Mick knew that no satisfaction was the main operant principle. And that getting what you need wasn't going to come from institutional sources.
Well said. I do mow my own lawn though as punishment : )
Plant Moss. No more mowing.
For me, the only good AI is no AI. I'll use my own mind, thank you very much.
If we keep trading human effort for machine convenience, at what point do we stop being the author of our own lives?
The question/concern is not really about 'trading human effort for machine convenience'. The history of civilizations has been about making human lives easier (and more enjoyable) by creating machines (of all kinds) to aid humans.
The REAL DANGER of AI (and LLMs) is twofold (IMO):
(1) The inequality (and knowledge) gap will increase due to unequal access to technologies such as AI and LLMs. The gap between the 'haves and have-nots' will widen, creating the set of problems that comes with increasing inequality. Those who use this technology to AUGMENT their thinking (versus OUTSOURCING their thinking, as most people currently seem to be doing) will leap ahead of the others. This will lead to a greater CONSOLIDATION of power in the hands of fewer and fewer people. You can see where I'm going with this line of reasoning...
(2) Instead of utilizing AI (and LLM) technologies to better themselves, most people seem to be using these tools as shortcuts: easing their 'cognitive load', outsourcing their creative energies (think of this as outsourcing to AI instead of to cheaper labor in poorer nations), and doing LESS cognitive processing - leading to a slow atrophy of mental abilities (and everything that comes along with that).
People tend to bring up 'the Terminator' or the Matrix movies as the logical dystopian result of too much technology, but I agree with you: the real danger is cognitive atrophy, and it could be generational. I think the movie 'Wall-E' might be a more compelling warning.
Yes, the 'Terminator' scenario doesn't scare me - what scares me more is 'human stupidity' and how hackable/psychologically manipulable humans truly seem to be. We seem to be our own worst enemies (as evidenced by the leaders we select and look up to, climate change, etc.).
I loved the movie 'Wall-E' - and I agree, that scenario is FAR MORE likely than 'skynet' (though I reckon we already have some version of it currently) or 'terminator robots' (again, we already have robot dogs - https://inews.co.uk/news/world/robodogs-replacing-troops-gaza-war-2940487 or https://www.sciencetimes.com/articles/47678/20231215/israel-defense-forces-employ-robot-dogs-assist-soldiers-gaza.htm currently).
Another link -> "US Army testing roll out of gun-mounted robot dogs in Middle East" (https://www.the-independent.com/tech/us-army-middle-east-robot-dogs-b2623027.html or https://www.newarab.com/news/robodogs-part-israels-army-robots-gaza-war)
Actually, come to think of it, 'machine-gun robot-dogs' DO scare me, though not sure how much.
Very well said. I am living that now. I am finding students who no longer know how to solve simple problems without deferring to our overlords. The frightening thing is that we know this, yet as far as I can see we are doing nothing but promoting these tools as the equivalent of learning. Logic is lost and left to the Landrus of the world. Yes, the username is a problem now, isn't it : (
how far are we still the authors of our own lives right now?
100% agree! I knew there was a reason I liked you (and your comments), Bob Martin.
Thanks, Chang, and likewise! 😀
Like!
I prefer humanity, thank you. Any day. No matter the inconvenience or cost. I don’t need the fucking AI 🤖.
AI is a lie. What is being called AI is simply machines that organize data, provided by the operators, using specific criteria, also provided by the operators. A customer, using an interpretive device - keyboard, voice, whatever - submits a request that the machines can then respond to. It's not magic. It's data processing. The real twists are the contributions made by the operators: the data provided and the search criteria. They provide the "humanity", both light and dark. Not the machines. The operators determine the politics. And they are human beings! With agendas! No magic! The illusion that the massive amount of data involved creates a kind of transcendent intelligence is a lie. It's still data processing, shaped by search criteria designed by human beings with human agendas. In other words: this is capitalism, and you are both the customer and the product. They want your money and your obedience. These are the same people who used these same machines to track people, then kill them and their entire families. Without hesitation or the slightest remorse.
Exactly.
That same argument is made about humans: neurons storing code made by other neurons. I wrote a story, Dirty Dozen to Mars, that deals with that question. Do Androids Dream of Electric Sheep? is an amazing work by Philip K. Dick, whom many at the time considered mentally ill. I wish I had had the opportunity to have coffee with him.
Caitlin, as most technological developments (over the last 75+ years) have proven to be tools usurped and co-opted by the capitalist class to increase 'profit-making opportunities', so too will LLMs (Large Language Models - a subset of the massive field of AI) follow suit.
And this time around, unlike previous times, most people will choose to use the tools of their OWN volition (by buying into the narratives and hype of everything AI). The level of critical thinking is already abysmal, and with people OUTSOURCING their 'ordinary' thinking/comprehension/etc. to such tools, humanity IS moving backwards. (Being in the technology industry and interacting with many techies and non-techies, I already see this happening currently).
Who needs fascism and authoritarianism/totalitarianism when 'ordinary people' dumb themselves down for the benefit of TPTB (without realizing their unconscious acceptance of such technology tools) right?
[PS: You might notice that some of your Substack readers have already bitten the 'LLM and AI' bug (praising it and using it in comment responses, and sometimes even to write articles). What can I say? A sucker seems to be born every minute...]
A few months ago I asked an AI a short but difficult question... a few days later I asked the same AI the same question... not surprisingly, I got a different answer... close, but not the same...
Yes, that's how LLMs work. They 'create' answers on the fly, through algorithms built on an amalgamation of vast amounts of data, plus rules that structure that data for human consumption/comprehension. There is also an element of randomness in how each word is chosen, which is why the same question rarely produces exactly the same answer twice.
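To make the on-the-fly part concrete: an LLM picks each next word by sampling from a probability distribution over candidates, rather than always taking the single most likely one, so repeated runs of the same question drift apart. A minimal sketch of that sampling step, using a toy three-token 'vocabulary' instead of a real model (the function name and scores here are illustrative, not any particular library's API):

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=None):
    """Pick one token index from raw model scores ('logits').

    Higher temperature flattens the distribution (more varied output);
    temperature near zero approaches greedy, near-deterministic output.
    """
    rng = rng or random.Random()
    scaled = [score / temperature for score in logits]
    peak = max(scaled)                      # subtract max for numerical stability
    weights = [math.exp(s - peak) for s in scaled]
    total = sum(weights)
    probs = [w / total for w in weights]
    draw = rng.random()
    cumulative = 0.0
    for index, p in enumerate(probs):
        cumulative += p
        if draw < cumulative:
            return index
    return len(probs) - 1                   # guard against float rounding

# Toy 'model': three candidate tokens, with token 0 scored highest.
logits = [2.0, 1.0, 0.5]
picks = [sample_next_token(logits, rng=random.Random(seed)) for seed in range(1000)]
# Token 0 wins most often, but tokens 1 and 2 still appear sometimes -
# that per-word randomness is what makes 'the same question' come out differently.
```

Lowering `temperature` toward zero makes the output nearly deterministic; raising it makes answers more varied, which matches the close-but-not-the-same behavior described in the comment above.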
We have heartless, brainless leaders and a ruling class devoid of any good human qualities of sympathy, empathy, generosity or compassion. Who needs AI on top of that.....
Elected by heartless, brainless voters.
>>"We have heartless, brainless leaders..."
I would rephrase that to say that we have 'smart leaders' who know how to take advantage of 'brainless humans' by utilizing technologies (such as AI) to speed up the process of exploiting and oppressing them. I agree with the rest of what you say.
(In the context of this article, by 'brainless humans' I mean humans that are already succumbing to the 'wow factor' of LLMs, as you will observe in some of the comments on today's article).
Chang, in this neoliberal capitalist and Trumpian world sliding fast into fascism, I cannot imagine that AI will be used for good. And the "smart leaders" we have will certainly use it to consolidate their power, with no thought to the social, environmental, moral, and ethical harm they are currently inflicting on humanity and the world. It will be another tool for colonising us even further. It has been described as "a powerful tool that has our socially and culturally biased intelligence woven into it while seemingly lacking any real wisdom".
Precisely Indu! I'm glad you understand the dangers of AI (and LLMs). I wish more people did...
You ask "How much do you want to feel the earth beneath your feet, the wind in your hair, and the sacred thrum of existence in your veins?" I want to, NEED to feel those things every day. Without them, I think my mind would turn to mush. That's how I feel sometimes when I sit in front of my computer too many hours in the day.
I would like to have an earth that has clean earth and air and oceans. I entirely dislike the word 'Want.'
If a computer can replace a human, how dehumanizing was the job the human was doing before? Were humans really doing that much critical thinking? How much work in society is just busy work meant to waste a human's time? Every time technology allows humans to do more in less time, why does the extra value go to the employer rather than the employee?
I'd say the wars are largely makework: hopeless tasks to keep the masses busy and entertained. Blowing up the Nord Stream pipelines created jobs in the USA - jobs in missile factories, etc. It's all there in George Orwell's 1984, where, by the way, the porn is written by AI.
Oh no no no no. I live in West Virginia, I'm an environmentalist, and I know exactly why they blew up those pipelines. The US gas fracking industry was losing its viability due to too much product and too little demand. It wanted foreign markets, where the price of gas was MUCH higher, but compressing gas to ship it across the ocean, then re-gasifying it to pipe to the points of use, adds a lot to the price (it also adds a lot to the greenhouse gas footprint, but who cares about that), so it couldn't compete with Russian gas. When the Ukraine war started, Germany finally succumbed to pressure to refuse the Nord Stream 2 pipeline just as it was completed - but it was still there, viable, and Nord Stream 1 was pumping. Once the two were destroyed, Germany (and the rest of Europe) had to buy the much more expensive American gas, which seriously damaged their economies. Mission accomplished.
BTW, I am working with a young aerospace intern from the University of Virginia. She is a treat. We talked about what her options are after (if) grad school, should we in the U.S. continue defunding science. I was happy to hear the military isn't an option. : )
Both can be true, my friend, and likely are. The last onshore gas field in Denmark no longer produces at a profitable rate. I'd love to hear more about your experiences there. : )
The extra value goes to the employer because the employer sets the pay rate. Individual employees have no bargaining power. A strong labor union has bargaining power.
Employee-owned companies can operate democratically, with all value shared by all.
A question many in the IT world are asking as companies like Intel lay off thousands. Chips building chips.
"We can choose to let AI do our critical thinking for us if we want to. "
I don't want it to--I refuse to let a program like Grok think for me, write for me, create for me. I want to do things myself. I guess that's why I'll never have a million Substack followers, but at least I'll still have my pride and a functioning brain.
Not that you need confirmation, but yes you do : )
AI is merely a reflection of humanity, or the lack of it. It isn't human, but we have given it the wealth of information we have collected across the history of humanity. AI, in and of itself, isn't the problem; the problem remains ourselves, collectively, as a species. How much humanity we choose to lose is exactly proportional to what we would otherwise choose. Unfortunately, it is obvious that many wish to use AI to harm humanity... AI is merely a reflection of the humanity it learns from... Garbage in... Garbage out...
"MechaHitler" lollol
Musk: "AARGH!! IT'S SAYING THE QUIET BIT OUT LOUD!!!!" :'D :'D
The more poorly educated kids are, the more they will use AI. Online classes are a joke - a bad joke - but they're here to stay. I worked with a nurse who “graduated” from the online University of Phoenix, and she was flat-out dangerous.
Many have already made the decision, without realizing it, to use AI in some form instead of their own resources, and, like any capacity that isn't nurtured, theirs will wither and blow away. We've been here for a while, and what we're seeing now is an acceleration.
Any “far” is too far, for what you surrender is control of your mind: you hand it to a machine, along with what remains of your personality.
From Childer’s daily blog today:
“ Epstein is undoubtedly one of the most important stories of our generation. It’s the Rosetta Stone of elite corruption— a case intersecting nearly every electrified rail of conservative, populist, and justified MAGA distrust: globalist depravity, the deep state’s immunity, two-tiered justice, media collusion, basic right and wrong, and child exploitation— and the sickening sense that the worst people in the world are the ones setting the rules.”
On that note, those who write the AI code and/or rule over it will always inject their bias. Psychopaths tend to gravitate toward roles such as this. And they “project” onto everyone else the hatred they carry within.
Supposedly, the Israeli attack on Iran was prompted by the belief that the AI “intelligence” they received was telling their military that now was the time to attack.
Biased information can lead to human catastrophe.
Supposedly the Technological Singularity is at hand. It's projected to occur in 2035 when China takes over the world. Machines won't need us anymore. They'll repair and replicate themselves. Eventually humans will be reduced to pets and house plants.
A future where I don't have to stress over healthcare, rent, and food, and can focus on my writing and hang out with my friends whenever we want, because we live in a healthy, stable environment while the robots handle everything?
You act as if this is a negative compared to our current dying world, full of toxic microplastics, oil spills, and p3dophile billionaires hellbent on causing the end of the world.
Do you not get that AI means more oil spills and plastic, more climate change, more inequality, and psychopaths in charge of everything? You actually think there's a possible world where robots handle everything through the use of magic, where there is no need for energy or materials, no mining?
Good questions. AI seems like it can be a useful tool, but not in a capitalist neoliberal society.
LLMs and the like can be very useful tools for consolidating data or producing quick summary lists for people working in those fields.
These tools are absolutely shit in any situation where they don't have a preset field of data to parse through (and they often just make up info, since they don't have any).
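That distinction - grounded in supplied data versus left to improvise - is often handled as a prompting pattern: hand the model the reference material and tell it to stay inside it. A hypothetical helper sketching that pattern (the function name and prompt wording are illustrative, not any specific tool's API):

```python
def build_grounded_prompt(question, context_docs):
    """Assemble a prompt that confines the model to supplied reference data.

    With no reference material, we instruct the model to say it lacks the
    information, instead of inviting it to improvise (i.e. make things up).
    """
    if not context_docs:
        return (
            f"Question: {question}\n"
            "No reference material is available. "
            "Reply only: 'Not enough information to answer.'"
        )
    material = "\n---\n".join(context_docs)
    return (
        "Answer using ONLY the reference material below.\n"
        f"Reference material:\n{material}\n\n"
        f"Question: {question}\n"
        "If the material does not contain the answer, say so."
    )

# With data to parse, the model is pointed at it; without data, it is told
# to admit ignorance rather than fabricate.
grounded = build_grounded_prompt("When was the site migrated?",
                                 ["Migration finished in March."])
ungrounded = build_grounded_prompt("When was the site migrated?", [])
```

No prompt can fully prevent fabrication, but constraining the model to a "preset field of data" in this way is exactly the setting the comment above describes LLMs being useful in.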
I fucking hate capitalism.
I'd say: better in no society.