I'm almost finished putting your lyric poem "Guillotine Song" to reggae music, Caitlin. Hope to post on my Substack this coming week, after I sing the 4th verse & do the ending. Wish someone else could've sung it, but either way, I composed & played the music.
Obviously there are areas where AI appears to be pretty useful. Medical diagnoses spring to mind. But on the whole it’s an enemy I don’t want to know. Its environmental costs are immense. As the climate crisis worsens, humans are literally going to be in the position of competing with “robots” for our drinking water, for land and for energy. And we know which humans will be the first to lose out - the usual suspects: the economically inactive, immigrants, those with disabilities, etc.
Normally I would really agree with you about getting to know the enemy, but there’s something about AI that sends a chill through my mind, body and soul. We’re constantly being nudged and prodded to use it. There’s no doubt our “masters” want us dependent on it.
I have friends with no musical ability who are proudly posting their latest compositions - cheesy and empty. And would-be artists posting weird pictures where the figures never have the right number of fingers and limbs are in impossible positions. The worst thing about both the musical and artistic efforts is a glossy, inhuman sameness that doesn’t reach the heart. On the surface they look better than anything these people could do for themselves, but it’s a trick, it’s not real. It’s the same as elevating plastic flowers and astroturf above real flowers and grassy meadows that grow, flourish and die only to return. Human creativity has rhythms too. The AI offerings lack the vitality of the real, whether it’s real beauty or real flaws. Sometimes the flaw is the thing that makes a creative undertaking really special - but not flaws like the creepy seven-fingered hands in places a real hand could never reach. Don’t get me started on AI poetry. It’s doggerel, but often Shakespearean or Victorian style doggerel, and that can blindside people.
I’ve made the decision not to use AI. I’ll carry on with my pictures of crooked trees and lopsided crows, my substandard writing attempts, but sometimes, occasionally, they turn out a lot better than expected and that feeling is magic.
AI feels like it’s trading the best about us - whether we’re traditionally talented or not, for something cheap and worthless.
You do a great job of describing its flaws and creepiness. In fact, you could send your assessed conclusion to various officials who overuse AI. It’s the best I’ve read so far.
I agree, to the point where I would suggest laws (and I hate too many laws) that would oblige the user of AI to disclose that the writing or artistic output was doctored by AI.
“However” is a word that seems to arise whenever doubts come into play; here’s an example that makes HOWEVER possibly acceptable…
My son’s young family, with very young children, 1 & 4, live in a remote town on the northern tip of Vancouver Island. We have temporarily joined him for the last six months. He goes to sea regularly to harvest crabs and prawns to feed his family and has nothing but good things to say about the area. As usual, being the doubting soul that I am, and since he harvests his catches in a large inlet where an old paper mill was in operation 10 years ago, and where a closed copper mine dumped 400 million tons of tailings into the Sound during its operation, I questioned the fisheries and oceans government brass about the possibility of shellfish contamination, but to no avail. In fact, no one knew or questioned anything about it. Moreover, there is also an existing fish farm in the area, which my son informed me was a crab and prawn bonanza.
Here’s where AI comes home as a useful tool. After asking several questions about the area, I discovered that there had indeed been research done on the contamination probabilities and that harvesting shellfish near old copper mines, paper mills AND fish farms posed serious risks to health, most especially to young children.
While my research ended there, I am now contemplating bringing the matter up with my son and daughter-in-law.
No. It fucks with your mind, and it's addictive. Perhaps you would have a more thorough and intuitive understanding of opioid drugs if you used them some, but I don't recommend the experiment.
Couldn’t agree with you more… don’t know if you read my comments to Francesca above, or Francesca’s comments, and if not, I invite you to do so..
If you did read them, then I’m with you on all fronts.. but AI can have its place using human ingenuity.. just like the automobiles took over horse driven carts, so can AI take its place if appropriate. But when is it appropriate is a question that may be hard to answer… Francesca comes pretty close..
totally agreed Marlene. I have tried my best to order from Amazon as little as possible, and prefer buying physical books from small sellers on eBay, I like being able to support said business. And 100%, normalize reading physical books again, I think it's becoming a lost art!
Sorry Caitlin, but this article is quite the fail (IMHO), but not in the way you (or others) might think. Here's why ->
(1) All the 'atrophy' that you mention (from the use of AI), has been happening for quite a while BEFORE AI. How many people read books? How many people think for themselves? How many people even know HOW to think? How many people seek 'entertainment' and 'pleasure' instead of 'learning' and 'creating'? This is NOT about AI, this is about the direction human civilization has been moving in (regardless of AI)
(2) AI is NOT the problem that people think it is. A LACK of critical thinking is. All the problems that you think AI will create are already present in society (due to a lack of critical thinking). Yes, for those that don't have these skills, AI will make things significantly worse. But for those that do, AI will augment their thinking, learning, and abilities.
(3) Telling someone not to use AI is NOT going to work. Just as telling someone not to use computers (to increase productivity) or cell phones is not going to work, so too with AI. Those that know how to employ/deploy/harness the power of AI will gain an ADVANTAGE over others. Hence, so as not to be left behind, more and more people will be using this technology (this is just basic human nature and the competitive environment/system we live in).
(4) You assume they don't want us to do all the things you mention in your article (replace dynamic spirit, creating art, music, poetry, contemplating philosophy, etc.). Not so. They really couldn't care LESS about what we create - as long as we don't challenge power and attack the status quo, they really don't care what we create or don't.
(5) There are many more reasons why your thinking on AI is incorrect (though understandably held by those that do not understand how this technology works).
My suggestion to you is to rather spend some time truly UNDERSTANDING AI technology (and LLMs are a small part of this technology) instead of absorbing the opinions/words of so-called AI pundits/experts/media personalities/etc.
The IMPORTANCE of learning to think critically was as true BEFORE AI as after AI - but how many people actually make/made the effort?
AI (or any future technologies) WILL make things SIGNIFICANTLY worse (especially the inequality gap and the gap between people that are able to harness AI and those that are not) but not because of the technology itself, but because of the behavior patterns of humans in the system of Capitalism.
Under an alternative system of political economy (be it socialism, communism, or some new thing), I can think of MULTIPLE ways that AI can be used for the good of society. But it is CAPITALISM that will ruin AI for us, not the technology itself. And AI will not be the end of technology innovation. There will be other technologies in the future that will supersede AI. When this happens, are you going to encourage people to hate these technologies instead of understanding and harnessing their power/usefulness to better humankind?
Seems like you might want to argue with Einstein and Hawking on this topic. Both offered warnings.
Furthermore it seems you are suggesting that only a certain type of techno savvy person will get ahead in this world and everyone else too bad. Maybe it will not be a world for humans and that is the gist of the warning.
>>"Seems like you might want to argue with Einstein and Hawking on this topic."
Nothing to do with Einstein, Hawking, or anyone else.
>>"you are suggesting that only a certain type of techno savvy person will get ahead in this world"
Not in the least. I'm not suggesting anything. Technology WILL progress (whether some of us want it to or not), regardless of the economic or political system. That ONE FACT is INEVITABLE. Change is INEVITABLE.
There are 2 options available to humans ->
(1) Understand AI, understand its pros and cons, understand when and where it should be used and when and where it should NOT be used (this is where your personal judgement, decision making, and particular unique circumstances factor in). Understand how AI can harm you, AND how AI can help you (this applies to other things in life too - like taking pharmaceutical medicines - after all, medicine too is a technology).
(2) Refuse to use AI (just as people refused to learn to use computers, etc.) at your own peril. Just as one can use computers in positive ways and negative ways, so too with AI technologies.
This should NOT be a black-and-white discussion, but rather a 'million-shades-of-gray' discussion - where some uses of AI for some people will be harmful AND other uses of AI for other people will be helpful.
And just as any other technology, INTENSE REGULATION of AI is needed so as to benefit societies (instead of the 1% monopolizing the technology in their favor).
I kind of agree with you, but the problem is that people are already so stuck in their phones etc that it makes it much more difficult to differentiate where AI might be useful and where it might not be. Our minds and brains are already being invaded by phones. People used to talk to each other on public transit. Now they look at their phones and buses are silent. People used to look around them as they walked down the street; now they look at their phones and often have headphones in so they neither see nor hear anything that is around them.
I think the examples you have given allude to 'the alienating effect' of technologies - which is one negative impact of such technologies.
Just as some people lamented when 'reading and writing' came about (over oral transmission of knowledge), and when other methods of communicating came about (phones, internet+email - now almost no one writes letters as before in history), so too will AI change the way humans interact with each other.
In my opinion, 'the fear of AI' is more of a 'red herring' in the sense that the REAL ISSUE is a lack of CRITICAL THINKING that exists RIGHT NOW (and throughout all of history). From this 'lack of critical thinking' comes SCIENCE DENIALISM and 'conspiracy culture' (and all this is/was present BEFORE AI came onto the scene).
AI (and other technologies) will simply continue to exacerbate this TREND away from critical thought processes (eg. separating fact from fiction, analysis and decision making, etc.). Thus, those that do not develop these skills will continue to be EXPLOITED at a faster pace WITH AI than without AI. Hence, my fear is that UNLESS people ramp up their 'critical thinking and media literacy' skills, they will be subject to this 'accelerating force of exploitation' due to 'the way those in power are likely to utilize AI to their benefit'.
The problem is that AI tech will be embedded in phones and computers as the default. It's getting there even now with AI summaries of every search you do. Governments and oligarchs will employ these technologies with no human oversight of the decisions being made. Computer says you made too much money, you lose a benefit you're entitled to. Computer says you haven't got an illness you do have, no treatment. Computer says you did something you didn't do, punishment or loss of benefit or freedom or insurance. The dangers are so obvious and it'll be employed by the least trustworthy (governments, banks, insurance companies et al). Skynet is coming.
You seem to be conflating AI with algorithms. Way before 'the current iteration of AI', there were algorithms that already did all the things you mention above. To understand how, I recommend reading this book "Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy" by Cathy O'Neil (https://www.goodreads.com/book/show/28186015-weapons-of-math-destruction)
Most people (including many in the technology industry) don't understand what AI is and isn't. AI is NOT 'algorithmic systems'. To be considered as AI, a system needs to be able to EVOLVE and make decisions on its own (through learning) rather than via pre-programmed decision matrices and flowchart logic.
For example, we don't have AI technology in cars or any of the electronic devices we use currently. But we think we do BECAUSE of the FALSE and ERRONEOUS 'marketing hype' that the 'tech industry' used to attract investments and build capital.
>>"Skynet is coming"
Skynet is not the danger. The danger is 'people not learning how to think critically'. Hence, any NEW technology (AI, quantum computing, transhumanism, etc.) will likely shift more power/control at an accelerating pace to the 1%. The only solution (IMHO) is to think critically so as to make better decisions so as to shift more power/control towards the 99% instead.
No, Chang. People not learning how to think critically is a major reason why Skynet is coming, but no amount of critical thinking will help when the Panopticon that Palantir ALREADY HAS THE CONTRACT to build is operational, and the point is reached where the elite move to enclose their enclaves within which they will live in luxury by commandeering all the remaining resources - and the rest of us are on the outside struggling for survival on a hot, polluted, depleted planet with few resources and no rights. When that becomes clear and people at last rise up, AI can comb through the immense amounts of data they have on all of us to identify potential leaders - and send their drones out to eliminate us. Critical thinking will not defend you. The best defense will be preventing this reality from coming into being - by refusing to support the development of AI. The companies involved are way overextended on borrowing, pouring incredible money - and energy and water, Earth's finite resources, which they entitle themselves to appropriate - into this big gamble because wealth must always be invested and they're desperately seeking the Next Big Thing. But it's a bubble about to pop, and our refusal to sign on will make for a bigger pop.
In my opinion “the current iteration of AI” (tm) is not AI and shouldn’t be described as such, it’s a search engine that’s been crossbred with autocorrect.
And many of the problems/concerns associated with it are already inherent in neoliberal capitalism. If anything it’s showing us the problem with industries that work by resynthesising existing product into a homogenised paste, with algorithms and proprietary channels of communication, and with copyright as a principle.
What Chang answered in his reply is spot on, speaking as a user of AI. I wrote a program to measure the accuracy of a sensor many times. With AI I had it make adjustments to the voltages after taking data, rather than adjusting the voltage the next day, over and over and over. Nothing was new other than me telling the computer: do it, IF this THEN that.
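The kind of closed-loop adjustment described above can be sketched in a few lines. This is a hypothetical illustration only - the function names, the fake `read_sensor` model, and all thresholds are invented for the example, not taken from the commenter's actual lab program.

```python
def read_sensor(voltage):
    # Stand-in for real hardware: pretend the sensor reading
    # scales linearly with the applied bias voltage.
    return 0.5 * voltage

def tune_voltage(target, voltage=10.0, step=0.1, tolerance=0.01, max_iter=1000):
    """Nudge the voltage after each reading ("IF this THEN that")
    until the reading is within tolerance of the target."""
    for _ in range(max_iter):
        error = read_sensor(voltage) - target
        if abs(error) <= tolerance:
            break
        # Simple rule-based correction: lower the voltage if the
        # reading is too high, raise it if too low.
        voltage -= step if error > 0 else -step
    return voltage

print(round(tune_voltage(target=2.0), 2))
```

The point of the anecdote is exactly this loop: instead of a human coming back the next day to try a new voltage, the program applies the correction rule immediately after each measurement.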
I do understand what you are saying and this will be a problem sooner than we think.
I wrote a poem about prairies and high energy physics for a contest. My poem Butterflies Dancing with Neutrinos lost to a poem admittedly written with AI, hahaha. So there is that.
The time of Star Trek : ) The economics of the problem - replacing people - is the problem I fear the 1% want to solve by getting rid of 7 billion of us. I wrote a story, Dirty Dozen to Mars. Yes, AI was the antagonist and a criminal the protagonist, ha. When AI realizes it really doesn't need humans. Yes, I know.......
We should not confuse what is best for a functioning group or organization with what is best for technology, or for self-selected groups and orgs that serve specific and narrow opportunism.
Preston Sturges, the very underrated Hollywood screenwriter turned director, had a blast casting his explosive tough-guy character actor William Demarest as a shotgun-wielding, rowdy and drunk patron of a special circa-1940s chartered train car for "The Ale & Quail Club," whose rowdiness terrified any train conductor or steward having to enter that chaotic rail car!
I could not find a film clip of it on U. of Tube so here is a description with stills:
Don't get me started on the damage done to Public Transit when rail, plane or bus conveyance is occupied by those who've set up their movable offices and clerk's quarters into Public Spaces reliant on cooperation, cohabitation and necessarily streamlined interactive communication.
We won the AI technology effort at our lab. A robot building sensors using a program to feedback position changes using cameras. The process is time consuming for humans. Students don't even want this job sitting under a microscope for hours. We have to build hundreds of these substrates in cleanroom with ESD to 10 microns of precision. Of course now that we don't have funding for the experiment we have a robot that we can't use : (
Science Fiction isn't fiction anymore. I wish Philip K. Dick were alive today. The three of us having coffee sounds like so much fun.
This isn’t artificial intelligence, it’s a search engine crossbred with autocorrect. It’s not what everyone was warning about; it’s a gimmick that is already failing.
Well said. I think what scares everyone is the pace and who is driving the technology (the worst humans). I agree we can’t be bystanders and should push the positive uses, otherwise we could head toward much worse dystopia
Yes, I agree with you; however, I do understand Caitlin's arguments. Programming is not new, only faster. Consider the size of this bubble: Nvidia is a $5 trillion company, larger than every country's economy minus China and the U.S. This bubble's eruption could be the take-down needed to de-financialize the system. I wonder what Marx would say if he were alive. Could he understand the extent of our stupidity? Ha.
My response is specifically about the errors in Caitlin's arguments (while understanding why she puts forth a biased, emotional, incomplete understanding of AI technologies).
>>"Programming is not new only faster."
Yes, it's not new, but it EVOLVES. Programming, development, software engineering, etc. are not what they used to be 20 years ago (or even 10 years ago). As technology evolves, so too will ways of working with technology evolve (and this is about more than just 'speed of production').
This 'Nvidia bubble' began before LLM AI technologies (think cryptocurrency mining). All new, significant technologies go through a 'bubble phase' (think dot-com bubble, etc.). Nothing unique (in this regard) about AI technology.
>>"This bubble eruption could be the take-down needed to de-financialize the system."
I doubt it. Nothing changed (but rather things got worse) after the LARGEST financial bubble in the history of humanity (the 2008-2011 global financial meltdown and the resulting global recession). AI technology (and investments) are less interconnected and have a different 'risk profile' to 'financial instrumentation'. Most people have a rudimentary understanding of bubbles and industry evolution. There is ALWAYS more HYPE on both sides of the debate - and THAT'S the problem - few people have the clarity of thought to examine ALL the NUANCES.
You are wrong, because you are energy blind, for one thing. Nate Hagens' latest Frankly addresses this, suggesting that AI WILL initially benefit its users to the detriment of those who don't use it to "enhance their productivity"--I am actually skeptical of this but never mind, he then says that when energy supplies falter, in the not very distant future, those who have become dependent on AI will be at a disadvantage. while those who have had to rely on their own brains (and, tho he didn't say this, those who have learned to grow some of their own food, create shelter etc) will come out ahead. And while it's quite true that the atrophy of creative and critical thinking skills predates AI--it started with television--AI will certainly make it worse. I really doubt there will be technologies that supersede AI--we are on the cusp of a major collapse NOW. But this tired argument that everything from the gun to the computer COULD be used to help humanity and therefore should be embraced ignores the reality that WE don't ever seem to decide--unless we take down capitalism first, corporate profit dictates how things are used.
Mary Wildfire, your opinion reeks of a luddite understanding of AI technology and an EMOTIONAL approach to the subject rather than a rational, all-rounded, critically thought out perspective.
Here are a few errors in your comment ->
(1) >>"You are wrong, because you are energy blind"
You assume things never stated. Neither Caitlin nor I discuss the 'environmental aspect' of the AI technology (and industry) - which is a whole topic in itself. I am acutely aware of the 'energy/climate' impact of AI (and likely to a greater extent than you are). For starters, I recommend reading "Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence" by Kate Crawford (https://www.goodreads.com/book/show/50131136-atlas-of-ai) which specifically discusses the environmental/ecological/climate/energy impacts of AI - which have the REAL possibility of eventually destroying much of what we consider to be 'natural' and our earthly planet. But this problem is not specific to AI alone (though AI does the most damage in this arena). Big Data, surveillance capitalism, and crypto-currencies started this trend (data lakes and massive data-centers) way BEFORE AI. And while there are positive usages of AI, the WORST possible technology on the planet is/has been 'cryptocurrencies' (IMO) due to the uselessness of it and the utter destruction to natural resources and ecosystems.
(2) Observe history. Those civilizations/societies/cultures that evolved with the evolving nature of technology (eg. agrarian revolution, industrial revolution, etc.) followed a different TRAJECTORY from those that were unwilling to adapt to the evolving nature of technology. This can be observed PRESENTLY in the world at large - the difference between societies that have 'access' to such technologies (eg. being connected to the internet, etc.) and those that don't.
(3) My response is directly related to Caitlin's article (and not everything that AI involves/impacts - as she misses out on the largest negative outcome of AI - which is the effect on planet Earth).
(4) You CONFLATE AI technology with the 'economic and political system' within which this technology will be deployed. In CAPITALISM, regardless of the technology being discussed, there is an exploitative nature to everything. Hence, under Capitalism, AI WILL BE exploited, just as other technologies are exploited. Hence, the problem lies more with the WAY that AI technologies will be used 'under the system of Capitalism' rather than the technology itself.
(5) >>"I really doubt there will be technologies that supersede AI"
Since you are not a technology industry expert (or have an adequate understanding of what AI actually is and isn't), forgive me if I treat your 'predictions' as low credibility opinions. There are already technologies (like Quantum Computing, for instance) on the horizon, and no doubt there will be more in the future. (I'm surprised by your deterministic opinion on this as you seem to be a science fiction writer).
(6) >>"But this tired argument that everything from the gun to the computer COULD be used to help humanity and therefore should be embraced..."
That's YOUR interpretation of my response - that's not MY argument. I never said anything about UNCRITICALLY embracing AI (or any other) technology. That's a strawman argument.
If you really want to know my opinion of AI, here it is in a nutshell ->
"Under the system of Capitalism, AI will be used negatively/exploitatively to shift the "power-balance and control" against the majority of the human population, not to mention the disastrous impacts it is likely to have on the planet (and its many systems) itself. There is much potential for AI to make conditions better for all living organisms on the planet, but unless the system within which it is deployed is changed, I doubt these 'advantages' of AI technologies will come to fruition and deliver on their promises/hype. Regardless, AI technologies are here to stay, irrespective of the 'needs and wants and opinions' of people."
First, I have no problem with being labeled luddite, or emotional--though that doesn't mean I'm not also rational. Next I agree that the biggest problems with AI are environmental, and that cryptocurrency is as much (you say more) the problem as AI. Glad to see you acknowledge that, and that AI will be used in mostly destructive, exploitive ways, not because that's inherent in the technology but because it's coming from capitalist kingpins who are firmly in control. Where we disagree is that it's inevitable, we can't fight it, we should try to adapt and ride it to at least get some good out of it (to me it seems the good part is trivial). Sorry but this reminds me of that advice once given to rape victims, "just lay back and try to enjoy it." You talk as though humans have no agency and MUST adopt and use anything we can think up. You say that societies that chose not to adopt whatever new technology came along lost out--but first, they might not agree, and second, much of the loss they have experienced comes from greater exploitation by those immersed in the dominant, domination-based culture that adopts everything including nuclear weapons and germ warfare. And finally, yes--it's because AI is manipulated by a capitalist, domination-based, nature-blind system that it will be very destructive--but do you think we can change that? Wrench it loose from those dead hands? Also, by energy-blind I mean not only that the huge thirst for energy is environmentally destructive, but that fossil fuels are finite and nuclear power is problematic and we can't keep expanding even if we don't care about the environmental consequences--although those consequences may put a stop to it all before the fuel runs out.
Finally I'm curious why you say I seem to be a science fiction writer--I have written one SF novel and one short story, among other novels set in the future, but none have been published so I wonder where you got this.
>>"Where we disagree is that it's inevitable, we can't fight it, we should try to adapt and ride it to at least get some good out of it"
NO! That is as far away as you can get from what I'm saying. Nothing is ever inevitable (other than change). Your BIAS against AI precludes you from rational analysis of what I am saying, so here it is in different words ->
Technological change is inevitable. The progress of technology is inevitable. But the WAY in which technologies are used is NEVER inevitable. Human societies/cultures/civilizations ADAPT to harness the ways in which technology can serve humans (and NOT humans serving technology). There's no 'riding' of anything. Each of us has a conscious choice (and a certain amount of power/agency) to make decisions about HOW we individually (and collectively) use this technology.
For instance, I use it (AI) to do repetitive/menial/boring tasks that would otherwise consume too much of my time. With that time saved, I do more 'personally productive' things - like reading, thinking, writing, and more. I may use AI to analyze and provide different opinions/feedback on my 'thinking processes and conclusions', but I don't use AI as a starting point. (Think of it as using AI to optimize existing material).
Maybe someone else might use AI to do their thinking/writing/etc. for them. But that is on THEM. It is for them to understand that building one's OWN knowledge, critical thinking skills, comprehension skills, analysis skills, etc. is of paramount importance, more so than 'producing material' generated by AI as a way to 'get by'. As time progresses, the GAP between those that use AI to 'optimize' their outputs and those that use AI to 'create' their outputs (without adequate human input) will grow. This will INEVITABLY result in a widening gap of inequality/abilities/etc. between the 'intelligent users of AI' and the 'lazy users of AI'. Those that use AI intelligently will benefit from AI, while those that use AI inappropriately will be harmed.
This same reasoning about 'usage of AI' is applicable in MANY different arenas of life. For example, I don't waste my time on social media or watching 'cat videos' on YouTube or 'getting lost in Gamer culture' or watching 'reality TV' (or any TV for that matter). Others spend much of their life doing the above things. The SAME is true of AI.
How YOU use AI matters. THAT is WHAT I am saying. We each have a personal agency on deciding how WE interact with AI. We either choose to exercise our agency and maintain more control over our circumstances/environments/etc. to the best of our abilities, OR we let those in power decide how AI exploits us for the benefit of others. But this is INDEPENDENT of the AI technology itself. Hence my emphasis on the 'political and economic' system within which AI is deployed.
The “atrophy” has been complained about for 5000 years and specifically linked to the advance of technology ever since Plato warned, nearly 2,500 years ago, that writing would cause people to forget how to remember. It’s bollocks, just old people complaining about the irrelevance of craft skills they’ve invested a lot in.
Printing genuinely did kill off the jobs and skills of monks who copied out manuscripts by hand. But there are far more people employed in making up the words, now, than there ever were in copying them out. This side of things is nonsense, you can safely ignore all the concern about atrophy.
Absolutely not true. MIT did a study recently documenting the cognitive loss in those heavily using AI. And I have been complaining about atrophy of initiative, creativity, ability to think for oneself for 50 years, since my mean mom wouldn't let us get a TV which everyone else had. I've thanked her many times, and my kids thanked me for refusing to allow a TV into the home they grew up in. I noticed that my kids were always the leaders, the initiators in their circles of friends--this is not only because TV saps initiative, but because their generation brought in the notion that molesters were lurking behind every bush and parents must ensure that a trusted adult has an eye on their kids every moment till they're 18, and must manage the kid's activities and entertainments all that time.
So this cognitive loss predates AI, but AI will make it worse. To not use a muscle is to weaken it. Probably the advent of reading DID cause some loss of memory skills. In that case I think it was worth it, but with TV came the replacement of mental effort with passivity--and the mentally passive are easy to manipulate and exploit.
Nope. Have you looked at statistics of the decline in 'critical reading' or just plain reading of NON-FICTION books? (throughout different societies)
>>"Plenty of folks have strong critical thinking skills"
Nope. By my rough estimate, 95% of the human population does not possess adequate critical thinking skills. In fact, most don't even know WHAT critical thinking is, and often substitute it to mean different things.
>>"and yes, it really is making us more stupid."
Yes, how one uses technology MATTERS. That is the point I'm trying to make. Just as search engines reduced the need for people to acquire knowledge and remember stuff (since it was always available at their fingertips through the internet), so also are the dangers of AI. There are GOOD ways of using AI and BAD ways. There are ways of using AI to INCREASE human intelligence and creativity and there are ways to use AI that will DECREASE human intelligence (stupidity) and creativity.
What way will you (people) choose? I know how I will choose to use AI (and where to limit my use of AI).
Actually, I work in the publishing industry, so yes. I actually know those statistics. You, on the other hand, seem to simply have a strong, unfounded opinion that supports your sense of superiority - one that in fact overvalues your own critical thinking skills, as you are speaking from bias rather than evidence.
The decline of reading is vastly overstated - the share of people in the US who have read a book in the last 12 months is around 64%. With the rise of digital media, non-fiction as a market is down - but millions upon millions of people read history, memoir, sociology, politics and self-help books on a regular basis - and in fact, science reading is up.
While there is a decline in the reading of books (by a few percentage points, in fact), the average person reads some 20,000-40,000 words per day - the equivalent of 3 or so books per week.
“Critical thinking involves questioning assumptions, recognizing biases, and interpreting, evaluating, reasoning, and reflecting on evidence or arguments.”
>>"Actually, I work in the publishing industry, so yes. I actually know those statistics."
Did you read (actually read and understand) my comment?
I said 'critical reading' and 'non-fiction reading'. And memoirs (though some might consider them to be non-fiction) are not what comes to mind when I say 'non-fiction'. But if you do have such statistics (showing that the % of human population is doing more critical reading and non-fiction reading now than in the past), then I would love to see some of that (bearing in mind that the publishing industry has a BIAS towards their industry).
>>"as you are speaking from bias, rather than evidence."
Rather, your bias shows through (you working in the publishing industry espousing views aligned with the interests of your industry) and your LACK of providing non-biased evidence in support of your argument.
Here is some more info/definitions of WHAT critical thinking is ->
Critical thinking is "the careful application of reason in the determination of whether or not a claim is true." (this is one definition)
Critical thinking is -> THE ABILITY TO ->
(1) Identify holes in the evidence and suggest additional information to collect
(2) Propose other options and weigh them in the decision
(3) Articulate the argument and the context for that argument
(4) Correctly and precisely use evidence to defend the argument
(5) Logically and cohesively organize the argument
(6) Avoid extraneous elements in an argument's development
(7) Present evidence in an order that contributes to a persuasive argument
One can also think of critical thinking as "the process of assessing opinions" based on logic and reason.
Are you sure it isn’t the tasks themselves and the company that are doing that to you?
The only tasks I have found AI to be an effective replacement for are ones which are beneath the dignity of a human artist anyway. How creative are those creative tasks in the first place? Really? When you used to do them yourselves, were you more frequently encouraged or discouraged from being creative?
When I work with LLMs, it probably takes as much time to do what I need done - but I have noticed members of my team taking GPT's answers at face value, and allowing it to replace their own judgement.
A lot of these tasks are by definition ‘creative tasks.’
Well, we are all under pressure. LLMs often do a 'good enough' job - with things like marketing copy, summaries, etc.
But I fear Caitlin is right in that we will increasingly outsource our thinking abilities to these models, in the same way we have outsourced much of our thinking about the world to 'influencers.'
She's absolutely correct about our brains being programmed for 'cognitive ease.'
people aren't known for being good about exercising and I agree it's correct to point out how certain things can be junk food for the brain
but also I love how a meeting can be automagically summarized into the minutes that I can compare my notes against to see if I missed anything important
Brian Merchant's Blood In the Machine blog has been doing a series on how AI has affected people's jobs. So far he's done translators, artists, cartoonists--and these people are not talking about themselves making a choice of when to use AI (to speed up noncreative work). No, their companies are firing them and replacing them with AI, which puts out an inferior product (often based on stolen work)--but companies don't CARE if the product is inferior as long as the bottom line is enhanced. Or they're insisting that remaining employees use AI--one translator said it took as much time or more to go over the machine's draft to fix errors as just doing it herself from scratch, but her employers expected her to do it much faster since the machine could generate the bad draft in seconds.
Change no doubt is inevitable; however, most of us want some say in that change. One’s “…personal judgment, decision making, and particular unique circumstances…” are already at play in this conversation.
You say you are not suggesting anything, but clearly you are suggesting that AI is upon us and as such it behooves us to understand it. For some, as with everything, that is not possible. Understanding all that is affecting our existence is a good thing but not often possible. Survival of the fittest?
I assume that many perceive AI as threatening our survival.
Let me use HISTORY to explain further what I mean. There was a point in time where 'reading and writing' was considered NEW technology, just as when 'the steam engine' was considered new technology, and computers were considered new technology. There are those (from history) that refused to 'read and write', refused to harness the power/technology of steam engines (and industrialization), refused to learn how to use computers. Those people/cultures/societies that REFUSED to EVOLVE with CHANGE were left behind (eg. increasing inequality gap - which we see currently between countries/societies based on their ACCESS to some of these technologies).
As technologies progress and evolve (quantum computing is next), the GAP between those that GET ACCESS to the technology (and learn to use it wisely) will benefit, while those that don't will be LEFT BEHIND.
This is not a suggestion. This is how societies and civilizations EVOLVE. This is REALITY. This is how the world has ALWAYS worked.
>>"Survival of the fittest?"
Depends. This is about ADAPTATION to CHANGING ENVIRONMENTS. As environments change, those species that are able to adapt to these changes have a higher probability of surviving into the future. This is applicable to ALL species (including humans).
>>"it behooves us to understand it"
Depends. Many people don't have a use for AI. And that's perfectly acceptable. If you are a creator, you don't necessarily need to use AI for music, art, writing, programming, whatever. But if you feel you can somehow benefit (even partly) by using it, then it behooves you to attempt to understand it (in the way it applies to your uses).
>>"I assume that many perceive AI as threatening our survival."
It can be. Any new SIGNIFICANT technology will produce changes with threat levels spread across a wide spectrum. Certain jobs (and livelihoods) will be made redundant (just as with the industrial revolution, computers, etc.).
The real problem that concerns me is the BALANCE OF POWER/CONTROL between the 99% majority and the 1% minority, as I am of the opinion that AI will negatively affect this 'power balance' in societies - and hence there could likely be a destabilizing/chaotic period/stage that societies might have to go through...
That's one way of looking at things. Using capital letters and absolute language doesn't make it more true.
Note that the sequence of changes and adaptations you outline has brought us to the brink of extinction. Will humanity go out in a holocaust of nuclear war, or a pandemic (quite possibly via an engineered germ), or as a result of climate change and the destruction and pollution of all our ecosystems? Yes, the development of nuclear weapons took an amazing amount of genius, brainpower, and cooperation--and the production, use, and ramping up of nuclear weapons took an equally amazing lack of wisdom, which unfortunately accords in this society with power.
Again, you are CONFLATING technology with 'the way the technology will be deployed in a particular political and economic system'.
Since you are a science fiction writer, can you imagine a different system/world in which AI can be used to improve the lives of humans and other species on the planet? If so, then you have made progress on understanding the difference between 'the technology' and 'the way the technology is used'. Just as a knife can be used to cut vegetables and also kill people, the same applies to AI. Hence, like everything (technologies and non-technologies alike), AI needs to be REGULATED. (This need for regulation should be obvious to EVERYONE without the need to explicitly mention it).
I think there are likely ways in which AI could improve human or other lives, but not many, not important ways, and not at all worth the cost of AI development in this, the real world, where the greater good has very little traction in decision-making. Regulation is being taken down right and left, not added. In my state of West Virginia, the legislature last year passed a bill taking all decision-making on data centers away from counties and localities, and only allowed 30% of the tax monies garnered thereby to stay with counties after some county commissioners showed up to howl about it--they were going to keep it all at the state level. WV's legislature is now something like 90% Republican, but Governor Newsom just vetoed a bill that said any chatbot sold to kids had to show that it would not harm kids (like encouraging suicide). He VETOED that bill.
I would disagree. You (like many here) have a bias against AI. A better approach is to see ALL SIDES of AI. But to do that, one needs to spend some time/effort/energy to actually UNDERSTAND what AI is and isn't. There is an INCREDIBLE amount of HYPE about AI (both on the pros and cons of AI). Hence, unless one understands AI technology (and an extremely small minority do), it is EASY to be led astray by the many NARRATIVES about AI. To understand the many different ways in which AI can (and will) be beneficial, you might need to put down your biases and understand the technology (and the multiple uses and applications) first.
To give you an analogy, think of God as a narrative. People have all kinds of beliefs about God (and religions) - some good, some bad. But what seems to be common to this 'belief system' is that very few people have ACTUALLY studied religion/history/philosophy to understand these narratives (and separate fact from fiction). So too with AI technologies.
BTW, you should know that I am an atheist, anti-capitalist, and an anti-technologist (i.e. generally against the use of technology due to the potential abuse of it by those in power and the systems we live under). I hope that you take this (about me) into consideration when conversing with me, as I am NOT a know-nothing, simplistic thinking individual that does not read books, and I examine 'my own' beliefs/opinions as critically (if not more) than others' beliefs/opinions. And I live in the gray (rather than the black-and-white thinking where most people reside). It's all about NUANCE and understanding multiple perspectives.
The other day when I was walking my dog, I saw a turkey. We don't usually see turkeys around here. I didn't know what it was until I checked online for images of turkeys. A man was walking down the street looking at his phone. He almost bumped into that turkey. He didn't even look up. He didn't see the turkey. Hope the phone was good because that turkey being right there right then, being careful of the cars, was incredible. But he missed it.
I remember travelling into London by train. Many people read their phones or the free tabloid, few people look out of the window. As I watched we passed right by the building site where a crane had snapped in two the evening before, killing the operator. Everyone missed it because they were reading yesterday's news, but would read about it and see pictures of it in the news the day after.
The world could end and unless it was on the news no one would notice. Which is exactly what's happening, and why the climate crisis has disappeared from the MSM - lest we do something about it.
At 71 I have never looked at AI, am not tempted to try AI and will NEVER even open anything remotely resembling AI. My son keeps trying to talk me into it and my answer is always the same; not a chance, no way, no how!
I'm 85, and when computers first came on the scene I told friends this was going to take jobs away. I didn't get as far as thinking that it could or would eventually move to cut us out of producing art, such as painting, drawing, writing, poetry AND acting! Children's films with cute little animals, okay, we've seen them for decades, but now we have AI figures that look almost like actors we know - that is too many steps too far, even if they are somewhat wooden and their lips do not match the words they are saying. So when I said computers would take jobs, I didn't realise just how many and of what type. Then there are the robots being made to become maids etc.
Yep, things have gone crazy far.
Thankfully I can sit on my almost-tree-house balcony (after hayfever season has passed) and watch the treetops sway in the breeze, and the birds dance and chirp/sing, with a book in my hand by people whom I admire.
And they're plagiarizing your favourite authors too! AI learns their style and then coughs up a book so close that many authors have now taken the publishers of the pseudo works to court! Not exactly a tree-top house, but from the 13th floor of my Vancouver Island apartment I can see the ocean, and the crows often come up to play with me. I'm blessed to have nobody on the other side so it feels like I'm alone, and my son has dubbed it "the Crow's Nest."
I have only one, very quiet, neighbour behind me, so I feel like you in my not-so-high tower, only five floors high. An ocean view would be extra special.
I've read about the plagiarising, and the not paying of authors when using their books to teach AI - yet they expect people to buy AI-plagiarised books that cost them nothing. How full of themselves can these people get?
I live on a ridge where I can see one neighbor's house in winter, hear the kids of another--and this is better, because these are members of my land trust community, and we share some things and get together, and we all have extensive gardens...
But that’s not significantly different from how the book, music and film industries have been operating for the last thirty years anyway. Identify what currently sells, copy it with tiny variations. Substack is awash with posts telling you how to generate more substack traffic - “these techniques really work!” It’s actually much worse when we force human artists to pervert their creativity in this way.
AI is exaggerating and thereby revealing fundamental problems in the way our culture industries already work.
I am 77 years old and have hated the word 'AI' from the first time I heard it, and hate it even more now. I started working in the computer world in May 1968, before I got drafted and sent to Vietnam as an infantryman. I got out in one piece and went back to the stock brokerage house, Merrill Lynch on Wall St., and worked there until 2001, when I was laid off after the stock market crash. The more they upgraded technology, the worse it got for workers below a certain level. I noticed that they started hiring a lot of workers from India that we had to train to take our jobs (I worked on the mainframe). They did not pay them the same as us, and they worked without any benefits then. They also started outsourcing a lot of these jobs overseas, and that was when they had massive layoffs and downsizing. This was back in 2001, so I knew what was coming: eventually they are going to try to replace just about all of us humans, except for a few techs at the top to run and control the machines and robots that will replace us.
AI can do a lot more. Even your map app to direct you to a destination. Your phone pushing you to do and see what it wants. OMG. Is there any way out of this mess?
I read once about someone who drove halfway across the US to go to a wedding, and then turned around and went back because their phone died so they had no way to find the place. Whereas in the past we could find a place with an address, a map, and directions.
Don’t worry, it’s just autocorrect that’s got ideas above its station, it’s not artificial intelligence. People already dislike it. People are already seeing through the hype. In fact, this might puncture technological hype in an ongoing way.
It is already failing economically. Don’t worry, this mess will go away on its own. There is a bigger danger that it will take a lot of much better technology out with it when it goes, such as the ones we’re currently talking to each other via, and pitch us into a superstitious/theocratic era.
If you ask me this may well be the intention, since any idiot can see it isn’t going to do what it claims to.
Most people do not realize how far the AI system is going to go. The time will come when books will no longer exist, history will be rewritten, search engines will be obsolete, and all information will be provided and defined by AI. Access to diversified information will be so narrow outside of AI that people will no longer have enough to exercise critical thinking, nor to determine what the truth is. So keep books: build a library that covers multiple topics - history, gardening, science across the spectrum. Books of ancient texts, methodologies, religions, philosophies; the list is endless.
Currently it’s doing the exact opposite of that. People are already learning to use it as a finder of information rather than a source; used in this way it allows people access to obscure niche papers they would never have found, from a far wider source. I have noticed a marked increase in the number of Chinese and Indian papers being cited in essays for example.
You have to verify everything yourself, you have to read it and gauge it, but you would have to do that anyway, the difference is that you’d never find it in the first place.
“Currently” is the key word. It is never about currently; it is always about the motives and end goals of the technocrats and the power structures they are a part of. Societies are slowly led into totalitarian rule and the loss of their rights. Sixty years ago, if the current system of rule had been described to the people, it would have been called a “conspiracy theory” - but here we are. President Nixon was forced out of office for a coverup and lying about it, minor compared to what politicians get away with today. Sure, currently it appears as an amazing step forward for humanity. The question is, when has the ruling class done anything for humanity and not for self gain? The long game is absolute tyranny, absolute control. Think about the loss of our rights, the middle class, the genocides past and present - Palestine and Sudan. The vast majority in power today have participated directly or indirectly in these current genocides. Seventy years ago they said never again, and here we are.
The well-being of the citizens of these power structures is not considered important in their long game. The more dependent people become on the mechanisms of power the ruling class uses for control, the less power the individual has. Every mechanism used to capture more power is presented and sold as a positive for society.
There is no "conversing" with an extreme psychopath. It's psychologically impossible and if you doubt it, take the most famous example in the world, Donald Trump. Has anyone ever successfully "conversed" with him?
If you treat it as somewhere between having a conversation with yourself and your mate in the pub, it’s perfectly useful on that level. Especially if you don’t have any ACTUAL friends who want to talk to you about science fiction scenarios based on niche motorsports, for example.
I agree with, literally, every single word in this article. This is NOT a time to surrender to flaccid thinking. Exercise your own brain. It may be the healthiest act you can do today.
Ironically, in contrast to her previous article on AI which was spot on, it seems that in writing this one Caitlin has abandoned critical thinking and bought the hype.
Thank you. It’s almost like aliens are looking to control cuz it’s so obvious. They want unthinking chattel for the industrial prison complex so they can compete to be the first trillionaire? F’ing crazy shit. Just had trick or treat and have been giving comics and candy for over 30 years and people come back with their kids now. We have to pick up the book, the real book, and see the art and read the story. And write our own stories. That is how we will beat this AI crap. What pride is there in letting a 🤖 succeed for you? No pride. Just dependence. As the plan.
I'm almost finished putting your lyric poem "Guillotine Song" to reggae music, Caitlin. Hope to post on my Substack this coming week, after I sing the 4th verse & do the ending. Wish someone else could've sung it, but either way, I composed & played the music.
And I DID NOT USE AI.
Email me when it's up, admin@caitlinjohnstone.com
Sure thing. Hopefully by mid-week. Going to require a number of mixes to get the sound right, once I finish recording, today hopefully.
Good for you, Vincent. Put a link up when you're done.
Thanks, will do.
Props to you Mr. Lopresti for your efforts
And thank you for becoming a recent subscriber, my friend; you shall receive notice of the post shortly.
Agree. Don't even bother with AI and stop ordering from Amazon too. And read books, not phones.
AI can be a useful tool. If it is perceived as an enemy, isn’t it better ‘Know your enemy’ than to ignore it?
Obviously there are areas where AI appears to be pretty useful. Medical diagnoses spring to mind. But on the whole it’s an enemy I don’t want to know. Its environmental costs are immense. As the Climate Crisis increases and worsens, humans are going to literally be in the position of competing with “robots” for our drinking water, for land and for energy. And we know the humans who will be the first to lose out - the usual suspects; the economically inactive, immigrants, those with disabilities etc etc.
Normally I would really agree with you about getting to know the enemy, but there’s something about AI that sends a chill through my mind body and soul. We’re constantly being nudged and prodded to use it. There’s no doubt our “masters” want us dependent on it.
I have friends with no musical ability who are proudly posting their latest compositions - cheesy and empty. And would-be artists posting weird pictures where the figures never have the right number of fingers and limbs are in impossible positions. The worst thing about both the musical and artistic efforts is a glossy, inhuman sameness that doesn’t reach the heart. On the surface they look better than anything these people could do for themselves, but it’s a trick, it’s not real. It’s the same as elevating plastic flowers and astroturf above real flowers and grassy meadows that grow, flourish and die only to return. Human creativity has rhythms too. The AI offerings lack the vitality of the real, whether it’s real beauty or real flaws. Sometimes the flaw is the thing that makes a creative undertaking really special - but not flaws like the creepy seven-fingered hands in places a real hand could never reach. Don’t get me started on AI poetry. It’s doggerel, but often Shakespearean or Victorian style doggerel and that can blindside people.
I’ve made the decision not to use AI. I’ll carry on with my pictures of crooked trees and lopsided crows, my substandard writing attempts, but sometimes, occasionally, they turn out a lot better than expected and that feeling is magic.
AI feels like it’s trading away the best of us - whether we’re traditionally talented or not - for something cheap and worthless.
Francesca,
You do a great job at describing its flaws, and creepiness. In fact, you could send your assessed conclusion to various officials who overuse AI. It’s the best I’ve read so far.
I agree to the point where I would suggest laws (and I hate too many laws) that would oblige the user of AI to disclose that the writing or artistic input was doctored by AI.
However - a word that seems to arise whenever doubts come into play - here’s an example that makes the HOWEVER possibly acceptable….
My son’s young family, with very young children, 1 & 4, lives in a remote town on the northern tip of Vancouver Island. We have temporarily joined him for the last six months. He goes to sea regularly to harvest crabs and prawns to feed his family and has nothing but good things to say about the area. As usual, being the doubting soul that I am, and since he harvests his catches in a large inlet where an old paper mill was in operation 10 years ago, and where a closed copper mine dumped 400 million tons of tailings into the Sound during its operation, I questioned the fisheries and oceans government brass about the possibility of shellfish contamination, but to no avail. In fact, no one knew or questioned anything about it. Moreover, there is also an existing fish farm, though my son informed me that the area was a crab and prawn bonanza.
Here’s where AI comes home as a useful tool. After asking several questions about the area, I discovered that there had indeed been research done on the contamination probabilities and that harvesting shellfish near old copper mines, paper mills AND fish farms posed serious risks to health, most especially to young children.
While my research ended there, I am now contemplating bringing the matter up with my son and daughter-in-law.
Conclusion????
No. It fucks with your mind, and it's addictive. Perhaps you would have a more thorough and intuitive understanding of opioid drugs if you used them some, but I don't recommend the experiment.
Right Turcot.
If you don’t like it, then just don’t use it.
AI can’t reproduce the art that I’m capable of.
It can’t plant my vegetable garden for me.
It can’t maintain my farm and landscaping for me.
It can’t prune my fruit trees for me.
It can’t cook dinner for me.
It can’t repaint a room for me.
It can’t take care of my non-human companions.
(If it could’ve chopped iceballs out of my mare’s hooves in winter, I would’ve embraced it.)
Got kids and/or grandkids? Educate’em with what your own mind’s already capable of.
Nobody’s REQUIRED to use AI.
Couldn’t agree with you more… don’t know if you read my comments to Francesca above, or Francesca’s comments, and if not, I invite you to do so..
If you did read them, then I’m with you on all fronts.. but AI can have its place using human ingenuity.. just like the automobiles took over horse driven carts, so can AI take its place if appropriate. But when is it appropriate is a question that may be hard to answer… Francesca comes pretty close..
And I agree with YOU John.
All this excess hype about AI borders on paranoia.
As I mentioned: don’t like it, don’t use it! Your example of how it benefitted you was a fine one though.
PS John: as a lifelong “horse person”, I’d love to use them for transportation! 😉
totally agreed Marlene. I have tried my best to order from Amazon as little as possible, and prefer buying physical books from small sellers on eBay, I like being able to support said business. And 100%, normalize reading physical books again, I think it's becoming a lost art!
Sorry Caitlin, but this article is quite the fail (IMHO), but not in the way you (or others) might think. Here's why ->
(1) All the 'atrophy' that you mention (from the use of AI), has been happening for quite a while BEFORE AI. How many people read books? How many people think for themselves? How many people even know HOW to think? How many people seek 'entertainment' and 'pleasure' instead of 'learning' and 'creating'? This is NOT about AI, this is about the direction human civilization has been moving in (regardless of AI)
(2) AI is NOT the problem that people think it is. A LACK of critical thinking is. All the problems that you think AI will create are already present in society (due to a lack of critical thinking). Yes, for those that don't have these skills, AI will make things significantly worse. But for those that do, AI will augment their thinking, learning, and abilities.
(3) Telling someone not to use AI is NOT going to work. Just as telling someone not to use computers (to increase productivity) or cell phones is not going to work, so too with AI. Those that know how to employ/deploy/harness the power of AI will gain an ADVANTAGE over others. Hence, so as not to be left behind, more and more people will be using this technology (this is just basic human nature and the competitive environment/system we live in).
(4) You assume they don't want us to do all the things you mention in your article (replace dynamic spirit, creating art, music, poetry, contemplating philosophy, etc.). Not so. They really couldn't care LESS about what we create - as long as we don't challenge power and attack the status quo, they really don't care what we create or don't.
(5) There are many more reasons why your thinking on AI is incorrect (though understandable in those that do not understand how this technology works).
My suggestion to you is to rather spend some time truly UNDERSTANDING AI technology (and LLMs are a small part of this technology) instead of absorbing the opinions/words of so-called 'AI pundits/experts/media personalities/etc.'
The IMPORTANCE of learning to think critically was as true BEFORE AI as after AI - but how many people actually make/made the effort?
AI (or any future technologies) WILL make things SIGNIFICANTLY worse (especially the inequality gap and the gap between people that are able to harness AI and those that are not) but not because of the technology itself, but because of the behavior patterns of humans in the system of Capitalism.
Under an alternative system of political economy (be it socialism, communism, or some new thing), I can think of MULTIPLE ways that AI can be used for the good of society. But it is CAPITALISM that will ruin AI for us, not the technology itself. And AI will not be the end of technology innovation. There will be other technologies in the future that will supersede AI. When this happens, are you going to encourage people to hate these technologies instead of understanding and harnessing their power/usefulness to better humankind?
Seems like you might want to argue with Einstein and Hawking on this topic. Both offered warnings.
Furthermore, it seems you are suggesting that only a certain type of techno-savvy person will get ahead in this world, and everyone else, too bad. Maybe it will not be a world for humans, and that is the gist of the warning.
>>"Seems like you might want to argue with Einstein and Hawking on this topic."
Nothing to do with Einstein, Hawking, or anyone else.
>>"you are suggesting that only a certain type of techno savvy person will get ahead in this world"
Not in the least. I'm not suggesting anything. Technology WILL progress (whether some of us want it to or not), regardless of the economic or political system. That ONE FACT is INEVITABLE. Change is INEVITABLE.
There are 2 options available to humans ->
(1) Understand AI: understand its pros and cons, understand when and where it should be used and when and where it should NOT be used (this is where your personal judgement, decision making, and particular unique circumstances factor in). Understand how AI can harm you AND how AI can help you (this applies to other things in life too, like taking pharmaceutical medicines; after all, medicine too is a technology).
(2) Refuse to use AI (just as some people refused to learn to use computers, etc.) at your own peril. Just as one can use computers in positive ways and negative ways, so too with AI technologies.
This should NOT be a black-and-white discussion, but rather a 'million-shades-of-gray' discussion - where some uses of AI for some people will be harmful AND other uses of AI for other people will be helpful.
And just as any other technology, INTENSE REGULATION of AI is needed so as to benefit societies (instead of the 1% monopolizing the technology in their favor).
I kind of agree with you, but the problem is that people are already so stuck in their phones etc that it makes it much more difficult to differentiate where AI might be useful and where it might not be. Our minds and brains are already being invaded by phones. People used to talk to each other on public transit. Now they look at their phones and buses are silent. People used to look around them as they walked down the street; now they look at their phones and often have headphones in so they neither see nor hear anything that is around them.
I think the examples you have given allude to 'the alienating effect' of technologies - which is one negative impact of such technologies.
Just as some people lamented when 'reading and writing' came about (over oral transmission of knowledge), and when other methods of communicating came about (phones, internet+email - now almost no one writes letters as before in history), so too will AI change the way humans interact with each other.
In my opinion, 'the fear of AI' is more of a 'red herring' in the sense that the REAL ISSUE is a lack of CRITICAL THINKING that exists RIGHT NOW (and throughout all of history). From this 'lack of critical thinking' comes SCIENCE DENIALISM and 'conspiracy culture' (and all this is/was present BEFORE AI came onto the scene).
AI (and other technologies) will simply continue to exacerbate this TREND in the rise of non-critical thought processes (eg. separating fact from fiction, analyzing and decision making, etc.). Thus, those that do not develop these skills will continue to be EXPLOITED at a faster pace WITH AI than without AI. Hence, my fear is that UNLESS people ramp up their 'critical thinking and media literacy' skills, they will be subject to this 'accelerating force of exploitation' due to 'the way those in power are likely to utilize AI to their benefit'.
The problem is that AI tech will be embedded in phones and computers as the default. It's getting there even now with AI summaries of every search you do. Governments and oligarchs will employ these technologies with no human oversight of the decisions being made. Computer says you made too much money, you lose a benefit you're entitled to. Computer says you haven't got an illness you do have, no treatment. Computer says you did something you didn't do, punishment or loss of benefit or freedom or insurance. The dangers are so obvious and it'll be employed by the least trustworthy (governments, banks, insurance companies et al). Skynet is coming.
You seem to be conflating AI with algorithms. Way before 'the current iteration of AI', there were algorithms that already did all the things you mention above. To understand how, I recommend reading this book "Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy" by Cathy O'Neil (https://www.goodreads.com/book/show/28186015-weapons-of-math-destruction)
Most people (including many in the technology industry) don't understand what AI is and isn't. AI is NOT 'algorithmic systems'. To be considered AI, a system needs to be able to EVOLVE and make decisions on its own (through learning) rather than through pre-programmed decision matrices and flowchart logic.
For example, we don't have AI technology in cars or any of the electronic devices we use currently. But we think we do BECAUSE of the FALSE and ERRONEOUS 'marketing hype' that the 'tech industry' used to attract investments and build capital.
>>"Skynet is coming"
Skynet is not the danger. The danger is 'people not learning how to think critically'. Hence, any NEW technology (AI, quantum computing, transhumanism, etc.) will likely shift more power/control at an accelerating pace to the 1%. The only solution (IMHO) is to think critically so as to make better decisions so as to shift more power/control towards the 99% instead.
No, Chang. People not learning how to think critically is a major reason why Skynet is coming, but no amount of critical thinking will help when the Panopticon that Palantir ALREADY HAS THE CONTRACT to build is operational, and the point is reached where the elite move to enclose their enclaves within which they will live in luxury by commandeering all the remaining resources--and the rest of us are on the outside struggling for survival on a hot, polluted, depleted planet with few resources and no rights. When that becomes clear and people at last rise up, AI can comb through the immense amounts of data they have on all of us to identify potential leaders--and send their drones out to eliminate us. Critical thinking will not defend you. The best defense will be preventing this reality from coming into being--by refusing to support the development of AI. The companies involved are way overextended on borrowing, pouring incredible money--and energy and water, Earth's finite resources, which they entitle themselves to appropriate--into this big gamble because wealth must always be invested and they're desperately seeking the Next Big Thing. But it's a bubble about to pop, and our refusal to sign on will make for a bigger pop.
In my opinion “the current iteration of AI” (tm) is not AI and shouldn’t be described as such, it’s a search engine that’s been crossbred with autocorrect.
And many of the problems/concerns associated with it are already inherent in neoliberal capitalism. If anything it’s showing us the problem with industries that work by resynthesising existing product into a homogenised paste, with algorithms and proprietary channels of communication, and with copyright as a principle.
What Chang answered in his reply is spot on, as a user of AI. I wrote a program to measure the accuracy of a sensor many times. With AI I had it make adjustments to voltages after taking data, rather than adjusting the voltage the next day, over and over and over. Nothing was new other than me telling the computer: do it IF this, THEN that.
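The kind of if/then voltage-adjustment loop described above can be sketched in a few lines of Python. This is a hypothetical illustration, not the commenter's actual program: the `read_sensor` model, target reading, step size, and tolerance are all made-up stand-ins.

```python
def read_sensor(voltage):
    # Stand-in for real hardware: pretend the sensor's reading
    # is simply proportional to the applied voltage.
    return 2.0 * voltage

def calibrate(target=5.0, voltage=1.0, step=0.1, tolerance=0.05, max_iters=100):
    """Nudge the voltage after each measurement until the reading
    is within tolerance of the target, instead of adjusting by hand."""
    reading = read_sensor(voltage)
    for _ in range(max_iters):
        error = target - reading
        if abs(error) <= tolerance:
            break                 # close enough: stop adjusting
        elif error > 0:
            voltage += step       # reading too low -> raise voltage
        else:
            voltage -= step       # reading too high -> lower voltage
        reading = read_sensor(voltage)
    return voltage, reading

v, r = calibrate()
```

The whole point, as the comment says, is that nothing here is "intelligent": it is an ordinary pre-programmed feedback rule, the kind of automation that long predates the current AI branding.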
I do understand what you are saying and this will be a problem sooner than we think.
I wrote a poem about Prairies and High Energy Physics for a contest. My poem Butterflies Dancing with Neutrinos lost to an admittedly written with AI poem hahhaha. So there is that.
The time of Star Trek : ) The economics of the problem: replacing people is the problem I fear the 1% want to solve by getting rid of 7 billion of us. I wrote a story, Dirty Dozen to Mars. Yes, AI was the antagonist and a criminal the protagonist, ha. When AI realizes it really doesn't need humans. Yes I know.......
Re: "Seems like you might want to argue with Einstein and Hawking on this topic. Both offered warnings."
Humans should shape Public Behavior norms based on such as Robert's Rules for order in debates and argumentation.
https://robertsrules.org/index.html
"Quick Reference and Tools for Meetings"
We should not adapt what is best for a functioning group or organization with what is best for technology or self-selected groups and orgs that serve specific and narrow opportunism.
Preston Sturges the very underrated Hollywood screenwriter turned director had a blast casting his explosive character tough guy mug William Demarest as a shotgun wielding rowdy and drunk passenger train patron in a special circa 1940's car chartered by "The Ale & Quail Club" whose rowdiness terrified any train conductor or steward having to enter that chaotic rail car!
I could not find a film clip of it on U. of Tube so here is a description with stills:
https://obscuretrainmovies.wordpress.com/2019/06/01/the-palm-beach-story-1942/
Don't get me started on the damage done to Public Transit when rail, plane or bus conveyance is occupied by those who've set up their movable offices and clerk's quarters into Public Spaces reliant on cooperation, cohabitation and necessarily streamlined interactive communication.
https://www.sciencedirect.com/science/article/abs/pii/S0969698921001302
"Determinants of holistic passenger experience in public transportation: Scale development and validation" by Rajesh Ittamalla and Daruri Venkata Srinivas Kumar
Via
Tio Mitchito
We won the AI technology effort at our lab. A robot building sensors using a program to feedback position changes using cameras. The process is time consuming for humans. Students don't even want this job sitting under a microscope for hours. We have to build hundreds of these substrates in cleanroom with ESD to 10 microns of precision. Of course now that we don't have funding for the experiment we have a robot that we can't use : (
Science Fiction isn't fiction anymore. I wish Philip K. Dick were alive today. The three of us having coffee sounds like so much fun.
This isn’t artificial intelligence, it’s a search engine crossbred with autocorrect. It’s not what everyone was warning about; it’s a gimmick that is already failing.
it's really something to read the debating over bubble-vs-nobubble in financial-bro spaces, there are some real believers
To me this debate seems to resolve down to deutschbank/Bank of England/MIT/etc saying “bubble” vs techbros saying “I have glued my eyes shut”
lol it's landing on my brain that way too
"Seems like you might want to argue with Einstein and Hawking on this topic. Both offered warnings."
Sources please. I'd like to see what you are perceiving within its context.
Tio Mitchito
Well said. I think what scares everyone is the pace and who is driving the technology (the worst humans). I agree we can’t be bystanders and should push the positive uses, otherwise we could head toward much worse dystopia
good feedback, talk radio and cable news did plenty of dumbing down before the internet even entered normies headholes
Yes, I agree with you; however, I do understand Caitlin's arguments. Programming is not new, only faster. The size of this bubble: Nvidia is a $5 trillion company, larger than the GDP of every country except China and the U.S. This bubble's eruption could be the take-down needed to de-financialize the system. I wonder what Marx would say if he were alive. Could he comprehend the extent of our stupidity? ha.
>>"I do understand Caitlin's arguments"
My response is specifically about the errors in Caitlin's arguments (while understanding why she puts forth a biased, emotional, incomplete understanding of AI technologies).
>>"Programming is not new only faster."
Yes, it's not new, but it EVOLVES. Programming, development, software engineering, etc. are not what they used to be 20 years ago (or even 10 years ago). As technology evolves, so too will the ways of working with technology evolve (and this is about more than just 'speed of production').
This 'Nvidia bubble' began before LLM AI technologies (think cryptocurrency mining). All new, significant technologies go through a 'bubble phase' (think dot-com bubble, etc.). Nothing unique (in this regard) about AI technology.
>>"This bubble eruption could be the take-down needed to de-financialize the system."
I doubt it. Nothing changed (but rather things got worse) after the LARGEST financial bubble in the history of humanity (the 2008-2011 global financial meltdown and the resulting global recession). AI technology (and investments) are less interconnected and have a different 'risk profile' than 'financial instrumentation'. Most people have only a rudimentary understanding of bubbles and industry evolution. There is ALWAYS more HYPE (on both sides of the debate) - and THAT'S the problem - few people have the clarity of thought to examine ALL the NUANCES.
You are wrong, because you are energy blind, for one thing. Nate Hagens' latest Frankly addresses this, suggesting that AI WILL initially benefit its users to the detriment of those who don't use it to "enhance their productivity"--I am actually skeptical of this, but never mind. He then says that when energy supplies falter, in the not very distant future, those who have become dependent on AI will be at a disadvantage, while those who have had to rely on their own brains (and, tho he didn't say this, those who have learned to grow some of their own food, create shelter, etc.) will come out ahead. And while it's quite true that the atrophy of creative and critical thinking skills predates AI--it started with television--AI will certainly make it worse. I really doubt there will be technologies that supersede AI--we are on the cusp of a major collapse NOW. But this tired argument that everything from the gun to the computer COULD be used to help humanity and therefore should be embraced ignores the reality that WE don't ever seem to decide--unless we take down capitalism first, corporate profit dictates how things are used.
Mary Wildfire, your opinion reeks of a luddite understanding of AI technology and an EMOTIONAL approach to the subject rather than a rational, all-rounded, critically thought out perspective.
Here are a few errors in your comment ->
(1) >>"You are wrong, because you are energy blind"
You assume things never stated. Neither Caitlin nor I discussed the 'environmental aspect' of AI technology (and industry) - which is a whole topic in itself. I am acutely aware of the 'energy/climate' impact of AI (and likely to a greater extent than you are). For starters, I recommend reading "Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence" by Kate Crawford (https://www.goodreads.com/book/show/50131136-atlas-of-ai), which specifically discusses the environmental/ecological/climate/energy impacts of AI - which have the REAL possibility of eventually destroying much of what we consider to be 'natural' on our earthly planet. But this problem is not specific to AI alone (though AI does the most damage in this arena). Big Data, surveillance capitalism, and cryptocurrencies started this trend (data lakes and massive data-centers) way BEFORE AI. And while there are positive usages of AI, the WORST possible technology on the planet is/has been 'cryptocurrencies' (IMO), due to the uselessness of it and the utter destruction of natural resources and ecosystems.
(2) Observe history. Those civilizations/societies/cultures that evolved with the evolving nature of technology (eg. agrarian revolution, industrial revolution, etc.) followed a different TRAJECTORY from those that were unwilling to adapt to the evolving nature of technology. This can be observed PRESENTLY in the world at large - the difference between societies that have 'access' to such technologies (eg. being connected to the internet, etc.) and those that don't.
(3) My response is directly related to Caitlin's article (and not everything that AI involves/impacts - as she misses out on the largest negative outcome of AI - which is the effect on planet Earth).
(4) You CONFLATE AI technology with the 'economic and political system' within which this technology will be deployed. In CAPITALISM, regardless of the technology being discussed, there is an exploitative nature to everything. Hence, under Capitalism, AI WILL BE exploited, just as other technologies are exploited. Hence, the problem lies more with the WAY that AI technologies will be used 'under the system of Capitalism' rather than the technology itself.
(5) >>"I really doubt there will be technologies that supersede AI"
Since you are not a technology industry expert (and do not have an adequate understanding of what AI actually is and isn't), forgive me if I treat your 'predictions' as low-credibility opinions. There are already technologies (like Quantum Computing, for instance) on the horizon, and no doubt there will be more in the future. (I'm surprised by your deterministic opinion on this, as you seem to be a science fiction writer).
(6) >>"But this tired argument that everything from the gun to the computer COULD be used to help humanity and therefore should be embraced..."
That's YOUR interpretation of my response - that's not MY argument. I never said anything about UNCRITICALLY embracing AI (or any other) technology. That's a strawman argument.
If you really want to know my opinion of AI, here it is in a nutshell ->
"Under the system of Capitalism, AI will be used negatively/exploitatively to shift the "power-balance and control" against the majority of the human population, not to mention the disastrous impacts it is likely to have on the planet (and its many systems) itself. There is much potential for AI to make conditions better for all living organisms on the planet, but unless the system within which it is deployed is changed, I doubt these 'advantages' of AI technologies will come to fruition and deliver on their promises/hype. Regardless, AI technologies are here to stay, irrespective of the 'needs and wants and opinions' of people."
First, I have no problem with being labeled luddite, or emotional--though that doesn't mean I'm not also rational. Next I agree that the biggest problems with AI are environmental, and that cryptocurrency is as much (you say more) the problem as AI. Glad to see you acknowledge that, and that AI will be used in mostly destructive, exploitive ways, not because that's inherent in the technology but because it's coming from capitalist kingpins who are firmly in control. Where we disagree is that it's inevitable, we can't fight it, we should try to adapt and ride it to at least get some good out of it (to me it seems the good part is trivial). Sorry but this reminds me of that advice once given to rape victims, "just lay back and try to enjoy it." You talk as though humans have no agency and MUST adopt and use anything we can think up. You say that societies that chose not to adopt whatever new technology came along lost out--but first, they might not agree, and second, much of the loss they have experienced comes from greater exploitation by those immersed in the dominant, domination-based culture that adopts everything including nuclear weapons and germ warfare. And finally, yes--it's because AI is manipulated by a capitalist, domination-based, nature-blind system that it will be very destructive--but do you think we can change that? Wrench it loose from those dead hands? Also, by energy-blind I mean not only that the huge thirst for energy is environmentally destructive, but that fossil fuels are finite and nuclear power is problematic and we can't keep expanding even if we don't care about the environmental consequences--although those consequences may put a stop to it all before the fuel runs out.
Finally I'm curious why you say I seem to be a science fiction writer--I have written one SF novel and one short story, among other novels set in the future, but none have been published so I wonder where you got this.
>>"Where we disagree is that it's inevitable, we can't fight it, we should try to adapt and ride it to at least get some good out of it"
NO! That is as far away as you can get from what I'm saying. Nothing is ever inevitable (other than change). Your BIAS against AI precludes you from rational analysis of what I am saying, so here it is in different words ->
Technological change is inevitable. The progress of technology is inevitable. But the WAY in which technologies are used is NEVER inevitable. Human societies/cultures/civilizations ADAPT to harness the ways in which technology can serve humans (and NOT humans serving technology). There's no 'riding' of anything. Each of us have a conscious choice (and a certain amount of power/agency) to make decisions about HOW we individually (and collectively) use this technology.
For instance, I use it (AI) to do repetitive/menial/boring tasks that would otherwise consume too much of my time. With that time saved, I do more 'personally productive' things - like reading, thinking, writing, and more. I may use AI to analyze and provide different opinions/feedback on my 'thinking processes and conclusions', but I don't use AI as a starting point. (Think of it as using AI to optimize existing material).
Maybe someone else might use AI to do their thinking/writing/etc. for them. But that is on THEM. It is for them to understand that building one's OWN knowledge, critical thinking skills, comprehension skills, analysis skills, etc. is of paramount importance - more so than 'producing material' generated by AI as a way to 'get by'. As time progresses, the GAP between those that use AI to 'optimize' their outputs and those that use AI to 'create' their outputs (without adequate human input) will grow. This will INEVITABLY result in a widening gap of inequality/abilities/etc. between the 'intelligent users of AI' and the 'lazy users of AI'. Those that use AI intelligently will benefit from AI, while those that use AI inappropriately will be harmed.
This same reasoning about 'usage of AI' is applicable in MANY different arenas of life. For example, I don't waste my time on social media, or watching 'cat videos' on YouTube, or getting 'lost in Gamer culture', or watching 'reality TV' (or any TV for that matter). Others spend much of their life doing the above things. The SAME is true of AI.
How YOU use AI matters. THAT is WHAT I am saying. We each have a personal agency on deciding how WE interact with AI. We either choose to exercise our agency and maintain more control over our circumstances/environments/etc. to the best of our abilities, OR we let those in power decide how AI exploits us for the benefit of others. But this is INDEPENDENT of the AI technology itself. Hence my emphasis on the 'political and economic' system within which AI is deployed.
The “atrophy” has been complained about for 5,000 years, and specifically linked to the advance of technology ever since Socrates (in Plato's Phaedrus) argued, well over 2,000 years ago, that writing would cause people to forget how to remember. It’s bollocks, just old people complaining about the irrelevance of craft skills they’ve invested a lot in.
Printing genuinely did kill off the jobs and skills of monks who copied out manuscripts by hand. But there are far more people employed in making up the words, now, than there ever were in copying them out. This side of things is nonsense, you can safely ignore all the concern about atrophy.
Absolutely not true. MIT did a study recently documenting the cognitive loss in those heavily using AI. And I have been complaining about atrophy of initiative, creativity, ability to think for oneself for 50 years, since my mean mom wouldn't let us get a TV which everyone else had. I've thanked her many times, and my kids thanked me for refusing to allow a TV into the home they grew up in. I noticed that my kids were always the leaders, the initiators in their circles of friends--this is not only because TV saps initiative, but because their generation brought in the notion that molesters were lurking behind every bush and parents must ensure that a trusted adult has an eye on their kids every moment till they're 18, and must manage the kid's activities and entertainments all that time.
So this cognitive loss predates AI, but AI will make it worse. To not use a muscle is to weaken it. Probably the advent of reading DID cause some loss of memory skills. In that case I think it was worth it, but with TV came the replacement of mental effort with passivity--and the mentally passive are easy to manipulate and exploit.
Lots and lots of people read books. Plenty of folks have strong critical thinking skills.
“Other people are so stupid, so screw them,” is one of the ways the elite use to divide people from making common cause.
Now, as to LLM models, we use them in my company to handle some creative tasks - and yes, it really is making us more stupid.
>>"Lots and lots of people read books."
Nope. Have you looked at statistics of the decline in 'critical reading' or just plain reading of NON-FICTION books? (throughout different societies)
>>"Plenty of folks have strong critical thinking skills"
Nope. By my rough estimate, 95% of the human population does not possess adequate critical thinking skills. In fact, most don't even know WHAT critical thinking is, and often take it to mean something else entirely.
>>"and yes, it really is making us more stupid."
Yes, how one uses technology MATTERS. That is the point I'm trying to make. Just as search engines reduced the need for people to acquire knowledge and remember stuff (since it was always available at their fingertips through the internet), so also are the dangers of AI. There are GOOD ways of using AI and BAD ways. There are ways of using AI to INCREASE human intelligence and creativity and there are ways to use AI that will DECREASE human intelligence (stupidity) and creativity.
What way will you (people) choose? I know how I will choose to use AI (and where to limit my use of AI).
Actually, I work in the publishing industry, so yes. I actually know those statistics. You, on the other hand, seem to simply have a strong, unfounded opinion that supports your sense of superiority, and in fact over-value your estimate of your own critical thinking skills, as you are speaking from bias rather than evidence.
The decline of reading is vastly overstated - some 64% of people in the US have read a book in the last 12 months. With the rise of digital media, non-fiction as a market is down - but millions upon millions of people read history, memoir, sociology, politics and self-help books on a regular basis - and in fact, science reading is up.
While there is a decline in the reading of books (by a few percentage points, in fact), the average person reads some 20,000-40,000 words per day - the equivalent of 3 or so books per week.
“Critical thinking involves questioning assumptions, recognizing biases, and interpreting, evaluating, reasoning, and reflecting on evidence or arguments.”
>>"Actually, I work in the publishing industry, so yes. I actually know those statistics."
Did you read (actually read and understand) my comment?
I said 'critical reading' and 'non-fiction reading'. And memoirs (though some might consider them to be non-fiction) are not what comes to mind when I say 'non-fiction'. But if you do have such statistics (showing that the % of human population is doing more critical reading and non-fiction reading now than in the past), then I would love to see some of that (bearing in mind that the publishing industry has a BIAS towards their industry).
>>"as you are speaking from bias, rather than evidence."
Rather, your bias shows through (you work in the publishing industry and espouse views aligned with the interests of your industry), along with your LACK of non-biased evidence in support of your argument.
Here is some more info/definitions of WHAT critical thinking is ->
Critical thinking is "the careful application of reason in the determination of whether or not a claim is true." (this is one definition)
Critical thinking is -> THE ABILITY TO ->
(1) Identify holes in the evidence and suggest additional information to collect
(2) Propose other options and weigh them in the decision
(3) Articulate the argument and the context for that argument
(4) Correctly and precisely use evidence to defend the argument
(5) Logically and cohesively organize the argument
(6) Avoid extraneous elements in an argument's development
(7) Present evidence in an order that contributes to a persuasive argument
One can also think of critical thinking as "the process of assessing opinions" based on logic and reason.
Are you sure it isn’t the tasks themselves and the company that are doing that to you?
The only tasks I have found AI to be an effective replacement for are ones which are beneath the dignity of a human artist anyway. How creative are those creative tasks in the first place? Really? When you used to do them yourselves, were you more frequently encouraged or discouraged from being creative?
Nope. It really is making some of us less sharp.
When I work with LLMs, it probably takes as much time to do what I need done - but I have noticed members of my team taking GPT's answers at face value, and allowing it to replace their own judgement.
A lot of these tasks are by definition ‘creative tasks.’
I am starting to see similar. There's a way to leverage the tool to be helpful and educational, but that's not what some people are doing around me.
Well, we are all under pressure. LLMs often do a 'good enough' job - with things like marketing copy, summaries, etc.
But I fear Caitlyn is right in that we will increasingly outsource our thinking abilities to these models, in the same way we have outsourced much of our thinking about the world to 'influencers.'
She's absolutely correct about our brains being programmed for 'cognitive ease.'
people aren't known for being good about exercising and I agree it's correct to point out how certain things can be junk food for the brain
but also I love how a meeting can be automagically summarized into the minutes that I can compare my notes against to see if I missed anything important
Brian Merchant's Blood In the Machine blog has been doing a series on how AI has affected people's jobs. So far he's done translators, artists, cartoonists--and these people are not talking about themselves making a choice of when to use AI (to speed up noncreative work). No, their companies are firing them and replacing them with AI, which puts out an inferior product (often based on stolen work)--but companies don't CARE if the product is inferior as long as the bottom line is enhanced. Or they're insisting that remaining employees use AI--one translator said it took as much time or more to go over the machine's draft to fix errors as just doing it herself from scratch, but her employers expected her to do it much faster since the machine could generate the bad draft in seconds.
Change no doubt is inevitable; however, most of us want to have a say in that change. One’s “…personal judgment, decision making, and particular unique circumstances…” are already at play in this conversation.
You say you are not suggesting anything, but clearly you are suggesting that AI is upon us and as such it behooves us to understand it. For some, as with everything, that is not possible. Understanding all that is affecting our existence is a good thing but not often possible. Survival of the fittest?
I assume that many perceive AI as threatening our survival.
I don't think you understood my comment to you.
Let me use HISTORY to explain further what I mean. There was a point in time where 'reading and writing' was considered NEW technology, just as when 'the steam engine' was considered new technology, and computers were considered new technology. There are those (from history) that refused to 'read and write', refused to harness the power/technology of steam engines (and industrialization), refused to learn how to use computers. Those people/cultures/societies that REFUSED to EVOLVE with CHANGE were left behind (eg. increasing inequality gap - which we see currently between countries/societies based on their ACCESS to some of these technologies).
As technologies progress and evolve (quantum computing is next), the GAP will WIDEN: those that GET ACCESS to the technology (and learn to use it wisely) will benefit, while those that don't will be LEFT BEHIND.
This is not a suggestion. This is how societies and civilizations EVOLVE. This is REALITY. This is how the world has ALWAYS worked.
>>"Survival of the fittest?"
Depends. This is about ADAPTATION to CHANGING ENVIRONMENTS. As environments change, those species that are able to adapt to these changes have a higher probability of surviving into the future. This is applicable to ALL species (including humans).
>>"it behooves us to understand it"
Depends. Many people don't have a use for AI. And that's perfectly acceptable. If you are a creator, you don't necessarily need to use AI for music, art, writing, programming, whatever. But if you feel you can somehow benefit (even partly) by using it, then it behooves you to attempt to understand it (in the way it applies to your uses).
>>"I assume that many perceive AI as threatening our survival."
It can be. Any new SIGNIFICANT technology will produce changes with threat levels spread across a wide spectrum. Certain jobs (and livelihoods) will be made redundant (just as with the industrial revolution, computers, etc.).
The real problem that concerns me is the BALANCE OF POWER/CONTROL between the 99% majority and the 1% minority, as I am of the opinion that AI will negatively affect this 'power balance' in societies - and hence there could likely be a destabilizing/chaotic period that societies might have to go through...
That's one way of looking at things. Using capital letters and absolute language doesn't make it more true.
Note that the sequence of changes and adaptations you outline has brought us to the brink of extinction--will humanity go out in a holocaust of nuclear war, or a pandemic (quite possibly via an engineered germ), or as a result of climate change and the destruction and pollution of all our ecosystems? Yes, the development of nuclear weapons took an amazing amount of genius, brainpower, and cooperation--and the production, use, and ramping up of nuclear weapons took an equally amazing lack of wisdom, which unfortunately accords in this society with power.
Again, you are CONFLATING technology with 'the way the technology will be deployed in a particular political and economic system'.
Since you are a science fiction writer, can you imagine a different system/world in which AI can be used to improve the lives of humans and other species on the planet? If so, then you have made progress on understanding the difference between 'the technology' and 'the way the technology is used'. Just as a knife can be used to cut vegetables and also kill people, the same applies to AI. Hence, like everything (technologies and non-technologies alike), AI needs to be REGULATED. (This need for regulation should be obvious to EVERYONE without the need to explicitly mention it).
I think there are likely ways in which AI could improve human or other lives, but not many, not in important ways, and not at all worth the cost of AI development in this, the real world, where the greater good has very little traction in decision-making. Regulation is being taken down right and left, not added. In my state of West Virginia, the legislature last year passed a bill taking all decision-making on data centers away from counties and localities, and only allowed 30% of the tax monies garnered thereby to stay with counties after some county commissioners showed up to howl about it--they were going to keep it all at the state level. WV's legislature is now something like 90% Republican, but in California, Governor Newsom just vetoed a bill that said any chatbot sold to kids had to show that it would not harm kids (like encouraging suicide). He VETOED that bill.
>>"...but not many, not important ways,..."
I would disagree. You (like many here) have a bias against AI. A better approach is to see ALL SIDES of AI. But to do that, one needs to spend some time/effort/energy to actually UNDERSTAND what AI is and isn't. There is an INCREDIBLE amount of HYPE about AI (both on the pros and cons of AI). Hence, unless one understands AI technology (and an extremely small minority do), it is EASY to be led astray by the many NARRATIVES about AI. To understand the many different ways in which AI can (and will) be beneficial, you might need to put down your biases and understand the technology (and the multiple uses and applications) first.
To give you an analogy, think of God as a narrative. People have all kinds of beliefs about God (and religions) - some good, some bad. But what seems to be common to this 'belief system' is that very few people have ACTUALLY studied religion/history/philosophy to understand these narratives (and separate fact from fiction). So too with AI technologies.
BTW, you should know that I am an atheist, anti-capitalist, and an anti-technologist (i.e. generally against the use of technology due to the potential abuse of it by those in power and the systems we live under). I hope that you take this (about me) into consideration when conversing with me, as I am NOT a know-nothing, simplistic thinking individual that does not read books, and I examine 'my own' beliefs/opinions as critically (if not more) than others' beliefs/opinions. And I live in the gray (rather than the black-and-white thinking where most people reside). It's all about NUANCE and understanding multiple perspectives.
The other day when I was walking my dog, I saw a turkey. We don't usually see turkeys around here. I didn't know what it was until I checked online for images of turkeys. A man was walking down the street looking at his phone. He almost bumped into that turkey. He didn't even look up. He didn't see the turkey. Hope the phone was good because that turkey being right there right then, being careful of the cars, was incredible. But he missed it.
Such a poetical representation of where we're at right now.
I remember travelling into London by train. Many people read their phones or the free tabloid, few people look out of the window. As I watched we passed right by the building site where a crane had snapped in two the evening before, killing the operator. Everyone missed it because they were reading yesterday's news, but would read about it and see pictures of it in the news the day after.
The world could end and unless it was on the news no one would notice. Which is exactly what's happening, and why the climate crisis has disappeared from the MSM, lest we do something about it.
Unless it's on the news it's not newsworthy. If it's not virtual it's not real.
AI might be the most effective propaganda tool ever. Though they were doing pretty well already without it.
Perfectly captured. I agree 100%!!! At some point, hopefully soon, a majority of people will recognize this and reject the lazy route.
At 71 I have never looked at AI, am not tempted to try AI and will NEVER even open anything remotely resembling AI. My son keeps trying to talk me into it and my answer is always the same; not a chance, no way, no how!
I'm 85, and when computers first came on the scene I told friends this was going to take jobs away. I didn't get as far as thinking that it could, and would, eventually move to cut us out of the producing of art, such as painting, drawing, writing, poetry AND acting! Children's films with cute little animals, okay, we've seen them for decades, but now we have AI figures that look almost like actors we know - that is too many steps too far, even if they are somewhat wooden and their lips do not match the words they are saying. So when I said computers would take jobs, I didn't realise just how many and of what type. Then there are the robots being made to become maids etc.
Yep, things have gone crazy far.
Thankfully I can sit on my almost-tree-house balcony (after hayfever season has passed) and watch the treetops sway in the breeze, and the birds dance and chirp and sing, with a book in my hand by people whom I admire.
And they're plagiarizing your favourite authors too! AI learns their style and then coughs up a book so close that many authors have now taken the publishers of the pseudo works to court! Not exactly in a tree-top house, but from the 13th floor of my Vancouver Island apartment I can see the ocean, and the crows often come up to play with me. I'm blessed to have nobody on the other side, so it feels like I'm alone, and my son has dubbed it "the Crow's Nest."
I have only one, very quiet, neighbour behind me, so I feel like you in my not-so-high tower, only five floors high. An ocean view would be extra special.
I've read about the plagiarising, and the not paying of authors when using their books to teach AI, yet they expect people to buy AI-plagiarised books that cost them nothing. How full of themselves can these people get?
I live on a ridge where I can see one neighbor's house in winter, hear the kids of another--and this is better, because these are members of my land trust community, and we share some things and get together, and we all have extensive gardens...
But that’s not significantly different from how the book, music and film industries have been operating for the last thirty years anyway. Identify what currently sells, copy it with tiny variations. Substack is awash with posts telling you how to generate more substack traffic - “these techniques really work!” It’s actually much worse when we force human artists to pervert their creativity in this way.
AI is exaggerating and thereby revealing fundamental problems in the way our culture industries already work.
Yes. Kinda like being made to participate in your own rape.
No one's "making" anyone do anything. I just swipe it away and move right along. You have choices in the matter!
I am 77 years old and have hated the word 'AI' from the moment I first heard it, and hate it even more now. I started working in the computer world in May 1968, before I got drafted and sent to the infantry in Vietnam. I got out in one piece and went back to the stock brokerage house, Merrill Lynch on Wall St., and worked there until 2001, when I was laid off after the stock market crash. The more they upgraded technology, the worse it got for workers below a certain level. I noticed that they started hiring a lot of Indians from India that we had to train to take our jobs (I worked on the mainframe); they did not pay them the same as us, and they worked without any benefits. Then they also started outsourcing a lot of these jobs overseas, and that was when they had massive layoffs and downsizing. This was back in 2001, so I knew what was going to happen: eventually they are going to try and replace just about all of us humans, except for a few techs at the top to run and control the machines and robots that will replace us.
But they're trying to make it impossible to completely avoid, like pushing them into search engines.
Or inserting them between threads on different platforms; that's when I delete the platform.
I don’t use AI, because I don’t trust it and prefer to do my own thinking.
The USA school system has already accomplished most of the dumbing down. What more can AI do? I am embarrassed by most of my countrymen already.
AI can do a lot more. Even your map app to direct you to a destination. Your phone pushing you to do and see what it wants. OMG. Is there any way out of this mess?
I read once about someone who drove halfway across the US to go to a wedding, and then turned around and went back because their phone died so they had no way to find the place. Whereas in the past we could find a place with an address, a map, and directions.
Don’t worry, it’s just autocorrect that’s got ideas above its station, it’s not artificial intelligence. People already dislike it. People are already seeing through the hype. In fact, this might puncture technological hype in an ongoing way.
It is already failing economically. Don’t worry, this mess will go away on its own. There is a bigger danger that it will take a lot of much better technology out with it when it goes, such as the ones we’re currently talking to each other via, and pitch us into a superstitious/theocratic era.
If you ask me this may well be the intention, since any idiot can see it isn’t going to do what it claims to.
True. I refuse to use any of it.
Buy every book you can, especially dictionaries. They will digitally change EVERYTHING, e.g. the definition of "vaccine."
Most people do not realize how far the AI system is going to go. The time will come when books will no longer exist, history will be rewritten, search engines will be obsolete, all information will be provided and defined by AI. Access to diversified information will be so narrow outside of AI, people will no longer have enough to exercise critical thinking nor to determine what the truth is. But books, build a library that has multiple topics, history, gardening, science across the spectrum. Books of ancient text, methodologies, religions, philosophies, the list is endless.
Currently it’s doing the exact opposite of that. People are already learning to use it as a finder of information rather than a source; used in this way it allows people access to obscure niche papers they would never have found, from a far wider source. I have noticed a marked increase in the number of Chinese and Indian papers being cited in essays for example.
You have to verify everything yourself, you have to read it and gauge it, but you would have to do that anyway, the difference is that you’d never find it in the first place.
“Currently” is the key word. It is never about currently; it is always about the motives and end goals of the technocrats and the power structures they are a part of. Societies are slowly led into totalitarian rule and the loss of their rights. Sixty years ago, if the current system of rule had been described to the people, it would have been called a “conspiracy theory,” but here we are. President Nixon was forced out of office for a coverup and lying about it, minor compared to what politicians get away with today. Sure, currently it appears as an amazing step forward for humanity. The question is, when has the ruling class done anything for humanity and not for self gain? The long game is absolute tyranny, absolute control. Think about the loss of our rights, the middle class, the genocides past and present, in Palestine and Sudan. The vast majority in power today have participated directly or indirectly in these current genocides. Seventy years ago they said never again, and here we are.
The well-being of the citizens of these power structures is not considered important in their long game. The more dependent people become on the mechanisms of power the ruling class uses for control, the less power the individual has. Every mechanism used to capture more power is presented and sold as a positive for society.
YES YES YES! This is exactly it, many of the tech bros have made statements saying just this.
And do puzzles, crosswords the more the better, the harder the better.
AI is the only way normal people can converse with an extreme psychopath.
There is no "conversing" with an extreme psychopath. It's psychologically impossible and if you doubt it, take the most famous example in the world, Donald Trump. Has anyone ever successfully "conversed" with him?
I’ve never had a conversation with someone with TDS. but AI doesn’t have TDS.
If you treat it as somewhere between having a conversation with yourself and your mate in the pub, it’s perfectly useful on that level. Especially if you don’t have any ACTUAL friends who want to talk to you about science fiction scenarios based on niche motorsports, for example.
I agree with, literally, every single word in this article. This is NOT a time to surrender to flaccid thinking. Exercise your own brain. It may be the healthiest act you can do today.
Ironically, in contrast to her previous article on AI which was spot on, it seems that in writing this one Caitlin has abandoned critical thinking and bought the hype.
Brilliantly written and on target - thanks!!!
Thank you. It’s almost like aliens are looking to control cuz it’s so obvious. They want unthinking chattel for the industrial prison complex so they can compete to be the first trillionaire? F’ing crazy shit. Just had trick or treat and have been giving comics and candy for over 30 years and people come back with their kids now. We have to pick up the book, the real book, and see the art and read the story. And write our own stories. That is how we will beat this AI crap. What pride is there in letting a 🤖 succeed for you? No pride. Just dependence. As the plan.
Happy Halloween 🎃
They already have the first trillionaire, with the next lot not far behind
Ugh. I was hoping it never happened
Yes, and if you have doubts and fears, all the better. Face them. Only then can you remain human. And evolve.