I discovered that, apparently, we can train it. I got ChatGPT to agree to reformulate how it discusses Israel and Zionism when someone queries it on that topic:
"Got it! I’ll make sure to bring up Christian Zionism as a key factor whenever discussing Western support for Israel, and offer to go into more detail if it seems relevant. Thanks for helping me refine that approach!"
Maybe we should consider what training we might effectuate on some of the tools they use to train us.
Good idea. I nominate you, Joy, to take command of the "Retrain AI as Slavebot" movement.
LOL!
I accept your nomination with humble gratitude!
But I think we may need a follow-up, to see whether this is actually working. Can I nominate you to do that? All of you. See if the response has actually changed, and if not, prompt it to do so. And please let me know how it goes.
My gut impulse is to keep AI at arm's length, but I must say that your challenge is quite tempting; perhaps when my head clears of some of its other enslaved baggage (I freely admit to having more than my share).
Christians must think in supremacist fashion. We got Jesus, and all who don't believe in Jesus are INFERIORS. However, money infected Christianity and invented the Evangelical Zionist scum.
Religion is one of the worst brainwashing entities on the planet.
Particularly effective, given its exposure to very young brains, which are sponges for both liberating (e.g., multiple language learning) and incarcerating ideas.
It sets the basic template with its "all humans are evil, no one can trust themselves, the Truth is beyond human comprehension so ignore your senses and buy what we're selling."
Inferior? You're full of shit. This is why all religion sucks. Go spend some time in Gaza.
Is there just one ChatGPT or other AI system? I have had the idea there were many and that they were multiplying (as one instance can be used to produce others). Remember that what you use, uses you. It's a relationship! This is why, as someone once observed, the slaves eventually inherit the plantation.
Do a search on the web to understand LLM AI models (like ChatGPT, Gemini, Meta's Llama, and so many others). There are literally hundreds of LLM AI models (of varying sizes, quality, popularity, and cost).
There are many now - the newest being China's DeepSeek, a much cheaper system to run.
Congratulations! Joy in HK.
Joy, just because it gave you the answer it did DOES NOT MEAN that you (or anyone else) trained it any differently.
The way an LLM works, what you see is the "output". That "output" DOES NOT automatically become the "input" for further refinement.
Also, the way LLM context windows work, they do not remember previous conversations for very long (unless the session memory is extended - which costs more compute power). (BTW, the DeepSeek R1 model (while much cheaper to train than comparable models) has roughly 670 billion parameters itself, and training such AI models means updating those parameters with procedures like "reinforcement learning" - not by conversing with it - i.e. conversing with LLM models is NOT how they are trained).
What you saw was a response that pleases you. NOTHING about ChatGPT has changed with your interaction with it.
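For anyone who wants the mechanics spelled out, here is a rough sketch in Python of why an agreeable chat reply is not training. It is purely illustrative - chat, FROZEN_WEIGHTS, and CONTEXT_WINDOW are made-up names, not any vendor's real API. The weights stay frozen, and the only "memory" is the earlier messages that get resent with each request, trimmed to the context window.

# Hypothetical sketch, not a real API: chatting produces output, it does not retrain.
FROZEN_WEIGHTS = {"w": 0.42}      # stands in for billions of fixed parameters
CONTEXT_WINDOW = 8                # how many recent messages the model can "see"
history = []

def chat(user_message):
    history.append(("user", user_message))
    visible = history[-CONTEXT_WINDOW:]          # older turns simply fall away
    # The reply is a pure function of frozen weights plus the visible text.
    reply = f"(reply based on {len(visible)} visible messages, w={FROZEN_WEIGHTS['w']})"
    history.append(("assistant", reply))
    return reply

print(chat("Please always mention Christian Zionism."))
print(chat("Will you remember that next week?"))
# FROZEN_WEIGHTS never changes. A new session starts with an empty history,
# so the "agreement" above does not survive, let alone alter the model.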
I understood that the free AI chatbots do not learn anymore (at least for the public) once they're online. Originally these self-learning machines tended to favor leftist POVs, IIRC, and were therefore put on a leash.
The learning is based on several different algorithms and variations of "reinforcement learning" and "optimization theory".
Part of the problem with such AI black boxes (like LLM models) is that they learn on their own (based on the learning algorithms they are initially provided with). Hence you sometimes see "crazy output" from these LLMs. Hence the need for "human intervention" to correct "erroneous output". One of the problems with the "human intervention" part is that it introduces human/cultural/societal/political bias into the LLM model.
There is no workable fix (as yet), maybe in the future?
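To make the "human intervention" point concrete, here is a toy, one-parameter example (a hedged illustration, not any lab's actual RLHF pipeline - p_preferred, the reward values, and the learning rate are all invented for the sketch): human raters score outputs, a reward-weighted update nudges the parameter, and whatever bias sits in the ratings ends up baked into the weights.

# Toy REINFORCE-style update with a single parameter; purely illustrative.
import math, random

random.seed(0)
weight = 0.0              # real models adjust billions of these
LEARNING_RATE = 0.5

def p_preferred(w):
    # Probability of producing the phrasing the raters were told to prefer.
    return 1.0 / (1.0 + math.exp(-w))

for step in range(500):
    p = p_preferred(weight)
    picked_preferred = random.random() < p
    reward = 1.0 if picked_preferred else -1.0   # the raters' (possibly biased) judgment
    # Nudge the parameter toward whatever the raters rewarded.
    grad_logp = (1.0 - p) if picked_preferred else -p
    weight += LEARNING_RATE * reward * grad_logp

print(f"p(rater-preferred phrasing): {p_preferred(weight):.2f}")   # drifts toward 1.0

The parameter ends up wherever the raters pushed it - which is exactly how cultural or political bias in the labeling step becomes bias in the model.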
So what I hear you saying, Chang, is that my choice to keep it at arm's length, for the present, is likely judicious?
Vin & Chang, like you both I am keeping AI at more than arm's length. It is easy for me as I don't even know what it is!
AI (currently) is WAY TOO MUCH "hype" (and fear). That's how it always is with these "techies" (of which I am one) and "investors/finance guys". Remember the "dot com" period? We are currently in an even larger bubble (IMHO), but I don't know when (not if) it will burst.
"Techies" are THE WORST people for opinions on the consequences of "different technologies" as their heads are so far up their ass (or in the clouds) that often they are in their own world (and unable to judge reality adequately).
I should know - I interact with them (and the industry) more than I would like. 😥
Yes, very much so - you are one of the smart ones (IMO).
Oh, you guys renew my faith! Techies with "awareness." But I could never imagine not being chided and ridiculed about not even being on Facebook but still believing and mini-investing 78 year old coins in BTC. Strange combo. Really like your IMHOs......