Joy, just because it gave you the answer it did DOES NOT MEAN that you (or anyone else) trained it any differently.
The way an LLM works, what you see is the "output". That output DOES NOT automatically become the "input" for further training or refinement.
Also, because of how LLM context windows work, models do not remember previous conversations for very long (unless the session's context is extended - which costs more compute). (BTW, the DeepSeek R1 model has roughly 671 billion parameters, and training such models means adjusting those parameters through processes like reinforcement learning - not holding conversations with the model. In other words, conversing with an LLM is NOT how it is trained.)
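To make that distinction concrete, here is a minimal sketch in PyTorch using a made-up toy model (the class name TinyLM and all its sizes are invented for illustration, not anything from ChatGPT or DeepSeek). It shows that a chat-style forward pass leaves the weights untouched, while only an explicit training step (loss, backward pass, optimizer update) changes the parameters:

```python
# Toy sketch: inference does not change a model; a training step does.
# TinyLM is a made-up stand-in, not a real LLM.
import torch
import torch.nn as nn

class TinyLM(nn.Module):
    """A toy 'language model': an embedding plus a linear output head."""
    def __init__(self, vocab_size=100, dim=16):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, tokens):
        return self.head(self.embed(tokens))

model = TinyLM()
tokens = torch.randint(0, 100, (1, 8))  # a pretend prompt

# --- Inference (what happens when you chat) ---
# The forward pass just produces output; the weights stay frozen.
before = model.head.weight.clone()
with torch.no_grad():
    logits = model(tokens)  # "the answer you see"
print("weights changed by chatting?",
      not torch.equal(before, model.head.weight))  # False

# --- Training (what the lab does, offline) ---
# Only a loss-driven optimizer step actually updates the parameters.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss = nn.functional.cross_entropy(
    model(tokens).view(-1, 100), tokens.view(-1))
loss.backward()
optimizer.step()
print("weights changed by a training step?",
      not torch.equal(before, model.head.weight))  # True
```

The point of the sketch: your conversation is only ever the first half of this script. The second half - the part that changes the model - happens on the provider's side, on their schedule, with their data.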
What you saw was a response that pleased you. NOTHING about ChatGPT has changed because of your interaction with it.