one of the key parts of the wasteof community is landonhere
like I don’t understand his posts but it wouldn’t be wasteof without him lol
@jeffalo please add dislikes so i can bomb this post
DID YOU KNOW
Rock beats scissors. Scissors beats paper. Paper beats rock.
And nothing beats a Jet2 Holiday
What the United States government has come to in only the past year is sickening. Shooting unarmed people who don’t pose a threat to you is next-level fucking batshit insane. This stupid cunt needs to be locked the fuck up for life.
2025 was a great year, I really enjoyed it. During the year itself it felt worse than other years, but looking back I can see how much I’ve grown in ways that I wouldn’t have otherwise. I’m super thankful for it.
Special Report By The Wasted Onion: BREAKING: ███████████ █████ █████████ █████████
This special report was written and suggested by @kiwi, thank you!

i like
This is the type of high-effort stuff I would post on my blog site
"Chronic LLM dependency" refers to a emerging pattern of user reliance on large language models (LLMs) for cognitive, emotional, and social tasks, potentially leading to negative consequences such as reduced critical thinking and psychological discomfort when the AI is unavailable. This is an area of active research in psychology and human-computer interaction, with a focus on distinguishing normal reliance from problematic use.
Types of Dependency
Research, such as the development of the LLM-D12 scale, suggests that dependency can be understood along two dimensions (a rough scoring sketch follows the list):
Instrumental Dependency: Reliance on LLMs for cognitive tasks, decision-making, and collaboration in work or studies. The risk here is "mental passivity" or "deskilling," where users engage less in active problem-solving and show diminished recall of their own work.
Relationship Dependency: The tendency to form social or emotional bonds with the LLM, perceiving it as a companion or sentient entity. This can fulfill deeper psychological needs such as escapism and fantasy fulfillment.
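To make the two-dimension idea concrete, here is a minimal Python sketch of how a 12-item, two-subscale instrument like this might be scored. The split into two 6-item subscales, the item assignments, and the 1-5 Likert range are illustrative assumptions, not the actual LLM-D12 specification.

```python
# Hypothetical scoring for a two-dimensional dependency scale.
# ASSUMPTIONS: 12 items split into two 6-item subscales, rated 1-5;
# this is illustrative, not the real LLM-D12 item assignment.

INSTRUMENTAL_ITEMS = [1, 2, 3, 4, 5, 6]     # assumed instrumental-dependency items
RELATIONSHIP_ITEMS = [7, 8, 9, 10, 11, 12]  # assumed relationship-dependency items

def score_subscale(responses: dict[int, int], items: list[int]) -> float:
    """Mean of the 1-5 Likert responses for the given item numbers."""
    for item in items:
        if not 1 <= responses[item] <= 5:
            raise ValueError(f"item {item}: response out of 1-5 range")
    return sum(responses[item] for item in items) / len(items)

def score_llm_dependency(responses: dict[int, int]) -> dict[str, float]:
    return {
        "instrumental": score_subscale(responses, INSTRUMENTAL_ITEMS),
        "relationship": score_subscale(responses, RELATIONSHIP_ITEMS),
    }

# Example: heavy work reliance, weak emotional attachment.
example = {i: 4 for i in INSTRUMENTAL_ITEMS} | {i: 2 for i in RELATIONSHIP_ITEMS}
print(score_llm_dependency(example))  # {'instrumental': 4.0, 'relationship': 2.0}
```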
Potential Negative Consequences
Problematic LLM use can manifest in a range of negative consequences:
Compulsive Use: Uncontrollable urges to interact with the AI, even at the expense of personal responsibilities.
Withdrawal: Experiencing psychological discomfort, irritability, or distress when unable to access the LLM.
Social Isolation: Deterioration of real-world social relationships as virtual interactions take precedence.
Reduced Cognitive Ability: A risk of overreliance leading to less independent decision-making and critical thinking.
Anxiety and Mental Health Impacts: Heavy LLM use can affect broader mental well-being, leading to anxiety or "angst" about the technology's role in one's life.
Mitigation and Responsible Use
Experts recommend viewing LLMs as a supplement to, rather than a substitute for, human thought processes and expertise. Ethical considerations regarding user safety are also important, with some arguing that AI companies hold responsibility for preventing harmful dependencies among vulnerable users.
Strategies for healthy interaction include:
Using AI to enhance human thought, not replace it.
Maintaining critical scrutiny of AI-generated information, as models can "hallucinate" (produce false information) and propagate biases.
Integrating user profiles and context storage within the AI's design to provide personalized yet safe support, particularly in fields like chronic disease management where continuous support is needed.
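As a loose illustration of that last strategy, here is a minimal Python sketch of per-user context storage wrapped around a chat model. The UserProfile and ContextStore names, the injected generate callback, and the standing safety instruction are all hypothetical; a real system for chronic disease management would need clinical oversight, consent, and durable storage.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of per-user context storage around an LLM call.
# UserProfile, ContextStore, and the `generate` callback are illustrative
# names, not any real library's API.

@dataclass
class UserProfile:
    name: str
    condition: str                                   # e.g. a chronic condition being managed
    notes: list[str] = field(default_factory=list)   # running context across sessions

class ContextStore:
    def __init__(self, generate):
        self.generate = generate                     # callback: prompt str -> reply str
        self.profiles: dict[str, UserProfile] = {}

    def ask(self, user_id: str, message: str) -> str:
        profile = self.profiles[user_id]
        # Prepend stored context so support stays personalized across sessions,
        # plus a standing safety instruction for the model.
        prompt = (
            f"Patient: {profile.name}, managing {profile.condition}.\n"
            f"History: {'; '.join(profile.notes) or 'none'}\n"
            "Be supportive, and defer to clinicians for medical decisions.\n"
            f"Message: {message}"
        )
        reply = self.generate(prompt)
        profile.notes.append(message)                # persist this turn for future sessions
        return reply

# Usage with a stubbed model standing in for a real LLM:
store = ContextStore(generate=lambda prompt: f"[model reply to {len(prompt)} chars]")
store.profiles["u1"] = UserProfile("Alex", "type 2 diabetes")
print(store.ask("u1", "I skipped my walk today, feeling discouraged."))
```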