2025 was a great year, I really enjoyed it. Throughout the year it felt like it was worse than other years, but looking back now I can see how much I've grown in ways that I wouldn't have otherwise. I'm super thankful for it.
everything about this post makes me uncomfortable
When u lowkey genuinely walk into Eris’s (aka Eris Schaffer) house to suck her but Nefarious Intent from Intent.Store (aka Robert Lopresti) already is in a relationship with her but then Nefarious Intent from Intent.Store gets doxxed on Doxbin by Team Discontent so he disappears. Conversation:
Eris: Robert its ok bae they are just a internet people
Robert: No you dont understand i silent updated iSync and now people think i ratted
Eris: Well you probably did rat ngl
Errplane: Hey sexy suck suck
Robert: What the fuck??
Errplane: shut up Robert Lopresti
Oren Lindsey: Hey what are you doing?
Lucky Lindsey (aka luckythecat): Oren what are you doing
Jeffalo Epstein: Nah imma do my own thing jumps off a bridge
Tnix100: I declare errplane as the new owner of wasteof.money!!!
Errplane: yes very good !!! I unban MrMeems because he is my bae
Team Discontent: We are dox!!
Errplane: Shut the fuck up
Allah Leaks the infamous leakers of nefarious intent clients: No more to leak :sadge:
errplane: Its ok you can leak diCkware premium the client made by me!
Anshnk: Hey guys
Burgurfruit/bread: Shut up ansh stop smoking weed
Ansh: Bread is playing gta
Linus Torvalds: hey guys the meaning of life is sex
Special Report By The Wasted Onion: BREAKING: ███████████ █████ █████████ █████████
This special report was written and suggested by @kiwi, thank you!

i like
This is the type of high-effort stuff I would post on my blog site
"Chronic LLM dependency" refers to an emerging pattern of user reliance on large language models (LLMs) for cognitive, emotional, and social tasks, potentially leading to negative consequences such as reduced critical thinking and psychological discomfort when the AI is unavailable. This is an area of active research in psychology and human-computer interaction, with a focus on distinguishing normal reliance from problematic use.
Types of Dependency
Research, such as the development of the LLM-D12 scale, suggests that dependency can be understood in two dimensions:
Instrumental Dependency: Reliance on LLMs for cognitive tasks, decision-making, and collaboration in work or studies. The risk here is "mental passivity" or "deskilling," where users engage less with active problem-solving and demonstrate diminished recall of their own work.
Relationship Dependency: The tendency to form social or emotional bonds with the LLM, perceiving it as a companion or sentient entity. This can fulfill deeper psychological needs such as escapism and fantasy fulfillment.
Potential Negative Consequences
Problematic LLM use can manifest through a range of negative consequences:
Compulsive Use: Uncontrollable urges to interact with the AI, even at the expense of personal responsibilities.
Withdrawal: Experiencing psychological discomfort, irritability, or distress when unable to access the LLM.
Social Isolation: Deterioration of real-world social relationships as virtual interactions take precedence.
Reduced Cognitive Ability: A risk of overreliance leading to less independent decision-making and critical thinking.
Anxiety and Mental Health Impacts: LLM usage can impact broader mental well-being, leading to anxiety or "angst" related to the technology's role in one's life.
Mitigation and Responsible Use
Experts recommend viewing LLMs as a supplement to, rather than a substitute for, human thought processes and expertise. Ethical considerations regarding user safety are also important, with some arguing that AI companies hold responsibility for preventing harmful dependencies among vulnerable users.
Strategies for healthy interaction include:
Using AI to enhance human thought, not replace it.
Maintaining critical scrutiny of AI-generated information, as models can "hallucinate" (produce false information) and propagate biases.
Integrating user profiles and context storage within the AI's design to provide personalized yet safe support, particularly in fields like chronic disease management where continuous support is needed.
Unicode Showcase #20
U+202D
Added in: v1.1 (June 1993)
Left-To-Right Override
Suggested by: @cheesewhisk3rs
Like and follow for more Unicode!
When it was added I thought nobody would use it, but everyone can be wrong