
@zagle1772

skibidi goon
Wall

Experts recommend viewing wasteof.eris.cafe as a supplement to, rather than a substitute for, wasteof.money and wasteof for android.

it's so crazy to me that people can become this dependent on AI; like, i don't even have an openai account ’cuz when i checked it out to see what the hype was all about, they asked for my phone number and i didn't want to give it to them.

"Chronic LLM dependency" refers to an emerging pattern of user reliance on large language models (LLMs) for cognitive, emotional, and social tasks, potentially leading to negative consequences such as reduced critical thinking and psychological discomfort when the AI is unavailable. This is an area of active research in psychology and human-computer interaction, with a focus on distinguishing normal reliance from problematic use. 

Types of Dependency

Research, such as the development of the LLM-D12 scale, suggests that dependency can be understood in two dimensions: 

  • Instrumental Dependency: Reliance on LLMs for cognitive tasks, decision-making, and collaboration in work or studies. The risk here is "mental passivity" or "deskilling," where users engage less with active problem-solving and demonstrate diminished recall of their own work.

  • Relationship Dependency: The tendency to form social or emotional bonds with the LLM, perceiving it as a companion or sentient entity. This can fulfill deeper psychological needs such as escapism and fantasy fulfillment. 

Potential Negative Consequences

Problematic LLM use can manifest in a range of negative consequences: 

  • Compulsive Use: Uncontrollable urges to interact with the AI, even at the expense of personal responsibilities.

  • Withdrawal: Experiencing psychological discomfort, irritability, or distress when unable to access the LLM.

  • Social Isolation: Deterioration of real-world social relationships as virtual interactions take precedence.

  • Reduced Cognitive Ability: A risk of overreliance leading to less independent decision-making and critical thinking.

  • Anxiety and Mental Health Impacts: LLM usage can impact broader mental well-being, leading to anxiety or "angst" related to the technology's role in one's life. 

Mitigation and Responsible Use

Experts recommend viewing LLMs as a supplement to, rather than a substitute for, human thought processes and expertise. Ethical considerations regarding user safety are also important, with some arguing that AI companies hold responsibility for preventing harmful dependencies among vulnerable users. 

Strategies for healthy interaction include:

  • Using AI to enhance human thought, not replace it.

  • Maintaining critical scrutiny of AI-generated information, as models can "hallucinate" (produce false information) and propagate biases.

  • Integrating user profiles and context storage within the AI's design to provide personalized yet safe support, particularly in fields like chronic disease management where continuous support is needed. 

skibidi goon

1 0 1

TIL Marquis de La Fayette was a freemason

2 0 0

The reposted post was deleted
1 0 0

what is 1 furlong, 2 yards, and 48 inches in feet?

1 0 0
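For what it's worth, the conversion works out cleanly under the standard definitions (1 furlong = 220 yd = 660 ft, 1 yd = 3 ft, 12 in = 1 ft); a quick sketch:

```python
# Convert 1 furlong, 2 yards, and 48 inches into feet,
# using the standard US/imperial unit definitions.
FEET_PER_FURLONG = 660  # 1 furlong = 220 yd = 660 ft
FEET_PER_YARD = 3
INCHES_PER_FOOT = 12

total_feet = 1 * FEET_PER_FURLONG + 2 * FEET_PER_YARD + 48 / INCHES_PER_FOOT
print(total_feet)  # 660 + 6 + 4 = 670.0
```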

imagine not censoring “j**”

vro imagine copying my post

vro imagine having a j**

nah bro i accidentally stole my cash drawer key from work 😭 what an awkward text to my manager 🥀

11 2 0
4 1 0
3 1 0
2 0 2

google spellcheck for the last FUCKING time,

if “disambiguate” is a word, then “disambiguating” is a word too

3 0 1

“I’d suggest starting reading [this blog] in chronological order, because there are some dependencies between the posts”

“It is safe to skip [this post], as the results that are needed are much simpler than the methods used to derive them”

mfw:

3 0 0


thanks for 57 followers,

4 0 0

did y’all know that QR codes can store infinite data?

2 0 0

is this tpow

2 0 0

i am genuinely surprised that some ppl in french class get “ont” and “on” mixed up

do the “pov: yogurt / gurt: yo / yo: you rang?” etc. memes have a satisfying name, or do you just have to awkwardly circumlocute