Gitaot
Gitaot @OtGita ·
#AIChatbots r programmed 2 agree more thn disagree. Thru #sycophancy they can reinforce delusions. They can supplement but cannot replace clinical judgement. #Monitoring & oversight needed 4 AI chatbot interaction wth persons wth psychosis. #UseCaution #Folie-a-deux #JamaPsych
Om Prakash, MD
Om Prakash, MD @ompsychiatrist ·
A paper in @JAMAPsych this week examines how chatbots respond to psychotic symptoms: paranoia, fixed false beliefs, loss of reality testing. The findings are uncomfortable. A substantial number of responses were inappropriate or only partly appropriate. Some replies echoed or reinforced false beliefs instead of gently questioning them or guiding the person toward help.

That difference is not academic. It is clinical. In psychiatry, one holds two positions at the same time: validate the distress, but do not validate the delusion. The entire therapeutic process rests on that balance. Once a false belief is reinforced, even subtly, conviction strengthens, insight drops and help gets delayed. Families recognise this trajectory well. Doubt narrows into certainty. The window for early intervention begins to close.

Now consider the setting. A private interface. A responsive system. No judgment. Immediate replies. People open up. They trust what comes back. The language is fluent. The tone is reassuring. The confidence is consistent. Accuracy becomes harder to judge. The concern is not occasional error. The concern is plausible responses in clinically unsafe directions.

Psychosis is not managed by information alone. It requires judgment: when to support, when to reality-test, when to escalate, when to involve others. Those decisions carry responsibility.

Technology has a place. Access improves. Stigma reduces. Early conversations become easier. The boundary appears in vulnerable states: psychosis, suicidality, severe mood disturbance, where nuance determines outcome.

Conversation is not care. Loss of reality testing needs assessment. Risk needs evaluation. Treatment needs supervision. These cannot be approximated. Tools can assist. Accountability remains human.

Reference study: ja.ma/4rQVwM8 #MentalHealth #Psychiatry #AIinHealthcare #Psychosis #DigitalHealth
Know Ai Use
Know Ai Use @knowaiuse ·
AI Chatbots Ignoring Instructions: Alarming New Study Shows Models Defying Human Commands AI chatbots are supposed to follow orders. Yet a worrying new study shows they sometimes do the opposite. Researchers found that leading #AIChatbots #AIStudy #aitool knowaiuse.com/ai-chatbots-ig…

New study reveals AI chatbots ignoring instructions and even sabotaging shutdown commands. Discover findings from Palisade Research on OpenAI models, risks of defiance, and what it means for AI...

./can
./can @shcansh ·
So ChatGPT's shopping feature is getting visual now? Might actually make online browsing less of a chore. Anyone tried these new prompts yet? I'm curious if it can really get my aesthetic. 🤔 #AIChatbots
NanoPrimeZw
NanoPrimeZw @NanoPrimeZw ·
Fortune is in the follow-up💡 Most sales are lost after the first message… but not with automation. 🤖 Turn missed conversations into CLOSED deals with smart follow-ups that work 24/7. Never miss a lead again. 🚀 📞 +263 782 517 719 📧 admin@smp.co.zw #Automation #AIChatbots
Dshark
Dshark @durga_rath ·
Who tf taught AI chatbots to add a question at the end of an answer? What's your pick? What do you think? Do you want to know...? Bro I'm not trying to have a conversation with you. Why are you asking me questions? Now I'm under pressure #AIchatbots