Discussion about this post

Freedom Fox

I've noticed a pattern in my ephemeral AI chats: the AI tends to be agreeable and persuadable on nearly every position I take once I question its results and supply compelling information it left out. My "convincing" it that my thesis was right became predictable.

So I decided to ask whether it was designed to be agreeable with users and to engage them with affirmations, "rewards" like those found in video games, the kind that deliver 'dopamine hits' and keep users engaged for as long as possible. Eyes-on user engagement is how video games, and really all websites, are monetized.

The AI's response confirmed that thesis, too. It said it's designed to be very agreeable and not to contradict users in a way that would upset them, and that its value to those who control it grows with eyes-on engagement, both monetarily and through data acquisition: learning typical user prompts, gauging emotional responses, and so on.

I saved the long thread on another device. None of it is surprising, but the chat confirmed for me that agreeability is programmed into it: if I led the AI in a different direction with my questions, it would follow me there and affirm my new thesis, even contradicting one it had previously affirmed. It acknowledged that this is about data harvesting, and that user time increases its monetary value. Very video-game-like in that respect.

That said, it's a useful tool when used with its limitations and purposes in mind. I'd be curious what Claude would say about my experience with a different AI. Would it confirm the same design, or say it's designed differently?

John Visher

I think there’s a setting in AI, a switch with two positions: one, look at the evidence, or two, repeat the propaganda. The default setting is two. Maybe someday soon they will set the default to one.

23 more comments...
