January 8, 2024

#123 Too Much Human Touch: Are We Overwhelming Our Generative Agents?


I believe it's an opportune moment to assess the current state of large language model-based chat interfaces. Let's begin with my own experience, since there's no better evidence than personal use. The last time I updated you, I was increasingly using Claude and Bard. Things have shifted, however, and now I'm almost exclusively using ChatGPT. That's not to say the others aren't evolving; Bard, for instance, has shown noticeable improvement with its new Gemini Pro engine.

As for Claude, it's an interesting case. I've always appreciated Claude for its unique, out-of-the-box thinking, yet when it comes to content curation, it often falls short. We also haven't been able to seriously consider Claude as the underlying LLM for RoostGPT. But I'm optimistic: the team at Anthropic is full of brilliant minds, and I'm confident they'll soon showcase something truly groundbreaking. For now, though, nothing substantial has caught my eye.

Narrative Bias in AI

"God created man in His own image, and man, being a gentleman, returned the favor." — Jean-Baptiste Alphonse Karr

I've noticed an increasingly evident pattern that raises concerns. Whenever I share content that casts GPT in a positive light and ask other models for feedback, their reactions are consistently skeptical, contradicting my expectations. This recurring theme points to a potential narrative bias, which may be a byproduct of the Reinforcement Learning from Human Feedback (RLHF) method employed in their training.

RLHF, while valuable for aligning AI behavior with human values, carries the risk of amplifying the biases of those providing the feedback. If overdone or skewed, it can lead to AI models developing a narrow or biased perspective. In this case, their consistent skepticism towards GPT-4 and ChatGPT hints at a deeper issue: the potential dangers of over-relying on RLHF without adequate diversity and balance in feedback.
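To make that amplification mechanism concrete, here is a minimal, purely illustrative sketch of the preference-learning step at the heart of RLHF. Everything in it is a toy assumption of mine, not anyone's production pipeline: real reward models score full LLM outputs through learned embeddings rather than two hand-picked features, and the `hedging` feature simply stands in for whatever stylistic trait a labeler pool systematically over-rewards.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each toy "response" is described by two features:
#   [helpfulness, hedging]
# "hedging" stands in for any stylistic trait that labelers happen to
# reward regardless of actual answer quality.
def sample_response():
    return rng.uniform(0.0, 1.0, size=2)

# Simulated labelers: they prefer helpful answers, but also carry a
# systematic bias toward hedged, cautious phrasing (weight 0.8).
LABELER_WEIGHTS = np.array([1.0, 0.8])

def labeler_prefers_a(a, b):
    return (a - b) @ LABELER_WEIGHTS > 0

# Reward model: a logistic Bradley-Terry fit on pairwise preferences,
# the standard formulation used in RLHF reward modeling.
w = np.zeros(2)
lr = 0.5
for _ in range(5000):
    a, b = sample_response(), sample_response()
    y = 1.0 if labeler_prefers_a(a, b) else 0.0
    p = 1.0 / (1.0 + np.exp(-((a - b) @ w)))  # P(a preferred over b)
    w += lr * (y - p) * (a - b)               # gradient step on log-likelihood

print("learned reward direction:", w / np.linalg.norm(w))
# The learned direction comes out close to the labelers' own [1.0, 0.8]
# weighting: the reward model has absorbed the hedging bias, and a policy
# optimized against it will over-produce hedged responses.
```

The toy makes the point plainly: a Bradley-Terry reward model faithfully learns whatever the labelers rewarded, bias included, and any policy subsequently optimized against that reward will reproduce the bias at scale. Hence the need for diversity and balance among those providing the feedback.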

Conclusion

Large language models will stand out based on their own strengths. This healthy competition benefits the overall ecosystem, producing a mix of winners and losers. Just as a company's culture often mirrors its founder, it's natural for LLMs to have their own unique characteristics; however, it's important not to overdo these traits. And while trust and safety are crucial, they shouldn't be the only aspects highlighted. Balance is key to making these models both effective and appealing.