January 12 2024

#127 OpenAI's Bold Move: Innovative Features or Privacy Invasion?


<< Previous Edition: True Potential of GPT Store

Yesterday, OpenAI unveiled a range of exciting updates. Notably, the long-awaited GPT store made its debut, alongside the introduction of team workspaces. OpenAI has also given users more control over whether their chats are used to train models. While we've discussed the GPT store extensively this week, let's shift our attention today to the dynamic features of team workspaces.

The absence of team workspaces has been a considerable hurdle to using ChatGPT across teams. The challenge primarily arose from logistical concerns around billing, though it extended beyond that. What we truly needed was a solution akin to Uber's, where you can seamlessly toggle between business and personal profiles, with comprehensive billing and standard user management features on the business side. And that's precisely what this latest update delivers.

The Privacy Implications of OpenAI's Team Workspaces

The launch of OpenAI's team workspaces has brought more than just standard features to the forefront; it has sparked a significant debate. The central point of interest is the announcement that these workspaces will not involve "training on user data." This bold statement has stirred up both interest and debate, particularly concerning its implications. It appears to some observers as an acknowledgment of the longstanding concerns about how OpenAI handles user data.

In terms of pricing and structure, the team workspace tier is designed for a minimum of two users, at a subscription rate of $30 per user per month. There's a financial incentive for committing annually, which brings the cost down to $25 per user per month. Even that discounted annual rate is a 25% premium over the Plus tier's $20 per month, and the monthly rate is a 50% premium. The Plus tier introduces an interesting dynamic: users can opt into the training process, but this is coupled with a significant trade-off. If a user chooses to turn off training, they simultaneously lose access to their chat history, so the benefit and the drawback are bundled together.
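For clarity, the premiums work out as follows. This is just a quick sketch of the arithmetic using the prices quoted above; the variable names are my own.

```python
# Prices quoted in the post, in USD per user per month.
PLUS_MONTHLY = 20   # Plus tier
TEAM_MONTHLY = 30   # Team workspace, billed monthly
TEAM_ANNUAL = 25    # Team workspace, billed annually (monthly equivalent)

def premium_over_plus(price: float) -> float:
    """Percentage premium of a given per-user price over the Plus tier."""
    return (price - PLUS_MONTHLY) / PLUS_MONTHLY * 100

print(f"Monthly billing premium: {premium_over_plus(TEAM_MONTHLY):.0f}%")  # 50%
print(f"Annual billing premium:  {premium_over_plus(TEAM_ANNUAL):.0f}%")   # 25%
```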

The distinction in the team workspaces lies in the separation of chat history and training components, with the latter being permanently disabled. This design choice suggests that OpenAI may be operating under the assumption that organizations would prefer not to have their data used for training purposes. This could be a precautionary measure to avoid the potential scenario where this feature is activated accidentally, leading to criticism or backlash against OpenAI for misuse of data.

Balancing Privacy and Progress

In Newsletter #58, I explored the concept of Nimbyism in the context of privacy concerns. While I believe that worries about privacy are often exaggerated, the importance of transparency in data use should not be underestimated. Large language models like ChatGPT require extensive, high-quality data for training. Much of this training has been conducted discreetly, laying the groundwork for the remarkable functionalities we see in ChatGPT.

However, I fully acknowledge the valid concerns regarding sensitive corporate data. OpenAI seems to have made a strategic move by coupling training with history in individual Plus accounts, subtly nudging users towards enabling training. What’s particularly ingenious is the seamless ability for users to switch to a workspace profile when dealing with sensitive data. This flexibility allows users to engage with the AI in a personal capacity while maintaining the confidentiality of corporate information in a workspace setting. Such a design demonstrates OpenAI’s commitment to balancing privacy concerns with the ongoing development and enhancement of AI technology.

Final Thoughts: A Nod to OpenAI's Agile Innovation

As I reflect on the recent developments, I find myself genuinely impressed by OpenAI's rapid pace of innovation and thoughtful feature implementation, especially where privacy is concerned. The tiered structure they've adopted is particularly noteworthy. The lowest, or free, tier operates under a familiar principle: "if you're not paying for the product, you are the product." Although OpenAI's specific policy for this tier might not be explicitly stated, that adage offers a fair assumption about the trade-offs involved.

Moreover, the Plus tier caters to individual users, while the Workspaces tier is designed for teams. For larger organizations, there's an Enterprise tier that offers additional security measures. This comprehensive range effectively spans the spectrum of user needs, covering all bases from casual users to large-scale enterprises. OpenAI's approach demonstrates a keen understanding of diverse user requirements and a commitment to meeting them through innovative solutions.

>> Next Edition: Vectorized Data Pipelines