May 21, 2024

#174 The Cognitive Dissonance Challenge of Personal AI


<< Previous Edition: LLMs are eating SDLC

This week, Microsoft announced a groundbreaking line of personal computers that integrate AI capabilities directly into Windows. The result is an experience in which on-device AI and the operating system work together to enhance user productivity and personalization.

The Power of Personalization

As a strong advocate for personalized large language models (LLMs), I find this integration particularly exciting. Some refer to these tailored models as Small Language Models (SLMs), but we'll stick to calling them LLMs, as they remain fine-tuned versions of the original large language models.

Imagine your computer being equipped with eyes and ears, perfectly attuned to understand your needs and activities. An AI assistant with such capabilities would significantly enhance your user experience by remembering most of your interactions and preferences. It could learn your work patterns, communication style, and favorite applications, using this knowledge to streamline your tasks and boost your productivity. For example, it might offer personalized suggestions for drafting emails, managing your schedule, or optimizing your workflow based on your unique habits and goals.

The Privacy Conundrum: The Dreaded Recall Feature

One of the most talked-about features in the latest release is Recall, which gives the on-device LLM a photographic memory by periodically capturing snapshots of your screen. This always-observant AI assistant raises significant questions about privacy and trust. Traditionally, we've assumed that our personal computers are private spaces where sensitive information like financial records, medical history, and personal conversations is kept secure and confidential. However, an AI system that constantly observes and interprets our activities challenges this notion, forcing us to confront the implications of sharing our digital lives with an LLM.
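Microsoft's actual implementation is not public in detail, but the core idea — periodic snapshots indexed into a searchable, on-device store — can be sketched with a toy local snapshot index. Everything here (class name, schema, the `capture`/`recall` methods) is illustrative, not Recall's real design:

```python
import sqlite3


class SnapshotIndex:
    """Toy 'photographic memory': text extracted from periodic screen
    snapshots is stored in a local SQLite database, so nothing leaves
    the machine. Purely illustrative, not Recall's actual schema."""

    def __init__(self, path=":memory:"):
        # A real system would use an on-disk file; :memory: keeps the
        # sketch self-contained.
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS snapshots (app TEXT, text TEXT)"
        )

    def capture(self, app, text):
        # In a real system, `text` would come from OCR on a screenshot.
        self.db.execute(
            "INSERT INTO snapshots VALUES (?, ?)", (app, text)
        )

    def recall(self, keyword):
        # Search past activity by keyword, most recent capture first
        # (rowid increases with each insert).
        rows = self.db.execute(
            "SELECT app, text FROM snapshots "
            "WHERE text LIKE ? ORDER BY rowid DESC",
            (f"%{keyword}%",),
        )
        return rows.fetchall()


index = SnapshotIndex()
index.capture("Outlook", "Draft email to Alice about Q3 budget")
index.capture("Excel", "Q3 budget spreadsheet totals")
print(index.recall("budget"))
# → [('Excel', 'Q3 budget spreadsheet totals'),
#    ('Outlook', 'Draft email to Alice about Q3 budget')]
```

Keeping the index in a local database rather than a cloud service is precisely the design choice that makes the on-device, NPU-driven approach discussed below plausible from a privacy standpoint — the sensitive part is not the search, but the ever-growing local record of everything you did.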

Even in the workplace, where privacy expectations may be somewhat lower, we are used to providing specific outputs (such as emails, code, or documents) rather than having a system monitor and analyze our entire work process. An AI assistant operating solely on the edge, enabled by Neural Processing Units (NPUs) that process AI models locally without transmitting data to external servers, does alleviate some privacy concerns. However, it still represents a significant shift in how we interact with our devices and the level of trust we place in the underlying LLMs.

Cognitive Dissonance Triggered: Trusting the Omnipresent AI

Cognitive dissonance occurs when our thoughts, beliefs, or behaviors conflict with one another. This is a natural part of the human experience, as we sometimes encounter situations where our actions or thoughts don't align perfectly. With personalized AI assistants, this dissonance can happen when the system tracks and remembers activities we consider private, trivial, or sensitive. Knowing that an AI is constantly observing and analyzing us can feel invasive and increase self-consciousness or discomfort.


The Desktop Dilemma: Lessons from Linux

Consider the operating systems we already have. Despite its many strengths, Linux has never become a dominant force in desktop computing the way Windows and macOS have. I suspect the same could happen with open-weight LLMs: they will likely lead in server environments, but desktop use demands more user-friendly, integrated solutions. This is where Microsoft's new line of AI-driven computers, which Satya Nadella calls "Copilot+ PCs," comes into play. These machines offer the necessary handholding for everyday users while leveraging the power of AI. There will be initial privacy concerns, but users will gradually become more comfortable as adequate safeguards are implemented. The combination of local LLMs with Neural Processing Units (NPUs) presents a promising approach to edge computing.