January 15, 2024

#130 Applied Diffusion


In our recent newsletter, we delved into the intricacies of diffusion using a relatable metaphor. We saw that diffusion comprises three key steps: forward diffusion, reverse diffusion, and sampling. Just as understanding the inner workings of a car engine enhances our driving, it is worth grasping the fundamentals of diffusion. Yesterday we began that journey, exploring each component under the hood.
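To make the first of those steps concrete, here is a minimal sketch of forward diffusion in Python. The noise schedule, timestep count, and image shape are assumptions for illustration; reverse diffusion is the learned denoiser that undoes this process, and sampling applies it step by step, neither of which this toy snippet covers.

```python
import numpy as np

# A toy sketch of the forward-diffusion step: noising a clean image in closed form.
# The linear beta schedule and image shape below are assumptions for illustration.
T = 1000
betas = np.linspace(1e-4, 0.02, T)       # noise schedule
alphas_bar = np.cumprod(1.0 - betas)     # cumulative signal-retention factor

rng = np.random.default_rng(0)

def forward_diffuse(x0, t):
    """Return x0 noised to timestep t: sqrt(a_bar)*x0 + sqrt(1 - a_bar)*noise."""
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * noise

x0 = np.ones((64, 64, 3))                # stand-in for a clean image
x_noisy = forward_diffuse(x0, t=500)     # heavily noised; reverse diffusion learns to undo this
```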

This also helps us gain insight into the nature of these tools. For instance, when creating visuals for this newsletter, I rely on either MidJourney or DALL-E. While both perform admirably with imagery, their rendering of text within images leaves much to be desired. I deliberately leave the text as is, as it truly reflects the current state of the AI generation revolution. Resorting to traditional graphic design tools to fix it would deviate from the purist approach and diminish the fun factor.

Diffusion-based models are a distinct class within the generative AI family, separate from large language models. They are specialized for generating visual content, such as images, rather than text. While part of the broader generative AI landscape, these models differ from other generative tools like Generative Adversarial Networks (GANs), which are known for their dual-network approach (generator and discriminator) in creating highly realistic images.

Titans of Diffusion: Distinctive Approaches of Leading Models

In the dynamic field of diffusion-based generative AI, Stable Diffusion, MidJourney, and DALL-E stand out, each marked by its distinctive approach and specialized capabilities.

Stable Diffusion sets itself apart with its open-source accessibility, allowing a broad range of users and developers to experiment with and adapt its functionality. I have heard a lot about Stable Diffusion but have yet to test it out myself.
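That open-source accessibility is easiest to appreciate hands-on. Below is a minimal sketch of running Stable Diffusion locally through the Hugging Face diffusers library, one common way to use it; the checkpoint name and prompt are assumptions for illustration, and a GPU is recommended.

```python
# A minimal sketch of running Stable Diffusion locally with Hugging Face diffusers.
# The checkpoint name and prompt are illustrative assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # assumed checkpoint; any SD checkpoint works
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")                  # move to GPU if available

image = pipe("a watercolor sketch of a newsletter mascot").images[0]
image.save("mascot.png")
```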

MidJourney stands out with its user-friendly interface, adept at converting abstract and imaginative prompts into visually stunning images. It is particularly favored by photographers for its capacity to follow detailed instructions, while casual users appreciate its straightforward 'point and shoot' functionality. The platform is acclaimed for its user-experience-oriented commands such as /imagine, /describe, /ask, and /blend, making it highly accessible. Additionally, MidJourney offers various subscription levels, with the premium tier providing not only faster processing and private creation options but also the rights to use the generated content freely.

DALL-E, a creation of OpenAI, is celebrated for its sophisticated text-to-image conversion, adept at producing images that are not only high-quality but also contextually aligned with detailed text descriptions. As a blogger, I find its ability to generate relevant images from entire blog posts particularly valuable. However, a notable drawback is its predictable 'point and shoot' style, which can sometimes be too recognizable and standard, leading to a lack of uniqueness. This uniformity often makes DALL-E generated images instantly identifiable on social media platforms.
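For readers curious what that text-to-image conversion looks like programmatically, here is a rough sketch using OpenAI's Python client; the model name, prompt, and size are assumptions for illustration, and many users simply generate images through ChatGPT's built-in interface instead.

```python
# A minimal sketch of text-to-image generation with OpenAI's Python client.
# Model name, prompt, and size are illustrative assumptions; requires OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

response = client.images.generate(
    model="dall-e-3",
    prompt="an isometric illustration of a diffusion model denoising an image, newsletter header style",
    size="1024x1024",
    n=1,
)

print(response.data[0].url)  # URL of the generated image
```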

Conclusion: The Multimodal Future of Generative AI

The landscape of generative AI, initially dominated by large language models specializing in text generation, is undergoing a transformative expansion, thanks to diffusion models. These models are pivotal in extending the capabilities of generative AI into a truly multimodal arena. While our focus has primarily been on image generation, significant progress is also being made in the realms of audio and video creation. A notable player in this advancement is RunwayML, which employs Stable Diffusion and similar models for diverse generative tasks. The possibility of producing entire movies in Hollywood or Bollywood solely through generative video and audio techniques is rapidly transitioning from a futuristic vision to an impending reality, heralding an exciting new era in digital content creation.