
Adobe's next-gen Firefly 2 offers vector graphics, more control and photorealistic renders

This is not a can. (Image: Adobe)

Just seven months after its beta debut, Adobe's Firefly generative AI is set to receive a trio of new models as well as more than 100 new features and capabilities, company executives announced at the Adobe Max 2023 event on Tuesday. The Firefly Image 2 model promises higher-fidelity generated images and more granular controls for users, while the Vector model will let graphic designers rapidly generate vector images, a first for the industry. The Design model, which generates print and online advertising layouts, offers another first: text-to-template generation.

Adobe is no stranger to using machine learning in its products. The company released its earliest commercial AI, Sensei, in 2016. Firefly is built atop the Sensei system and offers image and video editors a whole slew of AI tools and features, from "text to color enhancement" saturation and hue adjustments to font and design element generation, and even creating and incorporating background music into video scenes on the fly. The generative AI suite is available across Adobe's product ecosystem, including Premiere Pro, After Effects, Illustrator, Photoshop and Express, as well as on all subscription levels of the Creative Cloud platform (yes, even the free one).

Side-by-side comparison of Adobe Firefly Image 2 against the original model. (Image: Adobe)

Firefly Image 2 is the updated version of the existing text-to-image system. Like its predecessor, this one is trained exclusively on licensed and public domain content to ensure that its output images are safe for commercial use. It also accommodates text prompts in any of 100 languages.

The Image 1 and Image 2 models compared on images of a brightly colored blue-red bird. (Image: Adobe)

Adobe's AI already works across modalities, from still images, video and audio to design elements and font effects. As of Tuesday, it also generates vector art thanks to the new Firefly Vector model. Currently available in beta, the new model will also offer Generative Match, which recreates a given artistic style in its output images. That will let users stay within the bounds of a brand's guidelines, quickly spin up new designs based on existing images and their aesthetics, and generate seamless, tileable fill patterns and vector gradients.

The final addition, the Design model, is geared heavily toward advertising and marketing professionals for generating print and online ad templates in Adobe Express. Users will be able to generate images in Firefly, then port them to Express for use in a layout generated from a natural language prompt. Those templates can be produced in any of the popular aspect ratios and are fully editable through conventional digital methods.

Rainbow aura fashion show. (Image: Adobe)

The Firefly web application will also receive three new features. Generative Match, as above, maintains consistent design aesthetics across images and assets. Photo Settings generates more photorealistic images (think: visible, defined pores) and lets users tweak images using photography metrics like depth of field, blur and field of view; the system's depictions of plant foliage will reportedly also improve under this setting. Prompt Guidance will even rewrite whatever hackneyed prose you came up with into something the model can actually work from, reducing the need to wholesale re-generate prompted images.