Adobe’s Shantanu Narayen

Even as tech giant Adobe insists that creativity is entering an era of artificial intelligence (AI), the company makes clear that for any AI initiative to succeed, the four pillars of creation, ideation, production and delivery shouldn’t be overwhelmed. At this year’s MAX keynote, Adobe CEO Shantanu Narayen detailed an approach that sees Adobe’s tools play well with the best models other AI companies have to offer.

Adobe CEO Shantanu Narayen shares his vision for the AI era, and (right) first glimpse at an upcoming connector for Express and Photoshop within ChatGPT. (Vishal Mathur/ HT Photo)

“Creativity is a balance of innovation, and imagination, technology, and humanity. We will provide the tools, platforms, and integrate with the ecosystems, to empower creators, to unleash their boldest imagination,” says Narayen. He insists this is a unique proposition: choice for the consumer within one subscription, without having to jump between different apps.

The company’s AI approach has three key elements at its foundation: Adobe’s own Firefly models, partner models including those from OpenAI and Google, and custom models that will help creatives and businesses find an AI with focused relevance. Adobe isn’t competing with other AI companies; instead, it is partnering with them to give users a dropdown menu of choices. That itself is a unique approach, for now.

At this time, partner models total 23 across photo, video and audio generation, from AI companies including Runway, Luma, ElevenLabs and Pika. These are available across Adobe’s apps, including Firefly, Photoshop and Express, making Adobe’s Firefly and Creative Cloud platforms the first of their kind to deliver this extent of choice.

The reasoning behind this range, as explained by David Wadhwani, President of the Digital Media Business at Adobe, is all about specificity and strength. “The best model for generating a video may not be the same model for adding atmospheric elements like rain or snow. The right model for generating an image may differ based on whether you’re optimising for details, or lighting, or pristine landscapes, or a private city street,” he says.

Some popular options include Google’s Gemini 2.5 Flash Image (also called Nano Banana), Flux.1 Kontext Max, Pika 2.2, Runway Gen-4 and ElevenLabs Multilingual v2 generation models. Wadhwani says it is key for these models to understand, generate and operate within different workflows.

“We’re blending AI with ingenuity in intuitive ways,” says Narayen, which underlines Adobe’s broad-spectrum intent to build for consumer and enterprise users. These models are already making their presence felt in Photoshop, which now has Generative Fill capabilities powered by Google’s and Black Forest Labs’ AI models.

Adobe is offering users unlimited image generations with Firefly and partner AI models through December, after which a regular Creative Cloud subscription and generative AI credits will be required. The company also gave us a first glimpse at an upcoming connector for Express and Photoshop within ChatGPT, which it expects will be released for users in the coming months.

Earlier in the year, the company announced the first steps in this direction, adding OpenAI’s GPT Image, Google’s Imagen and Veo 2, and Black Forest Labs’ Flux 1.1 models to the Firefly app.

Firefly and custom AI era

Alongside these, Adobe’s own Firefly Image Model 5 joins its more specialised video, audio, sound-effects and vector models.

Firefly and partner models are helping Adobe develop what it calls “conversational experiences”, with complex reasoning and multimodal inputs backed by world knowledge and semantic intelligence. Examples include conversational prompts in apps such as Photoshop and Express that perform edits which are otherwise multi-step processes.

Narayen insists Adobe isn’t willing to compromise on authenticity and establishing ownership in the era of AI. “Through initiatives like content credentials, Adobe is also working to ensure transparency and recognition for creators,” he says. This moment in time, he says, is a call to action to celebrate an ability to turn ideas into reality.

“While technology will amplify human ingenuity and unlock new possibilities, it’s one thing you can never replicate,” says Narayen. For him, the emotion and humanity are unique to a creator’s art.

There is a sense that Adobe isn’t adding partner AI models to Firefly purely for the sake of optics. In fact, the company is adding a number of new tools that will be useful for creators. The Generate Soundtrack option, for instance, though still in beta, will be able to generate what Adobe claims are studio-quality tracks that are also fully licensed.

Adobe is also betting big on Firefly custom models. For context, these are models trained on a particular set of data to achieve results tuned for a focused workflow; this could be a set of brand visuals and images, used to build more content matching that visual language.

Custom models may help Adobe deliver on the promise of AI agents for partner businesses, something it has hinted at with Project Moonlight.
