If you consider yourself at all arty and creative, you're pretty much guaranteed to have experimented with Adobe's software tools at one time or another.
I'm more of a casual creative, mostly using Lightroom and occasionally dipping into Photoshop and Premiere. But at my first Adobe Max Creative Conference in London this week, I was impressed by how many of the new features and tools the company unveiled across its suite of products – from Firefly generative AI tools to Creative Cloud – make it easier than ever for people like me to tap into our artistic side.
I'm not simply talking about the ability to use generative AI to do the heavy lifting of making art for us. Adobe is using AI to demystify the more technical aspects of its platforms, decrease the time and effort needed to complete the more repetitive and mundane tasks, and ultimately pick up the slack where our own lack of forethought or finesse has let us down.
The debate about the role of AI in the creative process continues, and isn't likely to be resolved anytime soon. Adobe's approach is to support the creative professionals who rely on its software as they navigate the AI transition. Deepa Subramaniam, vice president of Creative Cloud, summed up the company's philosophy in the keynote, saying: "If you use generative AI, you want it to complement, not replace, your skills and experience."
At the conference, I witnessed many demos of Adobe's latest creative tools, and even got to try some out for myself. Here are the ones that stood out to me.
Firefly - Text and Image to Video
This lil guy is all AI.
Katie Collins/CNET

The cutest demo of the day at Adobe Max came courtesy of Firefly. Adobe released its latest Firefly generative AI models at the event, including the Firefly Video Model. This tool can be used to create a range of different video types, but the demo that caught my eye was a claymation-style video generated from a combination of an image and a text prompt.
The prompt read: "claymation character pushing a wheelbarrow through a Tuscan village," while the image already had a path for the figure to follow. Sure enough, the generated video saw a little clay guy with a broad-brimmed hat enter the scene and push his wheelbarrow along the snaking cobblestone path. Adorable – and genuinely impressive. It elicited gasps from the crowd, and I overheard people talking about it throughout the day.
When I got the opportunity to try out the model for myself, I asked Firefly to make me a video of a giraffe wearing a fruit hat in Scotland. It struggled with the Scotland aspect of the prompt, but I was more than satisfied with how it interpreted the rest of the command.
Photoshop - Generate Image from Composition Reference
For some time now, you've been able to use Firefly to generate a new image based on the same structure or arrangement as a reference image. Now Adobe has brought this capability directly into Photoshop.
In one example I saw, a photo of a sweeping horseshoe-shaped road was used as a reference along with the prompt "dark stormy winter snow." Photoshop generated a series of different romantic scenes, all of which maintained the structural integrity of the road from the original image.
In a second example, I saw how a child's drawing of a monster could be imported into Photoshop and used as the basis to generate a cartoon-style version of that same monster. What a way to give your kid's scrappy doodles a second lease on life.
Photoshop - Select Details
The Select Details tool could save people hours of work.
Katie Collins/CNET

If you've ever tried to remove the background from an image with lots of fine details in Photoshop, you'll appreciate how tricky and time-consuming the process can be. Trying to pick around certain objects can be like chipping away at a block of ice with a toothpick.
But no longer. With its new AI-powered Select Details tool, Photoshop is now significantly better at distinguishing the fine details of an image. In a demo, I saw how, with a single click, it's now possible to isolate a tennis racket, strings and all, from its background. Each string was perfectly defined, and each square in between cut away with precision. The same effect was successfully applied to a fish caught in a net.
In another example, it was able to delineate the shape of a woman wearing a black turtleneck from a black background. To the naked eye, it was almost impossible to see where the woman's turtleneck ended and the background began, but Photoshop was able to segment the image perfectly.
Photoshop - Action Panel (Beta)
Make it pop.
Katie Collins/CNET

Adobe has reimagined Photoshop's Action Panel so that you now have access to 1,000 different actions – instructions such as "blur background" or "soft black and white" – to transform your image at the touch of a button.
My favorite thing about this new feature is that Photoshop takes the guesswork out of which actions you might want to use: it analyzes your image locally using a machine learning model and puts forward a list of suggestions. As someone who has felt intimidated by the vast array of options in Photoshop, I found the new Action Panel makes the software more accessible. The actions are also searchable, and I like that you don't necessarily need to know the correct Photoshop terminology to achieve your desired results.
For example, in the demo I saw, you could select the phrase "make the subject pop," which picked out the retro Italian car in the foreground of the image and boosted the contrast and saturation so that it stood out even more. Making something "pop" is not a technical photography term, but Photoshop is able to interpret natural language, making this the closest thing yet to an AI assistant embedded in the software.
Premiere Pro - Generative Extend
If you've ever been editing a video and realized the clip you've filmed isn't quite long enough, you'll love Generative Extend in Premiere Pro. The tool, which has been in beta for a while, is generally available from today, and it can generate an extra few seconds of footage based on the clip you've uploaded.
"It's super seamless, and it's up to 4K now," said Eric Snowden, senior vice president of design at Adobe, who listed it as one of his favorite new features across Creative Cloud, while speaking in a briefing. The tool can even generate sound, as long as it's not music or speech.
Examples I saw included extended footage of flags waving in the wind and a clip of a woman continuing to smile and nod for two seconds after the real recording of her had ended. For producers who are just missing those extra few frames of B-roll, this feature will be an absolute lifesaver.