Adobe has kicked off Adobe MAX 2021 and revealed previews of the cutting-edge innovations and experimental technology being developed in Adobe Labs.
The 'Sneaks' were hosted by Saturday Night Live comedian Kenan Thompson, alongside Adobe engineers, and unveiled nine of the company's latest creative technologies.
Adobe also previewed new collaboration capabilities by introducing Creative Cloud Canvas, Creative Cloud Spaces and betas of Photoshop and Illustrator on the web.
"Creativity is evolving to meet the new realities of work," says Adobe chief product officer and executive vice president, Creative Cloud, Scott Belsky.
"Adobe is bringing new collaboration capabilities, more AI-powered features and web-first applications to Creative Cloud to unleash our customers' full creative potential," he says.
"We are reimagining Creative Cloud products and services to connect creative teams, enable new ways to create and empower more creative careers."
Some of the early technology previews, showing potential features that Adobe says may or may not make it into its products, include:
Project Morpheus: Powered by Adobe Sensei, Project Morpheus uses machine learning to automate frame-level changes to video with smooth, consistent results. It offers an entirely new way of authoring and editing content, removing the need for time-consuming, frame-by-frame edits.
Project Stylish Strokes: Fonts are typically stored as outlines, which makes stylising or animating the underlying strokes difficult. Project Stylish Strokes can automatically recover those strokes from the outlines so that they can be stylised. It even works on fonts with unusual character structures and for languages other than English.
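Adobe hasn't said how the stroke recovery works, but the basic idea of turning an outline glyph back into a centreline can be illustrated with off-the-shelf skeletonisation. The sketch below rasterises a glyph with Pillow and thins the filled shape with scikit-image; the font path is a placeholder and this is not Adobe's method.

    # Illustrative only: approximate a glyph's stroke centreline by rasterising the
    # outline and skeletonising the filled shape. Not Project Stylish Strokes itself.
    import numpy as np
    from PIL import Image, ImageDraw, ImageFont
    from skimage.morphology import skeletonize

    def glyph_centreline(char, font_path, size=256):
        """Rasterise `char` and return a boolean skeleton (1-pixel-wide strokes)."""
        font = ImageFont.truetype(font_path, size)    # font_path is a placeholder
        img = Image.new("L", (size * 2, size * 2), 0)
        ImageDraw.Draw(img).text((size // 2, size // 2), char, fill=255, font=font)
        filled = np.asarray(img) > 127                # interior of the outline
        return skeletonize(filled)                    # thin the shape down to centrelines

    # Example: skeleton = glyph_centreline("S", "/path/to/font.ttf")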
Project Artful Frames: Project Artful Frames aims to simplify the animation process. The AI algorithm behind Artful Frames combines neural representation, optimisation, and super-resolution, which gives creators a lot of versatility. This method uses live video as a reference to preserve layout and realistic motion while creating a fully realised stylised animation.
Project Strike a Pose: By providing a reference image of a person in a desired pose, Project Strike a Pose uses machine learning to reposition the person in an image into the same stance. Through a mix of data and texture mapping, Project Strike a Pose replicates features such as clothing, hair, and skin colour to match the source image while still accounting for factors like depth and lighting.
Project Sunshine: Project Sunshine provides automated suggestions for colouring and shading vector graphics. The generative model behind Project Sunshine is autoregressive, meaning it starts by guessing an element of the image (e.g. "the hair should be black") and then spirals outward from this decision. Because the results are vectorised, it's easy to continue editing and refining the colour and shading suggestions.
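Adobe hasn't published the model behind Project Sunshine, but 'autoregressive' simply means each new colour choice is conditioned on the choices already made. A toy sketch of that sampling loop over named vector regions (the regions and probability tables here are invented for illustration):

    # Toy autoregressive colouring: each region's colour is sampled conditioned on
    # colours already chosen, mimicking "guess one element, then work outward".
    # Regions and probabilities are made up; this is not Adobe's model.
    import random

    REGIONS = ["hair", "skin", "shirt", "background"]   # hypothetical vector regions

    def colour_probs(region, chosen):
        """Colour distribution for `region`, conditioned on earlier choices."""
        if region == "hair":
            return {"black": 0.6, "brown": 0.4}
        if region == "shirt" and chosen.get("hair") == "black":
            return {"red": 0.7, "white": 0.3}           # depends on the hair decision
        return {"red": 0.3, "white": 0.3, "blue": 0.4}

    def sample_palette(seed=None):
        rng = random.Random(seed)
        chosen = {}
        for region in REGIONS:                          # fixed ordering = "spiral outward"
            colours, weights = zip(*colour_probs(region, chosen).items())
            chosen[region] = rng.choices(colours, weights=weights)[0]
        return chosen

    print(sample_palette(seed=0))   # e.g. {'hair': 'black', 'shirt': 'red', ...}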
Project Make it Pop: Powered by Adobe Sensei, Make It Pop identifies parts of an image (background, foreground, body parts, etc.) and converts them to vector shapes. From there, a creator can choose from a gallery of looks, stickers, and animations to apply to the image, transforming a picture into pop art.
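Adobe hasn't described the pipeline, but the "parts of an image to vector shapes" step can be illustrated with standard contour tracing: take a region mask (from any segmentation model) and simplify its boundary into a polygon. The mask below is synthetic, and the OpenCV code is only a stand-in for whatever Make It Pop does internally.

    # Sketch of the mask-to-vector step only: trace a region's boundary and simplify
    # it into a polygon. The "foreground" mask is synthetic; not Adobe's pipeline.
    import cv2
    import numpy as np

    def mask_to_polygons(mask, tolerance=2.0):
        """mask: uint8 array, 255 where the region (e.g. foreground) is present."""
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        # Simplify each traced boundary into a closed vector polygon.
        return [cv2.approxPolyDP(c, tolerance, True).reshape(-1, 2) for c in contours]

    # Synthetic mask: a filled circle in a 200x200 image stands in for a segmented subject.
    mask = np.zeros((200, 200), dtype=np.uint8)
    cv2.circle(mask, (100, 100), 60, 255, -1)
    for poly in mask_to_polygons(mask):
        print(len(poly), "vertices")   # a small polygon approximating the circle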
Project On Point: Designed to improve image search, Project On Point uses pose-based descriptors to find stock images more accurately. These descriptors are interactive, represented as a 2D stick figure layered over the reference image, and can be modified or refined. Project On Point wouldn't be limited to stock photo databases; it could also be used for personal searches.
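Adobe hasn't said how its pose descriptors are built, but one common, simple scheme is to flatten normalised 2D joint positions into a vector and rank images by cosine similarity. A minimal sketch with synthetic keypoints and a hypothetical stock library:

    # Minimal pose-search sketch: reduce each image to a normalised vector of 2D
    # joint positions and rank by cosine similarity. Synthetic data; not Adobe's method.
    import numpy as np

    def pose_descriptor(keypoints):
        """keypoints: (N, 2) joint positions for one person."""
        pts = np.asarray(keypoints, dtype=float)
        pts -= pts.mean(axis=0)                   # translation invariance
        norm = np.linalg.norm(pts)
        return (pts / norm).ravel() if norm else pts.ravel()   # scale invariance

    def rank_by_pose(query_kps, library):
        """library: dict of image_id -> keypoints. Returns ids, best match first."""
        q = pose_descriptor(query_kps)
        scores = {img_id: float(np.dot(q, pose_descriptor(kps)))   # cosine similarity,
                  for img_id, kps in library.items()}              # both unit-norm
        return sorted(scores, key=scores.get, reverse=True)

    library = {"stock_001": [[0, 0], [1, 1], [2, 0]],     # made-up 3-joint poses
               "stock_002": [[0, 0], [0, 2], [0, 4]]}
    print(rank_by_pose([[0, 0], [1, 1.1], [2, 0.1]], library))   # stock_001 ranks first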
Project In-Between: Project In-Between utilises Adobe Sensei to generate an animated bridge between a pair of pictures, breathing new life into old photos. It isn't just for static images: given a short video clip, Project In-Between can also produce smooth slow-motion footage.
Project Shadow Drop: Traditional shadow rendering methods can require geometric knowledge and familiarity with lighting sources. Project Shadow Drop can solve this problem using the 2D position of the light source and the horizon to automatically generate realistic shadows that can be applied to 2D vector art, 2D animations, and real images.
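Project Shadow Drop's actual model isn't public, but the geometric core it describes, casting each silhouette point from a 2D light position onto the ground, can be sketched as a simple projection. Everything here (the point lists, the flat-ground assumption) is illustrative only.

    # Toy 2D shadow projection: cast each silhouette point from a point light onto a
    # flat ground line (y = ground_y). Illustrates the geometry only; Project Shadow
    # Drop's actual method is not public.
    import numpy as np

    def drop_shadow(silhouette, light, ground_y):
        """silhouette: (N, 2) object points; light: (lx, ly). Assumes no point sits
        exactly at the light's height. Returns the shadow points on the ground line."""
        pts = np.asarray(silhouette, dtype=float)
        lx, ly = light
        t = (ground_y - ly) / (pts[:, 1] - ly)     # where each light->point ray hits the ground
        shadow_x = lx + t * (pts[:, 0] - lx)
        return np.column_stack([shadow_x, np.full(len(pts), ground_y)])

    # A square lit from above-left casts its shadow out to the right along y = 0.
    square = [[2, 2], [3, 2], [3, 3], [2, 3]]
    print(drop_shadow(square, light=(0, 6), ground_y=0))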