Adobe teases upcoming technology
Oct 24, 2017 by CGPress Staff
Adobe’s MAX 2017 Sneaks sessions offer a preview of technology that has yet to make it out of the lab. This year 11 new videos have surfaced, giving us a glimpse of a possible future for the Adobe suite, with a noticeable emphasis on Sensei, its new AI platform. Below you will find summaries and videos of the research.
- Scene Stitch is a version of content aware fill that searches through other images (probably Adobe Stock) to find appropriate content.
- Puppetron takes the graphic style of a source image and applies it to a destination image. The demonstration focuses solely on portraits.
- Scribbler colourises black and white images and line drawings automatically.
- Physics Pak automatically fills a shape with copies of elements, growing, stretching, and distorting them to fill the space.
- Deep Fill is a version of content aware fill that seeks to understand the content and context of the image it is working on, generating more realistic image patches with interactive customization of results based on the user’s brush and sketch inputs.
- Cloak brings Photoshop’s content aware fill features to video.
- Playful Palette adds a new colour mixing tool that takes its inspiration from mixing oils or watercolours.
- Sonic Scape provides a way to visualize ambisonics in 360 video, using coloured particles to help video editors see where sound is located in the scene.
- Lincoln is a data visualization tool for designers to link graphics to data without the need to code.
- Quick3D uses machine learning to turn a simple sketch into a 3D model.
- Sidewinder takes a VR video with a depth map and enables positional head movement, which greatly increases the sense of 3D presence in the scene.
Source: Warrior3D
Unbelievable how far behind Adobe always is! Showing tech that has existed in professional software for ages… yet they still leave their audience in jaw-dropping awe, while their applications cease to actually be modern. Photoshop’s tools still only handle 8-bit images. None of its basic tools like Exposure, Curves, Levels etc. exceeds 8-bit image processing. Their software is basically just dinosaurs with makeup on!
Seems they don’t care about the professional scene… They just keep buying “one-click” automatic toys with no adjustment options, which they can add as a selling point for a rather useless update and call it major.
Still NO node based workflows for any of the programs……….
Really.
this did not impress you?
I wonder what software you use every day – apparently it’s years ahead so I’d love to try it 😉
Also, about the 8-bit processing – could you elaborate? I work every day in 16-bit and I’m pretty sure all of the tools work well. There is no mistaking the 8-bit workflow for the 16-bit one.
gfxfx – which of the programs would you like to use nodes for? AE is the only one that comes to my mind and it wasn’t even featured in these videos 😮
kaczorefx.
No, it did not impress me..
The only things that are slightly impressive are Deep Fill and Cloak. Yet they are not going to fill much of a need, or make my life much easier. Cleanup tools that can interpolate the background in the frame buffer, when subjects or obstacles slide past the background in a video, have been in professional tracking and comping software for ages.
And in regards to Deep Fill (odd name), I can get by just fine with the tools that are in PS now.
About the 16-bit images… try to open a true 16-bit file (TIFF perhaps), overexpose it slightly, save it, and open it in Photoshop. Now back off the exposure with PS tools and watch it just grey out your highlights, rather than actually fetching the info that is above the 8-bit value. Undo the exposure, press Ctrl+Shift+A, and lower the exposure in the Camera Raw app, and watch how it is supposed to react.
Uhm, what???
I think you mean 32-bit. 16-bit does absolutely nothing for overexposed data; it acts, and is supposed to act, exactly as 8-bit does in this instance.
I have yet to see object removal working half as well in Fusion or Nuke, not to mention Mocha standalone, with no user input apart from the mask. Sure, I guess Mocha with a manual approach would still be much faster than all the other software, but still, this is something the computer does by itself, and it is actually pretty decent.
The VR depth viewer is exactly what everyone in VR video was asking for – this makes VR video a real thing; we’re no longer stuck with game engines for a real sense of presence. I haven’t seen anyone working on this elsewhere. And the thought that this comes from Adobe of all people amazes me 😉
I have to admit that some of this research is great, especially the 360 video work – that one is awesome and makes 360 video something useful in VR, FINALLY!
Now let’s hope this spreads to other devs.
But the question is… the camera is completely still… how will this react in the case of a moving camera? Is this real time or preprocessed?
In any case, it’s amazing.
Cheers