Err... how do you animate the camera or the objects, then, if you use this?
Presumably if the text input and seed are unchanged you could get interesting animations if the camera moved between source frames. I'm guessing the workflow would be clunky, as it seems to be designed to create stills rather than animations?
I can imagine this technology evolving into a full-blown AI 'renderer', but it needs more control to give predictable results. Maybe a workflow would be to define the target style with a 'hero shot'. The AI knows what it did to achieve that and reverse-engineers/interpolates to accommodate the moving camera – basically keeping the style on 'rails'. Moving objects no doubt add another layer of complexity.
The same way as before, with the animation tools and keyframe animation. In general, Stable Diffusion is for stills, so you go frame by frame. But you can also feed forks like Deforum Stable Diffusion with videos. This addon is for stills though, from what I can see.
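The fixed-seed point above can be illustrated in isolation: if the noise generator is re-seeded identically for every frame, the initial latent noise is the same each time, so only the changing input frame (e.g. the camera render) varies between outputs. A minimal NumPy sketch of that determinism (the `frame_noise` helper and the latent shape are hypothetical, not real Stable Diffusion code):

```python
import numpy as np

def frame_noise(seed):
    # Re-seed per frame: the same seed reproduces the exact same noise,
    # which is what keeps a frame-by-frame workflow visually consistent.
    rng = np.random.default_rng(seed)
    return rng.standard_normal((4, 64, 64))  # illustrative latent-sized array

SEED = 42
noise_frame_1 = frame_noise(SEED)
noise_frame_2 = frame_noise(SEED)
# Identical noise across frames when the seed is held fixed:
assert np.array_equal(noise_frame_1, noise_frame_2)
```

With a varying seed per frame the noise would differ every time, which is one reason naive frame-by-frame generation flickers.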
Great news. But I'm not happy about needing to use it via DreamStudio. The offline version is free…
Yes, I would like this to work with a workstation version of Stable Diffusion too.
Indeed
I heard the author is already working on an offline version. Great news 🙂
The only thing that still worries me now is that Blender also needs VRAM while it runs. Need a bigger card!