AI Render is a new free plugin that generates an image from your scene combined with a text prompt, using Stable Diffusion. Find out more on Gumroad.
Err... how do you animate the camera or the objects then if you use this?
Presumably, if the text input and seed are unchanged, you could get interesting animations if the camera moved between source frames. I'm guessing the workflow would be clunky, as it seems to be designed to create stills rather than animations?
I can imagine this technology evolving into a full-blown AI 'renderer', but it needs more control to give predictable results. Maybe a workflow might be to define the target style with a 'hero shot'. The AI knows what it did to achieve that and reverse engineers/interpolates to accommodate the moving camera – basically keeping the style on 'rails'. Moving objects no doubt add another layer of complexity.
The same way as before, with the animation tools and keyframe animation. In general, Stable Diffusion is for stills, so you go frame by frame. But you can also feed forks like Deforum Stable Diffusion with videos. This add-on is for stills, though, from what I can see.
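To illustrate what "frame by frame" could look like in practice, here is a minimal sketch using the Hugging Face diffusers img2img pipeline, with the prompt and seed held fixed across frames. This is not the AI Render add-on's actual code; the model ID, file paths and strength value are placeholder assumptions.

```python
# Minimal sketch of a frame-by-frame img2img pass over rendered frames.
# Assumes the Hugging Face diffusers library; paths, model ID and settings
# are placeholders, not what the AI Render add-on does internally.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # placeholder model ID
    torch_dtype=torch.float16,
).to("cuda")

prompt = "concept art of a misty forest castle"  # kept identical for every frame
seed = 1234                                      # fixed seed for a more consistent style

for i in range(1, 101):                          # rendered frames 0001..0100
    frame = Image.open(f"render/frame_{i:04d}.png").convert("RGB")
    generator = torch.Generator("cuda").manual_seed(seed)
    result = pipe(
        prompt=prompt,
        image=frame,
        strength=0.5,        # how far the AI is allowed to repaint the render
        generator=generator,
    ).images[0]
    result.save(f"stylized/frame_{i:04d}.png")
```

Even with a fixed prompt and seed, each frame is denoised independently, which is why results tend to flicker; that is the 'keeping the style on rails' problem mentioned above.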
Great news. But I'm not happy with the need to use it via DreamStudio. The offline version is free …
Yes, I would like this to work with a local workstation version of Stable Diffusion too.
Indeed
I heard the author is already working on an offline version. Great news 🙂
The only thing that still worries me is that Blender also needs VRAM while it runs. Need a bigger card!