TyFlow brings Stable Diffusion AI directly into 3ds Max
Tyson Ibele has released version 1.111 of TyFlow. This release introduces Stable Diffusion directly into 3ds Max through a new tyDiffusion module. The feature is available to all users of tyFlow Pro and Free, and is compatible with 3ds Max 2023 and later. After installation, it can be accessed from any viewport shading context menu and offers automated installation of all the necessary modules, as well as a selection of popular models. Other features include the ability to bake a generated image onto a model; advanced IP-Adapter, ControlNet and upscaling support; extensive prompt and LoRA stylization; full support for AnimateDiff, prompt scheduling and frame interpolation; 1:1 translation to ComfyUI nodes; and a built-in asset library for ControlNet animation. Also included is an experimental tyDiffusionTexGen modifier.
Beyond the headline feature, the release adds a number of smaller, less headline-grabbing improvements. A “pivot” export channel has been added to the tyCache export settings, improving compatibility with Multifracture meshes, and the Birth Flow and Flow Update operators can now import pivot-modification data from tyCaches, enhancing data consistency.
The Alembic point cloud exporter has received a name presets menu and custom float/vector prefix options. Additionally, the Terrain Display operator can now print grid height information to the MAXScript Listener, aiding Unreal Engine conversion. The Terrain Tile operator has new naming convention options, while the Export Terrain operator supports Unreal Engine-compatible resample resolutions. The Actor Animation operator now features start/stop animation loop and normalize time modes. The “editor_close()” MAXScript function has been added to tyFlow objects.
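As a rough illustration of the new scripting hook, the call below shows how `editor_close()` might be invoked; the release notes only state that the function has been added to tyFlow objects, so the object name and exact usage here are assumptions:

```maxscript
-- Hypothetical example: close the tyFlow editor window from MAXScript.
-- "$tyFlow001" is an assumed scene-object name; editor_close() is the
-- new function mentioned in the release notes.
tf = $tyFlow001
tf.editor_close()
```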
For more details and to download the plugin, visit the TyFlow website.
Looks amazing! I want to try it, but it’s only compatible with 3ds Max 2023 and upwards. I wish it would work on Max 2022. As Max 2022 is the last perpetual-license version, I’m sure there are many animators staying with it. Is it possible to make it work with this version?
Yes, I’m in the same boat, bro.
I wonder how many are sticking with 2022. It should be the base version.
Tyson said there was a bug in versions below 2023 which stopped it working with the implementation.
Thanks for the update! Hopefully he can fix it
Worth mentioning that this is FREE; you just need tyFlow Free to be able to run ComfyUI in 3ds Max. tyFlow only supports CUDA in the Pro version, but tyDiffusion doesn’t have this limitation in the Free version.
Tyson should be a member of the Media & Entertainment development board. What a great tool!
I just tried it. It’s more of a gimmick at this point. You can have some fun playing with it, though. But that’s about it.
What’s your expertise in this?
I tried it too and got promising stuff out of it (and I’m a total Stable Diffusion noob), but given the baking features it already provides, I got the strong impression your statement is absolutely false.
I guess it depends on how you use it. For me, the most useful aspect is that I can generate new ideas on the fly. It would be a long discussion if I told you why it doesn’t work for me.
Just imagine an art director asked you to draw concept art pieces for a movie, all of which require precise artistic control. He will keep telling you to fix certain aspects of your drawings (colors, lighting, poses, prop placement… the list goes on). At the end of the day, a skilled artist can do good sketches really fast, with precise control and an understanding of what he is drawing and what he is supposed to draw.
With gen ai, you can get some decent results if you get lucky. Are those drawings only for you to see? If you’re satisfied, then that’s great. I might even think they look great too if you show them to me.
I bet you use your max(z) just for fun…
ComfyUI with the proper workflows and nodes can give you a pretty crazy amount of artistic control over your results. IPAdapter and ControlNet alone are amazing for this. Generating simple scenes in 3D and using a puzzle matte to prompt different portions of your 3D-rendered result is just a simple example, and depth maps combined with this let you control precise camera angles and results on a very granular level. Your reply sounds like the typical response from someone barfing out the same old cookie-cutter stuff without actually knowing what they’re talking about.