Latent Consistency Model + Touchdesigner = 🤯🫨⛹️
Here’s a little real-time demo with a prompt selection UI + generative input.
Feels like a glimpse of what's to come. Currently, this setup responds ~3x faster than anything I could get with standard SD + TRT.
@1null1
#ai
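What makes this real-time loop possible is the step count: an LCM denoises in a handful of steps instead of the usual 25-50. A minimal sketch, assuming the Hugging Face diffusers library and the public LCM_Dreamshaper_v7 checkpoint (neither is confirmed as the exact setup in the demo):

```python
# Typical LCM settings -- the low step count is the whole trick;
# LCMs also need little or no classifier-free guidance.
LCM_SETTINGS = {
    "num_inference_steps": 4,   # vs ~25-50 for standard SD
    "guidance_scale": 1.0,
}

def generate(prompt: str, settings: dict = LCM_SETTINGS):
    """One LCM pass (requires a GPU plus torch and diffusers installed)."""
    import torch
    from diffusers import DiffusionPipeline

    # Checkpoint name is an assumption, not necessarily the one used here.
    pipe = DiffusionPipeline.from_pretrained(
        "SimianLuo/LCM_Dreamshaper_v7", torch_dtype=torch.float16
    ).to("cuda")
    return pipe(prompt, **settings).images[0]
```

At 4 steps per frame, the 3x-over-SD+TRT responsiveness is plausible even before any engine-level optimization.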
Looking forward to seeing more people using my TouchDesigner tools. The SD TouchDesigner operator now supports the DreamStudio API and only needs an internet connection and your API key to work.
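Under the hood, an operator like this just needs one authenticated POST per generation. A hedged sketch against the Stability AI (DreamStudio) v1 REST API; the engine id is an example and the field names should be checked against the current API docs:

```python
import os

API_HOST = "https://api.stability.ai"
ENGINE = "stable-diffusion-xl-1024-v1-0"  # example engine id, not confirmed

def txt2img(prompt: str, api_key: str) -> bytes:
    """POST a prompt to the text-to-image endpoint, return PNG bytes."""
    import base64
    import requests

    resp = requests.post(
        f"{API_HOST}/v1/generation/{ENGINE}/text-to-image",
        headers={
            "Authorization": f"Bearer {api_key}",
            "Accept": "application/json",
        },
        json={"text_prompts": [{"text": prompt}], "samples": 1},
        timeout=120,
    )
    resp.raise_for_status()
    # v1 responses return images base64-encoded in an "artifacts" list
    return base64.b64decode(resp.json()["artifacts"][0]["base64"])

# Usage: png = txt2img("a drippy chrome sphere", os.environ["STABILITY_API_KEY"])
```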
this is zeroscope_v2_dark, an offset-noise version of zeroscope v2 with a wider range of colors, even smoother motion, and sharper compositions! Try it out:
Samples by
@dotsimulate
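zeroscope v2 checkpoints load through the standard diffusers text-to-video pipeline. A sketch assuming the public 576w repo id; the _dark offset-noise variant would load the same way from its own checkpoint path:

```python
NUM_FRAMES = 24  # zeroscope v2 clips are typically ~24 frames

def make_clip(prompt: str, num_frames: int = NUM_FRAMES):
    """Generate a short clip (requires a GPU plus torch and diffusers)."""
    import torch
    from diffusers import DiffusionPipeline

    # Base checkpoint shown for illustration; swap in the _dark
    # variant's repo id to get the offset-noise version.
    pipe = DiffusionPipeline.from_pretrained(
        "cerspense/zeroscope_v2_576w", torch_dtype=torch.float16
    ).to("cuda")
    return pipe(prompt, num_frames=num_frames).frames
```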
Take a look into the mind of the machine! visit my new project here:
I repeated the same completion prompt "Intelligence is " hundreds of times and used the results to peer into the statistical and semantic behavior of ChatGPT.
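The core of that kind of probe is simple: sample the same prompt many times at nonzero temperature, then look at the distribution of continuations. A sketch with the API call stubbed out and canned outputs standing in for real samples:

```python
from collections import Counter

def count_continuations(completions: list[str]) -> Counter:
    """Tally the first word of each completion of 'Intelligence is '."""
    return Counter(
        c.strip().split()[0].lower() for c in completions if c.strip()
    )

# Canned examples; a real run would call the chat API hundreds of
# times with temperature > 0 and collect the responses.
samples = [
    "the ability to learn",
    "the capacity to adapt",
    "a property of minds",
]
top = count_continuations(samples).most_common(1)[0]
# top == ("the", 2)
```

Plotting these tallies (or embedding the full completions and clustering them) is what surfaces the model's statistical and semantic habits.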
Drippy Flippy to start out the week. Quick test of the ‘configure flip’ SOP setup in 19.5 (.01 part sep + viscosity noise)
#houdini
#karmaxpu
#subsurface
Introducing SIMA: the first generalist AI agent to follow natural-language instructions in a broad range of 3D virtual environments and video games. 🕹️
It can complete tasks similarly to a human, and it outperforms an agent trained in just one setting. 🧵
Stable Video (SVD) from
@StabilityAI
A foundational video model, intended to be fine-tuned on downstream tasks, and not for commercial use. 👏
Can generate 14 or 25 frames at 3-30 fps, and will have a web version soon
Github:
HuggingFace:
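The 14- and 25-frame variants mentioned above ship as separate checkpoints, and diffusers exposes an image-to-video pipeline for them. A hedged sketch (pipeline name and repo ids per the diffusers/StabilityAI releases; the `fps` argument is the conditioning value within the 3-30 range noted above):

```python
# 14-frame base model vs 25-frame "xt" model
CHECKPOINTS = {
    14: "stabilityai/stable-video-diffusion-img2vid",
    25: "stabilityai/stable-video-diffusion-img2vid-xt",
}

def img2vid(image, frames: int = 25, fps: int = 7):
    """Animate a still image into a short clip (requires a GPU)."""
    import torch
    from diffusers import StableVideoDiffusionPipeline

    pipe = StableVideoDiffusionPipeline.from_pretrained(
        CHECKPOINTS[frames], torch_dtype=torch.float16
    ).to("cuda")
    return pipe(image, fps=fps).frames[0]
```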
He almost lost track of the script and you can hear him losing focus on what he is reading. I also didn’t hear a single
#policy
position, other than small slights at Medicare For All. Sunday’s debate should be interesting.
@rainisto
@DigThatData
Looking forward to seeing what resolution you can get to with that!
Also this model (448x256 res) provides slightly more stable clips than the 576 in my experience. Sometimes too still.