I will join the Hong Kong University of Science and Technology
@HKUST
as an assistant professor working on AI for creativity, and I am recruiting Ph.D. students, research assistants, and visiting students. Please see the openings for more details.
[1/3] Wanna transfer the character and camera behavior from La La Land to your own 3D character assets? Check out our new project **Cinematic Behavior Transfer via NeRF-based Differentiable Filming**
arXiv:
We are excited to share that the AI for Creative Visual Content Generation, Editing, and Understanding Workshop
@cveu_workshop
has been accepted to
#CVPR2024
@CVPRConf
. See you in Seattle, where art, tech, and creativity meet! It is our first time coming to the US 🇺🇸
If you missed
#ICCV2023
@ICCVConference
, you can check our interactive 360° recording of the
#CVEU2023
workshop
@cveu_workshop
to get a feel of being in Paris (though it is a pity that our 360° camera could not handle the HDR environment well).
@Stanford
@UCBerkeley
&
@Caltech
computer vision faculty & their students meet today to exchange research ideas; topics include 3D vision, language-visual models, robotic learning, computational photography, vision foundation models, etc. At the end of the day, AI is truly fun science! 1/
Excited to share the acceptance of the
#ICCV2023
3rd Workshop on AI for
Creative Video Editing and Understanding!
Looking forward to seeing you in Paris, a global center of art, fashion, gastronomy, and culture!
@cveu_workshop
[6/6] Cinematic Behavior Transfer is accepted to
#CVPR2024
in Seattle! This is my 10th movie paper since MovieNet at ECCV2020. What a milestone! There are more at
Maybe you've heard of
#ControlNet
for controlling text-to-image diffusion models.
At
#ICCV2023
,
@lvminzhang
will explain how it works and share more behind-the-scenes details.
arXiv:
A1111 WebUI plugin:
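If you want to play with it before the talk, here is a minimal sketch of ControlNet conditioning via Hugging Face diffusers; this is not the A1111 plugin workflow, and the checkpoint IDs and file names are illustrative:

```python
# Minimal ControlNet sketch (diffusers). Model IDs are the public
# lllyasviel checkpoints; swap in your own as needed.
import cv2
import numpy as np
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image
from PIL import Image

# Load a Canny-edge ControlNet and attach it to a Stable Diffusion 1.5 pipeline.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# Turn a reference photo into an edge map that will steer the generation.
image = np.array(load_image("reference.png"))
edges = cv2.Canny(image, 100, 200)
edge_map = Image.fromarray(np.stack([edges] * 3, axis=-1))

# The edge map constrains composition; the prompt controls appearance.
result = pipe("an oil painting of a dancer", image=edge_map).images[0]
result.save("controlled.png")
```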
“With black-box models, it is difficult to understand failures and costly to improve performance or fix problems.” A really inspiring talk by Prof.
@YiMaTweets
with a wealth of theoretical insights 👨🏻🔬, which also resonates with my AI + creativity research 👨🏻🎨
Personalization means a lot. We thus introduce the lightweight motion LoRA into
#AnimateDiff
. Fine-grained motions like the camera control below are now enabled. Models are available on GitHub. Just create as many motions as you want.
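A minimal sketch of loading such a motion LoRA through the diffusers AnimateDiff pipeline; checkpoint names follow the public guoyww releases, and the prompt and file names are placeholders:

```python
# Attaching a camera-motion LoRA to AnimateDiff via diffusers.
# (Scheduler tuning omitted for brevity.)
import torch
from diffusers import AnimateDiffPipeline, MotionAdapter
from diffusers.utils import export_to_gif

adapter = MotionAdapter.from_pretrained(
    "guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16
)
pipe = AnimateDiffPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", motion_adapter=adapter, torch_dtype=torch.float16
).to("cuda")

# The lightweight motion LoRA adds one specific camera move (here: zoom in).
pipe.load_lora_weights(
    "guoyww/animatediff-motion-lora-zoom-in", adapter_name="zoom-in"
)

frames = pipe(
    prompt="a sailboat drifting at sunset, cinematic lighting",
    num_frames=16,
    guidance_scale=7.5,
).frames[0]
export_to_gif(frames, "zoom_in.gif")
```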
Congrats to the Best Paper Award and Runner-up winners!!! 🥳 Thanks to all the committee members!
Please click the image to see the whole
@cveu_workshop
@ICCV_2021
Recalling my first PhD research project on movies, when my professor told me, "Your job is to watch movies and have fun." I will continue to pursue my passion for blending art and technology to enhance collaboration between AI and humans and unleash human creativity and productivity.
I really enjoyed the process of creating an intelligent tool for the musicians we like, and we are happy to share it with you
@ACMUIST
#UIST2023
Come join us in San Francisco 🎵
🚀 Dive into the future with my blend of 3D sound & AI in animation, a passion project at
@dogstudio
! 🎧🤖
Crafted with Cinema 4D, finessed with ComfyUI & AnimateDiff.
A huge shoutout to those who provided invaluable learning resources! 📖
@PurzBeats
,
@8bit_e
and
@c0nsumption_
We are back at
#SIGGRAPH2024
for Courses on Generative Models for Visual Content Editing and Creation
@siggraph
on Thursday, August 1st, 2:00pm - 5:15pm MDT, in Four Seasons 4 at the Colorado Convention Center, Denver
It is always frustrating to work with complex, long videos. We study the "scene", a consecutive semantic unit,
in our
#CVPR2020
work to facilitate plot/story understanding of long videos.
Paper:
Webpage:
The submission deadline has been extended to August 6th! Come join the amazing workshop with excellent academic researchers, industry group leaders, artists, designers, and entrepreneurs
@ICCV_2021
@cveu_workshop
[3/3] Cinematic Behavior Transfer also applies to music videos with multiple characters. It is grounded in controllable components for fine-grained manual control in the creation workflow. …arXiv:
AnimateDiff + ControlNet results have started popping up on Reddit, and they are incredible. Full video (marginally NSFW and disturbing at times) with workflow here:
Yuwei (
@GuoywGuo
) just released
#AnimateDiff
v3 and
#SparseCtrl
which allow you to animate ONE keyframe, generate transitions between TWO keyframes, and interpolate MULTIPLE sparse keyframes. RGB images and scribbles are supported for now.
Github:
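As a rough sketch (assuming the diffusers SparseCtrl integration; the class names, argument names, and checkpoint IDs follow the public docs and may differ across library versions), conditioning on two keyframes might look like this:

```python
# Hedged SparseCtrl sketch via diffusers; treat all names as examples.
import torch
from diffusers import (
    AnimateDiffSparseControlNetPipeline,
    MotionAdapter,
    SparseControlNetModel,
)
from diffusers.utils import export_to_gif, load_image

adapter = MotionAdapter.from_pretrained(
    "guoyww/animatediff-motion-adapter-v1-5-3", torch_dtype=torch.float16
)
controlnet = SparseControlNetModel.from_pretrained(
    "guoyww/animatediff-sparsectrl-rgb", torch_dtype=torch.float16
)
pipe = AnimateDiffSparseControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    motion_adapter=adapter,
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Condition on TWO keyframes (first and last) and let SparseCtrl
# generate the transition in between.
first, last = load_image("keyframe_0.png"), load_image("keyframe_15.png")
frames = pipe(
    prompt="a hiker walking through a forest",
    conditioning_frames=[first, last],
    controlnet_frame_indices=[0, 15],
    num_frames=16,
).frames[0]
export_to_gif(frames, "transition.gif")
```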
The CVEU Workshop is back in Paris at
@ICCVConference
. It will be the first time the CVEU workshop is in person. Please start preparing for the third installment of this creative workshop. We have a program full of surprises!
Visit our website:
This year at our workshop in
#ICCV2023
, we put a lot of effort into involving more artists to contribute their ideas and to create really good content. Hope this will inspire more people.
@ICCVConference
@hkust
If you are interested, please fill out this form and send an email to anyirao@outlook.com with the title "School-Name-PhD/RA/Visiting Student" and a description of your motivation and creativity. Attach your resume, academic ranking, awards, and research experience.
92 days left until the CVEU workshop on creative visual content. Which speakers (new or old friends) are you most interested in? Let us know by leaving a message below. We are also open to sponsorships to support diversity, equity, and inclusion!
[2/3] Referencing the character and camera behaviors of an existing movie ("Ip Man 4" in this case) is common in practical film production. Together with our previous Virtual Dynamic Storyboard, it greatly helps pre-production in the workflow
I will be presenting this work today at 2:20pm at
#UIST2023
in the Gold Room. Hope to see many of you there!
If you want a quick break in between talks, check out some result videos of this work at
The Sunriver banquet at
#uist2022
is an awesome new experience for an academic conference. Congrats to OmniTouch for winning the Lasting Impact Award!
@ACMUIST
"We are currently in the “visual magic” stage of creative A.I. for artists and amateurs." Check out the impressive blog by our keynote speaker
@pinar_demirdag
@Seyhan_Lee
Thanks for enlightening the workshop 💫✨🌟
#ECCV2022
#CVEU2022
Exciting to see that we will really make this happen next week! The workshop started from our passion for bridging the gap between art and AI. It has accompanied us for four years, from a time when video generation did not work at all to today!
@CVPR
#CVPR2024
🎨✨ Unveil the future of creativity during
#CVPR2024
! Delve into "The Future of Generative Visual Art" and witness the fusion of art and AI. Get ready for groundbreaking insights and innovations!
🗓️ June 18th
📍 Summit 343
Be there to shape tomorrow's visual art landscape! 🌟
When crafting a shot, you are often working right up to the edge of the frame. Everything out of frame is pure chaos. Using
#dalle2
out-painting to break the fourth wall for the endings of films. Turns out DALL·E is the perfect VFX tool for doing set extension.
#ai
#aiart
#dalle
#zoom
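A minimal sketch of this kind of out-painting through the OpenAI images edit endpoint, assuming a square 1024x1024 canvas and placeholder file names; with no explicit mask, the transparent pixels are the region the model fills:

```python
# DALL-E 2 out-painting for set extension (sketch; file names illustrative).
from openai import OpenAI
from PIL import Image

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Paste the film frame onto a larger transparent canvas; everything
# outside the original frame becomes the area to out-paint.
frame = Image.open("final_shot.png").convert("RGBA")
frame.thumbnail((768, 768))  # leave room for the extension
canvas = Image.new("RGBA", (1024, 1024), (0, 0, 0, 0))
canvas.paste(frame, ((1024 - frame.width) // 2, (1024 - frame.height) // 2))
canvas.save("canvas.png")

# With no explicit mask, the edit endpoint fills the transparent pixels.
result = client.images.edit(
    model="dall-e-2",
    image=open("canvas.png", "rb"),
    prompt="the film set continues beyond the frame, matching lighting and grain",
    n=1,
    size="1024x1024",
)
print(result.data[0].url)
```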
Professor
@akanazawa
from UC Berkeley will be another of our keynote speakers! Join us at
@cveu_workshop
at
@ICCV_2021
for her exciting talk!
Title: Infinite Nature: Perpetual View Generation of Natural Scenes from a Single Image
Time: 01:00 PM - 01:45 PM PST
#CVEUatICCV2021
[4/6] And if we add conditions on more frames, let's say the last four frames of the previous video clip,
#SparseCtrl
can generate a longer video. Code to be released soon here in
#AnimateDiff
; please star ✨ it to get notified
[5/5] Cinematic Behavior Transfer can also reuse just the camera behavior from an existing movie (Batman v Superman) and apply it to new 3D character assets and new environment assets. Check out more results here
You still have time to submit to CVEU!
We have extended the deadline for the in-proceedings track to the 2nd of August.
In-Proceedings track deadline: 2nd of August, 11:59 AM PT (extended from the 27th of July).
+ details here:
How do you build a NeRF that scales from a single building to the whole planet? We study such extreme multi-scale scene modeling in our
#ECCV2022
paper BungeeNeRF (aka CityNeRF), inspired by movie effects.
Webpage:
Paper:
Code:
A cozy morning with two
#ECCV
poster sessions at the same time. 😆 Far fewer people come compared to a face-to-face one. Maybe back then we were just so bored and had nothing to do but attend the poster sessions at an overseas conference. 🤣
Our keynote speakers!
Oct. 17th, 2021 at
#ICCV2021
:
Prof. James E. Cutting - 08:30 AM - 09:15 AM PST
Prof.
@MarcChristie4
- 09:15 AM - 10:00 AM PST
Prof.
@irrfaan
- 10:15 AM - 11:00 AM PST
Prof.
@akanazawa
- 01:00 PM - 01:45 PM PST
Prof.
@magrawala
- 01:45 PM - 02:30 PM PST
Prof.
@magrawala
's papers inspired me to step into this exciting research field. Really looking forward to the keynote on how to make and break videos! 😃
CVEU workshop has another exciting Keynote Speaker!
Professor
@magrawala
from Stanford University
@Stanford
will be giving an exciting talk!
Title: Making (and Breaking) Video
Time: 01:45 PM - 02:30 PM PST
Visit us at
@cveu_workshop
@ICCV_2021
2023 was a breakout year for AI video.
In January, there were no public text-to-video models. Now, there are dozens of video gen products and millions of users.
A recap of the biggest developments + companies to watch 👇
“There is growing interest in the research community to augment human decision making with AI assistance”
[2112.11471] Towards a Science of Human-AI Decision Making: A Survey of Empirical Studies
Generative video technology (e.g., Sora) has two huge challenges in front of it that will likely slow adoption by Hollywood et al.: 1) high latency (waiting minutes/hours to get seconds of footage), and 2) lack of controllability (the video you get back isn't necessarily what you wanted).
How to inpaint The Arnolfini Portrait? It is not easy for a machine to inpaint it since the hole is too big. But still, human beings are able to create.
#ComputerVision
&
#Arts
@EvenEveno
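For context, here is a minimal diffusion-inpainting sketch with diffusers (placeholder file names; the prompt is illustrative) showing what a machine attempt on a large hole involves:

```python
# Diffusion-based inpainting sketch (diffusers).
import torch
from diffusers import StableDiffusionInpaintPipeline
from diffusers.utils import load_image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

image = load_image("arnolfini.png")   # the painting with a hole
mask = load_image("hole_mask.png")    # white = region to fill

# Large holes are hard: the model must invent content with little
# nearby texture to copy, which is exactly where human creation still wins.
result = pipe(
    prompt="a 15th-century Flemish oil painting interior, highly detailed",
    image=image,
    mask_image=mask,
).images[0]
result.save("inpainted.png")
```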