![Simon Niklaus Profile](https://pbs.twimg.com/profile_images/1747324328000335872/tH-gL5mu_x96.jpg)
Simon Niklaus
@simon_niklaus
Followers
1K
Following
544
Statuses
143
Senior Research Scientist at Adobe
Joined February 2017
@jon_barron @eccvconf You can also only register one main paper, thus potentially requiring another registration.
@VMarcOttawa Very kind of you to ask, and thank you for the encouraging words! I removed the donation button a while ago, your message means more to me than any donation.
Jess was an extraordinary friend and I am devastated by this news. I lost touch with them after moving, but my wife and I occasionally bought gifts for them for when we would reconnect. It is too late now, so take this as a reminder not to take things for granted; don't put things off.
It has been great working with @YaoChihLee on this. If you are looking for a strong intern then consider reaching out to him.
Fast View Synthesis of Casual Videos paper page: Novel view synthesis from an in-the-wild video is difficult due to challenges like scene dynamics and lack of parallax. While existing methods have shown promising results with implicit neural radiance fields, they are slow to train and render. This paper revisits explicit video representations to synthesize high-quality novel views from a monocular video efficiently. We treat static and dynamic video content separately. Specifically, we build a global static scene model using an extended plane-based scene representation to synthesize temporally coherent novel video. Our plane-based scene representation is augmented with spherical harmonics and displacement maps to capture view-dependent effects and model non-planar complex surface geometry. We opt to represent the dynamic content as per-frame point clouds for efficiency. While such representations are inconsistency-prone, minor temporal inconsistencies are perceptually masked due to motion. We develop a method to quickly estimate such a hybrid video representation and render novel views in real time. Our experiments show that our method can render high-quality novel views from an in-the-wild video with comparable quality to state-of-the-art methods while being 100x faster in training and enabling real-time rendering.
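The view-dependent effects the abstract mentions come from augmenting the plane-based scene representation with spherical harmonics. As a minimal illustrative sketch (not the paper's actual implementation — the `sh_color` helper and its coefficient layout are assumptions), evaluating low-order real spherical harmonics turns a per-point coefficient set plus a view direction into an RGB color:

```python
import numpy as np

# Real spherical-harmonics basis constants for degrees 0 and 1.
SH_C0 = 0.28209479177387814  # 1 / (2 * sqrt(pi))
SH_C1 = 0.4886025119029199   # sqrt(3) / (2 * sqrt(pi))

def sh_color(coeffs, view_dir):
    """Evaluate a view-dependent RGB color from degree-1 SH coefficients.

    coeffs: (4, 3) array -- one RGB coefficient per SH basis function.
    view_dir: (3,) unit vector from the surface point toward the camera.
    """
    x, y, z = view_dir
    # Basis values for Y_0^0, Y_1^-1, Y_1^0, Y_1^1 at this direction.
    basis = np.array([SH_C0, -SH_C1 * y, SH_C1 * z, -SH_C1 * x])
    return basis @ coeffs  # (3,) RGB color
```

With only the degree-0 (DC) coefficient set, the color is the same from every direction; the degree-1 terms add a smooth directional tint, which is what captures mild view-dependent shading cheaply.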
@oliver_wang2 Thanks! I should add pre-2020 awards at some point as well, this isn't a super ideal representation of our field. 🙃
A little overly simplified but still interesting that these things are becoming more visible in the mainstream: Featuring work from @zhengqi_li and @Jimantha among many others.
Apparently I am considered a senior reviewer, nice. 🎉 Which means I have to review 8 papers, not so nice. 🫠
#ICCV2023 reviewer assignments are now available. 8,088 paper submissions have been assigned to reviewers. Senior reviewers have been assigned a max of 8 papers, while student reviewers have <= 4. 1/n
@wpmed92 I tried ffmpeg at first but ended up using gifski instead; it seems to do better for GIFs. 🤷‍♂️
@KK_Ilkal I have removed the conference information from the navigation panel since I increasingly believe that the quality of a paper isn't defined by the venue it is published in. That is, paper acceptance seems to be getting increasingly random, so why even bother. 🤷‍♂️
We finally released the full implementation of our #CVPR2020 paper on "Softmax Splatting for Video Frame Interpolation" and it only took more than two years to do so. 🙃 GitHub: