33 years of deadlines since my first submission for ECCV’90. So grateful for being healthy and fit to stay late and for having these wonderful students! Good luck everybody!
NVIDIA’s job applicants should have excellent knowledge of the theory and practice of LLMs and foundation models. I am honestly looking for papers on the theory of foundation models. Could academic Twitter reply with any links? I would be very grateful.
#CVPR2022 Motion 3: "Any reviewer who has accepted an invitation to review but violates the reviewing guidelines set forth by the conference will be prohibited from submitting any papers to CVPR for up to two years." My vote is a resounding NO! 1/n
@ylecun ++: "damage to the language system within an adult human brain leaves most other cognitive functions intact." There is ample evidence that reasoning can happen after language ability disappears:
Best Fall present ever: the new book by Tristan Needham. Thanks to @ch402 for tweeting about it.
@CSProfKGD Wish we had had more diffgeom in the past, so that we could understand @TacoCohen's recent papers faster :-)
#CVPR2024 Such a great effort by my wonderful students. Excellent finish. Good luck to all of you. What a great feeling, 31 years of CVPR deadlines :-)
🌟FisherRF introduces a novel approach for view selection & uncertainty in 3D Gaussian Splatting & NeRFs. Fisher information can be computed without altering model architecture, and it is as cheap as back-propagation! 🚀More details:
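A back-of-the-envelope sketch of why that holds (my own illustration with made-up helper names `fisher_diag` and `pick_next_view`, not the FisherRF code): under a Gaussian observation model the Fisher information is J^T J for the Jacobian J of the rendered pixels w.r.t. the parameters, so its diagonal is just per-parameter squared gradients, which one backward pass already yields.

```python
import numpy as np

def fisher_diag(jacobian):
    """Diagonal of J^T J: per-parameter information from squared gradients only."""
    return np.sum(jacobian ** 2, axis=0)

def pick_next_view(jacobians):
    """Greedy view selection: take the candidate view with the largest FIM trace."""
    scores = [fisher_diag(J).sum() for J in jacobians]
    return int(np.argmax(scores))

# Three hypothetical candidate views: rows = pixels, columns = model parameters.
rng = np.random.default_rng(0)
views = [rng.normal(size=(100, 8)) * s for s in (0.1, 1.0, 0.5)]
print(pick_next_view(views))  # -> 1, the view with the strongest gradients
```

The point of the tweet is that these squared-gradient quantities come for free from back-propagation, with no change to the model architecture.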
Congratulations to my amazing student Georgios Pavlakos @geopavlakos for receiving the Best Computer Science Dissertation Award at Penn, the 2021 Morris and Dorothy Rubinoff Award.
ICLR23 AC: "...impressive performance on recovering dense reconstruction from a single point cloud, while also enabling zero-shot generalization to multi-object scenes." SE(3)-Equivariant Attention is what you need!
TRAM recovers 3D human pose in a global world frame with 60% root trajectory error reduction wrt SotA!
Watch the video on our project page
arxiv:
Thanks to my amazing coauthors: @YufuWang_, @ZiyunClaudeWang, @LingjieLiu1.
Mens sana in corpore sano! Back in the game, after 7 years, I finished my 6th half-marathon. I am so grateful today, in particular, to my wife for urging me to do it.
Congratulations on the new differential geometry book by colleagues Jean Gallier and Jocelyn Quaintance. It was written with computer vision and robotics in mind.
Student: "What are the three most important problems in computer vision?"
Takeo Kanade: "Correspondence, correspondence, correspondence!"
The first question that I ask when I read a DeepX geometry paper is which step solves the correspondence problem.
Our #CVPR2022 paper introduces a representation of canonical shape for deformed humans, animals, and articulated objects via neural homeomorphisms. Very proud of the first paper with my student @Jiahui77036479:
arxiv:
Project:
Incorporating a stronger geometric bias in the architectural design and taking advantage of the inductive bias provided by vision foundation models enables really fast and robust tracking: Yunzhou Song, @JiahuiLei1998, @ZiyunClaudeWang, @LingjieLiu1!
[Code now available!] Gaussian Articulated Template Model (GART) captures humans and animals from monocular videos (ZJU MoCap and PeopleSnapshot in 30s), renders novel poses and views at 150fps, on a laptop, and facilitates text-to-3D generation.
Project:
Guess the dance! Produced by the amazing notebook developed by @nikoskolot and @geopavlakos based on our #ICCV2021 ProHMR. (So easy to run that even I, an old professor, can run it. Video cut to hide the name of the dance.)
There is progress in linear algebra education. I keep asking this question in the vision class: how many of you have never heard of SVD? This semester it was 1-2 out of approx. 120 students (40 undergrad + 80 grad). Ten years ago, it was half of the class.
@CSProfKGD Congratulations to our continuous source of inspiration, Ruzena Bajcsy, on receiving the 2021 Rosenfeld Lifetime Achievement Award @iccv2021, @PennEngineers @GRASPlab
Segment fast independent motion with an event camera without training labels! 📷 Un-EvMoSeg runs in real-time without heavy optimization and requires zero segmentation annotation. 📷 Check out the project:
We usually teach vanishing points. What about the opposite: when does a set of intersecting lines project to a set of exactly parallel lines in R^2? Or: why it is worth teaching P^2 (and P^3) in a vision class.
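One concrete answer, as a hedged numerical sketch (the setup and the helper name `image_direction` are my own, not course material): a pencil of 3D lines through a point P on the principal plane Z = 0 images to exactly parallel lines, because P itself projects to a point at infinity in P^2.

```python
import numpy as np

def image_direction(P, D):
    """Direction of the image of the 3D line P + t*D under (x, y, z) -> (x/z, y/z)."""
    A, B = P + 1.0 * D, P + 2.0 * D    # two points on the line (both with z != 0)
    a, b = A[:2] / A[2], B[:2] / B[2]  # their pinhole projections
    d = b - a
    return d / np.linalg.norm(d)

P = np.array([1.0, 2.0, 0.0])  # on the principal plane Z = 0: its image is ideal
dirs = [image_direction(P, np.array(D))
        for D in ([0.3, 0.1, 1.0], [-0.2, 0.5, 2.0], [0.0, 0.0, 1.0])]
# Every image line has direction proportional to (P_x, P_y) = (1, 2): parallel.
print([np.round(d, 3).tolist() for d in dirs])
```

Each image line joins the vanishing point of its direction D with the ideal point (P_x, P_y, 0), so all of them share the image direction (P_x, P_y).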
Today England will face Germany in a match with a history in computer vision, and not only. Reid and Zisserman @oxford_VGG wrote one of the most elegant papers in geometric vision at ECCV'96. I devote one of my course lectures to this pearl:
Congratulations to Dr. Kenneth Chaney, the best engineer I have ever met! So lucky to have met him during the DARPA Robotics challenge when he was doing his CoOp and convinced him to do a PhD! I am so proud of you Ken!
Four generations of my academic tree at U of Minnesota, Maryland, and Drexel! And Vishnu, a PhD student at UMD. Could not be prouder! @AgriRobot @ptokekar @Lifeng917
Congratulations to the Program Chairs of #CVPR2022 @richasingh26 @KristinJDana @ganghua1 and Samaras. This CVPR has set a new standard of checks and balances across AC decisions and reviews, with greater granularity and maximum consistency. It felt like unit tests in decision making.
Kourosh Durmohamadi Bagi arrived in a refugee camp in Lesbos in 2019 and this year he ranked 2nd in the STEM Panhellenic entry exam (the gaokao of Greece). He wants to study EE in Thessaloniki. Congratulations Kyros!!!
Our Equivariant Vision workshop features five great speakers @erikjbekkers @HaggaiMaron @ninamiolane @_machc, and Leo Guibas, spotlight talks, posters, and a tutorial prepared for the vision audience. Come tomorrow, Tuesday, at 8:30am in Summit 321! Thank you @CongyueD for
The culmination of my first NeurIPS could not have been more rewarding: the Neural Wave Machines talk by @wellingmax. We started working on equivariance inspired by Max and @TacoCohen 6 years ago. Thank you and your group (@maurice_weiler and all).
I am really puzzled by the almost 3,000 paper registrations without a submission. Is it strict last-minute self-criticism, naive optimism that a miracle will happen in the experiments during the last week, or plain opportunism that hyperparameter tuning will make it work?
Preliminary #CVPR2022 paper stats:
Paper IDs registered: 11,083
Actual paper submissions: 8,162
The actual paper submissions are subject to change due to desk rejects. Currently, our program chairs are reviewing the papers for desk rejects.
At #CPAL2024 with @YiMaTweets! Thank you for being the inspiration and the host of the first @cpalconf. I remember sitting next to Yi during the summer of 1998 at Berkeley when @jana_kosecka and Yi conceived the self-calibration for SfM that received the ICCV'99 Marr Prize!
Bad reviewing is indeed a problem, but it can be mitigated with training (taking online courses) and positive incentives. I would be very happy to contribute to creating such a course. 6/n
With rampant mental health issues among doctoral students and researchers, you will be penalized for any recurrence of your issues. What do they expect the student to do: submit a certificate from the counseling office?? 3/n
I found this piece in Horn's Robot Vision. Read the fairy tale attributed to Kermit Klingteil. The moral: "When you have difficulty in classification, do not look for ever more esoteric mathematical tricks; instead, find better features." The triumph of DL, stated in the '80s.
Would you like to know how to define and realize an equivariant convolution and attention on a light field / feature field? Come listen to @xu_yinshua86846 and @JiahuiLei1998 presenting our spotlight at 10:45am at poster #309 @NeurIPSConf
This is an amazing dataset showing the triumph of COLMAP: "COLMAP, a mature photogrammetry framework, provides 3D annotations that are treated as ground truth." @mapo1 @SattlerTorsten
More ICCV nostalgia #ICCV2021: I had the privilege of doing the clerical work at the in-person meeting of the PCs of ICCV 1993. Hans-Hellmut Nagel, Yoshiaki Shirai, and the late Tom Huang met on a very cold weekend in Karlsruhe to make paper decisions. 1/3
DynMF is a sparse trajectory decomposition that enables robust per-point tracking. In addition to NVS, it allows us to control trajectories, enable/disable them, leading to new ways of video editing, dynamic motion decoupling, and novel motion synthesis.
#cvpr2022 I am very grateful to all on-time reviewers, but also to all late reviewers who informed the AC of an ETA or even that they were unable to do it. And to all emergency reviewers who are stepping in this week. Reviewing load this year is insane. 🙏🙏🙏
@JiahuiLei1998 and @WenJiang_PL finishing intrinsic calibration. Ready to capture the cowbird mating season in 4D with 8 cameras and 24 microphones in our smart aviary, a project with Marc Schmidt from @PennSAS Biology/Neuroscience and @PennCIS @GRASPlab.
What a delight to see Professor Sugihara performing geometry mastery again! In the first year of my PhD, my advisor passed on to me his book on the algebraic approach to the recovery of three-dimensional shapes from single images. It was an eye-opener to realize how constraint
I am filled with a sense of wonder and admiration when I think about this incredible computer engineering accomplishment of NASA. It is still one of the most fulfilling places to work and my answer when people ask me what I would do if I were not in academia. I experience this
Amazing that a chip bricked in the 46-yr-old Voyager 1, preventing it from sending data, and NASA figured out how to split up and reallocate its functions to other hardware, sending code 15 billion miles away (45-hour round trip!), and Voyager is back online.
So proud to be advisor, together with @vijay_r_kumar, of Wenxin Liu, who defended today with an amazing presentation! Check out her TLIO neural inertial odometry! Thank you @davsca1, @loiannog, @dineshjayaraman, Jianbo, and CJ!
It was clarified by the motion authors that one of the violations is being late. This means that your PhD will be pretty much jeopardized if you happen to be late for any reason. 2/n
"Reviewers with an active penalty will be placed on a list." Creating blacklists is not a tradition in democratic organizations. It would also open Pandora's box in terms of legal challenges, and it violates the academic freedom of scientific communication. 5/n
Many students and colleagues ask me why I am so obsessed with teaching (and publishing on) optical flow. I now have the right answer: because a $69.99 COSTCO drone uses it!!! @Michael_J_Black @davsca1 @CSProfKGD
Thrilled to announce that I will be joining the University of Pennsylvania as an Assistant Professor in January 2023! I'm extremely grateful to all those who have supported me all the way, and I look forward to working with students and colleagues at Penn! @PennEngineers @CIS_Penn
"The most powerful companies in the world are shaping what artificial intelligence will become—but they’ll never get it right without the ethos and values of university scientists." Fei-Fei Li
The experience of suggesting reviewers for #cvpr2022 was so rewarding. I spent hours and hours visiting more than 100 GS pages, reading papers by potential reviewers, and getting to know our exponentially growing community better. Back to writing letters!
@Michael_J_Black @geopavlakos @nikoskolot Today SPIN reached 1,000 citations. A very powerful idea: leverage geometric model optimization in the training loop. It can be applied to other geometric problems natively solvable using optimization, too.
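A toy numerical caricature of that idea (entirely my own, a scalar line-fitting stand-in rather than the actual SPIN pipeline): the regressor's rough guess initializes an in-the-loop optimizer, and the optimized estimate becomes a far better regression target.

```python
import numpy as np

def refine(theta0, xs, ys, steps=50, lr=0.1):
    """Gradient descent on the fitting loss, warm-started from the network's guess."""
    theta = theta0
    for _ in range(steps):
        grad = np.mean(2.0 * (theta * xs - ys) * xs)  # d/dtheta of mean squared error
        theta -= lr * grad
    return theta

xs = np.linspace(0.0, 1.0, 20)
ys = 3.0 * xs                 # ground-truth slope 3, unknown to the "network"
network_guess = 0.5           # a poor initial regression
target = refine(network_guess, xs, ys)
# The refined estimate lands close to 3, so regressing toward it supervises
# the network far better than its own raw guess would.
print(round(target, 2))
```

The design point is the loop: better network guesses give the optimizer better starts, and better optimized fits give the network better targets.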
Do we really need scene supervision? Not really! Equivariant shape priors suffice for segmenting 3D objects. Great work, @Jiahui77036479, @CongyueD, and Karl S.!!!
Happy to introduce our new #CVPR2023 paper EFEM with @CongyueD, Karl Schmeckpeper, @GuibasLeonidas, @KostasPenn:
- Learn an equivariant object shape prior on ShapeNet;
- Direct inference for scene object segmentation!
- A new dataset, "Chairs and Mugs"!
Mitch Marcus hired Benjamin Pierce, @RajeevAlur, and me, among many others, between 1996 and 2001. But most importantly, he created the Penn Treebank more than 30 years ago, the first gold annotation of a large corpus.
Thank you @rodneyabrooks for the in-person talk at @GRASPlab and for sharing your views on computation. It is such a pleasure to listen to something different!
With Petros Maragos and Nikos Paragios at the HIAS Inaugural Symposium: ECCV 2010 Program Chairs reunion! Hope everybody still has great memories from beautiful Crete!
Apparently, the motion targets senior researchers. Why should my students submit their papers without my name? How will it look on their CV? Will they have to put an asterisk (advisor blacklisted)? 4/n
#ECCV2020 How can you predict video from both interaction and observation? Karl Schmeckpeper's paper with @_oleh, Annie Xie, Steven Tian, @svlevine, @chelseafinn will be presented at 9a and 7p ET.
This was an amazing presentation by @erikjbekkers on equivariant neural fields and neural ideograms at the Equivision Workshop. I want to go and read all his 2024 papers now!
Looking forward to this! I'll talk about Neural Ideograms and Geometry-Grounded Representation Learning ... and cats and dogs and owls! Thank you @KostasPenn @CongyueD and co for organizing this! Excited to be part of the program!
Proud to announce the workshop that @EdgarDobriban and I are organizing on Sep 4. If you are excited about invariance, symmetry, equivariance, or data augmentation, register here:
Why we teach optical flow in computer vision: in the moving cubes illusion, take the middle row for the first 100 frames and plot it as an xt-slice. Optical flow is orientation in space-time; speed is the inverse of the slope. The rest of the illusion's motions will be homework #CIS580.
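The demo can be sketched numerically (an assumed minimal setup, not the actual course notebook): translate a bright point at a constant speed, stack the frames into an xt-slice, and the motion appears as the orientation of the resulting line.

```python
import numpy as np

width, frames, speed = 64, 30, 2       # speed in pixels per frame
xt = np.zeros((frames, width))
for t in range(frames):
    xt[t, speed * t] = 1.0             # a bright point translating to the right

# In the xt-slice the trajectory is the line x = speed * t, so the space-time
# orientation encodes the motion: dx/dt is the speed.
ts, xs = np.nonzero(xt)
slope_dx_dt = np.polyfit(ts, xs, 1)[0]
print(round(slope_dx_dt))              # -> 2, the speed we put in
```

With time plotted on the vertical axis, this dx/dt is the inverse of the visual slope dt/dx, which is how the tweet phrases it.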
Astonishing family of illusions. It is caused by the pixels at the edges of each segment, which change from white to black or vice versa in sync with the reversal of the segment itself. The white-to-white or black-to-black jump from segment to edge is seen as movement.
Students in Greece: The Archimedes Scholarship is a unique opportunity to do a PhD in Greece on foundational problems in AI. If you are interested in geometric problems and/or neuromorphic methods in vision and learning, contact me for more information.
Optics may perform deep learning calculations more efficiently than electronics, but most works perform only linear operations optically. Here, Marandi's group at Caltech realizes a very promising all-optical ReLU nonlinear activation. (1/3)