Frank Dellaert
@fdellaert
Followers
13K
Following
13K
Media
177
Statuses
3K
CTO at Verdant Robotics, Robotics & Computer Vision Professor at Georgia Tech (on leave). Before: sabbatical at KUL, stints at Skydio, Facebook B*8, Google AI.
San Mateo, CA
Joined June 2008
This is one of the most lucid and accessible intros to Bayesian inference I have seen, by @rlmcelreath. No background required: Statistical Rethinking 2022 Lecture 02 - Bayesian Inference via @YouTube.
13
223
1K
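The Bayesian updating the lecture teaches can be sketched in a few lines of grid approximation (a toy illustration, not McElreath's code; the 6-successes-in-9-trials data echoes the book's globe-tossing example):

```python
import numpy as np

# Grid approximation for inferring a proportion p from k successes in n trials.
grid = np.linspace(0, 1, 1000)          # candidate values of p
prior = np.ones_like(grid)              # flat prior
k, n = 6, 9                             # observed: 6 "water" in 9 tosses
likelihood = grid**k * (1 - grid)**(n - k)
posterior = prior * likelihood
posterior /= posterior.sum()            # normalize to a probability mass

# Posterior mean should be close to the analytic Beta(1+k, 1+n-k) mean, 7/11.
post_mean = (grid * posterior).sum()
print(round(post_mean, 3))
```

Swapping in a different prior is a one-line change, which is exactly the kind of experimentation the lecture encourages.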
Andrew Marmon and I rounded up all #CVPR2022 papers on NeRF/Neural Radiance Fields we could find in a new blog post here:
9
134
501
Our #CVPR22 paper on Panoptic NeRF is now released on arxiv. TL;DR: view synthesis + semantics on “stuff” and objects in the scene. Object-based NeRFs also allow for editing/moving/removing.
9
67
361
Google Research/Brain is *rocking* it! The field of neural rendering is moving so fast it makes my head spin :-)
Our paper, “NeRF in the Wild”, is out! NeRF-W is a method for reconstructing 3D scenes from internet photography. We apply it to the kinds of photos you might take on vacation: tourists, poor lighting, filters, and all. (1/n)
2
26
176
NeRF now in PyTorch3D!
We’re thrilled to announce that PyTorch3D now supports implicit shape rendering. #PyTorch Try it here:
0
14
148
Honored and humbled to get the “test of time award” at RSS, with @michaelkaess, for our work on Square Root SAM, introducing factor graphs and eventually leading to iSAM, iSAM2, and @gtsam4. Tune in for a special keynote next Tuesday:
.@gtcomputing Did you know CoC has a faculty member who was #honored for #pioneering new approaches to #robotic design & problem solving? You do now! Congrats to Frank Dellaert, recipient of the RSS Test of Time Award.
10
22
150
Hiring 700 roboticists, sounds exciting!
Today, at @ieee_ras_icra, Dyson gives a glimpse into the future of household robots for the first time. From manipulation and robot learning to visual perception and compliant control… Intrigued by what you see? Join us! #Dyson #Robotics #ICRA2022
4
25
146
Oh yeah! Structure from Motion to the rescue :-)
Perseverance rover landing on Mars: the 3D reconstruction in RealityCapture was done using @NASA raw images. See the model on Sketchfab. #realitycapture #photogrammetry #3Dscanning
2
22
148
Congratulations to @lucacarlone1 and team for the best TRO paper on Multi-Robot Mapping!!!
4
18
139
Apple seems to have some pretty good Visual Odometry under the AVP hood - not surprising, as ARKit is quite good already - but this seems better than 2 cm/second. Expected with a higher field of view; still impressive!
[5/6] Nice thing about Vision Pro is that it tracks everything in a global frame: the app constantly localizes the device from where the app is started. This opens up exciting possibilities of capturing human motions in a larger scale environment.
1
21
139
This is a great way to learn PyTorch and SOTA papers at the same time. I still recommend reading the papers first :-)
Annotated PyTorch Paper Implementations by @labmlai is an AMAZING resource:
• Deep learning papers explained in-depth with code side-by-side
• Constantly updated with some of the latest papers!
• 100% free and open-source!
Check it out here →
0
23
134
This is a *brilliant* video, so many cool things about eye movement and optical flow during walking, something I often muse about while walking over difficult terrain.
Retinal Optic Flow During Natural Locomotion is *finally* published - 20 months after the initial submission! Paper available here, in an open-access-by-default journal, of course! #neuroscience #vision #eyetracking #motioncapture
3
11
130
Nice to see a beautiful 3D geometry paper alongside all the LLM/ViT/DETR bonanza :-) At #CVPR23: LIMAP, a toolbox for mapping and localization with line features.
3
18
123
At @MIT_CSAIL today to give a talk on “Factor Graphs for Perception *and* Action”. In person!!
3
1
116
DUSt3R is quite something! SfM and SLAM are evolving fast!
An example of how DUSt3R can do "impossible matching": given two images without any shared visual content (my office, obviously never seen at training), it can output an accurate reconstruction (no intrinsics, no poses!) in seconds
0
7
117
My talk at @MIT_CSAIL last week on “Factor Graphs for Perception and Action” is now available on YouTube:
4
9
114
I gave a 2-hour lecture on NeRF and SfM at #icvss2023 in Sicily - now I can relax a bit and enjoy the talks by the other (stellar!) speakers, followed by some enjoyment of Sicily itself :-)
Second day of #icvss2023 starts with a great talk by @fdellaert! As a non-3D person, I really enjoyed the journey starting from NeRF, then going back to the basics, then again to recent cool NeRF stuff!
5
3
112
Honored to share an IEEE ICRA Milestone Award this year for Monte Carlo Localization (1999). If you follow the link, also check out the *many* Georgia Tech papers at ICRA 2020! With Dieter Fox, @wolfram_burgard and @SebastianThrun
7
14
110
If you want to know what state-of-the-art drone mapping looks like, this is a mechanical *and* software tour de force: PULSAR, a self-rotating, single-actuated UAV with extended sensor field of view, via @YouTube.
0
10
100
Conformal Prediction seems such a simple and powerful technique for uncertainty calibration of neural networks. Intuitive tutorial here: via @YouTube.
1
19
99
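Split conformal prediction is indeed simple enough to fit in a short sketch. This toy example (a hypothetical fixed regressor; all numbers illustrative) calibrates an interval half-width on held-out residuals and checks the promised ~90% marginal coverage:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: the "model" is a fixed function f(x) = 2x; data is y = 2x + noise.
f = lambda x: 2 * x

# Calibration set: absolute residuals of the model on held-out data.
x_cal = rng.uniform(0, 1, 500)
y_cal = 2 * x_cal + rng.normal(0, 0.1, 500)
scores = np.abs(y_cal - f(x_cal))

# Split conformal: a finite-sample-corrected (1 - alpha) quantile of the
# calibration scores gives an interval f(x) ± q with ~90% coverage.
alpha = 0.1
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n)

# Check empirical coverage on fresh test data.
x_test = rng.uniform(0, 1, 2000)
y_test = 2 * x_test + rng.normal(0, 0.1, 2000)
covered = np.mean(np.abs(y_test - f(x_test)) <= q)
print(round(covered, 2))
```

Note the guarantee is distribution-free: nothing above depends on the residuals being Gaussian, only on calibration and test data being exchangeable.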
Wow, amazing that there is enough signal in WiFi to do full pose reconstruction. Cool work from CMU!
🤯 Full body tracking now possible using only WiFi signals. A deep neural network maps the phase and amplitude of WiFi signals to UV coordinates within 24 human regions. The model can estimate the dense pose of multiple subjects by utilizing WiFi signals as the only input. 🧵
0
15
100
Quick demo of Particle Filtering / MCL, with associated CoLab notebook in comment section: via @YouTube.
4
11
98
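For readers who skip the notebook, the particle filter loop behind MCL fits in a few lines. This is a toy 1D sketch (illustrative noise levels, multinomial resampling for brevity), not the CoLab code:

```python
import numpy as np

rng = np.random.default_rng(1)

# 1D Monte Carlo Localization: a robot moves right along a line and
# observes its (noisy) distance to a wall at position 10.
N = 1000
particles = rng.uniform(0, 10, N)       # initial belief: uniform over [0, 10]
true_pos = 2.0

for _ in range(5):
    # Motion update: move +1 with process noise.
    true_pos += 1.0
    particles += 1.0 + rng.normal(0, 0.1, N)

    # Measurement update: weight particles by the range likelihood.
    z = (10 - true_pos) + rng.normal(0, 0.2)
    expected = 10 - particles
    weights = np.exp(-0.5 * ((z - expected) / 0.2) ** 2)
    weights /= weights.sum()

    # Resample according to the weights.
    idx = rng.choice(N, N, p=weights)
    particles = particles[idx]

estimate = particles.mean()
print(round(estimate, 1))  # should land near the true position, 7.0
```

The real CoLab adds the usual refinements (systematic resampling, effective-sample-size checks), but the predict/weight/resample cycle above is the whole algorithm.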
Amazing paper from @NianticLabs on scene reconstruction: paradigm shift in structure from motion? I think so! At least for a large class of datasets and applications, including quick phone captures. Beautiful visualizations in the thread. Wow, @eric_brachmann!!!
📢A new learning-based approach to SfM: #ACEZero. No img-to-img matching, optimises image-to-scene correspondences directly. Needs no pose priors. Works on unordered image sets. Efficiently handles thousands of images. Paper: Page:
1
13
96
Interesting paper that mimics A* search dynamics. It does not even come close to A* in execution speed, of course, but it shows transformers can reason symbolically when trained on specific problems. My bets in this space are on neural heuristics instead, though. What are the…
Meta presents Beyond A*. Better Planning with Transformers via Search Dynamics Bootstrapping. While Transformers have enabled tremendous progress in various application settings, such architectures still lag behind traditional symbolic planners for solving complex decision making
6
11
89
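For reference, the symbolic baseline the paper is chasing is classical A*. A minimal grid version (illustrative, not the paper's planner) shows the search dynamics a transformer would have to imitate:

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid; grid[r][c] == 1 means blocked."""
    R, C = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_heap = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = node[0] + dr, node[1] + dc
            if 0 <= r < R and 0 <= c < C and grid[r][c] == 0:
                ng = g + 1
                if ng < best_g.get((r, c), float("inf")):
                    best_g[(r, c)] = ng
                    heapq.heappush(open_heap,
                                   (ng + h((r, c)), ng, (r, c), path + [(r, c)]))
    return None  # no route exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))
print(len(path))  # number of cells on the shortest path
```

An admissible heuristic like Manhattan distance keeps the returned path optimal; a learned ("neural") heuristic would simply replace `h` here, which is where the tweet's bet lies.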
For an overview of factor graphs and @gtsam4 in robotics, and the announcement of OpenSAM and SwiftFusion, see the recording of our July 14 "Test of Time Award" talk by Michael Kaess @michaelkaess and myself, here:
3
16
88
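The core idea behind Square Root SAM, that MAP inference on a factor graph reduces to sparse least squares solved by matrix factorization, can be shown on a toy 1D pose graph (a sketch of the idea, not GTSAM's API; noise models and numbers are illustrative):

```python
import numpy as np

# Toy 1D pose graph as a linear factor graph over unknowns x0, x1, x2.
# Factors (all unit noise): prior x0 = 0, odometry x1 - x0 = 1,
# odometry x2 - x1 = 1, and a "loop closure" x2 - x0 = 2.1.
# Each factor contributes one row of the sparse Jacobian A and residual b;
# the MAP estimate solves min ||A x - b||^2.
A = np.array([
    [ 1.0,  0.0, 0.0],   # prior on x0
    [-1.0,  1.0, 0.0],   # odometry x0 -> x1
    [ 0.0, -1.0, 1.0],   # odometry x1 -> x2
    [-1.0,  0.0, 1.0],   # loop closure x0 -> x2
])
b = np.array([0.0, 1.0, 1.0, 2.1])

# Square Root SAM solves exactly this via QR/Cholesky on A.
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(x, 3))
```

Odometry alone says x2 = 2.0 and the loop closure says 2.1; the least-squares solution spreads the disagreement over the chain, which is the essence of graph-based SLAM.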
When it rains, it pours. More cool differentiable SfM work from @vincesitzmann and co.
Introducing “FlowMap”, the first self-supervised, differentiable structure-from-motion method that is competitive with conventional SfM like Colmap!. IMO this solves a major missing piece for internet-scale training of 3D Deep Learning methods. 1/n
1
3
86
This is such a cool paper. Simple idea that works like gangbusters!
[1/6] What representation comes to mind when you think of a ‘camera’? Perhaps an extrinsic + intrinsic matrix? In our ICLR (oral) paper, we instead infer a distributed representation where each pixel is associated with a ray, and show SoTA results for few-view pose estimation.
0
6
82
Incredibly proud of my daughter @zdellaert who graduated from U Chicago yesterday with highest honors! Now on to saving some coral!.
2
0
82
Besides GS dethroning NeRF, we have a matching “new wave” in the vision community with DUSt3R and MASt3R, MicKey, Ray Diffusion, etc. Truly inspiring!
Awesome inspiring talk from @JeromeRevaud !!! So delighted by the positive response of everyone about our work for the past 4 years ! And what a crowd!
2
8
82
I joined forces with @yen_chen_lin of the awesome NeRF repo and put an annotated bibliography, “Neural Volume Rendering: NeRF And Beyond”, on arXiv. bibtex: paper:
2
7
77
Wow, talk about a SLAM challenge!
🚀Launching #ICCV2023 SLAM Challenge! Navigate through complex environments with our TartanAir & SubT-MRS datasets, pushing the robustness of your SLAM algorithms. Let's redefine sim-to-real transfer together! Go to for details! #SLAM #challenge #ICCV2023
0
4
77
Book is nearing completion - needs some more DRL code - but we are starting to work on the index. Thanks to the good folks at @MystMarkdown!
4
12
77
Impressive new 360->“CAD” method from @RealityLabs. Tiny brag: they use an “Atlanta World” prior, a term Grant Schindler and I coined after being dissatisfied with the rigid “Manhattan World” assumptions. Ah, good times :-)
Today we’re introducing SceneScript, a novel method for reconstructing environments and representing the layout of physical spaces from @RealityLabs Research. Details ➡️ SceneScript is able to directly infer a room’s geometry using end-to-end machine
1
5
73
Oooh! Fast and simple Bayesian deep learning using new “laplace” library. Video is also a great primer on what Bayes (with some help from Laplace) can do for you.
In our #NeurIPS2021 paper, we introduce laplace-torch for effortless Bayesian deep learning. Despite their simplicity, we find that Laplace approximations are surprisingly competitive with more popular approaches.
1
10
67
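The underlying trick is classical: fit a Gaussian at the MAP estimate, with covariance from the inverse Hessian of the negative log-posterior. A scalar sketch on a binomial posterior (an illustration of the idea, not the laplace-torch API):

```python
import numpy as np

# Laplace approximation: approximate a posterior by a Gaussian centered at
# the MAP, with variance -1/H where H is the log-posterior's second
# derivative at the MAP.
# Example: k successes in n trials with a flat prior, so the posterior is
# Beta(k+1, n-k+1) and the MAP is k/n.
k, n = 30, 50
p_map = k / n

# Second derivative of the log-posterior k*log(p) + (n-k)*log(1-p) at the MAP.
hess = -k / p_map**2 - (n - k) / (1 - p_map)**2
sigma = np.sqrt(-1 / hess)

print(round(p_map, 2), round(sigma, 3))
```

The resulting N(0.6, sigma) closely matches the exact Beta posterior's mean and spread; laplace-torch applies the same recipe to a network's weight posterior, with the Hessian replaced by tractable approximations.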
Got a ride in a @wayve_ai autonomous vehicle from @alexgkendall at #ICRA2023, very cool as end-to-end RL!
1
3
72
Very cool results by representing a scene with 3D Gaussians instead of voxels. A bit longer to train (1hr).
We will be at @siggraph 2023 with "3D Gaussian Splatting for Real-Time Radiance Field Rendering" - have you ever seen radiance fields with 100+ FPS and MipNeRF360 quality? Check out our website here:
4
10
64
This feels almost alive. Very cool legged robot climbing/jumping.
Roboticists from @leggedrobotics and @NVIDIAEmbedded are teaching four-legged robots to climb and jump. After training in simulation, the robots can autonomously decide how to scramble over and under obstacles, which will help them do dangerous jobs so that humans don't have to.
1
4
62
From last year, but this interactive article on Gaussian Loopy Belief Propagation is brilliantly done.
Very excited to share our interactive article: A visual introduction to Gaussian Belief Propagation! It's part position paper, part tutorial, with interactive figures throughout to give intuition. Article: Work with: @talfanevans, @AjdDavison 1/n
0
8
62
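The message-passing the article visualizes can be sketched for scalars. Below, one forward and one backward sweep of Gaussian BP in information form on a 3-node chain, where BP is exact (a toy illustration, not the article's code):

```python
# Scalar Gaussian Belief Propagation on a chain x0 - x1 - x2.
# Unary measurements: x0 ≈ 0 and x2 ≈ 4 (precision 1); smoothness factors
# (x_{i+1} - x_i) ≈ 0 with precision w. Exact marginal means are 1, 2, 3.
w = 1.0
unary = {0: (0.0, 1.0), 2: (4.0, 1.0)}   # node -> (measurement, precision)

def through_pairwise(eta, lam):
    """Marginalize the sender through the pairwise factor (information form)."""
    return w * eta / (lam + w), w * lam / (lam + w)

# Forward sweep: message into node i+1 combines node i's unary with its input.
fwd = {}
eta, lam = 0.0, 0.0
for i in range(2):
    if i in unary:
        z, prec = unary[i]
        eta, lam = eta + prec * z, lam + prec
    eta, lam = through_pairwise(eta, lam)
    fwd[i + 1] = (eta, lam)

# Backward sweep: message into node i-1.
bwd = {}
eta, lam = 0.0, 0.0
for i in range(2, 0, -1):
    if i in unary:
        z, prec = unary[i]
        eta, lam = eta + prec * z, lam + prec
    eta, lam = through_pairwise(eta, lam)
    bwd[i - 1] = (eta, lam)

# Beliefs: add unary and incoming information vectors/precisions.
means = []
for i in range(3):
    eta, lam = 0.0, 0.0
    if i in unary:
        z, prec = unary[i]
        eta, lam = prec * z, prec
    for msgs in (fwd, bwd):
        if i in msgs:
            eta, lam = eta + msgs[i][0], lam + msgs[i][1]
    means.append(eta / lam)

print([round(m, 2) for m in means])  # exact on a tree: [1.0, 2.0, 3.0]
```

On loopy graphs the same local updates are iterated to convergence, which is the distributed-inference story the article tells.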
I am kicking off this seminar series tomorrow, hope to see some of you there! I will mainly be talking about factor graphs for action…
After a tremendous response to our first SLAM series, we are glad to announce the Fall Edition of the Tartan SLAM Series. Learn more about the series on our website: Sign up to receive reminders and Zoom links to participate.
0
3
58
I gave a short talk at TUM about Shonan Rotation Averaging, surrounded by a larger talk about Factor Graphs :-) Recording is available on YouTube.
There will be an online seminar on YouTube, "Pushing Factor Graphs beyond SLAM", given by Frank Dellaert (developer of the GTSAM package). TUM AI Lecture Series - Pushing Factor Graphs beyond SLAM (Frank Dellaert), via @YouTube.
3
3
57
Today we released GTSAM 4.0.3 in advance of 4.1, with an important Pose3 update, and a cool blog post by @FanOnRobotics about how we got there
0
11
55
This new I❤️LA language is very cool. I wish this were available, as the authors suggest in future work, in Jupyter notebooks right now :-)
Excited to share our SIGGRAPH Asia'21 paper with Shoaib Kamil, @_AlecJacobson and @yig: "I❤️LA: Compilable Markdown for Linear Algebra" (1/7)
0
8
56
Very cool work on fully differentiable SLAM!
Today we release gradslam - automagically differentiable SLAM. Run dense 3D reconstruction in @PyTorch! And it's fully differentiable :) Co-led w/ @S_Saryazdi, ably supported by @mautkiungli @duckietown_coo. Webpage: Paper:
1
6
56
I'm not at RSS, but based on reports from CHI, ICRA, and CVPR, I'd strongly recommend masking with an N95. #RSS2022
3
3
52