Nathan Kallus

@nathankallus

Followers
2,420
Following
248
Media
29
Statuses
326

🏳️‍🌈👨‍👨‍👧‍👦 Assoc Prof @Cornell @Cornell_Tech @Netflix @NetflixResearch causal inference, experimentation, optimization, RL, statML, econML, fairness

New York, NY
Joined December 2010
@nathankallus
Nathan Kallus
9 months
The Machine Learning & Inference Research team I co-lead @Netflix @NetflixResearch is hiring PhD interns for Summer 2024. Looking for a research internship (tackling industry problems while also focusing on publishable research!)? Apply thru this listing:
6
65
375
@nathankallus
Nathan Kallus
2 years
The Product ML Research team I co-lead @netflix @NetflixResearch is hiring! Want to do ML research that drives basic science+pubs *and* business impact? Are you a deep thinker *and* a builder? Join us! Still in PhD? Intern with us!
7
45
284
@nathankallus
Nathan Kallus
2 years
I am boycotting @informs2022 . We cannot ask our women colleagues to travel to a state where their health is put in danger and their choices about their own bodies are limited. I am calling on @INFORMS to move venues from Indiana or change to virtual-only in light of the new law.
13
15
247
@nathankallus
Nathan Kallus
5 years
My research group (located at Cornell Tech campus in NYC) is looking to recruit a postdoc to work on topics related to causal inference, fairness in ML, and sequential decision making (bandits+RL). Positions are renewable (1-2 years). Please retweet to spread the word. 🙏
3
119
232
@nathankallus
Nathan Kallus
3 years
Excited to be co-organizing the NeurIPS 2021 Workshop on Causal Inference Challenges in Sequential Decision Making happening Dec 14 online. Please consider submitting contributions. CfP on website. Due date 9/30.
1
22
93
@nathankallus
Nathan Kallus
3 years
Offline #ReinforcementLearning converges faster than you think! Offline RL is about learning new dynamic decision policies from existing data -- crucial in high-stakes domains like medicine. Theory predicts its regret converges as 1/√n. But when we run a sim we see 1/n 🤔🧵
1
12
81
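The fast-rate phenomenon the thread points to can be illustrated in the simplest possible case, a two-armed bandit (a toy sketch with made-up parameters, not the paper's setting): when there is a fixed gap between arms, the regret of the plug-in policy shrinks far faster than the 1/√n error of the arm-value estimates it is built on.

```python
import numpy as np

rng = np.random.default_rng(0)

def plugin_regret(n, mu=(0.3, 0.7), trials=2000):
    """Average regret of the plug-in policy: split n samples across the
    two arms, then pick the arm with the higher empirical mean."""
    mu = np.asarray(mu)
    regrets = []
    for _ in range(trials):
        means = [rng.normal(m, 1.0, n // 2).mean() for m in mu]
        regrets.append(mu.max() - mu[int(np.argmax(means))])
    return float(np.mean(regrets))

# Each arm's mean is only estimated to O(1/sqrt(n)) accuracy, but with a
# fixed gap the chance of picking the wrong arm -- and hence the regret --
# vanishes much faster than 1/sqrt(n).
for n in (50, 200, 800):
    print(n, plugin_regret(n))
```

Quadrupling n here drops the regret by far more than the 2× that a 1/√n rate would predict, which is the kind of gap between worst-case theory and simulation the thread is about.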
@nathankallus
Nathan Kallus
5 years
My favorite part is finally here: the panel discussion!! With our awesome lineup of speakers, moderated by @david_sontag . At the #NeurIPS2019 causal ML workshop “Do the Right Thing”
1
11
55
@nathankallus
Nathan Kallus
5 years
. @alexdamour tells us about deconfounding scores, which generalize propensity and prognostic scores and help with covariate reduction, at the #NeurIPS2019 causal ML workshop “Do the Right Thing”
1
6
45
@nathankallus
Nathan Kallus
4 years
@yisongyue Missing options: c. Raging pandemic. d. Future of the country you live in.
1
1
46
@nathankallus
Nathan Kallus
4 years
Very excited to share new work with @angelamczhou on partial identification in off-policy evaluation (OPE) in infinite-horizon RL when there are unobserved confounders: . OPE is crucial for RL applications where exploration is limited, like medicine.👩‍⚕️ 1/n
1
6
45
@nathankallus
Nathan Kallus
5 years
Personalized interventions using heterogeneous causal effects are the next big thing. But are they fair? Impossible to say: standard disparity measures are unidentifiable! In @angelamczhou & I give ways to credibly assess fairness despite this #NeurIPS2019
3
7
45
@nathankallus
Nathan Kallus
5 years
We cannot fix what we cannot measure! Thank you @NSF for funding my FAI proposal on *credible* fairness assessments and robustly fair algorithms: Proud+excited to be working with the amazing people at on this project.
1
5
36
@nathankallus
Nathan Kallus
4 years
Very excited to be involved in four papers being presented at @icmlconf #ICML2020 this week. A short thread spotlighting the papers with just *one* sentence each:
1
2
36
@nathankallus
Nathan Kallus
3 years
Excited to post new paper with the amazing @Jacobb_Douglas & Kevin Guo "Doubly-Valid/Doubly-Sharp Sensitivity Analysis for Causal Inference with Unmeasured Confounding" In this time of uncertainty it's good to have some checks on your causal inferences 1/n
2
4
39
@nathankallus
Nathan Kallus
4 years
Had a blast presenting at the Online Causal Inference Seminar yesterday together with @XiaojieMao . You can watch the recorded presentation here: And a big thank you to Alex Belloni for a fantastic discussion!
@nathankallus
Nathan Kallus
4 years
Posted a big update to Localized Debiased ML in advance of talk next week at Online Causal Inference Seminar New: more special cases incl IV-LQTE, empirics, code, more discussion/exposition, + more #EconTwitter #causality @VC31415
0
1
17
0
9
35
@nathankallus
Nathan Kallus
5 years
Andrea Rotnitzky on a general recipe for choosing the *optimal* minimal adjustment sets for causal inference given a particular causal DAG at the #NeurIPS2019 causal ML workshop “Do the Right Thing”
2
3
31
@nathankallus
Nathan Kallus
5 years
Heard that RL doubly robust off-policy evaluation is data efficient, right? Not quite true in fact if we're dealing with a Markov decision process, as we often are in RL. In we provide the first efficient estimator, Double Reinforcement Learning.
0
10
29
@nathankallus
Nathan Kallus
5 years
Next up in the #NeurIPS2019 causal inference session series, Andrew will be presenting our work on off-policy evaluation with latent confounders, where optimal balancing saves the day via duality! (5:30pm East Exhibition Hall B+C #137 )
2
1
27
@nathankallus
Nathan Kallus
3 years
Congrats @hamsabastani , Kimon, Vishal, and coauthors for this incredible achievement, and thank you for showing the way on using AI/ML to inform intelligent COVID policy without stupid travel bans.
@EricTopol
Eric Topol
3 years
Just published @Nature May represent the most important successful application of #AI in the pandemic (only a few are on the list) Reinforcement learning for efficient testing at the Greece border
10
138
367
0
2
26
@nathankallus
Nathan Kallus
5 years
Thanks for the incredible 99 submissions to @NeurIPSConf {Causal}∩{ML} workshop "Do the Right Thing." Notifications will be out any minute. Congrats to the accepted contributions. Looking forward to seeing all of you in Vancouver! #NeurIPS2019
2
4
27
@nathankallus
Nathan Kallus
2 years
Tenured/Tenure-track faculty job opening @Cornell_ORIE @cornell_tech in #NewYorkCity
0
6
26
@nathankallus
Nathan Kallus
2 years
Some can easily afford not to come, but for job candidates it's detrimental to their career. Consider a pregnant woman on the job market, even with a known and desired pregnancy and even a currently-seemingly healthy one. Should we put her in this position at all?
4
2
24
@nathankallus
Nathan Kallus
3 years
@Adam235711 This 100%! I never heard of "OR" until Martin Wainwright suggested I apply to MIT ORC (never even realized Berkeley had IEOR). Only applied to math PhDs otherwise, with slim/no chance at academic job just a foregone conclusion. Visited MIT ORC and realized that's where I belong.
0
0
24
@nathankallus
Nathan Kallus
3 years
*Today* at 11:30am Eastern in Online Causal Inference Seminar Differencing (DID) identifies effects in TWFE → analogous xforms identify effects in factor models! & don't need T→∞ as in synth ctrl and matrix factorization @XiaojieMao @EconometricaEd
1
5
24
@nathankallus
Nathan Kallus
5 years
To bring in the new year @XiaojieMao +Masa+I just posted a paper on Localized Debiased ML for estimating causal quantities using ML methods when hi-dim nuisances depend on estimand In this thread I'll explain why this prob is so important and what we did 1/
1
5
23
@nathankallus
Nathan Kallus
3 years
Had a great time as discussant for the peerless Guido Imbens and his fantastic talk at the Online Causal Inference Seminar #OCIS Rewatch it here: Talked about the larger context for data combination for identification, efficiency, & partial identification
0
2
22
@nathankallus
Nathan Kallus
4 years
Yep! Important distinction. Never liked the term ITE (Sorry, @ShalitUri @frejohk @david_sontag . Otherwise ❤️ TARNet+relatives) But I do think if we read the I as "Individual-level" it does capture something that, tho equivalent to CATE, is important conceptually when data is rich
@yudapearl
Judea Pearl
4 years
I have been reading several papers recently where the term "individualized treatment effect" is wrongly defined by E[Y(1)-Y(0)| C=ci] and ci is a set of characteristics associated with individual i. See . Warning: This is still population-based 1/2
1
10
75
1
1
23
@nathankallus
Nathan Kallus
5 years
IVs are often a viable identification alternative to assuming all confounders observed. But existing tools can have a hard time handling complex data/relationships. Turns out using deep adversarial training to solve continuum GMM works pretty well #NeurIPS2019
0
1
22
@nathankallus
Nathan Kallus
4 years
Very exciting work on partial identification for off-policy evaluation with confounded transition data
@angelamczhou
Angela Zhou
4 years
I'll be talking about Confounding-Robust Policy Evaluation in Infinite-Horizon Reinforcement Learning today, 12pm EST (Sess. 6)! Poster: Gather: Paper:
1
8
36
0
1
21
@nathankallus
Nathan Kallus
5 years
Hello world. Been lurking for a bit, and in meantime learned about some exciting new papers and listened in on thought-provoking conversations. Time to participate too! To start, in my next few tweets I am going to tell you about some new papers I'm excited to be involved in.
1
1
20
@nathankallus
Nathan Kallus
2 years
Wrote a post for Mgmt Sci blog with @XiaojieMao & @angelamczhou for our featured article "Assessing Algorithmic Fairness with Unobserved Protected Class Using Data Combination" If You Can't Measure It, Bound It: Credibly Auditing Algorithms for Fairness
0
3
21
@nathankallus
Nathan Kallus
2 years
(Re health danger: I know there are "exceptions" in law but anything that makes drs hesitate for purely political/punitive and non-medical reasons or opt to xfer a patient rather than save their lives asap is putting women's health at risk. & that's even putting choice aside 🤬)
0
0
18
@nathankallus
Nathan Kallus
5 years
Andrea says “read the stats literature!” @SusanMurphylab1 says “pay more attention to @yudapearl” @Susan_Athey says “read the empirical literature! Do supplementary analyses”
2
3
18
@nathankallus
Nathan Kallus
5 years
At #NeurIPS2019 and want to learn about using adversarial training to solve conditional moment problems (eg, instrumental variables) with neural networks? Then come see Andrew present our work on DeepGMM () today at 10:45AM at East Exhibition Hall B+C #183 .
0
2
17
@nathankallus
Nathan Kallus
4 years
Posted a big update to Localized Debiased ML in advance of talk next week at Online Causal Inference Seminar New: more special cases incl IV-LQTE, empirics, code, more discussion/exposition, + more #EconTwitter #causality @VC31415
@nathankallus
Nathan Kallus
5 years
To bring in the new year @XiaojieMao +Masa+I just posted a paper on Localized Debiased ML for estimating causal quantities using ML methods when hi-dim nuisances depend on estimand In this thread I'll explain why this prob is so important and what we did 1/
1
5
23
0
1
17
@nathankallus
Nathan Kallus
4 years
Masa and I just posted a new paper on *efficient* off-policy policy gradients: . We establish a lower bound on how well one can estimate policy gradients and develop an algo that achieves this bound & exhibits 3-way double robustness. ☘️ 1/n
@DeepAI
DeepAI
4 years
Statistically Efficient Off-Policy Policy Gradients by @nathankallus et al. #ReinforcementLearning #Statistics
0
4
7
1
1
17
@nathankallus
Nathan Kallus
5 years
In @angelamczhou @XiaojieMao & I tackle the question of how to credibly assess disparate impacts on classes when membership is unobserved -- an urgent question in both fair lending and healthcare equity reform. 1/3
1
2
17
@nathankallus
Nathan Kallus
5 years
Next up at the #NeurIPS2019 fairness cluster, let us tell you about the xAUC fairness metric for bipartite ranking and how to use it to assess disparities in predictive risk scores. Poster #118 at 5pm today. W/ @angelamczhou .
0
1
16
@nathankallus
Nathan Kallus
5 years
@Susan_Athey on “growing up in a small-sample, low-signal world” changes your perspective on what’s *truly* *in-practice* possible with AI/ML
1
1
15
@nathankallus
Nathan Kallus
5 years
👏🙌 Congrats @Susan_Athey And if you wanna see the awardee speak do make sure to join us at our #NeurIPS2019 workshop 😉
@HealthEconBot
HealthEconBot
5 years
#HealthEconNews von Neumann award to Susan Athey
0
0
1
0
2
16
@nathankallus
Nathan Kallus
5 years
Curse of horizon in #ReinforcementLearning is that longer & longer trajectories from different policies look less alike. This is fatal to RL in unsimulatable/unexplorable settings like medicine. In we use special RL structure to efficiently break the curse 🤖👩‍⚕️
1
1
14
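The curse is easy to see numerically (a toy sketch with hypothetical behavior/target policies, not the paper's estimator): the trajectory-level importance weight is a product of per-step ratios, so its variance blows up exponentially in the horizon.

```python
import numpy as np

rng = np.random.default_rng(0)

def cumulative_weight_std(horizon, n=100_000, p_b=0.5, p_e=0.9):
    """Std of the trajectory importance weight prod_t pi_e(a_t)/pi_b(a_t)
    when binary actions are drawn i.i.d. from the behavior policy pi_b."""
    actions = rng.random((n, horizon)) < p_b                 # behavior draws
    step_w = np.where(actions, p_e / p_b, (1 - p_e) / (1 - p_b))
    return float(step_w.prod(axis=1).std())

for H in (1, 5, 10, 20):
    print(H, cumulative_weight_std(H))
```

With these numbers each step's weight has second moment 0.5·1.8² + 0.5·0.2² = 1.64, so the product's variance is 1.64^H − 1: hopeless well before H = 20, which is why marginalized/stationary-distribution weights are needed to break the curse.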
@nathankallus
Nathan Kallus
5 years
What are the disparate impacts of personalizing to maximize conditional treatment effects? Unknowable. Let us tell you how to *credibly* assess disparities of personalized interventions at our poster #72 at 10:45am today at #NeurIPS2019 . W/ @angelamczhou .
0
1
15
@nathankallus
Nathan Kallus
3 years
If I didn’t already have a job, working with the amazing @shiraamitchell and @davidshor would be a very appealing proposition 🤩
@StatModeling
Andrew Gelman et al.
3 years
Come work with me and David Shor !
0
10
33
1
0
15
@nathankallus
Nathan Kallus
5 years
@SusanMurphylab1 & @Susan_Athey : even if we shouldn’t really care about classical confidence intervals, the decision makers at funding agencies, world bank, etc do care right now and so that’s where we have to start. Maybe we can change that in the future...
0
0
13
@nathankallus
Nathan Kallus
3 years
There may be very different objectives in “fair pricing” depending on the context. We @angelamczhou try to categorize and reconcile them and their (in)compatibility in this new paper. (To appear at @FAccTConference )
@deaneckles
Dean Eckles
3 years
@MuhammedKambal @katforrester This paper by @nathankallus @angelamczhou is nice in noting how quite different fairness concerns pop up in different algorithmic applications, perhaps encouraging some humility about applying one abstract idea across the board
1
3
12
0
0
13
@nathankallus
Nathan Kallus
5 years
FYI we updated our Double Reinforcement Learning draft -- we got some questions about asymptotic variance vs finite samples so we added new finite-sample guarantees where leading term is controlled by the efficient variance. Thanks for the questions! 🙏
0
1
13
@nathankallus
Nathan Kallus
3 years
A thoroughly enjoyable interview with @red_abebe . What a journey!! For those curious about the work she discusses, she gave an excellent talk (recorded) about it at @red_abebe please please bring us some real Ethiopian coffee to the next in-person conf 🙏
@QuantaMagazine
Quanta Magazine
3 years
From “The Joy of x,” a Quanta podcast hosted by @StevenStrogatz : Rediet Abebe’s journey from Ethiopia to Harvard, and more importantly, the inequity she encountered in America, helped inspire her to design algorithms that optimize resource allocation.
3
26
120
0
2
13
@nathankallus
Nathan Kallus
3 years
This sounds like it's going to be an amazing workshop. Awesome topic, organizers, speakers, and panelists. Looking forward!
@k__niki
Niki Kilbertus
3 years
Excited about our @icmlconf workshop on the *Neglected Assumptions in Causal Inference* () with my amazing co-organizers @LauraBBalzer @alexdamour @uhlily @raziehnabi @ShalitUri We warmly welcome contributions from outside CS as well (see CFP) @ICML2021
2
56
245
0
2
12
@nathankallus
Nathan Kallus
5 years
. @Susan_Athey talking about estimation and hypothesis testing in bandits at the #NeurIPS2019 causal ML workshop “Do the Right Thing”
0
2
12
@nathankallus
Nathan Kallus
5 years
Doubly robust off-policy eval is asymp locally efficient. Self-normalized importance sampling is stable in finite samples. 🤔Which to choose? Get best of both worlds (even if misspecified) thanks to a (normalized) empirical likelihood approach #NeurIPS2019
0
1
12
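For readers new to the two ingredients, a minimal one-step sketch (toy logged-bandit data; the paper's empirical-likelihood combination is not shown) of plain importance sampling, its self-normalized variant, and the doubly robust estimator:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy logged bandit data: context-free, two actions, true value 1.45.
n = 5000
p_b, p_e = 0.5, 0.9                        # behavior / target prob of action 1
a = (rng.random(n) < p_b).astype(int)
y = rng.normal(1.0 + 0.5 * a)              # reward depends on the action
w = np.where(a == 1, p_e / p_b, (1 - p_e) / (1 - p_b))

q_hat = np.array([1.0, 1.5])               # outcome model (here: exact)
dm = p_e * q_hat[1] + (1 - p_e) * q_hat[0] # direct-method term

ips   = np.mean(w * y)                     # plain importance sampling
snips = np.sum(w * y) / np.sum(w)          # self-normalized IPS
dr    = dm + np.mean(w * (y - q_hat[a]))   # doubly robust

print(ips, snips, dr)
```

SNIPS trades a small bias for finite-sample stability (its weights sum to 1); DR is locally efficient when the outcome model is good; the tweet's point is that an empirical-likelihood view lets you keep both advantages at once.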
@nathankallus
Nathan Kallus
5 years
Susan Shortreed and @Susan_Athey contrast educations and traditions that start with the *data generating process* versus with the *algorithm*
1
2
11
@nathankallus
Nathan Kallus
5 years
So I've tried and given up on setting up an application system. To apply: - Send me an email with subject “[PostDoc App] <Applicant Name>” with cover letter and CV. - Ask two recommenders to send me an email with subject “[PostDoc Rec] <Applicant Name>” with their rec letter.
@nathankallus
Nathan Kallus
5 years
My research group (located at Cornell Tech campus in NYC) is looking to recruit a postdoc to work on topics related to causal inference, fairness in ML, and sequential decision making (bandits+RL). Positions are renewable (1-2 years). Please retweet to spread the word. 🙏
3
119
232
0
2
11
@nathankallus
Nathan Kallus
3 years
Well done PC chairs!! 👏 @KLdivergence @mkearnsupenn @fborgesius @angelaxiaowu Ok now @icmlconf 's turn? 🙏 @CsabaSzepesvari @StefanieJegelka @dasongle (typing while running after toddler at home... 🤯)
@FAccTConference
ACM FAccT
3 years
Due to recent disruptions & challenges arising in light of the evolving COVID-19 pandemic, we have decided to postpone our PAPER SUBMISSION DEADLINE to Friday January 21, 2022! 🗓️📢✍️ Updates to our website () will be made shortly.
2
37
97
0
0
11
@nathankallus
Nathan Kallus
3 years
Happy to host interested CIFellows. My mentor profile is here: Pasting it below in thread. Reach out if you think there's a match.
@CRAtweets
Computing Research
3 years
CRA/CCC Announces CIFellows 2021 Program
0
71
124
1
1
8
@nathankallus
Nathan Kallus
5 years
Contextual bandits w/ linear rewards don't need exploration b/c can extrapolate ∞ly. For nondifferentiable rewards run separate non-contextual algos b/c so little info. We give optimal algo for all smoothness levels in b/w using both context & exploration
1
3
10
@nathankallus
Nathan Kallus
5 years
@david_sontag “where have observational studies and methods had a big impact in medicine?” Andrea: “HIV/AIDS treatment!! When-to-start policies revolutionized by such work.” Reference: see @_MiguelHernan ’s work
1
1
10
@nathankallus
Nathan Kallus
4 years
Optimizing efficient policy value estimates does *not* imply efficient learning of policy parameters! In a new paper () we consider what would actually be efficient for the common reduction of policy learning to weighted (cost-sensitive) classification 1/n
1
1
10
@nathankallus
Nathan Kallus
5 years
Link to application system will be shared soon.
1
1
9
@nathankallus
Nathan Kallus
5 years
Today at the #NeurIPS2019 #ReinforcementLearning session, let us tell you about combining the benefits of self-normalization and double robustness in off-policy evaluation (no; DR w/ normed weights is not enough). (10:45 AM @ East Exhibition Hall B+C #209 )
0
0
9
@nathankallus
Nathan Kallus
3 years
In 2 hours Andrew will be giving our @aistats_conf oral presentation on Off-Policy Evaluation in Infinite-Horizon Reinforcement Learning with Latent Confounders. Come hear him present:
0
2
7
@nathankallus
Nathan Kallus
5 years
our line up at the #NeurIPS2019 causal ML workshop “Do the Right Thing”
0
2
8
@nathankallus
Nathan Kallus
4 years
The talks and discussion today were amazing! 🤩🙏 Don’t worry if you missed it — it’s all recorded and available on demand 😁 #NeurIPSfromBed #NoFOMO
@k__niki
Niki Kilbertus
4 years
Excited to announce the Workshop on Consequential Decisions in Dynamic Environments @NeurIPSConf Co-org dream team: @angelamczhou , @AshiaWilson , John Miller, @uhlily , Lydia Liu, @nathankallus & @shiraamitchell special props to John for conceiving & website
2
7
46
0
0
8
@nathankallus
Nathan Kallus
4 years
@johncarlosbaez The phenomenon is more subtle in general stochastic optimization: You have K stoch opt problems with separate datasets; what do you do? Minimize each sample avg approx? Or pool your data? Depends! But you can still mimic oracle shrinkage amount (even if 0)
0
2
8
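A James-Stein-style sketch of the separate-vs-pooled tradeoff (my illustration using a toy mean-estimation stand-in, not the thread's general stochastic-optimization result): data-driven shrinkage toward the pooled estimate mimics the oracle shrinkage amount and beats both extremes.

```python
import numpy as np

rng = np.random.default_rng(0)

K, n, trials = 50, 20, 500
true = rng.normal(0.0, 0.5, K)           # K related but distinct optima

def mse(est):
    return float(np.mean((est - true) ** 2))

sep_err = pool_err = shrink_err = 0.0
for _ in range(trials):
    x = rng.normal(true, 1.0, (n, K))    # a separate dataset per problem
    m = x.mean(axis=0)                   # solve each problem separately
    grand = m.mean()                     # fully pooled solution
    # James-Stein-style data-driven shrinkage toward the grand mean
    s2 = 1.0 / n                         # sampling variance of each mean
    shrink = max(0.0, 1 - (K - 3) * s2 / np.sum((m - grand) ** 2))
    js = grand + shrink * (m - grand)
    sep_err += mse(m)
    pool_err += mse(np.full(K, grand))
    shrink_err += mse(js)

print(sep_err / trials, pool_err / trials, shrink_err / trials)
```

The estimated shrinkage factor tracks the oracle value (signal variance over signal-plus-noise variance), so the combined estimator wins even though neither "solve separately" nor "pool everything" is best.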
@nathankallus
Nathan Kallus
4 years
@deaneckles @alex_peys If you're trying to estimate the value of a treatment regime, Radon-Nykodim says as long as it is abs cts wrt logging policy then you have an importance weight and DR is exactly as usual. Otherwise (e.g., you want E[Y(f(X))] but logging is cts dist) the estimand is not regular 1/
1
0
8
@nathankallus
Nathan Kallus
4 years
Proud of my future colleague!
@2plus2make5
Emma Pierson
4 years
Woke up to email from grandma. Followed orders. Piece here: . Gotta do what your grandma says. Thanks to @leah_pierson , @serinachang5 , @scorbettdavies , Pang Wei Koh, @ShengwuLi , @SethS_D , and others for helpful comments.
4
35
234
0
0
8
@nathankallus
Nathan Kallus
2 years
Thanks for having me! Had a blast talking to you guys about fairness in A/B tests. & repeating @SergeBelongie (apxly): "come to @AiCentreDK for the best combo of top-notch AI and top-notch quality of life" 😍
@SergeBelongie
Serge Belongie
2 years
Thank you @nathankallus for your thought-provoking seminar today @AiCentreDK !
0
2
9
0
0
8
@nathankallus
Nathan Kallus
4 years
Amen
@acmi_lab
ACMI Lab (CMU)
4 years
We are not a political organization. But we cannot operate an honest academic operation under a dictatorship. And we cannot continue to operate in the US with our international students constantly under threat. Get out and vote.
2
19
183
0
0
8
@nathankallus
Nathan Kallus
4 years
Excited to be speaking at the Applied Reinforcement Learning Seminar tomorrow: See you on zoom!
@ArlSeminar
ARLSeminar
4 years
The next seminar will be on Thursday, November 19th, 2020 at 10:00 AM ET / 3:00 PM London / 11:00 PM Beijing. Nathan Kallus from Cornell University will give a talk on “Statistically Efficient Offline Reinforcement Learning”. Chengchun Shi from LSE will lead the discussion.
0
3
3
0
2
7
@nathankallus
Nathan Kallus
5 years
. @scottniekum teaching a robot to play tennis with *safe* inverse reinforcement learning at #NeurIPS2019 workshop on Safety and Robustness in Decision Making
1
0
6
@nathankallus
Nathan Kallus
4 years
Another piercing analysis by @uhlily And if you haven't already read her previous excellent two-part piece Disparate Causes, do it now:
@jainfamilyinst
JAIN FAMILY INSTITUTE
4 years
"Why would we expect social causal dynamics in a counterfactual world to be the same as those in our world? If they aren’t the same, why do we care about these quantities at all?" NEW from @uhlily , on DAGs, causality, and race in social science research.
3
30
94
0
3
7
@nathankallus
Nathan Kallus
2 years
@netflix @NetflixResearch I should add that the Research Scientist role is flexible regarding level depending on the candidate (what we call in academia "open rank")
1
0
7
@nathankallus
Nathan Kallus
5 years
Applicants should submit a cover letter, CV, and 2 letters of recommendation. Applications will be considered on a rolling basis and sending your materials early is encouraged. Applicants are also encouraged to email/DM me to notify me of intent to submit a complete application.
1
2
6
@nathankallus
Nathan Kallus
4 years
DeepMatch: Balancing Deep Covariate Representations for Causal Inference Using Adversarial Training Balancing methods for causal inference are great but rely on a known representation (linear or kernel); we extend to neural nets using adversarial training
0
0
6
@nathankallus
Nathan Kallus
5 years
Assessing fairness of predictive risk scores requires us to think beyond binary classification. In we consider (un)fairness in bipartite ranking, where a natural metric, xAUC, arises for diagnosing disparities in risk scores. W/ @angelamczhou #NeurIPS2019
2
1
6
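The xAUC metric itself is straightforward to compute: xAUC(a, b) is the probability that a random positive from group a is scored above a random negative from group b, so comparing xAUC(a, b) with xAUC(b, a) exposes cross-group ranking disparities. A sketch with hypothetical score distributions (groups and numbers made up for illustration):

```python
import numpy as np

def xauc(scores_pos_a, scores_neg_b):
    """xAUC(a, b): probability a random positive from group a is ranked
    above a random negative from group b (ties count half)."""
    sp = np.asarray(scores_pos_a)[:, None]
    sn = np.asarray(scores_neg_b)[None, :]
    return float(((sp > sn) + 0.5 * (sp == sn)).mean())

rng = np.random.default_rng(0)
pos_a = rng.normal(0.5, 0.2, 500)   # group A positives: weakly separated
neg_a = rng.normal(0.4, 0.2, 500)
pos_b = rng.normal(0.9, 0.2, 500)   # group B positives: well separated
neg_b = rng.normal(0.2, 0.2, 500)

# Group A's positives are ranked above group B's negatives less reliably
# than vice versa -> an xAUC disparity the overall AUC can hide.
print(xauc(pos_a, neg_b), xauc(pos_b, neg_a))
```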
@nathankallus
Nathan Kallus
5 years
We know we can correct noisy state obs using latent var models. But still exists no outcome-model-independent unbiased importance weights for off-policy eval as in noiseless case. Surprise: balanced policy eval works and beats outcome modeling #NeurIPS2019
0
0
5
@nathankallus
Nathan Kallus
4 years
"Despite immigrants only making up 16% of inventors, they are responsible for 30% of aggregate US innovation since 1976." And, this is just patents.
@AbdouNdiayeNYU
Abdoulaye Ndiaye
4 years
Paper of the day
13
721
3K
1
2
5
@nathankallus
Nathan Kallus
5 years
. @sshortreed telling us about the outcome-adaptive lasso at the #NeurIPS2019 causal ML workshop “Do the Right Thing”
0
0
4
@nathankallus
Nathan Kallus
5 years
Contributor or not, come participate in an exciting dialogue between {Causal}∩{ML} researchers from all fields. We have an amazing line up of invited talks by @Susan_Athey , @SusanMurphylab1 , Andrea Rotnitzky, @sshortreed , & Ying-Qi Zhao. 🤩🤩 Happening December 14 @NeurIPSConf
0
0
5
@nathankallus
Nathan Kallus
3 years
@NickArnosti Make n~Poisson 🙃
0
0
5
@nathankallus
Nathan Kallus
5 years
. @thorstenjoachim asks whether conventional learning-to-rank methods are fair at the #NeurIPS2019 workshop on Safety and Robustness in Decision Making
0
1
4
@nathankallus
Nathan Kallus
3 years
Elated to have truly incredible invited speakers: @Susan_Athey , @eliasbareinboim , @raziehnabi , Rui Song, @mark_vdlaan , @vernadec Privileged to work w/ an amazing group of (mostly twitterless) organizers: Aurelien Bibaut, Maria Dimakopoulou, Xinkun Nie, Masatoshi Uehara, @kewzha
0
0
5
@nathankallus
Nathan Kallus
4 years
@EpiEllie You (& @Susan_Athey @edwardhkennedy ) may also be interested in w/ @XiaojieMao where we study the value of *combining* surrogate-outcome data into causal analysis without the strong assumptions usually needed to ensure surrogates can *replace* real outcomes
0
1
5
@nathankallus
Nathan Kallus
3 years
Looking to host a postdoc interested in working within the disjunction of the pairwise conjunctions of causal inference using machine learning (or vice versa), reinforcement learning (esp. offline), algorithmic fairness, contextual bandits, optimization under uncertainty, or with
1
1
5
@nathankallus
Nathan Kallus
5 years
. @SusanMurphylab1 on the practical challenges of designing and implementing reinforcement learning algorithms in mobile health at the #NeurIPS2019 causal ML workshop “Do the Right Thing”
0
0
4
@nathankallus
Nathan Kallus
4 years
Looking forward to being discussant on this! See you at QME.
0
0
5
@nathankallus
Nathan Kallus
4 years
Another @NeurIPSConf workshop to be excited for!
@LihongLi20
Lihong Li
4 years
Interested in reinforcement learning *without* interaction with the environment or simulator? We're organizing a @NeurIPSConf 2020 workshop on Offline RL. Visit the homepage for more details including Call for Papers!
0
29
162
0
0
5
@nathankallus
Nathan Kallus
4 years
@deaneckles @alex_peys and then there's no single right way and no sqrt-n consistent regular estimator. We investigate 9 different ways to kernelize DR and analyze behavior under just rate conditions for nuisances in .
1
1
4
@nathankallus
Nathan Kallus
5 years
I lol'ed 😂👍 @eliasbareinboim
@zacharylipton
Zachary Lipton
5 years
@eliasbareinboim “People ask me, Elias, *why do you love the graph so much?*”
0
0
19
0
0
4
@nathankallus
Nathan Kallus
5 years
@david_sontag asks “what annoys you in the ML community’s approach to causal inference”
1
1
4
@nathankallus
Nathan Kallus
4 years
And tomorrow some very exciting work on the interaction of fairness, equity, and welfare considerations in personalized pricing
@angelamczhou
Angela Zhou
4 years
Tomorrow, Friday---giving a spotlight talk on fairness considerations for covariate-personalized pricing at Fair AI in Finance, 18:55 – 19:10 EST Video: Workshop:
1
1
5
0
0
4
@nathankallus
Nathan Kallus
4 years
So excited to participate in this!
@EmmaBrunskill
Emma Brunskill
4 years
To help better understand the theoretical foundations of batch offline RL, , S. Meyn & I are organizing a Simons workshop this week with wonderful speakers! Schedule: Webinar link:
1
22
165
0
0
4
@nathankallus
Nathan Kallus
5 years
Yingqi Zhao on estimation and inference on *high-dimensional* individualized treatment regimes and semiparametric approaches thereto at the #NeurIPS2019 causal ML workshop “Do the Right Thing”
0
3
4
@nathankallus
Nathan Kallus
5 years
Dhanya Sridhar giving us a glimpse of her work with @yixinwang_ and @blei_lab on using counterfactual predictions to operationalize equality of opportunity and affirmative action in ML at the #NeurIPS2019 causal ML workshop
0
0
4
@nathankallus
Nathan Kallus
3 years
@jondr44 There's still bias, albeit 1/n, and you can't do exact inference. The magic of RCTs is unbiased, airtight causal inference. Shameless plug: you get the same efficiency w/o bias and w/ rand inf by balancing *before* randomization &
1
0
3