Brenden Lake

@LakeBrenden

Followers
7,078
Following
221
Media
76
Statuses
856

Associate Professor of Psychology and Data Science @ NYU. Co-Director of the NYU Minds, Brains, and Machines Initiative. Posts are my views only

Manhattan, NY
Joined June 2018
Pinned Tweet
@LakeBrenden
Brenden Lake
7 months
Now out in Nature Machine Intelligence, Emin Orhan shows how high-level visual representations are learnable from a child's proxy visual input, without strong inductive biases, and can be applied to a range of visual benchmarks.
Tweet media one
4
76
334
@LakeBrenden
Brenden Lake
1 year
Today in Nature, we show how a standard neural net, optimized for compositional skills, can mimic human systematic generalization (SG) in a head-to-head comparison. This is the capstone of a 5 year effort with Marco Baroni to make progress on SG. (1/8)
Tweet media one
24
391
2K
@LakeBrenden
Brenden Lake
4 years
We train a self-supervised net "through the eyes" of one baby across 2 years of development. At #NeurIPS2020 , Emin Orhan shows how high-level visual representations emerge. Paper & pre-trained net Poster Thurs Noon EST
Tweet media one
7
128
827
@LakeBrenden
Brenden Lake
2 years
Which is more incorrect, the original tweet or Twitter's added context?
Tweet media one
97
31
804
@LakeBrenden
Brenden Lake
6 months
I got tenure! It was fitting that I got to celebrate with the lab right after the news. Working together for the last 6.5 years has been a blast.
Tweet media one
64
13
692
@LakeBrenden
Brenden Lake
6 years
What makes people smarter than machines? Reading list for my NYU class "Advancing AI through Cognitive Science" has paired papers in AI and CogSci organized by topic, highlighting key ingredients for building machines that learn and think like people.
9
144
508
@LakeBrenden
Brenden Lake
4 years
Children use the mutual exclusivity bias to learn new words, while standard deep nets show the opposite bias, often hindering learning. @gandhikanishk introduces this challenge for ML at #NeurIPS2020 . Paper Poster Thu Noon EST
Tweet media one
3
60
466
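As a toy illustration only (not the model or evaluation from the paper above), the mutual exclusivity bias can be phrased as a simple inference rule: a novel word is mapped to whichever object in view does not already have a name. The words, objects, and lexicon below are hypothetical.

```python
# Toy sketch of the mutual exclusivity (ME) bias as an inference rule; the
# words, objects, and lexicon here are made up for illustration.

def me_inference(novel_word, objects_in_view, lexicon):
    """Guess the referent of a novel word: prefer an object with no known name."""
    known_referents = set(lexicon.values())
    unnamed = [obj for obj in objects_in_view if obj not in known_referents]
    return unnamed[0] if unnamed else None  # no unnamed object -> stay agnostic

lexicon = {"ball": "ball", "cup": "cup"}          # known word -> object mappings
print(me_inference("dax", ["ball", "cup", "whisk"], lexicon))  # -> "whisk"
```

A learner with the opposite bias, as the tweet notes standard deep nets tend to show, would instead spread the novel label over already-named objects.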
@LakeBrenden
Brenden Lake
4 years
We train large-scale neural nets "through the eyes" of one baby across 2 years of development. New paper from Emin Orhan shows how high-level visual representations emerge from a subset of one baby's experience, through only self-supervised learning. (1/2)
8
97
466
@LakeBrenden
Brenden Lake
5 years
Children use the mutual exclusivity (ME) bias to learn new words, while standard neural nets show the opposite bias, hindering learning in common scenarios. New preprint from @gandhikanishk introduces ME as a challenge for neural networks
Tweet media one
6
121
431
@LakeBrenden
Brenden Lake
3 years
Professor Gary Cottrell took me in as a summer intern in high school, and introduced me to neural networks and cognitive science. I loved it, and his lessons ended up shaping my education and career. It's only appropriate that Gary teaches the next generation Lake too.
Tweet media one
7
15
326
@LakeBrenden
Brenden Lake
6 years
New few-shot learning challenge: people generalize compositionally from just a few examples, while powerful seq2seq models don't. We found clues to how people do it: they use 3 inductive biases that could be incorporated into models. w/ @tallinzen & Baroni
5
81
284
@LakeBrenden
Brenden Lake
5 years
Neural nets struggle with compositionality, but they can be improved through "meta seq2seq learning": training on a series of seq2seq problems to acquire the compositional skills needed for solving new problems.
Tweet media one
2
60
258
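To make the idea above concrete, here is a minimal, illustrative sketch (my own toy setup in PyTorch, not the paper's architecture or data) of episodic training: each episode samples a fresh word-to-symbol mapping presented as a support set, so the only skill worth learning is to consult the support rather than memorize any single mapping.

```python
# Toy sketch of episodic ("meta seq2seq"-style) training: each episode defines
# a new word->symbol mapping via a support set, and the model must use that
# support to answer a query. Words, symbols, and the model are illustrative.
import random
import torch
import torch.nn as nn

WORDS = ["dax", "wif", "lug", "zup"]
SYMBOLS = list(range(len(WORDS)))
word_ix = {w: i for i, w in enumerate(WORDS)}

class EpisodicMapper(nn.Module):
    def __init__(self, dim=32):
        super().__init__()
        self.emb = nn.Embedding(len(WORDS), dim)

    def forward(self, support_in, support_out, query):
        keys = self.emb(support_in)                 # (S, dim) support word embeddings
        q = self.emb(query)                         # (dim,) query word embedding
        attn = torch.softmax(keys @ q, dim=0)       # attention over support items
        probs = torch.zeros(len(SYMBOLS)).index_add(0, support_out, attn)
        return torch.log(probs + 1e-9)              # log-probs over output symbols

model = EpisodicMapper()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for step in range(300):
    mapping = dict(zip(WORDS, random.sample(SYMBOLS, len(SYMBOLS))))  # new episode
    support_in = torch.tensor([word_ix[w] for w in WORDS])
    support_out = torch.tensor([mapping[w] for w in WORDS])
    query_word = random.choice(WORDS)
    log_probs = model(support_in, support_out, torch.tensor(word_ix[query_word]))
    loss = nn.functional.nll_loss(log_probs.unsqueeze(0),
                                  torch.tensor([mapping[query_word]]))
    opt.zero_grad(); loss.backward(); opt.step()
```

Because the mapping changes every episode, memorizing a fixed lexicon fails; the seq2seq version described in the tweet applies the same principle to whole input and output sequences.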
@LakeBrenden
Brenden Lake
9 months
Published in Science today, @wkvong reports a dream experiment: he trained a multi-modal AI model from scratch on a subset of one child's experiences, as captured by headcam video. Shows how grounded word learning is possible in natural settings, as discussed in his thread:
@wkvong
Wai Keen Vong
9 months
1/ Today in Science, we train a neural net from scratch through the eyes and ears of one child. The model learns to map words to visual referents, showing how grounded language learning from just one child's perspective is possible with today's AI tools.
Tweet media one
56
723
3K
3
50
241
@LakeBrenden
Brenden Lake
5 years
It learns on its own "just like a human," who plays with a cube for 13,000 years while following the instructions of a symbolic solver @washingtonpost
@washingtonpost
The Washington Post
5 years
This robotic hand learned to solve a Rubik’s Cube on its own -- just like a human. Researchers say the feat moves robots one step closer to "human-level dexterity."
22
81
134
3
23
236
@LakeBrenden
Brenden Lake
3 years
NYU invites applications for an Assistant Professor position, joint with the Psychology Dept. and Center for Data Science (my two favorite units on campus!). The position is part of the new Minds, Brains, and Machines cluster. Applications due Dec. 1st. Please RT!
1
137
237
@LakeBrenden
Brenden Lake
4 years
My lab at NYU has a postdoc opening. We are starting a big project on self-supervised learning from child headcam videos, studying emergent category, object, and agent representations. Other directions also encouraged!...(1/2) Apply here
5
83
233
@LakeBrenden
Brenden Lake
5 years
This is an exciting problem that requires both high-level reasoning and low-level control, but the network doesn't "learn to solve a Rubik's cube" -- it learns to manipulate a Rubik's cube. The robot uses a symbolic solver to create a sequence of sub-goals. (1/2)
@OpenAI
OpenAI
5 years
We've trained an AI system to solve the Rubik's Cube with a human-like robot hand. This is an unprecedented level of dexterity for a robot, and is hard even for humans to do. The system trains in an imperfect simulation and quickly adapts to reality:
251
4K
11K
5
46
228
@LakeBrenden
Brenden Lake
3 years
GPT3 and people make the same gut errors on reasoning tests like "A bat and a ball cost $1.10..." Inspired by how people can override their gut, @Maxwell_Nye shows how to augment a neural "System 1" with a symbolic "System 2", no extra training required
Tweet media one
3
40
216
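As a rough sketch of the propose-and-check idea in the tweet above (the function names and the stubbed proposer are mine, not the paper's implementation), a cheap "System 1" generates candidate answers and a symbolic "System 2" accepts only candidates that satisfy the stated constraints.

```python
# Illustrative propose-and-check loop for the bat-and-ball problem. The
# proposer is a stub standing in for a neural "System 1" (an LM in the paper);
# the checker is a symbolic "System 2" that verifies the stated constraints.

def system1_propose():
    # Stub: the intuitive "gut" answer first, then other candidates a model might sample.
    return [0.10, 0.05, 1.00, 0.55]

def system2_check(ball_price):
    # Constraints: bat + ball = $1.10 and bat = ball + $1.00.
    bat_price = ball_price + 1.00
    return abs((bat_price + ball_price) - 1.10) < 1e-9

answer = next(c for c in system1_propose() if system2_check(c))
print(answer)  # 0.05 -- the gut answer 0.10 fails the check and is rejected
```

The point of the design is that no extra training is needed: the symbolic checker simply filters System 1's guesses.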
@LakeBrenden
Brenden Lake
6 years
The Omniglot challenge: A 3-year progress report. There has been genuine progress on one-shot learning, but neural nets are far from human-level learning on Omniglot, a challenge that requires performing many tasks with a single model w/ @rsalakhu Tenenbaum
Tweet media one
0
46
203
@LakeBrenden
Brenden Lake
5 years
Yoshua Bengio: "Another topic that is much on the minds of people in deep learning. Systematic generalization ... Today’s machine learning doesn’t know how to do that." I completely agree. It's a foundational problem that we need to be studying.
4
44
192
@LakeBrenden
Brenden Lake
7 months
Linda Smith, one of the scientists I most admire, writes in Nature "News and Views" about @wkvong 's work. Linda's conjecture: "problems of data-greedy AI could be mitigated by determining and then exploiting the natural statistics of infant experience"
Tweet media one
@wkvong
Wai Keen Vong
9 months
1/ Today in Science, we train a neural net from scratch through the eyes and ears of one child. The model learns to map words to visual referents, showing how grounded language learning from just one child's perspective is possible with today's AI tools.
Tweet media one
56
723
3K
1
44
188
@LakeBrenden
Brenden Lake
6 months
News & Views from Justin Wood, framing Emin's article perfectly: "To date, the nature-nurture debate largely stems from different intuitions about the nature of the experiences available for learning." Now, we can test these intuitions computationally!
Tweet media one
1
34
186
@LakeBrenden
Brenden Lake
5 years
New neuro-symbolic model can learn explicit rule systems from just a few examples. It combines a neural "proposer" and a symbolic "checker" to make both fast and robust inferences. The model can solve SCAN and learn number words. From @Maxwell_Nye
Tweet media one
0
36
172
@LakeBrenden
Brenden Lake
6 years
I'm excited that my lab at NYU has two new postdoc positions available immediately! Seeking candidates to work at the interface of machine learning and cognitive science. Please circulate. position 1: position 2:
0
109
148
@LakeBrenden
Brenden Lake
4 years
People make compositional generalizations in language, thought, and action, while the best AI systems struggle to do the same. How do people do it, and how can we build more compositional machine learners? Hear about some of our recent work (35 mins + Qs)
2
18
143
@LakeBrenden
Brenden Lake
4 years
How can we build machines that understand words as people do? Models must look beyond patterns in text to secure a more grounded, conceptual foundation for word meaning, with links to beliefs and desires while supporting flexible composition. w/ @glmurphy39
3
20
142
@LakeBrenden
Brenden Lake
10 months
Linda Smith is always inspiring and gave an amazing plenary at #NeurIPS2023 , “AI would do better for itself if it studied the structure of the data” that children receive.
Tweet media one
4
20
109
@LakeBrenden
Brenden Lake
3 years
New #ICLR2021 paper on Generative Neuro-Symbolic modeling: Learning compositional and causal generative programs from raw data, using powerful neural sub-routines. The best model yet on the Omniglot Challenge. By @ReubenFeinman . Poster today (paper below)
Tweet media one
2
16
136
@LakeBrenden
Brenden Lake
9 months
Now out in Cognition, @yanlisagezhou models how people learn compositional visual concepts through both Bayesian program induction and meta-learning. There are two stories in her work... (thread below)
3
27
134
@LakeBrenden
Brenden Lake
8 months
Out today in Cognition. My favorite finding: a net trained from scratch on one child's visual input (i.e., recent Science paper) shows beautiful categorical perception, e.g., emergent "above vs. below" relation category in model embeddings (left) for visual stimuli (right)
Tweet media one
@guyd33
Guy Davidson
8 months
New paper alert! There's a wealth of evidence that infants categorize spatial relations starting from 3-4 months onward. Can we evaluate neural networks in similar paradigms to babies, without special relational training? If so, what do we learn? 1/N
5
28
102
2
23
132
@LakeBrenden
Brenden Lake
3 years
Progress in NLP has been striking, so @glmurphy39 and I ask: Can recent models serve as psychological theories of word meaning? We propose 5 desiderata for building models that understand words more like people do. Now out at Psych Review (thread 1/7)
4
29
133
@LakeBrenden
Brenden Lake
9 months
Love the home page right now (this isn't the baby who participated in the current study, but in the next one :))
Tweet media one
2
17
130
@LakeBrenden
Brenden Lake
2 years
Deep nets are biased toward texture over shape, right? Well, the standard tests used in ML (left) differ quite a bit from those used in CogSci (right). In #CogSci2022 paper, @ARTartaglini shows that many deep nets prefer shape when tested like kids
Tweet media one
3
15
124
@LakeBrenden
Brenden Lake
5 years
My lab at NYU has a new postdoc position in self-supervised learning. Work with me to model child headcam videos, with the aim of learning knowledge of objects and agents from raw input. Please circulate.
1
52
122
@LakeBrenden
Brenden Lake
3 years
Self-supervised learning works well on vision+language data, but is it relevant to how children learn words? @wkvong considers 7 behavioral phenomena, finding recent algorithms capture many (and learn quickly), but fail when mutual exclusivity is required
2
21
119
@LakeBrenden
Brenden Lake
5 years
Neural nets struggle with systematic generalization, but can be improved through "meta seq2seq learning": training on many seq2seq problems to acquire the compositional skills needed for solving new problems. Come by #NeurIPS2019 poster 178 Thu. at 10:45
Tweet media one
1
20
120
@LakeBrenden
Brenden Lake
6 years
Reviewer #2 : "The paper is written well with a grammatical touch." I am glad we added that sprinkling of grammatical sentences :)
0
5
117
@LakeBrenden
Brenden Lake
5 years
People can ask interesting questions without expending much effort. A new model can quickly synthesize novel questions by combining neural nets and symbolic programs, and can learn to ask good questions without supervised examples. Preprint from Ziyun Wang
Tweet media one
2
16
115
@LakeBrenden
Brenden Lake
6 years
People utilize causal understanding for classification, yet deep nets often fail to do so. This preprint shows that people can do few-shot learning of recursive causal processes, in ways consistent with an ideal Bayesian program learner. w/ @spiantado
Tweet media one
0
23
106
@LakeBrenden
Brenden Lake
3 years
Look, it's Mom on the big screen!
Tweet media one
Tweet media two
1
2
108
@LakeBrenden
Brenden Lake
11 months
It's tempting to use folk-psych terms with LLMs (beliefs, intentions, etc.) because, until now, we have only had human-like conversations with humans. But this is a mistake. LLMs are role-players more than coherent agents. See this great paper from @mpshanahan
1
17
106
@LakeBrenden
Brenden Lake
1 year
Honored to be a part of this wonderful unit at NYU. Truly a pioneering interdisciplinary effort
@ylecun
Yann LeCun
1 year
10 years ago: September 1, 2013 was the official birthday of the NYU Center for Data Science. Probably the first such center in the US, it has flourished since its inception. CDS offers a PhD program, a Master's program, an undergraduate major, an undergraduate minor, and joint
16
19
421
0
3
89
@LakeBrenden
Brenden Lake
5 years
The system only partially succeeds, even after 13,000 years worth of simulated experience: failing 80% of the time when the cube is fully scrambled, and 40% from partial scrambles. Performance is even worse without a "smart" cube that communicates the state to the robot. (2/2)
3
8
91
@LakeBrenden
Brenden Lake
5 years
New "Grounded SCAN" benchmark covers 7 types of systematic generalization e.g. few-shot learning of "cautiously" ("walk to the red circle cautiously" requires looking both ways before moving). Baseline is just 5% correct after 50 examples. From @LauraRuis7
Tweet media one
1
13
86
@LakeBrenden
Brenden Lake
4 years
The new @ICEgov policy is appalling. International students enrich our classes, universities, and communities. All students are welcome here, and this cruel policy can't change this.
3
12
87
@LakeBrenden
Brenden Lake
5 years
NYU will offer a new undergraduate major in Data Science -- the interdisciplinary study of extracting knowledge from data. It's an opportunity to study computational approaches to intelligence and learning, with applications to the sciences and society. An exciting development!
@NYUDataScience
NYU Data Science
5 years
The @NYUDataScience undergraduate major has been approved by @NYSEDNews ! We have already begun to offer undergraduate courses for those looking to minor. You can check out our course selection on the @NyuAlbert "Public Course Search."
4
10
51
0
14
85
@LakeBrenden
Brenden Lake
5 years
I believe that perfect autonomous driving will be impossible until we build machines with intuitive psychology. "Developing software that can reliably anticipate what other drivers, pedestrians and cyclists are going to do, will be much more difficult."
8
15
84
@LakeBrenden
Brenden Lake
5 years
It’s misleading to lump together nature and nurture as just “experience.” The extent to which capabilities are built-in vs. learned is foundational in cognitive science and cognitive development, because it suggests different representations, architectures, algorithms, etc. (1/2)
@karpathy
Andrej Karpathy
5 years
A 4 year old child actually has a few hundred million years of experience, not 4. Their rapid learning/generalization is much less shocking/magical considering this fact.
94
368
3K
2
10
83
@LakeBrenden
Brenden Lake
3 years
Exciting new initiative at NYU focused on understanding and engineering intelligence.
@NYUDataScience
NYU Data Science
3 years
We’re excited to announce the launch of the Minds, Brains, and Machines Initiative! The project, led by @lakebrenden & @todd_gureckis , seeks to promote research at the intersection of human & machine intelligence. More information at
Tweet media one
5
41
242
0
6
81
@LakeBrenden
Brenden Lake
4 years
These days, babies are seeing many more masked faces. This is a ready-made image completion problem. Are babies doing BERT-style, self-supervised learning to complete the masked faces?
6
4
80
@LakeBrenden
Brenden Lake
6 years
Kudos to OpenAI for releasing #OpenAIFive to the public for testing. Despite 45,000 years worth of training for the bot (a quarter of human history), the human community is starting to win after a few hours. A clear victory for humans?
4
14
79
@LakeBrenden
Brenden Lake
6 years
Recurrent neural nets have trouble combining familiar words in new ways, such as inferring the meaning of "around right" from the meaning of "around" and "right." @JoaoLoula 's new paper is on arXiv, with Marco Baroni and me
0
22
80
@LakeBrenden
Brenden Lake
5 years
"Systematic generalization" and "compositionality" were buzzwords at #NeurIPS2019 . At the meta-learning workshop, I discussed compositionality in human language and thought, and how understanding it would inform machine intelligence. Video begins 26:35
1
12
75
@LakeBrenden
Brenden Lake
6 years
We re-released Omniglot with the drawing demonstrations in a more accessible format. Human learners use this compositional and causal structure. We hope models will use it too, as it's the only known way to get human-level learning w/ @rsalakhu Tenenbaum
Tweet media one
1
19
78
@LakeBrenden
Brenden Lake
4 years
Busy day for the lab at #NeurIPS2020 ! We hope you will stop by our posters in Session 6 (Thurs noon EST), and hear about our work at the interface of cognitive science and AI. We would love to discuss our research with you! Poster links in thread.
Tweet media one
2
7
77
@LakeBrenden
Brenden Lake
3 years
So glad to finally get the lab together in person this afternoon! I introduced our newest member, developmental expert Logan Kwan Lake! (the little guy on the left). Also bittersweet since we are saying goodbye to @gandhikanishk , who is starting his PhD at Stanford next year.
Tweet media one
4
3
75
@LakeBrenden
Brenden Lake
3 years
People can ask interesting questions without much effort. This model treats questions as compositional, symbolic programs and uses rapid neural synthesis. It also learns to ask good questions without supervised examples. #CogSci2021 paper from Ziyun Wang
Tweet media one
2
11
75
@LakeBrenden
Brenden Lake
2 years
Hiking with some members of the NYU Human & Machine Learning Lab. It's hard to beat the Hudson Valley in the Fall!
Tweet media one
Tweet media two
2
2
73
@LakeBrenden
Brenden Lake
3 years
Wow, that's my wife up there on the billboards in Times Square!! New York, New York.. If you can make it there, you’ll make it anywhere. Feeling proud of @tammykwan and her company @CognitiveToyBox
3
2
73
@LakeBrenden
Brenden Lake
5 years
More robust ImageNet classifiers using elements of human visual cognition: an episodic memory and a shape bias. New paper by Emin Orhan extending his work on cache-based object recognition models
2
10
72
@LakeBrenden
Brenden Lake
5 years
The source code for meta seq2seq learning is now available through @facebookai . You can reproduce the experiments from my #NeurIPS2019 paper, or run memory-based meta learning on other seq2seq problems.
@LakeBrenden
Brenden Lake
5 years
Neural nets struggle with systematic generalization, but can be improved through "meta seq2seq learning": training on many seq2seq problems to acquire the compositional skills needed for solving new problems. Come by #NeurIPS2019 poster 178 Thu. at 10:45
Tweet media one
1
20
120
0
17
67
@LakeBrenden
Brenden Lake
5 years
Paper with @spiantado is now out in @CompBrainBeh . We find that people can learn recursive causal processes from just a few examples, consistent with an ideal Bayesian program learner.
@CompBrainBeh
Computational Brain & Behavior
5 years
new in @CompBrainBeh : a Bayesian program learning model that searches the space of programs for the best explanation of the observations, explaining how people learn concepts from only a few examples
0
2
19
0
12
71
@LakeBrenden
Brenden Lake
3 years
New benchmark for few-shot learning of compositional concepts (e.g., "all objects are blue and have equal size"), and new difficulty metric for systematic generalization splits. From @rama_vedantam #ICML2021 paper poster today
0
12
68
@LakeBrenden
Brenden Lake
1 year
@IntuitMachine I am glad you are excited too, and I like a lot about your summary, especially the importance of integrating cogsci and ML. But come on, this is *not* confirmation that AGI is here!
2
1
68
@LakeBrenden
Brenden Lake
5 years
Special issue on AI is now out in the journal Current Opinion in Behavioral Sciences. Many interesting papers to check out,
0
28
65
@LakeBrenden
Brenden Lake
4 years
Incorporating cognitive ingredients can strengthen deep RL: adding object masks to Frostbite leads to higher scores and better generalization to novel test scenarios. An agent surrounded by crabs now knows it's toast! New #CogSci2020 paper from @guyd33
Tweet media one
0
9
63
@LakeBrenden
Brenden Lake
6 years
My lab at NYU has a new website! Thank you @ReubenFeinman for the design. Check out the research we are doing and see the "Apply" tab if you would like to join us. Human & machine learning lab
Tweet media one
1
9
63
@LakeBrenden
Brenden Lake
1 year
@AndrewLampinen Thanks for the comments Andrew! Indeed, I am a fan of your papers below. Yes, my perspective on compositionality is changing.. I did think it needed to be built-in and I don't believe that anymore.
2
0
61
@LakeBrenden
Brenden Lake
3 years
GPT3 and people make similar gut errors on reasoning tests like "A bat and a ball cost $1.10.." Inspired by how people can override their gut, @Maxwell_Nye shows how to augment a neural "System 1" with a symbolic "System 2" See #NeurIPS2021 poster tonight 7:30pm EST (paper below)
Tweet media one
1
2
60
@LakeBrenden
Brenden Lake
3 years
Traditional categorization models use highly abstract stimulus representations. Do classic models like ALCOVE "just work" with raw images if you add a CNN front-end? Not really! Hear lessons learned from @ARTartaglini #CogSci2021 poster 3-A-4 today Paper
Tweet media one
0
9
60
@LakeBrenden
Brenden Lake
1 year
Well-balanced Nature news feature about the current challenges of evaluating intelligence in LLMs, featuring ConceptArc and quotes from @MelMitchell1 , @TomerUllman , @sleepinyourhat , me, and others
1
10
60
@LakeBrenden
Brenden Lake
7 months
"The perspective of a child could help AI learn language—and tell us more about how humans manage the same feat." The Atlantic reports on recent studies from our lab, with comments from baby headcam pioneers Linda Smith, @Chen_Yu_CY , @mcxfrank
@TheAtlantic
The Atlantic
7 months
Capturing the everyday experience of a small cohort of babies could help scientists train AI to learn language more like a toddler does. @sarahzhang reports:
2
6
16
3
9
57
@LakeBrenden
Brenden Lake
3 years
Reminder that NYU is hiring an Assistant Prof. of Psychology and Data Science this year. Applications due in a little over 2 weeks!
@LakeBrenden
Brenden Lake
3 years
NYU invites applications for an Assistant Professor position, joint with the Psychology Dept. and Center for Data Science (my two favorite units on campus!). The position is part of the new Minds, Brains, and Machines cluster. Applications due Dec. 1st. Please RT!
1
137
237
1
20
56
@LakeBrenden
Brenden Lake
3 years
A vision model trained through a baby's eyes explains results from infant relation categorization: above/below is easier than between, relations w/ consistent objects are easier, etc. Many relational nets also tested. New #CogSci2021 paper from @guyd33
0
12
56
@LakeBrenden
Brenden Lake
8 months
In just a minute, @wkvong is being interviewed on NPR Science Friday about training models through the eyes and ears of a child. Listen in here: Also, see the NPR link here:
2
12
57
@LakeBrenden
Brenden Lake
4 years
I'm looking forward to joining @ev_fedorenko and @MelMitchell1 at this event! I will be speaking about my recent paper with Greg Murphy on "Word meaning in minds and machines" (). Event registration is open to the public.
@columbiacss
Center for Science and Society
4 years
Neural language models like GPT-3 can generate human-like text that is grammatically correct, coherent, and topically relevant. How does language processing in machines compare to language processing in humans? RSVP to learn more:
Tweet media one
0
12
38
2
10
56
@LakeBrenden
Brenden Lake
5 years
Our Omniglot progress report is now out! Neural nets have made important progress on Omniglot, but they are still far from human-like concept learning. The renewed challenge is to develop models that can perform many tasks together. w/ @rsalakhu Tenenbaum
1
10
54
@LakeBrenden
Brenden Lake
11 months
Big language models lead to big drama. Nice article discussing progress in engineering more efficient, small models, and opportunities for modeling child language acquisition through much smaller, more natural input data
0
7
54
@LakeBrenden
Brenden Lake
9 months
Video interview with @wkvong and me regarding his new paper, training a neural net through a child's eyes and ears
@nyuniversity
New York University
9 months
What if AI systems had to learn language NOT by gobbling up all the text on the whole internet, but rather the way babies do? NYU researchers tried training a neural network solely on what a single child saw and heard in day-to-day life:
2
15
69
2
7
52
@LakeBrenden
Brenden Lake
1 year
Scientific American article describing our meta-learning for compositionality approach and its implications. Excellent (and measured) science journalism by @lauren_leffer , with thoughts from @rsalakhu , Armando Solar-Lezama, and Paul Smolensky
@sciam
Scientific American
1 year
New training method helps AI generalize like people do
3
19
75
2
5
49
@LakeBrenden
Brenden Lake
1 year
MLC's demonstration raises new developmental questions. What if SG is learned? What if children have similar opportunities and incentives for practice in natural experience? If so, then MLC could explain the origin of these remarkable human abilities. (8/8)
4
4
48
@LakeBrenden
Brenden Lake
5 years
We're inviting applications for our Faculty Fellows position at the Center for Data Science @NYUDataScience . A unique, independent, interdisciplinary postdoctoral fellowship -- check it out! (Due Dec. 23)
0
29
48
@LakeBrenden
Brenden Lake
2 years
Love this well-balanced article ( @MelMitchell1 ). This sums it up well: "Humans, unlike machines, seem to have a strong innate drive for [conceptual] understanding... requiring few data, minimal or parsimonious models, ... and strong mechanistic intuition"
2
8
47
@LakeBrenden
Brenden Lake
6 years
"The Sweet Lesson of the history of AI is that, while finding the right inductive biases is hard, doing so enables massive progress on otherwise intractable problems." Well-said... from @shimon8282 response to Rich Sutton's new blog post
@shimon8282
Shimon Whiteson
6 years
Rich Sutton has a new blog post entitled “The Bitter Lesson” () that I strongly disagree with. In it, he argues that the history of AI teaches us that leveraging computation always eventually wins out over leveraging human knowledge.
18
141
478
0
3
47
@LakeBrenden
Brenden Lake
5 years
The best part of conferences is catching up with friends like these two. We should dig up a picture from the 2010 machine learning summer school in Sardinia, where we really got to know each other
@celestekidd
Celeste Kidd
5 years
Tweet media one
1
1
43
2
1
47
@LakeBrenden
Brenden Lake
1 year
NYU Psychology is hiring in Higher Cognition this year. Deadline Sept. 1. We seek candidates in any area of human cognition... thinking, reasoning, language, analogy, knowledge representation, creativity, categorization, perception...
0
22
45
@LakeBrenden
Brenden Lake
5 months
More commentary on Emin's recent work showing "the computational plausibility that children can acquire sophisticated visual representations from natural input data without [strong] biases", now in TICS from Lei Yuan
Tweet media one
@LakeBrenden
Brenden Lake
6 months
News & Views from Justin Wood, framing Emin's article perfectly: "To date, the nature-nurture debate largely stems from different intuitions about the nature of the experiences available for learning." Now, we can test these intuitions computationally!
Tweet media one
1
34
186
0
9
45
@LakeBrenden
Brenden Lake
10 months
Looking forward to #NeurIPS2023 in-person, for the first time in a few years! ✈️ If you're interested in chatting about research, collaborations, or graduate school, I'd be glad to meet you. You can send me a DM or email.
2
2
44
@LakeBrenden
Brenden Lake
4 years
I'm relieved this heartless policy has been rescinded. Thanks everyone for speaking out, signing petitions, and standing up for our values.
0
8
44
@LakeBrenden
Brenden Lake
1 year
I always look forward to the fall lab retreat!
Tweet media one
1
1
45
@LakeBrenden
Brenden Lake
3 years
New work at #NeurIPS2021 led by Kanishk as a collaboration between our lab and @LDM_NYU . Humans are adept at reasoning about other agents, even from infancy, but this ability is underexplored in AI today. Take up the BIB challenge to help move machine intelligence forward!
@gandhikanishk
Kanishk Gandhi
3 years
Before their first birthdays, infants intuitively reason about the goals, preferences, and actions of other agents. Can machines achieve such generalizable, flexible reasoning about agents? To find out, we present the Baby Intuitions Benchmark (BIB) @NeurIPSConf #NeurIPS2021 1/6
Tweet media one
1
28
167
0
4
44
@LakeBrenden
Brenden Lake
4 years
Human concepts are rich in structure and statistics, yet models rarely capture both. @ReubenFeinman uses neuro-symbolic models to generate new concepts with complex compositional, causal, and statistical structure. Poster P-2-361 2morrow. Video #cogsci2020
0
2
42
@LakeBrenden
Brenden Lake
5 years
"If you ever feel cynical about human beings, a good antidote is to talk to artificial-intelligence researchers... they’re always talking about how marvelous the human brain is, how adaptable, how efficient, how infinite in faculty."
1
3
38
@LakeBrenden
Brenden Lake
4 years
This work was possible because of the *amazing* SAYCam dataset of baby headcam videos. Please check out the SAYCam paper here from Jess Sullivan, Michelle Mei, @AmyPerfors @ewojcik @mcxfrank (2/2)
0
1
39
@LakeBrenden
Brenden Lake
10 months
Training on more realistic input is central to understanding what today's ML approaches can tell us about how children develop and acquire language. Can't wait to dive into these (31!) papers, wow, what a turnout for the challenge!
@a_stadt
Alex Warstadt
10 months
LLMs are now trained on >1000x as much language data as a child, so what happens when you train a "BabyLM" on just 100M words? The proceedings of the BabyLM Challenge are now out along with our summary of key findings from 31 submissions: Some highlights 🧵
Tweet media one
13
200
1K
0
3
40
@LakeBrenden
Brenden Lake
3 years
Multi-modal nets are popular, but are they good psychological models? @wkvong tests them on a range of behavioral findings (7 Exps!). Surprisingly they can reach human-level accuracy from one pass through the stimuli! But they stumble on mutual exclusivity
1
8
39