Freda Shi Profile
Freda Shi

@fredahshi

Followers
2K
Following
2K
Media
22
Statuses
415

https://t.co/ZkHLb0G4Cw | Assistant Professor @UWCheritonCS, Faculty Member & Canada CIFAR AI Chair @VectorInst | Past: PhD @TTIC_Connect, BS @PKU1898

Waterloo, ON
Joined December 2016
@fredahshi
Freda Shi
3 months
I'm probably (not too) late in joining the migration, but I'm here:
0
0
2
@fredahshi
Freda Shi
2 years
Personal update: I'll be starting in July 2024 as an Assistant Professor @UWCheritonCS and a Faculty Member @VectorInst! Looking forward to working with all the amazing folks! Prospective students: if you are interested in NLP and/or comp. linguistics, please consider applying!
33
19
318
@fredahshi
Freda Shi
8 months
šŸšØLong thread warning: excited to share that I defended my PhD thesis earlier in May! Here's my thesis, Learning Language Structures through Grounding: 1/
Tweet media one
30
11
307
@fredahshi
Freda Shi
2 years
Large language models show reasoning abilities in English with chain-of-thought prompting - how are their multilingual reasoning abilities? New preprintšŸ“„: Language models are multilingual chain-of-thought reasoners. (1/n)
Tweet media one
5
51
267
@fredahshi
Freda Shi
3 years
Honored to receive the 2021 Google PhD fellowship in natural language processing. Thanks @GoogleAI for the support! Kudos to my advisors and mentors: thanks for teaching me everything over the past years, and for showing me concrete examples of the best researchers---yourselves!
@GoogleAI
Google AI
3 years
Continuing our tradition of supporting outstanding graduate students in their pursuit of research in computer science and related fields, we congratulate our 13th annual PhD Fellowship Program recipients! See the list of 2021 Fellowship recipients below:
9
4
178
@fredahshi
Freda Shi
4 months
Iā€™m extremely honored to be appointed as a CIFAR AI chair and grateful to everyone who offered generous help along the path. Even more exciting news that makes today super special to me: I just officially received my doctorate hood!
@CIFAR_News
CIFAR
4 months
Freda Shi (@fredahshi @UWaterloo @UWCheritonCS) works on computational linguistics and natural language processing, striving towards a deeper understanding of human language comprehension in order to make more efficient and effective AI systems.
19
10
162
@fredahshi
Freda Shi
1 year
šŸšØ(Not Really) Old Paper AlertšŸšØ: sharing our 2-year-old NeurIPS paper that Iā€™m still quite excited about. We learn grounded, neuro-symbolic CCGs from multi-modal data and demonstrate nearly perfect compositional generalization to unseen sentences and scenes. (1/)
Tweet media one
2
17
107
@fredahshi
Freda Shi
2 years
Late post but letā€™s do this! Happy to share our #EMNLP2022 work on translating natural language to executable code with execution-aware minimum Bayes risk decoding. šŸ“Paper: šŸ“‡Code: šŸ“¦Data (codex output): (1/n)
Tweet media one
3
19
103
@fredahshi
Freda Shi
5 years
Just got a paper w/ scores 4, 4, 4 rejected by #acl2020nlp, but the comments from the meta-reviewer and all reviewers are super, super constructive. Would like to say thank you to them all!
1
1
89
@fredahshi
Freda Shi
6 months
I'm heading to Bangkok for ACL 2024āœˆ Looking forward to meeting old and new friends. I'd also be glad to chat about PhD opportunities in my group at the University of Waterloo, and job search experience in Canada. Feel free to send me an email to secure a slot! 1/
7
8
68
@fredahshi
Freda Shi
3 months
Iā€™d always be proud of receiving my PhD from TTIC, a magical place that gives you the most unique (in a positive sense, of course!) experience among all PhD programs. Do apply to @TTIC_Connect!
@Matthew_Turk
Matthew Turk
3 months
A reminder that there is NO APPLICATION FEE to apply for TTICā€™s PhD program
0
2
43
@fredahshi
Freda Shi
6 months
Happening right now!
Tweet media one
@fredahshi
Freda Shi
6 months
My coauthors and I will present 2 papers at ACL: 1. Structured Tree Alignment for Evaluation of (Speech) Constituency Parsing, Aug 14 10:30, Poster Session 6. Calculating the similarity of two "speech C-parse trees" with continuous time spans as nodes. 2/
0
2
42
@fredahshi
Freda Shi
6 months
I'm taking a few international interns through the Mitacs Globalink program! Interested in working with me next summer? Learn more at and apply by Sep 18, 2024!
1
7
42
@fredahshi
Freda Shi
2 years
Though time is quite limited, I'm happy to spend most of my weekend reviewing for #iclr2023 - my assigned papers are all interesting, carefully written, and relevant (to me), like most ICLR papers I've reviewed before - kudos to the ICLR matching system (and my ACs)!
1
0
39
@fredahshi
Freda Shi
2 years
#ACL2023 attendees: Welcome to Canada! šŸ‡ØšŸ‡¦ I'll be at the conference from Monday to Wednesday. First time attending a conference without presenting a paper, and Iā€™m sure Iā€™ll enjoy all the cool presentations. Old & new friends: please donā€™t hesitate to come & say hi!
1
0
38
@fredahshi
Freda Shi
4 months
I'm very excited about this collaborative effort: it's known that vision-language models perform poorly at recognizing spatial relations---in this work, we, for the first time, systematically analyze VLM behaviors from the perspective of reference-frame ambiguity!
@ziqiao_ma
Martin Ziqiao Ma
4 months
Do Vision-Language Models represent space, and how? Spatial terms like "left" or "right" may not be enough to match images with spatial descriptions, as we often overlook the different frames of reference (FoR) used by speakers and listeners. See Figure 1 for examples!
Tweet media one
Tweet media two
1
5
37
@fredahshi
Freda Shi
1 year
Looking forward to visiting tomorrow!
@michigan_AI
MichiganAI
1 year
šŸ“¢Delighted to host @fredahshi's #AI Seminar on "Learning Syntactic Structures from Visually Grounded Text and Speech"! TOMORROW, OCT. 24 @ 4pm ET:
Tweet media one
1
1
36
@fredahshi
Freda Shi
1 year
Yes, we are looking for PhD students at Waterloo! Come join us ā€” apply by Dec 1!
@yuntiandeng
Yuntian Deng
1 year
I am hiring NLP/ML PhD students at UWaterloo, home to 5 NLP professors! Apply by Dec 1. Strong consideration will be given to those who can tackle the below challenge: Can we use LM's hidden states to reason about multiple problems simultaneously? Retweets/shares appreciatedšŸ„°
Tweet media one
1
0
36
@fredahshi
Freda Shi
2 years
Finally, I'll be presenting this work at EMNLP 2022 in person! Cannot wait to meet old and new friends - come and say hi!
@fredahshi
Freda Shi
2 years
Late post but letā€™s do this! Happy to share our #EMNLP2022 work on translating natural language to executable code with execution-aware minimum Bayes risk decoding. šŸ“Paper: šŸ“‡Code: šŸ“¦Data (codex output): (1/n)
Tweet media one
0
4
32
@fredahshi
Freda Shi
2 years
This has been one of the most exciting posters Iā€™ve visited at #EMNLP2022. Neat results showing syntax and semantics are learnably separated in spectra!
@mxmeij
Max
2 years
For #EMNLP2022, we (w/ @robvanderg, @barbara_plank) look through different, rainbow-colored glasses to find linguistic timescale profiles for 7 #NLProc tasks across 6 languages šŸŒˆ šŸ“‘ šŸ“½ļø šŸ’¬ 10th Dec 9:00 at Poster Session 7 & 8
Tweet media one
0
0
21
@fredahshi
Freda Shi
6 months
My coauthors and I will present 2 papers at ACL: 1. Structured Tree Alignment for Evaluation of (Speech) Constituency Parsing, Aug 14 10:30, Poster Session 6. Calculating the similarity of two "speech C-parse trees" with continuous time spans as nodes. 2/
3
2
20
@fredahshi
Freda Shi
1 year
Are there any resources/studies showing which words (in any language) are more likely to be mispronounced (by either native speakers or L2 learners)? Any pointer is appreciated!
2
0
17
@fredahshi
Freda Shi
4 months
Starting this year, I'm also participating as an advisor in the collaborative program between @CIFAR_News and @ELLISforEurope. Consider applying to the ELLIS PhD program if you are interested in joint supervision from me and other ELLIS advisors!
@ELLISforEurope
ELLIS
4 months
The #ELLISPhD application portal is now open! Apply to top #AI labs & supervisors in Europe with a single application, and choose from different areas & tracks. The call for applications: Deadline: 15 November 2024. #PhD #PhDProgram #MachineLearning #ML
Tweet media one
0
1
18
@fredahshi
Freda Shi
6 months
The #CMCL workshop today at #ACL2024 was extremely interesting and enlightening. I found myself enjoying this workshop even more than the main conference! Strongly recommend that everyone interested in CL/NLP check it out. 1/
1
3
17
@fredahshi
Freda Shi
2 years
I very much enjoyed this paper, and of course, the poster! Large-sized data and LLMs present a fantastic opportunity for studying cultural differences.
@_emliu
Emmy Liu
2 years
"ą¤†ą¤œ-ą¤•ą¤² NLP Research ą¤•ą„‡ ą¤øą¤¾ą¤„ ą¤¬ą¤Øą„‡ ą¤°ą¤¹ą¤Øą¤¾ ą¤‰ą¤¤ą¤Øą¤¾ ą¤¹ą„€ ą¤†ą¤øą¤¾ą¤Ø ą¤¹ą„ˆ ą¤œą¤æą¤¤ą¤Øą¤¾ ą¤•ą¤æ ą¤®ą¤¾ą¤Øą¤øą„‚ą¤Ø ą¤®ą„†ą¤‚ ą¤­ą„€ą¤—ą¤Øą„‡ ą¤øą„‡ ą¤¬ą¤šą„‡ ą¤°ą¤¹ą¤Øą¤¾!" . Did you understand? How about LMs? Our #ACL2023 Findings paper explores multilingual models' cultural understanding through figurative language in 7 langs šŸŒŽ(1/9).
Tweet media one
2
0
17
@fredahshi
Freda Shi
4 years
And she feels so lucky to be a student at @TTIC_Connect ;).
@TTIC_Connect
TTIC
4 years
Third-year PhD student Freda Shi bridges the gap between linguistics and computer science in her natural language processing research. Follow the link to learn more: #computerscience #womeninstem
Tweet media one
1
0
14
@fredahshi
Freda Shi
2 years
If you're at ICML, chat with @xinyun_chen_ about this paper at Poster Session 3, 11am tomorrow!
@dmdohan
David Dohan
2 years
Come by the 11am posters on Wednesday to learn how irrelevant context affects LLMs:
Tweet media one
0
1
13
@fredahshi
Freda Shi
3 months
We are hiring šŸ‡ØšŸ‡¦.
@thegautamkamath
Gautam Kamath
3 months
The Cheriton School of Computer Science @UWCheritonCS at the University of Waterloo is the best computer science program in Canada šŸ‡ØšŸ‡¦. We are hiring multiple tenure-track faculty positions, with a focus on data systems @dsg_uwaterloo. Deadline November 30.
Tweet media one
Tweet media two
0
1
15
@fredahshi
Freda Shi
2 years
Surprisingly, PaLM-540B shows decent multilingual reasoning ability, solving >40% of problems in any of the 10 investigated languages, including underrepresented ones (such as Bengali and Swahili) that cover <0.01% of the pretraining data tokens. (3/n)
Tweet media one
1
2
14
@fredahshi
Freda Shi
3 years
Back in 2017, when thinking about visually grounded syntax induction (), I dreamed for a second that we could parse images in similar ways---apparently it was too difficult for me then (and now), so I'm super excited to see this! Congrats on the nice work!
@xiaolonw
Xiaolong Wang
3 years
Introducing #CVPR2022 GroupViT: Semantic Segmentation Emerges from Text Supervision šŸ‘Øā€šŸ‘©ā€šŸ‘§. Without any pixel labels ever, our GroupViT can group pixels bottom-up into open-vocabulary semantic segments. The only training data is 30M noisy image-text pairs.
0
0
14
@fredahshi
Freda Shi
2 years
In the coming year, I'll finish my PhD @TTIC_Connect, and visit @roger_p_levy. Huge thanks to my advisors @kevingimpel and Karen, my mentors @LukeZettlemoyer, @sidawxyz and @denny_zhou, and everyone who helped me along the way!
1
0
13
@fredahshi
Freda Shi
2 years
Again, welcome to check out our paper and dataset for more details! PaperšŸ“„: DatašŸ’¾: (8/n)
1
0
12
@fredahshi
Freda Shi
3 years
Same here. Even worse: I feel I'm probably not qualified to review some of them -- no experience in this domain, not quite familiar with recent work, no labmates or close friends working on it -- while relevant papers (I thought) were not assigned to me.
@yufanghou
Yufang Hou
3 years
Got 5 papers to review for ARR today, all from different AEs, the due date is Dec 16! Logged into the system, there's no option to reject the assignment or discuss with AEs to extend the deadline/find a replacement. I wonder what's the average review load for NovšŸ¤”@ReviewAcl.
1
1
12
@fredahshi
Freda Shi
3 months
Interested in how VLMs represent spatial relations and why thatā€™s super hard? This is *the* paper to read.
@zheyuanzhang99
Brian Zheyuan Zhang
3 months
Do Vision-Language Models represent space, and how? Introducing šŸ›‹ļøCOnsistent Multilingual Frame Of Reference Test (COMFORT), an evaluation protocol to assess spatial reasoning in VLMs under ambiguities. šŸŒ šŸ“„ MorešŸ‘‡
Tweet media one
0
0
13
@fredahshi
Freda Shi
4 months
I look forward to contributing more to the NLP/CL community, and to pushing forward CS based in Waterloo, Ontario, and Canada!
1
0
11
@fredahshi
Freda Shi
2 years
1. Chain-of-thought prompting is essential for the reasoning performance of both GPT-3 and PaLM; notably, reasoning steps in English (EN-CoT) almost always outperform those in the same language as the problem (Native-CoT). (5/n)
Tweet media one
1
1
10
@fredahshi
Freda Shi
2 years
In this work, we introduce the Multilingual Grade School Math (MGSM) dataset, by manually translating 250 English GSM8K test examples to 10 typologically diverse languages, and investigate language modelsā€™ reasoning ability with it. (2/n).
1
0
10
@fredahshi
Freda Shi
2 years
The multilingual reasoning abilities of language models also extend to other tasks: on XCOPA, a multilingual commonsense reasoning dataset, PaLM-540B sets a new state of the art (89.9% average accuracy) using only 4 examples, outperforming the prior best by 13.8%. (7/n).
1
0
9
@fredahshi
Freda Shi
8 months
Prior work has mainly categorized grounding into 2 types: semantic grounding (finding meanings for forms; Harnad, 1990) and communicative grounding (finding common ground in dialogue; Clark and Brennan, 1991 + earlier work in pragmatics). 3/
Tweet media one
1
0
10
@fredahshi
Freda Shi
2 years
Joint work with Mirac Suzgun, @markuseful, Xuezhi Wang, Suraj Srivats, @CrashTheMod3, @hwchung27, @yitayml, @seb_ruder, @denny_zhou, @dipanjand, @_jasonwei (9/9).
0
0
8
@fredahshi
Freda Shi
6 years
I'll be talking about Visually Grounded Neural Syntax Acquisition, one of the listed papers, on Monday 4:00 pm at Session 3E! This is joint work with Jiayuan Mao, @kevingimpel and Karen Livescu. Paper: Project page:
@ACL2019_Italy
ACL2019
6 years
We are delighted to announce the list of papers that have been nominated as candidates for ACL 2019 Best Paper Awards! Check the list at #acl2019nlp.
0
1
10
@fredahshi
Freda Shi
8 months
2 great surveys centered around the above 2 senses of grounding, respectively: in the Harnad (1990) sense, by @ybisk, @universeinanegg, @_jessethomason_ and colleagues; in the Clark & Brennan (1991) sense, by folks incl. @ybisk. 4/
Tweet media one
1
0
10
@fredahshi
Freda Shi
8 months
In my thesis, I discuss a family of tasks: learning language structures from supervision in other sources (through grounding), and corresponding methods to deal with each considered task. As many have recognized, grounding is a highly ambiguous term. More in šŸ§µ. 2/
1
0
10
@fredahshi
Freda Shi
8 months
Thanks to @McAllesterDavid, our anti-grounding prof at @TTIC_Connect: thank you for all the inspiring conversations and writings that push back on the idea of grounding, e.g., I hope (and believe) my grounding above is not what you are against :) 10/
1
0
9
@fredahshi
Freda Shi
2 years
Both Heinrich (who didnā€™t wanna share a seat with others) and I enjoyed your excellent defense talk ā€” huge congrats Dr. Kanishka Misra!
@kanishkamisra
Kanishka Misra šŸŒŠ
2 years
Oh and my favorite photo from the defense was taken by @fredahshi -- I hope everyone here enjoys it as much as I did (what a great cat!). 5/6
Tweet media one
1
0
9
@fredahshi
Freda Shi
8 months
An interesting and counterintuitive example of grounding under this formalization is GroupViT by @Jerry_XU_Jiarui, @xiaolonw, and folks, where an image segmentation model is trained from textual supervision---vision can be grounded in language, too! 8/
Tweet media one
1
0
9
@fredahshi
Freda Shi
8 months
@ybisk @universeinanegg @_jessethomason_ In my thesis, I proposed the following definition of grounding, unifying all cases above. Grounding means processing the primary data X with supervision from source Y (the ground), where the mutual information I(X; Y) > 0, so we can find meaningful connections between them. 6/
1
0
9
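The information-theoretic condition in the definition above can be written out explicitly; a sketch in the thesis's notation, with X the primary data and Y the ground:

```latex
% Grounding requires the ground Y to carry information about X:
% the mutual information is positive, i.e. knowing X reduces uncertainty about Y.
I(X; Y) \;=\; H(Y) - H(Y \mid X) \;>\; 0
```

Decomposing I(X; Y) this way also makes the later point about H(Y | X) natural: even when the ground is informative, it typically retains complexity not predictable from X alone.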
@fredahshi
Freda Shi
8 months
@ybisk @universeinanegg @_jessethomason_ One exception is acoustically grounded word embeddings (e.g., Settle et al., 2019), where they encode acoustic knowledge into word embeddings. Perhaps no one thinks the pronunciation of a word is its meaning, but still, this is an acceptable usage of "grounding." 5/
Tweet media one
1
0
9
@fredahshi
Freda Shi
8 months
@ybisk @universeinanegg @_jessethomason_ In real-world scenarios, the conditional entropy H(Y|X) is almost always > 0, meaning that the ground is usually more complicated, from certain perspectives, than what is to be grounded. 7/
1
0
8
@fredahshi
Freda Shi
6 months
Really an interesting paper and a well-deserved award! Congratulations!
@mhahn29
Michael Hahn
6 months
Excited and honored to receive a Best Paper Award for our work on the inductive bias of the transformer architecture with @broccolitwit šŸŒŸšŸŽ‰ #ACL2024NLP
Tweet media one
1
0
8
@fredahshi
Freda Shi
2 years
@yoavartzi Congrats on the Best Paper Award!! Super well deserved!
0
0
8
@fredahshi
Freda Shi
3 months
Najoung and her lab always work on problems that greatly attract me. Go and apply to be her student! p.s. From the logo, I think I'll claim the most important property of language is recursion.
@najoungkim
Najoung Kim šŸ« 
3 months
tinlab at Boston University (with a new logo! šŸŖ„) is recruiting PhD students for F25 and/or a postdoc! Our interests include meaning, generalization, evaluation design, and the nature of computation/representation underlying language and cognition, in both humans and machines. ā¬‡ļø
Tweet media one
1
0
8
@fredahshi
Freda Shi
2 years
2. When example problems in the same language as the problem of interest are available, use them for prompting. If not, use examples from a diverse set of languages. (6/n)
Tweet media one
1
0
6
@fredahshi
Freda Shi
8 months
I'm extremely thankful to my advisors Karen and @kevingimpel & my committee members and mentors @lukezettlemoyer and @roger_p_levy, for the great questions and suggestions on my thesis. 12/
1
0
7
@fredahshi
Freda Shi
2 years
In addition, we analyze the effect of the choice of prompting examples and prompting techniques, and highlight the following takeaways. (4/n)
1
0
6
@fredahshi
Freda Shi
7 months
@denny_zhou @kchonyc I believe both explanations are valid, although marginalizing over reasoning paths that share the same result is probably the most natural way to think about it. My thesis (P123) discusses three explanations of SC and MBR-Exec ().
1
0
6
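The marginalization view mentioned above can be made concrete. A minimal Python sketch of self-consistency (SC) over hypothetical sampled (reasoning, answer) pairs; the sample data and function name are illustrative, not from the thesis:

```python
from collections import Counter

def self_consistency(samples):
    """Marginalize over reasoning paths that share the same final answer:
    the answer supported by the most sampled paths wins (majority vote)."""
    counts = Counter(answer for _reasoning, answer in samples)
    answer, _ = counts.most_common(1)[0]
    return answer

# Hypothetical samples from a language model: two paths agree on 42.
samples = [("path A", 42), ("path B", 42), ("path C", 17)]
print(self_consistency(samples))  # → 42
```

MBR-Exec applies the same idea to code generation, with "same final answer" replaced by "same execution result."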
@fredahshi
Freda Shi
8 months
@McAllesterDavid @TTIC_Connect Also here's a quick guide for readers interested in the additional content covered by my thesis. 11/
Tweet media one
1
0
6
@fredahshi
Freda Shi
8 months
To my friends, mentors, coauthors, and everyone who has offered direct or indirect help in the past years, please read my thanks in the acknowledgments, which is probably the most exciting part of every PhD thesis. 14/14
0
0
6
@fredahshi
Freda Shi
6 years
Excited to have the work on tree-based neural sentence modeling (joint with my excellent collaborators Hao Zhou, Jiaze Chen and Lei Li) accepted by #EMNLP2018.
0
0
5
@fredahshi
Freda Shi
11 months
Sida is beyond amazing! Go work with him!
@sidawxyz
Sida Wang
11 months
I'm hiring a PhD intern for the FAIR CodeGen (Code Llama) team. Do research on Code LLMs, execution feedback, evaluation, etc. Apply here:
0
0
5
@fredahshi
Freda Shi
6 months
2. LogogramNLP: Comparing Visual and Textual Representations of Ancient Logographic Writing Systems for NLP (led by @danlu_ai), Aug 12 16:00, Poster Session 3. A visual representation-based system for NLP on ancient logographic languages outperforms conventional Latin transliteration!
0
1
6
@fredahshi
Freda Shi
8 months
Special thanks to @MichaelHBowling, Dale Schuurmans, and @nidhihegde for the wonderful discussion on grounding at a dinner a year ago. The conversation has made the term grounding (in my mind) more articulable. 9/
1
0
5
@fredahshi
Freda Shi
1 year
@akoksal_ Thank you Abdullatif! Definitely checking it out!
0
0
1
@fredahshi
Freda Shi
7 months
@yuntiandeng We have multilingual GSM here ( and by @davlanade and colleagues! Curious about the performance across languages ;).
1
1
5
@fredahshi
Freda Shi
2 years
I had some difficulty figuring out the horizontal scroll (ęØŖę‰¹; hĆ©ng pÄ«)ā€”while I eventually realized that in this case it should be read from left to right, we typically write it from right to left in China :) Happy New Year to my friends who are celebrating!
@LanguageLog
Language Log
2 years
The difficulty of expressing "nothing": This is a clever attempt to write a spring couplet (chÅ«nliĆ”n ꘄčÆ), not in the usual Sinoglyphs / Chinese characters, but in pictographs: (source) I could figure out about half of the character equivalents (rebusesā€¦
Tweet media one
0
0
5
@fredahshi
Freda Shi
3 months
@universeinanegg You should probably try . It helped me find a few good cogsci, (non-comp) linguistics, and social sci papers. Support for CS papers wasnā€™t good enough, though, the last time I checked (half a year ago).
1
0
5
@fredahshi
Freda Shi
6 months
Hearty thanks to organizers @ttk_kuribayashi @g_rambelli @ecekt2, Philipp Wicke, Yohei Oseki and @jixingli for the wonderful event! 2/2
2
0
5
@fredahshi
Freda Shi
3 months
If not conditioned on their logo, though, I will claim grounding is more important šŸ¤“.
1
0
5
@fredahshi
Freda Shi
4 months
Check out Kanishkaā€™s talk this Friday! Highly recommended!
@kanishkamisra
Kanishka Misra šŸŒŠ
4 months
In case people recovering from COLM want more LM content:
0
0
5
@fredahshi
Freda Shi
11 months
Go Palatino!
@shuyanzhxyc
Shuyan Zhou
11 months
#COLM template is so visually pleasant, the joy of writing šŸ†™šŸ†™šŸ†™.
1
0
5
@fredahshi
Freda Shi
1 year
My brain LM is still favo(u)ring ā€œfavoriteā€ - you should seriously consider coming to Canada šŸ.
@maojiayuan
Jiayuan Mao
1 year
Definitely one of my top 3 favourite papers :) It marries deep learning with a minimal set of universal grammar rules for grounded language learning. It draws inspiration from lexicalist linguistics and cognitive science (bootstrapping from core knowledge).
0
0
4
@fredahshi
Freda Shi
4 years
Can't agree more. I voted for "that's syntax", but I wouldn't be happy to see a paper using "syntactic features" to refer to POS tags only, and I've been unhappy about this more than 3 times.
@carlosgr_nlp
C. GĆ³mez-RodrĆ­guez
4 years
@emilymbender At the very least it's a misleading use of the term. To me it's like doing linear regression and calling it a neural approach. technically true (linear regression can be seen as a 1-neuron neural network) but I don't see why anyone would say it (w/o context) if not to oversell.
0
0
4
@fredahshi
Freda Shi
6 years
See you then, Sam!
0
0
4
@fredahshi
Freda Shi
6 years
Our work got the same result on sentence encoders!
@gneubig
Graham Neubig
6 years
#EMNLP2018 "A Tree-based Decoder for NMT", a framework for incorporating trees in target side of MT systems. We compare constituency/dependency/non-syntactic binary trees, find surprising result that non-syntactic trees perform best, and try to explain why
Tweet media one
Tweet media two
Tweet media three
Tweet media four
1
0
4
@fredahshi
Freda Shi
6 years
Madhur's course is really nice! I'd recommend it to everyone who wishes to review/learn some fundamental mathematical concepts related to machine learning.
@_onionesque
Shubhendu Trivedi
6 years
@EugeneVinitsky Madhur Tulsiani runs a very similar course every other year (this has links to iterations of the class, the latter ones have more refined notes).
0
0
4
@fredahshi
Freda Shi
8 months
Of course, the work covered in this thesis is built on the foundation of the literatureā€”my thanks go to the authors of the papers I cited. I hope I've discussed your work in a fair way. 13/
1
0
4
@fredahshi
Freda Shi
10 months
And great to see that Yoav brings back the cute llama art!
@yoavartzi
Yoav Artzi
10 months
The @COLM_conf reviewing period has started. Reviewers should now receive emails, and all papers are now assigned. Thanks to all our ACs who adjusted assignments in the last few days. Happy reviewing all!
Tweet media one
1
0
4
@fredahshi
Freda Shi
8 months
@MorrisAlper @moranynk @ElorHadar @RGiryes Excited to see more work on quantifying visual concreteness! Our ACL'19 work on quantifying text span concreteness and using it for syntactic parsing might also be of interest:
0
0
4
@fredahshi
Freda Shi
3 years
(and guess which is me w/o exact matching on either first or last name! :)
@fredahshi
Freda Shi
3 years
Honored to receive the 2021 Google PhD fellowship in natural language processing. Thanks @GoogleAI for the support! Kudos to my advisors and mentors: thanks for teaching me everything over the past years, and for showing me concrete examples of best researchers---yourselves!.
2
0
4
@fredahshi
Freda Shi
6 years
Code for experiments is available at now šŸ˜ƒ.
1
0
4
@fredahshi
Freda Shi
3 years
@RTomMcCoy Sometimes I do Ctrl/Cmd+Shift+V for 2) and 3) XD.
0
0
3
@fredahshi
Freda Shi
1 year
@maojiayuan @jiajunwu_cs @roger_p_levy As in a CCG, each lexicon entry has its syntactic type and semantic representation. We induce the syntax and semantics of the questions, execute the neuro-symbolic semantic program with visual input, and reward the parser if the execution result is correct. (4/)
Tweet media one
1
0
3
@fredahshi
Freda Shi
5 years
@amitmoryossef @emilymbender My advisor Karen Livescu is working on ASL as well:
0
0
3
@fredahshi
Freda Shi
1 year
@sharonlevy21 Also wonder if this would happen for inanimate objects/subjects as well!.
0
0
2
@fredahshi
Freda Shi
6 months
@leonlianglu @dmort27 @juice500ml Congratulations Liang! Super well-deserved!.
1
0
3
@fredahshi
Freda Shi
1 year
@tallinzen Thatā€™s part of the reason why I started using GitHub to manage my working papers. Another part is the nice combination of VSCode & LaTeX workshop.
0
0
3
@fredahshi
Freda Shi
1 year
@mrdrozdov HUGE congrats, Andrew!! šŸŽ‰šŸ¾.
0
0
1
@fredahshi
Freda Shi
1 year
@UndefBehavior Sorry to hear this! As an alternative, my coauthors and I tried to publish at ML conferences (for our case, NeurIPS) on highly linguistic topics. We got constructive feedback from reviewers, but very little attention for our presentation.
0
0
2
@fredahshi
Freda Shi
1 year
@sharonlevy21 Congrats on the excellent work! I found the Table 1 example very interesting: these four sentences are clearly negative to me, and I can't imagine anyone labeling any of them positive---wonder if more data could fix this?
2
0
3
@fredahshi
Freda Shi
6 months
@ruoxi_ning Great, looking forward to seeing you in person!
0
0
2
@fredahshi
Freda Shi
6 years
Happy days!.
@TTIC_Connect
TTIC
6 years
Midwest Speech and Language Days | Day 2. Thank you to all participants, speakers and organizers!
Tweet media one
0
0
2
@fredahshi
Freda Shi
1 year
@universeinanegg I occasionally came across this before; perhaps itā€™s something close to what youā€™re looking for?
1
0
2
@fredahshi
Freda Shi
1 year
@kanishkamisra I started using OneNote (not designed for managing todos, though). I just start a new page for all todos each Monday morning, and copy leftovers from the prior week.
0
0
2
@fredahshi
Freda Shi
1 year
@joycjhsu This is cool! Can I ask a quick question - why would humans say "no" to the teaser question? From a quick glance, it could perfectly be a "wug" to me :).
3
0
2
@fredahshi
Freda Shi
2 years
Definitely reach out to @WenhuChen @yuntiandeng and/or me if youā€™d like to learn more about Waterloo NLP (and probably catch @hllo_wrld & @lintool next time :).
0
0
2
@fredahshi
Freda Shi
8 months
@SonglinYang4 Thanks Sonta! Now I know who to blame if someone calls me Dr Meow at conferencesšŸ˜¼.
0
0
2
@fredahshi
Freda Shi
1 year
@lukeZhu20 Yes!! Thanks much, Jian! I shouldā€™ve pinged you before asking here ;).
0
0
1
@fredahshi
Freda Shi
6 years
Great paper and impressive results. Very excited to see it!.
@mrdrozdov
Andrew Drozdov
6 years
Now with paper link: And code: New results on unsupervised parsing: +6.5 F1 compared to ON-LSTM (2019), +6 F1 compared to PRLG (2011).
0
0
2
@fredahshi
Freda Shi
1 year
We are also aware that the method comes with efficiency issues in complicated real-world settings, and thatā€™s an exciting direction to explore in the future! (13/13).
0
0
2
@fredahshi
Freda Shi
1 year
In summary, we show what happens when neuro-symbolic models meet grammars (generalized CCGs): significantly improved performance on compositional generalization. (12/)
1
0
2
@fredahshi
Freda Shi
1 year
1
0
2