![Matt Dunn Profile](https://pbs.twimg.com/profile_images/1654197361407676417/9VV1uwsz_x96.jpg)
Matt Dunn
@m_att_dunn
Followers: 349 · Following: 9K · Statuses: 911
LLMs for Due Diligence | ex. Cohere, LSPN | NYU
paris, boston, sf
Joined September 2009
@alanli2020 @ThereBeLyte @jenboland @pelaseyed @nlpnoah I wondered about this while reading the paper, but using the beam to get the subsequent candidate documents makes sense.
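The paper being discussed keeps the full decoding beam so that each hypothesis serves as another candidate document identifier. A minimal sketch of that idea, assuming a Hugging Face seq2seq model fine-tuned to emit document IDs; the model name, query, and beam width here are placeholders, not the paper's actual configuration:

```python
# Sketch: use beam search to surface several candidate document IDs at once
# in a generative-retrieval setup. "t5-base" stands in for a model that has
# been fine-tuned to map queries to document identifiers.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")

query = "How do summarization-based document IDs work?"
inputs = tokenizer(query, return_tensors="pt")

# Keep the whole beam instead of only the top hypothesis: each returned
# sequence is treated as another candidate document for the downstream reader.
outputs = model.generate(
    **inputs,
    num_beams=8,
    num_return_sequences=8,
    max_new_tokens=32,
)
candidate_doc_ids = [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]
print(candidate_doc_ids)
```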
@sh_reya We made all responses cite sentence/paragraph-level spans in the source documents and made it really easy for users to interrogate the references. But maybe your point is that it would be nice to show what it didn’t include and why…
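A rough sketch of what span-level citations can look like as a data structure; the field names, document name, and example answer are hypothetical, not the actual product schema:

```python
# Sketch: each answer sentence carries the exact character offsets it relies
# on, so a user can jump to and interrogate the cited passage.
from dataclasses import dataclass, field


@dataclass
class Citation:
    doc_id: str
    start_char: int  # span start in the source document
    end_char: int    # span end in the source document


@dataclass
class AnswerSentence:
    text: str
    citations: list[Citation] = field(default_factory=list)  # empty = unsupported claim


answer = [
    AnswerSentence(
        text="The lease terminates on 31 December 2025.",
        citations=[Citation(doc_id="lease_agreement.pdf", start_char=10432, end_char=10512)],
    ),
]

# Surfacing what was *not* cited is the harder part: spans that were retrieved
# but never referenced could be listed separately so users see what the model
# passed over and can ask why.
```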
@_colemurray @pelaseyed Apple - Search differently. "Summarization-Based Document IDs for Generative Retrieval with Language Models" -
How did the US solve exploding healthcare costs?

@misc{vaswani2023attentionneed,
  title={Attention Is All You Need},
  author={Ashish Vaswani and Noam Shazeer and Niki Parmar and Jakob Uszkoreit and Llion Jones and Aidan N. Gomez and Lukasz Kaiser and Illia Polosukhin},
  year={2023},
  eprint={1706.03762},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/1706.03762}
}
@ThereBeLyte @jenboland @pelaseyed We built it out of necessity, but Alan Li et al. from @nlpnoah's group put together a nice paper that explores the approach.
@gpjanik @pelaseyed I think the typical implementation of RAG is to have some search step via vector or keyword (BM25) retrieval, or a combination of both. We use the models to build LLM-searchable indices during ingestion and traverse the index during QA inference.
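For the "typical" search step described above, a hedged sketch of a hybrid BM25 + vector retriever; the libraries (rank_bm25, sentence-transformers), the embedding model name, the example documents, and the fusion weights are all assumptions for illustration, not what any particular product uses:

```python
# Sketch: score documents with BM25 and with dense embeddings, normalize the
# two score lists, and merge them into a single ranking.
import numpy as np
from rank_bm25 import BM25Okapi
from sentence_transformers import SentenceTransformer, util

docs = [
    "The indemnification clause survives termination of the agreement.",
    "Either party may terminate with thirty days written notice.",
]
query = "How can the contract be terminated?"

# Keyword side: BM25 over whitespace-tokenized documents.
bm25 = BM25Okapi([d.lower().split() for d in docs])
bm25_scores = bm25.get_scores(query.lower().split())

# Vector side: cosine similarity between query and document embeddings.
encoder = SentenceTransformer("all-MiniLM-L6-v2")
doc_emb = encoder.encode(docs, convert_to_tensor=True)
query_emb = encoder.encode(query, convert_to_tensor=True)
dense_scores = util.cos_sim(query_emb, doc_emb)[0]

# Naive hybrid fusion: min-max normalize BM25, map cosine from [-1, 1] to
# [0, 1], then average; production systems often use reciprocal-rank fusion.
bm25_n = (bm25_scores - bm25_scores.min()) / (np.ptp(bm25_scores) + 1e-9)
dense_n = (dense_scores.cpu().numpy() + 1) / 2
ranked = sorted(zip(docs, 0.5 * bm25_n + 0.5 * dense_n), key=lambda x: -x[1])
print(ranked[0][0])
```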