Nicolay Rusnachenko

@nicolayr_

Followers: 260
Following: 86
Media: 160
Statuses: 2,013

Information Retrieval・Medical Multimodal NLP (🖼+📝) Research Fellow @BU_Research ・software developer ・PhD in NLP・Opinions are mine

Bournemouth / London, UK
Joined December 2015
Pinned Tweet
@nicolayr_
Nicolay Rusnachenko
14 days
📢 During @wassa_ws at @aclmeeting we present a framework for empathy and emotion extraction that exploits a combined loss for BERT-based models 🤖, developed in collaboration with @UniofNewcastle. 📜 🧵 [for more details]
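The combined loss itself is not spelled out in the tweet; below is a minimal sketch of what such a loss could look like for a BERT-based model, assuming a weighted sum of a regression term (empathy score) and a classification term (emotion label). The weight `alpha` is illustrative, not taken from the paper.

```python
import torch.nn as nn

class CombinedLoss(nn.Module):
    """Hypothetical combined loss: a weighted sum of a regression loss
    (empathy score) and a classification loss (emotion label) over the
    outputs of a shared BERT encoder. `alpha` is an assumed weight."""
    def __init__(self, alpha: float = 0.5):
        super().__init__()
        self.alpha = alpha
        self.mse = nn.MSELoss()          # empathy score regression
        self.ce = nn.CrossEntropyLoss()  # emotion classification

    def forward(self, score_pred, score_gold, emo_logits, emo_gold):
        return (self.alpha * self.mse(score_pred, score_gold)
                + (1 - self.alpha) * self.ce(emo_logits, emo_gold))
```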
@nicolayr_
Nicolay Rusnachenko
5 months
Excited for the first day of workshops and talks at #ecir2024
@nicolayr_
Nicolay Rusnachenko
5 months
Let's be honest, arranging the @ecir2024 banquet at the "Glasgow City Chambers" was a fabulous decision, to say the least 🍷✨ #ecir2024
@nicolayr_
Nicolay Rusnachenko
4 months
That's a 💎 milestone in better synthetic data preparation practices! Wondering about its application to low-resource-domain SFT 👀
@arankomatsuzaki
Aran Komatsuzaki
4 months
Better Synthetic Data by Retrieving and Transforming Existing Datasets repo: abs:
@nicolayr_
Nicolay Rusnachenko
5 months
This week we present ARElight, a system aimed at memory-efficient structuring of large text collections, at the @ecir2024 demo track 📜💻 Keynotes and materials: Github: Poster: #ecir2024 #arelight #nlp #ir #sampling #graphs
@nicolayr_
Nicolay Rusnachenko
5 months
Thanks for the fabulous night and celebration!
@ecir2024
ECIR2024
5 months
🍽️ The banquet is beginning at #ECIR2024 ! Wishing everyone a fantastic evening filled with great food, laughter, and wonderful conversations! 🎉🥂
@nicolayr_
Nicolay Rusnachenko
4 months
📢 Excited to share that our studies on LLM reasoning capabilities in Target Sentiment Analysis are out 🎉 🧵 1/n [More on the finding highlights ...] #llm #reasoning #nlp #sentimentanalysis #cot #chainofthought #zeroshot #finetuning
@nicolayr_
Nicolay Rusnachenko
4 months
So far I have experimented 🧪 with LLaVA 1.5 and Idefics 9B at scale, and they are quite handy out-of-the-box 📦 Even so, it is nice to see even smaller versions coming out that are based on the most recent LLMs 👏👀
@Prince_Canuma
Prince Canuma
4 months
mlx-vlm v0.0.4 is here 🎉 New models 🤖: - Idefics 2 - Llava (Phi and Llama 3) Improvements 🚀: - Q4 quantisation support for all models - Less imports to use generate() Up next 🚧: - More models - Support for multiple images Please leave us a star and send a PR
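For reference, a minimal sketch of the kind of out-of-the-box usage meant above, assuming the `transformers` image-to-text pipeline and the community `llava-hf` conversion on the Hugging Face Hub (the checkpoint id and image path are illustrative):

```python
from transformers import pipeline

# LLaVA 1.5 through the generic image-to-text pipeline.
pipe = pipeline("image-to-text", model="llava-hf/llava-1.5-7b-hf")

prompt = "USER: <image>\nWhat is shown in this image? ASSISTANT:"
out = pipe("photo.jpg", prompt=prompt,
           generate_kwargs={"max_new_tokens": 64})
print(out[0]["generated_text"])
```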
@nicolayr_
Nicolay Rusnachenko
5 months
It is nice to see a small step towards target-oriented LLM adaptation from the perspective of retrieval-augmentation techniques and enhanced end-to-end adaptation of the 🥞: (i) knowledge, (ii) passages, (iii) LLM
@ContextualAI
Contextual AI
5 months
Today, we’re excited to announce RAG 2.0, our end-to-end system for developing production-grade AI. Using RAG 2.0, we’ve created Contextual Language Models (CLMs), which achieve state-of-the-art performance on a variety of industry benchmarks. CLMs outperform strong RAG
@nicolayr_
Nicolay Rusnachenko
3 months
@lucas__crespo Data leakage of course 🌊😎
@nicolayr_
Nicolay Rusnachenko
4 months
These findings 👇 on (1) reliability criteria for news websites and (2) applying language models to generate writer ✍️ feedback from the readers' 👀📃 view through personas are 💎 to skim.
@hen_drik
Hendrik Heuer
4 months
Excited for our #CHI2024 contributions (1/2): Reliability Criteria for News Websites, 16 May, 11:30am Writer-Defined AI Personas for On-Demand Feedback Generation, 15 May, 2:00pm #CAISontour
@nicolayr_
Nicolay Rusnachenko
4 months
The most recent OmniFusion VLLM, in which the authors merge features from CLIP-ViT-L and DINOv2, was so impressive 🔥, and it is powered by Mistral-7B. This makes me wonder how far another 💎 concept for image encoding 👇 that involves MoE can go ... 👀
@mervenoyann
merve
4 months
it's raining vision language models ☔️ CuMo is a new vision language model that has MoE in every step of the VLM (image encoder, MLP and text decoder) and uses Mistral-7B for the decoder part 🤓
@nicolayr_
Nicolay Rusnachenko
5 months
@stevenhoi @hypergai Nice to see you publicly contributing to Multimodal AI advances, and LLMs in particular 👏
@nicolayr_
Nicolay Rusnachenko
4 months
@alan_karthi Well done, and thank you for sharing this technical report! 👏 I believe that access to Med-Gemini is restricted due to the specifics of the medical domain as well as the resulting LLM. Nonetheless, is Med-Gemini available for chatting, and under which license if so?
@nicolayr_
Nicolay Rusnachenko
5 months
The Text2Story workshop opener at #ecir2024 shares a handy methodology and aligned studies for processing large texts such as books 📚, aimed at narrative extraction #nlp #story #books #narratives
@nicolayr_
Nicolay Rusnachenko
4 months
Tools like this end up becoming a Swiss Army knife for a deeper understanding and for looking 👀 at how one LLM differs from another 💎
@mahnerak
Karen Hambardzumyan
5 months
[1/7] 🚀 Introducing the Language Model Transparency Tool - an open-source interactive toolkit for analyzing Transformer-based language models. We can't wait to see how the community will use this tool!
@nicolayr_
Nicolay Rusnachenko
3 months
📢 This strikes me as a valuable milestone 💎 in inputting personality traits into large language models 👀
@sylee_ai
Seongyun Lee
3 months
🚨 New LLM personalization/alignment paper 🚨 🤔 How can we obtain personalizable LLMs without explicitly re-training reward models/LLMs for each user? ✔ We introduce a new zero-shot alignment method to control LLM responses via the system message 🚀
@nicolayr_
Nicolay Rusnachenko
2 months
📢 I believe such instruction-tuned LMs may represent a valuable contribution to advances in IR-related tasks such as sentiment analysis 💎 📝
@AniketVashisht8
Aniket Vashishtha
2 months
Can we teach Transformers Causal Reasoning? We propose Axiomatic Framework, a new paradigm for training LMs. Our 67M-param model, trained from scratch on simple causal chains, outperforms billion-scale LLMs and rivals GPT-4 in inferring cause-effect relations over complex graphs
@nicolayr_
Nicolay Rusnachenko
5 months
Self-attention -> windowed / sparse self-attention -> local + global self-attention -> Infini-attention 👏✨
@arankomatsuzaki
Aran Komatsuzaki
5 months
Google presents Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention 1B model that was fine-tuned on up to 5K sequence length passkey instances solves the 1M length problem
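As a small illustration of the "windowed" step in that progression, here is a sketch of a local attention mask in torch, where each token may attend only to neighbours within a fixed distance; the window size below is an arbitrary example:

```python
import torch

def local_attention_mask(seq_len: int, window: int) -> torch.Tensor:
    """True where attention is allowed: token i attends to token j
    only if |i - j| <= window (a banded, 'windowed' pattern)."""
    i = torch.arange(seq_len).unsqueeze(1)
    j = torch.arange(seq_len).unsqueeze(0)
    return (i - j).abs() <= window

# Applying it: disallowed positions get -inf before the softmax.
scores = torch.randn(8, 8)
masked = scores.masked_fill(~local_attention_mask(8, 2), float("-inf"))
```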
@nicolayr_
Nicolay Rusnachenko
5 months
An interesting viewpoint on pre-training so-called language-centric LLMs in the low-resource domain to "preserve" knowledge about rare languages #ecir2024 #llm #lowresourcedomain #pretraining
@nicolayr_
Nicolay Rusnachenko
3 years
Over the last few years, quick transformer tuning for downstream tasks has become ever more in demand. The earlier announced awesome list of sentiment analysis papers has been enlarged with recent advances in more time-efficient tuning techniques
@nicolayr_
Nicolay Rusnachenko
5 months
Catch me to find out more on how ARElight may be applied to your large texts 📚/📰 @ the #ecir2024 demo track 📜💻. Thanks to my @UniofNewcastle colleagues Huizhi Liang, Maxim Kalameyets, and @lshi_ncl for their work on the system ✨📺 📍 lobby poster / demo session #nlp #lm #ir #sampling
@nicolayr_
Nicolay Rusnachenko
4 months
@burkov Notably, the generated code tends to be verbosely commented, so skimming through the comments gives a quick sense of its correctness
@nicolayr_
Nicolay Rusnachenko
7 months
@drivelinekyle @abacaj This template varies from task to task, but the implementation in torch is pretty much similar. My personal experience is in sentiment analysis, so I can recommend:
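A minimal sketch of the kind of torch template meant here, assuming a BERT-style classifier for sentiment analysis; the model id, label set, and example batch are illustrative:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3)  # negative / neutral / positive
opt = torch.optim.AdamW(model.parameters(), lr=2e-5)

# One illustrative training step; in practice this loops over a DataLoader.
batch = tok(["the service was great"], return_tensors="pt")
labels = torch.tensor([2])  # positive

model.train()
loss = model(**batch, labels=labels).loss  # cross-entropy under the hood
loss.backward()
opt.step()
opt.zero_grad()
```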
@nicolayr_
Nicolay Rusnachenko
2 years
A short post which demonstrates pipeline organization for inferring sentiment attitudes from mass-media texts #deepPavlov #arekit #ml #nlp #sentimentanalysis
@nicolayr_
Nicolay Rusnachenko
6 months
@cpt_harv @selina_mey @delsweil Well done, joining in with the congratulations!
@nicolayr_
Nicolay Rusnachenko
5 months
One of the inspiring dimensions of narrative in long texts highlighted at #ecir2024 #Text2Story is the spatial one 🌎🗺️ ... Further details on pipeline and timeline tracking ... 🧵 [1/3]
@nicolayr_
Nicolay Rusnachenko
5 months
Thanks for the fun time at the dancefloor ✨💃🕺
@ecir2024
ECIR2024
5 months
Now the fun begins - it's ceilidh time!! 🕺🎉
@nicolayr_
Nicolay Rusnachenko
2 months
@yufanghou Thanks! Not being physically at #NAACL2024 this year, but rather remotely at #SemEval, I find the quick poster skim 💎, from the Summary Content Units to the semantic tree construction for the textual content 👀
@nicolayr_
Nicolay Rusnachenko
2 years
Managing attitudes of a single text is not the only option available in AREkit (). We also consider laaaa...aaarge-scale collections of texts with relations, and how such collections could be handled by AREkit. Stay tuned) #arekit #nlp
@nicolayr_
Nicolay Rusnachenko
10 months
It has always been encouraging to see how high academia competes in scientific advances, while it is even more amazing to see that in sports! 🔥 💪 💪 💪 🚣‍♀️ 🚣‍♂️ 🛶 Yes, and nowadays it is still possible to see such trainings at the Hammersmith quayside in London! 🔥
@nicolayr_
Nicolay Rusnachenko
16 days
@zouharvi Brilliant idea with cards that showcase the contribution and experiment outcomes 👏
@nicolayr_
Nicolay Rusnachenko
3 months
@HongyiWang10 @RutgersCS Congratulations, all the best on the Assistant Professor role 👏✨
@nicolayr_
Nicolay Rusnachenko
4 months
@bindureddy Thanks for sharing it! 👏👀
@nicolayr_
Nicolay Rusnachenko
3 months
📢 I am happy to share that our studies at @UniofNewcastle, aimed at fictional character personality extraction from literary novels 📚 by relying solely on ⚠️ book content ⚠️, have been ACCEPTED at #LOD2024 @ Toscana, Italy 🇮🇹 🎉 👨‍💻: 🧵 1/n
@nicolayr_
Nicolay Rusnachenko
5 months
@_kire_kara_ @barbara_plank @IAugenstein Congratulations, well done! 👏
@nicolayr_
Nicolay Rusnachenko
5 months
@shaily99 Handy to go with Overleaf + DrawIO for concept diagrams. On the Overleaf side, I keep a "latest.tex" 📝 that later gets renamed to a specific date, so it eventually becomes a 📑 that can be gathered into "main.tex"
@nicolayr_
Nicolay Rusnachenko
5 months
@omarsar0 After briefly skimming the paper's main figure, the ranking idea strikes me as a unique way of enhancing reasoning. Thanks for sharing 👏
@nicolayr_
Nicolay Rusnachenko
4 months
@Leik0w0 @Prince_Canuma Interesting! ... any other prospects on the necessity of a quantized SigLIP besides its adaptation for the Moondream tiny VLLM?
@nicolayr_
Nicolay Rusnachenko
2 months
@SemEvalWorkshop Our 🛠️ reforged 🛠️ version of THoR, which has: ✅ 1. Adapted prompts for emotion extraction ✅ 2. An implemented Reasoning-Revision 🧠 (see the sketch below) ✅ 3. A Google Colab launch notebook (details in further posts 🧵) ⭐️ Github: 🧵 4/n #NAACL2024
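The Reasoning-Revision step is not detailed in the tweet itself; the following is a hedged sketch of the general idea (draft a chain-of-thought, then ask the model to revise it before committing to a label). Both `ask` and the prompt wording are hypothetical stand-ins, not the repository's actual prompts:

```python
def ask(prompt: str) -> str:
    """Hypothetical LLM call, e.g. a Flan-T5 generate() behind the scenes."""
    return "<model output>"  # placeholder so the sketch runs

def emotion_with_revision(text: str, target: str) -> str:
    # Step 1: draft chain-of-thought reasoning about the caused emotion.
    draft = ask(f"Text: {text}\nWhat emotion does this cause in {target}? "
                f"Think step by step.")
    # Step 2: revise the draft reasoning and commit to a final label.
    return ask(f"Text: {text}\nDraft reasoning: {draft}\n"
               f"Revise the reasoning and give the final emotion label.")
```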
@nicolayr_
Nicolay Rusnachenko
3 months
@mudler_it @LocalAI_API Thanks for such a detailed explanation of the related differences! I believe I first have to find out more about function calling 👀
@nicolayr_
Nicolay Rusnachenko
4 months
@yuqirose Congratulations! 👏
@nicolayr_
Nicolay Rusnachenko
4 months
@RezaeiKeivan Congratulations on the paper acceptance! Well done 👏👀
@nicolayr_
Nicolay Rusnachenko
2 months
📢 Excited to share the details of our submission 🥉 to @SemEvalWorkshop Track 3, which is based on CoT reasoning 🧠 with Flan-T5 🤖, as part of the self-titled nicolay-r entry 📜. Due to remote availability at #NAACL2024, I am presenting it by breaking down the system highlights here 👇 🧵 1/n
@SemEvalWorkshop
SemEval
2 months
@SemEvalWorkshop 2024 starts tomorrow! Check out our exciting lineup of 65 posters and 10 talks here: Don’t miss our invited talks by @hengjinlp (with @_starsem ) and @andre_t_martins ! #mexico @naaclmeeting @shashwatup9k @harish @seirasto @giodsm
@nicolayr_
Nicolay Rusnachenko
4 months
@Paul_Antara04 The concept is really good, so making it mobile-friendly seems like a huge step forward ✨👏
@nicolayr_
Nicolay Rusnachenko
3 months
@Mai_Mahmoud_ Congratulations, Phinally Done! 👏
@nicolayr_
Nicolay Rusnachenko
2 years
Thanks to Huizhi Liang for the opportunity, and to all the students who got to attend the first guest lecture at Newcastle University! 🎉🎉🎉 The presentation was devoted to advances in sentiment attitude extraction #newcastle #ml #nlp #arekit #lecture
@nicolayr_
Nicolay Rusnachenko
2 months
🤔 Coming to this from the perspective of IR from large textual data, I wonder how interesting it would be to see such forecasting for stories (series of events) extracted from literary novels 📚
@chenchenye_ccye
Chenchen Ye
2 months
📢New LLM Agents Benchmark! Introducing 🌟MIRAI🌟: A groundbreaking benchmark crafted for evaluating LLM agents in temporal forecasting of international events with tool use and complex reasoning! 📜 Arxiv: 🔗 Project page: 🧵1/N
@nicolayr_
Nicolay Rusnachenko
2 months
💎 The fact-checking domain and advances in it are important for performing IR from news and mass media. These advances may serve as potential new approaches aimed at enhancing LLM reasoning capabilities in author opinion mining / Sentiment Analysis ✨
@ManyaWadhwa1
Manya Wadhwa
2 months
Refine LLM responses to improve factuality with our new three-stage process: 🔎Detect errors 🧑‍🏫Critique in language ✏️Refine with those critiques DCR improves factuality refinement across model scales: Llama 2, Llama 3, GPT-4. w/ @lucy_xyzhao @jessyjli @gregd_nlp 🧵
@nicolayr_
Nicolay Rusnachenko
4 months
@Francis_YAO_ Wow, can't get enough; you're not the only one there 😁 💯 ... All these LLM breakthroughs convince me of the robustness of such a timer setup feature 🗣️⏱️😁
@nicolayr_
Nicolay Rusnachenko
3 months
@SayanRanu Yeah, sounds great! Congratulations! 👏
@nicolayr_
Nicolay Rusnachenko
4 months
@j_foerst @AmazonScience @clockwk7 @JonnyCoook Congratulations on this achievement! 👏🎉
@nicolayr_
Nicolay Rusnachenko
2 months
@SemEvalWorkshop We experiment with the Flan-T5-base model in Google Colab, and preliminary results show the benefits of using Reasoning-Revision (THoR-cause-rr) over the prompt-based approach and THoR-cause. After 3 epochs of training, Flan-T5-base is ready to use ✨ 🧵 3/n
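For context, loading such a trained Flan-T5-base checkpoint for inference takes only a few lines with `transformers`; the checkpoint directory name below is a placeholder for the fine-tuned weights, and the prompt is illustrative:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# "flan-t5-base-thor" stands in for the fine-tuned checkpoint directory.
tok = AutoTokenizer.from_pretrained("google/flan-t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("flan-t5-base-thor")

ids = tok("What emotion does this utterance cause?",
          return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=32)
print(tok.decode(out[0], skip_special_tokens=True))
```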
@nicolayr_
Nicolay Rusnachenko
5 months
This is my second scientific trip to Glasgow 🏴󠁧󠁢󠁳󠁣󠁴󠁿 that once again greets me with a train cancellation 😁❌🚅 Coincidence? No! Luckily for me, this time there were no storms 🌀, so only the 14:30 🕜 was cancelled, and I am taking the 15:30 🚅 instead ✨
@nicolayr_
Nicolay Rusnachenko
14 days
📢 At @wassa_ws, as part of @aclmeeting, we are presenting studies 🧪 aimed at empathy and emotion detection. Credit to Tian Li @UniofNewcastle, who is presenting this work, so feel free to catch him during the workshop with questions! 🧵📹📊 details in thread 👇 #WASSA2024 #ACL2024
@nicolayr_
Nicolay Rusnachenko
1 month
📢 Excited to announce that I am attending 🟢 @aclmeeting (ACL-2024), and the @wassa_ws workshop in particular, presenting a framework for empathy and emotion extraction developed in collaboration with @UniofNewcastle Details: #reasoning #empathy #emotion
@nicolayr_
Nicolay Rusnachenko
3 months
@pesarlin @mapo1 @Jimantha @quantombone @cvg_ethz Congratulations, well deserved! 👏🎓
@nicolayr_
Nicolay Rusnachenko
5 months
Surprising to see, for now, the 7B-sized model by @NexusflowX among larger competitors at the top of the Chatbot Arena Leaderboard 💪
@ivanfioravanti
ifioravanti
5 months
Apple MLX: considering the power of Starling-LM-7B-beta from @NexusflowX and its ranking on @lmsysorg Chatbot Arena Leaderboard, I converted and uploaded 4bit and 8bit versions on HuggingFace mlx-community! Performance 🔥 on M2 Ultra 76GPU: - 4bit: tokens/sec Prompt: 158 -
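Trying such an MLX community conversion locally is a two-liner with the `mlx-lm` package; the exact repo id below is illustrative of the 4-bit upload mentioned in the quote:

```python
from mlx_lm import load, generate

# Illustrative mlx-community repo id for the 4-bit conversion.
model, tokenizer = load("mlx-community/Starling-LM-7B-beta-4bit")
print(generate(model, tokenizer, prompt="Hello!", max_tokens=32))
```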
@nicolayr_
Nicolay Rusnachenko
4 months
Excited that the report behind the recently announced ✨Med-Gemini✨ sheds light on the datasets 📊 utilized in the training setup 👀
@AziziShekoofeh
Shek Azizi
4 months
Excited to share latest ✨Med-Gemini✨ additions - our new research unlocks possibilities in medical data analysis with 3 new models built upon Gemini 1.5 that can handle 2D medical images, and for the first time genomic risk score & 3D radiology scans.
@nicolayr_
Nicolay Rusnachenko
5 months
@xue_yihao65785 Not yet an expert in Multimodal NLP, but delighted by such a short sharing of contributions! 👀 Well deserved 👏
@nicolayr_
Nicolay Rusnachenko
20 days
@sucholutsky @NYUDataScience Congratulations, all the best in this role!👏
@nicolayr_
Nicolay Rusnachenko
3 months
@PMinervini @ale_suglia @tetraduzione That already makes me feel that LLaVA and other similar solutions built on top of a textual LLM are just temporary implementation crutches for multimodality problems
@nicolayr_
Nicolay Rusnachenko
4 months
A very interesting concept for LLM-based QA-like systems: meta-token generation (<RET>) signals the necessity of extra information (no-answer), after which the system (optionally) performs IR to augment the question with context and asks the LLM again for the final answer 👏
@omarsar0
elvis
4 months
When to Retrieve? This new paper presents an approach to train LLMs to effectively utilize information retrieval. It first proposes a training approach to teach an LLM to generate a special token, <RET>, when it's not confident or doesn't know the answer to a question. The
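The control flow described above is simple enough to sketch. Here `llm_generate` and `search` are hypothetical stand-ins for the model call and the IR step, not an API from the paper:

```python
def llm_generate(prompt: str) -> str:
    """Hypothetical LLM call; emits '<RET>' when the model is unsure."""
    return "<RET>"  # placeholder so the sketch runs

def search(query: str) -> str:
    """Hypothetical IR step returning retrieved context."""
    return "<retrieved passages>"

def answer(question: str) -> str:
    draft = llm_generate(question)
    if "<RET>" in draft:  # the model signalled it needs extra information
        context = search(question)
        draft = llm_generate(f"Context: {context}\nQuestion: {question}")
    return draft
```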
@nicolayr_
Nicolay Rusnachenko
5 months
Worth thinking about embedding features such as IFD scoring into other SFT frameworks for non-low-resource-domain studies 🤔
@zhoutianyi
Tianyi Zhou
5 months
Cherry LLM🍒🍒selects training data 📚♻️based on a novel instruction-following difficulty (IFD) score computed from the perplexity of the LLM to be finetuned. It can reduce data to 5% or 10% 🍒yet achieve an LLM comparable to the one trained on the full 100% data✨✨
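As a rough illustration of the quoted idea, the IFD score can be sketched as the ratio of the answer's perplexity conditioned on the instruction to its unconditioned perplexity (higher means the instruction provides less guidance). This is an approximate reading of the method, with token-boundary handling simplified; the GPT-2 model is just a small illustrative choice:

```python
import math
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")  # illustrative small model
model = AutoModelForCausalLM.from_pretrained("gpt2")

def answer_ppl(prefix: str, answer: str) -> float:
    """Perplexity of the `answer` tokens given an optional `prefix`."""
    enc = tok(prefix + answer, return_tensors="pt")
    ans_len = len(tok(answer).input_ids)  # approximate token boundary
    labels = enc.input_ids.clone()
    labels[:, : labels.shape[1] - ans_len] = -100  # score answer tokens only
    return math.exp(model(**enc, labels=labels).loss.item())

def ifd(instruction: str, answer: str) -> float:
    # Higher IFD: the instruction helps little for producing this answer.
    return answer_ppl(instruction, answer) / answer_ppl("", answer)
```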
@nicolayr_
Nicolay Rusnachenko
3 months
@alvarobartt @Alibaba_Qwen @huggingface Any recommendations on remotely launching the 72B for inference?
@nicolayr_
Nicolay Rusnachenko
3 months
@AnkitaBhaumik3 After @SemEvalWorkshop 2024 Task 3, aimed at emotion cause prediction in dialogues, I find your studies pretty valuable to explore 👀👏 Thanks for sharing!
@nicolayr_
Nicolay Rusnachenko
4 months
@AtakanTekparmak @maximelabonne Thanks for sharing! 👏
@nicolayr_
Nicolay Rusnachenko
14 days
@taguchi_c @davidweichiang Congratulations! 👏
@nicolayr_
Nicolay Rusnachenko
2 months
@kavel_r @uwcse @liweijianglw @YejinChoinka Congratulations, well done! 👏✨
@nicolayr_
Nicolay Rusnachenko
4 months
When you're too confident with a large batch size for LLM fine-tuning, but the model generates long responses:
@darrenangle
darren
4 months
・ *゚   ・ ゚* ・。 *・。 *.。 。・ °*. RuntimeError: CUDA out of memory. 。。 ・ 。 ・゚ 。°*. 。*・。・ *゚   ・ ゚*
@nicolayr_
Nicolay Rusnachenko
5 months
Thanks, everyone, for the socials and talks @ the welcome evening event of #ecir2024 🥂
@nicolayr_
Nicolay Rusnachenko
3 months
@Hamptonism 👏 Are there similar visualization attempts for the transformer architecture?
@nicolayr_
Nicolay Rusnachenko
5 months
@kamilazdybal Congratulations on your milestone! 👏✨ Same here so far, so I can imagine how valuable it is!
@nicolayr_
Nicolay Rusnachenko
3 months
@JLopez_160 @cohere Thank you for sharing this; interesting to see how it goes with LLMs 👏👀 Legal documents tend to be long, so how specifically do you sample them?
@nicolayr_
Nicolay Rusnachenko
5 months
@cramraj8 @KaustubhDhole @eugeneAgichtein Congratulations, well deserved! 👏
@nicolayr_
Nicolay Rusnachenko
2 months
🤔 ... There is actually huge room for improvement here. I wonder about related studies; however, it seems that any further customizations end up being too domain-oriented
@maximelabonne
Maxime Labonne
2 months
2024: LLM chat interfaces are still ridiculously bad. - There is no tree representation of the current conversation - You can't edit messages containing artifacts with Claude - Writing prompts is still 100% manual, and most users are terrible at it
@nicolayr_
Nicolay Rusnachenko
4 months
@ashpreetbedi @GroqInc @streamlit This is so interesting from the perspective of news generation for model fine-tuning/training purposes, thank you! 👀👏
@nicolayr_
Nicolay Rusnachenko
5 months
@KaustubhDhole @ecir2024 And a fabulous view from the venue ✨
@nicolayr_
Nicolay Rusnachenko
2 months
📢 Can't get enough of how these studies on opinion mining might be a perfect source for further advances / assessment of reasoning capabilities 🧠 in Sentiment Analysis 💎👀
@dustin_wright37
Dustin Wright
2 months
🔎What values and opinions do we see when we use 6 LLMs to generate 156,000 responses to 62 political propositions? Our paper "Revealing Fine-Grained Values and Opinions in Large Language Models" answers this. 📰 #NLProc #LLMs
@nicolayr_
Nicolay Rusnachenko
3 months
@mudler_it @LocalAI_API Thanks for sharing this! 👀😊 Was function calling available for Llama 2, and if so, was this version a big advance over it?