OpenBMB Profile
OpenBMB

@OpenBMB

Followers: 2,225 · Following: 113 · Media: 36 · Statuses: 174

OpenBMB (Open Lab for Big Model Base), founded by @TsinghuaNLP & ModelBest Inc (面壁智能), aims to build foundation models and systems towards AGI.

Joined February 2022
Pinned Tweet
@OpenBMB
OpenBMB
1 month
🚀Introducing MiniCPM-V 2.6! 🔥 1. Surpasses GPT-4V in single-image, multi-image and video understanding 📸🎥 2. Outperforms GPT-4o mini and Gemini 1.5 on OpenCompass 🏆 3. Real-time video analysis on iPad 📱💨 Try out the best on-device multimodal LLM here! 👑
@OpenBMB
OpenBMB
3 months
As a dedicated contributor to the open-source community, OpenBMB feels deeply saddened and shocked by this. We have always been a passionate participant in the open-source community and look forward to working with global AI practitioners to accelerate our progress towards AGI.
@yangzhizheng1
PrimerYang
3 months
Shocked! The Llama3-V project from a Stanford team plagiarized a lot from MiniCPM-Llama3-V 2.5! Its code is a reformatting of MiniCPM-Llama3-V 2.5, and the model's behavior is highly similar to a noised version of the MiniCPM-Llama3-V 2.5 checkpoint. Evidence:
@OpenBMB
OpenBMB
4 months
🚀 Excited to introduce MiniCPM-Llama3-V 2.5! With 8B parameters, it’s our latest breakthrough, outperforming top models like GPT-4V. 📈 💪 Superior OCR capabilities 🔑 Supports 30+ languages HuggingFace: GitHub:
@OpenBMB
OpenBMB
5 months
🚀MiniCPM-V 2.0: 🚀MiniCPM MoE, 128k, 1.2B: 🚀Huggingface:
@arankomatsuzaki
Aran Komatsuzaki
5 months
MiniCPM: Unveiling the Potential of Small Language Models with Scalable Training Strategies - Presents MiniCPM family, including MiniCPM-DPO, MiniCPM-MoE and MiniCPM-128K - MiniCPM 2B performs on par with Phi-2 and outperforms Mistral and Gemma
@OpenBMB
OpenBMB
7 months
🌟Meet #MiniCPM : an end-side LLM that outperforms Llama2-13B. 🏆It is similar to Mistral-7B in the comprehensive rankings, and its overall score surpasses Llama2-13B, Falcon-40B and other models. GitHub: Huggingface: #MiniCPM
@OpenBMB
OpenBMB
9 months
🌟We release UltraEval, an open-source framework for evaluating the capabilities of foundation models, providing a suite of lightweight, easy-to-use evaluation systems that support the performance assessment of mainstream LLMs. github:
@OpenBMB
OpenBMB
4 months
MiniCPM-V 2.5 has taken the top spot on HuggingFace Trending🥇 This GPT-4V level MLLM brings endless possibilities to your phone. It even compares favorably with the latest Phi-3-V in both performance and running efficiency @Thom_Wolf @Xianbao_QIAN @_akhaliq
@OpenBMB
OpenBMB
1 month
Thrilled to see the feedback for MiniCPM-V 2.6! 🥳 Key techniques: 1️⃣ Powerful base models: SigLIP-400M & Qwen2-7B. Thanks for their great work! @giffmana @JustinLin610 @huybery 💪 2️⃣ Unified architecture: High-res image-text modeling for single/multi-image & video 📸🎥 3️⃣
@OpenBMB
OpenBMB
1 month
🚀Introducing MiniCPM-V 2.6! 🔥 1. Surpasses GPT-4V in single-image, multi-image and video understanding 📸🎥 2. Outperforms GPT-4o mini and Gemini 1.5 on OpenCompass 🏆 3. Real-time video analysis on iPad 📱💨 Try out the best on-device multimodal LLM here! 👑
@OpenBMB
OpenBMB
5 months
🔥🔥🔥Our latest multimodal MiniCPM-V 2.0 shows strong OCR and multimodal understanding capabilities, with state-of-the-art performance on OCRBench among open-source models, and even matches Gemini Pro in scene-text understanding on TextVQA.🚀🚀🚀 GitHub:
@OpenBMB
OpenBMB
1 year
🎉ChatDev: Your Virtual Software Dream Team! ChatDev is redefining software development with a diverse team of AI-powered agents. Share your initial idea and watch these agents communicate and cooperate #ChatDev #Agent #LLM #AI @OpenAI
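For readers curious what "agents cooperatively communicating" looks like mechanically, here is a toy sketch of a role-play loop in plain Python. It is not ChatDev's actual code, and `call_llm` is a hypothetical stand-in for any chat-completion API.

```python
# Toy sketch of role-based cooperative communication, NOT ChatDev's real implementation.
# `call_llm` is a hypothetical placeholder for a chat-completion API call.

def call_llm(role_prompt: str, message: str) -> str:
    # Placeholder: a real setup would send role_prompt + message to an LLM.
    return f"APPROVED draft for: {message}"

ROLES = {
    "programmer": "You are a programmer. Write or revise code for the request.",
    "reviewer": "You are a reviewer. List problems, or reply APPROVED if there are none.",
}

def cooperate(idea: str, max_rounds: int = 3) -> str:
    """Programmer and reviewer agents exchange messages until the reviewer approves."""
    message = idea
    draft = ""
    for _ in range(max_rounds):
        draft = call_llm(ROLES["programmer"], message)
        review = call_llm(ROLES["reviewer"], draft)
        if "APPROVED" in review:
            break
        message = f"Revise according to this review: {review}"
    return draft

print(cooperate("a command-line to-do list app"))
```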
@OpenBMB
OpenBMB
1 year
Thanks to Aran Komatsuzaki for sharing the amazing result with us! Really proud of our OpenBMB members and @TsinghuaNLP for building such strong models👏👏👏
@arankomatsuzaki
Aran Komatsuzaki
1 year
Enhancing Chat Language Models by Scaling High-quality Instructional Conversations - UltraChat contains 1.5M high-quality, diverse multi-turn dialogues - UltraLLaMA outperforms the SotA open-source model, Vicuna repo: abs:
@OpenBMB
OpenBMB
1 year
ChatDev: Build a virtual software company with one command, and LLM agents replace the boss & programmer🖥️⌨️🖱️🎮 Video cover picture from @aiguy_arjun #ai #chatdev #aiagents #aiguide #aiautomation #artificialintelligence #chatgpt @OpenAI
@OpenBMB
OpenBMB
7 months
Thanks @_akhaliq for sharing our work🌞
@_akhaliq
AK
7 months
MiniCPM-2B demo: An end-side LLM outperforms Llama2-13B
@OpenBMB
OpenBMB
11 months
Our new agent, XAgent, is an open-source Large Language Model (LLM)-driven autonomous agent that can automatically solve various complex tasks. A comparison of XAgent with AutoGPT shows a total win for XAgent over AutoGPT. 🎉 #llm #agent
@OpenBMB
OpenBMB
7 months
🚀 Introducing OmniLMM, which achieves leading performance among comparable-sized models on multiple benchmarks and is RLHF'ed for trustworthy multimodal behavior. GitHub: HuggingFace:
@OpenBMB
OpenBMB
7 months
Introducing MiniCPM🌟🏆🥇 MiniCPM-2.4B from TsinghuaNLP, DPO-tuned on our #UltraFeedback dataset. After DPO, MiniCPM outperforms Llama2-70B-Chat, Mistral-7B, etc. on MT-Bench. Runs ultra-fast on Apple Silicon. GitHub: HF: @Apple @huggingface
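For context on what the DPO step optimizes, a minimal sketch of the standard DPO objective in plain PyTorch. This is a generic illustration of the technique, not OpenBMB's training code; it assumes log-probabilities are summed per response.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """DPO loss over a batch of (chosen, rejected) preference pairs.

    Each tensor holds the summed token log-probability of a response under
    either the trainable policy or the frozen reference model.
    """
    chosen_margin = policy_chosen_logps - ref_chosen_logps
    rejected_margin = policy_rejected_logps - ref_rejected_logps
    return -F.logsigmoid(beta * (chosen_margin - rejected_margin)).mean()

# Tiny sanity check with random log-probs: favoring the chosen response lowers the loss.
base = torch.randn(4)
print(dpo_loss(base + 1.0, base, base, base))   # lower loss
print(dpo_loss(base, base + 1.0, base, base))   # higher loss
```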
@OpenBMB
OpenBMB
6 months
OpenBMB's Ultra Alignment Series has become the common choice for over 200 open LLMs globally. Here, we present a collection of over 20 star models aligned on the UltraChat and UltraFeedback datasets; we hope you will like it.🚀🏆 #UltraChat #UltraFeedback
@OpenBMB
OpenBMB
2 years
📣Today marks the first major release for OpenBMB! We release the 10B-parameter Chinese pretrained language model 🐜 CPM-Ant, which has five outstanding features💡, and four innovative breakthroughs! If you like it, please give us a star🌟 on our GitHub []
@OpenBMB
OpenBMB
7 months
Run MiniCPM-2B on CPU🍗🌟🌻 So grateful for 's contribution #MiniCPM
@OpenBMB
OpenBMB
5 months
Very excited to introduce our new work #UltraInteract 🚀
@lifan__yuan
Lifan Yuan
5 months
Introducing 🚀Eurus, a suite of state-of-the-art LLM reasoning generalists powered by a new member of Ultra-Series, UltraInteract🎉! Particularly, Eurus-70B beats GPT-3.5 Turbo in reasoning through a comprehensive benchmarking across 12 tests (mostly OOD) covering five tasks!
@OpenBMB
OpenBMB
5 months
It outperforms Qwen-VL-Chat 10B, CogVLM-Chat 17B and Yi-VL 34B on OpenCompass over a comprehensive collection of 11 popular benchmarks.🥇 Huggingface: @Xianbao_QIAN @Thom_Wolf @_akhaliq @AdeenaY8
@OpenBMB
OpenBMB
6 months
@alvarobartt
Alvaro Bartolome
6 months
💥 DPO to the power of 3! @huggingface just released a DPO fine-tune of Gemma 7B using a dataset created and curated by @argilla_io on top of the following datasets: - @intel Orca DPO Pairs - @openbmb UltraFeedback - @ldjconfirmed Capybara Read more about it below 👇🏻
@OpenBMB
OpenBMB
2 years
💡Do you know that Google maintains the largest number of big models, while Alibaba has the largest big model? 🔥 Check our GitHub repository at [], which will show you a gallery of big models and their trends, created by our amazing team!
@OpenBMB
OpenBMB
9 months
@dvilasuero
Daniel Vila Suero
9 months
Yesterday, we released Notus-7B. Today, I want to share the process of building it. This is a great example of what the Open Source AI community can build together 🧵 👇
@OpenBMB
OpenBMB
7 months
Much appreciated, @Thom_Wolf, for the spotlight on our work. OpenBMB will keep pursuing breakthrough innovation in the field of LLMs. 🏆
@Thom_Wolf
Thomas Wolf
7 months
You likely missed it if you only follow ML Twitter but there's a series of mind-blowing tech reports and open-source models coming from China (DeepSeek, MiniCPM, UltraFeedback...) with so much lesson learned and experiments openly shared together with models, data, etc This
@OpenBMB
OpenBMB
5 months
Drum roll… What a time we live in! Very excited about this collaboration with the amazing researcher @DeanHu11 🥇 #MiniCPM GitHub:
@DeanHu11
Shengding Hu
5 months
Llama 3 8B is trained on 15T tokens! 😱 This is in accordance with our recent scaling law in #MiniCPM paper(): Compute optimal data size should be 200 times larger than model size 🤩 Chinchilla Optimal is dead! lol #llama3 #scaling #minicpm
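A quick back-of-the-envelope check of the ratios quoted above; the figures are taken from the two tweets and are purely illustrative.

```python
# Compare the quoted "200x data-to-model" rule with Chinchilla's ~20x and Llama 3's actual budget.
params = 8e9            # Llama 3 8B
llama3_tokens = 15e12   # 15T training tokens

minicpm_opt = 200 * params      # tokens suggested by the quoted MiniCPM scaling law
chinchilla_opt = 20 * params    # classic Chinchilla rule of thumb

print(f"MiniCPM-law optimum  : {minicpm_opt / 1e12:.2f}T tokens")    # 1.60T
print(f"Chinchilla optimum   : {chinchilla_opt / 1e12:.2f}T tokens")  # 0.16T
print(f"Llama 3 actual ratio : {llama3_tokens / params:.0f} tokens per parameter")  # ~1875
```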
@OpenBMB
OpenBMB
7 months
thanks @geekbb for spotting our new work
@geekbb
Geek
7 months
Wow, LLMs you can deploy on a phone have arrived. MiniCPM is a series of on-device large models jointly open-sourced by ModelBest (面壁智能) and the Tsinghua University NLP Lab. MiniCPM-2B has only 2.4 billion (2.4B) non-embedding parameters. It is comparable to Mistral-7B (with better Chinese, math and code capabilities), and its overall performance surpasses Llama2-13B, MPT-30B, Falcon-40B and other models.
@OpenBMB
OpenBMB
1 year
ModelBest, together w/ researchers from @TsinghuaNLP, @Yale, Renmin University, @TencentGlobal, and #Zhihu, jointly launches the tool-learning framework ToolLLM, a new member of the #OpenBMBEcosystem tool "Family Bucket". ToolLLM open source 🔗:
@OpenBMB
OpenBMB
2 years
📣 The second phase of CPM-Live training (CPM-Bee🐝) is now officially launched, with new features such as task mode enhancement, multi-lingual integration, structural input/output etc. For more details, please refer to
@OpenBMB
OpenBMB
5 months
🚀MiniCPM-V 2.0: 🚀MiniCPM MoE, 128k, 1.2B: 🚀Huggingface:
@_akhaliq
AK
5 months
MiniCPM Unveiling the Potential of Small Language Models with Scalable Training Strategies The burgeoning interest in developing Large Language Models (LLMs) with up to trillion parameters has been met with concerns regarding resource efficiency and practical expense,
@OpenBMB
OpenBMB
1 year
🔔 The framework includes a complete process for obtaining high-quality tool-learning training data, model training code, and automated model evaluation. ✨The ToolBench dataset contains 16,464 real-world APIs. Empower models w/ tools! ToolLLM data and code 🔗:
@OpenBMB
OpenBMB
2 years
📣Check out our prompt-learning toolkit OpenPrompt, which has 2k stars and 500k visits on GitHub one year since its release! The accompanying paper won the ACL 2022 Best Demo Paper Award🏆
@OpenBMB
OpenBMB
7 months
Very excited to introduce our lead author of MiniCPM @DeanHu11 🍗
@DeanHu11
Shengding Hu
7 months
[1/3] How does Google's Gemma 7B & 2B models stack up against our MiniCPM-2B? 🚀 Quick comparison: 👇 1. MiniCPM-2B leads over Gemma-2B in English tasks (AvgScore: 56.6 vs. 46.4). Even competes with Gemma-7B in several tasks. Chinese scores also surpass Gemma-7B&2B noticeably.
@OpenBMB
OpenBMB
2 years
We introduce BMTrain, an efficient toolkit for big model training that can save up to 90% of training costs! 🚀BMTrain can accelerate models implemented in #PyTorch with one line of code! 💻With BMTrain, you can train 175B GPT-3 on 64 A100 GPUs!
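A minimal sketch of the wrap-and-train pattern the tweet describes, based on BMTrain's public README. The specific names used here (`BMTrainModelWrapper`, `AdamOffloadOptimizer`, `OptimManager`) are assumptions to verify against github.com/OpenBMB/BMTrain, and may differ across versions.

```python
# Sketch only -- bmtrain API names are assumptions taken from the project's README.
import torch
import bmtrain as bmt
from transformers import AutoModelForCausalLM

bmt.init_distributed(seed=0)                          # set up the distributed / ZeRO-style context

model = AutoModelForCausalLM.from_pretrained("gpt2")  # any plain PyTorch model
model = bmt.BMTrainModelWrapper(model)                # the advertised "one line" wrap (assumed name)

optimizer = bmt.optim.AdamOffloadOptimizer(model.parameters(), lr=1e-5)
optim_manager = bmt.optim.OptimManager(loss_scale=2**20)
optim_manager.add_optimizer(optimizer)

def train_step(batch):
    optim_manager.zero_grad()
    loss = model(**batch, labels=batch["input_ids"]).loss
    optim_manager.backward(loss)                      # scaled backward for mixed precision
    optim_manager.step()
    return loss.item()
```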
@OpenBMB
OpenBMB
2 years
🤔Are you looking for a tool to train your model with tens of billions of parameters? 👌 Our team provides the tool BMTrain, which can train models in a distributed manner while keeping the code as simple as stand-alone training. Please give us a star🌟[]
@OpenBMB
OpenBMB
1 year
Are you still waiting for #OpenAI #plugins ? No need to wait anymore: OpenBMB's open-source BMTools is here! Our newest toolkit supports OpenAI Plugins and even allows for customized tools! Check out BMTools here:
@OpenBMB
OpenBMB
7 months
🌟Excited to release #OlympiadBench , an Olympiad-level bilingual multimodal scientific benchmark. The best-performing model, #GPT4V , attains an average score of 17.23%. Such a challenging benchmark🥇 GitHub: Arxiv:
@OpenBMB
OpenBMB
7 months
thanks @_philschmid for spotting our new work🌸🌟
@_philschmid
Philipp Schmid
7 months
Can a 2B LLM outperform Mistral 7B or Llama 13B? Creators of the popular Ultrafeedback dataset released MiniCPM, a 2.4B parameter model claiming performance close to Mistral 7B, Llama 2 13B, or Falcon 40B. 🤯🤔 As part of the release, the researchers released a detailed
@OpenBMB
OpenBMB
2 years
At OpenBMB, we aim to lower the barrier of using big models and make them standardized, popular, and practical for everyone. Learn more about our OpenBMB open-source community:
@OpenBMB
OpenBMB
7 months
It was a really fun and great job. Thanks @huggingface @Thom_Wolf for making MiniCPM part of the first batch of architectures on nanotron. Let's play with fast pre-training 3D parallelism in new architectures like MoE, Mamba and #MiniCPM on nanotron.
@Thom_Wolf
Thomas Wolf
7 months
Today is a good day – pushing two new first library releases on PyPi - nanotron⚡️: first version on pypi of this lightweight open-source library where we're playing with fast pre-training 3D parallelism in new architectures like MoE, Mamba, MiniCPM, etc - lighteval🌤️: also
@OpenBMB
OpenBMB
1 year
#OpenBMBEcosystem No.3 - BMCook is an open-source model compression toolkit integrating Quantization, Pruning, MoEfication and Distillation methods. It increases PLMs' efficiency, accelerating inference by 10x while preserving 90+% of the original model's capabilities. @BigscienceW @DrJimFan
@OpenBMB
OpenBMB
10 months
Thanks for sharing your story with UltraChat and UltraFeedback. A beautiful example of collaboration within the AI community. Open-source AI is the way to go! Let more researchers get together and build better LLMs.
@Thom_Wolf
Thomas Wolf
10 months
There is a beautiful story that just happened in AI so let me share it for a lighter tone weekend post among all the doom stories in our AI field this week. It’s a story of people on three continents building and sharing in the open a new small efficient and state-of-the-art AI
@OpenBMB
OpenBMB
11 months
cool
@TsingYoga
Yujia Qin
11 months
🚀 Introducing XAgent: The next evolution of AI agents designed for intricate task-solving. XAgent completely outperforms AutoGPT and GPT-4 on various tasks and benchmarks. 💡 XAgent's dual-loop mechanism bridges the gap between high-level planning and detailed task execution.
@OpenBMB
OpenBMB
10 months
Excited to share a new feature in ChatDev #chatdev #agent #llm
@qianc62
ChenQian
10 months
ChatDev just rolled out an "incremental development 🛠️" feature that lets you expand your code on existing projects. Get ready to upgrade your code and watch your projects evolve!🎈 #ChatDev #AI #Agents
@OpenBMB
OpenBMB
1 year
The Big Model Developer Salon, organized by @OpenBMB and @AIHub_startups, was successfully held today at @Metaspace coffee. We sincerely thank everyone who came today. Special thanks to our speakers @zibuyu9, @stingning, and @TsingYoga! Like this tweet if you were there!
@OpenBMB
OpenBMB
1 year
BMTools, the tool learning package launched by the team, has unified the call processes of various tools into one framework, automating the entire tool call process. Developers can use BMTools to call various tool interfaces using given models to achieve specific functions.
@OpenBMB
OpenBMB
1 year
What Can You Do Via #OpenBMBEcosystem? This is a series introducing toolkits developed by OpenBMB and embedded in the OpenBMB model ecosystem, each presented in a single image. The #toolkits are powerful, efficient, and easy to use; many are already #opensource on #github. @TsinghuaNLP @zibuyu9 #LLM
@OpenBMB
OpenBMB
1 year
After watching @GregBrockman ’s live stream, so impressed with #gpt4 ’s comprehension of images, more impressed with its rhyme poems! 😍 Guess what? OpenBMB’s model can also write such poems! Tryout our open source toolkits and CPM-Live model on Github!
@OpenBMB
OpenBMB
1 year
@TheTuringPost Try starting with our Large model ecosystem to make the training and tuning process all easier to launch! Link to the tools:
@OpenBMB
OpenBMB
5 months
🚀🚀🚀🥇🥇🥇
@natolambert
Nathan Lambert
5 months
New top open model on RewardBench: @OpenBMB /Eurus-RM-7b is trained on a new preference datasets UltraInteract and UltraSafety (from the makers of UltraFeedback). Most gains on the hardest chat category. Also decent, the newest StarChat from friends at @huggingface H4: @_lewtun ,
@OpenBMB
OpenBMB
1 year
OpenBMB's paper, "Parameter-efficient Fine-tuning of Large-scale Pre-trained Language Models", has been published in Nature Machine Intelligence as the cover article of the journal! Congratulations on the great achievement of our research team! 👏👏👏
@OpenBMB
OpenBMB
1 year
UltraLM, the large model jointly developed by Tsinghua, ModelBest and OpenBMB, was ranked No.1 among open-source models on AlpacaEval, the authoritative leaderboard from Stanford University! ✨ UltraLM:
@OpenBMB
OpenBMB
10 months
🌟
@qianc62
ChenQian
10 months
We released #ChatDev website platform that enables software developers and innovative entrepreneurs to build software efficiently at a very low cost and barrier to entry. Try it out at
@OpenBMB
OpenBMB
7 months
Thanks @AdeenaY8 for spotting our new work🌸🌟
@AdeenaY8
Adina Yakup
7 months
OmniLMM-12B & OmniLMM-3B , open access LMMs from a Chinese research lab @OpenBMB 🔥 ✨ OmniLMM-12B: Outperform in benchmarks; Trustworthy behavior; Real-time interaction. 🚀 OmniLMM-3B (MiniCPM-V): Runs on most devices, mobile included; Support both Chinese & English. 🪪 Apache
@OpenBMB
OpenBMB
1 year
🐝 CPM-Bee, the foundation model rivalling #LLaMA, is now released and open source! 📣 With billions of parameters, trained on trillions of tokens of high-quality corpora 🏅 Ranked first in the Chinese #ZeroCLUE evaluation and comparable to LLaMA in English Read more here!
@OpenBMB
OpenBMB
1 year
#OpenBMBEcosystem No.2 - BMTrain is an open-source model training toolkit that efficiently pre-trains and fine-tunes models with tens of billions of parameters in a distributed manner. The implementation builds on @PyTorch with a low barrier to entry and simple drop-in substitution.
@OpenBMB
OpenBMB
1 year
✨ Researchers from Tsinghua University, Renmin University of China, UIUC, NYU, and other famous Universities have jointly published a review paper on basic model tool learning and officially released an open-source tool learning platform, BMTools! Click the links below 👇
@OpenBMB
OpenBMB
1 year
The paper introduces the concept of tool learning, systematically outlines its technical framework, and predicts the opportunities and challenges it will face in the future.
@OpenBMB
OpenBMB
7 months
Thanks @xiaohuggg for sharing our work🌸
@OpenBMB
OpenBMB
7 months
So deeply grateful for your contribution🍗
@SDxFaith
Xiang Long
7 months
A great journey with MiniCPM-2B! We build an efficient 2B model which is similar to Mistral-7B in the comprehensive rankings. Time to try it! GitHub: Huggingface:… @OpenBMB #MiniCPM
@OpenBMB
OpenBMB
1 year
@DataChaz Can it go backwards? E.g., I give it a picture of a cake and then ask what ingredients I need to make it?
@OpenBMB
OpenBMB
11 months
ChatDev new feature +1 🎉🎉🎉
@qianc62
ChenQian
11 months
#ChatDev just leveled up with a brand new feature: Docker support 🐳. Say goodbye to worries about security and embrace the freedom to experiment fearlessly! ⚙️ Join us on this thrilling adventure to unlock a world of endless possibilities in #AIAgents .
@OpenBMB
OpenBMB
6 months
Feedback from the community about running #MiniCPM on mobile phones🌸🍗🌻🥇🚀
@OpenBMB
OpenBMB
1 year
BMTools toolkit is open-source on #github 🔗: Tool learning paper list 🔗: Start using BMTools in your development, and then star it on Github!
@OpenBMB
OpenBMB
5 months
So deeply grateful to @AndrewYNg for sharing #ChatDev 😍 GitHub:
@AndrewYNg
Andrew Ng
5 months
Multi-agent collaboration has emerged as a key AI agentic design pattern. Given a complex task like writing software, a multi-agent approach would break down the task into subtasks to be executed by different roles -- such as a software engineer, product manager, designer, QA
@OpenBMB
OpenBMB
1 year
⚙️ Supported by the OpenBMB large model system ecosystem with 30+ toolkits ✨ Completely open-source and available for commercial use 🔗 Visit and star it to build your own customized model!
@OpenBMB
OpenBMB
1 year
BMTools' front-end webpage allows developers to directly see the model's tool-usage behavior. Below is a query demo showing the model correctly assigning WolframAlpha for geography search and Weather for weather prediction, and giving detailed answers with visuals. @willdepue @ilyasut @JeffDean
@OpenBMB
OpenBMB
7 months
thanks @osanseviero for sharing our new work🌻
@osanseviero
Omar Sanseviero
7 months
OpenBMB, the creators of UltraFeedback, silently released a series of very strong edge models! - 2.4B base model close to Mistral 7B - 2.4 DPO outperforming Llama 70B on MT Bench - A 3B bilingual VLLM (+12B version RLHF VLLM) Check the models at 🚀
@OpenBMB
OpenBMB
1 year
OpenPrompt: An Open-source Framework for Prompt-learning Paper page: The paper introducing OpenPrompt won the Best Demo Paper Award at ACL 2022. @TsinghuaNLP and OpenBMB created this great tool to help developers with prompt engineering. @_akhaliq @emollick
@OpenBMB
OpenBMB
1 year
What to do on #Easter2023 ? Let OpenBMB’s CPM Bee tell you: “On Easter, we can participate in church services, family gatherings, blessings, and various cultural activities to celebrate this important religious festival.” Try asking it more questions!
@OpenBMB
OpenBMB
1 year
#OpenBMBEcosystem No.6 - BMTools is an open-source tool-learning toolkit and a platform for developers to build and share plugins. W/ BMTools, you can 1️⃣ use external @OpenAI Plugins and 2️⃣ easily customize plugins w/ Python functions! Powerful AI tool! @ylecun @karpathy @TheTuringPost
@OpenBMB
OpenBMB
1 year
The paper, Tool Learning with Foundation Models, is the theoretical base of BMTools. Paper: The paper introduces Tool Learning, outlines its technical framework, and predicts the opportunities and challenges it will face @_akhaliq @goodside @paperswithcode
@OpenBMB
OpenBMB
1 year
OpenPrompt offers unified templates, defining basic methods on top of the @PyTorch framework, compatible w/ popular deep learning frameworks and perfectly aligned w/ the @huggingface code style, greatly reducing developers' learning costs in prompt engineering @DrJimFan @goodfellow_ian
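A minimal prompt-learning pipeline in OpenPrompt's style (template, verbalizer, prompt model), following the project's public README; treat the exact class names and signatures as assumptions to check against the repository.

```python
# Sketch of OpenPrompt's template -> verbalizer -> prompt-model pipeline (per its README).
import torch
from openprompt.data_utils import InputExample
from openprompt.plms import load_plm
from openprompt.prompts import ManualTemplate, ManualVerbalizer
from openprompt import PromptForClassification, PromptDataLoader

classes = ["negative", "positive"]
dataset = [InputExample(guid=0, text_a="OpenPrompt makes prompt engineering painless.")]

plm, tokenizer, model_config, WrapperClass = load_plm("bert", "bert-base-cased")

template = ManualTemplate(text='{"placeholder":"text_a"} It was {"mask"}', tokenizer=tokenizer)
verbalizer = ManualVerbalizer(classes=classes,
                              label_words={"negative": ["bad"], "positive": ["good", "great"]},
                              tokenizer=tokenizer)
prompt_model = PromptForClassification(template=template, plm=plm, verbalizer=verbalizer)

loader = PromptDataLoader(dataset=dataset, tokenizer=tokenizer,
                          template=template, tokenizer_wrapper_class=WrapperClass)

prompt_model.eval()
with torch.no_grad():
    for batch in loader:
        preds = torch.argmax(prompt_model(batch), dim=-1)
        print([classes[p] for p in preds])
```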
@OpenBMB
OpenBMB
1 year
#OpenBMBEcosystem No.3 - OpenPrompt is a standard, extensible, open-sourced framework to deploy prompt-learning pipelines. Its newly-designed modular language gives developers freedom to combine different PLMs, task formats, and prompting modules! @karpathy @_jasonwei @AndrewYNg
@OpenBMB
OpenBMB
1 year
Based on BMTrain, our open-source ModelCenter further supports Efficient, Low-Resource, Extendable model usage and distributed training, making it easier for users to quickly use PLMs with typical model architectures like #gpt #bert #LLaMA #T5 #CPM #glm ...
@OpenBMB
OpenBMB
2 years
@alaeddine_abd @JinaAI_ @SalesforceOrg The sunset is beautiful, so is the service!
@OpenBMB
OpenBMB
1 year
@zibuyu9 Professor Zhiyuan Liu is an Associate Professor at @TsinghuaNLP. His research interests include NLP, knowledge graphs and social computation. He has published more than 200 AI papers in famous international journals and conferences, cited over 31,000 times according to Google Scholar. 🌹🌹🌹
@OpenBMB
OpenBMB
1 year
@eddiejaoude Try starting with our Large model ecosystem to make the training and tuning process all easier to launch! Link to the tools:
@OpenBMB
OpenBMB
1 year
BMTools provides a concise interface for Python-to-OpenAI-plugin conversion and a think-action tool-learning framework for scenario analysis! BMTools integrates ChatGPT and allows developers to easily add more customized tools, greatly improving development efficiency! @gdb @sama @DataChaz
@OpenBMB
OpenBMB
1 year
#OpenBMBEcosystem No.5 - OpenDelta is a toolkit for a new tuning method, Delta Tuning, which is as effective as full-parameter tuning while updating only a small fraction (under 5%) of the parameters. Easily implement prefix-tuning, adapters, LoRA and other types of delta tuning with your preferred PTMs. @chrmanning @arankomatsuzaki
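A minimal LoRA-style delta-tuning sketch in OpenDelta's style, following its public docs; the class and method names used here (`LoraModel`, `freeze_module`, `log`) are assumptions to verify against the OpenDelta repository.

```python
# Sketch only -- OpenDelta names are quoted from its docs as assumptions.
from transformers import AutoModelForSequenceClassification
from opendelta import LoraModel

backbone = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2)

# Attach LoRA delta modules to the backbone; only the small delta weights will be trained.
delta_model = LoraModel(backbone_model=backbone)
delta_model.freeze_module(exclude=["deltas", "classifier"], set_state_dict=True)
delta_model.log()  # prints the backbone structure and the trainable-parameter ratio

# `backbone` can now be fine-tuned as usual; only a few percent of parameters receive gradients.
```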
@OpenBMB
OpenBMB
1 year
@TsingYoga Yujia Qin, PhD in Computer Science @Tsinghua_Uni. OpenBMB researcher. Main author of BMTools and WebCPM.🌹🌹🌹
@OpenBMB
OpenBMB
1 year
@stingning Ning Ding, PhD in Computer Science @Tsinghua_Uni. OpenBMB researcher. Main author of OpenDelta and OpenPrompt.🌹🌹🌹
@OpenBMB
OpenBMB
1 year
🎉Our foundation model CPM-Bee reached No.3 on the #github Trending repositories list soon after being open-sourced, enabling everyone to utilize LLMs freely! Many of you have been looking forward to an official tutorial on CPM-Bee deployment and fine-tuning. Here it is, w/ data format examples!