Percy Liang Profile
Percy Liang

@percyliang

Followers
68K
Following
2K
Media
80
Statuses
1K

Associate Professor in computer science @Stanford @StanfordHAI @StanfordCRFM @StanfordAILab @stanfordnlp | cofounder @togethercompute | Pianist

Stanford, CA
Joined October 2009
@percyliang
Percy Liang
8 months
We should call models like Llama 3, Mixtral, etc. “open-weight models”, not “open-source models”. For a model to be open-source, the code and training data need to be public (good examples: GPT-J, OLMo, RedPajama, StarCoder, K2, etc.). Weights are like an exe file, which would be.
42
303
2K
@percyliang
Percy Liang
2 years
📣 CRFM announces PubMedGPT, a new 2.7B language model that achieves a new SOTA on the US medical licensing exam. The recipe is simple: a standard Transformer trained from scratch on PubMed (from The Pile) using @mosaicml on the MosaicML Cloud, then fine-tuned for the QA task.
41
324
2K
@percyliang
Percy Liang
2 years
Writing on a whiteboard can make it easier for students to follow compared to slides (especially for math). During the pandemic, I added a feature to sfig (my Javascript slides library) to allow me to reveal parts of a slide using the mouse as if I were writing on a whiteboard:
10
71
1K
@percyliang
Percy Liang
1 year
Myth: open foundation models are antithetical to AI safety. Fact: open foundation models are critical for AI safety. Here are three reasons why:
26
271
1K
@percyliang
Percy Liang
2 years
I worry about language models being trained on test sets. Recently, we emailed support@openai.com to opt out of having our (test) data be used to improve models. This isn't enough though: others running evals could still inadvertently contribute those test sets to training.
38
108
980
@percyliang
Percy Liang
2 years
RL from human feedback seems to be the main tool for alignment. Given reward hacking and the fallibility of humans, this strategy seems bound to produce agents that merely appear to be aligned, but are bad/wrong in subtle, inconspicuous ways. Is anyone else worried about this?
77
84
945
@percyliang
Percy Liang
2 months
I miss the days when we evaluated algorithms rather than models. Rather than "how well does model M do?", it should be "given data D and compute C, how well does running algorithm A on D with C do?" I don't think we can get scientific clarity unless we do the latter.
22
95
798
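A minimal sketch of the contrast above, with a hypothetical `benchmark` scoring function (not from any actual eval harness):

```python
from typing import Any, Callable

# An "algorithm" maps (data, compute budget) -> model; a benchmark scores a model.
Algorithm = Callable[[Any, int], Any]

def evaluate_model(model: Any, benchmark: Callable[[Any], float]) -> float:
    """Status quo: score a finished artifact ("how well does model M do?")."""
    return benchmark(model)

def evaluate_algorithm(algorithm: Algorithm, data: Any, compute_flops: int,
                       benchmark: Callable[[Any], float]) -> float:
    """Proposed framing: given data D and compute C, how well does running
    algorithm A on D with C do? The model is only an intermediate artifact."""
    model = algorithm(data, compute_flops)
    return benchmark(model)
```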
@percyliang
Percy Liang
2 years
Language models are becoming the foundation of language technologies, but when do they work and when don't they? In a new CRFM paper, we propose Holistic Evaluation of Language Models (HELM), a framework to increase the transparency of LMs. Holistic evaluation includes three elements:
14
200
771
@percyliang
Percy Liang
2 months
This year, I have 4 exceptional students on the academic job market, and they couldn’t be more different, with research spanning AI policy, robotics, NLP, and HCI. Here’s a brief summary of their research, along with one representative work each:
7
47
706
@percyliang
Percy Liang
3 years
Meta's release of OPT is an exciting step towards opening new opportunities for research. In general, we can think of stronger release as enabling researchers to tackle deeper questions. There are different levels of strength:
3
76
576
@percyliang
Percy Liang
2 years
ChatGPT is reactive: user says X, ChatGPT responds with Y. Risks exist but are bounded. Soon it will be tempting to have proactive systems - an assistant that will answer emails for you, take actions on your behalf, etc. Risks will then be much higher.
24
72
558
@percyliang
Percy Liang
1 year
Many "open" language models only come with released weights. In software, this is analogous to releasing a binary without code (you wouldn't call this open-source). To get the full benefits of transparency, you need the training data. GPT-J, GPT-NeoX, BLOOM, RedPajama do this.
11
84
544
@percyliang
Percy Liang
2 years
Announcing Holistic Evaluation of Language Models (HELM) v0.2.0 with updated results on the new @OpenAI, @AI21Labs, and @CohereAI models. HELM now evaluates 34 prominent language models in a standardized way on 42 scenarios x 7 metrics.
4
91
553
@percyliang
Percy Liang
2 years
I have 6 fantastic students and post-docs who are on the academic job market this year. Here is a short thread summarizing their work along with one representative paper:
11
62
512
@percyliang
Percy Liang
3 years
There are legitimate and scientifically valuable reasons to train a language model on toxic text, but the deployment of GPT-4chan lacks them. AI researchers: please look at this statement and see what you think:
69
134
481
@percyliang
Percy Liang
2 years
When will the original GPT-3 model (davinci) be old enough that its weights can be safely released? It would be very useful for science and poses no additional risks (since open models will catch up anyway). In general, all models should expire and be released eventually.
17
38
498
@percyliang
Percy Liang
1 year
My TEDAI talk from Oct 2023 is now live. It was a hard talk to give: 1. I memorized it - felt more like giving a piano recital than an academic talk. 2. I wanted it to be timeless despite AI changing fast… still ok after 3 months. Here’s what I said:
19
84
491
@percyliang
Percy Liang
2 years
No matter how good LMs get at writing, I will always want to write some things from scratch - for the same reason that I sometimes grow my own tomatoes, make my own granola, learn to play a Chopin etude - not because it's better, but because of the sheer joy of creation.
13
39
471
@percyliang
Percy Liang
3 years
Vision took autoregressive Transformers from NLP. Now, NLP takes diffusion from vision. What will be the dominant paradigm in 5 years? Excited by the wide open space of possibilities that diffusion unlocks.
@XiangLisaLi2
Xiang Lisa Li
3 years
We propose Diffusion-LM, a non-autoregressive language model based on continuous diffusions. It enables complex controllable generation. We can steer the LM to generate text with desired syntax structure ( [S [NP. VP…]]) and semantic content (name=Coupa)
Tweet media one
3
82
453
@percyliang
Percy Liang
1 year
I have 4 incredible students/post-docs on the academic job market this year. As per tradition, I'll attempt to summarize their research + one representative paper:
3
28
446
@percyliang
Percy Liang
2 years
Lack of transparency/full access to capable instruct models like GPT 3.5 has limited academic research in this important space. We make one small step with Alpaca (LLaMA 7B + self-instruct text-davinci-003), which is reasonably capable and dead simple:
@tatsu_hashimoto
Tatsunori Hashimoto
2 years
Instruction-following models are now ubiquitous, but API-only access limits research. Today, we’re releasing info on Alpaca (solely for research use), a small but capable 7B model based on LLaMA that often behaves like OpenAI’s text-davinci-003. Demo:
Tweet media one
13
83
442
@percyliang
Percy Liang
2 years
I am excited to be part of 7 NeurIPS papers on understanding and improving foundation models. We…
3
42
427
@percyliang
Percy Liang
2 years
2nd-order optimization has been around for 300+ years. We got it to scale for LLMs (it's surprisingly simple: use the diagonal + clip). Results are promising (2x faster than Adam, which halves your $$$). A shining example of why students should still take optimization courses!
@tengyuma
Tengyu Ma
2 years
Adam, a 9-yr old optimizer, is the go-to for training LLMs (eg, GPT-3, OPT, LLAMA). Introducing Sophia, a new optimizer that is 2x faster than Adam on LLMs. Just a few more lines of code could cut your costs from $2M to $1M (if scaling laws hold). 🧵⬇️
Tweet media one
19
60
418
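A rough sketch of the "diagonal + clip" idea described above, as a simplification rather than the released Sophia optimizer; `hess_diag` is assumed to be an externally supplied estimate of the Hessian diagonal, and the hyperparameter names are illustrative:

```python
import numpy as np

def sophia_like_step(theta, grad, hess_diag, state, lr=1e-4, beta1=0.9, beta2=0.99,
                     rho=1.0, eps=1e-12):
    """One step of a simplified diagonal-second-order update in the spirit of Sophia.

    Illustrative sketch only: keep EMAs of the gradient and of a diagonal Hessian
    estimate, precondition by the diagonal, and clip the per-coordinate update to
    [-rho, rho] to guard against bad curvature estimates.
    """
    m = state.get("m", np.zeros_like(theta))
    h = state.get("h", np.zeros_like(theta))
    m = beta1 * m + (1 - beta1) * grad          # EMA of gradients
    h = beta2 * h + (1 - beta2) * hess_diag     # EMA of diagonal Hessian estimate
    update = np.clip(m / np.maximum(h, eps), -rho, rho)
    state["m"], state["h"] = m, h
    return theta - lr * update, state
```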
@percyliang
Percy Liang
9 months
model = learn(data). Synthetic data is great, but it’s not data. It’s an intermediate quantity created by learn(). Data is created by people and has privacy and copyright considerations. Synthetic “data” does not - it’s internal to learn().
29
53
420
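A toy illustration of this view, assuming a trivial Gaussian "model": the only input with outside provenance is `data`, while the synthetic samples exist only inside `learn()`:

```python
import numpy as np

def learn(data: np.ndarray) -> float:
    """Toy model = learn(data): synthetic samples are created *inside* learn()
    and never leave it; only `data` (created by people) has outside provenance."""
    mu, sigma = data.mean(), data.std()                                     # fit on real data
    synthetic = np.random.default_rng(0).normal(mu, sigma, size=len(data))  # internal quantity
    # an internal augmentation step that happens to use the synthetic samples
    return float(np.concatenate([data, synthetic]).mean())

model = learn(np.array([1.0, 2.0, 3.0]))
```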
@percyliang
Percy Liang
1 year
Having a hard time keeping track of all the foundation models, upstream datasets, and downstream products that come out every day? We built ecosystem graphs to monitor these assets:
Tweet media one
6
69
383
@percyliang
Percy Liang
2 years
While instruction tuning is clearly necessary for producing usable interfaces like ChatGPT, the "magic" of language models comes from self-supervised learning on broad data, which enables emergent behavior like in-context learning and chain-of-thought.
10
51
380
@percyliang
Percy Liang
4 years
I just discovered this account I made 11 years ago. So how does one use these Twitters?
21
12
339
@percyliang
Percy Liang
2 years
One thing I really like about language models is that they are stateless (they are functional programs of type text -> text). This allows us to share prompts (essentially currying the LM) and reproduce results.
10
77
311
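A minimal sketch of that currying, using a stand-in `toy_lm` in place of a real model:

```python
from typing import Callable

# A language model viewed as a pure function of type text -> text (no hidden state).
LM = Callable[[str], str]

def toy_lm(text: str) -> str:
    """Stand-in for a real model; any deterministic text -> text function works here."""
    return text.upper()

def with_prompt(lm: LM, prompt: str) -> LM:
    """Fixing a shared prompt ("currying" the LM) yields another text -> text
    function that can be passed around and reproduced exactly."""
    return lambda user_input: lm(prompt + "\n" + user_input)

summarize = with_prompt(toy_lm, "Summarize the following text:")
print(summarize("Foundation models are trained on broad data."))
```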
@percyliang
Percy Liang
2 years
When people say GPT-3, do they mean the original GPT-3 or InstructGPT? And which version? It makes a huge difference, so it'd be nice to explicitly specify davinci, text-davinci-002, etc. when making a claim about GPT-3.
19
18
284
@percyliang
Percy Liang
6 months
Now that we have a frontier model that's open-weight (not open-source), it's time to go back to all those ambitious use cases where open-weight models failed to deliver (agents) and try again, so we can have reproducible science and not worry about API models getting deprecated.
7
28
278
@percyliang
Percy Liang
1 year
HELM v0.4.0 is out! 1) We have a new frontend (thanks to community contribution from Mike Lay). 2) We have added Mistral 7B, which really is punching above its weight (see ), rivaling models an order of magnitude larger on the 16 core scenarios:
Tweet media one
7
42
271
@percyliang
Percy Liang
2 years
LM APIs are fickle, hurting reproducibility (I was really hoping that text-davinci-003 was going to stick around for a while, given the number of papers using it). Researchers should seriously use open models (especially as they are getting better now!).
@OpenAI
OpenAI
2 years
GPT-4 API is now available to all paying OpenAI API customers. GPT-3.5 Turbo, DALL·E, and Whisper APIs are also now generally available, and we’re announcing a deprecation plan for some of our older models, which will retire beginning of 2024:
7
42
261
@percyliang
Percy Liang
3 years
1/ Benchmarks clearly have had a huge impact in AI, but I think everyone agrees that they ought to be better. How should we improve them? It depends on which of the two goals you're after:
9
39
264
@percyliang
Percy Liang
6 months
LM agents are consequential for cybersecurity, both for offense (cyberrisk) and defense (penetration testing). To measure these capabilities, we are excited to release Cybench, a new cybersecurity benchmark consisting of 40 professional Capture the Flag (CTF) tasks:
Tweet media one
9
54
258
@percyliang
Percy Liang
3 years
I want to thank each of my 113 co-authors for their incredible work - I learned so much from all of you, @StanfordHAI for providing the rich interdisciplinary environment that made this possible, and everyone who took the time to read this and give valuable feedback!.
@StanfordHAI
Stanford HAI
3 years
NEW: This comprehensive report investigates foundation models (e.g. BERT, GPT-3), which are engendering a paradigm shift in AI. 100+ scholars across 10 departments at Stanford scrutinize their capabilities, applications, and societal consequences.
Tweet media one
3
29
258
@percyliang
Percy Liang
1 year
Llama 2 was trained on 2.4T tokens. RedPajama-Data-v2 has 30T tokens. But of course the data is of varying quality, so we include 40+ quality signals. Open research problem: how do you automatically select data for pretraining LMs? Data-centric AI folks: have a field day!
@togethercompute
Together AI
1 year
We are excited to release RedPajama-Data-v2: 30 trillion filtered & de-duplicated tokens from 84 CommonCrawl dumps, 25x larger than our first dataset. It exposes a diverse range of quality annotations so you can slice & weight the data for LLM training.
Tweet media one
1
41
256
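A hedged sketch of what slicing and weighting by quality signals could look like; the field names below are illustrative, not the dataset's actual schema:

```python
import random

# Hypothetical documents with per-document quality signals, in the spirit of
# RedPajama-Data-v2 (field names here are made up for illustration).
docs = [
    {"text": "…", "word_count": 812, "frac_non_alpha": 0.04, "dup_cluster_size": 1},
    {"text": "…", "word_count": 35,  "frac_non_alpha": 0.40, "dup_cluster_size": 120},
]

def keep(doc) -> bool:
    """One possible slice: long enough, mostly alphabetic, not heavily duplicated."""
    return (doc["word_count"] >= 100
            and doc["frac_non_alpha"] <= 0.2
            and doc["dup_cluster_size"] <= 5)

def weight(doc) -> float:
    """Or weight instead of filter: upweight longer, cleaner documents."""
    return doc["word_count"] * (1.0 - doc["frac_non_alpha"])

filtered = [d for d in docs if keep(d)]
resampled = random.choices(docs, weights=[weight(d) for d in docs], k=1)
```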
@percyliang
Percy Liang
1 year
The goal is simple: a robust, scalable, easy-to-use, and blazing fast endpoint for open models like Llama 2, Mistral, etc. The implementation is anything but. Super impressed with the team for making this happen! And we're not done yet. If you're interested, come talk to us.
@togethercompute
Together AI
1 year
Announcing the fastest inference available anywhere. We released FlashAttention-2, Flash-Decoding, and Medusa as open source. Our team combined these techniques with our own optimizations and we are excited to announce the Together Inference Engine.
5
38
257
@percyliang
Percy Liang
1 year
As capabilities of foundation models are waxing, *transparency* is waning. How do we quantify transparency? We introduce the Foundation Models Transparency Index (FMTI), evaluating 10 foundation model developers on 100 indicators.
Tweet media one
11
69
242
@percyliang
Percy Liang
3 years
Foundation models (e.g., GPT-3) demonstrate emergence, where small models perform as well as random guessing on some task (e.g., addition), but large models obtain non-trivial error rates. Is there a much simpler learning problem that also exhibits emergence?
12
18
223
@percyliang
Percy Liang
4 years
Executable papers on CodaLab Worksheets are now linked from pages thanks to a collaboration with @paperswithcode! For example:
Tweet media one
1
42
223
@percyliang
Percy Liang
10 months
Most leaderboards just give you scores, leaving one wondering: what does 76.8% mean? In HELM, we are committed to full transparency, meaning clicking on a score will reveal the full set of instances, and you can even inspect the exact prompt (which we know makes a big
6
33
228
@percyliang
Percy Liang
4 months
Position: When a foundation model developer reports a test score, they should report the corresponding train-test overlap. Does this happen? Based on public documentation, only 9/30 language models report train-test overlap for the test sets they report on (or have open data).
Tweet media one
9
46
217
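One plausible way to compute such an overlap statistic, sketched with a simple n-gram match; the 13-gram window is an illustrative choice, not a claim about any particular developer's methodology:

```python
def ngrams(text: str, n: int = 13) -> set:
    toks = text.split()
    return {" ".join(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def train_test_overlap(train_docs, test_examples, n: int = 13) -> float:
    """Fraction of test examples sharing at least one n-gram with the training corpus."""
    train_grams = set()
    for doc in train_docs:
        train_grams |= ngrams(doc, n)
    contaminated = sum(1 for ex in test_examples if ngrams(ex, n) & train_grams)
    return contaminated / max(len(test_examples), 1)
```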
@percyliang
Percy Liang
11 months
Open or closed foundation models? This is one of the most important, contentious questions in AI today. Important because it will determine structurally how AI will be developed, and contentious because we don’t have a shared framework. We offer guidance on this in a new paper:
Tweet media one
6
39
207
@percyliang
Percy Liang
9 months
HELM Lite v1.2.0 is out! Datasets: NarrativeQA, NaturalQA, OpenbookQA, MMLU, MATH, GSM8K, LegalBench, MedQA, WMT14. Results (we still need to add Claude 3, which requires more prompt finagling):
Tweet media one
9
40
206
@percyliang
Percy Liang
9 months
MMLU is the standard LM evaluation but model developers (i) use different prompting strategies and (ii) often do not release prompts. 3rd-party researchers often obtain lower scores 🤯. 📢 HELM MMLU uses simple, standardized prompts, resulting in fair, reproducible comparisons of
Tweet media one
14
28
206
@percyliang
Percy Liang
10 months
As expected, lots of new models in the last few weeks. We're tracking them (along with datasets and applications) in the ecosystem graphs:
Tweet media one
4
51
203
@percyliang
Percy Liang
3 months
How closely can LM agents simulate people? We interview person P for 2 hours and prompt an LM with the transcript, yielding an agent P'. We find that P and P' behave similarly on a number of surveys and experiments. Very excited about the applications; this also forces us to think.
@joon_s_pk
Joon Sung Park
3 months
Simulating human behavior with AI agents promises a testbed for policy and the social sciences. We interviewed 1,000 people for two hours each to create generative agents of them. These agents replicate their source individuals’ attitudes and behaviors. 🧵
Tweet media one
7
35
199
@percyliang
Percy Liang
1 year
What if whenever an API model is deprecated (presumably because it's not relevant commercially), its model weights are released so that researchers can continue to do reproducible science?.
9
14
169
@percyliang
Percy Liang
2 years
The two most surprising things to me were that the trained Transformer could exploit sparsity like LASSO and that it exhibits double descent. How on earth is the Transformer encoding these algorithmic properties, and how did it just acquire them through training?
@tsiprasd
Dimitris Tsipras
2 years
LLMs can do in-context learning, but are they "learning" new tasks or just retrieving ones seen during training? w/ @shivamg_13, @percyliang, & Greg Valiant we study a simpler Q: Can we train Transformers to learn simple function classes in-context? 🧵
2
30
172
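A sketch of how such an in-context instance might be constructed for a linear function class (my reconstruction of the setup, not the paper's code): a hidden linear function, k labeled points as context, and a held-out query the model must predict.

```python
import numpy as np

def sample_linear_icl_example(d: int = 8, k: int = 20, seed: int = 0):
    """Build one in-context learning instance for the class of linear functions."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=d)                        # the hidden function to learn in-context
    xs = rng.normal(size=(k + 1, d))              # k context points + 1 query
    ys = xs @ w
    context = [(xs[i], ys[i]) for i in range(k)]  # fed to the Transformer as a sequence
    query_x, query_y = xs[k], ys[k]               # model should predict query_y
    return context, query_x, query_y

context, qx, qy = sample_linear_icl_example()
```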
@percyliang
Percy Liang
4 years
…where I will attempt to compress all of my students' work on robust ML in the last 3 years into 40 minutes. We'll see how that goes.
@trustworthy_ml
Trustworthy ML Initiative (TrustML)
4 years
1/ 📢 Registration now open for Percy Liang's (@percyliang) seminar this Thursday, Oct 29 from 12 pm to 1.30 pm Eastern Time! 👇🏾. Register here: #TrustML #MachineLearning #ArtificialIntelligence #DeepLearning
Tweet media one
2
18
167
@percyliang
Percy Liang
2 years
Holistic Evaluation of Language Models (HELM) v0.2.2 is updated with results from @CohereAI's command models and @Aleph__Alpha's Luminous models. Models are definitely getting better on average, but improvements are uneven.
6
39
164
@percyliang
Percy Liang
1 year
We just updated *ecosystem graphs* with the latest datasets, models, and products:
Tweet media one
7
38
153
@percyliang
Percy Liang
2 years
Blog: GitHub: Model:
5
18
153
@percyliang
Percy Liang
2 years
In HELM, we evaluated language models. Now, we evaluate organizations that build language models. Just like model evaluations incentivize improvement in model quality, we hope that these evaluations will incentivize improvement in development and deployment practices.
Tweet media one
6
38
147
@percyliang
Percy Liang
1 year
Until now, HELM has evaluated LMs on short responses, where evaluation is simple. We now introduce HELM Instruct, which evaluates open-ended instruction following. We evaluate 4 models on 7 scenarios using 4 evaluators against 5 criteria:
Tweet media one
5
35
151
@percyliang
Percy Liang
8 months
Why this confusion? First, because our standards for openness in AI are so low. The status quo for frontier models is API access, so we cheer when we can get our hands on weights.
1
7
150
@percyliang
Percy Liang
2 years
My favorite detail about @nelsonfliu's evaluation of generative search engines is that he takes queries from Reddit ELI5 as soon as they are posted and evaluates them in real time. This ensures the test set was not trained on (or retrieved from).
4
16
150
@percyliang
Percy Liang
1 year
First, open models enable a tremendous amount of (badly needed) safety research, which requires full access to model weights (ideally with training data). API access is insufficient.
2
9
147
@percyliang
Percy Liang
2 years
Interested in building and benchmarking LLMs and other foundation models in a vibrant academic setting? @StanfordCRFM is hiring research engineers! Here are some things that you could be a part of:
2
39
145
@percyliang
Percy Liang
1 year
Announcing HELM lite v1.0.0, a revamp of the HELM classic benchmark, built on the same modular HELM framework. New scenarios: LegalBench (law), MedQA (medicine), WMT2014 (machine translation). New models: GPT-4, Claude, PaLM 2, Mixtral, Yi.
5
27
148
@percyliang
Percy Liang
2 years
This is the dream: having a system whose action space is universal (at least in the world of bits). And with foundation models, it is actually possible now to produce sane predictions in that huge action space. Some interesting challenges:
@AdeptAILabs
Adept
2 years
1/7 We built a new model! It’s called Action Transformer (ACT-1) and we taught it to use a bunch of software tools. In this first video, the user simply types a high-level request and ACT-1 does the rest. Read on to see more examples ⬇️
2
16
144
@percyliang
Percy Liang
3 years
The term "foundation model" and its motivation unfortunately continues to be misunderstood. We wrote a blog post last year (see "Naming" section of which aims to explain our thought process. Some selected quotes from the post:.
4
21
142
@percyliang
Percy Liang
4 years
Excited to see what kind of methods the community will come up with to address these realistic shifts in the wild! Also, if you are working on a real-world application and encounter distributional shifts, come talk to us!.
@shiorisagawa
Shiori Sagawa
4 years
We're excited to announce WILDS, a benchmark of in-the-wild distribution shifts with 7 datasets across diverse data modalities and real-world applications. Website: Paper: Github: Thread below. (1/12)
Tweet media one
2
8
142
@percyliang
Percy Liang
8 months
But for open science, we really need open-source models. How do you interpret test accuracies without knowledge of the training data? How can we understand model capabilities without knowing what the sources are? While open-weight models are hugely enabling, we also risk building.
2
11
136
@percyliang
Percy Liang
1 year
2021: let's increase model size!
2023: let's increase FLOPs!
2025: let's increase ???!
Shouldn't FLOPs be in the denominator rather than the numerator? The numerator should be some measure of capability+safety. We need better evals to capture this!
7
10
131
@percyliang
Percy Liang
2 years
These powerful foundation models will be deployed to billions of people soon, which means there will be economic incentives for bad actors to start messing around. So we better figure out security for foundation models soon.
3
15
130
@percyliang
Percy Liang
2 years
A better solution would be to have all the LM providers agree on a common repository of examples that should be excluded from any training run.
5
3
130
@percyliang
Percy Liang
3 years
Should powerful foundation models (FMs) be released to external researchers? Opinions vary. With @RishiBommasani @KathleenACreel @robreich, we propose creating a new review board to develop community norms on release to researchers:
5
30
131
@percyliang
Percy Liang
2 years
What is the largest fully reproducible language model? That is, where I can get the data and code and run a sequence of commands that deterministically produces the exact model?
6
5
130
@percyliang
Percy Liang
7 months
HELM MMLU v1.5.0 is out. Claude 3.5 Sonnet takes the top position.
Tweet media one
1
24
128
@percyliang
Percy Liang
3 years
The Stanford Center for Research on Foundation Models (CRFM) is looking for a research engineer to join our development team! Interested in large-scale training / being immersed in an interdisciplinary research environment? Please apply!
0
39
129
@percyliang
Percy Liang
9 months
How should you prompt an LM for MMLU? (You could say MMLU is contaminated/saturated and we should just use vibes, but that’s a separate conversation. As long as people are bragging about their MMLU scores, we should make sure we know what these scores mean). Two extremes:
6
25
125
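For concreteness, one simple fixed prompt format for MMLU-style multiple-choice questions, with optional few-shot examples; this is illustrative only, not necessarily the exact template HELM uses:

```python
def format_question(question: str, choices: list[str]) -> str:
    """Render a question with lettered answer choices and an 'Answer:' cue."""
    lines = [f"Question: {question}"]
    lines += [f"{letter}. {choice}" for letter, choice in zip("ABCD", choices)]
    lines.append("Answer:")
    return "\n".join(lines)

def mmlu_prompt(question: str, choices: list[str], few_shot=()) -> str:
    """One fixed, standardized prompt: few-shot examples (question, choices, answer
    letter) followed by the test question."""
    parts = [format_question(q, ch) + f" {ans}\n" for q, ch, ans in few_shot]
    parts.append(format_question(question, choices))
    return "\n".join(parts)
```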
@percyliang
Percy Liang
3 years
Excited about the workshop that @RishiBommasani and I are co-organizing on foundation models (the term we're using to describe BERT, GPT-3, CLIP, etc. to highlight their unfinished yet important role). Stay tuned for the full program!
@StanfordHAI
Stanford HAI
3 years
AI is undergoing a sweeping paradigm shift with models (e.g., GPT-3) trained at immense scale, carrying both major opportunities and serious risks. Experts from multiple disciplines will discuss at our upcoming workshop on Aug. 23-24:
Tweet media one
0
32
124
@percyliang
Percy Liang
3 months
We need 3rd party evals/audits of AI systems. How can we do this technically? What are best practices for disclosure? How can AI researchers be legally protected? If you're interested in these questions, join our Oct 28 workshop. RSVP: Details:
Tweet media one
6
21
125
@percyliang
Percy Liang
3 years
Join us tomorrow (Wed) at 12pm PT to discuss the recent statement from @CohereAI @OpenAI @AI21Labs on best practices for deploying LLMs with @aidangomezzz @Miles_Brundage @Udi73613335. Please reply to this Tweet with questions!
12
41
124
@percyliang
Percy Liang
1 year
Third, open models can of course be misused. But it's far better for society to strengthen its ability to defend against misuse (before the stakes get higher), rather than be blindsided in case of a future model leak or new vulnerability.
1
14
118
@percyliang
Percy Liang
1 year
In Dec 2022, we released HELM for evaluating language models. Now, we are releasing HEIM for text-to-image models, building on the HELM infrastructure. We're excited to do more in the multimodal space!
@michiyasunaga
Michi Yasunaga
1 year
Text-to-image models like DALL-E create stunning images. Their widespread use urges transparent evaluation of their capabilities and risks. 📣 We introduce HEIM: a benchmark for holistic evaluation of text-to-image models.(in #NeurIPS2023 Datasets). [1/n]
Tweet media one
4
23
117
@percyliang
Percy Liang
2 years
But this might not be enough either: if we want to measure cross-task generalization, we have to ensure that no examples of a task/domain are represented in the training data. This is essentially impossible.
9
6
111
@percyliang
Percy Liang
3 years
New blog post reflecting on the last two months since our center on #foundationmodels (CRFM) was launched out of @StanfordHAI:
2
35
111
@percyliang
Percy Liang
1 year
Finally, will foundation models become so powerful that they pose catastrophic risks? No one truly knows (though everyone seems to have an opinion). But if it is the case, I'd say: let's not build it at all.
5
9
106
@percyliang
Percy Liang
3 years
1/ @ChrisGPotts and I gave back to back talks last Friday at an SFI workshop giving complementary (philosophical and statistical, respectively) views on foundation models and grounded understanding.
1
15
106
@percyliang
Percy Liang
2 years
One assistant's behavior will affect others, which will then affect others, etc. This is the same type of virality that exists in social media and Internet worms (which operate at frightening speed).
4
7
106
@percyliang
Percy Liang
8 months
Also, there are two benefits of open-source: transparency (ability to comprehend) and extensibility (ability to modify). For software, you need open-source to have both. For AI models, you can get some extensibility with only weights via fine-tuning, so that is good enough for.
2
2
105
@percyliang
Percy Liang
1 year
Second, open models offer transparency and auditability. Much of the Internet is based on open-source software (Linux, Apache, MySQL) and as a result is more secure.
1
9
103
@percyliang
Percy Liang
1 year
Given the ease of jailbreaking to bypass safety controls, it's clear we have poor understanding and control over current models. Open models expose this! Let's fix it (research required) before we build our entire critical infrastructure out of duct tape.
1
6
102
@percyliang
Percy Liang
5 months
How does o1-preview do on Cybench ( ), the professional CTF benchmark we released last month? The results are a bit mixed…
@percyliang
Percy Liang
6 months
LM agents are consequential for cybersecurity, both for offense (cyberrisk) and defense (penetration testing). To measure these capabilities, we are excited to release Cybench, a new cybersecurity benchmark consisting of 40 professional Capture the Flag (CTF) tasks:
Tweet media one
4
22
107
@percyliang
Percy Liang
1 year
Structured access for "trusted" actors helps, but still limits the diversity of voices who can participate. There is already too much disparity in terms of access to technology, and many innovations do come from grassroots efforts.
1
7
100
@percyliang
Percy Liang
8 months
HELM MMLU v1.4.0 is out! Lots of new models added: Yi Large, OLMo 1.7, Command R(+), Gemini 1.5 {Flash,Pro}, Mistral Instruct v0.3, GPT-4 Turbo 2024-04-09, Qwen {1.5, 2}. Usual suspects at the top; notably, the top open-weight model is now Qwen 2 Instruct, surpassing Llama 3.
Tweet media one
8
18
100
@percyliang
Percy Liang
2 years
I would not say that LMs *have* opinions, but they certainly *reflect* opinions represented in their training data. OpinionsQA is an LM benchmark with no right or wrong answers. It's rather the *distribution* of answers (and divergence from humans) that's interesting to study.
@tatsu_hashimoto
Tatsunori Hashimoto
2 years
We know that language models (LMs) reflect opinions - from internet pre-training, to developers and crowdworkers, and even user feedback. But whose opinions actually appear in the outputs? We make LMs answer public opinion polls to find out:
Tweet media one
0
20
97
@percyliang
Percy Liang
2 months
Joon Park (@joon_s_pk) leverages language models to build generative agents that simulate people, potentially paving the way for a new kind of tool to study human behavior. Here’s his latest ambition, LM agent simulations of 1000 real individuals:
1
9
98
@percyliang
Percy Liang
3 years
With @MinaLee__ @fabulousQian, we just released a new dataset consisting of detailed keystroke-level recordings of people using GPT-3 to write. Lots of interesting questions you can ask now around how LMs can be used to augment humans rather than replace them.
@MinaLee__
Mina Lee
3 years
CoAuthor: Human-AI Collaborative Writing Dataset #CHI2022. 👩‍🦰🤖 CoAuthor captures rich interactions between 63 writers and GPT-3 across 1445 writing sessions. Paper & dataset (replay): .Joint work with @percyliang @fabulousQian 🙌
1
14
96
@percyliang
Percy Liang
10 months
We often grab whatever compute we can get - GPUs, TPUs. Levanter now allows you to train on GPUs, switch to TPUs half-way through, and switch back, maintaining 50-55% MFU on either hardware. And, with full reproducibility, you pick up training exactly where you left off!
@dlwh
David Hall
10 months
I like to talk about Levanter’s performance, reproducibility, and scalability, but it’s also portable! So portable you can even switch from TPU to GPU in the middle of a run, and then switch back again!
Tweet media one
1
11
95
@percyliang
Percy Liang
9 months
HELM is now multimodal! In addition to evaluating language models and text-to-image models, we now evaluate vision-language models.
@tonyh_lee
Tony Lee
9 months
📢 HELM now supports VLM evaluation to evaluate VLMs in a standardized and transparent way. We started with 6 VLMs on 3 scenarios: MMMU, VQAv2 and VizWiz. Stay tuned for more - this is v1! ✍️ Blog post: 💯 Raw predictions/results:
Tweet media one
1
15
94
@percyliang
Percy Liang
2 years
Modern Transformer expressivity + throwback word2vec interpretability. Backpack's emergent capabilities come from making the model less expressive (not more), creating bottlenecks that force the model to do something interesting.
@johnhewtt
John Hewitt
2 years
#acl2023! To understand language models, we must know how activation interventions affect predictions for any prefix. Hard for Transformers. Enter: the Backpack. Predictions are a weighted sum of non-contextual word vectors. -> predictable interventions!
2
26
90
@percyliang
Percy Liang
10 months
@dlwh has been leading the effort at @StanfordCRFM on developing levanter, a production-grade framework for training foundation models that is legible, scalable, and reproducible. Here’s why you should try it out for training your next model:
1
22
94
@percyliang
Percy Liang
10 months
Agree that rigor is undervalued - not shiny enough for conferences, takes time and resources. MM1 is a commendable example; @siddkaramcheti's Prismatic work is similar in spirit. Other exemplars? T5 paper is thorough, Pythia has been a great resource.
@NandoDF
Nando de Freitas
10 months
There appears to be a mismatch between publishing criteria in AI conferences and "what actually works". It is easy to publish new mathematical constructs (e.g. new models, new layers, new modules, new losses), but as Apple's MM1 paper concludes: 1. Encoder Lesson: Image
Tweet media one
3
13
93
@percyliang
Percy Liang
2 years
Details: We took Hugging Face’s Transformer implementation, added FlashAttention, built our own tokenizer, and trained over 300B tokens (110 GB text) on 128 A100 GPUs for ~6.25 days. We did full fine-tuning on downstream tasks (e.g., MedQA-USMLE) for evaluation.
2
6
91
@percyliang
Percy Liang
9 months
HELM MMLU v1.3.0 is out. We've added GPT-4o, Gemini 1.5 Flash, and Palmyra-X v3 - all 3 made it into the top 10. Click on the numbers to drill down into the predictions.
Tweet media one
12
25
90
@percyliang
Percy Liang
1 year
I’m excited to partner with @MLCommons to develop an industry standard for AI safety evaluation based on the HELM framework: We are just getting started, focusing initially on LMs. Here’s our current thinking:
1
17
92
@percyliang
Percy Liang
1 year
There are other ways to build technology that is beneficial to society, rather than going down a path that leads to national security-style gating of this general purpose technology to a privileged few.
1
12
86