Supermaven

@SupermavenAI

Followers
6,754
Following
21
Media
28
Statuses
295
Pinned Tweet
@SupermavenAI
Supermaven
1 month
We've trained Babble, a new model with a 1 million token context window. Babble is 2.5x larger than the previous Supermaven model and upgrades our context length from 300,000 to 1 million tokens. We are deploying the new model to all Supermaven users today.
Tweet media one
26
29
517
@SupermavenAI
Supermaven
2 months
Introducing Supermaven Chat: the best way for developers to use GPT-4o, Claude 3.5 Sonnet, and other chat models. If you've ever copy-pasted code into ChatGPT, then Supermaven Chat is for you. Save time by using Chat to upload your code while staying in your editor.
25
17
225
@SupermavenAI
Supermaven
2 months
Big update coming next week
33
0
153
@SupermavenAI
Supermaven
3 months
Our Neovim plugin is ready! Check our profile for the link to try it
Tweet media one
15
10
105
@SupermavenAI
Supermaven
9 days
VS Code extension version 1.0.5
- Adds "Generate Commit Message" button in Source Control tab (Pro users only)
- Chat window no longer needs to reload whenever it's closed and reopened
- Added checkbox to control whether "Fix with Supermaven" is shown in Quick Fixes
15
3
97
@SupermavenAI
Supermaven
3 months
We have a Neovim plugin ready (in beta). DM us or check our Discord to try it!
11
4
81
@SupermavenAI
Supermaven
4 months
We've added a free tier to Supermaven! It doesn't have our unique 300,000 token context window, but it still has a great model with unmatched latency. And it can see your current diff, so it understands what you're working on.
5
7
78
@SupermavenAI
Supermaven
5 months
The Supermaven Jetbrains plugin has been submitted to Jetbrains for review and should be ready in a couple days. We'd be grateful to anyone who wants to beta test the plugin - DM us if interested.
10
4
77
@SupermavenAI
Supermaven
1 month
The Supermaven Neovim plugin has reached 256 stars! 65,536 soon
Tweet media one
0
0
73
@SupermavenAI
Supermaven
4 months
We deployed a new scheduler that reduces latency ~3x for the most challenging subset of queries:
Tweet media one
4
2
70
@SupermavenAI
Supermaven
5 months
We've advertised our 300,000-token context window. But can Supermaven actually use all those tokens to improve its suggestions? In our new post, we evaluate Supermaven's long-context performance.
Tweet media one
8
5
69
@SupermavenAI
Supermaven
20 days
Supermaven Chat now supports GPT-4o mini! Pictured: GPT-4o (left), GPT-4o mini (right)
Tweet media one
7
2
60
@SupermavenAI
Supermaven
5 months
Supermaven 0.1.23 (VSC) makes a significant change to suggestions in the middle of the line. They will now display off to the right so your text doesn't move around as you're typing:
Tweet media one
5
3
52
@SupermavenAI
Supermaven
27 days
We're organizing a meetup in SF next Thursday, July 18, at 5:30pm! Come meet our team, talk about the future of AI for coding, and give us feedback on Supermaven. Link below.
4
4
51
@SupermavenAI
Supermaven
19 days
Supermaven writes a thread pool in C++
3
0
49
@SupermavenAI
Supermaven
1 month
Compared to our previous model, Babble has been trained on more data for longer, and is 2.5x larger. This makes the completions much better. Despite its larger size, Babble is still faster than our previous model thanks to improvements in our serving infrastructure.
1
1
47
@SupermavenAI
Supermaven
1 month
We benchmarked Babble to test its ability to use its 1 million context window. Running a "needle in a haystack" test, we find that the model can retrieve a "needle" string from anywhere in the context window, regardless of where it is hidden.
Tweet media one
1
1
42
@SupermavenAI
Supermaven
5 months
There is no longer a delay for indexing when using Supermaven with a new repository. Supermaven will now return completions immediately and run indexing in the background.
5
2
40
@SupermavenAI
Supermaven
5 months
The Supermaven Jetbrains plugin is ready (in beta). We tried to DM everyone who messaged us about it but we got rate limited by Twitter, so you can access it here:
3
3
41
@SupermavenAI
Supermaven
1 month
Babble has been deployed to all Supermaven users, and Supermaven Pro users will be able to use its 1 million token context window moving forward. We hope you enjoy using Supermaven 1.0!
2
0
39
@SupermavenAI
Supermaven
1 month
Besides Babble, we've made several improvements to Supermaven in the past month, including team billing, Supermaven Chat, and infrastructure improvements to ensure low latency. We're calling this release "Supermaven 1.0" to reflect all these improvements.
2
0
36
@SupermavenAI
Supermaven
4 months
Supermaven for VS Code 0.1.31 supports enabling/disabling Supermaven for specific languages. For example, you can disable Supermaven in Markdown files.
8
1
36
@SupermavenAI
Supermaven
4 months
We're excited to announce support for Jetbrains IDEs! Thanks to all our beta testers who have helped work out issues with the plugin over the past weeks.
Tweet media one
4
3
34
@SupermavenAI
Supermaven
19 days
@fasterthanlime We’re working to improve the UX in Zed with the help of the Zed team
2
0
35
@SupermavenAI
Supermaven
23 days
Writing a simple Objective-C program with Supermaven
4
0
34
@SupermavenAI
Supermaven
4 months
In addition to .gitignore, Supermaven now supports .supermavenignore to give you additional control over which files are indexed by Supermaven. Syntax is the same as .gitignore.
1
2
30
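Since the syntax matches .gitignore, a .supermavenignore might look like the following (the entries are hypothetical examples, not defaults):

```
# Keep generated and sensitive files out of Supermaven's index
node_modules/
dist/
*.log
secrets/
# Re-include a file excluded by a broader pattern
!important.log
```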
@SupermavenAI
Supermaven
20 days
Supermaven Chat now supports 8,192 tokens for Claude 3.5 Sonnet
@alexalbert__
Alex Albert
23 days
Good news for @AnthropicAI devs: We've doubled the max output token limit for Claude 3.5 Sonnet from 4096 to 8192 in the Anthropic API. Just add the header "anthropic-beta": "max-tokens-3-5-sonnet-2024-07-15" to your API calls.
Tweet media one
155
262
3K
3
1
29
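For reference, opting into the higher output limit via the beta header from the quoted tweet can be sketched as below. The header value comes from the announcement; the model name, endpoint, and helper function are illustrative assumptions, and the actual call would need a real API key.

```python
def build_messages_request(api_key: str, prompt: str) -> dict:
    """Build an Anthropic Messages API request that opts into the
    8,192-token output limit for Claude 3.5 Sonnet (hypothetical helper)."""
    headers = {
        "x-api-key": api_key,
        "anthropic-version": "2023-06-01",
        # Beta header from the announcement: raises max output tokens
        # for Claude 3.5 Sonnet from 4,096 to 8,192.
        "anthropic-beta": "max-tokens-3-5-sonnet-2024-07-15",
        "content-type": "application/json",
    }
    body = {
        "model": "claude-3-5-sonnet-20240620",
        "max_tokens": 8192,
        "messages": [{"role": "user", "content": prompt}],
    }
    return {
        "url": "https://api.anthropic.com/v1/messages",
        "headers": headers,
        "json": body,
    }

# With a real key, this dict could be sent with e.g.
# requests.post(**build_messages_request(api_key, "Summarize this diff"))
```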
@SupermavenAI
Supermaven
1 month
Comparing Babble to our previous model on predicting randomly shuffled Python files from the PyTorch repository, we find that it outperforms the previous model at all context lengths and also scales better with increased context.
Tweet media one
1
0
25
@SupermavenAI
Supermaven
4 months
Using Supermaven to edit Supermaven
2
3
24
@SupermavenAI
Supermaven
4 months
We made changes to the backend that speed up building prompts, giving latency improvements up to 30ms depending on your repo.
1
1
23
@SupermavenAI
Supermaven
2 months
Supermaven Chat has hotkeys for starting conversations, uploading files, requesting edits, and automatically applying them. Here's an example of using Supermaven Chat to add documentation to a function. We request the edit using Cmd+I and apply it with Esc+A.
3
0
24
@SupermavenAI
Supermaven
4 months
Starting from version 0.1.33, Supermaven is disabled by default in files ignored by .gitignore, and these files will not be sent to our servers. You can re-enable Supermaven in these files by clicking the status icon:
Tweet media one
Tweet media two
2
1
23
@SupermavenAI
Supermaven
5 months
We added a Code Policy to our website to give customers additional clarity on the protection of their proprietary data. We do not train models on customer code.
Tweet media one
1
2
22
@SupermavenAI
Supermaven
4 months
Using Supermaven to write a simple toy program. You can see we get streaming completions from the Jetbrains plugin:
8
1
21
@SupermavenAI
Supermaven
1 month
With Supermaven Chat there are no rate limits: if you exceed your free credits, you'll be asked to authorize an overage charge, and if you accept, you can keep using Claude/GPT-4 as much as you like.
6
2
21
@SupermavenAI
Supermaven
5 months
Some users on Windows reported that the Supermaven extension would keep a blank terminal window open. We reproduced the issue and it is fixed in extension version 0.1.13.
2
1
19
@SupermavenAI
Supermaven
3 months
A long context window doesn't just help Supermaven look up function definitions - it also lets us learn your personal style and write code the way you would write it.
@EvHaus
Ev Haus
3 months
@SupermavenAI is the only AI tool I've paid for. I never got much value out of GH Copilot. It's just too slow. I am able to type out what's in my head faster than what Copilot could suggest. This is not so with Supermaven. Not only is Supermaven faster, but it writes like me.
1
0
5
2
0
19
@SupermavenAI
Supermaven
2 months
Supermaven Chat complements our existing inline completion functionality - the fastest of any copilot - and is available in our VS Code plugin starting with version 0.2.10. Chat is coming to Jetbrains IDEs soon!
3
1
19
@SupermavenAI
Supermaven
2 months
Here, we upload our file and request a change to it. Supermaven Chat automatically associates the model's response with the section of our file that it's editing, allowing us to display the diff and apply the changes in one click.
1
0
18
@SupermavenAI
Supermaven
1 month
@ddunderfelt It's trained from scratch
1
1
16
@SupermavenAI
Supermaven
4 months
Do you use Vim or Neovim?
Vim
26
Neovim
111
I use neither
92
9
1
16
@SupermavenAI
Supermaven
4 months
We made changes to Supermaven's handling of large diffs that should significantly improve quality and latency when making many changes or adding large files.
0
0
16
@SupermavenAI
Supermaven
2 months
Supermaven Pro users can use Chat at no additional charge. There are no usage limits - if you exceed your monthly free credits, we'll charge you for the additional usage. If you're a Free Tier user, you can use Chat by providing your own OpenAI or Anthropic API key.
4
0
14
@SupermavenAI
Supermaven
5 months
Supermaven had an issue that would cause completions not to show in large files (200KB+). This is fixed in VS Code extension version 0.1.22.
4
0
14
@SupermavenAI
Supermaven
5 months
We fixed an issue with our inference kernels that caused corruption of model state under load, leading to incorrect/degenerate completions.
3
1
14
@SupermavenAI
Supermaven
4 months
VSC extension version 0.1.47 improves trimming of redundant lines in situations with trailing commas
1
0
13
@SupermavenAI
Supermaven
5 months
There was an issue with incorrect handling of carriage returns that caused Supermaven to fail to return completions on Windows in some cases. This issue is fixed. It is a server-side change, so no update is required.
6
0
12
@SupermavenAI
Supermaven
5 months
We changed Supermaven to be less conservative in suppressing suggestions in the middle of a line. To get these changes, update to extension version 0.1.17.
2
1
13
@SupermavenAI
Supermaven
3 months
VSC version 0.1.49 and Jetbrains version 1.24 fix an issue where Supermaven wouldn't work in repositories with no commits.
2
1
13
@SupermavenAI
Supermaven
4 months
You could be the 3rd view on our YouTube channel (uploaded because Twitter was compressing our video too much)
1
0
12
@SupermavenAI
Supermaven
22 days
Our SF meetup is in 2 days (this Thursday)! We'll provide food and drinks.
@SupermavenAI
Supermaven
27 days
We're organizing a meetup in SF next Thursday, July 18, at 5:30pm! Come meet our team, talk about the future of AI for coding, and give us feedback on Supermaven. Link below.
4
4
51
0
2
11
@SupermavenAI
Supermaven
5 months
There was an issue where Supermaven would consume a large amount of system memory when the repository contained large files. This is fixed in extension version 0.1.14. Thanks to @PhilipKung5 for reporting the issue.
1
0
10
@SupermavenAI
Supermaven
1 month
@MadLadLol No fixed limit - it sees the files you've recently opened
1
0
10
@SupermavenAI
Supermaven
19 days
@thekitze Yes, one day
0
0
9
@SupermavenAI
Supermaven
4 months
VSC extension version 0.1.48 fixes an issue where Supermaven wouldn't return completions if large files were recently committed to the repository
0
1
7
@SupermavenAI
Supermaven
5 months
A common evaluation for a long-context model is the "needle in the haystack", where a piece of text (the "needle") is hidden within a large context (the "haystack"). The model is tested on whether it can retrieve the needle after consuming the entire context.
Tweet media one
1
0
7
@SupermavenAI
Supermaven
4 months
@jullerino It's the trailing comma that's throwing it off. We can fix this
0
0
6
@SupermavenAI
Supermaven
5 months
@lino_levan It's for easy deployment and cross-platform support - most of the complicated logic is in the backend, which is all Rust.
0
0
5
@SupermavenAI
Supermaven
1 month
@MenchoRiesco We would love to support chat in Vim, but our current implementation of chat is a JS app, so it would require a full UI rewrite to support Vim
2
0
5
@SupermavenAI
Supermaven
5 months
@ArpitTambi_ It’s not supported yet, but we want to add it
0
0
5
@SupermavenAI
Supermaven
4 months
VSC extension version 0.1.35 fixes an issue where too much whitespace would be deleted before the cursor in some cases.
0
0
5
@SupermavenAI
Supermaven
5 months
@calbach_ We don’t support nvim yet, but we hope to have a plugin within a month.
1
0
4
@SupermavenAI
Supermaven
4 months
@pashynnyk @jbfja @zeddotdev Yep, we’d love to support Zed
1
0
4
@SupermavenAI
Supermaven
5 months
But the needle in the haystack task is made easier because the needle stands out from the rest of the text: to get it correct, the model only needs to remember the needle. To give a harder test, we made a "dense retrieval" benchmark.
1
0
4
@SupermavenAI
Supermaven
5 months
In this task, the model is given a sequence of key-value pairs. Each pair occurs twice. The model is scored on whether it can use the first occurrence of the pair to correctly guess the value when it sees the pair a second time.
2
0
4
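The dense-retrieval task described above can be sketched as follows. The exact pair format and prompt layout Supermaven used aren't specified in the thread, so the `key = value` encoding and the helper below are illustrative assumptions.

```python
import random

def make_dense_retrieval_prompt(num_pairs: int, seed: int = 0):
    """Build a dense-retrieval test: every key-value pair appears twice
    in a shuffled sequence. At a pair's second occurrence, the model must
    recall the value from the first, so all of the context is tested,
    not just one standout "needle"."""
    rng = random.Random(seed)
    pairs = [(f"key{i}", f"val{rng.randrange(10**6)}") for i in range(num_pairs)]
    sequence = pairs + pairs          # each pair occurs exactly twice
    rng.shuffle(sequence)

    lines, targets = [], []
    seen = set()
    for key, value in sequence:
        if key in seen:
            # Second occurrence: the value here is a prediction target.
            targets.append((len(lines), value))
        seen.add(key)
        lines.append(f"{key} = {value}")
    return "\n".join(lines), targets
```

Scoring would then measure, for each target, whether the model predicts the value correctly given the prompt up to that line, and bucket the results by the distance between the two occurrences.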
@SupermavenAI
Supermaven
3 months
@ChrisPerthen It’s on our list, we want to support it soon
0
0
4
@SupermavenAI
Supermaven
5 months
@arvind_subraman We will fix the re-suggestion issue!
0
0
4
@SupermavenAI
Supermaven
2 months
@munawwarfiroz You can drag it around like this:
1
0
3
@SupermavenAI
Supermaven
5 months
Here, we plot the dense retrieval performance as a function of the number of tokens separating the first and second occurrence of a pair. Many groups have found models are best at remembering the start and end of the context, and we reproduce this finding.
Tweet media one
1
0
3
@SupermavenAI
Supermaven
4 months
@RockzMRockz @jbfja Yes, we’d like to do this
0
0
3
@SupermavenAI
Supermaven
5 months
@1baga It’s in JetBrains review - we submitted a new version today that should resolve the last remaining issues
0
0
3
@SupermavenAI
Supermaven
5 months
@ITLSR @cursor_ai We are planning to add a chat feature in the future. Note Cursor and Supermaven are fully compatible since Cursor is a VSCode fork. You'll just want to disable Copilot++ so they don't conflict
2
0
3
@SupermavenAI
Supermaven
4 months
Needs VSC version 0.1.40 or Jetbrains version 1.22
0
0
3
@SupermavenAI
Supermaven
3 months
@JustDeeevin Will be ready soon
1
0
3
@SupermavenAI
Supermaven
5 months
@lino_levan @zeddotdev We're not actively developing a Zed plugin at the moment, but it's something we'd love to do
0
0
2
@SupermavenAI
Supermaven
5 months
@0x4ym4n We will have it next week
2
0
3
@SupermavenAI
Supermaven
5 months
Lastly, we plot prediction error as a function of token index on our internal code, averaging over several hundred orderings of the files to get a smooth result.
Tweet media one
1
0
3
@SupermavenAI
Supermaven
3 months
@ItsAyrock We don't have that plugin yet but it's almost ready
2
0
3
@SupermavenAI
Supermaven
5 months
@calbach_ @zeddotdev Thank you, we’re glad you like it. We would love to support Zed.
1
0
3
@SupermavenAI
Supermaven
5 months
@mukamuc We'll have it ready this week.
0
0
2
@SupermavenAI
Supermaven
3 months
@_jsolly We're working on it!
0
0
2
@SupermavenAI
Supermaven
3 months
@0xwendel It's just a better scheduler that avoids starving requests. The model is the same, p50/90 latency is unchanged.
0
0
2
@SupermavenAI
Supermaven
4 months
@JasonStillerma1 Yes, will be ready soon
0
0
2
@SupermavenAI
Supermaven
3 months
@JeoCryp We'll add those sorts of features in due time
1
0
2
@SupermavenAI
Supermaven
5 months
@codedbyjordan In Manage -> Keyboard Shortcuts, change 'Accept Inline Suggestion'
Tweet media one
0
0
2
@SupermavenAI
Supermaven
1 month
@GrantSlatton @paul_cal The new Supermaven model should be better at that sort of thing
0
0
2
@SupermavenAI
Supermaven
5 months
@aaronash Yes, it's something we want to support soon.
0
0
2