Nate Sesti

@NateSesti

Followers
3,718
Following
270
Media
7
Statuses
103

Coding @continuedev , Publicly Thinking @ , (no longer) Studying Physics @ MIT ('23)

SF
Joined July 2018
Pinned Tweet
@NateSesti
Nate Sesti
7 months
today @continuedev released v1 of tab autocomplete. it's 100% local and open-source. for the next few months i'm going to share (live, as i learn) the tricks that improve our acceptance rate. if you follow along you might learn...
3
6
46
@NateSesti
Nate Sesti
7 months
🦀 @rustlang 🐍 @ThePSF 🐫 @OCamlLang ☕️ @java 🐘 @official_php 🐦 @SwiftLang 🦎 @ziglang what could go wrong?
[image]
0
0
4
@NateSesti
Nate Sesti
6 months
⚡️ excited to share that @continuedev is releasing our open-source tab-autocomplete! ⚡️ why might you want to use it?
- local (code remains on your machine)
- customizable (change model, temp, and more)
- open-source
- and free!
here's how we built it:
1
6
26
@NateSesti
Nate Sesti
6 months
yet another obvious autocomplete update that makes a huge difference: even if truncating the completion after one line, let the model keep going. 90% of the time the user just presses enter, and then you can present them with the 2nd, 3rd, etc. lines almost immediately!
0
1
21
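The trick in the tweet above can be sketched in a few lines. This is a minimal illustration, not Continue's actual code: even when the displayed suggestion is truncated to one line, the model's stream keeps being consumed, and the later lines are buffered so they can appear instantly after the user accepts and presses enter.

```typescript
// Sketch: buffer streamed lines beyond the first so follow-up
// suggestions can be shown with no extra model round-trip.
type Completion = { shown: string; buffered: string[] };

function splitCompletion(streamed: string): Completion {
  const lines = streamed.split("\n");
  return {
    shown: lines[0],          // truncated to one line for the initial suggestion
    buffered: lines.slice(1), // surfaced one-by-one as the user presses enter
  };
}

// on accept + enter, pop the next buffered line immediately
function nextLine(c: Completion): string | undefined {
  return c.buffered.shift();
}
```

For example, `splitCompletion("foo();\nbar();\nbaz();")` shows `foo();` first while `bar();` and `baz();` wait in the buffer.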
@NateSesti
Nate Sesti
1 year
🎉 Excited to share Continue—open-source, as dev tools should be
@continuedev
Continue
1 year
Introducing Continue: the open-source coding autopilot, built to be deeply customizable and continuously learn from development data. Join us at
4
10
40
0
3
19
@NateSesti
Nate Sesti
5 months
better cross-encoders are a big deal. out of 300,000 loc, @Voyage_AI_ embeddings + reranker found 300 loc that allowed for basically the answer i'd give as the author
4
5
19
@NateSesti
Nate Sesti
6 months
editing code with @GroqInc is incredible, this is 1x speed
1
0
20
@NateSesti
Nate Sesti
10 months
We have many difficult problems to solve, but for each that we do, software becomes easier for the world to build. If you want to help tackle these, Continue is now hiring. Join us:
@continuedev
Continue
10 months
We are excited to share that we’ve raised $2.1M to make building software feel like making music!
4
4
55
1
1
15
@NateSesti
Nate Sesti
6 months
open-source is never far behind
@JustinLin610
Junyang Lin
6 months
@huybery and I are discussing reproducing Devin. Come join us and see if we can make something great together!
49
249
2K
1
0
11
@NateSesti
Nate Sesti
10 months
The problem "given a question and a directory, find the 8,192 most relevant tokens" turns out to have some depth
@continuedev
Continue
10 months
Today we’re releasing codebase retrieval! If you want to edit a large codebase but don’t know where to start, just use ⌘+⏎ and Continue will automatically pull in the relevant snippets of code.
2
1
10
1
0
9
@NateSesti
Nate Sesti
5 months
continue for jetbrains getting an upgrade 👀
0
2
11
@NateSesti
Nate Sesti
6 months
grok + @continuedev incoming...
@elonmusk
Elon Musk
6 months
This week, @xAI will open source Grok
9K
11K
92K
0
0
9
@NateSesti
Nate Sesti
7 months
the first improvement that feels magical (since originally getting autocomplete working) is adding recently edited files to context
1
0
6
@NateSesti
Nate Sesti
7 months
seeing really impressive autocomplete results from @deepseek_ai 's 1.3b model. it's become clear that a) at least 1/2 the work is in constructing the right prompt, making the model's job easier, and b) once you do this, small (local!) models will shine
0
0
5
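A large part of "constructing the right prompt" for a code model is fill-in-the-middle (FIM) formatting. The sketch below shows the general shape; the sentinel strings are placeholders, since real FIM tokens are model-specific (check the model card, e.g. for deepseek-coder, before using).

```typescript
// Assumption: these sentinel tokens are illustrative placeholders,
// not the actual tokens of any particular model.
const FIM_PREFIX = "<fim_prefix>";
const FIM_SUFFIX = "<fim_suffix>";
const FIM_MIDDLE = "<fim_middle>";

// The model is asked to generate the text that belongs between
// prefix and suffix, i.e. the code at the cursor.
function buildFimPrompt(prefix: string, suffix: string): string {
  return `${FIM_PREFIX}${prefix}${FIM_SUFFIX}${suffix}${FIM_MIDDLE}`;
}
```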
@NateSesti
Nate Sesti
4 years
Not everyone has the freedom and motivation to be idealistic. But if you do, time spent thinking long-term is underrated. Question your foundations and rigorously determine your angle.
0
1
5
@NateSesti
Nate Sesti
4 years
Twitter seems to be a mandatory counterpart to a blog, so here goes nothing... This is The End of Invisibility, and The Beginning of Infinity:
0
1
5
@NateSesti
Nate Sesti
7 months
whenever you type the opening parenthesis of a function call, @continuedev 's autocomplete will now use the language server protocol () to add the function definition to the prompt
1
0
4
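The first step of the trick above is purely textual: when `(` is typed, recover the name of the function being called. A minimal sketch (the helper name is hypothetical, not Continue's):

```typescript
// Walk backwards from the '(' over identifier characters to find
// the callee name, e.g. "myFunc" in "res = myFunc(".
function calleeBeforeParen(line: string, parenIndex: number): string | null {
  let start = parenIndex;
  while (start > 0 && /[A-Za-z0-9_$]/.test(line[start - 1])) start--;
  const name = line.slice(start, parenIndex);
  return name.length > 0 ? name : null;
}
```

The recovered name would then be resolved through the language server's "go to definition" request (`textDocument/definition` in LSP terms), and the returned definition snippet prepended to the autocomplete prompt.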
@NateSesti
Nate Sesti
6 months
a nice part of building a dev tool is coming across other dev tools and deeply appreciating the work that went into them. simplifying an interface is one that i’ve appreciated lately
0
0
4
@NateSesti
Nate Sesti
5 months
🔊
@tylerjdunn
Ty Dunn
5 months
We believe in a future where developers are amplified, not automated. Read more at
[image]
0
3
9
0
0
6
@NateSesti
Nate Sesti
7 months
at the core of most tab autocomplete systems is a tool called tree-sitter (). tree-sitter makes it fast and easy to parse abstract syntax trees in any programming language. where we've found it extremely helpful so far is by using the "ast path"
1
0
4
@NateSesti
Nate Sesti
8 months
This is why Continue helps you collect your own “development data” (dumped to a local .jsonl file). Training on fine-grained dev data isn’t currently commonplace, but there have been explorations like Google Research’s DIDACT. And the longer you’ve been
@karpathy
Andrej Karpathy
8 months
The ideal training data for an LLM is not what you wrote. It's the full sequence of your internal thoughts and all the individual edits while you wrote it. But you make do with what there is.
187
272
3K
0
0
2
@NateSesti
Nate Sesti
6 months
@Bharathi19145 @kirat_tw @continuedev Just made a quick fix; this should now be solved—let me know if not!
0
0
3
@NateSesti
Nate Sesti
10 months
@metcalfc @Ollama_ai @continuedev It should! I’ve got Continue x Ollama working on my own Windows machine with WSL. And I believe all instances of WSL would share a loopback interface
0
1
1
@NateSesti
Nate Sesti
7 months
and as usual, Continue lets you customize. any stop token can be added through config.json, and if you never want multi-line, there's an option to `disableMultilineCompletions`. 2. mis-matched brackets will be saved for another day
0
1
3
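A sketch of what such a config.json entry might look like. Only `disableMultilineCompletions` is named in the tweet; the `tabAutocompleteOptions` key and the `stopTokens` field are assumptions here, so check the current Continue docs for the exact field names.

```json
{
  "tabAutocompleteOptions": {
    "disableMultilineCompletions": true,
    "stopTokens": ["\n\n", "```"]
  }
}
```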
@NateSesti
Nate Sesti
7 months
@Nigh8w0lf @MikeBirdTech @erhartford @SourcegraphCody @ollama @continuedev I took a deeper look, solved a problem with cancelling requests. likely this is what you were experiencing, but if not feel free to dm me—would love to make sure this is squared away!
0
0
2
@NateSesti
Nate Sesti
6 months
if you want to know more, please follow along! there's much more where this came from, including topics here that i haven't yet discussed:
@NateSesti
Nate Sesti
7 months
today @continuedev released v1 of tab autocomplete. it's 100% local and open-source. for the next few months i'm going to share (live, as i learn) the tricks that improve our acceptance rate. if you follow along you might learn...
3
6
46
0
0
3
@NateSesti
Nate Sesti
2 years
Wrote a short primer on a math trick that feels underutilized:
0
1
3
@NateSesti
Nate Sesti
6 months
@erhartford @DrTBehrens @ollama @continuedev @BrianRoemmele looks like someone shared setup below, but also just dm'd! autocomplete was only in pre-release for a bit and has improved a ton since, so very possible this is what happened
0
0
3
@NateSesti
Nate Sesti
1 year
@FilterPunk @continuedev @MetaAI @replicatehq @togethercompute You can edit the `server_url` of the model you are using in the config file. For example with Ollama: `default=Ollama(model="codellama", server_url="<your_hosted_endpoint>")`. Many other options as well if you're self hosting:
0
0
3
@NateSesti
Nate Sesti
4 years
For those who don't like being wrong: be humble, not stubborn. To be correct is to systematically change your mind. Here's how:
0
0
3
@NateSesti
Nate Sesti
7 months
they won't all be this mundane, but autocomplete optimization #1 is very necessary: debouncing (modified) and a related trick...
@NateSesti
Nate Sesti
7 months
today @continuedev released v1 of tab autocomplete. it's 100% local and open-source. for the next few months i'm going to share (live, as i learn) the tricks that improve our acceptance rate. if you follow along you might learn...
3
6
46
1
0
3
@NateSesti
Nate Sesti
6 months
debouncing and reusing requests:
@NateSesti
Nate Sesti
7 months
they won't all be this mundane, but autocomplete optimization #1 is very necessary: debouncing (modified) and a related trick...
1
0
3
1
0
2
@NateSesti
Nate Sesti
4 years
What is happiness? The reading on a scale. A lake. Appreciation of a symphony.
0
0
2
@NateSesti
Nate Sesti
7 months
this finds useful code in a surprising number of cases, but we add in one more source of context. by also keeping track of recently edited ranges of code, we can prompt the language model with the exact few lines that you just got done editing
1
0
1
@NateSesti
Nate Sesti
7 months
5) data cleaning: eventually models will need to be fine-tuned. the hard part is finding the right data. what heuristics will distinguish useful training examples from that time you pressed tab just to see what happens?
1
0
1
@NateSesti
Nate Sesti
7 months
@erhartford @SourcegraphCody @ollama With @continuedev you can use any model from any provider for chat, Codellama-70b on Ollama included! And we’re releasing tab-autocomplete tomorrow (also allowing any model, and access to configure basically every setting you could want)
4
0
2
@NateSesti
Nate Sesti
7 months
between these two retrieval tricks, autocomplete feels on another level. next up will be using "go to definition" in more situations, i think likely to have similar impact
0
0
2
@NateSesti
Nate Sesti
7 months
@MikeBirdTech @erhartford @SourcegraphCody @ollama @continuedev It does! This and a lot more is editable in config.json, and you can also switch between multiple chat models with a keyboard shortcut:
1
0
1
@NateSesti
Nate Sesti
7 months
so instead of cancelling and sending another request, we just keep listening to the current one until it's invalidated. by the time i've typed 'con', the response might be complete and we can display this to the user. because i can't type 1000+ wpm, this will come in handy
1
0
1
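The invalidation check described above can be sketched as a single predicate. This is a simplified illustration of the idea, not Continue's implementation: the in-flight response stays usable as long as the user's new typing is still a prefix of what the model has generated.

```typescript
// Keep the pending request alive while the user types the same
// characters the model is already producing; invalidate on divergence.
function stillValid(generated: string, typedSinceRequest: string): boolean {
  return generated.startsWith(typedSinceRequest);
}
```

So if the model has streamed `console.log` and the user has since typed `con`, the same response is reused; typing `cat` would invalidate it.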
@NateSesti
Nate Sesti
6 months
truncating the completion:
@NateSesti
Nate Sesti
7 months
how to build tab autocomplete p4: half of the problem is just knowing when to stop!
1
0
1
1
0
1
@NateSesti
Nate Sesti
4 years
Can individuals create lasting change or achieve beyond the inevitable? Whether or not we believe so, we are right.
0
0
1
@NateSesti
Nate Sesti
7 months
next up is
- applying the same to implementations of interfaces/methods
- collecting information about surrounding variables
- and resolving imports to important snippets
0
0
1
@NateSesti
Nate Sesti
7 months
but now that we know to generate multiple lines, when do we stop? there are two main issues:
1. generating too much code
2. mis-matched brackets / parentheses / quotes
1
0
1
@NateSesti
Nate Sesti
7 months
first, we construct the abstract syntax tree (ast) for the current file and find the leaf node that contains the current cursor position. then, repeatedly moving up one parent node, we walk to the top of the tree, keeping track of all the nodes seen on the way: the "ast path"
1
0
1
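The walk described above can be sketched with just the parent links and type names a tree-sitter node exposes (`node.parent`, `node.type`); a real implementation would first locate the leaf with something like `rootNode.descendantForPosition(...)`. A minimal sketch using a mock node shape:

```typescript
// Minimal node shape: real tree-sitter nodes carry these fields too.
interface AstNode { type: string; parent: AstNode | null; }

// Walk from the leaf under the cursor up to the root, recording each
// node's type, then reverse so the path reads root-first.
function astPath(leaf: AstNode): string[] {
  const path: string[] = [];
  for (let n: AstNode | null = leaf; n !== null; n = n.parent) {
    path.push(n.type);
  }
  return path.reverse();
}
```

For a cursor inside a function body, the path might read `["program", "function_declaration", "statement_block"]`.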
@NateSesti
Nate Sesti
6 months
using the language server protocol to add function/type definitions to context:
@NateSesti
Nate Sesti
7 months
whenever you type the opening parenthesis of a function call, @continuedev 's autocomplete will now use the language server protocol () to add the function definition to the prompt
1
0
4
1
0
1
@NateSesti
Nate Sesti
7 months
this is another chance to use the "ast path". we check whether any parent node is of the "statement_block" type (a fancy way of saying you're in a `{ }` in .js/.ts) and whether the cursor is at the start of this block. if so, then we use multi-line
@NateSesti
Nate Sesti
7 months
at the core of most tab autocomplete systems is a tool called tree-sitter (). tree-sitter makes it fast and easy to parse abstract syntax trees in any programming language. where we've found it extremely helpful so far is by using the "ast path"
1
0
4
1
0
1
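The decision in the tweet above reduces to a small predicate over the ast path. A hedged sketch (the function name is hypothetical): `path` is the list of node types from root to cursor, and `atBlockStart` reflects whether the cursor sits at the start of the enclosing block.

```typescript
// Use a multi-line completion only when the cursor is at the start
// of a `{ }` body ("statement_block" is tree-sitter's name for it
// in .js/.ts grammars).
function shouldUseMultiline(path: string[], atBlockStart: boolean): boolean {
  return atBlockStart && path.includes("statement_block");
}
```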
@NateSesti
Nate Sesti
6 months
using jaccard similarity to find important snippets from recently edited files:
@NateSesti
Nate Sesti
7 months
the first improvement that feels magical (since originally getting autocomplete working) is adding recently edited files to context
1
0
6
1
0
1
@NateSesti
Nate Sesti
7 months
6-100) idk yet. either you'll find out when i do, or be kind enough to share ideas
0
0
1
@NateSesti
Nate Sesti
7 months
@MikeBirdTech @erhartford @SourcegraphCody @ollama @continuedev Personally I have Ollama running in the background. If you’re on Windows or want UI, LM Studio is fantastic. And I learned about Jan just recently, but am impressed so far
1
0
1
@NateSesti
Nate Sesti
7 months
the outcome is seen below. only two requests are sent: immediately after i type the first character (generates '// ...'), and again after i'm done typing. but there's still more to do
1
0
1
@NateSesti
Nate Sesti
7 months
what we want instead is:
1. immediately send a request after the first keystroke, but also set a variable `debouncing = true`
2. set a timer for ~300ms. when it reaches zero, set `debouncing = false`
3. whenever `debouncing === true`, use the same policy as vs code search
1
0
1
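The three steps above can be sketched as a small class. This is a simplified illustration, not Continue's code: it fires immediately on the first keystroke, then suppresses further requests until the typing has been quiet for the delay (a deferred "send once quiet" request, as in the vs code search policy, is left out for brevity).

```typescript
// Step 1: send immediately and enter the debouncing state.
// Step 2: a ~300ms timer of quiet clears the debouncing state.
// Step 3: while debouncing, suppress (and reset the timer).
class Debouncer {
  private debouncing = false;
  private timer: ReturnType<typeof setTimeout> | null = null;

  constructor(private readonly delayMs = 300) {}

  // returns true if a request should be sent right now
  onKeystroke(): boolean {
    if (this.debouncing) {
      this.resetTimer();
      return false;
    }
    this.debouncing = true;
    this.resetTimer();
    return true;
  }

  private resetTimer(): void {
    if (this.timer) clearTimeout(this.timer);
    this.timer = setTimeout(() => (this.debouncing = false), this.delayMs);
  }
}
```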
@NateSesti
Nate Sesti
7 months
step 1 is just keeping track of the most recent ~10 files. step 2 is to search over them. we use a sliding window matcher to find the range in each file that is closest to the range around the cursor
1
0
0
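The sliding window matcher above can be sketched with jaccard similarity over token sets, as the thread's title suggests. This is a minimal illustration under that assumption, not Continue's actual scoring code:

```typescript
// Tokenize on non-identifier characters into a set of symbols.
function tokenSet(code: string): Set<string> {
  return new Set(code.split(/[^A-Za-z0-9_$]+/).filter((t) => t.length > 0));
}

// Jaccard similarity: |intersection| / |union| of the two token sets.
function jaccard(a: Set<string>, b: Set<string>): number {
  let inter = 0;
  a.forEach((t) => { if (b.has(t)) inter++; });
  const union = a.size + b.size - inter;
  return union === 0 ? 0 : inter / union;
}

// Slide a window of `windowLines` lines over a recent file and return
// the start line and score of the range most similar to the query
// (the code around the cursor).
function bestWindow(fileLines: string[], query: string, windowLines = 10) {
  const q = tokenSet(query);
  let best = { start: 0, score: -1 };
  const limit = Math.max(fileLines.length - windowLines, 0);
  for (let i = 0; i <= limit; i++) {
    const window = fileLines.slice(i, i + windowLines).join("\n");
    const score = jaccard(q, tokenSet(window));
    if (score > best.score) best = { start: i, score };
  }
  return best;
}
```

Running this for every recently edited file and keeping the top-scoring ranges yields the snippets to splice into the prompt.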
@NateSesti
Nate Sesti
6 months
calculating "AST path" with tree-sitter:
@NateSesti
Nate Sesti
7 months
at the core of most tab autocomplete systems is a tool called tree-sitter (). tree-sitter makes it fast and easy to parse abstract syntax trees in any programming language. where we've found it extremely helpful so far is by using the "ast path"
1
0
4
1
0
1
@NateSesti
Nate Sesti
7 months
how to build tab autocomplete p4: half of the problem is just knowing when to stop!
1
0
1