![Steve Yegge Profile](https://pbs.twimg.com/profile_images/1103565932843098112/i3QC7MpX_x96.png)
Steve Yegge (@Steve_Yegge)
Followers: 11K · Following: 128 · Statuses: 230 · Joined December 2011
RT @sqs: Horizontal vs. vertical AI coding agents: A horizontal AI coding agent automates a specific repetitive task that's done 100x+/day…
Has anyone else noticed that GenAI gives you Google Maps-like zooming capabilities over long-form text like articles, blogs, and books? Information zoom levels have always had to be implemented by hand. You can create an abridged edition of a book, which is like zooming out halfway-ish. Or you can get CliffsNotes or other summaries that are zoomed out, say, 90%. But it was always a ton of human labor, so most of the time, you just have the one zoom level, or a couple.

With LLMs, it's now a continuum. You can "zoom out" to any level you like, by asking the LLM to summarize the text at that level. And you can also "zoom in" by having the LLM expand on some paragraph or concept that wasn't covered in enough detail in the original article.

There are some commercial hardwired zoom levels available, like @blinkist, and they're useful today. But with those, you're stuck with whatever they feel is the right level of detail, and you have to use the same skimming tactics with them too.

I think at some point a tool will emerge that lets you dial the summarization ("zoom") level to whatever you want, for whatever you want to read, and then it's all just enhance, enhance, enhance. I mean, I'm kind of excited by the idea. I think I'll wind up (a) reading much more than ever before, and (b) wasting a lot less time skimming through weak ideas and bad writing. Thoughts?
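The dialable-zoom idea above boils down to a prompt template parameterized by a compression level. Here is a minimal sketch; the function name and the percentage-based zoom scale are hypothetical illustrations, not part of any existing tool:

```python
def zoom_prompt(text: str, zoom: int) -> str:
    """Build an LLM prompt that summarizes `text` at a given zoom level.

    zoom=100 asks for roughly full length (no compression); zoom=10 asks
    for roughly a tenth of the length, CliffsNotes-style. Illustrative only.
    """
    if not 1 <= zoom <= 100:
        raise ValueError("zoom must be between 1 and 100")
    return (
        f"Summarize the following text at roughly {zoom}% of its "
        f"original length, keeping the author's key arguments intact:\n\n"
        f"{text}"
    )
```

"Zooming in" would be the mirror image: a prompt that asks the model to expand a selected paragraph rather than compress the whole text.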
RT @RealGeneKim: I’ve mentioned how I’m starting to believe that GenAI making coding tasks faster is the least important metric. Implement…
@jordanjhamel I've been putting off porting my Gradle build to Kotlin, thousands of lines of yucky build code, for 7+ years. I knew it was going to take me at least a week, and it was never worth it. I finally did it last week with chop, and got it completely done in a day and a half.
Sigh. I'm officially abandoning Ruby for scripting, and will be porting many thousands of lines to "something else". I still like the language plenty, but the ruby+gems setup on Macs is such a shitshow that I'm done. Forever, unfortunately. @rubylangorg -- get your shit together.
@RealGeneKim -- I appreciate the kind words, and the incredible enthusiasm you bring to this space that we both love so much: the cult of dev productivity. And thank you for putting in all the hard work to create all these excerpts of our pairing session. Lots to unpack here!
Really appreciate all the incredible work @RealGeneKim put into this. We'll have a lot more to share about this in the coming days. Stay tuned!
In September, I got the chance to pair program for two hours with the legendary Steve Yegge (@Steve_Yegge), where he coached me on what he calls “CHOP, or chat-oriented programming,” and I built something that I’ve wanted to build for nearly a year. I learned how to use @SourcegraphCody to level up in ways that I couldn’t quite imagine before.

You may have seen the video excerpts I’ve made of videos and podcasts of talks I enjoyed. Here’s the first one I created, of the famous Dr. Erik Meijer (@headinthebox), where he talks about how we might be the last generation of developers to write code by hand, and that we should have fun doing it. I created the tool that makes these video excerpts during that two-hour pair programming session with Steve!

We recorded the entire session, and here I'm posting the “highlights reels,” where I show the prompts I used (coached by Steve) to create the app, and the lessons learned along the way. I can’t think of a better way to learn.

Dr. Anders Ericsson, renowned for his research on expertise and deliberate practice, wrote the fantastic book “Peak,” where he identifies the key elements for acquiring new skills and achieving mastery, whether learning to play a musical instrument, play a sport, or practice medicine. Those elements are:

- Expert coaching: you learn best when guided by an expert (that’s Steve!)
- Fast feedback: you learn best when you get immediate, actionable feedback, so you can identify and correct errors quickly, and reinforce positive behaviors (check!)
- Intentional practice: you learn best when focusing on specific tasks (let’s CHOP more, as opposed to manually typing out code!)
- Challenging tasks: you learn best when you tackle tasks slightly beyond your current abilities (check!)

I can’t overstate how much I learned in two hours.
In this thread, I post segments from that session, with some introductions, a statement of goals, and portions from the approximately 50 minutes required to build the code that uses ffmpeg to generate video excerpts with transcribed captions. (Building the app in 40 easy steps.)

It was fascinating to re-watch the recording. I’ve watched it in its entirety several times, which I found wildly entertaining. But I wanted to see if I could extract the lessons, so people wouldn’t need to watch the entire 90-minute video. I inserted video captions that describe what is going on, along with any prompts I’m giving to @SourcegraphCody / Claude / ChatGPT, so you can follow along, as well as other insights and lessons learned.

(The lower-right corner of the video shows the elapsed time. I was astonished to discover that, with Steve’s help, we had gotten the video extraction working in about 47 minutes. The remainder of the two hours was learning the tools, chit-chatting, joking around, etc.)

Among the lessons learned:

- In the beginning, my prompts were unambitious; Steve kept encouraging me to “type less, and lean on the LLM more.”
- Despite Steve saying that tools fully supporting CHOP are still a long way off, you’ll see that the interaction model becomes very evident by the end: give the LLM the relevant context, ask it to build or modify something for you, and ideally it’ll appear in place, or it’ll be something you can copy/paste into your code base.
- A key skill is breaking tasks down to make steps more concrete for the LLM, or as Steve (and many Clojure programmers) likes to say, you reify your tasks (i.e., you make them more concrete or realized).
- Having a good way to run your tests quickly becomes critical, because you often won't read the code that the LLM wrote until the tests fail.
- When tests fail, one technique is just to ask the LLM to “try it again,” but lots of human judgment is required here. Sometimes this works; other times, you’ll be iterating in circles, never getting closer to your goal.

1/??
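For anyone curious what the ffmpeg step in this thread looks like in practice, here is a minimal sketch of the kind of command such an excerpt tool might assemble. The filenames, timestamps, and subtitle file are hypothetical; the tool Gene built may construct its command differently:

```python
# Build an ffmpeg command that cuts an excerpt from a source video
# and burns subtitles from an .srt file into the frames. Illustrative only.

def excerpt_cmd(src, start, duration, srt, out):
    """Return the ffmpeg argv list for one captioned excerpt."""
    return [
        "ffmpeg",
        "-ss", start,               # seek to the excerpt's start time
        "-i", src,                  # source recording
        "-t", duration,             # length of the excerpt
        "-vf", f"subtitles={srt}",  # burn captions into the video frames
        "-c:a", "copy",             # keep the audio stream as-is
        out,
    ]

cmd = excerpt_cmd("session.mp4", "00:12:30", "00:01:45",
                  "captions.srt", "highlight-01.mp4")
# e.g. run it with: subprocess.run(cmd, check=True)
```

Because the subtitles filter re-encodes the video, the audio stream is simply copied to keep the run fast.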
RT @RealGeneKim: To file in the "all the ways I'm using LLMs to build/code things" — Inspired by @simonw's amazing stories of doing codin…
RT @DynamicWebPaige: "@Sourcegraph's use of Gemini 1.5 long-context models drastically reduced the overall hallucination rate (the generati…
RT @RealGeneKim: While traveling a couple of weeks ago, I listened to a fantastic interview of @Sourcegraph founder and CEO Quinn Slack (@s…
RT @RealGeneKim: Thank you for giving this wonderful and important talk, Steve! You can watch Steve's complete talk here (just register wit…
@RealGeneKim Thanks for the excerpts and the kind words, @RealGeneKim! And thank you for having me at your conference. What an amazing group of IT leaders there. Loved it!
I was privileged to be the guest on today's Software Misadventures podcast with Guang and Ronak, where we talk about writing, AI, and of course @SourcegraphCody. This whole episode was a lot of fun. Thanks for having me!
This looks like it will be pretty fun; I'm heading down to SF for it. And it's the first day of our Cody hackathon week!
We're hosting an AI Dev Tools Night on June 24 at the @sourcegraph offices in San Francisco! We've got an awesome list of speakers lined up, including @beyang, @itsajchan, @thedanigrant, and more. Link to RSVP in comment!