![Adithyan Profile](https://pbs.twimg.com/profile_images/1718280724187435009/Qs85NKEY_x96.jpg)
Adithyan
@adithyan_ai
Followers
664
Following
1K
Statuses
969
Founder | https://t.co/jTw4HMnjE2 | Record your podcast, we'll take care of the rest | Full Stack Entrepreneur | 100% Bootstrapped | $10k MRR
Berlin, Germany
Joined August 2020
@labenz Okay, pleasantly surprised. That's a big recommendation, because I know the level of questions you typically ask your guests. Time to splurge that $200 and get my hands on it, I think.
0
0
1
@MarcKlingen Hey Marc 👋! Would love to. Please let me know if there are still some spots available.
1
0
0
@shaoruu @cursor_ai Agent YOLO mode running for more than 25 steps. Basically a long-running agent.
0
0
1
@HamelHusain Thx. Really appreciate you sharing this. I had been meaning to run my own experiment by splurging that $500 to get my own hands-on experience with it. But this blog pretty conclusively answers everything I wanted to test. Cursor with Agents it is for now.
0
0
1
@appakaradi Exactly... kinda exactly what I did in this situation too. Rewrote the open source codebase with AI. Probably direction and judgement are more important now (the volume of code is provided by AI anyway)?
0
0
1
@AsRa_Engineer Haha yeah.... funny thing is I thought I was good at it.. at least at work I thought I was... but apparently not.
0
0
1
@bernhardsson Hahahaha.... Very few jokes really make me go lol. This one actually did 😂. I have done this myself a few times before.
0
0
1
Prediction: In early 2024, I was recommending @cursor_ai to everyone I know. I had this feeling many devs were going to write AI-assisted code in it. It has hit widespread adoption now. And my prediction is that in 2025, @modal_labs will be the medium many people use to deploy AI-related (Python) code to infra. It's so good 🤌🤌!
0
0
0
Complexity asymmetry.
- Easy to add complexity.
- Hard to remove complexity.
- Easy to confuse introducing complexity with progress.

What ends up happening is you accumulate constant bloat that you never trim, and you keep iterating and making incremental progress on long-expired complexity, when you should be investing that effort in the next "simple" thing. And eventually you get outrun and outcompeted because of this accumulated bloat. I've seen this happen at companies I've worked in, and I see it clearly happening in my own journey now (this week I'm refactoring the codebase I wrote last year... so this thought is triggered primarily by that... but I notice it broadly in my thinking too... I have accumulated too much long-expired complexity in my thinking).

New self-written principle:
- Only invest in complexity X if the return on investment is far greater than X. The return should not just be X.
- Deciding what not to introduce is as important as what I introduce.
0
0
1
Normally, I would take that as a good hypothesis for why that's true. But in my case it's not. At a super high level there are three operations:
1. Piped input from the cloud stream, which feeds into
2. FFmpeg transformations that process the video/media on serverless functions, which are
3. Piped to the cloud stream output, to store for further operations or as the final output.

Now (2) is almost always the bottleneck in my use case, and it determines the overall "bandwidth", not (1) and (3). How do I know this? I benchmark the speed of all my FFmpeg operations, and they are almost always memory/CPU bound, not network-bandwidth bound. When I switched from S3 buckets to R2 buckets, FFmpeg's multiple of realtime processing speed (X) stayed the same. That is how I know I am not taking any performance hit, while still not paying hundreds of $ in egress costs to AWS.
0
0
0
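
For illustration, here is a minimal sketch (not the author's actual code) of the three-stage pipeline described above: stream an object in from an S3-compatible bucket (R2 exposes the same API), pipe it through FFmpeg on a worker, and stream the result back out. The endpoint, bucket, keys, and FFmpeg arguments are hypothetical placeholders.

```python
import subprocess
import threading
import boto3

# R2 is S3-compatible; the endpoint below is a hypothetical placeholder.
s3 = boto3.client("s3", endpoint_url="https://<account-id>.r2.cloudflarestorage.com")

def transcode(bucket: str, src_key: str, dst_key: str) -> None:
    # (1) Stream the source object; StreamingBody is file-like, so no temp file is needed.
    src = s3.get_object(Bucket=bucket, Key=src_key)["Body"]

    # (2) FFmpeg reads from stdin and writes to stdout. In the setup described above,
    # this stage is CPU/memory bound and therefore sets the pipeline's overall bandwidth.
    proc = subprocess.Popen(
        ["ffmpeg", "-i", "pipe:0",
         "-c:v", "libx264",
         "-f", "mp4", "-movflags", "frag_keyframe+empty_moov",  # mp4 to a non-seekable pipe
         "pipe:1"],
        stdin=subprocess.PIPE, stdout=subprocess.PIPE,
    )

    # Pump input on a separate thread so stdin writes and stdout reads don't deadlock.
    def feed() -> None:
        for chunk in iter(lambda: src.read(1 << 20), b""):
            proc.stdin.write(chunk)
        proc.stdin.close()

    t = threading.Thread(target=feed, daemon=True)
    t.start()

    # (3) Stream FFmpeg's output straight back to the bucket.
    s3.upload_fileobj(proc.stdout, bucket, dst_key)
    t.join()
    proc.wait()
```

Because FFmpeg reads stdin and writes stdout, stages (1) and (3) never touch local disk; if stage (2) is CPU/memory bound, swapping S3 for R2 should leave the realtime-speed multiple unchanged, which matches the benchmark described in the tweet.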