Evi: AI video podcasts
@geteviapp
Followers
16
Following
403
Statuses
144
Evi AI: AI-generated explainer videos just for you (for you, just for you). Health, wellness, bite-size science. All AI content is verified against CDC data.
San Francisco, USA
Joined February 2025
App is here: I'm preparing an update, mostly removing onboarding screens and the paywall. A few of my friends have the TestFlight version, and 2 (out of 12) are still using it daily. I foolishly added onboarding and an aggressive paywall, maybe that's what stops people. Also my App Store screenshots kind of suck, as does the icon :)
1
0
0
@devmakarov honestly, suggesting TikTok makes almost no sense if the target audience is the USA, I can't even download TikTok...
0
0
1
@burkov often the real scientists don't need raw money; they basically need mental freedom to do great work
0
0
1
@NizzyABI NVIDIA just released a blog post showing that R1, when left alone for 15 min, coded CUDA kernels that beat the ones their engineers wrote:
New blog post from NVIDIA: LLM-generated GPU kernels showing speedups over FlexAttention and achieving 100% numerical correctness on 🌽KernelBench Level 1
0
0
1
RT @geteviapp: @burkov Excellent clarifications! I noticed some folks in comments talk about "mathematical proof" that agents > 1 agent. It…
0
1
0
Excellent clarifications! I noticed some folks in comments talk about "mathematical proof" that N agents > 1 agent. It is indeed correct (Condorcet's jury theorem) that a group of N predictors ("agents") has an exponentially shrinking error rate as N grows, under the assumptions that: 1. each individual error rate is < 50%, 2. the predictors are independent, 3. they are not systematically biased. I guess it's quite clear that even if 1 holds, 2-3 are not so obvious (e.g. because the pretraining web dump is still the same). Empirically, splitting the original problem into tasks works visibly better (e.g. my app creates a 12-stage pipeline of LLM & search-grounding calls). I only did basic pipelining (plan -> refinement -> grounding -> "hook ideas" -> ...) where the output of the previous stages feeds into the next stage; I didn't see any benefit in calling different models or prompts (i.e. "agents") at the same stage. The idea of a "swarm" or "crew" looks a bit like a skeuomorph: why should a "legal agent" be reading my transcript in parallel with an "HR agent"??
0
1
2
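The Condorcet point above is easy to check numerically. A minimal sketch (my own toy simulation, not anything from the app): with independent predictors and per-predictor error below 50%, majority-vote error collapses as N grows; the function names here are illustrative only.

```python
import random

def majority_error_rate(n_agents, p_error, trials=20000, seed=0):
    """Estimate the majority-vote error rate of n_agents independent
    predictors, each wrong with probability p_error (Condorcet setup)."""
    rng = random.Random(seed)
    wrong = 0
    for _ in range(trials):
        # Count how many of the independent predictors err on this trial.
        errors = sum(rng.random() < p_error for _ in range(n_agents))
        if errors > n_agents // 2:  # strict majority is wrong
            wrong += 1
    return wrong / trials

# With 30% individual error, the group error shrinks fast as N grows
# (odd N avoids ties). With correlated agents this would not hold.
for n in (1, 5, 15, 51):
    print(n, round(majority_error_rate(n, 0.3), 3))
```

The catch, as noted in the thread, is assumption 2: agents built on the same pretraining data are correlated, so the real-world curve flattens well above zero.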
@basit_designs wouldn't it be more logical to put [analyse data] / [generate images] etc. somehow below the input field, so the chat is above and response prep is below... hard problem, just brainstorming, my screen is no better:
0
0
2
Can confirm, it works and the UI is nicely non-glitchy! Here's an example of it helping me create protocol buffers. I love how it follows my naming style [1]. Will you have a way to add explicit Copilot instructions in .copilotrules or similar? Also please consider *generating* such rules from my codebase so I can review them, i.e. don't make me write them from scratch like @cursor_ai does. [1] I use a "Proto" suffix for all proto messages so that various languages, especially Swift, play nicely :) ❤️🩶💚💛🧡
0
0
1