Robert Profile
Robert

@LacertaXG1

Followers: 10K
Following: 18K
Statuses: 2K

markets, LLMs, epistemology, python, heidegger. Views don't reflect the views of my employer etc. I will never be shilling a token, nor pitching a trade.

Joined June 2017
@LacertaXG1
Robert
3 years
0/9 I've spent the last few months reflecting on and writing up my approach to "second brain" note-taking. The result is Molecular Notes – I consider this thread to be the biggest alpha I have, in the sense that it explains my "metamodel" for learning.
[image attached]
21
90
446
@LacertaXG1
Robert
7 hours
@investingidiocy my main takeaway from that thread was that I really ought to think twice before doing options trades. Big knowledge gap between me and ppl who actually know this stuff. but gradatim ferociter
0
0
21
@LacertaXG1
Robert
10 hours
@bennpeifert ahhh this finally clicked. doing it by regression you get to incorporate info from all the strikes rather than explicitly fitting it to three. and this note I wrote ages ago finally makes sense to me lol (from some old Carr presentation)
[image attached]
2
3
84
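A minimal sketch of that regression step, under my reading of the GVV idea: model BSM theta as a linear combination a*gamma + b*vanna + c*volga with coefficients shared across strikes, compute the greeks from the market IV at every strike, and let least squares pick the coefficients. The r = 0 simplification, the example smile and all names below are illustrative assumptions, not the construction from the thread.

```python
import numpy as np
from scipy.stats import norm

def bsm_greeks(S, K, T, sigma):
    """BSM gamma, vanna, volga and theta, assuming r = 0 and no dividends."""
    d1 = (np.log(S / K) + 0.5 * sigma**2 * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    pdf = norm.pdf(d1)
    gamma = pdf / (S * sigma * np.sqrt(T))
    vanna = -pdf * d2 / sigma
    volga = S * pdf * np.sqrt(T) * d1 * d2 / sigma
    theta = -S * pdf * sigma / (2 * np.sqrt(T))
    return gamma, vanna, volga, theta

# Hypothetical market smile: strikes and their observed implied vols
S, T = 100.0, 0.25
strikes = np.array([80.0, 90.0, 100.0, 110.0, 120.0])
market_ivs = np.array([0.28, 0.24, 0.21, 0.20, 0.205])

# Regress theta on gamma, vanna and volga across ALL strikes,
# rather than pinning the fit to three of them
G = np.array([bsm_greeks(S, K, T, iv) for K, iv in zip(strikes, market_ivs)])
X, y = G[:, :3], G[:, 3]
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)  # shared (a, b, c)
print("fitted GVV coefficients:", coeffs)
```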
@LacertaXG1
Robert
11 hours
@bennpeifert Thanks for writing this thread! might be a stupid q, but to get the GVV fit of IV vs strike from that, I guess for each strike you plug in the BSM greeks and your coeffs to get theta at that strike, then solve for IV from that?
2
0
14
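And a minimal sketch of the step asked about here, under the same illustrative assumptions (theta modeled as a*gamma + b*vanna + c*volga, r = 0): at a given strike, plug the BSM greeks and the fitted coefficients in, then solve for the sigma that makes BSM theta match. The coefficients and bracket below are placeholders chosen so the example runs; in practice they come from the cross-strike regression.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def theta_gap(sigma, S, K, T, a, b, c):
    """BSM theta minus the GVV-implied theta a*gamma + b*vanna + c*volga (r = 0, no dividends)."""
    d1 = (np.log(S / K) + 0.5 * sigma**2 * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    pdf = norm.pdf(d1)
    gamma = pdf / (S * sigma * np.sqrt(T))
    vanna = -pdf * d2 / sigma
    volga = S * pdf * np.sqrt(T) * d1 * d2 / sigma
    theta = -S * pdf * sigma / (2 * np.sqrt(T))
    return theta - (a * gamma + b * vanna + c * volga)

# Placeholder coefficients; real ones come from regressing theta on the other greeks across strikes
S, T = 100.0, 0.25
a, b, c = -2.0, -0.5, -0.01

# The fitted IV at strike K is the sigma where the gap crosses zero
K = 110.0
fitted_iv = brentq(theta_gap, 0.05, 1.0, args=(S, K, T, a, b, c))
print(f"fitted IV at K={K}: {fitted_iv:.4f}")
```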
@LacertaXG1
Robert
23 hours
@bennpeifert tanner smiths / faces & names / bartley dunnes. Worst case there’s an Irish pub between 53rd and 54th on 7th
0
0
1
@LacertaXG1
Robert
1 day
@cam_perrault they are really cooking with the long context stuff. I think a lot of the pdf parsing startups are gonna get smoked by these multimodal models
3
0
2
@LacertaXG1
Robert
1 day
@cam_perrault o3 mini on the other hand <3 What do you make of gemini?
1
0
2
@LacertaXG1
Robert
1 day
@eightyhi Honestly I’d even settle for a metaculus style thing. I just want to be able to say I told you so, but it feels mean to tweet about individual startups I’m bearish on
1
0
1
@LacertaXG1
Robert
2 days
@therobotjames this is a nice twist on efficient inefficiency
0
0
3
@LacertaXG1
Robert
2 days
@LunixA380 @paul_cal I’m comparing vs 1.5 Pro, but yes, good point on 2.0 Pro
0
0
0
@LacertaXG1
Robert
3 days
It’s a simple and interpretable extension of OLS, and v easy to impose the types of priors that I have in practice. Like y is linear in x1, increasing monotonically in x2, and there is an interaction between x2 and x3. As opposed to transforming a variable in OLS where you have to guess the transformation. Also, it’s easy to intuitively tune regularisation
1
0
2
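The tweet doesn't name the model, but the priors it describes (linear in x1, monotonically increasing in x2, an x2-x3 interaction, hand-tunable smoothing) read like a GAM. A hypothetical sketch with pygam, purely to show how such priors are expressed as terms; this is a guess at the kind of model meant, and all data and names below are made up.

```python
import numpy as np
from pygam import LinearGAM, l, s, te

# Made-up data: linear in x1, monotonic in x2, with an x2*x3 interaction
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(500, 3))
y = 2 * X[:, 0] + np.sqrt(X[:, 1]) + X[:, 1] * X[:, 2] + rng.normal(0, 0.1, 500)

# Priors expressed directly as terms:
#   l(0)                                -> y is linear in x1
#   s(1, constraints='monotonic_inc')   -> y increases monotonically in x2
#   te(1, 2)                            -> interaction between x2 and x3
# The lam penalty on each term is the knob for tuning regularisation
gam = LinearGAM(l(0) + s(1, constraints='monotonic_inc') + te(1, 2)).fit(X, y)
print(gam.predict(X[:5]))
```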
@LacertaXG1
Robert
4 days
@salr_nyc @bennpeifert the serious answer is yes, people have either built prompting pipelines (eg llamaparse) or fine-tuned LLMs for it. But the base models are getting good enough at it, like Anthropic has an API for parsing now
0
0
4
@LacertaXG1
Robert
4 days
@weaponizedFOMO Good question. I’m not sure what to use o1 for now that there’s o3-mini-high. o1 pro seems to have some edge when it’s a deep reasoning task, but the interactivity of o3-mini relative to o1 pro is a big UX difference. Feels like you are reasoning together
0
0
1
@LacertaXG1
Robert
4 days
@bennpeifert to me, the approach you described — which is very sensible — is practically the same thing as continuous Kelly? One still needs to guess the expected return, which is art more than science, and not an art I yet have much skill at
0
0
2
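For reference (a standard textbook result, not from the thread): for a single asset with lognormal dynamics, the continuous-time Kelly fraction is f* = (mu - r) / sigma^2, so the sizing is only as good as the expected-return guess. A tiny illustration of that sensitivity, with made-up numbers:

```python
def kelly_fraction(mu, r, sigma):
    """Continuous-time Kelly fraction for one lognormal asset: f* = (mu - r) / sigma**2."""
    return (mu - r) / sigma**2

# Made-up numbers: small changes in the expected-return guess swing the sizing a lot
r, sigma = 0.04, 0.16
for mu in (0.06, 0.08, 0.10):
    print(f"mu = {mu:.2f} -> Kelly fraction {kelly_fraction(mu, r, sigma):.2f}")
```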
@LacertaXG1
Robert
6 days
@rupwalker it’s remarkable how easily Borges wanders over so many intellectual domains. In The Garden of Forking Paths, he anticipates Everett’s many-worlds interpretation of QM. Tlön, Uqbar, Orbis Tertius is also excellent
0
0
1
@LacertaXG1
Robert
7 days
In most domains, "the work of a critic is easy" – it's easier to verify that reasoning is correct than to generate correct reasoning. so it's inevitable that LLMs are going to saturate this difficulty gap, and bring us close to the frontier of knowledge.
[image attached]
1
2
12