Robert (@LacertaXG1)
Followers: 10K · Following: 18K · Statuses: 2K
markets, LLMs, epistemology, python, heidegger. Views don't reflect those of my employer, etc. I will never be shilling a token, nor pitching a trade.
Joined June 2017
@investingidiocy my main takeaway from that thread was that I really ought to think twice before doing options trades. Big knowledge gap between me and ppl who actually know this stuff. but gradatim ferociter
0 replies · 0 reposts · 21 likes
@bennpeifert ahhh this finally clicked. doing it by regression you get to incorporate info from all the strikes rather than explicitly fitting it to three. and this note I wrote ages ago finally makes sense to me lol (from some old Carr presentation)
2 replies · 3 reposts · 84 likes
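A rough sketch of the regression version as described above: regress BSM theta on BSM gamma, vanna and volga across every strike, rather than solving exactly from three. Zero rates, a made-up smile and a toy spot/expiry throughout, purely to show the mechanics.

    import numpy as np
    from scipy.stats import norm

    S, T = 100.0, 0.25  # toy spot and expiry; rates and divs assumed zero
    strikes = np.array([80, 85, 90, 95, 100, 105, 110, 115, 120], dtype=float)
    ivs = np.array([0.32, 0.29, 0.265, 0.245, 0.23, 0.225, 0.228, 0.235, 0.245])  # made-up smile

    def bsm_greeks(K, sigma):
        """BSM gamma, vanna, volga and theta with r = q = 0."""
        sqt = np.sqrt(T)
        d1 = (np.log(S / K) + 0.5 * sigma**2 * T) / (sigma * sqt)
        d2 = d1 - sigma * sqt
        pdf = norm.pdf(d1)
        gamma = pdf / (S * sigma * sqt)
        vanna = -pdf * d2 / sigma
        volga = S * pdf * sqt * d1 * d2 / sigma
        theta = -S * pdf * sigma / (2 * sqt)
        return gamma, vanna, volga, theta

    # regress theta on (gamma, vanna, volga) across all strikes, no intercept;
    # the three coefficients replace the exact solve from three chosen strikes
    gamma, vanna, volga, theta = bsm_greeks(strikes, ivs)
    X = np.column_stack([gamma, vanna, volga])
    coeffs, *_ = np.linalg.lstsq(X, theta, rcond=None)
    print("fitted (gamma, vanna, volga) coefficients:", coeffs)
    print("theta residual at each strike:", theta - X @ coeffs)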
@bennpeifert Thanks for writing this thread! might be a stupid q, but to get the GVV fit of IV vs strike from that, I guess for each strike you plug in the BSM greeks and your coeffs to get theta at that strike, then solve for IV from that?
2 replies · 0 reposts · 14 likes
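In toy form, the recipe as asked about above, on the same made-up smile with zero rates: get the coefficients by regression, plug the market greeks and the coefficients in to get a fitted theta at each strike, then invert BSM theta for the IV consistent with it. (A fully self-consistent fit would re-evaluate the greeks at the candidate vol inside the root-find.)

    import numpy as np
    from scipy.optimize import brentq
    from scipy.stats import norm

    S, T = 100.0, 0.25  # toy spot and expiry; rates and divs assumed zero
    strikes = np.array([80, 85, 90, 95, 100, 105, 110, 115, 120], dtype=float)
    ivs = np.array([0.32, 0.29, 0.265, 0.245, 0.23, 0.225, 0.228, 0.235, 0.245])  # made-up smile

    def bsm_greeks(K, sigma):
        """BSM gamma, vanna, volga and theta with r = q = 0."""
        sqt = np.sqrt(T)
        d1 = (np.log(S / K) + 0.5 * sigma**2 * T) / (sigma * sqt)
        d2 = d1 - sigma * sqt
        pdf = norm.pdf(d1)
        gamma = pdf / (S * sigma * sqt)
        vanna = -pdf * d2 / sigma
        volga = S * pdf * sqt * d1 * d2 / sigma
        theta = -S * pdf * sigma / (2 * sqt)
        return gamma, vanna, volga, theta

    # step 1: coefficients from the regression across all strikes
    gamma, vanna, volga, theta = bsm_greeks(strikes, ivs)
    X = np.column_stack([gamma, vanna, volga])
    coeffs, *_ = np.linalg.lstsq(X, theta, rcond=None)

    # step 2: fitted theta per strike from the market greeks and coefficients,
    # then root-find the IV whose BSM theta matches it
    for K, iv_mkt, theta_hat in zip(strikes, ivs, X @ coeffs):
        iv_fit = brentq(lambda sig: bsm_greeks(K, sig)[3] - theta_hat, 1e-3, 3.0)
        print(f"K={K:5.0f}  market IV={iv_mkt:.3f}  GVV-fitted IV={iv_fit:.3f}")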
@bennpeifert tanner smiths / faces & names / bartley dunnes. Worst case there’s an Irish pub between 53rd and 54th on 7th
0 replies · 0 reposts · 1 like
@cam_perrault they are really cooking with the long context stuff. I think a lot of the pdf parsing startups are gonna get smoked by these multimodal models
3 replies · 0 reposts · 2 likes
@eightyhi Honestly I’d even settle for a Metaculus-style thing. I just want to be able to say I told you so, but it feels mean to tweet about individual startups I’m bearish on
1 reply · 0 reposts · 1 like
It’s a simple and interpretable extension of OLS, and v easy to impose the types of priors that I have in practice. Like y is linear in x1, increasing monotonically in x2, and there is an interaction between x2 and x3. As opposed to transforming a variable in OLS where you have to guess the transformation. Also, it’s easy to intuitively tune regularisation
1 reply · 0 reposts · 2 likes
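Toy illustration of the kind of priors meant above, hand-rolled as a bounded least-squares problem rather than with any particular library (the data, knots and penalty weight are all made up): y linear in x1, monotone increasing in x2 via non-negative interval slopes, an explicit x2*x3 interaction, and a single ridge knob on the slopes for regularisation.

    import numpy as np
    from scipy.optimize import lsq_linear

    rng = np.random.default_rng(0)

    # toy data matching the priors: linear in x1, monotone increasing in x2,
    # and an x2*x3 interaction
    n = 400
    x1 = rng.normal(size=n)
    x2 = rng.uniform(0, 10, size=n)
    x3 = rng.normal(size=n)
    y = 1.5 * x1 + 3.0 * np.log1p(x2) + 0.8 * x2 * x3 + rng.normal(scale=0.5, size=n)

    # piecewise-linear basis in x2 whose coefficients are the slopes on each
    # knot interval; constraining the slopes >= 0 forces monotone increasing
    knots = np.linspace(0, 10, 11)
    B = np.clip(x2[:, None] - knots[None, :-1], 0.0, np.diff(knots)[None, :])

    A = np.column_stack([x1, B, x2 * x3])
    k = B.shape[1]

    # ridge penalty on the slopes only: augment the system with sqrt(lam)*I rows,
    # so lam is the one knob for how smooth/flat the monotone piece is
    lam = 1.0
    penalty = np.hstack([np.zeros((k, 1)), np.sqrt(lam) * np.eye(k), np.zeros((k, 1))])
    A_aug = np.vstack([A, penalty])
    y_aug = np.concatenate([y, np.zeros(k)])

    # bounds: x1 and interaction coefficients free, slope coefficients >= 0
    lower = np.r_[-np.inf, np.zeros(k), -np.inf]
    upper = np.full(k + 2, np.inf)
    fit = lsq_linear(A_aug, y_aug, bounds=(lower, upper))

    beta_x1, slopes, beta_int = fit.x[0], fit.x[1:-1], fit.x[-1]
    print("x1 coefficient:", round(beta_x1, 3))
    print("x2*x3 interaction:", round(beta_int, 3))
    print("monotone slopes on x2 intervals:", np.round(slopes, 3))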
@salr_nyc @bennpeifert the serious answer is yes, people have either built prompting pipelines (eg llamaparse) or fine-tuned LLMs for it. But the base models are getting good enough at it, like Anthropic has an API for parsing now
0 replies · 0 reposts · 4 likes
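For what it's worth, a sketch of the direct route, going from memory of the shape of Anthropic's PDF support in the Messages API; the model name and file path are placeholders and the exact request format may have moved on since:

    import base64
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    # placeholder file: any PDF you want parsed
    with open("statement.pdf", "rb") as f:
        pdf_b64 = base64.standard_b64encode(f.read()).decode()

    message = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model name
        max_tokens=2048,
        messages=[{
            "role": "user",
            "content": [
                {"type": "document",
                 "source": {"type": "base64",
                            "media_type": "application/pdf",
                            "data": pdf_b64}},
                {"type": "text",
                 "text": "Extract the tables in this PDF as CSV."},
            ],
        }],
    )
    print(message.content[0].text)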
@weaponizedFOMO Good question. I’m not sure what to use o1 for now that there’s o3-mini-high. o1 pro seems to have some edge when it’s a deep reasoning task, but the interactivity of o3-mini relative to o1 pro is a big UX difference. Feels like you are reasoning together
0 replies · 0 reposts · 1 like
@bennpeifert to me, the approach you described (which is very sensible) is practically the same thing as continuous Kelly? one still needs to guess the expected return, which is art more than science, and not an art I yet have much skill at
0 replies · 0 reposts · 2 likes
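For reference, the continuous-time Kelly fraction in the lognormal setup is roughly (mu - r) / sigma^2, which is exactly why the expected-return guess does all the work. Arbitrary numbers below:

    # continuous Kelly under GBM with log utility: f* = (mu - r) / sigma**2
    r = 0.04        # assumed risk-free rate
    sigma = 0.18    # assumed annualised vol
    for mu in (0.06, 0.08, 0.10):   # three guesses at the expected return
        f_star = (mu - r) / sigma**2
        print(f"mu={mu:.2f}  full Kelly={f_star:.2f}x  half Kelly={f_star / 2:.2f}x")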
@rupwalker it’s remarkable how easily Borges wanders over so many intellectual domains. In “The Garden of Forking Paths” he anticipates Everett’s many-worlds interpretation of QM. “Tlön, Uqbar, Orbis Tertius” is also excellent
0 replies · 0 reposts · 1 like