Ruben Bloom (Ruby)

@ruben_bloom

Followers
342
Following
553
Statuses
201

Guy who wants to give his daughter a flourishing world she can live in for aeons. Been building https://t.co/BeBTmfCdFi for 6 years

SF Bay Area
Joined September 2015
@ruben_bloom
Ruben Bloom (Ruby)
2 days
@ohabryka @_Pa0la___ I don't fully see the "context of working effectively" from Paola's post, but maybe that's what "politics" means. If so, your points make more sense
0
0
0
@ruben_bloom
Ruben Bloom (Ruby)
2 days
1st order is the "animals are raised in horrendous conditions then murdered so people can consume them, don't be someone who creates demand for this or benefits from it" as a specific instance of "don't murder or inflict great suffering for your selfish benefit", which is also a policy that gets good results if everyone does it. I think this is an intuitive and kinda reasonable baseline take that most people just ignore.
1
0
1
@ruben_bloom
Ruben Bloom (Ruby)
3 days
@suelinwong Congratulations!!
0
0
2
@ruben_bloom
Ruben Bloom (Ruby)
3 days
RT @daniel_271828: At some point, some AI company is going to accidentally call their RSP a name that’s already been taken by another AI co…
0
4
0
@ruben_bloom
Ruben Bloom (Ruby)
5 days
@peterwildeford If you can predict your impulsive purchases, are they really impulsive?
0
0
4
@ruben_bloom
Ruben Bloom (Ruby)
5 days
There definitely has to be diminishing returns, so I'm not surprised if in other situations they found a point where that didn't happen (then again, are they making the same mistake of not controlling for disease severity?). I also am not infinitely sure there could never be some super weird interaction that means larger margins make things worse for some reason

I'm just pretty sure that for osteosarcoma, the reasoning process for thinking that amputation wouldn't provide more benefit was really flawed, and the commonsense reasoning that removing more should get equal or better outcomes is the model to bet on – and if you really want to live, given the doubt, go for the amputation
0
0
0
@ruben_bloom
Ruben Bloom (Ruby)
5 days
I'm not sure what the technical definition of "non-derivative idea" is such that we can say an architecture cannot do it

But neural networks (at least MLPs, which are part of transformers) are universal function approximators, which means they could in theory represent whatever function the human brain does

Plus no one currently understands what gets trained in models or what exactly is going on. I do know that those LLMs are really good at next token prediction, and one of the ways I think they're doing that is by putting 2 and 2 together, according to standard deductive and inductive reasoning rules (perhaps they derivatively copy a process for generating new ideas, as humans do)
0
0
3
@ruben_bloom
Ruben Bloom (Ruby)
5 days
@neurcs I provided it all the medical records. I kept them (meticulously organized) together with detailed case notes. That was not the issue. This system simply isn't trying to do inference like this
0
0
2
@ruben_bloom
Ruben Bloom (Ruby)
5 days
@steve_ike_ I'll for sure rerun it as there are new systems
0
0
1
@ruben_bloom
Ruben Bloom (Ruby)
5 days
Well, many people are also underestimating in the short term

Hard to overestimate in the long term though
0
0
5
@ruben_bloom
Ruben Bloom (Ruby)
5 days
And if you underestimate just how powerful these system(s) will be, you will overestimate your (or anyone else's) ability to control them/get them to do what you want (as always, this isn't to say that they won't understand what you want, but that you failed to build them in a way that means they'll care about the difference between what you said and what you meant – plus who wants to be turned off?)
0
0
2
@ruben_bloom
Ruben Bloom (Ruby)
5 days
Do not think that because today's release is flawed, that's the limit. Remember how much less they could do just one year ago
0
0
1
@ruben_bloom
Ruben Bloom (Ruby)
5 days
My guess is that Deep Research is fairly rigidly ~prompted to do a specific thing which is not "reason freely to outcome", and that one could build something out of o1/o3 that did it more

I'd have to go back and check which papers it pulled up. It definitely seemed to have pulled up the relevant raw findings and then regurgitated their conclusions uncritically
2
0
3
@ruben_bloom
Ruben Bloom (Ruby)
6 days
@chris_j_paxton I don't know that I'd say o3 can't do this. Deep Research feels heavily scaffolded or something in a way that I think makes it not reason this way. I wonder if, with the right scaffolding, it might
0
0
1
@ruben_bloom
Ruben Bloom (Ruby)
6 days
@robofinancebk Thank you, but they're plenty replaceable, just a matter of when. Ultimately, both AIs and human brains are made of the same stuff - regular old atoms
3
0
9
@ruben_bloom
Ruben Bloom (Ruby)
6 days
Okay, I did it. Threw Deep Research at the medical questions I tackled for ~months in 2020 when battling my wife's cancer

Based on my test case, this iteration of Deep Research can tell you what the current literature on a topic would advise, but not make novel deductions to improve upon where the human experts are at

I think it might have sped up my cancer research in 2020 but not replaced it. That guy saying it's better than his $150k/year team...maybe needs to get better at hiring, idk

🧵Thread with more details 0/n
0
0
0
@ruben_bloom
Ruben Bloom (Ruby)
6 days
I agree with many that this could usher in utopia. But not by default, and not with the level of caution I think humanity is bringing to the challenge 11/11
0
0
55