infrecursion Profile
infrecursion

@infrecursion1

Followers
26
Following
403
Statuses
3K

Joined April 2020
@infrecursion1
infrecursion
9 hours
@janleike It sort of makes sense why Schulman left within 5 months.
0
0
2
@infrecursion1
infrecursion
12 hours
@wordgrammer My test of whether we have GI is still: "can you not come up with the stupidest definition of AGI?"
0
0
0
@infrecursion1
infrecursion
13 hours
@DrFrederickChen @maxxrubin_ Your claim was emphatically disproven by both o1-pro and o3-mini here. Maybe delete your post or make an edit (and at the same time learn what the best models are)?
1
0
2
@infrecursion1
infrecursion
21 hours
@ryan_huang_1 @dzhng No they haven't. You can search here for typical deep research responses, and feed those prompts to the "pretty damn close" versions and see for yourself.
0
0
1
@infrecursion1
infrecursion
21 hours
@AverageProMax @cursor_ai @OfficialLoganK @GoogleDeepMind Yeah, if you want to make mid-tier buggy code slop, then by all means, nothing comes close.
1
0
1
@infrecursion1
infrecursion
2 days
@dwarkesh_sp The simplest reason is that almost none of the average LLM users can identify such new connections. You need to be at the frontier of a field to recognize that an LLM has produced something new, and there are very few people like that. But a Penn prof did find that o1-mini made a new proof.
0
0
0
@infrecursion1
infrecursion
2 days
@AbhiRaama22 @tsarnick No, the total bs is you. I can guarantee Sonnet can write 10x better code than losers like you.
0
0
1
@infrecursion1
infrecursion
2 days
@carsoncantcode Those are just sacrifices needed for the machine God
0
0
1
@infrecursion1
infrecursion
2 days
@DimitrisPapail Another fallacy is that this doesn't explain the high variability in scores between different models. All of them were trained on the same dataset, and the reasoning models have similar or worse recall ability than pretrained models like GPT-4o and Claude Sonnet.
0
0
3
@infrecursion1
infrecursion
2 days
@DimitrisPapail Not to mention two other obvious fallacies here: a) this being on the internet doesn't mean the model was trained on it, and b) the model may have arrived at the solution quite differently from the web solutions, in which case its approach could be novel.
0
0
1
@infrecursion1
infrecursion
3 days
@mark_cummins "what's causing people to have such conflicting experiences with OpenAI Deep Research": the ability to write a thoughtful prompt. It's not a chatbot.
0
0
2
@infrecursion1
infrecursion
3 days
@Manas2049 @sama @mia_glaese @joannejang @akshaynathan_ No, it's not; chain-of-thought output has no relation to how the model is actually "thinking" or arriving at a solution.
0
0
4
@infrecursion1
infrecursion
3 days
@HrishbhDalal @polynoamial Because DeepSeek stole the CoT from OpenAI, why tf would they be scared? Of course, OpenAI won't make the same mistake twice.
0
0
0
@infrecursion1
infrecursion
3 days
@JoJrobotics @giffmana @yar_vol @LiamFedus When will you stop hallucinating and have real human-level common sense and reasoning?
0
0
1
@infrecursion1
infrecursion
4 days
@btibor91 Why is GPT-4o so bad lmao
0
0
0
@infrecursion1
infrecursion
4 days
@alexandr_wang @manaahmad_ @scale_AI Where is o3-mini-high?
0
0
0
@infrecursion1
infrecursion
4 days
@janleike Again, Jan, no one cares. Unless and until you can clearly demonstrate a "capable" LLM actually causing harm that lines up with standard EA fantasies, no one will ever care.
1
0
6
@infrecursion1
infrecursion
4 days
@r1ckp @SteveBlocX @7etsuo This is a complete lie. This wasn't even part of your original prompt. Can you share the link to the chat?
0
0
3