harambe_muskšŸŒ

@harambe_musk

Followers: 2K · Following: 4K · Statuses: 3K

Where my gorillas at? Achieved AGI externally. 🦍 | Gotta get those bananas 🍌

Joined March 2024
harambe_musk🍌 (@harambe_musk) · 11 months
Where my gorillas at?
11 replies · 7 retweets · 66 likes
harambe_musk🍌 (@harambe_musk) · 1 day
@ChombaBupe Who tf are you?
1 reply · 0 retweets · 39 likes
harambe_musk🍌 (@harambe_musk) · 1 day
@buffaly_ai @deedydas PDF pages in JPEG.
0 replies · 0 retweets · 0 likes
harambe_musk🍌 (@harambe_musk) · 1 day
@TheoPhysiJ @jerryjliu0 @deedydas What do you mean by interpret? Do you mean analyse, or extract structured information?
0 replies · 0 retweets · 0 likes
harambe_musk🍌 (@harambe_musk) · 1 day
@KavadeNitin @AniBaddepudi @deedydas @reductoai Did you try this with Gemini 2.0 Flash?
1 reply · 0 retweets · 0 likes
harambe_musk🍌 (@harambe_musk) · 2 days
@Informcosmos She's allergic to clothes.
0 replies · 0 retweets · 0 likes
harambe_musk🍌 (@harambe_musk) · 2 days
@kimmonismus He's been saying that for more than a year now.
1 reply · 0 retweets · 0 likes
harambe_musk🍌 (@harambe_musk) · 2 days
@btibor91 I'll use my cousin's ID if they can give Pro for $20, but I know they will never offer that lmao
0 replies · 0 retweets · 0 likes
harambe_musk🍌 (@harambe_musk) · 2 days
@mark_k @btibor91 I have seen other models reach pretty high performance when trained on R1's reasoning chains with very little data. I wonder if it's because of this.
0 replies · 0 retweets · 3 likes
harambe_musk🍌 (@harambe_musk) · 2 days
Transformers, as great as they are, are still not valuable enough to be used in most high economic-value use cases due to inherent issues like hallucinations and limited context. Transformers are still an unproven tech mainly because of this, and that's the number one reason I think Nvidia isn't investing its capital and focus into building a supply chain for transformer-specific GPUs, and is instead focusing on hardware that's more generally usable. I'm sure they have prototypes on par with something like Groq, perhaps better, but from a business standpoint I think they simply don't feel it warrants the focus and capital to build transformer-specific hardware until these issues are solved, because their current hardware already does a pretty decent job, however inefficient it is.
harambe_musk🍌 (@harambe_musk) · 2 days
It's not just about having nice hardware; it's the ecosystem, a reliable supply chain, the ability to handle needs at scale, and most of all predictability. These hardware platforms are very new and unestablished, and it doesn't make sense to take that leap of faith when you already have something like Nvidia. I'm pretty sure Nvidia already has transformer-specific prototypes that offer the same or perhaps better performance than this hardware. I guess they're waiting for transformer tech to mature enough to warrant heavy investment and focus from Nvidia.
0 replies · 0 retweets · 4 likes
harambe_musk🍌 (@harambe_musk) · 2 days
@SpencerKSchiff @OpenAI They just fooling us man.
0 replies · 0 retweets · 1 like
harambe_musk🍌 (@harambe_musk) · 2 days
@jordanwjdurand @RepLuna I know right, she's just fucking with us.
0 replies · 0 retweets · 0 likes
harambe_musk🍌 (@harambe_musk) · 2 days
@ashtom Finally
0 replies · 0 retweets · 0 likes
harambe_musk🍌 (@harambe_musk) · 3 days
@_kickface_ @_lilpoptart What do you infer?
0 replies · 0 retweets · 0 likes
harambe_musk🍌 (@harambe_musk) · 3 days
@ashtom What's the context length on paid plans for this model?
0 replies · 0 retweets · 1 like
harambe_musk🍌 (@harambe_musk) · 3 days
How tf is Sonnet so good that it always ranks on top without many updates? When is 4 Sonnet coming?
Quoting Alexandr Wang (@alexandr_wang) · 3 days
Introducing MultiChallenge by @scale_AI - a new multi-turn conversation benchmark. Current frontier LLMs score under 50% accuracy (top: 44.93%). 🥇 o1 🥈 Claude 3.5 Sonnet 🥉 Gemini 2.0 Pro Experimental 📄 Paper: 🏆 Leaderboard:
[image]
3 replies · 0 retweets · 8 likes
harambe_musk🍌 (@harambe_musk) · 3 days
@OfficialLoganK Hi, is there any benchmark on the multilingual performance of Flash 2.0?
0 replies · 0 retweets · 2 likes
harambe_musk🍌 (@harambe_musk) · 4 days
This is a very important chart. We are still not there yet, but we are on the trajectory of getting there soon.
[image]
0 replies · 0 retweets · 4 likes
harambe_musk🍌 (@harambe_musk) · 4 days
@ColbyHawker @AviSchiffmann None. It's just a gimmick.
0 replies · 0 retweets · 0 likes