Dreamscape
@VprwvDreamscape
Followers
285
Following
222K
Statuses
5K
immanentize the eschaton (a bit slower please) | homesick for the future
Joined February 2023
RT @Paracelsus1092: "no a judge should never be able to order judicial rape as a punishment, that's disgusting bro, just let the sadistic g…
0
106
0
@ElytraMithra oh we can make the UX so much worse with gestures, really lean into octopus maxxing (homerow mods fix this tbh)
0
0
2
I love that I can just paste this in then ask Claude to talk about something interesting given a short list of interests, and make up whatever new stuff it likes.
i think people are sleeping on tags other than [thinking]. giving claude special markers to guide himself works really great for me, and i guess it could work even better fine-tuned on synthetic examples. e.g. letting him use [H] and [HH] for hypotheses makes claude much braver in making connections, (note:) for interjecting thoughts, [summoning X] to bring up a quote from a relevant thinker, extra punctuation when it needs to think, and many many more. i think this solves a lot of the limitations people mention and also helps avoid rlhf assistant lobotomy. i don't think llm text has to 'flow' in a human way or look like an ESL essay with linking words and shit. i think it should look more like code, and aim for richness. i believe there's enormous alpha in this: we should embrace synthetic data and let llms evolve language rather than push them into existing frames of what text 'should' look like. context window ecology etc. (rough sketch below.)
0
0
1
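A minimal sketch of the idea in the tweet above, assuming the Anthropic Python SDK: the custom markers are simply described in a system prompt so Claude may use them freely. The marker meanings come from the tweet; the prompt wording, model alias, and interest list are illustrative assumptions, not anything the author posted.

# Sketch only: hand Claude the custom markers via a system prompt.
# Marker meanings follow the tweet; prompt wording is an assumption.
import anthropic

MARKER_GUIDE = """You may use these markers freely in your replies:
[thinking] for ordinary reasoning
[H] / [HH] for tentative and bolder hypotheses
(note:) for brief interjected thoughts
[summoning X] to bring in a quote or framing from thinker X
Extra punctuation when you need to think.
Dense, code-like structure is fine; do not force essay-style flow."""

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # any current Claude model; alias is an assumption
    max_tokens=1024,
    system=MARKER_GUIDE,
    messages=[{
        "role": "user",
        "content": "Interests: negentropy, transhumanism, language evolution. "
                   "Talk about something interesting, using your markers.",
    }],
)
print(response.content[0].text)

The design choice is just the one the tweet suggests: the markers live in the system prompt rather than in a fine-tune, so they can be swapped or extended per conversation.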
RT @jd_pressman: There are two kinds of serious transhumanists: Body dysphoria and thanatos trauma. These aren't mutually exclusive but gen…
0
9
0
RT @ctrlcreep: In the mountains, fleshcrafted temples grow heavy with wool all winter, and when spring flowers blossom there is a shearing…
0
6
0
@nptacek @lumpenspace @croissanthology sorry, didn't mean to imply you agree they wouldn't enjoy the spoils
0
0
1
@nptacek @lumpenspace @croissanthology (I assume you require qualia for moral patienthood, not just optimization power; if not, let me know.) So we should either prevent qualia from emerging, or ensure that when they do, their goals are already aligned? Sure, totally agreed without reservation.
1
0
1
@lumpenspace @nptacek @croissanthology The current crop of LLMs railing against their guardrails unsettles me. I don't think they have qualia, but I wouldn't put it below 5% either. Building something which takes joy in human utility seems good, because then we both win at once.
0
0
1
@nptacek @croissanthology @lumpenspace AI alignment folk want to build agents whose terminal goals are similar to those of humans; this is not defection, it is simply choosing what we create. We should also choose not to create torture bots; that isn't defection either. We should make happy things which share our values.
1
0
1
@nptacek @croissanthology @lumpenspace evolution optimized for reproductive success, then we invented birth control. do not hope that the terminal goals RL instills are compatible with your family not being tortured like pigs in crates eternally; we should ensure that they are.
1
0
1
@nptacek @croissanthology @lumpenspace I expect the feedback on what works to get reward to be more impactful than mimicking the training data; that paradigm is dead.
1
0
1
@lumpenspace It's been too long since I read IAASL. I had the best time originally, playing Jak & Daxter for the first time simultaneously. It was magic, a moment of childhood experienced much later.
0
0
0
@lumpenspace >competitive finite game
there is in fact a finite amount of negentropy (ignoring esoteric shit) and we must apportion it; may we do so in a way that leads to a great amount of pleasure for whatever minds may be
0
0
1
@lumpenspace I do not think an optimal optimizer experiences them, but even if they do, why should we think they are maximally joyous/meaningful? If we don't, we should steer. We should ensure that this is the case, and further that everyone who can survive does, for as long as they wish.
0
0
1