![Dominik Lukes Profile](https://pbs.twimg.com/profile_images/1733852585033928704/jVjHhEpE_x96.jpg)
Dominik Lukes
@techczech
Followers
2K
Following
2K
Statuses
13K
Exploring applied epistemology, AI and metaphor. Current work on https://t.co/XxygzOGYN0.
UK
Joined April 2009
Come build LLM apps at the University of Oxford. Looking for an @aiDotEngineer to join our team. Especially anyone with experience in AI frameworks like @LangChainAI, evals like @promptfoo or @langfuse, and tools like @OpenWebUI. Apply even if you're not a perfect fit with the job ad ↓
1
0
3
@AnnaRMills The links were to posts by Ethan Mollick... But DR hallucinated a different name while giving accurate quotes.
0
0
0
@perigean @AdamRodmanMD @CoryRohlfsen @DavidDeutschOxf @ToKTeacher What information would calling this "knowledge" or not add? It seems that if we know all the other things, asking whether it is "knowledge" is just a different way of asking whether it is "good". You can ask the same question about an expert diagnostician's performance.
0
0
0
Interesting accidental jailbreak for @AnthropicAI's Claude: when I asked it to give me a TL;DR of Mistral's announcement, it instead summarised its own system prompt. Any thoughts why, @alexalbert__ and @AmandaAskell?
0
0
1
RT @sama: chains of thought for o3-mini! (we try to organize the raw CoT to make it more readable, and optionally to translate languages,…
0
497
0
Talking about the challenge of the ever-widening knowledge gap leaders face when dealing with AI at #ISE2025 today.
2
0
1
There is a wide range of research tasks where Deep Research doesn't seem to do well: 1. humanities/social-sciences-related questions, and 2. areas where the answers require developing "a feel" for a field before doing the actual research.
Played around with OpenAI Deep Research today. Thoughts: 1. Worst: asked it to find the fourth woman ever elected to Harvard's Society of Fellows - simple reasoning was required to assess ambiguous names. It gave the wrong person. A high school intern would do better. 1/
0
0
0
RT @karpathy: There's a new kind of coding I call "vibe coding", where you fully give in to the vibes, embrace exponentials, and forget tha…
0
2K
0
Battle of "Deep Research": Google vs OpenAI - Same prompt same bad results. I asked for a comprehensive list of Open Source LLMs and both failed at the object level task at making a list of actually current models. ChatGPT Deep research did a better job of finding out information about the models that are no longer current but it took about twice as long.
0
0
2
RT @norabelrose: Our results suggest that the set of features uncovered by an SAE should be viewed as a pragmatically useful decomposition…
0
5
0
Organisations the world over are being urged to "use AI" to increase productivity and redesign their processes for the AI age. But are the companies "making AI" so productive "because they have AI" or because of the boring old formula of "talent + hard work + funding + luck"?
Don't know a single organisation that could not benefit from AI, yet don't know a single organisation that is doing it well. The failure in leadership and management is astonishing.
0
0
0
@DonaldClark I agree, but I don't know that there is a way "to do it well" yet. The "knowledge gap" is still a problem - it is not immediately obvious how any one individual, let alone an institution, can do this well. I think the test would be: are @OpenAI or @AnthropicAI "doing AI well"?
2
0
0
I'm pretty sure the answer to "does it have goals" is not a straightforward "yes", unless we think "goals" means adding some tokens to previous tokens. We need to differentiate between "goals" and "intentions". The current systems can simulate having "goals" within the process of generating tokens, which can be dangerous if they are connected to tools that enact those simulations, but they do not have independent goals with a feedback loop to intentions. "Consciousness" is a woolly concept, but in some ways it gets at the core of the problem of independent, persistent intentionality better than "goals" does.
0
0
0
Color me not surprised! "At least humans are 'truly creative' and AI is just a rehash of what came before" was always the worst of all takes on AI.
In a large representative sample of humans compared to GPT-4: "the creative ideas produced by AI chatbots are rated more creative [by humans] than those created by humans... Augmenting humans with AI improves human creativity, albeit not as much as ideas created by ChatGPT alone"
0
0
0