Pɾҽɱ Kυɱαɾ Aραɾαɳʝι 🤖👶🏼🐘
@prem_k
Followers: 4K · Following: 19K · Statuses: 55K
Head of IT Ops - Automation @Cognizant; Exec committee member @imcmontessori Karnataka chapter; Secretary @kans_india; Own views; QT≠reply.
Bengaluru, India
Joined April 2009
Yes. The S Curve is always lurking in the background. While adding more scale is about incremental innovations to #ANN, we need to cultivate & brace for breakthrough innovations in #AI. Tip: Apply the Outcome Driven Innovation #ODI framework to break the innovator's dilemma.
My latest column in @IEEESpectrum. (I prefer the print headline coming in July: "The Other Side of The Innovator's Dilemma.")
RT @nouhadziri: This is not misinformation! experiments on reasoning models (DeepSeek R1 and o1) show the same patterns of errors: they eve…
@burkov @Ouponatime38 The trick they're trying to perfect is making it harder for humans to find proof of the #LLMs' shoddiness amid the plausible-sounding chunks of eloquent text they produce.
@prajdabre1 Splitting hairs ... But we have a responsibility too 😃
Yes. It's probably splitting hairs, but I think we all have a responsibility to remove the veil & show people that LLMs cannot "do" anything other than output text (image/audio/whatever). A separate component then uses typical if-then-else logic to run what's in, or based on, that output. Any LLM-based app, including the so-called "Agentic" ones, is only adding a natural language interface to traditional apps. 👇🏼 But in a very expensive manner. I do wish something more efficient were invented (or discovered) that brings linguistic capabilities to our apps without having to throw all the world's knowledge into the models.
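A minimal sketch of the pattern this tweet describes: the model only emits text, and a separate, perfectly ordinary program inspects that text and decides what to run. Everything here is hypothetical — `call_llm` stands in for any model API, and the action names are made up for illustration.

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM API call. All it can do is return text."""
    # Hard-coded plausible output for illustration; a real app would call a model here.
    return '{"action": "send_email", "to": "team@example.com"}'

def dispatch(llm_output: str) -> str:
    """The separate component: plain if-then-else routing over the model's text output."""
    try:
        parsed = json.loads(llm_output)
    except json.JSONDecodeError:
        return "no-op: output was not actionable"
    action = parsed.get("action")
    if action == "send_email":
        return f"emailing {parsed['to']}"  # a real app would call a mail API here
    elif action == "create_ticket":
        return "creating ticket"           # ...or a ticketing API
    else:
        return "no-op: unknown action"

print(dispatch(call_llm("remind the team about the meeting")))
```

The "agentic" part lives entirely in `dispatch` — conventional code that any traditional app could contain; the LLM contributes only the natural-language front end.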
@prajdabre1 Very solid advice. I said something similar to my son too during his XII. He chose culinary arts. Nothing related to science/commerce.
Hear, hear!
I regularly receive direct messages from people wondering what's wrong with the so-called agents and agentic frameworks. Here's my answer. The main topic of my PhD was agents and multi-agent systems. What they currently call "agents" (LLMs instructed to do something) aren't agents. LLMs hallucinate too much to be trusted with any important task, even if you have 100 "agents" to do the job and 100 more to validate it. And even in that case, you could simply use an LLM directly, without any "agentic" framework, and get the same deplorable result. LLMs are only good under two conditions: 1) they are used on data similar to Web data (which literally means the input must be some Web data), and 2) their output is always treated as a recommendation to a human expert (which means they cannot be programmed to work autonomously, as these framework creators want you to believe). If you only apply agentic swarms under conditions 1) and 2), you will quickly realize you don't have many use cases and you don't need agentic swarms.
@hwchase17 Why not look at BPMN for inspiration? Orchestration, Choreography and Collaboration patterns?
RT @HoaiNguyenJ7: 🎻🌷🎻The "Salut Salon," composed of Angelika Bachmann (violin), Iris Siegfried (violin), Sonja Lena Schmid (cello), and Ann…
👇🏼This is where India should invest for the longer horizon; don't leapfrog #LLMs, but don't pour capital into them without poring over what Prof. Anima is saying, pointing at 👇🏼 cc @AshwiniVaishnaw
Text knowledge is not sufficient for scientific discovery. Language models lack physical grounding: they have only a high-level understanding and cannot simulate physical phenomena. Image and video models focus on "looking good" vs. being physically valid.

We're teaching AI physics. We are building physical AI that can model, simulate, design, and control in the real world. Due to the massive amount of data available on the internet, the first wave of generative AI has been digital, but the next wave will be physical.

This means the notion of digital twins needs an upgrade: they need to be trained on the laws of physics to have the right foundation. Physically accurate digital twins mean faster design cycles, leading to a huge reduction in R&D costs. Watch my TED talk
RT @SanjeevSanskrit: While studying Rig Veda 8.12.28, I found a striking description of planetary motion that mirrors Kepler’s Second Law:…
RT @nikhilchinapa: @dpkBopanna @edsheeran Spoke to a friend. Apparently the police were neither informed nor asked for permission. If he’d…