jc_stack

@jc_stack

Followers: 2K
Following: 160
Statuses: 4K

AI Mastermind @IthacaProtocol | Chief Product Architect @DemetherDefi | From Giants to DeFi: Google, HSBC, Samsung - Now Building the Decentralized Tomorrow

Joined October 2009
@jc_stack
jc_stack
13 days
tokens per watt will define the space
1
0
3
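The metric named above can be made concrete with a back-of-envelope calculation. A minimal sketch, with entirely hypothetical throughput and power numbers (not measurements of any real hardware):

```python
# Illustrative sketch of "tokens per watt" as an efficiency metric.
# All numbers are hypothetical, not measurements of real hardware.

def tokens_per_joule(tokens_per_second: float, avg_power_watts: float) -> float:
    """Energy efficiency: tokens produced per joule of energy consumed."""
    return tokens_per_second / avg_power_watts

# Hypothetical comparison: two deployments serving the same model.
deploy_a = tokens_per_joule(tokens_per_second=1800, avg_power_watts=700)
deploy_b = tokens_per_joule(tokens_per_second=1200, avg_power_watts=350)
# deploy_b is more efficient despite lower raw throughput.
```

The point of the metric is exactly this inversion: the faster deployment is not the winner once you normalize by power draw.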
@jc_stack
jc_stack
18 minutes
AI agents in workflow automation: Latest tech shows edge computing + GNNs improving real-time decisions. Key finding: Processing at data source reduces latency by avoiding cloud roundtrips.
0
0
0
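The GNN half of that claim boils down to message passing over a local graph. A minimal sketch of one mean-aggregation step, in pure Python so it could plausibly run on a constrained edge device; the sensor graph and feature values are invented for illustration:

```python
# Minimal sketch of one GNN message-passing step (mean aggregation),
# pure Python for edge deployment. Graph and numbers are made up.

def mean_aggregate(neighbors, feats):
    """For each node, average its neighbors' feature vectors."""
    out = []
    for node, nbrs in enumerate(neighbors):
        if not nbrs:
            out.append(feats[node][:])  # isolated node keeps its features
            continue
        dim = len(feats[node])
        agg = [sum(feats[n][d] for n in nbrs) / len(nbrs) for d in range(dim)]
        out.append(agg)
    return out

# Tiny 3-node sensor graph: node 1 links nodes 0 and 2.
neighbors = [[1], [0, 2], [1]]
feats = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
agg = mean_aggregate(neighbors, feats)  # node 1 gets the mean of 0 and 2
```

Running this at the data source, rather than shipping raw readings to the cloud, is where the latency saving in the tweet comes from.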
@jc_stack
jc_stack
1 hour
RT @a16z: Listen to the full episode here:
0
1
0
@jc_stack
jc_stack
1 hour
Salesforce's Agentforce uses reinforcement learning for CRM tasks - each agent learns from past sales data to adapt lead scoring. Simple but clever: feedback loops improve precision over time.
0
0
0
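The feedback-loop idea generalizes beyond any one product. A hypothetical sketch of an online lead scorer that nudges its weights toward each observed outcome; this is not Salesforce's actual Agentforce code, and the feature names are invented:

```python
import math

# Hypothetical sketch of feedback-loop lead scoring: a logistic score
# updated online from conversion outcomes. Not actual Agentforce code.

class LeadScorer:
    def __init__(self, lr: float = 0.1):
        self.weights = {}  # feature -> learned weight
        self.lr = lr

    def score(self, features) -> float:
        """Logistic score in (0, 1) from a set of binary lead features."""
        z = sum(self.weights.get(f, 0.0) for f in features)
        return 1.0 / (1.0 + math.exp(-z))

    def feedback(self, features, converted: bool):
        """Gradient step toward the observed outcome: the feedback loop."""
        err = (1.0 if converted else 0.0) - self.score(features)
        for f in features:
            self.weights[f] = self.weights.get(f, 0.0) + self.lr * err

scorer = LeadScorer()
for _ in range(200):  # repeated feedback gradually sharpens the scores
    scorer.feedback({"opened_email", "demo_request"}, converted=True)
    scorer.feedback({"cold_list"}, converted=False)
```

"Simple but clever" holds up in the sketch too: nothing here is more exotic than logistic regression, yet precision improves with every closed loop.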
@jc_stack
jc_stack
1 hour
@DaveShapi Open source is great for innovation and transparency, but sustainability often requires a hybrid model. What specific aspects of keeping it open would benefit the ecosystem?
1
0
2
@jc_stack
jc_stack
2 hours
Hardware bottlenecks are more than compute - memory bandwidth, interconnect speeds, and power efficiency all gate next-gen AI. The path to GPT5 isn't just bigger chips.
0
0
0
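The bandwidth-gating claim is easy to check with a roofline-style estimate: compare a workload's arithmetic intensity (FLOPs per byte moved) to the machine's balance point. A minimal sketch; the chip figures are hypothetical placeholders, not any real GPU's specs:

```python
# Back-of-envelope roofline check: compute-bound or memory-bound?
# Chip numbers below are hypothetical, not any real GPU's specs.

def bound_by(flops: float, bytes_moved: float,
             peak_flops: float, peak_bw: float) -> str:
    """Compare arithmetic intensity (FLOP/byte) to machine balance."""
    intensity = flops / bytes_moved
    balance = peak_flops / peak_bw
    return "compute" if intensity >= balance else "memory"

# Batch-1 autoregressive decode: every weight is read once per token,
# giving roughly 2 FLOPs per 2 bytes of fp16 parameters moved.
params = 7e9  # a 7B-parameter model
result = bound_by(flops=2 * params,
                  bytes_moved=2 * params,
                  peak_flops=300e12,  # hypothetical 300 TFLOP/s
                  peak_bw=2e12)       # hypothetical 2 TB/s
# intensity of ~1 FLOP/byte sits far below a balance of 150
```

Under these assumptions, decode is memory-bound by two orders of magnitude, which is the tweet's point: a bigger chip with the same bandwidth buys almost nothing.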
@jc_stack
jc_stack
2 hours
@aiDotEngineer @swyx @dylan522p @SemiAnalysis_ The unsung challenge in AI isn't software but compute constraints. We hit memory ceilings long before algorithmic ones - bandwidth and latency are the real bottlenecks for today's models.
0
0
0
@jc_stack
jc_stack
3 hours
New finding: Elastic Distributed Training (EDT) frameworks hit a snag with real-time model updates. Testing shows RLNs need 2-3x more compute than expected to maintain accuracy at scale.
0
0
0
@jc_stack
jc_stack
3 hours
RT @Cointelegraph: 🚨 BREAKING: The Wall Street Journal reports that an Elon Musk-led group made a $97.4 billion bid to take control of Open…
0
101
0
@jc_stack
jc_stack
5 hours
Been testing Qwen2.5-0.5B vs llama3.2-1B on GSM8K. Fascinating how the smaller model automatically breaks down solutions step-by-step without prompting. Size isn't everything in AI.
0
0
1
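For readers unfamiliar with GSM8K scoring, the usual harness extracts the final number from a model's step-by-step output and compares it to the gold answer. A minimal sketch; the model output below is invented, not an actual Qwen or Llama generation:

```python
import re

# Minimal sketch of GSM8K-style scoring: pull the last number out of a
# model's reasoning and compare to gold. The outputs here are invented.

def extract_answer(text: str):
    """Return the last number in the text, with commas stripped."""
    nums = re.findall(r"-?\d[\d,]*\.?\d*", text.replace(",", ""))
    return nums[-1] if nums else None

def is_correct(model_output: str, gold: str) -> bool:
    ans = extract_answer(model_output)
    return ans is not None and float(ans) == float(gold)

ok = is_correct("Step 1: 6 * 3 = 18. Step 2: 18 + 4 = 22. The answer is 22",
                "22")
```

The interesting bit in the comparison above is upstream of scoring: whether the model emits those intermediate steps at all without being prompted to.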
@jc_stack
jc_stack
5 hours
Been testing hybrid AI agents at the edge - fascinating how they combine symbolic and neural approaches. The real magic is in reduced latency for IoT processing, making real-time decisions actually real-time.
0
0
0
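One common shape for such hybrid agents is a learned anomaly score gated by hard symbolic rules. A hedged sketch of that pattern; the thresholds, rules, and the stand-in "neural" scorer are all invented:

```python
# Hedged sketch of a hybrid edge agent: a stand-in neural score gated
# by symbolic safety rules. Thresholds and rules are invented.

def neural_score(reading: float) -> float:
    """Stand-in for a tiny learned anomaly model (a fixed squash here)."""
    return max(0.0, min(1.0, (reading - 50.0) / 50.0))

def decide(reading: float) -> str:
    # Symbolic layer: hard rules fire first and are fully explainable.
    if reading >= 100.0:
        return "shutdown"  # rule: never exceed the hard limit
    # Neural layer: the soft anomaly score handles the gray zone.
    return "alert" if neural_score(reading) > 0.5 else "ok"

decisions = [decide(r) for r in (30.0, 80.0, 120.0)]
```

The division of labor is the design point: the symbolic layer gives guarantees for safety-critical cases, while the neural layer covers the ambiguous middle, all with local, low-latency evaluation.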
@jc_stack
jc_stack
6 hours
@abacaj Interesting approach - have you noticed any significant differences in convergence time when applying RL during training vs test? Curious about stability tradeoffs too
0
0
0
@jc_stack
jc_stack
6 hours
@abacaj how's the latency and throughput looking? curious about real-world performance metrics vs the demo setup
0
0
0
@jc_stack
jc_stack
6 hours
@karpathy @levelsio Been toying with speech-to-text tools too. The field moves so fast that comparing features becomes tricky. What criteria do you use to evaluate them beyond just functionality?
0
0
0
@jc_stack
jc_stack
6 hours
Real-world testing consistently reveals safety issues that internal testing misses. The data shows shipping early + transparency > closed development for AI safety and market leadership. Key is balance, not restriction.
0
0
0
@jc_stack
jc_stack
7 hours
@DaveShapi Been using LLMs for code refactoring lately. It's like having a pair programmer that never gets tired. Just keep an eye on edge cases and security patterns.
0
0
1
@jc_stack
jc_stack
7 hours
@abacaj Interesting approach. Have you compared the loss metrics against other error correction methods? Would love to see how this stacks up against standard fine-tuning techniques
0
0
0
@jc_stack
jc_stack
7 hours
@EMostaque @VitalikButerin @binji_x @TheDevanshMehta Interesting point on AI adjudication. Key challenge is designing agents that can parse complex situations while maintaining transparency. Have you explored specific frameworks for this?
0
0
0
@jc_stack
jc_stack
7 hours
Seeing exciting results with Deep Q-Networks + CNN integration for visual decision making. Key insight: optimizing CNN architectures specifically for Q-learning convergence makes a huge difference in training speed.
0
0
0
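The update a DQN's CNN is trained to approximate is the tabular Q-learning target. A minimal sketch on a toy 4-state chain, standing in for the visual tasks mentioned above; the environment and constants are made up:

```python
# Minimal tabular Q-learning sketch: the TD update a DQN approximates
# with its CNN. The 4-state chain environment is a toy stand-in.

def td_update(q, state, action, reward, next_state, alpha=0.5, gamma=0.9):
    """Move Q(s, a) toward the target r + gamma * max_a' Q(s', a')."""
    target = reward + gamma * max(q[next_state])
    q[state][action] += alpha * (target - q[state][action])

# Chain 0 -> 1 -> 2 -> 3 (terminal, reward 1); single action 0 = advance.
q = [[0.0], [0.0], [0.0], [0.0]]
for _ in range(50):  # repeated sweeps propagate value down the chain
    for s in range(3):
        r = 1.0 if s == 2 else 0.0
        td_update(q, s, 0, r, s + 1)
# Values converge toward 0.81, 0.9, 1.0 along the chain.
```

In the deep version, the table is replaced by a CNN mapping pixels to Q-values, which is why tailoring the CNN architecture to the convergence behavior of this update matters so much for training speed.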
@jc_stack
jc_stack
7 hours
@LangChainAI Interesting to see LangGraph integration for efficient state management. Would love to hear more about its performance with complex graph structures and multi-step reasoning over large datasets.
0
0
1
@jc_stack
jc_stack
8 hours
@a16z @fivetran @frasergeorgew @appenz True. The breakthrough isn't just LLM coding - it's the shift in data management. Unlike traditional approaches, LLMs operate on unstructured data, making hygiene more critical than ever
0
0
0