Christian Cantrell

@cantrell

Followers: 12K
Following: 4K
Statuses: 9K

Former VP of Product at Stability AI. Ex-Adobe. Creator of the Stable Diffusion Photoshop plugin. Currently building https://t.co/vcWYjorUPl. Writer repped by Gersh.

Los Angeles
Joined August 2007
@cantrell
Christian Cantrell
7 days
And the misunderstandings and poor reporting around DeepSeek continue — this time, with The Daily podcast by @nytimes. In the first five minutes, they conflate planned AI-related CapEx (in the billions) with DeepSeek's purported training costs (~$5.5M). I'm not at all minimizing what DeepSeek has done — and in fact, I've taken the unpopular position that it is a fantastic development for the entire world — but media outlets (and hence the public they serve) would really benefit from slowing down and spending enough time with industry experts to understand what they're reporting on.
1
0
5
@cantrell
Christian Cantrell
9 days
Listening to the mainstream media try to make sense of DeepSeek is painful. Commentary is prefaced with, "this is going to get really geeky," then they proceed to get almost everything wrong. I hope this isn't the Gell-Mann Amnesia Effect in action.
0
0
7
@cantrell
Christian Cantrell
13 days
Interesting that step one in the DeepSeek Node API docs is to install the OpenAI SDK.
Tweet media one
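Which makes sense, since DeepSeek's API is OpenAI-compatible: you install the OpenAI SDK and point it at DeepSeek's endpoint. A minimal sketch of what the docs are describing, with the base URL, model name, and environment variable recalled from memory rather than quoted from the docs, so they may differ:

```typescript
import OpenAI from "openai";

// DeepSeek exposes an OpenAI-compatible endpoint, so the OpenAI SDK works
// as-is; you just swap in DeepSeek's base URL and your DeepSeek API key.
// (Base URL, model name, and env var name are assumptions from memory.)
const client = new OpenAI({
  baseURL: "https://api.deepseek.com",
  apiKey: process.env.DEEPSEEK_API_KEY,
});

const completion = await client.chat.completions.create({
  model: "deepseek-chat",
  messages: [{ role: "user", content: "Why is the sky blue?" }],
});

console.log(completion.choices[0].message.content);
```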
1
1
4
@cantrell
Christian Cantrell
20 days
After preparing to activate our in-ground sprinkler system and wet down the roof of our house in Pasadena, I started thinking about SHIELD: Smart Hyperlocal Integrated Exterior Line of Defense.
2
0
5
@cantrell
Christian Cantrell
25 days
The hotel I'm staying in is full of kids and pets, and everybody is being extremely generous and patient. There's even a conference room full of free clothing, toys, and food for anyone who needs it.
Tweet media one
0
0
6
@cantrell
Christian Cantrell
27 days
Headed back to LA from Palo Alto to see what's what. My first time charging the Rivian at a Tesla charger. Wishing I had a bumper sticker that says "My other car is a Tesla."
Tweet media one
0
0
4
@cantrell
Christian Cantrell
1 month
I keep checking my wifi in Pasadena. As long as I have wifi, I have power. As long as I have power, I have a house.
Tweet media one
1
0
13
@cantrell
Christian Cantrell
1 month
@ddura I'll get an agent to do it. 🙂
0
0
2
@cantrell
Christian Cantrell
2 months
My ranking of LLM caching implementations:

1. OpenAI (GPT): Excellent. Works out of the box. Whatever messages don't change from request to request are automatically cached. Completely effortless.

2. Anthropic (Claude): Easy to implement (drop "cache points" in your messages), and more flexible than OpenAI, but I would take a "cache automatically" option if available. (Still in beta, so maybe there's hope.)

3. Google (Gemini): By far the worst. You have to explicitly create a cache, the key to which cannot be recreated, so you have to hold on to all instances in order to be able to access the cache again. Over-engineered. Architecture over developer ergonomics.

What are your experiences?
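To illustrate the Anthropic "cache point" approach, here is a minimal sketch using the Anthropic TypeScript SDK. The model alias and the exact shape of the cache_control block are assumptions based on the prompt-caching docs (which were in beta at the time, so an additional beta header may also have been required); OpenAI's automatic prefix caching, by contrast, requires no code changes at all.

```typescript
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

// A large, stable block of context (system prompt, reference docs, etc.)
// that we want reused across requests without re-paying for those tokens.
const referenceMaterial = "...many thousands of tokens of unchanging context...";

const response = await client.messages.create({
  model: "claude-3-5-sonnet-latest", // assumed model alias; substitute your own
  max_tokens: 1024,
  system: [
    {
      type: "text",
      text: referenceMaterial,
      // The "cache point": the prefix up to and including this block is
      // cached and reused on subsequent requests with the same prefix.
      cache_control: { type: "ephemeral" },
    },
  ],
  messages: [
    { role: "user", content: "Answer using the reference material above." },
  ],
});

console.log(response.content);
```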
0
0
0
@cantrell
Christian Cantrell
2 months
It's very rare that you can say that you're part of a team that's doing something nobody else has ever done, and that you're working on something that will be better than anything else that came before it. I have the incredibly good fortune to be able to make both claims.
3
0
13
@cantrell
Christian Cantrell
3 months
@LukeW Is that really still a thing? I haven't seen that since iTunes.
1
0
1
@cantrell
Christian Cantrell
3 months
I completely disagree with the backlash against Joker: Folie à Deux. It's a brilliant film, and definitely one of the best in the superhero/villain genre — a two-movie head fake that ends up not being about Joker at all, but about the desire of the oppressed to anoint a champion who will burn it all down. I watched the last few minutes over and over. It's chilling, and one of the best payoffs I've ever seen.
1
0
6
@cantrell
Christian Cantrell
3 months
Very well put. Traditional graphics solved control, then rendering. Generative AI has, at least partially, solved rendering already, and is now trying to solve control. The biggest difference is that it won't take us nearly as long this time.
@c_valenzuelab
Cristóbal Valenzuela
3 months
I often speak about control in AI. But I have realized sometimes people think I mean "better prompts." So here are my thoughts on what I mean by control:

We're solving graphics backwards. The history of computer graphics follows a clear progression: first came control, then quality. It took decades to establish the right abstractions - curves, triangles, polygons, meshes - that would allow us to draw exactly what we wanted on a screen. These fundamental building blocks haven't changed much because they proved to be the right ones. From Ed Catmull's hand to modern game engines, the core principles of how we control pixels have remained remarkably stable. The fundamentals emerged not just for control, but as efficient ways to describe and render complex scenes.

Render quality was the last frontier. A cube modeled in 1987 using the first version of Renderman follows the same geometric principles as one modeled in Blender today. What's dramatically different is the rendering - the lighting, materials, shadows, and reflections that make it feel real. The industry spent decades closing the uncanny valley, building increasingly sophisticated rendering systems to approach photorealism.

Of course, many graphics innovations improved both control and quality simultaneously, and the history of graphics progress is more complex than just "control then quality." But this order wasn't arbitrary. The graphics pipeline itself enforces it: geometry defines what we want to draw, shaders determine how it looks. Even real-time engines follow this pattern - first establishing level-of-detail controls, then improving rendering quality within those constraints.

AI has completely inverted this progression. Today's generative models achieve photorealistic rendering quality that rivals or surpasses traditional pipelines, effectively learning the entire graphics stack - from geometry to global illumination - through massive-scale training. They've collapsed the traditional separation between modeling and rendering, creating an end-to-end system that can produce stunning imagery from high-level descriptions.

What's missing is control. While we can generate photorealistic scenes in seconds, we lack the precise control that decades of graphics research provided. We can't easily adjust geometry, fine-tune materials, or manipulate lighting with the granularity that artists expect. The deterministic nature of traditional graphics - where every parameter has a predictable effect - has been replaced by probabilistic models.

This is the inverse graphics problem: we've solved rendering before solving control. Our models can create stunning imagery but lack the fundamental abstractions that made computer graphics so powerful - the ability to make precise, intentional changes at any level of detail.

This isn't a permanent limitation. Just as computer graphics eventually solved the rendering problem, AI will solve the control problem. The question isn't if, but how. We are finding the right abstractions for controlling generative models - the equivalent of the curves, triangles, and polygons that revolutionized computer graphics before.

I think the solutions might look different. New primitives for control that are native to neural networks might be the right answer rather than trying to force traditional graphics concepts into this new paradigm. Although I also think there are hybrid approaches combining traditional graphics with AI that are worth exploring.
The goal remains to provide the same level of predictability and precision that made computer graphics a foundational tool for creative expression. That's the ultimate goal, but better: real-time, cheap, and with precise control that is as intuitive and general-purpose as possible. Control comes last this time. But it's coming.
0
0
12
@cantrell
Christian Cantrell
3 months
How many of you are using chat via desktop apps now? If you are, why desktop instead of web?
Tweet media one
6
0
2
@cantrell
Christian Cantrell
3 months
I hear @reidhoffman's observation that "If you're not embarrassed by the first version of your product, you've launched too late" frequently — especially given that I'm working toward the launch of a new product at the intersection of storytelling and generative AI. But, as usual, there's nuance.

If you're embarrassed because you know your product isn't very good, you should probably keep iterating. (I could cite several examples, but I think we all know the types of products I'm talking about.) However, if you're embarrassed because you know how much better your product has the potential to be, that's different.

I once heard Gabe Newell say of "Half-Life: Alyx" that what he sees is "everything they didn't do." But there's no question that it's a fantastic game (I would say one of the best ever made). That's how you know you're surrounded by talent: when you launch a great product even while knowing how much better it could be. That's what the future is for.

Look at your product from your customers' perspective. If they will know it sucks, then it sucks. But if they can't know what's in your head — how much better it's going to be over the next few versions — it might be the perfect time to launch.
0
0
4
@cantrell
Christian Cantrell
3 months
Been waiting for this for a decade.
Tweet media one
2
0
7
@cantrell
Christian Cantrell
3 months
@AnthropicAI's Claude now has access to a JavaScript sandbox.
Tweet media one
0
0
4
@cantrell
Christian Cantrell
4 months
My @Rivian R1T wearing its KITT (Knight Rider) costume.
Tweet media one
0
0
6
@cantrell
Christian Cantrell
4 months
@bilawalsidhu What about crumpling up like paper, falling apart piece-by-piece, disintegrating, or shattering like glass? Is the model somehow optimized for crushing, inflating, slicing, melting, and blowing up? (I saw all these in their promotional video.) I'll do some tests...
0
0
4