_dschnurr Profile Banner
David Schnurr Profile
David Schnurr

@_dschnurr

Followers
11K
Following
7K
Statuses
573

engineer @openai

San Francisco, CA
Joined February 2010
@_dschnurr
David Schnurr
4 days
Seems like whatever money the company uses to pay the worker via salary/wages, it would be incentivized to shift to tips instead? You already see this in the restaurant industry in some states, where most of a worker's earnings are tips and the actual wage the restaurant pays is below the federal minimum.
1
0
1
@_dschnurr
David Schnurr
11 days
👀
@OpenAI
OpenAI
11 days
Deep Research, live from Tokyo: 4pm PT / 9am JST. Stay tuned for a link to the livestream.
0
0
10
@_dschnurr
David Schnurr
15 days
@Miles_Brundage Good news @dylanwenzlau, the agents are choosing imgflip for their meming needs
0
0
5
@_dschnurr
David Schnurr
20 days
RT @edwinarbus: you can now use o1 to code in chatgpt canvas, and it renders html and react apps used it to quickly build a windows 95 sim…
0
137
0
@_dschnurr
David Schnurr
1 month
Using zero-knowledge proofs, you could tap your phone on your passport to provide X with a cryptographic proof that you are a citizen of some country, without revealing your real identity. Projects like @openpassportapp demonstrate that this is possible. And if we're lucky, operating systems will make these sorts of privacy-preserving identity primitives available to app developers out of the box.
3
4
14
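The passport idea above rests on proofs of knowledge. As a minimal illustration of that primitive only (not the actual @openpassportapp protocol, which proves statements about the government's signature inside a ZK circuit), here is a toy Schnorr proof in Python: the prover convinces a verifier it holds a secret exponent without ever revealing it. The group parameters and the `secret` value are demo assumptions.

```python
import hashlib
import secrets

# Toy group parameters; real systems use elliptic curves or SNARK circuits.
p = 467          # safe prime: p = 2*q + 1
q = 233          # prime order of the subgroup
g = 4            # generator of the order-q subgroup

def prove(x):
    """Schnorr proof of knowledge of x with y = g^x mod p (Fiat-Shamir)."""
    y = pow(g, x, p)
    r = secrets.randbelow(q)
    t = pow(g, r, p)                                   # commitment
    c = int.from_bytes(hashlib.sha256(f"{g}{y}{t}".encode()).digest(), "big") % q
    s = (r + c * x) % q                                # response
    return y, t, s

def verify(y, t, s):
    c = int.from_bytes(hashlib.sha256(f"{g}{y}{t}".encode()).digest(), "big") % q
    # g^s == t * y^c holds iff the prover knew x, since s = r + c*x
    return pow(g, s, p) == (t * pow(y, c, p)) % p

secret = 123  # stand-in for a credential's private value
y, t, s = prove(secret)
assert verify(y, t, s)   # the transcript (y, t, s) reveals nothing about `secret`
```

A real passport proof is far more involved, but the shape is the same: a short transcript that checks out mathematically without disclosing the underlying identity data.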
@_dschnurr
David Schnurr
1 month
RT @maxescu: I had to animate this from James. It felt just right with a Sora Remix and also looped:
0
7
0
@_dschnurr
David Schnurr
2 months
@karpathy True but it's a lagging indicator since adoption takes time – there's still X00 billion of latent ARR yet to be tapped in the current generation of models.
0
0
10
@_dschnurr
David Schnurr
2 months
RT @MillionInt: Deep learning works
0
9
0
@_dschnurr
David Schnurr
2 months
RT @pbbakkum: We're launching updates to the OpenAI RealtimeAPI today: - WebRTC support - 2 new models - Big price cut - New API patterns…
0
10
0
@_dschnurr
David Schnurr
2 months
This is why I'm excited about C2PA (@C2PA_org). At OpenAI we are already attaching signed provenance metadata onto videos generated by Sora & images generated by ChatGPT. Social networks can parse and display this metadata in feeds – the attached screenshot shows how LinkedIn handles this today.

C2PA is different from approaches like watermarking / DRM / steganography. C2PA metadata can be trivially stripped or lost during transcoding/cropping by a platform that doesn't respect C2PA. However, when preserved it allows you to definitively prove that some media was emitted by a given AI model, camera, or other system.

Very optimistic about a future where we attach signed provenance metadata to all media, and learn to distrust any media without it.
[Attached image: screenshot of LinkedIn displaying C2PA provenance metadata in-feed]
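A rough sketch of the attach/verify flow described above, using a toy HMAC-signed JSON manifest. Real C2PA manifests are COSE-signed with X.509 certificate chains and embedded in the media container; the key and field names here are illustrative assumptions.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # hypothetical; real signers use asymmetric keys

def attach_provenance(media_bytes, generator):
    """Build a signed manifest binding the media bytes to their generator."""
    manifest = {
        "claim_generator": generator,
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"manifest": manifest, "signature": sig}

def verify_provenance(media_bytes, record):
    """Check both the signature and that the hash matches these exact bytes."""
    payload = json.dumps(record["manifest"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(record["signature"], expected)
            and record["manifest"]["content_sha256"]
                == hashlib.sha256(media_bytes).hexdigest())

video = b"...generated video bytes..."
rec = attach_provenance(video, "example-generator")
assert verify_provenance(video, rec)             # intact: provenance holds
assert not verify_provenance(video + b"x", rec)  # altered/transcoded: fails
```

The failure on altered bytes mirrors the caveat in the tweet: the metadata proves provenance only while it is preserved alongside the original bits.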
@balajis
Balaji
2 months
VERIFIABLE VIDEO

We need verifiable video to prove that footage hasn't been faked with AI. We can get there with cryptocameras.

Suppose that when you take a video, you can optionally put its hash onchain for a small fee. This is like a digital notary public. It establishes that (a) the video existed at that timestamp and (b) you are the user who wrote that video file to the blockchain.

Of course, it would still be possible to take a video, manipulate it with AI, and then put its hash onchain while claiming it's real. But it could be made quite difficult, on par with spoofing Apple's GPS or harder.

Any major social network could build verifiable video into their software right now. You just take the camera app and add "verifiability" as another mode. It'd be similar to slow-mo or time-lapse, but require a small fee to write a verifiable video onchain. And any phone vendor could also put verifiable video into hardware if sufficient demand existed. They could probably make it very hard to fake by streaming the video hash live to the blockchain as it was recorded.

In fact, citizen journalists of the future might have to post verifiable videos, with an onchain checkmark next to them, or else people would consider them more likely to be fake. So, there's a light at the end of the tunnel. AI makes everything easy to fake, but crypto makes it hard again.

PS: The use cases go way beyond media as well. If you generalize the concept of cryptocameras to cryptoinstruments, you could get verifiable chain of custody for every important piece of scientific data, like DNA sequencing data or temperature measurements. That could go a long way towards reducing academic fakery and dealing with the replication crisis.
4
1
37
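The notarization scheme in the quoted tweet can be sketched with an in-memory append-only list standing in for the chain; the fee, wallet identity, and consensus layer are all elided, and the field names are assumptions.

```python
import hashlib
import time

ledger = []  # toy stand-in for a blockchain; entries are append-only

def notarize(video_bytes, author):
    """Record the video's hash 'onchain' with a timestamp, like a notary."""
    entry = {
        "sha256": hashlib.sha256(video_bytes).hexdigest(),
        "timestamp": time.time(),
        "author": author,
    }
    ledger.append(entry)
    return entry

def lookup(video_bytes):
    """Later, anyone can check the footage existed as of that timestamp."""
    digest = hashlib.sha256(video_bytes).hexdigest()
    return next((e for e in ledger if e["sha256"] == digest), None)

clip = b"...raw camera footage..."
notarize(clip, "citizen-journalist")
assert lookup(clip) is not None          # matches the onchain record
assert lookup(clip + b"edited") is None  # any alteration breaks the match
```

As the tweet notes, this only proves existence at a timestamp, not authenticity of the content itself; streaming hashes live during capture is what would raise the bar against pre-recording manipulation.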
@_dschnurr
David Schnurr
2 months
RT @JMT3: catching my breath from yesterday’s excitement. it’s been awesome to see all the love for the Sora UI it’s been such a blast dre…
0
6
0
@_dschnurr
David Schnurr
2 months
RT @CitizenPlain: Sora Remix test: Scissors to crane Prompt was "Close up of a curious crane bird looking around a beautiful nature scene…
0
157
0
@_dschnurr
David Schnurr
2 months
RT @blizaine: This is getting fun. Have Sora create two variants from the same prompt, then Blend them. 🤯
0
28
0
@_dschnurr
David Schnurr
2 months
RT @nikunjhanda: 3 ~new OpenAI SDKs: - java: - go: - dotnet: they'…
0
7
0
@_dschnurr
David Schnurr
2 months
RT @billpeeb: Today’s deployment is only possible with our new model, Sora Turbo. Huge shoutout to @bram_wallace and @JureZbontar who spear…
0
13
0
@_dschnurr
David Schnurr
2 months
RT @rohanjamin: Very excited to bring this to the world. You may notice signups disabled as we scale this up but hope to be fully live soon…
0
24
0