Excited for my first public contribution at
@OpenAI
— Embeddings v2: it unifies text search, text similarity, and code search, outperforms our best v1 models on most tasks, and is priced 99.8% lower!
love the phrasing of "world simulators": imagine the scale of intelligence needed to accurately model relationships between complex entities in a physical environment
the beautiful aesthetics are just a cherry on top
@stephenbalaban
"Behind it all is surely an idea so simple, so beautiful, that when we grasp it – in a decade, a century, or a millennium – we will all say to each other, how could it have been otherwise?"
The most popular use case of our new embedding model is, by far, retrieval-augmented question answering. Check out
@isafulf
's new plugin that makes this a native experience with ChatGPT!
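The retrieval pattern behind this kind of plugin is simple: embed each document once, embed the incoming question, rank documents by cosine similarity, and hand the top passages to the chat model as context. A minimal sketch with stand-in vectors (the documents, 3-d "embeddings", and question here are all hypothetical; a real app would get these vectors from an embedding API):

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(question_vec, doc_vecs, docs, top_k=2):
    """Rank documents by cosine similarity to the question embedding."""
    ranked = sorted(range(len(docs)),
                    key=lambda i: cosine(question_vec, doc_vecs[i]),
                    reverse=True)
    return [docs[i] for i in ranked[:top_k]]

# Stand-in 3-d "embeddings" for illustration only; a real pipeline would
# produce these with an embedding model.
docs = ["refund policy", "shipping times", "api limits"]
doc_vecs = [np.array([1.0, 0.1, 0.0]),
            np.array([0.1, 1.0, 0.0]),
            np.array([0.0, 0.1, 1.0])]
question_vec = np.array([0.9, 0.2, 0.1])  # e.g. "how do refunds work?"

context = retrieve(question_vec, doc_vecs, docs)
# context[0] is the passage closest to the question; these passages are
# what gets stuffed into the chat model's prompt as grounding
```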
We've written plugins for browsing & Python code execution (amazing for data science use cases), launched with 11 partners, and have open-sourced a high-quality plugin for retrieval over any data you'd like to make accessible to ChatGPT:
one of the biggest lessons i've learned from my first year doing research is that, when someone comes up with an idea, it's almost certain that someone else has either already done it or is about to do it
i wish more recommendation algorithms gave users control of the diversity of their feeds, like the temperature slider of a language model
give me entropy. help me discover the unknown
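The temperature-slider analogy maps directly onto sampling: divide item scores by a temperature before the softmax, and one knob moves the feed from pure exploitation to near-uniform exploration. A minimal sketch with made-up relevance scores (the scores and function name are illustrative, not any real recommender's API):

```python
import numpy as np

def sample_feed(scores, temperature=1.0, k=5, rng=None):
    """Sample k distinct items; higher temperature = more diverse feed."""
    rng = rng if rng is not None else np.random.default_rng(0)
    logits = np.asarray(scores, dtype=float) / temperature
    logits -= logits.max()                       # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return rng.choice(len(scores), size=k, replace=False, p=probs)

scores = [9.0, 8.5, 3.0, 2.0, 1.0, 0.5, 0.1, 0.0]  # hypothetical relevance scores
low  = sample_feed(scores, temperature=0.2)  # mostly the top-scored items
high = sample_feed(scores, temperature=5.0)  # near-uniform: "give me entropy"
```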
i can’t wait for content recommendation algorithms to get augmented with natural language feedback, letting users granularly and iteratively critique and refine their feeds in real-time
the simplest of changes, like going to a park, or turning on some music, can make a piece of writing resonate so much more deeply with me. like my comprehension is limited by my environment. and it's empowering to remember how easy it is to shape it
@bcjordan
@OpenAI
i'm excited to see how such a cheap ($0.40 per 1M tokens), easy-to-use model (versatile representational space, reliable API / don't need to self-host) enables more innovative retrieval-based apps like
@omniscience42
to be created and scaled :)
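To make the "$0.40 per 1M tokens" figure concrete, a back-of-envelope calculation (the corpus size below is a hypothetical example, not from the tweet):

```python
# Cost of embedding a corpus at $0.40 per 1M tokens.
PRICE_PER_TOKEN = 0.40 / 1_000_000

def embedding_cost(num_docs, avg_tokens_per_doc):
    """Total embedding cost in dollars for a corpus of this size."""
    return num_docs * avg_tokens_per_doc * PRICE_PER_TOKEN

# Hypothetical corpus: 100k documents of ~500 tokens each = 50M tokens.
cost = embedding_cost(100_000, 500)
print(f"${cost:.2f}")  # → $20.00
```

At that price, embedding an entire knowledge base is cheap enough to re-run whenever the data changes.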
Borges' Funes is a prophetic meditation on large language models: the mapping of words to numbers, the idea of something that has a greater capacity of memory than all of mankind, and contrasting that capacity with that of "thinking"
being in tune with your tastes compounds: knowing what you like makes it easier to find things you do; the more you discover them, the more opportunities you have to understand what draws you to them
Life Universe
Explore the infinitely recursive universe of the Game of Life! It runs in real time and is perfectly consistent: it never fails to remember where you are and where you came from.
I made a piece that lets you explore an infinitely recursive Game of Life universe
#indiedev
Many believe that great AI advances must contain a new “idea”. But it is not so: many of AI’s greatest advances had the form “huh, turns out this familiar unimportant idea, when done right, is downright incredible”
@sociaIrate
this is an example of why "realize" is one of my favorite polysemes
realize
1. become fully aware of (something) as a fact; understand clearly
2. cause (something desired or anticipated) to happen
@dwarkesh_sp
i've gotten a lot of incredible recommendations this year by telling ChatGPT my favorite ideas and asking for thinkers / writings who've thought about similar things
the amount you are able to take away from an experience is scaled by your interest in it; in this sense, being in tune with your tastes is a type of meta-learning
@arankomatsuzaki
the new text-embedding-ada-002 should actually be pretty good at multilingual tasks; we removed this as a limitation in our documentation a while back!
@sociaIrate
i keep a log of experiences that give me aesthetic chills (commonly reading certain books or listening to music)
i've been feeding this list to base GPT-4, and having it continue the generation, suggesting future experiences for me. some of my favorite muses have come from this
@AkiyaCollective
i happened to stumble upon it being rented out on Airbnb last year and had such a lovely stay there. happy to hear there are communities like this set out to build more!
@highchinar1
it's usually better to embed short texts tbh. returning a paragraph of just the most important information, instead of hiding it in pages of boilerplate
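One way to act on this: split long pages into short, focused passages before embedding, so the key fact isn't diluted across one page-sized vector. A minimal sketch (the word-based splitter and example text are illustrative; real chunkers often split on sentences or tokens):

```python
def chunk(text, max_words=50):
    """Split a long document into short passages so each embedding stays focused."""
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

# Hypothetical page: 120 words of boilerplate burying one important fact.
page = "boilerplate " * 120 + "the refund window is 30 days"
passages = chunk(page)
# The key fact now sits in a short final passage; embedding that passage
# alone gives retrieval a much cleaner signal than embedding the whole page.
```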
@lifeonmarsspace
one of my favorite embedding papers is a study of this: despite different models embedding concepts in completely different positions, the relation between concepts is largely universal, allowing for communication