Met an OpenAI engineer that so strongly believes we’ll have ASI soon that he sold his SF house and is looking to short the housing market of any cities that rely on human intellectual capital
It's Time to Build (with ChatGPT)
Collecting the best APIs, hacks and extensions to get the most out of ChatGPT and to build next-gen product experiences ✨ —
Never been a better time to start an AI startup.
Start now, build out the workflows and the userbase. And then just surf the wave of relentless AI improvement until it all just works.
We got second place at the AGI house hackathon for a real time coding assistant.
Chat to your codebase, see the impact straight away. Errors are piped in and automatically fixed.
A few points:
- OpenAI functions work really well
- this thing feels better and more powerful than
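OpenAI function calling lets the model pick a tool and return structured arguments for it. A minimal sketch of the loop, with the API round trip simulated locally — the `get_weather` tool, its schema, and the model reply here are all hypothetical stand-ins:

```python
import json

# Hypothetical tool we want the model to be able to call.
def get_weather(city: str) -> dict:
    # Stubbed lookup; a real version would hit a weather API.
    return {"city": city, "temp_c": 18}

# JSON-schema description of the tool, passed to the API's `functions` parameter.
FUNCTIONS = [{
    "name": "get_weather",
    "description": "Get the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]

def dispatch(function_call: dict) -> str:
    """Route a model-returned function_call to the matching Python function."""
    fns = {"get_weather": get_weather}
    args = json.loads(function_call["arguments"])
    result = fns[function_call["name"]](**args)
    return json.dumps(result)  # sent back to the model as a function-role message

# Simulated model response: the shape GPT returns when it decides to call a tool.
reply = dispatch({"name": "get_weather", "arguments": '{"city": "London"}'})
```

The nice part is that the model emits arguments matching your schema, so the dispatch step stays this simple.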
O1 acquired and moving to the 🇺🇸 in ~2 weeks!
Going to San Francisco with my wife, 1 yo and 3 yo... suggestions to make it awesome for them all, much appreciated!
GPT-4 coming for Adept
Here's Taxy – an OSS extension that uses GPT to automate browser tasks
- simplified DOM
- Reason + Act prompting
- executes the action using the chrome.debugger API.
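The "simplified DOM" step is the key trick: collapse the page to just the elements an agent can act on, so the prompt stays small. A rough stdlib sketch of the idea (Taxy's actual implementation differs; the tag list and output shape here are illustrative):

```python
from html.parser import HTMLParser

# Elements an agent can plausibly click or type into.
INTERACTIVE = {"a", "button", "input", "select", "textarea"}

class DOMSimplifier(HTMLParser):
    """Collect only the interactive elements from a page."""
    def __init__(self):
        super().__init__()
        self.elements = []

    def handle_starttag(self, tag, attrs):
        if tag in INTERACTIVE:
            attrs = dict(attrs)
            self.elements.append({
                "tag": tag,
                "id": attrs.get("id"),
                "text": attrs.get("aria-label") or attrs.get("value") or "",
            })

def simplify(html: str) -> list:
    parser = DOMSimplifier()
    parser.feed(html)
    return parser.elements

page = '<div><p>hi</p><button id="save" aria-label="Save">Save</button></div>'
simplified = simplify(page)
```

The model then reasons over this short element list and names an action, which gets executed against the real page.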
Here it is: doing something esoteric with github settings
Leaked Google document: “We Have No Moat, And Neither Does OpenAI”
The most interesting thing I've read recently about LLMs - a purportedly leaked document from a researcher at Google talking about the huge strategic impact open source models are having
Unofficial Python API
(there are more popular repos, but this one doesn't use a headless browser, looks most reliable and deserves the credit for the reverse engineering)
The more I think about it, the more I think this interface will be the layout for the coming wave of AI agent startups.
- chat
- browser
- canvas
- tools
with rewind and edit exposed to the user.
To explain...
With things like Copilot, the idea was that you'd be using your
@CohereAI
just announced some huge moves on pricing for their LLMs!
Charging per document, not per token, and the same price for any size model. And free fine tuning!*
Means you'll care less about optimising the prompt and can just focus on the performance that matters
I'm collecting these and more here:
Star it to keep track of the latest tools, demos and extensions.
This is the time to be hacking with AI.
What else should I include?
I wrote a VSCode extension that allows you to use ChatGPT within the context of your code 🔥
You can click on code blocks to paste them into the editor, or use a selected code snippet as a reference.
The extension is on GitHub if you want to check it out
Ideas aren't hard here. All knowledge work gets transformed. Just reimagine everyone's job if they had a smart, fast, sometimes unreliable assistant looking over their shoulder.
📟 Interpreter Plugin
Gives GPT a "sandboxed, firewalled, execution environment, with some ephemeral disk space"
With GPT-4's code writing ability the concern is more on safety than utility. This is extremely useful.
Here it is doing data science
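The core of an execution sandbox like this is running model-generated code in an isolated process with a time limit and throwaway disk. A minimal sketch of that pattern (a real sandbox would also drop network access and cap memory — this only illustrates the shape):

```python
import subprocess
import sys
import tempfile

def run_untrusted(code: str, timeout: float = 5.0) -> str:
    """Run model-generated Python in a separate process with a time limit."""
    # Ephemeral working directory: files vanish when the run finishes.
    with tempfile.TemporaryDirectory() as workdir:
        proc = subprocess.run(
            [sys.executable, "-c", code],
            cwd=workdir,
            capture_output=True,
            text=True,
            timeout=timeout,  # kills runaway loops
        )
    return proc.stdout

out = run_untrusted("print(sum(range(10)))")
```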
@petergyang
Most are just using GPT-3 with a crafted prompt, with a different prompt for each tool. The leading ones are heavily finetuning (using OpenAI) on historically good data and creating a data flywheel.
Some have switched to GPT-J/other providers, but mostly just for forbidden use cases
~Some personal news~
With gratitude, I've left my job as a research scientist at Amazon. I'm going to be starting a startup.
A final hail mary to make it on the 30 under 30
Today, we've funded our 100th Pioneer. Better e-commerce search, an easier way to publish a blog, Fitbit for productivity, and more. Check out the latest Pioneer projects:
GPT Plugins are the eyes, ears and hands of practically useful AI.
Collecting together the best examples so you can build your own, and take advantage of this technology yourself.
↓
Just built a GPT-4 Discord bot... using GPT-4.
Honestly, took 20 minutes. Any bugs? Just paste and it fixes. Ideas for improvements? Just ask and it does it.
🤯 The world is not ready for this level of productivity increase
Best designer I know
@whatthefurr
(he did our branding at Humanloop) is doing NFTs and it's getting the attention it deserves.
The next Invisible Friends
👉
@ThePossessedNFT
@vercel
I've loved working with Svelte, but when there's this much development around making Next.js better, I feel good about switching back to the React world.
Worth pointing out that the UK is now getting a few of the most important things right
- Noting the historical leadership in research and engineering
- Viewing tech innovation as the route out of the degrowth decade
- Establishing ARIA
- Giving special focus to AI
It's been an honour advising the Government on the creation of the AI Taskforce and I'm delighted that we've been able to secure someone as expert, principled and ambitious as Ian Hogarth as its first Chair. The coming months will be exciting...
OSS enables permissionless innovation. This is going to be the driving force that makes models small, fast and installed everywhere
...but the open source models will always be playing catchup for the best quality.
Machine translation now works without parallel texts
Learn which words co-occur in each language, then align the spaces of these distributed dictionaries to get a rough word-to-word translation, and tidy it up with a language model.
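The alignment step has a neat closed form. A toy sketch in 2-D with made-up "embeddings": two spaces that differ only by a rotation, where the best-fit angle comes straight from summed dot and cross products (real systems use high-dimensional Procrustes and seed the matching from co-occurrence statistics; here the pairing is given to keep the sketch short):

```python
import math

# Toy 2-D "embeddings": the same concepts occupy the same relative
# positions in both languages, but one space is rotated (hypothetical data).
src = {"dog": (1.0, 0.0), "cat": (0.9, 0.3), "car": (-0.2, 1.0)}

def rot(p, t):
    x, y = p
    return (math.cos(t) * x - math.sin(t) * y,
            math.sin(t) * x + math.cos(t) * y)

theta = 0.7  # the "unknown" rotation between the two spaces
tgt = {"chien": rot(src["dog"], theta),
       "chat": rot(src["cat"], theta),
       "voiture": rot(src["car"], theta)}

# 2-D Procrustes: the rotation minimizing total squared error is
# atan2 of the summed cross products over the summed dot products.
pairs = list(zip(src.values(), tgt.values()))
dot = sum(sx * tx + sy * ty for (sx, sy), (tx, ty) in pairs)
cross = sum(sx * ty - sy * tx for (sx, sy), (tx, ty) in pairs)
theta_hat = math.atan2(cross, dot)

def translate(word):
    """Map a source word into the target space; nearest neighbour wins."""
    mapped = rot(src[word], theta_hat)
    return min(tgt, key=lambda w: math.dist(mapped, tgt[w]))
```

Once the spaces line up, "translation" is just nearest-neighbour lookup — which is why no parallel texts are needed.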
🔌browsing: enabled!
I built "ChatGPT Advanced" – a chrome extension that lets you augment your prompts to
#ChatGPT
with results from the web.
This can help make replies more accurate and up-to-date.
Here's a demo:
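The augmentation pattern itself is simple: fetch snippets, prepend them to the question. A sketch of the idea — `web_search` here is a stub standing in for whatever search backend an extension like this calls:

```python
def web_search(query: str) -> list:
    """Stubbed search results; a real extension queries a search engine."""
    return ["GPT-4 was released in March 2023."]

def augment(question: str) -> str:
    """Prepend web snippets so the model can answer with fresh information."""
    snippets = "\n".join(f"- {s}" for s in web_search(question))
    return (f"Web results:\n{snippets}\n\n"
            f"Using the results above, answer: {question}")

prompt = augment("When was GPT-4 released?")
```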
Perplexity now lets you chat with your search results!
This is a broad interaction need for so many AI apps. A single turn prompt is unlikely to get things right for anything complicated. A back and forth dialogue is needed to refine and clarify. This looks great.
Announcing a major update to Perplexity Ask: the world’s first conversational search engine! Now, you can read answers with up-to-date sources and ask follow-up questions to dig deeper. In other words, you can chat with your search engine!
Try it out at
Current training runs for the largest models cost 10s-100s of millions of dollars, excluding staffing costs.
It won't be long before we're talking billions of dollars per training run. And hundreds of millions on high-quality datasets.
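A back-of-envelope version of that claim — every number below is an illustrative assumption, not a disclosed figure for any real model:

```python
# Rough training-cost estimate. All inputs are assumptions for illustration.
params = 300e9           # 300B parameters (assumed)
tokens = 5e12            # 5T training tokens (assumed)
flops = 6 * params * tokens  # ~6 FLOPs per parameter per token (standard rule of thumb)

gpu_flops = 300e12       # ~300 TFLOP/s effective per accelerator, incl. utilisation (assumed)
gpu_hour_cost = 2.0      # $/GPU-hour (assumed cloud rate)

gpu_hours = flops / gpu_flops / 3600
cost = gpu_hours * gpu_hour_cost
print(f"~${cost / 1e6:.0f}M")
```

Under these assumptions the compute bill alone lands in the tens of millions — and it scales linearly with both parameter count and token count, which is how you get to billions.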
“OpenAI may try to raise as much as $100 billion in the coming years to achieve its aim of developing artificial general intelligence that is advanced enough to improve its own capabilities”
I mean, at this point, who is counting.
GPT-4 API access and playground now available in Humanloop!
Want to prototype with the GPT-4 API but don't have access yet?
Sign up and start building!
The big AI labs will not open source their large models, nor will they reveal the tricks they've learned on their way.
This is crucial for them as a business. AI Safety and geopolitical concerns will further reinforce the need to restrict access.
In a few years' time, there will be ~3 players large enough to afford the compute needed to train them (think CapEx of a cloud infra company, only bigger).
Open-source/community efforts cannot keep pace, because they won't be able to raise the capital needed.
2.6M views! I guess there’s demand for bets about the impact of AI. Here’s mine:
AGI and ASI will likely happen in our lifetimes and sooner than you think.
The impact will be slower than you think.
And human intellectual capital will go through a golden age on the way.
Every small drop in training error "unlocks" new capabilities as the model understands the world more.
**We're nowhere near the limits of scale yet**. There are many high-value tasks (science, business...) where you would pay a *lot* for marginally better performance.
The OSS community was gifted the leak of LLaMa weights. We shouldn't expect that to happen for larger models going forward. The most capable OSS models are at least a year behind right now, and I'd expect that to only increase.
Notable moment: we've just stepped onto a new abstraction level for AI
With assistants, you no longer set (or can set!) model parameters like temperature, token penalties, bias etc.
It's a great sign that the models are now so usable you don't need to care.
Tough period of my life and was off twitter but plan on coming back now.
Unnecessary I'm sure, but I feel the need to talk about it. Seems wrong to skip past the bad times. Everyone has gone or will go through personal tragedies.
So mad to see your face in the youtube recommendations!
Thanks to EO for doing these startup interview series. Incredible operators producing top tier content.
How to Maximize LLM Performance
There was a talk at OpenAI DevDay that you probably missed, but it has GOLD in it. For those that want to build with LLMs, it maps out the process you need to get it production-ready.
I've turned that into a written guide, sprinkled in with my own
I'm here in SF now!
Finally set up now with a house, gym and an office (the essentials).
Keen to meet more people in the area! Especially people that are working in AI... or have young kiddos.
Happy to share that I've officially moved to San Francisco! Today's my first day in our new SF office.
Alongside
@jordnb
, I'll be growing
@humanloop
's US team.
Would love to meet more people in the city so please DM me!