Matt Ambrogi

@matt_ambrogi

Followers
652
Following
10K
Statuses
3K

Programming, product, and machine learning.

Joined November 2019
@matt_ambrogi
Matt Ambrogi
1 year
New post: Strategies for Improving the Performance of Retrieval Augmented Generation Systems. I cover ideas such as chunking approach, metadata use, query routing, fine-tuning embeddings, reranking, and more. Link below.
6
7
62
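A hedged sketch of one idea from the post, reranking: retrieve a generous candidate set first, then rescore query/chunk pairs with a cross-encoder and keep only the top few for the prompt. The model name, candidate list, and top_k value are placeholder assumptions, not details from the post.

```python
# Sketch of a reranking step for a RAG pipeline (placeholder names throughout).
from sentence_transformers import CrossEncoder

# A small cross-encoder scores (query, chunk) pairs jointly, which is usually
# more accurate than the bi-encoder similarity used for first-stage retrieval.
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

def rerank(query: str, candidate_chunks: list[str], top_k: int = 5) -> list[str]:
    """Rescore retrieved chunks and keep the most relevant few for the prompt."""
    scores = reranker.predict([(query, chunk) for chunk in candidate_chunks])
    ranked = sorted(zip(candidate_chunks, scores), key=lambda pair: pair[1], reverse=True)
    return [chunk for chunk, _ in ranked[:top_k]]
```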
@matt_ambrogi
Matt Ambrogi
7 hours
@safwaankay Oo I do like this one
0
0
1
@matt_ambrogi
Matt Ambrogi
11 hours
@dannypostmaa Maybe a dangerously simple heuristic? I use Snapchat every day. If I had invested in 2021, my shares would currently be worth 1/7th of their purchase price.
0
0
1
@matt_ambrogi
Matt Ambrogi
11 hours
@jeremyphoward
Jeremy Howard
1 day
0
0
0
@matt_ambrogi
Matt Ambrogi
1 day
What I take from this: lots of opportunities to build niched down versions of deep research.
@8teAPi
Prakash (Ate-a-Pi)
2 days
I hope you’re ready. Deep Research will be used on you by HR before the interview. On a Tinder match before a meetup. On a tenant before renting. On a landlord too. It will be used to provide background on people, companies, policies, theories. The big use cases are not the ones that are advertised. The meaning of intelligence too cheap to meter is to use intelligence on the trivial. And that will have its own consequences.
0
0
3
@matt_ambrogi
Matt Ambrogi
1 day
@jxmnop diabolical that you didn’t drop those closing parentheses to a new line
0
0
0
@matt_ambrogi
Matt Ambrogi
3 days
Type sh I’m on right now
0
0
1
@matt_ambrogi
Matt Ambrogi
3 days
As an aside: I think this is an area where lightweight reasoning models could be amazing. For example: monitoring adherence in long-context RAG. Pass a statistically significant number of generation and context pairs to one of these models every day and have it determine faithfulness.
0
0
0
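A rough sketch of the monitoring idea above, under assumptions the tweet doesn't state: logged (context, generation) pairs are available as dicts, the judge is a lightweight reasoning model reachable through the OpenAI chat completions API, and the sample size and model name are placeholders.

```python
# Daily faithfulness check over a sample of logged RAG interactions (sketch).
import random
from openai import OpenAI

client = OpenAI()

PROMPT = (
    "Context:\n{context}\n\nAnswer:\n{generation}\n\n"
    "Is every claim in the answer supported by the context? Reply FAITHFUL or UNFAITHFUL."
)

def sample_and_judge(pairs: list[dict], n: int = 200) -> float:
    """Judge a daily sample of (context, generation) pairs; return the faithful rate."""
    sample = random.sample(pairs, min(n, len(pairs)))
    faithful = 0
    for pair in sample:
        response = client.chat.completions.create(
            model="o3-mini",  # placeholder: any lightweight reasoning model
            messages=[{"role": "user", "content": PROMPT.format(**pair)}],
        )
        if "UNFAITHFUL" not in response.choices[0].message.content.upper():
            faithful += 1
    return faithful / len(sample)
```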
@matt_ambrogi
Matt Ambrogi
3 days
@JohnGilhuly @_Jarrad_ @ArizePhoenix @aiDotEngineer @aparnadhinak Met Aparna at last one in the summer! I'll find you there. Look forward to seeing the research.
0
0
2
@matt_ambrogi
Matt Ambrogi
3 days
I remember once reading a study that found that the number one predictor of how good someone ultimately becomes at math is how long they're willing to sit with problems they don't understand when starting out.
@fermatslibrary
Fermat's Library
4 days
Andrew Wiles on being smart
0
0
2
@matt_ambrogi
Matt Ambrogi
4 days
Amazing that AI solved getting up to speed on a new code base. It used to take weeks. Now you can just jump in, try to do things, and figure it out as you go. Makes it easier to collaborate on projects, contribute to open source, or switch jobs.
@simonw
Simon Willison
4 days
o3-mini is really good at writing internal documentation - feed it a codebase, get back a detailed explanation of how specific aspects of it work
0
0
1
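A minimal sketch of the workflow in the quoted tweet, not Simon's actual setup: concatenate a small repo's source files and ask a reasoning model to explain one aspect of them. The model name, file glob, and prompt wording are assumptions.

```python
# Ask a model to write internal documentation for part of a codebase (sketch).
from pathlib import Path
from openai import OpenAI

client = OpenAI()

def document_codebase(repo_path: str, question: str) -> str:
    """Concatenate source files and ask the model to explain one aspect of them."""
    files = sorted(Path(repo_path).rglob("*.py"))  # assumes the repo fits in context
    codebase = "\n\n".join(f"# {f}\n{f.read_text()}" for f in files)
    response = client.chat.completions.create(
        model="o3-mini",  # placeholder model name
        messages=[{
            "role": "user",
            "content": f"{codebase}\n\nWrite internal documentation explaining: {question}",
        }],
    )
    return response.choices[0].message.content
```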
@matt_ambrogi
Matt Ambrogi
4 days
@JohnGilhuly @_Jarrad_ @ArizePhoenix @JohnGilhuly are you going to be at @aiDotEngineer in New York? Would love to chat about this.
1
0
0
@matt_ambrogi
Matt Ambrogi
4 days
@deadly_onion How’d you navigate cost of embedding and storing so many chunks?
1
0
0
@matt_ambrogi
Matt Ambrogi
5 days
@_Jarrad_ @ArizePhoenix This is very cool. What’s an example of an assertion you’d use?
1
0
2
@matt_ambrogi
Matt Ambrogi
5 days
Another interesting idea is trying to mimic drift detection in traditional ML. If you have a list of expected queries, could you have a classifier looking for user inputs that don't seem to match 'expected'? Then you could incorporate those types of questions into your pre-deployment evals.
0
0
0
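A minimal sketch of this drift-detection-style check, with assumptions not in the tweet: the expected queries are a hand-written list, embeddings come from sentence-transformers, and the similarity threshold is a placeholder to tune against real traffic.

```python
# Flag user queries that don't resemble anything in the expected set (sketch).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

expected_queries = [  # placeholder examples
    "How do I reset my password?",
    "What is your refund policy?",
]
expected_embeddings = model.encode(expected_queries, convert_to_tensor=True)

def is_unexpected(user_query: str, threshold: float = 0.5) -> bool:
    """Return True when the query's best match in the expected set is weak."""
    query_embedding = model.encode(user_query, convert_to_tensor=True)
    best_similarity = util.cos_sim(query_embedding, expected_embeddings).max().item()
    return best_similarity < threshold
```

Flagged queries could then be reviewed and folded back into the pre-deployment eval set, as the tweet suggests.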