Exo Labs (@exolabs_)

Followers: 8,745 · Following: 3 · Media: 4 · Statuses: 59

Run your own AI cluster at home using everyday devices

Joined March 2024

Exo Labs (@exolabs_) · 12 days
Llama-3-405b is coming on 23rd July and exo will support it on day 0.
Quoting Alex Cheema - e/acc (@ac_crypto) · 12 days
You don't need an H100 to run Llama-3-405b. 2 MacBooks and 1 Mac Studio will do the job, with @exolabs_ to aggregate the memory/compute. I'm ready for you, Llama-3-405b.
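
As a rough sanity check on the quoted claim, the sketch below estimates whether the aggregated unified memory of two MacBooks and one Mac Studio could hold Llama-3-405b's weights. The device memory sizes and the 4-bit quantization are assumptions for illustration, not details given in the tweet.

```python
# Back-of-the-envelope check: can 2 MacBooks + 1 Mac Studio hold Llama-3-405b?
# Assumed maxed-out configurations -- not specified in the tweet.
devices_gb = {
    "macbook_pro_1": 128,   # assumed 128 GB unified memory
    "macbook_pro_2": 128,
    "mac_studio": 192,      # assumed M2 Ultra with 192 GB unified memory
}

params_billion = 405
bytes_per_param = 0.5       # assumed 4-bit quantization

weights_gb = params_billion * bytes_per_param  # ~202 GB of weights
total_gb = sum(devices_gb.values())            # 448 GB aggregated

print(f"Weights (4-bit): ~{weights_gb:.0f} GB")
print(f"Aggregate unified memory: {total_gb} GB")
print("Fits across the cluster" if weights_gb < total_gb else "Does not fit")
```

Under these assumptions the weights alone fit comfortably, with headroom left for the KV cache and activations.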

Exo Labs (@exolabs_) · 13 days
New exo feature just dropped.
Quoting Alex Cheema - e/acc (@ac_crypto) · 13 days
@exolabs_ now tracks in real time whether you are GPU poor or GPU rich based on all the devices connected in your AI cluster. Here I have 2 MacBook Pros, 1 MacBook Air and 1 Mac Studio connected. h/t @caseykcaruso and @huggingface for inspiring this.
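
The tweet does not say how the GPU-poor/GPU-rich call is actually made, so the following is only a hypothetical sketch of one way such a label could be derived from the devices connected to a cluster. The threshold and device specs are invented for illustration and are not exo's heuristic.

```python
# Hypothetical classifier: label a cluster "GPU poor" or "GPU rich" from the
# total memory of the connected devices. Threshold and specs are made up.
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    memory_gb: int

def classify_cluster(devices: list[Device], rich_threshold_gb: int = 128) -> str:
    """Return a label plus the aggregate memory of the cluster."""
    total = sum(d.memory_gb for d in devices)
    label = "GPU rich" if total >= rich_threshold_gb else "GPU poor"
    return f"{label}: {total} GB across {len(devices)} devices"

# The cluster from the tweet: 2 MacBook Pros, 1 MacBook Air, 1 Mac Studio.
cluster = [
    Device("MacBook Pro", 36),   # assumed configurations
    Device("MacBook Pro", 48),
    Device("MacBook Air", 16),
    Device("Mac Studio", 192),
]
print(classify_cluster(cluster))  # GPU rich: 292 GB across 4 devices
```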

Exo Labs (@exolabs_) · 15 days
Mixture of Experts (MoE) models + distributed inference = a match made in heaven. Soon, you'll be able to run 100b+ parameter models like this on normal laptops / phones with exo. Track the GitHub issue here:
Quoting Awni Hannun (@awnihannun) · 15 days
@ac_crypto @exolabs_ It's an MoE, so only 21B active params. Actually an interesting candidate for distributed inference. Easier to make it faster by sharding across experts.
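
To make the reply concrete: in an MoE model only a few experts run per token, so the experts themselves can live on different devices and each token is sent to whichever device owns its selected expert. The toy sketch below only illustrates that routing idea; it is not exo's implementation, and all sizes and device assignments are made up.

```python
# Toy MoE routing sketch: experts are split across two imaginary devices and
# each token is dispatched to its top-k experts. Not exo's implementation.
import numpy as np

num_experts, d_model, top_k = 8, 16, 2
rng = np.random.default_rng(0)

# Pretend each device hosts half of the experts.
expert_to_device = {e: ("macbook" if e < 4 else "mac_studio") for e in range(num_experts)}
expert_weights = [rng.standard_normal((d_model, d_model)) for _ in range(num_experts)]
router = rng.standard_normal((d_model, num_experts))

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route each token to its top-k experts; each expert would run on its owning device."""
    logits = x @ router                              # (tokens, num_experts)
    top = np.argsort(-logits, axis=-1)[:, :top_k]    # chosen experts per token
    out = np.zeros_like(x)
    for t, experts in enumerate(top):
        for e in experts:
            out[t] += x[t] @ expert_weights[e]       # executes on expert_to_device[e]
    return out / top_k

tokens = rng.standard_normal((4, d_model))
print(moe_forward(tokens).shape)  # (4, 16)
```

Because only the selected experts' weights are touched per token, each device needs to hold only its own shard of the experts rather than the full 100b+ parameters.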

Exo Labs (@exolabs_) · 12 days
Does anyone have 8 maxed-out Mac Studios? @BasedBeffJezos wants to know what we could do with them. We can aggregate the memory/compute on exo, and they would have almost as much compute and 20x the memory of an H100.
Quoting Beff – e/acc (@BasedBeffJezos) · 12 days
@ac_crypto @exolabs_ What could you do with 8 maxed-out Mac Studios? 🤔
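
A quick check of the memory half of that claim, assuming "maxed out" means an M2 Ultra Mac Studio with 192 GB of unified memory and the comparison is with an 80 GB H100; both figures are assumptions rather than numbers given in the tweet.

```python
# Memory side of the claim: 8 maxed-out Mac Studios vs. a single H100.
mac_studio_gb = 192                      # assumed M2 Ultra, 192 GB unified memory
h100_gb = 80                             # assumed 80 GB H100
cluster_gb = 8 * mac_studio_gb           # 1536 GB aggregated by exo
print(f"8 Mac Studios: {cluster_gb} GB "
      f"(~{cluster_gb / h100_gb:.0f}x an {h100_gb} GB H100)")  # ~19x
```

That lands at roughly 19x, in line with the "20x the memory" figure in the tweet; the compute comparison depends on which H100 precision mode you measure against and is not checked here.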

Exo Labs (@exolabs_) · 15 days
From clean MacBooks to chat interface in 60 seconds
Quoting Alex Cheema - e/acc (@ac_crypto) · 15 days
How long does it take to get distributed inference running locally across 2 MacBook GPUs from a fresh install? About 60 seconds, running @exolabs_. Watch till the end: I chat to the cluster using the @__tinygrad__ ChatGPT-style web interface. Code is open source 👇
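
The demo chats with the cluster through a ChatGPT-style web interface; programmatically, the same kind of request could be sent over HTTP. The sketch below is hypothetical: the endpoint URL, port, and model name are assumptions, not details from the tweet, so check the exo repository for the actual API.

```python
# Hypothetical example of chatting with a local exo cluster over a
# ChatGPT-style HTTP API. URL, port, and model name are assumptions.
import json
import urllib.request

url = "http://localhost:8000/v1/chat/completions"   # assumed local endpoint
payload = {
    "model": "llama-3-8b",                           # assumed model identifier
    "messages": [{"role": "user", "content": "Hello from my home cluster!"}],
}
req = urllib.request.Request(
    url,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    reply = json.load(resp)
print(reply["choices"][0]["message"]["content"])
```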

Exo Labs (@exolabs_) · 14 days
Exo was featured in Tom’s Hardware. “Thanks to the work of a team of developers, a new software could allow you to run your own AI cluster at home using your existing smartphones, tablets, and computers.” Link to repo:
Quoting Tom's Hardware (@tomshardware) · 15 days
New software lets you run a private AI cluster at home with networked smartphones, tablets, and computers — Exo software runs Llama and other AI models

Exo Labs (@exolabs_) · 14 days
@ac_crypto The new live network topology view also shows which device is active in the inference in real time.

Exo Labs (@exolabs_) · 15 days
@ac_crypto @__tinygrad__ Got to try this with 100b+ parameter Mixture of Experts (MoE) models next h/t @awnihannun
Quoting Exo Labs (@exolabs_) · 15 days
Mixture of Experts (MoE) models + distributed inference = a match made in heaven. Soon, you'll be able to run 100b+ parameter models like this on normal laptops / phones with exo. Track the GitHub issue here:

Exo Labs (@exolabs_) · 12 days
@ac_crypto Who was it? 🤷

Exo Labs (@exolabs_) · 14 days
@ac_crypto Uses MLX, Apple’s open source ML library h/t @awnihannun
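
Since MLX comes up here, a minimal example of the library itself is shown below (array creation and lazy evaluation from MLX's Python API). It says nothing about how exo uses MLX internally; it just illustrates the framework being referenced.

```python
# Minimal MLX example: arrays live in Apple-silicon unified memory and
# computation is lazy until mx.eval() materializes the result.
import mlx.core as mx

a = mx.array([1.0, 2.0, 3.0])
b = mx.array([4.0, 5.0, 6.0])
c = a @ b          # lazy: builds the computation graph
mx.eval(c)         # evaluates the graph
print(c.item())    # 32.0
```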

Exo Labs (@exolabs_) · 14 days
@twin_primes @ac_crypto Looking forward to having you as a contributor!

Exo Labs (@exolabs_) · 14 days
@ac_crypto Uses the awesome @__tinygrad__ tinychat for the chat web interface