Working on something interesting
Previously: CTO @ Sale Stock & Software Engineer @ PS4 & PS Now core team at Sony Interactive /
@PlayStation
San Francisco
most insane progression:
not being able to properly interview in English for a Facebook internship (but passed anyway) --> presenting to Zuck during the internship after winning an internal hackathon
Fun fact from my internship: me and
@HendriTan88
won the Hackathon. We were one of the 6 teams that presented to Mark Zuckerberg
I panicked so hard I didn't even dare take a photo with him
In the pic: me, dying after 3 days of no-sleep hackathon
just in the two tweets we had
an Airlangga University student who wrote an OSS NLP library used by the world’s major tech companies
an ITB fresh-grad who helped WhatsApp design their first API's on-prem deployment strategy used by the rest of the world
long live 🇮🇩 uni students
the very one and only
@sepyke
fun fact -- he wrote the Python library because he needed it for internal work and gained use around the world very quickly
To be frank after 4 years I still get the periodic heart pangs of missing working with the folks at SaleStock
I wasn't the best leader I could be, so to have worked with these folks always felt like something I never deserved
@tunggalp
Most interesting one: entered the US PlayStation group as an intern, convinced them to convert me to a full-time employee about a month later. Two months later: inducted into one of PS4's core teams -- the youngest and the only Indonesian to ever do so AFAIK 🇮🇩🇮🇩
I don't tweet, I don't do movie reviews, and I don't rant, but the movie "Marlina: Pembunuh Dalam Empat Babak" is so bad, and it's received such unanimously good reviews on Twitter, that I'm compelled to do otherwise. TL;DR: don't believe the other Twitter reviews, and here's why:
Seniman + Ollama + mixtral8x7b @ q3 quantization running on RTX3090 at 40 tokens/s
Near-GPT-4 (but faster) model running on the same card I use to play games with.. not even a year after GPT-4 launched.. insane
additional data point from I believe late 2019; SS's app & web team was 3 FE product engineers writing on a universal RN codebase. one of them out of
@hacktiv8id
rating I believe was 4.3
absolutely not perfect but not too bad given the team budget. everything was on the margins
2022: Explore RN/Flutter/KMM with
@andhikayuana
& team.
Feb: Ruangguru 1st brownfield RN screen.
Aug: Started Universal Frontend with
@broerjuang
& team using
@tamagui_js
2023: TV app with Universal Frontend - one codebase/UI kit for web, Android, iOS, TV. 2 weeks dev time.
Just built a basic
@senimanjs
integration with ChatGPT API
ChatGPT's own UI has 600KB of gzip'd JS, while this is 3KB of JS, with the rest of the UI streamed on-the-fly thru WebSocket -- as with any Seniman-built app
Just basic UI for now; code blocks, paragraphs, etc.
Spent about 30 minutes this noon writing down where some of them are right now -- out of the ~50 engineers that have joined SaleStock
Honestly few things feel as good as knowing they're doing great. I stalk their profiles periodically to heal the pangs
Tested out the update latency of the counter button in (running in a Singapore-region server, accessed from a Jakarta wifi)
21ms of latency from click to update
The user won't notice it
this is the right take
I've seen people making fun of freshgrads trying to reasonably negotiate & it's frankly pretty heinous
don't bully these kids just wanting to improve their lives and learning to negotiate
just let the supply & demand curve cook
@indrazulfi
But it's fine to negotiate, right? 😀
If you genuinely think you're worth it, why not? Besides, "lowballing" is already the "standard" strategy of Indonesian HR anyway 😵
regarding LPDP; the right thing to do is to let them work for a few years after graduation
after N years they can either go home or pay it back with a fair interest rate
LPDP is losing the added value of state-of-the-art industry experience by forcing them back too early
Some of Sale Stock magics:
Imagine building this in 2016, when Tokopedia was still throwing people at thousands of VMs. Sale Stock was supported by just 1 DevOps engineer
The rest can be seen here:
We didn't write much but maybe an anecdote
SS & FB generally worked together closely since we were among the pages with highest messaging traffic globally
Sometime @ 2017 WhatsApp wanted to start opening their API (for the first time ever), and they picked us as a pilot partner
7ms click->rerender latency from a WiFi connection in Jakarta
@sonnylazuardi
this is less than half a frame time on a 60fps display. less than a frame time on 90fps
VPSes are the hot topic right now, and I just deployed to a personal VPS.
Introducing Budiman: Seniman running on Bun
Stack:
⁃Bun
⁃Seniman
⁃Valtio
⁃TailwindCSS
👉
You can build a multiplayer real-time counter with it
Have had the primitives for this roaming around my head for years now. Happy to finally write the thing down to something real and usable
It's nowhere near it should be yet, much to build -- but have watched friends try the early version out and quickly build actual apps with it
We're excited to early-release Seniman, a fast server-driven JSX UI framework for Node.JS
More below, but TL;DR: run your JSX components on the server, streaming your UI through binary WebSocket, and have a completely interactive app with a 3KB JS bundle!
Just got Seniman working on Cloudflare Workers 🤠Check the latency:
Quite timely: Cloudflare just changed its Workers pricing from wall-time to CPU-time, meaning you won't get billed while the CPU's waiting on network, only while it's actively running code
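To make the wall-time vs CPU-time difference concrete, here's a toy calculation with numbers I'm assuming for illustration (5ms of CPU work, 200ms waiting on an upstream fetch):

```javascript
// Under wall-time billing you pay for CPU work plus the time spent
// waiting on I/O; under CPU-time billing, only for the work itself.
function billedMs(cpuMs, waitMs, model) {
  return model === 'wall' ? cpuMs + waitMs : cpuMs;
}

const oldBill = billedMs(5, 200, 'wall'); // 205ms billed under wall time
const newBill = billedMs(5, 200, 'cpu');  // 5ms billed under CPU time
console.log(oldBill / newBill);           // 41x less billable time in this example
```

For an I/O-bound app holding WebSocket connections open, that gap is exactly the point.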
Just open-sourced a simple ChatGPT API UI built with
@senimanjs
, all served by 3KB of JS.
Every single UI element you see here's streamed over WebSocket. Perf numbers in the next tweet. Initial code's over at
@ryanflorence
@frederic_ooo
okay, adding the `onDragEnd` handler might be the fix, since there are some drags where the browser doesn't emit the `onDrop` event after we're finished dragging
let me know if folks can still see the same problem
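A sketch of the workaround described above, with names of my own (not the actual app code): since some drags fire `dragend` but never `drop`, the finish logic can be made idempotent so it runs exactly once no matter which event arrives first, or whether both do.

```javascript
// Wrap the "drag finished" logic so that whichever of drop/dragend
// fires first wins, and the other becomes a no-op.
function makeDragFinisher(onFinish) {
  let finished = false;
  return function finish(source) {
    if (finished) return false; // already handled by the other event
    finished = true;
    onFinish(source);
    return true;
  };
}

// In the browser you'd wire this to both onDrop and onDragEnd handlers.
const calls = [];
const finish = makeDragFinisher((src) => calls.push(src));
finish('drop');     // handled here
finish('dragend');  // ignored: drop already completed the reorder
console.log(calls); // ['drop']
```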
SenimanGPT now has syntax highlighting!
Already using this daily for one-off questions now -- much smoother than the native UI, starts up instantly too.
History management is next
for the same GPT4 conversation
@senimanjs
downloads 3KB of JS & 5KB of websocket data for UI setup. then ~10KB per message with 3 code blocks
OpenAI's UI downloads 1.6MB of JS, then downloads 450KB (!) per message of the same size
45x larger per message, 200x larger initial setup
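The multipliers check out against the numbers in the tweet:

```javascript
// Setup cost: JS bundle + initial websocket payload vs OpenAI's JS bundle.
const senimanSetupKB = 3 + 5;
const openaiSetupKB = 1600; // 1.6MB
// Per-message cost for a message containing 3 code blocks.
const senimanPerMsgKB = 10;
const openaiPerMsgKB = 450;

console.log(openaiSetupKB / senimanSetupKB);   // 200
console.log(openaiPerMsgKB / senimanPerMsgKB); // 45
```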
If you take the job at the startup instead of the safer one at a big company, all your coworkers will also be people who took the job at the startup. In ten years they'll be running everything, even if the startup tanks.
As you type: 1.1kb, then 0.8kb, then 0.4kb
All binary commands very lightly applied to the DOM without heavy decoding
1.1kb is about 300 bytes worth of binary templates to install at the first render of the result set
0.8kb is the second result set without the templates
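The template mechanism can be sketched like this (an illustrative model of my own, not Seniman's real protocol): the first packet for a result set carries the row template once; later packets reference it by id and send only the per-row data.

```javascript
// Client-side template cache: templates are installed once and reused.
const templates = new Map();

function applyPacket(packet) {
  // First packet for a result set ships the template; later ones omit it.
  if (packet.template) templates.set(packet.templateId, packet.template);
  const tpl = templates.get(packet.templateId);
  return packet.rows.map((row) => tpl.replace('{text}', row));
}

const first = applyPacket({
  templateId: 1,
  template: '<li class="result">{text}</li>', // sent once, then cached
  rows: ['alpha', 'beta'],
});
const second = applyPacket({ templateId: 1, rows: ['gamma'] }); // no template resent
```

This is why a second comparable result set can arrive in a smaller packet than the first.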
Just saw this autocomplete demo this morning, decided to implement it in Seniman
pretty fast! check the latency at
will add it to the examples folder soon
can confirm
some magic was worked
case in point: the dude swapped V8 into our RN app and made the whole app faster without touching app code
this was long before hermes when the default JS engine perf was lacking
not a TDD person but this might change once app-wide coding LLMs get better
have it first generate readable tests from conversation, then have a 2nd stage LLM read that to generate final code
couple it w devin style active iteration loop & you could have something very reliable
many years ago decided to just get a Linux box & do remote dev from my Macbook
- less need for top spec laptop / more life for existing laptop
- <500USD of PC hardware gets you more than enough CPU & RAM for dev
- add nvidia GPU and that's your gaming box & an AI training node
One thing that makes developing on Linux (and WSL) extra compelling is that there's no discernible performance penalty to running in Docker. With macOS, I'm still seeing a 20-25% performance hit when running dependencies like MySQL or Redis inside of Docker vs outside.
I felt like I had the closest view of what some of the best of Indonesia could do
I had a leader of another local startup say to my face, word for word, that Indonesians couldn't do what they needed, so they had to hire from outside
Nothing could be further from the truth
impressive performance shouldn't need daily essays telling you it's impressive
it should just be impressive
here's searching through 30,000 patients at 60fps
from a real app customers pay 5 figures a month for
RSCs won't help you do this
with the native audio tokenization in gpt-4o, you might be able to ask it to correct your pronunciations if you're learning a foreign language. if you're speaking too fast, or have lapses in clarity in parts of your sentences, etc.
not having to compile to text opens things up
mixtral on
@GroqInc
API: 560 tokens/s. blink once and the answer's done
~15x faster than running the model locally on top-end consumer GPU
~5 to 28x faster than other mixtral api providers. the cheapest too at $0.27 / million tokens
great value prop
Initial version of Seniman's HTML renderer for crawlers is now running at !
Seniman has a unique rendering method involving the server sending websocket commands to an interpreter running in the browser -- we didn't want to add on a heavy (cont...)
Train a personal AI on your artwork
Exactly AI raised a $4.3M Seed to allow artists to train their own AI models on their artwork.
Artists can then license their AI model to make money.
@regrezan
unfortunately, Izzan, we didn't really write much down, posts-wise
We do have these ancient slide decks tho from early 2016
Also a few others in that account
any impactful free lunch program should be centered around how cost-effective your proteins are
with Rp7500, a vegan meal is probably the only bet
200g tempeh costs 3.5k & has 38g of protein & is 100% of a 13yo child's daily protein needs
2 eggs cost 5k @ only 12g of protein
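Running the cost-effectiveness numbers from the tweet:

```javascript
// Grams of protein per Rp1000 spent, using the prices quoted above.
function proteinPerThousandRp(gramsProtein, priceRp) {
  return gramsProtein / (priceRp / 1000);
}

const tempeh = proteinPerThousandRp(38, 3500); // ~10.9g of protein per Rp1000
const eggs = proteinPerThousandRp(12, 5000);   // 2.4g of protein per Rp1000
console.log((tempeh / eggs).toFixed(1));       // tempeh is ~4.5x more cost-effective
```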
peripheral nerves in the leg conduct at ~40m/s, which means it takes ~42ms for a signal to travel from the brain to the leg muscles
it also means at ~10ms internet latency, a brain in a vat in Singapore can hypothetically control a bionic leg in Jakarta 4x faster than a biological brain in Jakarta
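The arithmetic behind that comparison, assuming a ~1.68m brain-to-leg nerve path (the path length is my assumption to match the tweet's 42ms figure):

```javascript
const nerveSpeedMps = 40;   // peripheral motor nerve conduction, m/s (approx.)
const pathMetres = 1.68;    // assumed brain-to-leg-muscle path length
const nerveLatencyMs = (pathMetres / nerveSpeedMps) * 1000; // 42ms

const internetLatencyMs = 10; // Singapore -> Jakarta round figure from the tweet
console.log(Math.round(nerveLatencyMs / internetLatencyMs)); // ~4x faster
```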
We're doing a tech talk on Sale Stock Engineering's new recommendation engine next week! Quite novel approach [*] -- haven't seen anything quite like it in the open source world.
if you want just one global DC region API-backing your
@senimanjs
UI running on Cloudflare, the answer might be DigitalOcean NYC3
free egress. great NA latency, good EU latency
SEA gets the worst end of the stick @ 250ms. worse than Sydney
made a script to simulate UX; feels fast
throttling to 3G speeds actually hurts SPAs more than server-controlled apps, because SPAs download _much more_ at startup -- the one metric the exact people on 3G would worry about the most (speed-wise and data-plan-wise)
When you focus on edge cases, like the speed of a modal on an artificially-throttled 3G test case, you miss the net sum of all the trade-offs. Here's a great video by Jason walking through all of the HEY Calendar, what it looks like, what it feels like.
Slides of our `In-Memory, Component-Based Recommender Architecture` talk are finally up!
Hopefully this gives some useful ideas for those of you implementing recommenders -- this new approach has been practically transformative for us.
GLOBAL OUTAGES
- Major banks, media and airlines affected by major IT outage
- Significant disruption to some Microsoft services
- 911 services disrupted in several US states
- Services at London Stock Exchange disrupted
- Sky News is off air
- Reports the issue relates to
Just built a Trello clone with
@senimanjs
in <400 LOC
All rendering, incl. task reordering while dragging, is driven by the server in realtime via websocket
All with 3KB of JS + ~8KB of websocket messages for initial UI. All data access is delayed by 10ms to simulate DB calls
Video recorded on a Jakarta WiFi connection, accessing CloudFlare Workers in Singapore
Demo is at (desktop-only)
Code is at
Another sample app (mini shop):
a sample mini shop app with
@senimanjs
and its 3KB of client JS
1.8KB websocket packet comes immediately to build initial UI for the user to see
then a few more KBs arrive to fill up the rest of the homepage feed
all data access has 10ms delay to simulate internal RPCs
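The server-driven rendering in these demos hinges on sending compact commands instead of markup. Here's a minimal sketch of the idea in Node, with a wire format I made up for illustration (not Seniman's actual protocol): a one-byte opcode, a two-byte node id, and the UTF-8 payload.

```javascript
// Illustrative binary command: "set the text content of a DOM node".
const OP_SET_TEXT = 0x01;

function encodeSetText(nodeId, text) {
  const textBytes = Buffer.from(text, 'utf8');
  const buf = Buffer.alloc(3 + textBytes.length);
  buf.writeUInt8(OP_SET_TEXT, 0); // opcode
  buf.writeUInt16BE(nodeId, 1);   // which DOM node to patch
  textBytes.copy(buf, 3);         // new text content
  return buf;
}

function decodeSetText(buf) {
  // The small client interpreter would apply this directly to the DOM.
  return {
    op: buf.readUInt8(0),
    nodeId: buf.readUInt16BE(1),
    text: buf.toString('utf8', 3),
  };
}

const msg = encodeSetText(42, 'In Progress');
console.log(msg.length); // 14 bytes for a whole UI update
```

With updates costing a dozen-odd bytes each, a whole interaction stream can stay in the single-digit-KB range.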
AMD's always made great hardware
Before AMD, Lisa Su actually led the Cell CPU arch team that powered the PS3; a CPU so OP the US DoD made a supercomputer out of it
it was so complex to program that only Sony internal teams like Naughty Dog could fully utilize it
familiar situation
We've fought a good (and exciting!) fight, but luck just isn't on our side. We're looking for new opportunities, a chance to continue making a greater impact.
@hacktiv8id
a single infrastructure engineer maintained the Kubernetes cluster running the entire company, serving who knows how many millions of users
I'm saying this partly with pride & embarrassment
the rizz of the engineer was unmatched
@sonnylazuardi
I think the trick was to have native-side bootstrap code load a “prefetch” server endpoint in parallel to starting up our RN VM
This endpoint executes the identical React tree we’d eventually execute on the RN side -- just on the server side
This is almost like SSR, but
Sale Stock Engineering's presentation slide deck from last night's Laskar talk is up! Had fun presenting. I think we presented quite a few interesting ideas that have proved useful for us -- hope they're useful for some of you too.
@RyanCarniato
very much inspired by Solid's APIs & execution model, all running on the server :)
have used it for internal projects for the past year and would find it very hard to go back to non-stateful models
there is a very funny Key & Peele type skit somewhere here
where two office workers one-up each other by bringing increasingly cooler drink tumblers to the office, until one of them ends up bringing the orange penguin tank
The Tribal Layers: the different pieces of the Tribe ecosystem that build on each other to create a very powerful ecosystem.
(Too lazy to write an entire thread right now, but after weeks of thinking - the Tribal Layers as illustrated are what will make Tribe win)
the more data centers have Starlink ground stations on their roofs, the more likely your Starlink packets could bypass the internet backhaul entirely
pure-LEO backhaul sounds nice
Just tried this demo and updated my repo of Budiman:
@bunjavascript
+
@senimanjs
- Real-time "React-like" Server Components!
- Keep business/rendering logic on the server, send UI over the wire in real-time
Try it here 👉
Repo
this is clever since you have a native draw->compile->redraw loop
i.e
- draw
- model compiles to code
- draw on top of the rendered code
- model re-compiles
it's much closer to the code-and-recompile development loop that's been tried and tested for years
So we're doing something pretty funky at work... created a custom TCP proxy that translates MySQL queries to Postgres protocol so that we could move hundreds of independently running MySQL databases to a single CockroachDB cluster without modifying our existing apps.
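A toy illustration of the dialect-translation half of that idea (the real proxy speaks the actual MySQL and Postgres wire protocols; these are just two common query rewrites, with names I made up):

```javascript
// Two MySQL-isms rewritten to Postgres syntax:
// backtick identifier quoting, and the LIMIT offset,count form.
function mysqlToPostgres(sql) {
  return sql
    .replace(/`([^`]+)`/g, '"$1"')
    .replace(/LIMIT (\d+)\s*,\s*(\d+)/i, 'LIMIT $2 OFFSET $1');
}

console.log(mysqlToPostgres('SELECT `id` FROM `users` LIMIT 10, 5'));
// SELECT "id" FROM "users" LIMIT 5 OFFSET 10
```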
took a good highschool friend around Jakarta on holiday from Germany, a fiber optic engineer
he'd install net backbone at US military bases in Germany, among many other sites -- using WDM, an increasingly antique tech that his older coworkers say fewer than 100 engineers in DE are familiar with
@sonnylazuardi
@senimanjs
took about two dozen lines for handling the streaming SSE response body from
then passing them to a few layers of token processing code on top of Seniman's new Stream API
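Those couple dozen lines likely boil down to something like this hedged sketch (my own parsing, not Seniman's actual code): SSE events arrive as `data: ...` lines separated by blank lines, and OpenAI's stream ends with a `[DONE]` sentinel.

```javascript
// Split a streamed SSE body into parsed JSON events, skipping the sentinel.
function parseSseChunk(chunk) {
  return chunk
    .split('\n\n')
    .map((event) => event.replace(/^data: /, '').trim())
    .filter((data) => data && data !== '[DONE]')
    .map((data) => JSON.parse(data));
}

const events = parseSseChunk(
  'data: {"token":"Hel"}\n\ndata: {"token":"lo"}\n\ndata: [DONE]\n\n'
);
console.log(events.map((e) => e.token).join('')); // "Hello"
```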
@_fikri_auliya
@hacktiv8id
wasn't good enough for us which was why we started pushing for the RN abstraction pretty soon after we launched the Cordova app
@_fikri_auliya
@gadingnstn
it's okay to not graduate CS and say that it's not your life path
absolutely not okay to "graduate" and say it's acceptable to not be able to code
was curious so set up a build list
Ryzen 5600 + 2x8GB DDR4 + ASRock a520m + PNY nvme 512GB + Enlight matx case-PSU-combo
total's IDR 3.8 million or $240 for a box that's more powerful than an Apple M1 (1.5x multicore Passmark)
+ a crap $8 GPU from 2012 for initial setup