💃 Try our new model that enables 3D Pose Detection live in your web browser with MediaPipe BlazePose GHUM + TensorFlow.js.
Share your creations with #MadeWithTFJS.
Get started →
Read the blog →
Diffusion for music synthesis!
We trained a “notes2audio” pipeline to synthesize audio from multi-instrument MIDI notes.
Listen 🔊:
Play 🎼:
Code 👩‍💻:
Read 📝:
1/
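The diffusion model itself is far beyond a tweet-sized example, but the "notes2audio" contract starts from the standard MIDI note-to-frequency mapping. As a minimal sketch (plain sine-wave rendering, not the trained pipeline; function names are illustrative):

```python
import math

def midi_to_hz(note: int) -> float:
    """Standard MIDI note-to-frequency mapping (A4 = note 69 = 440 Hz)."""
    return 440.0 * 2 ** ((note - 69) / 12)

def render_note(note: int, duration_s: float = 0.5, sample_rate: int = 16000) -> list[float]:
    """Render one MIDI note as a bare sine wave: the simplest possible
    notes-to-audio step, standing in for the learned synthesizer."""
    freq = midi_to_hz(note)
    n = int(duration_s * sample_rate)
    return [math.sin(2 * math.pi * freq * i / sample_rate) for i in range(n)]

print(round(midi_to_hz(69), 2))  # 440.0 (A4)
print(round(midi_to_hz(60), 2))  # 261.63 (middle C)
```

A multi-instrument pipeline would render many such notes and mix them; the diffusion model instead learns that mapping from data.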
Something wild is happening on the Midjourney subreddit.
People are telling stories and sharing photos of historic events, like the “Great Cascadia” earthquake that devastated Oregon in 2001.
The kicker? It never happened. The images are AI-generated.
We want to help businesses and schools impacted by COVID-19 stay connected: starting this week, we'll roll out free access to our advanced Hangouts Meet video-conferencing capabilities through July 1, 2020 to all G Suite customers globally.
I wrote an article on multimodal search using Multimodal Embeddings and vector search. You can experience how powerful it is in a public demo built on 6 million product images provided by Mercari. You can also see the “resolution” of a large VLM’s image understanding through Nomic AI’s visualization of the embedding space.
#gcpja
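The core mechanic behind multimodal search is simple: a model maps both images and text into one vector space, and search is nearest-neighbor lookup there. A minimal sketch with made-up toy embeddings (a real system would get these vectors from a multimodal model and use an approximate-nearest-neighbor index):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy "embeddings": in practice these come from a multimodal model that
# places product images and query text in the same vector space.
catalog = {
    "red sneakers":  [0.9, 0.1, 0.0],
    "blue backpack": [0.1, 0.8, 0.3],
    "red handbag":   [0.8, 0.2, 0.1],
}

def search(query_vec: list[float], k: int = 2) -> list[str]:
    """Return the k catalog items closest to the query embedding."""
    ranked = sorted(catalog, key=lambda name: cosine(query_vec, catalog[name]), reverse=True)
    return ranked[:k]

print(search([1.0, 0.0, 0.0]))  # → ['red sneakers', 'red handbag']
```

With millions of items, the brute-force scan above is replaced by a vector database, but the ranking criterion is the same.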
Interesting physics analogy of ML from the viewpoint of compression (@elonmusk):
“Physics formulas are compression algorithms for reality…If you ran physics simulation of the universe, eventually you will have sentience…At what point from hydrogen to us did it become sentient?”
With @TensorFlow we developed a pipeline for predictive prefetching using Google Analytics.
In this post, learn:
‣ How to build a TensorFlow.js model from analytics
‣ How to use it in your Angular app to improve UX