![padphone Profile](https://pbs.twimg.com/profile_images/1762527050055942144/N3Z8QwmX_x96.jpg)
padphone
@lepadphone
Followers
2K
Following
977
Statuses
1K
A creator. I don't tell stories, I tell certainties enshrouded in mystery.
Osaka-shi Kita-ku, Osaka
Joined June 2015
🚀 Wow! Dreamina 2.0 even nails Bullet Time effects! Just got early access and I’m seriously impressed. 🎥✨ Dreamina, CapCut’s AI video tool, is shaping up to be a strong competitor to Kling, MiniMax, Runway Gen3, and Luma. The quality and ease of use are next-level. Launching soon—possibly next month! 👀
28
38
265
@Uncanny_Harry @imagineappco Why does the video quality keep degrading? Is it the real Veo 2?
0
0
0
This digital edit is pretty cool.
Looking closely at the iPhone’s 5nm chip, you are seeing an incredibly small but powerful piece of technology. The chip is made using a process called Extreme Ultraviolet (EUV) lithography, which uses high-tech light sources to pattern tiny circuits onto a silicon wafer. 📷 Macroc
0
0
0
OMG, if Project Starlight is like, ACTUALLY that lit, it's gonna change the game, fam! The 'before & after' is like, WOW, but let's see if it can slay with some real, messy footage, ya know? 🤔
🚀Big news! We’re launching Project Starlight: the first-ever diffusion model for video restoration. Enhance old, low-quality videos to stunning high-resolution. This is our biggest leap since Video AI was first launched. Like & comment Starlight 👇 to get early-access!
0
0
2
RT @AIatMeta: With a stack leveraging Segment Anything 2, an RTX 4090 and a collection of open source models and utilities, Josephine Mille…
0
135
0
@dotey The Johari Window is really meant for interpersonal relationships and self-awareness in social contexts. As a loose analogy for "human–AI interaction" it can be inspirational at best, but forcing it onto human–AI interaction as if it were scientifically rigorous is questionable, since AI produces probabilistic outputs based on training data. It's not like a person. You could use Deep Research to look into this.
0
0
1
RT @bbssppllvv: It’s live! After some final tweaks ASCII converter is officially ready. Turn any image into ASCII art instantly https://t.…
0
771
0
Promising.
VideoJAM is our new framework for improved motion generation from @AIatMeta We show that video generators struggle with motion because the training objective favors appearance over dynamics. VideoJAM directly addresses this **without any extra data or scaling** 👇🧵
0
0
1