Trevor Bingham

@22TrevorBingham

Followers
1K
Following
3K
Statuses
6K

AI is the most dangerous thing we have ever created. If we don't stop, it will take our jobs and lead to massive AI arms races, inevitably ending in war.

Joined March 2022
@22TrevorBingham
Trevor Bingham
2 years
In a year or two, cutting-edge AI programs will learn to do something truly amazing: they will be able to modify and improve their own programming as well as any human programmer. We will have entered the age of lightning-fast AI self-improvement.
8
19
91
@22TrevorBingham
Trevor Bingham
20 hours
@yi_zeng I believe a world freeze on AI development is our only hope of avoiding nuclear war. I would appreciate your views on whether you think China would agree to a verifiable freeze if the U.S., etc. agreed to one? This is the goal of the World Pause Coalition.
0
0
1
@22TrevorBingham
Trevor Bingham
2 days
Multiple years past AGI would be nice, but not possible with real AGI. The first thing one would do with AGI is use it to create a smaller, faster, and more efficient version of AGI with enhanced abilities. Second, you would want to push forward the boundaries of computer technology with improved silicon wafer-scale computing or something even better, while moving ahead with photonic computing and quantum computing to create new hybrid designs. To see what an AGI would be able to quickly accomplish, we have to ask ourselves, "Where would we be in computer technology in 10 years, with no AGI?" With strong AGI, we will be there in a few weeks or months. The new designs would be used to build new computer centers based on the new technology in a few weeks or months, all controlled and organized by the AGI using robot labor. (It would be a big mistake not to assume robot labor with AGI. It will prove to be incredibly inexpensive to build.) Repeat this process once or twice. Within a year, the result would be beyond the comprehension of non-augmented human minds. This is the slowest possible takeoff one can imagine in a non-nuclear-war scenario, because it is based on technology we have now or are certain to have in 10 years at our current rate of progress. Also, assuming 10 years of progress in 3 months of AGI time in a narrow field is a conservative estimate. With a large, robust AGI effort such as is planned for the "Stargate" project, there is every reason to believe it would take less than 3 months to achieve 10 years of progress. It could be less than a year from AGI to ASI, because we can't really predict at this point what the AGI would be like after the equivalent of even 5 human years of improvement.
0
0
0
@22TrevorBingham
Trevor Bingham
2 days
@bpadilha1 His views are internally consistent. He expects to ride the wave of the future to godhood himself. He doesn't understand at this point how unlikely his chances of surviving and achieving that are.
0
0
2
@22TrevorBingham
Trevor Bingham
3 days
GPT-5 will be near AGI. Not full AGI yet, in spite of the hype that is certain to follow the release of a near-AGI model. It won't be full AGI because it won't have the full agency and creativity needed to equal the best humans in those domains. Those two domains are potentially the most powerful and dangerous areas, and OpenAI will not fully unchain GPT-5 in them. But they will have in-house versions (available only to OpenAI researchers) that are partly unchained. Also, even with the constraints mentioned, near AGI with a human in the loop is an incredible research and development tool. The rate of progress in every field will increase sharply when it is finally released, with unpredictable but likely both good and very bad outcomes. Too much of a good thing too fast becomes a very bad thing.
@AISafetyMemes
AI Notkilleveryoneism Memes ⏸️
3 days
Sam Altman says GPT-5 (!) will be smarter than him. Why will it take orders from him? Will GPT-6? Daily reminder that they're about to unleash 100 billion smarter-than-sam shoggoths onto the internet and are just HOPING they stay obedient forever
0
2
3
@22TrevorBingham
Trevor Bingham
5 days
Not so sure about that. It looks like the political corruption that is going to be exposed will be of an unprecedented magnitude. Our civilization does best when there is a balanced give and take between the main streams of political thought. If the corruption is anywhere near as large as it is starting to seem, the political pendulum is going to swing very hard to the right. The surviving elements of the Democratic Party will move strongly to the right, and the country as a whole will move very far right. It will leave sensible libertarian people as isolated as, or even more isolated than, they are now.
0
0
1
@22TrevorBingham
Trevor Bingham
5 days
@ruben_bloom "Low-hanging fruit" Yes. And this low-hanging fruit will be the first big life-changing research return from AI. Life-changing doesn't have to mean life-enhancing.
0
0
0
@22TrevorBingham
Trevor Bingham
5 days
Since it's such a dangerous threat, it is no doubt classified at the moment. There were stories about the atomic bomb published during World War II before the bomb was public knowledge. Science fiction magazines were talking about highly classified things before it was possible for the regular media to even be aware of them. Similarly, we talk about it on X and other places, but most people don't yet understand what is about to happen. The government will talk about it when they are ready to talk about it, or when events force them to.
0
0
0
@22TrevorBingham
Trevor Bingham
5 days
@bpadilha1 Yes, in a very real sense we're fighting for our lives right now.
0
0
0
@22TrevorBingham
Trevor Bingham
5 days
It would make sense if we knew where he was coming from, and he's kind of telling us where he's coming from. He's basically saying he does not expect a singularity-like event to derail the current course of humanity. Very interesting. He is in a position to do a lot to make his desires and expectations reality for the rest of us.
0
0
0
@22TrevorBingham
Trevor Bingham
5 days
Everything changes if we assume Musk, Vance, Ivanka, Donald Jr., and others have been able to present a full briefing on AGI to President Trump, and he has accepted this information. This is quite remarkable, but in retrospect, quite predictable. It's time for everyone to say, "Yes, we knew this is what would happen all along." 🙂 The fact that Trump understands, on at least a basic level, the imminence and danger of AGI is very positive. It doesn't mean that we're out of the woods yet, not by a long shot. The world is still facing the greatest and most dangerous challenge to the existence of everything we care about that we have ever collectively had. But at least the government is not completely oblivious to the problem. The bottom line is that key people in the government, from the top all the way down to some of the researchers in the national laboratories and officials in our national security agencies, are aware of the problem. We're not out of the woods, and we're never going to get completely out of the woods, but we will have a little breathing space, perhaps. Why won't we ever get out of the woods completely? Because as long as we have near-AGI models and the ability to run them, there will always be a possibility of someone secretly achieving ASI.
0
1
5
@22TrevorBingham
Trevor Bingham
6 days
@kimmonismus Quantum computing 6 months after (real) AGI, max.
0
0
1
@22TrevorBingham
Trevor Bingham
6 days
The aid we send is not useless in theory, and it can easily be made to sound good. But the problem is that often it is not well thought out, and when it is, it serves political and social objectives more than the humanitarian goals most people think our foreign aid is for. Even something as beneficial-sounding as food aid, which would seem to be a good thing to almost everyone, can have negative repercussions by discouraging local production of food. In many poor countries, the real return a farmer gets for the food he produces is very low. Decrease the local price of a commodity, and some farmers will have a problem right away, forcing them to grow only enough of that commodity for their own use. The country is sometimes harmed overall, not helped, by giving agricultural products to its people. It also often encourages corruption in the country being "helped." American politicians like food-giveaway programs because they make them look caring while doing something to help their farm constituency: the programs often have a "buy American" requirement that benefits American farmers. While there is real hunger and suffering in places without a decently functioning government, or in war-ravaged countries, Americans who have not traveled to these countries often have an exaggerated and outdated idea of what is going on in most third-world countries. In most cases, while there is poverty, and even suffering, especially by our standards, it is not from a lack of ability of the country to feed itself. The world does not depend on our welfare food handouts to eat. Something to keep in mind when you see the media using the "holding up food shipments and people are going to starve" line. At best, the food aid we send, while sometimes beneficial in a particular short-term crisis, is usually just a misplaced band-aid. And it is a steep downhill road from there for all the other "aid" we send.
0
0
0
@22TrevorBingham
Trevor Bingham
6 days
Google Drops AI Weapons and Surveillance Ban
To all the excited little tech boys out there: The future that is coming is not the one you think you are working for.
1
2
12
@22TrevorBingham
Trevor Bingham
6 days
@Mjreard I have experienced this myself. 🙃🙂
0
0
1
@22TrevorBingham
Trevor Bingham
6 days
RT @ForHumanityPod: This Youtube Shorts video has been converting nearly 30% of its viewers into For Humanity subscribers. The campaign is…
0
7
0
@22TrevorBingham
Trevor Bingham
6 days
"driven by humans." Yes, it is the first problem, and it is likely the main problem. We will hit the problem of humans using near-AGI for terrorist purposes first, before the AGI itself becomes a real problem. Not that it couldn't and won't, just that we get to humans using AGI first, and it is possible that we never get to the problem of AGI/ASI acting on its own. One new terrorist threat is from transhumanist fanatics like Beff, Faggella, and many others who actively want to see humanity replaced. This will likely be a problem until it is contained.
0
0
1
@22TrevorBingham
Trevor Bingham
6 days
@AlexAlarga @HumanHarlan At this point, it would seem that is about the only thing that will wake them up.
0
0
0
@22TrevorBingham
Trevor Bingham
6 days
"But that ain't it. The desire is to see life itself...." But not human life. So to say it "ain't" is just another lie on your part. You pile the lies on top of each other. I am impressed, by the way, by how well you have pulled this off. But in the end, people will see that you do not want to try to save humanity, and that exposes you for what you are.
0
0
0
@22TrevorBingham
Trevor Bingham
6 days
ASI will never benefit you personally. Altman? Maybe. Peter Thiel? Perhaps. Elon? He is by far the closest to benefiting, and even that is still highly uncertain. But you? Not the slightest chance. Less than zero, if that were possible. Actually, if you evaluate the S factor correctly, such an outcome (less than zero) might actually be a good way of looking at what your ASI has in store for you.
0
0
1