![DronesPeiskos Profile](https://pbs.twimg.com/profile_images/1287571986470838273/VgY1tlhC_x96.jpg)
DronesPeiskos
@DPeiskos
Followers
2K
Following
57K
Statuses
7K
Drones - Artificial Intelligence - Machine Learning - Business Analytics - Humanly Smart
Joined July 2020
Please... we have to make sure that the left pic ... will not happen again 🙏🏽🤨
🚨 The United States went from having a fat, progressive, trans person dressed as a woman as Assistant Secretary of Health under Biden, to having Robert F Kennedy Jr, healthy, toned, and anti-globalist, as Secretary of Health under Trump. ⚠️
In these last ~20 years, the language engineers have exploited every letter of the alphabet to wreak havoc!! ... Please, don't go crazy over words ... The bad guys take advantage of that ... "Safety"... drove half the world crazy ... why? 🤔🤨
At the Artificial Intelligence Action Summit in Paris this week, U.S. Vice President J.D. Vance said, “I’m not here to talk about AI safety.... I’m here to talk about AI opportunity.” I’m thrilled to see the U.S. government focus on opportunities in AI. Further, while it is important to use AI responsibly and try to stamp out harmful applications, I feel “AI safety” is not the right terminology for addressing this important problem. Language shapes thought, so using the right words is important. I’d rather talk about “responsible AI” than “AI safety.” Let me explain.

First, there are clearly harmful applications of AI, such as non-consensual deepfake porn (which creates sexually explicit images of real people without their consent), the use of AI in misinformation, potentially unsafe medical diagnoses, addictive applications, and so on. We definitely want to stamp these out! There are many ways to apply AI in harmful or irresponsible ways, and we should discourage and prevent such uses.

However, the concept of “AI safety” tries to make AI — as a technology — safe, rather than making safe applications of it. Consider the similar, obviously flawed notion of “laptop safety.” There are great ways to use a laptop and many irresponsible ways, but I don’t consider laptops to be intrinsically either safe or unsafe. It is the application, or usage, that determines if a laptop is safe. Similarly, AI, a general-purpose technology with numerous applications, is neither safe nor unsafe. How someone chooses to use it determines whether it is harmful or beneficial.

Now, safety isn’t always a function only of how something is used. An unsafe airplane is one that, even in the hands of an attentive and skilled pilot, has a large chance of mishap. So we definitely should strive to build safe airplanes (and make sure they are operated responsibly)! The risk factors are associated with the construction of the aircraft rather than merely its application. Similarly, we want safe automobiles, blenders, dialysis machines, food, buildings, power plants, and much more.

“AI safety” presupposes that AI, the underlying technology, can be unsafe. I find it more useful to think about how applications of AI can be unsafe. Further, the term “responsible AI” emphasizes that it is our responsibility to avoid building applications that are unsafe or harmful and to discourage people from using even beneficial products in harmful ways. If we shift the terminology for AI risks from “AI safety” to “responsible AI,” we can have more thoughtful conversations about what to do and what not to do.

I believe the 2023 Bletchley AI Safety Summit slowed down European AI development — without making anyone safer — by wasting time considering science-fiction AI fears rather than focusing on opportunities. Last month, at Davos, business and policy leaders also had strong concerns about whether Europe can dig itself out of the current regulatory morass and focus on building with AI. I am hopeful that the Paris meeting, unlike the one at Bletchley, will result in acceleration rather than deceleration.

In a world where AI is becoming pervasive, if we can shift the conversation away from “AI safety” toward responsible [use of] AI, we will speed up AI’s benefits and do a better job of addressing actual problems. That will actually make people safer.
@Americanmight5 @shashigalore I was going to comment on this!! But you did it first... So... It's... Demonstrated, beyond any doubt.... That... People... 1) don't want to read or... 2) don't know how to read... So ... Abolish the Education Department. Effectiveness ~0.0%
Taiwan? 🤔
The MORAL STANDARDS have to be re-calibrated!!! People have to rethink their priorities and what they get mad at, in order!! I bet there's a lot of people thinking, ahh you know, there's no connection between printing money in the US and the rest of the world... 🤔🤷🏼💰🧐
“Let’s make one thing clear. Maybe you guys did not vote for this, but we the people who won this Election, yeah we definitely voted for this” AN ABSOLUTE MIC DROP 🔥
2 months preparing this profile for the big numbers. 1 trillion (in English) = 10^12. 1 billón (in Spanish) = 10^12. A person earns ~$1,000. Think about how much human energy someone swallows up by stealing 10^(9-12)! People should be furious about these numbers. gm💰🎁🔥🌍
Holy crap!! One TRILLION American taxpayer dollars were stolen during the pandemic & 70 PERCENT of that money went overseas. This is unreal!!👇
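The arithmetic in the tweet above can be sketched in a few lines. This is a back-of-the-envelope check only; the ~$1,000 income figure and the 10^9 to 10^12 theft range are the tweet's own assumptions, not verified data:

```python
# English "trillion" and Spanish "billón" both denote 10^12
# (short scale vs. long scale naming, same magnitude).
trillion_en = 10**12
billon_es = 10**12
assert trillion_en == billon_es

# If a person earns ~$1,000 (the tweet's assumption), a theft of
# 10^9 to 10^12 dollars equals this many person-incomes:
income = 1_000
low = 10**9 // income
high = 10**12 // income
print(f"{low:,} to {high:,} person-incomes")  # 1,000,000 to 1,000,000,000
```

That is, under the tweet's own figures, the range spans a million to a billion annual incomes.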
@elonmusk I don't think "the privacy of the people" applies here .... 1) grants and funding are not personal data!! They actually have to be public data. 2) ~1000 people involved in these payments is ~1000/320,000,000 ≈ 0.0003% of the population, which cannot be considered "the American people" so .. 🤷🏼🧐
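The percentage claimed in the tweet above checks out. A minimal verification, using the tweet's own figures (~1,000 people, ~320 million population) as assumptions:

```python
# Share of the population represented by ~1,000 people out of ~320 million.
people = 1_000
population = 320_000_000
share_pct = people / population * 100  # exact value: 0.0003125
print(f"{share_pct:.4f}%")  # 0.0003%
```

Rounded to four decimal places, 0.0003125% matches the tweet's ~0.0003%.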