You may be wondering: why are some of the very people who develop and deploy artificial intelligence sounding the alarm about its existential threat? Consider two reasons:
We’ve released a statement on the risk of extinction from AI.
Signatories include:
- Three Turing Award winners
- Authors of the standard textbooks on AI/DL/RL
- CEOs and Execs from OpenAI, Microsoft, Google, Google DeepMind, Anthropic
- Many more
Former NYPD police investigator, now law professor. Officers may not cover their names or badge numbers. Make a note of any numbers on equipment and nearby squad cars and file public records requests for vehicle and body camera video.
The second is to try to convince everyone that AI is very, very powerful. So powerful that it could threaten humanity! They want you to think we’ve split the atom again, when in fact they’re using human training data to guess words or pixels or sounds.
If AI threatens humanity, it’s by accelerating existing trends of wealth and income inequality, lack of integrity in information, & exploiting natural resources. Think
@GreatDismal
’s jackpot, not Skynet. I agree with the simple statement to this degree.
I just heard something so chilling, and yet plenty familiar to privacy advocates—that the Taliban has seized American biometric and facial recognition equipment, which it could use to identify U.S. collaborators.
The first reason is to focus the public’s attention on a far-fetched scenario that doesn’t require much change to their business models. Addressing the immediate impacts of AI on labor, privacy, or the environment is costly. Protecting against AI somehow “waking up” is not.
Don’t let anyone tell you we need new surveillance tools or powers to prevent the next insurrection. The problem has always been one of priorities, not technology or law. We have to track these folks down all over the country because *they weren’t arrested at the scene*
I think Harvard and MIT get a lot of credit for shit they don't deserve but how about a round of applause for staring down this xenophobic administration. Kudos. 👏👏👏
Woah, woah, WOAH. An official
@FTC
blog post by a staff attorney noting that "The FTC Act prohibits unfair or deceptive practices. That would include the sale or use of – for example – racially biased algorithms."
I get that many of these folks hold a sincere, good faith belief. But ask yourself how plausible it is. And whether it's worth investing time, attention, and resources that could be used to address privacy, bias, environmental impacts, labor impacts, that are actually occurring.
We need to stop investors from gaming the market. I propose a new federal initiative called "GameStop." I will not be taking questions or reading replies.
It's been decades, but it's time to revisit space law's accounting of the public interest. This company and others are putting hundreds of objects into orbit that will obscure the stars and become debris.
This is what robots and AI are for, btw. Using well understood applications to make progress in areas of clear need. Instead we try to curb crime with bs predictive algorithms and use bomb robots to kill suspects.
800 scientists signed a letter saying we should abandon statistical significance. But as they represent fewer than 5% of all scientists, we can safely ignore them.
Today I and 8 colleagues resigned from the Axon Ethics Advisory Board in the wake of the company's decision to respond to the Uvalde shooting by placing schools under surveillance and weaponizing drones. You can read our statement here.
In many jurisdictions, it is also a First Amendment violation for police to interfere with you taking pictures or video of them. Should be clearly established for purposes of a Sec. 1983 lawsuit.
A flagship artificial intelligence system designed to predict gun and knife violence in the UK before it happens had serious flaws that made it unusable, local police have admitted. The coding error led to large drops in accuracy. via
@WIREDUK
“I ain’t no chatbot” is among the more problematic sentences I’ve read in recent memory. It’s got everything! Questionable racially. Possibly violates California’s bot disclosure law and FTC guidelines. Congrats Meta 😀
Handy guide of how to refer to artificial intelligence depending on who your audience is
Press: robot
Law review: machine
Non-technical symposium: artificial intelligence
Technical symposium: machine learning
Really technical symposium: statistics
😎
Trump Orders Purge of ‘Critical Race Theory’ from Federal Agencies via
@BreitbartNews
This is a sickness that cannot be allowed to continue. Please report any sightings so we can quickly extinguish!
A heartbreaking aspect of reading philosophy of technology from the 1970s and 80s is the realization that much academic success today is premised on pretending old, forgotten ideas are exciting and new.
this is
@bencsmoke
, Huck’s contributing editor, here in Parliament Square where students are gathering to demonstrate against the controversial A-level results that many received last Thursday, which saw 40% of students marked down from their predicted grades.
Aerial footage shows the scene in Robbinsville Township, New Jersey, where officials say 24 Amazon workers were hospitalized after a robot accidentally tore a can of bear repellent spray in a warehouse.
"Public forum" is a free speech term that doesn't apply to private entities, and Section 230(c)(2) specifically says platforms can "restrict access to or availability of material." But what do I know, I'm just a law professor at an actual, not pretend, university.
Section 230 requires that platforms like Facebook, Google, YouTube, and Twitter promote a "true diversity of political discourse."
Are they really living up to that standard? WATCH👇
I'm beyond excited to share a new book from the
@TechPolicyLab
. Telling Stories: On Culturally Responsive Artificial Intelligence is a collection of short stories from around the world on the social and cultural impacts of AI.
Apps that purport to track people infected with COVID-19 are a terrible idea imo for several reasons. Here are five:
1. In areas of low adoption, they will give people a false sense of security and could interfere with critical social distancing measures.
I investigated allegations of police misconduct against the NYPD for almost three years. Now I’m a privacy and tort law expert but I’m not barred and this isn’t legal advice (though it’s an accurate statement of law). Stay safe out there everyone.
Astonishing. Daniel Kahneman throws out his whole talk to spontaneously clarify that system 1 and system 2 don't correlate to machine learning and symbolic systems in artificial intelligence.
#AIdebate2
A student artist in my class is representing torts as decorative masks. This is intentional infliction of emotional distress. How amazing 💜
@UWSchoolofLaw
I have not been as professionally reliable as I aspire to be this year. I’ve backed out of obligations, missed deadlines, failed to read & comment on drafts. If you’ve been on the wrong end of this, I’m truly sorry. As if you need the headache either in these difficult times 😞
AND: "But keep in mind that if you don’t hold yourself accountable, the FTC may do it for you. For example, if your algorithm results in credit discrimination against a protected class, you could find yourself facing a complaint alleging violations of the FTC Act and ECOA."
By “A robot wrote this entire article,” we mean we wrote a prompt and then stitched together a response from eight outputs, which were based on a model trained on writing by real people.
The sky over New York tonight!
A feat by our team - it’s one thing to get some drones, but another to coordinate with the FAA and the many other local and national regulators and get all the permits and follow all the protocols and look out for wildlife and the environment and
50% of teleworking parents with children younger than 18 say that, since the beginning of the coronavirus outbreak, it’s been difficult for them to get their work done without interruptions.
Privacy Law Scholars Conference has dropped Palantir as a sponsor due to the discomfort of many in the community---including among the program committee that selects papers and awards---with the company's practices.
Kanye West’s campaign missed a filing deadline by 14 seconds and is now waging a legal battle against the state of Wisconsin that hinges on whether or not “the seconds from 5:00:00 to 5:00:59 are inclusive to 5 p.m.”
I've worked closely with the
@UW_iSchool
community for some years. This Fall, I will be formally joining the faculty as a quarter cross-appointment. I'll be teaching tech policy and ethics to undergrads and JDs. Thank you for welcoming me onto your world-class faculty 🙏
When I talk to tech reporters nowadays, especially about emerging technology like AI, I divide my remarks into two categories: (1) problems if it doesn’t work and (2) problems if it works. (1) is more common but (2) scarier.
I have a new peer-reviewed paper (with six coauthors in four disciplines) analyzing how medium, phrasing of search query, and location of search affect the likelihood of encountering election misinformation on Google.
“This is a lesson we learned decades ago from economists and game theorists: Once cooperation breaks down, the only play to restore it is tit-for-tat. It’s the only way both sides can learn that neither side wins unless they cooperate.”
The United States is using an algorithm named after a famously tumultuous Roman emperor (developed by a company named after an evil surveillance artifact from Lord of the Rings) to choose COVID-19 vaccination order.
You know, I didn't think this was possible. Congratulations to all the people---the academics, the activists, the artists---who have pushed back so effectively against facial recognition.
My jaw dropped to the floor when I got this news. Facebook is shutting down the facial recognition system that it introduced more than 10 years ago and deleting the faceprints of 1 billion people: