- cross-posted to:
- [email protected]
Robocalls impersonating President Biden already confused primary voters in New Hampshire – but measures to curb the technology could be too little, too late
The AI election is here.
Already this year, a robocall generated using artificial intelligence targeted New Hampshire voters in the January primary, purporting to be President Joe Biden and telling them to stay home in what officials said could be the first attempt at using AI to interfere with a US election. The “deepfake” calls were linked to two Texas companies, Life Corporation and Lingo Telecom.
It’s not clear if the deepfake calls actually prevented voters from turning out, but that doesn’t really matter, said Lisa Gilbert, executive vice-president of Public Citizen, a group that’s been pushing for federal and state regulation of AI’s use in politics.
This is just another attempt to divert from the actual issue.
No, it’s not AI. It’s disinformation that is the problem. People who believed and parroted badly spelled bullshit lies from some troll farm will keep doing so. They won’t suddenly fall for AI-generated propaganda because AI evolved to be that indistinguishable. They already fell for everything long before.
Sure… let’s discuss the risks of AI and how we need to regulate it instead of finally acknowledging that the problem is a lack of education and media competence in general.
I pretty much agree.
Their argument that AI will impinge on the election relies on the assumptions that a) people are currently getting good-quality information and b) they have the skills to process it.
I think we’ve seen, more obviously than ever, that’s just not the case. AI might be a bit of icing on the pudding, but not much more than that.
Disinformation is the core issue, but isn’t that like saying “the fire is the problem, not the gasoline being poured on it”?
Nope.
Or to use your analogy: it’s like discussing bans on kerosene to fix the fire instead of acknowledging that they are pouring gasoline on it.
There are good reasons to discuss limiting AI. But this discussion is somewhere between stupidity and diversion. It will not change the fact that a lot of today’s (especially social) media runs on narratives, bullshit, and disinformation. That’s not new, and AI will barely be able to make it even worse.
This non-topic is, however, very useful for ignoring the underlying issue (lack of media competence and education) and diverting from the actual risks of AI (surveillance).
I get your point. I would disagree that AI will “barely” make it worse, since it’s basically a tool to churn out disinformation at a higher order of magnitude than ever before. However, I do agree that targeting AI isn’t the solution; once the tools exist you can’t put them back in the box. We should be focusing on how to get our society to value truth again.
You said what I was thinking as I read the title!