https://www.wired.com/story/dangerous-ai-open-source/
In February, OpenAI, the artificial intelligence (AI) research company co-founded by Elon Musk, created a language generator (GPT-2) so realistic it can produce “authentic” spam, fake news, and clickbait (such as this blog post), much faster (and maybe even better) than a human. Since OpenAI’s goal is to “benefit humanity as a whole,” one might be surprised that they didn’t release this software. But two young folks fresh out of grad school, claiming they “aren’t out to cause havoc,” compiled millions of Reddit links (you know something’s about to go wrong), essentially replicated OpenAI’s project, and posted their software on the internet, culminating in a philosophical clickbait bot (that happens to support Trump). Even better, this is something a high school freshman could create with some time, effort, and copy-and-pasting skills. What could possibly go wrong? (This is just a rhetorical question; actual ones are below.)
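To give a sense of how low the bar is, here is a minimal sketch of the copy-and-paste version, assuming the publicly released small GPT-2 checkpoint loaded through Hugging Face's transformers library (the prompt is made up):

```python
# Minimal sketch: generating clickbait-style text with the public GPT-2 weights.
# Assumes `pip install transformers torch`; the prompt below is illustrative.
from transformers import pipeline

# Load the small, publicly released GPT-2 checkpoint as a text generator.
generator = pipeline("text-generation", model="gpt2")

prompt = "You won't believe what scientists just discovered about"
outputs = generator(prompt, max_length=60, do_sample=True, num_return_sequences=3)

for out in outputs:
    print(out["generated_text"], "\n---")
```

A few seconds of compute later, you have three brand-new pieces of clickbait.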
I’m just a bit worried. Several years ago, the British political consulting firm Cambridge Analytica harvested personal info from millions of Facebook users, used this data to build algorithms that displayed *personalized* fake news to people in U.S. swing states during the 2016 election, and potentially… changed things up. But as this article clearly demonstrates, we’ve learned our lessons.
Questions:
- Am I overreacting?
- (How) can we regulate AI?
- What can a Model Don do about this?
Well, that was certainly a very interesting blog post to read... but onto your non-rhetorical questions! I, personally, don't think you're overreacting, but at the same time, I think those graduates really mean no harm. I feel like they are trying to demonstrate that humans aren't dumb enough to actually fall for the ludicrous content the AI can generate, but unfortunately they're a bit too optimistic. People are dumb. The issue with regulating AI on that basis, though, is that people don't like being called dumb, and claim they can avoid fake news that they fall for anyway. Despite this, I think younger people and students are really starting to get better at recognizing clickbait and fake news because they grew up with the internet, so maybe we aren't too far off from an age where we don't have to worry about these things. For now, a Model Don can stay well informed and learn to corroborate sources before making any questionable decisions, to avoid falling into these traps.
I believe that you are not overreacting. A language model that can replicate human writing patterns has serious implications for the general population. Aside from possibly creating chaos by sending spam messages to random people, advanced AI can be used to imitate real sources of information. As you stated, fake news was already a problem in the prior election, but if those articles are able to mimic credible news sources, it may be harder to distinguish between fake news and real news. On a smaller scale, AI can be used to copy the writing pattern of a specific person as well. If that is used to send offensive messages, it would create a huge problem for the person getting framed. If the AI is good enough, it may even hold up to the scrutiny of linguistic experts. I believe the only way AI could truly be regulated is by not making it in the first place, but because we have progressed past that stage, the best option left is to give law enforcement a way to combat it.
1. No, you aren't overreacting. These tools are becoming more commonplace every day (talktotransformer.com is another online and intuitive implementation of the GPT-2 model; ask me if you want to see some examples), and even if we only use them to generate funny news stories with headlines such as "Trump Reveals Stormy Affair with Ruth Bader Ginsburg," the real possibilities are more sinister.
As a brief example, here's something I cooked up with the GPT-2 software as part of an excerpt of a fictitious Trump interview:
""And they put out a story like that, it was a total hoax, it was made up – as you know, I'm a little bit smart. I go on blogs and I read things and they are made up. Some of them aren't. And I have great respect for the people that work for the New York Times, a lot of them are good people. I have friends that work there, some of them very good people. But they made this story up on the front page of the paper. It was totally made up by the media, frankly because it didn't happen. There were no meetings."
Not bad, is it? Now imagine if hundreds of pages of transcripts of these interviews appear in which Trump/Obama/whoever comes off as a complete and utter loon. Not good, right? The realism and accuracy of all of this is scary: on a first read, people would see the above and think, "What an idiot." All of us have that power to sway people now, and it won't be long before someone uses it to mislead.
2. My gut reaction would be to say that we just ban people from posting these networks online, but if anything that ensures that the seedier elements of society will be the ones primarily using these AI tools. I have the feeling that most of the AI we will be seeing, at least for a while, will be largely human-directed: using the example of fake news, I expect people to use these tools to generate massive amounts of text and have human editors go through and tweak things to be most convincing. The AI does the heavy lifting, especially when these neural networks can spit out full news articles about current events, fictitious mass shootings, drug advertisements, and so on. Public awareness isn't technically a means of regulating AI, but it is probably the most effective way to combat its intrusive effects. Teaching skepticism and fact-checking, and teaching people how these tools work and how to use them ("know thine enemy" and all that), will be most effective in the long run for minimizing the damage someone suitably inclined could do with one of these free tools.
3. What can we do? Stay informed; be cognizant of how these tools work and how to use them effectively (and consequently how our enemies use them); corroboration, sourcing, pretty much everything all of us are learning organically and through the curriculum. Mr. Felder, if you're reading this, playing a game of "spot the computer-generated article versus the real one" would be a powerful way to illuminate the necessity of awareness. These tools aren't perfect as is, at least the public ones we know about, but they're good enough to fool people. Given a specific subject with a strong corpus of material and identifiable linguistic patterns, these tools can produce some uncannily accurate results.
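Incidentally, you can even try to play "spot the bot" with code. One rough heuristic, sketched below under the assumption that you have Hugging Face's transformers installed (the sample strings are placeholders): machine-generated text tends to look unusually *predictable* to the very model that generated it, so you can score a passage by GPT-2's own average per-token loss.

```python
# Rough "spot the bot" heuristic: score text by how predictable GPT-2 finds it.
# Lower average loss = more GPT-2-like. Assumes `pip install transformers torch`.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def avg_token_loss(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing labels=ids makes the model return its mean cross-entropy loss.
        return model(ids, labels=ids).loss.item()

human = "The council voted 5-2 on Tuesday to delay the stadium project."  # placeholder
bot = "I go on blogs and I read things and they are made up. Some of them aren't."
print(avg_token_loss(human), avg_token_loss(bot))
```

It's a blunt instrument (short passages are noisy, and people can paraphrase around it), but it shows that the same models that create this stuff can also help flag it.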
We're entering a brave new world with unimaginable horrors ("deepfake technology," or fake videos and audio, is making scary progress too, and I imagine that within a year someone will find some way to let people mass produce them), so everyone hold tight, as this will be a bumpy ride.
As a bonus, here's some computer-generated information about our impending nuclear war:
"Nukes are a very good thing. Nukes ... can protect us (from) all kinds of different problems, and the Russians, too, are now talking about nukes. We gotta start thinking real hard about a new cold war with Russia -- or just another one, it wouldn't be all that bad."
Ryan, thank you for your insightful response! It is amazing that you had the chance to use the GPT-2 software to create some pretty convincing fake messages! I completely agree with your concerns about deepfakes. I think that, right now, all messages made with AI should carry some sort of watermark that can easily be detected. But as this technology develops, even this will probably not be enough.
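To make the watermark idea concrete, here is a toy, entirely hypothetical scheme (not any real product's watermark): a secret seed marks roughly half of all word pairs "green," the generator prefers green words, and a detector who knows the seed counts how often green pairs occur.

```python
# Toy watermark sketch (hypothetical scheme): the generator biases its word
# choices toward "green" pairs defined by a secret seed; the detector counts
# how often green pairs occur. Watermarked text scores near 1.0; natural
# text lands around 0.5 (noisy for short texts).
import hashlib
import random

VOCAB = "the a of to and in news story report people says very big real fake".split()

def is_green(prev_word: str, word: str, seed: str = "secret") -> bool:
    # A seeded hash marks roughly half of all (previous word, word) pairs green.
    digest = hashlib.sha256(f"{seed}:{prev_word}:{word}".encode()).digest()
    return digest[0] % 2 == 0

def generate_watermarked(length: int = 30, seed: str = "secret") -> str:
    # Stand-in for a language model: at each step, prefer a green next word.
    words = ["the"]
    for _ in range(length):
        greens = [w for w in VOCAB if is_green(words[-1], w, seed)]
        words.append(random.choice(greens or VOCAB))
    return " ".join(words)

def green_fraction(text: str, seed: str = "secret") -> float:
    # Detector: fraction of consecutive word pairs that are green.
    words = text.split()
    pairs = list(zip(words, words[1:]))
    return sum(is_green(a, b, seed) for a, b in pairs) / max(len(pairs), 1)

print(green_fraction(generate_watermarked()))                     # close to 1.0
print(green_fraction("ordinary human writing has no such bias"))  # around 0.5
```

As noted above, though, a determined adversary could retrain or rewrite around it, so even this probably won't be enough on its own.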
No, I do not believe you are overreacting. Regardless of whether the graduate students meant harm, they still released their software onto the internet, allowing anyone to use it in a potentially harmful way. The topic of AI is a controversial one, and figuring out how to regulate it is a difficult question to answer. Nonetheless, AI is a part of the reality we live in today. I believe that people, and Aragon Dons, should be more cautious than ever when browsing the internet.
As for the first point, you aren't overreacting. But this also shouldn't be the first time you are reacting, as AI has been a rising concern for the past five years, and technologies like these have been around that whole time. This advance is only one in a long string of developments that is continuously moving along. But you shouldn't have only a negative reaction to AI, as artificial intelligence can absolutely be a positive change for society. We have a chance to make society more efficient than it has ever been, allowing for far more flexibility in policymaking and world relations. As for regulating AI, I'm afraid that isn't really an option for the US. We cannot afford to lag behind on artificial intelligence, as China is very quickly outpacing us in that regard:
https://www.businessinsider.com/pentagon-admits-china-could-outpace-us-on-ai-without-changes-2019-8
As everyone else has said, you are definitely not overreacting. This type of technology is extremely dangerous and can cause chaos even when its intention is not malicious at all. Unfortunately, the public is very easily manipulated, and technology like this can spread misinformation. I do not think AI can be regulated at all, because as technology grows, people will find loopholes anyway, and there are very few ways of preventing that from happening. I think the best way to go about this is to simply publicize that this technology exists, so that people are more aware of what's going on and are careful with their news sources.
I disagree with those who believe that the only way to regulate AI is to ban it. Rather than fearing artificial intelligence for its daunting potential to wreak havoc on the reliability of our media, either the government or private companies themselves ought to glean insight from the algorithms and output patterns of these fake news generators to filter which articles are real and which aren't. If such potentially dangerous software is so easy to recreate that a high schooler could do it, couldn't qualified software engineers produce a counter-technology? A piece of software, perhaps, that could comb through articles and ensure that fake ones never reach the search results, though that raises the censorship question. The evolution of technology is inevitably ongoing, and one would be naive to try to cap the gushing geyser of technological advancements the minds of the world have to offer. I would say that OpenAI did the ethical thing in not releasing a software they thought was hazardous, but instead drawing attention to the issue. The public can only be informed if information is released to them. A Model Don can do in-depth research on a topic if they suspect they are reading bot-generated news.
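As a rough illustration of what such a counter-technology might start as, here is a minimal sketch assuming scikit-learn; the four labeled "articles" below are placeholders standing in for a real corpus, which a serious filter would need in the millions.

```python
# Minimal sketch of a fake-article filter: TF-IDF features + logistic regression.
# Assumes `pip install scikit-learn`; the tiny dataset is a placeholder.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

articles = [
    "City council approves budget after two hours of public comment.",
    "SHOCKING: doctors HATE this one weird trick discovered by a local mom!",
    "Researchers publish peer-reviewed study on coastal erosion rates.",
    "You won't BELIEVE what this celebrity said about the moon landing!!!",
]
labels = [0, 1, 0, 1]  # 0 = real, 1 = fake/clickbait (placeholder labels)

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(articles, labels)

print(clf.predict(["BREAKING: one weird trick the government HATES"]))  # expect [1]
```

A real-world version would be far more sophisticated (and would still face the censorship question), but the asymmetry cuts both ways: the same machine learning that generates fake news can learn to flag it.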
Like the others have stated, this is not an overreaction, because the capabilities of artificial intelligence are only continuing to grow as time goes on, and people will keep finding new ways to use AI. Although AI may start with harmless things such as generating unbelievably blown-out-of-proportion clickbait, it can quickly evolve to do more harmful acts as it expands its capabilities over time and falls into the hands of those who wish to abuse its powers. Although it may be an option to place laws and regulations to restrict the development of AI, I don't think that would be enough to stop this from causing large changes in the future. While it would surely slow down the growth of AI development, there would still be people who continue to pursue the advancement of this technology. As Model Dons, we should use the skills we have learned to verify the validity of information and recognize when we are being fed misinformation.