These newscasters you may have seen online are not real people

We are all far more aware of the misinformation out there on social media, on a variety of platforms. But tonight, a new way to spread falsehoods. A research firm has identified realistic-appearing newscasts featuring AI-generated newscasters who are disparaging the United States. They're called deepfakes. And interestingly, the content of what they're saying is distinctly aligned with Chinese messaging. CNN's Selina Wang has more.

Hello, everyone. This is Wolf News. I'm Alex.

At first glance, these look like news anchors and the top leaders of China. At second glance, you might notice something uncanny about the two heads of state: their voices don't align with their mouth movements. That's because they aren't real people. They're deepfake avatars made with artificial intelligence. It's unclear who's behind this, but last year, pro-China bot accounts sent them out over Twitter and Facebook.

This is the first time we've seen footage of an entirely fictitious fake person used in a politically motivated influence operation. This particular set of videos was promoted by an operation that we call Spamouflage, which we've been tracking since at least 2019 and which routinely amplifies narratives that align with Beijing's strategic interests.

Research firm Graphika issued a report on this broader campaign that says, in part, more videos portrayed the U.S. in a negative light than focused on any other theme, presenting it as a lawbreaking hegemon racked by civil strife and failing in the fight against COVID-19.

This meeting is of great significance.

They pushed China's geopolitical agenda.

Gun violence has killed nearly 40,000 people and exposed America's shortcomings.

The U.S. National Security Commission on Artificial Intelligence says AI is deepening the threat posed by cyber attacks and disinformation campaigns that Russia, China and others are using to infiltrate our society, steal our data and interfere in our democracy.

Hey there. I'm Anna.

Hey there.
I'm Jason. And these fake news anchors were made with technology from British artificial intelligence company Synthesia.

Let me show you how easy it is to create your own deepfake video. So I'm on the Synthesia company website. I'm clicking on Create a Free AI Video. And for the script, how about we have the avatar say: Hi, I'm a correspondent for CNN. They say I'll get the video in my email in just a few minutes.

Hi, I'm a correspondent for CNN. Thanks, Anderson, for having me on your show.

Synthesia's website shows that the technology is mainly used for corporate training and marketing videos. The company said in a statement to CNN that the recent videos that emerged are in breach of its terms of service, and that it has identified and banned the user in question.

Graphika says these news anchor deepfake videos are low quality and did not get a lot of traction on social media. But this technology is spreading rapidly around the world. Just a few years ago, a Chinese tech firm made this deepfake video of then-President Donald Trump speaking Mandarin as a demonstration to promote the company's technology at a Beijing conference. Chinese state media has even created a whole team of AI news anchors, showing them off as a novel new technology that can mass-produce shows with anchors that can work 24/7.

The proliferation of deepfake videos makes it dramatically harder to combat disinformation. Experts say their use by foreign and criminal actors will only grow, bending reality.

And I'm joined now by an actual human, Selina Wang. Selina, thanks for joining us. The videos you showed us at the beginning of your piece, you said they didn't get much traction, but the potential for this technology is kind of mind-boggling.

Yeah, it really is. I mean, look, you could tell from those videos something was off: the mouths weren't aligning with the voices. They were low quality, clumsily done.
They didn't get a lot of views online, according to Graphika. But what is so concerning here is how this opens the door to a whole new territory in information warfare. And this technology is only going to get better. There are so many ways you can see bad actors taking advantage of this. It's already been used to make it look like global leaders said things they did not. Last April, a deepfake of Ukrainian President Volodymyr Zelensky appearing to tell his soldiers to surrender was widely shared online before the uploads were taken down. And you saw there just how easy it was for me to create my own deepfake, literally in a matter of minutes. And when I spoke to Graphika, the researchers said the biggest benefit for these kinds of pro-state spam operations is efficiency: the potential to mass-produce this convincing, deceptive content.

Selina Wang, I appreciate it. Thanks.
