if your worldview is so brittle & fragile that obvious truths will destroy it
it deserves to be destroyed

FTFY: leftism is so brittle & fragile that an incel russian troll might totally destroy it
i'm a little surprised at how many right wing radical assholes hang out on a weed website, really.

I do wonder what the ratio of lurkers without a dog in the hunt/fresh sock trolls really is. But I don’t wonder that often.
I haven't looked for a while, but when I did it was about 500-1500 accounts logged in all day, every day, and this scored at the top of American weed websites.

It might be worthwhile to consider how few that would be.
i just have trouble seeing them have much success here... i suppose they might occasionally influence someone, but i have to think that most of those that are susceptible to his kind of horseshit were probably already leaning that way, and aren't much of a loss anyway. if you have a way to shut them down, i'm onboard, but just about anything else you can do is just going to give them free advertising, unfortunately

I haven't looked for a while, but when I did it was about 500-1500 accounts logged in all day, every day, and this scored at the top of American weed websites.

And with just one troll controlling thousands of trolls across the internet, and the ability to microtarget and radicalize idiots wherever they happen to be in real time, I would imagine it would be enough for these spamming hatemongers to run a cyborg/bot to keep their bullshit-titled threads alive and kicking.
That is why I try to not post in their bullshit troll threads and keep actual information up as much as possible.
A sprawling disinformation network originating in Russia sought to use hundreds of fake social media accounts and dozens of sham news websites to spread Kremlin talking points about the invasion of Ukraine, Meta revealed Tuesday.
The company, which owns Facebook and Instagram, said it identified and disabled the operation before it was able to gain a large audience. Nonetheless, Facebook said it was the largest and most complex Russian propaganda effort that it has found since the invasion began.
The operation involved more than 60 websites created to mimic legitimate news sites including The Guardian newspaper in the United Kingdom and Germany’s Der Spiegel. Instead of the actual news reported by those outlets, however, the fake sites contained links to Russian propaganda and disinformation about Ukraine. More than 1,600 fake Facebook accounts were used to spread the propaganda to audiences in Germany, Italy, France, the U.K. and Ukraine.
The findings highlighted both the promise of social media companies to police their sites and the peril that disinformation continues to pose.
“Video: False Staging in Bucha Revealed!” claimed one of the fake news stories, which blamed Ukraine for the slaughter of hundreds of Ukrainians in a town occupied by the Russians.
The fake social media accounts were then used to spread links to the fake news stories and other pro-Russian posts and videos on Facebook and Instagram, as well as platforms including Telegram and Twitter. The network was active throughout the summer.
“On a few occasions, the operation’s content was amplified by the official Facebook pages of Russian embassies in Europe and Asia,” said David Agranovich, Meta’s director of threat disruption. “I think this is probably the largest and most complex Russian-origin operation that we’ve disrupted since the beginning of the war in Ukraine earlier this year.”
The network’s activities were first noticed by investigative reporters in Germany. When Meta began its investigation it found that many of the fake accounts had already been removed by Facebook’s automated systems. Thousands of people were following the network’s Facebook pages when they were deactivated earlier this year.
Researchers said they couldn’t directly attribute the network to the Russian government. But Agranovich noted the role played by Russian diplomats and said the operation relied on some sophisticated tactics, including the use of multiple languages and carefully constructed imposter websites.
Since the war began in February, the Kremlin has used online disinformation and conspiracy theories in an effort to weaken international support for Ukraine. Groups linked to the Russian government have accused Ukraine of staging attacks, blamed the war on baseless allegations of U.S. bioweapon development and portrayed Ukrainian refugees as criminals and rapists.
“Even though Russia is fully involved in Ukraine in the military conflict, they’re able to do more than one thing at a time,” said Brian Murphy, a former Department of Homeland Security intelligence chief who is now a vice president at the counter-disinformation firm Logically. “They have never stopped their sophisticated disinformation operations.”
Social media platforms and European governments have tried to stifle the Kremlin’s propaganda and disinformation, only to see Russia shift tactics.
A message sent to the Russian Embassy in Washington, D.C., asking for a response to Meta’s recent actions was not immediately returned.
Researchers at Meta Platforms Inc., which is based in Menlo Park, California, also exposed a much smaller network that originated in China and attempted to spread divisive political content in the U.S.
The operation reached only a tiny U.S. audience, with some posts receiving just a single engagement. The posts also made some amateurish moves that showed they weren’t American, including some clumsy English language mistakes and a habit of posting during Chinese working hours.
Despite its ineffectiveness, the network is notable because it’s the first identified by Meta that targeted Americans with political messages ahead of this year’s midterm elections. The Chinese posts didn’t support one party or the other but seemed intent on stirring up polarization.
“While it failed, it’s important because it’s a new direction” for Chinese disinformation operations, said Ben Nimmo, who directs global threat intelligence for Meta.
Twitter is loaded with Russian bots, paid spammers and Russian sympathizers like MAGA republicans or others. I'm surprised they haven't gone full Qanon and Russian propaganda arm, so they must be controlling some of it, but not enough. Twitter should label them as bots at least and let the bot try and defend itself in discussion.
Or at least warn all the people who have been interacting with the militarized trolls. People should know if they are getting attacked.
This is what Elon Musk wanted to unleash, yet people still think of him as brilliant.
WASHINGTON (AP) — Federal officials are warning ahead of the November elections that Russia is working to amplify doubts about the integrity of U.S. elections while China is interested in undermining American politicians it perceives as threats to Beijing’s interests.
An unclassified intelligence advisory, newly obtained by The Associated Press, says China is probably seeking to influence select races to “hinder candidates perceived to be particularly adversarial to Beijing.” In the advisory, sent to state and local officials in mid-September, intelligence officials said they believe Beijing sees a lower risk in meddling in the midterms versus a presidential election.
While officials said they’ve not identified any credible threats to election infrastructure in the U.S., the latest intelligence warning comes amid the peak of a midterm campaign in which a rising number of candidates and voters openly express a lack of confidence in the nation’s democratic processes.
Foreign countries have long sought to sway public opinion in America, perhaps most notably in a covert Russian campaign that used social media to sow discord on hot-button social issues ahead of the 2016 presidential election. The U.S. government has been on high alert since, warning about efforts by Russia, China and Iran to meddle in American politics and shape how voters think.
The U.S. faces foreign influence campaigns while still dealing with growing threats to election workers domestically and the systematic spread of falsehoods and disinformation about voter fraud. Former President Donald Trump and many of his supporters — including candidates running to oversee elections in several states — continue to lie about the 2020 presidential election even as no evidence has emerged of significant voter fraud.
“The current environment is pretty complex, arguably much more complex than it was in 2020,” Jen Easterly, director of DHS’ cybersecurity arm, told reporters Monday.
Russia is amplifying divisive topics already circulating on the Internet — including doubts about the integrity of American elections — but not creating its own content, said a senior FBI official who briefed reporters Monday on the condition of anonymity under terms set by the bureau.
Overall, the official said, China’s efforts are focused more on shaping policy perspectives, including at the state and local level, rather than on electoral outcomes.
Still, China appears to have focused its attention on a “subset of candidates” in the U.S. it sees as opposed to its policy interests, the official explained. In one high-profile case, the Justice Department in March charged Chinese operatives in a plot to undermine the candidacy of a Chinese dissident and student leader of the Tiananmen Square protests in 1989 who was running for a congressional seat in New York.
The briefing Monday came weeks after the private distribution of an advisory from the Department of Homeland Security’s intelligence arm that described China’s approach during this midterm as different from the 2020 election, when the intelligence community assessed that China considered but did not deploy efforts to influence the presidential election.
There were publicly revealed examples during the last presidential election of influence campaigns originating in China. Facebook in September 2020 took down pages that posted what it said was a “small amount of content” on the election; that effort focused primarily on the South China Sea.
The DHS advisory doesn’t list specific races or states where it thinks China-linked actors might operate, but cites the March indictment alleging efforts to undermine the New York congressional candidate. It also suggests China’s interest in politics extends beyond the U.S., saying Australian intelligence since 2017 has scrutinized Chinese government attempts to support legislators or candidates — including amplifying Beijing’s stances on select issues.
A DHS spokesperson said the department regularly shares threat information with federal, state and local officials.
Chinese and Russian officials and state media have historically rejected U.S. allegations of election meddling and pointed in turn to American influence efforts in other countries.
State and local governments are limited in what they can do against influence campaigns, given that “their job isn’t to police political conversation,” said Larry Norden, an election security expert with the Brennan Center for Justice.
“I do think there is a lot voters should be doing,” he added. “If they are seeing messages about candidates presented in an alarmist or emotionally charged way, their radar should be going up. They should be checking the accuracy of claims, and if they are seeing false claims, they should be letting the social media companies know.”
Scott Bates, the deputy secretary of state in Connecticut, noted that election officials in the state had responded to warnings about foreign influence dating back to 2016.
“Our best defense is to have an educated populace,” he said.
He drew a distinction between misinformation about election processes and misinformation about a candidate or campaign.
“The election process, we can protect that,” he said. “If you’re talking about talking trash about a candidate, we’re not in the business of patrolling that.”
Some signs of influence operations from Russia and China are already public.
Meta, which owns Facebook and Instagram, said in late September that it disabled a sprawling disinformation network coming from Russia involving sham news websites and hundreds of fake social media accounts. Researchers also exposed a much smaller network originating in China that was intended to spread divisive political content in the U.S., but reached only a tiny audience.
Officials at the FBI and the Department of Homeland Security said Monday they were not aware of any credible threat to election infrastructure. A senior FBI official said that though the FBI was not tracking any specific effort by a foreign government to hack election equipment, officials were nonetheless concerned that an adversary could spread exaggerated or false claims of compromise to undermine confidence in the elections.
Besides concerns about cybersecurity and foreign influence campaigns, the FBI is increasingly focused on physical threats to election workers around the country.
Between June 2020 and June of last year, the FBI received more than 1,000 reports of harassing communication directed at election personnel. Most of the harassment came via email, phone calls and social media, and the majority originated in states where there were ongoing audits of election results.
Of those tips, about 11% met the threshold of a potential federal crime. A specialized Justice Department task force focused on the issue has made four arrests. Officials cited constitutional barriers in bringing more cases because of the First Amendment’s strong protection of an individual’s political speech.
Russia has devised yet another way to spread disinformation about its invasion of Ukraine, using digital tricks that allow its war propaganda videos to evade restrictions imposed by governments and tech companies.
Accounts linked to Russian state-controlled media have used the new method to spread dozens of videos in 18 different languages, all without leaving telltale signs that would give away the source, researchers at Nisos, a U.S.-based intelligence firm that tracks disinformation and other cyber threats, said in a report released Wednesday.
The videos push Kremlin conspiracy theories blaming Ukraine for civilian casualties as well as claims that residents of areas forcibly annexed by Russia have welcomed their occupiers.
English-language versions of the Russian propaganda videos are now circulating on Twitter and lesser-known platforms popular with American conservatives, including Gab and Truth Social, created by former President Donald Trump, giving Russia a direct conduit to millions of people.
In an indication of the Kremlin’s ambitions and the sprawling reach of its disinformation operations, versions of the videos were also created in Spanish, Italian, German and more than a dozen other languages.
“The genius of this approach is that the videos can be downloaded directly from Telegram and it erases the trail that researchers try to follow,” Nisos’ senior intelligence analyst Patricia Bailey told The Associated Press. “They are creative and adaptable. And they are analyzing their audience.”
The European Union moved to ban RT and Sputnik, two of Russia’s leading state-run media outlets, after Russia’s invasion of Ukraine in late February. Tech companies such as Google’s YouTube and Meta’s Facebook and Instagram also announced they would ban content from the outlets within the 27-nation EU, undermining Russia’s ability to spread its propaganda.
Russian attempts to get around the new rules began almost immediately. New websites were created to host videos that make debunked claims about the war. Russian diplomats took on some of the work.
The latest effort revealed by analysts at Nisos involved uploading propaganda videos to Telegram, a loosely moderated platform that is broadly popular in Eastern Europe and used by many conservatives in the United States. In some cases, watermarks identifying the videos as RT’s were removed in a further attempt to disguise their source.
Once on Telegram, the videos were downloaded and reposted on platforms including Twitter without indications that the video was produced by Russian state media. Hundreds of accounts that later posted or reposted the videos were linked by Nisos researchers to the Russian military, embassies or state media.
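Stripping watermarks and re-uploading breaks the metadata trail, but not the visual one. As a rough illustration of how researchers can still match laundered copies back to an original broadcast, here is a sketch using perceptual hashing of sampled frames. This is a generic technique, not the Nisos methodology; the library, directory names, and threshold are assumptions, and frame extraction (e.g., with ffmpeg) is assumed to have been done beforehand.

```python
# Hypothetical sketch: re-identifying a laundered video by comparing
# perceptual hashes of sampled frames. Directory names and the distance
# threshold are illustrative, not taken from the Nisos report.
from pathlib import Path

import imagehash       # pip install ImageHash
from PIL import Image  # pip install Pillow

def frame_hashes(frame_dir: str) -> list[imagehash.ImageHash]:
    """Perceptual hash (pHash) for every extracted frame in a directory."""
    return [imagehash.phash(Image.open(p))
            for p in sorted(Path(frame_dir).glob("*.jpg"))]

def likely_same_video(dir_a: str, dir_b: str, max_distance: int = 8) -> bool:
    """Heuristic: count how many frames in A have a near-duplicate in B.

    Unlike a file checksum, pHash changes only slightly under
    re-encoding, resizing and watermark removal, which is why stripping
    metadata alone doesn't fully erase the trail.
    """
    hashes_a, hashes_b = frame_hashes(dir_a), frame_hashes(dir_b)
    matches = sum(1 for ha in hashes_a
                  if any(ha - hb <= max_distance for hb in hashes_b))
    return bool(hashes_a) and matches / len(hashes_a) > 0.5

if __name__ == "__main__":
    print(likely_same_video("frames/rt_original", "frames/telegram_copy"))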
Some of the accounts appeared to use fake profile photos or posted content in strange ways that suggested they were inauthentic.
One example: a Twitter account supposedly run by a woman living in Japan that had a singular interest in Russian propaganda. Instead of posting about a variety of topics such as entertainment, food, travel or family, the account user only posted Russian propaganda videos — and not just in Japanese, but also in Farsi, Polish, Spanish and Russian.
The account also cited or reposted content from Russian embassies hundreds of times, researchers found, showing again the close relationship between Russian diplomats and the country’s propaganda work.
When it comes to Russia’s overall disinformation capabilities, Bailey said, the network is “just one piece of a puzzle that is quite large.”
Last week, Russia sought to spread a baseless conspiracy theory blaming the U.S. for sabotage to the Nord Stream natural gas pipelines in the Baltic Sea.
The same week, Meta announced the discovery of a sprawling Russian disinformation network that created websites designed to look like major European news outlets. Instead of news, the websites carried propaganda intended to drive a wedge between Ukraine and its western allies.
That operation was the largest of its kind to originate in Russia since the war began, researchers concluded.
“The network exhibited an overarching pattern of targeting Europe with anti-Ukraine narratives and expressions of support for Russian interests,” according to a report from the Atlantic Council’s Digital Forensic Research Lab, which helped identify the network disabled by Meta.
Twitter has disrupted three China-based operations that were covertly trying to influence American politics in the months leading up to the midterm elections by amplifying politically polarizing topics, according to a trove of data released by the social media giant to researchers and The Washington Post.
The operations spanned nearly 2,000 user accounts, some of which purported to be located in the United States, and weighed in on a wide variety of hot-button issues, including election-rigging claims about the 2020 presidential election and criticism of members of the transgender community.
Two of the three networks favored the U.S. right and one skewed left. At least some repeated pro-China narratives aimed at an American audience.
Twitter also took down three networks that were based in Iran but often claimed to be based in the United States or Israel, the data shows. At least one of the accounts involved in the Iranian efforts, 10Votes81, endorsed candidates even in local races. An account named 10Votes and using the same logo as an avatar was also active on YouTube, TikTok and especially Reddit, said Renée DiResta of Stanford’s Election Integrity Partnership, one of the data’s recipients.
Twitter said in its disclosure to researchers that it was not attributing the activity to any specific governments. Twitter did not respond to a request for further comment.
A spokesman for the Chinese Embassy in Washington said the country was not behind the accounts that were suspended.
“Such accusations are completely fictitious and made out of thin air, and the Chinese side firmly objects to them,” spokesman Liu Pengyu said. “China upholds the principle of non-interference in other countries’ internal affairs and we have not the slightest interest to interfere in the U.S. election.”
Twitter’s takedown of the networks, which mostly operated between April and October, came during a stormy period for the social media giant as it prepared to be sold to billionaire Elon Musk and faced ongoing scrutiny over how it polices misinformation ahead of next week’s midterms, when political control of Congress is up for grabs.
Twitter and other tech platforms have struggled particularly to curb the spread of false claims of widespread voter fraud during the 2020 presidential election and to mitigate suggestions of fraud in the upcoming contests.
The disclosure by Twitter adds to what is known about China-based efforts to influence American audiences by mimicking the strategies Russia-based operatives used to stoke cultural and political tensions during the 2016 election. In September, Meta announced it had disrupted a China-based operation seeking to influence U.S. politics. The U.S. government also has issued warnings about Chinese influence efforts, as have a spate of reports from cybersecurity firms including Google’s Mandiant, Recorded Future and Alethea Group.
Graham Brookie, head of the Atlantic Council’s Digital Forensics Research Lab, which also received the data, said the tweets issued by the Chinese networks largely amplified ideas that originated with members of America’s ideological extremes.
“This is equal opportunity hyper-partisanship, a tactic that’s been more embraced by Russia,” said Brookie, who added that the campaign was more assertive than past Chinese efforts. “It’s the same theory of the case: A weakened adversary is one that allows you to shape geopolitics more.”
One network that Twitter removed, the data showed, included 22 user accounts that tweeted more than 250,000 times. Between April and early October, their posts were generally pro-Trump and conspiratorial, particularly about the pandemic and coronavirus vaccines.
Alethea had concluded that Chinese-linked accounts on Twitter and elsewhere were pursuing divisiveness but plugging right-wing issues more than left, sometimes with nods to conspiracy theories. In the newly suspended batch, one account tweeted in May that former president Barack Obama was a “lizard person who is a member of the Illuminati,” according to a copy of the tweet archived by the Internet Archive.
Twitter said that while many of the network’s accounts purported to be located in the United States, the company discovered technical signals that indicated many were based in China. Twitter removed the accounts because they violated the company’s rules against platform manipulation and spam, the company said.
While the network was small, some of its users attracted high levels of engagement. One of those accounts, which went by the name Ultra MAGA BELLA Hot Babe, author of the Obama tweet, attracted 26,000 followers, more than 400,000 likes and more than 180,000 retweets before it was taken down.
In May, Ultra MAGA BELLA Hot Babe tweeted a meme with a photo of someone holding paper near a purported ballot drop box with the caption “MULE TAKING PICS! PROOF OF CRIME REQUIRED TO GET PAID BY THE DNC.” In June, the account tweeted a comment implying that children from the transgender community are simply impressionable and being abused by their parents, according to archived copies.
Ultra MAGA BELLA Hot Babe was also a frequent participant in “Trump trains” — where popular right-wing users urge their audience to follow other pro-Trump Twitter users.
Another account in that network, “Salome Cliff,” took a more liberal perspective, saying Donald Trump persecuted minorities and praising Joe Biden as “poised, calm and collected.” That account, the second-most popular in the China-linked network after Bella, had about 7,000 followers but far less engagement, earning less than 1 percent of Bella’s total.
Stanford’s DiResta said the 10Votes account acted as a moderator on Reddit’s discussion board Political_Revolution, which has more than 100,000 subscribers. A recent 10Votes post quoting Pennsylvania Democratic Senate candidate John Fetterman drew more than 800 upvotes in the past 12 days. (On Twitter, 10Votes said its name derived from Bernie Sanders’s first margin of victory when he was elected mayor of Burlington, Vt.)
Reddit said it had locked the 10Votes account last week and that coordinated influence campaigns are prohibited.
Another China-based account removed by Twitter mixed anti-Russia posts with what appeared from the text to be politically tinged pro-Trump pornography.
An Iran-based network was also successful in getting nearly 25,000 followers and millions of likes on its tweets, which interspersed liberal, anti-Trump messaging with harsh anti-Israel slogans. It, too, appears to have taken advantage of being included on lists of liberal Twitter users urging others to follow back fellow “resisters.” The Iran-based network also included at least one purportedly conservative user, but DiResta said they were primarily “left-leaning personas,” including the one posing as an advocacy group, 10Votes.
She said 10Votes making down-ballot endorsements was new territory for foreign influence efforts.
Two other China-based accounts didn’t get nearly as much engagement on their tweets, the Twitter data shows. In one network, two accounts posed as Florida liberals, posting about gun control and Marco Rubio; none of their tweets got more than 100 likes or retweets.
Twitter also took down a network that included more than 1,900 accounts that often posted overtly pro-China narratives in both English and Chinese. Many of this network’s tweets directly echoed the Chinese government’s rhetoric, particularly in condemning House Speaker Nancy Pelosi (D-Calif.)’s visit to Taiwan this year.
“It’s an extension of the more assertive tone that you heard from President Xi at the party Congress a couple weeks ago,” Brookie said. “That tone trickles down to Chinese and Chinese-adjacent actors on social media.”
China is Russia’s most powerful weapon for information warfare
During the early months of the war in Ukraine, China became a potent outlet for Kremlin disinformation, portraying Ukraine and NATO as the aggressors and sharing false claims about neo-Nazi control of the Ukrainian government, according to researchers. Chinese channels touted the false claim that the United States runs bioweapons labs in Ukraine and suggested that Ukrainian President Volodymyr Zelensky was being manipulated by U.S. billionaire George Soros.
In September, Meta announced it had taken action against a China-based network that included at least 81 Facebook accounts and two accounts on Instagram that were seeking to influence U.S. politics ahead of the 2022 midterms.
The users posed as Americans to post opinions about issues such as abortion, gun control and high-profile politicians such as Biden and Rubio (R-Fla.), who faces voters next week. The network didn’t appear to gain much traction or user engagement and often posted content during working hours in China rather than at times when their target audience in the United States would be awake, according to the company.
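The working-hours tell described above is easy to check programmatically once you have a set of post timestamps: bucket them by local hour in Beijing versus the target audience's timezone. A toy sketch; the timestamps below are fabricated for illustration, and the 09:00-18:00 window is an assumed definition of "working hours."

```python
# Toy illustration of the "posting during Chinese working hours" tell:
# count how many posts fall inside the workday when timestamps are
# converted to Beijing time (UTC+8).
from collections import Counter
from datetime import datetime, timedelta, timezone

BEIJING = timezone(timedelta(hours=8))

def hour_histogram(timestamps_utc: list[datetime], tz: timezone) -> Counter:
    """Count posts per local hour of day in the given timezone."""
    return Counter(ts.astimezone(tz).hour for ts in timestamps_utc)

def share_in_working_hours(timestamps_utc: list[datetime],
                           tz: timezone = BEIJING,
                           start: int = 9, end: int = 18) -> float:
    """Fraction of posts made between 09:00 and 18:00 local time."""
    hist = hour_histogram(timestamps_utc, tz)
    working = sum(n for hour, n in hist.items() if start <= hour < end)
    return working / max(1, sum(hist.values()))

if __name__ == "__main__":
    # 02:00-08:00 UTC is mid-morning to late afternoon in Beijing --
    # and the middle of the night on the U.S. East Coast, which is
    # exactly the suspicious pattern researchers described.
    posts = [datetime(2022, 9, d, h, tzinfo=timezone.utc)
             for d in range(1, 15) for h in (2, 4, 6, 8)]
    print(f"Beijing working-hours share: {share_in_working_hours(posts):.0%}")
```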
Anyone with an internet connection can watch breaking news unfold in real time, or at least some version of it. Across social media, posts can fly up faster than most fact-checkers and moderators can handle, and they’re often an unpredictable mix of true, fake, out of context and even propaganda.
This kind of misinformation spikes before, during and after elections including this week’s midterms. Look out for confusing narratives about everything from how to vote to who actually won specific races.
How do you know what to trust, what not to share and what to flag to tech companies? Here are some basic tools everyone should use when consuming breaking news online.
Know what to look out for
Think about who would benefit from spreading confusing information during a news event, and brush up on specific narratives going around. During the midterm elections, for example, experts say to look out for conflicting information and baseless accusations about poll watchers and poll workers and unfounded concerns about voter fraud. After election night, be on alert for premature declarations of victory and misinformation about vote counting. Learn more about exactly what election misinformation is expected in our Technology 202 newsletter.
Do not hit that share button. Social media is built for things to go viral, for users to quickly retweet before they’re even done reading the words they’re amplifying. No matter how devastating, enlightening or enraging a TikTok, tweet or YouTube video is, you must wait before passing it on to your own network. Assume everything is suspect until you confirm its authenticity.
Look at who is sharing the information. If it’s from friends or family members, don’t trust the posts unless they are personally on the ground or a confirmed expert. If it’s a stranger or organization, remember that a verified check mark or being well-known does not make an account trustworthy. There are plenty of political pundits and big-name internet characters who are posting inaccurate information right now, and it’s on you to approach each post with skepticism.
If the account posting is not the source of the words or images, investigate where it came from by digging back to find the original Facebook, YouTube or Twitter account that first shared it. If you can’t determine the origin of something, that’s a red flag. Be wary of screenshots, which can be even harder to trace back, or anything that elicits an especially strong emotional reaction. Disinformation can prey on that type of response to spread.
When screening individual accounts, look at the date the account was created, which should be listed in the profile. Be wary of anything extremely new (say, started in the past few months) or with very few followers. For a website, you can use Google to see when it first appeared: search for the name of the site, then click the three vertical dots next to the URL in the results to see the date it was first indexed by the search engine. Again, avoid anything too new. And don’t skip the basics: do a Google search for the person or organization’s name.
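For Twitter accounts, the age-and-followers check can be automated. The sketch below uses Twitter's v2 users-lookup endpoint, which does return a created_at field and public metrics; the bearer token, the placeholder username, and the 90-day "extremely new" threshold are all assumptions you would supply and tune yourself.

```python
# Minimal sketch of automating the account-age check described above
# via Twitter's v2 users-lookup API. Requires your own bearer token.
from datetime import datetime, timezone

import requests  # pip install requests

BEARER_TOKEN = "YOUR_BEARER_TOKEN"  # assumption: you have API access

def account_red_flags(username: str) -> tuple[float, int]:
    """Return (age in days, follower count) for a Twitter account."""
    resp = requests.get(
        f"https://api.twitter.com/2/users/by/username/{username}",
        params={"user.fields": "created_at,public_metrics"},
        headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    data = resp.json()["data"]
    created = datetime.fromisoformat(data["created_at"].replace("Z", "+00:00"))
    age_days = (datetime.now(timezone.utc) - created).total_seconds() / 86400
    return age_days, data["public_metrics"]["followers_count"]

if __name__ == "__main__":
    days, followers = account_red_flags("some_breaking_news_account")
    if days < 90 or followers < 100:  # thresholds are a judgment call
        print(f"Caution: {days:.0f} days old, {followers} followers")
```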
Make a collection of trusted sources
Doing mini background checks on every random Twitter account is extremely time-consuming, especially with new content coming from so many places simultaneously. Instead, trust the professionals. Legitimate mainstream news organizations are built to vet these things for you, and often do report on the same videos or photos taken by real people after they’ve confirmed their origin.
Use a dedicated news tool such as Apple News, Google News or Yahoo News, which choose established sources and have some built-in moderation. On social media, make or find lists of vetted experts and outlets to follow specifically for news about the topic you’re following. If you consume breaking news on Twitter, be especially careful to follow confirmed reporters from trusted outlets who are on the ground. New changes coming to Twitter’s verification system could make this more difficult.
Many news events will include information from the ground, like smartphone videos and first-person narratives. Even if you see only real posts, it can still be confusing or misleading. Try to augment any one-off clips or stories with broader context about what is happening. They may be the most compelling pieces of a puzzle, but they are not the whole picture. Mix in information from established experts on the topic, whether it’s foreign policy, cyberwarfare, history or politics. You can also turn to online or television outlets that add this context for most stories.
If you’re interested in doing deeper dives into unverified reports, start with this extensive guide on how to screen videos. Look for multiple edits and odd cuts, listen closely to the audio and run it through a third-party tool such as InVid, which helps check the authenticity of videos. This can be harder on live-streamed videos, like what’s on Twitch or any other live social media option.
To check images, put them into Google’s image search by grabbing a screenshot and dragging it to the search field. If it’s an old image that’s circulated before, you may see telling results.
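Image metadata can sometimes do a similar job locally: a photo that still carries an EXIF timestamp from years before the event it supposedly shows is an obvious red flag. A small Pillow sketch follows; the file path is a placeholder, and note that screenshots and most social media re-uploads have EXIF stripped, so a missing date proves nothing.

```python
# Check a saved image's EXIF DateTime tag for signs it predates the
# event it claims to show. Absence of the tag is inconclusive.
from PIL import ExifTags, Image  # pip install Pillow

def exif_capture_date(path: str) -> str | None:
    """Return the EXIF DateTime tag if the file still carries one."""
    exif = Image.open(path).getexif()
    for tag_id, value in exif.items():
        if ExifTags.TAGS.get(tag_id) == "DateTime":
            return value  # e.g. "2019:06:30 14:22:05" -- an old photo?
    return None

if __name__ == "__main__":
    print(exif_capture_date("suspicious_photo.jpg"))  # placeholder path
```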
Use fact-checking sites and tools
Social media sites do have some of their own fact-checking tools and warning labels, and many have added special sections to promote official election results. However, given the sheer volume of posts they’re dealing with, a problematic video or post can still be seen by millions before ever getting flagged.
Keep an eye out for content warnings on social media sites for individual posts, which can appear as labels below links or as warnings before you post something that could be misleading. Look up individual stories or images on fact-checking sites such as The Washington Post’s Fact Checker, Snopes and PolitiFact.
Russian hackers are behind a massive cyber attack at Australia's largest health insurer, Medibank, the Australian Federal Police (AFP) said Friday.
Data stolen from Medibank began appearing on the dark web on Wednesday after the company refused to pay a ransom to the hackers.
About 9.7 million current and former Medibank customers are believed to have been affected by the October hack.
AFP commissioner Reece Kershaw told a media conference the attack was carried out by a Russian group that was "likely responsible" for other significant breaches across the world.
The group may have affiliates in other countries, Kershaw said.
"So to the criminals, we know who you are, and moreover the AFP has some significant runs on the scoreboard when it comes to bringing overseas offenders back to Australia to face the justice system."
Kershaw said he knew Australians were "angry, distressed and seeking answers" about the hack.
"This cyber attack is an unacceptable attack on Australia and it deserves a response that matches the malicious and far-reaching consequences that this crime is causing."
Medibank chief executive David Koczkar said the release of stolen data was "disgraceful" and was causing harm.
"These are real people behind this data and the misuse of their data is deplorable and may discourage them from seeking medical care," he said in a statement.
"It's obvious the criminal is enjoying the notoriety. Our single focus is the health and wellbeing and care of our customers."
Australian Prime Minister Anthony Albanese told media he was "disgusted" by the hack.
"We know where they're coming from, we know who is responsible and we say they should be held to account."
Data taken during the hack includes names, addresses, birth dates, passport numbers, phone numbers and health claims.
they have the right fucking attitude...take this shit home to the hackers, go after them, AGGRESSIVELY, make it so the rest of them have to live in fear, or quit the shit. keep an eye on the ones you know about, and the second they step outside of hostile territory, put a fucking bag over their heads and put them on trial for international terrorism, convict them (because they're most definitely guilty) and sentence them to the harshest terms possible...make it so they'll need trifocals to see a monitor when they get out, they'll be so fucking old