Benjamin Strick

@BenDoBrown

25 Tweets 10 reads Aug 06, 2021
A coordinated network of accounts is using major social platforms to deny human rights abuses, distort narratives on significant issues and elevate China's reputation.
This is a thread of what we found & how we found it. Full report @Cen4infoRes: info-res.org
🧵👇
We identified this network after collecting public Twitter data under hashtags in simplified Chinese that are often targeted by these networks:
-#香港 (Hong Kong)
-#美国 (United States), and
-#郭文贵 (Guo Wengui)
I've visualised that entire conversation in a network graph (below).
For those new to these visualisations, what are we looking at? It's the 'conversations' of accounts using those hashtags above.
- The dots are nodes (accounts)
- The lines are edges (retweets, mentions or likes)
By visualising this data in @Gephi, we can identify trends.
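The node/edge model described above can be sketched in code. This is a minimal illustration using networkx in place of Gephi; the tweet records and account names are invented, not real collected data.

```python
# Build a retweet graph: amplifier accounts point at the core poster.
# The 'tweets' records below are hypothetical stand-ins for API output.
import networkx as nx

tweets = [
    {"author": "core_account", "retweeted_by": ["amp1", "amp2", "amp3"]},
    {"author": "core_account", "retweeted_by": ["amp2", "amp4"]},
]

G = nx.DiGraph()
for t in tweets:
    for rt in t["retweeted_by"]:
        # Each retweet is an edge from the amplifier to the core poster.
        G.add_edge(rt, t["author"])

# High in-degree picks out the central 'core poster' of a cluster.
print(sorted(G.in_degree(), key=lambda x: -x[1])[0])  # → ('core_account', 4)
```

In Gephi the same structure is what makes a cluster visually obvious: one node with many incoming edges, ringed by single-purpose retweeters.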
In analysing the conversation around those tags, we reviewed every account, looking at activity, content, usernames, profile pictures and bios, account age, follower/following counts and retweet-to-tweet ratios.
Let's look at what we found 🔍
By following this method, we were able to identify specific accounts and their networks. For example, we identified clusters like this one surrounding account @voQ1d96GDRTjJZD. Note the usernames of the accounts around it.
The tweet collected in that case claimed that the US had 'aggressive DNA'. Below is a screenshot of that tweet and the accounts that amplified it.
All of the accounts retweeting that post, and many others like it, follow zero accounts, and have zero followers. Their purpose is to amplify content.
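The profile signals listed above can be turned into a simple filter. This is a hedged sketch: the account dicts, field names and the 0.9 threshold are illustrative choices, not real API output or a definitive rule.

```python
# Flag probable 'amplifier' accounts: zero social graph, almost-pure
# retweet feed. All records below are made up for illustration.
accounts = [
    {"name": "core_poster", "followers": 12, "following": 40, "tweets": 300, "retweets": 20},
    {"name": "amp_a", "followers": 0, "following": 0, "tweets": 95, "retweets": 95},
    {"name": "amp_b", "followers": 0, "following": 0, "tweets": 80, "retweets": 78},
]

def looks_like_amplifier(a):
    # No followers, following no one, and >90% of posts are retweets —
    # matching the behaviour of the clusters described in the thread.
    mostly_retweets = a["retweets"] / max(a["tweets"], 1) > 0.9
    return a["followers"] == 0 and a["following"] == 0 and mostly_retweets

amplifiers = [a["name"] for a in accounts if looks_like_amplifier(a)]
print(amplifiers)  # → ['amp_a', 'amp_b']
```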
We identified many clusters like this, all operating in the same manner: a central core poster, with other accounts retweeting, liking and commenting. We call them 'amplifiers'.
For example, the image below shows just a small group of clusters that all operated in exactly the same way.
Some of the other core posting accounts had tweets that were not so much retweeted or liked, but commented on. For example in the analysis of a tweet by user @Zoe51610873 we found it had 152 comments - abnormally high in comparison to the 11 retweets and four likes of the post.
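That skew can be expressed as a simple ratio check. The numbers for the @Zoe51610873 example come from the thread; the comparison account and the threshold of 5 are invented for illustration.

```python
# Flag posts whose comment count dwarfs retweets and likes — the
# inverse of normal engagement, and a sign of comment-amplification.
posts = [
    {"user": "Zoe51610873", "comments": 152, "retweets": 11, "likes": 4},
    {"user": "typical_user", "comments": 3, "retweets": 40, "likes": 120},
]

def comment_ratio(p):
    # Comments relative to all other engagement; +1 avoids div-by-zero.
    return p["comments"] / (p["retweets"] + p["likes"] + 1)

flagged = [p["user"] for p in posts if comment_ratio(p) > 5]
print(flagged)  # → ['Zoe51610873']
```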
What were the comments? Mere repetitions of support for the post, such as "that's funny", "I'm embarrassed to hear that" and "just rediclious！ amazing!" [sic].
After identifying these patterns through Gephi, it was quite simple to identify other accounts uploading the same content, which happened en masse. For example, these are tweets all uploaded in one day that used the same text, images and hashtags.
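Copy-paste batches like these can be found by grouping posts on (date, text). A minimal sketch, with invented records; real matching would also hash images and normalise whitespace.

```python
# Group same-day tweets with identical text to surface copy-paste runs.
from collections import defaultdict

tweets = [
    {"date": "2021-06-01", "text": "US gun violence is out of control #美国"},
    {"date": "2021-06-01", "text": "US gun violence is out of control #美国"},
    {"date": "2021-06-01", "text": "An unrelated original post"},
]

groups = defaultdict(list)
for i, t in enumerate(tweets):
    groups[(t["date"], t["text"])].append(i)

# Any (date, text) key shared by several tweets is a copy-paste batch.
batches = {k: v for k, v in groups.items() if len(v) > 1}
print(len(batches))  # → 1
```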
We also identified a number of retweeter accounts that appeared to use profile pictures of fake portrait-style faces. Many of these accounts were created on similar dates, in batches.
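Batch creation shows up as clumps in the creation-date histogram. A sketch with invented dates; the threshold of three accounts per day is an arbitrary illustrative cut-off.

```python
# Bucket account-creation dates to surface batch-created profiles.
from collections import Counter

created = ["2021-04-02", "2021-04-02", "2021-04-02",
           "2021-04-03", "2019-11-20"]

by_day = Counter(created)
# Several accounts created on the same day is a common batch signal.
batch_days = [day for day, n in by_day.items() if n >= 3]
print(batch_days)  # → ['2021-04-02']
```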
These accounts are using images of faces that have been generated. They are StyleGAN images. You can see more of these from the website thispersondoesnotexist.com. In the past, I've always found that a simple way to identify StyleGAN faces is to check whether the eyes sit in the same position across images.
However, for extra verification, we can zoom in on the details of those faces to see some of the giveaways of generated faces. For example in this account we can see the blurring around the ears and above the hairline where a background has blurred with the hair.
Whereas other accounts have differing features that, upon closer inspection, appear to reveal the GAN glitch. We can see those in the mismatched angle of the teeth, the hair that has been blended into the background and the hand in the left of the image that has been blurred.
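The eye-alignment heuristic mentioned above exploits the fact that StyleGAN portraits place the eyes at nearly fixed pixel coordinates. A hedged sketch of the idea: synthetic arrays stand in for real profile pictures, with a bright band simulating pinned eye positions; averaging many images leaves that region with near-zero variance while everything else blurs.

```python
# Simulate the 'fixed eyes' property: random images with a constant
# band at the same coordinates, then measure per-pixel variance.
import numpy as np

rng = np.random.default_rng(0)
H = W = 64
EYES = (slice(24, 28), slice(20, 44))  # hypothetical fixed "eye band"

faces = []
for _ in range(50):
    img = rng.random((H, W))   # random stand-in for face/background
    img[EYES] = 0.9            # eyes pinned to the same spot every time
    faces.append(img)

stack = np.stack(faces)
per_pixel_std = stack.std(axis=0)

# Low variance inside the eye band vs the rest betrays the alignment.
eye_std = per_pixel_std[EYES].mean()
rest_std = per_pixel_std.mean()
print(eye_std < rest_std / 2)  # → True
```

With real images the same comparison works after resizing all suspect avatars to a common resolution; natural photos have no reason to agree on where the eyes fall.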
After identifying the narratives and the hashtags used to exploit them on Twitter, we moved the search over to Facebook and found very similar content there, some of which was the same as was seen on Twitter.
We found a number of cross-posted graphics and texts that were amplified by repeated networks of Facebook Pages, new accounts and repurposed accounts. These were used to post, comment, share and like.
Some of the accounts on Facebook showed signs of a previous life, such as having 1000+ friends and never communicating in Chinese; then, suddenly, in April 2021, the account became interested in pro-China views and communicated only in Chinese.
Our research also uncovered the use of StyleGAN images used as profile images of some of the amplification accounts. On Facebook, the majority of those accounts were commenting on posts.
Some of these accounts might be linked to third party amplification services.
Similarly, on YouTube there appeared to be signs of repurposed accounts. In the screenshots below we can see videos posted years earlier in one language, then a gap of years with no activity, followed by a new wave of pro-government video uploads, this time in Chinese.
During the research, we identified numerous serious issues the network attempted to target, distort & influence with its own narratives.
Some of those subjects were:
-US and gun laws
-US claims about Xinjiang and human rights
-US and Afghanistan
-US and COVID-19
And more.
These narratives mirror those shared by Chinese state representatives and state-linked media, especially on issues such as US gun laws, Xinjiang and human rights, discrimination, and COVID-19.
It should be noted that pro-China networks are not new and have been reported on and removed in the past. Organisations like @Graphika_NYC & @ASPI_ICPC have done superb reporting on similar pro-China networks. aspi.org.au & graphika.com
The full report can be seen here via @Cen4infoRes (info-res.org), an independent, non-profit social enterprise dedicated to identifying, countering and exposing influence operations. A big thanks to those who helped on this piece.
It's also important to add context to these networks, which @FloraCarmichael has done in this great BBC report on the network.
bbc.co.uk
