Sociologist Philip N. Howard, writing in “IEEE Spectrum,” discusses automated political propaganda. His article “How Political Campaigns Weaponize Social Media Bots” is readable and important to an understanding of political propaganda.
Looking at Tom Reed’s Facebook page, I often wonder whether the comments there are posted by people or by machines. It may not be possible to know:
- Fake Facebook accounts aren’t easily identified.
- Some bot accounts mix automation with human participation.
- Posts from fake accounts are echoed by people who may be unaware of the material’s source.
In some cases one might be suspicious. One account “likes” whatever certain other accounts post on Tom Reed’s official page but never anything else. Because that account is devoid of content of its own, one might conclude it is a bot. Other accounts that post comments but are likewise devoid of meaningful content are also suspicious. Yet Howard reports that bots can have all the attributes of an account created by a person.
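The suspicious-account signals above can be sketched in code. This is only an illustration: the account record and its field names are hypothetical, since real platforms do not expose activity data in this form.

```python
def is_suspicious(account):
    """Flag an account whose only activity is liking posts on a single page,
    with no posts or comments of its own -- the pattern described above."""
    likes_one_page_only = len(account["pages_liked_on"]) == 1
    no_own_content = account["posts"] == 0 and account["comments"] == 0
    return likes_one_page_only and no_own_content

# A hypothetical account matching the pattern observed on the official page:
account = {"pages_liked_on": {"TomReedOfficial"}, "posts": 0, "comments": 0}
print(is_suspicious(account))  # True
```

Of course, as Howard notes, a well-programmed bot can avoid leaving such an obvious footprint, so a heuristic like this catches only the crudest accounts.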
> …about half of Twitter conversations originating in Russia involve highly automated accounts. Such accounts push out vast amounts of political content, and many are so well programmed that the targets never realize that they’re chatting with a piece of software.
Here is one way to identify automated propaganda postings:
> We have found that accounts tweeting more than 50 times a day using a political hashtag are almost invariably bots or accounts that mix automated techniques with occasional human curation. Very few humans—even journalists and politicians—can consistently generate dozens of fresh political tweets each day for days on end.
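Howard’s 50-tweets-per-day threshold is simple enough to sketch. The hashtag set, the threshold constant, and the tweet tuples below are assumptions for illustration; only the cutoff itself comes from the article.

```python
from collections import Counter
from datetime import datetime

POLITICAL_TAGS = {"#election2016"}  # hypothetical hashtag watchlist
THRESHOLD = 50  # Howard's reported cutoff: >50 political tweets per day

def flag_likely_bots(tweets):
    """tweets: iterable of (user, timestamp, text) tuples.
    Returns the users that posted more than THRESHOLD tweets containing
    a watched political hashtag on any single day."""
    per_user_day = Counter()
    for user, ts, text in tweets:
        if any(tag in text.lower() for tag in POLITICAL_TAGS):
            per_user_day[(user, ts.date())] += 1
    return {user for (user, day), n in per_user_day.items() if n > THRESHOLD}

# Synthetic data: one account posting 60 political tweets in a day, one human.
tweets = [("botlike", datetime(2016, 11, 1, h, m), "#election2016 go vote")
          for h in range(10) for m in range(6)]
tweets.append(("human", datetime(2016, 11, 1, 9, 0), "#election2016 thoughts"))
print(flag_likely_bots(tweets))  # {'botlike'}
```

As the quote notes, the threshold also catches hybrid accounts that mix automation with occasional human curation, so it flags likely automation rather than proving it.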
Here is another point on the extent of propaganda efforts:
> Facebook, for example, disabled over 1 billion fake accounts, and its safety and security team has doubled to more than 20,000 people handling content in 50 languages. Twitter reports that it blocks half a million suspicious log-ins per day.
One thing I found interesting is that both the Clinton campaign and the Trump campaign used political bots in 2016. The Republican effort was larger and perhaps more sophisticated. One wonders if that made the difference in the election, and whether Russian expertise was critical in the Republican effort.
To defend our democratic institutions, we need to continue to independently evaluate social media practices as they evolve, and then implement policies that protect legitimate discourse. Above all, we need to stay vigilant, because the real threats to democracy still lie ahead.