Lie Machines: How to Save Democracy from Troll Armies, Deceitful Robots, Junk News Operations, and Political Operatives

by Philip N. Howard, Yale University Press, 2020, 240 pp. Hardcover, $26.00, ISBN: 9780300250206 

In Lie Machines: How to Save Democracy from Troll Armies, Deceitful Robots, Junk News Operations, and Political Operatives, Oxford University professor Philip Howard takes us on a journey through the history of the production of “lie machines,” supported by the latest scientific research in the field of disinformation studies. In his view, politics is best understood as a socio-technical system where political actors generate lies that people consume, while the algorithms, data sets, and information infrastructure determine their impact. Howard argues that just as political actors are getting very good at producing big lies, social media algorithms provide an effective way of distributing those lies, and the science of marketing lies to the right audience is improving every day.  

Howard defines a lie machine as “a system of people and technologies that distribute false messages in the service of a political agenda” (13). He explains that lie machines have three components: the producers, the distributors, and the marketers. By understanding how they work together, we can devise ways to take them apart or even prevent lie machines from being built. 

The book starts with the story of how Russia began its involvement in the distribution of misinformation, even before the well-resourced Internet Research Agency (IRA) came into existence. Howard received, from an anonymous source, the IRA’s social media misinformation strategy, which provides fascinating and detailed insight into the IRA’s operations. As an experienced researcher in the field, Howard presents findings he shared with Congress in the August 1, 2018 US Senate hearing on the role social media played in the execution of foreign influence operations. Afterward, Howard gives us a fresh perspective on how political lies are distributed on unexpected platforms, such as the dating app Tinder. He points to a group of young activists who created Tinder bots that tried to persuade users to change their political views in the days leading up to the UK’s 2017 general election as an example of how the technical affordances of an app can be exploited by tech-savvy individuals to self-organize and influence politics.

To guide the reader through the complex process of distributing lies, Howard walks us through how two marketing firms, one in Poland and one in Brazil, operate to develop fake political identities and seed misinformation across social media platforms. Another interesting insight comes from Howard’s investigation into the impact of the Brexit marketing campaign. He presents an illustrative example of how the Vote Leave campaign used personalized social media ads to target citizens just days before the referendum to win support for Brexit. Vote Leave ran a series of A/B tests of up to 450 different types of ads to see which were the most effective in persuading undecided citizens. They then measured how well each advertising campaign performed across demographic segments such as gender and region. This allowed them to constantly test and refine campaign messages and identify the most persuasive ones that would influence citizens’ votes.

Finally, Howard describes the future of propaganda and how the next generation of lie machines built with artificial intelligence will play a crucial role in the generation and distribution of political lies. For example, fake users will become even more convincing as they will be trained to be dynamic and interactive like our human friends and family, which will be a new, serious threat to our democracy. He reveals how lie machines are currently built from social media algorithms and junk news content (political news and information that is sensational, extremist, conspiratorial, severely biased, and presented as news); however, as a growing number of consumer products are embedded with small sensors, more and more of our behavioral data will be recorded and, hence, will be exploited to craft personalized political propaganda. 

The book concludes with Howard’s own opinions on how to break lie machines. He explains that part of the problem started with the monopoly that a handful of technology firms gained by owning our personal data. Howard argues that it is important to give individuals back the tools to monitor and direct the flow of their data, which he considers fundamental to safeguarding our democracy in the future. He also discusses why governments need to guide how and when firms can profit from information about individuals and societies, arguing that lie machines are only a symptom of a bigger problem. I agree with his argument about the need to regulate social media platforms in order to control how they profit from public data; however, Howard leaves the reader with questions about his vision of helping citizens control their personal data. It is not clear whether he believes society must upskill to understand the importance of its data and how to manage it, a difficult scenario to imagine in developing countries where basic understanding of Internet technology is still limited. While I agree that the battle against lie machines is a series of small battles, I believe that academia, industry, government, and civil society must fight them together to find solutions that benefit us all.

This book is a good entry point for researchers interested in the intersection of technology, politics, and disinformation campaigns. Lie Machines provides an overview of the strategies used to produce, disseminate, and market political falsehoods to manipulate people on social media. Sociologists and political scientists can enrich their knowledge of disinformation with examples from all over the world. For those with more expertise on the topic, such as disinformation scholars and social media researchers, the book’s assortment of bibliographic resources is especially beneficial and can be used to further explore the cutting-edge research on disinformation and computational propaganda that inspired Howard to write this book.

Claudia Flores-Saviaga, West Virginia University