Prepared remarks for the Mass Violence Prevention and Public Safety hearing
Today, I spoke to a panel of legislators from the Texas House of Representatives at Brookhaven College in Farmers Branch. I was on the second panel, which included Dr. Richard Pineda of the University of Texas at El Paso and Dr. Jared Schroeder, a journalism professor at SMU.
We had what I thought was a productive discussion with a diverse array of representatives who were trying to tackle a difficult topic. The first session included members of the law enforcement community as well as a representative from Facebook, and it was a bit tense at times, as noted in the Dallas Morning News story on the hearing.
My prepared testimony, updated to reflect some last-minute changes I made after the first session, is below.
Greetings, committee members and concerned citizens. I welcome the opportunity to offer my thoughts about the important topics of community safety and preventing mass violence.
We are not here to talk about guns, but as we turn instead to digital communications, I would like to draw a parallel to any discussions that may be going on about them.
Having a gun is not a crime, at least until you point it at someone and pull the trigger, at which point your actions very likely become criminal. Many uses of firearms are legal, of course, but many are illegal as well, including their use in committing acts of mass violence. It is easy to make laws against these kinds of improper uses of firearms.
Using social media accounts is a slightly different thing. Owning a social media account is also not a crime, but unlike a gun, using it is almost never a criminal act. The First Amendment protects nearly all speech online, with very narrow exceptions — such as, relevant to this discussion, making what are known in the law as “true threats” or engaging in a conspiracy to plan an act of mass violence, or even using social media platforms to sell firearms illegally. Speech itself is not violence, and we should not conflate the two, even if that speech may spark extremism.
For example: Speech that may raise red flags about someone, such as incendiary talk about people based on their race or religion, is largely protected as free speech, as despicable and fear-inducing as it may be. That kind of speech may very well be reported to authorities when it raises suspicions that a dangerous act may be forthcoming. That said, citizens are also sometimes not great at determining whether a threat is real. They call the police about a black family having a cookout. Or they express fear because a couple of Arabic men wave hello to each other on an airplane. So, any regulation aimed at flagging potentially dangerous online speech, particularly if it is informed by citizen complaints, should come with this awareness. The reporting system may very well flag harmless activities in a way that results in further harassment of communities that already receive no shortage of scrutiny from law enforcement. And bad actors may take advantage of a citizen reporting system to abuse people online, as we've seen with "swatting" incidents, in which people call in fake emergencies to get police to respond to an address; one such hoax resulted in the death of a man in Wichita, Kansas, in 2017.
Additionally, we should be careful to ensure that any efforts to scrutinize the online activities of potentially dangerous or unstable people do not also infringe on our ability to keep parts of our lives private from government intrusion. We should not look to George Orwell's Nineteen Eighty-Four, Margaret Atwood's The Handmaid's Tale, or Cory Doctorow's Little Brother for inspiration on monitoring the online activities of Americans. Privacy ensures our greatest liberties — freedom of thought, expression, and the exchange of ideas — and studies have shown that people who know the government is surveilling their activities are more afraid and cautious about the things they are willing to discuss, particularly views that are unpopular or that oppose the party in power.
It is certainly fair to conclude that people do not have a reasonable expectation of privacy in any public postings they make on a social platform such as Instagram, Facebook, Twitter, or YouTube. But expectations shift based on the kind of online tool or platform being used. Snapchat, for example, was built with user expectations of privacy in mind; auto-deletion of photos after a brief viewing period is part of the design that has made the app successful. Other message-deleting chat apps such as Signal, Confide, and WhatsApp, which also come with built-in end-to-end encryption, exist solely to provide people a place to communicate where they will be safe from outside intrusion or interference. While this may frustrate law enforcement authorities who want to get their hands on potentially dangerous communications or evidence of criminal conspiracies, these apps also give dissidents in authoritarian regimes around the world a place to work together to seek safety or fight oppression. Any effort to breach the encryption or build in government backdoors will cause the tool to fail; once people know they are potentially being surveilled, their purpose for using the app has evaporated, and they will move on to other tools.
Where things get challenging is in private messaging or anonymous discussions on otherwise public platforms. For example, the Reddit community is built on anonymity. Maintaining users' privacy is critical to their participation on the platform, and doxxing them is the greatest breach of the community's expectations. Likewise, we reasonably expect some of our private messages on social networks to remain just that — private. I was not the only person who was horrified when it was revealed that Facebook had shared users' private messages with third parties such as Netflix, Spotify, and the Royal Bank of Canada, and then defended the practice by arguing that users had consented because they clicked "I agree" on the terms of service that nobody reads. Any Twitter user fears the hacking and release of their draft tweets or their DMs. We expect privacy in these communications, and rightfully so. Government intrusion into and monitoring of these will unnecessarily chill legal speech and should be approached very delicately.
The good news is, we already have a system in place for this, and it's called "get a warrant." It's a lesson I teach every journalism student for when they are approached by authorities seeking access to their photographs, videos, notes, or other materials about a news event they are covering. If authorities have probable cause that a crime has been committed, then absolutely, they can get access to people's private messages. But anything below the probable cause threshold remains private — including location data and similar records, under the Supreme Court's ruling last year in Carpenter v. U.S. Getting access to that would require cooperation from digital platforms to allow access consistent with the promises they have made to their users in their terms of service agreements.
So, what should we do when we suspect that private chats would reveal potentially dangerous conduct, or when we see anonymous speech posing a likely and imminent threat to public safety?
One possible avenue would be rapid response teams in law enforcement with negotiated agreements and access to similar response teams at online platforms. For example, back in 2015, a few teenagers posted threats of violence against the University of Missouri community during campus protests. This happened on Yik Yak, an anonymous, location-based gossip app that no longer exists. Yik Yak worked with law enforcement to respond quickly to those threats, revealing the location of the users and resulting in their arrest. Digital media companies have no interest in shielding people making actual, imminent threats of violence from the authorities. So, the key would be making sure that when there is a reported threat, and people outside the company believe that threat is credible, there is an avenue for public safety officials to get information and respond quickly. One question I have is, could we require businesses operating in the state to maintain an office or a designated person to serve in this role, perhaps as a condition of receiving some of the tax breaks these businesses receive to open offices in Texas?
Another possibility would be establishing an office of technology, modeled on the former congressional Office of Technology Assessment, that could serve as a resource for Texas policymakers as they consider the rapid change of online communication technologies. Part of the challenge here is understanding that while these platforms share some similar characteristics, each platform is different — its design is different, its function is different, its business is different, its staffing is different, and its user community is different. I've been studying this stuff and the law on it closely for a decade now, I've written books and articles about it, and I can barely keep up with each new app or tool that comes along. It's also worth remembering that not all of the challenges come from social media tools. When a threat arose at my daughter's middle school last year, it was because a kid sent threatening pictures through AirDrop on an iPhone. Keeping up with the tools is easier for me because I have a couple of teenagers in the house, and I spend a lot of time talking to a roomful of 20-year-olds each week. We can't expect legislators and policymakers to keep up with that without expert help.
Finally, while prevention of mass violence should be our primary goal, we should also consider limiting the damage to our communities once an attack has commenced. Sometimes, shooters have live-streamed their actions on platforms such as Facebook Live, as happened in Christchurch, New Zealand, documenting their atrocities and further tormenting our friends and neighbors. Other times, shooters have recorded their actions on their smartphones and uploaded the video to social media platforms such as Twitter and Facebook. While these acts may aid authorities in locating an active shooter, they also cause further trauma by remaining available on social platforms, where they may auto-play in a user's feed before they can be shut off. They can be shared instantly and spread around the world before the company running the platform is able to shut down the account. Keeping in mind situations like these, which emerge during acts of mass violence, may also help us respond to these horrifying events.
Thank you for considering solutions to these important challenges, and I look forward to your questions.
See Elizabeth Stoycheff, Under Surveillance: Examining Facebook's Spiral of Silence Effects in the Wake of NSA Internet Monitoring, Journalism & Mass Communication Quarterly (2016), https://journals.sagepub.com/doi/abs/10.1177/1077699016630255?journalCode=jmqc.
See Orin Kerr & Bruce Schneier, Encryption Workarounds, Georgetown Law Journal (2017), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2938033.