If you’ve ever reported a Tweet, you may have found the process frustrating, and it’s a typical one online: you report an item, then a list of violations pops up, and you’re asked which one applies. Is it suspicious? Abusive? Both? You tick one of the boxes.
Twitter receives millions of reports, covering everything from misinformation and spam to harassment and hate speech. Each report is a user’s way of telling Twitter, “hey, this isn’t right” or “I don’t feel safe.” Based on user feedback, research, and an understanding that today’s reporting process wasn’t making enough people feel safe or heard, the company decided to do something about it.
That’s why Twitter is testing an overhauled reporting process that will make it easier for people to alert the company to harmful behavior. The new approach, currently being tested with a small group in the US, simplifies the reporting process: instead of putting the burden on the individual to interpret the violation at hand, it simply asks them what happened.
This method is called symptoms-first: Twitter starts by asking the person what’s going on. Here’s the analogy the team uses: in an emergency medical situation, if you break your leg, the doctor doesn’t ask, “Is your leg broken?” They ask, “Where does it hurt?” The idea is to first find out what’s happening rather than asking you to diagnose the issue yourself.
When people are motivated to report something, chances are they’ve just experienced or witnessed something unsettling, which is a difficult time to ask them to figure out exactly which policy might have been violated. In some cases, the reported Tweet didn’t exactly break a rule so much as bend it. By refocusing on the experience of the person reporting the Tweet, Twitter hopes to improve the quality of the reports it receives. The more first-hand information Twitter can gather about how people are experiencing certain content, the more precise it can be in addressing that content or ultimately removing it. Even when the Tweets in question don’t technically violate any rules, this rich pool of information gives Twitter valuable input it can use to improve people’s experience on the platform.
At every stage of Twitter's research and design process, the team intentionally included people from marginalized communities: women, people of color, and members of the LGBTQ+ community, including those who identify as trans or nonbinary. The theory is that if you design for the outliers, you actually solve for the majority. In this case, the new process was designed with these communities in mind because they also happen to include some of the platform's most engaged users.
Twitter will be able to use the feedback it gains from this new process to improve it and help more people. The feedback will help the company better understand what’s happening on the platform, and even what’s happening in the outside world, and to connect those learnings to improvements in its policies.
Once the person reporting a violation describes what happened, Twitter presents them with the Terms of Service violation it thinks might have occurred and asks: is that right? If not, the person can say so, which signals to Twitter that there are still gaps in the reporting system. All the while, Twitter is gathering feedback and compiling learnings from this chain of events that will help it fine-tune the process and connect symptoms to actual policies. Ultimately, this helps Twitter take appropriate action.
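To make that chain of events concrete, here is a minimal sketch of a symptoms-first flow. It is purely illustrative: the type names, functions, and keyword matching below are hypothetical stand-ins, not Twitter’s actual implementation, and they only mirror the steps described in this article.

```typescript
// Hypothetical sketch of a symptoms-first reporting flow.
// None of these names reflect Twitter's real code; they only
// mirror the steps described in the article.

type Policy = "harassment" | "hate_speech" | "spam" | "misinformation";

interface Report {
  description: string;         // the reporter's own words: "what happened?"
  suggestedPolicy?: Policy;    // the violation the system thinks occurred
  reporterConfirmed?: boolean; // the answer to "is that right?"
}

// Step 1: start from the reporter's experience, not a checklist.
function createReport(description: string): Report {
  return { description };
}

// Step 2: map the description to the policy that might apply.
// (A toy keyword match stands in for whatever classification
// Twitter actually uses.)
function suggestPolicy(report: Report): Report {
  const text = report.description.toLowerCase();
  const suggestedPolicy: Policy =
    text.includes("threat") || text.includes("attack") ? "harassment" :
    text.includes("slur") ? "hate_speech" :
    text.includes("scam") ? "spam" :
    "misinformation";
  return { ...report, suggestedPolicy };
}

// Step 3: ask the reporter to confirm; a "no" is valuable signal
// that there is still a gap between experience and policy.
function recordConfirmation(report: Report, confirmed: boolean): Report {
  if (!confirmed) {
    console.log("Mismatch logged: feeds back into policy refinement");
  }
  return { ...report, reporterConfirmed: confirmed };
}

// Step 4: the report triggers a review of the content.
function triggerReview(report: Report): void {
  console.log(`Queued for review under policy: ${report.suggestedPolicy}`);
}

// Example walk-through of the chain of events:
let report = createReport("Someone keeps sending me threats in replies");
report = suggestPolicy(report);            // system proposes "harassment"
report = recordConfirmation(report, true); // reporter says "yes, that's right"
triggerReview(report);
```

The key design choice this sketch tries to capture is that the reporter’s free-form description comes first, and the policy label is a suggestion to confirm or reject, not a box the reporter has to tick up front.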
“This report essentially triggers a review of the content. If Twitter determines that the content is violating and our rules dictate that the content be removed, that will happen,” said Fay Johnson, Director of Product Management on the Health team. “We'll do some additional investigation to see if there are other things that we need to take down based on what was reported, whether it be the content itself or an account.”
Come next year, as the new process begins to roll out to a wider audience, Twitter will be working on improving its communication process, ensuring that it's closing the loop with those who are taking the time to report. It’s a worthwhile investment because by asking people to describe what’s happening to them, as opposed to just ticking a box, Twitter gets the added benefit of collecting rich feedback that allows it to identify concerns that perhaps had not been on its radar.
“Obviously we want to have rules that help keep everyone safe while balancing freedom of speech and promoting the public conversation. We also want to make sure that if there are new issues that are emerging — ones that we may not have rules for yet — there is a method for us to learn about them,” said Johnson. “The intention of these reporting flows is to empower the customer, give Twitter actionable information that we can use to improve the product and our experiences, and also improve our trust and safety process overall.”