The secret world of good bots

What comes to mind when you think of a bot on Twitter? You probably picture a spammy account sliding into your DMs or a Russian troll farm pumping out fake news and conspiracy theories. Thanks in large part to the misinformation campaigns waged on social media during the 2016 US election, many people associate bots with exactly that kind of nefarious activity. They have good reason to: Twitter detects roughly 25M accounts per month suspected of being automated or spam accounts. In the second half of 2020 alone, it deployed 143M anti-spam challenges to accounts, which helped bring spam reports (those coming from people who flag Tweets as spam) down by about 18% from the first half of the year. Twitter has an entire enforcement team dedicated to tracking down these accounts and banning them.

But it’s not as simple as blanket-banning all automated accounts. Bots come in all shapes and sizes, and chances are you’re already following one you like: a COVID-19 bot that alerts you to vaccine availability in your area, an earthquake bot that warns you of tremors in your region, or an art bot that delivers a colorful dose of delight to your timeline.

How these bots are represented on Twitter is almost as important as what they do for their followers. That’s where Oliver Stewart comes in. As the lead researcher on Twitter’s Identity and Profiles team, he wanted to understand how interpersonal trust develops on Twitter and how automated accounts affect it. “There are many bots on Twitter that do good things and that are helpful to people,” said Stewart. “We wanted to understand more about what those look like so we could help people identify them and feel more comfortable in their understanding of the space they’re in.”

Stewart’s team found that people consider content more trustworthy when they know more about who’s sharing it, starting with whether that account is human or automated. To help address the issue, Twitter recently rolled out new labels that identify bots with an “Automated” designation in their profile, a robot icon, and a link to the Twitter handle of the person who created the bot. “Not only are we labeling these bots, we’re also saying: this is the owner, and this is why they’re here,” said Stewart. “Based on the preliminary research that we have, we hypothesize that that’s going to create an environment where you can trust those bots a lot more.”

So why go to the trouble of labeling bots instead of banning them all from Twitter? The labels themselves don’t declare a bot good or bad; they simply signal that the account is automated. “If it’s compliant with Twitter’s rules, we’re OK with it being on the platform. For the ones that are noncompliant, we’re already actively doing the work to remove those off Twitter,” she said.

A software engineer by day in Boulder, Colorado, Taraschuk began creating art bots to share his love of fine art with his followers. “We have this expectation that humans are more authentic, that interacting with a human is better. But the other side of that is that when it comes to art, humans introduce their personal biases,” said Taraschuk. “Bots are actually better in so many use cases than humans. They never forget, they never tire of sharing. They remember exactly what they shared and what they didn’t.”
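Bots like these are usually small programs that watch a data source and tweet when something changes. As a concrete illustration, here is a minimal sketch of an earthquake-alert bot of the kind mentioned above. It is illustrative only: it assumes the third-party tweepy library and placeholder API credentials, and the posting account would be one its owner has self-identified as automated. The feed URL is USGS’s public earthquake feed; the polling interval and message format are made up for the example.

```python
import time

import requests
import tweepy  # third-party Twitter API client; pip install tweepy requests

# USGS publishes public GeoJSON feeds of recent earthquakes.
USGS_FEED = "https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/significant_hour.geojson"

# Placeholder credentials for a bot account whose owner has
# self-identified it as automated in its profile settings.
client = tweepy.Client(
    consumer_key="...",
    consumer_secret="...",
    access_token="...",
    access_token_secret="...",
)

seen = set()  # event IDs already tweeted, to avoid duplicates

while True:
    events = requests.get(USGS_FEED, timeout=10).json()["features"]
    for event in events:
        if event["id"] in seen:
            continue
        seen.add(event["id"])
        props = event["properties"]
        client.create_tweet(text=f"Earthquake: M{props['mag']} near {props['place']}")
    time.sleep(300)  # poll every five minutes to stay well within rate limits
```

The whole job fits in a loop like this, which is exactly Taraschuk’s point: the bot never tires of sharing and never forgets what it has already posted.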
Balancing trust and safety

That’s one of the balancing acts Twitter teams have to perform when negotiating the tension between verification and safety. Stewart, who started his career at Twitter researching whether to require customers to use their real identity, believes that allowing anonymity is one of the platform’s strengths. He says labeling bots ties into the larger goal of supporting and making space for a spectrum of voices on the platform. “So how does verification impact public conversation? What are the voices that people want to hear from, and how can we make sure that those forces are balanced and equitable?” he said.

But Stewart is quick to clarify that bot labels are not the same thing as the blue verified checkmarks, nor are they endorsements. “We’re not trying to say ‘this bot is good’ in a quality sense, because that is really subjective. We’re just trying to say that this is an automated account that we don’t think is doing any harm, and that the owner wants to be honest with you and let you know that it’s automated,” he said. “No one should be going around telling you who to trust and who not to trust. Our goal is to give people the tools to make those decisions for themselves.”

The response from developers in this initial phase has been overall quite positive, and more Automated Account labels are expected to roll out early next year. Any developer who wants to create a bot can self-identify it as an automated account and link to their personal Twitter handle in its profile, as sketched below. “Ultimately, you get at the bad bots by solving for the good ones. And so that’s really the long-tail strategy here.”
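The label itself amounts to simple profile metadata: an automated flag plus a pointer to the accountable human. The sketch below shows one way a client could carry and render that disclosure. The field names (`automated`, `managed_by`) are assumptions made for illustration, not Twitter’s actual API schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Profile:
    """Illustrative profile metadata; field names are assumptions,
    not Twitter's actual API schema."""
    handle: str
    automated: bool = False
    managed_by: Optional[str] = None  # handle of the human who runs the bot

def profile_badge(profile: Profile) -> str:
    """Render the disclosure the article describes: an 'Automated'
    designation, a robot icon, and a link back to the bot's owner."""
    if not profile.automated:
        return f"@{profile.handle}"
    owner = f", managed by @{profile.managed_by}" if profile.managed_by else ""
    return f"@{profile.handle} \N{ROBOT FACE} Automated{owner}"

print(profile_badge(Profile("quake_alerts", automated=True, managed_by="jane_dev")))
# -> @quake_alerts 🤖 Automated, managed by @jane_dev
```

The point is not the rendering but the shape of the data: the flag and the owner link travel with the account, so any client can surface who is behind a bot and why it is there.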
