What comes to mind when you think of a bot on Twitter? You probably picture a spammy account sliding into your DMs or a Russian troll farm pumping out fake news and conspiracy theories. Thanks in large part to misinformation campaigns waged on social media during the 2016 US election, a lot of people associate bots with this kind of nefarious activity.

They have good reason to: Twitter detects roughly 25M accounts per month suspected of being automated or spam accounts. In the second half of 2020 alone, it deployed 143M anti-spam challenges to accounts, which helped bring spam reports (those coming from people who flag Tweets as spam) down by about 18% from the first half of the year. Twitter has an entire enforcement team dedicated to tracking down these accounts and banning them.

But it’s not as simple as blanket-banning all automated accounts. Bots come in all shapes and sizes, and chances are you’re already following one that you like: a COVID-19 bot that alerts you to vaccine availability in your area, an earthquake bot that warns you of tremors in your region, or an art bot that delivers a colorful dose of delight to your timeline.

How these bots are represented on Twitter is almost as important as what they do for their followers. That’s where Oliver Stewart comes in. As the lead researcher on Twitter’s Identity and Profiles team, he wanted to understand how interpersonal trust develops on Twitter and how automated accounts could affect that trust.

“There are many bots on Twitter that do good things and that are helpful to people,” said Stewart. “We wanted to understand more about what those look like so we could help people identify them and feel more comfortable in their understanding of the space they’re in.”

Stewart’s team found that people consider content more trustworthy when they know more about who’s sharing it, starting with whether that account is human or automated. To help address the issue, Twitter recently rolled out new labels that identify bots with an “Automated” designation in their profile, a robot icon, and a link to the Twitter handle of the person who created the bot.

“Not only are we just labeling these bots, we’re also saying: this is the owner, and this is why they’re here,” said Stewart. “Based on the preliminary research that we have, we hypothesize that that’s going to create an environment where you can trust those bots a lot more.”

So why go to the trouble of labeling bots instead of banning them all from Twitter? The labels themselves don’t call bots good or bad; they simply signal that an account is automated. “If it’s compliant with Twitter’s rules, we’re OK with it being on the platform. For the ones that are noncompliant, we’re already actively doing the work to remove those off Twitter,” she said.

Taraschuk is one of the people building those compliant bots. A software engineer by day in Boulder, Colorado, he began creating art bots to share his love of fine art with his followers. “We have this expectation that humans are more authentic, that interacting with a human is better. But the other side of that is that when it comes to art, humans introduce their personal biases,” said Taraschuk. “Bots are actually better in so many use cases than humans. They never forget, they never tire of sharing. They remember exactly what they shared and what they didn’t.”

Balancing trust and safety

That’s one of the balancing acts Twitter teams have to perform when negotiating the tension between verification and safety.
Stewart, who started his career at Twitter researching whether to require customers to use their real identity, believes that allowing anonymity is one of the platform’s strengths. He says labeling bots ties into the larger goal of supporting and making space for a spectrum of voices on the platform. “So how does verification impact public conversation? What are the voices that people want to hear from, and how can we make sure that those forces are balanced and equitable?” he said.

But Stewart is quick to clarify that bot labels are not the same thing as the blue verified checkmarks, nor are they endorsements. “We’re not trying to say ‘this bot is good’ in a quality sense, because that is really subjective. We’re just trying to say that this is an automated account that we don’t think is doing any harm, and that the owner wants to be honest with you and let you know that it’s automated,” he said. “No one should be going around telling you who to trust and who not to trust. Our goal is to give people the tools to make those decisions for themselves.”

The response from developers in this initial phase has been largely positive, and more Automated Account labels are expected to roll out early next year. Any developer who wants to create a bot can self-identify it as an automated account and link to their personal Twitter handle in its profile. “Ultimately you get at the bad bots by solving for the good ones. And so that’s really the long-tail strategy here.”
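For developers curious what a compliant, self-identified bot looks like under the hood, the sketch below shows roughly how one might post automated updates using Python and the third-party tweepy library. The credentials and the sample tweet are placeholders, and the “Automated” profile label itself is applied through the bot account’s settings (by linking the owner’s handle), not through these API calls.

    # A minimal, hypothetical "good bot" built with the third-party tweepy library.
    # Credentials come from a Twitter developer account; the values here are placeholders.
    import tweepy

    client = tweepy.Client(
        consumer_key="YOUR_API_KEY",
        consumer_secret="YOUR_API_SECRET",
        access_token="YOUR_ACCESS_TOKEN",
        access_token_secret="YOUR_ACCESS_TOKEN_SECRET",
    )

    # Post one automated update, e.g. a daily painting or an earthquake alert.
    # The "Automated" label shown on the profile is configured in the account's
    # settings by linking the owner's handle; it is not set by this call.
    client.create_tweet(text="Claude Monet, 'Water Lilies' (1906)")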
Why it matters: The internet is increasingly vulnerable to hacking; a quantum one would be unhackable. A quantum internet could be used to send unhackable messages, improve the accuracy of GPS, and enable cloud-based quantum computing.

For more than twenty years, dreams of creating such a quantum network have remained out of reach, in large part because of the difficulty of sending quantum signals across large distances without loss. Now, Harvard and MIT researchers have found a way to correct for signal loss with a prototype quantum node that can catch, store, and entangle bits of quantum information. The research is the missing link toward a practical quantum internet and a major step forward in the development of long-distance quantum networks.

The U.S. Department of Energy (DoE) explains how a quantum link will make it happen through two quantum phenomena: the first is quantum entanglement, where two-particle ...
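As a rough illustration of the entanglement the DoE is referring to (standard textbook notation, not taken from the article itself), two qubits shared between distant nodes can be prepared in the Bell state

    \( |\Phi^{+}\rangle = \tfrac{1}{\sqrt{2}}\,\left(|00\rangle + |11\rangle\right) \)

so that measuring either qubit instantly fixes the outcome for the other, however far apart the nodes are. It is this shared state that a quantum node has to catch and store faithfully, since loss along the channel destroys the correlation.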