One of the greatest challenges Twitter faces right now is reducing abuse and bullying on its platform. Last week, the company's head of product, Kayvon Beykpour, sat down with Wired editor-in-chief Nicholas Thompson at the Consumer Electronics Show (CES) in Las Vegas to discuss toxicity on the platform, the health of conversations, and more. Throughout the interview, he revealed some aspects of Twitter's work to tackle abusive and offensive content.
Beykpour said one of the steps the company takes to reduce toxicity is to de-rank abusive replies using machine learning:
I think increasingly, leveraging machine learning to try to model the behaviors that we think are most optimal for that area. So for example, we'd like to show replies that are most likely to be replied to. That's one attribute you might want to optimize for, not the only attribute by any means. You'd want to deemphasize replies that are likely to be blocked or reported for abuse.
He added that Twitter surfaces replies that are more likely to get reactions or replies. However, it tweaks its algorithm to de-rank replies that are reaction-worthy, but abusive.
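Twitter hasn't published how this ranking works, but the idea Beykpour describes can be sketched as combining two model outputs: a predicted engagement probability and a predicted abuse probability. Everything below (the scoring formula, the `p_engage` and `p_abuse` fields) is an illustrative assumption, not Twitter's actual system:

```python
# Minimal sketch of engagement-vs-abuse reply ranking.
# In practice, p_engage and p_abuse would come from trained ML models;
# here they are precomputed probabilities attached to each reply.

def rank_replies(replies):
    """Sort replies so that engaging-but-likely-abusive ones sink."""
    def score(reply):
        # Reward predicted engagement, penalize predicted abuse.
        return reply["p_engage"] * (1.0 - reply["p_abuse"])
    return sorted(replies, key=score, reverse=True)

replies = [
    {"text": "Great point!",          "p_engage": 0.6, "p_abuse": 0.05},
    {"text": "You absolute idiot",    "p_engage": 0.8, "p_abuse": 0.90},
    {"text": "Here's a counterpoint", "p_engage": 0.5, "p_abuse": 0.02},
]
ranked = rank_replies(replies)
print([r["text"] for r in ranked])
# → ['Great point!', "Here's a counterpoint", 'You absolute idiot']
```

Note how the second reply has the highest raw engagement probability, yet the abuse penalty pushes it to the bottom, which matches the "reaction-worthy, but abusive" case Beykpour describes.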
When Thompson asked him how the company tries to adjust the system so it doesn't incentivize toxicity, Beykpour said the social network trains its AI models carefully to understand its rules and regulations:
Today, a very prominent way that we leverage AI to try to determine toxicity is basically having a good definition of what our rules are, and then having a huge amount of sample data around tweets that violate rules and building models around that.
Basically we're trying to predict the tweets that are likely to violate our rules. And that's just one form of what people might consider abusive, because something that you might consider abusive may not be against our policies, and that's where it gets tricky.
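The approach Beykpour outlines, labeling sample tweets that violate the rules and building a model from them, is standard supervised text classification. As a toy illustration only (Twitter's real models are far more sophisticated; the word-frequency scoring here is a stand-in assumption), a crude violation predictor might look like this:

```python
# Toy supervised classifier: learn word frequencies from labeled
# sample tweets, then score new tweets by how "violating" their
# words look. Not Twitter's model -- just the general technique.
from collections import Counter

def train(violating, benign):
    """Count word frequencies in each labeled corpus."""
    bad = Counter(w for t in violating for w in t.lower().split())
    ok = Counter(w for t in benign for w in t.lower().split())
    return bad, ok

def violation_score(tweet, bad, ok):
    """Sum of per-word smoothed 'badness' ratios, centered at zero."""
    score = 0.0
    for w in tweet.lower().split():
        # Laplace-smoothed fraction of this word's occurrences that
        # were in violating tweets; 0.5 means "no signal either way".
        score += (bad[w] + 1) / (bad[w] + ok[w] + 2) - 0.5
    return score

bad, ok = train(
    violating=["you are trash", "get lost loser"],
    benign=["nice thread", "thanks for sharing this"],
)
print(violation_score("total loser", bad, ok) > 0)   # → True
print(violation_score("nice thread", bad, ok) > 0)   # → False
```

This also makes Beykpour's caveat concrete: the model only learns what the labeled training data calls a violation, so content a user finds abusive but that doesn't match the policy-labeled examples will score as benign.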
The last line is quite intriguing, and is likely at the heart of many an argument surrounding Twitter. Users who get banned often complain that Twitter's moderation wasn't sufficiently nuanced to understand the context of the tweets that got them in trouble. On the flip side, some accounts aren't banned even when they tweet controversial or abusive content.
When Thompson jokingly asked if Twitter planned to give abusers a 'pink tick' or roll out a toxicity score to disincentivize them, Beykpour waved it off, and said the company is experimenting with more subtle features in its beta app, such as hiding like counts and retweet counts.
Twitter's challenge in training its AI and moderation team is to account for the ever-changing social and political context of different geographies. Some words or statements that were normalized a few years ago might be abusive in the current context. So, the company needs to review and refine its policies constantly.
The whole interview is full of interesting tidbits about how Twitter is thinking about the future of its platform, including open-sourcing it. Find it on Wired here.