The Times Sharply Increases Articles Open for Comments, Using Google’s Technology

Moderator was created in partnership with Jigsaw, a technology incubator that’s part of Alphabet, Google’s parent company. It uses machine learning technology to prioritize comments for moderation, and sometimes, approves them automatically. Its judgments are based on more than 16 million moderated Times comments, going back to 2007.

If The Times has innovated in the comments space, it is by treating reader submissions like content. The community desk has long sought quality of comments over quantity. Surveys of Times readers have made clear that the approach paid off — readers who have seen our comment sections love them.

In the summer of 2016, Jigsaw set out to tackle a similar problem: how to improve the quality of online conversations.

The Times struck a deal with Jigsaw that we outlined last year: In exchange for The Times’s anonymized comments data, Jigsaw would build a machine learning algorithm that predicts what a Times moderator might do with future comments. In addition, The Times, Jigsaw and a digital product partner called Instrument would collaborate to create Moderator, an application built to take advantage of the machine learning that is now a part of the Perspective project, which spots abuse and harassment online.
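Perspective is exposed as a public REST API. A minimal sketch of scoring a single comment's toxicity might look like the following; the endpoint and attribute names follow Perspective's published v1alpha1 interface, the API key is a placeholder, and error handling is omitted:

```python
# Hedged sketch of querying the Perspective API mentioned above.
import json
import urllib.request

def build_analyze_request(comment_text):
    """Request body asking Perspective for a TOXICITY score."""
    return {
        "comment": {"text": comment_text},
        "requestedAttributes": {"TOXICITY": {}},
        "doNotStore": True,
    }

def score_toxicity(comment_text, api_key):
    """Return Perspective's TOXICITY probability, between 0 and 1."""
    url = ("https://commentanalyzer.googleapis.com/v1alpha1/"
           "comments:analyze?key=" + api_key)
    req = urllib.request.Request(
        url,
        data=json.dumps(build_analyze_request(comment_text)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
```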

“Publishers often rely on advertising, and advertising relies on reader engagement,” Jared Cohen, chief executive of Jigsaw, wrote in response to questions from The Times. Jigsaw’s efforts help “platforms to create more space to engage their readers in civil discussion.”

How The Times Will Use Moderator

Our new moderation platform diverges from the most common approach for organizing user-generated content, which is to prioritize each submission in the order it was received.

In Moderator, each comment is scored based on the likelihood that Times staff members would make a certain judgment, i.e., approve or reject the comment.
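The scoring idea can be sketched in a few lines. This is illustrative only: the probability here is supplied by hand, standing in for the machine-learned model trained on the Times's moderated comments.

```python
# Each comment carries a model-predicted probability that a moderator
# would reject it; the queue is ordered by risk rather than arrival.
from dataclasses import dataclass

@dataclass
class Comment:
    text: str
    p_reject: float  # predicted probability of rejection, in [0, 1]

def moderation_queue(comments):
    """Order comments riskiest-first, instead of first-come, first-served."""
    return sorted(comments, key=lambda c: c.p_reject, reverse=True)
```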

To the Times moderator, each comment appears as a dot on a histogram chart, illustrated below.

[Photo: the Moderator interface, with each comment plotted on a histogram by its probability of rejection]

Its placement on the chart indicates the probability that it would be rejected by a Times moderator. Moderator also tries to predict why the comment would be rejected (e.g., as inflammatory or insubstantial).
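Predicting a rejection reason can be sketched as picking the most probable label from a classifier's per-reason scores. The labels below are illustrative, not the Times's actual taxonomy:

```python
# Given per-reason probabilities from a hypothetical classifier,
# report the likeliest reason a comment would be rejected.
def likely_rejection_reason(reason_probs):
    """reason_probs: dict mapping a reason label to a probability."""
    reason = max(reason_probs, key=reason_probs.get)
    return reason, reason_probs[reason]
```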

For many stories, Times moderators will be able to check the machine learning model against an article’s comments by reading through the submissions with, say, a 15 to 20 percent likelihood of being rejected. If those comments can be approved, then a moderator might approve all comments in the 0 to 20 percent range. On our previous platform, we would have needed to read each of those comments individually before approving them.
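The spot-check-then-batch-approve workflow above can be sketched as two small operations over predicted rejection probabilities (plain floats here, standing in for scored comments):

```python
# Read the borderline band by hand; if it looks clean, approve
# everything at or below the band's upper bound in one action.
def band(scores, low, high):
    """Scores falling inside the spot-check band [low, high]."""
    return [p for p in scores if low <= p <= high]

def approve_up_to(scores, threshold):
    """Split scores into (approved, still_pending) at the threshold."""
    approved = [p for p in scores if p <= threshold]
    pending = [p for p in scores if p > threshold]
    return approved, pending
```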

Most comments will initially be prioritized based on a “summary score.” Right now, that means judging comments on three factors: their potential for obscenity, their toxicity and their likelihood of being rejected.
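The article does not describe how the three factors are weighted. One simple, hedged way to combine them is to take the highest-risk signal, so that any single red flag raises a comment's priority; this is an illustrative choice, not the Times's formula:

```python
def summary_score(obscenity, toxicity, p_reject):
    """Combine three risk probabilities (each in [0, 1]) into one
    priority score; higher means more likely to need human review."""
    for p in (obscenity, toxicity, p_reject):
        if not 0.0 <= p <= 1.0:
            raise ValueError(f"score out of range: {p}")
    return max(obscenity, toxicity, p_reject)
```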

As The Times gains more confidence in this summary score model, we are taking our approach a step further — automating the moderation of comments that are overwhelmingly likely to be approved.
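Automating those approvals amounts to a conservative gate: below some very low predicted-rejection threshold, approve without human review; everything else still goes to a moderator. The cutoff below is a made-up example, not a published Times setting:

```python
def triage(p_reject, auto_approve_below=0.02):
    """Auto-approve only near-certain approvals; route the rest to a
    human moderator. 0.02 is an illustrative cutoff."""
    return "auto-approved" if p_reject < auto_approve_below else "human review"
```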

“The best part about machine learning models is they get better with time,” Mr. Cohen told The Times.

Our partnership with Jigsaw and Instrument builds on work we’ve done in partnership with The Washington Post, Knight Foundation and Mozilla on the Coral Project, an effort that helps news sites accept and manage reader submissions on a large scale.

In the long run, we hope to reimagine what it means to “comment” online. The Times is developing a community where readers can discuss the news pseudonymously in an environment safe from harassment, abuse and even your crazy uncle. We hope you join us on the journey.

The Times community team will be responding to your questions in the comments.

By BASSEY ETIM

https://www.nytimes.com/2017/06/13/insider/have-a-comment-leave-a-comment.html
