[PDFlist] Instagram's War on Bullying - Analysis
Soloveni Vitoso
infor at pacificdisability.org
Thu Jul 18 21:26:56 MDT 2019
Inside Instagram's War on Bullying
BY KATY STEINMETZ <https://time.com/author/katy-steinmetz/> - UPDATED: JULY 8
Ethan Cohen tried to laugh off his first experience with bullying on Instagram. Like many kids his age, the Raleigh, N.C., teen eagerly joined the platform in middle school, and one day he discovered fellow students snickering at an account. Someone - he still does not know who - had started taking surreptitious photos of him and posting them under the username ethan_cohens_neck_vein. The feed was dedicated to jeers about what appeared to be a prominent muscle in his neck. One post compared it to the Great Wall of China. Another suggested "systems of equations" could be done on its size. To friends, he dismissed it as a dumb prank, but privately he was distressed. Someone was tailing him and posting mocking pictures for all to see. "The anonymity of it was freaky," says Cohen, now 18. He reported the account multiple times to Instagram. Nothing happened, even though guidelines that govern behavior on the platform explicitly forbid mocking someone's physical appearance.
Today, Instagram says, the outcome would have been different. More sophisticated reporting tools and moderators would have quickly shut the account down. And, in the near future, the company aspires to something far more ambitious: sparing users like Cohen from having to report bullying in the first place by using artificial intelligence to root out behaviors like insults, shaming and disrespect. At a time when social media platforms are being blamed for a host of problems - and are under pressure from governments to demonstrate they can police themselves - Instagram has declared war on bullying. "We are in a pivotal moment," says Head of Instagram Adam Mosseri. "We want to lead the industry in this fight."
It's a logical step for what's become the platform of choice for young people. As teenagers have become glued to the app, bullying has become to Instagram what "fake news" is to Facebook and trolling is to Twitter: a seemingly unstoppable ill that users endure in order to be where everyone else is. By one estimate, nearly 80% of teens are on Instagram and more than half of those users have been bullied on the platform. And it gets far worse than neck taunts. In high school, Cohen came out as gay on Instagram and was pummeled by direct messages from a popular student calling him a "faggot" and "failed abortion." Users suffer haunting humiliations and threats of violence. More broadly, bullying on sites like Instagram has been linked to self-destructive behavior.
Sheri Bauman, a counseling professor at the University of Arizona who has spent years studying bullying's causes and effects, calls Instagram a "one-stop shop for the bully" because everything they need is there: an audience, anonymity, an emphasis on appearances, and channels that range from public feeds to behind-the-back group chats. Instagram executives acknowledge that as they try to attract more users and attention to the platform, each new feature brings with it a fresh opportunity for abuse. "Teens are exceptionally creative," says Instagram head of public policy Karina Newton.
Mosseri is new to the top job. After Instagram's founders abruptly departed late last year - reportedly amid tensions with parent company Facebook - the longtime Facebook employee took over, having honed his crisis management skills by overseeing Facebook's News Feed. The 36-year-old aims to define a new era for Instagram, calling the well-being of users his top priority. Tackling bullying gives shape to that agenda. Mosseri is dedicating engineers and designers to the cause. His team is doing extensive research, rolling out new features and changing company protocol, all with bullying in mind.
But it's a fight with tangled front lines and plenty of possibilities for collateral damage. Go after bullying too aggressively and risk alienating users with stricter rules and moderation that feels intrusive at a time when the company is a bright spot of growth for Facebook. Don't do enough, especially after promising to set new standards for the industry, and risk accusations of placing profits over the protection of kids.
Then there's the technical Everest to climb. Creating artificial intelligence designed to combat bullying means teaching machines to master an evolving problem with complex nuances. Instagram must also be wary of free speech issues as engineers create tools that are optimized to find things that they should, without suppressing things that they shouldn't. "I do worry that if we're not careful, we might overstep," Mosseri says. But he says nothing, including growth, trumps the need to keep the platform civil. "We will make decisions that mean people use Instagram less," he tells TIME, "if it keeps people more safe."
Facebook stands to profit from every hour people spend on Instagram. If those who associate additional safety measures with constriction go elsewhere, potential revenue leaves with them. When asked if it's in the company's financial interest to take on bullying, Mosseri's response is that if Instagram fails to curb it, he will not only be failing users on a moral level but also failing the business. "It could hurt our reputation and our brand over time. It could make our partnership relationships more difficult. There are all sorts of ways it could strain us," Mosseri says. "If you're not addressing issues on your platform, I have to believe it's going to come around and have a real cost."
Instagram was launched in 2010 by Kevin Systrom<http://time100.time.com/2013/04/18/time-100/slide/kevin-systrom/> and Mike Krieger, two twenty-somethings who hoped users would download a photo-sharing app that made their lives look beautiful. Users did. Within a year, more than 500,000 people were signing up each week. But it quickly became clear that the masses were going to use the app for ugly reasons too, and the duo spent time in the early days personally deleting nasty comments and banning trolls. By the time they left, the platform had more than one billion users<https://time.com/5317542/instagram-igtv-youtube/>, far too many for humans to monitor. So, like other social media platforms trying to ferret out forbidden material ranging from terrorist propaganda to child pornography, Instagram had turned to machines.
In a quest to make Instagram a kinder, gentler place, the founders had borrowed from Facebook an AI tool known as DeepText, which was designed to understand and interpret the language people were using on the platform. Instagram engineers first used the tool in 2016 to seek out spam. The next year, they trained it to find and block offensive comments, including racial slurs. By mid-2018, they were using it to find bullying in comments, too. A week after Mosseri took over in October, Instagram announced it wouldn't just use AI to search for bullying in remarks tacked below users' posts; it would start using machines to spot bullying in photos, meaning that AI would also analyze the posts themselves.
This is easier said than done.
When engineers want to teach a machine to perform a task, they start by building a training set - in lay terms, a collection of material that will help the machine understand the rules of its new job. In this case, it starts with human moderators sorting through hundreds of thousands of pieces of content and deciding whether they contain bullying or not. They label them and feed the examples into what is known as a classifier, which absorbs them like a preternatural police dog that is then set loose to sniff out that material. Of course, these initial examples can't cover everything a classifier encounters in the wild. But, as it flags content - and as human moderators assess whether that was the correct call - it learns from the additional examples. Ideally, with the help of engineers tweaking its study habits, it gets better over time.
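Instagram's DeepText-based pipeline is not public, but the label-then-train loop described above can be sketched in a few lines of Python with scikit-learn. The example comments and labels below are invented stand-ins for the moderator-labeled corpus; only the workflow, not the actual model, reflects what Instagram runs.

    # Sketch of the labeling-and-training loop described above, using
    # scikit-learn in place of Instagram's non-public DeepText stack.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Human moderators supply the labels; this toy set stands in for the
    # hundreds of thousands of examples the paragraph mentions.
    comments = [
        "great shot, love this",
        "you ugly ass gapped tooth ass bitch",
        "happy birthday!!",
        "nobody likes you, just quit",
    ]
    labels = [0, 1, 0, 1]  # 1 = bullying, 0 = benign

    # Character n-grams give some robustness to slang and creative spelling.
    classifier = make_pipeline(
        TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
        LogisticRegression(),
    )
    classifier.fit(comments, labels)

    # Flagged items go back to human moderators, whose verdicts become new
    # training examples: the feedback loop the paragraph describes.
    print(classifier.predict_proba(["ur so ugly lol"])[0][1])  # P(bullying)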
Today, there are three separate bullying classifiers scanning content on Instagram: one trained to analyze text, one to analyze photos and one to analyze videos. They're live, and they're flagging content by the hour. Yet they are also in "pretty early days," as lead engineer Yoav Shapira puts it. In other words, they're missing a lot of bullying and they're not necessarily sniffing out the right things. His team's mission is to change that.
One reason this is so challenging, compared to training a machine to seek out content like nudity, is that it's much easier to recognize when someone in a photo is not wearing pants than it is to recognize the broad array of behavior that might be considered bullying. Studies of cyberbullying vary wildly in their conclusions about how many people have experienced it, ranging from as low as 5% to as high as 72%, in part because no one agrees on precisely what it is. "What makes bullying so hard to tackle is that the definition is so different to individuals," says Newton, Instagram's head of public policy. And engineers need a clear sense of what qualifies and what doesn't in order to build a solid training set.
The forms bullying takes on Instagram have changed over time. There is plenty of what one might call old-fashioned bullying: According to Instagram's own research, mean comments, insults and threats are most common. Some of this is easy to catch. Instagram's text classifier, for example, has been well-trained to look for strings of words like "you ugly ass gapped tooth ass bitch" and "Your daughter is a slag." But slang changes over time and across cultures, especially youth culture. And catching aggressive behavior requires comprehending full sentences, not just a few words contained in them. Consider the world of difference between "I'm coming over later" and "I'm coming over later no matter what you say."
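The gap between those two sentences is exactly where word-list approaches break down. A toy comparison, with an invented blocklist, shows why a classifier has to read whole sentences:

    # Toy illustration: a keyword blocklist scores both messages identically,
    # even though only the second is menacing. The blocklist is invented.
    blocklist = {"bitch", "slag", "ugly"}

    def keyword_score(message: str) -> int:
        """Count blocklisted words: the few-words approach the text calls insufficient."""
        return sum(word in blocklist for word in message.lower().split())

    print(keyword_score("I'm coming over later"))                         # 0
    print(keyword_score("I'm coming over later no matter what you say"))  # 0
    # Both score zero: telling them apart requires modeling the full
    # sentence, which is what the trained classifiers attempt.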
Users also find themselves victimized by bullies who go beyond words. On Instagram, there are so-called "hate pages," anonymous accounts dedicated to impersonating or making fun of people. A boyfriend might tag an ex in posts that show him with other girls; a girl might tag a bunch of friends in a post and pointedly exclude someone. Others will take a screenshot of someone's photo, alter it and reshare it, or just mock it in a group chat. There's repeated contact, like putting the same emoji on every picture a person posts, that mimics stalking. Many teens have embarrassing photos or videos of themselves shared without their consent, or find themselves the subject of "hot or not" votes (much like the rankings Mark Zuckerberg set up as an undergraduate at Harvard on a pre-Facebook website called Facemash).
"There's nothing in bullying," Shapira says, "that is super easy."
As part of its effort to develop effective AI, Instagram has been surveying thousands of users in hopes of better understanding all the forms that bullying can take, in the eyes of many beholders. (The responses will also help Instagram gauge the prevalence of bullying on the platform, data it plans to make public for the first time later this year.) Per Newton, the company's broad working definition of bullying is content intended to "harass or shame an individual," but Shapira's team has broken this down into seven subcategories: insults, shaming, threats, identity attacks, disrespect, unwanted contact and betrayals. The grand plan is to build artificial intelligence that is trained to understand each concept. "It's much more costly from an engineering perspective," Shapira says, "but it's much better at solving the problem."
Because bullying can be contextual - hinging on an inside joke or how well two people know each other - Instagram's engineers are also researching ways they can use account behavior to separate the bad from the benign. The word ho, for example, might be classified as bullying when a man says it to a woman but not when a woman uses it to refer to a friend. Similarly, if someone says "Awesome picture" once, that might be a compliment. If they say it on every photo a person posts, that starts to look suspicious. Engineers are capitalizing on signals that help reveal those relationships: Do two accounts tag each other a lot? Has one ever blocked the other? Is the username similar to one that has been kicked off the platform in the past? Does there seem to be a coordinated pile-on, like "Go to @someoneshandle and spam this picture on their dms"?
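Instagram has not published its feature set, but the signals named in this paragraph lend themselves to a simple sketch in which account context raises or lowers a raw text score. Every field name, weight and threshold below is invented for illustration:

    from dataclasses import dataclass

    @dataclass
    class RelationshipSignals:
        mutual_tags: int            # how often the two accounts tag each other
        has_blocked_before: bool    # one account previously blocked the other
        username_like_banned: bool  # username resembles a previously banned one
        identical_comments: int     # same comment/emoji left on many posts

    def adjusted_score(text_score: float, sig: RelationshipSignals) -> float:
        """Adjust a raw classifier score using context between two accounts."""
        score = text_score
        if sig.mutual_tags > 10:        # frequent mutual tagging suggests friends
            score -= 0.2
        if sig.has_blocked_before:      # a prior block suggests unwanted contact
            score += 0.3
        if sig.username_like_banned:    # possible ban evasion
            score += 0.2
        if sig.identical_comments > 5:  # stalking-like repeated contact
            score += 0.2
        return min(max(score, 0.0), 1.0)

    # The same word between close friends vs. from a stranger who was blocked:
    friends = RelationshipSignals(25, False, False, 0)
    stranger = RelationshipSignals(0, True, False, 7)
    print(adjusted_score(0.5, friends))   # 0.3, likely banter
    print(adjusted_score(0.5, stranger))  # 1.0, likely harassment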
When it comes to photos and videos, the classifiers have had less practice and are less advanced. Engineers and moderators who analyze the content people report are still identifying patterns, but some guideposts have emerged. For example, a split screen is often a telltale sign of bullying, especially if a machine detects that one side shows a human and the other an animal. So is a photo of three people with a big red X drawn across someone's face. The use of filters can help signal a benign post: People don't tend to pretty up their victimizing. The team is also learning to understand factors like posture. It's likely a photo was taken without consent if it appears to be an "upskirt" shot. If one person is standing and another is in the posture of a victim, that's a red flag.
Every week, researchers write up a report of their findings, and almost every week there's some new form of bullying that engineers hadn't thought to look for, Shapira says. But with the help of teams and resources from Facebook, employees at Instagram who work on well-being believe they can not only master challenges that have long eluded experts - like building machines that understand sarcasm - but figure out how to use AI to find new-fangled phenomena like "intentional FOMO" and even accounts that torment middle-schoolers about their necks.
There are a lot of numbers Instagram won't share, including how many of its roughly 1,200 employees, or Facebook's 37,700, are working on the bullying problem. It also won't share the error rate of the classifiers that are currently live or the amount of content they're flagging for moderators. These days, the aspirations surrounding AI are often unmatched by the reality. But faith in the technology is sky high. "It's going to get much better," Shapira says, "over the next year or two."
Mosseri inherited one of the most powerful perches in social media at a tough time for the industry. We sit down to talk about bullying in mid-May, in an airy conference room at Instagram's San Francisco office. It's his first on-the-record interview in the U.S., and it's coming shortly after the White House launched a tool inviting people who feel they've been censored by social media companies to share their stories with the President. A few days before that, one of Facebook's co-founders had called for the breakup of the company.
When I ask about how free speech concerns are guiding his strategy on bullying, given that there will never be universal agreement on how tough Instagram's AI and moderators should be, he says that they need to be careful - "Speech is super important" - but emphasizes that the platform needs to act. For years, Internet companies distanced themselves from taking responsibility for content on their platforms, but as political scrutiny has mounted, executives have struck a more accountable tone. "We have a lot of responsibility," he says, "given our scale."
Minefields are everywhere, but Mosseri is used to tricky terrain. After joining Facebook as a designer in 2008, he went on to help establish the team that oversees News Feed, the endless scroll of posts in the middle of everyone's home page that has been an epicenter of controversy for the company. When Facebook was manipulated by trolls and foreign meddlers during the 2016 election, News Feed is where it happened. In the wake of that, Mosseri established what he named the "Integrity" team and spent his time overseeing development of AI tools meant to root out complex ills like misinformation. Along the way, he became close to Zuckerberg, and in early 2018, he was tapped to become Instagram's head of product, a post he soon leveraged into the top job.
As Instagram improves its definition of bullying, Mosseri believes the company will set new standards for using AI to squash it, developing practices that other platforms may even adopt. In the meantime, he's focused on finding ways to get Instagram's vast army of users to help combat this problem themselves, with the assistance of machines. "People often frame technology and people in opposition," he says. "My take is that people and technology can and should work in tandem."
Two new features Instagram is planning to roll out to all users this year embody this approach. One is what the company is calling a comment warning. When someone decides to comment on a post, if Instagram's bullying classifier detects even "borderline" content, it will give that user a prompt, encouraging them to rethink their words. "The idea is to give you a little nudge and say, 'Hey, this might be offensive,' without blocking you from posting," says Francesco Fogu, a designer who works on well-being. (It can also save Instagram from making a tricky judgment call about exactly where that line is between bullying and free expression.)
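The comment warning implies a two-tier decision: clearly violating comments are blocked, as the classifiers already do, while "borderline" ones trigger the nudge. A minimal sketch of that routing, with invented thresholds:

    def comment_action(p_bullying: float) -> str:
        """Route a comment based on the classifier's bullying probability."""
        BLOCK_THRESHOLD = 0.9  # assumed cutoff for confident violations
        WARN_THRESHOLD = 0.6   # assumed cutoff for "borderline" content

        if p_bullying >= BLOCK_THRESHOLD:
            return "remove"  # handled by moderation
        if p_bullying >= WARN_THRESHOLD:
            return "warn"    # rethink prompt; posting is still allowed
        return "allow"

    print(comment_action(0.95))  # remove
    print(comment_action(0.70))  # warn
    print(comment_action(0.10))  # allow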
https://time.com/5619999/instagram-mosseri-bullying-artificial-intelligence/