T&S Early Warning News
Get ahead of new stories that are impacting the T&S industry.
Craig Kelly rebukes Google and Facebook for removal of his content at social media inquiry
The Guardian | Jan 20, 2022
Company Listed: Google, Facebook Meta, Social Media
United Australia party leader Craig Kelly has used a parliamentary inquiry on social media and online safety to take Google and Facebook to task over the removal of his party’s videos from YouTube and his ban from Facebook for pushing unproven treatments for Covid-19.
United Australia Party has spent close to $5m advertising its videos on YouTube since Kelly became leader of the party in August, accounting for about 98% of all political ad spend on YouTube in Australia during that time. YouTube has not banned the account or ceased taking money from the party, but it has removed a number of the party’s videos for allegedly violating its community guidelines.
Kelly was also banned from Facebook and Instagram last year for posts promoting unproven Covid-19 treatments such as ivermectin and hydroxychloroquine.
In a parliamentary committee hearing on Thursday, the member for Hughes questioned Google, Facebook and TikTok, mainly using examples of each service taking action against content he or his party had posted.
He questioned why a speech he had given in parliament was removed from YouTube, and said he had “countless examples” of content being removed.
Facebook critics call for release of India human rights review
Reuters | Jan 20, 2022
Company Listed: Facebook Meta
Facebook critics on Wednesday called on the world's largest social network to release a human rights impact assessment it commissioned in 2020 to investigate hate speech on its platforms in India.
The social media company, which is now called Meta (FB.O), faces increasing scrutiny over its handling of abuses on its services, particularly after whistleblower Frances Haugen leaked internal documents showing its struggles to monitor problematic content in countries where it was most likely to cause harm.
In a letter sent to the company this month and made public Wednesday, rights groups, including Amnesty International, Human Rights Watch and India Civil Watch International urged Facebook to release the report.
Gare Smith, partner and chair of global business and human rights practice at the U.S. law firm Foley Hoag, which Facebook commissioned to carry out the assessment, said: "Such projects are complex, particularly in a country as diverse and large as India."
Microsoft is bigger than Google, Amazon and Facebook. But now lawmakers treat it like an ally in antitrust battles
The Washington Post | Jan 20, 2022
Company Listed: Google, Amazon, Facebook Meta
When Google announced in 2019 that it would acquire Fitbit for $2 billion, lawmakers didn’t hide their frustration.
“By attempting this deal at this moment, Google is signaling that it will continue to flex and expand its power despite this immense scrutiny,” Rep. David N. Cicilline (D-R.I.), chairman of the House Judiciary antitrust subcommittee, said in a statement the same day the deal was announced.
But more than 24 hours after Microsoft announced its plans to purchase Activision for nearly $70 billion, aggressive trustbusters in Congress were uncharacteristically quiet. Core sponsors of antitrust legislation targeting the tech industry, including Cicilline, Sen. Amy Klobuchar (D-Minn.) and Sen. Tom Cotton (R-Ark.), did not immediately comment to The Washington Post on the deal.
The silence underscores how Microsoft has carved out a distinct reputation among policymakers, distancing itself from the political scrutiny embroiling its top competitors in Washington. As Apple, Facebook, Amazon and Google were marshaling their Washington resources to beat back competition legislation up for debate on Capitol Hill this week, Microsoft smoothly announced one of the largest acquisitions in the history of the tech industry. (Amazon founder Jeff Bezos owns The Washington Post.)
How to support a globally connected counter-disinformation network
Warontherocks.com | Jan 21, 2022
Company Listed: Social Media
From undermining democracy to inciting genocide, the global dangers of disinformation on social media are now well known. But despite countless calls for better legal regulation or intensified content moderation, the efforts of governments and social media companies to combat this threat have proven either woefully inadequate or dangerous to democratic practice.
The problem is that we have been looking for the solution in the wrong place. Civil society, not governments or social media companies, can best diminish disinformation. But these civil society organizations need equipping, and their tools need sharpening. A powerful, networked disinformation threat should be met with a powerful, networked response. This means more data access, more training, and a more entrepreneurial approach to support groups around the world that are already on the front lines. By providing this support, ideally in a more coordinated fashion, donors and research organizations can help make these groups even more powerful in their response.
While Americans often point to Russian interference in the 2016 U.S. election as the moment social media disinformation became a problem, the rest of the world was already worried. Political disinformation has impacted elections in every region. Hate speech has led to violence and genocide in Burma, Sri Lanka, Ethiopia, and elsewhere. Authoritarian states’ systems of propaganda have amplified conspiracy theories about the pandemic and encouraged intimidation of Western scholars.
New powers for online safety body begin
7News | Jan 21, 2022
Company Listed: Social Media, Twitter
New online safety measures to protect Australians on the internet will come into force this weekend, giving the national regulator more powers to take action against abuse.
From Sunday, the eSafety commissioner will be able to compel tech companies to consistently report how they are responding to online harm.
The timeframe within which platforms are required to respond to a 'take down' notice from the commissioner will be cut to 24 hours.
The changes come into effect as the federal government investigates further potential online safety measures.
It wants to introduce laws that would force social media platforms to take down offending posts and, in some circumstances, reveal the identity of anonymous posters.
But social media companies want the government to see the effect of the eSafety commissioner's new powers in addressing online abuse before introducing further measures.
This group of tech firms just signed up to a safer metaverse
TechnologyReview | Jan 21, 2022
Company Listed: Social Media
The internet can feel like a bottomless pit of the worst aspects of humanity. So far, there’s little indication that the metaverse—an envisioned virtual digital world where we work, play, and live—will be much better. As I reported last month, a beta tester in Meta’s virtual social platform, Horizon Worlds, has already complained of being groped.
Tiffany Xingyu Wang feels she has a solution. In August 2020—more than a year before Facebook announced it would change its name to Meta and shift its focus from its flagship social media platform to plans for its own metaverse—Wang launched the nonprofit Oasis Consortium, a group of game firms and online companies that envisions “an ethical internet where future generations trust they can interact, co-create, and exist free from online hate and toxicity.”
How? Wang thinks that Oasis can ensure a safer, better metaverse by helping tech companies self-regulate.
Earlier this month, Oasis released its User Safety Standards, a set of guidelines that include hiring a trust and safety officer, employing content moderation, and integrating the latest research in fighting toxicity. Companies that join the consortium pledge to work toward these goals.
Twitter loses appeal in French online hate speech case
NewsTrust | Jan 21, 2022
Company Listed: Twitter
Twitter must disclose details on what it does to tackle hate speech online in France, the Paris appeals court ruled on Thursday, handing a win to advocacy groups that say the social network does not do enough to clamp down on hateful content.
The verdict upheld a decision by a lower court that ordered Twitter to provide details on the number, nationality, location, and spoken language of people it employs to moderate content on the French version of the platform.
The appeals court confirmed the first ruling in full and said Twitter should pay 1,500 euros in damages to each of the six plaintiffs, according to a copy of the ruling seen by Reuters.
The lower court decision also included the obligation for Twitter to disclose any contractual, administrative, commercial and technical documents that would help determine the financial and human means it has put in place to fight hate speech online in France.
"Our top priority is to ensure the safety of the people using our platform," a Twitter spokesperson said in response to a request for comment.
Ilana Soskin, a lawyer for one of the plaintiffs, advocacy group J'Accuse! (I Accuse!), said Twitter "could not defy French law and make fun of everyone".
Instagram testing feature that lets creators charge subscription fees
ABC News | Jan 21, 2022
Company Listed: Instagram