Meta Fixes Error that Exposed Instagram Users to Graphic and Violent Content

Meta has fixed an error that caused some users to see a flood of graphic and violent videos in their Instagram Reels feed. The fix comes after users reported seeing horrific and violent content despite having Instagram’s “Sensitive Content Control” enabled. Meta’s policy states that it prohibits content that includes “videos depicting dismemberment, visible innards or charred bodies,” and “sadistic remarks towards imagery depicting the suffering of humans and animals.” Nevertheless, users were shown videos that appeared to depict dead bodies and graphic violence against humans and animals.

See Also

Meta Fixes Error that Flooded Instagram Reels with Violent Videos Δ1.90

Meta Platforms said on Thursday it had resolved an error that flooded the personal Reels feeds of Instagram users with violent and graphic videos worldwide. Meta's moderation policies have come under scrutiny after it decided last month to scrap its U.S. fact-checking program on Facebook, Instagram and Threads, three of the world's biggest social media platforms with more than 3 billion users globally. The company has in recent years been leaning more on its automated moderation tools, a tactic that is expected to accelerate with the shift away from fact-checking in the United States.

The Fallout of Meta’s Content Moderation Overhaul Δ1.82

Meta has implemented significant changes to its content moderation policies, replacing third-party fact-checking with a crowd-sourced model and relaxing restrictions on various topics, including hate speech. Under the new guidelines, previously prohibited expressions that could be deemed harmful will now be allowed, aligning with CEO Mark Zuckerberg's vision of “More Speech and Fewer Mistakes.” This shift reflects a broader alignment of Meta with the incoming Trump administration's approach to free speech and regulation, potentially reshaping the landscape of online discourse.

UK Asks Social Media Firms to Assess Online Risks by March 31 Δ1.77

Britain's media regulator Ofcom has set a March 31 deadline for social media and other online platforms to submit a risk assessment of how likely users are to encounter illegal content on their sites. The Online Safety Act requires companies such as Meta, which owns Facebook and Instagram, and ByteDance's TikTok to take action against criminal activity and make their platforms safer. These firms must assess and mitigate risks related to terrorism, hate crime, child sexual exploitation, financial fraud, and other offences.

Technical Issues Resolved Across WhatsApp and Other Meta Apps Δ1.76

WhatsApp's recent technical issue, reported by thousands of users, has been resolved, according to a spokesperson for the messaging service. The outage impacted users' ability to send messages, with some also experiencing issues with Facebook and Facebook Messenger. Given the size of Meta's user base, even a brief glitch can affect millions of people worldwide.

Reddit Will Issue Warnings to Users Who Repeatedly Upvote Banned Content Δ1.76

Reddit will now issue warnings to users who "upvote several pieces of content banned for violating our policies" within a certain timeframe, starting first with violent content. The company aims to reduce exposure to bad content without penalizing the vast majority of users, who already downvote or report abusive content. By monitoring user behavior, Reddit hopes to find a balance between free speech and maintaining a safe community.

YouTube Tightens Policies on Online Gambling Content Δ1.74

YouTube is tightening its policies on gambling content, prohibiting creators from verbally referring to unapproved services, displaying their logos, or linking to them in videos, effective March 19th. The new rules may also restrict online gambling content for users under 18 and remove content promising guaranteed returns. This update aims to protect the platform's community, particularly younger viewers.

Reddit Unveils New Tools to Boost User Engagement Δ1.73

Reddit has launched new content moderation and analytics tools aimed at helping users adhere to community rules and better understand content performance. The company's "rules check" feature allows users to adjust their posts to comply with specific subreddit rules, while a post recovery feature enables users to repost content to an alternative subreddit if their original post is removed for rule violations. Reddit will also provide personalized subreddit recommendations based on post content and improve its post insights feature to show engagement statistics and audience interactions.

Meta Expands Instagram Chat Feature to 250 Users Δ1.73

Instagram is testing a new Community Chat feature that enables up to 250 people in a group, allowing users to form chats around specific topics and share messages. The feature includes built-in moderation tools for admins and moderators, enabling them to remove messages or members to keep the channel safe. Additionally, Meta will review Community Chats against its Community Standards.

TikTok Battles Instagram for US Users, Meta Unveils Plan to Turn Reels Into Separate App Δ1.72

TikTok's uncertain future in the US market has prompted its rival, Meta, to take a more aggressive approach to luring creators and their followers. As part of this effort, Meta is considering turning the Reels feature on Instagram into a standalone video app, codenamed Project Ray. This move could further shift the focus of the social media landscape away from TikTok.

Investigation Into Social Media Companies Over Children's Personal Data Practices Δ1.72

Britain's privacy watchdog has launched an investigation into how TikTok, Reddit, and Imgur safeguard children's privacy, citing concerns over how ByteDance's short-form video-sharing platform uses children's personal data. The investigation follows a fine imposed on TikTok in 2023 for breaching data protection law regarding children under 13. Social media companies are required to prevent children from accessing harmful content and to enforce age limits.

Twitch Creators 'Taking Live Stream Death Threats Very Seriously' Δ1.71

Three US Twitch streamers say they're grateful to be unhurt after a man threatened to kill them during a live stream. The incident occurred during a week-long marathon stream in Los Angeles, where the streamers were targeted by a man who reappeared on their stream and made threatening statements. The streamers have spoken out about the incident, highlighting the need for caution and awareness among content creators.

Instagram's Twitter Competitor Threads Rolls Out Ads and Scheduled Posts Δ1.71

Threads has already registered over 70 million accounts and allows users to share custom feeds, which can be pinned to their homepage by others. Instagram is now rolling out ads in the app, with a limited test of brands in the US and Japan, and is also introducing scheduled posts, which will let users plan up to 75 days in advance. Threads has also announced its intention to label content that is clearly AI-generated and to provide context about who is sharing such content.

How to Fix AI's Fatal Flaw - and Give Creators Their Due (Before It's Too Late) Δ1.71

AI image and video generation models face significant ethical challenges, primarily concerning the use of existing content for training without creator consent or compensation. The proposed solution, AItextify, aims to create a fair compensation model akin to Spotify, ensuring creators are paid whenever their work is utilized by AI systems. This innovative approach not only protects creators' rights but also enhances the quality of AI-generated content by fostering collaboration between creators and technology.

Reddit Moderation Tool Sparks Controversy Over Flagging of ‘Luigi’ as Violent Content Δ1.71

Reddit's automated moderation tool is flagging the word "Luigi" as potentially violent, even when the content doesn't justify such a classification. The tool's actions have raised concerns among users and moderators, who argue that it's overzealous and may unfairly target innocent discussions. As Reddit continues to grapple with its moderation policies, the platform's users are left wondering about the true impact of these automated tools on free speech.

Reddit Introduces Rules Check to Enhance Posting Experience Δ1.71

Reddit is rolling out a new feature called Rules Check, designed to help users identify potential violations of subreddit rules while drafting posts. This tool will notify users if their content may not align with community guidelines, and it will suggest alternative subreddits if a post gets flagged. Alongside this, Reddit is introducing Community Suggestions and Clear Community Info tools to further assist users in posting relevant content.

Melania Trump Urges Lawmakers to Pass Bill Combating Revenge Porn Δ1.70

The first lady urged lawmakers to vote for a bill with bipartisan support that would make "revenge-porn" a federal crime, citing the heartbreaking challenges faced by young teens subjected to malicious online content. The Take It Down bill aims to remove intimate images posted online without consent and requires technology companies to take down such content within 48 hours. Melania Trump's efforts appear to be part of her husband's administration's continued focus on child well-being and online safety.

"Fake Nudes: Youths Confront Harms" Δ1.70

Teens traumatized by deepfake nudes increasingly understand that the AI-generated images are harmful. A recent Thorn survey suggests a growing consensus among young people under 20 that making and sharing fake nudes is abusive. The stigma around creating and distributing non-consensual nudes appears to be growing, with many teens now recognizing the practice as a serious form of abuse.

YouTube Under Pressure to Restore Free Speech Δ1.70

YouTube is under scrutiny from Rep. Jim Jordan and the House Judiciary Committee over its handling of content moderation policies, with some calling on the platform to roll back fact-checking efforts that have been criticized as overly restrictive by conservatives. The move comes amid growing tensions between Big Tech companies and Republicans who accuse them of suppressing conservative speech. Meta has already faced similar criticism for bowing to government pressure to remove content from its platforms.

Leaks at the Top of Meta: CEO Mark Zuckerberg's Internal Comments Embroil Company in Scandal Δ1.70

Meta has fired roughly 20 employees who leaked confidential information about CEO Mark Zuckerberg's internal comments, with more firings expected. The company takes leaks seriously and is ramping up its efforts to find those responsible. A recent influx of stories detailing unannounced product plans and internal meetings led to a warning from Zuckerberg, which was subsequently leaked.

Tech Giant Google Discloses Scale of AI-Generated Terrorism Content Complaints Δ1.69

Google has informed Australian authorities it received more than 250 complaints globally over nearly a year that its artificial intelligence software was used to make deepfake terrorism material, highlighting the growing concern about AI-generated harm. The tech giant also reported dozens of user reports warning about its AI program Gemini being used to create child abuse material. The disclosures underscore the need for better guardrails around AI technology to prevent such misuse.

Meta Fires Around 20 Employees for Leaking Confidential Information Δ1.69

Meta has terminated approximately 20 employees for leaking confidential information, emphasizing the seriousness of internal policy violations. The company conducted an investigation following a rise in unauthorized disclosures, particularly regarding internal meetings and product plans. Meta has warned that more terminations may follow as they continue to address and prevent such breaches.

Big Tech Cripples Vaginal Health Businesses with Embarrassing Product Restrictions Δ1.69

Amazon's restrictive policies have led to the shutdown of businesses focused on addressing women's vaginal health issues, according to a new report. The company has allegedly flagged products as "potentially embarrassing or offensive" without clear guidelines or transparency. This move is exacerbating the lack of representation and support for women's reproductive health.

uBlock Origin Users Face Uncertainty After Chrome Removal Δ1.69

uBlock Origin, a popular ad-blocking extension, has been automatically disabled on some devices due to Google's shift to Manifest V3, the new extensions platform. This move comes as users are left wondering about their alternatives in the face of an impending deadline for removing all Manifest V2 extensions. Users who rely on uBlock Origin may need to consider switching to another browser or ad blocker.

UK Probes How TikTok, Reddit, and Imgur Protect Child Privacy Δ1.69

The U.K.'s Information Commissioner's Office (ICO) has initiated investigations into TikTok, Reddit, and Imgur regarding their practices for safeguarding children's privacy on their platforms. The inquiries focus on TikTok's handling of personal data from users aged 13 to 17, particularly concerning the exposure to potentially harmful content, while also evaluating Reddit and Imgur's age verification processes and data management. These probes are part of a larger effort by U.K. authorities to ensure compliance with data protection laws, especially following previous penalties against companies like TikTok for failing to obtain proper consent from younger users.

Spotify Acknowledges Premium Ad Bug Δ1.69

Spotify has acknowledged an issue that’s causing some of its paid Premium subscribers to encounter ads when trying to play music. In an X post published on Thursday by Spotify’s customer service account, the company said it’s looking into the problem and linked to its Community website where the issue has been documented by users over the past four weeks. The current issue has a different cause from the bug that had been previously reported by users.