News Gist .News


YouTube Warns of Phishing Video Using Its CEO as Bait

YouTube has issued a warning to its users about an ongoing phishing scam that uses an AI-generated video of its CEO, Neal Mohan, as bait. The scammers are using stolen accounts to broadcast cryptocurrency scams, and the company is urging users not to click on any suspicious links or share their credentials with unknown parties. YouTube has emphasized that it will never contact users privately or share information through a private video.

See Also

Deepfakes Scam YouTube Creators with AI-Generated Videos Δ1.92

YouTube creators have been targeted by scammers using AI-generated deepfake videos to trick them into giving up their login details. The fake videos, including one impersonating CEO Neal Mohan, claim there's a change in the site's monetization policy and urge recipients to click on links that lead to phishing pages designed to steal user credentials. YouTube has warned users about these scams, advising them not to click on unsolicited links or provide sensitive information.

Google's Crypto Scam Ads Are a Threat to Online Security Δ1.83

YouTube has been inundated with ads promising "1-2 ETH per day" for at least two months now, luring users into fake videos claiming to explain how to start making money with cryptocurrency. These ads often appear credible and are designed to trick users into installing malicious browser extensions or running suspicious code. The ads' use of AI-generated personas and obscure Google accounts lends them an air of legitimacy, making them a significant threat to online security.

Tech Giant Google Discloses Scale of AI-Generated Terrorism Content Complaints Δ1.77

Google has informed Australian authorities it received more than 250 complaints globally over nearly a year that its artificial intelligence software was used to make deepfake terrorism material, highlighting the growing concern about AI-generated harm. The tech giant also reported dozens of user reports warning about its AI program Gemini being used to create child abuse material. The disclosures underscore the need for better guardrails around AI technology to prevent such misuse.

Accidentally Texting with Scammers? Google's AI Is Here to Stop the Chat Cold Δ1.76

Google has introduced AI-powered features designed to enhance scam detection for both text messages and phone calls on Android devices. The new capabilities aim to identify suspicious conversations in real-time, providing users with warnings about potential scams while maintaining their privacy. As cybercriminals increasingly utilize AI to target victims, Google's proactive measures represent a significant advancement in user protection against sophisticated scams.

YouTube Tightens Policies on Online Gambling Content Δ1.75

YouTube is tightening its policies on gambling content, prohibiting creators from verbally referring to unapproved services, displaying their logos, or linking to them in videos, effective March 19th. The new rules may also restrict online gambling content for users under 18 and remove content promising guaranteed returns. This update aims to protect the platform's community, particularly younger viewers.

Deepfake Scam Calls Are Costing British Victims Hundreds Each Time - Here's How to Stay Safe Δ1.75

The average scam costs its victim £595, the report claims. Deepfake voice scams are claiming thousands of victims, according to a new report from Hiya detailing the rising risk in the UK and abroad. The report notes that the rise of generative AI has made deepfakes more convincing than ever and lets attackers deploy them more frequently. AI lowers the barrier for criminals to commit fraud, making scams easier, faster, and more effective.

Android's AI Is Scanning Your Phone for Scam Activity Now in Two Ways Δ1.75

Google has introduced two AI-driven features for Android devices aimed at detecting and mitigating scam activity in text messages and phone calls. The scam detection for messages analyzes ongoing conversations for suspicious behavior in real-time, while the phone call feature issues alerts during potential scam calls, enhancing user protection. Both features prioritize user privacy and are designed to combat increasingly sophisticated scams that utilize AI technologies.

Protecting Yourself From Tax-Related Scams in 2025: 10 Expert Tips Δ1.75

Almost half of people polled by McAfee say they or someone they know has received a text or phone call from a scammer pretending to be from the IRS or a state tax agency, highlighting the growing threat of tax-related scams. The scammers use various tactics, including social media posts, emails, text messages, and phone calls, to target potential victims, often promising fake refunds. To protect themselves, individuals can take steps such as filing their taxes early, monitoring their credit reports, watching out for phishing attacks, and being cautious of spoofed websites.

Most AI Voice Cloning Tools Aren't Safe From Scammers Δ1.74

Consumer Reports assessed the leading voice cloning tools and found that four products did not have proper safeguards in place to prevent non-consensual cloning. The technology has many positive applications, but it can also be exploited for elaborate scams and fraud. To address these concerns, Consumer Reports recommends additional protections, such as unique scripts, watermarking AI-generated audio, and prohibiting audio containing scam phrases.
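The "unique scripts" safeguard can be illustrated with a toy sketch. The idea, as described in coverage of the Consumer Reports findings, is that a cloning service should require the speaker to read a one-time consent phrase, so that replaying an old recording of a victim is useless. The function names and the phrase format below are hypothetical; real services would transcribe the uploaded audio rather than compare raw text.

```python
import secrets

def issue_consent_script() -> str:
    """Generate a one-time phrase the speaker must read aloud.

    The random nonce makes each script unique, so a scammer cannot
    reuse a previously captured recording of the victim's voice.
    """
    nonce = secrets.token_hex(4)
    return f"I consent to cloning my voice. Code {nonce}."

def verify_consent(transcript: str, script: str) -> bool:
    """Check that the (hypothetical) transcript of the uploaded audio
    contains the issued script, after whitespace/case normalization."""
    norm = lambda s: " ".join(s.lower().split())
    return norm(script) in norm(transcript)
```

A replayed recording of ordinary speech would fail `verify_consent`, since it cannot contain the freshly issued nonce.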

Ransomware Dominates Cybersecurity Threats in 2024 Δ1.74

The modern-day cyber threat landscape has become increasingly crowded, with Advanced Persistent Threats (APTs) becoming a major concern for cybersecurity teams worldwide. Group-IB's recent research points to 2024 as a "year of cybercriminal escalation", with a 10% rise in ransomware compared to the previous year and a 22% rise in phishing attacks. AI is playing a "game-changing" role for both security teams and cybercriminals, though the technology has yet to fully mature.

Google Messages Uses AI to Detect Scam Texts and Simplifies Reporting Δ1.73

Google Messages is rolling out an AI feature designed to assist Android users in identifying and managing text message scams effectively. This new scam detection tool evaluates SMS, MMS, and RCS messages in real time, issuing alerts for suspicious patterns while preserving user privacy by processing data on-device. Additionally, the update includes features like live location sharing and enhancements for Pixel devices, aiming to improve overall user safety and functionality.
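Google has not published how its on-device classifier works, but the general approach of scoring a message locally for suspicious patterns can be sketched with a toy rule-based filter. Everything below (the patterns, weights, threshold, and function names) is a hypothetical illustration, not Google's implementation; a production system would use a trained model rather than a regex list.

```python
import re

# Hypothetical red-flag patterns loosely modeled on common scam texts,
# each with a weight reflecting how strong a signal it is.
SCAM_PATTERNS = [
    (r"(?i)\bgift\s*card\b", 2),
    (r"(?i)\bwire\s+(me|money|funds)\b", 2),
    (r"(?i)\bverify your (account|identity)\b", 2),
    (r"(?i)\burgent(ly)?\b", 1),
    (r"https?://\S+", 1),
]

def scam_score(message: str) -> int:
    """Sum the weights of every pattern that matches the message."""
    return sum(w for pattern, w in SCAM_PATTERNS if re.search(pattern, message))

def is_suspicious(message: str, threshold: int = 3) -> bool:
    # All scoring happens locally, mirroring the on-device privacy claim:
    # the message text never needs to leave the phone.
    return scam_score(message) >= threshold
```

The design point the articles emphasize is the last comment: because scoring runs on-device, flagging a conversation does not require uploading its contents.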

Protecting Against Scams with Norton 360's Genie Scam Protection Δ1.73

Norton 360 has introduced a new feature called Genie Scam Protection that leverages AI to spot scams in text messages, online surfing, and emails. This feature aims to protect users from embarrassing losses of money and personal information when reading scam messages or browsing malicious websites. The Genie Scam Protection adds an extra layer of security to Norton 360's existing antivirus software products.

YouTube Under Pressure to Restore Free Speech Δ1.73

YouTube is under scrutiny from Rep. Jim Jordan and the House Judiciary Committee over its handling of content moderation policies, with some calling on the platform to roll back fact-checking efforts that have been criticized as overly restrictive by conservatives. The move comes amid growing tensions between Big Tech companies and Republicans who accuse them of suppressing conservative speech. Meta has already faced similar criticism for bowing to government pressure to remove content from its platforms.

Norton 360 Genie Scam Protection Δ1.73

The new Genie Scam Protection feature leverages AI to spot scams that readers might think are real. This helps avoid embarrassing losses of money and personal information when reading text messages, enticing offers, and surfing the web. Norton has added this advanced technology to all its Norton 360 security software products, providing users with a safer online experience.

Hacked, Leaked, Exposed: Why You Should Never Use Stalkerware Apps Δ1.73

Stalkerware apps are notoriously creepy, unethical, and potentially illegal, putting users' data and loved ones at risk. These apps, often marketed to jealous partners, have a poor security track record: multiple makers have lost huge amounts of sensitive customer data in recent years. At least 24 stalkerware companies have been hacked or have leaked customer data online since 2017.

New Spyware Found to Be Snooping on Thousands of Android and iOS Users Δ1.73

A recent discovery has revealed that Spyzie, another stalkerware app similar to Cocospy and Spyic, is leaking sensitive data of millions of people without their knowledge or consent. The researcher behind the finding claims that exploiting these flaws is "quite simple" and that they haven't been addressed yet. This highlights the ongoing threat posed by spyware apps, which are often marketed as legitimate monitoring tools but operate in a grey zone.

The Rise of Fake Spyware Apps in the Play Store Δ1.72

Fake apps laced with spyware have slipped past Google's security measures and hide in plain sight on the Google Play Store. These malicious apps can cause immense damage to users' devices and personal data, including data theft, financial fraud, malware infections, ransomware attacks, and rootkit vulnerabilities. As a result, it is crucial for smartphone users to take precautions to spot these fake apps and protect themselves from potential harm.

Vishing Attacks Surged 442% Last Year: How to Protect Yourself Δ1.72

Vishing attacks have skyrocketed, with CrowdStrike tracking at least six campaigns in which attackers pretended to be IT staffers to trick employees into sharing sensitive information. The security firm's 2025 Global Threat Report revealed a 442% increase in vishing attacks during the second half of 2024 compared to the first half. These attacks often use social engineering tactics, such as help desk social engineering and callback phishing, to gain remote access to computer systems.

Fake LinkedIn Emails Contain Malware, Warns Security Expert Δ1.72

LinkedIn's InMail notification emails have been spoofed by cybercriminals to distribute malware. The emails are laced with phishing tactics, including fake companies, images, and notifications from legitimate platforms. Researchers at Cofense Intelligence warn that the attackers are using a ConnectWise Remote Access Trojan (RAT) to gain unauthorized control over systems.

"Fake Nudes: Youths Confront Harms" Δ1.72

Teens increasingly traumatized by deepfake nudes clearly understand that the AI-generated images are harmful. A surprising recent Thorn survey suggests there's growing consensus among young people under 20 that making and sharing fake nudes is obviously abusive. The stigma surrounding creating and distributing non-consensual nudes appears to be shifting, with many teens now recognizing it as a serious form of abuse.

The Cybersecurity Threat Landscape Becomes Increasingly Elusive Δ1.72

A cyber-attack like the one in Zero Day is improbable. The average Netflix viewer isn’t familiar with the technical details of how cyberattacks are carried out, but they’re acutely aware of their growing frequency and severity. Millions of Americans have had their data exposed in attacks, and while they may not fully understand what ransomware is, they know it isn’t good. While the critical reception of Zero Day remains to be seen, one thing is certain: viewers will debate the plausibility of the events unfolding on their screens.

Worried About DeepSeek? Well, Google Gemini Collects Even More of Your Personal Data Δ1.71

Google Gemini stands out as the most data-hungry service in the analysis, collecting 22 distinct types of data, including highly sensitive data like precise location, user content, the device's contacts list, browsing history, and more. The analysis also found that 30% of the analyzed chatbots share user data with third parties, potentially leading to targeted advertising or spam calls. DeepSeek, while not the worst offender, collects 11 unique types of data, including user input like chat history, raising concerns under GDPR rules.

Agentic AI Risks User Privacy Δ1.71

Signal President Meredith Whittaker warned Friday that agentic AI could come with a risk to user privacy. Speaking onstage at the SXSW conference in Austin, Texas, she referred to the use of AI agents as “putting your brain in a jar,” and cautioned that this new paradigm of computing — where AI performs tasks on users’ behalf — has a “profound issue” with both privacy and security. Whittaker explained how AI agents would need access to users' web browsers, calendars, credit card information, and messaging apps to perform tasks.

Malware Botnet Spreads Across 1.6 Million Android TVs Δ1.71

The Vo1d botnet has infected over 1.6 million Android TVs, with its size fluctuating daily. The malware, designed as an anonymous proxy, redirects criminal traffic and blends it with legitimate consumer traffic. Researchers warn that Android TV users should check their installed apps, scan for suspicious activity, and perform a factory reset to clean up the device.

Detecting Deception in Digital Content Δ1.71

SurgeGraph has introduced its AI Detector tool to differentiate between human-written and AI-generated content, providing a clear breakdown of results at no cost. The AI Detector leverages advanced technologies like NLP, deep learning, neural networks, and large language models to assess linguistic patterns with reported accuracy rates of 95%. This innovation has significant implications for the content creation industry, where authenticity and quality are increasingly crucial.
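SurgeGraph has not published its detection pipeline, but AI-text detectors of this kind commonly combine model-based features with simple statistical signals such as "burstiness", the variability of sentence length, which tends to be higher in human prose than in much machine-generated text. The sketch below illustrates only that one generic signal; it is not SurgeGraph's method, and on its own it would be far from the reported 95% accuracy.

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Population standard deviation of sentence lengths in words.

    A higher value means more varied sentence lengths, one weak
    signal (among many) that text was written by a human.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0  # not enough sentences to measure variation
    return statistics.pstdev(lengths)
```

A real detector would feed signals like this, alongside features from neural language models, into a trained classifier rather than thresholding any single score.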