Protecting Against Scams with Norton 360's Genie Scam Protection
Norton 360 has introduced a new feature called Genie Scam Protection that leverages AI to spot scams in text messages, web browsing, and email. The feature aims to protect users from losing money and personal information to scam messages and malicious websites, adding an extra layer of security on top of Norton 360's existing antivirus products.
As phishing and smishing scams continue to evolve, consumers must stay vigilant and keep up with the latest security measures to avoid falling victim to these types of cyber threats.
Will the widespread adoption of Genie Scam Protection lead to a reduction in reported scam losses, or will new and more sophisticated scams emerge to counter this new level of protection?
Google has introduced AI-powered features designed to enhance scam detection for both text messages and phone calls on Android devices. The new capabilities aim to identify suspicious conversations in real time, warning users about potential scams while maintaining their privacy. As cybercriminals increasingly use AI to target victims, Google's proactive measures represent a significant advancement in protecting users against sophisticated scams.
This development highlights the importance of leveraging technology to combat evolving cyber threats, potentially setting a standard for other tech companies to follow in safeguarding their users.
How effective will these AI-driven tools be in addressing the ever-evolving tactics of scammers, and what additional measures might be necessary to further enhance user security?
Google has introduced two AI-driven features for Android devices aimed at detecting and mitigating scam activity in text messages and phone calls. The scam detection for messages analyzes ongoing conversations for suspicious behavior in real time, while the phone call feature issues alerts during potential scam calls, enhancing user protection. Both features prioritize user privacy and are designed to combat increasingly sophisticated scams that utilize AI technologies.
This proactive approach by Google reflects a broader industry trend towards leveraging artificial intelligence for consumer protection, raising questions about the future of cybersecurity in an era dominated by digital threats.
How effective will these AI-powered detection methods be in keeping pace with the evolving tactics of scammers?
Google Messages is rolling out an AI feature designed to assist Android users in identifying and managing text message scams effectively. This new scam detection tool evaluates SMS, MMS, and RCS messages in real time, issuing alerts for suspicious patterns while preserving user privacy by processing data on-device. Additionally, the update includes features like live location sharing and enhancements for Pixel devices, aiming to improve overall user safety and functionality.
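Google has not published the detection model behind this feature, but the on-device design can be illustrated with a minimal heuristic sketch in Python. The patterns, weights, and threshold below are hypothetical stand-ins for a trained classifier, not Google's actual rules:

```python
import re

# Hypothetical red-flag patterns; a production system would use a trained
# on-device model rather than hand-written rules.
SCAM_PATTERNS = [
    (re.compile(r"\b(urgent|act now|final notice)\b", re.I), 2),
    (re.compile(r"\b(gift card|wire transfer|crypto)\b", re.I), 2),
    (re.compile(r"verify your (account|identity)", re.I), 3),
    (re.compile(r"https?://\S+", re.I), 1),  # links in unsolicited texts
]

ALERT_THRESHOLD = 4  # assumed cutoff, for illustration only

def score_message(text: str) -> int:
    """Sum the weights of every red-flag pattern found in the message."""
    return sum(weight for pattern, weight in SCAM_PATTERNS
               if pattern.search(text))

def should_warn(text: str) -> bool:
    """All analysis happens locally; the message never leaves the device."""
    return score_message(text) >= ALERT_THRESHOLD

print(should_warn("URGENT: verify your account at http://example.test"))  # True
```

The key property mirrored here is that scoring runs entirely on the device, so message content need not be uploaded for analysis.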
The introduction of AI in scam detection reflects a significant shift in how tech companies are addressing evolving scam tactics, emphasizing the need for proactive and intelligent solutions in user safety.
As scammers become increasingly sophisticated, what additional measures can tech companies implement to further protect users from evolving threats?
Google Cloud has launched its AI Protection security suite, designed to identify, assess, and protect AI assets from vulnerabilities across various platforms. This suite aims to enhance security for businesses as they navigate the complexities of AI adoption, providing a centralized view of AI-related risks and threat management capabilities. With features such as AI Inventory Discovery and Model Armor, Google Cloud is positioning itself as a leader in securing AI workloads against emerging threats.
This initiative highlights the increasing importance of robust security measures in the rapidly evolving landscape of AI technologies, where the stakes for businesses are continually rising.
How will the introduction of AI Protection tools influence the competitive landscape of cloud service providers in terms of security offerings?
Almost half of the people polled by McAfee say they or someone they know has received a text or phone call from a scammer pretending to be from the IRS or a state tax agency, highlighting the growing threat of tax-related scams. The scammers use social media posts, emails, text messages, and phone calls to target potential victims, often promising fake refunds. To protect themselves, individuals can file their taxes early, monitor their credit reports, watch out for phishing attacks, and be cautious of spoofed websites.
The escalating nature of tax scams underscores the importance of staying vigilant and up-to-date on cybersecurity best practices to prevent falling prey to these sophisticated schemes.
As AI-generated phishing emails and deepfake audio become more prevalent, it is crucial to develop effective strategies to detect and mitigate these threats.
Microsoft's AI assistant Copilot will no longer provide guidance on how to activate pirated versions of Windows 11. The update aims to curb digital piracy by ensuring users are aware that it is both illegal and against Microsoft's user agreement. As a result, if asked about pirating software, Copilot now responds that it cannot assist with such actions.
This move highlights the evolving relationship between technology companies and piracy, where AI-powered tools must be reined in to prevent exploitation.
Will this update lead to increased scrutiny on other tech giants' AI policies, forcing them to reassess their approaches to combating digital piracy?
Deepfake voice scams are claiming thousands of victims, with the average scam costing the victim £595, according to a new report from Hiya detailing the rising risk in the UK and abroad. The report notes that the rise of generative AI makes deepfakes more convincing than ever and lets attackers deploy them more frequently. AI lowers the barriers for criminals to commit fraud, making scamming victims easier, faster, and more effective.
The alarming rate at which these scams are spreading highlights the urgent need for robust security measures and education campaigns to protect vulnerable individuals from falling prey to sophisticated social engineering tactics.
What role should regulatory bodies play in establishing guidelines and standards for the use of AI-powered technologies, particularly those that can be exploited for malicious purposes?
The modern cyber threat landscape has become increasingly crowded, with Advanced Persistent Threats (APTs) a major concern for cybersecurity teams worldwide. Group-IB's recent research points to 2024 as a 'year of cybercriminal escalation', with a 10% rise in ransomware compared to the previous year and a 22% rise in phishing attacks. AI is playing a "game-changing" role for security teams and cybercriminals alike, though the technology has yet to fully mature.
Vishing attacks have skyrocketed, with CrowdStrike tracking at least six campaigns in which attackers posed as IT staffers to trick employees into sharing sensitive information. The security firm's 2025 Global Threat Report revealed a 442% increase in vishing attacks during the second half of 2024 compared to the first half. These attacks often rely on tactics such as help desk social engineering and callback phishing to gain remote access to computer systems.
As the number of vishing attacks continues to rise, it is essential for organizations to prioritize employee education and training on recognizing potential phishing attempts, as these attacks often rely on human psychology rather than technical vulnerabilities.
With the increasing sophistication of vishing tactics, what measures can individuals and organizations take to protect themselves from these types of attacks in the future, particularly as they become more prevalent in the digital landscape?
Artificial intelligence (AI) is increasingly used by cyberattackers, with 78% of IT executives fearing these threats, up 5% from 2024. Businesses are not unprepared, however: almost two-thirds of respondents said they are "adequately prepared" to defend against AI-powered threats. Even so, a shortage of personnel and talent in the field is hindering efforts to keep up with the evolving threat landscape.
The growing sophistication of AI-powered cyberattacks highlights the urgent need for businesses to invest in AI-driven cybersecurity solutions to stay ahead of threats.
How will regulatory bodies address the lack of standardization in AI-powered cybersecurity tools, potentially creating a Wild West scenario for businesses to navigate?
Google is working on a new feature called Shielded Email, which aims to protect users from unwanted emails by creating an alias address when signing up for new accounts. This feature uses Google's autofill features to automatically forward emails sent to the alias address to the user's main email address, allowing them to easily block or unsubscribe from unwanted emails. By using a separate alias address, Shielded Email provides a buffer between users and service providers, making it harder for bad actors to track their online activity.
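Google has not documented Shielded Email's internals, but the alias-and-forward mechanism it describes can be sketched roughly as follows. The class, domain, and storage scheme here are illustrative assumptions, not Google's implementation:

```python
import secrets

class AliasDirectory:
    """Illustrative sketch of an alias-and-forward service: one disposable
    address per sign-up, all mail relayed to the real inbox until the
    alias is disabled."""

    def __init__(self, real_address: str, domain: str = "alias.example.com"):
        self.real_address = real_address
        self.domain = domain
        self.aliases: dict[str, dict] = {}

    def create_alias(self, service: str) -> str:
        """Mint a fresh random alias to hand to a new service."""
        alias = f"{secrets.token_hex(6)}@{self.domain}"
        self.aliases[alias] = {"service": service, "active": True}
        return alias

    def deliver(self, to_alias: str, message: str) -> str | None:
        """Forward mail to the real inbox, or drop it if the alias is blocked."""
        entry = self.aliases.get(to_alias)
        if entry and entry["active"]:
            return f"forward to {self.real_address}: {message}"
        return None  # unknown or disabled alias: mail is dropped

    def block(self, to_alias: str) -> None:
        """One-click unsubscribe: disable an alias a service has abused."""
        if to_alias in self.aliases:
            self.aliases[to_alias]["active"] = False

directory = AliasDirectory("user@gmail.com")
shop_alias = directory.create_alias("shop.example")
print(directory.deliver(shop_alias, "Your receipt"))  # forwarded
directory.block(shop_alias)
print(directory.deliver(shop_alias, "Spam blast"))    # None
```

The privacy benefit comes from the indirection: each service only ever sees its own alias, so a leaked or sold address can be cut off without touching the real inbox.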
The introduction of Shielded Email highlights the growing concern over digital privacy and security, as more people become aware of the potential risks associated with sharing personal information across multiple platforms.
How will this new feature impact the overall trend of users taking steps to protect their digital footprints, particularly in light of increasing concerns about data collection and online surveillance?
Google's latest Pixel Drop update for March brings significant enhancements to Pixel phones, including an AI-driven scam detection feature for calls and the ability to share live locations with friends. The update also introduces new functionalities for Pixel Watches and Android devices, such as improved screenshot management and enhanced multimedia capabilities with the Gemini Live assistant. These updates reflect Google's commitment to integrating advanced AI technologies while improving user connectivity and safety.
The incorporation of AI to tackle issues like scam detection highlights the tech industry's increasing reliance on machine learning to enhance daily user experiences, potentially reshaping how consumers interact with their devices.
How might the integration of AI in everyday communication tools influence user privacy and security perceptions in the long term?
YouTube creators have been targeted by scammers using AI-generated deepfake videos to trick them into giving up their login details. The fake videos, including one impersonating CEO Neal Mohan, claim there's a change in the site's monetization policy and urge recipients to click on links that lead to phishing pages designed to steal user credentials. YouTube has warned users about these scams, advising them not to click on unsolicited links or provide sensitive information.
The rise of deepfake technology is exposing a critical vulnerability in online security, where AI-generated content can be used to deceive even the most tech-savvy individuals.
As more platforms become vulnerable to deepfakes, how will governments and tech companies work together to develop robust countermeasures before these scams escalate further?
Microsoft's Threat Intelligence team has identified a new tactic from Chinese threat actor Silk Typhoon: targeting "common IT solutions" such as cloud applications and remote management tools to gain access to victim systems. The group has been observed attacking a wide range of sectors, including IT services and infrastructure, healthcare, legal services, defense, and government agencies. By exploiting zero-day vulnerabilities in edge devices, Silk Typhoon has established itself as one of the Chinese threat actors with the "largest targeting footprints".
The use of cloud applications by businesses may inadvertently provide a backdoor for hackers like Silk Typhoon to gain access to sensitive data, highlighting the need for robust security measures.
What measures can be taken by governments and private organizations to protect their critical infrastructure from such sophisticated cyber threats?
Google has informed Australian authorities it received more than 250 complaints globally over nearly a year that its artificial intelligence software was used to make deepfake terrorism material, highlighting the growing concern about AI-generated harm. The tech giant also reported dozens of user reports warning about its AI program Gemini being used to create child abuse material. The disclosures underscore the need for better guardrails around AI technology to prevent such misuse.
As the use of AI-generated content becomes increasingly prevalent, it is crucial for companies and regulators to develop effective safeguards that can detect and mitigate such harm before it spreads.
How will governments balance the need for innovation with the requirement to ensure that powerful technologies like AI are not used to facilitate hate speech or extremist ideologies?
Threat actors are exploiting misconfigured Amazon Web Services (AWS) environments to bypass email security and launch phishing campaigns that land in people's inboxes. Cybersecurity researchers have identified a group using this tactic, known as JavaGhost, which has been active since 2019 and has evolved its tactics to evade detection. The attackers use AWS access keys to gain initial access to the environment and set up temporary accounts to send phishing emails that bypass email protections.
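Because intrusions like JavaGhost's begin with exposed, long-lived AWS access keys, one practical countermeasure is auditing for stale keys. A minimal sketch using boto3, with the 90-day rotation threshold as an assumed policy rather than an AWS default:

```python
from datetime import datetime, timezone

import boto3

MAX_KEY_AGE_DAYS = 90  # assumed rotation policy, for illustration

iam = boto3.client("iam")
now = datetime.now(timezone.utc)

# Walk every IAM user and flag long-lived access keys, the kind of
# credential JavaGhost-style intrusions abuse for initial entry.
for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        keys = iam.list_access_keys(UserName=user["UserName"])
        for key in keys["AccessKeyMetadata"]:
            age = (now - key["CreateDate"]).days
            last = iam.get_access_key_last_used(
                AccessKeyId=key["AccessKeyId"]
            )["AccessKeyLastUsed"].get("LastUsedDate")
            if key["Status"] == "Active" and age > MAX_KEY_AGE_DAYS:
                print(f"{user['UserName']}: key {key['AccessKeyId']} is "
                      f"{age} days old, last used {last or 'never'}")
```

A report like this would also surface the temporary sending identities such attackers create, since they appear as credentials nobody on the team recognizes.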
This type of attack highlights the importance of proper AWS configuration and monitoring in preventing similar breaches, as misconfigured environments can provide an entry point for attackers.
As more organizations move their operations to the cloud, the risk of such attacks increases, making it essential for companies to prioritize security and incident response training.
Consumer Reports assessed leading voice cloning tools and found that four products did not have proper safeguards in place to prevent non-consensual cloning. The technology has many positive applications, but it can also be exploited for elaborate scams and fraud. To address these concerns, Consumer Reports recommends additional protections, such as unique scripts, watermarking AI-generated audio, and prohibiting audio containing scam phrases.
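One of those recommended safeguards, refusing to synthesize audio whose script contains common scam phrases, could look roughly like the sketch below. The blocklist is a hypothetical sample, not Consumer Reports' recommended set:

```python
# Hypothetical blocklist; a real safeguard would be far more extensive
# and combined with consent verification and audio watermarking.
SCAM_PHRASES = [
    "wire the money",
    "gift card",
    "grandma, i'm in trouble",
    "don't tell anyone",
    "send bail money",
]

def screen_script(script: str) -> list[str]:
    """Return any scam phrases found in a requested cloning script."""
    lowered = script.lower()
    return [phrase for phrase in SCAM_PHRASES if phrase in lowered]

def allow_synthesis(script: str) -> bool:
    """Refuse to generate audio when the script trips the blocklist."""
    hits = screen_script(script)
    if hits:
        print(f"Blocked: script contains scam phrases {hits}")
        return False
    return True

allow_synthesis("Grandma, I'm in trouble, please send bail money")  # blocked
```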
The current lack of regulation in the voice cloning industry may embolden malicious actors to use this technology for nefarious purposes.
How can policymakers balance the benefits of advanced technologies like voice cloning with the need to protect consumers from potential harm?
YouTube has issued a warning to its users about an ongoing phishing scam that uses an AI-generated video of its CEO, Neal Mohan, as bait. The scammers are using stolen accounts to broadcast cryptocurrency scams, and the company is urging users not to click on any suspicious links or share their credentials with unknown parties. YouTube has emphasized that it will never contact users privately or share information through a private video.
This phishing campaign highlights the vulnerability of social media platforms to deepfake technology, which can be used to create convincing but fake videos.
How will the rise of AI-generated content impact the responsibility of tech companies to protect their users from such scams?
Layer 7 Web DDoS attacks have surged by 550% in 2024, driven by the increasing accessibility of AI tools that enable even novice hackers to launch complex campaigns. Financial institutions and transportation services reported an almost 400% increase in DDoS attack volume, with the EMEA region bearing the brunt of these incidents. The evolving threat landscape necessitates more dynamic defense strategies as organizations struggle to differentiate between legitimate and malicious traffic.
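Separating legitimate from malicious Layer 7 traffic is the hard part; a common first line of defense is per-client rate limiting, sketched below with a sliding window. The window size and request budget are illustrative assumptions, and real Web DDoS defenses layer behavioral analysis on top of simple limits:

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10   # assumed window, for illustration
MAX_REQUESTS = 50     # assumed per-client budget within the window

_history: dict[str, deque] = defaultdict(deque)

def allow_request(client_id: str, now: float | None = None) -> bool:
    """Sliding-window limiter: admit a request only if the client has
    made fewer than MAX_REQUESTS in the last WINDOW_SECONDS."""
    now = time.monotonic() if now is None else now
    window = _history[client_id]
    # Evict timestamps that have aged out of the window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_REQUESTS:
        return False  # candidate for a challenge (CAPTCHA, JS proof) or block
    window.append(now)
    return True
```

Pure rate limits fail against distributed attacks where each bot stays under budget, which is why the report's call for more dynamic defenses matters.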
This alarming trend highlights the urgent need for enhanced cybersecurity measures, particularly as AI continues to transform the tactics employed by cybercriminals.
What innovative approaches can organizations adopt to effectively counter the growing sophistication of DDoS attacks in the age of AI?
YouTube has been inundated with ads promising "1-2 ETH per day" for at least two months now, luring users into fake videos claiming to explain how to start making money with cryptocurrency. These ads often appear credible and are designed to trick users into installing malicious browser extensions or running suspicious code. The ads' use of AI-generated personas and obscure Google accounts adds to their legitimacy, making them a significant threat to online security.
As online scams continue to outpace law enforcement's ability to respond, it is becoming increasingly clear that the most vulnerable victims are not those with limited technical expertise, but those who have simply never been warned about these tactics.
Will regulators take steps to crack down on this type of ad targeting, or will Google continue to rely on its "verified" labels to shield itself from accountability?
Google's latest Pixel Drop introduces significant enhancements for both Pixel and non-Pixel devices, including AI-powered scam detection for text messages and expanded satellite messaging capabilities. The Pixel 9 series gains new features like simultaneous video recording from multiple cameras, enhancing mobile content creation. Additionally, the AI scam detection feature will be available on all supported Android devices, providing broader protection against fraudulent communications.
This update illustrates Google's commitment to enhancing user experience through innovative technology while also addressing security concerns across a wider range of devices.
Will the expansion of these features to non-Pixel devices encourage more users to adopt Android, or will it create a divide between Pixel and other Android experiences?
Nearly 1 million Windows devices were targeted by a sophisticated four-stage "malvertising" campaign in which malware was embedded in ads on popular streaming platforms. The malicious payload was hosted on platforms like GitHub and spread via Discord and Dropbox, with infected devices losing login credentials, cryptocurrency, and other sensitive data. The attackers exploited browser files and cloud services like OneDrive to steal valuable information.
This massive "malvertising" spree highlights the vulnerability of online systems to targeted attacks, where even seemingly innocuous ads can be turned into malicious vectors.
What measures will tech companies and governments take to prevent such widespread exploitation in the future, and how can users better protect themselves against these types of attacks?
A sophisticated, previously undocumented botnet named PolarEdge has been compromising Cisco, ASUS, QNAP, and Synology devices, expanding around the world for more than a year across a range of network equipment. The botnet's goal is unknown at this time, but experts warn that it poses a significant threat to global internet security.
As network device vulnerabilities continue to rise, the increasing sophistication of cyber threats underscores the need for robust cybersecurity measures and regular software updates.
Will governments and industries be able to effectively counter this growing threat by establishing standardized protocols for vulnerability reporting and response?
As more people turn to AI chatbots like ChatGPT to look things up on the internet, Scrunch AI wants to help enterprises better prepare for a world in which more AI bots and agents visit their website than humans do. Its platform helps companies audit and optimize how they appear on various AI search platforms and gives them better visibility into how AI web crawlers interact with their online information. By identifying information gaps and solving inaccuracies, Scrunch AI can help companies improve the quality of their online presence.
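Scrunch AI has not published how its platform works, but the crawler-visibility piece can be approximated by scanning server access logs for known AI user agents. The agent list below is a small sample rather than an exhaustive set, and the regex assumes the common combined log format:

```python
import re
from collections import Counter

# User-agent substrings of some known AI crawlers (a sample, not exhaustive).
AI_AGENTS = ["GPTBot", "ClaudeBot", "PerplexityBot", "CCBot"]

# Combined log format: ... "GET /path HTTP/1.1" 200 123 "referer" "user-agent"
LOG_LINE = re.compile(
    r'"(?:GET|POST) (?P<path>\S+) [^"]*" \d+ \S+ "[^"]*" "(?P<ua>[^"]*)"'
)

def ai_crawl_report(log_path: str) -> Counter:
    """Count which pages each AI crawler fetched, per the access log."""
    hits: Counter = Counter()
    with open(log_path) as log:
        for line in log:
            match = LOG_LINE.search(line)
            if not match:
                continue
            for agent in AI_AGENTS:
                if agent in match["ua"]:
                    hits[(agent, match["path"])] += 1
    return hits

for (agent, path), count in ai_crawl_report("access.log").most_common(10):
    print(f"{agent:15} {count:5}  {path}")
```

A report like this shows which pages AI crawlers actually read, which is the raw material for auditing how a company appears in AI search results.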
The emphasis on monitoring the customer journey by multiple AI agents may lead to a new standard for website optimization, where companies must ensure that their online content is consistent across various interfaces and platforms.
How will the increasing reliance on AI search impact the role of human webmasters in maintaining websites and ensuring accurate online information?
AppLovin Corporation (NASDAQ:APP) is pushing back against allegations that its AI-powered ad platform is cannibalizing revenue from advertisers, while the company's latest advancements in natural language processing and creative insights are being closely watched by investors. The recent release of OpenAI's GPT-4.5 model has also put the spotlight on the competitive landscape of AI stocks. As companies like Tencent launch their own AI models to compete with industry giants, the stakes are high for those who want to stay ahead in this rapidly evolving space.
The rapid pace of innovation in AI advertising platforms is raising questions about the sustainability of these business models and the long-term implications for investors.
What role will regulatory bodies play in shaping the future of AI-powered advertising and ensuring that consumers are protected from potential exploitation?