Protecting Yourself From Tax-Related Scams in 2025: 10 Expert Tips
Almost half of people polled by McAfee say they or someone they know has received a text or phone call from a scammer pretending to be from the IRS or a state tax agency, highlighting the growing threat of tax-related scams. The scammers use various tactics, including social media posts, emails, text messages, and phone calls, to target potential victims, often with promises of fake refunds. To protect themselves, individuals can take steps such as filing their taxes early, monitoring their credit reports, watching out for phishing attacks, and being cautious of spoofed websites.
The escalating nature of tax scams underscores the importance of staying vigilant and up-to-date on cybersecurity best practices to prevent falling prey to these sophisticated schemes.
As AI-generated phishing emails and deepfake audios become more prevalent, it is crucial to develop effective strategies to detect and mitigate these types of threats.
Vishing attacks have skyrocketed, with CrowdStrike tracking at least six campaigns in which attackers pretended to be IT staffers to trick employees into sharing sensitive information. The security firm's 2025 Global Threat Report revealed a 442% increase in vishing attacks during the second half of 2024 compared to the first half. These attacks often use social engineering tactics, such as help desk social engineering and callback phishing, to gain remote access to computer systems.
As the number of vishing attacks continues to rise, it is essential for organizations to prioritize employee education and training on recognizing potential phishing attempts, as these attacks often rely on human psychology rather than technical vulnerabilities.
With the increasing sophistication of vishing tactics, what measures can individuals and organizations take to protect themselves from these types of attacks in the future, particularly as they become more prevalent in the digital landscape?
The E-ZPass smishing scam is targeting people with urgent toll demands, sending fraudulent text messages that threaten fines and license revocation if payment is not made promptly. The scammers aim to capture personal information by directing victims to a fake payment link, which can result in identity theft. While the messages claim a toll is owed, in reality it is the scammers who are seeking financial gain.
This scam highlights the vulnerability of individuals to phishing attacks, particularly those that exploit emotional triggers like fear and urgency.
What role do social media platforms play in disseminating and perpetuating smishing scams, making them even more challenging to prevent?
Norton 360 has introduced a new feature called Genie Scam Protection that leverages AI to spot scams in text messages, emails, and web browsing. The feature aims to protect users from embarrassing losses of money and personal information when reading scam messages or visiting malicious websites. Genie Scam Protection adds an extra layer of security to Norton 360's existing antivirus protections.
As the rise of phishing and smishing scams continues to evolve, it is essential for consumers to stay vigilant and up-to-date with the latest security measures to avoid falling victim to these types of cyber threats.
Will the widespread adoption of Genie Scam Protection lead to a reduction in reported scam losses, or will new and more sophisticated scams emerge to counter this new level of protection?
Vishing has become a prevalent tactic for cybercriminals, with a 442% increase in attacks during the second half of 2024 compared to the first half, according to CrowdStrike's latest report. The security firm tracked at least six campaigns involving attackers posing as IT staffers to convince employees to set up remote support sessions or share sensitive information. Help desk social engineering tactics are often used, where scammers create a sense of urgency to trick victims into divulging credentials.
The growing sophistication of vishing attacks highlights the need for employees and organizations to be vigilant in recognizing potential threats, particularly those that exploit human weakness rather than software vulnerabilities.
As vishing continues to surge, what steps can governments and regulatory bodies take to establish clear guidelines and enforcement mechanisms to protect consumers from these types of attacks?
YouTube has issued a warning to its users about an ongoing phishing scam that uses an AI-generated video of its CEO, Neal Mohan, as bait. The scammers are using stolen accounts to broadcast cryptocurrency scams, and the company is urging users not to click on any suspicious links or share their credentials with unknown parties. YouTube has emphasized that it will never contact users privately or share information through a private video.
This phishing campaign highlights the vulnerability of social media platforms to deepfake technology, which can be used to create convincing but fake videos.
How will the rise of AI-generated content impact the responsibility of tech companies to protect their users from such scams?
Deepfakes are claiming thousands of victims, with the average scam costing the victim £595, a new report claims. The report, from Hiya, details the rising risk of deepfake voice scams in the UK and abroad, noting that the rise of generative AI makes deepfakes more convincing than ever and lets attackers deploy them more frequently. AI lowers the barriers for criminals to commit fraud and makes scamming victims easier, faster, and more effective.
The alarming rate at which these scams are spreading highlights the urgent need for robust security measures and education campaigns to protect vulnerable individuals from falling prey to sophisticated social engineering tactics.
What role should regulatory bodies play in establishing guidelines and standards for the use of AI-powered technologies, particularly those that can be exploited for malicious purposes?
Google has introduced AI-powered features designed to enhance scam detection for both text messages and phone calls on Android devices. The new capabilities aim to identify suspicious conversations in real-time, providing users with warnings about potential scams while maintaining their privacy. As cybercriminals increasingly utilize AI to target victims, Google's proactive measures represent a significant advancement in user protection against sophisticated scams.
This development highlights the importance of leveraging technology to combat evolving cyber threats, potentially setting a standard for other tech companies to follow in safeguarding their users.
How effective will these AI-driven tools be in addressing the ever-evolving tactics of scammers, and what additional measures might be necessary to further enhance user security?
Google has introduced two AI-driven features for Android devices aimed at detecting and mitigating scam activity in text messages and phone calls. The scam detection for messages analyzes ongoing conversations for suspicious behavior in real-time, while the phone call feature issues alerts during potential scam calls, enhancing user protection. Both features prioritize user privacy and are designed to combat increasingly sophisticated scams that utilize AI technologies.
This proactive approach by Google reflects a broader industry trend towards leveraging artificial intelligence for consumer protection, raising questions about the future of cybersecurity in an era dominated by digital threats.
How effective will these AI-powered detection methods be in keeping pace with the evolving tactics of scammers?
Several strategies can help individuals avoid taxes on the interest earned from savings accounts, allowing them to retain more of their earnings for future use. Tax-advantaged accounts such as traditional IRAs, Roth IRAs, and health savings accounts (HSAs) provide opportunities for tax-deferred or tax-free growth, making them attractive options for long-term savings. Additionally, maximizing deductions and credits or employing tax-loss harvesting can further minimize tax liabilities on savings and investments.
Understanding the nuances of tax-advantaged accounts can empower savers to make informed decisions that enhance their financial well-being while navigating the complexities of the tax system.
What other innovative strategies could individuals explore to optimize their savings while minimizing tax obligations?
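To illustrate the value of tax-advantaged growth mentioned above, here is a rough sketch comparing a savings vehicle whose interest is taxed every year with a Roth-style account whose growth compounds tax-free; the 7% return and 24% tax rate are assumptions chosen purely for illustration, not advice or actual figures.

# Rough sketch: interest taxed yearly vs. tax-free compounding (Roth-style).
# The 7% return and 24% tax rate are illustrative assumptions only.
principal = 10_000
annual_return = 0.07
tax_rate = 0.24
years = 30

taxable = principal
for _ in range(years):
    gain = taxable * annual_return
    taxable += gain * (1 - tax_rate)      # interest taxed every year

tax_free = principal * (1 + annual_return) ** years   # growth untaxed

print(f"taxed yearly:    {taxable:,.0f}")
print(f"tax-free growth: {tax_free:,.0f}")

Even under these simplified assumptions, removing the annual tax drag compounds into a substantially larger balance over three decades.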
A company's executives received an extortion letter in the mail claiming to be from the BianLian ransomware group, demanding payment of $250,000 to $350,000 in Bitcoin within ten days. However, cybersecurity researchers have found that the claimed attacks are likely fake, and the letter's contents bear no resemblance to real BianLian ransom notes. Even so, the scammers are trying a new tactic by sending physical letters, potentially as part of an elaborate social engineering campaign.
This unexpected use of snail mail highlights the adaptability and creativity of cybercriminals, who will stop at nothing to extort money from their victims.
As cybersecurity threats continue to evolve, it's essential for organizations to remain vigilant and develop effective strategies to mitigate the impact of such campaigns.
YouTube creators have been targeted by scammers using AI-generated deepfake videos to trick them into giving up their login details. The fake videos, including one impersonating CEO Neal Mohan, claim there's a change in the site's monetization policy and urge recipients to click on links that lead to phishing pages designed to steal user credentials. YouTube has warned users about these scams, advising them not to click on unsolicited links or provide sensitive information.
The rise of deepfake technology is exposing a critical vulnerability in online security, where AI-generated content can be used to deceive even the most tech-savvy individuals.
As more platforms become vulnerable to deepfakes, how will governments and tech companies work together to develop robust countermeasures before these scams escalate further?
The new Genie Scam Protection feature leverages AI to spot scams that readers might otherwise believe are real, helping users avoid embarrassing losses of money and personal information when reading text messages, weighing enticing offers, or surfing the web. Norton has added this technology to all of its Norton 360 security software products, providing users with a safer online experience.
The integration of AI-powered scam detection into antivirus software is a significant step forward in protecting users from increasingly sophisticated cyber threats.
As the use of Genie Scam Protection becomes widespread, will it also serve as a model for other security software companies to develop similar features?
YouTube has been inundated with ads promising "1-2 ETH per day" for at least two months now, luring users into fake videos claiming to explain how to start making money with cryptocurrency. These ads often appear credible and are designed to trick users into installing malicious browser extensions or running suspicious code. The ads' use of AI-generated personas and obscure Google accounts lends them an air of legitimacy, making them a significant threat to online security.
As online scams continue to outpace law enforcement's ability to respond, it's becoming increasingly clear that the most vulnerable victims are not those with limited technical expertise, but rather those who have simply never been warned about these tactics.
Will regulators take steps to crack down on this type of ad targeting, or will Google continue to rely on its "verified" labels to shield itself from accountability?
A Redditor's post highlighted a friend's refusal of a $5,000 raise due to a misunderstanding of how tax brackets work, believing it would reduce their overall income. Despite attempts to clarify that only the income above the threshold would be taxed at the higher rate, the friend remained unconvinced, showcasing a common misconception about taxation. This exchange prompted widespread reactions on Reddit, with users sharing similar stories of individuals who mistakenly avoid raises for fear of higher taxes.
The incident reflects a broader issue of financial illiteracy that persists in society, emphasizing the need for better education around personal finance and taxation.
What strategies could be implemented to improve financial literacy and prevent such misconceptions about taxes in the future?
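To make the misunderstanding concrete, here is a minimal sketch of marginal taxation using illustrative bracket thresholds and rates (not actual IRS figures): only the slice of income that falls within each bracket is taxed at that bracket's rate, so accepting a raise can never reduce take-home pay.

# Minimal sketch of marginal (progressive) taxation. Brackets and rates
# below are illustrative placeholders, not actual IRS figures.
BRACKETS = [            # (upper bound of bracket, marginal rate)
    (11_000, 0.10),
    (44_725, 0.12),
    (95_375, 0.22),
    (float("inf"), 0.24),
]

def tax_owed(income: float) -> float:
    """Tax only the slice of income that falls inside each bracket."""
    tax, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if income <= lower:
            break
        tax += (min(income, upper) - lower) * rate
        lower = upper
    return tax

before, after = 95_000, 100_000   # income without and with the $5,000 raise
print(after - tax_owed(after) > before - tax_owed(before))   # True: take-home still rises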
Financial coach Bernadette Joy emphasizes the importance of selecting the right investment accounts and strategies to minimize tax liabilities, noting that many individuals unknowingly pay excess taxes on their investments. By adopting dollar-cost averaging and maximizing contributions to tax-advantaged accounts like 401(k)s and IRAs, investors can significantly reduce their taxable income and enhance their long-term wealth accumulation. Joy's insights serve as a crucial reminder for individuals to reassess their investment approaches to avoid costly mistakes.
This perspective highlights the often-overlooked intersection of investment strategy and tax efficiency, suggesting that financial literacy can have a profound impact on personal wealth.
What additional strategies can investors explore to further optimize their tax situation in an ever-changing financial landscape?
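As a toy illustration of dollar-cost averaging, the sketch below invests a fixed dollar amount at a series of made-up prices; because more shares are bought when prices are low, the average cost per share lands below the average market price.

# Toy dollar-cost-averaging illustration; the share prices are made up.
monthly_budget = 500
prices = [50, 40, 25, 40, 55, 60]      # hypothetical prices over six months

shares = sum(monthly_budget / p for p in prices)
invested = monthly_budget * len(prices)

print(f"average cost per share: {invested / shares:.2f}")
print(f"average market price:   {sum(prices) / len(prices):.2f}")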
In the realm of cybersecurity, the emphasis on strong passwords often overshadows the critical importance of protecting one's email address, which serves as a digital identity. Data breaches and the activities of data brokers expose email addresses to threats, making them gateways to personal information and potential scams. Utilizing email aliases can offer a practical solution to mitigate these risks, allowing individuals to maintain privacy while engaging online.
This perspective highlights the necessity of re-evaluating our online behaviors, treating personal information with the same caution as physical identity documents to enhance overall security.
What innovative measures can individuals adopt to further safeguard their digital identities in an increasingly interconnected world?
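One low-effort approach is "plus addressing," which several mail providers support (syntax and support vary, so treat this as an assumption to verify with your provider): a unique alias per site makes it obvious which service leaked or sold your address and lets you filter the fallout. A minimal sketch:

# Derive a unique per-site alias from a single mailbox via plus addressing.
# Support and syntax differ between providers; verify with yours first.
def alias_for(site: str, mailbox: str = "me@example.com") -> str:
    local, domain = mailbox.split("@", 1)
    tag = site.lower().replace(".", "-")
    return f"{local}+{tag}@{domain}"

print(alias_for("shop.example.net"))   # -> me+shop-example-net@example.com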
Nearly 1 million Windows devices were targeted by a sophisticated four-stage "malvertising" campaign in which malware was embedded in ads on popular streaming platforms. The malicious payload was hosted on platforms like GitHub and used Discord and Dropbox to spread, with infected devices losing login credentials, cryptocurrency, and other sensitive data. The attackers exploited browser files and cloud services like OneDrive to steal valuable information.
This massive "malvertising" spree highlights the vulnerability of online systems to targeted attacks, where even seemingly innocuous ads can be turned into malicious vectors.
What measures will tech companies and governments take to prevent such widespread exploitation in the future, and how can users better protect themselves against these types of attacks?
Google Messages is rolling out an AI feature designed to assist Android users in identifying and managing text message scams effectively. This new scam detection tool evaluates SMS, MMS, and RCS messages in real time, issuing alerts for suspicious patterns while preserving user privacy by processing data on-device. Additionally, the update includes features like live location sharing and enhancements for Pixel devices, aiming to improve overall user safety and functionality.
The introduction of AI in scam detection reflects a significant shift in how tech companies are addressing evolving scam tactics, emphasizing the need for proactive and intelligent solutions in user safety.
As scammers become increasingly sophisticated, what additional measures can tech companies implement to further protect users from evolving threats?
Using virtual cards can significantly enhance online shopping security by allowing consumers to manage their spending and limit exposure to fraud. Services like Privacy.com enable users to create virtual card numbers with specific spending limits, making it easier to handle subscriptions and free trials without the risk of unexpected charges. This method not only protects personal financial information but also offers peace of mind when dealing with unfamiliar vendors.
The rise of virtual cards reflects a broader shift towards consumer empowerment in financial transactions, potentially reshaping the landscape of online commerce and digital security.
What other innovative financial tools could emerge to further safeguard consumers in the evolving landscape of online shopping?
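At its core, the protection is simply a per-card spending cap enforced at authorization time. The sketch below is hypothetical: the VirtualCard class and its fields are invented for illustration and do not reflect Privacy.com's or any other provider's actual API.

# Hypothetical sketch of per-card spending-limit enforcement; not a real API.
from dataclasses import dataclass, field

@dataclass
class VirtualCard:
    limit: float                              # e.g., a $1 cap for a free trial
    charges: list[float] = field(default_factory=list)

    def authorize(self, amount: float) -> bool:
        if sum(self.charges) + amount > self.limit:
            return False                      # decline: would exceed the cap
        self.charges.append(amount)
        return True

trial_card = VirtualCard(limit=1.00)
print(trial_card.authorize(0.50))    # small verification charge: approved
print(trial_card.authorize(49.99))   # surprise renewal charge: declined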
Spam emails are an inevitable part of our online experience, but instead of simply deleting them, we should mark them as spam. Doing so teaches the spam filter to better recognize and catch unwanted email, reducing the amount of junk mail in our inboxes, and it ensures scam messages actually get reported rather than quietly discarded, protecting ourselves and others from potential harm. The benefit is clear, but it requires a change in behavior: from deleting spam to taking an active role in training the filters.
The shift towards marking spam emails has significant implications for the way we interact with our email clients and providers, forcing us to reevaluate our relationship with technology and the importance of user input in filtering out unwanted content.
As technology advances and new forms of spam and phishing tactics emerge, will our current methods of marking and reporting spam emails be sufficient to keep up with the evolving threat landscape?
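For a sense of why marking matters, here is a toy, naive-Bayes-style sketch of how each marked message can update word statistics that a filter scores against; production filters are far more sophisticated, so treat this purely as a conceptual illustration.

# Toy illustration: each message marked as spam (or not) updates per-word
# counts that a naive Bayes-style filter scores against. Conceptual only.
from collections import Counter
import math

spam_words, ham_words = Counter(), Counter()

def mark(text: str, is_spam: bool) -> None:
    (spam_words if is_spam else ham_words).update(text.lower().split())

def spam_score(text: str) -> float:
    score = 0.0
    for w in text.lower().split():
        p_spam = (spam_words[w] + 1) / (sum(spam_words.values()) + 1)
        p_ham = (ham_words[w] + 1) / (sum(ham_words.values()) + 1)
        score += math.log(p_spam / p_ham)
    return score

mark("claim your free prize now", is_spam=True)
mark("meeting notes attached for review", is_spam=False)
print(spam_score("free prize waiting"))      # positive: leans spam
print(spam_score("notes from the meeting"))  # negative: leans legitimate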
Consumer Reports assessed the leading voice cloning tools and found that four of the products did not have proper safeguards in place to prevent non-consensual cloning. The technology has many positive applications, but it can also be exploited for elaborate scams and fraud. To address these concerns, Consumer Reports recommends additional protections, such as unique scripts, watermarking AI-generated audio, and prohibiting audio containing scam phrases.
The current lack of regulation in the voice cloning industry may embolden malicious actors to use this technology for nefarious purposes.
How can policymakers balance the benefits of advanced technologies like voice cloning with the need to protect consumers from potential harm?
If you avoid exposing your regular email address, you reduce the risk of being spammed. Temporary email services offer a solution to this problem by providing short-term addresses that can be used on untrustworthy websites without compromising your primary inbox. These services allow users to receive verification codes or messages within a limited time frame before the address expires.
The use of temporary email services highlights the growing need for online security and anonymity in today's digital landscape, where users must balance convenience with data protection concerns.
Will the increasing popularity of temporary email services lead to more innovative solutions for protecting user privacy and safeguarding against malicious activities?
Recently, news surfaced about a massive collection of stolen data being sold on Telegram, harvested by infostealing malware. As summarized by Bleeping Computer, the dataset holds roughly 23 billion entries containing 493 million unique pairs of email addresses and website domains, with 284 million unique email addresses affected overall.
A concerning trend in the digital age is the rise of data breaches, where hackers exploit vulnerabilities to steal sensitive information, raising questions about individual accountability and responsibility.
What measures can individuals take to protect themselves from infostealing malware, and how effective are current security protocols in preventing such incidents?
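One practical step is checking whether an address already appears in known breach corpora. The sketch below assumes the Have I Been Pwned v3 "breachedaccount" endpoint, which requires an API key; verify the endpoint, headers, and rate limits against the current documentation before relying on it.

# Sketch: look up an email address in known breaches via Have I Been Pwned.
# Assumes the v3 "breachedaccount" endpoint and a valid API key; check the
# current documentation for exact headers, parameters, and rate limits.
import requests

def breaches_for(email: str, api_key: str) -> list[str]:
    resp = requests.get(
        f"https://haveibeenpwned.com/api/v3/breachedaccount/{email}",
        headers={"hibp-api-key": api_key, "user-agent": "breach-check-sketch"},
        timeout=10,
    )
    if resp.status_code == 404:      # address not found in any known breach
        return []
    resp.raise_for_status()
    return [b["Name"] for b in resp.json()]

# Example (requires a valid key):
# print(breaches_for("someone@example.com", api_key="YOUR_KEY"))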
The energy company EDF gave a man's mobile number to scammers, who stole over £40,000 from his savings account. The victim, Stephen, was targeted by fraudsters who obtained his name and email address, allowing them to access his accounts with multiple companies. Stephen reported the incident to Hertfordshire Police and Action Fraud, citing poor customer service as a contributing factor.
The incident highlights the need for better cybersecurity measures, particularly among energy companies and financial institutions, to prevent similar scams from happening in the future.
How can regulators ensure that companies are taking adequate steps to protect their customers' personal data and prevent such devastating losses?
Researchers have uncovered a network of fake identities created by North Korean cybercriminals, all looking for software development work in Asia and the West. The goal is to earn money to fund Pyongyang's ballistic missile and nuclear weapons development programs. By creating these fake personas, hackers are able to gain access to companies' back ends, steal sensitive data, or even get paid.
This latest tactic highlights the evolving nature of cybercrime, where attackers are becoming increasingly sophisticated in their methods of deception and social engineering.
Can companies and recruiters effectively identify and prevent such scams, especially in the face of rapidly growing online job boards and freelance platforms?