Web DDoS Attacks See Major Surge as AI Allows More Powerful Attacks
Layer 7 Web DDoS attacks have surged by 550% in 2024, driven by the increasing accessibility of AI tools that enable even novice hackers to launch complex campaigns. Financial institutions and transportation services reported an almost 400% increase in DDoS attack volume, with the EMEA region bearing the brunt of these incidents. The evolving threat landscape necessitates more dynamic defense strategies as organizations struggle to differentiate between legitimate and malicious traffic.
This alarming trend highlights the urgent need for enhanced cybersecurity measures, particularly as AI continues to transform the tactics employed by cybercriminals.
What innovative approaches can organizations adopt to effectively counter the growing sophistication of DDoS attacks in the age of AI?
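Defenses against the Layer 7 floods described above typically start with per-client request budgets before layering on behavioral analysis. As a rough illustration of that baseline (not a description of any vendor's product), the sketch below implements a per-client sliding-window rate limiter; the window size, threshold, and choice of client key are illustrative assumptions.

```python
import time
from collections import defaultdict, deque


class SlidingWindowLimiter:
    """Illustrative per-client rate limiter for Layer 7 traffic shaping.

    Thresholds are placeholders; real deployments tune them per endpoint
    and combine rate limits with behavioral and reputation signals.
    """

    def __init__(self, max_requests: int = 100, window_seconds: float = 10.0):
        self.max_requests = max_requests
        self.window_seconds = window_seconds
        self._hits: dict[str, deque] = defaultdict(deque)

    def allow(self, client_key: str) -> bool:
        """Return True if the request fits inside the client's current window."""
        now = time.monotonic()
        hits = self._hits[client_key]
        # Drop timestamps that have fallen out of the window.
        while hits and now - hits[0] > self.window_seconds:
            hits.popleft()
        if len(hits) >= self.max_requests:
            return False  # Over budget: throttle or serve a challenge.
        hits.append(now)
        return True


# Example: key on client IP (a real system might also use TLS fingerprints or session IDs).
limiter = SlidingWindowLimiter(max_requests=100, window_seconds=10)
if not limiter.allow("203.0.113.7"):
    print("throttle or serve a challenge instead of the origin response")
```

A fixed threshold alone cannot separate legitimate from malicious traffic, which is exactly why the report above calls for more dynamic defenses on top of such baselines.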
The modern-day cyber threat landscape has become increasingly crowded, with Advanced Persistent Threats (APTs) becoming a major concern for cybersecurity teams worldwide. Group-IB's recent research points to 2024 as a 'year of cybercriminal escalation', with a 10% rise in ransomware compared to the previous year and a 22% rise in phishing attacks. AI is playing a 'game-changing' role for both security teams and cybercriminals, but the technology has not yet reached full maturity on either side.
The parallel rise in ransomware and phishing suggests that AI is so far amplifying familiar attack methods rather than replacing them, putting pressure on defenders to mature their own use of the technology just as quickly.
How quickly can security teams close the AI maturity gap before attackers turn these tools into a decisive advantage?
Vishing attacks have skyrocketed, with CrowdStrike tracking at least six campaigns in which attackers pretended to be IT staffers to trick employees into sharing sensitive information. The security firm's 2025 Global Threat Report revealed a 442% increase in vishing attacks during the second half of 2024 compared to the first half. These attacks often use social engineering tactics, such as help desk social engineering and callback phishing, to gain remote access to computer systems.
As the number of vishing attacks continues to rise, it is essential for organizations to prioritize employee education and training on recognizing vishing and other social engineering attempts, as these attacks rely on human psychology rather than technical vulnerabilities.
With the increasing sophistication of vishing tactics, what measures can individuals and organizations take to protect themselves from these types of attacks in the future, particularly as they become more prevalent in the digital landscape?
Artificial Intelligence (AI) is increasingly used by cyberattackers, with 78% of IT executives fearing these threats, up 5% from 2024. However, businesses are not unprepared, as almost two-thirds of respondents said they are "adequately prepared" to defend against AI-powered threats. Despite this, a shortage of personnel and talent in the field is hindering efforts to keep up with the evolving threat landscape.
The growing sophistication of AI-powered cyberattacks highlights the urgent need for businesses to invest in AI-driven cybersecurity solutions to stay ahead of threats.
How will regulatory bodies address the lack of standardization in AI-powered cybersecurity tools, potentially creating a Wild West scenario for businesses to navigate?
2024 has been marked as a record-breaking year for ransomware attacks, with a 65% increase in detected groups and 44 new malware variants contributing to almost a third of undisclosed attacks. The healthcare, government, and education sectors were disproportionately affected, while established groups like LockBit and newer entrants such as RansomHub accounted for a significant share of incidents, highlighting the growing sophistication of cybercriminals. As organizations face escalating financial and reputational risks, the need for proactive cybersecurity measures has never been more urgent.
The rise in ransomware attacks emphasizes an unsettling trend where even traditionally secure sectors are becoming prime targets, prompting a reevaluation of cybersecurity strategies across industries.
What strategies can organizations implement to effectively defend against the evolving tactics of ransomware groups in an increasingly hostile cyber landscape?
The cybersecurity industry is poised for significant expansion, driven by increasing cyber threats, cloud computing adoption, and artificial intelligence (AI) integration in security measures. The global market is expected to grow from $172.24 billion in 2023 to $562.72 billion by 2032, reflecting a compound annual growth rate (CAGR) of approximately 14.3%. As cybersecurity spending continues to accelerate, businesses and governments are investing heavily in robust security defenses.
The rapid expansion of the global cybersecurity market underscores the critical role that effective cybersecurity solutions will play in protecting organizations from increasingly sophisticated cyber threats.
How can policymakers balance the need for increased investment in cybersecurity with concerns about regulatory overreach and the potential for cybersecurity solutions to exacerbate existing social inequalities?
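As a quick sanity check on the market figures cited above, the implied compound annual growth rate can be computed directly, assuming the 2023 figure as the base and a nine-year horizon to 2032 (an assumption about the report's methodology); small differences from the quoted ~14.3% likely come down to rounding or the exact base year used.

```python
# Implied CAGR from the cited market figures (assumed nine-year horizon).
start, end, years = 172.24, 562.72, 9  # USD billions, 2023 -> 2032
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 14%, close to the cited ~14.3%
```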
Deepfakes are claiming thousands of victims, with the average scam costing victims £595, according to a new report from Hiya detailing the rising risk of deepfake voice scams in the UK and abroad. The report notes that the rise of generative AI has made deepfakes more convincing than ever and lets attackers deploy them more frequently. AI lowers the barrier for criminals to commit fraud, making scams easier, faster, and more effective.
The alarming rate at which these scams are spreading highlights the urgent need for robust security measures and education campaigns to protect vulnerable individuals from falling prey to sophisticated social engineering tactics.
What role should regulatory bodies play in establishing guidelines and standards for the use of AI-powered technologies, particularly those that can be exploited for malicious purposes?
Nearly 1 million Windows devices were targeted by a sophisticated four-stage "malvertising" campaign in which malware was embedded in ads on popular streaming platforms. The malicious payload was hosted on platforms like GitHub and used Discord and Dropbox to spread, with infected devices losing login credentials, cryptocurrency, and other sensitive data. The attackers exploited browser files and cloud services like OneDrive to steal valuable information.
This massive "malvertising" spree highlights the vulnerability of online systems to targeted attacks, where even seemingly innocuous ads can be turned into malicious vectors.
What measures will tech companies and governments take to prevent such widespread exploitation in the future, and how can users better protect themselves against these types of attacks?
A recent DeskTime study found that 72% of US workplaces adopted ChatGPT in 2024, with time spent using the tool increasing by 42.6%. Despite this growth, individual adoption rates remained lower than global averages, suggesting a slower pace of adoption among some companies. The study also revealed that AI adoption fluctuated throughout the year, with usage dropping in January but rising in October.
The comparatively low individual adoption rates in US workplaces may be attributed to the increasing availability and accessibility of other generative AI tools, which could offer similar benefits or ease of use.
What role will data security concerns play in shaping the future of AI adoption in US workplaces, particularly for companies that have already implemented restrictions on ChatGPT usage?
A sophisticated, previously undocumented threat has been found lurking across the internet, compromising Cisco, ASUS, QNAP, and Synology devices. The botnet, named PolarEdge, has been expanding around the world for more than a year, targeting a range of network devices. Its goal is unknown at this time, but experts warn that it poses a significant threat to global internet security.
As network device vulnerabilities continue to rise, the increasing sophistication of cyber threats underscores the need for robust cybersecurity measures and regular software updates.
Will governments and industries be able to effectively counter this growing threat by establishing standardized protocols for vulnerability reporting and response?
The UK's push to advance its position as a global leader in AI is placing increasing pressure on its energy sector, which has become a critical target for cyber threats. As the country seeks to integrate AI into every aspect of national life, it must also fortify its defenses against increasingly sophisticated cyberattacks that could disrupt its energy grid and national security. The cost of a data breach in the energy sector is staggering, with the average loss estimated at $5.29 million, and the consequences of a successful attack could be far more severe.
The UK's reliance on ageing infrastructure and legacy systems poses a significant challenge to cybersecurity efforts, as these outdated systems are often incompatible with modern security solutions.
As AI adoption in the energy sector accelerates, it is essential for policymakers and industry leaders to address the pressing question of how to balance security with operational reliability, particularly given the growing threat of ransomware attacks.
The new Genie Scam Protection feature leverages AI to spot scams that readers might otherwise believe are genuine, helping users avoid losses of money and personal information when reading text messages, weighing enticing offers, and browsing the web. Norton has added this technology to all of its Norton 360 security software products, providing users with a safer online experience.
The integration of AI-powered scam detection into antivirus software is a significant step forward in protecting users from increasingly sophisticated cyber threats.
As the use of Genie Scam Protection becomes widespread, will it also serve as a model for other security software companies to develop similar features?
Aviation firms in the United Arab Emirates (UAE) were recently targeted by a highly sophisticated business email compromise (BEC) attack looking to deploy advanced malware. The attackers used a compromised email account to share polyglot files with their victims, which deployed a hidden backdoor on the targeted systems. Researchers at cybersecurity firm Proofpoint observed that these attacks started in late 2024 and target organizations with a distinct interest in aviation and satellite communications.
This highly targeted attack highlights the evolving nature of cyber threats, where attackers are leveraging sophisticated tactics like polyglot files to evade traditional detection mechanisms.
How will the increasing use of polyglot malware impact the ability of cybersecurity professionals to detect and prevent similar attacks in the future?
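Polyglot files work by being valid in more than one format at once, so a crude detection approach is to look for well-known format signatures at unexpected offsets inside a file. The sketch below is an illustrative heuristic only; the signature list and offset rules are assumptions for demonstration, not a reconstruction of the files used in the campaign described above, and simple byte matching will produce false positives.

```python
# Illustrative heuristic for spotting polyglot candidates: files whose bytes
# contain well-known format signatures at non-zero offsets.
from pathlib import Path

SIGNATURES = {
    b"%PDF-": "PDF header",
    b"PK\x03\x04": "ZIP local file header",
    b"MZ": "Windows PE/DOS header",
    b"\x7fELF": "ELF header",
}


def polyglot_indicators(path: str, scan_limit: int = 4 * 1024 * 1024) -> list[str]:
    data = Path(path).read_bytes()[:scan_limit]
    findings = []
    for magic, label in SIGNATURES.items():
        offset = data.find(magic)
        while offset != -1:
            # A signature at offset 0 is expected for the file's own format;
            # the same signature deeper inside the file is worth a closer look.
            if offset > 0:
                findings.append(f"{label} at offset {offset}")
            offset = data.find(magic, offset + 1)
    return findings


if __name__ == "__main__":
    for hit in polyglot_indicators("suspicious_attachment.bin"):
        print("possible embedded format:", hit)
```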
News recently surfaced about a massive collection of stolen data being sold on Telegram, harvested by infostealer malware. According to the report, the dataset contains 23 billion entries, including 493 million unique pairs of email addresses and website domains; as summarized by Bleeping Computer, 284 million unique email addresses are affected overall.
A concerning trend in the digital age is the rise of data breaches, where hackers exploit vulnerabilities to steal sensitive information, raising questions about individual accountability and responsibility.
What measures can individuals take to protect themselves from infostealing malware, and how effective are current security protocols in preventing such incidents?
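One concrete step individuals can take is checking whether a password already circulates in leaked datasets before reusing it. The sketch below uses the Have I Been Pwned "Pwned Passwords" range endpoint, which works on a k-anonymity model (only the first five characters of the password's SHA-1 hash ever leave the machine); the service and endpoint are real, but treat this as a sketch rather than a vetted client.

```python
# Minimal check of a password against the Pwned Passwords range API.
# Only the first five hex characters of the SHA-1 hash are sent.
import hashlib
import urllib.request


def pwned_count(password: str) -> int:
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    req = urllib.request.Request(
        f"https://api.pwnedpasswords.com/range/{prefix}",
        headers={"User-Agent": "pwned-check-sketch"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        body = resp.read().decode("utf-8")
    # Each response line is "HASH_SUFFIX:COUNT".
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0


if __name__ == "__main__":
    hits = pwned_count("correct horse battery staple")
    print("seen in breaches" if hits else "not found in known breaches", hits)
```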
Threat actors are exploiting misconfigured Amazon Web Services (AWS) environments to bypass email security and launch phishing campaigns that land in people's inboxes. Cybersecurity researchers have identified a group using this tactic, known as JavaGhost, which has been active since 2019 and has evolved its tactics to evade detection. The attackers use AWS access keys to gain initial access to the environment and set up temporary accounts to send phishing emails that bypass email protections.
This type of attack highlights the importance of proper AWS configuration and monitoring in preventing similar breaches, as misconfigured environments can provide an entry point for attackers.
As more organizations move their operations to the cloud, the risk of such attacks increases, making it essential for companies to prioritize security and incident response training.
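Since the campaign above hinges on long-lived or leaked access keys, one practical control is auditing keys regularly. The boto3 sketch below lists IAM access keys with their age and last-used date so stale keys can be rotated or disabled; the 90-day cutoff is an arbitrary illustration, and the script assumes credentials with IAM read permissions.

```python
# Illustrative audit of IAM access keys: flag keys that are old or never used.
# Assumes boto3 credentials with iam:ListUsers, iam:ListAccessKeys, and
# iam:GetAccessKeyLastUsed permissions; the 90-day cutoff is arbitrary.
from datetime import datetime, timezone

import boto3

MAX_AGE_DAYS = 90
iam = boto3.client("iam")
now = datetime.now(timezone.utc)

for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        keys = iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]
        for key in keys:
            age_days = (now - key["CreateDate"]).days
            last_used = iam.get_access_key_last_used(
                AccessKeyId=key["AccessKeyId"]
            )["AccessKeyLastUsed"].get("LastUsedDate")
            if age_days > MAX_AGE_DAYS or last_used is None:
                print(
                    f"{user['UserName']}: key {key['AccessKeyId']} "
                    f"age={age_days}d last_used={last_used} -> review/rotate"
                )
```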
Microsoft's Threat Intelligence team has identified a shift by Chinese threat actor Silk Typhoon toward targeting "common IT solutions" such as cloud applications and remote management tools in order to gain access to victim systems. The group has been observed attacking a wide range of sectors, including IT services and infrastructure, healthcare, legal services, defense, and government agencies, among others. By exploiting zero-day vulnerabilities in edge devices, Silk Typhoon has established itself as one of the Chinese threat actors with the "largest targeting footprints".
The use of cloud applications by businesses may inadvertently provide a backdoor for hackers like Silk Typhoon to gain access to sensitive data, highlighting the need for robust security measures.
What measures can be taken by governments and private organizations to protect their critical infrastructure from such sophisticated cyber threats?
One week in tech has seen another slew of announcements, rumors, reviews, and debate. The pace of technological progress is accelerating rapidly, with AI advancements being a major driver of innovation. As the field continues to evolve, we're seeing more natural and knowledgeable chatbots like ChatGPT, as well as significant updates to popular software like Photoshop.
The growing reliance on AI technology raises important questions about accountability and ethics in the development and deployment of these systems.
How will future breakthroughs in AI impact our personal data, online security, and overall digital literacy?
Email marketing continues to be a cornerstone for businesses aiming to engage with their audience effectively. Global email marketing revenue was projected to surpass $9.5 billion in 2024, highlighting its robust growth and sustained relevance. Consumer engagement with email remains high, with 96% of consumers checking their email daily, making it a vital touchpoint for marketers.
The integration of artificial intelligence (AI) in email marketing has proven beneficial, enhancing personalization and effectiveness.
As the digital landscape evolves, brands are encouraged to harness the potential of email marketing, integrating emerging technologies and personalized content to stay ahead in the competitive market.
Security researchers spotted a new ClickFix campaign that has been abusing Microsoft SharePoint to distribute the Havoc post-exploitation framework. The attack chain starts with a phishing email carrying a "restricted notice" as an .HTML attachment, which prompts the victim to manually "update their DNS cache" by running a command; that command executes a script that downloads the Havoc framework as a DLL file. Cybercriminals are exploiting Microsoft tools to bypass email security and target victims with advanced red teaming and adversary simulation capabilities.
This devious two-step phishing campaign highlights the evolving threat landscape in cybersecurity, where attackers are leveraging legitimate tools and platforms to execute complex attacks.
What measures can organizations take to prevent similar ClickFix-like attacks from compromising their SharePoint servers and disrupting business operations?
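ClickFix lures depend on getting a command onto the victim's clipboard and persuading them to run it, so one cheap detection layer is scanning inbound HTML attachments for clipboard-write JavaScript paired with shell-looking strings. The patterns below are illustrative heuristics only, not signatures from the reported campaign, and will miss obfuscated variants.

```python
# Crude heuristic scan of an HTML attachment for ClickFix-style lures:
# clipboard-write JavaScript combined with command-shell strings.
import re
import sys

CLIPBOARD_PATTERNS = [
    r"navigator\.clipboard\.writeText",
    r"document\.execCommand\(['\"]copy['\"]\)",
]
SHELL_PATTERNS = [
    r"powershell(\.exe)?\b",
    r"\bcmd(\.exe)?\s*/c\b",
    r"mshta\b",
    r"ipconfig\s*/flushdns",  # matches the "update your DNS cache" pretext described above
]


def looks_like_clickfix(html: str) -> bool:
    has_clipboard = any(re.search(p, html, re.I) for p in CLIPBOARD_PATTERNS)
    has_shell = any(re.search(p, html, re.I) for p in SHELL_PATTERNS)
    return has_clipboard and has_shell


if __name__ == "__main__":
    with open(sys.argv[1], encoding="utf-8", errors="replace") as f:
        verdict = looks_like_clickfix(f.read())
    print("suspicious" if verdict else "no ClickFix indicators found")
```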
A global crackdown on a criminal network that distributed artificial intelligence-generated images of children being sexually abused has resulted in the arrest of two dozen individuals, with Europol crediting international cooperation as key to the operation's success. The main suspect, a Danish national, operated an online platform where users paid for access to AI-generated material, sparking concerns about the use of such tools in child abuse cases. Authorities from 19 countries worked together to identify and apprehend those involved, with more arrests expected in the coming weeks.
The increasing sophistication of AI technology poses new challenges for law enforcement agencies, who must balance the need to investigate and prosecute crimes with the risk of inadvertently enabling further exploitation.
How will governments respond to the growing concern about AI-generated child abuse material, particularly in terms of developing legislation and regulations that effectively address this issue?
Google has introduced AI-powered features designed to enhance scam detection for both text messages and phone calls on Android devices. The new capabilities aim to identify suspicious conversations in real-time, providing users with warnings about potential scams while maintaining their privacy. As cybercriminals increasingly utilize AI to target victims, Google's proactive measures represent a significant advancement in user protection against sophisticated scams.
This development highlights the importance of leveraging technology to combat evolving cyber threats, potentially setting a standard for other tech companies to follow in safeguarding their users.
How effective will these AI-driven tools be in addressing the ever-evolving tactics of scammers, and what additional measures might be necessary to further enhance user security?
DeepSeek has broken into the mainstream consciousness after its chatbot app rose to the top of the Apple App Store charts (and Google Play as well). DeepSeek's AI models, trained using compute-efficient techniques, have led Wall Street analysts and technologists alike to question whether the U.S. can maintain its lead in the AI race and whether demand for AI chips can be sustained. The company's ability to offer a general-purpose text- and image-analyzing system at a lower cost than comparable models has forced domestic competitors to cut prices, making some models completely free.
This sudden shift in the AI landscape may have significant implications for the development of new applications and industries that rely on sophisticated chatbot technology.
How will the widespread adoption of DeepSeek's models impact the balance of power between established players like OpenAI and newer entrants from China?
Nine US AI startups have raised $100 million or more in funding so far this year, a milestone that 49 startups reached over the whole of last year. The latest round was announced on March 3 and was led by Lightspeed, with participation from prominent investors such as Salesforce Ventures and Menlo Ventures. As the number of US AI companies continues to grow, it is clear that the industry is experiencing a surge in investment and innovation.
This influx of capital is likely to accelerate the development of cutting-edge AI technologies, potentially leading to significant breakthroughs in areas such as natural language processing, computer vision, and machine learning.
Will the increasing concentration of funding in a few large companies stifle the emergence of new, smaller startups in the US AI sector?
Google Cloud has launched its AI Protection security suite, designed to identify, assess, and protect AI assets from vulnerabilities across various platforms. This suite aims to enhance security for businesses as they navigate the complexities of AI adoption, providing a centralized view of AI-related risks and threat management capabilities. With features such as AI Inventory Discovery and Model Armor, Google Cloud is positioning itself as a leader in securing AI workloads against emerging threats.
This initiative highlights the increasing importance of robust security measures in the rapidly evolving landscape of AI technologies, where the stakes for businesses are continually rising.
How will the introduction of AI Protection tools influence the competitive landscape of cloud service providers in terms of security offerings?
Norton 360 has introduced a new feature called Genie Scam Protection that leverages AI to spot scams in text messages, online surfing, and emails. This feature aims to protect users from embarrassing losses of money and personal information when reading scam messages or browsing malicious websites. The Genie Scam Protection adds an extra layer of security to Norton 360's existing antivirus software products.
As the rise of phishing and smishing scams continues to evolve, it is essential for consumers to stay vigilant and up-to-date with the latest security measures to avoid falling victim to these types of cyber threats.
Will the widespread adoption of Genie Scam Protection lead to a reduction in reported scam losses, or will new and more sophisticated scams emerge to counter this new level of protection?