Hostinger Integrates Dark Web Scanning Into hPanel
Hostinger has integrated a dark web scanning tool directly into its hPanel service, providing users with real-time security recommendations if their data is found on the dark web. The tool, developed by NordStellar, tracks keywords associated with business and employee/client data in dark web forums, search engines, markets, and communities. Hostinger claims that over 50% of its customers have had data exposed on the dark web in the past; those users have received warnings along with suggestions on how to improve their security.
By integrating dark web scanning into its hosting panel, Hostinger is taking a proactive approach to addressing the growing threat of online data breaches and cyber attacks.
How will this move impact the broader cybersecurity landscape, particularly in the wake of increasing awareness about data protection and digital rights?
Modern web browsers offer several built-in settings that can significantly enhance data security and privacy while online. Key adjustments, such as enabling two-factor authentication, disabling the saving of sensitive data, and using encrypted DNS requests, can help users safeguard their personal information from potential threats. Additionally, leveraging the Tor network with specific configurations can further anonymize web browsing, although it may come with performance trade-offs.
These tweaks reflect a growing recognition of the importance of digital privacy, empowering users to take control of their online security without relying solely on external tools or services.
What additional measures might users adopt to enhance their online security in an increasingly interconnected world?
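One of the tweaks mentioned above, encrypted DNS, works by wrapping ordinary DNS queries in HTTPS so resolvers on the path cannot read them. As an illustration, here is a minimal Python sketch of how a DNS-over-HTTPS GET URL is constructed per RFC 8484; the Cloudflare resolver endpoint is just an example, and a real client would also send the request and parse the wire-format answer.

```python
import base64
import struct

def encode_qname(hostname: str) -> bytes:
    # RFC 1035 name encoding: length-prefixed labels, zero terminator
    out = b""
    for label in hostname.strip(".").split("."):
        out += bytes([len(label)]) + label.encode("ascii")
    return out + b"\x00"

def build_doh_url(hostname: str,
                  resolver: str = "https://cloudflare-dns.com/dns-query") -> str:
    # DNS wire-format query: header with the RD bit set, one question,
    # query type A (1), class IN (1)
    header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    question = encode_qname(hostname) + struct.pack(">HH", 1, 1)
    # RFC 8484: base64url-encode the packet, strip '=' padding, pass as ?dns=
    packet = base64.urlsafe_b64encode(header + question).rstrip(b"=").decode()
    return f"{resolver}?dns={packet}"
```

Fetching that URL with an `Accept: application/dns-message` header returns the DNS answer over HTTPS, which is what browser settings like "secure DNS" do behind the scenes.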
Zapier, a popular automation tool, has suffered a cyberattack that resulted in the exposure of sensitive customer information. The company's Head of Security sent a breach notification letter to affected customers, stating that an unnamed threat actor accessed some customer data "inadvertently copied to the repositories" for debugging purposes. Zapier assures that the incident was isolated and did not affect any databases, infrastructure, or production systems.
This breach highlights the importance of having robust security measures in place, particularly with regards to two-factor authentication (2FA) configurations, which can be vulnerable to exploitation when misconfigured.
As more businesses move online, how will companies like Zapier prioritize transparency and accountability in responding to data breaches, ensuring trust with their customers?
DuckDuckGo's recent development of its AI-powered search tool, dubbed DuckDuckAI, marks a significant step forward for the company in enhancing user experience and providing more concise responses to queries. The AI-powered chatbot, now out of beta, will integrate web search within its conversational interface, allowing users to seamlessly switch between the two options. This move aims to provide a more flexible and personalized experience for users, while maintaining DuckDuckGo's commitment to privacy.
By embedding AI into its search engine, DuckDuckGo is effectively blurring the lines between traditional search and chatbot interactions, potentially setting a new standard for digital assistants.
How will this trend of integrating AI-powered interfaces with search engines impact the future of online information discovery, and what implications will it have for users' control over their personal data?
Indian stock broker Angel One has confirmed that some of its Amazon Web Services (AWS) resources were compromised, prompting the company to hire an external forensic partner to investigate the impact. The breach did not affect clients' securities, funds, or credentials, with all client accounts remaining secure. Angel One is taking proactive steps to secure its systems after being notified by a dark-web monitoring partner.
This incident highlights the growing vulnerability of Indian companies to cyber threats, particularly those in the financial sector that rely heavily on cloud-based services.
How will India's regulatory landscape evolve to better protect its businesses and citizens from such security breaches in the future?
Nearly 1 million Windows devices were targeted by a sophisticated four-stage "malvertising" campaign, in which malware was embedded in ads on popular streaming platforms. The malicious payload was hosted on platforms like GitHub and used Discord and Dropbox to spread, with infected devices losing login credentials, cryptocurrency, and other sensitive data. The attackers exploited browser files and cloud services like OneDrive to steal valuable information.
This massive "malvertising" spree highlights the vulnerability of online systems to targeted attacks, where even seemingly innocuous ads can be turned into malicious vectors.
What measures will tech companies and governments take to prevent such widespread exploitation in the future, and how can users better protect themselves against these types of attacks?
Security researchers spotted a new ClickFix campaign that has been abusing Microsoft SharePoint to distribute the Havoc post-exploitation framework. The attack chain starts with a phishing email carrying a "restricted notice" as an .html attachment, which prompts the victim to manually "update their DNS cache"; following the instructions runs a script that downloads the Havoc framework as a DLL file. Cybercriminals are exploiting Microsoft tools to bypass email security and target victims with advanced red teaming and adversary simulation capabilities.
This devious two-step phishing campaign highlights the evolving threat landscape in cybersecurity, where attackers are leveraging legitimate tools and platforms to execute complex attacks.
What measures can organizations take to prevent similar ClickFix-like attacks from compromising their SharePoint servers and disrupting business operations?
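ClickFix lures typically pair a clipboard-copy script with instructions to paste a "fix" command into a terminal or Run dialog. The Python sketch below is a hypothetical mail-gateway heuristic for flagging such HTML attachments; the pattern lists are illustrative assumptions, not a vetted detection rule, and real lures vary widely.

```python
import re

# Hypothetical indicator patterns: JS that writes to the clipboard,
# plus references to commands a web page has no business mentioning.
CLIPBOARD_PATTERNS = [
    r"navigator\.clipboard\.writeText",
    r"document\.execCommand\(['\"]copy['\"]\)",
]
COMMAND_PATTERNS = [
    r"powershell(\.exe)?\s",
    r"ipconfig\s*/flushdns",
    r"mshta\s",
    r"rundll32\s",
]

def looks_like_clickfix(html: str) -> bool:
    """Flag HTML that both copies text to the clipboard and references a shell command."""
    has_clipboard = any(re.search(p, html, re.I) for p in CLIPBOARD_PATTERNS)
    has_command = any(re.search(p, html, re.I) for p in COMMAND_PATTERNS)
    return has_clipboard and has_command
```

The design point is the conjunction: clipboard access alone is common on legitimate pages, and command strings alone appear in technical documentation, but the two together in an email attachment are a strong anomaly worth quarantining for review.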
A sophisticated threat has been found lurking in the depths of the internet, compromising Cisco, ASUS, QNAP, and Synology devices. A previously undocumented botnet, named PolarEdge, has been expanding around the world for more than a year, targeting a range of network devices. The botnet's goal is unknown at this time, but experts have warned that it poses a significant threat to global internet security.
As network device vulnerabilities continue to rise, the increasing sophistication of cyber threats underscores the need for robust cybersecurity measures and regular software updates.
Will governments and industries be able to effectively counter this growing threat by establishing standardized protocols for vulnerability reporting and response?
As recent news reminds us, malicious browser add-ons can start life as legit extensions, so reviewing what you’ve got installed is a smart move. Earlier this month, security researchers at GitLab Threat Intelligence discovered a handful of Chrome extensions that had added code to commit fraud, with at least 3.2 million users affected. But the add-ons didn’t start as malicious. Instead, they launched as legitimate software, only to be later compromised or sold to bad actors.
The fact that these extensions were able to deceive millions of users for so long highlights the importance of staying vigilant when installing browser add-ons and regularly reviewing their permissions.
As more people rely on online services, the risk of malicious extensions spreading through user adoption becomes increasingly critical, making it essential for Google to continually improve its Chrome extension review process.
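Reviewing installed extensions can even be scripted. The Python sketch below walks a Chrome profile directory and lists each extension's name, version, and requested permissions from its manifest; the directory layout (`Extensions/<id>/<version>/manifest.json`) is an assumption that can vary by platform and Chrome version, and localized extension names may appear as `__MSG_` placeholders rather than plain text.

```python
import json
from pathlib import Path

def list_chrome_extensions(profile_dir: str) -> list[dict]:
    """Inventory extensions under a Chrome profile by reading their manifests.

    Assumes the common on-disk layout Extensions/<id>/<version>/manifest.json;
    point profile_dir at e.g. ~/.config/google-chrome/Default on Linux.
    """
    found = []
    for manifest in sorted(Path(profile_dir, "Extensions").glob("*/*/manifest.json")):
        data = json.loads(manifest.read_text(encoding="utf-8"))
        found.append({
            "id": manifest.parent.parent.name,   # extension ID is the grandparent dir
            "name": data.get("name", "?"),
            "version": data.get("version", "?"),
            "permissions": data.get("permissions", []),
        })
    return found
```

An inventory like this makes it easy to spot extensions you do not remember installing, or broad permissions (such as access to all sites) that a simple utility should not need.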
Microsoft's Threat Intelligence has identified a new tactic from Chinese threat actor Silk Typhoon: targeting "common IT solutions" such as cloud applications and remote management tools to gain access to victim systems. The group has been observed attacking a wide range of sectors, including IT services and infrastructure, healthcare, legal services, defense, government agencies, and many more. By exploiting zero-day vulnerabilities in edge devices, Silk Typhoon has established itself as one of the Chinese threat actors with the "largest targeting footprints".
The use of cloud applications by businesses may inadvertently provide a backdoor for hackers like Silk Typhoon to gain access to sensitive data, highlighting the need for robust security measures.
What measures can be taken by governments and private organizations to protect their critical infrastructure from such sophisticated cyber threats?
LlamaIndex, a startup developing tools for building 'agents' that can reason over unstructured data, has raised new cash in a funding round to develop its enterprise cloud service. The company's open-source software has racked up millions of downloads on GitHub, allowing developers to create custom agents that can extract information, generate reports and insights, and take specific actions. LlamaIndex provides data connectors and utilities like LlamaParse, which transforms unstructured data into a structured format for AI applications.
By democratizing access to building AI agents, LlamaIndex's cloud service has the potential to level the playing field for developers from non-traditional backgrounds, potentially driving innovation in enterprise applications.
As GenAI applications become increasingly ubiquitous, how will the emergence of standardized platforms like LlamaCloud impact the future of work and the skills required to remain employable?
The modern-day cyber threat landscape has become increasingly crowded, with Advanced Persistent Threats (APTs) becoming a major concern for cybersecurity teams worldwide. Group-IB's recent research points to 2024 as a 'year of cybercriminal escalation', with a 10% rise in ransomware compared to the previous year and a 22% rise in phishing attacks. AI is playing a "game-changing" role for both security teams and cybercriminals, though the technology has yet to reach maturity.
This escalation signals an arms race in which defenders and attackers increasingly adopt the same AI tooling, potentially widening the gap between well-resourced organizations and everyone else.
How will security teams adapt their strategies as ransomware and phishing continue to grow alongside increasingly capable AI?
Microsoft has identified and named four individuals allegedly responsible for creating and distributing explicit deepfakes using leaked API keys from multiple Microsoft customers. The group, dubbed the “Azure Abuse Enterprise”, is said to have developed malicious tools that allowed threat actors to bypass generative AI guardrails to generate harmful content. This discovery highlights the growing concern of cybercriminals exploiting AI-powered services for nefarious purposes.
The exploitation of AI-powered services by malicious actors underscores the need for robust cybersecurity measures and more effective safeguards against abuse.
How will Microsoft's efforts to combat deepfake-related crimes impact the broader fight against online misinformation and disinformation?
Hackers are exploiting Microsoft Teams and other legitimate Windows tools to launch sophisticated attacks on corporate networks, employing social engineering tactics to gain access to remote desktop solutions. Once inside, they sideload malicious .DLL files that enable the installation of BackConnect, a remote access tool that allows persistent control over compromised devices. This emerging threat highlights the urgent need for businesses to enhance their cybersecurity measures, particularly through employee education and the implementation of multi-factor authentication.
The use of familiar tools for malicious purposes points to a concerning trend in cybersecurity, where attackers leverage trust in legitimate software to bypass traditional defenses, ultimately challenging the efficacy of current security protocols.
What innovative strategies can organizations adopt to combat the evolving tactics of cybercriminals in an increasingly digital workplace?
In a recent analysis of AI chatbot apps, Google Gemini stands out as the most data-hungry service, collecting 22 distinct data types, including highly sensitive data like precise location, user content, the device's contacts list, browsing history, and more. The analysis also found that 30% of the analyzed chatbots share user data with third parties, potentially enabling targeted advertising or spam calls. DeepSeek, while not the worst offender, collects 11 unique types of data, including user input like chat history, raising concerns under GDPR rules.
This raises a critical question: as AI chatbot apps become increasingly omnipresent in our daily lives, how will we strike a balance between convenience and personal data protection?
What regulations or industry standards need to be put in place to ensure that the growing number of AI-powered chatbots prioritize user privacy above corporate interests?
Microsoft is updating its commercial cloud contracts to improve data protection for European Union institutions, following an investigation by the EU's data watchdog that found previous deals failed to meet EU law. The changes aim to increase Microsoft's data protection responsibilities and provide greater transparency for customers. By implementing these new provisions, Microsoft seeks to enhance trust with public sector and enterprise customers in the region.
The move reflects a growing recognition among tech giants of the need to balance business interests with regulatory demands on data privacy, setting a potentially significant precedent for the industry.
Will Microsoft's updated terms be sufficient to address concerns about data protection in the EU, or will further action be needed from regulators and lawmakers?
Startup Crogl has unveiled an autonomous assistant designed for cybersecurity researchers, aimed at efficiently analyzing thousands of daily network alerts to identify real security incidents. Backed by $30 million in funding, this innovative tool, likened to an “Iron Man suit” by CEO Monzy Merza, has already been tested with major enterprises during a private beta phase. The platform's unique approach, leveraging big data and machine learning, seeks to enhance security analysts' capabilities rather than reducing the number of alerts they face.
Crogl's development represents a significant shift in how cybersecurity tools are conceived, emphasizing the empowerment of analysts rather than merely streamlining their workload.
Will Crogl's model of using advanced AI to amplify the capabilities of human analysts redefine industry standards for cybersecurity solutions?
Google has informed Australian authorities it received more than 250 complaints globally over nearly a year that its artificial intelligence software was used to make deepfake terrorism material, highlighting the growing concern about AI-generated harm. The tech giant also reported dozens of user reports warning about its AI program Gemini being used to create child abuse material. The disclosures underscore the need for better guardrails around AI technology to prevent such misuse.
As the use of AI-generated content becomes increasingly prevalent, it is crucial for companies and regulators to develop effective safeguards that can detect and mitigate such harm before it spreads.
How will governments balance the need for innovation with the requirement to ensure that powerful technologies like AI are not used to facilitate hate speech or extremist ideologies?
Arkham Intelligence has introduced a new tagging system on its platform, allowing users to track the cryptocurrency transactions of influential figures in the space. The "key opinion leader" (KOL) label applies to those with more than 100,000 followers on X and links their associated wallet addresses, currently featuring 950 addresses. This move aims to enhance transparency and accountability among crypto influencers.
As the influence of crypto personalities grows, so does their financial sway, making this tagging system a crucial step in preventing money laundering and illicit activities in the space.
How will the widespread adoption of such tracking systems impact the global regulatory landscape for cryptocurrency transactions?
Microsoft has confirmed that a vulnerable Windows kernel driver is being exploited by hackers in zero-day attacks, allowing them to escalate privileges and potentially drop ransomware on affected machines. Five flaws were patched in BioNTdrv.sys, a kernel-level driver used by Paragon Partition Manager. Users are urged to apply updates as soon as possible to secure their systems.
This vulnerability highlights the importance of keeping software and drivers up-to-date, as outdated components can provide entry points for attackers.
What measures can individuals take to protect themselves from such attacks, and how can organizations ensure that their defenses against ransomware are robust?
SurgeGraph has introduced its AI Detector tool to differentiate between human-written and AI-generated content, providing a clear breakdown of results at no cost. The AI Detector leverages advanced technologies like NLP, deep learning, neural networks, and large language models to assess linguistic patterns with reported accuracy rates of 95%. This innovation has significant implications for the content creation industry, where authenticity and quality are increasingly crucial.
The proliferation of AI-generated content raises fundamental questions about authorship, ownership, and accountability in digital media.
As AI-powered writing tools become more sophisticated, how will regulatory bodies adapt to ensure that truthful labeling of AI-created content is maintained?
Threat actors are exploiting misconfigured Amazon Web Services (AWS) environments to bypass email security and launch phishing campaigns that land in people's inboxes. Cybersecurity researchers have identified a group using this tactic, known as JavaGhost, which has been active since 2019 and has evolved its tactics to evade detection. The attackers use AWS access keys to gain initial access to the environment and set up temporary accounts to send phishing emails that bypass email protections.
This type of attack highlights the importance of proper AWS configuration and monitoring in preventing similar breaches, as misconfigured environments can provide an entry point for attackers.
As more organizations move their operations to the cloud, the risk of such attacks increases, making it essential for companies to prioritize security and incident response training.
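Because JavaGhost's initial access reportedly relied on exposed AWS access keys, auditing and rotating long-lived keys is an obvious mitigation. The sketch below is a hypothetical hygiene check over key metadata of the shape returned by boto3's `iam.list_access_keys` (the `AccessKeyMetadata` entries); it is written as a pure function over already-fetched data so it can run and be tested without AWS credentials.

```python
from datetime import datetime, timedelta, timezone

def flag_stale_keys(keys: list[dict], max_age_days: int = 90) -> list[str]:
    """Return AccessKeyIds that are inactive or older than max_age_days.

    `keys` is assumed to look like boto3 iam.list_access_keys()
    AccessKeyMetadata entries: AccessKeyId, Status, and a timezone-aware
    CreateDate. The 90-day threshold is an illustrative policy choice.
    """
    now = datetime.now(timezone.utc)
    flagged = []
    for key in keys:
        too_old = now - key["CreateDate"] > timedelta(days=max_age_days)
        if key["Status"] != "Active" or too_old:
            flagged.append(key["AccessKeyId"])
    return flagged
```

Run periodically against every IAM user, a check like this surfaces exactly the kind of forgotten, long-lived credentials that groups like JavaGhost hunt for, so they can be rotated or deleted before they are abused.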
DeepSeek has emerged as a significant player in the ongoing AI revolution, positioning itself as an open-source chatbot that competes with established entities like OpenAI. While its efficiency and lower operational costs promise to democratize AI, concerns around data privacy and potential biases in its training data raise critical questions for users and developers alike. As the technology landscape evolves, organizations must balance the rapid adoption of AI tools with the imperative for robust data governance and ethical considerations.
The entry of DeepSeek highlights a shift in the AI landscape, suggesting that innovation is no longer solely the domain of Silicon Valley, which could lead to a more diverse and competitive market for artificial intelligence.
What measures can organizations implement to ensure ethical AI practices while still pursuing rapid innovation in their AI initiatives?
Browser company Opera has unveiled a new AI agent called Browser Operator that can complete tasks for you on different websites. In a demo video, the company showed the AI agent finding a pair of socks on Walmart's site; securing tickets for a football match from the club’s site; and looking up a flight and a hotel for a trip on Booking.com. Opera said that the feature will be available to users through its Feature Drop program soon.
The integration of AI agents like Browser Operator is likely to disrupt traditional search engine business models, potentially forcing Google and Bing to rethink their approach to user assistance.
Will this level of automation lead to increased job displacement in industries heavily reliant on online transactions, such as e-commerce and travel?
Caspia Technologies has made a significant claim about its CODAx AI-assisted security linter, which has identified 16 security bugs in the OpenRISC CPU core in under 60 seconds. The tool uses a combination of machine learning algorithms and security rules to analyze processor designs for vulnerabilities. The discovery highlights the importance of design security and product assurance in the semiconductor industry.
The rapid identification of security flaws by CODAx underscores the need for proactive measures to address vulnerabilities in complex systems, particularly in critical applications such as automotive and media devices.
What implications will this technology have on the development of future microprocessors, where the risk of catastrophic failures due to design flaws may be exponentially higher?
Vishing attacks have skyrocketed, with CrowdStrike tracking at least six campaigns in which attackers pretended to be IT staffers to trick employees into sharing sensitive information. The security firm's 2025 Global Threat Report revealed a 442% increase in vishing attacks during the second half of 2024 compared to the first half. These attacks often use social engineering tactics, such as help desk social engineering and callback phishing, to gain remote access to computer systems.
As the number of vishing attacks continues to rise, it is essential for organizations to prioritize employee education and training on recognizing potential phishing attempts, as these attacks often rely on human psychology rather than technical vulnerabilities.
With the increasing sophistication of vishing tactics, what measures can individuals and organizations take to protect themselves from these types of attacks in the future, particularly as they become more prevalent in the digital landscape?