AI Tools in Workplaces: 90% Face Data Breach Risks, Study Warns

May 19, 2025
  • Approximately 75% of workers utilize AI for tasks such as drafting emails and taking notes, yet only 14% of workplaces have established official AI policies.

  • A report from Harmonic revealed that 8.5% of prompts to generative AI tools contained sensitive data, including customer information and employee data, raising concerns about data exposure.

  • The rapid and unmonitored adoption of AI tools in workplaces poses significant security risks, with nearly 90% of analyzed tools exposed to data breaches, according to the Cybernews Business Digital Index.

  • Researchers analyzed 52 popular AI web tools and found that despite an average cybersecurity score of 85 out of 100, 41% received a D or F rating, highlighting poor cybersecurity performance.

  • Alarmingly, 44% of companies developing AI tools showed signs of employee password reuse, a practice that leaves them open to credential-stuffing attacks, and 51% of the tools analyzed had corporate credentials stolen.

  • Data indicates that 45.4% of sensitive data prompts are submitted via personal accounts, which bypass company monitoring systems and heighten security risks.

  • Cybersecurity expert Vincentas Baubonis warns that users and businesses may be operating under a false sense of security about how safe these AI tools really are.

  • The analysis revealed that 93% of platforms had issues with their SSL/TLS configurations, which are critical for secure communication, and 91% exhibited weaknesses in infrastructure management (a basic TLS check is sketched after this list).

  • Productivity tools, widely used for collaboration, were identified as particularly vulnerable, with 92% having experienced data breaches and an average of 1,332 stolen corporate credentials per company.

  • To mitigate these risks, organizations are advised to implement ongoing monitoring of AI tool usage and to establish incident response plans for potential AI-related issues (a minimal prompt-scanning sketch appears after this list).

  • Updating existing IT and network security policies to incorporate AI risks is essential for organizations to safeguard sensitive information.

  • Clear communication about AI risks, along with proactive policy adoption and employee training, is crucial for protecting sensitive data and promoting responsible innovation.
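
The report does not say which specific SSL/TLS misconfigurations were found. As one illustration only, the short Python sketch below checks which TLS version a host negotiates and when its certificate expires, using nothing beyond the standard library; the hostnames are placeholders, not tools named in the study.

```python
# Minimal sketch: report the negotiated TLS version and certificate expiry
# for a list of hosts. Hostnames are hypothetical placeholders.
import socket
import ssl

HOSTS = ["example-ai-tool.com", "another-ai-tool.io"]  # placeholders

def check_tls(host: str, port: int = 443, timeout: float = 5.0) -> None:
    context = ssl.create_default_context()
    # Refuse anything older than TLS 1.2, mirroring common hardening guidance.
    context.minimum_version = ssl.TLSVersion.TLSv1_2
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
                print(f"{host}: {tls.version()}, cert expires {cert.get('notAfter')}")
    except (ssl.SSLError, OSError) as exc:
        # A failed handshake here often points to a weak or broken TLS setup.
        print(f"{host}: TLS check failed ({exc})")

if __name__ == "__main__":
    for host in HOSTS:
        check_tls(host)
```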

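As a starting point for the monitoring advice above, the sketch below shows one way to flag sensitive data in outgoing prompts before they leave the organization. It assumes prompts can be intercepted (for example at a forward proxy or a browser extension), and the patterns and logging destination are illustrative rather than drawn from the cited sources.

```python
# Minimal sketch: scan a prompt for data that resembles sensitive information
# before it is sent to an external AI tool. Patterns are illustrative only.
import re
from datetime import datetime, timezone

SENSITIVE_PATTERNS = {
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
}

def scan_prompt(prompt: str, user: str) -> list[str]:
    """Return the names of sensitive-data patterns found in a prompt."""
    hits = [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]
    if hits:
        # In practice this would feed a SIEM or DLP pipeline; printing keeps
        # the sketch self-contained.
        print(f"{datetime.now(timezone.utc).isoformat()} "
              f"user={user} flagged={','.join(hits)}")
    return hits

if __name__ == "__main__":
    scan_prompt("Summarize this: jane.doe@corp.example, card 4111 1111 1111 1111", "jane")
```
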
Summary based on 6 sources
