
The 1980s were a far cry from life today. If you weren’t there, you may not realize what it was like: navigating with paper maps, renting VHS movies at Blockbuster, getting photos developed from film, or calling people on their landlines.
You may also not realize that cybersecurity existed back then, though it looked very different from what it is today. The first hackers (yes, I was one of them) were motivated by curiosity and the thrill of testing a system’s limits. Concerns about large-scale cybercrime or widespread threats were largely absent.
Founder and CTO of DataKrypto.
Today, the stakes are dramatically higher. Data has become every organization’s crown jewel and a prized target for the most ambitious cybercriminals. As such, data protection is top of mind for every enterprise CISO and the number one priority of every security team.
Tasked with monitoring for, defending against, and responding to threats, these teams now face unprecedented challenges as the rapid adoption of artificial intelligence (AI) expands the digital attack surface at an accelerating pace.
From silos to consolidation: AI and the new concentration of data
For decades, enterprise data sat fragmented across silos, which at least meant no single breach could expose everything. AI has upended that model. Training and inference require large volumes of data to be consolidated and refined into compact, portable models. Instead of stitching together fragments from scattered systems, attackers now face concentrated repositories of sensitive information that are far easier to target and penetrate.
With little security built in, AI leaves data exposed “in plain sight,” making the defender’s job less like protecting Fort Knox and more like protecting a bag of diamonds: smaller, denser, and much easier to steal.
This scenario illustrates why companies using AI face a significantly higher risk of data breaches, leading to financial and reputational damage and compliance failures.
In fact, research suggests that more than three-quarters of companies have already experienced AI-related breaches, ranging from accidental data leaks to the deliberate poisoning of training data sets, jeopardizing their compliance efforts with regulatory frameworks such as the General Data Protection Regulation (GDPR), the Health Insurance Portability and Accountability Act (HIPAA), and the California Consumer Privacy Act (CCPA).
Without proper safeguards, the risks of using AI are clear, and they may be contributing to the finding, highlighted in a recent MIT report, that companies are not realizing the full value of their AI investments.
With the stakes this high, many companies are limiting their use of AI, which in turn limits their ability to create innovative solutions to highly complex challenges.
When it comes to AI, traditional data security methods are no longer enough.
A new approach: continuous encryption
This is where continuous encryption becomes indispensable. By maintaining encryption throughout the entire lifecycle, from storage and transmission to active computing, sensitive information remains protected at all times. Even during training or inference, data is never decrypted or made vulnerable to unauthorized access.
Two crucial technologies make this possible:
- Fully Homomorphic Encryption (FHE): Allows calculations to be performed directly on encrypted data, ensuring that raw values never need to be decrypted.
- Confidential Computing with Trusted Execution Environments (TEE): Provides secure enclaves where sensitive computations run in isolated, protected memory, inaccessible even to system administrators or cloud providers.
Combined, these technologies create a “zero-knowledge” environment in which neither the AI provider nor malicious actors can reconstruct inputs, outputs, or models outside of the secure enclave. As a result, both open-source and proprietary custom LLMs are fully protected, while the privacy and security of sensitive data are preserved.
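To make the FHE side of this concrete, here is a minimal sketch of computing on encrypted data using the open-source TenSEAL library and its CKKS scheme. The library choice, parameters, feature vector, and the tiny linear “model” are illustrative assumptions, not a description of any specific product.

```python
# Minimal sketch: a dot product computed directly on encrypted data with
# TenSEAL (CKKS scheme). All values and parameters are illustrative.
import tenseal as ts

# Data owner: create an encryption context and encrypt sensitive features.
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2**40
context.generate_galois_keys()  # needed for the rotations used by dot products

features = [72.0, 120.0, 0.9, 36.6]          # hypothetical sensitive inputs
encrypted_features = ts.ckks_vector(context, features)

# AI provider: apply plaintext model weights to the *encrypted* vector.
# The provider only ever handles ciphertext, never the raw features.
weights = [0.02, 0.01, -0.5, 0.03]           # hypothetical linear model
encrypted_score = encrypted_features.dot(weights)

# Data owner: only the holder of the secret key can decrypt the result.
print(encrypted_score.decrypt())             # approximately [3.288]
```

In a full deployment, the secret key would stay with the data owner, while any step that must briefly work on plaintext, such as model training inside the provider’s infrastructure, would run inside a TEE, so unencrypted data never appears outside the enclave.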
This zero-knowledge AI environment prevents a number of significant data exposure risks, including inadvertent leakage of sensitive information by employees using both authorized and unauthorized generative AI tools.
A new mandate: end-to-end protection in all use cases
For security teams, continuous encryption offers a convenient way to safeguard both sensitive data and the models trained on it. This approach presents a new mandate: protection must extend beyond storage and transmission to encompass all stages of data use in the AI lifecycle. The benefits across industries and use cases are powerful:
- Health care: Patient records can be analyzed for predictive insights without the risk of exposing personal health information.
- Financial services: Fraud detection and risk assessment models can be run on encrypted customer data without compromising privacy or compliance.
- Public sector and critical infrastructure: Agencies can share intelligence securely, knowing that sensitive information remains protected throughout the analysis process.
- Retail and consumer services: Retailers can leverage AI to personalize shopping experiences and loyalty programs, while protecting customers’ purchase histories and personal data.
- Telecommunications and cloud: Providers can optimize networks and offer secure multi-tenant AI services without the risk of exposing customer data.
In each case, encryption ensures that sensitive data remains inaccessible to unauthorized parties, even if models are stolen or environments compromised.
Align security with innovation
The rise of AI does not have to come at the expense of security or compliance. Continuous encryption enables organizations to harness the power of AI while maintaining confidentiality, integrity, and regulatory alignment.
This end-to-end approach to data protection means organizations can close the most critical security gap in AI, giving them the confidence to innovate without fear of exposure.
This article was produced as part of TechRadar Pro’s Expert Insights channel, where we feature the best and brightest minds in today’s tech industry. The views expressed here are those of the author and are not necessarily those of TechRadar Pro or Future plc. If you are interested in contributing, find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro