Cloud Governance: Data Residency and Sovereignty
Many organizations face significant data residency (also referred to as data sovereignty) challenges when contemplating a move to the cloud. Cloud data residency is defined as maintaining control over the location where regulated data and documents physically reside. Privacy and data residency requirements vary by country, so users of cloud services need to consider the rules of each jurisdiction in which they operate, as well as the rules governing the treatment of data at the locations where their cloud service provider(s) provision their services (e.g., their data centers). Depending on the specific countries in which they operate, companies may need to keep certain types of information within a defined geographic jurisdiction. Countries with varying degrees of data residency or data sovereignty requirements include Canada, Germany, Switzerland, China, and Australia, to name a few. In cloud environments, where data centers are located in various parts of the world, cloud data tokenization can be used to keep sensitive data local (resident) while tokens (replacement data) are stored and processed in the cloud.
Tokenization is a process by which a sensitive data field, such as a Primary Account Number (PAN) from a credit or debit card, is replaced with a surrogate value called a token. De-tokenization is the reverse process of redeeming a token for its associated original value. While various approaches to creating tokens exist, frequently they are simply randomly generated values that have no mathematical relation to the original data field. This is what underlies the security of the approach: it is nearly impossible to determine the original value of a sensitive data field by knowing only the surrogate token value.
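The tokenize/de-tokenize cycle described above can be sketched in a few lines of Python. This is a minimal illustration, not a production design: the `TokenVault` class and its in-memory dictionary are hypothetical stand-ins for the hardened database a real tokenization system would use, and the PAN shown is a standard test number.

```python
import secrets

class TokenVault:
    """Illustrative token vault (hypothetical). Real deployments store the
    look-up table in a hardened database behind the enterprise firewall."""

    def __init__(self):
        self._token_to_value = {}   # look-up table: token -> original value
        self._value_to_token = {}   # reuse the same token for a repeated value

    def tokenize(self, value: str) -> str:
        """Replace a sensitive value with a randomly generated surrogate."""
        if value in self._value_to_token:
            return self._value_to_token[value]
        # The token is random, so it has no mathematical relation
        # to the original value.
        token = secrets.token_hex(8)
        self._token_to_value[token] = value
        self._value_to_token[value] = token
        return token

    def detokenize(self, token: str) -> str:
        """Redeem a token for its associated original value."""
        return self._token_to_value[token]

vault = TokenVault()
token = vault.tokenize("4111111111111111")   # well-known test PAN
assert token != "4111111111111111"           # surrogate, not the real value
assert vault.detokenize(token) == "4111111111111111"
```

Note that without access to `vault`, the token alone reveals nothing about the PAN, which is the property the surrounding text relies on.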
How is Tokenization Different From Encryption?
Encryption is an obfuscation approach that uses a cipher algorithm to mathematically transform sensitive data's original value into a surrogate value. The surrogate can be transformed back to the original value using a "key", which can be thought of as the means to undo the mathematical lock. So while encryption clearly can be used to obfuscate a value, a mathematical link back to its true form still exists. Tokenization is unique in that it completely removes the original data from the systems in which the tokens reside. As such, the advantages of tokenization are:
- Tokens cannot be reversed back to their original values without access to the original “look-up” table that matches them up to their original values. These tables are typically kept in a “hardened” database in a secure location inside a company’s firewall.
- Tokens can be made to maintain the same structure and data type as their original values.
While format-preserving encryption can also retain the structure and data type, it remains reversible to the original value given the key and algorithm. Because tokens cannot be reversed back to their original values, tokenization is frequently the de facto approach to addressing market requirements related to data residency.
Blue Coat Cloud Data Protection Gateway
The Blue Coat Cloud Data Protection Gateway is a flexible data tokenization platform that provides:
- The ability to preserve SaaS functionality across a wide array of applications while maintaining the highest level of tokenization protection.
- High availability and enterprise-level performance, with the ability to scale the solution across multiple dimensions.
- Open integration and configuration options that simplify deployment and facilitate expanded use of the platform.
- A hybrid architecture that gives customers the flexibility to consider multiple deployment options, including hosted models to eliminate the need for any upfront capital expenditures.