Tokenization: How It Works in the Cloud
Blue Coat cloud data tokenization technology solves cloud data residency, data privacy and data security challenges for enterprises using cloud applications. Learn more about how the tokenization process works.
Tokenization is a process by which a sensitive data field, such as a Primary Account Number (PAN) from a credit or debit card, is replaced with a surrogate value called a token. De-tokenization is the reverse process of redeeming a token for its associated original value. While various approaches to creating tokens exist, frequently they are simply randomly generated values that have no mathematical relation to the original data field. This underlies the security of the approach: because a token is random, it is practically impossible to determine the original value of a sensitive data field by knowing only the surrogate token value.
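The tokenize/de-tokenize cycle described above can be sketched in a few lines of Python. This is a minimal illustration, not a production design: the function names and the in-memory dictionary standing in for the token look-up table are hypothetical, and a real deployment would keep that mapping in a hardened database.

```python
import secrets

# Hypothetical in-memory "look-up table"; a real system would use a
# hardened, access-controlled database behind the firewall.
_vault = {}

def tokenize(pan: str) -> str:
    """Replace a sensitive value with a randomly generated token."""
    token = secrets.token_hex(8)   # random value; no mathematical link to the PAN
    _vault[token] = pan
    return token

def detokenize(token: str) -> str:
    """Redeem a token for its associated original value."""
    return _vault[token]
```

Because the token is drawn from a secure random source rather than computed from the PAN, nothing in the token itself can be reversed; only a party with access to the look-up table can recover the original value.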
Depending on the particular implementation of a cloud data tokenization solution, tokens can be used to achieve compliance with requirements that stipulate how sensitive data must be treated and secured by companies, helping them adhere to regulations and standards such as ITAR (International Traffic in Arms Regulations), PCI DSS, HITECH & HIPAA, CJIS, and Gramm-Leach-Bliley.
Whether sensitive data resides within on-premises systems or in the cloud, transmission and storage of tokens instead of original data fields are acknowledged industry-standard methods for securing data.
The PCI Security Standards Council has published the PCI DSS Tokenization Guidelines to provide guidance on the use of tokenization to secure data. The guidelines help organizations maintain compliance with PCI DSS standards, but also serve as a mature set of guidelines for the use of tokenization across multiple industries.
How is Tokenization Different From Encryption?
Encryption is an obfuscation approach that uses a cipher algorithm to mathematically transform sensitive data’s original value into a surrogate value. The surrogate can be transformed back to the original value using a “key”, which can be thought of as the means to undo the mathematical lock.
So while encryption clearly can be used to obfuscate a value, a mathematical link back to its true form still exists. Tokenization is unique in that it completely removes the original data from the systems in which the tokens reside. As such, advantages of tokenization are:
Tokens cannot be reversed back to their original values without access to the original “look-up” table that matches them up to their original values. These tables are typically kept in a “hardened” database in a secure location inside a company’s firewall.
Tokens can be made to maintain the same structure and data type as their original values.
While format-preserving encryption can retain the structure and data type, it’s still reversible back to the original if you have the key and algorithm.
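The second advantage above, structure and data-type preservation, can be illustrated with a short sketch. The function name and the convention of keeping the last four digits are assumptions for illustration; the key point is that the surrogate keeps the length and digit-only format of a PAN while the rest of its digits are purely random.

```python
import secrets

def format_preserving_token(pan: str, keep_last: int = 4) -> str:
    """Generate a random token with the same length and digit-only
    format as the original PAN, keeping its last few digits visible
    (a common convention for receipts and customer service)."""
    random_digits = "".join(secrets.choice("0123456789")
                            for _ in range(len(pan) - keep_last))
    return random_digits + pan[-keep_last:]
```

Because the token matches the shape downstream systems expect, applications and databases that validate or store card-number-formatted fields continue to work without modification, yet the random portion carries no information about the original PAN.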
Blue Coat Tokenization & Residency
Because tokens cannot be reversed back to their original values, tokenization has become the de facto approach to addressing a market requirement known as data residency. Depending on the countries in which they operate, companies often face strict regulatory guidelines governing their treatment of sensitive customer and employee information. These data residency laws mandate that certain types of information must remain within a defined geographic jurisdiction. In cloud environments, where datacenters can be located in various parts of the world, tokenization can be used to keep sensitive data local (resident) while tokens are stored and processed in the cloud.
Learn more about encryption.