Blog

Can data be “jurisdiction-independent”?

Written by Paul Lewis | Apr 30, 2021 3:02:00 PM

The world of data has become increasingly complex with the advent of nation-specific data privacy laws. Certain nations, such as France, Germany, and Russia, require their citizens’ data to be stored on physical servers within their borders. Others, like the United States, require federal agencies to store data only on servers located within the country. This is further complicated by industry: patient records often need to be treated differently than financial records, and the rules change depending on where the data physically resides.


In recent years, there has been a strong push to move data to the next frontier: large public cloud providers. But these giant providers are global companies with data centers sprinkled across many countries, completely connected by the cyber superhighway. If everything is connected, how can a company be certain that its data will remain in a specific country and compliant with applicable laws? It can’t.


The bad news for enterprise IT and legal departments is that they are ultimately responsible for complying with these laws. While the cloud provider will likely make certain promises as to where data is stored, it is actually the data owner who is accountable.


Take, for example, a multinational company’s human resources department. Suppose some employees live in Paris, while others reside in Munich, and still others in Chicago. To remain compliant with local laws, the company would be required to maintain three disparate systems to store employee data. Not ideal.


The world got it wrong when it decided to govern data in the jurisdiction in which it is “at rest.” Perhaps a better way to safeguard data would be to govern it where it is created, consumed, and analyzed. But, at rest? One day our data may be at rest on Mars! Today, it is at rest in places like Amazon Web Services or Alibaba Cloud. We need to get away from the mindset of storing our data on the proper side of a mountain-range border in order to remain legally compliant. Instead, we should focus on ensuring our data is accessible and readable only from within a certain location, not on where it sits at rest waiting to be accessed.


Data is treated as if it were a physical object, yet it violates the very principles of the physical world. Data can be created and destroyed. Data can be reproduced a million times. Data can be transported at the speed of light anywhere in the world. Data is not a physical object; it is a virtual presence. So why do we care about the location of the magnetic media it occupies at a given moment in time, when it can be moved at lightning speed to another physical location?


Consider the difference in how we choose to protect a gold coin (a physical object) and a bitcoin (a virtual presence). A gold coin is placed in a secure box, in a secure vault, within a secure building - layer upon layer of physical security. Conversely, a bitcoin sits in the open: it is recorded on a public ledger that anyone can read, with no physical security at all. Its protection is access control, reserved for its owner alone. It may live in the public domain, but only its rightful owner can access it.


While the world might not be ready to institute a United Nations Data Privacy Regulation, there is a way to make virtual data law-abiding no matter where the servers are located. Destroy it. If data is destroyed and not accessible, it is presumably compliant with data protection laws, right? But if the data is destroyed, it is useless. Unless, of course, the destroyed data can be recreated.


That is precisely how a safe data harbor works. A data harbor is not a physical location; it is a virtual one. It does not keep everything in one place, but instead keeps fragments of everything in lots of different places. And the fragments are not usable data but rather remnants of destroyed data. If the data is completely unintelligible and doesn’t exist in a complete format anywhere in the world, wouldn’t it be compliant with data protection laws? Such is the makeup of a safe data harbor: a place where data can exist in a destroyed format and be recreated only by the rightful owner. And the physical location where the data is recreated is significant, because as soon as the data is recreated it could, and should, become subject to the laws of that location.
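To make the idea concrete, here is a minimal sketch of the shred-and-scatter pattern, assuming an encrypt-then-split approach. The function names and design choices are illustrative assumptions, not Calamu’s actual implementation: the data is encrypted with the owner’s key, and the ciphertext is split into fragments so that no individual fragment, and no collection of fragments without the key, is intelligible on its own.

```python
# Hypothetical sketch of "shred and scatter": encrypt, then split the
# ciphertext into fragments that are individually indistinguishable from
# random noise. Names and primitives here are illustrative assumptions.

import os
from hashlib import sha256


def _keystream(key: bytes, length: int) -> bytes:
    # Derive a keystream from the owner's key (illustrative only; a real
    # system would use an authenticated cipher such as AES-GCM).
    out = b""
    counter = 0
    while len(out) < length:
        out += sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]


def shred(data: bytes, key: bytes, fragments: int = 3) -> list[bytes]:
    """Encrypt the data, then split the ciphertext into XOR shares."""
    ciphertext = bytes(a ^ b for a, b in zip(data, _keystream(key, len(data))))
    # N-1 random pads plus one share that XORs them back to the ciphertext.
    # Any subset smaller than N reveals nothing about the ciphertext.
    shares = [os.urandom(len(ciphertext)) for _ in range(fragments - 1)]
    last = ciphertext
    for s in shares:
        last = bytes(a ^ b for a, b in zip(last, s))
    shares.append(last)
    return shares


def recreate(shares: list[bytes], key: bytes) -> bytes:
    """Recombine all the shares and decrypt; only the key holder gets plaintext."""
    ciphertext = bytes(len(shares[0]))
    for s in shares:
        ciphertext = bytes(a ^ b for a, b in zip(ciphertext, s))
    return bytes(a ^ b for a, b in zip(ciphertext, _keystream(key, len(ciphertext))))


if __name__ == "__main__":
    owner_key = os.urandom(32)
    record = b"Employee record: Paris office"
    pieces = shred(record, owner_key)          # scatter these across providers
    assert recreate(pieces, owner_key) == record
```

In this sketch, each fragment could live in a different cloud region; the complete data never exists anywhere until the owner gathers the fragments and decrypts them with the key.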


So, if data is created in Vancouver, it should be subject to the laws of Canada. If it is then destroyed and its remnants are pushed to a virtual data harbor, where the complete data no longer exists, it should be free from any rules or laws. If the destroyed data is later recreated in Frankfurt, it could, and should, immediately become subject to German law.


A data harbor makes sense only if the data cannot be recreated by anyone other than the rightful owner. It cannot be recreated by a hacker, the cloud provider, or any government. If control over the data truly resides with the data owner alone, we can imagine a world where the risk of non-compliance is dramatically reduced, and maybe even eliminated.
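Continuing the hypothetical sketch above, the property this argument relies on is that access control is simply key possession: anyone who gathers the fragments but lacks the owner’s key recovers only noise.

```python
# Continuing the hypothetical sketch above: fragments without the owner's
# key are useless. Decrypting with any other key yields only noise.
intruder_key = os.urandom(32)
assert recreate(pieces, intruder_key) != record
```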


Let’s all stop treating data as a physical object and pretending it needs to exist in a certain place, and start thinking about pushing destroyed data to a safe data harbor where only the rightful owner can recreate it.


Paul Lewis, CEO of Calamu, Creator of the World’s First Safe Data Harbor