Commentary: Blueprint for Security

Virtual computers on multiple servers are key to intel community’s efforts to save money with cloud computing

Jan. 30, 2012 - 12:25PM
By ROBERT DAY AND LOUISE FUNKE

Government agencies and corporations have historically managed their data under a centralized approach of fixed data centers and backup sites. The problem with this strategy is that the world is increasingly decentralized, with large organizations conducting their work at numerous sites around the world. Latency is a major drawback of the centralized approach because the farther a user is from the data center, the longer it takes for a request to reach the data center and for the response to travel back. Recent efforts to consolidate data centers may further increase latency and hurt user performance.

Corporations and intelligence agencies are understandably exploring moving to cloud architectures, in which documents and applications would be spread among numerous computing stacks. The beauty of a cloud architecture is that it enables computing power to be treated like a utility akin to electricity or water. Businesses don’t worry about where their electric power comes from. They know that if they plug into the wall, they can get as much as they need. And so it can be with computing power. With cloud services, it is up to the service provider to decide how to best manage data and applications across its computing infrastructure to minimize cost and maximize availability.

As promising as the cloud approach is, it has proved to be challenging for organizations that manage sensitive information. As an example, in the corporate world there are numerous regulations governing exactly where and how data must be stored. Companies operating in Germany, for example, are not permitted to transmit personally identifiable consumer information outside the country. If you are running a business that sells to German consumers, you need to have a system set up just for Germany or you are out of compliance. Sometimes, it is the corporation itself that restricts the flow of data. Some technology firms bar their engineering data from being stored in certain parts of Asia.

In the intelligence community, the stakes are even higher than for corporations. Agencies have strict rules and regulations designed to protect data and prevent co-mingling of top secret and less classified information within servers, and some government departments need to keep their information separate from one another. This translates into physical server separation within government data centers, and an inability to take full advantage of server consolidation through virtualization, the process of programming sections of servers to act like separate computers. The commercial world has embraced virtualization, but the government has been slower to adopt it. Moving sensitive government data to the cloud is challenging for this reason, but not impossible.

Our companies were aware of the challenges ahead, and in early 2011 we began discussing a collaborative effort to address them for the intelligence community as it weighs how to transition to a secure, communitywide, private cloud architecture. We publicly announced our collaborative agreement in October.

Our software engineers have been working to combine TransLattice’s geographically distributed relational database and platform, called the TransLattice Application Platform, with the LynuxWorks secure data and application partitioning platform, called LynxSecure. The combined solution is designed for transactional applications that rely on relational databases. Relational databases include data sets that may be queried; for example, one data set could contain information on improvised explosive device events, while another data set could contain information on bases that require resupply. A query of the two data sets could provide a list of resupply stops that are within 100 feet of known IED locations.
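
To make that kind of query concrete, here is a minimal sketch in Python using SQLite. The table names, columns and coordinates are invented for illustration and do not reflect any actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Hypothetical schema: positions are local grid coordinates measured in feet.
cur.execute("CREATE TABLE ied_events (id INTEGER PRIMARY KEY, x REAL, y REAL)")
cur.execute("CREATE TABLE resupply_stops (id INTEGER PRIMARY KEY, base TEXT, x REAL, y REAL)")

cur.executemany("INSERT INTO ied_events (x, y) VALUES (?, ?)",
                [(100.0, 200.0), (5000.0, 5000.0)])
cur.executemany("INSERT INTO resupply_stops (base, x, y) VALUES (?, ?, ?)",
                [("Base Alpha", 150.0, 230.0), ("Base Bravo", 9000.0, 9000.0)])

# Resupply stops within 100 feet of any known IED event.
# Squared distances are compared so no sqrt() function is needed.
cur.execute("""
    SELECT DISTINCT r.base
    FROM resupply_stops r
    JOIN ied_events e
      ON (r.x - e.x) * (r.x - e.x) + (r.y - e.y) * (r.y - e.y) <= 100 * 100
""")
print(cur.fetchall())   # -> [('Base Alpha',)]
```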

There also needs to be secure separation of applications and data when they are physically located on the same system. LynxSecure is a secure virtualization software platform built to a Multiple Independent Levels of Security architecture. It has been supplied to the Defense Department and U.S. intelligence community as a prototype secure cloud architecture, and it isolates applications into separate partitions, or buckets, to prevent unintended software interactions or data leakage.

The challenge facing the community is how to expand the prototype, or any cloud architecture, across multiple agencies. TransLattice’s software would do this by running within the LynxSecure-generated virtual computers, resulting in separate clusters of interconnected computer nodes for different agencies or levels of classified information.

The TransLattice Application Platform automatically distributes data among the nodes based on three factors: random assignment, historical usage patterns and the redundancy and location policies set by the customer. The platform automatically replicates the data as needed to meet the redundancy rules set by customers.
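
A rough sketch of how a placement decision could weigh those three factors (this is not TransLattice's actual algorithm; the node names, regions and scoring are invented for illustration):

```python
import random
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    region: str          # e.g. "us-east", "eu-west"
    recent_reads: int    # historical usage signal for this data item

def place_replicas(nodes, allowed_regions, copies_required):
    """Pick nodes for a data item: filter by location policy, favor nodes
    that historically serve this data, and break ties randomly."""
    eligible = [n for n in nodes if n.region in allowed_regions]
    if len(eligible) < copies_required:
        raise RuntimeError("policy cannot be satisfied with available nodes")
    # Score = usage signal plus a small random jitter so load spreads out.
    scored = sorted(eligible,
                    key=lambda n: n.recent_reads + random.random(),
                    reverse=True)
    return scored[:copies_required]

nodes = [Node("n1", "us-east", 120), Node("n2", "us-west", 80),
         Node("n3", "eu-west", 300), Node("n4", "us-east", 10)]
chosen = place_replicas(nodes, allowed_regions={"us-east", "us-west"}, copies_required=2)
print([n.name for n in chosen])   # -> ['n1', 'n2']
```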

Once combined, the two systems will make it possible to store data in a decentralized fashion without the problems of co-mingling data and lack of data location control typified by public cloud offerings. The approach was designed recognizing that customers will demand flexibility. Some nodes may consist of cloud servers storing multiple classes of data, a variety of operating systems and applications. A customer may also choose to use physical appliance nodes in some circumstances. TransLattice software ties these on-premise and cloud nodes together as a cluster that can be managed as a single entity.

Just as in a public cloud, however, the heart of our approach is the process of virtualization. The LynxSecure specialized virtualization software rapidly divides physical computing boxes into multiple virtual computers. Virtualization is what has enabled public cloud providers to become profitable. If cloud providers had to buy separate computers and dedicate them for individual customers, the economics simply wouldn’t be there. Space on some of the computers would be wasted, and there would be redundant power and administrative personnel costs. Virtualization allows you to put more information and applications on fewer boxes in less space, with less power and fewer people. The same economic principle holds for private cloud hosts. Virtualization is essential for cost savings from the most efficient usage of data center hardware.

Cloud service providers offer utility computing in the form of “cloud instances,” in essence, virtual computers. Housing multiple cloud instances on a single physical system is the key to the money-saving promise of public or private clouds. To understand, consider an example: A single computer might be sliced up into five pieces. Application A is running on three of them, application B is running on one and application C on another. Each of these applications could be running on top of different virtualized operating systems (either different versions of Windows, or Windows and Linux, as examples). This fulfills the promise of virtualization as it allows for different types and versions of applications and data to run without having to dedicate a single physical machine to each. However, regular virtualization solutions to date have not been designed to maintain security between virtual machines, and hence all of the virtualized data and applications on a single server have to be accessed by people with the same levels of authorization or clearance.
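
As a rough illustration of that slicing, with hypothetical host, guest and application names:

```python
# Illustrative only: one physical host carved into five virtual machines.
# Under conventional virtualization there is no per-slice clearance level,
# so everything on this host must be handled at the same authorization level.
host = {
    "vm1": {"guest_os": "Windows Server 2008", "app": "A"},
    "vm2": {"guest_os": "Windows Server 2008", "app": "A"},
    "vm3": {"guest_os": "Windows 7",           "app": "A"},
    "vm4": {"guest_os": "Linux",               "app": "B"},
    "vm5": {"guest_os": "Linux",               "app": "C"},
}
for vm, cfg in host.items():
    print(vm, cfg["guest_os"], "->", cfg["app"])
```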

That is one of the reasons that organizations with sensitive information have been concerned about going to the cloud. LynuxWorks’ virtualization approach is based on an underlying separation kernel, which is a software layer that securely isolates computer memory and devices and presents them as if they were physically separate. By adding virtualization to each of these isolated domains, the single hardware platform now has multiple virtual machines that can run different operating systems and applications. The separation kernel ensures that memory and data in one virtual machine are kept separate from another — in other words, that there is no leakage between the virtual machines. In an intelligence application, LynxSecure makes certain there is no co-mingling of top secret and less classified data, by enforcing strict security policies that have been predetermined when the system was initialized and that cannot be changed when the system is running.
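
A toy model of that kind of enforcement, not LynxSecure itself: the partition-to-classification mapping is fixed when the object is created, and any request that does not meet the required level is simply refused. All names and levels here are invented for illustration:

```python
class SeparationKernelModel:
    """Toy model of a separation kernel's access policy: the partition-to-
    classification mapping is set at initialization and never changes."""
    def __init__(self, partition_levels):
        # e.g. {"vm_ts": "TOP SECRET", "vm_s": "SECRET"}
        self._levels = dict(partition_levels)   # private copy; no setter exposed

    def can_access(self, user_clearance, partition):
        order = ["UNCLASSIFIED", "CONFIDENTIAL", "SECRET", "TOP SECRET"]
        required = self._levels[partition]
        return order.index(user_clearance) >= order.index(required)

kernel = SeparationKernelModel({"vm_ts": "TOP SECRET", "vm_s": "SECRET"})
print(kernel.can_access("SECRET", "vm_ts"))   # False: refused, attempt would be audited
print(kernel.can_access("SECRET", "vm_s"))    # True
```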

When running the LynxSecure platform on a single physical server, virtual applications, data and devices can securely co-reside and can be accessed according to the security clearance of the users. Any attempt to access data or applications held in other virtual machines is thwarted by the strict enforcement of the security policies, and audit data will be available to detail any attempted attacks within the system. This separation and virtualization platform provides the security of multiple physical systems without the physical overhead or cost.

The TransLattice software rides inside these virtual machines, creating a cluster of nodes through which data can be moved securely. A cluster can be easily expanded by adding a node or cloud instance. The cluster management software recognizes a new node and alerts the administrator: "I've got a new node here. Do you want to add this to the cluster?" If the administrator approves, the cluster takes care of placing data onto the node, again according to the redundancy and location rules set by the customer and enforced by the strict security policies of the LynxSecure separation kernel.
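
A simplified sketch of that join-and-approve flow; the class, method and node names are invented for illustration:

```python
class Cluster:
    """Minimal sketch of the node-join flow described above (illustrative only)."""
    def __init__(self):
        self.nodes = []
        self.pending = []

    def discover(self, node_name):
        # A new node or cloud instance announces itself; nothing is stored on it yet.
        self.pending.append(node_name)
        print(f"New node detected: {node_name}. Add it to the cluster? [y/n]")

    def approve(self, node_name):
        # Only after explicit administrator approval does the node receive data,
        # placed according to the customer's redundancy and location rules.
        self.pending.remove(node_name)
        self.nodes.append(node_name)
        rebalance_replicas_onto(node_name)   # hypothetical placement step

def rebalance_replicas_onto(node_name):
    print(f"placing replicas onto {node_name} per redundancy/location policy")

cluster = Cluster()
cluster.discover("node-7")
cluster.approve("node-7")
```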

Using this approach, there is no need for a backup data center. Data is replicated across the cluster in a randomized fashion subject to the policies that control how many copies should be maintained at all times and at which locations. In the corporate world, if the customer needs financial information to stay in the continental U.S., the data replicas stay there. The customer could specify that there must always be at least three copies of data on at least two continents, and the system would automatically make that happen. The intelligence community would have the power to do the equivalent. It could even specify that certain classes of information should be kept out of the cloud portions of the system entirely.
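
One way such rules might be expressed and checked, sketched here with invented field names and regions:

```python
# Hypothetical policy: at least three copies, on at least two continents,
# and financial data confined to the continental U.S.
policy = {
    "financial": {"min_copies": 3, "min_continents": 1, "allowed_regions": {"us"}},
    "general":   {"min_copies": 3, "min_continents": 2, "allowed_regions": {"us", "eu", "apac"}},
}

def compliant(data_class, replica_regions, continents_of):
    """Check one item's replica set against the policy for its data class."""
    rule = policy[data_class]
    continents = {continents_of[r] for r in replica_regions}
    return (len(replica_regions) >= rule["min_copies"]
            and len(continents) >= rule["min_continents"]
            and set(replica_regions) <= rule["allowed_regions"])

continents_of = {"us": "north-america", "eu": "europe", "apac": "asia"}
print(compliant("financial", ["us", "us", "us"], continents_of))   # True
print(compliant("general",   ["us", "us", "eu"], continents_of))   # True
print(compliant("financial", ["us", "eu", "us"], continents_of))   # False: leaves the U.S.
```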

If a cluster node is lost for any reason, additional copies of the data are available on other nodes and users can keep working. We recommend a minimum of three copies of all data be maintained across the cluster. If a node goes down, the system waits a few minutes to see if the node will come back online, which occasionally happens due to networking issues. When the time expires, the system determines which data requires additional copies to remain in compliance with the redundancy policy, since data can no longer be retrieved from that node. Then the system automatically makes additional replicas and places them appropriately.
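
A simplified sketch of that grace-period-then-re-replicate behavior; the timings, node names and data structures are invented for illustration:

```python
import time

GRACE_PERIOD = 300   # "a few minutes", illustrative value in seconds
MIN_COPIES = 3       # the minimum recommended above

# Toy state: which nodes hold a replica of each item.
replicas = {"item-42": {"node-1", "node-2", "node-3"}}
live_nodes = {"node-1", "node-2", "node-3", "node-4", "node-5"}

def handle_node_failure(failed_node, wait=time.sleep):
    live_nodes.discard(failed_node)
    wait(GRACE_PERIOD)                      # give a transient outage time to clear
    if failed_node in live_nodes:           # node rejoined during the grace period
        return
    for item, holders in replicas.items():
        holders.discard(failed_node)        # that copy can no longer be retrieved
        while len(holders) < MIN_COPIES:    # restore the redundancy policy
            target = next(n for n in sorted(live_nodes) if n not in holders)
            holders.add(target)             # real placement would also honor location rules

handle_node_failure("node-2", wait=lambda s: None)   # skip the real wait in this demo
print(replicas)   # item-42 again has three replicas on live nodes
```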

In short, if an intelligence community cloud is designed correctly, it can be incredibly resilient and highly available. Coming to this realization has been difficult because it requires a new way of thinking about complex systems. Technologists are accustomed to systems becoming less reliable as more components — nodes in our case — are added. In this case, just the opposite is true. The more nodes are added, the more the data is spread out, and the less chance there will be that a critical piece of data will be on a node that failed. And with more nodes to share the work, it will take less time to provide the needed data. It’s the inverse of how we’ve always thought about the way systems work.
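
That claim is easy to sanity-check with a back-of-the-envelope calculation. Assuming each item keeps three replicas on distinct, randomly chosen nodes, the chance that every one of an item's replicas sits on a node that happens to be down shrinks quickly as the cluster grows:

```python
from math import comb

def p_item_unavailable(total_nodes, failed_nodes, copies=3):
    """Probability that all replicas of one item land on failed nodes,
    assuming replicas are spread uniformly over distinct nodes."""
    if failed_nodes < copies:
        return 0.0
    return comb(failed_nodes, copies) / comb(total_nodes, copies)

for n in (10, 50, 200):
    print(n, "nodes, 3 down:", p_item_unavailable(n, 3))
# 10 nodes: ~0.83%   50 nodes: ~0.005%   200 nodes: ~0.00008%
```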

The combination of these two technologies allows organizations with sensitive information to take advantage of the benefits of cloud computing while maintaining the data security, high availability and excellent performance that are critical to them. The time has come to accept the potential of the cloud, even for the most security-sensitive of customers.

Robert Day is vice president of marketing for LynuxWorks, based in San Jose, Calif., and a board member of the Eclipse Foundation, a not-for-profit corporation that manages the Eclipse open source software community. Louise Funke is an industrial engineer and vice president of marketing for TransLattice, headquartered in Santa Clara, Calif.

This article appeared in the January-February issue of C4ISR Journal.
