The world is changing; it always has, but it is changing faster now than ever before. This general change is translating into even bigger changes in the cyber world. Some of the key areas that are evolving aren’t new, like availability or security. Others, like automation, are maturing quickly, and then there is the ever-present need for “easy.” Easy is a nebulous term, but in this case it refers to ease of procurement, ease of setup, flexibility in platform, and ease of ongoing management.
This accelerated change is being driven by several market and business forces. Key among them are compliance, time to market, cyber-loss risk, and increased competition around the user experience. The change is felt acutely in the ADC space.
Compliance mostly affects security but in some cases touches on availability as well. Tied closely to compliance is the risk of cyber loss caused by a cyber-attack. As the world’s IT infrastructures become less insular and more public (SaaS, public clouds, etc.), organizations have had to move away from protecting infrastructure and toward a security model that protects the data itself. The common standard for protecting web data and integrity while in motion is SSL/TLS. The usage of this encryption technology increased dramatically last year: according to Mozilla Firefox and Google Chrome, SSL-encrypted traffic accounted for roughly 50% of all internet traffic. This increased usage has several unintended consequences.
First, SSL protects our data, but it also limits our visibility into data entering our networks over that same encrypted channel. Why does this lack of visibility matter? In the past, corporations usually ignored incoming SSL traffic, judging that it represented such a minute share of inbound traffic that ignoring it was an acceptable risk. That assumption was shared by most security vendors, the makers of the very equipment that would need that visibility to limit risk. So, for example, if you buy a firewall today and decide to inspect SSL traffic, you lose 70% of its throughput at best and 85% at worst. Obviously, that doesn’t leave enough capacity when 50% of traffic is SSL-encrypted. Compounding this problem, most analysts agree that by the year 2020, well above 70% of the internet will be SSL-encrypted.
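To make the capacity problem concrete, here is a back-of-the-envelope sketch using the figures above. The 10 Gbps firewall rating is a hypothetical for illustration, not a benchmark; the 70%–85% penalty range and the 50% encrypted share come from the text.

```python
# Back-of-the-envelope capacity math using the figures from the text.
# The 10 Gbps rating is hypothetical; the penalties are the quoted
# 70% (best case) to 85% (worst case) throughput loss with SSL inspection on.

rated_gbps = 10.0
best_case_penalty = 0.70
worst_case_penalty = 0.85

# Effective capacity once SSL inspection is enabled:
best_case = rated_gbps * (1 - best_case_penalty)    # ~3.0 Gbps
worst_case = rated_gbps * (1 - worst_case_penalty)  # ~1.5 Gbps

# With ~50% of a 10 Gbps link SSL-encrypted, ~5 Gbps needs inspection --
# far more than either figure above can handle.
ssl_traffic = rated_gbps * 0.5
print(best_case, worst_case, ssl_traffic)
```

Even in the best case, the firewall can inspect barely more than half of the encrypted traffic it faces.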
So what’s the answer? Using an external device that was purpose-built for SSL offload and inspection. For the past 15 years, ADCs have been doing SSL offload at high throughputs and have become a very cost-effective solution to the problem. However, pushing against this general need for visibility is our old friend, compliance. Yes, you need to secure your data, but you also must not violate a person’s right to privacy. This is a far trickier problem to solve, and it requires advanced capabilities in the chosen SSL solution that allow it to selectively choose what to inspect and what to ignore. This functionality has gone from a “nice-to-have” to a requirement as the different regional PII laws have come into force.
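The inspect-versus-ignore decision above can be sketched as a simple policy check. This is a minimal illustration, not any vendor's implementation: the hostnames, category names, and the static category map are all hypothetical stand-ins for a real URL-categorization feed.

```python
# Minimal sketch of a selective SSL-inspection policy: bypass categories
# likely to carry PII (banking, healthcare) and inspect everything else.
# The category map is a hypothetical stand-in for a real categorization feed.

PRIVACY_BYPASS = {"banking", "healthcare", "government"}

CATEGORIES = {
    "examplebank.com": "banking",
    "clinic.example.org": "healthcare",
    "cdn.example.net": "content-delivery",
}

def should_inspect(hostname: str) -> bool:
    """Return True if the session should be decrypted and inspected."""
    category = CATEGORIES.get(hostname, "uncategorized")
    return category not in PRIVACY_BYPASS

print(should_inspect("examplebank.com"))  # PII category: pass through untouched
print(should_inspect("cdn.example.net"))  # safe to decrypt and inspect
```

In practice the lookup key is usually the TLS SNI or the certificate subject, so the decision can be made before any payload is decrypted.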
If this were the end of the data-in-motion protection story it would be complex enough, but it is not. Like any other arms race (and cyber security is an arms race), the speed of adoption and the complexity of that security are increasing exponentially. Most of us lived through the 1024-bit to 2048-bit (1K to 2K) key upgrade approximately two years ago, recently enough that we remember how the change affected the performance of our servers and other equipment. We had to make that change because 1024-bit keys no longer offered enough security margin. Changes like this have always happened, but seldom at this pace. We are on the cusp of the second major SSL algorithm change in just three years, and given recent history, it’s likely there will be another within the next five years. The interesting thing about these SSL changes is that they are being driven by the market in general; companies have to follow suit or be left behind. If, for example, TLS 1.3 in its current form becomes the standard, everyone will have to move from static RSA key exchange to ECC-based (ECDHE) key exchange, because static RSA key exchange is not supported in the TLS 1.3 standard. This change, like the change to 2K keys, will affect the performance of everyone’s existing infrastructure. The good news is that some corporations have built purpose-built solutions with a forward-thinking view to help mitigate the risk of these changes. If you are wondering where this problem affects you, there are four key areas to bear in mind.
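You can see the TLS 1.3 shift directly with Python's standard-library `ssl` module. The sketch below (which assumes a Python build linked against OpenSSL 1.1.1 or newer) pins a client context to TLS 1.3; note that the TLS 1.3 cipher-suite names no longer mention a key-exchange algorithm at all, because every handshake uses (EC)DHE.

```python
import ssl

# Build a client context restricted to TLS 1.3. Under TLS 1.3, static RSA
# key exchange no longer exists; every handshake uses (EC)DHE, so forward
# secrecy is mandatory -- but offload gear sized for RSA may slow down.
# Assumes a Python build linked against OpenSSL 1.1.1+.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3

# TLS 1.3 suites name only the AEAD cipher and hash (e.g.
# TLS_AES_256_GCM_SHA384); the key exchange is negotiated separately.
tls13_suites = [c["name"] for c in ctx.get_ciphers()
                if c["protocol"] == "TLSv1.3"]
print(tls13_suites)
```

Certificates can still carry RSA signatures under TLS 1.3; it is the static RSA key *exchange* that is gone, which is what forces the capacity re-planning described above.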
1) Server offload (the application exchanges keys with the client, one of the most resource-intensive functions from the server’s point of view).
2) SSL inbound inspection for security visibility purposes (you can’t inspect what you can’t see).
3) SSL outbound inspection for security visibility on outbound traffic (similar to the above, but usually closely tied to DLP, and it needs the intelligence to avoid viewing PII data).
4) SSL inspection up front for DDoS protection. This sounds like #2 and #3, but remember that SSL inspection is very resource-intensive, so a solution with built-in intelligence that limits how much inspection has to take place is especially key in a DDoS environment, lest the SSL appliance itself become the most likely DDoS target.
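The "limit how much inspection takes place" idea from point 4 can be sketched as a per-source budget: sources that exceed a handshake-inspection cap get challenged or dropped instead of decrypted, so a flood cannot exhaust the inspection engine. All thresholds and names here are illustrative, not taken from any product.

```python
import time
from collections import defaultdict, deque
from typing import Optional

# Illustrative sketch: cap full SSL inspection per source IP per window.
# Sources over budget are challenged/dropped rather than decrypted.
WINDOW_SECONDS = 1.0
MAX_INSPECTIONS_PER_SOURCE = 10

_recent = defaultdict(deque)  # source IP -> timestamps of recent inspections

def admit_for_inspection(src_ip: str, now: Optional[float] = None) -> bool:
    """Return True if this handshake may consume a full-inspection slot."""
    now = time.monotonic() if now is None else now
    q = _recent[src_ip]
    # Evict timestamps that have aged out of the window.
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    if len(q) >= MAX_INSPECTIONS_PER_SOURCE:
        return False  # over budget: challenge or drop instead of decrypting
    q.append(now)
    return True
```

A real appliance would combine this with SYN-proxy-style challenges and reputation data, but the budgeting principle is the same: never let decryption cost scale linearly with attacker traffic.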
Availability has always been a key tenet of what ADCs were meant to do, and it remains the key function that most ADCs offer. I believe we will see a renewed interest in using ADCs as a tool for testing appliances/applications that are running disparate code in scale-out environments, in order to decrease time to market while reducing risk: if there is an unexpected bug, we can easily reroute the traffic without an outage. To extrapolate this to a very specific example, think about migrating IS vendors. There is no need to cut over, run both in parallel, or run two units from the same vendor with different code.
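The steering pattern described above can be sketched as a weighted split between a stable pool and a pool running the new code, with a rollback that shifts the weight back without an outage. Pool names and weights are illustrative assumptions, not any particular ADC's configuration.

```python
import random

# Sketch of ADC-style traffic steering: a weighted split between a stable
# backend pool and a pool running new code. Names and weights are illustrative.
POOLS = {"stable": 0.95, "canary": 0.05}  # 5% of sessions try the new code

def pick_pool(weights: dict) -> str:
    """Weighted random choice of backend pool for a new session."""
    r = random.random()
    cumulative = 0.0
    for pool, weight in weights.items():
        cumulative += weight
        if r < cumulative:
            return pool
    return next(iter(weights))  # fallback for floating-point edge cases

def roll_back(weights: dict) -> dict:
    """On a bad canary, send 100% back to stable -- no outage, no redeploy."""
    return {pool: (1.0 if pool == "stable" else 0.0) for pool in weights}
```

Because the split happens at the ADC, "rerouting without an outage" is just a weight change; existing sessions to the healthy pool are untouched.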
The other key area is easy. Easy can mean many things, but if you tie it to the ever-growing movement toward agile development and DevOps, we can home in on several key areas one should consider. First, ease of procurement: can I call up a partner and get an easy-to-understand quote? Are payment arrangements flexible, CapEx or OpEx? Can I buy in cloud marketplaces? Do templates exist for setup? Can I have a fully managed solution? If I want to manage it myself, is automation supported natively for setup and management? Is there a native framework that supports operational automation? Are there tools and components that save developers time, like HTTP/2 gateways, application optimization tools, SLA management features, and built-in web application security functions such as web application firewalls and authentication gateways?
Ultimately the world is changing quickly, with web applications at the center of this change. As a result, we need to do what we can to meet the challenges in the most effective way.