Grid computing, a combination of computer resources from multiple administrative domains utilized to reach a common goal, has today evolved into what is known as “cloud computing.” Similar to grid computing, cloud computing is typically geographically dispersed. However, cloud computing not only provides more computing resources on demand but also provisions new services and capabilities as needed by individual customers.
Infrastructure as a Service (IaaS) providers, who operate and maintain cloud-based networks, are looking into application delivery and other solutions to attract new customers, maintain customer satisfaction and increase revenues beyond those generated from standard IaaS offerings. Deploying an Application Delivery Controller (ADC) in the cloud network, providing the applications of the providers’ customers with maximum availability, best performance and complete security, is mandatory, but certainly not enough. In fact, today’s IaaS providers are highly interested in creating value-added cloud services that not only fit today’s business needs but also prove future proof. This means standardizing on an ADC that gives IaaS providers an advantage over others by letting them sell fully customized premium services to each customer, offering a clear differentiator in cloud services that translates into increased revenues and customer loyalty. To address these challenges efficiently, an application delivery solution must meet six criteria:
Providing a tiered IaaS model based on different SLAs: Cloud providers serve customers with different SLA needs. While most IaaS providers offer services with no or low SLA guarantees, customers who run mission-critical applications require high SLAs for these services. To address these requirements, IaaS providers should deploy the ADC in different form factors: a dedicated physical ADC, delivering a high SLA; multiple ADC instances on a shared physical ADC, also delivering a high SLA while enabling ADC consolidation with performance predictability and complete resource reservation; and a soft ADC running on a general-purpose server, providing an SLA on a best-effort basis.
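The tiering described above can be sketched as a simple catalog mapping SLA tiers to ADC form factors. This is a minimal illustration only; the tier names, fields and values are assumptions, not any vendor's specification.

```python
# Hypothetical mapping of SLA tiers to ADC form factors.
# Tier names and attributes are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AdcOffering:
    form_factor: str        # how the ADC is deployed for this tier
    sla_level: str          # guarantee the provider can commit to
    resources_reserved: bool  # whether capacity is fully reserved

TIERS = {
    "premium":  AdcOffering("dedicated physical ADC", "high", True),
    "business": AdcOffering("ADC instance on shared physical ADC", "high", True),
    "basic":    AdcOffering("soft ADC on general-purpose server", "best-effort", False),
}

def offering_for(tier: str) -> AdcOffering:
    """Look up the ADC form factor a given SLA tier maps to."""
    return TIERS[tier]
```

A provider's service catalog would consult such a mapping when a customer selects a tier, so that provisioning picks the matching form factor automatically.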
Allowing easy migration of applications/services from the enterprise data center to the cloud data center: While moving applications and infrastructure into the cloud, whether in a planned fashion or via a “cloud burst”, IT managers want to maintain the same network design as in their original data center to reduce risk and minimize business downtime. Additionally, an IT manager who is familiar with a certain ADC wants to continue using it in the cloud in exactly the same fashion. Therefore, an ADC deployed in the cloud should provide the same functionalities and capabilities from the customer’s viewpoint and address the very same network topology and services they are familiar with.
Providing advanced customer self-service capabilities: Just as SLA requirements differ, different customers also have different ADC needs, ranging from basic layer-4 load balancing to advanced application delivery techniques, including application acceleration, integrated security, layer-7 policies, URL rewrite rules and bandwidth management. Therefore, it is crucial for IaaS providers to deploy a best-of-breed ADC offering advanced ADC capabilities to increase potential revenue and provide enterprise IT managers exactly the same ADC experience regardless of application delivery service location.
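To make the layer-7 capability concrete, a URL rewrite rule of the kind a self-service portal might expose can be sketched as a pattern/replacement pair. The paths and pattern here are made-up examples, not a real ADC's rule syntax.

```python
import re

# Illustrative layer-7 URL rewrite rule: requests to a legacy path are
# transparently rewritten to the new application path. The pattern and
# replacement are hypothetical examples.
def rewrite_url(path: str,
                pattern: str = r"^/old-app/(.*)$",
                replacement: str = r"/app/v2/\1") -> str:
    """Apply the rewrite rule; non-matching paths pass through unchanged."""
    return re.sub(pattern, replacement, path)
```

In a self-service model, a customer would define rules like this themselves through the provider's portal, without provider involvement.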
Dynamically aligning application traffic and VM resources: Since IaaS providers serve dozens, hundreds and sometimes thousands of customers, a cloud data center has many moving parts and ever-changing capacity needs. As a result, ongoing IT administration is needed to keep the ADC aligned with the cloud network. To improve cloud IT productivity, reduce ongoing maintenance and eliminate human error, it is recommended to have an ADC that is integrated into the cloud ecosystem, meaning that the ADC interacts with the orchestration systems to provision, decommission and migrate ADC instances from one location to another on demand, without any human intervention. This implies an open API on the ADC and adapters between the ADC and the orchestration systems, automating the process workflow.
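The open-API integration described above can be sketched as a thin adapter that builds the requests an orchestrator would send to an ADC management API. The endpoint URL, paths and payload fields are all hypothetical; a real ADC's API will differ.

```python
import json

# Hypothetical ADC management endpoint; a real deployment would use the
# vendor's documented API instead.
ADC_API_BASE = "https://adc.example.net/api/v1"

def provision_request(customer_id: str, datacenter: str,
                      capacity_units: int) -> tuple[str, str]:
    """Build the (url, body) pair an orchestrator would POST to create
    an ADC instance for a customer."""
    url = f"{ADC_API_BASE}/instances"
    body = json.dumps({
        "customer": customer_id,
        "datacenter": datacenter,
        "capacity_units": capacity_units,
    })
    return url, body

def decommission_request(instance_id: str) -> str:
    """URL an orchestrator would DELETE to remove an ADC instance."""
    return f"{ADC_API_BASE}/instances/{instance_id}"
```

The point of the adapter layer is that the orchestration system never touches the ADC directly; provisioning, decommissioning and migration become workflow steps rather than manual tasks.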
Elastically scaling within a single cloud data center and across multiple data centers: When services are moved from the enterprise data center to the cloud, providers expect them to remain available without performance degradation. Therefore, it is crucial that the ADC be able to monitor the performance levels of an application and inform the orchestration system when additional computing resources need to be provisioned, both per data center and across multiple data centers, while taking into consideration the available capacity within each data center in a way that guarantees the best response time for end users. In addition, applications might require increased processing capacity due to higher transactions per second (TPS) and higher session concurrency. To address these needs, the ADC should allow for easy scaling of throughput capacity by adding more capacity units and new application delivery services on demand, while continuing to use the same hardware to eliminate service downtime.
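The scale-out decision described above can be sketched as two small checks: detect SLA breaches from sampled response times, then pick the data center with the most spare capacity. Threshold values and function names are assumptions for illustration.

```python
# Minimal sketch of the scale-out logic: the ADC watches per-application
# response times and signals the orchestration system when capacity is
# needed. The 10% breach threshold is an assumed policy, not a standard.
def needs_scale_out(response_times_ms: list[float], sla_ms: float,
                    breach_ratio: float = 0.1) -> bool:
    """True when more than `breach_ratio` of sampled responses exceed
    the SLA response-time target."""
    if not response_times_ms:
        return False
    breaches = sum(1 for t in response_times_ms if t > sla_ms)
    return breaches / len(response_times_ms) > breach_ratio

def pick_datacenter(available_capacity: dict[str, int]) -> str:
    """Choose the data center with the most spare capacity units,
    approximating the cross-data-center placement decision."""
    return max(available_capacity, key=available_capacity.get)
```

In practice the ADC would feed these signals to the orchestration system through its API, so that new capacity units come online before end users see degradation.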
Providing Disaster Recovery and Global Traffic Redirection services: IT managers use cloud providers as a means of disaster recovery (DR) and for global traffic redirection. Therefore, for an effective DR solution, it is essential that the same ADC be deployed in the cloud data center and the enterprise data center, supporting the same user scenarios and policies on both ends.

Amir Peles is Chief Technology Officer at Radware. To read more of his articles, please visit his columnist page.
Edited by Tammy Wolf