Enterprises are turning to the cloud in an accelerating trend. According to Gartner, cloud infrastructure and services will make up the bulk of new IT spend by 2016. The firm projects that nearly 50% of large enterprises will be deploying hybrid cloud architectures by the end of 2017. IDC (International Data Corporation) reports that spending on public IT cloud services should surpass $107 billion in 2017. IDC also expects public IT cloud services to drive 17% of overall spend on IT products, and nearly half of all growth across five technology categories: applications, system infrastructure software, Platform as a Service (PaaS), servers and basic services.
No matter how companies consume cloud services (as software, platform, infrastructure or applications), service delivery will remain essential to success for all Cloud Service Providers (CSPs). The quality of service delivery directly affects several factors:
User Experience
This is probably the most critical aspect of successful cloud service delivery. After all, if the user does not have a good experience, the service will not be successful.
Application Performance
How well does the application perform? Are there issues with latency, access or usability?
Reach
Cloud suggests a geography-neutral model. The service provider should be able to reach a wider market, and take advantage of more revenue-generating opportunities, constrained only by service scalability and performance capabilities.
Customer Acquisition and Retention
User experience, application performance and reach all play into customer acquisition and retention, and impact churn.
To optimize performance in each of these areas, users and buyers should source their cloud-computing needs from reliable, flexible, scalable, secure CSPs. CSPs must, in turn, make informed choices when building out their cloud delivery infrastructure so that they can serve their customers more effectively. What follows is a look at the key business and technology considerations for CSPs, and at how CSPs can optimize their service delivery strategy. It also addresses the role of the network in cloud service delivery, and the importance of proximity in improving user experience, application performance, reach and revenue.
Cloudy with a Chance of Proliferation
In the fourth annual “Future of Cloud Computing” report, respondents reported a SaaS adoption rate of 72%, a fivefold increase in just four years. Approximately seven in 10 participants indicated plans to move more business applications to the cloud over the next couple of years. And 49% of respondents reported that they either planned to or were already running their companies on cloud infrastructure. Although many legacy applications remain on premises, future refresh cycles may prompt enterprises to look for cloud-based alternatives rather than continuing to update and maintain applications and infrastructure in-house. Net-new applications will likely be consumed as cloud services as well.
A few events over the past few years have accelerated cloud service adoption. The virtual storefront has expanded the role of data center services in the supply chain and presented a host of new challenges, such as meeting growing demands for capacity to accommodate the large amounts of data traveling over the network. As data volumes have grown, the network has become the bottleneck. Virtualization has enabled companies to make better use of existing resources, saving money and freeing up capacity for new capabilities. Virtual servers have also relieved pressure on enterprises running out of room for the hardware needed to meet growing data needs; resources in the data center can now power internal applications and services. The logical extension of virtualization is cloud and infrastructure service offerings: pools of on-demand computing resources that can be shared by multiple tenants.
As a result, servers, storage and network functions are considered economic units that can be leased, on-demand, with the click of a mouse. Instead of managing CapEx and depreciation to build out infrastructure that, in the end, is underutilized, capacity management has instead become an exercise of OpEx management. No matter how large or small the enterprise, this new cloud services consumption model means there is no longer a concern about running out of capital-intensive resources. But the network can still be a bottleneck. If a user is too far away from the network node to which the end device connects, congestion and latency from massive amounts of data traveling between nodes can affect application performance, thereby impacting service delivery and degrading the user experience.
Infrastructure elements in various locations are now networked together over Wide Area Network (WAN) links, creating a dependency on the CSPs to provide reliable, scalable, high-quality delivery of those resources. When this service chain is compromised in any way, problems occur. As such, the network has become a lifeline to critical IT resources.
The move that Netflix made to streaming video as a service is a perfect example. Netflix customers with high-quality network connections were the ones most likely to switch to streaming, because they could still have a DVD-quality experience. In comparison, Netflix customers who experienced jitter/flutter/blocking while streaming video (all signs of poor network conditions) were more likely to continue to use DVDs.
Similarly, business users who experience congestion and latency in the network when they access their cloud services are more likely to opt for installing software on premise instead, so they can enjoy a better user experience.
Change Brings Challenges
Although the network isn’t the only consideration, it’s a critical one. For all the new capabilities the Internet provides, we’ve run up against the limits of the delivery mechanism. In the past there was plenty of bandwidth, primarily because many networks were over-provisioned. But with the proliferation of mobile apps, the Internet of Things and Big Data flooding the “pipes,” the network has once again become a bottleneck, both for the consumer experience and for enterprise IT departments using the cloud.
Another challenge is making application developers aware of service delivery issues. Often a disconnect exists between the developers of cloud applications and those who manage the delivery of those applications. Development and operations personnel may work in silos and not understand each other’s functions. Developers sometimes overlook the less-glamorous aspect of service delivery; the actual network interconnections that enable data transmission are taken for granted and yet are still expected to perform optimally. As a result, many provider solutions fail to accommodate the aspects of networking that may impact quality of service.
One way to solve some of these challenges is to bring the service node closer to the end users. By minimizing the distance that data has to travel, CSPs can overcome the network congestion and latency issues that affect application performance, therefore providing a better customer experience.
A Closer Look at Proximity
Proximity is an essential, yet often overlooked characteristic of the network, particularly with the increasing complexity of today’s services and applications. Distance between infrastructure elements and users will have a direct impact on the performance of any distributed or cloud system. But proximity isn’t limited to where your users are today; you must take into consideration your addressable market. Where will your users be in the future, and how close will your service nodes be to those future users?
To better understand the impact of proximity on system performance, consider FedEx’s massive physical network of operating facilities and drop-off locations. Despite the 10.5 million shipments it processes daily, FedEx prides itself on fast, reliable delivery. How does FedEx do this? It has created a physical distribution model that provides tremendous reach and close proximity to customers, thereby achieving the desired service levels. Major warehouse and distribution centers are often colocated with major multi-tenant airports; local depots and retail shipping centers are often colocated within multi-tenant strip malls. Similarly, network proximity reduces the latency of cloud service delivery, enabling higher-speed interconnections and a better service experience.
The Internet we use today doesn’t provide the highest quality experience for business applications. The performance of such applications depends on all parties involved providing a consistent quality of service (QoS) and class of service (CoS). The reality, however, is a compromise. To control costs and more effectively exchange traffic with each other, carriers typically use public IP networks. These public IP networks are greatly affected by the increase in apps and services, often congesting the end-to-end traffic flow and affecting critical application performance. Meeting agreed-upon service levels can be difficult when there’s congestion in the network, and the farther the application source is from the user, the more networks and public IP points the data must pass through, slowing down delivery even more.
In the past, improving application response times required either installing the software directly on the end device, or accessing the application via the local area network (LAN) closest to the end device. Today, data center colocation allows extended reach and the ability to move into additional geographic markets more easily, to access more potential customers.
Significant benefits can arise from deploying a distributed service delivery architecture leveraging multiple colocation facilities. This type of infrastructure facilitates network aggregation, reduces costs, and enables connectivity to network and cloud providers for hybrid IT/cloud architectures. Placing application resources and content in distributed networked colocation centers brings the content and processing closer to the end user, reducing latency, increasing application performance and improving the user experience significantly.
Why Proximity Matters So Much
Users are sensitive to the speed and responsiveness of cloud-based applications; if users are dispersed geographically, they may experience service latency, depending on their proximity to the service origin. A poor user experience leads to frustration, and application providers will see more churn and experience lower retention rates as a result. Additionally, the pool of potential customers shrinks if your services are only consumable close to the network nodes.
Let’s look at user experience, performance and reach in greater detail:
User Experience
Recent research indicates that organizations can lose significant revenue from just one additional second of delay beyond defined performance baselines for Web applications. Not only does this affect an organization’s ability to sign up new customers, it can damage brand perception. According to the Aberdeen Group, a one-second delay in page response time can reduce conversion rates by 7%. Frustrated users simply abandon the shopping cart and either find another vendor or change their minds about the purchase. Latency may also reduce the number of transactions a vendor can process, limiting revenue further. For example, a one-second delay could cost an e-commerce site that makes $100,000 per day roughly $2.5 million in lost sales annually. A single-second delay also reduces page views by 11% and decreases customer satisfaction rates by 16%. Business users may feel the effects of latency even more acutely, as their work depends on robust application performance and reliability. In one well-known test, a half-second delay in delivering search results resulted in a 20% drop in Google traffic.
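The $2.5 million figure cited above follows from simple arithmetic. As a quick sanity check, assuming the Aberdeen 7% conversion loss applies uniformly to the site’s daily revenue and is annualized:

```python
# Rough annualized cost of a one-second page delay, assuming the 7%
# conversion-rate drop (Aberdeen Group) applies to all daily revenue.
DAILY_REVENUE = 100_000   # e-commerce site revenue per day (from the text)
CONVERSION_LOSS = 0.07    # 7% fewer conversions per extra second of delay

daily_loss = DAILY_REVENUE * CONVERSION_LOSS  # about $7,000 per day
annual_loss = daily_loss * 365                # about $2.56 million per year
print(f"${annual_loss:,.0f}")                 # → $2,555,000
```

The point is less the exact number than the compounding: a delay measured in single seconds per page translates into a seven-figure annual loss.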
Application Performance
Software developers building cloud apps or services are not always privy to precisely where the application will be consumed; instead, they focus on building a “killer app.” It’s imperative to understand that an application’s performance depends not just on the software stack upon which it is built, but also on the underlying infrastructure resources required to support it. Traditionally, addressing performance issues has meant fine-tuning application code, implementing performance management solutions or installing additional hardware. Those methods can help to an extent, but at some point keeping up with demand becomes too costly, too inefficient or too unwieldy. Data in fiber networks travels at roughly two-thirds the speed of light in a vacuum, and that physical limit does not change; it is the de facto upper bound on data transmission speed. Simply stated, distance matters, and the proximity of distribution becomes critical to application performance.
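The physics behind “distance matters” can be made concrete. A minimal sketch, assuming light in fiber propagates at roughly 200 km per millisecond (about two-thirds of its vacuum speed) and ignoring routing, queuing and protocol overhead, all of which only add delay:

```python
# Physical lower bound on round-trip latency over optical fiber.
# Light in glass travels at roughly 200,000 km/s, so distance alone
# sets a latency floor that no code tuning or hardware can remove.
FIBER_SPEED_KM_PER_MS = 200.0  # ~200 km per millisecond in fiber

def min_rtt_ms(distance_km: float) -> float:
    """Best-case round-trip time for a given one-way fiber distance."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_MS

print(min_rtt_ms(40))    # nearby metro node: about 0.4 ms
print(min_rtt_ms(4000))  # cross-continent:   about 40 ms
```

Every chatty application round trip pays this floor again, which is why moving the service node closer to the user improves performance in a way no amount of tuning can.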
Reach
A larger Total Addressable Market (TAM) provides a larger potential customer base, along with the opportunity to sell more services. Proximity of the network to users helps eliminate the performance issues that degrade the user experience, and enables you to expand your reach without any degradation in service delivery. As the FedEx example illustrates, the closer the end points (in this case, network nodes) are to customers, the faster a service can be delivered.
The Four Classes of Applications
Not all applications have the same performance requirements and expectations. Companies should therefore characterize their applications by their sensitivity to latency, charted against their business criticality. Applications generally fall into four classes: proximity, real-time, priority, and best effort.
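As an illustrative sketch only, the two-axis characterization can be expressed as a quadrant mapping. The four class names come from the text; the boolean axes and which quadrant maps to which class are assumptions made here for illustration:

```python
# Hypothetical quadrant mapping of the two axes described above onto
# the four application classes. The class names come from the text;
# the axis thresholds and quadrant assignments are illustrative only.
def classify_app(latency_sensitive: bool, business_critical: bool) -> str:
    if latency_sensitive and business_critical:
        return "proximity"    # must be served close to users
    if latency_sensitive:
        return "real-time"    # responsiveness matters, stakes are lower
    if business_critical:
        return "priority"     # important, but tolerant of latency
    return "best effort"      # neither axis is pressing

print(classify_app(True, True))    # → proximity
print(classify_app(False, False))  # → best effort
```

In practice each axis is a spectrum rather than a boolean, but even a coarse classification like this tells a CSP which workloads justify distributed service nodes.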
Make It Reliable and Secure
For all the clear advantages that proximity provides end users and enterprises, it doesn’t mean much if service delivery suffers from frequent or avoidable interruptions. Proven operational expertise and reliability should be top of mind when researching colocation facilities. If a provider can’t point to an average annual uptime record of > 99.999%, keep looking. Nothing impacts a user’s quality of experience more than when service is unavailable. Similarly, IT and data security concerns are often considered separately from the mechanics of optimizing service delivery, but can profoundly impact it. CSPs should be certain their colocation facilities have tested, resilient security standards and are well-protected from internal and external intrusion. If users aren’t confident a CSP can keep their data safe and private, they won’t be users for long.
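The > 99.999% (“five nines”) threshold has a concrete meaning: it caps total unavailability at roughly five minutes per year. A quick calculation:

```python
# Annual downtime budget implied by an uptime percentage. At the
# "five nines" level cited above, total unavailability must stay
# under roughly five and a quarter minutes per year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def downtime_minutes_per_year(uptime_pct: float) -> float:
    return MINUTES_PER_YEAR * (1 - uptime_pct / 100)

print(round(downtime_minutes_per_year(99.999), 2))  # about 5.26 minutes
print(round(downtime_minutes_per_year(99.9), 1))    # about 525.6 minutes
```

The gap between three nines and five nines is the difference between nearly nine hours of outage a year and about five minutes, which is why the uptime record is worth scrutinizing before signing.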
Interconnected Cloud Advantage
The quality and volume of interconnection offered through a given colocation provider are critical considerations for CSPs, because interconnection is increasingly a prerequisite for delivering the high level of performance their users demand.
A truly global data center provider should offer access to a broad choice of network service providers in order to ensure cost-efficient access to the right markets and redundancy in a crisis. It should also be home to the active and interconnected ecosystems that enable the relationships with other CSPs, managed service providers and system integrators needed to optimize service delivery.
A colocation provider that can offer dense interconnection opportunities gives CSPs a tremendous advantage, simply because of how broadly it expands their options for critical partnerships. In an interconnected cloud environment, CSPs find the clearest route around the security and performance concerns of the public Internet, and the fastest, most affordable path to meeting the market’s ever-increasing performance expectations.
The Bottom Line is Your Bottom Line
The trend toward consuming applications and services in the cloud continues to gain momentum among enterprises looking to reduce costs and simplify access to IT resources. No matter how you’re looking at cloud service delivery—whether from the perspective of the end user, app developer or business strategist—the network and the proximity of service nodes to your addressable market are two of your most important considerations. Without localization and fast, predictable network connectivity, CSPs cannot take advantage of the opportunities the market offers.
As you prepare for growth and select a data center partner, make sure the network you use can reach the regions of the world where your current and future customers will consume your services. Choose a partner that offers a distributed architecture for service nodes, and build your applications with distribution in mind. Proximity of service nodes goes a long way toward improving user satisfaction, bolstering your brand and extending your reach, all of which will help you meet your customers’ needs and improve your profitability.
Adapted from an online article in CRN