As the popularity of public clouds gains momentum, the battle is heating up among cloud IaaS (Infrastructure as a Service) providers. As a multi-billion-dollar IaaS cloud provider, Amazon Web Services (AWS) is leading the battle, but Google, Microsoft, Rackspace, IBM, and Hewlett-Packard are very active too. Last week Google announced that Google Compute Engine is now available to everyone, and revealed unique features such as advanced routing to help create gateways and VPN servers.
Clearly Google is serious about the networking aspect of public clouds, and we expect all cloud providers will be in the near future. Indeed, everybody leveraging IaaS is now aware that making the most of public cloud resources can present some hurdles, specifically because of latency and network congestion issues. At the AWS partner summit a few weeks ago in San Francisco, Matt Wood, Principal Data Scientist at AWS, gave a very clear statement of the performance problem, spanning architecture design to operational aspects: latency, the worst nightmare.
As a child I was fascinated by clouds (and stars). I remember looking out the back of my parents’ car and following clouds dancing in the sky. The clouds were telling me stories like a picture book and developing my imagination.
Now, as the CEO of a startup, I am still inspired by clouds. As you know, one founds a startup to change the world for the better. Cloud computing is changing our IT world. We are all inventing clouds and creating the future. The mission of our company is to ensure that no barrier will ever slow you down. This is why Lyatiss is automating cloud computing performance.
The 2013 Open Networking Summit (April 15 to 17) showcased the immense innovation that has taken place in the networking industry in the past year. Startups and incumbents alike have developed significant offerings in SDN controllers, network virtualization, network function virtualization, software appliances, and the like.
Among the announcements and demonstrations were the OpenDaylight project, Intel’s Open Network Platform, Cisco’s Open Network Environment, and many others. Alongside these were also significant end-user initiatives and proofs of concept, including Goldman Sachs’ in-house work.
However, a couple of things struck me after three days of sessions:
- We were no closer to a universally agreed-upon definition of SDN
- The overlaps and differences between SDN, Network Function Virtualization, and Network Virtualization remained blurry
2013 is starting at full speed!
The networking revolution is really here! Take a look at this SDN Central post to learn more.
But the most exciting date for us is today, January 28th: we are launching Lyatiss and introducing CloudWeaver!
Pioneering application-defined networking, today we announce the public beta version of CloudWeaver for AWS. Any Amazon Web Services user can test CloudWeaver, generate their cloud network map in just minutes, and check flow intelligence at a glance! That’s cloud operations at your fingertips.
CloudWeaver is the first application-defined networking solution, and it is accessible to any cloud user TODAY!
Do you know why Van Jacobson invented the famous Additive Increase, Multiplicative Decrease (AIMD) algorithm in 1988? It was the result of an unexpected Internet collapse. Initially, the Internet had been designed without any congestion control; in fact, the Internet was not supposed to become so popular! Nobody was worrying about congestion before 1984 (RFC 896). But then a congestion collapse happened in 1986, when throughput on the NSFnet phase-I backbone dropped by three orders of magnitude, from its 32 kbit/s capacity to 40 bit/s.
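To make the idea concrete, here is a minimal sketch of how an AIMD congestion window evolves: it grows by one segment per round trip while transfers succeed, and is halved when a loss signals congestion. The constants and the simulated loss pattern are illustrative only, not taken from any particular TCP stack.

```python
# Minimal AIMD sketch: the sender's congestion window (cwnd) grows additively
# each round trip and is cut multiplicatively on a loss signal.
# Constants are illustrative, not taken from any particular TCP implementation.

def aimd_step(cwnd: float, loss_detected: bool,
              additive_increase: float = 1.0,
              multiplicative_decrease: float = 0.5,
              min_cwnd: float = 1.0) -> float:
    """Return the next congestion window, in segments."""
    if loss_detected:
        # Multiplicative decrease: back off sharply when the network is congested.
        return max(min_cwnd, cwnd * multiplicative_decrease)
    # Additive increase: probe for spare capacity, one segment per RTT.
    return cwnd + additive_increase


if __name__ == "__main__":
    cwnd = 1.0
    # Simulated loss pattern: one loss every 8th round trip.
    for rtt in range(1, 25):
        loss = (rtt % 8 == 0)
        cwnd = aimd_step(cwnd, loss)
        print(f"RTT {rtt:2d}: cwnd = {cwnd:5.1f} segments{'  <- loss' if loss else ''}")
```

The sawtooth pattern this produces is exactly the behavior that prevents a repeat of the 1986-style collapse: every sender backs off quickly under congestion and probes gently when the network recovers.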
In my last post I reported on the TEST & TEST advice I brought back from re:Invent, and the idea that cloud applications are becoming an art form.
Today an ocean of computing power is available everywhere. Yet even though NASA landed Curiosity on Mars using distributed applications on a cloud infrastructure, testing distributed applications in the cloud is still far too much of a “dark art” that wastes time, money, and talent. What can be done to improve this situation?
At re:Invent, the Netflix CEO alluded to the answer. He said he felt like he and his team were trying to optimize the raw cloud machine the way he used to optimize register usage in the old days of assembly programming, some 30 years ago. We are still at the early stages of creating TEST & SET instructions in the cloud! We are still figuring out how to program the cloud machine. High-level languages and compilers are needed… and I expect they will come soon!
Even during the holiday season, the cloud space remains very active. Just after Thanksgiving, there was re:Invent, Amazon Web Services’ first global customer and partner event in Las Vegas, and the CloudBeat 2012 conference in Silicon Valley.
At re:Invent, I learned that cloud application architects are like artists, painting a new world with a palette of colors and elegant techniques for building awesome applications. So I guess you could say that we are not industrializing IT; we are transforming it in a more artful way! This is really inspiring and refreshing!
Last Saturday I attended a wonderful event at the Computer History Museum in Mountain View. The Cloud Tech III day was organized by the Silicon Valley Cloud Computing group and attended by more than 250 geeks. I especially enjoyed the deep technical keynotes by legendary BigTable and MapReduce designer Jeff Dean, Sun and Arista founder Andy Bechtolsheim, and former Yahoo CTO Raymie Stata. Jeff Dean presented the latest research at Google, Andy Bechtolsheim gave his evolutionary vision of networks in the data center and outlined the issues TCP is facing in clouds, and Raymie advocated for an orchestrator playing the auditor role among all the different sources of truth in the byzantine empire of Infrastructure as a Service!
The most important lesson I took away from the day is that the pain DevOps professionals face in deploying and efficiently operating their distributed applications in cloud environments arises mainly because:
- The end-to-end network is not flexible enough to isolate, differentiate, and optimize traffic according to its specific needs.
- The network is dumb, hidden, and always presented as a black box.
As discussed in my previous blog post, Application-Defined Networking (ADN) is the next level up from SDN, in any network, wherever SDN applies. ADN sits on top of SDN. In an ADN environment, the network is tuned by apps and for apps.
The ‘ADN’ acronym has also been used in the past to describe Application ‘Delivery’ Networking. This is a suite of technologies, generally expensive, high-performance hardware appliances (ADCs – application delivery controllers, WOCs – WAN optimization controllers), combined to provide application availability, security, visibility, and acceleration. Recently, virtualized software ADCs have also become available for use in private and public cloud environments.
In this short blog post, I would like to explain why Application-Defined Networking is complementary to Application Delivery Controllers (ADCs) and traditional network optimization techniques, in any context where an ADC is deployed. ADCs are point solutions that control application traffic to optimize and secure a service. F5’s Lori MacVittie touches on this in “SDN is Network Control. ADN (aka ADC) is Application Control.”
Last week I attended the ATCA Summit to present the concept of Application-Defined Networking (ADN). I also participated in a panel on the future of OpenFlow with distinguished colleagues from Cisco, Dell, and Big Switch Networks. The lively discussion highlighted an important topic: the gap between applications and OpenFlow.
I believe this gap should be closed, and a key to making that happen is to reconcile the concept of a flow from both an application and a network perspective.
Let me explain. To ensure data exchanges in a computer network, complex tasks such as routing are delegated to network devices from various vendors, while congestion control is delegated to the TCP stack of various operating systems. Internally, networks manage individual packets, while in the end systems the TCP implementation and the application processes all speak in terms of flows. An Application-Defined Networking approach that aims at smoothing out communication and computation for (and by) the application must logically control the virtual networks in terms of the application-level abstraction (flows) rather than in terms of packets.
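To illustrate the difference in units of reasoning, here is a small Python sketch (field names are hypothetical, not from any real telemetry format) that groups per-packet records into flows keyed by the classic 5-tuple. This is the application-level view an ADN layer would reason about, whereas a switch forwards each packet on its own.

```python
# Illustrative only: group per-packet records into flows keyed by the
# 5-tuple (src IP, dst IP, protocol, src port, dst port).
# Field names are hypothetical; real capture or telemetry formats differ.
from collections import defaultdict
from typing import Dict, List, Tuple

Packet = dict  # e.g. {"src": "10.0.0.1", "dst": "10.0.0.2", "proto": "tcp",
               #       "sport": 51000, "dport": 443, "bytes": 1460}
FlowKey = Tuple[str, str, str, int, int]

def group_into_flows(packets: List[Packet]) -> Dict[FlowKey, dict]:
    """Aggregate individual packets into per-flow packet and byte counts."""
    flows: Dict[FlowKey, dict] = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for p in packets:
        key = (p["src"], p["dst"], p["proto"], p["sport"], p["dport"])
        flows[key]["packets"] += 1
        flows[key]["bytes"] += p["bytes"]
    return dict(flows)

if __name__ == "__main__":
    pkts = [
        {"src": "10.0.0.1", "dst": "10.0.0.2", "proto": "tcp", "sport": 51000, "dport": 443, "bytes": 1460},
        {"src": "10.0.0.1", "dst": "10.0.0.2", "proto": "tcp", "sport": 51000, "dport": 443, "bytes": 1460},
        {"src": "10.0.0.3", "dst": "10.0.0.2", "proto": "udp", "sport": 6000, "dport": 53, "bytes": 120},
    ]
    for key, stats in group_into_flows(pkts).items():
        print(key, stats)
```

Once traffic is expressed at this granularity, a flow becomes something an application can name, measure, and ask the network to treat differently.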
Part 1: Definition
However, such benefits over traditional approaches to networking can be fully leveraged only if tangible results for end users are the main focus. Additionally, these results must be obtained effortlessly while ensuring maximum reliability, scalability, and efficiency.
For this to happen, it is critical to establish a close linkage between the application and the underlying dynamic cloud infrastructure, systems as well as networks … with a clear focus on the application.
Consequently, we are now starting to see a new networking paradigm defined by applications, which are taking center stage. This new trend is all the more important as enterprises increasingly migrate their applications to the cloud. The ability for enterprises to manage their networks while ensuring application delivery across a hybrid cloud infrastructure is the new challenge.
Application-Defined Networking (ADN) is all about applications directly controlling and adapting the networking environment using APIs, so that application delivery and performance across public and private cloud networks are optimized without compromising portability or security.
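As a purely hypothetical sketch of what “applications controlling the network through APIs” can look like (none of these class or method names come from a real product), an application might declare the latency and bandwidth its flows need and let an ADN layer translate that intent into concrete network actions:

```python
# Hypothetical ADN-style interface: the application states what its flows
# need; an ADN layer decides how to satisfy that on the underlying cloud
# network. All names here are illustrative, not a real API.
from dataclasses import dataclass

@dataclass
class FlowRequirement:
    name: str
    max_latency_ms: float
    min_bandwidth_mbps: float

class AdnController:
    """Toy controller that would map application intent onto network actions."""

    def request(self, req: FlowRequirement) -> None:
        # In a real system this is where placement, provisioning, or QoS
        # decisions would be made against the cloud provider's own APIs.
        print(f"Provisioning path for '{req.name}': "
              f"<= {req.max_latency_ms} ms latency, "
              f">= {req.min_bandwidth_mbps} Mbps")

if __name__ == "__main__":
    adn = AdnController()
    adn.request(FlowRequirement("web-tier-to-db", max_latency_ms=2.0, min_bandwidth_mbps=500))
    adn.request(FlowRequirement("batch-replication", max_latency_ms=50.0, min_bandwidth_mbps=100))
```

The point of the sketch is the direction of control: the application expresses requirements per flow, and the networking layer adapts to them, rather than the other way around.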
Back in May I had the opportunity to publish a long blog post titled The Soft Network, giving a perspective on the extraordinary industry interest in Network Virtualization and Software-Defined Networks.
Last month two other women technologists I respect a great deal wrote interesting comments on the same topic (Cisco’s Padmasree Warrior and Arista’s Jayshree Ullal), both confirming emphatically that networking is cool again! Granted, the views of these two executives are appropriately crafted around the vision they drive for their respective corporations, but they offer a good glimpse of the huge interests at stake for their companies and the customers they serve. Here are views from other data center heavyweight vendors (HP, Dell, NEC) responding to key and challenging questions from Gartner on whether SDN is the future of networking.
Last week, Gartner published its “top 10” list of the most significant emerging trends that will impact data centers and IT over the next five years (Gartner: 10 key IT trends for 2012). It is a very interesting list, which again largely stresses the impact of Cloud and SDN, a model we understand well at Lyatiss.
One comment caught my eye. According to Gartner analyst Dave Cappuccio, “businesses still haven’t gotten the maximum performance benefits they can get in workload management”. I cannot resist adding… “and in Cloud adoption”!
More IT Complexity?
CIOs are now convinced that Cloud is on their agenda. This is great! However, getting the real benefit of Cloud requires some caution.
To understand why and how, I recommend reading Sasha Gilenson’s excellent post, Forget Tool Improvements! Today’s Systems Management Needs A New Generation Of Tools. In brief, in the Cloud many components change continuously and dynamically: application code, internal workload, external interactions, race conditions for resources, quality requirements, and even infrastructure costs. This very dynamic IT environment creates a demand for a radically new generation of monitoring and automation tools.
Network Virtualization and Software-Defined Networks (SDN) are really hot, white-hot. But as more and more people try to understand these concepts, it seems there is a great deal of confusion.
I attended Interop Las Vegas in May, presenting a demo of our CloudWeaver solution at the OpenFlow Lab. Many people, including many network experts, came to me with a deluge of questions such as “Why are these technologies considered so revolutionary?”, “Network virtualization is nothing new, is it?”, or “Networks are already ‘software-defined’, aren’t they?”.
Well, I can understand why people are so disappointed and confused. For one, with virtualization we fear losing control. In addition, there is skepticism as to the benefit of the Cloud. A clear explanation is perhaps in order.
Here I’ll attempt to keep things simple and explain what these technologies are in essence. What problem are they solving? Why are they emerging now? How do they relate to each other and to the Cloud? In short, what does making the network soft mean?
April was a defining month for Cloud and Networking people in Silicon Valley with two major events of significant impact for the industry: Open Networking Summit and the OpenStack Conference.
Open Networking Summit was a gathering of top players looking at disruptive solutions for creating smart, reliable, and cheap networks. Combining software and virtualization approaches, they are looking at a new way of making the flow the unit of control over the network; in other words, making the network virtual and “active”. Everybody was there: network vendors, service providers, and researchers from top universities. This was a great moment for the entire networking community. It reminded me of all the work we did on active networks as far back as 1997, on flow-aware networking, and on virtual networks. After years of gestation, I can now see very credible initiatives emerging from this powerful groundswell that will revitalize networking in a significant way. There is another major consequence of this network revolution: the emergence of a new developer community for the Network – think of a “Linux community for the Network”.