---
date: 2016-05-03T12:01:00.000+02:00
tags:
- scalability
- OpenFlow
title: 'Response: Are Open-Source Controllers Ready for Carrier-Grade Services?'
url: /2016/05/response-are-open-source-controllers/
---
<p>My beloved <a href="https://www.linkedin.com/groups/4359316">source of meaningless marketing messages</a> led me to a blog post with a catchy headline: are <a href="https://www.linkedin.com/groups/4359316/4359316-6115712573024256003">open-source SDN controllers ready for carrier-grade services</a>?</p>
<p>It turned out the whole thing was a simple marketing gig for Ixia testers, but supposedly “<em>the response of the attendees of an <a href="https://twitter.com/etherealmind/status/707152646872780800">SDN event</a> was overwhelming</em>”, which worries me… or makes me happy, because it’s easy to see plenty of fix-and-redesign work in the future.<!--more--></p>
<p>Anyway, let’s walk through the <a href="http://www.ixiacom.com/sites/default/files/resources/case-study/benchmarking-opensource-sdn.pdf">presentation</a>.</p>
<p><strong>What was the testbed?</strong> Ixia software emulated numerous OpenFlow switches connecting to a <em>single instance</em> of an open source OpenFlow controller. The switches were connected in a <em>linear topology</em> (N 2-port switches in sequence), which is <em>the least likely topology you’ll ever see in a network</em>.</p>
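<p>If you want to get a feel for how contrived that is, you can approximate the same testbed in a few lines of Mininet. The sketch below is an assumption-laden stand-in for the Ixia setup (they used their own switch emulator, and their chains went past 500 switches): it builds a chain of N switches and points them at a local controller.</p>
<pre><code>#!/usr/bin/env python
"""Sketch: approximate the benchmark's linear topology in Mininet.

Assumptions: Mininet is installed and an OpenFlow controller is
already listening on 127.0.0.1:6633. N is kept far below the
benchmark's 500+ so the sketch finishes in seconds.
"""
from mininet.net import Mininet
from mininet.node import RemoteController
from mininet.topo import LinearTopo

N = 20  # number of daisy-chained switches

# LinearTopo(k, n) builds k switches in a chain with n hosts per switch.
net = Mininet(
    topo=LinearTopo(k=N, n=1),
    controller=lambda name: RemoteController(name, ip='127.0.0.1', port=6633),
)
net.start()
net.pingAll()   # forces the controller to install flows along the whole chain
net.stop()
</code></pre>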
<p><strong>What were they measuring?</strong> Pretty useless stuff that’s easy to measure:</p>
<ulclass="ListParagraph"><li>How many OpenFlow switches can connect to a single controller instance?</li>
<li>How long does it take the controller to install a single flow across all switches?</li>
<li>How long does it take a controller to discover network topology?</li>
</ul>
<pclass="note">Also, it’s impossible (from the presentation published on Ixia web site) to figure out what <em>exactly </em>they were measuring, and whether it's relevant. For example, they assume the controller discovered the network topology when the LLDP packets generated by the controller where delivered back to the controller.</p>
<p><strong>Why are those metrics useless?</strong> Let’s go through them one by one:</p>
<ulclass="ListParagraph"><li>How many OpenFlow switches can connect to a controller? A single OpenFlow domain is a <ahref="/2014/09/controller-cluster-is-single-failure/">single failure domain</a>, and unless you plan to use overlay virtual networking (= <ahref="/2013/09/openflow-fabric-controllers-are-light/">mimic wireless controllers</a>) you don’t want your failure domain to be too large. Also, a decent carrier-grade controller would have a scale-out architecture (no, not a cluster of two controllers, but a real scale-out architecture with eventual consistency), which would make this metric moot.</li>
<li>How long does it take the controller to install a single flow? This one might expose the internal workings of a controller (is it programming flows in switch-by-switch sequence or in parallel; see the sketch after this list), but measuring anything beyond a few dozen switches (= number of hops across the network) is plain ridiculous. Not surprisingly, the “interesting” behavior emerges in the totally-ridiculous territory (500+ switches in sequence), so let’s put that on the slide and claim victory.</li>
<li>How long does it take to discover network topology? Measuring this on a chain of 100 switches in linear topology is absolutely meaningless. What would make sense are questions like “<em>how quickly is a topology change that is not signaled via an interface-down message detected?</em>” or “<em>how quickly are N thousand flows rerouted after a topology change?</em>” We still don’t know.</li>
</ul>
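<p>To illustrate the sequential-versus-parallel point: the difference between the two controller designs is trivial to demonstrate with a ten-line harness. In the sketch below, <code>install_flow</code> is a hypothetical placeholder for whatever flow-programming call a controller exposes, and the 5 ms sleep stands in for its round-trip time:</p>
<pre><code>#!/usr/bin/env python
"""Sketch: sequential vs. parallel flow installation.

install_flow() is a hypothetical stand-in for a controller's
flow-programming call; the sleep models one flow-mod round trip.
"""
import time
from concurrent.futures import ThreadPoolExecutor

def install_flow(switch_id):
    time.sleep(0.005)             # pretend RTT of one flow-mod + barrier

def timed(label, fn):
    start = time.monotonic()
    fn()
    print('%s: %.2f s' % (label, time.monotonic() - start))

switches = range(500)             # the totally-ridiculous territory

# Switch-by-switch programming: latency grows linearly with chain length...
timed('sequential', lambda: [install_flow(s) for s in switches])

# ...while a parallel implementation stays nearly flat.
with ThreadPoolExecutor(max_workers=64) as pool:
    timed('parallel', lambda: list(pool.map(install_flow, switches)))
</code></pre>
<p>With these made-up numbers the sequential run takes roughly 2.5 seconds while the parallel one finishes in well under 100 ms, which is exactly the kind of knee the benchmark slides get excited about.</p>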
<p>Finally, while it seems (at least from presentations like this one) that the main focus of SDN is reinventing bridges (because dynamic MAC learning really needs to get reinvented), everyone conveniently ignores the scalability challenges of <a href="/2015/02/big-cloud-fabric-scaling-openflow-fabric/">running linecard protocols</a> across hundreds of switches from a central controller. BFD anyone?</p>
<p><strong>What has this to do with readiness for carrier-grade services?</strong> Absolutely nothing. The setup is irrelevant (no carrier would use a single-instance controller), the switches used (2-port switches) and the linear topology are meaningless, and the metrics they measured don’t reflect real-life scenarios.</p>
<p>The only link to <em>carrier-grade services </em>I could find is the need for a catchy headline.</p>
<h4>Ready for a dose of reality?</h4><ul class="ListParagraph"><li>Start with the free <a href="http://www.ipspace.net/SDN101">Introduction to SDN</a> webinar if you need the answer to the “<em>What is SDN?</em>” question.</li>
<li>Read the <a href="https://www.ipspace.net/SDN_and_OpenFlow">SDN and OpenFlow (the Harsh Reality)</a> digital book, because it’s easier to read a book than recursively read <a href="/tag/sdn/">over 350 blog posts</a>.</li>
<li>Watch the <a href="http://www.ipspace.net/OpenFlow_Deep_Dive">OpenFlow Deep Dive</a> webinar to discover true OpenFlow scalability limitations.</li>
</ul>