For our computers at home we have a small network based on a standard dual-radio wireless N/G router with four 1 Gb Ethernet ports. However, wireless performance in my office, which is about 30 feet from the router, is poor (maybe 2 Mb/s). We fixed this by creating an Ethernet over Power Line network (using the Netgear XAVB101), which works fine for most applications.
The one area that didn’t work well was transferring large files to and from our Network Attached Storage (a 1 TB Buffalo LS-WTGL, predecessor to this model). Transfer speed from my MacBook Pro maxed out around 20 Mb/s. In other words, filling the 1 TB drive would take about 4.5 days. That’s a bit slow. How to fix it after the break.
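The 4.5-day figure is easy to check with some back-of-the-envelope arithmetic (assuming decimal terabytes and megabits per second):

```python
# How long does it take to fill a 1 TB drive at 20 Mb/s?
# 1 TB = 10**12 bytes = 8 * 10**12 bits; Mb/s = megabits per second.

drive_bits = 1e12 * 8     # 1 TB expressed in bits
rate_bps = 20e6           # 20 Mb/s in bits per second

seconds = drive_bits / rate_bps
days = seconds / 86400    # 86,400 seconds per day

print(f"{days:.1f} days")  # -> 4.6 days
```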
In a recent post on Miles Davis’ blog he reports the latest statistics for the wireless network of the Stanford Computer Science Department. The numbers are interesting, to say the least. The laptop market is split evenly between Mac OS X and Windows. Both, however, are crushed by the iPhone, which is also the fastest-growing segment. Android, which had not yet made a dent in last month’s statistics, is slowly creeping up.
For complete statistics visit his blog post here. Stanford is far from a typical network; however, it is sometimes a good leading indicator of things to come. That phones are becoming a major category on wireless networks (and eventually the majority) seems like a safe bet. How quickly it happened is amazing, though: it has been only 2.5 years since the iPhone was introduced.
A second technology highlighted in the post is Siri’s personal-assistant software. I am really looking forward to trying out Siri’s product when it goes into beta later this year. For full disclosure: Morgenthaler is an investor in Siri.
RFC 5408 describes a security architecture for Identity-Based Encryption. It includes protocols for key requests and public-parameter requests, as well as some basic building blocks for federation. The system described is similar to what Voltage uses for its IBE-based encryption solutions. If you are interested in how Identity-Based Encryption systems scale in practice and don’t mind reading RFCs, it is a worthwhile read.
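To give a feel for the architecture, here is a toy sketch of the IBE workflow: any string (say, an email address) acts as a public key, and a trusted Private Key Generator derives the matching private key on request. The hash-based key derivation below is a stand-in for the real pairing-based cryptography, and all names and values are illustrative, not from the RFC:

```python
# Conceptual sketch only -- NOT real cryptography. In IBE an identity
# string is the public key; a trusted Private Key Generator (PKG) holds
# a master secret and derives private keys for authenticated requesters.

import hashlib

class PrivateKeyGenerator:
    """The trusted authority holding the master secret."""
    def __init__(self, master_secret: bytes):
        self.master_secret = master_secret

    def extract_key(self, identity: str) -> bytes:
        # Key-request protocol: after authenticating the requester,
        # derive the private key for that identity from the master secret.
        # (Real IBE uses pairing-based math, not a hash.)
        return hashlib.sha256(self.master_secret + identity.encode()).digest()

def public_params_request() -> dict:
    # Public-parameter request: clients fetch the PKG's public
    # parameters before encrypting to an identity. Values illustrative.
    return {"district": "example.com", "algorithm": "illustrative"}

# A sender can encrypt to "alice@example.com" with no prior key exchange;
# Alice later authenticates to the PKG and extracts her private key.
pkg = PrivateKeyGenerator(master_secret=b"demo-master-secret")
alice_key = pkg.extract_key("alice@example.com")
```

The point of the architecture is that the sender needs only the recipient’s identity and the PKG’s public parameters, which is what makes large-scale deployment tractable.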
Thanks to my co-authors Mark Schertler and Luther Martin. Luther in particular deserves the majority of the credit for moving this through the process over the past two (or more?) years. Also thanks to Terence Spies at Voltage, as well as Eric Rescorla, Tim Polk and Blake Ramsdell at the IETF, for their support.
We just finished the OpenFlow demo at the GENI Engineering Conference, and it was amazing. We showed our new OpenFlow protocol running on switches from Cisco, Juniper, HP and NEC. Our experimental network stretched halfway around the globe, from Stanford to Tokyo via New York, using fibers from Internet2, CalREN and JGN2plus.
Over this network we showed how we can move a running game server from one physical host to another without the game ever being interrupted. We demonstrated how you can route a network connection with a simple drag-and-drop interface (e.g. a TCP flow inside Stanford going via Tokyo and Houston). We even sent a running game server from Stanford to Tokyo without losing the connection.
Press coverage of the demo included articles in English, Japanese, Swedish and Spanish. The OpenFlow web site received a few thousand hits, with visitors from every major company in the networking space. All this was made possible by about 40 people from Stanford, Internet2, Cisco, Juniper, HP and NEC, who had been working on this for months.
As a result, OpenFlow is building momentum. During the conference NEC announced support for OpenFlow in their product, and more announcements will follow. By the middle of next year we hope to have pilot deployments at 6-10 universities, and I would hope to see commercial deployments in that time frame as well. All in all, a huge step forward for OpenFlow.
Congratulations to Neda, Yashar, Monia, Nick and Geoff on their best-paper award at the Internet Measurement Conference. Their paper, Experimental Study of Router Buffer Sizing, tests recent results on the buffer requirements of high-speed routers that serve highly aggregated traffic. Among other things, it verifies the C/sqrt(n) result from my thesis, as well as my former office mate Yashar Ganjali’s work on very small buffers, and finds that both hold up well.
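To see why this matters, compare the classic rule of thumb (buffer = RTT × C) with the C/sqrt(n) result for a link carrying many long-lived TCP flows. The numbers below are my own illustrative choices, not figures from the paper:

```python
# Buffer sizing: classic rule of thumb vs. the RTT * C / sqrt(n) result
# for a congested link carrying n long-lived TCP flows.

import math

def small_buffer_bits(rtt_s: float, capacity_bps: float, n_flows: int) -> float:
    """Buffer size under the RTT * C / sqrt(n) result, in bits."""
    return rtt_s * capacity_bps / math.sqrt(n_flows)

rtt = 0.25        # 250 ms round-trip time
capacity = 10e9   # 10 Gb/s backbone link
n = 10_000        # highly aggregated: 10,000 concurrent flows

classic = rtt * capacity                     # 2.5 Gb of buffering
small = small_buffer_bits(rtt, capacity, n)  # 100x smaller

print(f"classic: {classic/1e9:.1f} Gb, small: {small/1e6:.0f} Mb")
# -> classic: 2.5 Gb, small: 25 Mb
```

A 100x reduction in buffer memory is exactly why router vendors care: less SRAM per line card, lower power, and shorter queues under congestion.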
It is great to see this work getting recognized, but what is even more encouraging is that two router vendors privately confirmed to me that the next generation of some of their products will have substantially smaller buffers. This not only reduces power consumption, but also means that we are less likely to see latency spikes whenever peering points or core links are congested.
The project that currently takes up the majority of my time at Stanford is OpenFlow. OpenFlow is a new protocol that we specified and that vendors are now adding to their routers and switches. OpenFlow allows you to remotely control the behavior of a switch from controller software that runs on a standard server. This has two major advantages:
You can now write your own control software and try out new switch functionality at full line rate. In the past this has been difficult, as all major router and switch vendors lack APIs and typically ship closed platforms.
If you use a centralized controller, it has a unified view of the network. For applications such as mobility management, virtualized data centers or security, this lets you do things that previously would have been very difficult or impossible.
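The core idea can be sketched in a few lines: the switch keeps a flow table of (match, action) rules, and the controller installs rules remotely. The field names and data structures below are simplified stand-ins for the real OpenFlow match structure and messages:

```python
# Minimal sketch of an OpenFlow-style flow table. In the real protocol
# the controller installs rules via flow-mod messages over a secure
# channel; here install() stands in for that.

class FlowTable:
    def __init__(self):
        self.rules = []  # (match_dict, action) pairs in priority order

    def install(self, match: dict, action: str):
        # Controller remotely adds a rule to the switch's flow table.
        self.rules.append((match, action))

    def lookup(self, packet: dict) -> str:
        # First matching rule wins; an empty match is a wildcard.
        for match, action in self.rules:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        # No rule matched: a real switch forwards the packet to the
        # controller, which decides and installs a new rule.
        return "send_to_controller"

# Controller steering one specific TCP flow out port 2 (e.g. via a
# different path), with everything else taking the default port 1.
table = FlowTable()
table.install({"ip_dst": "10.0.0.5", "tcp_dst": 80}, "output:2")
table.install({}, "output:1")  # wildcard default rule

print(table.lookup({"ip_dst": "10.0.0.5", "tcp_dst": 80}))  # output:2
print(table.lookup({"ip_dst": "10.0.0.9", "tcp_dst": 22}))  # output:1
```

Because the controller sees every flow-table decision, per-flow rerouting (like the drag-and-drop demo described above) reduces to installing a different rule for that one flow.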
I am happy to announce that this week I joined Stanford University as a Consulting Assistant Professor. This may come as a surprise to some people, as I am not exactly your typical academic. Those people would be correct: my job here is not primarily about teaching. The main reason I am joining Stanford is OpenFlow, one of the most exciting technologies I have seen in networking for a long time.
OpenFlow is exciting in two ways. First, it allows you to run new protocols and algorithms on production networks. Before OpenFlow this was very hard, as modern routers have no API that gives access to this low-level functionality. Second, it allows you to make centralized yet fine-grained routing decisions. This has huge advantages in areas such as security, data centers and mobility.