Friday, March 21, 2014

Quick Thoughts on the Micro Data Center

Here's something that's been on my radar lately: while all the talk in the networking world seems to be about the so-called "massively scalable" data center, almost all of the people I talk to in my world are dealing with the fact that data centers are rapidly getting smaller due to virtualization efficiencies. This seems to be the rule rather than the exception for small-to-medium sized enterprises.

In the micro data center that sits down the hall from me, for example, we've gone from 26 physical servers to 18 in the last few months, and we're scheduled to lose several more as older hypervisor hosts get replaced with newer, denser models. I suspect we'll eventually stabilize at around a dozen physical servers hosting in the low hundreds of VMs. We could get much denser, but things like political boundaries inevitably step in to keep the count higher than it might be otherwise. The case is similar in our other main facility.

From a networking perspective, this is interesting: I've heard vendor and VAR account managers remark lately that virtualization is cutting into their hardware sales. I'm most familiar with Cisco's offerings, and at least right now they don't seem to be treating the micro-DC market as a niche worth targeting: high-port-count switches are basically all that's available. Cisco's design guide for the small-to-medium data center starts in the 100-300 10GE port range, which with modern virtualization hosts will support several thousand typical enterprise VMs.
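To put rough numbers on that, here's a back-of-envelope sketch; the uplinks-per-host and VMs-per-host figures are my own assumptions, not anything from Cisco's guide:

```python
# Back-of-envelope: how many VMs a given 10GE access-port budget supports.
# Uplinks-per-host and VMs-per-host are assumptions, not Cisco guidance.
def vm_capacity(ten_gig_ports, uplinks_per_host=2, vms_per_host=40):
    hosts = ten_gig_ports // uplinks_per_host
    return hosts * vms_per_host

for ports in (100, 300):
    print(f"{ports} ports -> ~{vm_capacity(ports)} VMs")
# 100 ports -> ~2000 VMs
# 300 ports -> ~6000 VMs
```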

Having purchased the bulk of our older-generation servers before 10GE was cheap, we're just getting started with 10GE to the server access layer. Realistically, within a year or so a pair of redundant, reasonably feature-rich 24-32 port 10GE switches will be all we need for server access, probably in 10GBASE-T. Today, my best Cisco option seems to be the Nexus 9300 series, but it still has a lot of ports I'll never use.
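A quick sanity check on that port count, using my own guesses for host count, NICs, and spare ports:

```python
# Sanity check on the access-layer port count. Host count, NICs per host,
# and spare ports are my assumptions for this environment.
hosts = 12           # expected steady-state hypervisor count
nics_per_host = 2    # one 10GBASE-T link to each switch in the redundant pair
spares = 4           # uplinks, storage, growth

ports_per_switch = hosts * (nics_per_host // 2) + spares
print(ports_per_switch)   # 16 -- comfortably inside a 24-32 port switch
```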

One thought I've had is to standardize on the Catalyst 4500-X for all DC, campus core, and campus distribution use. With VSS, the topologies are simple. The space, power, and cooling requirements are minimal, and the redundancy is there. It has good layer 3 capabilities, along with excellent SPAN and NetFlow support. The only thing it seems to be lacking today is an upgrade path to 40GE, but that may be acceptable in low-port-density environments. Having one platform to manage would be nice. The drawbacks, of course, are a higher per-port cost and lack of scalability -- but again, scalability isn't really the problem.

Comments welcome.

3 comments:

Anonymous said...

This lines up with my thinking too.

Going beyond 10Gb, consider something like a pair of 32x40GbE switches, which most vendors are shipping now. Plug in a bunch of high-density servers and you can easily run thousands of VMs in one or two racks. That's far more capacity than most businesses need.
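As a rough sketch of that math (each QSFP+ port breaks out to 4x10GbE; the VMs-per-host figure is just an assumption):

```python
# Rough math for a pair of 32x40GbE switches broken out to 10GbE.
# The VMs-per-host figure is an assumption.
ports_40g = 32
breakout = 4                               # 1x QSFP+ -> 4x SFP+ (10GbE)
ten_gig_per_switch = ports_40g * breakout  # 128 x 10GbE per switch

hosts = ten_gig_per_switch                 # each host dual-homed, one link per switch
vms = hosts * 40
print(ten_gig_per_switch, hosts, vms)      # 128 10GbE ports, 128 hosts, ~5120 VMs
```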

So you've got relatively low costs for physical networking, and your challenges aren't really about managing the physical infrastructure; they're more about managing the virtual switches (the new access layer) and the interconnections between the VMs.

Out across the campus, it almost doesn't matter what switching you use, as the required feature set hasn't really changed in years.

Makes for a tough life if you're in the business of selling network gear.

Julien Goodwin said...

Being more of a Juniper person, I think the new QFX5100s look like very nice boxes for a small DC.

I keep meaning to write this up myself, but consider what you can fit into a medium-sized cage (say, 20 racks): many petabytes of disk, tens if not hundreds of terabytes of RAM, and far more CPU than is required for nearly all non-supercomputing uses.

Jay Swan said...

Anonymous--the 40GE idea is an interesting one. We're already using QSFP breakout cables off of the 40GE ports on a Dell S4820T for testing purposes. I'll have to think about that more.

Julien--you can get a Dell 2RU rack server with 768GB of RAM now, so that's 14-16 or more TB of RAM per rack depending on rack size. Dell advertises an iSCSI SAN with up to 2PB of storage in a 48RU rack. That's enough to support 1000-2000 VMs in 2 large racks, depending of course on IO, CPU, and redundancy needs. Crazy.
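Working that RAM number out (the only assumptions are the rack heights and a couple of U reserved for top-of-rack switching):

```python
# Working out RAM per rack from 768GB in a 2RU server. Rack heights and
# the U reserved for top-of-rack switching are assumptions.
ram_per_server_gb = 768
server_ru = 2

for rack_ru, reserved_ru in ((42, 2), (48, 2)):
    servers = (rack_ru - reserved_ru) // server_ru
    ram_tb = servers * ram_per_server_gb / 1024.0
    print(f"{rack_ru}U rack: {servers} servers, ~{ram_tb:.0f} TB RAM")
# 42U rack: 20 servers, ~15 TB RAM
# 48U rack: 23 servers, ~17 TB RAM
```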
