From A to B – don’t forget the physical parts of computing

Lately, I’ve been dealing with ensuring that all dependencies are met — and I mean “_all_”.

When I’ve presented at conferences, I use a term I call the “Big Stack”.  Technical staff tend to understand the network protocol stack, which ties back to the OSI model.  The most basic layer is the physical link.  If you don’t have a wire connecting two devices, odds are the packets aren’t going to flow (okay, let’s ignore wi-fi for the illustration).
My “big stack” starts at the physical layer, but then has network, storage, compute, hypervisors, OS, then applications.  The application layer is going to be dependent upon the OS and so on down (the network & storage components get convoluted, but I usually just tailor according to my audience).
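To make the dependency idea concrete, here is a minimal sketch (the layer list and function are my own illustration, not anyone’s product): a failure at any lower layer takes out everything above it.

```python
# Illustrative "big stack": each layer depends on every layer beneath it.
BIG_STACK = ["physical", "network", "storage", "compute",
             "hypervisor", "os", "application"]

def working_layers(status):
    """Return the layers that can actually run, given a dict of
    layer -> True/False.  Everything above the first failure is down."""
    up = []
    for layer in BIG_STACK:
        if not status.get(layer, False):
            break  # first broken layer stops the whole stack
        up.append(layer)
    return up

# No power whips connected: the physical layer is down, so nothing runs.
print(working_layers({"physical": False, "network": True, "os": True}))  # []
```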
So, when I presented to a customer on DR (disaster recovery) in the past couple of weeks, I explained the big stack – and if you don’t have something physical on the remote side – no power, no cage – your DR plan will take a while to execute.
The reason this has been top of mind lately is that I’ve recently been a victim of this.  I had a cage.  I had cabinets.  I had power strips in the cab.  They connected to power whips coming out of the floor.  But, those didn’t connect to anything.  So, no power.  Hard to get all my servers and storage running without that.  This just caused a bit of a delay in the deployment.  But, it didn’t stop there.  I knew that fibres were run under the floor to the other side of the data center.  But, they weren’t connected to the switches 5 inches away.
For home internet connectivity, telecom and cable companies used to talk about the “last mile” — meaning, they could get the cables from their distribution sites to the neighborhoods, but getting from the neighborhood box into the house was the hard part.
Similarly, I remember many years ago having to re-rack a storage array because we had to evacuate the cabinet to make room for something else.  The storage array was in one cabinet and connected to a server that was several feet away (10?).  So, we uninstalled the array from the bottom of the cabinet and moved it into an adjacent cabinet 2 ft closer – but it had to be racked about 3 ft up rather than on the bottom, since other devices were taking the space at the bottom.  Fishing the cables under the floor: ‘gee, I’m several feet short.’  ‘Okay, I’ll just pull the line and re-run it, because it isn’t going in a very direct route.’  I was literally 2 inches short.  So, I had to re-rack the server that was on the other end and move it from the middle of the cabinet to the bottom, so that the cable would reach.
So, the years go by and nothing has really changed.
We have all this technology and hear the sales pitches about clouds and virtualization, ‘oh just shift the workloads’.  ‘You are at the app level, you don’t have to worry about what is at the lower layers’.
Well someone has to.
‘We’ll just drop some servers in.’  ‘Oh, but the new generation of servers is longer than the old ones; they need new racks.  They can’t go in the old ones.’  ‘But, I have space.’  ‘You have vertical space, but not depth.’
It could be physical space.  It could be cable lengths.  It could be power requirements.  When you get too many layers above, you make assumptions that everything below has worked and will continue to work.  But, someone still has to manage the pain below – and if you don’t know who… then it is you.

Jim – 03/22/14

@itbycrayon

View Jim Surlow's profile on LinkedIn (I don’t accept general LinkedIn invites – but if you say you read my blog, it will change my mind)

Bandwidth v. Latency or Highway Lanes v. Distance

Today, the Fibre Channel Association announced its Gen 6 spec with 128 gigabits per second.  For many, that sentence may not make any sense.  For the benefit of my readers, I really do try to simplify things.

So, let me start over.  Connectivity to disk or connectivity over the network is measured in speeds of gigabits per second (gbps); it used to be in mbps.  An important note is that these are “bits”, not “bytes”, which are 8 times larger.  So, when you hear a network is 1gb, it is 1 gigabit per second, and thus 125 megabytes per second.  Networks tend to be measured in bits, as opposed to files, which are measured in bytes.
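If you want to check the arithmetic yourself, the conversion is just a divide-by-8.  Here’s a minimal sketch (ignoring protocol overhead, so these numbers are best-case ceilings):

```python
def link_rate_in_megabytes(gigabits_per_second):
    """Convert a link speed quoted in gigabits/s to megabytes/s.
    Ignores framing/protocol overhead, so this is a best-case ceiling."""
    bits_per_second = gigabits_per_second * 1_000_000_000
    bytes_per_second = bits_per_second / 8
    return bytes_per_second / 1_000_000

print(link_rate_in_megabytes(1))   # 125.0 MB/s for a 1gb ethernet port
print(link_rate_in_megabytes(10))  # 1250.0 MB/s for 10gb ethernet
```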

Now that I’ve gone on a tangent, let me come back again.  So, when you hear that you have a 1gb ethernet port on your PC or laptop, or that your wireless router does 54mb, this is the size of the pipe.  This does not mean that you can push that much data.  Also, the raw speed of the network isn’t the only thing that matters.

As an example, I was working with someone recently who was trying to compare 10gb ethernet to 8gb fibrechannel (both ethernet & fibrechannel are networking protocols, the latter is exclusively for disk access).  “Well, 10 is greater than 8, so that is better”.  I respond, “well, I don’t know if you are capable of pushing data that fast.”  (I didn’t even want to get into the discussion of potential redundant pipes).
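To put numbers on that conversation: the pipe is only a ceiling, and the effective speed is whatever the slowest piece can actually do.  A small sketch (the 400 MB/s host figure is a number I made up purely for illustration):

```python
# The link rate is only a ceiling; effective throughput is set by
# the slowest piece (host, disk, application), not the pipe size.
def effective_throughput_mb(link_gbps, host_can_push_mb):
    link_ceiling_mb = link_gbps * 1000 / 8  # gigabits/s -> megabytes/s
    return min(link_ceiling_mb, host_can_push_mb)

# Assume (made-up number) the server can only generate ~400 MB/s of I/O.
print(effective_throughput_mb(10, 400))  # 10gb ethernet -> 400 MB/s, not 1250
print(effective_throughput_mb(8, 400))   # 8gb fibrechannel -> 400 MB/s, not 1000
# Both links end up equally "fast" because the host is the bottleneck.
```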

When the network is dedicated to a server, this is less of an issue than when the network is shared.  “Oh, we have <blank> gb for our network, why are things so slow?”  Well, the best way to explain it for a shared network is to think in terms of lanes on a highway.  Just because I have more lanes on the highway, that doesn’t mean I get to go faster.  When networks are slower and have multiple users, there is the possibility of more congestion.  In that case, greater bandwidth will make things go faster.

But, if there is no congestion, then there are still limits to how fast packets go from point A to point B.  That is “latency”.  If I want to go from one part of town to another part of town, then it doesn’t matter if there is little traffic or no traffic; it still takes time to go from A to B.  If there is lots of traffic, then yes, it does matter.  But, that is when and why it feels like more bandwidth is better.
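A rough way to see the difference with numbers (a sketch with made-up figures, ignoring congestion and protocol overhead): total transfer time is roughly the latency plus the size divided by the bandwidth, so latency dominates small transfers and bandwidth dominates big ones.

```python
def transfer_time_ms(size_kb, bandwidth_mbps, latency_ms):
    """Rough transfer time: one-way latency plus time to push the bits.
    A simplified model for illustration; real networks add overhead."""
    size_bits = size_kb * 1024 * 8
    push_ms = size_bits / (bandwidth_mbps * 1_000_000) * 1000
    return latency_ms + push_ms

# A tiny 4 KB packet across town (20 ms latency): bandwidth barely matters.
print(transfer_time_ms(4, 100, 20))       # ~20.3 ms on 100mb
print(transfer_time_ms(4, 10_000, 20))    # ~20.0 ms on 10gb

# A 1 GB copy: now the fatter pipe helps a lot.
print(transfer_time_ms(1_000_000, 100, 20))     # ~81,940 ms on 100mb
print(transfer_time_ms(1_000_000, 10_000, 20))  # ~839 ms on 10gb
```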


The lesson is not to confuse bandwidth with latency.  It also means that one shouldn’t get too excited upon hearing about new network speeds.

Jim – 02/12/14

@itbycrayon

View Jim Surlow's profile on LinkedIn (I don’t accept general LinkedIn invites – but if you say you read my blog, it will change my mind)