From A to B – don’t forget the physical parts of computing

Lately, I’ve been dealing with ensuring that all dependencies are met — and I mean “_all_”.

When I present at conferences, I use a term I call the “Big Stack”.  Technical staff tend to understand the network protocol stack, which ties back to the OSI model.  The most basic layer is the physical link.  If you don’t have a wire connecting two devices, odds are the packets aren’t going to flow (okay, let’s ignore wi-fi for the illustration).
My “big stack” starts at the physical layer, then adds network, storage, compute, hypervisors, OS, and finally applications.  The application layer depends upon the OS, and so on down (the network and storage components get convoluted, but I usually tailor the picture to my audience).
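To make the dependency explicit, here is a minimal sketch (in Python, with hypothetical layer names and a made-up status check, not anything from a real tool) of the idea that everything above waits on the first broken layer below:

```python
# A minimal sketch of the "big stack": each layer depends on the one below it,
# so a failure at the bottom blocks everything above, no matter how healthy
# the upper layers look.  Layer names and statuses are illustrative only.

BIG_STACK = [            # ordered bottom-up
    "physical",
    "network",
    "storage",
    "compute",
    "hypervisor",
    "os",
    "application",
]

def lowest_broken_layer(status):
    """Return the first layer (bottom-up) that is not in place, or None."""
    for layer in BIG_STACK:
        if not status.get(layer, False):
            return layer
    return None

# Example: the power whips aren't connected, so the physical layer is down --
# nothing above it can come up.
status = {layer: True for layer in BIG_STACK}
status["physical"] = False

blocked = lowest_broken_layer(status)
if blocked:
    print(f"Deployment blocked at the '{blocked}' layer; everything above waits.")
```

Nothing fancy, but it mirrors the point: the app tier’s health is meaningless if the answer to “is the physical layer in place?” is no.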
So, when I presented to a customer on DR (disaster recovery) in the past couple of weeks, I explained the big stack: if you don’t have something physical on the remote side (no power, no cage), your DR plan will take a while to execute.
The reason this has been top of mind lately is that I’ve recently been a victim of it.  I had a cage.  I had cabinets.  I had power strips in the cabinets.  They connected to power whips coming out of the floor.  But those whips didn’t connect to anything.  So, no power.  Hard to get all my servers and storage running without that.  This caused a bit of a delay in the deployment.  And it continued: I knew that fibers were run under the floor to the other side of the data center, but they weren’t connected to the switches 5 inches away.
For home internet connectivity, telecom and cable companies used to talk about the “last mile” — meaning, they could get the cables from their distribution sites to the neighborhoods, but getting from the neighborhood box into the house was the hard part.
Similarly, I remember many years ago having to re-rack a storage array because we had to evacuate the cabinet to make room for something else.  The storage array was in one cabinet and connected to a server several feet away (10?).  So, we uninstalled the array from the bottom of the cabinet and moved it into an adjacent cabinet 2 ft closer, but it had to be racked about 3 ft up rather than on the bottom, since other devices were taking the space at the bottom.  Fishing the cables under the floor: ‘gee, I’m several feet short.’  ‘Okay, I’ll just pull the line and re-run it, because it isn’t going in a very direct route.’  I was literally 2 inches short.  So, I had to re-rack the server on the other end and move it from the middle of the cabinet to the bottom so that the cable would reach.
So, the years go by and nothing has really changed.
We have all this technology and hear the sales pitches about clouds and virtualization: ‘oh, just shift the workloads’.  ‘You are at the app level; you don’t have to worry about what is at the lower layers’.
Well someone has to.
‘We’ll just drop some servers in’.  ‘Oh, but the new generation of servers are longer than the old ones; they need new racks.  They can’t go in the old ones.’  ‘But, I have space.’  ‘You have vertical space, but not depth.’
It could be physical space.  It could be cable lengths.  It could be power requirements.  When you get too many layers above, you assume that everything below has worked and will continue to work.  But someone still has to manage the pain below, and if you don’t know who… then it is you.

Jim – 03/22/14

@itbycrayon

View Jim Surlow's profile on LinkedIn (I don’t accept general LinkedIn invites – but if you say you read my blog, it will change my mind)
