Notes from #NTAPInsight 2014

Green Ball

After a partial week at NetApp 2014 Insight US, here are my thoughts:
(full disclosure:  I was a presenter of one session at the conference)
  1. Keynote thought
  2. OnTap 8.3 announcement
  3. Hybrid Cloud
    1. Data is state-ful, unlike (cloud) computing
  4. Data locality
  5. Different UNIX variants – Different Cloud
  6. Laundry services similar to cloud computing (Jay Kidd / NA CTO)
Tom Mendoza (NetApp Vice Chairman) was fantastic in his keynote.  He focused on culture and on wanting to build a culture of trust & candor.  CIOs understand that every company is going to have issues; the question is whether the customer's CIO trusts the vendor to be there when there is a problem.
Lots of talk about OnTap 8.3 – though the fact that it is RC1 and not GA is disappointing.   I didn't hear anyone reference that 8.3 is a Release Candidate.  8.3 provides full feature parity with 7-mode.  There was little discussion about 7-mode, except for how to move off it (the 7-mode transition tool).  A 7-mode transition still appears to be a large effort.  For 7MTT, the key term is "tool".
The key focus in the keynotes was "Hybrid Cloud".  One of the key takeaways is the need for data locality.  The data is 'state-ful' as opposed to cloud computing, which is 'stateless' — in the sense that the compute resource need can be metered, but data cannot.  So, when moving from on-prem to cloud, data would have to be replicated completely between the two.  Even more so if you are working between clouds, or between clouds in different countries: the full data set has to be replicated.  The concern is that government entities (the Snowden effect) will require data to be housed in their respective countries.  This becomes the digital equivalent of import/export laws and regulations.
The notion of different clouds reminds me of all the different UNIX variants.  We had Solaris boxes and we had HP-UX boxes and we had DEC boxes, and we struggled moving data between them.  Some were big endian, some little endian, so binaries were incompatible.
Finally and irreverently, during Jay Kidd's (NetApp CTO) presentation, my mind wandered to cloud computing analogies.  I never noticed before how metered cloud computing is so much like the washing machines at the laundromat – pay per use.


Jim – 10/30/14 @itbycrayon View Jim Surlow's profile on LinkedIn (I don’t accept general LinkedIn invites – but if you say you read my blog, it will change my mind)

Problem calculating workloads on Storage, in this case NetApp

Double Black Diamond

With a centralized storage array, there can be front-side limitations (outside of the array to the host or client) and back-side limitations (the actual disk in the storage array).

The problem is that, from the storage array's point of view, the workloads at any given moment are random, and the details of those workloads are invisible to the array.  So, how to alleviate load on the array has to be determined from the client side, not the storage side.

Take for example a VMware environment with NFS storage on a NetApp array:

Each ESX host has some number of VMs and each ESX host is mounting the same export from the NetApp array.


Let I_A(t) = the storage array's front-side IOPS load at time t.
Let h_i(t) = the IOPS load generated by ESX host i at time t, for i = 1 to n, where n = the number of ESX hosts.


The array’s front side IOPS load at time t, equals the sum of IOPS load of each ESX host at time t.

I_A(t) = Σ h_i(t), summed over i = 1 to n


An ESX host’s IOPS load at time t, equals the sum of the IOPS of each VM on the host at time t.

h_i(t) = Σ VM_j(t), summed over j = 1 to m, where m = the number of VMs on host i


A VM’s IOPS load at time t, equals the sum of the Read IOPS & Write IOPS on that VM at time t.

VM(t) = R(t) + W(t)


The Read IOPS are composed of well formed reads and not well formed reads.  "Well formed reads" are reads which will not incur a penalty on the back side of the storage array.  "Not well formed reads" will generate anywhere from 1 to 4 additional IOs on the back side of the storage array.

Let r1 = Well formed IOs (no additional IO on the back side of the array).

Let r2 = IOs which cause 1 additional IO on the back side of the array.

Let r3 = IOs which cause 2 additional IOs on the back side of the array.

Let r4 = IOs which cause 3 additional IOs on the back side of the array.

Let r5 = IOs which cause 4 additional IOs on the back side of the array.


R(t) = ar1(t) + br2(t) + cr3(t) + dr4(t) + er5(t)

Where a+b+c+d+e = 100% and a>0, b>0, c>0, d>0, e>0


W(t) = fw1(t) + gw2(t) + hw3(t) + iw4(t) + jw5(t)

Where f+g+h+i+j = 100% and f>0, g>0, h>0, i>0, j>0

Now for the back-side IOPS (I'm ignoring block size here, which would just add a factor of array block size divided by host block size into the equation).  The difference is accounting for the additional IOs.

R_B(t) = ar1(t) + 2br2(t) + 3cr3(t) + 4dr4(t) + 5er5(t)


W_B(t) = fw1(t) + 2gw2(t) + 3hw3(t) + 4iw4(t) + 5jw5(t)

Since the array cannot predetermine the values for a-j, it cannot determine the effects of the additional IO.  Likewise, it cannot determine whether the host(s) are going to be sending sequential or random IO.  It will trend toward random given n machines writing concurrently, since the likelihood of n-1 systems being quiet while 1 sends sequential IO is low.

Visibility into these behaviors is required from the host side.
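To make the amplification concrete, here is a minimal shell sketch of the back-side read equation above.  The weights are assumed example values (not measurements); read class k costs k back-side IOs, i.e. 1 well formed IO plus k-1 additional IOs:

```shell
#!/bin/sh
# Illustrative numbers only: 1000 front-side read IOPS, with assumed
# weights a..e for read classes r1..r5 (they must sum to 100%).
R=1000
weights="0.6 0.2 0.1 0.07 0.03"   # a b c d e
echo "$weights" | awk -v R="$R" '{
  back = 0
  for (k = 1; k <= NF; k++) back += k * $k * R   # class k costs k back-side IOs
  printf "front-side: %d  back-side: %d\n", R, back
}'
```

With these assumed weights, 1000 front-side reads become 1730 back-side IOs, which is exactly the amplification the array cannot predict from its own side.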


Jim – 10/01/14



NetApp cDOT ssh key config via CLI

Double Black Diamond

I had posted prior on how to configure SSH keys on 7-mode.  I've been remiss in covering the SSH key setup for cDOT (NetApp's clustered Data OnTap).

Before I get to the steps, let me list the assumptions:

  1. The steps below will be for a non-root user
  2. Root/Administrator privs are available to the user who is setting this up.
  3. The SSH key for the non-root user has already been generated on the client system.
  4. The SSH key can be copied and pasted from something reading the file (e.g. xterm or Notepad) into a shell window with a CLI login to the filer (e.g. xterm or PuTTY).

The methodology is fairly simple (provided one has the admin privs):

  1. Log in to the filer via the CLI with appropriate privileges.
  2. # go to the security/login section
    • login
  3. # allow for ssh for the user
    • create -username <username> -application ssh -authmethod publickey
  4. # enter the public key
    • create -username <username> -publickey "ssh-rsa <public-key> <username>@<ssh client hostname>"
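If the key pair from assumption #3 doesn't exist yet, a minimal way to generate one on an OpenSSH client and print the public half (the string pasted into the last step above) is:

```shell
#!/bin/sh
# Generate a passphrase-less 2048-bit RSA key pair in a scratch directory
# and print the public key for pasting into the filer CLI.
tmpdir=$(mktemp -d)
ssh-keygen -t rsa -b 2048 -N "" -f "$tmpdir/id_rsa" -q
cat "$tmpdir/id_rsa.pub"
```

In practice the key would live under ~/.ssh/ so the ssh client picks it up automatically; the scratch directory just keeps the sketch self-contained.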

Jim – 09/29/14



Shellshock / Bashbug quick check

Black Diamond

Given the latest news on the Shellshock (aka Bashbug) vulnerability, I modified a public command line check.
Backstory:  Unix systems (including Linux and Mac OS X) have shells for their command line windows.  Bash is a common one.  A vulnerability was found, and it has fairly large implications.  More detail is available online.

My modification to the command line script is:
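The widely circulated public check that this is based on exports a crafted function-style environment variable and sees whether bash executes the code that trails the function body; a self-contained version of that baseline check looks like:

```shell
#!/bin/sh
# CVE-2014-6271 probe: a vulnerable bash executes the code after the
# function body in the crafted variable; a patched bash does not.
result=$(env x='() { :;}; echo vulnerable' bash -c "echo completed" 2>/dev/null)
case "$result" in
  *vulnerable*) echo "Shellshock: VULNERABLE" ;;
  *)            echo "Shellshock: not vulnerable" ;;
esac
```

A patched bash imports the variable as plain text and prints only "completed", so the script reports "not vulnerable".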

Jim – 09/26/14 @itbycrayon

Does Technology eliminate jobs?

Green Ball

Harvard Business Review had a post, "Experts Have No Idea If Robots Will Steal Your Job," and I decided to comment.

But to generalize the question: does technology put people out of work?

Most likely.   That opinion being said, let me clarify.  Usually, what is being implied by the question is:  do technology advancements put Americans out of work?   The argument includes Ricardo's comparative advantage.  There is no doubt that if a technology enhancement comes out that eliminates a job, that job is gone.  E.g., the invention of the car meant that, ultimately, the blacksmith trade would decline.  Push-button operation of elevators eliminated the need for elevator operators.  But with the elimination of those positions, the human labor could be allocated to other things.  And those other things would be industries that hadn't existed before.
As someone who has worked in technology for over 25 years and has benefited from the resulting increase in the standard of living, it may be odd for me to take the position that I have, especially since in business school I learned all about comparative advantage.
However, here is my argument:  With technological efficiencies, positions are eliminated.  Those positions tend to be on the lower end of the information spectrum, e.g. earth movers to pave roads rather than an army of people with shovels, or voice recognition systems to replace dictation.  We are presently in the information age and a knowledge-based economy.  That means the tasks furthest disconnected from the product get eliminated.  And so, with the advances in transportation and the like, if there are low-skill, menial jobs, those can be outsourced to countries where the pay is about $1 an hour.
As Amazon works on getting drone delivery services – or Uber dreams of driverless cars, those human positions that were in the supply chain get removed.  Those in the delivery business – what jobs will they have?  And with the knowledge based economy, it is going to take a long time (if it happens at all), for them to ramp up to a technology job.
Things will get real interesting when 3D printing eliminates many of the Chinese assembly jobs as product can then be created in the States — assuming the population in the States has spending money.
Every time the minimum wage is increased, it provides an incentive for more technology to remove local jobs.  Every time the governments pump more money into the system, it increases the ability of innovating companies to innovate further.
  1. US Government borrows money to further stimulate the economy.
  2. Keynesian Economic Theory says that will stimulate the economy by increasing Aggregate Demand.
  3. However, the economy that is stimulated due to the multiplier effect, is the system as a whole.
  4. Since foreign products are a part of the system, the fact that the money goes overseas is not really seen as a bad sign (considering they were loaning money into the “system” as well).
  5. But, the role and intention of the US Government should not be to stimulate the world economy, but our own.
  6. As money works its way into the system, who are the beneficiaries?
    1. The innovators (those with technology)
    2. The employed.
Sure, jobs are created – there weren’t needs for developers with Ruby on Rails experience a decade ago.  So, yes, new needs come to the marketplace.  However, I suspect that many of the middle class who have had jobs eliminated would struggle to fill those voids.
When people talk about the middle class seeing a loss in real income over 15 years, a lot of this is relevant.
I don't know if this can be stopped, but it certainly could be slowed by curbing the borrowing and spending.  Even that may now be too late, as other countries (e.g. China) with sizable economies have better creditworthiness.
Technology waves will continue to be introduced by innovation.  Technology waves will continue to eliminate jobs (and some companies).  But, the economy will continue to shape the ability to fund innovation — and a significant portion of the economy is funded by borrowing of future expected revenue.  So, the frequency of technology waves is accelerated by government borrowing and spending.  The recipients of the economy are those who are employed and the information workers benefit on the high end and those on the low end which with the benefits of globalization are overseas.  The net result is that with minimum wage laws, let alone other regulations, it is more economical to employ those elsewhere.
So, when the question is:  Does technology eliminate American jobs?  I would answer, "Yes".  [I'll leave the positive answer to "Does technology significantly enhance the standard of living?" for a later blog.]
Off soapbox,

Jim – 08/10/14 @itbycrayon

Lack of Tech Workforce Diversity in Silicon Valley – my $0.02

Green Ball

Earlier today, a Wall St. Journal tech blog published stats showing that a large majority of workers at well-known Silicon Valley tech companies are white or Asian.  This follows news over the last several weeks in which tech companies have been acknowledging this.
The question is:  Is this a problem?
And the next:  If so, can it be solved?
And lastly:  If so, what is the one solution or what are the multiple solutions to the problem?
I'd argue that it is a problem.  The world is in a knowledge economy, and the more Americans who can participate in the knowledge economy, the better for America.  The lack of diversity reflects a lack of participation in the field, and thus portions of the country not participating in the economy as fully as possible.
Yes, there is extrapolation going on here – large companies predominantly housed in Silicon Valley are being used as a proxy for all tech, and tech as a proxy for the best portions of the nation's economy.
But, when they say that small companies grow the economy, it isn't someone selling stamps or vitamins; it is companies with venture capital, like the beginnings of Facebook and such.
Tech companies start with some tech guys with an idea.  They borrow.  Then they go for venture capital.  Venture Capitalists want to ensure that the plan is sound and/or that they have some proven leadership.  The companies try to staff up with the best staff they can.
Meanwhile, the tech companies are in fierce competition for talent (except when they collude to keep wages down). So, tech companies in Silicon Valley have glorious headquarters and are willing to shuttle staff down from San Francisco.
So, when selecting candidates from college, what would tech companies look for?  Graduates with STEM degrees, of course.  And what does that diversity look like?  According to this site, in 2011, 75% of grads in Comp Sci were White or Asian.
In addition, among those who start college pursuing STEM degrees, underrepresented minorities are less successful in completing those programs than others.  And this can be tied to how they perform in high school, where minorities are known not to perform as well.  In 2013 it was said, "This year only 15 percent of blacks and 23 percent of Latinos met or exceeded the SAT benchmark for college and career readiness."
So, this does not really seem to be a problem with the tech companies.  You don’t hear how NFL teams aren’t recruiting enough from the Ivy League.  Going back to the question:  Is this a problem?  Yes.  More specifically, is it the tech companies’ problem?  No.
Can the problem of minority participation in tech be solved?  Maybe.  It needs to be addressed in earlier years.  In high school and earlier, logic and cause & effect need to be taught.  Taking on the problems with public schools is beyond this blog, but the point is that the diversity outcomes in tech are the result of issues that arise long before candidates reach employers.

Off soapbox,

Jim – 06/19/14 @itbycrayon

Notes from #EMCWorld 2014

Green Ball

After days at EMC World, here’s what sticks out in my mind:
  1. 3rd Platform Paradigm
  2. ViPR (& VMAXce)
  3. Electricity model
  4. Just like school
  5. The Venetian
  6. VMware
  7. Backups
  8. Other folks

Side note:  There was talk of SAP HANA, Hadoop, & Pivotal.  The place that I typically play in is Storage and not that space, so I’m going to ignore their emphasis there.

  1. 3rd Platform — This is at least the 2nd year that EMCWorld has had mention of this, and I really like the tie-in to the concepts from the Innovator's Dilemma, i.e. that there are technology waves, and that mobile users and devices are the 3rd wave of consumers, following the 2nd wave, PC users, and the 1st wave, mainframe users.  Emphasis on this is, in my opinion, a strength.

  2. ViPR (& VMAXce) — Lots of talk about ViPR 2.0, their abstraction layer and coding into storage.  One of my unknowns was how EMC ViPR compares with VMware Orchestrator.  Turns out, ViPR needs to talk northbound into the VMware layer or OpenStack layer, while VMware Orchestrator would need to talk southbound to ViPR or directly to arrays.  Last year there were lots of mentions of VMAXce; this year I didn't hear anything.  VMAXce makes it easier to provision on predetermined tiers and such (cloud provisioning portal, etc.).  If EMC struggled getting that right, how will smaller firms deal with their own coding to use ViPR?

  3. Electricity model for storage (utility model) — Dovetailing with the above, one presenter made analogies to the electricity model.  Right now, electricity is a utility: 110/220V.  Before AC was standard, companies needed their own electricity generation (Singer sewing machines, as an example).  So, if the future is to scale, we need less complexity and thus fewer options.  This ties back to ViPR, and also to performance-based criteria for storage, not just capacity.  Redundancy & data protection would be considered givens.  But, as we can see from #2, it isn't as simple as it sounds.

  4. Just like school — Spoke to one new attendee to EMCworld and he compared it to school — racing between classes, information overload.  I think one can add parties at night, concert midweek, sleep or bail on the last day of the week (attendance at breakfast this AM was lower and projected to be so).

  5. Las Vegas Venetian / Sands Convention Center — Every year, I'm blown away by the logistics of feeding 15,000 people breakfast and lunch.  Herd everyone in for breakfast, clean up, truck the stuff out, truck the lunch stuff in, prep the buffet tables, replace the tablecloths, and herd the lunch crowd in.  Very impressive.  (I said in #4 that breakfast this AM, following last night's concert, was projected to be lighter — it appeared they cut the dining area in half.)

  6. VMware — I always find it interesting that at EMCworld, EMC touts VMware as an integral part of their company (I think they own 80%).  When you talk to VMware staff, they sound independent — as they wish to be storage agnostic.  However, it seems that the cultural differences between the two and other barriers are coming down a bit more.  Seems that VMware is really more a part of EMC than they used to be.

  7. Backups — During one of the backup presentations, there was a nice slide on the data protection spectrum:  Continuous Availability (immediate, w/ VPlex), Replication (seconds, w/ RPAs), Snapshots (minutes, w/ array-based), Backups (hours, w/ Avamar or Networker), Archive (days, w/ Atmos).  Seeing those point solutions laid out added clarity.

  8. Other folks in town — While EMCWorld was at the Venetian, I saw that NetApp was at the Aria, and I heard that Symantec was at Caesar's Palace.  I guess for the week, Vegas was one big temporary tech conference.

Jim – 05/08/14 @itbycrayon