3 Reasons why military veterans make good employees

Green Ball

Over the years, I have had the pleasure of working with numerous veterans of our armed forces. Many experiences that I believe are common to military service transfer over to civilian employment. I have not served in the military, but my experience working with former military colleagues and screening many job applicants over the past 20 years allows me to offer an opinion.
Frequently, employers look for experience in certain industries or environments: a software development environment, a service provider environment, a manufacturing environment, a sales environment, a financial services environment, an academic environment, and so on.
So, what does a former soldier have to offer a civilian firm? [ Given that I’ve spent a majority of my years with IT firms, my explanation will be IT slanted ]
#3 Veterans have experience dealing with difficult people. Soldiers are trained to maintain composure in the face of drill sergeants and other superiors. That training is supposed to translate into the field, where enemies are trying to instigate conflict. Do you think they are going to lose composure when an angry customer is yelling? Do you think they are going to escalate conflict in the workplace?
I’ve seen two incidents where a manager was yelling at an employee and a response would have been justified. But in both cases, one involving a former Army private and the other an Army officer in the reserves, neither spoke a strong word that would have escalated the situation.
#2 Veterans show a loyalty to the team. In a sense, this is related to the former. The teammates make up the unit. As the saying goes, “there is no ‘I’ in team”. So, team success is important. In the military, if the guy who has your back isn’t there, your future won’t be so bright. Employers want employees to care about the company’s success. Directors and VPs want to see teams that are successful, not just individuals. Heroes are good, but companies want to know that they can execute without them. In addition, managers are concerned about team chemistry. Guys who aren’t interested in team success tend to work against team chemistry.
I worked with a manager who had previously come from the Air Force (if memory serves that was the branch). He was loyal to the staff he inherited. He backed them up. He assumed responsibility for the team’s performance and was intent on getting the team to function together.
#1 Veterans are resilient in difficult times. In the workplace, change is frequent. In business, if you don’t change, you will be out of business: reorganizations, races to market, responses to competitors, personnel changes, new regulations, buyouts, spinoffs, and so on. Some of these changes, or even rumors of them, can be overwhelming to staff. Projects can be started, stopped, and then restarted – or direction switched and switched back. Military staff are trained to prepare for change. Just as complete information may not be available to civilian staff, military staffers are used to working with incomplete information and knowing that those above may have more information behind their decisions than what is made public. In addition, the conditions soldiers are placed under are more stressful and more life-impacting than what happens in the typical civilian job.
I worked with a former Marine who was under a great deal of pressure to deliver. The project was important, behind on its timelines, and drawing a fair amount of attention. He said something along the lines of: “Hey, compared to rolling in a tank through Fallujah (Iraq) and being shot at – this is pretty easy.”
The net result is that veterans bring commitment without anxiety. That is of value to any organization.

Jim – 11/10/13
@itbycrayon

View Jim Surlow's profile on LinkedIn (I don’t accept general LinkedIn invites – but if you say you read my blog, it will change my mind)

How to deal with exponential growth rates? And how does this relate to cloud computing?

Double Black Diamond

What happens when demand exceeds the resources? Ah, raise prices. But sometimes that is not available as a solution. And sometimes demand spikes far more than expected.

Example: Back in the early 2000s, Netflix allowed renters to have 3 DVDs at a time, but some customers churned those 3 DVDs more frequently than average and more frequently than Netflix expected. So, Netflix throttled those customers and put them at the back of the line. (I dug up this reference.) This also appears to have happened in their streaming business.

Another example: your web site gets linked on a site that generates a ton of traffic (I should be so lucky). This piece says that the Drudge Report sent 30,000-50,000 hits per hour, bringing down the US Senate’s web site. At 36,000 hits per hour, that averages 10 per second.

Network bandwidth tends to be the constrained resource. Another example, from AT&T: as a service provider, this piece says that 2% of their customers consume 20% of their network.

There are non-technical examples as well. The all-you-can-eat buffet is one: some customers will consume significantly more than the average. (Unfortunately, I can’t find a YouTube link to a commercial that VISA ran during the Olympics in the 80s or 90s where a sumo wrestler walks into a buffet – if you can find it for me, please reply.)

Insurance companies deal with this as well. They try to spread out the risk so that if an event were to occur (e.g. a hurricane), all of their exposed customers aren’t in a single area. Economists call this “adverse selection”: “How do we diversify the risk so that those who file claims aren’t the only ones paying in?”

How does this relate to computing? Well, quotas are an example. I used to run systems with home directory quotas. If I had 100GB and 1,000 users, I couldn’t just divide the space up evenly. About 500 users didn’t even need 1MB, but I had 5 that needed 10GB each. The users who did need real space needed far more than an even slice.

So, the disk space had to be “oversubscribed”. I then could have a situation where everyone stayed under quota, but I could still run out of disk space.
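
As a toy illustration of that oversubscription math (hypothetical numbers, loosely mirroring the quota example above), a few lines of Python show how the sum of quotas can dwarf the physical capacity while actual usage still has to fit:

```python
# Oversubscription sketch with made-up numbers echoing the example above:
# 1,000 users share 100 GB; 5 heavy users get 10 GB quotas, the rest get 0.5 GB.
capacity_gb = 100
quotas_gb = [10.0] * 5 + [0.5] * 995

total_quota = sum(quotas_gb)
print(f"Sum of quotas: {total_quota:.0f} GB on {capacity_gb} GB of disk "
      f"({total_quota / capacity_gb:.1f}x oversubscribed)")

# Everyone can stay under quota and the disk can still fill up:
# if each user consumes just 40% of their quota, total usage is
usage_gb = sum(q * 0.4 for q in quotas_gb)
print(f"Usage at 40% of quota per user: {usage_gb:.0f} GB "
      f"-> {'fits' if usage_gb <= capacity_gb else 'out of disk space'}")
```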

Banks do this all the time. They have far less cash on hand than they have deposits. Banks compensate by having deposit insurance (via the FDIC), which should prevent a run on the bank.

In computing, this happens with network bandwidth, disk space, and compute power. At deeper levels, it happens with I/O. As CPUs get faster, disks become the bottleneck, and not everyone can afford solid-state disks to keep up with the I/O demand.

Demand in a cloud computing environment would hopefully follow a normal distribution (a bell curve). But that is not what always occurs. Demand tends to follow an exponential curve.

[Image: demand curve]
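
To illustrate the difference (with simulated numbers, not real measurements), this sketch draws per-customer demand from a bell curve and from an exponential distribution and compares how much of the total the heaviest 2% of customers consume:

```python
import random

random.seed(42)
n = 10_000
mean_demand = 100  # arbitrary units per customer

# Idealized case: demand clustered around the mean (bell curve)
normal_demand = [max(0.0, random.gauss(mean_demand, 15)) for _ in range(n)]

# Heavier-tailed case: a few customers demand far more than average
exponential_demand = [random.expovariate(1 / mean_demand) for _ in range(n)]

def top_share(demands, fraction=0.02):
    """Share of total demand consumed by the top `fraction` of customers."""
    ranked = sorted(demands, reverse=True)
    top = ranked[: int(len(ranked) * fraction)]
    return sum(top) / sum(ranked)

print(f"Top 2% share, normal demand:      {top_share(normal_demand):.1%}")
print(f"Top 2% share, exponential demand: {top_share(exponential_demand):.1%}")
```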

As a result, if the demand cannot be quenched by price increases, then throttling must be implemented to prevent full consumption of the resources. There are many algorithms to choose from on the network side; likewise, there are algorithms for compute.

Given a cloud architecture in which a VM runs on a host connected to a switch connected to storage backed by a disk pool of some sort, there are many places to introduce throttles. The image below uses a VMware and NetApp vFiler environment (it could be an SVM, a.k.a. vServer, as well): a VM on an ESX host, connected to an Ethernet switch, connected to a filer, which is split between a disk aggregate and a vFiler that pulls from a volume sitting on the aggregate, which in turn holds the file.

[Image: VMware/NetApp vFiler environment – VM, ESX host, Ethernet switch, filer, vFiler, volume, aggregate]

Throttling at the switch may not do much good, as this would throttle all VMs on an ESX host or, if not filtering by IP, all ESX hosts. Throttling at the ESX server layer again affects multiple VMs. Imagine a single customer on one or many VMs. Likewise, throttling at the storage layer – specifically the vFiler – may impact multiple VMs. The logical thing to do for greatest granularity would be to throttle at the VM or vmdk level – basically, throttle at the endpoints. Since a VM could have multiple vmdks, it is probably best to throttle at the VM level. (NetApp clustered Data ONTAP 8.2 allows throttles at the file level.) Not to favor NetApp: other vendors who are introducing QoS (e.g. EMC, SolidFire) are doing it at the LUN layer (they tend to be block vendors).

For manual throttling, some isolate the workloads to specific equipment – this could be compute, network, or disk. When I worked at the University of California, Irvine and we saw the dorms coming online with Ethernet to the rooms, I joked that we should drive their traffic through our slowest routers, as we feared they would bury the core network.

The question would be: what type of throttle algorithm is best? Since starving the main consumers down to zero throughput is not acceptable, following a network model may be preferred. Something like a weighted fair queueing algorithm may be the most reasonable, though a simpler proposition would be to revert to the quota model used for disk space – just set higher thresholds for most consumers, which will not eliminate every problem, but will eliminate a majority. For extra credit (and maybe a headache), read this option, which was a network solution that also maximizes throughput.
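
As a sketch of what a fair-share throttle aims for (hypothetical VM names, demands, and weights – not any vendor’s implementation), here is a weighted max-min fair allocation, which is the steady-state share a weighted-fair-queueing style scheduler works toward: nobody is starved, everyone gets at least their weighted share, and unused share flows to the heavy consumers.

```python
def weighted_fair_share(demands, weights, capacity):
    """Weighted max-min fair allocation of `capacity` across consumers.

    Consumers whose demand fits within their weighted share simply get
    their demand; the leftover capacity is re-divided among the rest.
    """
    allocation = {name: 0.0 for name in demands}
    unsatisfied = set(demands)
    remaining = float(capacity)

    while unsatisfied and remaining > 1e-9:
        total_weight = sum(weights[name] for name in unsatisfied)
        share = {name: remaining * weights[name] / total_weight
                 for name in unsatisfied}
        done = {name for name in unsatisfied if demands[name] <= share[name]}
        if not done:
            # No one can be fully satisfied: everyone takes their share.
            for name in unsatisfied:
                allocation[name] = share[name]
            break
        for name in done:
            allocation[name] = demands[name]
            remaining -= demands[name]
        unsatisfied -= done
    return allocation

# Hypothetical example: three VMs competing for 10,000 IOPS, equal weights.
demands = {"vm-small": 1_000, "vm-medium": 4_000, "vm-greedy": 20_000}
weights = {"vm-small": 1, "vm-medium": 1, "vm-greedy": 1}
for vm, iops in weighted_fair_share(demands, weights, 10_000).items():
    print(f"{vm}: {iops:.0f} IOPS (wanted {demands[vm]})")
```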

Jim – 11/03/13
@itbycrayon

View Jim Surlow's profile on LinkedIn (I don’t accept general LinkedIn invites – but if you say you read my blog, I’ll accept)

 

 

3 Reasons why techies hate being summoned to meetings run by non-technical staff

Green Ball

Techies tend not to enjoy meetings. And especially those run by non-technical staff. And more so, when they are summoned to them – “oh, we need you at that meeting”. Here are 3 explanations as to why:

#3 – Communication Gap – Lingo, jargon, idioms, whatever you call it, are somewhat localized to the technical staff. Frequently, the non-technical person calling the meeting doesn’t understand the lingo, so there is a communication gap. I’ve been on conference calls with peers in other countries where there were language barriers. I’ve been in meetings with people on the same team where there were language barriers. System Administrators, Engineers & Architects speak a different language than Salespeople, Project Managers, and Execs. This sometimes makes meetings painful: people talk past each other, and after lengthy dialogue, meetings get longer.

#2 – Negotiation v. Conversation – Questions come in that say, “Isn’t it possible to do X?” And the answer is “Yes, but…” Unless the conditional phrase is put into terms that the audience can really grasp, the condition isn’t really heard. If the solution proceeds and negative consequences result, then the assumption is that the warnings were ignored – meanwhile, it is pointed out that the “expert” was in the room. So, blame ends up as the result. The “can’t we do this” question is a negotiation from the ones who need the solution; the “yes, but” answer tends to be a conversation. Since there usually is a technical solution to most problems, and the question typically is interpreted as asking what is possible, the answer almost always is “yes, but”. The answer should tend toward, “No, unless you have more dollars in the budget” or “No, unless you have more labor to provide me.”

#1 – Information Direction – Technical staff either have to research solutions or execute those solutions. They are information producers: “The solution will look like this…” or “It will take this long to run…” Non-technical staff, meanwhile, tend toward being information consumers – maybe they are decision makers (managers or executives) or maybe they are project managers needing to set up schedules. So, they need the technical staff to provide the information to the other stakeholders. While technical staff are in these meetings waiting to supply information, they can’t be off “doing their job” of researching solutions or executing on those solutions. It is especially painful when they are in the meetings, waiting to contribute, and the question arises, “why are you so far behind?” or “when will it be done?” It reminds me of a Dilbert comic where the Pointy-Haired Boss asks Dilbert for daily status updates as to why he is so far behind. Apparently, Scott Adams has two on the topic.

I focused this post on technical staff at non-technical meetings. When technical staff are at technical meetings, there tend not to be communication gaps or negotiations, and the information direction changes: they can also be information consumers rather than sole providers.

How do you make the meetings more effective?

#1 – Translate the information – Try to drive the information to the stakeholder’s concern; try to get a translation. Move the conversation from “if this happens, the port is down” to “if this happens, the customers can’t get data”, or from “if this solution doesn’t work, I’ll need to go back to the drawing board” to “if the solution doesn’t work, I’ll probably need another 2 months to find another way.”

#2 – Detail the requirements and/or assumptions – Instead of “can’t we do this?”, it should be rephrased as “can’t we do this with the existing budget, existing schedule, and existing staff?” (or whatever adjustments to one or all of the three). Detail the meeting assumptions as well – at the meeting, I’m looking for “information to make a decision”, “information so that all the attendees have the same base of information”, “timelines of execution”, or “proof of information that is presented by X”.

Jim – 10/21/13
@itbycrayon

View Jim Surlow's profile on LinkedIn (I don’t accept general LinkedIn invites – but if you say you read my blog, it will change my mind)

NetApp de-dupe, FlexClone, fragmentation, and reallocation

Double Black Diamond

Since NetApp has its own filesystem, WAFL, disk layouts and fragmentation behave differently than on traditional filesystems. Before NetApp introduced de-duplication, fragmentation was less common. Real quick, let me deviate a bit and describe some notions of WAFL:

WAFL stands for “Write Anywhere File Layout”. The notion is that if inodes can be saved with part of the file, then there doesn’t need to be a file allocation table or inode table at one part of the volume and the data at another.

Another goal was to minimize disk head seeks – so data written through a RAID set would be striped across all the disks in the set at essentially the same sectors. If I have 4 data disks and a parity disk (keeping it simple here), the stripe of data would be 25% on one disk, 25% on the next, and so on, with parity on the last. Assuming each disk has 100 sectors (for ease of calculation), I could instead scatter the data: sectors 0-24 on one disk, 25-49 on the next, 10-34 on the next, then 95-100 and 0-3 on the next one, then 88, then 87, then 5-10, and so on. Well, that would be rather slow, as the heads jump around the disk platters searching for those blocks. NetApp tries to keep all the data in one spot on each disk.
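
To make the seek cost concrete, here is a toy comparison (hypothetical sector numbers on a single 100-sector disk, ignoring rotation): the work a head does for a sequential read is roughly proportional to the distance it travels between consecutive blocks.

```python
def head_travel(sectors):
    """Total head movement, in sectors, to visit blocks in the given order."""
    return sum(abs(b - a) for a, b in zip(sectors, sectors[1:]))

# Contiguous layout: 25 blocks packed into one region of the platter.
contiguous = list(range(0, 25))

# Scattered layout: the same 25 blocks sprinkled around, echoing the
# jumbled example in the text (95-99, wrap to 0-3, 88, 87, 5-10, 25-32).
scattered = ([95, 96, 97, 98, 99, 0, 1, 2, 3, 88, 87]
             + list(range(5, 11)) + list(range(25, 33)))

print(f"Head travel, contiguous layout: {head_travel(contiguous)} sectors")
print(f"Head travel, scattered layout:  {head_travel(scattered)} sectors")
```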

So, data is now assembled in nice stripes, and if the writes are large, then a file takes up nice consecutive blocks on the stripe. The writes are in 4K blocks, but with a much larger file, they line up nicely and sequentially.

From their Technical Report on the subject, they state that the goal is to line up data for nice sequential reads. So, when a read request takes place, a group of blocks can be pre-fetched in anticipation of serving up a larger sequence.

Well, with NetApp FlexClone (virtual clones) and NetApp ASIS (de-duplication), the data gets a bit more jumbled. (Same with writes of non-contiguous blocks). Here’s why:

If I have data that doesn’t live together, holes get created. Speaking of non-contiguous blocks: I’ve worked in environments with Oracle databases over NFS on NetApp. When Oracle writes a group of blocks, those may not be logically grouped (the way a text file might be). So, after some of the data is aged off, holes are created, as block 1 might be kept, but not blocks 2 & 3.

With FlexClone, a snapshot is made writeable, and then future writes to the volume are tracked separately and may get aged off separately. Clearer is the notion of de-duplication. NetApp de-dupe is post-process, so it accepts an entire write of a file and then later compares the contents of that file with other files on the system. It will then remove any duplicated blocks.

So, if I write a file and blocks 2, 3, & 4 match another file while blocks 1 & 5 do not, then when the de-dupe process sweeps the file, it is going to remove those blocks 2, 3, & 4. That stripe now has holes, and that will impact the pre-fetch operation on the next read.
[Image: de-duplicated blocks leaving holes in a stripe]
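
Here is a toy model of post-process de-duplication (a sketch of the idea, not NetApp’s actual on-disk format): two files are written whole, then a later sweep fingerprints each 4 KB block and turns duplicates into pointers at the first copy, which is exactly what leaves holes in the second file’s otherwise sequential stripe.

```python
import hashlib

BLOCK_SIZE = 4096  # 4 KB blocks, as in the text

def split_blocks(data: bytes):
    return [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]

# Two 5-block files whose middle blocks (2, 3, 4) have identical contents.
shared_middle = b"X" * BLOCK_SIZE + b"Y" * BLOCK_SIZE + b"Z" * BLOCK_SIZE
file_a = split_blocks(b"A" * BLOCK_SIZE + shared_middle + b"B" * BLOCK_SIZE)
file_b = split_blocks(b"C" * BLOCK_SIZE + shared_middle + b"D" * BLOCK_SIZE)

def dedupe_sweep(files):
    """Post-process sweep: fingerprint every block; a duplicate block is
    replaced by a pointer to the first copy instead of staying in place."""
    seen = {}  # fingerprint -> (file index, block number) of the first copy
    result = []
    for f_idx, blocks in enumerate(files):
        entries = []
        for b_idx, block in enumerate(blocks):
            fingerprint = hashlib.sha256(block).hexdigest()
            if fingerprint in seen:
                entries.append(f"block {b_idx + 1}: hole, shared with {seen[fingerprint]}")
            else:
                seen[fingerprint] = (f_idx, b_idx + 1)
                entries.append(f"block {b_idx + 1}: kept in place")
        result.append(entries)
    return result

for f_idx, entries in enumerate(dedupe_sweep([file_a, file_b])):
    print(f"file {f_idx}: " + "; ".join(entries))
```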

In the past, I only ran the reallocate command when I added disks to the disk aggregate, but now, while searching through performance-enhancing methods on NetApp, I’ve realized the impact of de-dupe on fragmentation.

Jim – 09/16/13
@itbycrayon

View Jim Surlow's profile on LinkedIn (I don’t accept general LinkedIn invites – but if you say you read my blog, it will change my mind)

Migrating to the Cloud – Technical Concerns of migrating to an IaaS Cloud

Blue Square

The thought of migrating to the cloud can be flippant or daunting, depending on where you sit on the optimist/pessimist scale. In reality, the effort is in proportion to your environment.

In this week’s post, I’m talking specifically about Infrastructure-as-a-Service Cloud — rather than having a physical presence, your goal is to move to the cloud, so you don’t have to care for that hardware stuff.

My recommendation on where to begin is to ask the question: How would I migrate somewhere else?

It starts with: which services do I move first? When I worked at UC Irvine’s School of Humanities in the early 90s, we had to move into a new building, and the finance staff needed to move first since they didn’t want to be caught closing the books at the very time we had to evacuate the old modular building. So, a server had to go over there to provide the NetWare routing that we were doing between a classroom network and the office network (it was summer, so I didn’t have to worry about student congestion on the network – though the empty classroom I put the server in fell victim to the painters unplugging it). After the office staff moved, I could bring the office NetWare server over one evening. The important part of this story is that I needed networking to handle the people first.

Another move that I performed was similar. I had to move servers from downtown Denver to a new data center in south Denver. The users of those machines couldn’t deal with the network latency as our route went from downtown Denver to Massachusetts to south Denver. Those users had to move, then they had to get new AD (Microsoft Active Directory) credentials and new security tokens. So, the important part here is that the users needed an authentication infrastructure 1st.

While moving some servers from Denver to Aurora, into a new facility for us, we again were concerned about latency, so we needed authentication and name services stood up in Aurora as well, so that authentication wouldn’t have to cross the WAN.

My point from these anecdotes is that it is not as simple as moving one OS instance. There are dependencies. Typically, those dependencies are infrastructure dependencies, and they typically exist so that latency can be avoided. [I haven’t defined network latency – but, for those who need an example, think of TV interviews where the anchor is in New York and the reporter is in the Middle East. The anchor asks a question, and the reporter has to wait for all the audio to get to him while the viewer sees a pause in the conversation. That is network latency: the amount of time it takes to travel the “wire”.]

Back to dependencies – I may need DNS (domain name service) at the new site, so that every time I look up itbycrayon.com, I don’t have to have the server talk back to my local network to get that information. I may need authentication services (e.g. AD). I may need a network route outbound. I may need a database server. Now, these start to add up.

In my experience, there is a 1st wave – infrastructure services.

Then there is a 2nd wave – actual systems used by users. Typically, these are some guinea pigs which can endure the kinks being smoothed out.

Eventually, there are a bunch of systems that are all interrelated. This wave ends up being quite an undertaking, as this bulk of systems takes time to move and users are going to want minimal downtime.

Then after the final user wave, the final clean-up occurs – decommissioning the old infrastructure servers.

[Image: migration waves]
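
The wave ordering above is essentially a dependency sort. As a sketch (with hypothetical system names and dependencies), grouping systems into waves so that nothing moves before the services it relies on at the new site might look like this:

```python
from collections import defaultdict

# Hypothetical dependency map: system -> services it needs at the new site.
# (Assumes there are no circular dependencies.)
depends_on = {
    "dns":            [],
    "active_dir":     ["dns"],
    "database":       ["dns", "active_dir"],
    "guinea_pig_app": ["dns", "active_dir"],
    "erp_app":        ["database", "active_dir"],
    "web_portal":     ["erp_app", "dns"],
}

def migration_waves(depends_on):
    """Group systems into waves; each wave depends only on earlier waves."""
    wave_of = {}

    def wave(system):
        if system not in wave_of:
            deps = depends_on.get(system, [])
            wave_of[system] = 1 + max((wave(d) for d in deps), default=0)
        return wave_of[system]

    waves = defaultdict(list)
    for system in depends_on:
        waves[wave(system)].append(system)
    return dict(sorted(waves.items()))

for number, systems in migration_waves(depends_on).items():
    print(f"Wave {number}: {', '.join(systems)}")
```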

What I’ve presented is more about how to do a migration than a migration into the cloud. For the cloud, there may be additional steps depending on your provider – maybe you have to convert VMware VMs using OVFtool and then import.

VM portability eases the task. The underlying hardware tends to be irrelevant – as opposed to moving physical servers where there may be different driver stacks, different devices, etc. Obviously, one has to be cognizant of compatibility. If one is running IBM AIX, then one must find a cloud provider that supports this.

My point is that it is still a migration of how to get from A to B, and high level requirements remain the same (How is my data going to move – over the wire or by truck? What can I live with? What systems depend on what other systems?). The big difference between an IaaS Cloud migration and a physical migration is that servers aren’t moving from site A to site B – so there isn’t the “swing gear” conversation or the “physical move” conversation. This is a migration of landing on pre-staged gear. The destination is ready. Figure out the transport requirements of your destination cloud and get going!

Best of Breed v. One Throat to Choke

Green Ball

Best of Breed or One-Throat-to-Choke – Pros & Cons

In the industry, you hear the term “Best of Breed” thrown around. What it implies is that no single vendor has a solution that excels everywhere in a system – so you need to assemble parts, like a cooking recipe, to get the best.

One may see VARs (Value Added Resellers), system integrators, and even vendors with broad product portfolios sell best-of-breed solutions. Symantec has multiple backup products; EMC has multiple backup products; EMC has multiple storage arrays running different operating systems; IBM has multiple storage arrays with different operating systems.

The reason for this is that consumers have different preferences, and those preferences could be either product-specific or vendor-specific. As a consumer, I may want different feature functionality from my products: I may want my network routing from one vendor, my firewalls from another, and my WAN accelerators from another. Or, as a consumer, I may want it all from one vendor so that I have one company rep to complain to and I’m not stuck between two vendors pointing their fingers at one another.

With best of breed, the responsibility is on the purchaser to make it all work. Unless the purchaser is buying from a vendor (VAR, system integrator, etc.) who is going to make it all interoperate (and even that doesn’t address the long-term maintenance), the purchaser lacks that single contact – that “one throat to choke”.

[Image: best of breed vs. one throat to choke]

Companies know this, which is why larger companies will want to partner with a particular product vendor or OEM (brand the equipment as their own) or actually provide competing products.

I mentioned Symantec earlier. Symantec for years offered both BackupExec and NetBackup. Why would they do that? Some companies would need the full enterprise features of NetBackup and would accept the complexity. Others want something simpler to use and don’t need the enterprise features, so they could go with BackupExec.

To be clear, these are not the same product with one merely having fewer licenses and features. They are two separate products which require two different command sets. I don’t mean to pick on Symantec, as many other companies do this as well – it was just that they came to mind first.

When companies purchase SAN storage arrays and they use Brocade for their networking, they would purchase the Brocade Fibre Channel switches from their array vendor. Brocade sees no benefit in just selling switches; they would rather use the storage array vendors as their channel. Meanwhile, the array vendors front-end the selling of the switches, and if the purchaser has an issue, whether with the array or the switches, they can talk directly with the array vendor. The “one throat to choke”.

Vendor relations can be very painful when one considers all the elements – sales, support contract maintenance, tech support, and upgrade planning. That doesn’t sound too painful for one vendor, but when you consider that there are multiple hardware vendors and multiple software vendors, it can be very time-consuming.

The trade-off here is: would you rather do less vendor relations or have a better mix of products?

With a mix of products, you do have to consider the interoperability and compatibility issues.

Each vendor is going to have their own strengths and weaknesses.

Is the premium Steakhouse going to do the same fantastic job on fish that they do on steak – when they sell 10x the number of steaks?  Or vice versa for the fish restaurant with steaks?

Companies that purchase other companies to fill or supplement product portfolio holes will inevitably have interoperability issues with their other products.  The benefit is that they are responsible and you aren’t.  The drawback is that they might not be completely interoperable.

 

Jim

@itbycrayon
View Jim Surlow's profile on LinkedIn

Business considerations for the move to the cloud

Blue Square

Migration implies change, and change implies risk. So, what are the hurdles the decision maker has to clear before committing to a migration to the cloud?

First, what type of migration is it? Is it a migration to Infrastructure as a Service (IaaS), Platform as a Service (PaaS), or Software as a Service (SaaS) … or any of the other “fill in the blank as a Service” (XaaS)? Wikipedia can provide sufficient definitions for IaaS, PaaS, and SaaS, but just to quickly provide examples: IaaS allows one to hotel their computing environment – e.g. run Microsoft Server on someone else’s gear by renting it out. PaaS allows for a development environment to produce software on someone else’s gear and use their software development tools. SaaS allows one to run a specific software app on someone else’s environment — “webmail” being SaaS before there was a term for it. Now, it could be online learning, Salesforce.com, etc.

[Image: IaaS, PaaS, SaaS]

Second, what are the risks? In exchange for capital expenses and some operational expenses, one gets purely operational expenses. This also means that some control is turned over to the service provider. When I lose power to my house, since I haven’t built my own power plant, I’m at the mercy of the utility company. Power comes back when it comes back. I can’t re-prioritize tasks that the power company has set (e.g. bring my neighborhood back before the other neighborhood). The SLAs – Service Level Agreements – are where the expectations for uptime, performance, etc. are set.

I’ve worked with some users who, when presented with the SLAs of internal systems, wanted to drive costs down. “Oh, I don’t need redundancy or highly available systems – these are test & development servers… except right before we do a code release, then the systems have to be up 24×7.” “Um, you don’t get to pick the time of your disaster or failure, so it sounds like you need to buy an HA system.”

As systems become more complex, firms struggle with: “how is the expertise maintained?” The acquisition cost of gear is about 1/3 of its total cost; the rest is maintenance and then administration. Unless one runs a tech company, tech administration is not the company’s core competency. So, why would a company want to run that in their business?

This is the classic buy v. build decision. Of course, with IT, the problem is that after one builds, they still have to administer. And, after one buys, they still have to handle the vendor relations.

In addition to vendor relations, one has the concern about vendor longevity. Is the vendor going to be there for as long as your company needs it to be? What happens when the vendor goes out of business or ends the line of business?

Of course, on the build side, what happens when the expert you hired finds a new job, or you wish to promote him to an alternate position?

Non-profits have a different problem: funds may not be regular, and OpEx costs in perpetuity might not be serviceable. But hardware/software maintenance costs and training fall in the same boat.

A third consideration is security. How secure is your data in the cloud? Returning to the SaaS e-mail example: it is fair to assume, given recent revelations, that the NSA is mining your e-mail off Gmail, Yahoo Mail, Hotmail, and others. One would hope that the systems are secure from hackers and this info is only leaking to the government lawfully. But, if you are concerned about hackers, how secure is your data in-house? So, there is a cost consideration for the build solution, and there is a trust consideration regarding one’s provider.

The build v. buy decision is admittedly harder with technology given the high rate of change. This is especially true as it ties to security. Feature implementation is based upon service provider timetables and evaluation of risk. All this again returns to priorities and that in the build solution, one gets to make their own calls and evaluations.

In summary, one can select at what level they wish to move to the cloud. One needs to be concerned about the build v. buy decisions, but a cloud move can be granular (we put this out there, we don’t put that). Security, vendor longevity, vendor relations, etc. are big factors. Time and labor need to be accounted for, whether doing it in-house or working to outsource. And, of course, there are the decisions about CapEx and OpEx.

Jim

@itbycrayon

View Jim Surlow's profile on LinkedIn