NetApp 7-mode ssh key config via CLI w/o NFS or CIFS

Double Black Diamond

Configuring a NetApp 7-mode filer to use SSH with keys, without NFS-exporting or CIFS-sharing the root volume that holds /etc, can be convoluted.

Before I get to the steps, let me list the assumptions:

  1. The steps below will be for a non-root user
  2. Root/Administrator privs are available to the user who is setting this up.
  3. The SSH key for the non-root user has already been generated on the client system.
  4. The SSH public key can be copied and pasted from a window displaying the key file (e.g. an xterm or Notepad) into a shell window with a CLI login to the filer (e.g. an xterm or PuTTY).

Basically, the trick is to set up the empty user directories, since the CLI has no command to create directories. (With NFS or CIFS access, the directories could be made easily.) A complete worked example follows the steps below.

  1. Login into filer via CLI with appropriate privileges.
  2. # go into advanced mode
    • priv set advanced
  3. # find an empty directory using ls – in some cases, /home/http may be empty.
    • ls /home/http
  4. # check ndmpd status
    • ndmpd status
  5. # if ndmp is not on, turn it on.
    • ndmpd on
  6. # When using ndmpcopy, the shortcut of dropping /vol/<root volume> does not work for the destination
    • ndmpcopy /home/http /vol/<root volume>/etc/sshd/<username>
      ndmpcopy /home/http /vol/<root volume>/etc/sshd/<username>/.ssh
  7. # Create the text file with wrfile, paste the key(s) from your other window, and then press Ctrl-C
    • wrfile /vol/<root volume>/etc/sshd/<username>/.ssh/authorized_keys
  8. # if ndmpd was off, turn it off.
    • ndmpd off
  9. # ndmpcopy creates a restore_symboltable file.  For cleanliness, remove it.
    • rm /vol/<root volume>/etc/sshd/<username>/restore_symboltable
    • rm /vol/<root volume>/etc/sshd/<username>/.ssh/restore_symboltable

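A complete worked example of the above, using a hypothetical user "jdoe" and a hypothetical root volume "vol0" (substitute your own names; the # notes are annotations, not part of the commands):

    priv set advanced
    ls /home/http                                  # confirm the source directory is empty
    ndmpd status                                   # note whether ndmpd was already on
    ndmpd on
    ndmpcopy /home/http /vol/vol0/etc/sshd/jdoe
    ndmpcopy /home/http /vol/vol0/etc/sshd/jdoe/.ssh
    wrfile /vol/vol0/etc/sshd/jdoe/.ssh/authorized_keys
                                                   # paste the public key, press Enter, then Ctrl-C
    ndmpd off                                      # only if ndmpd was off to begin with
    rm /vol/vol0/etc/sshd/jdoe/restore_symboltable
    rm /vol/vol0/etc/sshd/jdoe/.ssh/restore_symboltable
    priv set admin                                 # drop back out of advanced mode
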
Shortcut (if a user has already been set up, their ssh keys and directory structure can be copied, which saves some steps). A worked example follows these steps.
Warning: the permissions (Unix modes or Windows ACLs) will follow along with the ndmpcopy, so there is a security risk here if /etc is NFS mounted or CIFS shared. Keep that in mind.

  1. # check ndmpd status
    • ndmpd status
  2. # if ndmp is not on, turn it on.
    • ndmpd on
  3. # When using ndmpcopy, the shortcut of dropping /vol/<root volume> does not work for the destination
    • ndmpcopy /vol/<root volume>/etc/sshd/<existing user with ssh keys> /vol/<root volume>/etc/sshd/<new ssh user>
  4. # Create the text file with wrfile, paste the key(s) from your other window, and then press Ctrl-C
    • wrfile /vol/<root volume>/etc/sshd/<new ssh username>/.ssh/authorized_keys
  5. # if ndmpd was off, turn it off.
    • ndmpd off
  6. # ndmpcopy creates a restore_symboltable file.  For cleanliness, remove it.
    • rm /vol/<root volume>/etc/sshd/<new ssh username>/restore_symboltable

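Again with hypothetical names – an existing user "jdoe" who already has keys, a new user "asmith", and a root volume "vol0" – the shortcut boils down to something like this (the # notes are annotations, not commands):

    ndmpd status                                   # note whether ndmpd was already on
    ndmpd on
    ndmpcopy /vol/vol0/etc/sshd/jdoe /vol/vol0/etc/sshd/asmith
    wrfile /vol/vol0/etc/sshd/asmith/.ssh/authorized_keys
                                                   # paste asmith's public key, then Ctrl-C
    ndmpd off                                      # only if ndmpd was off to begin with
    rm /vol/vol0/etc/sshd/asmith/restore_symboltable

Note that until the wrfile step overwrites it, asmith's directory contains a copy of jdoe's authorized_keys (along with jdoe's permissions, per the warning above).
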
Jim – 11/18/13

@itbycrayon

View Jim Surlow's profile on LinkedIn (I don’t accept general LinkedIn invites – but if you say you read my blog, it will change my mind)

NetApp de-dupe, flexclone, fragmentation, and reallocation

Double Black Diamond

Since NetApp has its own filesystem, WAFL, disk layouts and fragmentation behave differently than on traditional filesystems. Before NetApp introduced de-duplication, fragmentation was less common. Let me quickly deviate a bit and describe some notions of WAFL:

WAFL stands for “Write Anywhere File Layout.” The notion is that if inodes can be saved alongside part of the file, then there doesn’t need to be a file allocation table or inode table in one part of the volume and data in another.

Another goal was to minimize disk head seeks, so data written to a RAID set is striped across all the disks in the set at essentially the same sectors. If I have 4 data disks and a parity disk (keeping it simple here), a stripe of data would be 25% on one disk, 25% on the next, and so on, with parity on the last. Assuming each disk has 100 sectors (for ease of calculation), I could instead scatter the data: on disk one at sectors 0-24, on the next disk at 25-49, on the next at 10-34, and on the next at 95-100 and 0-3, then 88, then 87, then 5-10, etc. That would be rather slow, as the head jumps around the disk platter searching for those sectors. NetApp tries to keep all the data in one spot on each disk.

So, data is now assembled in nice stripes, and if the writes are large, a file takes up nice consecutive blocks in the stripe. The writes are in 4k blocks, but with a much larger file they line up nicely and sequentially.

From their Technical Report on the subject, they state that the goal is to line up data for nice sequential reads. So, when a read request takes place, a group of blocks can be pre-fetched in anticipation of serving up a larger sequence.

Well, with NetApp FlexClone (virtual clones) and NetApp ASIS (de-duplication), the data gets a bit more jumbled. (Same with writes of non-contiguous blocks). Here’s why:

If I have data that doesn’t live together, holes get created. Speaking of non-contiguous blocks, I’ve worked in environments with Oracle databases over NFS on NetApp. When Oracle writes a group of blocks, those may not be logically grouped (the way a text file might be). So, after some of the data is aged off, holes are created, as block 1 might be kept but not blocks 2 & 3.

With FlexClone, a snapshot is made writeable, and future writes to the clone are tracked separately and may get aged off separately. The de-duplication case is clearer: NetApp de-dupe is post-process, so it accepts the entire write of a file and later compares the contents of that file with other files on the system. It then removes any duplicated blocks.

So, if I write a file and blocks 2, 3, & 4 match another file while blocks 1 & 5 do not, when the de-dupe process sweeps the file it is going to remove blocks 2, 3, & 4. That stripe now has holes, and that will impact the pre-fetch operation on the next read.

In the past, I only ran the reallocate command when I added disks to an aggregate, but now, while searching through performance-enhancing methods on NetApp, I’ve realized the impact of de-dupe on fragmentation.
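
For the curious, here is roughly what that looks like on 7-mode, from memory – verify the flags against the reallocate man page for your Data ONTAP version, and treat the volume name "vol1" as a placeholder:

    reallocate on                      # enable reallocation scans on the filer
    reallocate measure /vol/vol1       # measure-only scan to gauge how fragmented the layout is
    reallocate status /vol/vol1        # view the measured optimization level
    reallocate start -f -p /vol/vol1   # forced, physical reallocation; -p is generally
                                       # preferred on volumes with snapshots or de-dupe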

Jim – 09/16/13
@itbycrayon

View Jim Surlow's profile on LinkedIn (I don’t accept general LinkedIn invites – but if you say you read my blog, it will change my mind)

VMware Linked Clones & NetApp Partial writes fun


Double Black Diamond
NetApp Data ONTAP writes data in 4k blocks. As long as the writes to NetApp are in 4k increments, all is good.

Let me step back. Given all the fun that I’ve experienced recently, I am going to alter my topic schedule to bring this topic forward while it is hot.

One more step back: the environment consists of VMware ESX hosts getting their storage via NFS from NetApp storage. When data is not written to a storage system in increments matching its block size, misalignment occurs. For NetApp, that block size is 4k. If I write 32k, that breaks nicely into 8 4k blocks. If I write 10k, that doesn’t, as it ends up being 2 and a half.

The misalignment problem has been well documented. VMware has a doc. NetApp has a doc. Other vendors (e.g. HP/3PAR and EMC) reference alignment problems in their docs. The problem is well known – and easily googled. With misalignment, more read & write operations are required because the underlying block is not aligned with the block that is going out to storage.

And yay! VMware addresses it in their VMFS-5 file system by making the blocks 1MB in size. That divides up nicely. And Microsoft, with Windows 2008, changed the default partition starting offset, which helped alignment.

So, all our problems are gone, right??

NO.

VMware introduced linked clones, which have a grain size of 512 bytes (see Cormac Hogan’s blog).

Once this issue is discovered, you end up reading more of Cormac’s blog, and then maybe some of Duncan Epping’s, and maybe some of Jack McLeod’s, not to mention Knowledge Base articles from both VMware & NetApp. The recommended solution is to use VAAI and let the NetApp handle clones on the backend. And these 512-byte writes are technically “partial” writes, not “misaligned” ones.
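
As a rough sketch of what “use VAAI” means in this NFS-on-7-mode setup (option and package names from memory – verify against current NetApp and VMware docs): enable vStorage support on the filer and confirm the NetApp NAS VAAI plugin is installed on each ESXi host.

    # on the filer
    options nfs.vstorage.enable on

    # on each ESXi host, check that the NetApp NAS plugin (VAAI for NFS) is present
    esxcli software vib list | grep -i netapp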

If everything is aligned, then a partial write requires 1 disk read operation (of 4k), an operation to wedge the 512-byte write into the right place within that 4k block, and 1 write back out. If misalignment exists, then it requires twice the I/O operations.

However, if you look at nfsstat -d, you’ll notice that there are a whole bunch of packets in the 0-511 byte range. Wait! I have partial writes, but those show up in the 512-1k range. What are all these?

At this point, I don’t entirely know (gee Jim, great piece – no answers), but according to VMware KB 1007909, VMware NFS is doing 84-byte (84!?) writes for NFS locking. Given the count in my 0-511 byte bucket, NFS locking can’t account for all of those – but what does this do to NetApp’s 4k blocks?
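
One way to chase this down is to zero the counters, let only the suspect workload run for a while, and re-read the histogram – a sketch, assuming nfsstat -z resets the NFS statistics on your Data ONTAP version:

    nfsstat -z        # zero the NFS statistics
                      # ...let the linked-clone workload run for a while...
    nfsstat -d        # re-read the per-size histogram and see which buckets grew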

Jim – 08/30/13
@itbycrayon

View Jim Surlow's profile on LinkedIn (I don’t accept general LinkedIn invites – but if you say you read my blog, it will change my mind)