So we have an archive enclosure with 40TB usable, made up of 4TB 7k disks. The problem was that it was getting full.

Now if you know how Compellent works, you will know that all writes come in at RAID10 or RAID10-DM.

Since this archive enclosure is its own pool, and these 7k disks are the only disks in that pool, they are classed as Tier 1.

So writes come in at RAID10-DM (Dual Mirror). Compellent uses DM when you use large disks: it helps mitigate the risk of a larger disk failing while still allowing the use of RAID10. Normal RAID10 has a write penalty of 2 because data is written twice; with Dual Mirror the penalty is 3 because data is written three times.
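To make the write penalty concrete, here is a minimal sketch (generic RAID arithmetic, not anything Compellent-specific): every host write is multiplied by the number of copies the RAID level puts on the back-end disks.

```python
# Back-end write amplification per RAID level, as described above.
WRITE_PENALTY = {
    "RAID10": 2,      # data written twice (two mirror copies)
    "RAID10-DM": 3,   # data written three times (dual mirror)
}

def backend_writes(host_write_iops: int, raid_level: str) -> int:
    """Back-end write IOPS generated by a given host write load."""
    return host_write_iops * WRITE_PENALTY[raid_level]

print(backend_writes(1000, "RAID10"))     # 2000
print(backend_writes(1000, "RAID10-DM"))  # 3000
```

So for the same host workload, RAID10-DM pushes 50% more writes at the spindles than plain RAID10, on top of eating more capacity.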

As you can see from the screenshot below:


It was pretty full. The issue with RAID10 is that it consumes a load of space. Since this is an archive tier, write speed is of little importance, so I decided to set the volumes on this set of disks to use RAID6-10. Once again, with large disks Compellent only gives you the option of RAID6-10, as it helps mitigate the long rebuild times when a larger disk fails.
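The capacity difference between the two layouts is the whole point of the change. A quick sketch, assuming Compellent's RAID6-10 means dual parity across 10 disks (8 data + 2 parity); the raw-capacity figure below is illustrative, not taken from this array:

```python
def usable_tb(raw_tb: float, raid_level: str) -> float:
    """Usable capacity of a given raw capacity under each RAID layout."""
    efficiency = {
        "RAID10": 1 / 2,      # two copies
        "RAID10-DM": 1 / 3,   # three copies
        "RAID6-10": 8 / 10,   # 8 data + 2 parity per 10-disk stripe
    }
    return raw_tb * efficiency[raid_level]

raw = 12 * 4.0  # e.g. twelve 4TB 7k disks (hypothetical figure)
print(round(usable_tb(raw, "RAID10-DM"), 1))  # 16.0
print(round(usable_tb(raw, "RAID6-10"), 1))   # 38.4
```

Same disks, well over double the usable space once the data sits in RAID6-10.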

To do this you need to adjust the storage profile used by these volumes, so I went into Enterprise Manager and created a new storage profile:


and added the volumes to it. Then all you can do is wait. Data Progression, which runs every day from 7pm to 7am by default, will then start moving the data out of RAID10-DM into RAID6-10. This will take some time, as you can see from the screenshots below:







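How long "some time" is depends on how much data has to be restriped and how fast the array works through it inside the nightly window. A back-of-envelope sketch; the data size and restripe rate below are pure assumptions, not measurements from this system:

```python
import math

def progression_nights(data_tb: float, rate_mb_s: float, window_hours: float = 12) -> int:
    """Nights needed to restripe data_tb at a sustained rate within the
    default 7pm-7am Data Progression window."""
    tb_per_night = rate_mb_s * 3600 * window_hours / 1_000_000
    return math.ceil(data_tb / tb_per_night)

# e.g. 20TB to move at a sustained 100MB/s -> about 5 nightly runs
print(progression_nights(20, 100))  # 5
```

In practice the rate varies with array load, so treat this as a rough lower bound rather than a schedule.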
If you want the volumes to take on the new profile right away, you have to do a copy/mirror/migrate (CMM), so that as the data is moved to the new volume it is placed into the correct storage layer/profile.

I did consider doing a migrate (I have used it before); migrate is a great feature. You create a new volume that's the same size as the current one, select migrate on the original volume, select the new volume you want to migrate to, and Compellent moves the data to the new volume on the back end, applying the new volume's storage profile.

This is totally transparent to the front-end hosts/servers, and when complete the new volume inherits all the mappings, LUN IDs etc., so no one should notice the difference. Obviously there will be some latency during the process; there is no way round that, but apart from it, the operation is totally transparent.

The reason I couldn't do this was that space was already limited. Compellent should, however, adjust the RAID6 layers, growing them while shrinking the RAID10-DM allocation, as and when needed.
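This is the catch with CMM on a nearly full pool: the destination volume is the same size as the source, so the pool has to hold a full second copy of the data at the destination's RAID overhead until the migrate finishes. A minimal sketch of that space check, assuming a RAID6-10 destination (8/10 efficiency):

```python
def migrate_possible(volume_tb: float, free_tb: float, dest_efficiency: float = 8 / 10) -> bool:
    """True if the pool has enough raw free space to hold a same-size
    destination volume at the destination RAID level's efficiency."""
    raw_needed = volume_tb / dest_efficiency
    return free_tb >= raw_needed

print(migrate_possible(10, 15))  # True: a 10TB volume needs 12.5TB raw at RAID6-10
print(migrate_possible(10, 5))   # False: not enough raw space for the second copy
```

On this nearly full archive pool the check fails, which is exactly why waiting for Data Progression was the only option.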

“Yes, you can change the Storage Profile of a volume at any point in time while it's online. Any new write or a change to an existing block will take on the new write characteristics immediately. The remainder of the volume will start to change its profile during the next Data Progression cycle. (If you want the whole thing to take on the new characteristics, then you need to do a CMM: copy, mirror, migrate.) The only thing to be concerned about is having enough capacity in the new tier, and IO ability in that tier to take on the workload.”

As you can see from the screenshots, the RAID10-DM disk space is still allocated, and I would rather that disk space be allocated to RAID6-10, making much better use of the available disks. This is something only Compellent CoPilot support can do, as they have to run trimming on the back end.

What they do is de-allocate the space from RAID10-DM and mark it as free; the controller can then allocate it to RAID6-10 as needed. Since the storage profiles are set to use RAID6, the controller will dynamically expand RAID6-10 to accommodate as it fills up.
