Introduction to Next Gen Dell EMC Unity

Nearly a year after the original launch, Dell EMC has added next generation Unity storage systems to its existing Unity portfolio. The new models are Unity 350F, 450F, 550F and 650F. In a nutshell, this year's Dell EMC Unity release brings improvements to both hardware and software, i.e. Unity OE. This post examines the new hardware and the new features introduced this year.

Gen2 vs Gen1

In general, the second generation Unity systems have more cores per CPU and more memory compared with the hardware introduced last year. Does that translate to better performance? Yes, the maximum IOPS the new generation hardware can handle is slightly higher than the older generation. Another important point: Dell EMC is going all in on all flash, and this year's Unity product line introduces no new hybrid models (spinning disks + SSDs). The new Unity models 350F, 450F, 550F and 650F are all flash and do not support spinning disks. Here is a table that summarizes the improvements.

|  | Unity 350F | Unity 450F | Unity 550F | Unity 650F |
|---|---|---|---|---|
| Processor | Intel E5-2603v4, 6-core, 1.7 GHz | Intel E5-2630v4, 10-core, 2.2 GHz | Intel E5-2660v4, 14-core, 2.0 GHz | Intel E5-2680v4, 14-core, 2.4 GHz |
| Memory (per SP) | 48 GB (3x 16 GB DIMMs) | 64 GB (4x 16 GB DIMMs) | 128 GB (4x 32 GB DIMMs) | 256 GB (4x 64 GB DIMMs) |
| Minimum/maximum drives | 6/150 | 6/250 | 6/500 | 6/1000 |
| Maximum raw capacity* | 2.4 PB | 4.0 PB | 8.0 PB | 16.0 PB** |
| Max IO modules | 4 | 4 | 4 | 4 |
| Max LUN size | 256 TB | 256 TB | 256 TB | 256 TB |
| Max LUNs per array | 1,000 | 1,500 | 2,000 | 4,000 |
| Max file system size | 256 TB | 256 TB | 256 TB | 256 TB |

*Maximum raw capacity may vary.

**Unity 650F raw capacity is a 2x increase when compared with Unity 600F.

The look of the hardware remains the same; there is no change in the aesthetics. On the inside, however, much has changed with the introduction of Unity OE 4.2. Before we jump to what's new in the software: Dell EMC has also introduced an 80 drive DAE this year. This 80 drive DAE is compatible with all generations of hardware; it works with Gen1 hybrid and all flash arrays as well as Gen2 all flash arrays.

80 Drive DAE

Photo Credit: Dell EMC

The 80 drive DAE is a dense DAE that accommodates eighty 3.5″ drives; the drives used in this DAE cannot be used in the fifteen drive DAE. The new 80 drive DAE can connect to all generations of Unity hardware. The backend connection can be x4 lane or x8 lane SAS.

If you would like to read about the Unity DPE, other DAE types and the internal components of the Unity DPE, check out my post on Unity hardware architecture.

New features in Unity OE 4.2

The Unity OE 4.2 release is the major update of this year. Here is a list of the most notable new features:

Dynamic Pools
Thin Clones
Enhancements to Snapshots
Improvements to system limits
Inline Compression for File
SMB migration from VNX to Unity

I will be publishing separate posts detailing the most important features of Unity OE release 4.2. Stay tuned!

Disclosure: I work for Dell EMC and this is not a promoted post.

Unity Architecture – Part II

Is Unity really a re-branded VNX2/VNX? Well, let's find out. In this post, we take a closer look at the functions of Unity OE. This post is part II of the Unity Architecture series; if you have not read part I, click here to read it.

Unity OE

The Operating System that runs on Unity hardware is called Unity OE (Operating Environment). Unity OE is based on SUSE Linux Enterprise Server (SLES). Unity provides block and file access to hosts and clients. Unity is an Asymmetric Active-Active array and is ALUA aware.

Multicore Cache

Each SP has a certain amount of cache. In older storage systems, the cache is usually divided into a read cache and a write cache, most of the time as a static partition that does not change with the IO being served. Unity's Multicore Cache is dynamic: the amount of read and write cache is adjusted according to the read and write operations. The main aim of this approach is to minimize forced flushing when the high watermark level of the cache is reached. An additional layer of SSD cache can be added to a hybrid pool by leveraging FAST Cache technology.

Unity Block and File Storage

Using the FC and iSCSI protocols, Unity provides hosts block access to storage. Without any special hardware, Unity also provides file access via (virtual) NAS servers created in Unity OE. The most fundamental part of Unity storage is the unified storage pool: Unity allows all types of storage resources, such as block LUNs, VVols, and NAS file systems, to be placed in the same storage pool. The following diagram shows various storage resources residing in the same storage pool.

Unity storage pools

Storage Pools

The disks residing in the DPE and DAEs can be grouped together to form storage pools. A pool can contain three tiers:

  • Extreme performance tier (SSD)
  • Performance tier (SAS)
  • Capacity tier (NL-SAS)

RAID protection is applied at the tier level, not at the pool level. In Unity All Flash systems, a pool contains only the extreme performance tier; such pools are called all flash pools. In hybrid systems, it is possible to create an all flash pool with only SSDs and later expand it with SAS or NL-SAS disks. Each tier in the pool can have a different RAID level: for example, the extreme performance tier can use RAID 10, the performance tier RAID 5 and the capacity tier RAID 6. The same tier, however, cannot mix RAID types. A drive that is part of one storage pool cannot be part of another storage pool.
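To make the per-tier RAID rule concrete, here is a rough sketch of how usable capacity falls out of each tier's RAID choice. The drive counts, sizes and RAID widths below are made-up examples for illustration, not figures from any Unity spec:

```python
# Illustrative sketch: usable capacity per tier given its RAID layout.
# Drive counts, sizes, and RAID widths are made-up examples, not Unity specs.

RAID_DATA_FRACTION = {
    "RAID 10 (1+1)": 1 / 2,  # mirrored pairs: half the drives hold data
    "RAID 5 (4+1)": 4 / 5,   # 4 data drives per 5-drive group
    "RAID 6 (6+2)": 6 / 8,   # 6 data drives per 8-drive group
}

def tier_usable_tb(drive_count, drive_tb, raid):
    """Rough usable capacity of one tier; ignores pool metadata overhead."""
    return drive_count * drive_tb * RAID_DATA_FRACTION[raid]

# One RAID level per tier, as Unity requires:
pool = [
    ("Extreme performance (SSD)", 4, 1.6, "RAID 10 (1+1)"),
    ("Performance (SAS)", 10, 1.2, "RAID 5 (4+1)"),
    ("Capacity (NL-SAS)", 16, 4.0, "RAID 6 (6+2)"),
]
for name, count, size, raid in pool:
    print(f"{name}: {tier_usable_tb(count, size, raid):.1f} TB usable")
```

The point of the sketch is simply that the data/parity fraction is fixed per tier, because a tier has exactly one RAID type.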


FAST VP

FAST stands for Fully Automated Storage Tiering. Configuring pools is the first step in provisioning storage in Unity. Unity uses FAST VP (Fully Automated Storage Tiering for Virtual Pools) algorithms to move hot data to SSD and cold data to NL-SAS; the policy can be adjusted to business needs via Unisphere. The available tiering policies are:

  • Highest Available Tier
  • Auto-Tier
  • Start High then Auto-Tier (Default/Recommended)
  • Lowest Available Tier

When FAST VP is enabled, data is spread across the pool in 256 MB slices. FAST VP is enabled individually per storage resource, such as a LUN or datastore.
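As a quick sanity check on the slice math, the 256 MB granularity means even a modest LUN is tracked as thousands of slices:

```python
SLICE_MB = 256  # FAST VP tracking/relocation granularity

def slice_count(resource_gb):
    """Number of 256 MB slices FAST VP tracks for a storage resource."""
    return (resource_gb * 1024) // SLICE_MB

print(slice_count(500))   # a 500 GB LUN is tracked as 2000 slices
print(slice_count(2048))  # a 2 TB datastore: 8192 slices
```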

Highest Available tier

With the Highest Available Tier policy of FAST VP, new data slices are provisioned from the extreme performance tier; if that tier is full, new slices are provisioned from the next tier down. With this policy we can expect superior performance and low latency, as all data is kept in the extreme performance tier and the next tier is used only when it fills up. During the next relocation schedule, the system tries to place slices in the highest tier only, and these hot slices take precedence over slices of any other tiering policy. (Hot denotes frequently accessed slices; cold denotes less frequently accessed slices.)


Auto-Tier

The Auto-Tier policy is very similar to the Highest Available Tier policy, with two primary differences:

  • Even if a slice with the Auto-Tier policy is more active than a slice with the Highest Available Tier policy, the Highest Available Tier slice takes precedence during relocation.
  • When a new storage resource with the Auto-Tier policy is created, slices are allocated from all tiers depending on the usage of each tier. If more free capacity is available in the capacity tier, slices are initially allocated from the capacity tier.

Start high then auto-tier

Dell EMC recommends using this policy on storage resources, and it is the default. When new slices are allocated, this policy allocates them from the highest performance tier; later, during the relocation schedule, slices are moved down if they are not frequently accessed. We can expect good initial performance with this policy, and effective capacity utilization later as cold slices are moved down. It works exactly like the Auto-Tier policy; the only difference is that slices are initially allocated from the highest performance tier.

Lowest available tier

All slices of a storage resource always reside in the lowest tier, i.e. the capacity tier. If the lowest tier is full, all slices with this policy are compared and the ones with the lowest activity reside in the lowest tier.

The following table is an excerpt from the "EMC Unity: FAST Technology Overview" white paper and summarizes the behavior of each tiering policy.

| Tiering Policy | Initial Tier Placement | Description |
|---|---|---|
| Highest Available Tier | Highest Available Tier | Initial data placement and subsequent data relocations set to the highest performing tier of drives with available space |
| Auto-Tier | Optimized for Pool Performance | Initial data placement optimizes pool capacity, then relocates slices to different tiers based on the activity levels of the slices |
| Start High then Auto-Tier (Default) | Highest Available Tier | Initial data placed on slices from the highest tier with available space, then relocates data based on performance statistics and slice activity |
| Lowest Available Tier | Lowest Available Tier | Initial data placement and subsequent relocations preferred on the lowest tier with available space |
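The initial placement rules in the table can be sketched roughly as follows. The tier names and free-space logic here are simplified illustrations of the policy semantics, not Unity's actual algorithm:

```python
# Illustrative sketch of *initial* slice placement per FAST VP tiering policy.
# Tier names and the free-space logic are simplified, not Unity's real code.

TIERS = ["extreme_performance", "performance", "capacity"]  # fastest first

def initial_tier(policy, free_slices):
    """Pick the tier for a new slice. free_slices maps tier -> free slice count."""
    if policy in ("highest_available", "start_high_then_auto"):
        for tier in TIERS:                    # highest tier with space wins
            if free_slices.get(tier, 0) > 0:
                return tier
    elif policy == "lowest_available":
        for tier in reversed(TIERS):          # lowest tier with space wins
            if free_slices.get(tier, 0) > 0:
                return tier
    elif policy == "auto":
        # Optimized for pool capacity: favour the tier with the most free space
        return max(TIERS, key=lambda t: free_slices.get(t, 0))
    raise ValueError(f"no space for policy {policy!r}")

free = {"extreme_performance": 0, "performance": 10, "capacity": 500}
print(initial_tier("start_high_then_auto", free))  # SSD tier is full, use next tier
print(initial_tier("auto", free))                  # most free space: capacity tier
```

Note that Start High then Auto-Tier and Highest Available Tier place data identically at first; they differ only in how relocation treats the slices afterwards.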

The following picture shows hot and cold slices before and after relocation.


Expanding Storage Pool with additional drives

When expanding a pool with additional drives, it is not mandatory to use the same stripe width. For example, if the existing tier is configured as RAID 5 4+1 (i.e. the tier contains sets of 5 drives), we can expand the pool in sets of RAID 5 4+1, 8+1 or 12+1. In VNX2 it was a best practice to keep the original drive count (5, in our example) while expanding; this is no longer the case. Unity allows expanding a tier in a pool with any supported stripe width.
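That expansion rule can be sketched in a few lines, using the RAID 5 widths from the example above (the function name and set of widths are mine, purely illustrative):

```python
# Sketch: Unity lets you expand a RAID 5 tier with any supported stripe width,
# not just the original one. Widths below are from the example in the post.

SUPPORTED_RAID5_WIDTHS = {(4, 1), (8, 1), (12, 1)}  # (data, parity) drives per set

def valid_expansion(drive_sets):
    """Check every drive set added to a RAID 5 tier uses a supported width."""
    return all(width in SUPPORTED_RAID5_WIDTHS for width in drive_sets)

# Original tier built as RAID 5 (4+1); adding an 8+1 set is allowed in Unity
print(valid_expansion([(4, 1), (8, 1)]))   # True
print(valid_expansion([(4, 1), (6, 1)]))   # False: 6+1 is not a supported width
```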

System Drives

Unity OE occupies the first four disks in the DPE; these are called system drives. Unity OE does not occupy the entire drive capacity, only 107 GB per drive. System drives are allowed to take part in storage pools along with non-system drives.

FAST Cache

FAST Cache technology extends the existing storage processor cache by utilizing high-speed SSDs. FAST Cache applies only to hybrid pools. Frequently accessed data (in 64 KB chunks) on SAS and NL-SAS drives is copied to the FAST Cache tier. The data is only copied, not moved; it still exists on the drives. Beyond frequently accessed data, the algorithm also copies data that is likely to be read next. After copying, the FAST Cache memory map is updated.

How a read operation is performed in Unity

For an incoming IO, the system cache (DRAM) is checked first. If the data resides in the system cache, the requested data is sent to the host, completing the IO request. On a cache miss, i.e. when the data is not present in the system cache, the FAST Cache memory map is checked (FAST Cache must be enabled and configured for the memory map to exist). On a FAST Cache hit, the data is read from FAST Cache and sent to the host; this improves system throughput and reduces response time. If both the system cache and FAST Cache miss, the data is read from the drives, copied to the system cache and then sent to the host.
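The read path described above can be modeled in a few lines. This is a simplified illustration of the lookup order, not Unity's actual code:

```python
# Simplified model of the Unity read path: system cache (DRAM) first,
# then the FAST Cache memory map, then the backing drives.

def read_block(addr, dram_cache, fast_cache_map, drives):
    """Return (data, where_it_was_served_from)."""
    if addr in dram_cache:                         # system cache hit
        return dram_cache[addr], "dram"
    if fast_cache_map and addr in fast_cache_map:  # FAST Cache hit (if configured)
        return fast_cache_map[addr], "fast_cache"
    data = drives[addr]                            # miss everywhere: read drive,
    dram_cache[addr] = data                        # copy into system cache,
    return data, "drive"                           # then serve the host

dram = {}
fast = {100: b"hot"}
disk = {100: b"hot", 200: b"cold"}

print(read_block(100, dram, fast, disk))  # served from FAST Cache
print(read_block(200, dram, fast, disk))  # miss: read from drive, cached in DRAM
print(read_block(200, dram, fast, disk))  # now a DRAM hit
```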

How a write operation is performed in Unity

For all write requests, the data is first written to the system cache and an acknowledgment is sent to the host. During flushing, the data is written to the pool. If for some reason the system write cache is disabled, the data is written to FAST Cache (if present) and then to the pool.
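The write path above can be sketched similarly; again, this is a simplified illustration of the decision order, not Unity's actual code:

```python
# Sketch of the write path: land the write in system cache and acknowledge the
# host; fall back to FAST Cache (then pool) only if the write cache is disabled.

def handle_write(addr, data, state):
    if state["write_cache_enabled"]:
        state["dram_cache"][addr] = data   # cached now, flushed to pool later
        target = "dram"
    elif state["fast_cache_present"]:
        state["fast_cache"][addr] = data   # write cache off: via FAST Cache
        state["pool"][addr] = data         # ...and then to the pool
        target = "fast_cache+pool"
    else:
        state["pool"][addr] = data         # no cache at all: straight to pool
        target = "pool"
    return "ack", target                   # host gets the ack either way

state = {"write_cache_enabled": True, "fast_cache_present": True,
         "dram_cache": {}, "fast_cache": {}, "pool": {}}
print(handle_write(1, b"x", state))  # ('ack', 'dram')
```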

Inline Compression

Unity offers compression for block LUNs and VMware VMFS datastores; the feature was added in the Unity OE 4.1 release. Compression can be enabled on a storage resource at creation time or later, but only for resources in all flash pools. When a host sends data, it is first placed in the system cache and an acknowledgment is sent to the host; compression happens inline between the system cache and the all flash pool. Compression is not available for VVols or file storage resources.

Management tools of Unity

Unity can be managed via Unisphere (HTML5 GUI), Unisphere CLI and a REST API (management functions only; no data access such as S3).
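As a taste of the REST API, here is a hedged sketch that builds a request using only the Python standard library. The endpoint path and `X-EMC-REST-CLIENT` header follow Unity REST conventions, but the address and credentials are made up; verify the details against your array's API reference before relying on them. The request is only constructed here, never sent:

```python
# Hedged sketch of a Unity REST API call built with the standard library.
# Hostname and credentials are hypothetical; path/header follow Unity REST
# conventions (X-EMC-REST-CLIENT header, /api/types/<type>/instances).
import base64
import urllib.request

UNISPHERE = "https://unity.example.com"   # hypothetical management address

def build_request(path, user="admin", password="Password123!"):
    req = urllib.request.Request(UNISPHERE + path)
    req.add_header("X-EMC-REST-CLIENT", "true")   # marks this as a REST client
    req.add_header("Accept", "application/json")
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    return req

# List LUNs (constructed only; sending it would need a live array):
req = build_request("/api/types/lun/instances?fields=name,sizeTotal")
print(req.full_url)
print(req.get_header("X-emc-rest-client"))  # urllib normalizes header case
```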

Replication, Snapshots and other protection

Unity supports sync and async replication of block storage resources such as LUNs and VMware datastores; file storage resources can be protected by async replication. Unity also supports snapshots natively, covering block LUNs, VMFS datastores, file systems and VMware NFS datastores. Data At Rest Encryption can be enabled on Unity; when enabled, all data in Unity is encrypted. Unity also integrates well with other Dell EMC products such as RecoverPoint for DVR-like recovery.

This brings us to the end of the post. What we discussed is a high-level overview of Unity OE and its functions. I hope you found it helpful. Deep dive posts on Unity features will be published soon.

Disclosure: I work for Dell EMC and this is not a promoted post.

Unity Architecture – Part I

Last year, Dell EMC announced the Unity midrange storage array at EMC World. Unity is based on the VNXe architecture and does not replace the highest VNX2 model, the VNX8000. This post takes a closer look at Unity to understand its hardware components, design, and software. It is a two-part series: part I is all about Unity hardware, and part II talks about the software architecture of Unity.

There are three variants of Unity: Unity Hybrid, Unity All Flash and Unity VSA. The models are Unity 300/300F, 400/400F, 500/500F and 600/600F; models ending in "F" are all flash (SSDs only) and the others are hybrid storage systems (flash + spinning disks). Unity VSA is a virtual appliance that can be deployed on vSphere. Now let us look at some important specifications of these models.


The specifications listed here are for a system running Unity OE 4.1, aka Falcon.

|  | Unity 300/300F | Unity 400/400F | Unity 500/500F | Unity 600/600F |
|---|---|---|---|---|
| Processor | 2x Intel 6-core, 1.6 GHz | 2x Intel 8-core, 2.4 GHz | 2x Intel 10-core, 2.6 GHz | 2x Intel 12-core, 2.5 GHz |
| Memory (both SPs) | 48 GB | 96 GB | 128 GB | 256 GB |
| Minimum/maximum drives | 5/150 | 5/250 | 5/500 | 5/1000 |
| Maximum raw capacity* | 2.34 PB | 3.91 PB | 7.81 PB | 9.77 PB |
| Max IO modules | 4 | 4 | 4 | 4 |
| Max number of pools | 20 | 30 | 40 | 100 |
| Max LUN size | 256 TB | 256 TB | 256 TB | 256 TB |
| Max file system size | 64 TB | 64 TB | 64 TB | 64 TB |
| Max LUNs per array | 1,000 | 1,500 | 2,000 | 6,000 |

*Maximum raw capacity may vary.

Supported disks

Unity Hybrid

Spinning disk drives

Unity All Flash

Solid state drives used in Unity are of eMLC and TLC type; some of them are 1 WPD (write per day) disks.

Disk Processor Enclosure (DPE)

The DPE holds the storage processors (SPs), IO modules and disks. Two variants of DPE are available:

  • 25 Drive DPE that can hold 2.5″ disks (Available for hybrid and all flash array)
  • 15 Drive DPE that can hold 3.5″ disks (Only available for hybrid array)

As seen in the table, the SPs in each model have a different CPU model and a different amount of memory. Both types of DPE occupy 2U when mounted in a rack. The first four drives in the DPE are called system drives; they contain Unity OE (Operating Environment), and the remaining space on these drives can be used for storage pools. The minimum number of disks required to initialize the system is 5. On the rear side of the DPE we have: 2x storage processors (1 & 2 in the image), 4x onboard converged network ports (optical/Twinax) (3), 4x onboard 10 GbE Base-T RJ45 ports (4), 2x power supplies (5), IO module slots (6), 4x SAS ports for backend connection (7), and a management port and a service port (8). Here is a picture of the DPE:

Disk Processor Enclosure

The onboard Converged Network Adapter (CNA) ports can be configured for 16/8/4/2 Gbps Fibre Channel SFPs (multimode and single mode) or 10 GbE optical using SFP+ and Twinax. The other two onboard ports on each SP are 10 GbE Base-T. All of these onboard ports can be configured for block (FC/iSCSI) or file IO (NFS/CIFS). Each SP also has a management port (to access Unisphere) and a service port (for service or engineering use).

Each SP can have two IO modules installed to expand front-end host connectivity. The IO module installed in a slot on SPA must match the one in the respective slot on SPB; there cannot be a mismatch. Unity supports the following IO modules:

  • 4 port 16Gb Fibre Channel
  • 10GbE Base-T
  • 1GbE Base-T
  • 2 port 10GbE Optical (SFP+ and Twinax)
  • 4 port 10GbE Optical (SFP+ and Twinax)
  • 12Gb SAS for backend expansion (Only for Unity 500 and 600)
Unity supports active Twinax cables only; passive Twinax is not supported.

Protection space for cache (No Vault)

In the case of a storage processor failure, the cache contents are dumped to an M.2 SSD that resides inside each SP. If the cabinet loses power, each Unity SP contains an inbuilt battery backup unit (BBU) that can power the SP long enough to dump the cache contents to the M.2 SSD. Cache content is restored to the respective SP's cache when power is restored or the SP is replaced. The M.2 SSD also contains the Unity OE boot image.

Disk Array Enclosure (DAE)

The DAE holds drives, and the number of DAEs a model supports varies. Please refer to the specification table earlier in the post for the maximum number of drives each system supports. There are two variants of DAE:

  • 25 Drive DAE that can hold 2.5″ disks (2U)
  • 15 Drive DAE that can hold 3.5″ disks (3U)

On the rear side, each DAE has 4 SAS ports (marked A & B) for DPE-to-DAE and DAE-to-DAE connections; the ports use mini-SAS HD connectors. Here are images of the 15 and 25 drive DAEs:

15 drive DAE
25 drive DAE

That’s all about Unity hardware. In the next post, we will take a closer look at software in Unity. Stay tuned! Click here to read part II.

Disclosure: I work for Dell EMC and this is not a promoted post.

Analyzing Top 5 Storage and Data Center tech predictions made in 2016

Last January I published "Top 5 Storage and Data Center tech predictions for 2016", predicting the possible happenings of 2016 in the storage world. Let's revisit those predictions and analyze how much of it has happened.

Magnetic storage disk numbers will decline in the enterprise space.

Dell EMC declared 2016 the "Year of All Flash" (YoAF) on February 29th, 2016, a month after that post was written. Throughout 2016, I witnessed the steps Dell EMC took to make it so. The company's biggest announcements in 2016 were the Unity All Flash and VMAX All Flash arrays. It also announced all flash versions of existing products such as Isilon, VxRAIL, and others, and the situation at other storage vendors was similar. This indirectly drove enterprise SSD shipments up: AnandTech reports Q2 2016 SSD shipments up 41.2% YoY, after a 32.7% rise in Q1. A report published in November 2016 by TRENDFOCUS shows a great increase in SSD adoption. Following is an excerpt from the TRENDFOCUS report:

“In the enterprise SSD world, PCIe units leaped a spectacular 101% from CQ2 ‘16, mainly due to emerging demand from hyperscale customers. Both SATA and SAS posted growth with 6% and 14% increases”

Another report published by TRENDFOCUS in August 2016 says,

“On the enterprise SSD side, although SATA SSDs still dominate both hyperscale and system OEMs from a unit volume perspective (3.2 of the 4.07 million units shipped in CQ2 ’16), SAS SSDs saw a tremendous jump in capacity shipped in CQ2 ’16. Unit volume rose only 5.4%, but exabytes shipped increased a whopping 100.5% from the previous quarter. The move to higher capacities (like 3.84TB and even a few 15TB) is real. The storage networking companies taking SAS SSD solutions have embraced these higher capacity devices (helped by declining prices) at a surprising rate.” 

In summary, SATA SSDs shipped more in Q2, and in Q3 PCIe and SAS SSDs overtook SATA in growth. This led to a serious NAND drain; El Reg reported it with a funny headline: "What the Dell? NAND flash drought hits Texan monster – sources". Three times during the year, TRENDFOCUS reported on the NAND shortage, which caused SSD prices to rise in Q4 2016.

Enterprise flash drives with more storage space will appear and the cost will come down.

Enterprise SSD heavyweights such as Samsung, Toshiba and others introduced larger capacity TLC drives in 2016. Samsung's 15.36 TB TLC NAND device is the largest I have seen in production in a storage array. There were other TLC drives with capacities ranging from 3 TB to 15 TB; most of these TLC SSDs are 1 WPD (write per day) devices. However, 3D XPoint is still not seen in enterprise storage arrays. Prices did come down at the beginning of 2016, but certain NAND devices went up in price due to demand.

Adoption of NVMe and fabrics will kick start in the enterprise space

While NVMe is widely used in the high-performance computing space, it has not made its way into most datacenters yet, because NVMe over Fabrics (NVMe-oF) has not gained popularity as quickly as expected. Dennis Martin, president and founder of Demartek LLC, advises that nonvolatile memory express (NVMe) is coming to a storage system near you soon, and IT pros need to become familiar with the protocol. With DSSD, Dell EMC entered this space, and it will be very interesting to watch the advancements here. We are seeing companies adopt developer-centric infrastructure design, and these new protocols may become popular with such designs. Interestingly, it is the PC world that has embraced NVMe sooner than the enterprise: Samsung, Toshiba, MyDigitalSSD, Plextor, and others sell ultra-fast NVMe SSDs, currently the fastest SSDs being sold.

Software defined storage solutions will grow as the cloud adoption increases

There is serious competition between Dell EMC ScaleIO and VMware VSAN; even though both belong to Dell Technologies, these two products are eating away at the SDS competition. SDS growth is mainly fueled by the rise of cloud technology such as OpenStack and by HCI appliances. Dell EMC beefed up its HCI portfolio in 2016 with VxRAIL and VxRACK: VxRAIL runs entirely on VMware VSAN, VxRACK runs on ScaleIO or VSAN, and another iteration of VxRACK runs OpenStack with ScaleIO. These advancements fueled SDS growth. One market report predicts the global SDS market will grow at a CAGR of 31.62% during the period 2016-2020.

Enterprises will realize that cloud is not an alternative to traditional data center.

IMO, 2015 was the most important year for OpenStack: everybody knew what OpenStack was, and some conscious customers went all in on it. Snapdeal made news in 2016: "Snapdeal launches its own cloud – Snapdeal Cirrus". During mid-2015, Snapdeal realized that public cloud stops being cost effective after a certain scale, so it went on to build its own hybrid cloud, Cirrus. Cost is not the only factor that drove them down the hybrid cloud lane; the need for massive compute power to fuel flash sales, where millions of people buy at once, was also a reason. Cirrus has more than 100,000 cores, 500 TB of memory and 16 PB of block and object storage built entirely on Ceph, uses SDN, spans 3 data centers, and so on. Infrastructure like this makes sense for a company that runs each of its tech stacks at large scale. Developer-centric IT will only look at cloud-like infrastructure because it makes sense for them. As predicted earlier, traditional storage and compute tech will continue to exist in its own space.

How to configure NFS in EMC Elastic Cloud Storage (ECS) Appliance

The Elastic Cloud Storage (ECS) appliance from EMC is a storage array that allows multiprotocol access to storage. A bucket can be configured for simultaneous access via Amazon S3, Swift and NFS; likewise, it can be accessed via HDFS. This post explains how to configure NFS in ECS. It took me a long time to figure out the steps, since there was no clear documentation for configuring NFS in ECS. In this example, we will mount a bucket created in ECS as an NFS share on a Linux host running CentOS 7.

Before creating a bucket, we should create a user in the ECS portal. Log in to the ECS portal by navigating to any one of the public IPs set on the nodes. After login you will see the Dashboard; expand Manage and then click Users. It is assumed that you have already defined storage pools, a virtual data center, a replication group and a namespace.

Step 1:

In the user management window, the Object Users tab is selected by default. Click New Object User, type a suitable name for the new user, and select the relevant namespace. Click Next to Add Passwords. Under S3, click generate and add password. Generating an S3 password is optional; it is needed only if you plan to allow multi-protocol access. The same applies for Swift and CAS.

Once done, the user that you have created should look like the one in the following screenshot.

Create User

Step 2:

The next step is to create a bucket where the data is going to be stored and retrieved. Click Buckets and then New Bucket on the Bucket Management page. Type a suitable bucket name, and select the relevant namespace and replication group. Then, in the Bucket Owner field, type the user we just created in step 1; in my case, it is az_nfs. Scroll down and click Enabled for File System. Note that if the bucket is created without enabling file system access, it is not possible to enable it later.

Before continuing further, get the Linux group name that the Linux user is part of.

In Default Bucket Group field type the group name of the Linux user. Once done it should look like the following,

Create Bucket
Create Bucket (contd.)

Step 3:

In this step, we map the ECS object user to the Linux user and group. Before continuing, get the Linux user ID. Once you know it, click File, then the User / Group Mapping tab, and then New User / Group Mapping. Type the ECS object user name and the Linux user ID as shown in the screenshot, then click Save.

User Mapping

Step 4:

After mapping the ECS object user to the Linux user, we can create an NFS export of the bucket. While on the same page, click the Exports tab and then New Exports. In the Bucket field, select the bucket we created; the export path will change automatically. Following is a screenshot of that:

New File Export

Now click Add, next to Export Host Options; in this step we add the Linux host. The following screenshot shows what needs to be selected: type the IP address of the Linux host in the Export Host field, and type the ECS object username in the AnonUser, AnonGroup and RootSquash fields. Then click Add.

Add Export Host

Final Step

Now it's time to mount and test the share on the Linux host. Make a note of the export path and establish a terminal session to the Linux host. The following Linux command mounts the share:

sudo mount -t nfs -o vers=3,sec=sys,proto=tcp \
    x.x.x.x:/namespace/az_nfs_bucket/ /home/az/Desktop/ecs_share

After this, the share will be mounted on the ecs_share directory and you will be able to create files and folders. Having challenges? Please feel free to comment.