Introduction to Next Gen Dell EMC Unity

Nearly a year after the original launch, Dell EMC has added next-generation Unity storage systems to its existing Unity portfolio. The new models are Unity 350F, 450F, 550F and 650F. In a nutshell, this year's Dell EMC Unity release brings improvements to both hardware and software, i.e. improvements to Unity OE. This post examines the new hardware and the new features introduced this year.

Gen2 vs Gen1

In general, the second-generation Unity systems have more cores per CPU and more memory than the hardware introduced last year. Does that translate into better performance? Yes; the maximum IOPS the new generation can handle is slightly higher than the previous generation's. Another important thing to note is that Dell EMC is going all in on all-flash: this year there are no new hybrid models (spinning disks + SSDs) in the Unity product line. The new Unity models 350F, 450F, 550F and 650F are all-flash and do not support spinning disks. Here is a table that summarizes the improvements.

| | Unity 350F | Unity 450F | Unity 550F | Unity 650F |
|---|---|---|---|---|
| Processor | Intel E5-2603v4, 6-core, 1.7 GHz | Intel E5-2630v4, 10-core, 2.2 GHz | Intel E5-2660v4, 14-core, 2.0 GHz | Intel E5-2680v4, 14-core, 2.4 GHz |
| Memory (per SP) | 48 GB (3x 16 GB DIMMs) | 64 GB (4x 16 GB DIMMs) | 128 GB (4x 32 GB DIMMs) | 256 GB (4x 64 GB DIMMs) |
| Minimum/Maximum drives | 6/150 | 6/250 | 6/500 | 6/1000 |
| Maximum raw capacity* | 2.4 PB | 4.0 PB | 8.0 PB | 16.0 PB** |
| Max IO modules | 4 | 4 | 4 | 4 |
| Max LUN size | 256 TB | 256 TB | 256 TB | 256 TB |
| Max LUNs per array | 1,000 | 1,500 | 2,000 | 4,000 |
| Max file system size | 256 TB | 256 TB | 256 TB | 256 TB |

*Maximum raw capacity may vary.

**Unity 650F raw capacity is a 2x increase when compared with Unity 600F.

The look of the hardware remains the same; there is no change in the aesthetics. On the inside, however, much has changed with the introduction of Unity OE 4.2. Before we jump to what's new in the software, note that Dell EMC has also introduced an 80-drive DAE this year. This DAE is compatible with hardware of both generations: it works with Gen1 hybrid and all-flash arrays as well as Gen2 all-flash arrays.

80 Drive DAE

Photo Credit: Dell EMC

The 80-drive DAE is a dense enclosure that accommodates eighty 3.5″ drives; the drives used in this DAE cannot be used in the 15-drive DAE. The new 80-drive DAE can be connected to Unity hardware of either generation, and the backend connection can use x4-lane or x8-lane SAS.

If you would like to read about the Unity DPE, the other DAE types and the internal components of the Unity DPE, check out my post on Unity hardware architecture.

New features in Unity OE 4.2

Unity OE 4.2 is this year's major software update. Here is a list of the most notable new features:

  • Dynamic Pools
  • Thin Clones
  • Enhancements to Snapshots
  • Improvements to system limits
  • Inline Compression for File
  • SMB migration from VNX to Unity

I will be publishing separate posts detailing the most important features of Unity OE release 4.2. Stay tuned!

Disclosure: I work for Dell EMC and this is not a promoted post.

Unity Architecture – Part II

Is Unity really a re-branded VNX2/VNX? Well, let's find out. In this post, we will take a closer look at the functions of Unity OE. This is part II of the Unity Architecture series; if you have not read part I, click here to read it first.

Unity OE

The Operating System that runs on Unity hardware is called Unity OE (Operating Environment). Unity OE is based on SUSE Linux Enterprise Server (SLES). Unity provides block and file access to hosts and clients. Unity is an Asymmetric Active-Active array and is ALUA aware.

Multicore Cache

Each SP has a certain amount of cache. In older storage systems, the cache is usually divided into read cache and write cache, and most of the time this is a static partition that does not change according to the IO being served. Unity's Multicore Cache is dynamic: the amount of read and write cache is adjusted according to the read and write operations. The main aim of this approach is to minimize forced flushing when the high watermark level of the cache is reached. An additional layer of SSD cache can be added to a hybrid pool by leveraging FAST Cache technology.

Unity Block and File Storage

Using the FC and iSCSI protocols, Unity provides block access to hosts. Without any special hardware, Unity also provides file access via (virtual) NAS servers that can be created in Unity OE. The most fundamental part of Unity storage is the unified storage pool: Unity allows all types of storage resources, such as block LUNs, VVols and NAS file systems, to be placed in the same storage pool. The following diagram shows various storage resources residing in the same storage pool.

Unity storage pools
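
Before moving on to pools, here is a minimal sketch of the block side as seen from a Linux host using open-iscsi. The IP address and target IQN below are placeholders; use the iSCSI interface values configured on your own array.

# discover the targets advertised by the Unity iSCSI interface (IP is a placeholder)
sudo iscsiadm -m discovery -t sendtargets -p 192.168.10.50

# log in to a discovered target (the IQN below is an example only)
sudo iscsiadm -m node -T iqn.1992-04.com.emc:cx.example0000.a0 -p 192.168.10.50 --login

# the provisioned LUNs now appear as SCSI block devices
lsblk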

Storage Pools

The disks residing in the DPE and DAEs can be grouped together to form storage pools. A pool can contain up to three tiers:

  • Extreme performance tier (SSD)
  • Performance tier (SAS)
  • Capacity tier (NL-SAS)

RAID protection is applied at the tier level, not at the pool level. In a Unity All Flash system, the pool contains only the extreme performance tier; such pools are called all-flash pools. In hybrid systems, it is possible to create an all-flash pool with only SSDs and later expand the pool with SAS or NL-SAS disks. Each tier in the pool can have a different RAID level: for example, the extreme performance tier can use RAID 10, the performance tier RAID 5 and the capacity tier RAID 6. However, the same tier cannot have different RAID types, and a drive that is part of one storage pool cannot be part of another storage pool.

FAST VP

FAST stands for Fully Automated Storage Tiering. Configuring pools is the first step in provisioning storage in Unity. Unity uses FAST VP (Fully Automated Storage Tiering for Virtual Pools) algorithms to move hot data to SSD and cold data to NL-SAS. The policy can be adjusted according to business needs via Unisphere. The following tiering policies are available:

  • Highest Available Tier
  • Auto-Tier
  • Start High then Auto-Tier (Default/Recommended)
  • Lowest Available Tier

When FAST VP is enabled, data is spread across the pool in 256 MB slices. FAST VP is enabled individually on each storage resource, such as a LUN or a datastore.

Highest Available tier

With the Highest Available Tier policy, new data slices are provisioned from the extreme performance tier. If that tier fills up, new slices are provisioned from the next tier. With this policy we should expect superior performance and low latency, as all data is kept in the extreme performance tier and the next tier is used only when it is full. During the next relocation schedule, the system tries to place the slices in the highest tier; hot slices with this policy take precedence over slices with any other tiering policy. (Hot denotes frequently accessed slices and cold denotes less frequently accessed slices.)

Auto-tier

Auto-Tier policy is very similar to the Highest Available Tier policy, with two primary differences:

  • Even if a slice with the Auto-Tier policy is more active than a slice with the Highest Available Tier policy, the slice belonging to the Highest Available Tier policy takes precedence during relocation.
  • When a new storage resource with the Auto-Tier policy is created, slices are allocated from all tiers depending on the usage of each tier. If more free capacity is available in the capacity tier, the initial slices are allocated from the capacity tier.

Start high then auto-tier

Dell EMC recommends using this policy on storage resources, and it is the default. When new slices are allocated, this policy allocates them from the highest performance tier. Later, during the relocation schedule, slices are moved down if they are not frequently accessed. We can expect good initial performance on a storage resource with this policy, and later, for effective capacity utilization, cold slices are moved down. It works exactly like the Auto-Tier policy; the only difference is that the initial allocation of slices is from the highest performance tier.

Lowest available tier

With the Lowest Available Tier policy, all slices of a storage resource always reside in the lowest tier, i.e. the capacity tier. If the lowest tier is full, all slices with this policy are compared and the ones with the lowest activity remain in the lowest tier.

The following table is an excerpt from the “EMC Unity: FAST Technology Overview” white paper and summarizes the behavior of each tiering policy.

| Tiering Policy | Corresponding Initial Tier Placement | Description |
|---|---|---|
| Highest Available Tier | Highest Available Tier | Initial data placement and subsequent data relocations set to the highest-performing tier of drives with available space |
| Auto-Tier | Optimized for pool performance | Initial data placement optimizes pool capacity, then relocates slices to different tiers based on the activity levels of the slices |
| Start High then Auto-Tier (default) | Highest Available Tier | Initial data placed on slices from the highest tier with available space, then relocates data based on performance statistics and slice activity |
| Lowest Available Tier | Lowest Available Tier | Initial data placement and subsequent relocations preferred on the lowest tier with available space |

The following picture shows hot and cold slices before and after relocation.

Image source: emc.com

Expanding Storage Pool with additional drives

While expanding a pool with additional drives, it is not mandatory to use the same stripe width. For example, if an existing tier is configured with RAID 5 4+1 (i.e. the tier contains sets of 5 drives), we can expand the pool in sets of RAID 5 (4+1), RAID 5 (8+1) or RAID 5 (12+1). In VNX2, it was a best practice to maintain the preferred drive count (5, in our example) while expanding. This is no longer the case in Unity; Unity allows a tier in a pool to be expanded with any supported stripe width.

System Drives

Unity OE occupies the first 4 disks in the DPE. These drives are called system drives. Unity OE does not occupy the entire drive capacity; it uses about 107 GB on each. The system drives are allowed to take part in storage pools along with non-system drives.

FAST Cache

FAST Cache technology extends the existing cache in the storage processors by utilizing high-speed SSDs. FAST Cache is applicable only to hybrid pools. Frequently accessed data (in 64 KB chunks) on SAS and NL-SAS drives is copied to the cache tier. The data is only copied, not moved; it still exists on the drives. Not only is frequently accessed data copied, the algorithm also copies data that is likely to be read next. After copying, the FAST Cache memory map is updated.

How is a read operation performed in Unity?

For an incoming IO, the system cache (DRAM) is checked first. If the data resides in the system cache, the requested data is sent to the host, completing the IO request. If a cache miss occurs, i.e. the data is not present in the system cache, the FAST Cache memory map is checked (FAST Cache must, of course, be enabled and configured for the memory map to exist). If the data is found in FAST Cache, it is read from there and sent to the host, which improves system throughput and reduces response time. In case of both a system cache and a FAST Cache read miss, the data is read from the drives, copied to the system cache and then sent to the host.

How is a write operation performed in Unity?

For all write requests, the data is first written to the system cache and an acknowledgment is sent to the host. During flushing, the data is written to the pool. If for some reason the system write cache is disabled, the data is written to FAST Cache (if present) and then to the pool.

Inline Compression

Unity offers a compression feature for block LUNs and VMware VMFS datastores. This feature was added in the Unity OE 4.1 release. Compression can be enabled on a storage resource when creating it or at a later point, and only for resources that reside in all-flash pools. When a host sends data, it is placed in the system cache first and an acknowledgment is sent to the host. Compression of the data happens inline, between the system cache and the all-flash pool. Compression is not available for VVols and file storage resources.

Management tools of Unity

Unity can be managed via Unisphere (an HTML5 GUI), Unisphere CLI (UEMCLI) and a REST API. These interfaces cover Unity management functions only; there is no data access (such as S3) through them.
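
As a quick illustration of the scripted interfaces, here is a hedged sketch that lists the storage pools with both UEMCLI and the REST API. The IP address, credentials, object path and field names are placeholders and may differ by OE release, so treat this as an example and check the Unisphere CLI and REST API guides for your version.

# Unisphere CLI: show the pools configured on the array (IP and credentials are placeholders)
uemcli -d 192.168.1.100 -u admin -p MyPassword /stor/config/pool show

# REST API: a similar query over HTTPS; the X-EMC-REST-CLIENT header is required
curl -k -u admin:MyPassword \
    -H "X-EMC-REST-CLIENT: true" -H "Accept: application/json" \
    "https://192.168.1.100/api/types/pool/instances?fields=name,sizeTotal,sizeFree"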

Replication, Snapshots and other protection

Unity supports synchronous and asynchronous replication of block storage resources such as block LUNs and VMware datastores. File storage resources can be protected with asynchronous replication. Unity also supports snapshots natively; snapshots are supported for block LUNs, VMFS datastores, file systems and VMware NFS datastores. Data At Rest Encryption can be enabled on Unity; when enabled, all data on the array is encrypted. Unity also integrates well with other Dell EMC products such as RecoverPoint for DVR-like recovery.

This brings us to the end of the post. What we discussed is a high-level overview of Unity OE and its functions. I hope you found it helpful. Deep-dive posts on Unity features will be published soon.

Disclosure: I work for Dell EMC and this is not a promoted post.

Unity Architecture – Part I

Last year, Dell EMC announced the Unity midrange storage array at EMC World. Unity is based on the VNXe architecture and does not replace the higher VNX2 model, i.e. the VNX8000. This post takes a closer look at Unity to understand its hardware components, design, and software. It is a two-part series: part I is all about Unity hardware, and part II talks about the software architecture of Unity.

There are three variants of Unity: Unity Hybrid, Unity All Flash and Unity VSA. The models are Unity 300/300F, 400/400F, 500/500F and 600/600F. A model with "F" at the end is all-flash (SSDs only), and the others are hybrid storage systems (flash + spinning disks). Unity VSA is a virtual appliance that can be deployed on vSphere. Now let us take a look at some of the important specifications of these models.

Specifications

The specifications listed here are for a system running Unity OE 4.1, aka Falcon.

| | Unity 300/300F | Unity 400/400F | Unity 500/500F | Unity 600/600F |
|---|---|---|---|---|
| Processor | 2x Intel 6-core, 1.6 GHz | 2x Intel 8-core, 2.4 GHz | 2x Intel 10-core, 2.6 GHz | 2x Intel 12-core, 2.5 GHz |
| Memory (both SPs) | 48 GB | 96 GB | 128 GB | 256 GB |
| Minimum/Maximum drives | 5/150 | 5/250 | 5/500 | 5/1000 |
| Maximum raw capacity* | 2.34 PB | 3.91 PB | 7.81 PB | 9.77 PB |
| Max IO modules | 4 | 4 | 4 | 4 |
| Max number of pools | 20 | 30 | 40 | 100 |
| Max LUN size | 256 TB | 256 TB | 256 TB | 256 TB |
| Max file system size | 64 TB | 64 TB | 64 TB | 64 TB |
| Max LUNs per array | 1,000 | 1,500 | 2,000 | 6,000 |

*Maximum raw capacity may vary.

Supported disks

Unity Hybrid

  • SSDs
  • Spinning disk drives

Unity All Flash

  • SSDs

The solid state drives used in Unity are of eMLC and TLC type; some models are rated for 1 WPD (one write per day).

Disk Processor Enclosure (DPE)

The DPE holds the storage processors (SPs), IO modules and disks. Two variants of the DPE are available:

  • 25 Drive DPE that can hold 2.5″ disks (Available for hybrid and all flash array)
  • 15 Drive DPE that can hold 3.5″ disks (Only available for hybrid array)

As seen in the table, the SP in each model has a different CPU model and a different amount of memory. Both types of DPE occupy 2U when mounted in a rack. The first four drives in the Unity DPE are called system drives; these drives contain Unity OE (the Operating Environment), and the remaining space on them can be used for storage pools. The minimum number of disks required to initialize the system is 5. On the rear side of the DPE we have: 2x storage processors (1 & 2 in the image), 4x onboard converged network ports (optical/Twinax) (3), 4x onboard 10 GbE Base-T RJ45 ports (4), 2x power supplies (5), IO module slots (6), 4x SAS ports for backend connections (7), and a management port and a service port (8). Here is a picture of the DPE:

Disk Processor Enclosure. Image source: emc.com

The onboard Converged Network Adapter (CNA) ports can be configured for 16/8/4/2 Gbps Fibre Channel SFPs (multimode and single mode) or 10 GbE optical using SFP+ and Twinax. The other two onboard ports on each SP are 10 GbE Base-T. All of these onboard ports can be configured for block (FC/iSCSI) or file IO (NFS/CIFS). Each SP also has a management port (to access Unisphere) and a service port (for service or engineering use).

Each SP can have two IO modules installed to expand front-end host connectivity. The IO module installed in a slot on SPA must match the one installed in the corresponding slot on SPB; there cannot be a mismatch. Unity supports the following IO modules:

  • 4-port 16Gb Fibre Channel
  • 10GbE Base-T
  • 1GbE Base-T
  • 2-port 10GbE optical (SFP+ and Twinax)
  • 4-port 10GbE optical (SFP+ and Twinax)
  • 12Gb SAS for backend expansion (only for Unity 500 and 600)

Unity supports active Twinax cables only; passive Twinax cables are not supported.

Protecting cache content (no vault)

In case of a storage processor failure, the cache contents are dumped to an M.2 SSD that resides inside each SP. If the cabinet loses power, each Unity SP contains a built-in battery backup unit (BBU) that can power the SP long enough to dump the cache contents to the M.2 SSD. The cache content is restored to the respective SP's cache when power is restored or the SP is replaced. The M.2 SSD also contains the Unity OE boot image.

Disk Array Enclosure (DAE)

The DAE holds drives, and the number of DAEs that a model supports varies. Please refer to the specification table earlier in this post for the maximum drive count each system supports. There are two variants of DAE:

  • 25 Drive DAE that can hold 2.5″ disks (2U)
  • 15 Drive DAE that can hold 3.5″ disks (3U)

On the rear side, each DAE has 4 SAS ports (marked A and B) for DPE-to-DAE and DAE-to-DAE connections. The ports use mini-SAS HD connectors. Here are the images of the 15- and 25-drive DAEs:

15 drive DAE. Image source: emc.com

That’s all about Unity hardware. In the next post, we will take a closer look at the software side of Unity. Stay tuned! Click here to read part II.

Disclosure: I work for Dell EMC and this is not a promoted post.

How to configure NFS in EMC Elastic Cloud Storage (ECS) Appliance

The Elastic Cloud Storage (ECS) appliance from EMC is a storage array that allows multiprotocol access to the storage space. A bucket can be configured for simultaneous access via Amazon S3, Swift and NFS; likewise, access via HDFS can be allowed. This post explains how to configure NFS in ECS. It took me a lot of time to figure out the steps since there was no clear documentation available on configuring NFS in ECS. In this example, we will mount a bucket created in ECS as an NFS share on a Linux host running CentOS 7.

Before creating a bucket, we should create a user in the ECS portal. Log in to the ECS portal by navigating to any one of the public IPs set on the nodes. You will see the Dashboard after login; expand Manage and then click Users. It is assumed that you have already defined storage pools, a virtual data center, a replication group and a namespace.

Step 1:

In the user management window, the Object Users tab is selected by default. Click New Object User, type a suitable name for the new user, and select the relevant namespace. Click Next to Add Passwords. Under S3, click generate and add a password. Generating an S3 password is optional; generate it if you plan to allow multi-protocol access. The same applies to Swift and CAS.

Once done, the user you have created should look like the one in the following screenshot. (If the image is not clear, click it to open it in a new tab.)

Create User

Step 2:

The next step is to create a bucket where the data is going to be stored and retrieved. Click Buckets and then click New Bucket on the Bucket Management page. Type a suitable bucket name, and select the relevant namespace and replication group. Then, in the Bucket Owner field, type the user that we just created in step 1; in my case, it is az_nfs. Scroll down and click Enabled for File System. If the bucket is created without enabling file system access, it is not possible to enable it later.

Before continuing further, get the name of the Linux group that the Linux user is part of (the commands below also print the numeric user ID you will need in step 3).
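
A quick way to get both values on the Linux host, using the example user az from this walkthrough:

# primary group name of the Linux user (used in the Default Bucket Group field)
id -gn az

# numeric user ID (used for the user/group mapping in step 3)
id -u az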

In the Default Bucket Group field, type the group name of the Linux user. Once done, it should look like the following:

Create Bucket
Create Bucket (contd.)

Step 3:

In this step, we map the ECS object user to the Linux user and group. Before continuing, get the Linux user ID (the id -u command shown in step 2 prints it). Once you know the Linux user ID, click File, then click the User / Group Mapping tab and then New User / Group Mapping. Type the ECS object user name and the ID of the Linux user as shown in the screenshot, then click Save.

User Mapping

Step 4:

After mapping the ECS object user to the Linux user, we can now create an NFS export of the bucket. While on the same page, click the Exports tab and then click New Exports. In the Bucket field, select the bucket that we created; the export path will change automatically. The following is a screenshot of that:

New File Export

Now click Add, next to Export Host Options. In this step, we are adding the Linux host. The following screenshot shows what needs to be selected: type the IP address of the Linux host in the Export Host field and type the ECS object username in the AnonUser, AnonGroup and RootSquash fields. Then click Add.

Add Export Host
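
Before mounting, you can optionally check from the Linux host that the ECS node is answering NFS requests. These are standard NFS client-side checks; depending on the ECS release, showmount may or may not list the export.

# verify that the NFS and mountd services are registered on the ECS node (x.x.x.x = node public IP)
rpcinfo -p x.x.x.x

# list the exports visible to this host (may be empty on some releases)
showmount -e x.x.x.x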

Final Step

Now it’s time to mount and test the share on the Linux host. Make a note of the export path and establish a terminal session to the Linux host. The following is the command to mount the share:

sudo mount -t nfs -o vers=3,sec=sys,proto=tcp \
    x.x.x.x:/namespace/az_nfs_bucket/ /home/az/Desktop/ecs_share
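
Once the mount command returns without errors, a quick sanity check using the example paths above (the test file name is arbitrary):

# confirm the share is mounted and see its size
df -h /home/az/Desktop/ecs_share

# write a test file and list it back
touch /home/az/Desktop/ecs_share/nfs_test.txt
ls -l /home/az/Desktop/ecs_share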

The share is now mounted on the ecs_share directory, and you will be able to create files and folders. Having challenges? Please feel free to comment.

What is MPIO DSM/device mapper/NMP and why we need it?

I am writing this article to explain the basic functionality of Multipath I/O. It is targeted at beginners in SAN technology.

Why do we need multipath I/O?

In the early days, all storage solutions were SCSI based and directly attached to the host. When Fibre Channel was introduced, it enabled us to provide storage access to multiple hosts from a single centralized storage system. But in those days all storage systems were active/passive arrays, so communication happened over one path.

Modern storage systems are symmetric or asymmetric active/active arrays. We need to understand the difference between the two in order to understand MPIO.

Asymmetric Active/Active

Let's consider a dual-controller storage system with a LUN provisioned to a host. Assume the LUN is owned by controller A, there are two active paths between the host and the array, and the host is accessing this LUN via controller B. Since the LUN is owned by controller A, that controller alone gets exclusive access to read and write data on the LUN. Controller B communicates with the owning controller via its internal link to get the IO served.

Symmetric Active/Active

In a symmetric active/active array, regardless of which controller owns the LUN, any controller/node in the storage system can read and write data on the LUN.

Multipath I/O

Multipath IO is a feature that effectively manages the active paths to a storage array. There are various path selection policies (PSPs) that can be set per LUN. The following are the commonly used PSPs:

  1. Most Recently Used (MRU)
  2. Fixed
  3. Round Robin (RR)

Round Robin is the most commonly used path selection policy across storage arrays due to its dynamic load-sharing capability. In ESXi environments, RR is preferred to achieve good performance.
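
For example, on an ESXi host the active PSP can be inspected and changed with esxcli. This is a generic, hedged sketch: naa.xxxx stands for your LUN's device identifier, and you should follow the PSP recommended by your array vendor.

# list devices with their SATP and current path selection policy
esxcli storage nmp device list

# set Round Robin on one device (naa.xxxx is a placeholder for the device ID)
esxcli storage nmp device set --device naa.xxxx --psp VMW_PSP_RR

# optionally make Round Robin the default PSP for an SATP (here the ALUA SATP)
esxcli storage nmp satp set --satp VMW_SATP_ALUA --default-psp VMW_PSP_RR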

Similarly, the host software required to configure multipathing has different names on different OS platforms: Microsoft calls it MPIO, in the Linux world it is known as Device Mapper multipath, and VMware goes by the name Native Multipathing Plug-in (NMP).
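
As an illustration of the Linux side, the sketch below enables Device Mapper multipath on a RHEL/CentOS-style host and lists the multipath devices. Package and service names differ between distributions, and vendor-specific settings for /etc/multipath.conf should come from the array's host connectivity guide.

# install and enable device-mapper multipath (RHEL/CentOS package name)
sudo yum install -y device-mapper-multipath
sudo mpathconf --enable --with_multipathd y

# show multipath devices, their paths, path groups and the policy in use
sudo multipath -ll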

For optimum performance, it is recommended to use device-specific DSMs (here, "device" means the storage array). Storage vendors release DSMs for easier configuration; a DSM instructs the host on how to access the LUNs using a set of predefined rules.

Apart from this, there is also a wide range of software for managing multipathing at larger scale, such as EMC PowerPath, HP Secure Path and Hitachi Dynamic Link Manager.

The best practice is to implement multipathing as recommended by the storage vendor. Selecting the appropriate, array-specific PSP will lead to optimum LUN performance.
