The Inevitable Future

Introduction

The world as we know it today is not the same as it was nearly four decades ago. We are living in a world where invisible armies of remotely controlled software bots are used to launch DDoS (Distributed Denial of Service) attacks against targets. These invisible bots are Internet of Things (IoT) devices hijacked by people with malicious intentions. Back in 2016, the Mirai botnet damaged basic Internet infrastructure, causing service interruptions worldwide.

Traditionally, war meant physical damage to the countries engaging in it. Not anymore!

But at the beginning of the 21st century, it found its way into cyberspace. Here is a map, developed by Norse, that shows cyber attacks in real time.

Real time cyber attack (representational view only)

In this information age, “data” is the new raw material. An individual or an organization that owns meaningful data can monetize it by selling it to other businesses, which may use it to drive more revenue. The profound impact of software is felt across various industries; in earlier days, nobody would have thought electronic currencies could cause ripples in the financial sector. The increasing popularity of cryptocurrencies challenges the financial industry. At the end of 2017, Bitcoin reached a historic high, and all of a sudden everybody knew about Bitcoin and other cryptocurrencies. Even though it is a speculative investment at the time of writing, people are pouring money in, increasing demand. Almost all of the world's renowned financial gurus have spoken only against the rise of cryptocurrencies, but that doesn't mean they are right. It could become a stable investment vehicle, or, if it is a bubble, it may burst soon.

On the other hand, we have companies trying to smooth credit card transactions by eliminating the cumbersome banking processes that lie behind every transaction. Advancements in software have given birth to new types of home automation devices that turn your home into a smart home. These smart devices connected to the internet are collectively referred to as the “Internet of Things.” Data produced by such devices is converted into meaningful representations and can be sold. These are the very same devices that can be hijacked to lend their compute power to large-scale DDoS attacks. If you don't secure your smart devices, it should not be surprising when they are hijacked to DDoS a person or an organization, or for any other malicious purpose. Which is why,

In the future, you won't just build a compound wall around your house; you will also build a firewall.

Yes, that's right: securing smart things is as important as protecting oneself physically. Security companies do realize this, but they have failed to educate users. Equally, manufacturers of smart devices have failed to prioritize security in the devices they bring to market. Advanced computers and software have also put genomics research into the fast lane. Among its many uses, genomics enables accurate disease diagnosis: the exact cause of a disease can be identified and treated accordingly, which helps especially in cancer treatment. With further development of software, genomics research may even jump into hyperdrive. The agriculture industry, on the other hand, is learning how to use Big Data effectively. Data produced by sensors and devices placed around a farm can be used for forecasting and to apply pesticide in specific regions instead of spraying the entire farmland. Traditional agriculture technology companies are acquiring the startups that disrupt their field. Acquiring disruptive startups is common across many industry verticals, and the disruptors have one thing in common: they are all software companies.

This post is the first part of “The Inevitable Future” series. In this article, I present compelling examples of disruption caused by software across various industries. The article also briefly explains the security measures currently in practice to protect new-age assets, and the future of security. Throughout this series, we will focus on the Internet of Things, security, and technology in finance.

The Early Days

Let me explain the importance of the Internet with a personal example. A few months ago, I started reading an autobiography of Mahatma Gandhi, written circa 1927. The book contained many references to how people lived their lives about a century ago. The striking fact I discovered after reading it is that not much has changed in the world we live in today. I do agree that living styles, infrastructure, and the everyday life of people around the world have changed. But just as a century ago, our minds still work the same way: our nature to explore and to make decisions based on experience is the same, our ability to work towards the betterment of the society we live in has not changed, and our desire to create something new has not changed.

A typical village in India.

I was raised in a remote village in India, and the school I studied in was located in a nearby town. During my pre-teen years, I was confident that our generation was by far the most advanced in terms of learning and development. I thought people of my father's age were very old and unaware of what I had learned. I often found myself in conversations with my dad trying to prove that I was smart and knew more than him. But my father always outsmarted me, and it was so cool to know that he knew much more than I did. Around the same time, I realized that my parents and my grandmother knew things unknown to me, because they had seen the world and its people long before I was born. I loved hearing stories from my grandmother about how they lived when they were young. The obvious reason I believed we were the most advanced generation on the planet was the technology that surrounded us. I also believed that people living in developed nations had easier access to it than people in other parts of the world. Most of my school days passed by just learning about things that exist in the world, mainly through newspapers and television.

During my school days (1998-2002), I learned that people were using the Internet for communication, work, entertainment, shopping, etc., but in the place where I was born and studied it was unknown to most. Our school had just one computer, and it was actually a home PC. I certainly don't blame them, as computers were not cheap at that time and it was hard to find a teacher who knew how to operate one. The single computer was managed by a person who taught students how to use it. We had very limited access to the computer room: there were around 40 students in my class, and we got only one hour a week with a single PC. I never got my hands on it, and that was the case for most of us. The computer in our school was more of a “display item” than equipment everyone should practice on. It never felt like I was missing out; instead, it felt like I was getting more than I had expected, considering the financial circumstances and the location I was living in. The scenario in nearby cities would have been different. For higher education, my parents sent me to a school in a nearby small city. Even though I had traveled there many times before, I had never been part of its society. After making friends in the new place, the first thing I noticed was that people were different; I got more access to media and started learning more about electronics and the automotive world. By this time, I was buying every automotive magazine on the stand and learning from them. The technology being used or introduced in automotive systems was amazing. There were a lot of prototypes, and the stories about the future of cars and the automotive industry in general were fascinating. They did talk about driverless cars, but no article I read predicted that organizations such as Google would be the first to pioneer one. Later on, I enrolled in a computer class where they taught Microsoft Office. Computer programming was out of the question, since it was costlier and not known to many.

During those days, the mindset simply was “learning how to use the computer” rather than understanding how it works.

I spent most of my money and time at the cyber café. It was the only place where I could get access to the Internet. I had gone to each of the institutes and enquired whether they would allow me to use the Internet; nobody said “yes.” The cyber café rescued me, and Google was immensely helpful because it got me what I wanted. This was the time I used the Internet to learn and improve my knowledge of various subjects.

With this introduction to the series and my early days, I would like to conclude part I. In the next post, I will write about IoT, a few examples, and security.

Introduction to Next Gen Dell EMC Unity

Nearly a year later, Dell EMC has added Next Gen Unity storage systems to its existing Unity portfolio. The new models are Unity 350F, 450F, 550F, and 650F. In a nutshell, this year's Dell EMC Unity release brings improvements to hardware as well as software, i.e. improvements to Unity OE. This post examines the new hardware and the new features introduced this year.

Gen2 vs Gen1

In general, the second-generation Unity systems have more cores per CPU and more memory compared with the hardware introduced last year. Does that mean there must be an improvement in performance? Yes: the maximum IOPS the new-generation hardware can handle is slightly higher than the older generation's. Another important thing to note is that Dell EMC is going all in on all flash; this year the Unity product line introduces no new hybrid models (spinning disks + SSDs). The new Unity models 350F, 450F, 550F, and 650F are all flash and do not support spinning disks. Here is a table that summarizes the improvements.

| | Unity 350F | Unity 450F | Unity 550F | Unity 650F |
| --- | --- | --- | --- | --- |
| Processor | Intel E5-2603v4, 6-core, 1.7 GHz | Intel E5-2630v4, 10-core, 2.2 GHz | Intel E5-2660v4, 14-core, 2.0 GHz | Intel E5-2680v4, 14-core, 2.4 GHz |
| Memory (per SP) | 48 GB (3x 16 GB DIMMs) | 64 GB (4x 16 GB DIMMs) | 128 GB (4x 32 GB DIMMs) | 256 GB (4x 64 GB DIMMs) |
| Minimum/Maximum drives | 6/150 | 6/250 | 6/500 | 6/1000 |
| Maximum raw capacity* | 2.4 PB | 4.0 PB | 8.0 PB | 16.0 PB ** |
| Max IO modules | 4 | 4 | 4 | 4 |
| Max LUN size | 256 TB | 256 TB | 256 TB | 256 TB |
| Max LUNs per array | 1,000 | 1,500 | 2,000 | 4,000 |
| Max file system size | 256 TB | 256 TB | 256 TB | 256 TB |

*Maximum raw capacity may vary.

**Unity 650F raw capacity is a 2x increase when compared with Unity 600F.

The look of the hardware remains the same; there is no change in the aesthetics. But on the inside, things have changed considerably with the introduction of Unity OE 4.2. Before we jump into what's new in the software: Dell EMC has also introduced an 80-drive DAE this year. The 80-drive DAE is compatible with hardware of all generations; it works with Gen1 hybrid and all flash arrays as well as Gen2 all flash arrays.

80 Drive DAE

Photo Credit: Dell EMC

The 80-drive DAE is a dense enclosure that accommodates eighty 3.5″ drives; the drives used in this DAE cannot be used in the 15-drive DAE. The new 80-drive DAE supports connecting to Unity hardware of all generations. The backend connection can be x4-lane or x8-lane SAS.

If you would like to read about the Unity DPE, other DAE types, and the internal components of the Unity DPE, check out my post on Unity hardware architecture.

New features in Unity OE 4.2

The Unity OE 4.2 release is this year's major update. Here is a list of the most notable features:

  • Dynamic Pools
  • Thin Clones
  • Enhancements to Snapshots
  • Improvements to system limits
  • Inline Compression for File
  • SMB migration from VNX to Unity

I will be publishing separate posts detailing the most important features of Unity OE release 4.2. Stay tuned!

Disclosure: I work for Dell EMC and this is not a promoted post.

Unity Architecture – Part II

Is Unity really a re-branded VNX2/VNX? Well, let's find out. In this post, we will take a closer look at the functions of Unity OE. This post is part II of the Unity Architecture series; if you have not read part I, click here to read it.

Unity OE

The Operating System that runs on Unity hardware is called Unity OE (Operating Environment). Unity OE is based on SUSE Linux Enterprise Server (SLES). Unity provides block and file access to hosts and clients. Unity is an Asymmetric Active-Active array and is ALUA aware.

Multicore Cache

Each SP has a certain amount of cache. In older storage systems, the cache is usually divided into read cache and write cache, most of the time as a static partition that does not change according to the IO being served. Unity's Multicore Cache is dynamic: the amount of read and write cache is adjusted according to the read and write operations. The main aim of this approach is to minimize forced flushing when the high watermark level of the cache is reached. An additional layer of SSD cache can be added to a hybrid pool by leveraging FAST Cache technology.
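To see why a floating split helps, consider the following minimal sketch. It is purely conceptual and is not Unity's actual algorithm: the page counts, the eviction order, and the background-flush heuristic are all invented for illustration.

```python
# Conceptual illustration only -- not Unity's Multicore Cache algorithm.
# One pool of cache pages serves both reads and writes; the read/write
# split floats with the workload instead of being statically partitioned.

class DynamicCache:
    def __init__(self, total_pages, high_watermark=0.8):
        self.total_pages = total_pages
        self.read_pages = 0    # pages currently holding read-cached data
        self.dirty_pages = 0   # write-cached pages not yet flushed to disk
        self.high_watermark = high_watermark

    def cache_write(self, pages):
        """Accept a write; shrink the read side before resorting to a forced flush."""
        assert pages <= self.total_pages
        while self.read_pages + self.dirty_pages + pages > self.total_pages:
            if self.read_pages > 0:
                self.read_pages -= 1   # reclaim a read page (cheap: data is clean)
            else:
                self.flush(1)          # forced flush only as a last resort
        self.dirty_pages += pages
        # Background flushing keeps dirty pages below the high watermark,
        # so a burst of writes rarely triggers forced flushing.
        if self.dirty_pages > self.high_watermark * self.total_pages:
            self.flush(self.dirty_pages // 4)

    def flush(self, pages):
        self.dirty_pages -= min(pages, self.dirty_pages)
```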

Unity Block and File Storage

Using the FC and iSCSI protocols, Unity provides hosts with block access to storage. Without any special hardware, Unity also provides file access via (virtual) NAS servers that can be created in Unity OE. The most fundamental part of Unity storage is the unified storage pool: Unity allows all types of storage resources, such as block LUNs, VVols, and NAS file systems, to be placed in the same storage pool. The following diagram shows various storage resources residing in the same storage pool.

Unity storage pools

Storage Pools

The disks residing in the DPE and DAEs can be grouped together to form storage pools. A pool can have three tiers:

  • Extreme performance tier (SSD)
  • Performance tier (SAS)
  • Capacity tier (NL-SAS)

RAID protection is applied at the tier level, not at the pool level. In a Unity All Flash system, the pool contains only the extreme performance tier, and such pools are called all flash pools. In hybrid systems, it is possible to create an all flash pool with only SSDs and later expand the pool with SAS or NL-SAS disks. Each tier in a pool can have a different RAID level: for example, the extreme performance tier can use RAID 10, the performance tier RAID 5, and the capacity tier RAID 6. But a single tier cannot mix RAID types, and a drive that is part of one storage pool cannot be part of another storage pool.
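To illustrate those rules, here is a minimal sketch (hypothetical Python, not Dell EMC code) modeling a hybrid pool: RAID is set per tier, each tier has exactly one RAID type, and a drive may belong to only one pool.

```python
# A minimal sketch of the pool rules described above (illustrative only).

from dataclasses import dataclass, field

@dataclass
class Tier:
    name: str          # "extreme_performance" (SSD), "performance" (SAS), "capacity" (NL-SAS)
    raid_level: str    # exactly one RAID type per tier, e.g. "RAID10", "RAID5", "RAID6"
    drives: list = field(default_factory=list)

@dataclass
class StoragePool:
    name: str
    tiers: list = field(default_factory=list)

ALLOCATED_DRIVES = set()   # drives already claimed by some pool

def add_tier(pool, tier):
    for drive in tier.drives:
        if drive in ALLOCATED_DRIVES:
            raise ValueError(f"{drive} already belongs to another pool")
        ALLOCATED_DRIVES.add(drive)
    pool.tiers.append(tier)

# A hybrid pool with a different RAID level per tier, as in the example above.
hybrid = StoragePool("pool0")
add_tier(hybrid, Tier("extreme_performance", "RAID10", ["ssd0", "ssd1", "ssd2", "ssd3"]))
add_tier(hybrid, Tier("performance", "RAID5", [f"sas{i}" for i in range(5)]))
add_tier(hybrid, Tier("capacity", "RAID6", [f"nlsas{i}" for i in range(8)]))
```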

FAST VP

FAST stands for Fully Automated Storage Tiering. Configuring pools is the first step in provisioning storage in Unity. Unity uses FAST VP (Fully Automated Storage Tiering for Virtual Pools) algorithms to move hot data to SSD and cold data to NL-SAS. The policy can be adjusted to business needs via Unisphere. The available tiering policies are:

  • Highest Available Tier
  • Auto-Tier
  • Start High then Auto-Tier (Default/Recommended)
  • Lowest Available Tier

When FAST VP is enabled, data is spread across the pool in 256 MB slices. FAST VP is enabled individually per storage resource, such as a LUN or a datastore.

Highest Available tier

With the Highest Available Tier policy of FAST VP, new data slices are provisioned from the extreme performance tier; if that tier gets full, new slices are provisioned from the next tier. With this policy we should expect superior performance and low latency, as all data is kept in the extreme performance tier and the next tier is chosen only when it gets full. During the next relocation schedule, the system tries to place the slices in the highest tier only, and hot slices with this policy take precedence over slices with any other tiering policy. “Hot” denotes frequently accessed slices and “cold” denotes less frequently accessed slices.

Auto-tier

The Auto-Tier policy is very similar to the Highest Available Tier policy, with two primary differences:

  • Even if a slice with the Auto-Tier policy is more active than a slice with the Highest Available Tier policy, the slice belonging to the Highest Available Tier policy takes precedence during relocation.
  • When a new storage resource with the Auto-Tier policy is created, slices are allocated from all tiers depending on the usage of each tier. If more free capacity is found in the capacity tier, slices are initially allocated from the capacity tier.

Start high then auto-tier

Dell EMC recommends this policy for storage resources, and it is the default. When new slices are allocated, this policy allocates them from the highest performance tier; later, during the relocation schedule, slices that are not frequently accessed are moved down. With this policy we can expect good initial performance from the storage resource, while cold slices are later moved down for effective capacity utilization. It works exactly like the Auto-Tier policy; the only difference is that the initial allocation of slices comes from the highest performance tier.

Lowest available tier

With this policy, all slices of a storage resource always reside in the lowest tier, i.e. the capacity tier. If the lowest tier is full, all slices with this policy are compared, and the ones with the lowest activity reside in the lowest tier.
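To make the policy differences concrete, here is a simplified sketch of how each policy could pick the tier for a newly allocated slice. It is illustrative only, not the actual FAST VP algorithm; the tier ordering and the capacity heuristic for Auto-Tier are assumptions.

```python
# Simplified initial-placement sketch for the four tiering policies.
# Tier order: 0 = extreme performance, 1 = performance, 2 = capacity.

TIERS = ["extreme_performance", "performance", "capacity"]

def initial_tier(policy, free_per_tier):
    """Pick the tier a new 256 MB slice is allocated from, per policy."""
    if policy in ("highest_available", "start_high_then_auto"):
        # Start in the highest tier that still has free slices.
        for t, free in enumerate(free_per_tier):
            if free > 0:
                return t
    elif policy == "auto":
        # Optimize for pool capacity: allocate from the tier with most free space.
        return max(range(len(free_per_tier)), key=lambda t: free_per_tier[t])
    elif policy == "lowest_available":
        # Prefer the lowest tier with free space.
        for t in reversed(range(len(free_per_tier))):
            if free_per_tier[t] > 0:
                return t
    raise ValueError("pool is full or unknown policy")

free = [10, 200, 500]   # free slices per tier
print(TIERS[initial_tier("start_high_then_auto", free)])  # extreme_performance
print(TIERS[initial_tier("auto", free)])                  # capacity
```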

The following table is an excerpt from the “EMC Unity: FAST Technology Overview” white paper and summarizes the function of each tiering policy.

| Tiering Policy | Corresponding Initial Tier Placement | Description |
| --- | --- | --- |
| Highest Available Tier | Highest Available Tier | Initial data placement and subsequent data relocations set to the highest-performing tier of drives with available space |
| Auto-Tier | Optimized for Pool Performance | Initial data placement optimizes pool capacity, then relocates slices to different tiers based on the activity levels of the slices |
| Start High then Auto-Tier (Default) | Highest Available Tier | Initial data placed on slices from the highest tier with available space, then relocates data based on performance statistics and slice activity |
| Lowest Available Tier | Lowest Available Tier | Initial data placement and subsequent relocations preferred on the lowest tier with available space |

The following picture shows hot and cold slices before and after relocation.

Image source: emc.com

Expanding Storage Pool with additional drives

While expanding a pool with additional drives, it is not mandatory to use the same stripe width. For example, if the existing tier is configured with RAID 5 (4+1), meaning the tier contains sets of 5 drives, we can expand the pool in sets of RAID 5 (4+1), RAID 5 (8+1), or RAID 5 (12+1). In VNX2, it was a best practice to maintain the preferred drive count while expanding, i.e. 5 in our example. This is no longer the case in Unity: a tier in a pool can be expanded with any supported stripe width.
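As a quick worked example, the stripe width you expand with changes the parity overhead of the new drive sets. The arithmetic below (illustrative only) compares the RAID 5 widths mentioned above.

```python
# Parity overhead of the RAID 5 stripe widths mentioned above.

def raid5_usable_fraction(data_drives):
    """RAID 5 N+1: N data drives plus 1 parity drive per set."""
    return data_drives / (data_drives + 1)

for n in (4, 8, 12):
    set_size = n + 1
    print(f"RAID 5 ({n}+1): {set_size} drives per set, "
          f"{raid5_usable_fraction(n):.0%} usable")
# RAID 5 (4+1): 5 drives per set, 80% usable
# RAID 5 (8+1): 9 drives per set, 89% usable
# RAID 5 (12+1): 13 drives per set, 92% usable
```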

System Drives

Unity OE occupies the first four disks in the DPE; these drives are called system drives. Unity OE does not occupy the entire drive capacity, only 107 GB per drive. The system drives are allowed to take part in storage pools alongside non-system drives.

FAST Cache

FAST Cache technology extends the existing cache in the storage processors by utilizing high-speed SSDs. FAST Cache is applicable only to hybrid pools. Frequently accessed data (in 64 KB chunks) on SAS and NL-SAS drives is copied to the cache tier; the data is only copied, not moved, and still exists on the drives. The algorithm copies not only frequently accessed data but also data that may be read next. After copying, the FAST Cache memory map is updated.

How is a read operation performed in Unity?

For an incoming IO, the system cache (DRAM) is checked first. If the data resides in the system cache, the requested data is sent to the host, completing the IO request. If a cache miss occurs, i.e. the data is not present in the system cache, the FAST Cache memory map is checked (FAST Cache must, of course, be enabled and configured for the memory map to exist). If the data is found in FAST Cache, it is sent to the host; this improves system throughput and reduces response time. If both the system cache and FAST Cache miss, the data is read from the drives, copied to the system cache, and then sent to the host.

How is a write operation performed in Unity?

For every write request, the data is first written to the system cache and an acknowledgment is sent to the host; during flushing, the data is written to the pool. If for some reason the system write cache is disabled, the data is written to FAST Cache (if present) and then to the pool.
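The read and write paths above can be condensed into a small sketch. This is illustrative Python pseudocode, not Unity source; the caches are modeled as plain dictionaries keyed by LBA.

```python
# Illustrative sketch of the Unity read/write paths described above.

def read(lba, dram_cache, fast_cache_map, drives):
    if lba in dram_cache:                                 # system (DRAM) cache hit
        return dram_cache[lba]
    if fast_cache_map is not None and lba in fast_cache_map:
        return fast_cache_map[lba]                        # FAST Cache hit
    data = drives[lba]                                    # both caches missed
    dram_cache[lba] = data                                # copy into system cache
    return data                                           # then send to host

def write(lba, data, dram_cache, fast_cache_map, drives, write_cache_enabled=True):
    if write_cache_enabled:
        dram_cache[lba] = data      # ack the host now; flushed to the pool later
    elif fast_cache_map is not None:
        fast_cache_map[lba] = data  # write cache disabled: FAST Cache first...
        drives[lba] = data          # ...then the pool
    else:
        drives[lba] = data          # no caches available: straight to the pool
    return "ack"
```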

Inline Compression

Unity has a compression feature available for block LUNs and VMware VMFS datastores; it was added in the Unity OE 4.1 release. Compression can be enabled on a storage resource while creating it or at a later point, and only for resources in all flash pools. When a host sends data, it is placed in the system cache first and an acknowledgment is sent to the host; compression happens inline, between the system cache and the all flash pool. Compression is not available for VVols and file storage resources.
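As an illustration, here is a hedged sketch of enabling compression on an existing LUN through the Unity REST API (covered in the next section). Treat the modifyLun action and the isCompressionEnabled attribute as assumptions and verify them against the REST API reference for your OE release; the hostname, credentials, and LUN id are placeholders.

```python
# Hypothetical sketch: enable inline compression on an existing LUN via the
# Unity REST API.  The "modifyLun" action and "isCompressionEnabled" field
# are assumptions -- check the REST API reference for your Unity OE release.

import requests

session = requests.Session()
session.auth = ("admin", "Password123!")        # placeholder credentials
session.headers["X-EMC-REST-CLIENT"] = "true"   # required on Unity REST calls
session.verify = False                          # lab only; use CA certs in production

base = "https://unity.example.com"              # placeholder hostname

# Unity hands out a CSRF token on GET responses; it must be echoed on POSTs.
resp = session.get(f"{base}/api/instances/storageResource/sv_1",
                   params={"fields": "name"})
session.headers["EMC-CSRF-TOKEN"] = resp.headers.get("EMC-CSRF-TOKEN", "")

payload = {"lunParameters": {"isCompressionEnabled": True}}   # assumed field name
session.post(f"{base}/api/instances/storageResource/sv_1/action/modifyLun",
             json=payload)
```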

Management tools of Unity

Unity can be managed via Unisphere (an HTML5 GUI), Unisphere CLI, and a REST API (management functions only; no data access such as S3).
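The REST API is the easiest to demonstrate in a post. Here is a small sketch using Python's requests library to list LUNs with their names and sizes; the hostname and credentials are placeholders, and the X-EMC-REST-CLIENT header must accompany every call.

```python
# List Unity LUNs over the REST API -- a minimal sketch with placeholder
# hostname and credentials.

import requests

resp = requests.get(
    "https://unity.example.com/api/types/lun/instances",
    params={"fields": "name,sizeTotal"},
    headers={"X-EMC-REST-CLIENT": "true"},  # required header for Unity REST calls
    auth=("admin", "Password123!"),
    verify=False,                           # lab only; validate certificates in production
)
for entry in resp.json().get("entries", []):
    lun = entry["content"]
    print(lun["name"], lun["sizeTotal"])
```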

Replication, Snapshots and other protection

Unity supports synchronous and asynchronous replication of block storage resources such as LUNs and VMware datastores; file storage resources can be protected by asynchronous replication. Unity also supports snapshots natively, covering block LUNs, VMFS datastores, file systems, and VMware NFS datastores. Data At Rest Encryption can be enabled on Unity; when enabled, all data on the array is encrypted. Unity also integrates well with other Dell EMC products, such as RecoverPoint for DVR-like recovery.

This brings us to the end of the post. What we discussed is a high-level overview of Unity OE and its functions. I hope you found it helpful. Deep-dive posts on Unity features will be published soon.

Disclosure: I work for Dell EMC and this is not a promoted post.

Unity Architecture – Part I

Last year, Dell EMC announced the Unity midrange storage array at EMC World. Unity is based on the VNXe architecture and does not replace the highest model of VNX2, i.e. the VNX8000. This post takes a closer look at Unity to understand its hardware components, design, and software. This is a two-part series: part I is all about Unity hardware, and part II talks about the software architecture of Unity.

There are three variants of Unity: Unity Hybrid, Unity All Flash, and Unity VSA. The models are Unity 300/300F, 400/400F, 500/500F, and 600/600F. Models ending in “F” are all flash (SSDs only); the others are hybrid storage systems (flash + spinning disks). Unity VSA is a virtual appliance that can be deployed on vSphere. Now let us take a look at some of the important specifications of these models.

Specifications

The specifications listed here are for a system running Unity OE 4.1, aka Falcon.

| | Unity 300/300F | Unity 400/400F | Unity 500/500F | Unity 600/600F |
| --- | --- | --- | --- | --- |
| Processor | 2x Intel 6-core, 1.6 GHz | 2x Intel 8-core, 2.4 GHz | 2x Intel 10-core, 2.6 GHz | 2x Intel 12-core, 2.5 GHz |
| Memory (both SPs) | 48 GB | 96 GB | 128 GB | 256 GB |
| Minimum/Maximum drives | 5/150 | 5/250 | 5/500 | 5/1000 |
| Maximum raw capacity* | 2.34 PB | 3.91 PB | 7.81 PB | 9.77 PB |
| Max IO modules | 4 | 4 | 4 | 4 |
| Max number of pools | 20 | 30 | 40 | 100 |
| Max LUN size | 256 TB | 256 TB | 256 TB | 256 TB |
| Max file system size | 64 TB | 64 TB | 64 TB | 64 TB |
| Max LUNs per array | 1,000 | 1,500 | 2,000 | 6,000 |

*Maximum raw capacity may vary.

Supported disks

Unity Hybrid

  • SSD's
  • Spinning disk drives

Unity All Flash

  • SSD's

The solid state drives used in Unity are of eMLC and TLC type; the disks highlighted in bold are 1 WPD (write per day) disks.

Disk Processor Enclosure (DPE)

The DPE holds the storage processors (SPs), IO modules, and disks. Two variants of the DPE are available:

  • 25 Drive DPE that can hold 2.5″ disks (Available for hybrid and all flash array)
  • 15 Drive DPE that can hold 3.5″ disks (Only available for hybrid array)

As seen in the table, the SPs in each model have a different CPU model and a different amount of memory. Both types of DPE occupy 2U when mounted in a rack. The first four drives in the Unity DPE are called system drives; these drives contain the Unity OE (Operating Environment), and the remaining space on them can be used for storage pools. The minimum number of disks required to initialize the system is 5. On the rear side of the DPE we have: 2x storage processors (1 & 2 in the image), 4x onboard converged network ports (optical/Twinax) (3), 4x onboard 10 GbE Base-T RJ45 ports (4), 2x power supplies (5), IO module slots (6), 4x SAS ports for backend connections (7), and a management port and a service port (8). Here is a picture of the DPE:

Disk processor Enclosure. Image source: emc.com

The onboard Converged Network Adapter (CNA) ports can be configured for 16/8/4/2 Gbps Fibre Channel SFPs (multimode and single-mode) or 10 GbE optical using SFP+ and Twinax. The other two onboard ports on each SP are 10 GbE Base-T. All these onboard ports can be configured for block (FC/iSCSI) or file (NFS/CIFS) IO. Each SP also has a management port (to access Unisphere) and a service port (for service or engineering use).

Each SP can have two IO modules installed to expand front-end host connectivity. An IO module installed in a slot on SPA must match the module in the respective slot on SPB; there cannot be a mismatch. Unity supports the following IO modules:

  • 4-port 16 Gb Fibre Channel
  • 10 GbE Base-T
  • 1 GbE Base-T
  • 2-port 10 GbE optical (SFP+ and Twinax)
  • 4-port 10 GbE optical (SFP+ and Twinax)
  • 12 Gb SAS for backend expansion (only for Unity 500 and 600)

Note that Unity supports active Twinax cables only; there is no support for passive Twinax.

Protection space for cache (No Vault)

In the case of a storage processor failure, the cache contents are dumped to an M.2 SSD that resides inside each SP. If the cabinet loses power, each Unity SP contains a built-in battery backup unit (BBU) that can power the SP long enough to dump the cache contents to the M.2 SSD. The cache content is restored to the respective SP's cache when power is restored or the SP is replaced. The M.2 SSD also contains the Unity OE boot image.

Disk Array Enclosure (DAE)

The DAE holds drives, and the number of DAEs a model supports varies. Please refer to the specification table in this post to know the maximum number of drives (and hence DAEs) a system will support. There are two variants of DAE:

  • 25 Drive DAE that can hold 2.5″ disks (2U)
  • 15 Drive DAE that can hold 3.5″ disks (3U)

On the rear side, each DAE has 4 SAS ports (marked A & B) for DPE-to-DAE and DAE-to-DAE connections. The ports require mini-SAS HD connectors. Here are images of the 15- and 25-drive DAEs:

15 drive DAE. Image source: emc.com
25 drive DAE. Image source: emc.com

That’s all about Unity hardware. In the next post, we will take a closer look at software in Unity. Stay tuned! Click here to read part II.

Disclosure: I work for Dell EMC and this is not a promoted post.

Analyzing Top 5 Storage and Data Center tech predictions made in 2016

Last year in January, I published “Top 5 Storage and Data Center tech predictions for 2016”, predicting possible happenings in the storage world for 2016. Let's revisit the predictions and analyze how much of it has come true.

Magnetic storage disk numbers will decline in the enterprise space.

Dell EMC declared 2016 the “Year of All Flash” (YoAF) on February 29th, 2016, a month after the post was written. Throughout 2016, I witnessed the steps Dell EMC took to make it the YoAF. The company's biggest announcements in 2016 were the Unity All Flash array and the VMAX All Flash array. It also announced all flash versions of existing products such as Isilon, VxRAIL, and others. The situation was similar at other storage vendors, which indirectly drove enterprise SSD shipments up. AnandTech reported Q2 2016 SSD shipments up 41.2% YoY, after a 32.7% YoY increase in Q1. A report published in November 2016 by TRENDFOCUS shows a big increase in SSD adoption. Following is an excerpt from the TRENDFOCUS report:

“In the enterprise SSD world, PCIe units leaped a spectacular 101% from CQ2 ‘16, mainly due to emerging demand from hyperscale customers. Both SATA and SAS posted growth with 6% and 14% increases”

Another report, published by TRENDFOCUS in August 2016, says:

“On the enterprise SSD side, although SATA SSDs still dominate both hyperscale and system OEMs from a unit volume perspective (3.2 of the 4.07 million units shipped in CQ2 ’16), SAS SSDs saw a tremendous jump in capacity shipped in CQ2 ’16. Unit volume rose only 5.4%, but exabytes shipped increased a whopping 100.5% from the previous quarter. The move to higher capacities (like 3.84TB and even a few 15TB) is real. The storage networking companies taking SAS SSD solutions have embraced these higher capacity devices (helped by declining prices) at a surprising rate.” 

In summary, SATA SSDs shipped more during Q2, and in Q3 PCIe and SAS SSDs overtook SATA in growth. This led to a serious NAND drain. El Reg reported it with a funny headline: “What the Dell? NAND flash drought hits Texan monster – sources”. Thrice during the year, TRENDFOCUS reported on the NAND shortage. The NAND shortage caused SSD prices to go up in Q4 2016.

Enterprise flash drives with more storage space will appear and the cost will come down.

Enterprise SSD heavyweights such as Samsung, Toshiba, and others introduced larger-capacity TLC drives in 2016. Samsung's 15.36 TB TLC NAND device is the largest I have seen in production in a storage array. There were other TLC drives with capacities ranging from 3 TB to 15 TB. Most of these TLC SSDs are 1 WPD (write per day) devices. However, 3D XPoint is still not seen in enterprise storage arrays. Prices did come down during the beginning of 2016, but certain NAND device prices went up due to demand.
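As a side note, the 1 WPD rating translates directly into total endurance. Here is a quick back-of-the-envelope calculation (illustrative only; the 5-year warranty period is an assumption):

```python
# What "1 WPD" (one full drive write per day) implies for total endurance
# over an assumed 5-year warranty period.  Illustrative arithmetic only.

capacity_tb = 15.36          # drive capacity in TB
wpd = 1                      # rated full-drive writes per day
years = 5                    # assumed warranty period

tbw = capacity_tb * wpd * 365 * years   # total terabytes written
print(f"{capacity_tb} TB drive @ {wpd} WPD over {years} years ≈ {tbw:,.0f} TBW")
# 15.36 TB drive @ 1 WPD over 5 years ≈ 28,032 TBW (~28 PB)
```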

Adoption of NVMe and fabrics will kick start in the enterprise space

While NVMe is widely used in the high-performance computing space, it has not made its way into most common data centers yet. The reason is that NVMe over Fabrics (NVMe-oF) has not gained popularity as expected. Dennis Martin, president and founder of Demartek LLC, advises that nonvolatile memory express (NVMe) is coming to a storage system near you soon, and IT pros need to become familiar with the protocol. With DSSD, Dell EMC entered this space, and it will be very interesting to watch the advancements here. We are seeing companies adopt developer-centric infrastructure design, and these new protocols may become popular with such designs. Interestingly, it is the PC world that has embraced NVMe sooner than the enterprise: Samsung, Toshiba, MyDigitalSSD, Plextor, and others sell ultra-fast NVMe SSDs, currently the fastest SSDs being sold.

Software defined storage solutions will grow as cloud adoption increases

There is serious competition between Dell EMC ScaleIO and VMware VSAN; even though both belong to Dell Technologies, these two products are eating away the competition in SDS. SDS growth is mainly fueled by the rise of cloud technologies such as OpenStack and by HCI appliances. Dell EMC beefed up its HCI portfolio in 2016 with VxRAIL and VxRACK. VxRAIL runs entirely on VMware VSAN, while VxRACK runs on ScaleIO or VSAN; another iteration of VxRACK runs OpenStack with ScaleIO. These advancements fueled SDS growth. The report predicts that the global SDS market will grow at a CAGR of 31.62% during the period 2016-2020.

Enterprises will realize that cloud is not an alternative to traditional data center.

IMO, 2015 was the most important year for OpenStack. Everybody knew what OpenStack was, and some conscious customers went all in on it. Snapdeal.com made news in 2016: “Snapdeal launches its own cloud – Snapdeal Cirrus”. In mid-2015, Snapdeal realized that the public cloud stops being cost-effective beyond a certain scale, so it went on to build its own hybrid cloud, Cirrus. Cost was not the only factor that drove them down the hybrid cloud lane; the need for massive compute power to fuel flash sales, where millions of people buy on snapdeal.com, was also a reason. Cirrus has more than 100,000 cores, 500 TB of memory, and 16 PB of block and object storage built entirely on Ceph, uses SDN, spans 3 data centers, and so on. Infrastructure like this makes sense for a company that runs each part of its tech stack at large scale. Developer-centric IT will look at cloud-like infrastructure because it makes sense for them. As predicted earlier, traditional storage and compute tech will continue to exist in its own space.