Analyzing Top 5 Storage and Data Center tech predictions made in 2016

In January last year I published “Top 5 Storage and Data Center tech predictions for 2016”, predicting what might happen in the storage world during 2016. Let’s revisit those predictions and see how much of it actually came true.

Magnetic storage disk numbers will decline in the enterprise space.

Dell EMC declared 2016 the “Year of All Flash” (YoAF) on February 29th, 2016, a month after my post was written. Throughout 2016 I watched the steps Dell EMC took to make it so. The company’s biggest announcements of the year were the Unity All Flash and VMAX All Flash arrays, along with all-flash versions of existing products such as Isilon and VxRAIL. The situation was similar at other storage vendors, and all of this indirectly drove enterprise SSD shipments up. AnandTech reported that Q2 2016 SSD shipments were up 41.2% year over year, following a 32.7% rise in Q1. A report published in November 2016 by TRENDFOCUS shows a big jump in SSD adoption. Here is an excerpt from the TRENDFOCUS report:

“In the enterprise SSD world, PCIe units leaped a spectacular 101% from CQ2 ‘16, mainly due to emerging demand from hyperscale customers. Both SATA and SAS posted growth with 6% and 14% increases”

Another report published by TRENDFOCUS in August 2016 says,

“On the enterprise SSD side, although SATA SSDs still dominate both hyperscale and system OEMs from a unit volume perspective (3.2 of the 4.07 million units shipped in CQ2 ’16), SAS SSDs saw a tremendous jump in capacity shipped in CQ2 ’16. Unit volume rose only 5.4%, but exabytes shipped increased a whopping 100.5% from the previous quarter. The move to higher capacities (like 3.84TB and even a few 15TB) is real. The storage networking companies taking SAS SSD solutions have embraced these higher capacity devices (helped by declining prices) at a surprising rate.” 

In summary, SATA SSDs shipped in the greatest volume in Q2, while in Q3 PCIe and SAS SSDs overtook SATA in growth. This led to a serious NAND drain; El Reg reported it with a funny headline – “What the Dell? NAND flash drought hits Texan monster – sources”. Three times during the year TRENDFOCUS reported on the NAND shortage, which pushed SSD prices up in Q4 2016.

Enterprise flash drives with more storage space will appear and the cost will come down.

Enterprise SSD heavyweights such as Samsung and Toshiba introduced larger-capacity TLC drives in 2016. Samsung’s 15.36 TB TLC NAND device is the largest I have seen in production in a storage array, and other TLC drives ranged from roughly 3 TB to 15 TB. Most of these TLC SSDs are 1 DWPD (one full drive write per day) devices. However, 3D XPoint has still not appeared in enterprise storage arrays. Prices did come down at the beginning of 2016, but prices of certain NAND devices rose later in the year because of demand.
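To put that 1 DWPD rating in perspective, here is a quick back-of-the-envelope sketch of what it means in total bytes written; the capacity and warranty length below are illustrative assumptions, not vendor specifications:

```python
# Rough endurance math for a 1 DWPD (drive write per day) TLC SSD.
# Capacity and warranty period below are illustrative, not vendor figures.

capacity_tb = 15.36        # e.g. a 15.36 TB TLC drive
dwpd = 1                   # rated drive writes per day
warranty_years = 5         # typical enterprise warranty length (assumption)

total_writes_tb = capacity_tb * dwpd * 365 * warranty_years
print(f"Total host writes over warranty: {total_writes_tb:,.0f} TB "
      f"(~{total_writes_tb / 1024:.1f} PB)")
# -> roughly 28,000 TB (~27 PB) of host writes before the rated endurance is reached
```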

Adoption of NVMe and fabrics will kick start in the enterprise space

While NVMe is already widely used in the high-performance computing space, it has not yet made its way into the average data center. One reason is that NVMe over Fabrics (NVMe-oF) has not gained popularity as quickly as expected. Dennis Martin, president and founder of Demartek LLC, advises that Non-Volatile Memory Express (NVMe) is coming to a storage system near you soon and that IT pros need to become familiar with the protocol. With DSSD, Dell EMC entered this space, and it will be very interesting to watch the advancements here. We are also seeing companies adopt developer-centric infrastructure design, and these new protocols may gain popularity alongside it. Interestingly, it is the PC world that has embraced NVMe sooner than the enterprise: Samsung, Toshiba, MyDigitalSSD, Plextor, and others sell ultra-fast NVMe SSDs, currently the fastest SSDs on the market.
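If you want to see what NVMe hardware a Linux host already has, a minimal sketch like the following works against the usual sysfs layout for NVMe controllers (the paths and attributes are what recent kernels expose; adjust for your system):

```python
import glob
import os

# Enumerate NVMe controllers via Linux sysfs (assumes the usual
# /sys/class/nvme layout; adjust if your kernel exposes it differently).
for ctrl in sorted(glob.glob("/sys/class/nvme/nvme*")):
    def read(attr):
        path = os.path.join(ctrl, attr)
        try:
            with open(path) as f:
                return f.read().strip()
        except OSError:
            return "n/a"

    print(os.path.basename(ctrl),
          "model:", read("model"),
          "firmware:", read("firmware_rev"),
          "serial:", read("serial"))
```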

Software defined storage solutions will grow as the cloud adoption increases

There is serious competition between Dell EMC ScaleIO and VMware VSAN; even though both belong to Dell Technologies, the two products are eating away at the rest of the SDS market. SDS growth is mainly fueled by the rise of cloud technologies such as OpenStack and by HCI appliances. Dell EMC beefed up its HCI portfolio in 2016 with VxRAIL and VxRACK: VxRAIL runs entirely on VMware VSAN, VxRACK runs on ScaleIO or VSAN, and another VxRACK variant runs OpenStack with ScaleIO. These advancements fueled SDS growth. One market report predicts that the global SDS market will grow at a CAGR of 31.62% during the period 2016-2020.
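To put that 31.62% CAGR figure in context, here is the simple compounding arithmetic over the 2016-2020 window (purely an illustration of the quoted number, not an estimate of my own):

```python
# What a 31.62% CAGR means over the 2016-2020 period (4 compounding years).
cagr = 0.3162
years = 4  # 2016 -> 2020

growth_factor = (1 + cagr) ** years
print(f"Market size multiplier over the period: {growth_factor:.2f}x")
# -> roughly 3x: a market growing at that rate triples in four years
```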

Enterprises will realize that cloud is not an alternative to the traditional data center.

In my opinion 2015 was the most important year for OpenStack: everybody knew what OpenStack was, and some conscious customers went all in on it. Snapdeal.com made news in 2016 with “Snapdeal launches its own cloud – Snapdeal Cirrus“. In mid-2015 Snapdeal realized that public cloud stops being cost effective beyond a certain scale, so they went on to build their own hybrid cloud, Cirrus. Cost was not the only factor that drove them down the hybrid-cloud lane; they also needed massive compute power to fuel flash sales in which millions of people buy on snapdeal.com at once. Cirrus has more than 100,000 cores, 500 TB of memory, 16 PB of block and object storage built entirely on Ceph, software-defined networking, and it spans three data centers. Infrastructure like this makes sense for a company that runs every part of its tech stack at large scale. Developer-centric IT will only look at cloud-like infrastructure, because that is what makes sense for it. As predicted earlier, traditional storage and compute technology will continue to exist in its own space.

Top 5 Storage and Data Center tech predictions for 2016

Every few years we see a major shift in technology trends. With more Internet of Things devices comes more data, and with it the need for new ways of computing. In 1965, while working at Fairchild Semiconductor, Gordon Moore foresaw where the industry was headed; his observation became Moore’s Law, which has helped companies plan the software of tomorrow. This post is not an attempt to foresee the future, but an attempt to cover how technology trends may change the storage and data center industry in 2016.

1. Magnetic storage disk numbers will decline in the enterprise space.

It is well known that all-flash array (AFA) sales numbers are growing, and almost all storage vendors have at least one AFA in their portfolio. Some vendors convert or enhance their already popular storage array lineups with flash-only offerings; HPE 3PAR and EMC’s VNX2 arrays, for example, can be offered with only flash drives. Flash arrays come in different variants: general-purpose dual-controller arrays, AFAs with scale-out architecture, inline deduplication and compression, and so on. The general-purpose dual-controller flash array is finding its way into more small and medium data centers to serve traditional workloads, while arrays with more modern features such as scale-out and deduplication are being used for specific workloads. Therefore the adoption rate of AFAs may accelerate the decline of magnetic disks.

2. Enterprise flash drives with more storage space will appear and the cost will come down.

Most semiconductor manufacturers have already announced that they are shifting focus away from planar (2D) NAND. Most flash storage devices we use today are 2D NAND, and most importantly, the drives used in storage arrays are 2D NAND. In November 2015 HPE announced support for 3D NAND drives on their 3PAR series arrays, and in 2016 we will see most vendors do the same. We cannot say for certain that 2D NAND drive prices will come down drastically, but there will be a noticeable difference compared with the previous year.

3D NAND drives offer better capacity than 2D NAND drives. There is also another exciting new technology which Intel and Micron have introduced that is significantly better than 3D NAND: 3D XPoint (branded as Optane) is far denser than DRAM and leaps ahead of NAND in performance, and Intel and Micron claim their 3D XPoint drives are more durable than any SSD on the market today. A common misconception is that 3D NAND and 3D XPoint are the same. This is not true; the two technologies are entirely different. 3D NAND’s internal structure looks like a skyscraper, whereas 3D XPoint uses a dual-stack approach with a metal conductor sandwiched between the memory cells. The following image illustrates how a memory cell is accessed; the white bar is a metal conductor and each memory cell stores a single bit of data.

[Figure: 3D XPoint memory cell access]

My prediction is that we will see more 3D NAND drives appear on the market and gain support in storage arrays. For 3D XPoint it will take at least another year, because Optane drives use NVMe and a PCIe interconnect, neither of which is present in storage arrays today, except for EMC’s upcoming DSSD, which uses a PCIe interconnect.

3. Adoption of NVMe protocol and fabrics will kick start in the enterprise space

Non-Volatile Memory Express (NVMe) is a new storage protocol, and the NVM Express work group claims it is better than SCSI. NVMe has been in development since 2009, but only in 2015 did it become widely known, and interest in it has skyrocketed. Here are the Google Trends results for the search term “NVMe”:

[Figure: Google Trends for the search term “NVMe”]

NVMe presents many advantages over SCSI. It can use Ethernet or PCIe as a transport medium, and NVMe over FC fabric is a work in progress. Adoption of NVMe will kick-start in 2016 and continue to grow, and popular interfaces like SATA and SAS may become obsolete in the near future if that adoption holds.
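One concrete advantage worth spelling out is command queueing: AHCI/SATA allows a single queue of 32 commands, while the NVMe specification allows up to 64K queues of 64K commands each. A small sketch of the spec maximums (real devices expose far fewer queues):

```python
# Command queueing limits: AHCI/SATA versus the NVMe specification.
# These are spec maximums; shipping devices typically expose fewer queues.
protocols = {
    "AHCI/SATA": {"queues": 1, "commands_per_queue": 32},
    "NVMe":      {"queues": 65_535, "commands_per_queue": 65_536},
}

for name, p in protocols.items():
    outstanding = p["queues"] * p["commands_per_queue"]
    print(f"{name:>10}: {p['queues']} queue(s) x {p['commands_per_queue']} "
          f"commands = {outstanding:,} outstanding commands")
```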

During EMC World 2015, DSSD was demonstrated; DSSD is an enterprise storage array that uses the NVMe protocol over PCIe interconnects. This array outperforms every all-flash array on the market today and is expected to be generally available sometime in 2016. The NVMe protocol is not just used to access SSDs; it is used to access other non-volatile memory as well, such as NVRAM, a PCIe-based device used to extend the capabilities of RAM.

NVMe will slowly be adopted as an alternative to SAS, FC, and SATA. It is also possible for a storage controller to connect to its disk enclosures over PCIe, and NVMe over Ethernet and FC may eventually replace current host connectivity protocols. Therefore my prediction is that we will witness adoption of the NVMe protocol and NVMe fabrics by some enterprise storage systems.

4. Software defined storage solutions will grow as the cloud adoption increases.

A near-perfect solution does not exist in the market today. By completely transforming your storage infrastructure to server-based software-defined storage, you create compute and storage silos. A few hyper-converged appliances in the market today provide an appliance-based compute and storage solution, but their scalability is limited: it is not possible to expand the appliances beyond a certain point.

Although the challenges described above pose a threat, SDS is a perfect candidate for cloud-based infrastructure. For example, Ceph is the most used storage solution in OpenStack because it is open source and just requires commodity hardware. OpenStack also supports various storage arrays, but most people who adopt OpenStack do not want to use enterprise storage arrays (the popularity of Ceph is evidence of that). Existing supported storage arrays can be configured to connect with Cinder, Swift, and Manila.
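Once an operator has wired Ceph in as a Cinder backend, consuming it from OpenStack looks like any other block storage request. Here is a minimal sketch using the openstacksdk Python client; the cloud profile name and the “ceph” volume type are assumptions for illustration, since every deployment names these differently:

```python
import openstack

# Connect using a cloud profile defined in clouds.yaml
# ("mycloud" is a placeholder name).
conn = openstack.connect(cloud="mycloud")

# Create a 10 GB Cinder volume. If the operator mapped a volume type
# (e.g. "ceph") to the RBD backend, requesting it lands the volume on Ceph.
vol = conn.block_storage.create_volume(
    name="demo-rbd-volume",
    size=10,               # GB
    volume_type="ceph",    # assumed type name mapped to the RBD driver
)
print(vol.id, vol.status)
```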

During the recent OpenStack summit, users were asked to participate in a survey, and the results are published as a report after each summit. The survey asked the following question, with the result shown in the image below.

Which OpenStack Block Storage (Cinder) drivers are in use?

[Figure: Cinder driver usage; image originally appeared in the OpenStack user survey report]

The survey results showed that Ceph is the most preferred choice for block storage deployments, which confirms that software-defined storage is leading in OpenStack clouds. VMware EVO:RAIL is a hyper-converged appliance that uses VSAN by default; VMware has also partnered with a number of OEMs to offer their own variants of EVO:RAIL, but VSAN remains the only storage option. Similarly, EMC’s SDS offering is ScaleIO. EMC also has an open source SDS controller called CoprHD, which is based on the ViPR Controller and abstracts all the storage arrays in a data center; it supports EMC arrays as well as third-party arrays through the OpenStack Cinder driver.

My prediction is that rapid adoption of cloud and interest in SDS solutions will grow further in 2016.

5. Most importantly, companies will realize that cloud is not an alternative to a traditional data center.

When OpenStack first became widely known, most people thought of it as a replacement for the traditional data center. The truth is that OpenStack is for cloud-aware applications. Running your Microsoft Exchange server or a database on virtual machines (instances) is not a good idea when high availability of those instances is in question. One fundamental limitation on the block storage side is that OpenStack Cinder does not support shared volume access: a volume can be attached to only one instance at a time.

Mainframes enjoyed great market share in the 90s and the early 21st century; that lucrative market share was then threatened by the emergence of rack servers. Today’s servers are very much capable of doing what a mainframe can do, yet mainframes are still not out of the industry. Similarly, the emergence of cloud and cloud-aware applications will transform the IT industry, because the applications being developed solve real-world problems, but traditional computing infrastructure will continue to exist in its own space.

Note: This article is based on my own insights into storage technology, not on any market report or analysis.

Will Intel/Micron 3D XPoint change the game?

A few months ago I wrote a couple of posts about a very new technology called 3D XPoint (pronounced “3D cross point”). Intel and Micron have announced that drives using 3D XPoint technology will be branded “Optane” and the memory will be “XPoint Memory”. If you are not sure what 3D XPoint is, you can read my introductory post. Nearly a month after the initial announcement Intel and Micron revealed a little more detail about their new technology, so I made a follow-up post covering those details. Since then there has been no announcement of any kind; this post covers the latest developments.

Intel created a lot of buzz during IDF 2015 with a benchmark pitting an Optane SSD against one of Intel’s top P3700 series SSDs. The Optane drive performed nearly 5 times faster than its counterpart, which is seriously not a joke. Intel says Optane solid state drives and XPoint Memory are expected to be available in 2016.

What interconnects will Optane SSD’s use?

Well, definitely not SAS any time soon (it may not even be supported). Optane SSDs will be based on PCIe and Intel OmniPath.

Any news from enterprise storage industry?

No, nothing official yet. A few blogs and news websites have posted very misleading information saying that a number of enterprise storage vendors are going to support 3D XPoint. This is not true: storage vendors are planning to use 3D NAND SSDs next year, not Optane (as of now), and 3D NAND technology is not Optane.

Is there any standard yet?

Yes. The Storage Networking Industry Association (SNIA) has published a document on the Non-Volatile Memory (NVM) programming model. Click here to go to the document.

What changes would a server or storage array need in order to use XPoint memory or Optane SSDs?

First and foremost, the programming model: with XPoint Memory the processor gets access to non-volatile memory sitting much closer to the CPU. Second, the way the CPU addresses it: today NVMe provides non-volatile storage over PCIe, so the addressing model changes substantially once NVM sits on the memory bus. In-memory databases are definitely going to make use of non-volatile memory that approaches DRAM speed. The use cases are many, and gradually this will make an impact on the computing industry. Optane drives can easily fit in a server over a PCIe interconnect, so the adoption rate of the SSDs will be faster than that of the memory. We may therefore see NVM used mostly in high-performance computing, and it may not reach consumer devices for at least two years.
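The SNIA programming model essentially treats persistent memory as ordinary, byte-addressable memory rather than a block device. A rough user-space sketch of that idea is below; the /dev/pmem0 path is a placeholder assumption, and real deployments usually memory-map a file on a DAX-mounted filesystem instead:

```python
import mmap
import os

# Map a (hypothetical) persistent-memory device into the address space
# and update it with ordinary load/store-style access, no block I/O.
PMEM_PATH = "/dev/pmem0"   # placeholder; real setups often mmap a file on a DAX filesystem
LENGTH = 4096

fd = os.open(PMEM_PATH, os.O_RDWR)
try:
    buf = mmap.mmap(fd, LENGTH)
    buf[0:13] = b"hello, pmem!\n"   # byte-addressable write, like writing to RAM
    buf.flush()                      # ask the OS to make the update durable
    buf.close()
finally:
    os.close(fd)
```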

Hewlett Packard Enterprise (HPE) Labs director Martin Fink replied to a tweet asking whether their much-anticipated “The Machine” will compete with XPoint memory (The Machine is expected to have HPE’s own NVM based on memristors); I found his reply worth mentioning here.

With storage arrays the game is entirely different. Data centers are made up of compute, network, and storage, and a traditional storage array cannot have hundreds of PCIe devices; that is just not possible today. NVMe fabrics, however, may be considered. Why can’t they make use of OmniPath? Well, it took many years for data centers to adopt 10 GbE networking, and OmniPath aims to replace InfiniBand and Ethernet fabrics, so definitely not OmniPath for now. OmniPath is fairly new and best suited to high-performance computing (HPC) environments; Intel has created everything for the OmniPath fabric from scratch, with newly designed HBAs, switches, connectors, and so on.

But there is one factor which can get storage folks excited: Optane SSDs can very well find their way into server-based storage solutions, because most server-based storage solutions are “software defined”. In this type of solution, drives in many servers are pooled together and served out as storage. Since Optane SSDs will use PCIe and NVMe, servers equipped with them are going to rock for sure.

Cost

Cost is a very important factor that directly impacts adoption rates. For XPoint Memory to succeed it must be cheaper than DRAM. For Optane drives, the cost may be comparable to current-generation SSDs, but the expectation is more GB for the same price, because the memory cell density is significantly higher, which means more GB in the same space. Here is a graph predicting 3D XPoint device prices; the image originally appeared on electronicdesign.com.

[Graph: projected 3D XPoint device pricing] © electronicdesign.com
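The number to watch is cost per gigabyte. A trivial sketch of that arithmetic follows; every price in it is a made-up placeholder, purely to show the comparison, not real market data:

```python
# Hypothetical price points, purely illustrative -- not real market data.
devices = {
    "DRAM (16 GB module)":        (16, 100.0),
    "NAND SSD (512 GB drive)":    (512, 150.0),
    "Optane SSD (512 GB drive)":  (512, 400.0),   # guess: priced between DRAM and NAND
}

for name, (capacity_gb, price_usd) in devices.items():
    print(f"{name:28s} ${price_usd / capacity_gb:6.2f} per GB")
```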

How quickly can we see this happen?

In the case of a software-defined storage solution, three things must line up: the server hardware and the operating system must support Optane, and the storage software must be updated to support it. In the 21st century these things happen very quickly. In my opinion, wide adoption will take place during 2017.

Conclusion

Next year is going to be interesting, because nobody in the semiconductor industry has announced a breakthrough technology like this in a long time. Analyst reports suggest the technology will succeed. Intel and Micron intend to create a new market, and there will be tough competition in the non-volatile memory segment; next year will prove them right or wrong.

Thanks for reading. If you found this interesting or worth sharing, please spread the word on social media.

Everything known so far about Intel Optane

As I predicted in my previous post, Intel has announced that storage products using its 3D XPoint technology (co-developed by Micron and Intel) will be branded as Optane. If you are unsure what 3D XPoint is, I have tried to explain it here. The announcement was made recently at IDF 2015. So far what is known is that the SSD will use a PCIe interconnect. A quick demo of the performance difference between an Optane SSD and Intel’s P3700 series SSD was shown; the demo system was a PC with a transparent side panel, so the internal components were visible, and a monitor displayed two IOPS meters. The actual SSD was not shown.

The Optane SSD significantly outperformed its counterpart; the P3700 series is one of the world’s fastest SSDs available from Intel, and it is PCIe based. Three benchmark configurations were shown: a 70/30 read/write mix at queue depths of 8 and 1, and a read-only test. I think they used Iometer for this benchmarking. The higher the queue depth, the higher the latency; in Iometer, a queue depth of 1 means no queuing is used, and the performance of these SSDs is really impressive at these lowest queue-depth levels.

[Figures: 70/30 read/write mix at queue depth 8; 70/30 read/write mix at queue depth 1; read only at queue depth 8]
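Queue depth, latency, and IOPS are tied together by Little’s Law (outstanding I/Os = IOPS x latency), which is why the low-queue-depth results are so revealing. A quick illustration with made-up latency figures, not numbers from the IDF demo:

```python
# Little's Law: outstanding I/Os = IOPS * latency,
# so at a fixed queue depth, IOPS is bounded by queue_depth / latency.
def max_iops(queue_depth: int, latency_s: float) -> float:
    return queue_depth / latency_s

# Latency numbers below are illustrative, not from the IDF demo.
for qd in (1, 8):
    for name, latency_us in (("NAND SSD", 90), ("Optane-class SSD", 10)):
        iops = max_iops(qd, latency_us / 1e6)
        print(f"QD{qd:<2} {name:16s} ~{iops:,.0f} IOPS")
```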

Intel also said that storage controllers optimized for Optane SSDs will be made available, enabling manufacturers to integrate Optane SSDs in their servers (Xeon based), laptops, ultrabooks, and so on.

It is still unclear whether Optane SSDs will support interfaces like SAS (Serial Attached SCSI). Most enterprise storage systems currently use SAS as the backend interface for connecting to disks. A SAS controller will significantly reduce performance compared with a PCIe interface, but with the wide availability of 12 Gbps SAS and the massive parallelism of many backend disks in enterprise storage arrays, Optane SSDs would still make a huge positive difference in storage systems.
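The raw bandwidth gap between a single 12 Gb/s SAS lane and the x4 PCIe links NVMe drives typically use is easy to quantify with approximate per-lane figures (after line-encoding overhead):

```python
# Approximate usable bandwidth per lane, after line-encoding overhead.
sas12_lane_gbps = 12 * (8 / 10)      # one 12 Gb/s SAS lane, 8b/10b encoding -> ~9.6 Gb/s
pcie3_lane_gbps = 8 * (128 / 130)    # one PCIe 3.0 lane, 128b/130b encoding -> ~7.9 Gb/s

sas_drive = sas12_lane_gbps / 8      # a SAS SSD usually sits on one lane -> GB/s
nvme_drive = 4 * pcie3_lane_gbps / 8 # an NVMe SSD typically uses a x4 link -> GB/s

print(f"SAS-3 drive (one lane):     ~{sas_drive:.1f} GB/s")
print(f"NVMe drive (PCIe 3.0 x4):   ~{nvme_drive:.1f} GB/s")
```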

The interesting thing to note here is that server-based software-defined storage (SDS) solutions such as ScaleIO and VSAN use disks integrated in a group of servers to provide block storage volumes to other hosts. It is therefore highly possible that software-defined storage solutions will be the first in the enterprise storage industry to benefit from this brand new technology. As the world moves toward everything “software defined”, no wonder SDS solutions will get the leading edge.

Update

I have recently published a post covering the latest developments in 3D XPoint. Click here to read it.

Here is the full IDF 2015 keynote (the video has an audio lag):

A brief look into 3D XPoint – A new technology developed by Intel and Micron

In 2016 the world is going to witness a new product based on a revolutionary technology called 3D XPoint (pronounced “3D cross point”). 3D XPoint was developed by some of the brightest minds at Intel and Micron. To me this looks like Intel and Micron paving the way for a much brighter technological future; it somewhat resembles what Intel did in the 1980s and 1990s with microprocessors. This may sound like exaggeration, but once you understand the use cases I’m sure your view will change.

Why is 3D XPoint revolutionary?

Because it is 10 times denser than DRAM, 1,000 times faster than NAND, and 1,000 times longer lasting than NAND. And yes, 3D XPoint is non-volatile. This obviously sounds like a statement from the future, but it is not, and that is why it is revolutionary. If you have exposure to the enterprise storage industry and work with data day to day, this claim will raise some questions. When they say 1,000 times faster, my reaction is:

“Man, is that for real?”

In the storage industry we already have enterprise-class flash arrays delivering high performance with microsecond latency. Intel and Micron claim that devices made with 3D XPoint will have latency on the scale of nanoseconds. Yes, you read that right: nanoseconds.

Information about this new technology is not widely available yet. This post is a collation of information from the webcast and the resources made available by Intel and Micron. The complete architecture and internal working principles are currently unknown.

How is 3D XPoint positioned?

During the webcast, Mark Durcan, CEO of Micron, said that “only 7 different types of memory were made available in the past 5 decades”. This is correct, and here is the timeline of memory shown during the webcast.

[Figure: Timeline of memory technologies]

The timeline dates DRAM to the mid-1960s; a few years later Intel made headlines in the semiconductor industry by commercializing it, although the technology first reached Intel through its work for Honeywell (Intel was a contractor to Honeywell and supplied chips built to a design imposed by Honeywell). Intel then quickly introduced a new semiconductor device with its own unique design and characteristics, sold the design rights to some companies, and sold the product to many electronics makers. The major use cases of DRAM at that time were consumer electronics (mainly calculators), defense systems, and so on.

As depicted in the timeline, NAND was introduced back in 1989. A solid state drive (SSD) is a storage device that uses NAND flash memory to store data (0s and 1s). SSDs have recently gained huge popularity as prices continue to decline. Other NAND devices include small flash-based storage devices such as memory cards and USB flash drives.

The irony is that today’s modern electronics are built on aging memory technology.

3D XPoint is not an extension of existing memory and NAND technology; it is fundamentally different. The interesting fact, however, is that 3D XPoint can potentially replace both the memory and the storage in a computing system.

Architecture:

During the webcast, a very high-level overview of the design was given. 3D XPoint uses a new switch and memory cell design; the memory cells are stacked on top of one another, with a metal conductor separating each layer. This makes it possible to address each memory cell individually instead of having to access an entire block, as NAND does.

[Figure: Cross Point architecture]

So far what is known is that the bars in green and yellow are memory cells and the blue bar is a metal conductor. To read or write data (1s and 0s) in a memory cell, the respective conductor is used, which makes it possible to address an individual memory cell. The picture also shows a layered approach: Intel and Micron said their initial plan is to ship chips with two stacks, as shown in the image. This approach is called the “cross point architecture”, hence the name 3D XPoint.
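To make the addressing idea concrete, here is a toy software model of a cross-point array: a cell is selected by picking one word line and one bit line, so any single cell can be read or written without touching a block. This is only an illustration of the addressing scheme, not of how the actual silicon works:

```python
# Toy model of a cross-point array: each cell sits at the intersection of a
# word line (row) and a bit line (column) and is addressed individually.
class CrossPointArray:
    def __init__(self, wordlines: int, bitlines: int):
        self.cells = [[0] * bitlines for _ in range(wordlines)]

    def write(self, wordline: int, bitline: int, bit: int) -> None:
        # Select one word line and one bit line; only that cell changes.
        self.cells[wordline][bitline] = bit & 1

    def read(self, wordline: int, bitline: int) -> int:
        return self.cells[wordline][bitline]

array = CrossPointArray(wordlines=4, bitlines=4)
array.write(2, 3, 1)            # flip a single cell
print(array.read(2, 3))         # -> 1, no block erase involved
```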

Details such as how it is 1,000 times faster than NAND (for reads, writes, or both?) and how the higher endurance is achieved are currently unknown. However, Intel and Micron confirmed that their first product will store 128 Gb of data per die (single chip), with each die containing 128 billion memory cells; at one bit per cell, that works out to 16 GB per die.

[Figure: an individual die can store 128 Gb of data]

Performance

Scalability:

If we apply Moore’s Law, will the memory cell density and the number of layers (stacks) increase over time? The answer is a qualified “yes”. During the webcast, Rob Crooke, SVP at Intel, said that over time both the density of the memory cells and the number of stacks will increase, and that they plan to fit multiple stacks in the same space. Both of these are purely capacity improvements; performance scalability is still an open question.

Production:

Intel and Micron confirmed that the first device using this technology will be available in 2016. What we don’t know is what type of device will make use of 3D XPoint first: it could be a mobile phone, a new gaming console, or perhaps a raw storage device. Intel and Micron showcased a wafer fabricated in their labs.

[Figure: 3D XPoint wafer (screen grab from a video); not sure how many dies are on this wafer]

Where can this be used?

As I mentioned earlier in the post, this technology can replace both the memory and the storage in a computing system of any form, so the possible use cases are huge. Think of a new computer or a mobile phone using this technology: it would be blazing fast, utilizing the processor to its fullest extent. Another use case is raw storage media: if a storage array used these drives in the back end, you can imagine the mind-blowing IOPS count one could achieve with ultra-low latency on the scale of nanoseconds. I don’t see any reason for OLTP workloads and applications to be unhappy.

If it is implemented in a server or PC, the fundamental design of the system board may change, and so may the addressing schemes of operating systems.

Conclusion:

By 2020 data is expected to explode to 44 zettabytes, and technology like 3D XPoint will enable that growth. The very important point here is the increase in both performance and capacity while remaining non-volatile; that combination simply is not possible today.

Here is a graph comparing 3D XPoint with the technologies currently in use:

[Figure: 3D XPoint compared with current memory and storage technologies]

3D XPoint is far denser than DRAM and performs far faster than NAND.

Here are some cool facts:

The following facts are extracted from the Intel press release; links are at the end of the post.

3D XPoint Technology Performance

  • HDD latency is measured in milliseconds, NAND latency is measured in microseconds, and 3D XPoint technology latency is measured in nanoseconds (one-billionth of a second).
  • In the time it takes an HDD to sprint the length of a basketball court, NAND could finish a marathon, and 3D XPoint technology could nearly circle the globe.
  • If computer storage were modes of travel:
    • HDDs could take you from New York to Los Angeles by car in 4 days (2,500 miles).
    • SSDs could get you to the moon in the same amount of time (240,000 miles).
    • 3D XPoint technology could get you to Mars and back in the same time (280 million miles).
  • 3D XPoint technology is up to 1,000x faster than NAND.
    • The average daily commute of Americans would reduce from 25 minutes in traffic to 1.5 seconds.
    • Traveling by plane from San Francisco to Beijing could happen in about 43 seconds, instead of the 12 hours it takes now.
    • The Great Wall of China could have been built in 73 days instead of 200 years.

3D XPoint Technology Endurance

  • 3D XPoint technology has up to 1,000x the endurance of NAND.
    • If 3D XPoint technology were your car’s engine oil, you would need an oil change a lot less often: once every 3,000,000 miles – the equivalent of driving around the world at the equator 120 times, or close to once around the sun.
    • If your car got 1000x the gas mileage, the average driver would fill up once every 25 years.
  • A consumer-grade SSD can write 40 gigabytes per day, enough to write 8.6 copies of the Encyclopaedia Britannica or 10,000 MP3 files to the drive, every day for five years.
    • An SSD with up to 1,000x increase in endurance could write the entire printed collection of the U.S. Library of Congress (20TB) twice every day. After five years, that’s the equivalent of 1.46 billion standard four-drawer file cabinets full of text or 73 petabytes.

Update:

As I predicted in this post, Intel announced that storage products using 3D XPoint technology will be branded as Optane. The announcement was made at IDF 2015. So far what is known is that the SSD will use a PCIe interconnect. A quick demo of the performance difference between an Optane SSD and Intel’s P3700 series SSD was shown, and as expected the Optane SSD significantly outperformed its counterpart. Intel also said that storage controllers optimized for Optane SSDs will be made available, enabling manufacturers to integrate Optane SSDs in their servers (Xeon based), laptops, ultrabooks, and so on.

It is still unclear whether Optane SSDs will support interfaces like SAS (Serial Attached SCSI). Most enterprise storage systems currently use SAS as the backend interface for connecting to disks.

The interesting thing to note here is that server-based software-defined storage (SDS) solutions such as ScaleIO and VSAN use disks integrated in a group of servers to provide block storage volumes to other hosts. It is therefore highly possible that software-defined storage solutions will be the first in the enterprise storage industry to benefit from this brand new technology.

More detailed information on Optane is published in a separate post. Click here to read it.

Referenced URLs to write this post:

  1. Intel press release – http://goo.gl/P7slzC
  2. Memory timeline – http://goo.gl/mbhMLZ
  3. Micron press release – http://goo.gl/I0P8ML