Security vs Protection – The Same, but Different

Though the words “security” and “protection” are mostly interchangeable in regular use of the English language, when talking about data, it’s a different story.

When we talk about data security, we are referring to securing data from becoming compromised due to an external, premeditated attack. The most well-known examples are malware and ransomware attacks.

Data protection, however, refers to protecting data against corruption usually caused by an internal factor such as human error or hardware failures. We generally address data protection by way of backup or replication – creating accessible versions of the data that may be stored on different media and in various locations.

Of course, these backups can be used for data recovery in either scenario.

 

Under attack

We have seen a dramatic rise in ransomware attacks in recent years, with startling results. According to the FBI, in Q1 of 2016, victims paid $209M to ransomware criminals. Intermedia reported that 72% of companies infected with ransomware cannot access their data for at least 2 days, and 32% lose access for 5 days or more. According to a July 2016 Osterman Research survey, nearly 80% of organizations breached have had high-value data held for ransom.

 

So what is ransomware?

Ransomware is a form of malware that is covertly installed on a victim’s computer and adversely affects it, often by encrypting the data and making it unavailable until a ransom is paid to receive the decryption key or prevent the information from being published.

Most infamously, Sony fell victim two years ago to a crippling attack that shut down its computers and email systems; sensitive information was published on the web. The Sony breach was a watershed moment in the history of cyber attacks. It is believed that the attackers were inside Sony’s network for over 6 months, giving them plenty of time to map the network and identify where the most critical data was stored.

The attack unfolded over a 48-hour period. It began by destroying Sony’s recovery capability: backup media targets and the associated master and media servers were destroyed first, then the attack moved to the DR environment. Only after it had crippled the recovery capabilities did the attack target the production environment. After Sony recognized the attack, they turned to their Data Protection infrastructure to restore the damaged systems, but they had lost their ability to recover. Sony was down for over 28 days and never recovered much of its data.

In Israel, the Nazareth Illit municipality was recently paralyzed by ransomware; its critical data remains locked until the municipality pays the attackers’ ransom.

 


What do we propose?

While Dell EMC offers a range of products and solutions for backup and recovery on traditional media such as tape and disk, data increasingly sits in publicly accessible domains such as networks, heightening the threat to data security. To address this shift in data storage, in particular the growing trend toward application development and storage in the cloud, Dell EMC is applying its decades of experience securing data under the most stringent requirements, along with the most robust and secure technology set in the market, to architect and implement new solutions. These technologies will lock hackers out of critical data sets and secure a path to quick business recovery. One such solution is the Isolated Recovery Solution (IRS).

IRS 101

Essentially, IRS creates an isolated environment to protect data from deletion and corruption while allowing for a quick recovery time. It comprises the following concepts:

  • Isolated systems so that the environment is disconnected from the network and restricted from users other than those with proper clearance.
  • Periodic data copying whereby software automates data copies to secondary storage and backup targets. Procedures are put in place to schedule the copy over an air gap* between the production environment and the isolated recovery area.
  • Workflows to stage copied data in an isolated recovery zone and periodic integrity checks to rule out malware attacks.
  • Mechanisms to trigger alerts in the event of a security breach.
  • Procedures to perform recovery or remediation after an incident.
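The periodic integrity checks above can be sketched in a few lines. This is an illustrative Python sketch, not Dell EMC’s implementation; `fingerprint` and `verify_copies` are hypothetical names, and a real IRS would run hardened tooling inside the isolated zone:

```python
import hashlib
from pathlib import Path

def fingerprint(path: Path) -> str:
    """SHA-256 digest of a file's contents, read in 1 MiB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_copies(manifest: dict, staging_dir: Path) -> list:
    """Compare staged copies against a trusted manifest of digests.

    Any mismatch is flagged for investigation: it may indicate
    corruption, or a copy tampered with by malware."""
    return sorted(
        name for name, expected in manifest.items()
        if fingerprint(staging_dir / name) != expected
    )
```

A check like this only has value if the manifest itself is stored inside the isolated zone, out of an attacker’s reach.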

*What is an air gap?

An air gap is a security measure that isolates a computer or network and prevents it from establishing an external connection. An air-gapped computer is neither connected to the Internet nor any systems that are connected to the Internet. Generally, air gaps are implemented where the system or network requires extra security, such as classified military networks, payment networks, and so on.

Let’s compare an air gap to a water lock used for raising and lowering boats between stretches of water of different levels on a waterway. A boat traveling upstream enters the lock; the lower gates close; the lock fills with water from upstream, raising the boat; then the upper gates open and the boat exits the lock.

In order to transfer data securely, air gaps are opened for scheduled periods of time during actual copy operations to allow data to move from the primary storage to the isolated storage location. Once the replication is completed, the air gap is closed.
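That open-copy-close sequence can be expressed as a small guard. This is a hypothetical Python sketch: `open_gap` and `close_gap` stand in for whatever vendor-specific mechanism actually toggles the replication link (a switch port, a firewall rule):

```python
from contextlib import contextmanager

@contextmanager
def copy_window(open_gap, close_gap):
    """Open the air gap only for the duration of a copy operation.

    The `finally` clause guarantees the gap closes again even if
    the replication job fails partway through."""
    open_gap()
    try:
        yield
    finally:
        close_gap()
```

A replication job would run inside `with copy_window(...):`, so the isolated storage is reachable only while the scheduled copy is in flight.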

 

Dell EMC’s Data Domain product currently offers a retention lock feature preventing the deletion of files until a predefined date. IRS takes such capabilities further. The solution will continue to evolve to simplify deployment and provide security against an even broader range of attacks (rogue IT admins, for example), and IRS solutions will make life more difficult for hackers and data more secure. In IT, “security” and “protection” have been treated as two independent, orthogonal concepts. The new, destructive style of attack changes that relationship: the security and protection teams must partner to deliver a coherent solution.

 

~Assaf Natanzon @ANatanzon

Managing Your Computing Ecosystem Pt. 2


Overview

We are making strides toward universal and interoperable management interfaces. These are not only interfaces that will interoperate across one vendor or one part of the stack, but management interfaces that can truly integrate your infrastructure management. Last time, we discussed OData, the Rest standardization. This time we will talk about Redfish for managing hardware platforms.

Redfish

Redfish defines a simple and secure, OData conformant data service for managing scalable hardware platforms. Redfish is defined by a set of open industry standard specifications that are developed by the Distributed Management Task Force, Inc. (DMTF).

The initial development was from the point of view of a Baseboard Management Controller (BMC) or equivalent. Redfish management currently covers bare-metal discovery, configuration, monitoring, and management of all common hardware components. It is capable of managing and updating installed software, including the operating system and device drivers.

Redfish is not limited to low-level hardware/firmware management. It is also expected to manage higher-level functionality, including configuration and management of containers and virtual systems. In collaboration with the IETF, Redfish is also being extended to include management of networks.

The Redfish Scalable Platforms Management API Specification specifies functionality that can be divided into three areas: OData extensions, utility interfaces, and platform management interfaces. These are described briefly in the following sections.

Redfish OData extensions

Redfish requires at least OData v4 and specifies some additional constraints:

  • Use of HTTP v1.1 is required, with support for POST, GET, PATCH, and DELETE operations, including requirements on many HTTP headers
  • JSON representations are required within payloads
  • Several well-known URIs are specified
    • /redfish/v1/ returns the ServiceRoot resource for locating resources
    • /redfish/v1/odata returns the OData service document for locating resources
    • /redfish/v1/$metadata returns the OData metadata document for locating the entity data model declarations.
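For illustration, discovery begins with a GET on the well-known service root. A minimal sketch (the BMC hostname is a placeholder, and `session` is any HTTP client with a `get` method, such as a `requests.Session`):

```python
def get_service_root(base_url: str, session) -> dict:
    """Fetch the Redfish ServiceRoot, the entry point for discovery.

    `session` is any HTTP client exposing a `get(url)` method that
    returns a response with `raise_for_status()` and `json()`."""
    resp = session.get(f"{base_url}/redfish/v1/")
    resp.raise_for_status()
    return resp.json()

# Example usage against a real controller (hostname is a placeholder):
#   import requests
#   with requests.Session() as s:
#       root = get_service_root("https://bmc.example.com", s)
#       # Collections such as Systems and Chassis hang off the root.
#       print(root["Systems"]["@odata.id"])
```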

Redfish also extends the OData metamodel with an additional vocabulary for annotating model declarations. The annotations specify information about, or behaviors of the modeled resources.

Redfish utility interfaces

The utility interfaces provide functionality that is useful for any management domain (for example, these interfaces are used by Swordfish for storage management). These interfaces include account, event, log, session, task, and update management.

The account service manages access to a Redfish service via manager accounts and roles.

The event service provides the means to specify events and to subscribe to indications when a defined event occurs on a specified set of resources. Each subscription specifies where indications are sent; this can be a listening service or an internal resource (e.g. a log service).

Each log service manages a collection of event records, including size and replacement policies. Resources may have multiple log services for different purposes.

The session service manages sessions and enables creation of an X-Auth-Token representing a session used to access the Redfish service.
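Per the Redfish specification, a session is created by POSTing credentials to the session collection, and the token comes back in the X-Auth-Token response header. A sketch of that exchange (the helper names are invented for illustration):

```python
def build_session_request(base_url: str, username: str, password: str):
    """URL and JSON body for the POST that creates a Redfish session.

    A conformant service answers with an X-Auth-Token response
    header; later requests present the token instead of credentials."""
    url = f"{base_url}/redfish/v1/SessionService/Sessions"
    body = {"UserName": username, "Password": password}
    return url, body

def auth_headers(token: str) -> dict:
    """Headers for subsequent requests within the session."""
    return {"X-Auth-Token": token}
```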

The task service manages tasks that represent independent threads of execution known to the Redfish service. Typically, tasks are spawned as a result of a long-running operation.

The update service provides management of firmware and software resources, including the ability to update those resources.

Redfish platform management interfaces

The principal resources managed by a Redfish service are chassis, computer systems and fabrics. Each resource has its current status. Additionally, each type of resource may have references to other resources, properties defining the current state of the resource, and additional actions as necessary.

Each chassis represents a physical or logical container. It may represent a sheet-metal confined space like a rack, sled, shelf, or module. Or, it may represent a logical space like a row, pod, or computer room zone.

Each computer system represents a computing system and its software-visible resources, such as memory, processors and other devices that can be accessed from that system. The computer system can be a general-purpose system or a specialized system such as a storage server or a switch.

Each fabric represents a collection of zones, switches and related endpoints. A zone is a collection of involved switches and contained endpoints. A switch provides connectivity between a set of endpoints.

All other subsystems are represented as resources that are linked via one or more of these principal resources. These subsystems include: bios, drives, endpoints, fans, memories, PCIe devices, ports, power, sensors, processors and various types of networking interfaces.

Conclusion

Redfish delivers a standardized management interface for hardware resources. While it is beginning with basic functionality like discovery, configuration and monitoring, it will deliver much more. It will extend into richer services and cover more than physical resources, including virtual systems, containers, and networks. Redfish is built as an OData conformant service, which makes it the second connected part of an integrated management API stack. Next up – Swordfish.

~George Ericson @GEricson

Managing Your Computing Ecosystem


Overview

There is a very real opportunity to take a giant step towards universal and interoperable management interfaces that are defined in terms of what your clients want to achieve. In the process, the industry can evolve away from the current complex, proprietary and product specific interfaces.

You’ve heard this promise before, but it’s never come to pass. What’s different this time? Major players are converging storage and servers. Functionality is commoditizing. Customers are demanding it more than ever.

Three industry-led open standards efforts have come together to collectively provide an easy to use and comprehensive API for managing all of the elements in your computing ecosystem, ranging from simple laptops to geographically distributed data centers.

This API is specified by:

  • the Open Data Protocol (OData) from Oasis
  • the Redfish Scalable Platforms Management API from the DMTF
  • the Swordfish Scalable Storage Management API from the SNIA

One can build a management service that is conformant to the Redfish or Swordfish specifications that provides a comprehensive interface for the discovery of the managed physical infrastructure, as well as for the provisioning, monitoring, and management of the environmental, compute, networking, and storage resources provided by that infrastructure. That management service is an OData conformant data service.

These specifications are evolving and certainly are not complete in all aspects. Nevertheless, they are already sufficient to provide comprehensive management of most features of products in the computing ecosystem.

This post and the following two will provide a short overview of each.

OData

The first effort is the definition of the Open Data Protocol (OData). OData v4 specifications are OASIS standards that have also begun the international standardization process with ISO.

Simply asserting that a data service has a Restful API does nothing to assure that it is interoperable with any other data service. More importantly, Rest by itself makes no guarantee that a client of one Restful data service will be able to discover, or even navigate, the Restful API presented by some other data service.

OData enables interoperable utilization of Restful data services. Such services allow resources, identified using Uniform Resource Locators (URLs) and defined in an Entity Data Model (EDM), to be published and edited by Web clients using simple HTTP messages.  In addition to Redfish and Swordfish described below, a growing number of applications support OData data services, e.g. Microsoft Azure, SAP NetWeaver, IBM WebSphere, and Salesforce.

The OData Common Schema Definition Language (CSDL) specifies a standard metamodel used to define an Entity Data Model over which an OData service acts. The metamodel defined by CSDL is consistent with common elements of the UML v2.5 metamodel, which enables reliable translation to the programming language of your choice.

OData standardizes the construction of Restful APIs. OData provides standards for navigation between resources, for request and response payloads, and for operation syntax. It specifies the discovery of the entity data model for the accessed data service, and how resources defined by that model can be discovered. While it does not standardize the APIs themselves, OData does standardize how payloads are constructed, along with a set of query options and many other items that often differ across current Restful data services. OData specifications utilize standard HTTP, AtomPub, and JSON, and standard URIs are used to address and access resources.
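As an illustration of those standardized query options, the system options ($filter, $select, $top, and so on) compose onto any entity-set URL. A small sketch; the service and entity-set names below are made up:

```python
from urllib.parse import urlencode

def odata_query(entity_set_url: str, **options) -> str:
    """Compose an OData query URL from system query options.

    Keyword names map to the $-prefixed options that OData
    standardizes across conformant services ($ is percent-encoded
    as %24, which services accept per normal URL-encoding rules)."""
    params = {f"${name}": value for name, value in options.items()}
    return f"{entity_set_url}?{urlencode(params)}"

url = odata_query(
    "https://host/service/Disks",   # hypothetical entity set
    filter="CapacityGB gt 1000",
    select="Id,Name",
    top=10,
)
```

The same option names work against any OData conformant service, which is the interoperability point being made here.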

The use of the OData protocol enables a client to access information from a variety of sources including relational databases, servers, storage systems, file systems, content management systems, traditional Web sites, and more.

Ubiquitous use will break down information silos and will enable interoperability between producers and consumers. This will significantly increase the ability to provide new and richer functionality on top of the OData services.

The OData specifications define:

  • the protocol itself: the operations, headers, and semantics for interacting with a data service (OData Version 4.0 Part 1: Protocol)
  • URL conventions for addressing resources and composing queries (Part 2: URL Conventions)
  • the Common Schema Definition Language used to declare entity data models (Part 3: CSDL)
  • the JSON representation of request and response payloads (OData JSON Format)

Conclusion

While Rest is a useful architectural style, it is not a “standard” and the variances in Restful APIs to express similar functions means that there is no standard way to interact with different systems. OData is laying the groundwork for interoperable management by standardizing the construction of Restful APIs. Next up – Redfish.

 

~George Ericson @GEricson

 

Data Protection for Public Cloud Environments


In late 2015 I was researching the options available to protect application workloads running in public cloud environments. In this post I will discuss my findings, and what we are doing at Dell EMC to bring Enterprise grade Data Protection Solutions to workloads running in public cloud environments.

 

To understand how Data Protection applies to public cloud environments, we need to recognize that Data Protection can occur at different layers in the infrastructure. These include the server, storage, hypervisor (if virtualized), application and platform layer. When we implement Data Protection for on premises environments, our ability to exercise Data Protection functions at any one of these layers depends upon the technologies in use.

 

At the server layer, we typically deploy an agent-based solution that manages the creation of Data Protection copies of the running environment. This method can be used for virtualized, bare metal and even containerized environments that persist data.

 

At the application layer we typically rely on the applications’ native data protection functions to generate copies (usually to file system or pseudo media targets). Examples of this can include database dumps to local or remote disk storage. We can go a step further and embed control-and-data path plugins into the application layer to enable the application’s native data protection methods to interface with Data Protection storage for efficient data transfer and Data Protection management software for policy, scheduling, reporting and audit purposes.

 

Like the server approach, the application native approach is agnostic to the platform the application is running on, be it virtualized, bare metal or containerized, in public or private cloud environments. Where things get interesting is when we start leveraging infrastructure layers to support Data Protection requirements. The most common infrastructure layers used are the hypervisor and storage-centric Data Protection methods.

 

A by-product of infrastructure methods is that they require privileged access to the infrastructure’s interfaces to create protection copies. In private cloud environments, this requires coordination and trust between the Backup or Protection Administrator and the Storage or Virtualization Administrator; this access is often negotiated when the service is first established. In Public Cloud environments there is no Storage or Virtualization Administrator we can talk with to negotiate access: these layers are off limits to consumers of the Public Cloud. If we want to exercise Data Protection at these layers, we have to rely on the services that the Public Cloud provider makes available. These services are often referred to as Cloud-based Data Protection.

 

For example, Amazon Web Services (AWS) offers snapshots of Elastic Block Storage (EBS) volumes to S3 storage. This provides protection of volumes at the block-level. Microsoft Azure offers snapshots of VM’s to Azure Blob Storage and the Azure backup service for VM instances running the Windows Operating Systems.
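The EBS snapshot service, for instance, is driven through the EC2 API. A sketch using boto3 (the volume id is a placeholder; credentials and region configuration are assumed):

```python
def snapshot_volume(ec2_client, volume_id: str, description: str) -> str:
    """Create a point-in-time EBS snapshot (stored in S3) and return its id."""
    resp = ec2_client.create_snapshot(VolumeId=volume_id,
                                      Description=description)
    return resp["SnapshotId"]

# Example usage (requires boto3 and configured AWS credentials):
#   import boto3
#   ec2 = boto3.client("ec2")
#   snap_id = snapshot_volume(ec2, "vol-0123456789abcdef0",
#                             "nightly protection copy")
```

Note that the resulting copy lives inside AWS, in Amazon’s format; it cannot be exported as-is to another platform.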

 

A common property of Cloud-based Data Protection services, and infrastructure-centric protection methods for that matter, is that they are tightly coupled. Tight coupling means the technologies and methods are highly dependent on one another to function, which allows the method to perform at peak efficiency. For example, the method can track the underlying data that is changing in the virtual machine instance and, when appropriate, copy only the data that has changed since the last copy.

 

Tightly coupled methods have gained popularity in recent years simply because data volumes continue to grow to the extent that traditional methods are struggling to keep up. However, there are some important trade-offs being made when we bet the business solely on tightly coupled Data Protection methods.

 

Tight coupling trades flexibility for efficiency. In other words, we can have a very efficient capability, but it is highly inflexible. In the case of Data Protection, a solution focused on flexibility frees the data copies from the underlying infrastructure. For example, in the case of AWS snapshot copies to S3, the copies are forever tied to the public cloud platform. This is a critical point that requires careful attention when devising a Public Cloud Data Protection strategy.

 

The best way I can describe the implications is to compare the situation to traditional on premises Data Protection methods. With on premises solutions, you are in full control of the creation, storage and recovery processes. For example, let us assume you have implemented a protection solution using a vendor’s product. This product would normally implement and manage the process of creating copies and storing these copies on media in the vendor’s data format (which in modern times is native to the application being protected). The property we usually take for granted here is that we can move these copies from one media format to another, or from one location to another. We can also recover them to different systems and platforms. This heterogeneity offers flexibility, which enables choice: the choice to change our mind or adjust our approach to managing copies as conditions change. For example, with loosely coupled copies, we can migrate them from one public cloud provider’s object storage (e.g. AWS S3) to another’s (Azure Blob Storage), or even back to private cloud object storage (Elastic Cloud Storage), if we decide to bring certain workloads on premises.

 

Despite these trade-offs, there are very good reasons to use a public cloud provider’s native Data Protection functions. For example, if we want fast full VM recovery back to the source, we would be hard pressed to find a faster solution. However, cloud-native solutions do not address all recovery scenarios and lack flexibility. To mitigate these risks, a dual approach is often pursued that addresses the efficiency, speed, and flexibility required by Enterprise applications, whether in public, private or hybrid cloud models.

 

My general advice to customers is to leverage tightly coupled Data Protection methods for short-lived Data Protection requirements, along with loosely coupled methods. In the case of Public Cloud models, this requires the deployment of software technologies (or hardware, via services like Direct Connect and ExpressRoute) that are not tied to the Public Cloud provider’s platform or data formats. As a consumer of Public Cloud services, this will afford you the flexibility to free your data copies if need be, in future.

 

Our Strategy

 

At Dell EMC we recognize that customers will deploy workloads across a variety of cloud delivery models. These workloads will require multiple forms of Data Protection, based on the value of the data and the desire to maintain independence from the underlying infrastructure or platform hosting the workloads.

 

Our strategy is to provide customers Data Protection everywhere. This protection will be delivered via multiple avenues, including orchestrating the control path of Public Cloud providers’ native solutions while allowing the Public Cloud to host and manage the data path and storage. For workloads that require ultimate flexibility and independent Data Protection copies, we will also manage the data path and storage, enabling copies to remain agnostic to the cloud vendor. Furthermore, for customers that choose to consume SaaS-based solutions, we will continue to work with SaaS vendors to expand our existing SaaS Data Protection offering to export and manage data copies using the vendors’ available APIs, to the extent possible.

 

Ultimately, customers will choose which path they take. Our strategy is to ensure our Data Protection solutions allow customers to take any path available to them.

 

~Peter Marelas @pmarelas

Mathematics, Big Data, and Joss Whedon


Definition 1: The symmetric difference of two sets A and B, denoted A Δ B, is the set of elements in either of A and B, but not in their intersection.
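In Python, the `^` operator on sets computes exactly this (the elements below are invented for illustration):

```python
A = {"measure theory", "topology", "analytics", "machine learning"}
B = {"dashboards", "A/B testing", "analytics", "machine learning"}

sym_diff = A ^ B  # elements in exactly one of A, B
# Equivalent formulation: union minus intersection.
assert sym_diff == (A | B) - (A & B)
```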

Let A be “Mathematics”, and let B be “Data Science”. This is certainly not the first article vying for attention with the latter buzzword, so I’ll go ahead and insert a few more here to help boost traffic and readership:

Analytics, Machine Learning, Algorithm,

Neural Networks, Bayesian, Big Data

These formerly technical words (except that last one) used to live solidly in the dingy faculty lounge of set A. They have since been distorted into vague corporate buzzwords, shunning their well-defined mathematical roots for the sexier company of “synergy”, “leverage”, and “swim lanes” at refined business luncheons. All of the above words have allowed themselves to become elements of the nebulous set B: “Data Science”. As the entire corporate and academic world scrambles to rebrand itself as a member of Big Data™, allow me to pause the chaos in order to reclaim set A.

This isn’t to say that set B is without its merits. Data Science is Joss Whedon, making the uncool comic books so hip that Target sells T-shirts now. The advent of powerful computational resources and a worldwide saturation of data have sparked a mathematical revival of sorts. (It is actually possible for university mathematics departments to receive funding now.) Data Science has inspired the development of methods for quantifying every aspect of life and business, many of which were forged in mathematical crucibles. Data science has built bridges between research disciplines, and sparked some taste for a subject that was previously about as appetizing to most as dry Thanksgiving turkey without gravy. Data science has driven billions of dollars in sales across every industry, customized our lives to our particular tastes, and advanced medical technology, to name a few.

Moreover, the techniques employed by data scientists have mathematical roots. Good data scientists have some mathematical background, and my buzzwords above are certainly in both sets. Clearly, A ∩ B is nonempty: the two sets are not disjoint. However, the symmetric difference between the two sets is large. Symbolically, |A Δ B| ≫ |A ∩ B|. To avoid repetition of the plethora of articles about Data Science, our focus will be on the elements of mathematics that data science lacks. In mathematical symbols, we investigate the set A \ B.

Mathematics is simplification. Mathematicians seek to strip a problem bare. Just as every building has a foundation and a frame, every “applied” problem has a premise and a structure. Abstracting the problem into a mathematical realm identifies the facade of the problem that previously seemed necessary. An architect can design an entire subdivision with one floor plan, and introduce variation in cosmetic features to produce a hundred seemingly different homes. Mathematicians reverse this process, ignoring the unnecessary variation in building materials to find the underlying structure of the houses. A mathematician can solve several business problems with one good model by studying the anatomy of the problems.

Mathematics is rigor. My real analysis professor in graduate school told us that a mathematician’s job is two-fold: to break things and to build unbreakable things. We work in proofs, not judgment. Many of the data science algorithms and statistical tests that get name dropped at parties today are actually quite rigorous, if the assumptions are met. It is disingenuous to scorn statistics as merely a tool to lie; one doesn’t blame the screwdriver that is being misused as a hammer. Mathematicians focus on these assumptions. A longer list of assumptions prior to a statement indicates a weak statement; our goal is to strip assumptions one by one to see when the statement (or algorithm) breaks. Once we break it, we recraft it into a stronger statement with fewer assumptions, giving it more power.

Mathematics is elegance. Ultimately, this statement is a linear combination of the previous two, but still provides an insightful contrast. Data science has become a tool crib of “black box” algorithms that one employs in his language of choice. Many of these models have become uninterpretable blobs that churn out an answer (often a good one by many measures of performance; pick your favorite: p-values, Euclidean distance, prediction error). They solve the specific problem given wonderfully, molding themselves to the given data like a good pair of spandex leggings. However, they provide no structure, no insight beyond that particular type of data. Understanding the problem takes a back seat to prediction, because predictions make money, especially before the end of the quarter. Vision is long-term and expensive. This type of thinking is short-sighted; with some investment, that singular dataset may reveal a structure that is isomorphic to another problem in an unrelated department, perhaps one that is exceedingly simple in nature. In this case, mathematics can provide an interpretable, elegant solution that solves multiple problems, provides insight into behavior, and still retains predictive power.

As an example, let us examine the saturated research field of disk failures. There is certainly no shortage of papers that develop complex algorithms for disk failure prediction; typically the best performing ones are an ensemble method of some kind. Certain errors are good predictors of disk failure, for instance medium errors and reallocated sectors. These errors evolve randomly, but the counts only increase. A Markov chain fits this behavior perfectly, and we have developed a method to model these errors this way. Parameter estimation is a challenge, but the idea is simple, elegant, and interpretable. Because the mathematics are so versatile, with just one transition matrix a user can answer almost any question he likes without needing to rerun the model. This approach allows for both predictive analytics and behavior monitoring, is quick to implement, and is analytically (in the mathematical sense) sound. The only estimation needed is in the parameters, not in the model structure itself, so good parameter estimation effectively guarantees good performance.
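A toy version of such a model, with invented transition probabilities (the real work is fitting these from drive telemetry), shows how a single matrix answers many questions:

```python
import numpy as np

# States are cumulative error counts {0, 1, 2, 3+}. Counts never
# decrease, so the transition matrix is upper-triangular; the last
# state (3+ errors, treated here as "at risk") is absorbing. These
# probabilities are illustrative, not fitted parameters.
P = np.array([
    [0.95, 0.04, 0.01, 0.00],
    [0.00, 0.90, 0.08, 0.02],
    [0.00, 0.00, 0.85, 0.15],
    [0.00, 0.00, 0.00, 1.00],
])

def state_distribution(p0, steps):
    """Distribution over error counts after `steps` observation periods."""
    return p0 @ np.linalg.matrix_power(P, steps)

p0 = np.array([1.0, 0.0, 0.0, 0.0])   # a disk that starts error-free
# The same matrix answers different questions without re-running a
# model, e.g. the probability of reaching the at-risk state within
# 30 observation periods:
risk_30 = state_distribution(p0, 30)[-1]
```

Changing the horizon, the starting state, or the threshold requires only another matrix power, which is the versatility claimed above.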

There is room for both data scientists and mathematicians; the relationship between a data scientist and a mathematician is a symbiotic one. Practicality forces a quick or canned solution at times, and sometimes the time investment needed to “reinvent the wheel” when we have (almost) infinite storage and processing power at hand is not good business. Both data science and mathematics require extensive study to be effective; one course on Coursera does not make one a data scientist, just as calculus knowledge does not make one a mathematician. But ultimately, mathematics is the foundation of all science; we shouldn’t forget to build that foundation in the quest to be industry Big Data™ leaders.

 

~Rachel Traylor @mathpocalypse

 

How I Learned to Stop Worrying and Love New Storage Media: The Promises and Pitfalls of Flash and New Non-volatile Memory Part III


Moving Beyond Flash: Non-volatile Memory

I previously discussed my work with current flash products as well as technology trends driving the future of flash. Next, I will look into a fuzzy crystal ball at new storage media technologies coming to market.

The traditional storage hierarchy had DRAM at the top for fast accesses and hard disk drives at the bottom for persistent storage. Then we added flash to the hierarchy, either as a cache between the two layers or as a replacement for hard disk drives. But the storage vendors do not stand still; they are innovating on new storage media. In this blog, I want to focus on two non-volatile memory technologies: Non-volatile Memory Express (NVMe) and non-volatile DIMMs (NVDIMMs). For a primer on these topics, check out the proceedings of the 2016 Flash Memory Summit.

NVMe

When performance engineers started measuring the time each step takes when accessing flash, they found a surprising result. While flash is very fast, a sizable fraction of the access time goes to communication overhead just to reach the flash. NVMe was developed to address this problem. Instead of the roughly 80 microseconds it takes to read traditional flash, an NVMe read may take 7 microseconds. Any time you can improve access latency by an order of magnitude, it gets attention.
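A back-of-envelope calculation shows why that order of magnitude matters, using the 80 and 7 microsecond figures quoted above as the assumptions:

```python
# Back-of-envelope: how a latency cut raises the ceiling on
# serial (queue-depth-1) I/O operations per second.
def serial_iops(latency_us: float) -> float:
    """Maximum back-to-back operations per second at a given latency."""
    return 1_000_000 / latency_us

print(f"Traditional flash read (80 us): {serial_iops(80):,.0f} IOPS ceiling")
print(f"NVMe read (7 us):              {serial_iops(7):,.0f} IOPS ceiling")
```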

NVMe is really a protocol, not a media itself, and Dell EMC is part of the specification committee considering better parallelization, deeper queues, and fine-grained user control. NVMe could be built with traditional flash or a newer technology. While flash-based NVMe products are currently coming to market, a more speculative technology may increase the lifespan of NVMe devices and up-end the storage hierarchy.

Upgrading the D in DRAM

There is research into new storage media with improved characteristics relative to flash, such as nanosecond access times (vs. microseconds), millions of erasures (vs. thousands), and access with memory pointers (vs. block access with read/writes). These technologies are a new alphabet soup of acronyms to learn: Phase Change Memory (PCM), Spin Transfer Torque (STT), Resistive RAM (RRAM), Carbon Nanotube RAM (NRAM), and others.

I won’t go through all of the details of these techniques, but it is enough to know that these are non-volatile storage media that fall somewhere between DRAM and flash for most characteristics. There has been work on these media for decades, but perhaps we are reaching the point where a mature product is ready for customers.

Intel and Micron leaped into the middle of the NVM space when they announced the 3D XPoint product for both NVMe and NVDIMMs. Unlike previous university and corporate research projects, the companies described roadmaps for the next few years. There have been enough internal meetings and vendor demonstrations that 3D XPoint looks real.

The implementation details are under wraps, but the best guess is that it is a variant of PCM. Accessed via NVMe, 3D XPoint largely solves the lifespan issues with flash by increasing the number of erasure cycles by several orders of magnitude. Used within NVDIMMs, it has the potential to extend server memory from gigabytes to terabytes. Imagine running multi-terabyte databases in memory. Imagine a storage system where flash is the slow media for colder content. Imagine compute where all data is available with a pointer instead of a read/write call that waits millions of CPU cycles to complete. Turning these imaginations into reality is the hard part.

I need to provide a quick disclaimer: the terminology in this community can be misleading. In my opinion, any product labeled “non-volatile memory (NVM)” should 1. be non-volatile (i.e. hold its value across power events), and 2. be memory (i.e. accessed by a software pointer, not a read/write call at a block size). Sadly, there are products with the NVM label that are both volatile and read/written in large blocks. I am (not) sorry to be pedantic, but that makes me angry and confuses customers.

Not the End of Storage

Do NVMe and NVDIMM products eliminate the need for clever storage design? Thankfully no, or I would be out of a job. While a new media dramatically decreases the latency of data access, there are whole new storage architectures to figure out and programming models to design. The programming complexities of NVDIMMs are an area of active research because data persistence is not immediate. A flush command is needed to move data from processor caches to NVDIMMs, which introduces its own delay.
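The store-then-flush pattern can be imitated in ordinary code. This is an analogy, not real NVDIMM programming (which uses CPU cache-line flush instructions such as CLWB followed by a fence): here an mmap'd file stands in for persistent memory, and an explicit flush plays the role of the cache flush that makes a pointer store durable.

```python
import mmap
import os
import tempfile

# Hedged analogy: an mmap'd file stands in for persistent memory.
# Stores go through a pointer-like interface (no write() call), but the
# data is only guaranteed durable after an explicit flush -- the same
# two-step pattern (store, then flush/fence) NVDIMM programming requires.
path = os.path.join(tempfile.mkdtemp(), "pmem_stand_in.bin")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)   # pre-size the "persistent memory" region

with open(path, "r+b") as f:
    buf = mmap.mmap(f.fileno(), 4096)
    buf[0:5] = b"hello"       # "store": write through the mapping directly
    buf.flush()               # make the store durable, analogous to CLWB + fence
    buf.close()

with open(path, "rb") as f:
    print(f.read(5))          # prints b'hello': the flushed store persisted
```

Forgetting the flush is exactly the class of bug that makes NVDIMM programming models an active research area.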

Also, we are used to the idea of rebooting a computer if it is acting odd. That has traditionally cleared the memory and reloaded a consistent state from storage. Now, the inconsistent state in memory will still be there after a reboot. We’ll need to figure out how to make systems behave with these new characteristics.

Besides these persistence issues, we will need to explore the right caching/tiering model in combination with existing media. While more data can stay in persistent memory, not all of it will. Furthermore, we’ll still need to protect and recover the data, regardless of where the primary copy resides.

I don’t know the right answers to these problems, but we will find out. Considering how flash has upended the market, I can’t wait to see the impact of new non-volatile memory technologies under development. I know that, regardless of how we answer the questions, the storage world will look different in a few years.

~Philip Shilane @philipshilane

How I Learned to Stop Worrying and Love New Storage Media: The Promises and Pitfalls of Flash and New Non-volatile Memory Part II


Flash Tomorrow

While my early work in flash caching focused on the flash devices we currently have, the next stage of my research looks at where flash is going.

A combination of customer needs and vendor competition is driving flash towards higher capacity and lower cost, through a variety of technical advancements. When I recently purchased a 128GB SD card for a digital camera, I was reminded of the now-useless 64MB cards I had purchased a decade ago.

A similar trend exists at the enterprise and cloud storage level. While a 100GB flash device was standard years ago, a prototype 60TB flash device was announced at the Flash Memory Summit in August. No price was announced, but I would guess it will cost over $10,000, which sounds high; yet at potentially $0.20 per gigabyte, it would be among the cheapest flash devices available. I don’t need this product in my home, but it will find a role in enterprise or cloud storage. How did flash technology evolve to the point where a 60TB flash device is a plausible product?

Flash Trends

There are three main ongoing technical trends driving flash technology. All of the trends focus on increasing the density of flash storage. First, manufacturing processes continue to squeeze more bits into the same chip, which is analogous to denser hard drive capacities and more CPU cores.

Until recently, advancement in density has been within the two-dimensional plane of flash chips. Now, however, researchers are pursuing a second trend: stacking layers of flash cells within a device, which is conveniently called 3D stacking. Vendors are stacking 48 or more layers already, and there is research pushing to 100 or more layers in the coming years.

The third trend is to increase the information content of individual cells. At a simplistic level, a flash cell is a storage unit for electrons. Based on the charge of the cell, its value is either a 0 or a 1. This was the original flash cell design, called a single-level cell (SLC). Instead of simply splitting the range of cell charges into two sub-ranges (representing 0 and 1), it is possible to divide the range further to produce four values (0 through 3), called a multi-level cell (MLC). This effectively doubled the capacity of a flash device. The process of dividing charge ranges continued to triple-level cells (TLC) with values 0 through 7, and we are even beginning to see quad-level cells (QLC) with values 0 through 15. QLC flash effectively has 4X the capacity of SLC flash, and flash vendors sell these denser products at cheaper prices per gigabyte.
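The relationship between bits per cell and charge levels is easy to tabulate: each added bit doubles the number of charge levels that must be distinguished, yet adds only one bit of capacity per cell.

```python
# Bits per cell vs. required charge levels for each flash cell type.
# Each extra bit doubles the charge levels to distinguish (harder, slower,
# shorter-lived) but only adds one bit of capacity per cell.
CELL_TYPES = {"SLC": 1, "MLC": 2, "TLC": 3, "QLC": 4}

for name, bits in CELL_TYPES.items():
    levels = 2 ** bits      # distinct charge levels the cell must hold
    capacity_x = bits       # capacity multiple relative to SLC
    print(f"{name}: {bits} bit(s)/cell, {levels} charge levels, {capacity_x}x SLC capacity")
```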

Impacts of Flash Technology Trends

Unfortunately, there is a downside to packing flash cells more tightly and finely slicing the charge ranges. Recording a value in a cell is a slightly destructive process that wears on the components of the cell. This leads to slight fluctuations in a cell’s charge and means the chance of incorrectly identifying a cell’s true value increases.

Flash manufacturers have addressed the risk of storing/reading data incorrectly by adding error correcting technology, but the end result is that flash lifespans are shrinking. Once a cell has been damaged from numerous writes, it can no longer be reliably written again, so the flash device marks it as invalid.

How should you assess the reliability of flash devices? First, you want to think in terms of writes, since that’s what wears out the flash device. You should be able to read unmodified data for as long as you want. Flash devices are designed to spread writes evenly across the cells, so we typically think about the write cycles of the entire device instead of individual cells. Therefore, think of a write cycle as a period where the entire flash capacity is written. Whereas SLC flash could support 100,000 write cycles, MLC flash supports a few thousand, TLC flash around 1,000 and QLC flash perhaps a few hundred.  This leads to the industry using the term writes-per-day to describe flash devices. Writes-per-day is equal to the number of supported write cycles divided by the number of days in the expected lifetime of the device (e.g. 365 days times 5 years). More writes-per-day means that the flash device is less likely to have cells wear out and fail.
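The writes-per-day arithmetic is simple to sketch. The cycle counts below are the rough figures from this paragraph, not vendor specifications:

```python
# Drive-writes-per-day from rated write cycles over an expected lifetime.
# Cycle counts are the rough illustrative figures from the text above.
def writes_per_day(write_cycles: int, lifetime_years: int = 5) -> float:
    """Supported full-capacity writes per day over the device's lifetime."""
    lifetime_days = 365 * lifetime_years
    return write_cycles / lifetime_days

for name, cycles in [("SLC", 100_000), ("MLC", 3_000), ("TLC", 1_000), ("QLC", 300)]:
    print(f"{name}: {writes_per_day(cycles):.2f} full-device writes per day over 5 years")
```

Note how QLC drops below one full-device write per day, which is why device selection depends on the write workload, not just capacity and price.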

Performance characteristics are also changing with denser devices. Reading, writing and erasing are all getting slower. The fastest SLC flash devices could read in just 25 microseconds, write in 300 microseconds and erase in 1.5 milliseconds. Each time the charge range is divided to cram more logical values into a cell, these operations get slower; in some cases, doubling the bit density has doubled the latency of each operation. Reads have also slowed for a number of reasons, including interference from concurrent writes and background erasures.

A related issue is the throughput to read and write, which is the number of megabytes (or gigabytes) that can be read and written per second. While capacities are doubling, throughput has increased slowly with each generation of products. Flash devices are getting bigger but we can’t access the full space as quickly as before.
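A rough sketch of the consequence, using made-up round numbers rather than measurements of real products: when capacity grows much faster than throughput, the time to write (or read) an entire device stretches out.

```python
# Illustrative only -- the capacities and throughputs are made-up round
# numbers, not measured products. If capacity doubles each generation while
# throughput grows slowly, full-device write time keeps climbing.
def full_write_hours(capacity_tb: float, throughput_gb_s: float) -> float:
    """Hours to write the entire device once at the given throughput."""
    return capacity_tb * 1024 / throughput_gb_s / 3600

print(f"1 TB device at 0.5 GB/s: {full_write_hours(1, 0.5):.1f} hours per full write")
print(f"60 TB device at 2.0 GB/s: {full_write_hours(60, 2.0):.1f} hours per full write")
```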

We tend to think of flash as blazingly fast, and that is true relative to the latency of reading and writing to hard drives, but flash is slowing down each year. It is an interesting trend of cheaper, denser, slower flash devices coming to market with shorter lifespans.

Storage System Design Changes

The evolution of flash has led to unexpected choices in system design. Sometimes we select smaller, more expensive (per gigabyte) flash devices because they have the lifespan needed for a storage system. In other cases we may select multiple, smaller flash devices over a single, large flash device to get the overall IOPS and throughput needed by a system. Simply picking the newest, biggest flash device isn’t always the right answer.

The evolution of flash also affects how to design smaller storage systems. As flash vendors have emphasized manufacturing larger and larger devices, we have run into situations where they no longer stock low capacity flash devices needed for storage systems targeted at small businesses. Just as some customers had to buy excess hard disk capacity in their storage system (for performance or reliability), the same may be coming for flash storage systems, too.

As a result, we are starting to consider tiers of flash, where a high performance cache uses SLC flash with the highest endurance and lowest latency while colder data is stored on denser, low endurance flash. Of course, that assumes that flash itself won’t be supplanted by newer media.

While flash has been the hot media of the last decade and will continue to be heavily leveraged for years to come, there are new storage media coming to market and in the prototype phase. My next blog post will discuss NVMe and new non-volatile DRAM options and the potential to upend not only storage but servers as well.

 

~Philip Shilane @philipshilane

The Hashtag Cortex


Escaping the deadly radiation of the tech industry pulsar, this time Inside The Data Cortex.

  • This year has been “The Year of all Flash” and Mark didn’t notice.
  • Weeks after day one Stephen and Mark discuss day one. It was kind of like day zero and not much different than day two. But day two had the world’s largest donut at Dell EMC World.
  • Weight gain and not much weight loss at tradeshows.
  • Stephen on the Goldilocks approach to embracing the public cloud and the tyranny of selection bias.
  • Do Google consider themselves an enterprise supplier?
  • This time of year there’s no sunshine anywhere outside of California. Says man living in California.
  • Software Defined Storage is kind of interesting. Says customer who thinks the installation packages will do everything.
  • Scale out is still a hard problem.
  • Mark has looked at home grown storage solutions and sees a lot of ugly babies. (Sorry! He’s not sorry.)
  • The Botnet of Things is real and your dishwasher is hitting someone with a denial of service attack right now.
  • This episode in reading things: Alcatraz Versus the Evil Librarians, Benjamin Franklin: An American Life, Steinbeck’s The Winter of Our Discontent, Ken Clarke’s Kind of Blue, and Stalin: Paradoxes of Power.

No one likes to give up power. Go before you are pushed. Because it will be people like us doing the pushing.


Download this episode (right click and save)

Subscribe to this on iTunes

Get it from Podbean

Follow us on Pocket Casts
Stephen Manley @makitadremel Mark Twomey @Storagezilla

The Origins of eCDM: Team Houston


Dell EMC recently announced Enterprise Copy Data Management (eCDM), a product that enables global management and monitoring of data copies across primary and protection storage. Perhaps just as interesting as eCDM is the way that the product was conceived, designed, developed, and taken to market. Like the trailblazers of the west, the product team behind eCDM was faced with the daunting challenge of exploring uncharted territory. They created a product from scratch using agile methodologies, open source technology, a brand new UI, and an entirely custom go-to-market strategy.

This is the final post in a series that details the challenges and successes of the product team from conception to release. The first and second posts can be found here and here, respectively.

Designing and developing a product such as eCDM is a major feat, but it’s really only half of the story when discussing emerging products. The other half of the challenge is taking the product to market. In a company that is so accustomed to acquiring new technology, the eCDM product team needed to address the rarely encountered challenge of creating a go-to-market strategy from scratch. However, starting from nothing has its advantages. The eCDM team’s strategy began with using customer feedback differently. The development team continuously and directly addressed customer feedback during design and implementation to build a more valuable and usable product.

In order to anticipate the needs of the customer, a specialized team of quality engineering experts called Team Houston was created to provide the customer voice to the engineering team. The group quickly became a core component of the eCDM go-to-market strategy by performing end-to-end testing and developing a deep understanding of the product from a customer viewpoint. David Sandock, a senior member of Team Houston, explained that his team provides a unique perspective of the product by testing features as if he and his team were actual customers. “We concentrate on areas that we anticipate to be most frequented by the end user and then we try to break them,” David said while discussing the role of his team. Unlike traditional engineering or quality teams, most members of Team Houston are not on scrum teams; as a result, they are free to explore and test any part of the product.

Team Houston also plays a major role in gathering and executing on customer feedback. David and the team helped facilitate the hosted beta. They took their experiences directly back to the engineering team, removing the pesky barrier between the engineer and the customer. Through this model and the diligence of Team Houston, the engineering team addressed nearly all of the customer feedback from the hosted beta through bug fixes, new features, or user experience improvements. “We’re listening,” David told me, and the engineering team is certainly taking action to address what Team Houston hears.

The product management team is also listening closely to customer feedback. Robert Hammond, the go-to-market product manager for eCDM, spends nearly 40% of his time in front of customers to learn and help solve the challenges that they face with data protection. As Rob puts it, “I don’t learn anything in my office. The only way to learn what we should be building is to interact with the customer.” With emerging products, it becomes increasingly important to understand deeply the problems that customers are facing in order to address them.

eCDM was designed from customer feedback, and future product iterations will continue to reflect countless customer conversations to properly address customer needs. Rob and the product management team have spent countless hours understanding customer needs and developing requirements that address many of the challenges that customers face with self-service data protection. “What we’ve built has come directly from listening to our customers,” Rob said, “and we’ll continue to listen in order to address the needs of our customers.”

As eCDM evolves, it is fueled by customer input. Team Houston and the Product Management team have done an exceptional job of taking a new product to market in a way that leverages customer interaction to improve eCDM. Team Houston plays an instrumental role in engaging the development team with customer feedback, and the product management team has included many features based directly on customer input. Together, these teams enable eCDM to enter the market as a well-defined and user-focused product.

David Sandock has spent the last 9 years working in the Israeli tech scene, having spent the last 3 years with EMC as a Senior Software Quality Engineer working for RecoverPoint. He recently relocated to the US to take a role with Dell EMC’s eCDM product.  He has been primarily responsible for customer focused end-to-end testing, while being a focal point for the different storage technologies used in testing the eCDM product and as a direct link between testing teams and the product management team.

Robert Hammond is a product manager on the eCDM Team. He is focused on helping customers successfully adopt eCDM while sharing what he has learned from customers with the rest of the product & engineering teams.  Prior to this role, Robert held various product, marketing and pre-sales roles at Dell, Amazon and a few startups.

 

~Tyler Stone @tyler_stone_

Honoring Dell EMC’s Core Technologies Technical Directors


In the modern business world, executives get all the external recognition. It’s just a few weeks into the Dell acquisition of EMC, and most people already know names like Marius Haas, Jeff Clarke, David Goulden, Howard Elias, and Rory Read. Some of them even have their own Wikipedia pages.

A company like Dell EMC, however, cannot succeed without people who design, build, and ship the products that the executives talk about. Therefore, every quarter, we recognize the newest Dell EMC Core Technologies Technical Directors. These are senior technical leaders who have delivered sustained business impact through products, solutions, and customer experience. They are the engine of the company. The previous recipients are detailed here.

Of course, Core Technologies continues to deliver innovative solutions, so we continue to expand the roster of Technical Directors. This quarter I’m pleased to announce:

Frederic Corniquet – Frederic has been a leader in the NAS protocols for the midrange systems for over a decade. Frederic has been a driving force in EMC’s NAS offerings growing in both technical strength and customer adoption from VNX1 to VNX2 to Unity. Frederic’s expertise extends from the NAS protocol to security to integrating with VMware for NFS data stores. As a leader in EMEA, Frederic also evangelizes and connects with some of our biggest customers. Frederic is a technical leader, evangelist, and expert who is growing EMC’s NAS business.

Rajesh Nair – Rajesh has been a leader in NetWorker for over a decade, focused on solving our largest customers’ most difficult backup challenges. He began by working on image-level backups (SnapImage) to solve customers’ large file system backup challenges. He then delivered NetWorker’s NDMP tape solution, solving customers’ large NAS file system backup challenges. Rajesh then led the team to integrate Data Domain BOOST into NetWorker, which solves performance and networking scaling challenges. Today more than half of NetWorker customers leverage BOOST. Rajesh’s decade of innovation, delivery, and leadership have driven NetWorker to be the customers’ choice for the most difficult backup challenges.

Tom Papadakis – Applications. IT teams want to speak to their application owners. Tom has led application-centric data protection for almost 20 years. Tom began by making NetWorker indexes scale for application backup. Then he developed NetWorker’s Oracle integration, which allows DBAs and backup administrators to work independently while retaining centralized control. Tom also brings a customer and sales-centric viewpoint to application protection. He spearheaded the creation of NMDA – a package that combines the application support for multiple applications. The result was a dramatically improved total customer experience. As application protection spans across all of data protection, Tom has also brought together Data Domain (via DDBEA integration) and Avamar. Application protection is the present and future of backup, and Tom has been at the front of that mission.

Ian Wigmore – Ian specializes in making products run fast. He began in the Symmetrix Microcode group connecting the Symmetrix to IBM’s S/390 and Z-series mainframes via the FC-2 software layers in the FICON storage director. To say that mainframe applications and users are sensitive to latency is an understatement. Ian’s performance tuning helped the Symmetrix (now VMAX) be the storage of choice for the most demanding mainframe applications. Ian was one of the initial leaders on ProtectPoint for VMAX – a product that improves backup times from VMAX to Data Domain by up to 20x. This has uniquely solved the challenge of large database backups. Fast and simple – whether it is mainframe, migration, or backup – Ian’s work separates EMC technology from the competition.

Dorota Zak – Flexibility of choice. Dorota protects customers’ core applications regardless of the tool. While she began in NetWorker, Dorota was instrumental in expanding Avamar’s application support, adding Sybase and SAP support, which helped solve backup challenges for many enterprises. Dorota then created the framework for DDBEA (which enables application admins to protect their data directly to Data Domain, without using backup software), so that it could quickly support the 6 key databases. That then extended to supporting key applications for Dell EMC’s industry leading ProtectPoint product (XtremIO and VMAX backing up directly to Data Domain). Dorota even helped SAP design the appropriate backup APIs for SAP HANA; EMC now leads the industry in protecting SAP HANA. Dorota’s work enables our customers to protect their applications however they like.

Dell EMC executives are among the industry’s best. They set our direction and guide the organization through unprecedented company and market changes. Our technical leaders, however, are the absolute best in the industry. It’s always easier to lead when you have the deepest pool of talent. These Dell EMC Core Technologies Technical Directors are just some of the technical talent who deliver the infrastructure solutions that run the world.

~Stephen Manley @makitadremel