Honoring Dell EMC’s Data Protection and Storage Technical Directors

Everything changes. Organizational structures, company names, and, of course, technology. For technical companies to survive, however, one thing cannot change. We need technical leaders who can turn changing technology into new products that solve our customers’ problems. Dell EMC has replaced the Core Technologies Division with the Data Protection Division and the Storage Division, but we are still building the core of customers’ data infrastructure.

Therefore, every quarter, we recognize the newest Dell EMC Data Protection and Storage Technical Directors. These are senior technical leaders who have delivered sustained business impact through products, solutions, and customer experience. They are the engine of the company. The previous recipients are detailed here and here.

John Adams – John helps deliver VMAX performance that matters – performance for our customers’ most important applications. He’s demonstrated and optimized performance in the most demanding customer environments. He then drives customer-critical performance work into the engineering team – from evaluating large flash drives to host LUN scaling to dynamic cache partitioning. His skill spans from Unisys to EPIC (leading health care database/application). John is the go-to person who connects with customers, customer support, and engineering for their performance needs.

Michael Barber – Michael Barber is the rare quality engineer who truly is the customers’ advocate. First, Michael understands that customers buy VMAX replication to ensure that their mission-critical data is always safe. Since customer environments are constantly under load, facing all manner of unusual circumstances (especially in a disaster), Michael has built a tool that validates data consistency while generating all of those unusual situations. The tool is used across VMAX and much of the rest of the company. Michael also reviews and influences core features to ensure they meet customers’ standards and needs. VMAX customers have Michael Barber on their side.

Fred Douglis – Fred has led Dell EMC’s Data Protection Division’s academic strategy, while also driving research into the product. Under Fred’s guidance, Dell EMC has consistently published in prestigious conferences and journals. This work has helped advance the state of the art in deduplication research and development. He has also built strong relationships with leading universities like the University of Chicago, Northeastern, and the University of Wisconsin at Madison. His contributions to the industry have also been recognized: Fred is an IEEE Fellow and is currently serving on the Board of Governors for the IEEE Computer Society. Finally, the innovation of Data Domain BOOST-FS has enabled customers to more easily and rapidly protect custom and big data apps.

Martin Feeney – Martin helps the largest customers in the world run their most mission-critical applications. As an expert in both FICON and VMAX, Martin has helped our mainframe customers get reliable access and predictable performance from their storage. He was instrumental in unifying the data format and storage algorithms for VMAX Mainframe and Open Systems support. This enables our customers to get better performance, functionality, and reliability more quickly. Martin was also responsible for optimizing VMAX2 performance while also delivering the Mainframe Host Adapter optimizations for the VMAX3 platform. As customers continue to run their most important workloads on the mainframe, Martin keeps those applications running optimally.

Simon Gordon – Simon has been the Product Management lead for ProtectPoint for VMAX and XtremIO. Our most innovative customers deploy ProtectPoint to protect, refresh, and test some of their largest and most mission-critical databases – like Oracle, DB2, Microsoft SQL, and EPIC. Simon has been instrumental in connecting customers, the field, application partners, and our engineering teams so that we can deliver a comprehensive protection solution built on top of revolutionary technology.

Colin Johnson – Colin, an expert in user experience design, has been the UX leader for Data Domain Management Console, Data Domain Long Term Retention in the Cloud, and ProtectPoint for XtremIO. Colin’s expertise in user experience, visual design, customer interaction, and data protection has enabled the Data Protection Division to deliver products that are easier for our customers to use across cloud, replication, multi-tenancy, and next-generation data protection.

Jerry Jourdain – Jerry has been the driving technical force behind Dell EMC’s email archiving solutions. Jerry co-founded Dell EMC’s initial industry-leading email archiving product, EmailXtender, and then was Chief Architect of the follow-on SourceOne product. Thousands of customers depend on Dell EMC to protect their most critical information for compliance, legal, or business needs. Jerry ensures that we can address their most challenging compliance and retention needs.

Amit Khanna – Amit has been modernizing data protection for NetWorker customers. He was the force behind NetWorker’s vProxy support – standalone, re-purposable, fully RESTful protection for VMware. Amit began by integrating Data Domain BOOST into NetWorker and tying together NetWorker and Data Domain replication. He then delivered the policy-based management for NetWorker 9, which allows customers to move toward Backup as a Service. His work on CloudBoost allows customers to back up both to the cloud and in the cloud. Amit’s work has made NetWorker a core part of modern data protection.

Ilya Liubovich – Over the past couple of years, VMAX customers have raved about how much easier it is to manage their systems. Ilya led one of the biggest optimizations, Unisphere/360 for VMAX. It is already attached to the majority of new VMAX systems, simplifying the management of their most critical storage. Furthermore, as security becomes an even more important issue in the world, Ilya has led the security standards for the management software – ensuring compliance with the highest standards, without intruding on the customer experience. With Ilya’s work, VMAX delivers high-end storage functionality with greater simplicity.

Prasanna Malaiyandi – Prasanna, a Data Protection Solution Architect, has led both ProtectPoint and ECDM from inception to delivery. ProtectPoint directly backs up VMAX and XtremIO systems to Data Domain, delivering up to 20x faster data protection than any other method. ECDM enables IT organizations to deliver Data Protection as a Service. Protection teams centrally control data protection, while allowing application, VM, and storage administrators to back up and recover data on their own, using high performance technologies like ProtectPoint and DD BOOST. Prasanna connected disparate products to bring Dell EMC products together somewhere other than the purchase order.

Jeremy O’Hare – Jeremy has delivered core VMAX functionality that separates it from every other product in the marketplace. Most recently, Jeremy led the creation of VMAX compression, which delivers space savings with unparalleled performance in the industry. He’s also been instrumental in Virtual LUNs (VLUN), which enabled the groundbreaking FAST functionality. As a technical leader, Jeremy stands out for being able to bring solutions across teams. Compression touches virtually every part of the VMAX, and Jeremy drove development and QA efforts across all of the groups, so that our customers enjoy compression without compromise on their VMAX systems.

Kedar Patwardhan – Kedar enables Avamar customers to solve their biggest, most challenging backup problems. First, Kedar created the only traditional file-system backup that doesn’t need to scan the file system. Customers with large file servers can scale their backups without compromising on functionality. Second, Kedar delivered OpenStack integration to protect some of our largest customers’ data. Third, the integration with vRA enables Dell EMC’s customers to manage their protection from VMware interfaces. From the largest file systems to OpenStack to large VMware deployments, Kedar’s work enables us to deliver simple, scalable data protection.

Rong Yu – Rong is responsible for key algorithmic and architectural improvements to Symmetrix systems. First, he delivered a Quality of Service (QoS) framework that meets customer-defined Service Level Objectives while also meeting the needs of internal operations like cloning and drive rebuild. He overhauled the prefetching model to leverage knowledge of host/application access patterns. He continues to help optimize RDF performance. Most recently, he developed the new middleware layer in the VMAX system that has enabled new features (like compression) and performance optimizations (such as optimizing cache read misses). Customers depend on VMAX for reliable, predictable high performance regardless of the situation. Rong’s work helps ensure that VMAX meets and exceeds those expectations.

Congratulations and thanks to the new and existing Dell EMC Technical Directors. You are the engine of Dell EMC!

~Stephen Manley @makitadremel

The Dell EMC World Cortex

This time Inside the Data Cortex: Mark hates Vegas, and no one sees sunlight as days pass in minutes during Dell EMC World. While getting ready for another day of customer meetings, Stephen and Mark discuss:

  • The types of customer conversations happening this year.
  • Michael Dell’s shot over the bow of the public cloud.
  • When analytics are a big thing and when they are not.
  • New integrated data protection appliances, RecoverPoint for Virtual Machines, ProtectPoint adoption and Cloud Disaster Recovery.
  • Then everything goes right to hell in the book discussion.

All this (and not much more) Inside The Data Cortex.

Download this episode (right click and save)

Subscribe to this on iTunes

Get it from Podbean

Follow us on Pocket Casts
Stephen Manley @makitadremel Mark Twomey @Storagezilla

Cleaning Up Is Hard To Do

We recently published a paper[1] at the 15th USENIX Conference on File and Storage Technologies, describing how Dell EMC Data Domain’s method to reclaim free space has changed in the face of new workloads.

Readers of the Data Cortex are likely familiar with Data Domain and the way the Data Domain File System (DDFS) deduplicates redundant data. The original technical paper about DDFS gave a lot of information about deduplication, but it said little about how dead data gets reclaimed during Garbage Collection (GC). Nearly a decade later, we’ve filled in that gap while also describing how and why that process has changed in recent years.

In DDFS, there are two types of data that should be cleaned up by GC: unreferenced chunks (called segments in the DDFS paper and much other Data Domain literature, but chunks elsewhere), belonging to deleted files; and duplicate chunks, which have been written to storage multiple times when a single copy is sufficient. (The reason for duplicates being written is performance: it is generally faster to write a duplicate than to look up an arbitrary entry in the on-disk index to decide it’s already there, so the system limits how often it does index lookups.)
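As a toy illustration of why duplicates slip in, here is a sketch in Python; the class, the one-in-N lookup policy, and the fingerprinting scheme are all invented for this example and are not DDFS’s actual write path:

```python
import hashlib

class DedupStore:
    """Hypothetical write path: the on-disk index is slow to consult,
    so only every Nth chunk pays for a lookup; the rest are written
    even though they might be duplicates."""

    def __init__(self, lookup_every=4):
        self.index = {}        # fingerprint -> container slot
        self.containers = []   # physical chunks, duplicates included
        self.lookup_every = lookup_every
        self._writes = 0

    def write_chunk(self, data):
        fp = hashlib.sha1(data).hexdigest()
        self._writes += 1
        # Pay the index-lookup cost only occasionally.
        if self._writes % self.lookup_every == 0 and fp in self.index:
            return                      # confirmed duplicate: skip it
        self.containers.append((fp, data))
        self.index[fp] = len(self.containers) - 1

store = DedupStore(lookup_every=4)
for _ in range(8):
    store.write_chunk(b"same chunk")    # eight logical copies
# Fewer than eight physical copies get stored, but more than one;
# garbage collection must later reclaim the duplicates.
```

The throttling keeps ingest fast, at the cost of leaving some duplicate chunks on disk for GC to clean up later.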

Both unreferenced and duplicate chunks can be identified via a mark-and-sweep garbage collection process. First, DDFS marks all the chunks that are referenced by any file, noting any chunks that appear multiple times. Then DDFS sweeps the unique, referenced chunks into new locations and frees up the original storage. Since chunks are grouped into larger units called storage containers, largely dead containers can be cleaned up with low overhead (i.e. copying the still-live data to new containers), while containers with lots of live data are left unchanged (i.e. the sweep process does not happen). This process is much like the early log-structured file system work, except that liveness is complicated by deduplication.
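The two phases can be sketched as follows; the data structures and the live-data threshold are simplified inventions for illustration, not DDFS internals:

```python
# Toy mark-and-sweep over storage containers. Mostly-dead containers
# have their live chunks copied forward; mostly-live ones stay put.

def mark(files):
    """Mark phase: walk every file, counting references per chunk
    (a count > 1 flags a duplicate)."""
    refs = {}
    for chunks in files.values():
        for fp in chunks:
            refs[fp] = refs.get(fp, 0) + 1
    return refs

def sweep(containers, refs, live_threshold=0.5):
    """Sweep phase: keep one copy of each live chunk, reclaiming
    dead chunks and duplicate copies."""
    new_containers, copied = [], set()
    for container in containers:
        live = [fp for fp in container if fp in refs and fp not in copied]
        if len(live) / len(container) > live_threshold:
            new_containers.append(container)   # mostly live: leave unchanged
            copied.update(container)
        elif live:
            new_containers.append(live)        # copy survivors forward
            copied.update(live)
        # else: container fully dead, its space is reclaimed
    return new_containers

files = {"a.bak": ["c1", "c2"], "b.bak": ["c2", "c3"]}
containers = [["c1", "c9"], ["c2", "c3"], ["c9", "c8"]]
survivors = sweep(containers, mark(files))
```

In this tiny example the fully dead container is reclaimed outright, the mostly dead one has its single live chunk copied forward, and the fully live one is left untouched.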

Originally, and for many years, DDFS performed the mark phase by going through every file in the file system and marking all the chunks reached by that file. This included both data chunks (which DDFS calls L0 chunks) and metadata (chunks containing fingerprints of other data or metadata chunks in the file, which DDFS calls L1-L6 chunks). Collectively this representation is known as a Merkle tree. We call this type of GC “logical garbage collection” because it operates on the logical representation of the file system, i.e., the way the file system appears to a client.

Logical GC worked well for quite some time, but recent changes to workloads caused problems. Some systems used a form of backups that created many files that all referenced the same underlying data, driving up the system’s deduplication ratio. The total compression, which is the cumulative effect of deduplication and intra-file compression, might be 100-1000X on such systems, compared to 10-20X on typical systems in the past. Revisiting the same data hundreds of times, with the random I/O that entailed, slowed the mark phase of GC. Another new workload, having many (e.g., hundreds of millions) small files rather than a small number of very large files, similarly ran slowly when processing a file at a time.

Data Domain engineers reimplemented GC to do the mark phase using the physical layout of the storage containers, rather than the files. Every L1-L6 chunk gets processed exactly once, starting from the higher levels of the Merkle tree (L6) to flag the live chunks in the next level below. This physical GC avoids the random I/O and repeated traversals of the earlier logical GC procedure. Instead of scanning the file trees and jumping around the containers, the physical GC scans the containers sequentially. (Note: It may scan the same container multiple times as it moves from L6 to L1 blocks because each time through it only looks for blocks of one level. However, there are not that many L1-L6 containers compared to the actual L0 data containers: the metadata is only about 2-10% at most, with less metadata for traditional backups and more for the new high-deduplication usage patterns).
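A sketch of that level-by-level scan, with invented record layouts (the real DDFS container format is of course far richer):

```python
# Toy physical mark phase: scan container metadata once per level,
# from L6 down to L1, flagging the children of already-live chunks.

def physical_mark(meta_containers, live_roots, top_level=6):
    """meta_containers: (level, chunk_id, child_ids) records, scanned
    sequentially. Returns all live chunk ids, down to the L0 data."""
    live = set(live_roots)
    for level in range(top_level, 0, -1):       # L6, L5, ..., L1
        for lvl, chunk_id, children in meta_containers:
            if lvl == level and chunk_id in live:
                live.update(children)           # flag the level below
    return live

# Two files whose trees share an L1 chunk and deduplicated L0 data.
meta = [
    (6, "L6-a", ["L1-x"]),
    (6, "L6-b", ["L1-x", "L1-y"]),
    (1, "L1-x", ["L0-1", "L0-2"]),
    (1, "L1-y", ["L0-2", "L0-3"]),
]
live = physical_mark(meta, live_roots={"L6-a", "L6-b"})
# Each shared chunk is visited once per level scan, not once per file.
```

Note how the shared chunk L1-x is processed once, even though two files reference it; with logical GC it would have been traversed once per file.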

Physical GC requires a new data structure, a “perfect hash,” which is similar to a Bloom filter (representing the presence of a value in just a few bits) but requires about half the memory and has no false positives. In exchange for these two great advantages, the perfect hash requires extra overhead to preprocess all the chunk fingerprints: it creates a one-to-one mapping of fingerprint values to bits in the array, with the additional space needed to identify which bit matches a value. Analyzing the fingerprints at the start of the mark phase is somewhat time-consuming; however, using the perfect hash ensures both that no chunks are missed and that no false positives result in large amounts of data being retained needlessly.
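For a small set, the perfect-hash contract (one dedicated bit per key and no false positives, at the price of a preprocessing pass) can be demonstrated with a brute-force construction; real systems use far more scalable algorithms, and the hashing scheme below is purely illustrative:

```python
import hashlib

def h(seed, fp, m):
    """Illustrative seeded hash into m slots."""
    digest = hashlib.sha1(f"{seed}:{fp}".encode()).digest()
    return int.from_bytes(digest[:8], "big") % m

def build_perfect_hash(fingerprints):
    """Preprocessing: search for a seed under which every fingerprint
    maps to a distinct slot. Brute force only works for tiny sets."""
    m = 2 * len(fingerprints)
    for seed in range(100_000):
        if len({h(seed, fp, m) for fp in fingerprints}) == len(fingerprints):
            return seed, m
    raise RuntimeError("no collision-free seed found")

fps = ["fp-%02d" % i for i in range(20)]
seed, m = build_perfect_hash(fps)

bits = [0] * m
for fp in fps:
    bits[h(seed, fp, m)] = 1   # mark phase: one dedicated bit per key
# For the preprocessed set, lookups can never collide, so marking a
# chunk live can never accidentally mark a different chunk live.
```

The up-front search over seeds stands in for the real preprocessing cost the post describes: slower to build than a Bloom filter, but collision-free afterward.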

We learned that physical GC dramatically improved performance for the new workloads. However, it was slightly slower for the traditional workloads. Because of other changes made in parallel with the move to physical GC, it was hard to determine how much of this slower performance was due to the perfect hash overhead, and how much might be due to the other changes.

We needed to make GC faster overall. One cause of the slow mark phase was the need to make two passes through the file system much of the time. This was necessary because there was insufficient memory to track all chunks at once. Instead, GC would do much of the work of traversing the file system, but use sampling to get a sense of which containers should be the focus of cleaning. Then GC would identify which chunks are stored in those containers, and traverse the file system a second time while focusing only on those chunks and containers.

Phase-optimized Physical GC (PGC+) reduces the memory requirements by using a perfect hash in place of one Bloom filter and eliminating the need for another Bloom filter completely. This allows PGC+ to run in a single GC phase rather than with two passes. Further optimizations also improved performance dramatically. Now GC is at least as fast as the original logical GC for all workloads and is about twice as fast for those that required two passes of LGC or PGC. Like PGC, PGC+ is orders of magnitude better than LGC for the new problematic workloads.

Data Domain continues to evolve, as do the applications using it. Aspects of the system, such as garbage collection, have to evolve with it. Logical GC was initially a very intuitive way to identify which chunks were referenced and which ones could be reclaimed. Doing it by looking at the individual storage containers is, by comparison, very elaborate. Physical GC may seem like a complex redesign of what was a fairly intuitive algorithm, but in practice it’s a carefully designed optimization to cope with the random-access penalty of spinning disks while ensuring the stringent guarantees of the Data Domain Data Invulnerability Architecture.

Because after all, slow garbage collection … just isn’t logical!

~Fred Douglis @freddouglis

[1] Fred Douglis, Abhinav Duggal, Philip Shilane, Tony Wong, Shiqin Yan, and Fabiano Botelho. “The Logic of Physical Garbage Collection in Deduplicating Storage.” In 15th USENIX Conference on File and Storage Technologies (FAST 17), pp. 29-44. USENIX Association, 2017.

Professional Organizations for Computing – More than the Elks’ Lodge

There are many professional organizations, serving all sorts of purposes. For instance, the American Bar Association and American Medical Association help to represent lawyers and doctors, respectively, when setting standards, policies, and laws.

Within the field of computing, there are a number of professional organizations of note. Some are specific to certain roles, such as the League of Professional System Administrators. Here I will focus on three that serve software engineers, Computer Science (CS) researchers, CS academics, and those of similar professional interests. Mostly I’m doing this to try and impress upon readers the benefits of membership and participation in these organizations.

I first joined the Association for Computing Machinery (ACM) and the Computer Society of the Institute of Electrical and Electronics Engineers (IEEE) when I was a Ph.D. student. By becoming a student member, I subscribed to their monthly magazines, which contained numerous articles of interest. Shortly after finishing my degree I added the USENIX Association to the list.

Initially, the primary motivation for joining (or continuing membership in) these organizations was the significant discounts offered to members when attending conferences sponsored by one of them. Often the savings would more than compensate for the membership fee. In addition, there were personal benefits, such as the IEEE’s group life insurance plan.

The three professional organizations all run conferences, but beyond that they quickly diverge in their services.


I’ll start with the simplest first. USENIX basically exists to run computer-related conferences. They also have a quarterly newsletter, and many years ago published the Computing Systems journal, but the conferences are the reason USENIX exists … and they do a great job of it. The top systems conferences include such events as OSDI, NSDI, FAST, the USENIX Annual Technical Conference, and the USENIX Security conference. I chaired a couple of conferences for them many years ago, and USENIX makes it incredibly easy for the conference organizers. Instead of depending on the chair to manage volunteers to handle logistics, the chair is simply responsible for selecting content. In addition, USENIX has enacted a policy of making all conference publications freely available over the Internet.


ACM conducts a broader set of activities than USENIX. ACM runs a number of conferences, many of which are among the most prestigious within their domain, but it does much more. ACM is organized into “Special Interest Groups” such as the SIG on Operating Systems (SIGOPS) or the SIG on Data Communications (SIGCOMM). The SIGs run conferences, such as the Symposium on Operating Systems Principles, known as SOSP (SIGOPS), or the SIGCOMM annual conference. Each SIG typically publishes a regular newsletter with a combination of news and technical content (with little or no peer review). ACM also publishes a number of journals, which provide archival-quality content, often extended versions of conference papers. For example, Transactions on Storage publishes a number of articles that extend papers from FAST, including the papers selected as “best papers” for the conference. Finally, ACM has a number of recognitions, including membership grades (Fellow, Distinguished Member, and Senior Member) and awards for exceptional achievements (such as the Mark Weiser Award).

IEEE Computer Society

IEEE-CS (“CS”) is the largest society within IEEE, though there are other computer-related societies and councils, such as the IEEE Communications Society. I’ll focus on CS.

Like ACM, CS runs conferences and publishes journals and magazines. Many of their magazines are much closer to the journals in style and quality than to the newsletters run by ACM SIGs or their CS counterparts, technical committees (TCs). Compared to journals, the magazines tend to have shorter articles, as well as columns and other technical content of general interest. Each issue tends to have a “theme” focusing articles on a particular topic. I was editor-in-chief of Internet Computing for four years, so I led the decisions about which themes to request submissions for, and I would assign other submissions to associate editors to gather peer reviews and make recommendations. I highly recommend CS magazines for those interested in high-level material, whether in general (Computer, which comes with CS membership) or in specific areas such as Cloud Computing or Security & Privacy.

IEEE-CS also sponsors many conferences across a variety of subdisciplines. I mention these after the periodicals because I feel like CS stands out more because of its magazines than its journals or conferences, which are roughly analogous to those from ACM. Additionally, many conferences are sponsored jointly by two or more societies, blurring that boundary further. Conferences are sponsored by Technical Committees, which are similar to ACM SIGs.

Finally, it is worth pointing out that both IEEE and ACM make a number of contributions in other important areas, such as education and standards. The societies cooperate on things like curricula guidelines; in addition, CS produces bodies of knowledge, which are handbooks on specific topics such as software engineering. IEEE has an entire Standards Association, which produces such things as the 802.11 WiFi standard. The societies have local chapters as well, which sponsor invited talks, local conferences, and other ways to reach out to the immediate community.

My Own Role

I started as a volunteer with CS by serving as general chair of the Workshop on Workstation Operating Systems, which we later renamed the Workshop on Hot Topics in Operating Systems. I chaired the Technical Committee on Operating Systems, then created the Technical Committee on the Internet. At that point I was asked to join the Internet Computing editorial board as liaison to the TC, but when my term expired I was kept on the board anyway and became associate editor in chief, then EIC. In 2015 I was elected to a three-year term on the CS Board of Governors. From there, I help set CS policies and decide on the next generation of volunteers, such as periodical editors.

In parallel, I’ve also been active with USENIX. In addition to serving on many technical program committees, I was the program chair for the USENIX Annual Technical Conference in 1998 and USENIX Symposium on Internet Technologies and Systems (later NSDI) in 1999. I’ve served on the steering committee for the Workshop on Hot Topics in Cloud Computing since 2015.

What’s In It for You?

By now I hope I’ve given you an idea what the three societies do for their members and the community at large. Even if you don’t tend to participate in the major technical conferences, there are local opportunities to network with colleagues and learn about new technologies. The magazines offered by IEEE-CS, as well as Communications of the ACM, are extremely informative. And don’t forget about those great insurance discounts!


~Fred Douglis @freddouglis

Making Math Delicious: The Research Cortex


Last time I posted, I was the grouchy mathematician “telling data scientists to get off my lawn” as I attempted to persuade you that eating the Brussels sprouts of Math is just as cool as eating that thick porterhouse steak named Data Science. (Disclaimer: I recognize all diets as equally valid, and WLOG operate in the space where Brussels sprouts are uncool and steaks are cool.) Data Science gets to be that porterhouse because its practitioners not only demonstrate its nutritional value to a business, but have found a way to make it delectable, satisfying, and visually appealing to a wide audience. We all know that Brussels sprouts are nutritious, but in that way that tastes nutritious. With all this in mind, I’d like to provide a recipe for preparing those Brussels sprouts in a way that doesn’t feel like you are forcing them down while your mother glares at you.


Meet the Research Cortex at www.theresearchcortex.com.


The Research Cortex has the lofty goal of doing for mathematics what so many others have done for data science—make it rigorous, yet accessible to a wide audience that spans disciplines and industries. This new sibling of The Data Cortex serves as the unofficial hub for academic research of Dell EMC’s Data Protection Division CTO Team. Initially, our focus will be primarily mathematics.


The work we’ll publish is original, rigorous content… with a twist. Shortly after publishing a new paper, we add video overviews about the work and the key results. We also feature video microcontent (Math Snacks) that spans various topics and metatopics in mathematics. Our first series of Math Snacks looks at types of mathematical proofs, beginning with direct proofs, in order to give some insight into how mathematicians approach problems.


The scope of the research is broad; no echo chambers here. We want full exploration of all branches of mathematics, pure and applied. Our first published work, by yours truly, examines sequences of dependent random variables and constructs a new probability distribution that analytically handles correlated categorical random variables. The next paper is the first part of a Master’s thesis by Jonathan Johnson, currently a PhD student at the University of Texas at Austin, discussing summation chains of sequences. Future work will touch on queueing theory, reliability theory, algebraic statistics, and anything else that needs a home and an audience.


Mathematics is that underground river that nurtures every other branch of science and engineering. My hope is that, by making these theoretical and foundational works accessible and enjoyable to consume, we can spark innovative ideas and applications by our readers in any area they can think of.


I also want to take the time to acknowledge those who helped the Research Cortex go from a mathematician’s lofty ideal to a tangible (sort of) object. Mariah Arevalo, a software engineer in the ELDP program at Dell EMC, is the site administrator, designer, social media manager, and holder of other titles I’m sure I’ve missed. I’ll also throw a quick shout-out to Jason Hathcock for his assistance in video design and production, and music composition.


We are very proud of the Data Cortex’s new brother, and hope you will bookmark www.theresearchcortex.com and visit regularly to check out all our new content.


~Rachel Traylor @Mathopocalypse

12th USENIX Symposium on Operating Systems Design and Implementation

OSDI’16 was held in early November in Savannah, GA. It’s a very competitive conference, accepting 18% of what is already by and large a set of very strong papers. They shortened the talks and lengthened the conference to fit in 47 papers, which is well over twice the size of the conference when it started with 21 papers in 1994. (Fun fact: I had a paper in the first conference, but by the time we submitted the paper, not a single author was still affiliated with the company where the work was performed.) This year there were over 500 attendees, which is a pretty good number for a systems conference, and as usual it was like “old home week” running into past colleagues, students, and faculty.


There are too many papers at the conference to say much about many of them, but I will highlight a few, as well as some of the other awards.


Best Papers

There were three best paper selections. The first two are pretty theoretical as OSDI papers go, though verification and trust are certainly recurring themes.


Push-Button Verification of File Systems via Crash Refinement

Helgi Sigurbjarnarson, James Bornholt, Emina Torlak, and Xi Wang, University of Washington

This work uses a theorem prover to verify file systems automatically. The “push-button verification” refers to letting the system reason about correctness without manual intervention. The idea of “crash refinement” is to allow the implementation additional states (for example, those reachable after a crash), as long as they are equivalent to states allowed by the specification.


Ryoan: A Distributed Sandbox for Untrusted Computation on Secret Data

Tyler Hunt, Zhiting Zhu, Yuanzhong Xu, Simon Peter, and Emmett Witchel, The University of Texas at Austin

Ryoan leverages the Intel secure processing enclave to try and build a system that enables private data to be computed upon in the cloud without leaking it to other applications.


Early Detection of Configuration Errors to Reduce Failure Damage

Tianyin Xu, Xinxin Jin, Peng Huang, and Yuanyuan Zhou, University of California, San Diego; Shan Lu, University of Chicago; Long Jin, University of California, San Diego; Shankar Pasupathy, NetApp, Inc.

PCHECK is a tool that stresses systems to try to uncover “latent” errors that otherwise would not manifest themselves for a long period of time. In particular, configuration errors are often not caught because they don’t involve the common execution path. PCHECK can analyze the code to add checkers to run at initialization time, and it has been found empirically to identify a high fraction of latent configuration errors.


Some of the Others

Here are a few other papers I thought either might be of particular interest to readers of this blog, or which I found particularly cool.


TensorFlow: A System for Large-Scale Machine Learning

Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, Manjunath Kudlur, Josh Levenberg, Rajat Monga, Sherry Moore, Derek G. Murray, Benoit Steiner, Paul Tucker, Vijay Vasudevan, Pete Warden, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng, Google Brain

TensorFlow is a tool Google uses for machine learning, using dataflow graphs. Google has open-sourced the tool (www.tensorflow.org) so it’s gaining traction in the research community. The talk was primarily about the model and performance. Since I know nothing about machine learning, I include this here only because it had a lot of hype at the conference, and not because I have much to say about it. (Read the paper.)


Shuffler: Fast and Deployable Continuous Code Re-Randomization

David Williams-King and Graham Gobieski, Columbia University; Kent Williams-King, University of British Columbia; James P. Blake and Xinhao Yuan, Columbia University; Patrick Colp, University of British Columbia; Michelle Zheng, Columbia University; Vasileios P. Kemerlis, Brown University; Junfeng Yang, Columbia University; William Aiello, University of British Columbia

This is another security-focused paper, but it was focused on a very specific attack vector. (And I have to give the presenter credit for making it understandable even to someone with no background in this sort of issue.) The idea behind return-oriented programming is that an attacker finds snippets of code and strings them together into a malicious sequence of instructions. The defense here is to move the code around faster than the attacker can do this. It indirects through a function pointer table, so one can find functions via an index, while the table itself isn’t disclosable in user space.

Interestingly, the shuffler runs in the same address space, so has to shuffle its own code to protect it. In all, a neat idea, and an excellent talk.
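
The indirection trick can be sketched in a toy model (this is just an illustration of the idea, not Shuffler’s actual mechanism, which operates on machine code and real addresses): callers hold only an index, so functions can be relocated continuously as long as the table is patched.

```python
import itertools

# Fake address space: address -> function. Callers never hold addresses,
# only indices into `table`, so the "code" can move at any time.
memory = {}
table = []
_next_addr = itertools.count(0x1000)  # deterministic fresh "addresses"

def load(funcs):
    # Place each function at a fresh address and record it in the table.
    for f in funcs:
        addr = next(_next_addr)
        memory[addr] = f
        table.append(addr)

def shuffle():
    # Relocate every function to a fresh address and patch the table.
    # An attacker who learned an old address now points at nothing.
    for i, addr in enumerate(table):
        new_addr = next(_next_addr)
        memory[new_addr] = memory.pop(addr)
        table[i] = new_addr

def call(index):
    # Callers always go through the table, never through a raw address.
    return memory[table[index]]()
```

Calling `call(0)` returns the same result before and after `shuffle()`, even though the underlying “address” has changed and the old one is gone.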


EC-Cache: Load-Balanced, Low-Latency Cluster Caching with Online Erasure Coding

V. Rashmi, University of California, Berkeley; Mosharaf Chowdhury and Jack Kosaian, University of Michigan; Ion Stoica and Kannan Ramchandran, University of California, Berkeley

I’ll start by pointing out this is the one talk that was presented via recording (the primary author couldn’t travel). The technology for the presentation was excellent: the image of the speaker appeared in a corner of the video, integrated into the field of vision much better than what I’ve seen in tools like Webex. Rather than the speaker taking questions by audio, though, a coauthor was present to handle questions in person.

EC-Cache gains both increased reliability and improved performance via erasure coding (EC) rather than full replicas. It gets better read performance by reading K+delta units when it needs only K to reconstruct an object, then using the first K that arrive. (Eric Brewer spoke of a similar process at Google in his FAST’17 keynote.) Even with delta equal to just 1, this improves tail latency considerably.

One of the other benefits of EC over replication is that replication creates integral multiples of data, while EC allows fractional overhead. Note, though, that this is for read-mostly data – the overhead of EC for read-write data would be another story.
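
The read trick is easy to see with the simplest possible erasure code, a single XOR parity unit (EC-Cache itself uses more general codes; this toy sketch only shows that any K of K+1 units suffice, so a reader can take the first K to arrive and ignore the straggler):

```python
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data: bytes, k: int):
    """Split data into k equal chunks and append one XOR parity chunk."""
    chunk = -(-len(data) // k)                # ceiling division
    padded = data.ljust(chunk * k, b"\0")
    units = [padded[i * chunk:(i + 1) * chunk] for i in range(k)]
    return units + [reduce(xor, units)]       # k data units + 1 parity unit

def decode(units, k: int, orig_len: int):
    """Reconstruct from any k of the k+1 units (a missing unit is None)."""
    if None in units:
        missing = units.index(None)
        # XOR of all k+1 units is zero, so XOR-ing the k that arrived
        # regenerates the one that didn't.
        rebuilt = reduce(xor, [u for u in units if u is not None])
        units = list(units)
        units[missing] = rebuilt
    return b"".join(units[:k])[:orig_len]
```

With K = 4 the cache stores 5 units, a 1.25x overhead, versus 2x for keeping even a single full replica; that is the fractional-overhead point above.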


To Waffinity and Beyond: A Scalable Architecture for Incremental Parallelization of File System Code

Matthew Curtis-Maury, Vinay Devadas, Vania Fang, and Aditya Kulkarni, NetApp, Inc.

This work was done by the FS performance team at NetApp and was IMHO the most applied paper as well as the one nearest and dearest to Dell EMC. Because NetApp is a competitor, I hesitate to go into too many details for fear of mischaracterizing something. The gist of the paper was that NetApp needed to take better advantage of multiprocessing in a system that wasn’t initially geared for that. Over time, the system evolved to break files into smaller stripes that could be operated on independently; then additional data structures were partitioned for increased parallelism; finally, finer-grained locking was added to work in conjunction with the partitioning.


Kraken: Leveraging Live Traffic Tests to Identify and Resolve Resource Utilization Bottlenecks in Large Scale Web Services

Kaushik Veeraraghavan, Justin Meza, David Chou, Wonho Kim, Sonia Margulis, Scott Michelson, Rajesh Nishtala, Daniel Obenshain, Dmitri Perelman, and Yee Jiun Song, Facebook Inc.

This was one of my favorite talks. Facebook updates their system multiple times per day. They need to safely determine the peak capacity across different granularities (web server, cluster, or region) and back off when experiencing degradation. They use this to identify things like inefficient load balancing. After identifying hundreds of bottlenecks, they could serve 20% more customers with the same infrastructure.



It is worth a quick shout-out to the various people recognized with other awards at the conference. Ant Rowstron at Microsoft Cambridge won the Weiser award for best young researcher. Vijay Chidambaram, a past student of Andrea and Remzi Arpaci-Dusseau at the University of Wisconsin–Madison, won the Ritchie thesis award for “Orderless and Eventually Durable File Systems”; Charles M. Curtsinger won Honorable Mention. Finally, BigTable won the “test of time” award 10 years after it was published.


~Fred Douglis @FredDouglis

The Who Do You Trust Cortex


Travelling through the Information Technology industry at warp speed this week Inside the Data Cortex, Stephen and Mark traverse the conversational universe.

  • Inane SNL conversation and whatever happened to Will Ferrell?
  • Melissa McCarthy’s return on investment.
  • Does development team size affect the making of a hit product?
  • Why you should not build big at the beginning.
  • All leadership is communication.
  • Who do you trust? Can trust be designed out of product development?
  • All day and all night it’s Enterprise Copy Data Management.
  • The most important thing to know about Enterprise Copy Data Management.
  • Who is wasting our reading time this month?

Download this episode (right click and save)

Subscribe to this on iTunes

Get it from Podbean

Follow us on Pocket Casts
Stephen Manley @makitadremel Mark Twomey @Storagezilla

Managing your computing ecosystem Pt. 3



The prospect of universal and interoperable management interfaces is closer to reality than ever. Not only is infrastructure converging, but so is the control and management plane. Last time, we discussed Redfish for managing hardware platforms. This time we will talk about Swordfish for managing storage.


The goal of Swordfish is to provide scalable storage management interfaces. The interfaces are designed to provide efficient, low-footprint management for simple direct-attached storage, with the ability to scale up to easy-to-use management across cooperating enterprise-class storage servers in a storage network.

The Swordfish Scalable Storage Management API specification defines extensions to the Redfish API. Thus a Swordfish service is at the same time a Redfish service. These extensions enable simple, scalable, and interoperable management of storage resources, ranging from direct attached to complex enterprise class storage servers. These extensions are collectively named Swordfish and are defined by the Storage Networking Industry Association (SNIA) as open industry standards.

Swordfish extends Redfish in two principal areas. The first is the introduction of the management and configuration based on service levels. The other is the addition of management interfaces for higher level storage resources. The following sections provide more detail on each.

Service based management

Swordfish interfaces allow clients to get what they want without having to know how the implementation produces the results. As an example, a client might want storage protected so that no more than 5 seconds of data is lost in the event of some failure. Instead of specifying implementation details like mirroring, clones, snapshots, or journaling, the interface allows the client to request storage with a recovery point objective of 5 seconds. The implementation then chooses how to meet that requirement.

The basic ideas are borrowed from ITIL (a set of practices for IT service management that focuses on aligning IT services with the needs of business) and are consistent with ISO/IEC 20000.

A Swordfish line of service describes a category of requirements. Each instance of a line of service describes a service requirement within that category. The management service will typically be configured with a small number of supported choices for each line of service. The service may allow an administrator to create new choices if it is able to implement and enforce that choice. To take an example from airlines, you have seating as one line of service with choices of first, business, and steerage. Another line of service could be meals, with choices like regular, vegetarian, and gluten free. Lines of service are meant to be independent from each other. So, in our airline example, we can mix any meal choice with any seating choice.
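
The airline analogy maps directly onto a few lines of code (a toy model for illustration, not the Swordfish schema): each line of service is an independent category, so every combination of choices is a valid offering.

```python
from itertools import product

# Each line of service is an independent category with a small,
# administrator-configured set of choices.
lines_of_service = {
    "Seating": ["first", "business", "steerage"],
    "Meals": ["regular", "vegetarian", "gluten free"],
}

# Independence means any combination of choices can be offered together.
offerings = [dict(zip(lines_of_service, combo))
             for combo in product(*lines_of_service.values())]
```

Here that yields 3 x 3 = 9 distinct offerings; adding a third independent line of service would simply multiply the count again.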

Swordfish provides three lines of service covering requirements for data storage (protection, security, and storage), and two lines of service covering requirements for access to data storage (connectivity and performance). Swordfish leaves the specification of specific choices within each of these lines of service to management service implementations.

A Swordfish class of service resource describes a service level agreement (SLA). If an SLA is specified for a resource, the service implementation is responsible for assuring that level of service is provided. For that reason, the management service will typically advertise only a small number of SLAs. The service may allow an administrator to create new SLAs if it is able to implement and enforce that agreement. The requirements of an SLA represented by a class of service resource are defined by a small set of line of service choices.
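
A class of service, then, is just a named bundle of line-of-service choices. As a sketch (the property names here are illustrative, not copied from the Swordfish schema), a “Gold” SLA might bundle the 5-second recovery point objective from the earlier example with a couple of other choices:

```python
# Hypothetical class-of-service resource: an SLA defined by a small set
# of line-of-service choices. The implementation, not the client, decides
# how to meet each requirement (mirroring, journaling, snapshots, ...).
gold = {
    "Name": "Gold",
    "LinesOfService": {
        "DataProtection": {"RecoveryPointObjective": "PT5S"},  # lose <= 5s
        "IOPerformance": {"MaxLatencyMilliseconds": 5},
        "DataSecurity": {"EncryptAtRest": True},
    },
}

def rpo_seconds(class_of_service):
    # Parse the ISO 8601 duration used above (handles only "PTnS" here).
    value = class_of_service["LinesOfService"]["DataProtection"][
        "RecoveryPointObjective"]
    return int(value.removeprefix("PT").removesuffix("S"))
```

A client that asks for “Gold” never needs to know whether the 5-second RPO is met with journaling or with snapshots.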

Swordfish storage

Swordfish starts with Redfish definitions and then extends them. Redfish specifies drive and memory resources from a hardware-centric point of view. Redfish also specifies volumes as block-addressable storage composed from drives. Redfish volumes may be encrypted. Swordfish then extends volumes and adds filesystems, file shares, storage pools, storage groups, and a storage service. (Object stores are intended to be added in the future.)

A storage service provides a focus for management and discovery of the storage resources of a system. Two principal resources of the storage service are storage pools and storage groups.

A storage pool is a container of data storage capable of providing capacity that conforms to a specified class of service. A storage pool does not support IO to its data storage. The storage pool acts as a factory to provide storage resources (volumes, file systems, and other storage pools) that have a specified class of service. The capacity of a storage pool may come from multiple sources, which are not all required to be of the same type. The storage pool tracks allocated capacity and may provide alerts when space is low.
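
A minimal model of the pool-as-factory idea (hypothetical, for illustration only): the pool hands out volumes that conform to its class of service, tracks allocated capacity, and raises an alert when free space runs low.

```python
class StoragePool:
    """Toy storage pool: a factory for volumes with a class of service."""

    def __init__(self, capacity_gb, class_of_service, low_space_pct=10):
        self.capacity_gb = capacity_gb
        self.allocated_gb = 0
        self.class_of_service = class_of_service
        self.low_space_pct = low_space_pct
        self.alerts = []

    def provision_volume(self, size_gb):
        """Factory method: create a volume conforming to the pool's CoS."""
        if self.allocated_gb + size_gb > self.capacity_gb:
            raise ValueError("pool exhausted")
        self.allocated_gb += size_gb
        free_pct = 100 * (self.capacity_gb - self.allocated_gb) \
            / self.capacity_gb
        if free_pct < self.low_space_pct:
            self.alerts.append(f"low space: {free_pct:.0f}% free")
        return {"SizeGB": size_gb, "ClassOfService": self.class_of_service}
```

Every volume the pool produces inherits the pool’s class of service, which is the essence of the factory role described above.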

A storage group is an administrative collection of storage resources (volumes or file shares) that are managed as a group. Typically, the storage group would be associated with one or more client applications. The storage group can be used to specify that all of its resources share the characteristics of a specified class of service. For example, a class of service specifying data protection requirements might be applied to all of the resources in the storage group.

One primary purpose of a storage group is to support exposing or hiding all of the volumes associated with a particular application. When exposed, all clients can access the storage in the group via the specified server and client endpoints. The storage group also supports storage (or crash) consistency across the resources in the storage group.

Swordfish extends volumes and adds support for file systems and file shares, including support for both local and remote replication. Each type supports provisioning and monitoring by class of service. The resulting SLA-based interface is a significant improvement for clients over the current practice, where the client must know the individual configuration requirements of each product in the client’s ecosystem. Each storage service lists the filesystems, endpoints, storage pools, storage groups, drives, and volumes that are managed by the storage service.


These three specifications should form the basis for any Restful system management solution.

As a starting point, OData provides a uniform interface suitable for any data service. It is agnostic to the functions of the service, but it supports inspection of an entity data model via an OData conformant metadata document provided by the service. Because of the generic functionality of the Restful style and with the help of inspection of the metadata document, any OData client can have both syntactic and semantic access to most of the functionality of an OData service implementation. OData is recommended as the basis for any Restful service.

Redfish defines an OData data service that provides a number of basic utility functions as well as hardware discovery and basic system management functions. A Redfish implementation can be very lightweight. All computing systems should implement a Redfish management service. This recommendation runs the gamut from very simple devices in the IoT space up to enterprise-class systems.

Finally, Swordfish extends the Redfish service to provide service based storage management. A Swordfish management service is recommended for all systems that provide advanced storage services, whether host based or network based.

Universal, interoperable management based on well-defined, supported standards. It may still seem like an impossible hope to some. Every day, however, we move closer to a more standard, more manageable infrastructure environment.

~George Ericson @GEricson

Security vs Protection – The Same, but Different

Though the words “security” and “protection” are mostly interchangeable in regular use of the English language, when talking about data, it’s a different story.

When we talk about data security, we are referring to securing data from becoming compromised due to an external, premeditated attack. The most well-known examples are malware and ransomware attacks.

Data protection, however, refers to protecting data against corruption usually caused by an internal factor such as human error or hardware failures. We generally address data protection by way of backup or replication – creating accessible versions of the data that may be stored on different media and in various locations.

Of course, these backups can be used for data recovery in either scenario.


Under attack

We have seen a dramatic rise in ransomware attacks in recent years, with startling results. According to the FBI, in Q1 of 2016, victims paid $209M to ransomware criminals. Intermedia reported that 72% of companies infected with ransomware cannot access their data for at least 2 days, and 32% lose access for 5 days or more. According to a July 2016 Osterman Research Survey, nearly 80 percent of organizations breached have had high-value data held for ransom.


So what is ransomware?

Ransomware is a form of malware that is covertly installed on a victim’s computer and adversely affects it, often by encrypting the data and making it unavailable until a ransom is paid to receive the decryption key or prevent the information from being published.

Most infamously, Sony fell victim two years ago to a crippling attack that shut down its computers and email systems and sensitive information was published on the web. The Sony breach was a watershed moment in the history of cyber attacks. It is believed that the attackers were inside Sony’s network for over 6 months, giving them plenty of time to map the network and identify where the most critical data was stored.

The attack unfolded over a 48-hour period. It began by destroying Sony’s recovery capability: backup media targets and the associated master and media servers were destroyed first. Only after it had crippled the recovery capabilities did the attack move on to the DR and production environments. When Sony recognized the attack, they turned to their data protection infrastructure to restore the damaged systems. However, they had lost their ability to recover. Sony was down for over 28 days and never recovered much of its data.

In Israel, the Nazareth Illit municipality was recently paralyzed by ransomware. Its critical data was locked until the municipality paid the attackers the ransom price.



What do we propose?

While Dell EMC offers a range of products and solutions for backup and recovery on traditional media such as tape and disk, data increasingly sits in publicly accessible domains such as networks, heightening the threat to data security. To address this shift in data storage, in particular the growing trend toward application development and storage in the cloud, Dell EMC is drawing on its decades of experience securing data under the most stringent requirements, and on the most robust and secure technology set in the market, to architect and implement new solutions. These technologies will lock hackers out of critical data sets and secure a path to quick business recovery. One such solution is the Isolated Recovery Solution (IRS).

IRS 101

Essentially, IRS creates an isolated environment to protect data from deletion and corruption while allowing for a quick recovery time. It comprises the following concepts:

  • Isolated systems so that the environment is disconnected from the network and restricted from users other than those with proper clearance.
  • Periodic data copying whereby software automates data copies to secondary storage and backup targets. Procedures are put in place to schedule the copy over an air gap* between the production environment and the isolated recovery area.
  • Workflows to stage copied data in an isolated recovery zone and periodic integrity checks to rule out malware attacks.
  • Mechanisms to trigger alerts in the event of a security breach.
  • Procedures to perform recovery or remediation after an incident.

*What is an air gap?

An air gap is a security measure that isolates a computer or network and prevents it from establishing an external connection. An air-gapped computer is neither connected to the Internet nor any systems that are connected to the Internet. Generally, air gaps are implemented where the system or network requires extra security, such as classified military networks, payment networks, and so on.

Let’s compare an air gap to a water lock used for raising and lowering boats between stretches of water of different levels on a waterway. A boat that is traveling upstream enters the lock; the lower gates are closed; the lock is filled with water from upstream, causing the boat to rise; then the upper gates are opened and the boat exits the lock.

In order to transfer data securely, air gaps are opened for scheduled periods of time during actual copy operations to allow data to move from the primary storage to the isolated storage location. Once the replication is completed, the air gap is closed.
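
A toy sketch of that copy window (illustrative only, not any product’s actual implementation): like the lock gates, the gap opens only while the scheduled copy runs and closes again as soon as replication completes.

```python
class AirGap:
    """Toy model of an air-gapped vault: the link to production exists
    only while a scheduled copy is in progress."""

    def __init__(self):
        self.connected = False

    def scheduled_copy(self, production: dict, vault: dict):
        self.connected = True            # open the gap for the copy window
        try:
            vault.update(production)     # replicate to the isolated vault
        finally:
            self.connected = False       # close the gap, even on failure
```

Between copy windows `connected` stays False, so an attacker who controls the production network has no path to the vault.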


Dell EMC’s Data Domain product currently offers a retention lock feature preventing the deletion of files until a predefined date. IRS takes such capabilities further. The solution will continue to evolve to simplify deployment and provide security against an even broader range of attacks (rogue IT admins, for example). IRS solutions will make life more difficult for hackers and data more secure. In IT, “security” and “protection” have been treated as two independent, orthogonal concepts. The new, destructive style of attacks changes that relationship: the two teams must partner to make a coherent solution.


~Assaf Natanzon @ANatanzon

Managing Your Computing Ecosystem Pt. 2



We are making strides toward universal and interoperable management interfaces. These are not only interfaces that will interoperate across one vendor or one part of the stack, but management interfaces that can truly integrate your infrastructure management. Last time, we discussed OData, the Rest standardization. This time we will talk about Redfish for managing hardware platforms.

Redfish

Redfish defines a simple and secure, OData conformant data service for managing scalable hardware platforms. Redfish is defined by a set of open industry standard specifications that are developed by the Distributed Management Task Force, Inc. (DMTF).

The initial development was from the point of view of a Baseboard Management Controller (BMC) or equivalent. Redfish management currently covers bare-metal discovery, configuration, monitoring, and management of all common hardware components. It is capable of managing and updating installed software, including for the operating system and for device drivers.

Redfish is not limited to low-level hardware/firmware management. It is also expected to be deployed to manage higher-level functionality, including configuration and management of containers and virtual systems. In collaboration with the IETF, Redfish is also being extended to include management of networks.

The Redfish Scalable Platforms Management API Specification specifies functionality that can be divided into three areas: OData extensions, utility interfaces, and platform management interfaces. These are described briefly in the following sections.

Redfish OData extensions

Redfish requires at least OData v4 and specifies some additional constraints:

  • Use of HTTP v1.1 is required, with support for POST, GET, PATCH, and DELETE operations, including requirements on many HTTP headers
  • JSON representations are required within payloads
  • Several well-known URIs are specified
    • /redfish/v1/ returns the ServiceRoot resource for locating resources
    • /redfish/v1/OData/ returns the OData service document for locating resources
    • /redfish/v1/$metadata returns the OData metadata document for locating the entity data model declarations.
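
Those well-known URIs are enough to bootstrap a client. A sketch (the host is made up, and the response body below is a trimmed illustration rather than a full ServiceRoot payload):

```python
import json

BASE = "https://bmc.example.com"          # hypothetical management endpoint
SERVICE_ROOT = BASE + "/redfish/v1/"
ODATA_SERVICE_DOC = BASE + "/redfish/v1/OData/"
ODATA_METADATA = BASE + "/redfish/v1/$metadata"

# A trimmed example of what GET /redfish/v1/ might return: the service
# root links out to top-level resource collections via @odata.id
# references.
service_root = json.loads("""
{
  "@odata.id": "/redfish/v1/",
  "Systems": {"@odata.id": "/redfish/v1/Systems"},
  "Chassis": {"@odata.id": "/redfish/v1/Chassis"},
  "SessionService": {"@odata.id": "/redfish/v1/SessionService"}
}
""")

# Navigation is uniform: follow the @odata.id of any linked resource.
systems_url = BASE + service_root["Systems"]["@odata.id"]
```

Because every linked resource carries an `@odata.id`, a client needs no hard-coded knowledge beyond these entry points to walk the whole model.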

Redfish also extends the OData metamodel with an additional vocabulary for annotating model declarations. The annotations specify information about, or behaviors of the modeled resources.

Redfish utility interfaces

The utility interfaces provide functionality that is useful for any management domain (for example, these interfaces are used by Swordfish for storage management). These interfaces include account, event, log, session, and task management.

The account service manages access to a Redfish service via manager accounts and roles.

The event service provides the means to specify events and to subscribe to indications when a defined event occurs on a specified set of resources. Each subscription specifies where indications are sent; this can be a listening service or an internal resource (e.g., a log service).

Each log service manages a collection of event records, including size and replacement policies. Resources may have multiple log services for different purposes.

The session service manages sessions and enables creation of an X-Auth-Token representing a session used to access the Redfish service.
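
The login flow looks roughly like this (no network calls here, we only build the requests a client would send; the host and credentials are invented for illustration):

```python
def build_login_request(base_url: str, username: str, password: str) -> dict:
    # POST to the sessions collection; on success the service returns the
    # session token in the X-Auth-Token response header.
    return {
        "method": "POST",
        "url": base_url + "/redfish/v1/SessionService/Sessions",
        "json": {"UserName": username, "Password": password},
    }

def build_authed_request(base_url: str, path: str, token: str) -> dict:
    # Every subsequent request presents the session token in its headers.
    return {
        "method": "GET",
        "url": base_url + path,
        "headers": {"X-Auth-Token": token},
    }
```

Deleting the session resource created by the POST is how a client logs out, which fits the uniform resource-oriented style of the rest of the API.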

The task service manages tasks that represent independent threads of execution known to the Redfish service. Typically, tasks are spawned as the result of a long-running operation.

The update service provides management of firmware and software resources, including the ability to update those resources.

Redfish platform management interfaces

The principal resources managed by a Redfish service are chassis, computer systems and fabrics. Each resource has its current status. Additionally, each type of resource may have references to other resources, properties defining the current state of the resource, and additional actions as necessary.

Each chassis represents a physical or logical container. It may represent a confined sheet-metal space like a rack, sled, shelf, or module. Or, it may represent a logical space like a row, pod, or computer room zone.

Each computer system represents a computing system and its software-visible resources, such as memory, processors, and other devices that can be accessed from that system. The computer system can be a general-purpose system or a specialized system like a storage server or a switch.

Each fabric represents a collection of zones, switches and related endpoints. A zone is a collection of involved switches and contained endpoints. A switch provides connectivity between a set of endpoints.

All other subsystems are represented as resources that are linked via one or more of these principal resources. These subsystems include: bios, drives, endpoints, fans, memories, PCIe devices, ports, power, sensors, processors and various types of networking interfaces.


Redfish delivers a standardized management interface for hardware resources. While it is beginning with basic functionality like discovery, configuration and monitoring, it will deliver much more. It will extend into both richer services and cover more than physical resources – e.g. virtual systems, containers, and networks. Redfish is built as an OData conformant service, which makes it the second connected part of an integrated management API stack. Next up – Swordfish.

~George Ericson @GEricson