Honoring Dell EMC’s Data Protection and Storage Technical Directors

Everything changes. Organizational structures, company names, and, of course, technology. For technical companies to survive, however, one thing cannot change. We need technical leaders who can turn changing technology into new products that solve our customers’ problems. Dell EMC has replaced the Core Technologies Division with the Data Protection Division and the Storage Division, but we are still building the core of customers’ data infrastructure.

Therefore, every quarter, we recognize the newest Dell EMC Data Protection and Storage Technical Directors. These are senior technical leaders who have delivered sustained business impact through products, solutions, and customer experience. They are the engine of the company. The previous recipients are detailed here and here.

John Adams – John helps deliver VMAX performance that matters – performance for our customers’ most important applications. He’s demonstrated and optimized performance in the most demanding customer environments. He then drives customer-critical performance work into the engineering team – from evaluating large flash drives to host LUN scaling to dynamic cache partitioning. His skill spans from Unisys to EPIC (a leading health care database/application). John is the go-to person who connects customers, customer support, and engineering on performance needs.

Michael Barber – Michael is the rare quality engineer who truly is the customers’ advocate. First, Michael understands that customers buy VMAX replication to ensure that their mission-critical data is always safe. Since customer environments are constantly under load, facing all manner of unusual circumstances (especially in a disaster), Michael has built a tool that validates data consistency while generating all of those unusual situations. The tool is used across VMAX and much of the rest of the company. Michael also reviews and influences core features to ensure they meet customers’ standards and needs. VMAX customers have Michael Barber on their side.

Fred Douglis – Fred has led Dell EMC’s Data Protection Division’s academic strategy, while also driving research into the product. Under Fred’s guidance, Dell EMC has consistently published in prestigious conferences and journals. This work has helped advance the state of the art in deduplication research and development. He has also built strong relationships with leading universities like the University of Chicago, Northeastern, and the University of Wisconsin-Madison. His contributions to the industry have also been recognized. Fred is an IEEE Fellow and is currently serving on the Board of Governors for the IEEE Computer Society. Finally, the innovation of Data Domain BOOST-FS has enabled customers to more easily and rapidly protect custom and big data apps.

Martin Feeney – Martin helps the largest customers in the world run their most mission-critical applications. As an expert in both FICON and VMAX, Martin has helped our mainframe customers get reliable access and predictable performance from their storage. He was instrumental in unifying the data format and storage algorithms for VMAX Mainframe and Open Systems support. This enables our customers to get better performance, functionality, and reliability more quickly. Martin was also responsible for optimizing VMAX2 performance while delivering the Mainframe Host Adapter optimizations for the VMAX3 platform. As customers continue to run their most important workloads on Mainframe, Martin keeps those applications running optimally.

Simon Gordon – Simon has been the Product Management lead for ProtectPoint for VMAX and XtremIO. Our most innovative customers deploy ProtectPoint to protect, refresh, and test some of their largest and most mission-critical databases – like Oracle, DB2, Microsoft SQL Server, and EPIC. Simon has been instrumental in connecting customers, the field, application partners, and our engineering teams so that we can deliver a comprehensive protection solution built on top of revolutionary technology.

Colin Johnson – Colin, an expert in user experience design, has been the UX leader for Data Domain Management Console, Data Domain Long Term Retention in the Cloud, and ProtectPoint for XtremIO. Colin’s expertise in user experience, visual design, customer interaction, and data protection has enabled the Data Protection Division to deliver products that are easier for our customers to use across cloud, replication, multi-tenancy, and next-generation data protection.

Jerry Jourdain – Jerry has been the driving technical force behind Dell EMC’s email archiving solutions. Jerry co-created Dell EMC’s initial industry-leading email archiving product, EmailXtender, and then was Chief Architect of the follow-on SourceOne product. Thousands of customers depend on Dell EMC to protect their most critical information for compliance, legal, or business needs. Jerry ensures that we can address their most challenging compliance and retention needs.

Amit Khanna – Amit has been modernizing data protection for NetWorker customers. He was the force behind NetWorker’s vProxy support – standalone, re-purposable, fully RESTful protection for VMware. Amit began by integrating Data Domain BOOST into NetWorker and tying together NetWorker and Data Domain replication. He then delivered the policy-based management for NetWorker 9, which allows customers to move toward Backup as a Service. His work on CloudBoost allows customers to back up both to the cloud and in the cloud. Amit’s work has made NetWorker a core part of modern data protection.

Ilya Liubovich – Over the past couple of years, VMAX customers have raved about how much easier it is to manage their systems. Ilya led one of the biggest optimizations, Unisphere 360 for VMAX. It is already attached to the majority of new VMAX systems, simplifying the management of their most critical storage. Furthermore, as security becomes an even more important issue in the world, Ilya has led the security standards for the management software – ensuring compliance with the highest standards, without intruding on the customer experience. With Ilya’s work, VMAX delivers high-end storage functionality with greater simplicity.

Prasanna Malaiyandi – Prasanna, a Data Protection Solution Architect, has led both ProtectPoint and ECDM from inception to delivery. ProtectPoint directly backs up VMAX and XtremIO systems to Data Domain, delivering up to 20x faster data protection than any other method. ECDM enables IT organizations to deliver Data Protection as a Service. Protection teams centrally control data protection, while allowing application, VM, and storage administrators to back up and recover data on their own, using high performance technologies like ProtectPoint and DD BOOST. Prasanna connected disparate products to bring Dell EMC products together somewhere other than the purchase order.

Jeremy O’Hare – Jeremy has delivered core VMAX functionality that separates it from every other product in the marketplace. Most recently, Jeremy led the creation of VMAX compression that delivers space savings with unparalleled performance in the industry. He’s also been instrumental to Virtual LUNs (VLUN), which enabled the groundbreaking FAST functionality. As a technical leader, Jeremy stands out for being able to bring solutions across teams. Compression touches virtually every part of the VMAX, and Jeremy drove development and QA efforts across all of the groups, so that our customers enjoy compression without compromise on their VMAX systems.

Kedar Patwardhan – Kedar enables Avamar customers to solve their biggest, most challenging backup problems. First, Kedar created the only traditional file-system backup that doesn’t need to scan the file system. Customers with large file servers can scale their backups without compromising on functionality. Second, Kedar delivered OpenStack integration to protect some of our largest customers’ data. Third, the integration with vRA enables Dell EMC’s customers to manage their protection from VMware interfaces. From the largest file systems to OpenStack to large VMware deployments, Kedar’s work enables us to deliver simple, scalable data protection.

Rong Yu – Rong is responsible for key algorithmic and architectural improvements to Symmetrix systems. First, he delivered a Quality of Service (QoS) framework that delivers customer-defined Service Level Objectives while meeting the needs of internal operations like cloning and drive rebuild. He overhauled the prefetching model to leverage knowledge of the host/application access patterns. He continues to help optimize RDF performance. Most recently, he developed the new middleware layer in the VMAX system that has enabled new features (like compression) and performance optimizations (such as optimizing cache read misses). Customers depend on VMAX for reliable, predictable high performance regardless of the situation. Rong’s work helps ensure that VMAX meets and exceeds those expectations.

Congratulations and thanks to the new and existing Dell EMC Technical Directors. You are the engine of Dell EMC!

~Stephen Manley @makitadremel

Honoring Dell EMC’s Core Technologies Technical Directors

In the modern business world, executives get all the external recognition. It’s just a few weeks into the Dell acquisition of EMC, and most people already know names like Marius Haas, Jeff Clarke, David Goulden, Howard Elias, and Rory Read. Some of them even have their own Wikipedia pages.

A company like Dell EMC, however, cannot succeed without people who design, build, and ship the products that the executives talk about. Therefore, every quarter, we recognize the newest Dell EMC Core Technologies Technical Directors. These are senior technical leaders who have delivered sustained business impact through products, solutions, and customer experience. They are the engine of the company. The previous recipients are detailed here.

Of course, Core Technologies continues to deliver innovative solutions, so we continue to expand the roster of Technical Directors. This quarter I’m pleased to announce:

Frederic Corniquet – Frederic has been a leader in the NAS protocols for the midrange systems for over a decade. Frederic has been a driving force behind the growth of EMC’s NAS offerings – in both technical strength and customer adoption – from VNX1 to VNX2 to Unity. Frederic’s expertise extends from the NAS protocol to security to integrating with VMware for NFS data stores. As a leader in EMEA, Frederic also evangelizes and connects with some of our biggest customers. Frederic is a technical leader, evangelist, and expert who is growing EMC’s NAS business.

Rajesh Nair – Rajesh has been a leader in NetWorker for over a decade, focused on solving our largest customers’ most difficult backup challenges. He began by working on image-level backups (SnapImage) to solve customers’ large file system backup challenges. He then delivered NetWorker’s NDMP tape solution, solving customers’ large NAS file system backup challenges. Rajesh then led the team to integrate Data Domain BOOST into NetWorker, which solves performance and networking scaling challenges. Today more than half of NetWorker customers leverage BOOST. Rajesh’s decade of innovation, delivery, and leadership has driven NetWorker to be the customers’ choice for the most difficult backup challenges.

Tom Papadakis – Applications. IT teams want to speak to their application owners. Tom has led application-centric data protection for almost 20 years. Tom began by making NetWorker indexes scale for application backup. Then he developed NetWorker’s Oracle integration, which allows DBAs and backup administrators to work independently, while retaining centralized control. Tom also brings a customer- and sales-centric viewpoint to application protection. He spearheaded the creation of NMDA – a package that combines support for multiple applications. The result was a dramatically improved total customer experience. As application protection spans all of data protection, Tom has also brought together Data Domain (via DDBEA integration) and Avamar. Application protection is the present and future of backup, and Tom has been at the front of that mission.

Ian Wigmore – Ian specializes in making products run fast. He began in the Symmetrix Microcode group connecting the Symmetrix to IBM’s S/390 and Z-series mainframes via the FC-2 software layers in the FICON storage director. To say that mainframe applications and users are sensitive to latency is an understatement. Ian’s performance tuning helped the Symmetrix (now VMAX) be the storage of choice for the most demanding mainframe applications. Ian was one of the initial leaders on ProtectPoint for VMAX – a product that improves backup times from VMAX to Data Domain by up to 20x. This has uniquely solved the challenge of large database backups. Fast and simple – whether it is mainframe, migration, or backup – Ian’s work separates EMC technology from the competition.

Dorota Zak – Flexibility of choice. Dorota protects customers’ core applications regardless of the tool. While she began in NetWorker, Dorota was instrumental in expanding Avamar’s application support, adding Sybase and SAP support, which helped solve backup challenges for many enterprises. Dorota then created the framework for DDBEA (which enables application admins to protect their data directly to Data Domain, without using backup software), so that it could quickly support the 6 key databases. That then extended to supporting key applications for Dell EMC’s industry-leading ProtectPoint product (XtremIO and VMAX backing up directly to Data Domain). Dorota even helped SAP design the appropriate backup APIs for SAP HANA; EMC now leads the industry in protecting SAP HANA. Dorota’s work enables our customers to protect their applications however they like.

Dell EMC executives are among the industry’s best. They set our direction and guide the organization through unprecedented company and market changes. Our technical leaders, however, are the absolute best in the industry. It’s always easier to lead when you have the best and the most talent. These Dell EMC Core Technologies Technical Directors are just some of the technical talent who deliver the infrastructure solutions that run the world.

~Stephen Manley @makitadremel

Love in the Data Center

I love the data center.

I’ve heard the responses, too:

  • “You have to say that. Dell EMC, for all your ‘cloud’ talk, is still a data center company.”
  • “Sure, and you also love cassette tapes, flip phones, and encyclopedias.”
  • “How long until you tell the kids to get off your lawn?”

First, I can love both the data center and the cloud. Second, never tell me I love tape. Third, I don’t have kids on my lawn because I can’t get rid of the skunks.

I love the data center because I’m not convinced that everything will standardize and I think technology still matters.

Everything is Not Standard

IT infrastructure won’t standardize because applications and governments won’t standardize. CIOs use the “electricity” analogy when talking about the data center. They want to consume IT as a commodity. The first problem with that aspiration – application developers.

Application developers block standardization because they are the “talent” in modern businesses. Every business is pivoting to become a technology company – e.g. Tesla is “a sophisticated computer on wheels” and lightbulb companies have become software companies for lighting. Therefore, application developers become business-critical talent. When you’re the “talent”, you get what you ask for (witness every professional athlete). This is especially true in organizations where nobody really understands what they do. When an application developer asks for a unique performance profile to support a hot new business application, she will get it. When an application developer needs a nonstandard network configuration, he will get it. When an application developer must store data differently than everybody else, it will happen. Artists are the biggest roadblock to conformity; application developers are the artists in most companies that are becoming “software companies that do X”.

Government compliance regulations will also block standardization. Fifteen years ago, the “highly regulated” IT organizations were in federal government, health care, and financials. Today, every company faces complex regulations. Those regulations vary across countries, and they’re always changing. I’ve met multiple organizations with teams of lawyers who manage regulations – and they still get things wrong. Finally, as businesses span industries and geographies, the compliance expectations can even conflict! In a world where politicians know more about making headlines than they do about technology, standardization can’t happen.

Executives want to treat IT like a commoditized utility. The difference between electricity and IT infrastructure is data. Application developers want to do creative things with data. Government organizations want to regulate data access and retention. As long as the talent and the regulator both expect special treatment, IT-as-a-utility is a myth.

Technology Still Matters

The other nemesis of standardization is innovation. If users demand something different, but everything is effectively the same, then there is little value in trying to bypass standards. In our industry, however, things are still changing rapidly. Hardware innovation drives software innovation, and we’re in the midst of relentless hardware turnover.

The storage media upheaval seems to be accelerating. Ten years ago, Data Domain declared that “Tape is Dead”. Dell EMC declared 2016 to be “The Year of All Flash” for primary storage. Many IT organizations think this is a time to take a deep breath because history says it will be another decade before the media shifts again. I think the next disruption will begin in the next 3 years, not in a decade. The “Disk is Dead” (All-Flash Protection Storage) and “Non-Volatile Memory” (where I/O moves closer to the application) revolutions are coming.

Analytics has also transformed companies’ relationship to IT infrastructure. Successful organizations mine as much information as they can – about their customers, their teams, their processes, and their interactions. When running analytics, the most important ingredient is – DATA. What data are people accessing? What region is leveraging different services? What applications are getting the most load at different times of day? Good companies ask how they can improve every aspect of their business. Great companies answer those questions with concrete data, and then take action. How often have you wanted to know more detail on what was happening somewhere? When you cede control of your IT infrastructure, you lose access to its telemetry. In a world where data and metadata are your most precious assets, why let somebody else have them?

Technology still matters. Performance, cost, scale, and functionality can change in a matter of months. Those changes can mean the difference between launching an application and failing to meet the ROI goals. Meanwhile, analytics enables businesses to better understand how they run internally and connect with customers externally, on a global scale.

To control your destiny, sometimes you need to control your IT infrastructure.

Conclusion

I love the data center. I love building products that power the data center. It has provided the infrastructure for unparalleled growth and invention around the world. Obviously, we need to simplify the data center technologies – to enable our users to deliver value more quickly to their customers. We need to ensure that we’re not simply adding value-free features or products. But the data center is here to stay.

With all that said, I love the cloud. There is enormous value in standardized cloud applications and infrastructure. It’s a great way to develop, explore, and scale. It’s ideal for a variety of applications. But with all the “prodigal son” love that cloud gets, sometimes it’s important to remind the first son how much you love him.

When you choose to standardize, you settle for the least common denominator. In a world filled with constantly changing demands (developers and regulators) and constantly changing supply (innovation and analytics), are you sure you’re ready to settle?

~Stephen Manley @makitadremel

How To Get Things Done

“How can we get anything done across products?”

That was the theme of the 2016 EMC Core Technologies Senior Architect Meeting. Every year, we gather the senior technical leaders to discuss market directions, technology trends, and our solutions. This year included evolving storage media, storage connectivity, Copy Data Management, Analytics, CI/HCI, Cloud, and more. While the technical topics generated discussion and debate, the passion was greatest around – “How can we get anything done across products?” Each Senior Architect got to their position by successfully driving an agenda in their product groups, so they find their lack of cross-product influence to be exceptionally frustrating.

While the challenge may sound unique to senior leaders in a large organization, it’s a variant of the most common question I get from techies of all levels across all companies: “How can I get things done?”

What’s the Value?

Engineers – if your idea does not either generate revenue or save costs, you’re going to have a difficult time generating interest from business leaders, sales, and customers. Everybody loves talking about exciting technology, but they pay for solutions to business problems. Too often, engineers propose projects that customers like, but would not pay for.

An internal team once proposed a project that would make our UI look “cooler”. I asked what it would do for the customer. It wouldn’t eliminate a user task. It wouldn’t help them get more done. But they were convinced it would be more “fun” which would convince more enterprises to buy the product. Not surprisingly, we didn’t pursue that project.

I recently met a startup with very exciting technology, but I couldn’t see how/why anybody would pay for it. The founder looked me in the eye and said, “People will love it so much, that they’ll just send me checks in the mail. But I’ll only cash the big ones, since smaller companies shouldn’t have to pay.” I started laughing at his joke, then felt really guilty (OK, sort of guilty) when I realized he was serious.

As you think about your value, it’s preferable to focus on revenue generation. Customers and executives would rather invest in solutions that increase their revenue than in those that save costs. Cost-saving discussions are either uncomfortable (and then you lay off ‘n’ people) or hard to justify (if you spend a lot of money today, you’ll save even more… in three years). On the other hand, everybody likes talking about generating new revenue.

My Executive Briefing Center sessions often come after either Pivotal or Big Data discussions. The customers are excited about CloudFoundry, analytics, and new development techniques because it allows them to more quickly respond to customers and generate new revenue streams. As I walk in, they’re excitedly inviting the Pivotal presenter to dinner. After I discuss backup or storage, they say, “Thanks, this should help us reduce our costs. We still wish it weren’t so expensive, though.” Oh, and they NEVER invite me to dinner. Because nobody likes the “cost cutting” person. Or nobody likes me. Either one.

What are the Alternatives?

Technical people tend to make three mistakes when pitching an idea.

Mistake 1: Leading the audience through your entire thought process.

First, most senior people don’t have the attention span (I blame a day full of 30-minute meetings) to wait for your conclusion. Quickly set context, then get to the conclusion. Be prepared to support your position, but let them question you; don’t pre-answer everything. Second, most people don’t problem-solve the same way you do, so your “obvious” thought path may not be clear to others. Finally, the longer you talk, the less likely you are to have a conversation. Your audience wants to be involved in a decision; that only happens when they can express their viewpoint and know that you’ve understood it.

Mistake 2: Not presenting actions

Let’s say you’ve made an astounding presentation. The audience is engaged. You’ve had a great discussion. Everybody supports the conclusion. And… you walk away. Too often, engineers forget to add: “And here’s what we need to do.” If you don’t ask for something to be done, nothing will be done.

Mistake 3: Not presenting alternatives

People and executives (some of whom display human characteristics) want to feel like they have some control over things. That means they want to be able to make choices. They also want to believe that you, the presenter, have considered many alternatives before drawing your conclusion. To satisfy both needs, you must present two or three (more than that and it’s overwhelming) legitimate approaches that address the challenge. If you don’t, they’ll feel like you’re trapping them.

One of my worst presentations was titled – “Large file system restores are slow.” I spent an hour walking through 23 slides detailing the pain of restoring large file systems (both by capacity and file count). At the end, the Sr. Director said, “We knew it was slow. That’s why we hired you. Are you saying that we can’t hire someone to solve this, or that we just made the wrong hire?” Now THAT is an example of quickly presenting actionable alternatives.

Who are You Selling To?

As you sell your idea, you need to tailor the pitch to your audience.

  • What actions can you ask for? If your audience doesn’t control resources or roadmaps, then ask them for what they can give – support, personal time, etc. Conversely, if your audience can make decisions, ask for the decision. It’s better to get a “no” than to drift forever.
  • What does your audience care about? Business leaders want to hear about revenue, routes to market, investment costs, etc. Your demo may be the coolest thing ever, but it won’t move them until you get them interested. Technical leaders generally care about both the business and the technology, but be careful about losing them on a deep dive. Technical experts want the deep dive. Engineers want to know what work they need to do.
  • What is their background? If you’re selling an idea to non-experts, you’ll need to spend more time setting context (business, technical, etc.). If you’re talking to experts, don’t waste their time with the basics.

In other words, there is no “one size fits all” presentation. It may be more work to tailor your approach to each audience, but nobody said this was easy.

When I first started working with customers, I would race through my presentation – always doing it the same way. I was too nervous to ask what the audience was interested in hearing. As I talked, I’d never give the audience a chance to respond. I considered myself lucky if the audience sat in silence, so that I could quickly exit, drenched in sweat. One day, I walked into the Briefing Center, saw 2 people in suits sitting there, and rattled through my 30 minute talk. At the conclusion, one of them said, “That was good. That was a lot of the content we want to cover. Just so you know, the customer is running late, but they should be here soon.”

Conclusion

How do you get things done? You convince people. You need to convince business leaders, peers across groups, technical experts, and the engineers who will actually do the work. Whether you’re a new college graduate or a technical leader with decades of experience, the formula doesn’t change:

  • What’s the value?
  • What are the alternatives?
  • Who is the audience?

If you follow these guidelines, you may not always get the decision you like… but you will get a decision. And “getting decisions about actions” is the only way you can get anything done.

-Stephen Manley @makitadremel

The Portfolio Life

EMC NetWorker 8 launched in June 2012. I’d just spent 5 hours recording a video for the launch. Customers were excited about the new architecture and the tighter integration with Data Domain. Sales were already hyped by the revenue growth that started with NetWorker 7.6.2. The NetWorker 8 launch was going to be an “I’m Back” moment. Between the adrenaline and the caffeine, I was vibrating as I walked through the building. Then a NetWorker engineer sidled up to me and asked, “So, does this launch mean we’re killing NetWorker?”

This week’s question – “Are we killing XtremIO?”

Portfolio Companies – Nothing Ever Dies

Companies decide who they are going to be: consumer vs. business, product vs. services, profitable vs. unicorn. One of the biggest choices is whether to be single product vs. portfolio. At a single product company, there is little confusion about what product matters, but you can get constrained by the limits of that product. At a portfolio company, you can build/buy whatever you need, but there is complexity in having multiple products. And, of course, at a portfolio company, you have to answer the “are you killing Product X” question. As an old CEO said, “That’s the life we’ve chosen”.

Over the past few years, each product has had at least one customer question whether we’re killing it. My favorite was the customer who asked if Centera would kill Data Domain. (Yes, I did write that sentence in the proper order.) The truth is, it’s almost impossible to kill any product at EMC. Each product has at least one massive customer who has built a business-critical process around it; that customer, of course, will stop working with EMC if we don’t support that product. At a portfolio company, there are no “independent” product sales. That’s why we’re so careful about adding a new product – once you’ve put it in, it’s there forever.

The result – it’s almost unheard of to kill a product, much less a high-growth, high-revenue product.

All-Flash Storage is not One-Size-Fits-All

If you think of “All-Flash Storage” as a market, like “Purpose Built Backup Appliance” (think Data Domain), then it makes sense that you think you only need one product. If all that matters is the storage media, then the functionality, cost/performance, reliability, availability, and protocols are irrelevant. The applications will be so thrilled to have “All-Flash Storage” that they’ll re-architect to fit whatever limitations the storage system has. Not only does the universe revolve around storage, but it revolves around storage media. (Not surprisingly, narcissism is often an attribute of a single-product company.) That’s the logical conclusion of the arguments I hear.

All-Flash Storage is not a new market. Flash storage is disrupting the storage media market, but not the storage market. All-Flash has become ubiquitous across both traditional and new arrays and vendors, so it’s important, but it’s not a separate market. All-Flash does drive architecture and product evolution, but we’re all still building storage arrays. If storage media changes really did create new markets, then I want more hype around Barium-Ferrite tape and shingled magnetic recording disk (SMR). (Note: I really don’t. On the other hand, DNA storage is cool.)

What Makes Each All-Flash Product Special

Each of the all-flash primary storage products in the Core Technologies portfolio (VMAX, XtremIO, Unity) has features and architectures that make them unique. (Disclaimer: With the amount of engineering invested in each product, there are thousands of “favorite architectural choices”. These are mine. If you want to tout others, feel free to use the comments section or spray paint them on the side of my house; my HOA already hates me, anyway.)

VMAX – Caching/Tiering. Everybody knows about the performance, reliability, availability, data protection (SRDF, SnapVX, ProtectPoint), protocols, etc. At the core, however, the FAST (Fully Automated Storage Tiering) algorithms fire me up. When everything is “All-Flash”, though, who needs caching/tiering? In the next couple of years, however, we’ll see persistent memory, flash, shingled disk, and cloud storage – understanding how to best store and serve data will matter more than ever. VMAX has the metadata analytics to be the storage media broker for core and critical applications.

XtremIO – Versions (Copy Data Management). When people think of XtremIO, they think of speed and deduplication. Storage speed and space efficiency aren’t enough to survive in the modern metadata center, though. XtremIO’s dedupe is just the first way to expose the value of the block sharing architecture. The sustained value comes from creating and distributing lightweight copies for test & development, data protection, and analytics. Unlike many systems, I don’t need to worry about complex links to other copies (block sharing creates independence), the performance hit on the production copy (scale-out), or crushing the network (dedupe-aware data movement). XtremIO has the metadata management to be the system of choice for the DevOps and analytics world…
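
If you haven’t worked with a block-sharing system, a sketch may help. The following is a toy illustration of the general idea behind metadata-based copies – the class names and the use of Python’s built-in hash are my own simplifications, not XtremIO’s actual implementation – but it shows why a copy can be instant and space-efficient while remaining fully independent of production.

```python
# Toy sketch of metadata-level block sharing (hypothetical, simplified).

class BlockStore:
    """Content-addressed store: identical blocks are kept once (dedupe)."""
    def __init__(self):
        self.blocks = {}              # fingerprint -> block data

    def put(self, data):
        fp = hash(data)               # stand-in for a real content fingerprint
        self.blocks[fp] = data        # same content always lands on same key
        return fp

class Volume:
    """A volume is just a map of logical addresses to block fingerprints."""
    def __init__(self, store, table=None):
        self.store = store
        self.table = dict(table or {})    # LBA -> fingerprint

    def write(self, lba, data):
        self.table[lba] = self.store.put(data)

    def read(self, lba):
        return self.store.blocks[self.table[lba]]

    def snapshot(self):
        # Copying the metadata table is the whole operation: no data moves,
        # and the new volume is independently writable from this point on.
        return Volume(self.store, self.table)

store = BlockStore()
prod = Volume(store)
prod.write(0, b"orders-2016")
dev = prod.snapshot()                 # instant, space-efficient copy
dev.write(0, b"orders-scrubbed")      # diverges without touching production
assert prod.read(0) == b"orders-2016"
```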

Unity – Simplicity. Most storage systems are skewed toward simplifying enterprise data center challenges. That means scale-out, heterogeneous storage managed by dedicated IT teams. I love scale-out (see VMAX and XtremIO), but it complicates install and management of well-known consolidated workloads. Similarly, I love heterogeneous data protection, but sometimes replicating between two similar systems is better (especially if one is All-Flash and the other is Hybrid to reduce costs). I see the value in feature-rich dedicated storage management, but sometimes I want to just get a basic environment up and running in 15 minutes or less. As the metadata center shifts toward wanting simple, agile, application/VM-driven storage, Unity has the simplicity to deliver.

Each system has a design center that any “one-size-fits-all” product can’t deliver. Of course, there are many workloads that all of these systems (and many competitors’ products) could handle without a problem. There will always be a baseline of functionality (e.g. in CDM, caching/tiering, and simplicity) that we’ll deliver across the platforms, and even strive to drive the baseline higher. In virtually every environment, however, key applications will drive requirements to optimize in some direction. I believe that these 3 design centers will be the most critical in the storage space.

Conclusion

EMC is a portfolio company. The storage products in our portfolio will continue to evolve with the media transitions. Remember, All-Flash Storage isn’t a market – it’s a point in time for the type of media we’ll use in our systems. Soon enough, you’ll hear about All-Persistent-Memory Storage. Regardless of the media, our storage systems are differentiated by their software design center, and you’ll see us continue to extend our functionality in those areas. Each system will be part of the Modern Metadata Center.

In summary, when we talk all-flash: XtremIO is not dead. VMAX is not dead. Unity is not dead.

Oh, and in case anybody is wondering, NetWorker is alive and well, too.

Stephen Manley @makitadremel

Copy Data Management – What About Unstructured Data?

There’s a comfort in certainty. When you know where you’re going… when you know what’s going to happen… when you know you’re right… the future can’t come fast enough. For the past twenty years, I’ve felt that way. In the past six months, however, I’ve found out what it means to have questions that I can’t confidently answer. Since misery loves company, I’ll share my questions and internal debate.

This week: What does Copy Data Management mean for Unstructured Data?

What is Copy Data Management?

Copy Data Management (CDM) is the centralized management of the copies of data that an organization creates. These copies can be used for backup, disaster recovery, test & development, data analytics, and other custom uses.
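
To make the definition concrete, here is a minimal sketch of the core idea – a single catalog that knows about every copy, why it exists, and what policy governs it. The class and field names are hypothetical illustrations, not any particular CDM product’s design.

```python
# Minimal sketch of a centralized copy catalog (hypothetical names).

from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class Copy:
    source: str        # e.g. "oracle-prod-lun7"
    purpose: str       # "backup" | "dr" | "test-dev" | "analytics"
    created: datetime
    location: str      # array, replica site, or cloud tier

@dataclass
class CopyCatalog:
    retention: dict                              # purpose -> maximum age
    copies: list = field(default_factory=list)

    def register(self, copy):
        self.copies.append(copy)

    def expired(self, now):
        # Central policy: flag copies that have outlived their purpose.
        return [c for c in self.copies
                if now - c.created > self.retention[c.purpose]]

catalog = CopyCatalog(retention={"backup": timedelta(days=30),
                                 "test-dev": timedelta(days=7)})
catalog.register(Copy("oracle-prod-lun7", "backup",
                      datetime(2016, 6, 1), "dd9800-site-b"))
stale = catalog.expired(datetime(2016, 7, 15))   # 44 days old -> flagged
```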

Each CDM product has a unique target today. Some CDM products focus on reducing the number of copies. Others emphasize meeting SLAs and compliance regulations. Still others try to optimize specific workflows (e.g. test & development). The products also split on the layer of copy management: storage, VM, application, or cloud.

Despite the product diversity, they all have one thing in common: application focus. The products all try to streamline copy management for applications (whether physical, virtual, or cloud). The decision makes sense. Applications are valuable. Application developers create multiple data copies. Application developers are technically savvy enough to understand the value of CDM and pay for it.

Still, unstructured data continues to grow exponentially. Much of that data is business critical and a source of security and compliance concerns. Traditional backup and archive techniques have not scaled with unstructured data growth and use cases, so companies need new answers.

What should CDM products do about unstructured data (i.e. files and objects)?

Affirmative Side: CDM should Include Unstructured Data

Copy Data Management should include unstructured data because there is more commonality than difference.

First, the core use cases are common. Customers need to protect their unstructured data in accordance with SLAs and compliance regulations, just as they do their applications. While customers may not run test & development with most of their unstructured data (except for file-based applications), many are interested in running analytics against that data. With that much overlap in function, CDM should aggressively incorporate unstructured data.

Second, unstructured data is part of applications. While some applications are built with only containerized processes and a centralized database, many apps leverage unstructured data (file and object) to store information. Thus, the “application” vs. “unstructured data” dichotomy is an illusion.

Third, the data path will be common. Customers use files for their applications already, like running Oracle and VMware over NFS. Since CDM products are already managing files for their application data, why not extend to unstructured data?

Finally, today most customers use one tool to manage all of their copies – their backup application. CDM is an upstart compared to backup software. In a world where everybody is attempting to streamline operations and become more agile, why are we splitting one tool into two?

The use cases and data path are similar, CDM needs to support files no matter what, and customers don’t want multiple products. CDM must support unstructured data – case closed.

Negative Side: CDM should Not Include Unstructured Data

My adversary lives in a legacy, backup-centric world. (Yes, I often resort to ad hominem attacks when debating myself.) Copy Data Management and Unstructured Data Management are evolving into two very different things, and need to be handled separately. The use cases are already very distinct and the divergence is increasing. The underlying technical flows are also from different worlds.

First, the requirements for application vs. unstructured protection are as different as their target audiences. Application owners recover application data; end-users recover unstructured data. Application owners, a small and technically savvy group, want an application-integrated interface from which to browse and recover (usually by rolling back in time) their application. In contrast, end-users need a simple, secure interface to enable them to search for and recover lost or damaged files. Protection means very different things to these two very different audiences.

Second, the requirements for compliance also vary because of the audience. Since there are relatively few applications (even with today’s application sprawl), application compliance focuses on securing access, encrypting the data, and ensuring the application does not store/distribute private information. Unstructured data truly is the Wild West. Since users create it, there is little ability to oversee what is created, where it is shared, and what happens to it. As a result, companies use brute force mechanisms (e.g. backup) to copy all the data, store those copies for years, and then employ tools (or contractors) to try to find information in response to an event. When you have no control over what’s happening, it’s hard not to be reactive. With applications, you can be proactive.

Third, test and development is becoming as important as protection. The application world is moving to a dev ops model. Teams automate their testing and deployment, constantly update their applications, and roll forward (and back) through application versions faster than backup and recovery ever dreamed of. As a result, the test and development use cases will become more common and more critical than protection. Over time, they may even absorb much of what was considered “protection” in the past.

Finally, the data flows are very different. To support the application flows, the data needs to stay in its original, native format. You cannot run test and development against a tar image of an application. Fortunately, the application infrastructure has built data movement mechanisms (generally based on snapshots or filters) to enable that movement. Even better, since the application has already encapsulated the data, it becomes possible to just copy the LUN, VM, or container without needing to know what’s inside. In contrast, protecting unstructured data is messy. Backup software agents run on Windows, Linux, and Unix file servers to generate proprietary backup images. NAS servers generate their own proprietary NDMP backup streams, or replicate only to compatible NAS servers and generate no catalog information. There are few high-performance data movement mechanisms, and since each file system is unique, there is no elegant encapsulation. The data flows between application and unstructured data could not be more different.
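
A deliberately oversimplified sketch of the contrast (the functions are my own illustration, not any product’s code): the application path moves one encapsulated image without knowing what is inside it, while the unstructured path has to enumerate and handle every file on its own.

```python
# Hypothetical contrast of the two data paths.

import os
import shutil

def copy_encapsulated(lun_image, target):
    # Application data already encapsulated (LUN/VM/container image):
    # the mover treats it as one opaque object.
    shutil.copyfile(lun_image, target)

def copy_unstructured(root, target_root):
    # File data: every file is its own object with its own metadata,
    # so the mover must walk, create directories, and copy one by one.
    for dirpath, _dirnames, filenames in os.walk(root):
        rel = os.path.relpath(dirpath, root)
        os.makedirs(os.path.join(target_root, rel), exist_ok=True)
        for name in filenames:
            shutil.copy2(os.path.join(dirpath, name),           # copy2 keeps
                         os.path.join(target_root, rel, name))  # metadata
```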

Due to the differences in use cases, users, and underlying technology, it is unrealistic to design a single CDM product to effectively cover both use cases.

Verdict: Confusion

I don’t see an obvious answer. The use cases, workflows, and technology demonstrate that application data CDM is not the same as unstructured data CDM. Of course, the overlap in general needs (protection policies, support for file/object) combined with the preference/expectation for centralized support demonstrates that integrated CDM has significant value.

The question comes down to: Is the value of an integrated product greater than the compromises required to build an “all-in-one”? The market is moving toward infrastructure convergence, but is the same going to happen with data management software?

I don’t have the answer, yet. But just as Tyler Stone is sharing “how products are built”, I’ll take you behind the scenes on how strategic decisions are made. Just wait until you see the great coin flip algorithm we employ…

Stephen Manley @makitadremel

Unintellectual Thoughts

Emptying the dresser drawer of my mind.

  • When will all-flash protection storage become the “hot new thing”? To deal with the increased scale of primary storage capacity and more demanding customer SLAs, the industry is moving from traditional tar/dump backups to versioned replication. Thus, protection storage needs to support higher performance data ingest and instant access recovery. It seems plausible that protection storage will follow the primary storage path: from disk-only to caching/tiering with flash to all-flash (with a cloud-tier for long-term retention).
  • When will custom hardware come back? The industry has pivoted so hard to commodity components, it feels like the pendulum has to swing back. Will hyper-converged infrastructure drive that shift? After all, where better to go custom than inside a completely integrated end-to-end environment (as with the mainframe)?
  • Are job candidates the biggest winners in Open Source? Companies continue to struggle to make money in Open Source. Whether the monetization strategy is services, consulting, management & orchestration, or upsell, it’s been a tough road for Open Source companies. On the other hand, Open Source contributions are like an artist’s portfolio for an engineer – far more useful than a resume. Even better, if you can become influential in Open Source, you can raise your profile with prospective employers.
  • When will NAS evolve (or will it)? It’s been decades since NAS first made it easy for users to consolidate their files and collaborate with their peers in the same site. Since then, the world has evolved from being site-centric (LAN) to global-centric (WAN). Despite all the attempts – Wide-Area File Services (WAFS), WAN accelerators, sync and share – files still seem constrained by the LAN. Will NAS stay grounded or expand beyond the LAN? Or will object storage simply be the evolution for unstructured data storage and collaboration?
  • What’s the future of IT management? Analytics. We’ve spent decades building element managers, aggregated managers, reporting tools, ticketing systems, processes, and layers of support organizations to diagnose and analyze problems. As infrastructure commoditizes, we should be able to standardize telemetry. From that telemetry, we can advise customers on what to do before anything goes wrong. If companies like EMC can make technology that reliably stores exabytes of data around the world, we should be able to make technology that enables customers to not have to babysit those systems.
  • Will Non-Volatile Memory be the disruption that we thought Flash would be? Flash didn’t disrupt the storage industry; it was a media shift that the major platforms/vendors have navigated. (Flash did disrupt the disk drive industry.) The non-volatile memory technologies, however, could be more disruptive. The latency is so small that the overhead of communicating with a storage array exceeds that of the media. In other words, it will take longer to talk to the storage array than it will to extract data from the media. To optimize performance, applications may learn to write to local Non-Volatile Memory, shifting storage out of the primary I/O path. Maybe that will be the disruption we’ve all been talking about?
  • What happens when storage and storage services commoditize? The general consensus is that the commoditization of IT infrastructure is well under way. Most people feel the same about storage and storage services (e.g. replication, data protection, etc.). As commoditization happens, customers will choose products based on cost of purchase and management. As an infrastructure vendor, the question will be – how do we add value? One camp believes that the value will move to management and orchestration. I’m skeptical. Commoditization will lead to storage and services being embedded (e.g. converged/hyper-converged) and implicitly managed. Thus, I think there will be two paths to success. One path involves becoming a premier converged/hyperconverged player. The second revolves around helping customers understand and manage their data – on and off-premises. This means security, forensics, compliance, and helping users find what they need when they need it. Successful vendors will either deliver the end-to-end infrastructure or insight into the data. If you do both… then you’ve really got something. You can guess where I’d like Dell EMC to go.

I also wonder about whether software engineering jobs are following the path of manufacturing jobs, whether software-defined XYZ is a bunch of hooey, the future of networking, whether any of these big-data unicorns has a shot at success, and why people are so hysterical about containers. But we’ll save those incoherent thoughts for another time.

-Stephen Manley @makitadremel

Honoring EMC’s Core Technologies Technical Directors

How do you tell the story of a great organization like EMC? There are the big names – Egan, Marino, Ruettgers, Tucci. There are the outsized personalities – Scannell, Sakac, Burton, Dacier. There are the technical titans – Yanai, Gelsinger, Maritz. You can’t tell the EMC and EMC federation story without these epic players. But EMC isn’t about big names and glamor.

EMC’s culture is built on a principle best expressed by another technical genius – The Rock: “Blood, Sweat, & Respect. The First Two You Give, Last One You Earn.”

I’m pleased to announce the latest EMC Core Technologies Technical Director award recipients. EMC Core Technologies recognizes technical leaders who have made a significant impact on the business through their contributions to products and solutions that help our customers.

As you read through their contributions, you’ll see that Technical Director award recipients contributed to different products and solutions in very different ways. Despite the differences, they have one thing in common: when things needed to get done, they picked up a shovel and worked until things were right.

*NOTE: All TD recipients with a ’*’ earned the TD honor in previous cycles.

Scott Auchmoody – Scott, a founder of Avamar, has helped transform the backup industry. Prior to Avamar, the “state of the art” in backup was writing full backup images that backup storage deduplicated. Avamar created end-to-end deduplication from client to backup storage. This architecture minimized server, production storage, network, and backup storage load, while also improving backup and recovery times. Subsequently, Scott played a central role in integrating Avamar and Data Domain. After that, he led the development of EMC’s next-generation VMware backup and recovery solutions. Scott is a leading figure not just in delivering EMC’s backup solutions, but in advancing the state of the art of backup.

*Bhimsen Bhanjois – Bhim has led Data Domain replication, integration with backup software, and Data Domain Cloud Tier. Virtually every customer expects to replicate their protection data, and Data Domain’s network-optimized, flexible replication has become the standard of excellence. He also has helped Data Domain integrate with Avamar, NetWorker, RecoverPoint, and VMAX. Most recently, Bhim led the popular Data Domain Cloud Tier project. Bhim connects Data Domain to the rest of the ecosystem – from strategy to planning to execution.

Paul Bradley – Paul has been a leader in simplifying the management of VMAX systems. First, he was instrumental in streamlining the VMAX management tool stack. He led the implementation of service-level-based management across the end-to-end process of planning, provisioning, performance monitoring, and protection. Second, as customers have struggled to manage large VMAX estates, Paul led the delivery of Unisphere 360 – the first data-center-wide VMAX management and monitoring tool. Moreover, Paul has created a new culture of simplicity of management in the Ireland Center of Excellence, so that VMAX continues to streamline its operations. The result – customers can deploy and manage VMAX for more uses at both smaller and greater scale.

Steve Bromling – Steve has been the architect and implementer for the anchor VPLEX infrastructure and functionality. Steve is responsible for the core data path of VPLEX and the distributed caching engine that deliver the active-active functionality of VPLEX. Steve has delivered changes in memory usage and layout that reduce response time variability, improve latency, and reduce hardware footprint. He also jointly created and architected MetroPoint – integrating data protection and availability – which has enabled customers to leverage the full data protection continuum. Steve has ensured that VPLEX meets our customers’ performance and functionality needs for their most business critical and valuable workloads.

*Ravi Chitloor – As one of the Chief Architects of Data Domain and an EMC Distinguished Engineer, Ravi has dramatically improved almost every part of the product. Ravi’s work in building the first system GUI enabled Data Domain to expand from NFS-based backups to being truly usable as a VTL. Ravi then led Data Domain’s maturation into an enterprise product in areas such as security, RBAC, AAA, licensing, SNMP, and reporting. By driving the creation of Data Domain Management Center, he enabled the largest customers to manage multiple Data Domain systems at scale. Ravi’s leadership on MTrees and multi-tenancy APIs enabled Data Domain to expand into the ITaaS/Service Provider market and enabled Oracle admins to leverage Data Domain for direct backups. He has contributed significantly to Data Domain features and products such as Data Domain Virtual Edition (DDVE), storage tiering, and the Global Deduplication Array (GDA), as well as protection solutions like Avamar integrated with Data Domain and ProtectPoint (VMAX/XtremIO integrated with Data Domain). In his time at Data Domain, Ravi has also led supportability, quality, and testability initiatives. Today, he is part of the senior leadership team defining the vision, strategy, and roadmap of Data Domain. Ravi is responsible for the simple, reliable, secure Data Domain system that customers depend on worldwide.

Yaron Dar – Yaron has been the focal point for application intelligence in Symmetrix. Yaron’s work was the catalyst for the customer value and success of deploying Oracle databases on VMAX storage. From Oracle/VMAX data integrity validation and Oracle ASM space reclamation to integrating VMAX snapshots/clones with Oracle Enterprise Manager and supporting SRDF, Yaron has brought the best of VMAX to the Oracle environment. He has now been extending that expertise to cross-EMC solutions – e.g. ProtectPoint for VMAX with Oracle – tying together VMAX, Data Domain, and Oracle for industry-leading data protection. Yaron also works directly with sales teams and customers to ensure that the engineering integration translates into customer value. The result of Yaron’s work is exceptionally successful deployments of Oracle on VMAX across the globe.

*Rob Fair – Rob leads the protocols for Data Domain. Rob led the creation of Data Domain’s VTL interface. He worked on the SCSI target daemon that enabled Data Domain to connect to backup applications as a VTL and FC-BOOST device, and ultimately as a block-storage target for ProtectPoint. Since then, Rob has been instrumental in Data Domain securely connecting to customers’ environments, with a particular focus on enhancing and supporting NFS. Rob has helped connect the value of Data Domain to our customers, regardless of their preferred environment and networking technology.

Mike Fishman – Mike, through development and acquisition, has helped EMC solve customers’ data protection and midrange storage challenges. As CTO of EMC’s Data Protection team, Mike was responsible for the extremely successful EMC Disk Backup Library VTL solutions. He was also instrumental to the creation of the EMC Data Protection portfolio – working on the acquisitions of Legato (now NetWorker), Avamar, and Data Domain. In midrange storage, Mike was part of the due diligence for Isilon and XtremIO. Mike’s development and M&A activity have enabled EMC to lead the industry in addressing customers’ data storage and protection challenges.

Terry Hahn – Terry is a systems management expert who has made it easy for customers to manage one, two, or dozens of Data Domain systems. The work began with laying the infrastructure for storing, processing, and reporting historical system data, so that we can help customers understand changes in their systems’ behavior. Terry then applied that infrastructure to helping customers manage and address issues in their replication performance. To help customers manage at scale, Terry architected the original Data Domain Management Center. Recently, Terry has spearheaded the Data Domain Secure Multi-Tenancy work that is deployed across hundreds of customers and systems worldwide. Terry exemplifies the increasing commitment to – and the value of – helping customers manage our systems.

*Mahesh Kamat – As one of the Chief Architects of Data Domain, Mahesh leads Data Domain system architecture and design. Data Domain is the heart of our customers’ data protection solutions. Initially, Mahesh was a core member of the team that doubled Data Domain performance and capacity every year, while enhancing the Data Invulnerability Architecture. Scalable performance and reliable storage make Data Domain the “storage of last resort” for so many backup customers. More recently, Mahesh has led DDFS improvements in random I/O support, which have enabled Avamar backups, VM-image backups with disaster recovery, and ProtectPoint. He has been a catalyst for the delivery of Data Domain Virtual Edition, Data Domain Cloud Tier, and core Data Domain OS and system enhancements. Mahesh also mentors the senior engineering community so that Data Domain can continue to innovate and scale. Mahesh is responsible for Data Domain’s industry-leading performance, flexibility, and reliability.


Anton Kucherov – Anton leads major initiatives that deliver strategic, innovative solutions for our XtremIO customers. With the importance of copy data management to our customers, it is critical that XtremIO can create and delete snapshots with minimal effects on the system. Anton led snapshot performance improvements that have helped some of our largest and most innovative customers. Anton is known for delivering simple, innovative solutions for complex system-wide problems. He’s also enabled XtremIO to expand its innovation and delivery capacity by growing and mentoring the team in the US. Anton is a technical leader, innovative engineer, and pragmatic problem solver who delivers critical value to our most strategic customers.


Brian Lake – Brian has been responsible for the core functionality, feature enhancements, and stability of VPLEX – a product that keeps thousands of customers’ systems online and available. Brian’s leadership on the VPLEX platform delivered a stable, reliable, high-performance system that customers deploy in front of mission-critical VMAX, XtremIO, and VNX systems. Brian’s performance work has delivered scale for all workloads – from small I/O to large, sequential I/O. Finally, his innovation on MetroPoint enabled EMC to deliver a unique and successful integration of data protection and availability. Brian’s work has helped VPLEX become a trusted solution for customers’ availability challenges across their entire environment.


Amit Lieberman – Amit has been a technical leader and innovation engine for Data Protection Advisor (DPA). Data protection customers depend on DPA to understand and monitor their end-to-end data protection environment. With DPA 6, Amit brought together the replication and backup architectures to provide an end-to-end view of customers’ protection environments. Amit was also crucial in delivering high performance, clustering, and real-time analytics in the DPA 6 line. He also worked on integrating DPA and ViPR-SRM. The result is that EMC delivers a more comprehensive, more reliable, more scalable, and more flexible view of data protection for our customers.


Kobi Luz – Kobi is a key leader of the XtremIO R&D organization. He has been driving the core development of the latest releases. Kobi led the XtremIO 4.0 release and helped deliver major technical milestones that required cross-team integration, including non-disruptive cluster upgrade, native RecoverPoint support for DR, and AppSync integration for Copy Data Management. Just as importantly, Kobi has helped expand the organization to enable XtremIO to continue its hypergrowth, while also being one of the core team members for resolving the most complex technical challenges. Kobi both innovates and delivers for XtremIO’s customers.


John Madden – John’s technical leadership has been instrumental to some of EMC’s most dramatic transformations. John was a member of the “Open Symmetrix” team that added SCSI and Fibre Channel connectivity to Symmetrix, bringing EMC from mainframe-only to open systems. John then led the team that enabled customers to manage their own Symmetrix. For example, the SRDF control scripts enabled customers to manage their own SRDF DR operations, instead of having to call EMC support to make changes for them. Most recently, John was a key influencer on further simplifying VMAX management with FAST-VP and SLO-based provisioning. The unique longevity and success of the Symmetrix platform in the storage industry has been due to leaders like John Madden driving internal disruption.


Owen Martin – Owen has been at the heart of some of the most differentiated storage functionality in the industry. First, Owen led the radical shift in the provisioning and consumption of enterprise storage with the creation of Fully Automated Storage Tiering (FAST). The majority of VMAX customers use FAST to automatically place data in the most effective tier of storage. As the storage market continues to create differentiated tiers of storage – non-volatile memory, flash, disk, cloud – FAST will simplify and optimize customer environments. Owen was also critical to the SLO (Service Level Objective) based provisioning that streamlines the deployment and management of VMAX storage. Owen’s contributions have made VMAX a uniquely powerful, optimized, and simple-to-manage storage system.


*George Mathew – George’s work on the Data Domain File System has helped it solve even more of customers’ protection challenges. As the lead for Directory Manager (DM) 2.0, George led the work on multi-threading DM and improving Garbage Collection enumeration. The result: Data Domain achieved the incredible goal of supporting 1 billion files, which enables customers to use Data Domain for file archiving. George also led DDFS support for HA. George continues to evolve Data Domain into a system that solves customers’ next-generation protection challenges.


Steve Morley – Steve was a major driving force behind the MCx re-architecture delivered in VNX2 and carried forward into Unity.  Steve made significant development contributions in all three areas of MCx: DRAM Cache, SSD Cache, and RAID.  His systems thinking, development experience, and internal knowledge of the entire IO stack helped identify bottlenecks and engineer multi-core improvements – including multi-core IO scheduling across the entire data path stack.  He also drove support for enhanced drive queuing and data path throttling mechanisms.  These improvements allowed VNX2 and Unity systems to scale performance at an impressive 96% linearity with CPU core count. MCx and Steve’s work on it are directly responsible for the multi-billion-dollar success of VNX2.


Peter Puhov – Peter has made many significant contributions to Midrange products.  He architected and delivered the FBE infrastructure, a framework for building stackable system components, and leveraged it to deliver the multi-core infrastructure for the RAID component of MCx. This framework is now also being used in Unity to implement Mapped RAID. In addition, Peter helped architect and deliver Controller Based Encryption in VNX2.  He also pioneered work in predictive analytics of disk failures to improve data availability and system drive performance.  He has been a leader in development best practices, setting the bar for high-quality, maintainable code.  As a result of Peter’s work, thousands of customers can depend on VNX and Unity for high-performance, scalable, reliable storage.


*Naveen Rastogi – Naveen is an expert when it comes to systems management and customer advocacy. He has made customers’ data protection simpler, both with his work on Data Domain and on ProtectPoint. Naveen delivered the core management infrastructure that still provides the basis for how all Data Domain system management is done. Naveen was also instrumental in developing the Data Domain Global Deduplication Appliance, enabling customers to scale their Data Domain deployments. More recently, Naveen spearheaded Data Domain’s support for ProtectPoint – enabling industry-leading, high-performance backups directly from VMAX to Data Domain. He then led the entire ProtectPoint for XtremIO project. From simplifying daily management, to enabling customers to manage at scale, to reducing the complexity and overhead of backup, Naveen simplifies customers’ backup challenges. Today, Naveen is driving Data Domain RESTful APIs as the vanguard of the “Simplification” movement.


Tony Rodriguez – Tony has consistently led EMC into new markets and technologies to solve new customer challenges. Tony was one of the original architects of EMC Atmos – our initial foray into object storage. The highly scalable and resilient architecture enabled Atmos to become an industry-leading product that still provides the foundation for some of our largest customers’ global applications. Subsequently, Tony has led EMC’s efforts in critical acquisitions, next-generation interconnects, and the future of networking. Tony has helped create not only new products, but entirely new solutions and markets.


Zvi Schneider – Zvi is the Chief Architect for XtremIO R&D. He led the core development of the Data Path Module, which delivers industry-leading performance, efficiency, and copy data functionality. Furthermore, Zvi works across the organization to ensure that XtremIO delivers sound designs and solutions with the high performance, scalability, and reliability that customers expect from EMC. Whenever there is a challenge in XtremIO, Zvi is the “go-to” member who can solve the problem. Zvi is one of the key contributors to XtremIO’s continued technical leadership in the market.


*Udi Shemer – As Chief Architect for RecoverPoint, Udi continues to solve customers’ data protection and replication challenges. Initially, Udi worked on RecoverPoint splitters, which efficiently extract data from sources so that it can be replicated for protection. Udi’s work on RecoverPoint for VMAX2 continues to provide protection for mission-critical applications around the world. More recently, his work on adding replication support for VPLEX enabled customers to realize the value of continuous availability with reliable, high-performance data protection. Furthermore, Udi’s leadership on RP4VM enables customers to protect all of their VMware VMs, regardless of the underlying storage. Udi has consistently delivered solutions that enable customers to focus on their business, knowing that their data is safe.


Ed Smith – Ed has been instrumental in improving the quality and velocity of EMC’s Midrange products – constantly executing on the goal of creating end-to-end, analytics-driven automated testing. For the past four years, the team has benefited from the Continuous Integration Testing that Ed spearheaded. Furthermore, the Automated Results Triage Services simplify and accelerate the process of filing and triaging system defects found during testing. Finally, Ed has been responsible for consolidating test results, which enables analysis across runs. The result is that, over that time, we’ve executed more reliably and delivered ever-higher-quality products for our VNX and Unity customers – helping the bottom line for both EMC and our customers.


Erik Smith – As the lead engineer for Connectrix, Erik delivers network technology that enables customers to connect their servers to storage. By leading the Connectrix Fibre Channel and TCP/IP qualifications and technical documentation, Erik ensures that servers and storage can be reliably connected by Connectrix switches. Erik is also a leading Storage Connectivity evangelist. He works with the field and customers to help architect, configure, and manage their environments – through direct meetings, his industry-leading “Brass Tacks” blog, and his highly attended EMC World sessions. He is also an industry leader in the future of storage networking, including driving the definition and acceptance of Target Driven Zoning by T11. Erik’s work continues to enable thousands of customers to reliably, securely, and rapidly access their data.


Lei Wang – Lei has helped make the mid-range team’s goal of End-to-End Automation a reality with both his architectural abilities and his leadership. As EMC’s customers increasingly adopt a DevOps model for running their IT infrastructure, Lei is applying those same principles to running our automation infrastructure. As a result of his work on creating a test harness for fully automating tests, the midrange team tests early and often – improving quality and agility. Furthermore, Lei has brought teams across the globe together on a common approach to automated testing. Lei’s work has not only resulted in higher-quality releases for our customers, but also in a Quality Engineering organization that will continue to accelerate and improve the midrange systems that our customers rely upon.


Vince Westin – As a technical evangelist for VMAX, Vince has synthesized customer input and technical trends into fundamental product changes. Vince has been a leader in driving VMAX to adopt flash. First, he showed both the technical feasibility and the customer desire for higher degrees of flash in the hybrid VMAX systems. Second, he was a key contributor to product management, engineering, and performance engineering on the capabilities of VMAX All-Flash leveraging low write-per-day flash storage. Finally, he pioneered what has become the VMAX sizer tool, enabling EMC to deliver the right system for our customers’ needs. There is no one at EMC who advocates more for both our customers and the potential of our technology.


William Whitney – Bill has been critical to integrating EMC’s VNX and Unity NAS functionality into customer environments. Since the creation of NAS systems, backup and recovery has been one of the industry’s biggest challenges. Bill was instrumental in delivering EMC’s support for NDMP, the NAS industry standard for backup. He also drove file-level deduplication, compression, and tiering in our NAS stack. Bill has also been driving the use of VNX’s NAS stack for VMware environments through his significant contributions to the VNX VAAI provider, the enablement of the new UFS64 file system for NFS datastores, and, most recently, the VVOL implementation. For customers who prefer running VMware over NFS for simplicity, Bill’s efforts have helped EMC support these high-profile customers.


Andy Wierzbicki – Andy embodies the role of customer advocate for mainframe. With a background in both support and engineering, Andy understands that there is nothing more mission-critical than mainframe, and he ensures that we exceed our customers’ expectations. First, Andy worked cross-functionally to ensure that the VMAX3 platform would meet customer reliability, performance, and response time expectations. Second, when customers hit issues, Andy leads the way on reproducing them so that we can diagnose, understand, and resolve them as quickly as possible. Mainframe still runs the core of the largest organizations in the world – Andy makes sure that VMAX delivers the quality and reliability that those organizations need.


*Doug Wilson – Doug has delivered innovative, reliable platforms for Data Domain. Doug was the lead architect for the DD2200/2500, shepherding the programs from requirements to release. During that project, Doug was instrumental in adopting battery-backed persistent RAM instead of NVRAM, which gave the DD2200 significant cost advantages. Koala was also EMC’s first “gray box”, and Doug brought together Data Domain, GHE, and the ODM (Original Design Manufacturer) partner. Finally, Doug helped grow technical leads to take over the DD2500 and DD2200. Doug continues to work on delivering next-generation platforms to help our customers protect their data. Doug delivers, innovates, brings the organization together, and helps others advance their careers.


*NOTE: All TD recipients with a ’*’ earned the TD honor in previous cycles.


EMC has always prided itself on its scrappiness and work ethic. Those qualities may seem odd for a company that became and continues to be an industry titan. On the other hand, that’s exactly how EMC became what it is – fighting every day to make better, faster, more reliable products that solve our customers’ problems. That’s the ethos we need to continue to be successful as Dell EMC. Congratulations to these Technical Director recipients. Now get back to work. 🙂


-Stephen Manley @makitadremel

Are You Sabotaging Your Career – The Performance Review Mailbag


The response from the initial mailbag was fantastic. Among the emails I received:

“Apparently, you only read tweets and questions to your blog, so I’m sending this as a mailbag question. Could you please approve my expense report?” Anonymous Team Member.

“I read all your blogs and I haven’t gotten promoted. Your [sic] not good at advice.” KL, Ohio.

“Did you know Walt Whitman wrote and published reviews of ‘Leaves of Grass’ under fake names to boost sales? I’m just part of a grand literary tradition.” SPAM from a book author I’ve never heard of who apparently wrote his own Amazon reviews.

In other words, it’s an ideal time to do another one!

Q: I just had my performance review and things got very intense. There has to be a better way to handle this. If I’m just silent, then there is not much value, but my way didn’t work either. Engineer, Beijing, China.

A: You could fill a library with books about performance reviews – how to conduct them, how to receive them, why they’re terrible, why they’re important, etc. Many of these books are written by experts in psychology, sociology, and organizational behavior. I, on the other hand, spent most of my life writing code, before suddenly becoming a manager. In other words, I have no idea what I’m doing… which means I’m probably a lot like your manager.

First, you’ll almost never get an honest critique of your abilities.

  • Performance reviews reflect your boss’s priorities and perceptions more than your actual abilities and execution.
  • Performance reviews skew positive. Most managers don’t give negative feedback because it creates conflict, and they are conflict averse. In fact, many engineers never have a performance review meeting. They just get vaguely-worded written feedback. If there is a meeting, then it’s filled with fuzzy platitudes. (Note: If you don’t get a face-to-face performance review and your peers do, be worried; your boss can’t even fake some positive comments. Yes, I’ve done this.)
  • Financial compensation is the most accurate measure of your manager’s perspective on your performance. Words are cheaper than salary and bonus. NOTE: Even financial compensation doesn’t give you an accurate picture of your boss’s perspective on your performance. Due to conflict-aversion, managers will give a small raise just to avoid a more difficult conversation. Thus, even if you get a raise, find out if it’s below or above average. (Yep, I did this one, too.)


Second, if you want to influence your performance review (which, I assume, is code for “your raise”):

  1. Set and manage expectations early – First, find out what your manager values. Ask him/her what’s important and observe who the manager rewards (sometimes a manager doesn’t notice the difference between what they say and how they behave). Next, share your goal with your manager early (e.g. 9 months before a yearly performance review). Finally, jointly create a plan, so your manager feels ownership for your progress, as well.
  2. Keep in touch frequently – By staying close to your manager you can more easily course correct. You’ll also stay top of mind. Since pay raises are usually a yearly event, you want to constantly reinforce that you’re making progress.
  3. Don’t react during the performance review – Whatever happens, you’re going to be emotional. If you’re not – your boss will be (again, managers generally stress about reviews). Moreover, your boss will assume you’re being emotional. Wait a couple of weeks, and then follow Step 1.

In each of these cases, I’m expecting you to take the lead. “Isn’t that the manager’s job?” Perhaps, but consider: you’re uncomfortable trying to connect with your manager because engineers are generally introverted. Unfortunately, most engineering managers started as engineers, so they’re introverts, too! They’re no more comfortable connecting than you are. Since you have the greater incentive, it’s up to you to drive the process.

(Two examples of how your manager is as introverted as you are:

  • When I joined EMC, it was the first time I ever had an admin. I quickly realized that she addressed my biggest weaknesses. She was social, had free candy, and sat right outside my office. As a result, when engineers came by to chat with her and eat candy, it was an easy opportunity for me to connect and chat with them, too.
  • When the office setup changed, I lost that connection. As a result, I added “Walk around and just chat with people” to the daily goals I write for myself. I’ve written that goal over 1000 times. I’ve failed on that goal over 1000 times. Engineering managers are just introverts who took a wrong turn in their careers.)


Third, if you actually want to improve your performance, that’s the easiest question to address. As you make progress on a project, ask your manager, your mentor, and your peers – “What could I have done to make this better?” That question makes it safe for them to respond, since it’s about accelerating improvement rather than critiquing. You’ll also get practical feedback because the project will be fresh in their minds.

As with everything else in a big company – first understand what performance reviews really mean. Second, decide what your goals are. Finally, take ownership because your manager is just as scared of you as you are of him/her.

Q: Are we ever going to schedule my performance review? Member of your team

A: Ummm… I’ll see if I can set something up… for some time. But, in the interim, just know that I’m really impressed with how hard you’re working on … whatever you’re working on. As for your raise, well, it was a really tight year, but I want you to know how much I value you, even if I can’t express it financially. High five!

Q: I keep getting great performance reviews, but I never get promoted. What am I doing wrong? Engineer, Hopkinton, MA

A: First, let’s assume that you’re not getting the performance review run-around, where you get the “high five” but the manager doesn’t mean it. In other words, you’re getting good raises and bonuses. If not, read the answer to the first question.

In that case, you’re confusing doing well at your existing job with positioning yourself for your next job. Most organizations expect you to operate at the next job level before promoting you. So follow the advice from the first question – set the promotion goal with your manager, create a plan to achieve it, and keep in touch.

Don’t confuse performance reviews with career path advancement. You need to show you can operate at the next level, not just excel at the existing one.

Q: I hate doing performance reviews with my team. It feels like a huge waste of time, trying to offer feedback based on things that happened as much as 11 months ago. I want to help my team grow and advance. How should I do it? S. Manley, Santa Clara, CA

A: Try to ask this question at least once a week – “How can I help?” Your team will open up to you with their challenges and you can provide guidance to them. It will open a conversation that can go into other areas, which is even better. Regardless, you’ll be involved, connected, and helpful rather than distant, disconnected, and judgmental.

You could make a daily goal that you should “Walk around and just chat with people.” Who knows, maybe the 1345th time will be the charm. And stop sending mailbag questions to yourself. It makes you look desperate.


Conclusion

Performance reviews are stressful for everybody – employee and manager alike. By default, they’re generally unproductive and potentially harmful. However, if we focus on staying constantly connected to help our teams grow, we can dramatically reduce the pain and agony. Until next time, I’ll leave you with one last question:

“I met Joe Tucci once at an event. We took a picture together in front of a canoe. I attached the picture. [Ed. Note: It was not attached] I had fun that night because Joe and everybody from EMC was so nice to me and I really liked the food and there were free drinks. Joe said I could call him if I ever needed anything but I forgot to ask him for his phone number. I want to invite him to come to my wedding because I think we’d have fun. Could you send his cell phone number to me?” Stalker, NJ.

Stephen Manley @makitadremel

In 2016, Flash Changes Everything!*


*If by ‘everything’, you mean the media that sits inside of enterprise storage systems.

At an event in Paris, a customer asked, “Do you know what I like best about all-flash storage?” Since I had been warned that the French are sensitive, I resisted saying, “It doesn’t go on strike?” (At the time there was both a petrol strike and an air traffic controller strike – in other words, a normal week in Paris.) His answer was disarmingly honest: “Everything else – cloud, hyperconverged infrastructure, containers – confuses me. But all-flash storage? It’s different, but I can understand it.”

While flash doesn’t disrupt the storage systems market, it is driving the evolution of storage systems. The evolution spans system design, vendor business models, and customer behavior. This time, let’s talk about basic storage system design.

The Evolution – System Design

Flash doesn’t change what storage systems do, but it does change how they do it. Flash storage systems enable applications and users to write and read data via a variety of protocols and networks – file, block, object, FICON, etc. They attempt to ensure that what a user stores is what the user reads. If that sounds like the functionality of disk and hybrid arrays, it is. Underneath, however, storage systems have changed how they do space optimization and how they make the media reliable.

Space Optimization

Disk storage systems make trade-offs between performance and space optimization. Space efficiency features like compression, deduplication, and clones incur costs: increased response time, management complexity, or unpredictable system performance. For decades, storage systems have optimized performance by laying out data in optimal locations on the disk. Space optimizations disrupt those carefully tuned algorithms. They fragment data, which increases the number of disk seeks, which degrades performance. As a result, disk systems implement space efficiency features for specific workloads (e.g. backup, archive, VDI, etc.) or as best-effort background tasks, but not as inline operations for general purpose usage.

All-flash storage systems both require and enable ubiquitous space efficiency. Flash delivers much greater I/O density than disk, but to make it cost effective, systems need to increase flash’s capacity density. While not all space efficiency techniques apply to all workloads, every flash array must make space efficiency features part of its toolkit. Conversely, flash storage makes it possible to deliver inline, ubiquitous space optimization. While the data may fragment, the random I/O performance of flash doesn’t depend on disk seeks; therefore, you can have space optimization and performance!
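
To make the space-optimization point concrete, here is a minimal sketch of inline, content-addressed deduplication – the kind of always-on space efficiency an all-flash array can afford precisely because reads no longer depend on disk seeks. This is an illustrative toy under my own assumptions (fixed-size chunks, SHA-256 fingerprints, made-up class names), not any product’s actual design; real systems typically use variable-size chunking and far more sophisticated fingerprint indexing.

```python
import hashlib

class DedupStore:
    """Toy inline deduplication: each unique chunk is stored once,
    keyed by its SHA-256 fingerprint (hypothetical simplification)."""

    def __init__(self, chunk_size=4096):
        self.chunk_size = chunk_size
        self.chunks = {}    # fingerprint -> chunk bytes
        self.volumes = {}   # volume name -> ordered fingerprint list

    def write(self, volume, data):
        refs = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            fp = hashlib.sha256(chunk).hexdigest()
            # Inline dedup: store the chunk only if we've never seen it.
            self.chunks.setdefault(fp, chunk)
            refs.append(fp)
        self.volumes[volume] = refs

    def read(self, volume):
        # Reads chase fingerprints, so data is logically "fragmented" --
        # nearly free on flash, expensive on seek-bound disk.
        return b"".join(self.chunks[fp] for fp in self.volumes[volume])
```

If two volumes share most of their blocks, the shared chunks are stored once; the resulting read fragmentation costs little on flash, which is exactly why disk arrays confined this kind of feature to specific workloads.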

Note: Flash drives are growing much larger. The speed of reading data from the drive will not keep pace with the amount of data it stores. As a result, we’ll have a potential data access bottleneck. Flash storage systems will need to optimize data layout on a drive, intelligently spread data across drives, and cache efficiently. Storage media – the more things change, the more they stay the same.

Making Media Reliable

Storage systems work hard to return exactly the data that was written, but all hardware fails. Storage media fails in multiple ways: a device can fail completely, write data incorrectly, or return the wrong data. Regardless of the type of hardware failure, storage systems work to ensure that users never know. While the mission remains the same, flash has different failure behaviors than disk drives.

Computer scientists have built companies, careers, and research groups on disk drive resiliency. Decades later, customers still debate their preferred RAID algorithms. As we move into larger drives, we’ve resurrected the mirror vs. RAID vs. ECC debates. Meanwhile, the industry has increased its focus on predicting and handling drive failures, to reduce the impact of failed drives. Additionally, some research shows that media errors (on a healthy drive) and firmware bugs pose a more insidious threat to your data than full drive failures. Such events are both more common and less visible than failed drives. Thus, approaches like Data Domain’s Data Invulnerability Architecture have become a key market differentiator. Even in the year of “all-flash”, disk storage systems are evolving in the wake of their changing media.

Flash fails, but it fails differently than disk. The most obvious contrast is in “wear”. The mechanical components of disk drives wear out. That breakdown, however, is largely independent of the amount of times the system writes to the disk. Conversely, flash media is built of cells that can only be written a certain number of times before they wear out and cannot store data anymore. As a result, storage systems have changed their write behaviors to minimize and distribute the wear on the media. These modifications include: log-structured file systems to evenly distribute writes across the cells, space efficiency to reduce how many cells need to be written, and caching to eliminate frequent overwrites of data.
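
As a sketch of the “distribute the wear” idea – again a toy under assumed parameters, not any vendor’s flash translation layer – here is the simplest possible wear-leveling allocator: always write to the block with the fewest program/erase cycles. The block count and P/E-cycle limit are hypothetical.

```python
import heapq

class WearLeveler:
    """Toy wear-leveling allocator; real FTLs also separate hot/cold
    data and relocate long-lived ("static") blocks."""

    def __init__(self, num_blocks, max_pe_cycles=3000):
        self.max_pe_cycles = max_pe_cycles   # assumed cell endurance
        # Min-heap of (erase_count, block_id): least-worn block on top.
        self.blocks = [(0, b) for b in range(num_blocks)]
        heapq.heapify(self.blocks)

    def allocate_block(self):
        erases, block = heapq.heappop(self.blocks)
        if erases >= self.max_pe_cycles:
            raise RuntimeError(f"block {block} has worn out")
        # Each write implies a later erase: count one P/E cycle.
        heapq.heappush(self.blocks, (erases + 1, block))
        return block
```

Because every new write lands on the least-worn block, wear spreads evenly and no single cell dies early – the same motivation behind the log-structured write layouts mentioned above.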

Meanwhile, all-flash arrays must respond to the unique failure patterns of flash drives. First, we’re still learning how SSDs will fail. For example, how well will flash drives age? Unlike disk, where we have decades of experience in tracking drive failures over time, we’re still learning with flash. (I know vendors are trying to simulate accelerated aging, but I’m skeptical. The only proven way to accelerate aging is to have children.) Fortunately, we have more analytic tools available than ever before. Meanwhile, all-flash arrays are evolving traditional RAID approaches to better fit the new media. With a preference toward larger stripe sizes (to minimize space consumption), resiliency across all components (e.g. across power zones in a disk array enclosure), and multi-drive resiliency (N+2), flash has forced an evolution of media failure analytics and protection.
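
The stripe-width point is easy to quantify. A rough, illustrative calculation (the drive counts are hypothetical, not any specific array’s geometry) shows why wider N+2 stripes save capacity while keeping double-drive protection:

```python
def parity_overhead(data_drives, parity_drives=2):
    """Fraction of raw capacity consumed by parity in an N+2 stripe."""
    return parity_drives / (data_drives + parity_drives)

print(f"6+2:  {parity_overhead(6):.1%}")   # 25.0% of raw capacity is parity
print(f"22+2: {parity_overhead(22):.1%}")  # 8.3%, with the same N+2 protection
```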

Hardware fails – whether it is disk drives, flash drives, or memory. Storage systems will evolve to combat those failures. Regardless of the media and its failure characteristics, storage systems will continue to deliver value by transforming inherently unreliable hardware into reliable data storage systems.

Conclusion

The disruption of storage media is driving the evolution of the storage system market. The basic needs haven’t changed. Customers want reliable storage that delivers the performance they need at the best possible cost. Flash storage changes many underlying assumptions, and storage systems are responding to the new media base. As a result, we’re all headed in the same direction.

The first question customers ask is: can a new system more quickly and efficiently add all of the expected resiliency and functionality to its “all-flash” base… or can established systems more quickly and efficiently modify their battle-tested resiliency and functionality to leverage the “all-flash” media? The second question is whether any of these systems can deliver more value than customers have come to expect from traditional storage systems.

Before sharing my answer to those questions (giving time for each camp to bribe me – I do take t-shirts as payment), I will first discuss how business models and customer behaviors are changing in the next post.

Stephen Manley @makitadremel