All-Flash Arrays: 5 Things to Consider Before Making the Call


Nowadays, most people expect data to be available instantly. Waiting for apps to load is not only an inconvenience; it can be a real drag on business performance. That's part of the reason All-Flash Arrays (AFAs) are becoming increasingly popular enterprise storage solutions: flash memory promises faster data transfer and stronger performance than spinning hard disk drives.

But are they the right choice for your organization? It all depends on the workload, as Alex Minakis, Burwood Group Data Center & Cloud Automation expert, explained in our latest TrendWatch webinar, All-Flash Arrays: Where Do They Fit for Your Organization?

AFAs: Not a one-size-fits-all solution

Is slow app performance causing an organizational problem? Are end users complaining? AFAs could potentially solve those problems, but not necessarily. For one thing, AFAs are generally too expensive to suit every need. Hybrid arrays remain viable for most of the environment, while AFAs are better suited to performance-intensive areas or workloads, such as Oracle or SQL databases.

For another, your bottleneck may have less to do with storage than you think. Minakis points out that the storage system was responsible for app-data performance gaps for only about 40 percent of his clients. There's little point in rushing to invest in an AFA if the bottleneck is actually a network issue or CPU exhaustion, for example.

Once you've identified the bottleneck and determined that an AFA is the best path forward, it's time to vet the options.

5 considerations when evaluating AFAs

How do you choose between the high-performing options available, such as Nimble Storage (easy to use, and integrates well with hybrid arrays), Pure Storage (a refined product at a cost-effective price point), and XtremIO (high performance, with a cost to match)?

Keep the following factors top of mind:

1. Throughput. Flash offers abundant Input/Output Operations Per Second (IOPS), but throughput (IOPS multiplied by block size) is equally critical. Look beyond the average block size to determine what will actually come through in your environment; even the apps with the smallest and largest block sizes need to function.

2. Read-write ratio. How will Write Amplification (WA) affect performance as data changes? A write-heavy workload can overwhelm an environment, while a read-heavy environment will see the best performance. Choose the array whose read-write characteristics best align with your actual workload.

3. Block size. It used to be that one app lived in one environment, so supporting different block sizes wasn't even a question to ask. But within the last year and a half, we've been seeing more use cases where multiple apps share an array. Can the system handle different block sizes without impacting performance?

4. Feature management. In some cases, certain features work against best performance. For example, turning compression off for an app that won't benefit from it can let performance hit its top end. And since SSDs depend on over-provisioning, deduplication (dedupe) can also impose a tax. Understanding how dedupe works is critical to managing WA.

5. Longevity. What processes do vendors have in place to mitigate inherent performance issues? Performance will change as a drive fills up; that's an inherent characteristic of SSDs. Understand when you will need to consider upgrades, and look at the product's ability to scale.
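To see how the first three considerations interact, here is a minimal back-of-the-envelope sketch in Python. The IOPS figures, block sizes, and read-write mix below are made-up illustrations, not vendor specifications; the point is that the same headline IOPS number yields very different throughput depending on block size, and that a write-heavy mix drags effective performance toward the slower write figure.

```python
def throughput_mb_s(iops, block_size_kb):
    """Throughput in MB/s = IOPS x block size (illustrative formula)."""
    return iops * block_size_kb / 1024

# Same headline IOPS, very different throughput depending on block size.
small_blocks = throughput_mb_s(iops=100_000, block_size_kb=4)   # 4 KB blocks
large_blocks = throughput_mb_s(iops=100_000, block_size_kb=64)  # 64 KB blocks

def blended_iops(read_iops, write_iops, read_fraction):
    """Effective IOPS for a mixed workload (harmonic mean weighted
    by the read-write ratio); hypothetical numbers only."""
    return 1 / (read_fraction / read_iops + (1 - read_fraction) / write_iops)

# Writes are typically slower than reads on flash, so the mix matters.
read_heavy = blended_iops(read_iops=200_000, write_iops=50_000,
                          read_fraction=0.9)
write_heavy = blended_iops(read_iops=200_000, write_iops=50_000,
                           read_fraction=0.5)

print(f"4K blocks: {small_blocks:.0f} MB/s, 64K blocks: {large_blocks:.0f} MB/s")
print(f"90/10 mix: {read_heavy:.0f} IOPS, 50/50 mix: {write_heavy:.0f} IOPS")
```

Running this shows the 64 KB workload pushing sixteen times the throughput of the 4 KB workload at identical IOPS, and the 50/50 read-write mix delivering roughly half the effective IOPS of the 90/10 mix, which is why a vendor's single benchmark number tells you little without your own workload profile.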

Additionally, be sure to test the leading option, carefully considering read-write ratios, performance levels, and capacity, and then document what you learn.

Lots of vendors offer a feature checklist. Look beyond the checklist to understand how these features will work with your apps, and the broader picture of your workload.


March 28, 2016