Price Reduction for Nimble Storage CS210 and CS220 Arrays

Nov 15, 2013 | 4Storage, Nimble Storage, Nimble Storage Pricing

In a recent announcement to its partners and resellers, Nimble Storage shared the following regarding pricing for its CS210 and CS220 SAN solutions:

“Nimble has reduced the price of the CS210 and CS220 by 20% and 10% respectively. This reduction will help you better address the requirements of your SMB and mid-market customers. Contact your local Nimble Storage sales team or distribution partner for more information.”

Not only does Nimble Storage realize the need to stay competitive in the storage marketplace, but it also realizes the need for small businesses and their remote offices to take advantage of industry-leading technology at a fair price.

Nimble Storage pricing has always been competitively positioned given the overall cost of SSDs and flash drives, but Nimble Storage has taken additional steps to ensure the lower costs of manufacturing and drives are passed on to its clients.


Nimble Storage CASL Cuts Storage Costs

Nov 7, 2013 | Nimble Storage, Nimble Storage Pricing

Nimble Storage solutions are built on the patented Cache Accelerated Sequential Layout (CASL™) architecture. CASL differs from the traditional bolt-on approach of using flash as a tier. Instead, CASL is designed from the ground up to leverage the lightning-fast random read performance of flash and the cost-effective capacity of hard disk drives. What’s more, CASL incorporates innovative efficiency features, such as in-line variable-block compression, cloning, and integrated snapshots to store and serve more data in less space.

Dynamic flash-based read caching

CASL caches “hot” active data onto flash-based SSDs in real time, without the need to set any complex policies. This way it can instantly respond to read requests, as much as 10x faster than traditional bolt-on or tiered approaches to flash.
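
To make the idea concrete, here is a minimal read-through cache sketch in Python. It is not Nimble's implementation (the class, names, and capacity figure are ours, purely for illustration); it simply shows the general pattern of promoting recently read blocks into a small, fast tier:

```python
from collections import OrderedDict

class ReadCache:
    """Minimal read-through cache: hot blocks are kept in a small, fast
    tier (standing in for flash); cold reads fall through to 'disk'."""

    def __init__(self, backing_store, capacity_blocks):
        self.backing = backing_store          # dict of block_id -> data (stands in for disk)
        self.capacity = capacity_blocks       # how many blocks fit in "flash"
        self.cache = OrderedDict()            # block_id -> data, kept in LRU order

    def read(self, block_id):
        if block_id in self.cache:            # cache hit: fast path
            self.cache.move_to_end(block_id)  # mark as most recently used
            return self.cache[block_id]
        data = self.backing[block_id]         # cache miss: slow path to disk
        self.cache[block_id] = data           # promote the hot block into cache
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)    # evict the least recently used block
        return data

# Usage: repeated reads of the same blocks are served from the cache.
disk = {i: b"x" * 4096 for i in range(1000)}
cache = ReadCache(disk, capacity_blocks=100)
cache.read(42)   # miss, promoted into "flash"
cache.read(42)   # hit
```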

Write optimized data layout

CASL collects, or coalesces, random writes to the array, compresses them, and writes them sequentially to disk. This results in write operations that are as much as 100x faster than traditional disk-based storage.
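
As a rough illustration of write coalescing (our own sketch, not CASL code; the class name and flush threshold are invented for the example), the snippet below buffers random block writes in memory, compresses the batch, and appends it to a log file in one sequential write:

```python
import zlib

class CoalescingWriter:
    """Buffer random block writes in memory, then compress and append them
    to a log file as one large sequential write (illustrative only)."""

    def __init__(self, log_path, flush_threshold=64):
        self.log_path = log_path
        self.flush_threshold = flush_threshold
        self.pending = []                       # (block_id, data) pairs waiting to be flushed

    def write(self, block_id, data):
        self.pending.append((block_id, data))   # absorb the random write in memory
        if len(self.pending) >= self.flush_threshold:
            self.flush()

    def flush(self):
        if not self.pending:
            return
        payload = b"".join(data for _, data in self.pending)
        compressed = zlib.compress(payload)     # compress the whole batch
        with open(self.log_path, "ab") as log:  # one sequential append instead of many seeks
            log.write(compressed)
        self.pending.clear()

writer = CoalescingWriter("casl_demo.log")
for block_id in (907, 13, 4451, 88):            # scattered block addresses
    writer.write(block_id, b"a" * 4096)
writer.flush()
```

A real system would also persist the block-to-location mapping and protect the in-memory buffer (for example with NVRAM), which this toy version omits.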

In-line compression

CASL compresses data as it is written to the array with no performance impact, taking advantage of efficient variable-block compression and multicore processors. A recent measurement of Nimble's installed base shows average compression rates of 30% to 75% across a variety of workloads.
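
For a feel of what such numbers mean, the hypothetical snippet below compresses some sample blocks with zlib and reports the space saved. CASL's actual compression is in-line and variable-block, so treat this only as an illustration of how a compression rate is measured:

```python
import zlib

def space_saved(blocks):
    """Return the percentage of space saved by compressing each block."""
    raw = sum(len(b) for b in blocks)
    compressed = sum(len(zlib.compress(b)) for b in blocks)
    return 100.0 * (raw - compressed) / raw

# Repetitive, text-like data (logs, databases, templated records) compresses
# well; already-compressed or random data barely shrinks at all.
text_like = [(b"SELECT * FROM orders WHERE id = %d;" % i) * 100 for i in range(50)]
print("text-like workload: %.0f%% saved" % space_saved(text_like))
```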

Scale-to-fit flexibility

CASL allows for the non-disruptive and independent scaling of performance and capacity. This is accomplished by upgrading the storage controller (compute) for higher throughput, moving to larger flash SSDs (cache) to accommodate more active data, adding storage shelves to boost capacity, or combining multiple arrays into a single scale-out cluster, scaling both performance and capacity beyond a single array.

Snapshots and integrated data protection

CASL can take thousands of point-in-time instant snapshots of volumes by creating a copy of the volumes' indices. Any updates to existing data, or new data written to a volume, are redirected to free space (optimized by CASL's unique data layout). This means snapshots have no performance impact and consume little incremental space, since only changes are maintained. It also simplifies restoring from snapshots, as no data needs to be copied.
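
A toy model makes the index-copy idea easy to see. The sketch below (our own simplification, not CASL itself) keeps a logical-to-physical index, copies that index to take a snapshot, and redirects new writes to free space so the snapshot keeps pointing at the old blocks:

```python
class Volume:
    """Toy model of index-based snapshots: a snapshot is just a copy of the
    volume's index, and new writes go to fresh locations so the snapshot
    keeps pointing at the old blocks (illustrative only)."""

    def __init__(self):
        self.store = {}        # physical location -> block data
        self.index = {}        # logical block address -> physical location
        self.next_loc = 0

    def write(self, lba, data):
        loc = self.next_loc    # always write to free space (redirect-on-write)
        self.next_loc += 1
        self.store[loc] = data
        self.index[lba] = loc  # only the live index is updated

    def snapshot(self):
        return dict(self.index)   # copying the index is cheap: no data is copied

    def read(self, lba, index=None):
        index = self.index if index is None else index
        return self.store[index[lba]]

vol = Volume()
vol.write(0, b"v1")
snap = vol.snapshot()               # instant point-in-time copy
vol.write(0, b"v2")                 # overwrite lands in a new location
assert vol.read(0) == b"v2"         # live volume sees the new data
assert vol.read(0, snap) == b"v1"   # the snapshot still sees the old data
```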

Efficient replication

Nimble Storage efficiently replicates data to another array by transferring only compressed, block-level changes. These remote copies can be made active if the primary array becomes unavailable. This makes deploying disaster recovery easy and affordable, especially over a WAN to a remote array where bandwidth is limited.
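
Conceptually, replicating block-level changes can be as simple as diffing two snapshot indices and shipping only the blocks that moved, compressed. The sketch below is a generic illustration of that idea (the function names and data layout are ours, not Nimble's):

```python
import zlib

def changed_blocks(prev_index, curr_index):
    """LBAs whose physical location changed (or are new) since the last replicated snapshot."""
    return [lba for lba, loc in curr_index.items() if prev_index.get(lba) != loc]

def replicate(store, prev_index, curr_index, send):
    """Compress and send only the blocks that changed between two snapshots."""
    for lba in changed_blocks(prev_index, curr_index):
        send(lba, zlib.compress(store[curr_index[lba]]))   # block-level, compressed

# Two snapshot indices of the same volume: only LBA 7 points somewhere new.
store = {0: b"old" * 100, 1: b"new" * 100}
prev_snapshot = {3: 0, 7: 0}
curr_snapshot = {3: 0, 7: 1}
sent = []
replicate(store, prev_snapshot, curr_snapshot, lambda lba, blob: sent.append(lba))
print(sent)   # -> [7]; everything else already exists on the remote array
```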

Zero Copy Clones

Nimble Storage arrays can instantly create snapshot-based, read/writable clones of existing volumes. These clones benefit from the same fast read and write performance, making them ideal for demanding applications such as VDI or test/development.

Self-Service Nimble Storage Pricing

Do You Have Enough Cache for Your Nimble Storage SAN?

Nov 7, 2013 | Nimble Storage, Nimble Storage Pricing

Notice we said “Cache” instead of “Cash”. But even then, spending more cash on cache is no laughing matter when it comes to your mission-critical applications. Nimble Storage is rare among storage vendors in that it provides its clients the option to upgrade the cache (base flash capacity) on all arrays (except for the CS210).

Why is this important…?

The amount of active data an organization has tends to grow over time. Even if you have a very large-capacity storage array with very high performance (IOPS), you still need a sufficient, and scalable, amount of cache to serve that data. Depending on your needs and requirements, storing 160 GB worth of active data in flash might be sufficient, but if you need more, Nimble Storage has the answer.

Nimble Storage tends to use the terms “cache,” “flash,” and “flash capacity” interchangeably; for configuration and Nimble Storage pricing discussions they mean the same thing. Nimble Storage arrays can scale from 160 GB of cache/flash all the way to 2,400 GB, and that's pretty powerful.
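
A quick back-of-the-envelope calculation shows how to think about sizing. The numbers below are assumptions for illustration only, not output from any Nimble sizing tool:

```python
# Rough cache-sizing arithmetic (illustrative assumptions, not a Nimble sizing tool).
total_data_gb = 12_000          # assumed total data stored on the array
active_fraction = 0.10          # assume ~10% of the data is "hot" at any given time

needed_cache_gb = total_data_gb * active_fraction
print("Working set to keep in flash: %.0f GB" % needed_cache_gb)   # -> 1200 GB

# Nimble arrays scale from roughly 160 GB to 2,400 GB of flash; the
# intermediate sizes below are illustrative, not actual model options.
for flash_gb in (160, 600, 1200, 2400):
    fits = "fits" if flash_gb >= needed_cache_gb else "too small"
    print("%5d GB flash: %s" % (flash_gb, fits))
```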

Yup, Nimble Storage has planned ahead and has you covered. So before you buy that Nimble Storage SAN, compare the various arrays through some Nimble pricing research to confirm you have the right amount of cache/flash, and find out how much you can upgrade to if the need ever arises.


Nimble Storage Scales Up and Out

Oct 8, 2013 | Nimble Storage, Nimble Storage Pricing

Nimble Introduces Scale-to-Fit Storage

Groundbreaking Scaling Paradigm Allows Enterprises to Individually Scale Performance and Capacity at the Lowest Incremental Cost to Accommodate the Requirements of Diverse Workloads and Applications

According to Nimble, today’s scale-up storage architectures are inflexible, requiring upfront forecasting of performance needs over an array’s three-to-five-year lifespan and creating separate storage silos that complicate management. Scale-out cluster solutions provide upfront flexibility, but tie performance and capacity together, requiring customers to incur higher incremental costs every time they add a storage node.

Nimble Storage addresses these challenges by allowing customers to purchase exactly what they need up front and by providing the industry’s most flexible scaling options. Customers can thus protect their existing investments while growing their storage at the lowest incremental cost without downtime as their needs evolve. Nimble Storage offers three flexible paths for scaling:

(1) Scale capacity only: a new line of ES-Series storage expansion shelves allows customers who do not require additional performance to add capacity at the lowest incremental cost without downtime.

(2) Scale performance only: a new line of extreme-performance CS400 series arrays is ideal for customers running performance-intensive applications such as OLTP and VDI. Existing CS200 series array controllers can be upgraded to CS400 arrays without downtime. Customers can also upgrade to higher-capacity flash SSDs without downtime to accommodate workloads with larger active data sets.

(3) Scale capacity and performance: a powerful new operating system upgrade, Nimble OS 2.0, allows customers to cluster arrays together, providing linear scaling of both capacity and performance, as well as unified management. Nimble scale-out clustering allows users to grow or shrink their storage environments seamlessly, and to easily perform data migrations and upgrades, all without downtime.

Enterprises can mix and match these approaches to scale to hundreds of terabytes and hundreds of thousands of IOPS in a single storage cluster.

“Traditional scale-up systems and more modern scale-out systems are rooted in an era when storage capacity and storage performance were tethered together,” said Suresh Vasudevan, CEO of Nimble Storage. “Our scale-to-fit architecture delivers an unparalleled ability to independently scale the controller performance, cache capacity or storage capacity of any node while also allowing multiple nodes to become part of a cluster. Starting with a small footprint, our customers can continually and nondisruptively scale and evolve their infrastructure in small, granular increments across the widest range of workloads.”

Nimble’s scale-to-fit technology is built on the groundbreaking Cache Accelerated Sequential Layout Architecture (CASL). CASL is architected from the ground up to leverage flash and high-capacity disk to deliver affordable performance and capacity. In addition, CASL delivers highly efficient snapshots and WAN-efficient replication, dramatically simplifying backups and disaster recovery. Operation of the arrays is simplified across their lifecycle through deep application integration as well as proactive wellness and management.

“Enterprise storage requirements are continually and rapidly in flux,” said Roger Cox, research vice president at Gartner. “Consolidation of diverse workloads means that higher demands are placed on storage. Because most enterprises can’t predict their storage requirements even one year down the road, they need assurance of elastic performance and capacity in their storage platforms.”

Scale-Out Clustering

Powerful scale-out clustering with Nimble Storage allows enterprises to size their storage to current requirements, eliminate the need to forecast future requirements and eliminate storage silos:

- Scale performance and capacity linearly with the seamless addition of new arrays
- Automatically “stripe” storage volumes across multiple arrays in a cluster (a sketch of the general idea follows this list)
- Create storage pools in a cluster to segment applications and workloads on different arrays
- Migrate data nondisruptively across pools in a cluster
- Add arrays to a cluster or remove them with full, uninterrupted data access; the Nimble operating system automatically redistributes data across arrays in the cluster
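
For the striping bullet above, here is a minimal illustration of spreading a volume's blocks across the arrays in a pool. The actual Nimble OS placement logic is not described here, so treat this as a generic sketch of the idea, with invented names and stripe sizes:

```python
def array_for_block(lba, arrays, stripe_blocks=1024):
    """Map a logical block to one array in the pool by striping in
    fixed-size chunks (illustrative only; not the Nimble OS algorithm)."""
    stripe = lba // stripe_blocks           # which stripe the block falls in
    return arrays[stripe % len(arrays)]     # stripes rotate round-robin across arrays

pool = ["array-1", "array-2", "array-3"]
for lba in (0, 1024, 2048, 3072):
    print(lba, "->", array_for_block(lba, pool))

# Adding an array to the pool changes len(arrays); the cluster would then
# rebalance existing data across the new layout, nondisruptively.
```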

Furthermore, scale out is compatible with all existing and new Nimble Storage arrays, and any arrays can be mixed and matched within a scale-out cluster. Nimble makes scale out available at no cost to all customers with current service and support contracts.

“With the expansion of our operations, it was imperative that we build a storage architecture that could scale performance and capacity to accommodate staff additions, significant growth in our data and a range of new applications, including VDI,” said Derek Schostag, systems engineer, Lindquist & Vennum P.L.L.P, a business-oriented general-practice law firm. “With Nimble Storage we have achieved that objective, and we’re extremely confident of investment protection, even for scenarios that we can’t envision today. No other storage provider can give us that confidence.”

CS400 Series Extreme-Performance Arrays

Nimble’s new extreme-performance CS400 arrays are designed to host an enterprise’s most demanding applications and workloads, supporting hundreds of virtual machines or thousands of VDI users on each array.

“Nimble’s CS400 provides a single storage server that can meet the requirements of virtualization on a large scale, allowing enterprises to support their core applications, workloads and users, while also providing room to grow,” said Raj Mallempati, director of product marketing, VMware. “End users using VMware View™ solutions are looking for uncompromising performance at scale. Nimble CS400 and VMware View provide a high-performance, scalable desktop-virtualization solution that can meet the ever-growing capacity requirements of enterprise users.”


Unduped – Saving bandwidth costs with Nimble Storage

Oct 8, 2013 | Nimble Storage, Nimble Storage Pricing

Here's a great article written by Nimble Storage's Umesh Maheshwari, co-founder and CTO, that discusses its always-on compression approach. Enjoy.

In a traditional storage environment, primary and backup storage are separate, and backups are based on copying data. Typically, the whole volume is copied from primary storage to backup storage every week or every day. If stored on backup storage without any capacity optimization, these backups can easily use up many times the space used on primary storage. Capacity-optimized backup storage systems overcome this problem using various techniques:

Deduplication, aka dedupe. Successive full backups have mostly the same content because the change rate is generally small. Dedupe removes this duplication in content by sharing blocks across backups. Global dedupe goes a step further and enables sharing of identical blocks regardless of where they are, including identical blocks at different locations within a backup.

Compression. Compression works on an individual block of data (generally less than 1 MB) at a time, and crunches it down based on commonality within the block. Examples include the gzip utility and various LZ algorithms.

In a previous article, Ajay wrote about the reasons for moving towards converged primary and backup storage. With converged storage, backups are based on volume snapshots. A snapshot is logically a point-in-time copy of the volume, but physically it shares all unchanged blocks with the primary state and other snapshots. There is no copying or duplication of data to begin with, so there is no need to deduplicate. This provides huge savings in CPU, network, disk, and memory utilization compared with first copying the whole volume and then deduping it back down. One might say that snapshot-based backups are not duped in the first place and don't need dedupe: they are unduped.

In addition, in Nimble’s converged storage model, all data is compressed, including the primary state and backups. This provides a huge advantage compared to most primary storage systems, which do not compress randomly-accessed application data at all.

Next, I will focus on space usage—not because it is the most important difference, but because many interesting questions arise around it.

Proponents of deduping might assume that dedupe is more space optimized than unduped, because global dedupe is able to share identical blocks across backups as well as within a single backup at different locations, while unduped snapshots only share blocks at the same location. The intra-backup sharing does provide a small advantage for dedupe. However, unduped storage benefits from a bigger advantage: the sharing of blocks between the primary state and backups! In essence, unduped converged storage keeps only one baseline copy of the volume, while separate deduped storage keeps two—one on primary storage and one on backup storage. As we will see, the primary-backup sharing outweighs the intra-backup sharing. Therefore, compared to the total space used with separate primary and deduped backup storage, converged storage uses even less space.

Below I present a mathematical comparison of the total space usage (including the primary state) between the following four types of storage:

- Unoptimized daily incremental and weekly full backups
- Global dedupe with compression (as in optimized backup storage)
- Unduped without compression (as in optimized primary storage)
- Unduped with compression (as in Nimble converged storage)

The following chart plots the capacity optimization ratio for each of the three optimized storage types. Capacity optimization is computed as the ratio of the total space used in unoptimized storage over the total space used in the specific optimized storage type. Higher values are better. (This ratio ignores the higher cost of primary storage compared to backup storage, and therefore significantly understates the advantage of converged storage, which uses less expensive storage.) The x-axis indicates the days of backup retention. In general, capacity optimization improves with retention.
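
To make the comparison concrete, here is a rough model of the storage types above with assumed parameters (primary size, change rate, compression ratio, retention). It is our own simplification for illustration, not the model behind the original chart:

```python
def capacity_ratios(primary_tb=10.0, daily_change=0.02, compress=0.5, retention_days=60):
    """Rough model of total space (primary + backups) for each storage type,
    expressed as the capacity-optimization ratio versus unoptimized backups.
    All parameters are illustrative assumptions, not measured values."""
    weekly_fulls = retention_days // 7
    incrementals = retention_days - weekly_fulls
    changed = daily_change * primary_tb

    # Unoptimized: primary + weekly full copies + daily incrementals.
    unoptimized = primary_tb + weekly_fulls * primary_tb + incrementals * changed
    # Separate deduped+compressed backup store: one compressed baseline plus
    # compressed daily changes, alongside an uncompressed primary copy.
    deduped = primary_tb + compress * (primary_tb + retention_days * changed)
    # Unduped converged snapshots: one baseline shared with primary plus daily changes.
    unduped = primary_tb + retention_days * changed
    unduped_compressed = compress * unduped

    for name, total in [("dedupe + compression", deduped),
                        ("unduped, no compression", unduped),
                        ("unduped + compression", unduped_compressed)]:
        print("%-25s %5.1fx" % (name, unoptimized / total))

capacity_ratios()   # with these assumptions, unduped + compression wins clearly
```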


The chart shows the following:

- Deduping is a fine and necessary optimization for separate backup storage.
- Unduped converged storage without compression is not as effective as deduped storage with compression.
- Unduped converged storage with compression saves significantly more space than deduped storage with compression for typical backup retention periods of 30–90 days. In fact, dedupe would catch up with unduped in terms of capacity savings only if backup retention is longer than 8 months.

Of course, data protection is not complete without provision for disaster recovery, which requires an off-site replica. Comparisons similar to the one above can easily be made that include the space used on the replica. Unduped converged storage with replica retains a lead over separate deduped storage with replica, regardless of whether the primary or the backup storage is replicated. This is because unduped storage with replica has two baseline copies of data (one on converged storage and the other on replica), while deduped storage with replica has three (one on primary storage, one on backup storage, and one on replica).

Interestingly, matching the space saving of dedupe was not our top motivation for building converged storage. The major motivations were the following:

- Ability to directly use backups and replicas without having to convert the data from backup to primary format.
- Avoid massive transfer of data from primary to backup storage.
- Enable significant space savings without performance impact for randomly accessed data, such as databases.

Nevertheless, it is good to demonstrate that unduped storage is not just as good as deduped storage at saving space; it is even better!


Keys to the CASL

Oct 8, 2013 | Nimble Storage, Nimble Storage Pricing

In our last post, “Flash for Cache, Just Not for Writes,” we talked about Nimble Storage's approach to accelerating read performance using a true SSD flash cache. Simple, elegant, FAST. But what about increasing write performance?

Write BIG, Write FAST

Nimble Storage solves the disk I/O write bottleneck by once again identifying the problem and applying new, inexpensive technologies that were unavailable 10-15 years ago. The solution is based on what they call Cache Accelerated Sequential Layout, or CASL.

At the core of the CASL write-performance story is a design structured around a write-optimized data layout. When pure throughput is required from spinning media like SATA drives, the best way to push performance is to write in long, continuous streams. Small, random writes spawn head seeks and drastically reduce throughput.
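
Some back-of-the-envelope arithmetic shows why. The figures below are generic assumptions for a 7,200 RPM drive, not measurements of any particular disk:

```python
# Back-of-the-envelope throughput comparison for a generic 7,200 RPM drive.
# All figures are rough assumptions for illustration, not vendor specs.
avg_seek_s = 0.0085            # ~8.5 ms average seek
avg_rotational_s = 0.0042      # ~4.2 ms average rotational latency (half a revolution)
block_bytes = 4096             # 4 KB random writes
sequential_mb_s = 100.0        # assumed sustained sequential throughput

random_iops = 1 / (avg_seek_s + avg_rotational_s)
random_mb_s = random_iops * block_bytes / 1e6
print("random 4 KB writes: ~%.0f IOPS, ~%.2f MB/s" % (random_iops, random_mb_s))
print("sequential writes:  ~%.0f MB/s (~%.0fx faster)" %
      (sequential_mb_s, sequential_mb_s / random_mb_s))
```

Even after allowing for metadata and other real-world overheads, coalescing random writes into long sequential streams buys one to two orders of magnitude, which is consistent with the "as much as 100x" claim below.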

To solve this problem, Nimble Storage's CASL “collects or coalesces random writes to the array, compresses and writes them sequentially to disks. This results in write operations that are as much as 100x faster than traditional disk-based storage.” The key to CASL is that the system uses DRAM, which is extremely fast, to collect and consolidate the data before it is written.

All this adds up to a relatively inexpensive array that competes head-to-head with gear twice the cost.

I wish there were more complexity to it, but sometimes the smartest ideas are simple!


Simplifying SAN Costs – Nimble Storage vs. The Complex Guys

Oct 1, 2013 | Nimble Storage, Nimble Storage Pricing

There’s money in SAN complexity…and it’s YOURS!

Have you ever tried to decipher a SAN pricing proposal from one of the “big” guys? There are more part numbers than I would expect when building the space shuttle. It is not enough that the basic hardware is broken down into dozens of pieces; you then have to select a dizzying array of software licensing options for both the array and any hosts using array features.

Here’s a sample SKU list from a major NAS/SAN vendor for a simple storage array:

FAS6200HA-IB-BASE-1   GF-RAS56200HA, IB, 1B CF, OS, R5              2
X70015A-ESH2-R5-C     DS14MK2 SHLF, ACPS, 14x144GB, 15K, HDD        5
X6524-R6-C            CBL, 2M, Optical, Pair, LC/C, -C, R6          2
X6530-R6-C            CBL, 0.5M, PATCH, FC SFP TO SFP, -C, R6       6
X1941A-R6-C           CBL, 5M, CLUSTER 4X, CU, -C, R6               2
X871A-R6-C            20A Storage Equipment Cabinet, -C, R6         1
X875A-R6-C            20A PWR CORD (4), CABINET, NEMA-S, R6         1
X800-42U-R6-C         Cabinet Component Power Cable, R6            14
X5517-R6-C            Storage Cabinet Rail Set, 42U, -C, R6         1
X6529-R6-C            SFP, Optical, Pair, LC/LC, -C, R6             4
X8773-R6-C            Multiple Product Tie-Down Bracket             2
DOC-3XXX-C            Documents, 3XXX, -C                           1
Software (Base Unit)
SW-T4C-CIFS-C         CIFS Software, T4C, -C                        2
SW-T4C-ISCSI-C        iSCSI Software, T4C, -C                       2
SW-T4C-NFS-C          NFS Software, T4C, -C                         2
SW-T4C-SRESTORE-C     SnapRestore Software, T4C, -C                 2
SW-T4C-SME-C          SnapManager Software, Exchange, T4C           2
SW-T4C-SMSVS-C        SnapMirror-SnapVault Software Bundle          2
Software (Host Side)
SW-SDR-WIN            SnapDrive, Windows                           10
SW-SSP-SDR-WINDOW     SW Subs, SnapDrive for Windows 3.0           10
SW-SMBR-1000PK        Single Mbox Recovery, 1000pk                  1
SW-SSP-SMBR-1000PK    SW Sub, SMBR, 1000pk                          1
SW-SDR-SOL-TIER1      SnapDrive, Solaris, Tier 1 (1-2 CPU)         10
SW-SSP-SDR-SOL-TIER1  SW Subs, SDR Solaris Support, Tier 1         10
SW-SDU-CPU            SW, SnapDrive UNIX, CPU                       5
SW-SMO-CPU            SW, SnapManager Oracle, CPU                   5
SW-SSP-SMO-CPU        SW, SUB, SnapManager Oracle                   5
Services and Support
CS-S-INST             Initial Installation, DS14                    1
CS-A                  SupportEdge Standard, FAS270, Mths: 3         2

Wow!

All we wanted was a high-performing storage array. Instead, we got a dissertation on what the salesperson thinks we need. Notice that this proposal contains base hardware, base software, host software, installation services, and support. Although this would be considered a “simple” configuration, it is still complex, and a potential customer would be hesitant to make any changes or delete any proposed features.

What's worse is when the customer proceeds with this proposal thinking they have “covered all their bases” for the future. Since many projects grow in complexity over time, new features may be needed, but this a la carte model requires the customer to re-budget and purchase additional options later...more money.

Nimble Storage’s approach eliminates complexity

Compare this nightmare configuration with how Nimble Storage sells its arrays. Nimble Storage has a single part number for everything, including every option. Here is a Nimble Storage configuration for a fault-tolerant (no single point of failure) system with 24TB of usable capacity:

SKU: CS260 24TB (Qty: 1)
Description: CS260, 24TB usable (24-48TB with compression)
Hardware: (4) 320GB SSD, (12) 3TB SATA, Dual Controllers, (6) 1GbE Active Ports
Software: Includes the array and all software features: Dynamic Caching, Write-Optimized Data Layout, Universal Compression, Thin Provisioning, Instant Snapshot and Recovery, Efficient Replication, Zero-Copy Clones

I like to call this the single most important business feature of Nimble Storage: all-inclusive pricing. Everything you will need, now and in the future, is included at no additional charge. When you install the unit and fire up the GUI, you will notice that no features are “greyed out”. This is fantastic for projects that evolve, because when you need a feature, say offsite replication, it's available. Simply configure it!

Want to save money? Consider eliminating complexity.

Self-Service Nimble Storage Pricing

Flash for cache, just not on writes

Oct 1, 2013 | Nimble Storage, Nimble Storage Pricing

As we worked through our discovery process of Nimble Storage and its amazing claim that it solves the performance vs. capacity trade-off, we stumbled upon an interesting concept: not using the flash reservoir for write caching. Using the flash only for true read caching, instead of as an additional, higher-performing storage tier, seemed foreign until we read the blog post by co-founder Umesh Maheshwari.

Flash memory shines on reads: it reads 100 times faster than a disk. But its performance advantage is much weaker on writes, and its write endurance is much lower than disk’s. Therefore, Nimble OS uses flash only for accelerating reads, aka “read caching”. It uses NVRAM (a DRAM-based device) for accelerating writes, aka “write caching”.

The Takeaway

As with all technology trade-offs, emerging companies without a legacy customer base using older technology have an advantage. New concepts and methods are available to newer companies like Nimble that can architect efficient use of advanced multicore processors, DRAM, and flash. In Nimble's case, they have built an efficient DRAM-to-disk layer that eliminates the need for a middle flash tier. Flash can be used for what it does best: read speed.

See Umesh's blog post here: “Write Caching in Flash; a dubious distinction.”


Converged Primary and Backup storage made Nimble

Sep 26, 2013 | Nimble Storage, Nimble Storage Pricing

Nimble Storage’s recipe for converging primary and backup storage is simple and has two parts.

1. Capacity optimization: Storing backups for 30–90 days needs lots of capacity. In a system not designed to store backups, they can easily use 10–20x the space used in primary storage. Nimble handles this problem as follows:

  • Store all data on high-capacity disk drives. These disks have over 3x the capacity at roughly one-sixth the cost per GB of high-performance disks. They also deliver only about one-third the performance of high-performance disks, but we deal with that separately. (High-capacity disks have often been called SATA disks, but that is quickly becoming a misnomer as high-capacity SAS drives enter the market.)
  • Use data reduction techniques such as compression and block sharing. These techniques can reduce the space used by backups by 10–20x. Block sharing can take many forms, e.g., snapshots and dedupe, and it is important to pick judiciously based on the context. I will write further about this in the next article. (A rough cost sketch using these figures appears after the next list.)

2. Performance optimization, especially for random IO. Common business applications such as Exchange and SQL Server generate lots of random IO. Hard disks are generally bad at random IO, and high-capacity disks are particularly bad. We use two techniques that more than make up for this slowness:

  • Accelerate random reads using flash as a large cache. Most storage vendors have a story around using flash. However, flash has some peculiar characteristics, and how a system uses flash is more important than whether it uses flash. In particular, flash is not a performance cure-all; e.g., it might not be cost effective in accelerating random writes.
  • Accelerate random writes by sequentializing them on disk. This technology has been known for some time as log-structured file systems, but it has become more interesting recently because of new enabling technologies.
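
Using the figures from the first list, a rough cost sketch (illustrative prices and ratios only, not actual vendor pricing) shows why high-capacity disk plus data reduction is so much cheaper per retained terabyte:

```python
# Rough cost-per-usable-terabyte comparison using the figures above.
# Prices and the exact reduction factor are assumptions for illustration.
perf_disk_cost_per_tb = 600.0                     # assumed $/TB for high-performance disk
high_cap_cost_per_tb = perf_disk_cost_per_tb / 6  # "one-sixth the cost per GB" from the list
data_reduction = 15.0                             # assume 15x, within the 10-20x range above

# Cost to retain 1 TB of logical backup data on each kind of storage:
unreduced_on_perf = perf_disk_cost_per_tb
reduced_on_high_cap = high_cap_cost_per_tb / data_reduction

print("unoptimized backups on high-performance disk: $%.0f per logical TB" % unreduced_on_perf)
print("reduced backups on high-capacity disk:        $%.2f per logical TB" % reduced_on_high_cap)
print("roughly %.0fx cheaper per terabyte retained" % (unreduced_on_perf / reduced_on_high_cap))
```
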
Self-Service Nimble Storage Pricing

Nimble Storage

Aug 29, 2013 | 4Storage, Nimble Storage, Nimble Storage Pricing

Nimble Storage Pricing with No Hassles

Use the link below to access our Nimble Storage Self-Service Quote Tool (powered by EchoQuote, a third-party quoting tool) to receive your Nimble quote quickly, often in minutes. There's no obligation and no pressure.

Self-Service Nimble Storage Pricing


Nimble Storage Configuration Information

Use the link below to access our 4NimbleStorage site to get Nimble Storage options.


4NimbleStorage Blog: Nimble Storage Tips and Tricks
