Storage Magazine - UK
  Are you connected in the right places?


From STORAGE Magazine Vol 7, Issue 4 - June 2007

NAS is seen in many quarters as an ideal choice for organisations looking for a simple and cost-effective way to achieve fast data access for multiple clients at the file level. Brian Wall reports

So, what exactly is NAS - Network Attached Storage - and what might it offer that other solutions do not? Well, first and foremost, it challenges the traditional file server approach by creating systems designed specifically for data storage. Instead of starting with a general-purpose computer and configuring or removing features from that base, NAS designs begin with the bare-bones components necessary to support file transfers and add features from the bottom up.

Implementers of NAS benefit from performance and productivity gains. First popularised as an entry-level or midrange solution, NAS still has its largest installed base in the small-to-medium-sized business sector.

Yet the hallmarks of NAS - simplicity and value - are equally applicable for the enterprise market. Smaller companies find NAS to be a plug-and-play solution that is easy to install, deploy and manage, with or without IT staff at hand. Thanks to advances in disk drive technology, they also benefit from a lower cost of entry.

In recent years, NAS has developed more sophisticated functionality, leading to its growing adoption in enterprise departments and workgroups. It is not uncommon for NAS to go head to head with storage area networks in the purchasing decision or become part of a NAS/SAN convergence scheme. High reliability features such as RAID, and hot swappable drives and components, are standard even in lower end NAS systems, while mid-range offerings provide enterprise data protection features, such as replication and mirroring for business continuance.

According to Rami Schwartz, CEO of NAS software company Exanet, two trends are transforming the NAS market - the rapid adoption of digital content and unstructured data in a number of emerging markets, and fundamental shifts in the way data centres are designed and utilised.

"Digital content is more widespread than just digital media," he says. "Markets making extensive use of digital content include storage service providers, internet aggregators, CCTV surveillance, telecoms, active archiving and many others. The adoption of digital content within these markets is creating new challenges for NAS vendors, including massive capacity demands, new performance requirements, limited ability to forecast growth, and the need to simplify manageability and reduce costs. The ever-growing resolution of stored data is further challenging the ability to deliver the required capacity and performance when managing hundreds of millions - sometimes billions - of files. Fortunately, enterprise-class NAS solutions have developed so much that, in some cases, both their reliability and availability surpass those of traditional SAN arrays."

In parallel to the dramatic explosion of unstructured data, data centres are evolving to meet the practical, technical and business challenges laid down by an increasingly aware market. "Modern and future data centres are characterised by increased deployment of virtualisation technologies, evolving grid-based compute models, blade technology, and other evolutions in storage and network technologies," Schwartz adds.

"A clear case in point is the shift in server virtualisation; we now take this for granted. Where the server industry leads, the storage industry looks set to follow. So far, NAS has played a minor part in server virtualisation, but our belief is that it is going to change, and sooner than people realise." Both server and storage virtualisation are driven by the same underlying business objectives: improving utilisation and simplifying manageability as a means to reduce operational costs and extend business agility. NAS is not a solution for all storage challenges, as Schwartz acknowledges, but the line between SAN and NAS is more blurred now than ever before.

"Legacy OLTP applications will continue to be based on SAN, while more and more of the newly deployed applications will find enterprise-class NAS to be the better, more natural, choice. Furthermore, the market opportunity created by the high adoption rates of modern NAS solutions by the SMB segment makes for exciting times ahead, as both the size of companies with storage demands and the level of requirement itself continues to change."

While NAS may be an enterprise-ready technology, it is only one part of a unified storage solution. "NAS is for file storage," states David Hubbard, chief operating officer at Reldata. "It does not provide effective application storage. Why? Because the main advantage of NAS, the ability to share files among multiple users, is of no use to applications that expect and rely on dedicated and fast access to their storage resources. As a result, the sharing functionality only reduces the access speed and creates vulnerabilities."

As an example, Hubbard points to how running an Exchange server from a NAS filer is theoretically possible, yet practically very dangerous, because network outages can lead to email database corruption that may take hours to recover. "Also, the performance of such installations is sub-par and they don't scale well," he says. "Most popular SQL databases are similarly affected. Therefore, in an effort to address the diversity of applications' requirements, the majority of the IP storage market is moving towards unified storage solutions that deliver iSCSI SAN in conjunction with NAS.

"Currently, almost all such converged NAS/IP SAN solutions are built on NAS technology by adding an iSCSI target into one large file in the file system. This is a quick-fix solution for vendors tied to a legacy architecture, and it is definitely not an enterprise solution. Taking block-level iSCSI application storage from the host, converting it to fit through a file system and then converting back to block-level storage at the disk level produces significant latency, has a potential for data loss or corruption during power outages, and introduces unnecessary risk of instability under stress loads, all of which is unacceptable in an enterprise environment."
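The design Hubbard criticises can be sketched in a few lines. The class below is a toy model, not any vendor's implementation: it exports a "LUN" as one large file, so every block request from the host is translated into a file offset, passes through the file system's caching and metadata layers, and is only then written back to disk as blocks - the double conversion that adds latency. The class and file names are illustrative.

```python
import os

BLOCK_SIZE = 512  # bytes per logical block; a typical sector size

class FileBackedLUN:
    """Toy model of an iSCSI target stored as one big file inside a
    NAS file system. Block I/O is translated to file I/O and back."""

    def __init__(self, path, num_blocks):
        # Pre-allocate the backing file to the full "LUN" size.
        with open(path, "wb") as f:
            f.truncate(num_blocks * BLOCK_SIZE)
        self.f = open(path, "r+b")

    def write_block(self, lba, data):
        assert len(data) == BLOCK_SIZE
        # Block address -> file offset: the extra translation step.
        self.f.seek(lba * BLOCK_SIZE)
        self.f.write(data)  # lands in the file system cache first, not on disk

    def read_block(self, lba):
        self.f.seek(lba * BLOCK_SIZE)
        return self.f.read(BLOCK_SIZE)

lun = FileBackedLUN("lun0.img", num_blocks=1024)
lun.write_block(7, b"\xab" * BLOCK_SIZE)
print(lun.read_block(7) == b"\xab" * BLOCK_SIZE)  # True
```

Every `write_block` here depends on the host file system doing its own buffering and journalling underneath, which is precisely where the latency and the corruption window come from.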

Put simply, Hubbard argues, block level application data is far less protected than file data and has far less packaging, thus allowing block storage to work faster in a dedicated application environment. "Once again, the file systems are designed to work with shared file data accessed by multiple users. They provide semantics for organising the files in user-friendly folder structures protected by various types of access control features and locks. This functionality is not necessary for iSCSI volumes storing dedicated block-level application data, and only serves to limit the I/O performance.”

Perhaps most importantly, in an effort to improve concurrent access performance, file systems tend to hold data in memory before writing it out to disk in bulk. Since the iSCSI initiator running on the host is not aware of this, it becomes a guaranteed set-up for data corruption in the event of a power outage. "Curiously, if there is a file system on top of an iSCSI target, the data has to go through two file systems before it reaches the storage device."
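The write-back buffering described above is visible in ordinary file I/O. In the sketch below, `flush()` only pushes data from the application's buffer to the kernel, and it is `os.fsync()` that forces the blocks to stable storage; until that returns, a power cut can lose a write the application already believes has completed. The file name is illustrative.

```python
import os

# A write() normally lands in the file system's write-back cache;
# an iSCSI initiator layered above a file system has no visibility
# into when (or whether) that cache is flushed to disk.
with open("journal.dat", "wb") as f:
    f.write(b"committed record")
    f.flush()             # push the user-space buffer to the kernel
    os.fsync(f.fileno())  # force the kernel to write the blocks to disk

# Only after fsync returns is the record durable across a power outage.
print(os.path.getsize("journal.dat"))  # 16
```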

Instead of fighting to fit iSCSI through a file system, NAS and iSCSI simply have to be delivered in parallel from a virtualised storage device, he adds. In this way, block-level data never needs to go anywhere near NAS file protocols or file system semantics. This type of unified storage can be delivered from the same physical storage device; it is merely a case of setting up the software architecture in a more logical manner to gain enterprise reliability.

"Wide area - or WAN - backup and replication of both application and file data is the third element that should now be expected of any unified IP storage solution. This is one area where pure NAS or NAS-based unified storage solutions fall very short. Backing up file data is achievable. However, running wide area backup of a block level volume through a file system could hog file system resources and become a recipe for disaster.”

The most secure solution for backup and continuity is a straight replication of application data, where data is sent in real time from source to two live storage targets. “This is a compliance requirement in many enterprise environments,” adds Hubbard, “but it is virtually impossible to achieve using iSCSI targets mounted within a NAS file system if the application is write-intensive (such as a database that is frequently loaded). The alternative is to create periodic snapshots of block application data that are then sent to a remote backup device. However, if snapshots consume file system resources, there is a considerable potential for corruption of not only the backup copy but also the original source volume as they both depend on the same file system structure."
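The straight replication Hubbard describes can be sketched as follows. This is a minimal model under stated assumptions: the two in-memory dictionaries stand in for the two live storage targets, and the key point is that a write is only acknowledged to the application once both copies are in place.

```python
# Toy sketch of synchronous replication to two live targets.
# Real targets would be network-attached storage devices; dicts
# are stand-ins so the behaviour can be shown self-contained.

class ReplicatedVolume:
    def __init__(self):
        self.primary = {}    # stands in for the local storage target
        self.secondary = {}  # stands in for the remote live target

    def write(self, block, data):
        # Both copies are written before the write is acknowledged,
        # so the remote target is never behind the source.
        self.primary[block] = data
        self.secondary[block] = data
        return True  # acknowledgement back to the application

vol = ReplicatedVolume()
vol.write(0, b"payroll")
print(vol.primary[0] == vol.secondary[0])  # True
```

The snapshot alternative mentioned in the text trades this synchronous guarantee for periodic copies, which is where the shared-file-system failure mode arises.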

As far as Mike Walters, consulting systems engineer at NetApp, is concerned, the argument between NAS and SAN no longer exists today. Many properties that have historically been considered the domain of highly resilient SAN-only environments have for several years been available on some NAS solutions, he says. There are, for example, many resiliency features inherently available in today's storage solutions, regardless of whether the usage is within SAN or NAS, including:

• Hot-plug and hot-swap redundant components, such as disks/shelves/fans/power supplies, make it possible to build highly available storage solutions.

• Multi-pathing capabilities that provide secure access to stored data are as easy in the NAS world, through use of ether-channel trunking mechanisms, as they are in the SAN world.
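The multi-pathing idea in the list above can be sketched simply: the client holds more than one network path to the same storage, and fails over transparently when the preferred path errors. The `Path` objects below are stand-ins for real trunked Ethernet links or FC paths; the names are illustrative.

```python
# Minimal sketch of multi-path access to a storage device:
# any single path failure is invisible to the caller.

class Path:
    def __init__(self, name, up=True):
        self.name, self.up = name, up

    def send(self, data):
        if not self.up:
            raise ConnectionError(f"path {self.name} is down")
        return f"sent via {self.name}"

def multipath_send(paths, data):
    # Try each path in order of preference; fall through on failure.
    for p in paths:
        try:
            return p.send(data)
        except ConnectionError:
            continue
    raise ConnectionError("all paths down")

paths = [Path("eth0", up=False), Path("eth1")]
print(multipath_send(paths, b"block"))  # sent via eth1
```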

"Both NAS and SAN offer high-end data protection capabilities such as clustering, asynchronous and synchronous replication solutions," states Walters.

 "Replication can be administrated within a machine room environment or trans-continental, regardless of access protocol used. Data protection features, such as NetApp's Snapshots and SnapRestore, also work across both NAS and SAN, although, in the block world of SAN data, such mechanisms need to be controlled from the owning application server. This can be simplified through use of manageability tools, such as SnapDrive and SnapManager."

Direct-attached storage works well in environments with individual servers or limited servers, but the situation rapidly becomes unmanageable if there are dozens of servers or significant data growth, Walters observes. "NAS solutions provide true heterogeneous data sharing and deliver unparalleled ease of use, enabling IT organisations to automate and greatly simplify their data management operations. Customers are able to respond to business change, reduce administrative costs and improve application availability with improved scalability, reliability, availability and performance.”

The choice of how to access data is down to which protocols are preferred for the data required. "For shared data, such as home directories, clearly NAS is the correct approach, through CIFS for Windows or NFS for UNIX access. Applications such as Microsoft Exchange and SQL Server require a local disk and hence a block access protocol is needed. This could be using either FC or iSCSI.

"There are also other applications, such as Oracle, for which both block and file access will work. In many cases, the ease of configuration and simpler flexibility of a NAS-based protocol is often preferred. In the end, the goal for any organisation is to simplify its operations and get maximum value from the solutions it deploys. Many companies find NAS access easier to configure and use; but, more often than not, customers deploy storage solutions that use more than one protocol, getting greater utilisation from their storage estate. So the choice remains open."
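The protocol-selection reasoning Walters outlines reduces to a small decision rule. The function below is a hedged sketch, not NetApp guidance: the workload categories and return strings are illustrative labels for the cases named in the text (shared files over CIFS/NFS, Exchange and SQL Server over block protocols, Oracle over either).

```python
# Illustrative mapping from workload type to access protocol,
# following the cases described in the article.

def pick_protocol(workload, client_os="windows"):
    if workload == "shared-files":
        # Shared data such as home directories: NAS protocols.
        return "CIFS" if client_os == "windows" else "NFS"
    if workload in ("exchange", "sql-server"):
        # These applications expect a local disk: block access.
        return "iSCSI or FC"
    if workload == "oracle":
        # Both file and block access will work.
        return "NFS or iSCSI"
    raise ValueError(f"unknown workload: {workload}")

print(pick_protocol("shared-files", client_os="unix"))  # NFS
print(pick_protocol("exchange"))  # iSCSI or FC
```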

Hugh Jenkins, enterprise marketing manager, Dell UK, believes that customer requirements for NAS have been polarised between:

• Low-cost, entry-level NAS solutions with limited requirement to be highly scalable/highly available beyond standard internal RAID drive sets.

• Scalable enterprise storage solutions with software and hardware high-availability features for critical data backup and recovery.

The entry-level market is characterised by solutions in the sub-£5k price range, with the vast majority shipping with the Microsoft Windows Storage Server operating system. "In many ways, these solutions are server-centric," says Jenkins. "In fact, a large majority of these systems are self-contained server units with internal RAID-protected storage.

"The mid-range NAS market, typically in the £10-50K range, is distinguished by purpose-built solutions that behave like true storage appliances. These systems are tightly integrated storage systems and feature 'single-pane-of-glass' management consoles. Ease of use is a key attribute of the leading solutions in this segment, with provisioning of storage, network shares and share attributes usually wizard-driven.

"The storage-centric, easy-to-use, mid-range NAS appliance has been largely driven by the advent of storage specialists in an increasing number of organisations. Mid-range NAS solutions are typically specified for applications by an organisation's storage specialist, often in contrast to the entry-level NAS market, where an organisation may not have a specialist entirely dedicated to storage infrastructure. These storage specialists demand an integrated, storage-centric, easy-to-use management experience from a mid-range NAS solution."

Mid-range NAS solutions are typically deployed to provide high-performance file storage for file-intensive workloads in large enterprises and small and medium businesses. Additionally, they are purchased and deployed for file server consolidation projects. "A key attribute of many environments adopting mid-range NAS solutions is client heterogeneity," comments Jenkins. In other words, clients need to access files using the typical Windows file protocol (SMB/CIFS), with simultaneous access by clients using the typical UNIX/Linux file protocol (NFS).

"This has spurred the recent development in the mid-range NAS market of the concept of storage convergence or unification. Unification of storage describes systems capable of serving both file [.doc, .ppt, etc] and block [SQL, Exchange, Oracle applications] storage data. NAS has always featured file protocols, but what is becoming increasingly popularised is the concept of a converged storage device built upon a NAS-focused appliance that supports multiple file and block protocols, including CIFS, NFS, iSCSI and Fibre Channel. In today's heterogeneous infrastructures, a unified solution is a prerequisite for success in the mid-range NAS marketplace."

Dell believes many customers will value a simple-to-use converged storage device, such as the Dell PowerVault NX1950 NAS with Windows Unified Data Storage Server 2003. "This type of solution is also able to complement the widespread use of Fibre Channel SANs by acting as a 'gateway' device to Dell|EMC SAN storage, for example," Jenkins concludes. "In short, the functionality of NAS appliances has begun to address some of the enterprise-level requirements formerly only addressable with the functionality, scalability and resilience of a SAN-based solution. Whilst the rich functional capability of SANs will see them remain the '800lb gorilla' in the market for enterprise-level storage requirements, there is definitely an emerging place for the unified storage versatility offered by some of today's mid-range NAS solutions."

Clearly, with the rapid pace of development today, it is no longer possible to describe a NAS solution in one all-encompassing phrase. "In many cases, the foundation for NAS systems still lies in the servers they are based on, which brings with it the twin familiarities of hardware and resilience on which the platform is built," says Stephen Watson, HP StorageWorks Division product marketing manager.

"This makes management easier, as no retraining is involved to maintain a NAS system when the hardware and the underlying fabric is the same. Nevertheless, dependent on what the storage is to be used for, and costs being considered up front, there may be a very simple choice to make. A NAS system can be introduced to a network and, within a short time, begin to take the strain off other systems that are busy taking care of simple file services and print jobs. In this way, dedicated application servers can undertake other tasks and effectively do what they were brought in to do. Replication can be performed that will mirror data across to a local duplicate data store or across a WAN to an off site facility."

However, the job of the NAS system can grow with the requirements laid on it. So much so that it can also begin to look after data areas of other servers running an application via the simple addition of iSCSI target software. "The database can be linked over the network to the server running the application," he continues.

"Again, the cost of the initial outlay may be a determining factor here, and performance over a shared LAN must be clearly understood. Along with the added usability of the NAS system, it also introduces extra resilience features, such as multi-path redundancy, clustering and simplified backup. Replication can again occur; and when it occurs at the block level, it copies the data, no matter what it is."

Building further on the usability of the NAS system, as data requirements may grow massively, then the NAS system can become the gateway to a SAN - thus providing the simple entry interface from the LAN to high-speed data storage on the back end.

"Via clustering, the NAS can become a gateway system that further enhances the resilience and performance demanded by users and systems alike, through multiple paths to the storage system," adds Watson. "Management of such a system becomes simpler: firmware upgrades, or scaling up the capacities of the data areas presented, can be done on the fly rather than via costly power cycling of an individual server system. It also enables the provision of file and block level data, so a mix of both can be achieved within the same system. Overall, these systems can replicate across their own software, independent of the OS, and thereby provide greater levels of redundancy, should they be required." ST


©2006 Business and Technical Communications Ltd. All rights reserved.
No part of this site may be reproduced without written permission of the owners.