Asset ID: 1-79-1952086.1
Update Date: 2014-12-22
Keywords:
Solution Type: Predictive Self-Healing Sure Solution
1952086.1: Oracle ZFS Storage Appliance: Best Practices For Front Porch DIVA Content Management Software
Related Items
- Sun ZFS Storage 7420
- Oracle ZFS Storage ZS3-BA
- Oracle ZFS Storage ZS4-4
- Oracle ZFS Storage ZS3-4
- Sun ZFS Storage 7320
- Oracle ZFS Storage ZS3-2
Related Categories
- PLA-Support>Sun Systems>DISK>ZFS Storage>SN-DK: 7xxx NAS
- Tools>Primary Use>Configuration
In this Document
Applies to:
Oracle ZFS Storage ZS3-BA
Sun ZFS Storage 7320
Oracle ZFS Storage ZS4-4
Sun ZFS Storage 7420
Oracle ZFS Storage ZS3-4
7000 Appliance OS (Fishworks)
DIVArchive
DIVAdirector
Actor
Purpose
This Knowledgebase entry is a living document that guides the configuration of Oracle ZFS Storage Appliances for best operation with Front Porch Digital DIVA Content Storage Management software.
Scope
Audience: Administrators of Oracle ZFS Storage Appliances and Front Porch Digital software.
Details
Overview
Front Porch DIVA Content Storage Management (www.fpdigital.com/solutions/diva) is a suite of solutions that integrates with industry-standard multimedia production workflow software to manage the storage needed to support very active media environments. At the heart of this suite are DIVAdirector, which directs, controls, and catalogs the media as it flows through the system, and DIVArchive, which ingests, moves, backs up, and archives media files. Media files may reside on disk, on tape, or on both. The servers that actually move the media files between storage media are called Actors. A DIVArchive installation has a Director server and one or more Actor servers. Microsoft Windows Server operating systems host both the Director and the Actor software.
Integral to a DIVArchive system is the disk storage on which the active media resides. Network-attached storage protocols are recommended for connecting the DIVArchive Actors to the Oracle ZFS Storage Appliance. As of December 8, 2014, the supported protocols for access from the DIVArchive Actors to the Oracle ZFS Storage Appliance are SMB V2 (2.01) and NFS V3.
General Recommendations for the Oracle ZFS Storage Appliance for use by DIVArchive
- Content management of media files is very bandwidth-intensive; the faster the network links between the Actors and the Oracle ZFS Storage Appliance, the better. 10GbE links are recommended, with LACP to aggregate multiple links to the Appliance if possible. If using NFS, large MTU sizes (jumbo frames) are recommended where supported by the NICs, servers, and switches. Note: SMB 2.0 does not leverage large MTU sizes. For an in-depth look at the networking considerations on the Oracle ZFS Storage Appliance, see "Networking Best Practices with the Oracle ZFS Storage Appliance" at http://www.oracle.com/technetwork/server-storage/sun-unified-storage/documentation/networking-bestprac-zfssa-2215767.pdf
- Pool configuration will impact performance. The ZFS Storage Appliance can be built in a maximum-performance configuration with terabytes of DRAM ARC cache, eight disk trays of 10K RPM performance drives, RAID10 mirroring, and large-capacity read and write flash devices, or it can be built to lean toward archival use, with just a few trays of 7,200 RPM drives in a RAID-Z configuration and minimal DRAM ARC and flash cache. There is no single configuration that will satisfy every user. For a DIVArchive installation, which can generally be characterized as a large streaming archive workload, the following is recommended:
- Pools should be "Single parity, narrow stripes", with a Log Profile of "Mirrored Log".
- Write flash ("Logs") is better suited to very latency-sensitive random writes, but you may see improvement in some streaming workloads when write logs are available in a pool. If the files being written are larger than the write flash in the pool, or the pool is shared with other applications, set the Synchronous write bias for shares to "Throughput" rather than "Latency".
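To make the "Latency" versus "Throughput" trade-off concrete, the small Python sketch below encodes the rule of thumb above. The function, its parameters, and the decision thresholds are illustrative assumptions only, not an appliance API; the figures in the usage lines are taken from the worked write-log example later in this document.

    def choose_write_bias(log_device_gb, log_device_count, typical_file_gb,
                          concurrent_writers, pool_shared_with_latency_apps):
        """Suggest a Synchronous write bias setting for a media share.

        Hypothetical helper: "Latency" only pays off when the write-log
        flash can absorb the data in flight and no latency-sensitive
        application shares the pool.
        """
        total_log_gb = log_device_gb * log_device_count
        in_flight_gb = typical_file_gb * concurrent_writers

        if pool_shared_with_latency_apps:
            # Protect latency-sensitive neighbors (e.g. database redo logs)
            # by keeping bulk media writes out of the write log.
            return "Throughput"
        if in_flight_gb > total_log_gb:
            # Streaming writes would overwhelm the log devices, so the
            # "Latency" setting buys little.
            return "Throughput"
        return "Latency"

    # Figures from the worked example below (four 68.4GB logs, about 273GB):
    print(choose_write_bias(68.4, 4, 500, 4, False))  # Throughput: 4 x 500GB > 273GB
    print(choose_write_bias(68.4, 4, 200, 1, False))  # Latency: 200GB fits in 273GB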
Recommendations for SMB Shares for use by DIVArchive
- The 2013.x.x.x ZFS Appliance firmware, which supports the SMB V2 (2.01) protocol, is highly recommended. Windows will auto-negotiate to SMB V2 when this firmware is installed.
- Refer to "Integrating Microsoft Servers and the Oracle ZFS Storage Appliance - Implementation Guide for SMB Deployment" at http://www.oracle.com/technetwork/server-storage/sun-unified-storage/documentation/mswindows-integration-063012-1690774.pdf for a very detailed description of starting and managing the SMB service on the Oracle ZFS Storage Appliance.
- SMB shares should not be thin-provisioned. They should be given a Reservation of the size needed to accommodate the DIVArchive installation.
- The recommended Properties of SMB shares used by the DIVArchive Actors are:
- Data compression : Off - Media files do not compress well.
- Cache device usage : Do not use cache devices - Unless the same media file is to be read repeatedly and will fit into the L2ARC, use of L2ARC is not recommended for media files. The L2ARC is intended for random read workloads.
- Synchronous write bias : It depends. - If the storage pool where the media shares reside is dedicated to media files, and is not shared with applications requiring very low latency, and there are "Logzilla" write log disks in the pool, then Synchronous write bias of "Latency" is recommended. If the storage pool *is* shared with very latency-dependent applications, the Synchronous write bias of "Throughput" is recommended. Use of the "Latency" setting is a general recommendation and may not improve performance in all scenarios. For example, if a pool has four 68.4GB write log devices for a total of 273GB, and you are simultaneously writing four 500GB media files, the "Latency" setting is not buying you much. If you transfer a single 200GB file, it is a win. In any case, if the storage pool is shared with latency-sensitive files like Oracle Database Redo logs, use of the "Latency" setting on a media share can severely impact the latency-sensitive application.
- Database record size - 64K is recommended for SMB V2.
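Where several shares must be configured consistently, the properties above can be applied programmatically. The following is a minimal Python sketch using the appliance's RESTful API via the requests library; the appliance address, pool/project/share names, credentials, endpoint path, and property spellings are assumptions to verify against the RESTful API guide for your firmware release.

    import requests

    # Hypothetical appliance, pool, project, and share names.
    SHARE_URL = ("https://zfssa.example.com:215"
                 "/api/storage/v1/pools/pool0/projects/diva/filesystems/media")

    recommended_props = {
        "compression": "off",         # media files do not compress well
        "secondarycache": "none",     # do not use L2ARC cache devices
        "logbias": "latency",         # or "throughput" if the pool is shared
        "recordsize": 64 * 1024,      # 64K database record size for SMB V2
        "reservation": 10 * 2**40,    # example 10TB reservation; no thin provisioning
    }

    resp = requests.put(
        SHARE_URL,
        json=recommended_props,
        auth=("admin", "password"),   # replace with real credentials
        verify=False,                 # typical self-signed appliance certificate
    )
    resp.raise_for_status()
    print("Share properties updated:", resp.status_code)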
Recommendations for NFS Shares for use by DIVArchive
- 2013.x.x.x ZFS Appliance firmware is highly recommended.
- The Windows servers in the DIVArchive installation must have the "Client for NFS" feature installed. Refer to Microsoft documentation for the procedure; the feature is generally found in the "Server Role" area of the Windows configuration, but its location varies by Windows release.
- NFS shares used for DIVArchive should not be thin-provisioned. They should be given a Reservation of the size needed to accommodate the DIVArchive installation.
- The recommended Properties of NFS shares used by the DIVArchive Actors are:
- Data compression : Off - Media files do not compress well.
- Cache device usage : Do not use cache devices - Unless the same media file is to be read repeatedly and will fit into the L2ARC, use of L2ARC is not recommended for media files. The L2ARC is intended for random read workloads.
- Synchronous write bias : It depends. - The same considerations described above for SMB shares apply equally to NFS shares; see that discussion for choosing between "Latency" and "Throughput".
- Database record size - 1M is recommended for NFS shares, and is the default record size used by the Windows Server 2012 R2 NFS client.
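The same RESTful approach can set the NFS share properties in one call; only the 1M record size differs from the SMB sketch above. As before, the endpoint path, names, credentials, and property spellings are assumptions to check against the RESTful API guide for your firmware.

    import requests

    SHARE_URL = ("https://zfssa.example.com:215"
                 "/api/storage/v1/pools/pool0/projects/diva/filesystems/media_nfs")

    resp = requests.put(
        SHARE_URL,
        json={
            "compression": "off",       # media files do not compress well
            "secondarycache": "none",   # do not use L2ARC cache devices
            "logbias": "latency",       # see the Synchronous write bias discussion above
            "recordsize": 1024 * 1024,  # 1M record size for NFS shares
            "reservation": 10 * 2**40,  # example reservation; no thin provisioning
        },
        auth=("admin", "password"),     # replace with real credentials
        verify=False,                   # typical self-signed appliance certificate
    )
    resp.raise_for_status()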
Attachments
This solution has no attachment