[Discuss] Failing WD Disks

Dan Ritter dsr at randomstring.org
Thu May 18 15:52:02 EDT 2023


Bill Ricker wrote: 
> On Thu, May 18, 2023 at 12:36 PM Kent Borg <kentborg at borg.org> wrote:
> 
> And possibly too big to fail as well.
> At 5TB size or larger, we may be better off with an array of 6+ × 1TB disks
> with redundancy, even tho that starts to cost real $$.
> If the MTBF is less than a large multiple of the time to fill the disk, it
> may be risky to do a backup or to restore a replacement 5TB disk from backup
> or from redundancy in an array!
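Bill's fill-time concern can be put in rough numbers. A sketch, assuming
a sustained 150 MB/s, which is a plausible (made-up) figure for a modern
SATA spinning disk:

```shell
# Back-of-the-envelope: how long does one full pass over a 5 TB disk take?
# 150 MB/s sustained is an assumption, not a measured number.
bytes=$((5 * 1000**4))          # 5 TB, decimal units
rate=$((150 * 1000**2))         # 150 MB/s
secs=$((bytes / rate))
echo "$((secs / 3600)) hours"   # prints: 9 hours
```

So a restore or rebuild is the better part of a working day of
continuous sequential I/O, during which every surviving disk in an
array is under load.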


Factors to consider:

- how available does it need to be, 24/7?
- how fast does it need to be read/written?
- how long can you afford for it to be down while you replace
  hardware?
- how long can you afford for it to be slow while you replace
  hardware?
- can you afford temporary slowdowns?
- do you need backups for full restores?
- do you need snapshots to recover from minor file
  deletion/overwriting?
- do you need long term archives?


Case 1. Database server needs about 12TB of very fast random access
(IOPS) and fairly fast transfer (MB/s) with very high
reliability. Budget is fairly high.

The solution was a new server with eight 2.5" U.2 NVMe slots, six of
them filled with 4TB NVMe SSDs and two left free for spares or
expansion. RAID10, so striping over three mirror pairs. ZFS is used
to send backups elsewhere.
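The ZFS backup step might look something like the following sketch; the
pool/dataset names ("tank/db", "backup/db") and the backup host name are
invented, since the post doesn't give the real layout:

```shell
# First full replication to the backup host (hypothetical names):
zfs snapshot tank/db@base
zfs send tank/db@base | ssh backuphost zfs receive backup/db

# Nightly incrementals thereafter, sending only blocks changed
# since the previous snapshot:
zfs snapshot tank/db@nightly1
zfs send -i tank/db@base tank/db@nightly1 | \
    ssh backuphost zfs receive backup/db
```

The incremental send is what makes this practical at 12TB: after the
first full pass, each night's transfer is only the day's churn.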


Case 2. Home media server needs as much storage as the small
budget will allow. Spinning-disk IOPS and transfer rates are fine --
video maxes out at about 25Mb/s, and often much less. There is a
small database which is IOPS-sensitive.

The solution was a low-end CPU (2 cores, x86-64, 2GHz), a cheap
250GB SATA SSD for root, and four 3TB spinning disks in RAID10. I'm
thinking about replacing the spinning disks with larger ones
sometime soon.
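A four-disk RAID10 like that can be assembled with mdadm; this is a
sketch, and the device names are assumptions -- check lsblk before
running anything like it:

```shell
# Build a 4-disk RAID10: striping over two mirror pairs.
# /dev/sd[b-e] are hypothetical device names.
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde
mkfs.ext4 /dev/md0
```

With 3TB disks that yields about 6TB usable, and any single-disk
failure (or some two-disk failures, if they're in different pairs)
is survivable.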


Case 3. Desktop box.

Solution: 1TB NVMe SSD as main storage. Nightly backup to
another machine. Access to the media server via an NFS read-only
mount.
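The read-only media mount and the nightly backup might be wired up
like this; host names and paths are invented for illustration:

```shell
# /etc/fstab entry for the read-only NFS mount (hypothetical names):
#   mediaserver:/export/media  /mnt/media  nfs  ro,nosuid  0  0

# Nightly backup from cron, e.g. a script in /etc/cron.daily/
# (hypothetical destination):
rsync -a --delete /home/ backuphost:/backups/desktop/home/
```

Mounting the media share read-only means a compromised or confused
desktop can't damage the media server's data.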


-dsr-

