Which NAS do list members prefer?

The misreporting of HDD space has always annoyed me. It says 4TB on the box but 3.63TB when I plug it in? How about a discount then?

I smell a class action lawsuit! C'mon, who's with me?

Anyone? Bueller?
 
Good to know, thanks for the reply. These drives won't be getting a defrag anytime in the foreseeable future so I'll leave just a bit of free space in case anything goes pear-shaped.

Defragmentation works for HDDs because it speeds up sequential IO... it doesn't really matter much for SSDs because reading NAND is fundamentally different from reading a spinning platter. However, SSD sequential IO is still faster because the firmware can exploit some parallelism.

In the past, defragmentation was useful with HDDs. But things have progressed since the 70s and 80s... so unless you are doing lots of concurrent IO with many users/threads, I wouldn't worry too much about it.

Capacity? I push it to 90%. On a single 1TB SSD that still leaves plenty of room for storing the 1GB+ files from ripping DVDs. For my NAS RAIDs, built strictly with HDDs, I have pushed the smaller ones (the daily-use arrays, running about 9TB) to 90%; the big ones (30TB) seldom hit 40%...

If you are so concerned with not pushing your storage... HDDs are relatively cheap per GB.
 
This has NOTHING to do with measuring in 1024 or 1000. Nothing to do with host operating systems.

A terabyte is defined as 1,000,000,000,000 bytes (metric system = 1000 x 1000 x 1000 x 1000).

The numerical difference between the metric (base 10) and binary (base 2) systems is relatively small for the kilobyte (1000 bytes is about 2% smaller than 1024), but the systems deviate increasingly as the units grow larger (the relative deviation grows by about 2.4% for each three orders of magnitude). For example, a metric terabyte is nearly 10% smaller than its binary counterpart (1,099,511,627,776 bytes = 1024 x 1024 x 1024 x 1024).
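To put numbers on it, here's a minimal Python sketch (the 4TB figure is just an example, not any particular drive):

```python
# An "advertised" 4 TB drive (example figure) expressed in binary (TiB) units.
advertised_bytes = 4 * 1000**4          # 4 TB as the manufacturer counts it (base 10)
tib = advertised_bytes / 1024**4        # the same bytes expressed in TiB (base 2)
print(f"{tib:.2f} TiB")                 # -> 3.64, the ballpark of the "3.63TB" the OS reports

# The deviation grows with each prefix step: (1000/1024)^n
for n, prefix in enumerate(["kilo/kibi", "mega/mebi", "giga/gibi", "tera/tebi"], start=1):
    print(prefix, f"{(1000/1024)**n:.4f}")   # 0.9766, 0.9537, 0.9313, 0.9095
```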

There have already been court cases. The disc manufacturers win as they use the metric system. Operating Systems (Windows etc) use the binary system.

It’s like trying to argue a fine for speeding: “But officer, I was only doing 90 miles per hr in a 100km per hr zone”
 
Not true?

This has NOTHING to do with measuring in 1024 or 1000. Nothing to do with host operating systems (even on bare metal).

It is intrinsic to the algorithm used by the drive's firmware to maintain storage capacity as the media wears out from use. It comes down to how the manufacturer decides to keep some storage in reserve to allow for failures of the media (bad blocks or sectors). The firmware in the drive keeps track of good and bad locations and avoids using the bad places on the disk or array; when it finds a grown bad block or sector, it replaces it with one from the spare list.

The spare list is NOT counted in the claimed capacity and the user is never told about it.

On a better-quality drive, not only is the media itself better (better discs, better NAND), but the spare storage is larger as well.

You, the user, know NOTHING about it except that the "better the drive" the more spares it has.
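Purely as an illustration of the idea (a toy model with made-up block counts, not any vendor's actual firmware):

```python
# Toy model of firmware bad-block sparing: the spare pool is invisible to the host.
class ToyDrive:
    def __init__(self, user_blocks, spare_blocks):
        self.user_blocks = user_blocks        # only this count is ever advertised
        self.spares = list(range(user_blocks, user_blocks + spare_blocks))
        self.remap = {}                       # logical block -> spare block

    def resolve(self, lba):
        return self.remap.get(lba, lba)       # reads/writes transparently follow any remap

    def grown_defect(self, lba):
        # On a media error the firmware retires the bad block and substitutes a spare.
        if not self.spares:
            raise IOError("spare pool exhausted - the drive is on its way out")
        self.remap[lba] = self.spares.pop()

drive = ToyDrive(user_blocks=1000, spare_blocks=20)   # host only ever sees 1000 blocks
drive.grown_defect(42)
print(drive.resolve(42))   # 1019 - remapped into the hidden spare area, capacity unchanged
```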

Yes, you are all right.

There is extra storage overhead needed to manage the OS file system, plus the HW bad-block replacement area and any performance or caching buffers that may be used. The disk doesn't hold just 100% of our data; it also holds additional data to manage the whole file structure and security.

I'm not sure whether performance caching uses disk data blocks or is always implemented in additional RAM.

Anyway, the main discrepancy between the manufacturer's announced disk capacity and the value the user sees on the computer comes from confusion about the size of the unit of measurement, either decimal (1000) or binary (1024).

This really isn't correct, at least in terms of history.

Back in the day, hard drives were marketed and sold (in terms of capacity) entirely as multiples of 1024. You lost maybe a little to low-level formatting (drives used to ship without a low-level format; that hasn't been the case for a very long time), but nothing like the 1000/1024 discrepancy.

Hard drive manufacturers switched to the 1000 multiplier purely as a marketing ploy. THEN the stupid kibi/mebi/gibi nonsense was invented and retroactively applied to the 2^x system. The decimal system makes no sense from a computing standpoint, and OSes properly report storage using the 2^x system.

Yes, you are all right too.

Sure, it is a marketing trick to be able to show more capacity (or to manufacture fewer sectors on the disk). That's why changing the measurement unit without explaining or understanding it well creates confusion.

The misreporting of HDD space has always annoyed me. It says 4TB on the box but 3.63TB when I plug it in? How about a discount then?

I smell a class action lawsuit! C'mon, who's with me?

Anyone? Bueller?

With all my HDDs/SSDs formatted as NTFS, I have taken the advertised (decimal) capacity, applied the 1024-based division, and obtained exactly the capacity shown by the Operating System.

When the available space is smaller than that, I guess it could be due to additional space taken up by pre-installed utilities or software images needed for RAID operation on a NAS. If not, then I agree that the manufacturer supplies the wrong information.
 
A terabyte is defined as 1,000,000,000,000 bytes (metric system = 1000 x 1000 x 1000 x 1000).

The numerical difference between the metric (base 10) and binary (base 2) systems is relatively small for the kilobyte (1000 bytes is about 2% smaller than 1024), but the systems deviate increasingly as the units grow larger (the relative deviation grows by about 2.4% for each three orders of magnitude). For example, a metric terabyte is nearly 10% smaller than its binary counterpart (1,099,511,627,776 bytes = 1024 x 1024 x 1024 x 1024).

There have already been court cases. The disc manufacturers win as they use the metric system. Operating Systems (Windows etc) use the binary system.

It’s like trying to argue a fine for speeding: “But officer, I was only doing 90 miles per hr in a 100km per hr zone”

Yes, the disc manufacturers correctly use the metric system. They cannot be sued because they are telling the truth.

But when an OS like Windows shows the unit symbol, it doesn't use the correct one (it should be TiB instead of TB), as some disk management utilities do, for instance.

Why Microsoft and others still don't use the binary unit symbols is unknown to me. But I wouldn't be surprised if it was because there isn't enough space to display them in a certain window.
 
A terabyte is defined as 1,000,000,000,000 bytes (metric system = 1000 x 1000 x 1000 x 1000).

The numerical difference between the metric (base 10) and binary (base 2) systems is relatively small for the kilobyte (1000 bytes is about 2% smaller than 1024), but the systems deviate increasingly as the units grow larger (the relative deviation grows by about 2.4% for each three orders of magnitude). For example, a metric terabyte is nearly 10% smaller than its binary counterpart (1,099,511,627,776 bytes = 1024 x 1024 x 1024 x 1024).

There have already been court cases. The disc manufacturers win as they use the metric system. Operating Systems (Windows etc) use the binary system.

It’s like trying to argue a fine for speeding: “But officer, I was only doing 90 miles per hr in a 100km per hr zone”

Users.... they know it all!

When "users" call the poor FAEs to complain, the poor FAEs call us in R&D.. Our answer? "Are they running the latest drivers?"... and then we shine them off -unless it's a seven figure account.

Did you read my post about spare allocation?

My claim is that the physical storage size is different from the published size. This has NOTHING to do with the OS. N.O.T.H.I.N.G

BTW, I've been doing hex numbers since the 70s... before there was Windows or Apple. You don't have to define them to me.
 
Yes, you are all right.

There is extra storage overhead needed to manage the OS file system, plus the HW bad-block replacement area and any performance or caching buffers that may be used. The disk doesn't hold just 100% of our data; it also holds additional data to manage the whole file structure and security.

I'm not sure whether performance caching uses disk data blocks or is always implemented in additional RAM.

...

The spare storage allocation has nothing to do with the operating system, partitions, etc... It is purely an allocation of "dark" storage to increase the MTBF numbers. Something enterprise users pay for but cheapskate consumers croak about.

PCIe does allow the firmware to know about partitions and drives, but that's a logical layer applied on top of the PHY layer within the FW in the drive. PCI and SATA, et al, have no clue.

The PHY layer in the drive maintains the spare list and the tables of storage. This is the storage I'm talking about, and it can be an expensive one in the case of enterprise SSDs, a lot cheaper in magnetic disks.

...

Caching is indeed handled in DDR or the like. In SSDs a LOT of caching is required since NAND blocks can only be written a whole block at a time... so to modify a block, you have to do a Read/Modify/Write action (*). Often what seems like a "simple" random write may involve several blocks; this is called Write Amplification. So the PHY layer may "aggregate the data" into an action across many blocks. The idea is to minimize disruption of the NAND, since writes wear it out.

With magnetic media you don't have those issues, as you can do random writes within a sector, so for something that will see LOTS of small random writes you're better off with HDDs than SSDs.

Now, the issue with caching is loss of power. For this purpose SSDs and HDDs have a reasonable amount of capacitance and some means in the FW to rebuild the data in case of a power loss. A lot of work and money goes into that feature.

(*) Actually it's worse for an SSD.

1) Store new data for Block A into DDR
2) Read Block A into DDR
3) In DDR: Modify the Read Block A data with the new Write Data Block A
4) Fetch a block from the unallocated block list, Block B
5) Possibly Erase Block B
6) Write Block B with the modified Read Data Block A in DDR
7) Put Block A onto the unallocated, not-yet-erased list (it might get erased later by Garbage Collection)
8) Free the DDR buffers for the Read and Write Data
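
A minimal, purely illustrative sketch of that sequence in Python (the FTL layout, block sizes and names are invented for the example, not taken from any real firmware):

```python
# Toy flash translation layer illustrating the read/modify/write sequence above.
def overwrite(ftl, lba, new_bytes, offset):
    phys_old = ftl["map"][lba]                        # where logical Block A lives now
    buf = bytearray(ftl["nand"][phys_old])            # 1-2) read Block A into DDR (a bytearray here)
    buf[offset:offset + len(new_bytes)] = new_bytes   # 3) merge the new write data in DDR
    phys_new = ftl["free"].pop()                      # 4) fetch Block B from the unallocated list
    # 5) erase of Block B would happen here if it wasn't pre-erased
    ftl["nand"][phys_new] = bytes(buf)                # 6) write the merged data to Block B
    ftl["map"][lba] = phys_new                        #    point logical A at its new home
    ftl["dirty"].append(phys_old)                     # 7) old block waits for garbage collection
    # 8) the DDR buffer (buf) is released when it goes out of scope

ftl = {"map": {0: 0}, "nand": {0: bytes(16)}, "free": [1, 2, 3], "dirty": []}
overwrite(ftl, lba=0, new_bytes=b"hi", offset=4)
print(ftl["map"][0], ftl["dirty"])   # 3 [0] -> one small logical write consumed a whole new block
```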

"Coding" is not an easy job, you see.
 
Do you not find converting picture-based subs to text-based subs kind of...tedious? Every time I've tried, I got so bored I was ready to claw my face off.
Yes, but I prefer to do it myself, for the same reason I prefer to rip my own music than to download a copy ripped by someone else. I've developed a good workflow and the problematic titles are pretty rare (in my collection). Older DVDs are the worst for that with crappy fonts.
 
Yes, but I prefer to do it myself, for the same reason I prefer to rip my own music than to download a copy ripped by someone else. I've developed a good workflow and the problematic titles are pretty rare (in my collection). Older DVDs are the worst for that with crappy fonts.
Fair enough. The last time I tried, I was doing Doctor Who DVDs, and good lord.

I gave up and kept them as picture-based subs.
 
I currently have a Synology 2-bay DS212+, which offers three USB ports, one eSATA port (never used) and one SD card slot. But as all my media files are currently stored on three 5TB 2.5" USB-connected HDDs and my miscellaneous documents are stored on an SD card, I'm considering new NAS options.

All I want from a NAS is to share my media files across my personal network via SMB and UPnP/DLNA. I have no interest in all the other features NAS manufacturers offer.

So I'm interested to know if anybody here has gone down the Raspberry Pi route, as recent models offer a minimum of four USB ports along with an SD card slot and the possibility of adding a further four USB ports via an expansion hub/hat.

Any thoughts?
 
I use Synology (just as playback/back-up for music & documents, don't use the 'fancy' software) and have a 2xHDD 4TB RAID-1 set-up on the DS218+ and a 4xHDD as 2x10TB RAID-1 on the DS418play. What I like about them is they do operating software updates for years. I also have an old Zyxel 1TB as RAID-1 which is still going, and I tend to play files from that the most as its operating software is no longer supported, so when it goes 'phut' I won't mind!
 
So I'm interested to know if anybody here has gone down the Raspberry Pi route, as recent models offer a minimum of four USB ports along with an SD card slot and the possibility of adding a further four USB ports via an expansion hub/hat.
I've used various models of Raspberry Pi (Models 3, 4 and now 5) as file servers for years.

Though they've generally been hosting backup drives rather than primaries, so I can't speak to their suitability for streaming. But I'd be truly shocked if they aren't up to the task. Just be aware that you don't get USB 3 on any model lower than a 4.
 
For DIY types only, over the last couple of years I’ve built four 5-disk ZFS NAS systems using something called a “Penta SATA hat” and RockPi 4B SBCs. Three of these used 2.5” drives, either spinning or SSD, in a cute little vertical case that held four drives, plus the 5th attached via an eSATA cable (4 ports on the hat are standard SATA connections including power, the 5th is eSATA). The fourth NAS box used a full sized (3.5”) enclosure with much larger capacity disks. I use this fourth box to backup the other three (using ZFS send and ZFS receive). All of these use the ZFS equivalent of RAID5 (but far superior since data corruption detection and automatic repair on reads occurs transparently).

The RockPI 4 SBC runs several different versions of Linux. Getting ZFS to build was very straightforward. The difference between the RockPi SBC and a RPI-3 or RPI-4 is a much more competent PCIe bus with an M.2 connector - you can either hook up an M.2 NVME SSD disk, or in my case the SATA hat. RPI-5 SBCs now have a similar M.2 connector, so I would guess we’ll see multi-SATA hats supported for that computer as well. There is a “Quad SATA Hat” for RPI (3 & 4 IIRC), but the data rates aren’t as good because of the lack of a good PCIe bus. I believe AllNet still sells the Quad SATA Kit for the RPI which includes the cute little case to hold the SBC and four 2.5” disks; I have no experience with running ZFS on RPI.

From AllNet the Penta SATA Hat is $49. A 4GB RockPi 4B will set you back roughly $85 depending on what size eMMC you include.

As I said, DIY only, and advanced at that (experience with Linux, ZFS, SBCs, etc.). Contact me for more info if you are interested (in advice that is). All of my music, movies/tv and astrophotography is stored on these systems. Very good gigE and streaming works very well (to my Oppo, ChinOppo clone, Apple TV).
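
In case anyone wants to see what the backup step looks like, it's basically snapshot + send piped into receive. A rough sketch (the pool/dataset names and the "backupbox" host are placeholders; incrementals and error handling omitted):

```python
#!/usr/bin/env python3
# Rough sketch: snapshot a dataset and pipe "zfs send" into "zfs receive" on the backup box.
# "tank/media", "bigpool/media" and the host "backupbox" are placeholders, not real names.
import datetime
import subprocess

snap = f"tank/media@backup-{datetime.date.today()}"

subprocess.run(["zfs", "snapshot", snap], check=True)            # snapshot the source dataset

send = subprocess.Popen(["zfs", "send", snap], stdout=subprocess.PIPE)
subprocess.run(["ssh", "backupbox", "zfs", "receive", "-F", "bigpool/media"],
               stdin=send.stdout, check=True)                     # stream it to the backup pool
send.stdout.close()
if send.wait() != 0:
    raise RuntimeError("zfs send failed")
```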
 
I've been looking at 8GB versions of the 4 and the 5. Not much difference in their prices...
Yeah, at their prices I always buy whatever the top of the line is. If it's overkill I haven't lost much. But if I repurpose the little thing for some other project, having the fanciest one(s) may turn out to be helpful.
 
For DIY types only, over the last couple of years I’ve built four 5-disk ZFS NAS systems using something called a “Penta SATA hat” and RockPi 4B SBCs.
Big ZFS fan here too, though I use TrueNAS on an old Dell desktop.

I'm familiar with the RockPi by name, but don't know anything about how well it's supported. When it comes to SBCs, I started with a Wandboard Quad, then moved on to an Odroid C2. Eventually I started sticking to just Raspberry Pi because the other two stopped being supported, making it impossible to update the OS to something more modern and, presumably, more secure.

I've bought a total of 10 Raspberry Pi models over the years and all except one (a Model 2) are still in use for one purpose or another. Unfortunately, three Odroids and at least three Wandboards are now just sitting gathering dust.

To be super-clear, I have no idea if the manufacturer will abandon older Rock Pi models. They may be as dedicated to their customer base as the Raspberry Pi Foundation. Personal experience with other brands just makes me feel compelled to mention the possibility.
 
Big ZFS fan here too, though I use TrueNAS on an old Dell desktop.

I'm familiar with the RockPi by name, but don't know anything about how well it's supported. When it comes to SBCs, I started with a Wandboard Quad, then moved on to an Odroid C2. Eventually I started sticking to just Raspberry Pi because the other two stopped being supported, making it impossible to update the OS to something more modern and, presumably, more secure.

I've bought a total of 10 Raspberry Pi models over the years and all except one (a Model 2) are still in use for one purpose or another. Unfortunately, three Odroids and at least three Wandboards are now just sitting gathering dust.

To be super-clear, I have no idea if the manufacturer will abandon older Rock Pi models. They may be as dedicated to their customer base as the Raspberry Pi Foundation. Personal experience with other brands just makes me feel compelled to mention the possibility.
The manufacturer is Radxa. The Rock Pi 4B has been around a long while - I believe I bought my first in 2018 - but clearly there’s no way Radxa can compete with the RPI Foundation in either sales volume or software support. That company’s niche has always been superior performance because of hardware choices, but it’s never going to be the turnkey solution that RPI will be. That said, Linux on a wide variety of ARM-based SBCs is fairly easy to install and use and there is an ARM-industry supported ecosystem with the maturity you’d expect (nightly builds, robust toolchains, etc.). Getting every last bit of capability supported, for example, audio input via HDMI, can be an issue. But more generic support, and I’d classify PCIe-attached SATA as fundamental to any Linux kernel, is easy. Further, the SATA hat is M.2 PCIe based so doesn’t depend on the Rock Pi 4. Radxa said in 2021 that they are committed to producing the 4 until at least 2029. But I wouldn’t be surprised to see support for the SATA hat on the Rock Pi 5, a more powerful and newer SBC; for all I know it already works there, I just don’t feel like disassembling one of my NAS units to test. I might spring for another $45 hat just to test and have a spare.

To my mind the value here is in my ZFS filesystems, since they can easily be moved to new hardware - say a replacement SBC or full sized computer - by simply moving the disks or doing ZFS send/rcv. If my Rock Pi 4Bs fail, fine - I’ll simply slip in a different computer.

This has got to be about the cheapest way of building a 5-disk ZFS NAS. Almost all of the cost is going for the disks you choose and the cabling, plus however you value your time.

FWIW these are headless NAS nodes. They do have HDMI and if you really wanted to use them as a desktop you can, but I don’t bother. Folks who need a GUI admin package as with Synology are out of luck (unless you want to write your own). They do have X and so graphical clients will work if you have an X server running elsewhere. Again, DIYer level, for folks comfortable with ssh logins, configuring NFS and SMB shares, etc.. One of mine runs a Plex server happily and without (so far after several years) a hitch; no problem upgrading the server software. Very little need for ever upgrading the OS considering the nodes are on a private network.
 
The manufacturer is Radxa. The Rock Pi 4B has been around a long while - I believe I bought my first in 2018 - but clearly there’s no way Radxa can compete with the RPI Foundation in either sales volume or software support. That company’s niche has always been superior performance because of hardware choices, but it’s never going to be the turnkey solution that RPI will be. That said, Linux on a wide variety of ARM-based SBCs is fairly easy to install and use and there is an ARM-industry supported ecosystem with the maturity you’d expect (nightly builds, robust toolchains, etc.). Getting every last bit of capability supported, for example, audio input via HDMI, can be an issue. But more generic support, and I’d classify PCIe-attached SATA as fundamental to any Linux kernel, is easy. Further, the SATA hat is M.2 PCIe based so doesn’t depend on the Rock Pi 4. Radxa said in 2021 that they are committed to producing the 4 until at least 2029. But I wouldn’t be surprised to see support for the SATA hat on the Rock Pi 5, a more powerful and newer SBC; for all I know it already works there, I just don’t feel like disassembling one of my NAS units to test. I might spring for another $45 hat just to test and have a spare.

As I guessed might happen, Radxa has just released a new Penta SATA Hat that can be used on their Rock Pi 4 and Rock Pi 5, and on a Raspberry Pi 5. This provides 4 conventional SATA connectors (including power), plus an eSATA connector, all running off of the M.2 PCIe available on those various SBCs. See Penta SATA Hat. $49.
 
As I guessed might happen, Radxa has just released a new Penta SATA Hat that can be used on their Rock Pi 4 and Rock Pi 5, and on a Raspberry Pi 5. This provides 4 conventional SATA connectors (including power), plus an eSATA connector, all running off of the M.2 PCIe available on those various SBCs. See Penta SATA Hat. $49.
That looks REALLY useful!

Though I wish they sold (or provided a link to) the proper power supply.
 
That looks REALLY useful!

Though I wish they sold (or provided a link to) the proper power supply.
Agreed, they should provide links. Allnet does sell suitable supplies. On one of my NAS boxes using this hat (the prior version), I used a SATA chassis which had a conventional PC power supply with ATX connectors, and I plugged one of those directly into the hat to power the SBC and the drives. The other 4 units all used one of the SATA RAID kits from Allnet, which included this 12V DC power supply (Allnet 12V DC supply) which plugs into the Rock Pi 4 and Rock Pi 5 board female barrel connector jack (EDIT - oops, after checking, this plugs into a female barrel connector on the hat). I’m not sure about power options for the RPI 5 - I assume like prior RPI’s you can power via the connectors on a hat, so the ATX or 12V barrel options should work there.
 