Which NAS do list members prefer?

QuadraphonicQuad


timothyemerson

1K Club - QQ Shooting Star
Joined
Aug 6, 2014
Messages
1,062
Location
Upper Hutt, New Zealand
The misreporting of HDD space has always annoyed me. It says 4TB on the box but 3.63TB when I plug it in? How about a discount then?

I smell a class action lawsuit! C'mon, who's with me?

Anyone? Bueller?
 

tonyE

300 Club - QQ All-Star
Joined
Feb 12, 2018
Messages
340
Good to know, thanks for the reply. These drives won't be getting a defrag anytime in the foreseeable future so I'll leave just a bit of free space in case anything goes pear-shaped.

Defragmentation works for HDDs because it speeds up sequential IO... it doesn't really matter much for SSDs because reading NAND is fundamentally different from reading a spinning platter. However, SSD sequential IO is still faster than random IO because the firmware can exploit a degree of parallelism.

In the past, defragmentation was useful with HDDs, but things have progressed since the 70s and 80s... so unless you are doing lots of concurrent IO with many users/threads, I wouldn't worry too much about it.

Capacity? I push it to 90%. On a single 1TB SSD that still leaves plenty of room for storing the 1GB+ files from ripping DVDs. For my NAS RAIDs, done strictly with HDDs, I have pushed the smaller ones (the daily-use arrays for IO, running about 9TB) to 90%; the big ones (30TB) seldom hit 40%...

If you are so concerned with not pushing your storage... HDDs are relatively cheap per GB.
 

HomerJAU

Moderator: MCH Media Players
Staff member
Moderator
Moderator
Joined
Jun 13, 2013
Messages
5,402
Location
Melbourne, Australia
This has NOTHING to do with measuring in 1024 or 1000. Nothing to do with host operating systems.

A terabyte is defined as 1,000,000,000,000 bytes (metric system: 1000 x 1000 x 1000 x 1000).

The numerical difference between the metric (base 10) and binary (base 2) systems is relatively small for the kilobyte (1000 bytes is about 2% smaller than 1024), but the systems deviate increasingly as the units grow larger (the relative deviation grows by about 2.4% for each three orders of magnitude). For example, a terabyte (metric) is nearly 10% ‘smaller’ than the binary-based unit (1,099,511,627,776 bytes = 1024 x 1024 x 1024 x 1024).
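A quick Python check of that deviation (plain arithmetic, not tied to any particular drive):

Code:
# Decimal (SI) vs binary (IEC) prefix deviation, growing with each prefix step.
for name, power in [("kilo/kibi", 1), ("mega/mebi", 2), ("giga/gibi", 3), ("tera/tebi", 4)]:
    decimal = 1000 ** power
    binary = 1024 ** power
    deviation = (binary - decimal) / binary * 100
    print(f"{name}: binary unit is {deviation:.1f}% larger than the decimal unit")

# kilo/kibi: binary unit is 2.3% larger than the decimal unit
# mega/mebi: binary unit is 4.6% larger than the decimal unit
# giga/gibi: binary unit is 6.9% larger than the decimal unit
# tera/tebi: binary unit is 9.1% larger than the decimal unit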

There have already been court cases. The disc manufacturers win as they use the metric system. Operating Systems (Windows etc) use the binary system.

It’s like trying to argue a fine for speeding: “But officer, I was only doing 90 miles per hr in a 100km per hr zone”
 

AYanguas

900 Club - QQ All-Star
QQ Supporter
Joined
Apr 10, 2020
Messages
916
Location
Segovia, Spain
Not true?

This has NOTHING to do with measuring in 1024 or 1000. Nothing to do with host operating systems (even on bare metal).

It is intrinsic to the algorithm used by the drive's firmware to maintain storage capacity as the media wears out from use. It has to do with how the manufacturer decides to keep some storage in reserve to allow for failures of the media (bad blocks or sectors). The firmware in the drive keeps track of good and bad locations and avoids using the bad places on the disk or NAND array; when it finds a grown bad block or sector, it replaces it with one from the spare list.

The spare list is NOT counted in the claimed capacity and the user is never told about it.

The better the quality of the drive, not only is the media itself better (better platters, better NAND), but the spare storage is larger as well.

You, the user, know NOTHING about it except that the "better the drive" the more spares it has.

Yes, you are all right.

There is extra storage overhead needed to manage the OS file system, plus the hardware bad-block replacement and any performance or caching buffers that may eventually be used. The disk doesn't hold only our data: it also stores the additional data needed to manage the file structure and security.

I'm not sure whether performance caching uses disk data blocks or is always implemented in additional RAM.

Anyway, the main discrepancy between the manufacturer's announced disk capacity and the value the user sees on the computer is the confusion about the size of the unit of measurement, either decimal (1000) or binary (1024).

This really isn't correct, at least in terms of history.

Back in the day, hard drives were 100% marketed/sold (in terms of capacity) as multiples of 1024. You lost maybe a little to low-level formatting (drives used to come without a low-level format; that hasn't been the case for a very long time), but nothing like the 1000/1024 discrepancy.

Hard drive manufacturers switched to the 1000 multiplier purely as a marketing ploy. THEN the stupid kibi/mebi/gibi nonsense was invented and retroactively applied to the 2^x system. The decimal system makes no sense from a computing standpoint, and OSes properly report storage using the 2^x system.

Yes, you are all right too.

Sure, it is a marketing trick to be able to show more capacity (or to manufacture fewer sectors on the disk). That's why changing the measurement unit without explaining or understanding it well creates confusion.

The misreporting of HDD space has always annoyed me. It says 4TB on the box but 3.63TB when I plug it in? How about a discount then?

I smell a class action lawsuit! C'mon, who's with me?

Anyone? Bueller?

With all my HDDs/SSDs formatted as NTFS, I have taken the advertised (decimal) capacity, divided it by 1024 for each prefix step, and obtained exactly the capacity shown by the Operating System.

When the available space is smaller than this calculation gives, I guess it could be due to additional space taken up by pre-installed utilities or software images needed for RAID operation on a NAS. If not, then I agree that the manufacturer supplies the wrong information.
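A minimal sketch of that check, using the 4TB example from earlier in the thread (the exact figure a given machine shows will also depend on partitioning and file-system overhead):

Code:
# Convert an advertised (decimal) capacity to the binary units the OS reports.
advertised_tb = 4                        # "4TB" on the box
capacity_bytes = advertised_tb * 1000 ** 4

tib = capacity_bytes / 1024 ** 4         # bytes -> KiB -> MiB -> GiB -> TiB (divide by 1024 four times)
print(f"{advertised_tb} TB advertised = {tib:.2f} TiB")
# 4 TB advertised = 3.64 TiB  (Windows shows this number but labels it "TB")

That is essentially the 3.63TB figure quoted at the start of the thread, before formatting overhead.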
 

AYanguas

900 Club - QQ All-Star
QQ Supporter
Joined
Apr 10, 2020
Messages
916
Location
Segovia, Spain
A terabyte is defined as 1,000,000,000,000 bytes (metric system: 1000 x 1000 x 1000 x 1000).

The numerical difference between the metric (base 10) and binary (base 2) systems is relatively small for the kilobyte (1000 bytes is about 2% smaller than 1024), but the systems deviate increasingly as the units grow larger (the relative deviation grows by about 2.4% for each three orders of magnitude). For example, a terabyte (metric) is nearly 10% ‘smaller’ than the binary-based unit (1,099,511,627,776 bytes = 1024 x 1024 x 1024 x 1024).

There have already been court cases. The disc manufacturers win as they use the metric system. Operating Systems (Windows etc) use the binary system.

It’s like trying to argue a fine for speeding: “But officer, I was only doing 90 miles per hr in a 100km per hr zone”

Yes, the disk manufacturers correctly use the metric system. They cannot be sued because they are telling the truth.

But when an OS like Windows displays the unit symbol, it doesn't use the correct one: it should be TiB instead of TB, as some disk management utilities already do.

Why Microsoft and others still don't use the binary unit symbols is unknown to me, but I wouldn't be surprised if it were simply because there isn't enough space to display them in a certain window.
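For what it's worth, the two labels are just two ways of formatting the same byte count; a small sketch:

Code:
# Format one byte count with both the decimal (TB) and binary (TiB) symbols.
def fmt_decimal(n_bytes: int) -> str:
    return f"{n_bytes / 1000 ** 4:.2f} TB"

def fmt_binary(n_bytes: int) -> str:
    return f"{n_bytes / 1024 ** 4:.2f} TiB"

size = 4_000_000_000_000
print(fmt_decimal(size))   # 4.00 TB   (what the box says)
print(fmt_binary(size))    # 3.64 TiB  (what an IEC-labelled utility says; Windows
                           #            shows the same number but labels it "TB")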
 

tonyE

300 Club - QQ All-Star
Joined
Feb 12, 2018
Messages
340
A terabyte is defined as 1,000,000,000,000 bytes (metric system: 1000 x 1000 x 1000 x 1000).

The numerical difference between the metric (base 10) and binary (base 2) systems is relatively small for the kilobyte (1000 bytes is about 2% smaller than 1024), but the systems deviate increasingly as the units grow larger (the relative deviation grows by about 2.4% for each three orders of magnitude). For example, a terabyte (metric) is nearly 10% ‘smaller’ than the binary-based unit (1,099,511,627,776 bytes = 1024 x 1024 x 1024 x 1024).

There have already been court cases. The disc manufacturers win as they use the metric system. Operating Systems (Windows etc) use the binary system.

It’s like trying to argue a fine for speeding: “But officer, I was only doing 90 miles per hr in a 100km per hr zone”

Users.... they know it all!

When "users" call the poor FAEs to complain, the poor FAEs call us in R&D.. Our answer? "Are they running the latest drivers?"... and then we shine them off -unless it's a seven figure account.

Did you read my post about spare allocation?

My claim is that the physical storage size is different from the published size. This has NOTHING to do with the OS. N.O.T.H.I.N.G

BTW, I've been doing hex numbers since the 70s... before there was Windows or Apple. You don't have to define them to me.
 

tonyE

300 Club - QQ All-Star
Joined
Feb 12, 2018
Messages
340
Yes, you are all right.

There is extra storage overhead needed to manage the OS file system, plus the hardware bad-block replacement and any performance or caching buffers that may eventually be used. The disk doesn't hold only our data: it also stores the additional data needed to manage the file structure and security.

I'm not sure whether performance caching uses disk data blocks or is always implemented in additional RAM.

...

The spare storage allocation has nothing to do with the operating system, partitions, etc... It is purely an allocation of "dark" storage to increase the MTBF numbers. Something enterprise users pay for but cheapskate consumers croak about.

PCIe does allow the firmware to know about partitions and drives, but that's a logical layer applied on top of the PHY layer within the drive's FW. PCI and SATA, et al., have no clue.

The PHY layer in the drive maintains the spare list and the tables of storage. This is the storage I'm talking about, and it can be expensive in the case of enterprise SSDs, a lot cheaper in magnetic disks.
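A toy sketch of what that remapping looks like conceptually (purely illustrative; real firmware data structures, block sizes and failure handling are far more involved):

Code:
# Toy model of firmware bad-block remapping. The spare pool is invisible to the
# host: the advertised capacity only counts the user-visible blocks.
class Drive:
    def __init__(self, user_blocks: int, spare_blocks: int):
        self.remap = {}                          # logical block -> substituted spare block
        self.spares = list(range(user_blocks, user_blocks + spare_blocks))

    def physical_block(self, logical: int) -> int:
        return self.remap.get(logical, logical)  # follow the remap table if present

    def mark_grown_bad(self, logical: int):
        if not self.spares:
            raise RuntimeError("spare pool exhausted: time to replace the drive")
        self.remap[logical] = self.spares.pop()  # retire the bad block, use a spare

d = Drive(user_blocks=1000, spare_blocks=20)     # the host only ever sees 1000 blocks
d.mark_grown_bad(42)
print(d.physical_block(42))                      # 1019: served from the hidden spare pool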

...

Caching is indeed handled in DDR or the like. In SSDs a LOT of caching is required since NAND can only be erased a whole block at a time (and written in pages)... so to modify data in a block, you have to do a Read/Modify/Write action (*)... often what seems like a "simple" random write may involve several blocks; this is called Write Amplification. So, the PHY layer may "aggregate the data" in an action across many blocks. The idea is to minimize disruption of the NAND, since writing wears it out.

With magnetic media you don't have those issues, as you can rewrite sectors in place, so for something that will see LOTS of small random writes, you're better off with HDDs than SSDs.

Now, the issue with caching is loss of power. For this purpose SSDs and HDDs have a reasonable amount of capacitance and some means in the FW to rebuild the data in case of a power loss. A lot of work and money goes into that feature.

(*) Actually it's worse for an SSD.

1) Store new data for Block A into DDR
2) Read Block A into DDR
3) In DDR: Modify the Read Block A data with the new Write Data Block A
4) Fetch a block from the unallocated block list, Block B
5) Possibly Erase Block B
6) Write Block B with the modified Read Data Block A in DDR
7) Put Block A into the unallocated, not-yet-erased list (it might get erased later by the Garbage Collection).
8) Free the DDR buffers for the Read and Write Data

"Coding" is not an easy job, you see.
 
Last edited:

fcormier

Senior Member
Joined
Jul 1, 2015
Messages
250
Location
Montréal, QC, Canada
Do you not find converting picture-based subs to text-based subs kind of...tedious? Every time I've tried, I got so bored I was ready to claw my face off.
Yes, but I prefer to do it myself, for the same reason I prefer to rip my own music rather than download a copy ripped by someone else. I've developed a good workflow and the problematic titles are pretty rare (in my collection). Older DVDs, with their crappy fonts, are the worst for that.
 

cdheer

Senior Member
QQ Supporter
Joined
Feb 4, 2022
Messages
255
Location
Gurnee, IL
Yes, but I prefer to do it myself, for the same reason I prefer to rip my own music than to download a copy ripped by someone else. I've developed a good workflow and the problematic titles are pretty rare (in my collection). Older DVDs are the worst for that with crappy fonts.
Fair enough. The last time I tried, I was doing Doctor Who DVDs, and good lord.

I gave up and kept them as picture-based subs.
 