
Storage Expansion - Part IV

posted 4 Aug 2013, 08:09 by Flip Pipe

For RaspPhotoPipe to be useful, a raw file should copy in under 1 second. With my last results, I could copy a raw file (about 20MB from my camera) in about 2 seconds. That wasn't bad, but it should be better, so I started to find out how to get more out of my Raspberry Pi.

Something I learned from all this: bigger is not always better, because this is a small device and it is very easy to hit resource starvation.

First step: get the best from the USB disks while using the fewest resources.

According to this FAQ, there is a configurable kernel value to get more throughput from USB: max_sectors, which limits how much data is transferred in each command.

[Chart: USB throughput vs. max_sectors]

In my configuration, max_sectors will be 192 because, even when asking for more data per command, I did not get much more speed.
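
For reference, a minimal sketch of how this value is read and changed at runtime (sda is just an example device name, adjust it to your own setup):

    # Check the current value (counted in 512-byte sectors)
    cat /sys/block/sda/device/max_sectors
    # Set it to 192 for this session; it resets on reboot unless scripted at boot
    echo 192 > /sys/block/sda/device/max_sectors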


Next step: how the kernel handles the I/O calls.

The Linux kernel implements 4 different algorithms to schedule reads and writes to a device, known as I/O schedulers. You can find a brief explanation here on StackOverflow.


But in Raspbian, only 3 are built into the kernel: noop, deadline and cfq.

[Chart: I/O scheduler comparison]

Don't be misled by the chart: if I hadn't changed the scale, there would be almost no visible difference between them, so I don't think changing the scheduler will make a big difference in the end.
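
If you want to repeat the comparison yourself, the scheduler can be switched per device at runtime, no reboot needed (again, sda is just an example):

    # The scheduler shown in brackets is the active one
    cat /sys/block/sda/queue/scheduler
    # e.g. prints: noop deadline [cfq]
    # Switch to another one for the next test run
    echo deadline > /sys/block/sda/queue/scheduler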


Next step: how to improve the RAID, starting with its building blocks: the size of the smallest amount of data written to a disk (chunk size) and the RAM used to cache the data (stripe_cache_size).


After running a script varying these values, I found out the best values are:


Chunk size of 32KB and a stripe_cache_size of 4096 (the kernel counts this value in pages, not bytes). A chunk size of 8KB could give better write throughput, but the read speed is much worse.


The total RAM allocated to the write cache will be 4 disks x 4096 entries x 4KB page size = 64MB, according to this comment.

[Chart: write throughput for different chunk and stripe cache sizes]
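
To reproduce those settings, something along these lines should do it (the device names are examples, and I'm showing RAID5 here just for illustration):

    # Create the array with a 32KB chunk size
    mdadm --create /dev/md0 --level=5 --raid-devices=4 --chunk=32 /dev/sd[abcd]1
    # Grow the stripe cache to 4096 entries: costs 4 disks x 4096 x 4KB = 64MB of RAM
    echo 4096 > /sys/block/md0/md/stripe_cache_size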




The last step is the file system.



So far, all my tests were done with dd from /dev/zero to the disk. But these tests have two problems: first, that is not the way I will copy data to the disk in real usage, and second, it is highly resource intensive for the Raspberry Pi. If the processor is busy generating the zeros, it cannot use that time to process the data in the RAID.
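
This is the kind of test I had been running until now (the mount point and sizes are just examples):

    # Stream zeros straight onto the array and force the data to disk at the end
    dd if=/dev/zero of=/mnt/raid/testfile bs=1M count=512 conv=fsync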


So I changed the way I do the tests: my next step was to copy the files from the Raspberry Pi's RAM to the RAID. I also stopped doing tests with a stripe cache lower than 1024.
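
A sketch of the new RAM-to-RAID test (the mount points and the 20MB file size are assumptions, chosen to match my camera's raw files):

    # Stage a fake raw file in a RAM disk
    mount -t tmpfs -o size=64m tmpfs /mnt/ram
    dd if=/dev/urandom of=/mnt/ram/raw.file bs=1M count=20
    # Drop the page cache so the timing is not polluted by previous runs
    sync && echo 3 > /proc/sys/vm/drop_caches
    # Time the actual copy
    time cp /mnt/ram/raw.file /mnt/raid/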


A big change in the numbers, right? But these numbers are somewhat fake, because the source of the data is in RAM and, thanks to the RAID cache, the destination is effectively RAM too. So from this graph I can only conclude that xfs will be my first choice, because it is more consistent across the tests, even with worse performance than btrfs.


By the way! One hypothesis I made in my last post was that if the mdadm and FS modules were compiled into the kernel, it would be faster... well, there you have the answer: no visible change, so don't waste time recompiling the kernel just for this.



So, my last attempt to get some real-world results was to copy files from a CompactFlash card to the RAID. Almost no difference between the file systems.
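
For completeness, that last test was essentially this (the mount points and the file pattern are assumptions):

    # Time a copy of raw files from the card reader to the array
    time cp /mnt/cf/DCIM/*/* /mnt/raid/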


So many hours running tests to reach one conclusion: the bottleneck is probably in the USB itself, rather than in the RAID performance.


