Monday, February 21, 2011

[nslu2-linux] A slug's life (and death) ... The resurrection?

 

Hi all,
 
I am after some advice.
 
I have a venerable slug that has over the years run all the Unslung versions from 1.1-beta through to 6.7-alpha.
 
Its primary purpose has been to "pull" user-file store backups from various windows PCs and a NAS device *and then push these backups back out to various alternate locations*.
 
To this end it has had a 750GB disk attached with some custom partitioning:
700GB (ext3) root partition, 7.5GB (ext3) legacy root partition (un-used), 2.5GB (swap) and 32GB (FAT32/VFAT) partition.
 
It then runs just over 30 separate scheduled "pull" or "push" backup jobs, some daily and some weekly,
such that I have a hot-standby NAS holding a mirror of the data on the primary NAS,
and a remote/off-site backup NAS whose data is approximately one week old.
 
The slug is set to automatically reboot every Saturday night.
Once every 6 to 12 months or so, this reboot would fail because the disk checks failed.
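For context, the scheduling is all plain cron; the entries look something like this (the script names and times here are illustrative, not my actual crontab):

```
# m  h   dom mon dow  command
0    2   *   *   *    /opt/backup/pull-pc1.sh       # nightly pull from a Windows PC
0    3   *   *   0    /opt/backup/push-offsite.sh   # weekly push to the off-site NAS
30   23  *   *   6    /sbin/reboot                  # the automatic Saturday-night reboot
```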
 
The fix is (was) to boot a live Linux CD on a PC and run "fsck" on the ext3 partition(s).
 
This has worked flawlessly until, you guessed it, last Saturday night.
Unfortunately the "fsck -y" (yes, risky I know, but it had never failed before)
went OK at first, but then hit many, many "short reads" whose recovery apparently failed.
Even another fsck using an alternate superblock didn't help.
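For anyone wanting to try the same recovery, the procedure I used can be sketched as below. It is demonstrated here on a small image file rather than a real partition (so it is safe to run); the backup-superblock offset of 8193 assumes the 1 KiB block size that mke2fs picks for a filesystem this small — on a real multi-hundred-GB partition with 4 KiB blocks the first backup is typically at 32768:

```shell
# Tools live in /sbin on many distros; make sure they are on the PATH.
export PATH="$PATH:/sbin:/usr/sbin"

# Create a small ext3 image to stand in for the real partition.
dd if=/dev/zero of=/tmp/slug-demo.img bs=1M count=16 2>/dev/null
mkfs.ext3 -q -F /tmp/slug-demo.img

# List the backup superblocks that mke2fs wrote.
dumpe2fs /tmp/slug-demo.img 2>/dev/null | grep -i 'superblock at'

# Check the filesystem using a backup superblock.  With -b the filesystem
# is treated as dirty, so e2fsck exits 1 (fixes written) even when all is
# well -- hence the "|| true".
e2fsck -b 8193 -y /tmp/slug-demo.img || true
```

On a real disk you would substitute the partition device (e.g. /dev/sdb1) for the image file.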
 
So whilst I have not lost any real data, I have lost my backup "strategy",
and I think the partition/data (but not necessarily the drive) is "toast".
 
But as the drive is also rather old (the oldest date on the "un-used" partition goes back to 2005) I already had plans to replace it.
So I have a blank 2TB drive sitting in another USB enclosure ready to use.
 
I think it is time for me to move away from the Unslung firmware to SlugOS (BE),
as I want the slug to take a more "passive" role, acting as the recipient of rdiff-backup (which uses the rsync algorithm over SSH),
where the backup is initiated/"pushed" by the machine hosting the "master" copy of the data.
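Concretely, the push I have in mind would be run from the machine holding the master copy, along these lines (hostname and paths are made up for illustration):

```
# Push an incremental backup from the master machine to the slug over SSH:
#   rdiff-backup /srv/userfiles backupuser@slug::/media/disk2tb/backups/userfiles
#
# and prune old increments on the slug, e.g. keeping 8 weeks of history:
#   rdiff-backup --remove-older-than 8W backupuser@slug::/media/disk2tb/backups/userfiles
```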
 
So the advice needed is as follows:
 
1) Will SlugOS cope with a 2TB drive?
 
2) What sort of partitioning scheme would work best?
    (I am inclined to partition the drive into at least 2 x 1TB partitions, just so the checks are quicker and only 1/2 of the data is "lost" if a partition becomes corrupt)
 
3) I'm betting that I will also need some swap space, as SlugOS will not cope with fsck on partitions of this size using RAM alone?
    Any recommended size for this swap space? (e.g. do I need, say, 1GB of swap per 1TB of partition to be fsck'd?)
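To make questions 2 and 3 concrete, the sort of layout I'm contemplating would be set up roughly as in the comments below (the device name /dev/sda is an assumption); the last two lines are a safe stand-in that demonstrates mkswap against an ordinary file instead of a real partition:

```shell
# mkswap and friends live in /sbin on many distros.
export PATH="$PATH:/sbin:/usr/sbin"

# Intended layout on the real 2TB drive (device name assumed):
#   fdisk /dev/sda                   # two ~1TB Linux partitions + one swap partition
#   mkfs.ext3 -L data1 /dev/sda1
#   mkfs.ext3 -L data2 /dev/sda2
#   mkswap /dev/sda3 && swapon /dev/sda3

# Safe stand-in: format a plain file as swap to show the command itself.
dd if=/dev/zero of=/tmp/swap-demo bs=1M count=8 2>/dev/null
mkswap /tmp/swap-demo
```

(A swap file set up this way could also be enabled with "swapon /tmp/swap-demo", though that step needs root.)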
 
Sorry for the long-ish post and thanks in advance for any advice/tips
 
TIA
Ian White
 
P.S. The other NAS devices are a Qnap and 2 Linkstations, all with "root" access, but the Slug was my first NAS, so I think it deserves some continued effort on my part.

