[nslu2-linux] Re: DHCP Server problem on Synology DS409+
Thanks for reporting the problem. I've corrected the conffile problem in subversion.
If dhcpd is supposed to support -4, report the issue upstream and help them fix it.
For Synology NAS, another possible workaround is to get the ipv6 kernel module from the optware syno... feed at http://ipkg.
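Roughly, once that feed is configured, the steps would look like this (the package name and module path below are my guesses, check the feed for the real ones):

ipkg update
ipkg install kernel-module-ipv6    # package name is a guess
insmod /opt/lib/modules/ipv6.ko    # module path is a guess
dhcpd -4

With the ipv6 module loaded, dhcpd should get past the /proc/net/if_inet6 check even when only -4 is used.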
-Brian
--- In nslu2-linux@yahoogroups.com, ... wrote:
>
> Hello,
>
> I recently made an upgrade of my isc-dhcp server and I think I messed up.
>
> First of all, my dhcpd.conf was overwritten by the default config file provided with the package. I had a backup, but this is surprising.
>
> And then the new DHCP server refuses to start, since it asks for the ipv6 kernel module to be enabled (which is not enabled in the Synology kernel).
>
> Here's the output:
>
> <<
> servebox> dhcpd -4
> Internet Systems Consortium DHCP Server 4.1.0p1
> Copyright 2004-2009 Internet Systems Consortium.
> All rights reserved.
> For info, please visit http://www.isc.org/
> Wrote 0 leases to leases file.
> Error opening '/proc/net/if_inet6'
> Can't get list of interfaces.
>
> If you did not get this software from ftp.isc.org, please
> get the latest from ftp.isc.org and install that before
> requesting help.
>
> If you did get this software from ftp.isc.org and have not
> yet read the README, please read it before requesting help.
> If you intend to request help from the dhcp-server@isc.org
> mailing list, please read the section on the README about
> submitting bug reports and requests for help.
>
> Please do not under any circumstances send requests for
> help directly to the authors of this software - please
> send them to the appropriate mailing list as described in
> the README file.
>
> exiting.
> servebox>
> >>
>
> I obviously tried the -4 option to force IPv4-only, but it seems dhcpd has a bug in it.
>
> Is there a way or a workaround? My LAN is running with another "emergency" DHCP server, but this is still annoying...
>
> At least I would like to roll back to the previous working archive package (ipk), but I can't find an "Archive" directory in the NSLU2 feeds.
>
> Thanks!
>
[nslu2-linux] Re: Problem with perl and large files
Doing a strace on a file > 2GB clearly shows that the stat64 call has succeeded, so the problem is in user space.
stat64("big-
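For reference, the call can be isolated with something like this (the file name is only an example):

strace perl test.pl big-file 2>&1 | grep stat64

A line showing the real st_size and a return value of 0 means the kernel and C library are fine, and the size gets lost later, inside perl itself.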
-Brian
--- In nslu2-linux@yahoogroups.com, ... wrote:
>
> Hi,
>
> I hope that anyone in this group can give me a hint about the maintainer of the perl ipkg package of the cs08q1armel branch.
> I found an issue with the perl stat() function, which fails for files larger than 2 GByte on my QNAP TS119. This function is used by the rsnapshot-diff script to get all the information about a certain file for comparing different backup snapshots. It seems that perl is compiled without 64-bit support for file sizes; with 32 bits one can only store size numbers up to 2 GByte. Here is a little test script to verify this issue.
>
> test.pl
> --- cut here ---
> #! /opt/bin/perl
>
> @mystat = stat($ARGV[0]);
> print "$ARGV[0]: file size: $mystat[7]\n";
> --- cut here ---
>
> Run test.pl FILE
>
> If the file is smaller than 2 GByte it will display the right file size; if it is larger, the file size will be empty because stat() failed.
> Can anyone tell me why perl is compiled without the
> -D_FILE_OFFSET_BITS=64 option?
> Thank you in advance
>
> Jörg
>
[nslu2-linux] Re: Problem with perl and large files
optware perl is maintained collectively. Initially gda and I made it cross-compilable, and I was responsible for adding a few targets. Recently I've been busy with a lot of other things, so I have not been able to spend as much time on optware as I'd like.
Perl is not the easiest thing to cross compile and maintain.
Regarding this particular issue, I can only say that, as far as I can see, the build configuration does contain matching =64 entries:
$ grep =64 sources/perl/
ccflags='-fno-
ccflags_uselargefiles=
cppsymbols='
I have to look at the build log to see if the option is being applied. Anyone is welcome to dig deeper and find the issue. We have too many issue finders and too few problem solvers.
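A quick way to see what the installed binary was actually built with, without digging through the build log (both commands just query perl's own Config):

perl -V:uselargefiles -V:ccflags
perl -MConfig -le 'print $Config{lseeksize}'

If uselargefiles comes back as 'define' and lseeksize is 8, the largefile flags made it into the binary and the problem is elsewhere; if lseeksize is 4, stat() in that perl really is stuck at 32-bit sizes.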
-Brian
--- In nslu2-linux@yahoogroups.com, ... wrote:
>
> Hi,
>
> I hope that anyone in this group can give me a hint about the maintainer of the perl ipkg package of the cs08q1armel branch.
> I found an issue with the perl stat() function, which fails for files larger than 2 GByte on my QNAP TS119. This function is used by the rsnapshot-diff script to get all the information about a certain file for comparing different backup snapshots. It seems that perl is compiled without 64-bit support for file sizes; with 32 bits one can only store size numbers up to 2 GByte. Here is a little test script to verify this issue.
>
> test.pl
> --- cut here ---
> #! /opt/bin/perl
>
> @mystat = stat($ARGV[0]);
> print "$ARGV[0]: file size: $mystat[7]\n";
> --- cut here ---
>
> Run test.pl FILE
>
> If the file is smaller than 2 GByte it will display the right file size; if it is larger, the file size will be empty because stat() failed.
> Can anyone tell me why perl is compiled without the
> -D_FILE_OFFSET_BITS=64 option?
> Thank you in advance
>
> Jörg
>
[Java] abt meta search engine
Hi friends!
Please help me with my project on a meta search engine.
I am facing a problem with how to start and what to search for.
[nslu2-linux] Re: NFS writing poor performance
Try this set of options in your Ubuntu /etc/fstab file:
noatime,rsize=
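For example, a complete fstab line could look like this; the rsize/wsize values below are only a common starting point rather than something measured on an NSLU2, so they may need tuning:

192.168.2.2:/share/hdd/data/USB_D1  /home/anagnost/mnt/nfs_nslu  nfs  noatime,rsize=8192,wsize=8192  0  0

Unmount and remount the share so the new options take effect. If writes are still slow, dropping the sync option from /etc/exports on the NSLU2 side is also worth trying.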
--- In nslu2-linux@yahoogroups.com, vasanag <vasanag@...> wrote:
>
> Gurus,
>
> I am experiencing very slow writing speed on my NFS share on my UNSLUNG.
>
> To write a file of 40 MB from my UBUNTU machine to the NSLU2 I need 3
> min 15 sec.
>
> To read back the same file to UBUNTU, I need only 15 secs!!!!!!
>
> What is going on here? Am I missing something?
>
>
> ************
>
> On my NSLU2 running UNSLUNG 6.10
>
> # mount
> ......
> /dev/sdb1 on /share/hdd/data/USB_D1 type ext3 (rw,noatime)
>
>
> # cat /etc/exports
> /share/hdd/data/USB_D1 192.168.2.*(rw,sync,no_root_squash)
>
>
> ***********
>
> On UBUNTU
>
> To mount the share
>
> sudo mount 192.168.2.2:/share/hdd/data/USB_D1 /home/anagnost/mnt/nfs_nslu
>
>
> TIA
> vasanag
>
Re: [Java] New Member, old coding heritage...
Thanks, Java Guy,
I was afraid of that.
I think I'm too set in my ways; I've long regarded OOP as a 'black-box' approach to scoping of variables and separating code from data. I expect this view is wrong, but it does seem little more than an exercise in semantics and mindset-adjustment.
Just to fill in a little more of my coding history: I learned Z80 assembler in the early 1980s, and moved on to BASIC and then C in the early 1990s, always as a hobbyist. I have never programmed for a living, but have made several software tools to support my main hobby of amateur radio construction. My last programs were written in 1996. I had hoped to drag my game kicking and squealing into the present millennium with a 'modern' language, and Java appears adequate (easily ported, lots of plug-ins); I only hope I'm able to learn it before the wetware starts rotting too far.
I'll soldier on with the Sun Tutorial series (I agree; they've done a nice job), but just this weekend I felt somewhat down-hearted, and even found myself trying to get my old Borland Turbo C 4.5 IDE running under Wine on my netbook. Now that's _desperation_.
Pete Morris
____________
From: Java Guy <javaguy@midnightmus
To: Java_Official@yahoogroups.com
Sent: Sunday, 29 November, 2009 17:39:43
Subject: Re: [Java] New Member, old coding heritage...
Pete,
Coming from C, you really have 2 concepts to learn, not one. First, Java
is an Object-Oriented language, so you need to learn OOP
(Object-Oriented Programming)
<snip>
[nslu2-linux] Problem with perl and large files
Hi,
I hope that anyone in this group can give me a hint about the maintainer of the perl ipkg package of the cs08q1armel branch.
I found an issue with the perl stat() function, which fails for files larger than 2 GByte on my QNAP TS119. This function is used by the rsnapshot-diff script to get all the information about a certain file for comparing different backup snapshots. It seems that perl is compiled without 64-bit support for file sizes; with 32 bits one can only store size numbers up to 2 GByte. Here is a little test script to verify this issue.
test.pl
--- cut here ---
#! /opt/bin/perl
@mystat = stat($ARGV[0]);
print "$ARGV[0]: file size: $mystat[7]\n";
--- cut here ---
Run test.pl FILE
If the file is smaller than 2 GByte it will display the right file size; if it is larger, the file size will be empty because stat() failed.
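A file larger than 2 GByte for testing can be created quickly as a sparse file, without using real disk space (the file name is arbitrary):

dd if=/dev/zero of=big.test bs=1M count=1 seek=3000
perl test.pl big.test

The seek makes the file sparse, so it only occupies about 1 MByte on disk but reports a size of roughly 3 GByte, which is enough to trigger the failure.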
Can anyone tell me why perl is compiled without the
-D_FILE_OFFSET_BITS=64 option?
Thank you in advance
Jörg
Re: [Java] New Member, old coding heritage...
Can you help me write a "modify" method for an array program I'm working on in school?
Heath
--- On Sun, 11/29/09, Java Guy <javaguy@midnightmus
From: Java Guy <javaguy@midnightmus
Subject: Re: [Java] New Member, old coding heritage...
To: Java_Official@yahoogroups.com
Date: Sunday, November 29, 2009, 5:39 PM
Pete,
Coming from C, you really have 2 concepts to learn, not one. First, Java
is an Object-Oriented language, so you need to learn OOP
(Object-Oriented Programming). C is a
function-oriented language, whereas Java is an object (or even data)
oriented language.
Second, you need to learn Java concepts, which are built, indirectly,
upon C and C++. One thing you need to get used to is: on the surface,
there are no pointers. Yet you are allowed to initialize references with
"null", and an exception type is NullPointerException.
In my opinion, the best Java tutorial is at http://java.sun.com.
They're the experts; after all, they invented Java.
-Java Guy
Pete Morris wrote:
>
> Hi, everyone,
>
> I'm a Java newbie looking to find my path with this fascinating language.
>
> I have a remote history of traditional (procedural) C coding, and Java looks to me like C
> reincarnated, complete without the confusing bits. Because of this, I
> probably have more to unlearn than learn; so my question:
>
> Does anyone know of resources online for guiding C (not C++) veterans
> in the way of Java?
>
> Blessings,
>
> Pete Morris
>
[nslu2-linux] Re: NFS writing poor performance
After a more careful look at the progress bar on my UBUNTU machine
while writing a file, I noticed that the file transfer speed is not
constant. The progress bar stops, sometimes for more than 1 minute, in the
same position. It seems that it is struggling to write, but something goes
wrong!!!
Any ideas?
vasanag
tlhackque wrote:
>> # cat /etc/exports
>> /share/hdd/data/USB_D1 192.168.2.*(rw,sync,no_root_squash)
>
> Try removing the 'sync' option.
>
> --- In nslu2-linux@yahoogroups.com, vasanag <vasanag@...> wrote:
>> Gurus,
>>
>> I am experiencing very slow writing speed on my NFS share on my UNSLUNG.
>>
>> To write a file of 40 MB from my UBUNTU machine to the NSLU2 I need 3
>> min 15 sec.
>>
>> To read back the same file to UBUNTU, I need only 15 secs!!!!!!
>>
>> What is going on here? Am I missing something?
>>
>>
>> ************
>>
>> On my NSLU2 running UNSLUNG 6.10
>>
>> # mount
>> ......
>> /dev/sdb1 on /share/hdd/data/USB_D1 type ext3 (rw,noatime)
>>
>>
>> # cat /etc/exports
>> /share/hdd/data/USB_D1 192.168.2.*(rw,sync,no_root_squash)
>>
>>
>> ***********
>>
>> On UBUNTU
>>
>> To mount the share
>>
>> sudo mount 192.168.2.2:/share/hdd/data/USB_D1 /home/anagnost/mnt/nfs_nslu
>>
>>
>> TIA
>> vasanag
>>
>
>
>
>
Re: [LINUX_Newbies]
The Ubuntu repository has something called "secure-delete" that is supposed to be able to handle that...
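For single files, something along these lines should work (shred ships with coreutils, srm comes with the secure-delete package; the file name is only an example):

shred -u -z -n 3 secret.txt    # 3 random passes, a final zero pass, then unlink
srm secret.txt                 # secure-delete's equivalent, with its own defaults

One caveat: on journalling or copy-on-write filesystems old copies of the blocks can survive elsewhere, so for a whole disk the multi-pass overwrite of the entire device is still the safer route.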
David
--- In LINUX_Newbies@yahoogroups.com, ... wrote:
>
> On Mon, Nov 30, 2009 at 09:42, Roy <linuxcanuck@gmail.com> wrote:
> > Thanks to all, the disks are wiped. Some of these techniques take time....
> > lots of time.
>
> You're welcome :)
>
> FWIW, when I turned in my ThinkPad back in April when I left IBM, I
> did a high level wipe on the hard disk before handing it over. It
> took almost 24 hours to do a 120GB drive...
>
> Also did it on a 500GB disk prior to that and it took several days to
> complete (that one I just did for fun to see how long it would take,
> back when I was playing with different disk wiping tools)
>
> Now, for EVERYONE ELSE...
>
> That just made me think... what utilities exist for securely wiping
> individual FILES from a hard disk. Something like a Secure Delete
> that overwrites the blocks for an individual file with several passes
> of random data.
>
> Just something I was thinking about, and admittedly, I haven't really
> researched it myself.
>
> Cheers
> Jeff
>
>
> --
>
> Joan Crawford - "I, Joan Crawford, I believe in the dollar.
> Everything I earn, I spend." -
> http://www.brainyqu
>
Re: [LINUX_Newbies]
On Mon, Nov 30, 2009 at 09:42, Roy <linuxcanuck@gmail.com> wrote:
> Thanks to all, the disks are wiped. Some of these techniques take time....
> lots of time.
You're welcome :)
FWIW, when I turned in my ThinkPad back in April when I left IBM, I
did a high level wipe on the hard disk before handing it over. It
took almost 24 hours to do a 120GB drive...
Also did it on a 500GB disk prior to that and it took several days to
complete (that one I just did for fun to see how long it would take,
back when I was playing with different disk wiping tools)
Now, for EVERYONE ELSE...
That just made me think... what utilities exist for securely wiping
individual FILES from a hard disk. Something like a Secure Delete
that overwrites the blocks for an individual file with several passes
of random data.
Just something I was thinking about, and admittedly, I haven't really
researched it myself.
Cheers
Jeff
--
Joan Crawford - "I, Joan Crawford, I believe in the dollar.
Everything I earn, I spend." -
http://www.brainyqu
Re: [LINUX_Newbies]
Indeed they do--kinda like the Microsoft disk formatting routine that hasn't been changed in ages, hunh? 8:)
David
--- In LINUX_Newbies@yahoogroups.com, Roy <linuxcanuck@...> wrote:
>
> Thanks to all, the disks are wiped. Some of these techniques take time....
> lots of time.
>
> Roy
>
> 2009/11/30 dbneeley <dbneeley@gmail.com>
>
> >
> >
> > If that were adequate, the government standard would not call for
> > overwriting every sector multiple times.
> >
> > A single overwrite is not sufficient against sophisticated techniques.
> > Multiple overwrites, using multiple characters, is required to be completely
> > sure.
> >
> > I was in a warehouse owned by a disk drive manufacturer many years ago when
> > they got back the drives the government had leased---completely disassembled
> > and the disks from many drives scattered, making physical reconstruction of
> > individual drives practically impossible. That was after having wiped
> > them with secure erasure techniques involving seven separate overwrites.
> >
> > David
> >
> >
> > --- In LINUX_Newbies@yahoogroups.com,
> > "Mr.Jagadeesh Malakannavar" <mn_jagadeesh@...> wrote:
> > >
> > > Using dd, fill your hard drive with 0s. That could be one of the best ways, I
> > > think.
> > >
> > > Thanks
> > >
> > >
> > >
> > >
> > > ________________________________
> > > From: Glenn Sheppard <glenn596658@...>
> >
> > > To: LINUX_Newbies@yahoogroups.com
> > > Sent: Sun, November 29, 2009 9:07:10 AM
> > > Subject: Re: [LINUX_Newbies]
> > >
> > >
> > >
> > >
> > > --- On Fri, 11/27/09, Roy <linuxcanuck@gmail.com> wrote:
> > >
> > > From: Roy <linuxcanuck@gmail.com>
> > > Subject: [LINUX_Newbies]
> > > To: LINUX_Newbies@yahoogroups.com
> > > Received: Friday, November 27, 2009, 5:27 AM
> > >
> > >
> > >
> > > I have some old hard drives to get rid of but first want to wipe them
> > >
> > > clean. I keep on hearing reports of them ending up in Africa where data
> > >
> > > mining and identity theft is occurring. What is the best way to go about
> > >
> > > this with Linux?
> > >
> > > Roy
> > >
> > >
> > > Since I have heard so many horror stories about old hard drives and
> > personal data I "wipe" all my old hard drives by taking them to my workshop
> > and wiping them with a hammer and then throwing them in the trash! I
> > purchased a used hard drive from a computer second hand store once and it
> > wasn't even formatted! I installed it and all the previous user's programs
> > and data were still there for anyone to see. I immediately reformatted the
> > drive and installed my OS (PCLinuxOS) but personally I will not let any
> > drive that I have used be resold. It's not worth risking your hard earned
> > money and your credit rating for a few dollars.
> > >
Re: [LINUX_Newbies] Re: Who has Wave?
If it plays then it may just be a faulty link in the gadget or may be a
problem with whatever they use to embed the video. I am not sure if it is
java or what.
Roy
2009/11/30 Darksyde <m_alexander61@yahoo.com>
>
>
> So-so. Youtube vids usually do ok, sometimes a little snag here and there.
> Other vids sometimes do very well, sometimes nothing. Of course I never
> bother to check the format so I can't narrow it by type, but I've had
> Youtube clips much longer than Dr. Wave play well. Any ideas?
> Thanks,
> Mark
>
>
> --- In LINUX_Newbies@yahoogroups.com,
> Roy <linuxcanuck@...> wrote:
> >
> > Dr. Wave plays for me in both FF and Chromium so I don't think that it is
> a
> > plugin. It loads a Youtube video so if Youtube plays it should. Do other
> > gadgets load?
> >
> > Roy
> >
> > 2009/11/29 Darksyde <m_alexander61@...>
> >
> > >
> > >
> > >
> > >
> > > --- In LINUX_Newbies@yahoogroups.com,
> > > Roy <linuxcanuck@...> wrote:
> > > >
> > > > I have since learned quite a bit and have added bots, cruised the
> public
> > > > waves and added my own. It is very different. It takes some getting
> used
> > > to
> > > > but it is also very powerful. I am impressed with some of the stuff
> that
> > > > I've seen and real time would be good, if it wasn't so slow!
> > > >
> > > > Roy
> > > >
> > > Though I haven't had any live waves yet it does seem incredibly slow.
> The
> > > "Dr. Wave" intro video always hangs, even though I keep my cookies
> culled
> > > and history wiped. Is it possible that I need a F'fox extension or
> plugin?
> > > I'm using v. 3.0.15.
> > > Anyway, at least, as someone said, it has been released as a beta for
> > > limited use and (hopefully) the input from those who use it will be
> > > reflected in the official release. Certain other companies would do
> well to
> > > follow their example, eh?
> > > Mark
Re: [LINUX_Newbies]
Thanks to all, the disks are wiped. Some of these techniques take time....
lots of time.
Roy
2009/11/30 dbneeley <dbneeley@gmail.com>
>
>
> If that were adequate, the government standard would not call for
> overwriting every sector multiple times.
>
> A single overwrite is not sufficient against sophisticated techniques.
> Multiple overwrites, using multiple characters, is required to be completely
> sure.
>
> I was in a warehouse owned by a disk drive manufacturer many years ago when
> they got back the drives the government had leased---completely disassembled
> and the disks from many drives scattered, making physical reconstruction of
> individual drives practically impossible. That was after having wiped
> them with secure erasure techniques involving seven separate overwrites.
>
> David
>
>
> --- In LINUX_Newbies@yahoogroups.com,
> "Mr.Jagadeesh Malakannavar" <mn_jagadeesh@...> wrote:
> >
> > Using dd, fill your hard drive with 0s. That could be one of the best ways, I
> > think.
> >
> > Thanks
> >
> >
> >
> >
> > ________________________________
> > From: Glenn Sheppard <glenn596658@...>
>
> > To: LINUX_Newbies@yahoogroups.com
> > Sent: Sun, November 29, 2009 9:07:10 AM
> > Subject: Re: [LINUX_Newbies]
> >
> >
> >
> >
> > --- On Fri, 11/27/09, Roy <linuxcanuck@gmail.com> wrote:
> >
> > From: Roy <linuxcanuck@gmail.com>
> > Subject: [LINUX_Newbies]
> > To: LINUX_Newbies@yahoogroups.com
> > Received: Friday, November 27, 2009, 5:27 AM
> >
> >
> >
> > I have some old hard drives to get rid of but first want to wipe them
> >
> > clean. I keep on hearing reports of them ending up in Africa where data
> >
> > mining and identity theft is occurring. What is the best way to go about
> >
> > this with Linux?
> >
> > Roy
> >
> >
> > Since I have heard so many horror stories about old hard drives and
> personal data I "wipe" all my old hard drives by taking them to my workshop
> and wiping them with a hammer and then throwing them in the trash! I
> purchased a used hard drive from a computer second hand store once and it
> wasn't even formatted! I installed it and all the previous user's programs
> and data were still there for anyone to see. I immediately reformatted the
> drive and installed my OS (PCLinuxOS) but personally I will not let any
> drive that I have used be resold. It's not worth risking your hard earned
> money and your credit rating for a few dollars.
> >