vi .swp recovery file corrupted: "unable to read block 1...", original file zero length

Hi all,

I have Red Hat 9 Pro installed on a Sager 5680 laptop. It's running
fine, except that every now and then the system hangs completely. I'm
still troubleshooting that, and am leaning towards overheating as a
cause.

On one such occasion, my setup-bold.txt file that was open got
corrupted.
Events (everything happens as root; I was in the process of installing
drivers):

    0 - vi: VIM 6.1.320, kernel: 2.4.20-6, custom compiled

    1 - ~/setup-bold.txt is open
    2 - system hangs as I am compiling wlan drivers
    3 - hard reboot, missed the opportunity to force fsck on boot
    4 - was scared of fsck-ing a mounted filesystem, did another hard
reboot
    5 - forced filesystem check during reboot, no messages
    6 - su, cd, vi ~/setup-bold.txt
    7 - vi: blah-blah, found .swp file, the usual, choose (R)ecover
    8 - the following error:
======================================================
Swap file ".setup-bold.txt.swp" already exists!
"setup-bold.txt" 0L, 0C
Using swap file ".setup-bold.txt.swp"
Original file "~/setup-bold.txt"
E308: Warning: Original file may have been changed
E309: Unable to read block 1 from .setup-bold.txt.swp
=======================================================
    9 - the file is opened as 0L, 0C. Length on disk is 0.
   10 - I've rebooted several times since, did fsck -fn, found 3
        orphaned inodes, and one with dtime 0, but haven't changed
        anything as I can't unmount the partition (/).

I am in the process of making a bootable floppy, so that I can run one
final fsck, although I don't expect much. I can open, read, copy,
rename, and diff this .swp file, and I still get the "block 1"
message. I am guessing that this is not a problem with the file
system, but with vi, and hoping I can hack the recovery file.

Text editing the recovery file gives garbage (although I can make out
my hostname). This file is my installation log, which holds the results
of about a week of full-time googling and troubleshooting. It wasn't
backed up, as I was still in the middle of installation. :(

Qs:

1) Is there a way to hack this and get the text back?
2) Could it be a filesystem issue, and can it be fixed now?
3) How unsafe is it to fsck a mounted filesystem (and we're talking
   about / here)? I only have two partitions, / and /boot.

I've used Linux for ~8 years, and I've never seen this before. Google
didn't help. :(

Thanks,
Milica
3/5/2004 5:29:32 PM

On 5 Mar 2004 09:29:32 -0800, Milica Medved 
  <positively_no_spam@yahoo.com> wrote:
>
> 1) Is there a way to hack this and get the text back?
> 2) Could it be a filesystem issue, and can it be fixed now?
> 3) How unsafe is it to fsck a mounted filesystem (and we're talking
>    about / here)? I only have two partitions, / and /boot.
>
Go to single user mode and mount the / filesystem read-only:
mount -n -o remount,ro /
fsck
mount -n -o remount,rw /

Good luck!

-- 
Incrsease your earoning poswer and gaerner profwessional resspect.
Get the Un1iversity Dewgree you have already earned.
 [from the prestigious, non-accredited University of Spam!]
bmarcum2 (928)
3/5/2004 10:34:47 PM
On further examination, the swap file is too short to contain any
information. It is the same size as what I get for an empty file. :(

I tried debugfs but that won't work, cause my file is not deleted,
just changed to zero length.
(BTW, debugfs only gave me a list of deleted inodes from two days ago,
nothing from today or yesterday. What's up with that?)

I have resigned to scanning the hard drive for the remains of the
file. :P

Right now I'm doing

cat /dev/hda3 | split -b85000000 - bold-hda3-output.

into a remote (huge) hard drive. This will give me about 26x26=676
files, 85 MB each. I plan to grep each for the text I know was in
my file:

grep -H <knowntext> *

and then examine the ones that come up positive. Short of splitting
these files into smaller ones (at 85MB, they are probably too big to
manage with vi etc.), does anybody have suggestions on how to do it
easily/intelligently/...?

Thanks,
Milica
3/6/2004 12:18:17 AM
On 5 Mar 2004 16:18:17 -0800, Milica Medved <positively_no_spam@yahoo.com> wrote:
> 
> 
> On further examination, the swap file is too short to contain any
> information. It is the same size as what I get for an empty file. :(
> 
> I tried debugfs but that won't work, cause my file is not deleted,
> just changed to zero length.
> (BTW, debugfs only gave me a list of deleted inodes from two days ago,
> nothing from today or yesterday. What's up with that?)
> 
> I have resigned to scanning the hard drive for the remains of the
> file. :P
> 
> Right now I'm doing
> 
> cat /dev/hda3 | split -b85000000 - bold-hda3-output.
> 
> into a remote (huge) hard drive. This will give me about 26x26=676
> files, 85 MB each. I plan to grep each for the text I know was in
> my file:
> 
> grep -H <knowntext> *
> 
> and then examine the ones that come up positive. Short of splitting
> these files into smaller ones (at 85MB, they are probably too big to
> manage with vi etc.), does anybody have suggestions on how to do it
> easily/intelligently/...?
> 
> Thanks,
> Milica

You don't need the -H if you are searching a list of files.

I'd try this, and it will take a LONG time:

grep -C4 -n 'string' *  > outputfile

That will give you 4 lines of context before and after the search
string, plus the file name and line number of each match.


AC

-- 
ed(1) Check out the original tutorials by Brian W.
Kernighan at the Ed Home Page  http://tinyurl.com/2aa6g
zzzzzz (1966)
3/6/2004 12:58:23 AM
The following suggestion is absolutely bordering on where
angels fear to tread, and I've used it just once as
a last resort.

[ skip to bottom ]

Milica Medved wrote:

> On further examination, the swap file is too short to contain any
> information. It is the same size as what I get for an empty file. :(
> 
> I tried debugfs but that won't work, cause my file is not deleted,
> just changed to zero length.
> (BTW, debugfs only gave me a list of deleted inodes from two days ago,
> nothing from today or yesterday. What's up with that?)
> 
> I have resigned to scanning the hard drive for the remains of the
> file. :P
> 
> Right now I'm doing
> 
> cat /dev/hda3 | split -b85000000 - bold-hda3-output.
> 
> into a remote (huge) hard drive. This will give me about 26x26=676
> files, 85 MB each. I plan to grep each for the text I know was in
> my file:
> 
> grep -H <knowntext> *
> 
> and then examine the ones that come up positive. Short of splitting
> these files into smaller ones (at 85MB, they are probably too big to
> manage with vi etc.), does anybody have suggestions on how to do it
> easily/intelligently/...?
> 
> Thanks,
> Milica

[ Danger, minefield ahead! ]

If you have the code for fsck, hack it as follows:

Instead of clearing the bad inodes, do nothing.
Do no fixes to the file-system whatsoever!
(Need I say call it something else than "fsck"?)

Instead of just reporting the free blocks as you find them,
write each block somewhere (different disk) with the
number of the block as the file name on the other
disk/partition.  Make sure there is plenty of room
on the other partition.

These should be 4K blocks, but there will be LOTS and LOTS
of them.

Reconstruct your original files as best you can from the
results on the "clean" partition.

Note:  It will NOT preserve your directory structure.
That's up to you.

Once you have salvaged all you can, you can run
an fsck on the "bad" partition and copy the files
over.
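
(If your e2fsprogs is recent enough, debugfs may already do the
block-dumping half of this for you: its dump_unused command prints
every unused block that still holds data, tagged with the block
number.  Untested here, and the paths are only examples:

  debugfs -R dump_unused /dev/hda3 > /mnt/other/unused-blocks.txt
  grep -n 'text you remember' /mnt/other/unused-blocks.txt

You would still reconstruct the files by hand from the hits.)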

[ Don't try this at home, kiddies! ]

Good luck and may the source be with you!

-- 
Ñ
"It is impossible to make anything foolproof because fools are so
ingenious" - A. Bloch

hukolau3 (292)
3/6/2004 1:39:15 AM
On Fri, 05 Mar 2004 09:29:32 -0800, Milica Medved wrote:

> Hi all,
>

[snip hang, reboot lost file, fsck]

Have you examined the lost+found directory on the filesystem?


-- 
NPV

What did that old blonde gal say? -- That is the part you throw away.
   Tom Waits - The part you throw away

me4 (19624)
3/6/2004 2:37:01 PM
Nils Petter Vaskinn <me@privacy.net> wrote in message news:<pan.2004.03.06.14.36.59.70436@privacy.net>...
> On Fri, 05 Mar 2004 09:29:32 -0800, Milica Medved wrote:
> 
> > Hi all,
> >
> 
> [snip hang, reboot lost file, fsck]
> 
> Have you examined the lost+found directory on the filesystem?

Thanks for the suggestion. There's nothing there.
3/6/2004 6:55:46 PM
Nick Landsberg <hukolau@NOSPAM.att.net> wrote in message news:<7Z92c.154557$hR.2868761@bgtnsc05-news.ops.worldnet.att.net>...

I already did a read-only fsck. I got ~60 inodes, and they were all
from two days before the problem occurred, so I don't think they would
help. I also already did a regular fsck on the second reboot, which
probably cleared out the recent inodes. (Right? I'm sure I did a lot
of file deleting in the previous two days.)

After getting some sleep, I am now inclined to think that fsck can't
help me, as the file is repaired and exists. It's just that it's zero
length.

But I will keep your advice in mind. One never knows when she may need
it.

Milica


> The following suggestion is absolutely bordering on where
> angels fear to tread, and I've used it just once as
> a last resort.
> 
> [ skip to bottom ]
> 
> Milica Medved wrote:
> 
> > On further examination, the swap file is too short to contain any
> > information. It is the same size as what I get for an empty file. :(
> > 
> > I tried debugfs but that won't work, cause my file is not deleted,
> > just changed to zero length.
> > (BTW, debugfs only gave me a list of deleted inodes from two days ago,
> > nothing from today or yesterday. What's up with that?)
> > 
> > I have resigned to scanning the hard drive for the remains of the
> > file. :P
> > 
> > Right now I'm doing
> > 
> > cat /dev/hda3 | split -b85000000 - bold-hda3-output.
> > 
> > into a remote (huge) hard drive. This will give me about 26x26=676
> > files, 85 MB each. I plan to grep each for the text I know was in
> > my file:
> > 
> > grep -H <knowntext> *
> > 
> > and then examine the ones that come up positive. Short of splitting
> > these files into smaller ones (at 85MB, they are probably too big to
> > manage with vi etc.), does anybody have suggestions on how to do it
> > easily/intelligently/...?
> > 
> > Thanks,
> > Milica
> 
> [ Danger, minefield ahead! ]
> 
> If you have the code for fsck, hack it as follows:
> 
> Instead of clearing the bad inodes, do nothing.
> Do no fixes to the file-system whatsoever!
> (Need I say call it something else than "fsck"?)
> 
> Instead of just reporting the free blocks as you find them,
> write each block somewhere (different disk) with the
> number of the block as the file name on the other
> disk/partition.  Make sure there is plenty of room
> on the other partition.
> 
> These should be 4K blocks, but there will be LOTS and LOTS
> of them.
> 
> Reconstruct your original files as best you can from the
> results on the "clean" partition.
> 
> Note:  It will NOT preserve your directory structure.
> That's up to you.
> 
> Once you have salvaged all you can, you can run
> an fsck on the "bad" partition and copy the files
> over.
> 
> [ Don't try this at home, kiddies! ]
> 
> Good luck and may the source be with you!
3/6/2004 7:05:42 PM
I'll do that.
Thanks

Alan Connor <zzzzzz@xxx.yyy> wrote in message news:<Pm92c.20907$yZ1.13058@newsread2.news.pas.earthlink.net>...
> On 5 Mar 2004 16:18:17 -0800, Milica Medved <positively_no_spam@yahoo.com> wrote:
> > 
> > 
> > On further examination, the swap file is too short to contain any
> > information. It is the same size as what I get for an empty file. :(
> > 
> > I tried debugfs but that won't work, cause my file is not deleted,
> > just changed to zero length.
> > (BTW, debugfs only gave me a list of deleted inodes from two days ago,
> > nothing from today or yesterday. What's up with that?)
> > 
> > I have resigned to scanning the hard drive for the remains of the
> > file. :P
> > 
> > Right now I'm doing
> > 
> > cat /dev/hda3 | split -b85000000 - bold-hda3-output.
> > 
> > into a remote (huge) hard drive. This will give me about 26x26=676
> > files, 85 MB each. I plan to grep each for the text I know was in
> > my file:
> > 
> > grep -H <knowntext> *
> > 
> > and then examine the ones that come up positive. Short of splitting
> > these files into smaller ones (at 85MB, they are probably too big to
> > manage with vi etc.), does anybody have suggestions on how to do it
> > easily/intelligently/...?
> > 
> > Thanks,
> > Milica
> 
> You don't need the -H if you are searching a list of files.
> 
> I'd try this, and it will take a LONG time:
> 
> grep -C4 -n 'string' *  > outputfile
> 
> That will give you 4 lines of context before and after the search
> string, plus the file name and line number of each match.
> 
> 
> AC
3/6/2004 7:11:43 PM
positively_no_spam@yahoo.com (Milica Medved) wrote:
>Right now I'm doing
>
>cat /dev/hda3 | split -b85000000 - bold-hda3-output.
>
>into a remote (huge) hard drive. This will give me about 26x26=676
>files, 85 MB each. I plan to grep each for the text I know was in
>my file:
>
>grep -H <knowntext> *
>
>and then examine the ones that come up positive. Short of splitting
>these files into smaller ones (at 85MB, they are probably too big to
>manage with vi etc.), does anybody have suggestions on how to do it
>easily/intelligently/...?

That's the right idea.  The program you need to use is dd.
You can do a binary search on the original file system without
even using temporary files.

Use dd to grab chunks, and output to stdout piped to a grep
command that will signal having found the text you are looking
for.  You also want to overlap the chunks, just in case the
search pattern happens to land right on a boundary, and gets cut
in half.

 dd if=/dev/hda3 bs=1024k count=101 skip=0 | grep 'text'

That looks at the first 101 MB.  What you actually want to do
though, is make it one block larger than half the size of the
whole file system.  If that doesn't find it, make the skip value
1 less than half the size of the file system, and do it again.
(My examples here are based on a 200 MB filesystem, and obviously
you are going to multiply that by some significantly large
number!)

 dd if=/dev/hda3 of=/tmp/chunk bs=1024k count=101 skip=99

saves a second chunk that overlaps the first one by two 1 MB
blocks.  (With of= there is no output on stdout to pipe into
grep, so grep /tmp/chunk itself.)  For each following chunk just
add 100 to the skip value.

If it turns up in the second chunk, you then do a binary search
on it,

 dd if=/tmp/chunk bs=1024k count=51 skip=0  | grep 'text'
 dd if=/tmp/chunk bs=1024k count=51 skip=50 | grep 'text'

Keep dividing it down until you have some nice small, easy to manage
size, and then redirect dd to a file (or use of=fname) and edit the
file with an editor.

Hmmm... you could put this all into a script, fire it off and go do
something useful for awhile.
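
An untested sketch of such a script (the device, size, and string
are placeholders to adjust):

 #!/bin/sh
 # Scan DEV in 101 MB chunks that overlap by 1 MB, and report
 # which offsets contain TEXT.  Untested sketch: set DEV, TEXT,
 # and END (device size in 1 MB blocks) to suit.
 DEV=/dev/hda3
 TEXT='text you remember'
 END=56320              # e.g. a 55 GB partition is 56320 blocks
 skip=0
 while [ $skip -lt $END ]; do
     if dd if=$DEV bs=1024k count=101 skip=$skip 2>/dev/null |
             grep -q "$TEXT"; then
         echo "match in MB $skip-$((skip+100))"
     fi
     skip=$((skip+100))
 done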

--
Floyd L. Davidson           <http://web.newsguy.com/floyd_davidson>
Ukpeagvik (Barrow, Alaska)                         floyd@barrow.com
floyd (1028)
3/6/2004 7:47:55 PM

Milica Medved wrote:

> Nick Landsberg <hukolau@NOSPAM.att.net> wrote in message news:<7Z92c.154557$hR.2868761@bgtnsc05-news.ops.worldnet.att.net>...
> 
> I already did a read-only fsck. I got ~60 inodes, and they were all
> from two days before the problem occurred, so I don't think they would
> help. I also already did a regular fsck on the second reboot, which
> probably cleared out the recent inodes. (Right? I'm sure I did a lot
> of file deleting in the previous two days.)
> 
> After getting some sleep, I am now inclined to think that fsck can't
> help me, as the file is repaired and exists. It's just that it's zero
> length.
> 
> But I will keep your advice in mind. One never knows when she may need
> it.
> 

I meant not just the bad inodes, write out every free block
which you find to another disk and then go through
the tedious process of recreating your original files
from that.  There will be a lot of useless garbage
picked up, but some of your data will be there.
Even if it's just 50%, it's better than recreating it
from scratch.


> Milica
> 
>>The following suggestion is absolutely bordering on where
>>angels fear to tread, and I've used it just once as
>>a last resort.
>>
>>[ skip to bottom ]
>>
>>Milica Medved wrote:
>>
>>
>>>On further examination, the swap file is too short to contain any
>>>information. It is the same size as what I get for an empty file. :(
>>>
>>>I tried debugfs but that won't work, cause my file is not deleted,
>>>just changed to zero length.
>>>(BTW, debugfs only gave me a list of deleted inodes from two days ago,
>>>nothing from today or yesterday. What's up with that?)
>>>
>>>I have resigned to scanning the hard drive for the remains of the
>>>file. :P
>>>
>>>Right now I'm doing
>>>
>>>cat /dev/hda3 | split -b85000000 - bold-hda3-output.
>>>
>>>into a remote (huge) hard drive. This will give me about 26x26=676
>>>files, 85 MB each. I plan to grep each for the text I know was in
>>>my file:
>>>
>>>grep -H <knowntext> *
>>>
>>>and then examine the ones that come up positive. Short of splitting
>>>these files into smaller ones (at 85MB, they are probably too big to
>>>manage with vi etc.), does anybody have suggestions on how to do it
>>>easily/intelligently/...?
>>>
>>>Thanks,
>>>Milica
>>
>>[ Danger, minefield ahead! ]
>>
>>If you have the code for fsck, hack it as follows:
>>
>>Instead of clearing the bad inodes, do nothing.
>>Do no fixes to the file-system whatsoever!
>>(Need I say call it something else than "fsck"?)
>>
>>Instead of just reporting the free blocks as you find them,
>>write each block somewhere (different disk) with the
>>number of the block as the file name on the other
>>disk/partition.  Make sure there is plenty of room
>>on the other partition.
>>
>>These should be 4K blocks, but there will be LOTS and LOTS
>>of them.
>>
>>Reconstruct your original files as best you can from the
>>results on the "clean" partition.
>>
>>Note:  It will NOT preserve your directory structure.
>>That's up to you.
>>
>>Once you have salvaged all you can, you can run
>>an fsck on the "bad" partition and copy the files
>>over.
>>
>>[ Don't try this at home, kiddies! ]
>>
>>Good luck and may the source be with you!

-- 
Ñ
"It is impossible to make anything foolproof because fools are so
ingenious" - A. Bloch

hukolau3 (292)
3/6/2004 7:54:21 PM
On Sat, 06 Mar 2004 10:47:55 -0900, Floyd L. Davidson <floyd@barrow.com> wrote:
> 
> 
> positively_no_spam@yahoo.com (Milica Medved) wrote:
>>Right now I'm doing
>>
>>cat /dev/hda3 | split -b85000000 - bold-hda3-output.
>>
>>into a remote (huge) hard drive. This will give me about 26x26=676
>>files, 85 MB each. I plan to grep each for the text I know was in
>>my file:
>>
>>grep -H <knowntext> *
>>
>>and then examine the ones that come up positive. Short of splitting
>>these files into smaller ones (at 85MB, they are probably too big to
>>manage with vi etc.), does anybody have suggestions on how to do it
>>easily/intelligently/...?
> 
> That's the right idea.  The program you need to use is dd.
> You can do a binary search on the original file system without
> even using temporary files.
> 
> Use dd to grab chunks, and output to stdout piped to a grep
> command that will signal having found the text you are looking
> for.  You also want to overlap the chunks, just in case the
> search pattern happens to land right on a boundary, and gets cut
> in half.
> 
>  dd if=/dev/hda3 bs=1024k count=101 skip=0 | grep 'text'
> 
> That looks at the first 101 MB.  What you actually want to do
> though, is make it one block larger than half the size of the
> whole file system.  If that doesn't find it, make the skip value
> 1 less than half the size of the file system, and do it again.
> (My examples here are based on a 200 MB filesystem, and obviously
> you are going to multiply that by some significantly large
> number!)
> 
>  dd if=/dev/hda3 of=/tmp/chunk bs=1024k count=101 skip=99
> 
> saves a second chunk that overlaps the first one by two 1 MB
> blocks.  (With of= there is no output on stdout to pipe into
> grep, so grep /tmp/chunk itself.)  For each following chunk just
> add 100 to the skip value.
> 
> If it turns up in the second chunk, you then do a binary search
> on it,
> 
>  dd if=/tmp/chunk bs=1024k count=51 skip=0  | grep 'text'
>  dd if=/tmp/chunk bs=1024k count=51 skip=50 | grep 'text'
> 
> Keep dividing it down until you have some nice small, easy to manage
> size, and then redirect dd to a file (or use of=fname) and edit the
> file with an editor.
> 
> Hmmm... you could put this all into a script, fire it off and go do
> something useful for awhile.
> 
> --
> Floyd L. Davidson           <http://web.newsguy.com/floyd_davidson>
> Ukpeagvik (Barrow, Alaska)                         floyd@barrow.com

That's it. Way to go, Floyd.

Going to play with that myself. Create a junkfile and unlink it, then
see if I can find it.

Now which partition is the smallest? :-)


AC

zzzzzz (1966)
3/6/2004 8:58:18 PM
Oh, OK, I didn't get that. I think it's because at this point the
numbers do play a role. Your method is suitable for a mostly full
filesystem, where there are fewer empty inodes. I have ~4Gb of files
on a 55Gb filesystem - there would be _a lot_ of sorting through.

I'm going with the "dd/cat /dev/xxx -> other filesystem, grep", just
because it seems simpler than anything else I can think of now, and
looks like less work (and less brain usage). ;) Since my file is a
text file, it should work. (fingers crossed).

Thanks for the explanation,
Milica


Nick Landsberg <hukolau@NOSPAM.att.net> wrote in message news:<N%p2c.80465$aH3.2460163@bgtnsc04-news.ops.worldnet.att.net>...
> Milica Medved wrote:
> 
> > Nick Landsberg <hukolau@NOSPAM.att.net> wrote in message news:<7Z92c.154557$hR.2868761@bgtnsc05-news.ops.worldnet.att.net>...
> > 
> > I already did a read-only fsck. I got ~60 inodes, and they were all
> > from two days before the problem occurred, so I don't think they would
> > help. I also already did a regular fsck on the second reboot, which
> > probably cleared out the recent inodes. (Right? I'm sure I did a lot
> > of file deleting in the previous two days.)
> > 
> > After getting some sleep, I am now inclined to think that fsck can't
> > help me, as the file is repaired and exists. It's just that it's zero
> > length.
> > 
> > But I will keep your advice in mind. One never knows when she may need
> > it.
> > 
> 
> I meant not just the bad inodes, write out every free block
> which you find to another disk and then go through
> the tedious process of recreating your original files
> from that.  There will be a lot of useless garbage
> picked up, but some of your data will be there.
> Even if it's just 50%, it's better than recreating it
> from scratch.
> 
> 
> > Milica
> > 
> > 
> > 
> >>The following suggestion is absolutely bordering on where
> >>angels fear to tread, and I've used it just once as
> >>a last resort.
> >>
> >>[ skip to bottom ]
> >>
> >>Milica Medved wrote:
> >>
> >>
> >>>On further examination, the swap file is too short to contain any
> >>>information. It is the same size as what I get for an empty file. :(
> >>>
> >>>I tried debugfs but that won't work, cause my file is not deleted,
> >>>just changed to zero length.
> >>>(BTW, debugfs only gave me a list of deleted inodes from two days ago,
> >>>nothing from today or yesterday. What's up with that?)
> >>>
> >>>I have resigned to scanning the hard drive for the remains of the
> >>>file. :P
> >>>
> >>>Right now I'm doing
> >>>
> >>>cat /dev/hda3 | split -b85000000 - bold-hda3-output.
> >>>
> >>>into a remote (huge) hard drive. This will give me about 26x26=676
> >>>files, 85 MB each. I plan to grep each for the text I know was in
> >>>my file:
> >>>
> >>>grep -H <knowntext> *
> >>>
> >>>and then examine the ones that come up positive. Short of splitting
> >>>these files into smaller ones (at 85MB, they are probably too big to
> >>>manage with vi etc.), does anybody have suggestions on how to do it
> >>>easily/intelligently/...?
> >>>
> >>>Thanks,
> >>>Milica
> >>
> >>[ Danger, minefield ahead! ]
> >>
> >>If you have the code for fsck, hack it as follows:
> >>
> >>Instead of clearing the bad inodes, do nothing.
> >>Do no fixes to the file-system whatsoever!
> >>(Need I say call it something else than "fsck"?)
> >>
> >>Instead of just reporting the free blocks as you find them,
> >>write each block somewhere (different disk) with the
> >>number of the block as the file name on the other
> >>disk/partition.  Make sure there is plenty of room
> >>on the other partition.
> >>
> >>These should be 4K blocks, but there will be LOTS and LOTS
> >>of them.
> >>
> >>Reconstruct your original files as best you can from the
> >>results on the "clean" partition.
> >>
> >>Note:  It will NOT preserve your directory structure.
> >>That's up to you.
> >>
> >>Once you have salvaged all you can, you can run
> >>an fsck on the "bad" partition and copy the files
> >>over.
> >>
> >>[ Don't try this at home, kiddies! ]
> >>
> >>Good luck and may the source be with you!
3/7/2004 4:59:18 AM
Hm... At first this method seemed like a very complicated way to do
what I was doing already. Then I realized that it is totally possible
to use it without access to a different filesystem, and without
writing any files to the current one! Very nice. Either as a script,
or interactively, or simply looking at small chunks to begin with
(and taking forever), it would locate the sought pattern.

Anyway, I just copied my 55Gb filesystem to another (blessedly empty)
250Gb filesystem, so I can work off of another machine. (The laptop in
question is a production system (I believe that's the word), and
beginning Monday, I'll have to start writing 2Gb sets of data over it.)
The splitting into 85MB files was because the filesystem won't take
files over 2Gb, but it comes in handy. It doesn't provide overlap,
though. In my case, that's OK, as I know that 'known pattern' occurred
more than once, but it is an important point.
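
For completeness, an untested way to get overlapping chunks out of
dd instead of split (the sizes are just the ones I'm using):

 # Carve the device into 85 MB pieces that overlap by 1 MB, so a
 # match can't be lost on a chunk boundary.  The loop stops at the
 # end of the device, leaving one final empty chunk to delete.
 i=0 skip=0
 while dd if=/dev/hda3 of=chunk.$i bs=1024k count=85 skip=$skip 2>/dev/null &&
       [ -s chunk.$i ]; do
     i=$((i+1))
     skip=$((skip+84))
 done
 grep -l 'text you remember' chunk.*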

Milica

floyd@barrow.com (Floyd L. Davidson) wrote in message news:<87brn9kb9w.fld@barrow.com>...
> positively_no_spam@yahoo.com (Milica Medved) wrote:
> >Right now I'm doing
> >
> >cat /dev/hda3 | split -b85000000 - bold-hda3-output.
> >
> >into a remote (huge) hard drive. This will give me about 26x26=676
> >files, 85 MB each. I plan to grep each for the text I know was in
> >my file:
> >
> >grep -H <knowntext> *
> >
> >and then examine the ones that come up positive. Short of splitting
> >these files into smaller ones (at 85MB, they are probably too big to
> >manage with vi etc.), does anybody have suggestions on how to do it
> >easily/intelligently/...?
> 
> That's the right idea.  The program you need to use is dd.
> You can do a binary search on the original file system without
> even using temporary files.
> 
> Use dd to grab chunks, and output to stdout piped to a grep
> command that will signal having found the text you are looking
> for.  You also want to overlap the chunks, just in case the
> search pattern happens to land right on a boundary, and gets cut
> in half.
> 
>  dd if=/dev/hda3 bs=1024k count=101 skip=0 | grep 'text'
> 
> That looks at the first 101 MB.  What you actually want to do
> though, is make it one block larger than half the size of the
> whole file system.  If that doesn't find it, make the skip value
> 1 less than half the size of the file system, and do it again.
> (My examples here are based on a 200 MB filesystem, and obviously
> you are going to multiply that by some significantly large
> number!)
> 
>  dd if=/dev/hda3 of=/tmp/chunk bs=1024k count=101 skip=99
> 
> saves a second chunk that overlaps the first one by two 1 MB
> blocks.  (With of= there is no output on stdout to pipe into
> grep, so grep /tmp/chunk itself.)  For each following chunk just
> add 100 to the skip value.
> 
> If it turns up in the second chunk, you then do a binary search
> on it,
> 
>  dd if=/tmp/chunk bs=1024k count=51 skip=0  | grep 'text'
>  dd if=/tmp/chunk bs=1024k count=51 skip=50 | grep 'text'
> 
> Keep dividing it down until you have some nice small, easy to manage
> size, and then redirect dd to a file (or use of=fname) and edit the
> file with an editor.
> 
> Hmmm... you could put this all into a script, fire it off and go do
> something useful for awhile.
3/7/2004 5:15:52 AM
I've actually had luck with my 'grep' method! I found several areas
over which my file is distributed. :)

Is there a command that will extract lines nnnnn to NNNNN from a file?
I did some apropos/man-ing, but didn't come up with anything.

Thanks,
Milica


positively_no_spam@yahoo.com (Milica Medved) wrote in message news:<6e7d0221.0403051618.6729ea82@posting.google.com>...
> On further examination, the swap file is too short to contain any
> information. It is the same size as what I get for an empty file. :(
> 
> I tried debugfs but that won't work, cause my file is not deleted,
> just changed to zero length.
> (BTW, debugfs only gave me a list of deleted inodes from two days ago,
> nothing from today or yesterday. What's up with that?)
> 
> I have resigned to scanning the hard drive for the remains of the
> file. :P
> 
> Right now I'm doing
> 
> cat /dev/hda3 | split -b85000000 - bold-hda3-output.
> 
> into a remote (huge) hard drive. This will give me about 26x26=676
> files, 85 MB each. I plan to grep each for the text I know was in
> my file:
> 
> grep -H <knowntext> *
> 
> and then examine the ones that come up positive. Short of splitting
> these files into smaller ones (at 85MB, they are probably too big to
> manage with vi etc.), does anybody have suggestions on how to do it
> easily/intelligently/...?
> 
> Thanks,
> Milica
3/7/2004 5:52:04 AM
On Sat, 06 Mar 2004 21:52:04 -0800, Milica Medved wrote:

> I've actually had luck with my 'grep' method! I found several areas
> over which my file is distributed. :)
> 
> Is there a command that will extract lines nnnnn to NNNNN from a file?
> I did some apropos/man-ing, but didn't come up with anything.

This will print lines 5 through 10 of a file:

sed -e '1,4d' -e '11,$d' file

and so will this:

tail -n +5 file | head -n 6

and so will this, assuming that "foo" appears on line 7 and nowhere else:

grep -A 3 -B 2 foo file

emurphy42 (1226)
3/7/2004 7:23:36 AM
On Sun, 07 Mar 2004 at 05:52 GMT, Milica Medved wrote:
> 
> Is there a command that will extract lines nnnnn to NNNNN from a file?
> I did some apropos/man-ing, but didn't come up with anything.

    There are various ways:

## these are used in all examples
first=13
last=69
FILE=/path/to/file

#1
sed -n -e "$(( $last + 1 )) q" -e "$first,$last p" "$FILE"

#2
awk "NR > $last {exit} NR >= $first && NR <= $last" "$FILE"

#3
n=0
{ while [ $(( ++n )) -lt $first ]; do read; done
  while [ $(( n++ )) -le $last ]
  do
    IFS= read -r line
    printf "%s\n" "$line"
  done
} < "$FILE"

#4
head -$last "$FILE" | tail -$(( $last - $first + 1 ))

#5
tail +$first "$FILE" | head -$(( $last - $first + 1 ))

-- 
    Chris F.A. Johnson                  http://cfaj.freeshell.org/shell
    ===================================================================
    My code (if any) in this post is copyright 2004, Chris F.A. Johnson
    and may be copied under the terms of the GNU General Public License
c.fa.johnson (292)
3/7/2004 7:52:50 AM
On 6 Mar 2004 21:52:04 -0800, Milica Medved <positively_no_spam@yahoo.com> wrote:
> 
> 
> I've actually had luck with my 'grep' method! I found several areas
> over which my file is distributed. :)
> 
> Is there a command that will extract lines nnnnn to NNNNN from a file?
> I did some apropos/man-ing, but didn't come up with anything.
> 
> Thanks,
> Milica
> 

[please don't top post]


sed -n '/pattern/,/pattern/p' file > outputfile

Must be a really important file....

I hope you are good with ed or vi. Seems like you have a lot of
interactive editing to do.

AC


-- 
ed(1) Check out the original tutorials by Brian W.
Kernighan at the Ed Home Page  http://tinyurl.com/2aa6g
zzzzzz (1966)
3/7/2004 8:28:16 AM
On Sun, 07 Mar 2004 08:28:16 +0000, Alan Connor wrote:

> On 6 Mar 2004 21:52:04 -0800, Milica Medved <positively_no_spam@yahoo.com> wrote:

>> Is there a command that will extract lines nnnnn to NNNNN from a file?
>> I did some apropos/man-ing, but didn't come up with anything.

> sed -n '/pattern/,/pattern/p' file > outputfile

Ah, I knew there was something better than my methods.  Note that
/pattern/ need not (cannot?) be a regexp, but instead indicates
where to insert the line numbers.  For instance, for lines 5 to 10,
inclusive:

sed -n '5,10p' file > outputfile

emurphy42 (1226)
3/7/2004 8:59:58 AM
On Sun, 07 Mar 2004 at 08:59 GMT, Ed Murphy wrote:
> On Sun, 07 Mar 2004 08:28:16 +0000, Alan Connor wrote:
> 
>> On 6 Mar 2004 21:52:04 -0800, Milica Medved <positively_no_spam@yahoo.com> wrote:
> 
>>> Is there a command that will extract lines nnnnn to NNNNN from a file?
>>> I did some apropos/man-ing, but didn't come up with anything.
> 
>> sed -n '/pattern/,/pattern/p' file > outputfile
> 
> Ah, I knew there was something better than my methods.  Note that
> /pattern/ need not (cannot?) be a regexp, but instead indicates
> where to insert the line numbers.  For instance, for lines 5 to 10,
> inclusive:
> 
> sed -n '5,10p' file > outputfile

     /pattern/ _is_ a regular expression, and, like a number, is an
     address to select lines. 

    "An address is either:

          a decimal number linecount, which is cumulative  across
          input files;

          a $, which addresses the last input line;

          or a context address, which is a  /regular  expression/
          in the style of ed(1);
    "



-- 
    Chris F.A. Johnson                  http://cfaj.freeshell.org/shell
    ===================================================================
    My code (if any) in this post is copyright 2004, Chris F.A. Johnson
    and may be copied under the terms of the GNU General Public License
c.fa.johnson (292)
3/7/2004 9:25:36 AM
Thanks everyone!
3/7/2004 7:02:36 PM