Need Help Live Upgrade and ZFS Issues

Need Help: Reverted to old BE with older version of ZFS cannot mount
rpool root or get back to new BE

Solaris 10 u9 on SPARC V490, 3 non-global zones.  I did the following:
I used the Live Upgrade method to install the latest 10_Recommended,
which worked with no errors.  Rebooted into the new BE with no
apparent problems.  The ZFS tools reported that my disk format was now
an outdated ZFS version, so I ran zpool upgrade rpool; zpool upgrade
dpool.  No errors.  Now my ZFS disk format is ZFS version 29.
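(For anyone reading later: the pool version can be checked before and
after such an upgrade.  A generic sketch, not output from the machine
in question:

     # zpool get version rpool    <- current on-disk version of the pool
     # zpool upgrade -v           <- versions this kernel's ZFS supports
     # zpool upgrade rpool        <- one-way: older kernels can no
                                     longer mount the pool afterward

As noted further down in this thread, the upgrade is irreversible.)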

In the process of troubleshooting a strange error that surfaced with
regard to zones -- zoneadm and zlogin both fail with a <segmentation
fault> core dump -- I decided to revert back to the original BE with
the luactivate command.  Rebooted, and the reboot fails with a panic:
the system cannot mount root -- rpool -- because the original BE is
still at an earlier ZFS version and not compatible with the newer ZFS
version 29 on-disk format.

NOTICE: zfs_parse_bootfs: error 48
Cannot mount root on rpool/51 fstype zfs
panic[cpu2]/thread=180e000: vfs_mountroot: cannot mount root

Obtained the latest Sol10-u10 ISO, burned a DVD, and booted from
cdrom, which works.  Then tried to implement the Solaris instructions
for reverting back to the previous BE in the event that your patched
BE fails or has serious problems.  Did a zpool import rpool and
basically the following:

**********************************************************************

The target boot environment has been activated. It will be used when
you reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin
commands. You MUST USE either the init or the shutdown command when
you reboot. If you do not use either init or shutdown, the system
will not boot using the target BE.

**********************************************************************

In case of a failure while booting to the target BE, the following
process needs to be followed to fallback to the currently working
boot environment:

1. Enter the PROM monitor (ok prompt).

2. Boot the machine to Single User mode using a different boot device
(like the Solaris Install CD or Network). Examples:

     At the PROM monitor (ok prompt):
     For boot to Solaris CD:  boot cdrom -s
     For boot to network:     boot net -s

3. Mount the Current boot environment root slice to some directory
(like /mnt). You can use the following commands in sequence to mount
the BE:

     zpool import rpool
     zfs inherit -r mountpoint rpool/ROOT/sol25Q10u9-patch-May12
     zfs set mountpoint=<mountpointName> rpool/ROOT/sol25Q10u9-patch-May12
     zfs mount rpool/ROOT/sol25Q10u9-patch-May12

4. Run <luactivate> utility without any arguments from the Parent
boot environment root slice, as shown below:

     <mountpointName>/sbin/luactivate

5. luactivate, activates the previous working boot environment and
indicates the result.

6. Exit Single User mode and reboot the machine.

**********************************************************************

Modifying boot archive service
Activation of boot environment <sol25Q10u9-baseline> successful.

*********************************************************************************

So one issue here is that in fact I do not want to activate the boot
environment <sol25Q10u9-baseline> but the NEW BE,
sol25Q10u9-patch-May12, and then proceed to work out my zone issues.
Alternatively, I need to figure out how to upgrade the original BE to
the new ZFS version without being able to boot it.

When I did the zpool import and tried to run luactivate or lustatus:

# /mnt/sbin/luactivate
luactivate: ERROR: Live Upgrade not installed properly (/etc/default/lu not found).
# cd /
# /mnt/sbin/luactivate
luactivate: ERROR: Live Upgrade not installed properly (/etc/default/lu not found).
# lustatus
lustatus: not found
# /mnt/sbin/lustatus
/mnt/sbin/lustatus: not found

Also tried:


# chroot /mnt /sbin/luactivate sol25Q10u9-patch-May12
df: Could not find mount point for /
ERROR: Unable to determine major and minor device numbers for boot device </dev/dsk/c3t0d0s0>.
ERROR: Unable to determine the configuration of the current boot environment <sol25Q10u9-patch-May12>.
swingboyla (12)
5/18/2012 10:29:07 PM
comp.unix.solaris

Paul Vanderhoof <swingboyla@gmail.com> writes:

>ran zpool upgrade rpool; zpool upgrade dpool.  No errors.  Now my ZFS
>disk format is ZFS version 29.


Unfortunately, zfs upgrade is a one-way step.  After doing so, you
cannot boot older BEs.

Casper
Casper.Dik2 (318)
5/19/2012 2:58:47 PM
[cross-posted to comp.sys.sun.admin]
In article <b9ed635d-cc80-48da-a14f-48fdbb449641@p1g2000vbv.googlegroups.com>,
Paul Vanderhoof  <swingboyla@gmail.com> wrote:
># chroot /mnt /sbin/luactivate sol25Q10u9-patch-May12
>df: Could not find mount point for /
>ERROR: Unable to determine major and minor device numbers for boot
>device </dev/dsk/c3t0d0s0>.
>ERROR: Unable to determine the configuration of the current boot
>environment <sol25Q10u9-patch-May12>.

That looks like a potential RFE: S10's lu(1M) and friends should
be able to work against a client root path.

Let us know how Oracle Support helps you rescue your patched S10
BE from the latest installation media.

John
groenveld@acm.org
groenvel (550)
5/21/2012 12:08:27 PM
On May 18, 4:29 pm, Paul Vanderhoof <swingbo...@gmail.com> wrote:
> [original post quoted in full; snipped]

I'm not sure how much progress you have made on unraveling this
problem, but here are some ideas:

1. You need to find out why your zones are unhappy. LU supports a
limited set of zone configurations, which are described here:

http://docs.oracle.com/cd/E23823_01/html/819-5461/ggpdm.html#gigek

My concern is that if your zone config isn't supported by LU, you
usually find out during the LU phase, not from zone login core dumps
post-LU. I've never seen this one, so I would make sure your zones
are operational prior to LU.

An upcoming Solaris 10 release includes an LU pre-flight checker that
will help determine whether your zones are supported. A workaround is
to detach your zones before the LU and then reattach them after the
LU. I haven't tried this myself.
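(The detach/reattach workaround might look roughly like the following
sketch -- untested, and "myzone" is a placeholder zone name:

     # zoneadm -z myzone halt
     # zoneadm -z myzone detach
       ... run the Live Upgrade (lucreate/luupgrade/luactivate),
           then init 6 into the new BE ...
     # zoneadm -z myzone attach -u    <- -u updates the zone's
                                         packages to match the new BE
     # zoneadm -z myzone boot

The zone's zonepath must survive the upgrade intact for the attach to
work.)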

2. The on-screen LU recovery steps don't work, based on a discussion
on this group, and I filed a CR that is fixed in an upcoming Solaris
10 release.

3. I hope you can start over at a known good state. If that's not an
option, then you could recreate a root pool at the version you need
by using the version option when the new root pool is created. Then
point LU at the new pool.

These steps are documented in the above pointer.
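(Creating a pool at an explicit older version might look like this
sketch -- the disk name, pool name, and version number are
placeholders; check `zpool upgrade -v` on the target release for the
version its kernel supports:

     # zpool create -o version=22 rpool2 c3t1d0s0
     # zpool get version rpool2

A pool created this way stays mountable by the older BE's kernel
until it is explicitly upgraded.)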

4. You should be able to import your existing root pool from the
matching media version and mount the root BE. You can't run the LU
suite of commands in an alternate boot mode.

Thanks,

Cindy



5/21/2012 3:37:41 PM
On May 21, 8:37 am, cindy <cindy.swearin...@oracle.com> wrote:
> [quoted text snipped]

Thanks for your help and reply.  I will read the link you posted.
Here is what I have managed since posting.

From a tip on another list, I was able to use boot -L (and
subsequently boot -Z) from the ok prompt to see and choose the BE I
wanted to boot into.  Chose the newest patched BE, with kernel
support for the correct version of ZFS.  This worked and I was able
to boot the system and mount my ZFS file systems.  Zones auto-started
and were running, but I still had the segmentation fault / core dump
from the zoneadm and zlogin commands.  Other weirdness was apparent,
and also very surprising: I found LU had moved my zone dirs from
/export/zones to /zoneds and changed the zone config to match the new
path.  The old zone path / dirs were still there, but renamed to
something_oldBE_name.  I checked lustatus and it tells me that my
old, original, unpatched BE is the active BE.  I ran luactivate and
set the active BE to the new patched BE; that succeeded with no
errors, and lustatus confirmed it.

When I tried to shutdown / init 6 reboot, I was right back where I
started, with error 48 and impossible-to-mount file systems.  Used
boot -Z new_BE again and booted back into my new_BE with all ZFS file
systems mounted.  Decided to try to lucreate a new BE named
new_BE_cpy, which succeeded, and then luactivate to that, which also
succeeded.  The logic of this was that, if successful, this process
might fix / rewrite the boot config / information that was screwed up
and kept giving me the same error 48 (bad ZFS version).  Again
rebooted, and this time booted successfully, without error, into the
newly created BE.  zoneadm list works, no segmentation fault / core
dump, but revealed no non-global (sparse) zones running.  Attempted
zoneadm -z my_zone1 boot and this fails with errors.  I rechecked the
zone config to verify the path and then checked that the path was
valid, existed, and that my zones were actually there.  Found that
/zoneds was still there, but no zone dirs or files / data, etc. were
there (they were there under the new_BE).  Now perplexed as to the
next step.  Researching whether I can:

A) copy zone dirs / data / files from the original BE location
/export/zones (which still exists), or
B) use some kind of zone export command (but the current working BE
shows no zone dirs / data / files or running zones), or
C) use some other method (tape restore) to get my zones dir / data /
files into the /zoneds location, and then
D) patch the zones separately using patchadd or the 10_recommended
install script, in some manner whose correct syntax I am not
presently sure of, or
E) somehow patch the original, old BE (lumake? run the 10_recommended
install script or PCA against an alternate root after lumounting the
old BE?), patch the zones under the old_BE, attempt to luactivate
that, and if all is normal then delete the other new_BE (keeping my
new_BE_cpy just in case).

Ok -- so this is where I am at.  The current BE seems normal and
working, but my zones are broken.  It looks like I can live
comfortably with new_BE_cpy if I can get my zones back / running
without any errors from the zoneadm and zlogin, etc. commands -- but
how exactly?
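(For anyone following along, the boot -L / boot -Z dance at the OBP
looks roughly like this sketch -- the dataset name is this thread's;
yours will differ:

     ok boot -L
       ... prints a numbered list of bootable datasets in the root
           pool; selecting one shows the string to pass to boot -Z ...
     ok boot -Z rpool/ROOT/sol25Q10u9-patch-May12

As Cindy notes elsewhere in the thread, this only selects the BE for
one boot; it does not activate it.)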

Thanks

Paul
swingboyla (12)
5/22/2012 6:39:01 PM
On May 22, 12:39 pm, Paul Vanderhoof <swingbo...@gmail.com> wrote:
> [quoted text snipped]

Hi Paul,

I can't follow everything in the description above, but a couple of
comments:

1. LU inserts the zone zoneds file system names when needed so it
can support your zones. If you remove them, they will break LU.

2. boot -L allows you to select the BE to boot from, but it does not
activate the BE for you. You would still need to luactivate the
working BE.

3. I think you would benefit from reviewing the doc pointer I
provided. My opinion is that stuff should work without having to read
the docs, and I write docs for a living, but for LU + zones, it's a
must.

4. I see we are offering a zones pre-flight checker for migration
purposes. I'm hoping you can download this script and it will tell
you what's wrong with your zones.

https://blogs.oracle.com/listey/entry/oracle_solaris_zones_preflight_system

Thanks,

Cindy
5/22/2012 7:07:10 PM
In article <4066b68e-4017-4dd6-a2e7-8f0f537d5d0c@3g2000vbx.googlegroups.com>,
Paul Vanderhoof  <swingboyla@gmail.com> wrote:
>Ok -- so this is where I am at.  Current BE seems normal and working
>but my zones are broken.  It looks like I can live comfortably with
>the new_BE_cpy if I can get my zones back / running without any errors
>or errors from zoneadm and zlogin, etc commands -- but how exactly.

What does Martin Paul's PCA report as missing patches in your current
working BE?

John
groenveld@acm.org
groenvel (550)
5/22/2012 7:20:02 PM
In article <4066b68e-4017-4dd6-a2e7-8f0f537d5d0c@3g2000vbx.googlegroups.com>,
Paul Vanderhoof  <swingboyla@gmail.com> wrote:
>boot into.  Chose the newest patched BE with kernel support for the
>correct version of ZFS.  This worked and I was able to boot the system
>and mount my ZFS file systems. Zones auto started and were running but
>I still had the segmentation fault / core dump from the zoneadm and
>zlogin commands.  Other weirdness was apparent, and also very
>surprising, found LU had moved my zone dirs from /export/zones to /
>zoneds and changed the zone config.to match the new path.  Old zone

What are your ZFS and zone configurations?
# zpool list
# zfs list
# mount
# zoneadm list -cv
# for zone in `zoneadm list -p | awk -F: '{print $2}' | grep -v global`
> do
> echo "# $zone"
> zonecfg -z $zone export
> done


John
groenveld@acm.org
0
groenvel (550)
5/22/2012 7:38:57 PM
On May 22, 12:07 pm, cindy <cindy.swearin...@oracle.com> wrote:

>
> Hi Paul,
>
> I can't follow everything in description above, but a couple of
> comments:
>
> 1. LU inserts the zone zoneds file system names when needed so it
> can support your zones. If you remove them, they will break LU.

Did not remove them: when I created another BE from the patched
new_BE, then activated it with luactivate, then rebooted
into the newly created BE, I found the zone dirs had disappeared.
Basically I have 3 BE old_BE (u09 not patched), new_BE (patched with
latest 10_Recommended and shows as u10), and new_BE_cpy, a BE created
after I had successfully boot -Z booted into my patched new_BE.

I have not tried to revert back to new_BE to see if the zone dirs
reappear in the /zoneds path.
>
> 2. The boot -L allows you to select the BE to from but it does not
> activate
> the BE for you. You would still need to luactivate the working BE.

Per my description above I have done exactly that -- and it really did
not work.  On reboot I was back into the non-bootable situation from
which I had started, with an error 48 (wrong ZFS version) and
unmountable file systems.  My next step was to once again use boot -Z
to get back to a booted running state with new_BE, and then I created
a new BE called new_BE_cpy, activated that with luactivate, and
rebooted.  At this point I had a system that was successfully booting
into a patched u10 system with all ZFS volumes and file systems
mounted.  This is where I am now.  How can I get my zones back?
There are populated zone dirs under the old zone path "/export/zones"
related to the original, un-patched old_BE.
>
> 3. I think you would benefit from reviewing the doc pointer I
> provided.
> My opinion is that stuff should work without having to read the docs
> and I write
> docs for a living, but for LU + zones, its a must.
I have read tons already, have googled tons more, and will try to
read everything I can get my hands on -- but it would be super nice
if someone recognized the situation I face and had a ready answer.
>
> 4. I see we are offering a zones pre-flight checker for migration
> purposes.
> I'm hoping you can download this script and it will tell you what's
> wrong with your
> zones.
>
> https://blogs.oracle.com/listey/entry/oracle_solaris_zones_preflight_...
>
I did know about the preflight and downloaded it, but have not had
time to study it.  From what I have read so far, and what is
indicated on the link you posted, the purpose of this tool is to
evaluate migrating an environment to a zone -- not a check to see /
fix what is wrong with an existing zone.

> Thanks,
>
> Cindy
Appreciate your help and comments.

Paul
0
swingboyla (12)
5/22/2012 7:47:21 PM
On May 22, 12:20 pm, groen...@cse.psu.edu (John D Groenveld) wrote:
> In article <4066b68e-4017-4dd6-a2e7-8f0f537d5...@3g2000vbx.googlegroups.com>,
> Paul Vanderhoof  <swingbo...@gmail.com> wrote:
>
> >Ok -- so this is where I am at.  Current BE seems normal and working
> >but my zones are broken.  It looks like I can live comfortably with
> >the new_BE_cpy if I can get my zones back / running without any errors
> >or errors from zoneadm and zlogin, etc commands -- but how exactly.
>
> What does Martin Paul's PCA report as missing patches in your current
> working BE?
>
> John
> groenv...@acm.org

Haven't used PCA much as management decided not to
renew the Oracle support contract, and I cannot connect PCA with
Oracle to check.  We have some kind of corp deal with Oracle
to get patches, but as corp seems to want to migrate away from
Solaris to Linux (probably a 5 year project at best) they do not
want to pay for T-support.

Other suggestions?

Thanks for your help.

Paul
0
swingboyla (12)
5/22/2012 7:50:23 PM
In article <4d8d25fd-9f2c-4a23-af54-76fb5eb9f955@v24g2000vbx.googlegroups.com>,
Paul Vanderhoof  <swingboyla@gmail.com> wrote:
>Haven't use PCA much as management decided not to
>renew Oracle support contract and cannot connect PCA with
>Oracle to check.  We have some kind of corp deal with Oracle
>to get patches, but as corp seems to want to migrate away from
>Solaris to Linux (probably a 5 year project at best) they do not
>want to pay for T-support.
>
>Other suggestions?

What are these outputs:
# uname -a
# pkginfo -l
# showrev -p
  
John
groenveld@acm.org
0
groenvel (550)
5/22/2012 8:02:46 PM
On May 22, 1:47 pm, Paul Vanderhoof <swingbo...@gmail.com> wrote:
> On May 22, 12:07 pm, cindy <cindy.swearin...@oracle.com> wrote:
> [...]
> > Thanks,
>
> > Cindy
> > http://docs.oracle.com/cd/E23824_01/html/821-1448/gbchy.html#scrolltoc
>
> Appreciate your help and comments.
>
> Paul

A few comments. I'm the one who needs to improve my reading...

1. I think you are saying that you recovered by using boot -L and
attempted to activate your known good BE, but after the activation,
the system booted from the bad BE anyway? Is this right?

The original system is s10u9, right? I see some older CRs about
having ZFS file systems mounted inside your NGZ with the legacy mount
option causing the activation of the BE to fail, but they are fixed
in s10u8.

2. I think your assessment of the zones pre-flight checker is correct
and mine was wrong. I apologize.

3. Troubleshooting your zone config is harder. I think you will need
to describe your zones using the supported zones config info I sent
you, such as "my zoneroot is /rpool/abc and I do not have any nested
zone paths" and so on.

Thanks,

Cindy
0
5/22/2012 8:42:17 PM
> A few comments. I'm the one who needs to improve my reading...
>
> 1. I think you are saying that you recovered by using boot -L and
> attempted to activate your known good BE, but after the activation,
> the system booted from the bad BE anyway? Is this right?
Yes -- correct.

>
> The original system is s10u9, right? I see some older CRs about having
> ZFS file systems mounted inside your NGZ with the legacy mount option
> causing the activation of the BE to fail, but they are fixed in s10u8.
>
Yes -- my initial install was s10u9.

> 2. I think your assessment of the zones flight checker is correct and
> mine was
> wrong. I apologize.
>
> 3. Troubleshooting your zone config is harder. I think you will need
> to describe your zones using the supported zones config info I sent
> you, such as my zoneroot is /rpool/abc and I do not have any nested
> zone paths and so on.
>
I am working some other ideas on how to resolve this, as well as
ideas contributed here.  My corp policy severely limits how much
identifiable information from our IT systems can be made public.  I
will try to provide a "the names have been changed to protect the
innocent" version of all relevant config info, and the same with
regard to poster John's info requests.  At this point I am looking to
try a zone detach / reattach "update on attach" method of getting the
zones straightened out.  More details later.

Paul
> Thanks,
>
> Cindy
0
swingboyla (12)
5/23/2012 1:19:51 AM
> [...]

A few more points to that above:

-- after boot -L to boot into the patched new_BE I had running zones,
but the zone path had been changed to /zoneds.  My zoneadm, zlogin
commands, etc, still resulted in <segmentation fault> core dumped.
However, I was able to ssh into the zones normally from outside, and
everything seemed to function normally within the zone.

-- after again using boot -L to boot into the patched new_BE, I tried
to create another new BE and activate it -- let's call it new_BE_cpy --
on the premise that doing so would "fix" and "straighten out" the
weird scenario where I kept booting into the "bad" BE.  This has
actually worked, and my system now boots cleanly into the 3rd,
new_BE_cpy BE without any errors.  However, at this point, I have no
zone dirs, files, data, etc under /zoneds, but all my original zone
dirs, files, data, etc, still exist under /export/zones, which is
where I created them initially with the first, unpatched, s10u9
install.

-- I plan to copy those zone dirs from /export/zones, check that my
relevant /etc/zones config xml stuff is still there in the new_BE_cpy
BE, and then try a zoneadm -z xxx attach -U option.  This is talked
about in the Zone Migration link, and I will be reading over this
carefully tonight.

-- before patching my s10u9 system with the latest 10_Recommended I
did not have any issues with the zones or the zoneadm commands, and I
did not have any <segmentation fault> core dumped.  This appeared
only after my apparently successful patch install and reboot into the
(patched) new_BE.
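A sketch of the copy-plus-attach plan above (the zone name xxx and the /zoneds path are placeholders from this thread; attach -U is the full update-on-attach added in s10u9, see zoneadm(1M)):

```shell
# Halt and detach the zone while its config is still known:
zoneadm -z xxx halt
zoneadm -z xxx detach

# Copy the populated zone root to the path the current BE expects
# (cpio -pdm preserves modes and directory structure):
cd /export/zones/xxx && find . -depth -print | cpio -pdm /zoneds/xxx

# Point the config at the new zonepath, then update-on-attach and boot:
zonecfg -z xxx "set zonepath=/zoneds/xxx"
zoneadm -z xxx attach -U
zoneadm -z xxx boot
```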

Thanks

Paul
0
swingboyla (12)
5/23/2012 1:38:15 AM
On May 22, 7:38 pm, Paul Vanderhoof <swingbo...@gmail.com> wrote:
> [...]

Sounds like you are making good progress. Yesterday morning, I talked
to the s10 install support manager about your zone login core dumping
after the LU, and he had not heard of this either, so we're really
stumped. I'll try again with one of the LU guys who did a lot of
zones work.

Just to recap:

1. A system running 10u9 with zones that are completely functional.
The zone configs verify, and the zones halt and reboot without error.

2. During the LU to s10u10 migration process, no zone-related error
messages occur.

3. After the LU and the system reboots successfully, the zone login
core dumps.

Thanks,

Cindy
0
5/23/2012 3:26:51 AM
Paul Vanderhoof wrote:
> On May 22, 12:20 pm, groen...@cse.psu.edu (John D Groenveld) wrote:
>> What does Martin Paul's PCA report as missing patches in your current
>> working BE?
> 
> Haven't used PCA much as management decided not to
> renew the Oracle support contract, and I cannot connect PCA with
> Oracle to check.

You don't need a support contract (or a "My Oracle Support" account) if you want 
to use PCA just to check for missing patches ...

hth,
Martin.
-- 
SysAdmin | Research Group Scientific Computing - University of Vienna
      PCA | Analyze, download and install patches for Solaris
          | http://www.par.univie.ac.at/solaris/pca/
0
5/23/2012 6:44:35 AM
On May 22, 8:26 pm, cindy swearingen <cindy.swearin...@gmail.com>
wrote:
> On May 22, 7:38 pm, Paul Vanderhoof <swingbo...@gmail.com> wrote:
> [...]
>
> Sounds like you are making good progress. Yesterday morning. I talked
> to the s10 install support
> manager about your zone login core dumping after the LU and he had not
> heard of this
> either so we're really stumped. I'll try again with one of the LU guys
> who did a lot of zones
> work.
>
> Just to recap:
>
> 1. A system running 10u9 with zones that are completely functional.
> The zone configs
> verify and halt and reboot without error.
>
> 2. During the LU to s10u10 migration process, no zone-related error
> messages occur.
>
> 3. After the LU and the system reboots successfully, the zone login
> core dumps.
>
> Thanks,
>
> Cindy

All the above is correct except for one mistake on my part, in that I
did not LU to s10u10, but did an LU to the latest 10_Recommended.
cat /etc/release still shows me at Oracle Solaris 10 9/10
s10s_u9wos_14a SPARC.  I apologize for mistakenly assuming that the
10_Recommended was taking my system to the equivalent of s10u10.  If
I can return this system to a stable reliable state I would like to
do an LU to s10u10, but I have never previously attempted a full
version LU upgrade.
0
swingboyla (12)
5/23/2012 4:51:35 PM
On May 22, 11:44 pm, Martin Paul <martin.use...@diepauls.at> wrote:
> Paul Vanderhoof wrote:
> > On May 22, 12:20 pm, groen...@cse.psu.edu (John D Groenveld) wrote:
> >> What does Martin Paul's PCA report as missing patches in your current
> >> working BE?
>
> > Haven't use PCA much as management decided not to
> > renew Oracle support contract and cannot connect PCA with
> > Oracle to check.
>
> You don't need a support contract (or a "My Oracle Support" account) if you want
> to use PCA just to check for missing patches ...
>
> hth,
> Martin.
> --
> SysAdmin | Research Group Scientific Computing - University of Vienna
>       PCA | Analyze, download and install patches for Solaris
>           | http://www.par.univie.ac.at/solaris/pca/

Thank you for that info.  I have used PCA some in the past, but not
for a while now.  I will revisit the PCA process and syntax and see
what it tells me.
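For reference, the check Martin describes needs no support login; a minimal sketch (operands per the PCA docs -- only downloading the patches themselves requires MOS credentials):

```shell
# pca fetches the free patchdiag.xref cross-reference on first run:
./pca missing        # report patches missing on this system
./pca -l missing     # same report, with the list operation spelled out
```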
0
swingboyla (12)
5/23/2012 4:52:35 PM
On May 23, 10:51 am, Paul Vanderhoof <swingbo...@gmail.com> wrote:
> [...]
> All the above is correct except for one mistake on my part, in that I
> did not LU to s10u10, but did an LU to the latest 10_Recommended.
> cat /etc/release still shows me at Oracle Solaris 10 9/10
> s10s_u9wos_14a SPARC.  I apologize for mistakenly assuming that the
> 10_Recommended was taking my system to the equivalent of s10u10.  If
> I can return this system to a stable reliable state I would like to
> do an LU to s10u10, but I have never previously attempted a full
> version LU upgrade.

Could be I misunderstood. Diagnosis by email can be pretty painful,
particularly if I have to read a lot of text.

I think there might be some overlapping problems here:

1. If your new patched BE (the ABE) is in the same pool as the
current BE (PBE), then the onscreen instructions fail. This is CR
6996301, fixed recently.

2. I believe your BEs are ZFS file systems.

3. Are the zone roots also ZFS, and are they in the root pool or a
separate pool?

4. Another long-time s10 install support engineer suggests that you
detach the zones, upgrade your BE with the patch set, and then
re-attach and update your zones.

I haven't talked to the guys who have worked on LU/zones issues
recently, but the answer to question #3 will help.
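To answer #3 concretely, something like the following shows which pool holds the BEs and where the zone roots live (the zone name xxx is a placeholder):

```shell
zpool list                              # all pools on the system
zfs list -o name,mountpoint -r rpool    # datasets in the root pool
zonecfg -z xxx info zonepath            # each zone's configured root
```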

Thanks,

Cindy
0
5/23/2012 7:25:12 PM
On May 23, 12:25 pm, cindy <cindy.swearin...@oracle.com> wrote:
> On May 23, 10:51 am, Paul Vanderhoof <swingbo...@gmail.com> wrote:
> [...]
>
> Could be I misunderstood. Diagnosis by email can be pretty painful,
> particularly
> if I have to read a lot of text.
>
> I think there might some overlapping problems here:
>
> 1. If your new patched BE (the ABE) is in the same pool as the
> current BE (PBE), then the onscreen instructions fail. This is CR
> 6996301, fixed recently.

Yes, the new patched BE is in the same pool as the current BE.
>
> 2. I believe your BEs are ZFS file systems.
Yes, all BEs live on ZFS file systems.
>
> 3. Are the zone roots also ZFS and are they in the root pool or a
> separate pool?

Zone roots are ZFS and in the root pool.  File systems are
lofs-mounted from the root pool and the data pool.

>
> 4. Another long-time s10 install support engineer suggests the detach,
> upgrade your
> BE with the patch set, and re-attach and update your zones.

Trying this, but so far any zoneadm or zonecfg command that references
the zones fails with <segmentation fault> core dumped.  I tried to
copy the existing zone data from /export/zones/xxxx_name of the old
BE (LU renamed the original zone dirs) to the /zoneds/"zonename" dir
and then run zoneadm -z zonename attach -U, and this fails with
<segmentation fault> core dump.  Now going to try creating a new zone
and then copying my data from the /export/zones/xxx location.

I also brought the system to single user from >ok and ran the
10_Recommended "installpatchset" script, and the process showed all
patches skipped -- so everything was up to date.
>
> I haven't talked to the guys who have worked on LU/zones issues
> recently, but
> the answer to question #3 will help.
>
> Thanks,
>
> Cindy

Thanks for your help.
swingboyla (12)
5/23/2012 11:11:00 PM
On May 23, 4:11 pm, Paul Vanderhoof <swingbo...@gmail.com> wrote:
> [quoted text snipped]

So all my attempts to attach a zone fail with <segmentation
fault> (core dump).
zoneadm -z my_zone attach -U fails, and zonecfg -z my_zone create -a
/zoneds/xxxx fails.
Next I tried to configure and create an entirely new zone.  The result
of zonecfg -z my_new_zone is:

zone1: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:zone1> create
Segmentation Fault(coredump)

So I am now at a standstill.  Do I try ./installpatchset -R
/zoneds/my_zone and then try
to zoneadm -z my_zone attach -U ?
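[Editor's note: before any attach attempt, it is worth confirming the zone
still has an entry in the zones index and its XML config in the target BE.
A minimal sketch of that pre-attach check follows; the paths under /tmp and
the index line format are stand-ins for the demo, and on a real system the
files live under /etc/zones (index and <zonename>.xml):]

```shell
# Stand-in zones directory so the example is self-contained.
zonename="my_zone"
etc_zones=/tmp/etc/zones
mkdir -p "$etc_zones"
# Fake an index entry and an (empty) XML config for the demo.
printf '%s:installed:/zoneds/%s\n' "$zonename" "$zonename" > "$etc_zones/index"
: > "$etc_zones/$zonename.xml"
# The actual check: index entry present AND config XML present.
if grep -q "^$zonename:" "$etc_zones/index" && [ -f "$etc_zones/$zonename.xml" ]; then
  echo "config present for $zonename"
else
  echo "config missing for $zonename"
fi
```

If either piece is missing in the real /etc/zones, attach has nothing to
work from, which would be worth knowing before chasing the core dumps.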
0
swingboyla (12)
5/23/2012 11:46:54 PM
In article <50118950-ab50-43a8-87cb-039c7d50ed29@t2g2000pbl.googlegroups.com>,
Paul Vanderhoof  <swingboyla@gmail.com> wrote:
>Next I tried to configure and create an entirely new zone.  The result
>of zonecfg -z my_new_zone is:
>
>zone1: No such zone configured
>Use 'create' to begin configuring a new zone.
>zonecfg:zone1> create
>Segmentation Fault(coredump)

You've proven that your current BE is broken.
What is the output of lustatus(1M)?

>So I am now at a standstill.  Do I try ./installpatchset -R
>/zoneds/my_zone and then try
>to zoneadm -z my_zone attach -U ?

No.
What is the checksum for your 10_Recommended.zip?
$ /usr/sfw/bin/openssl md5 10_Recommended.zip
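[Editor's note: the point of the checksum is to rule out a corrupt download
before unpacking. A hedged sketch of the comparison follows; the expected
digest is a placeholder (the real value comes from the MOS download page),
an empty stand-in file is used so the example runs anywhere, and md5sum
stands in for /usr/sfw/bin/openssl where the latter is absent -- both print
the same MD5 digest:]

```shell
# Placeholder digest for the demo, NOT the real cluster checksum.
expected="d41d8cd98f00b204e9800998ecf8427e"
# Empty stand-in file in place of the real 10_Recommended.zip.
: > /tmp/10_Recommended.zip
# Compute and compare; only unzip on a match.
actual=$(md5sum /tmp/10_Recommended.zip | awk '{print $1}')
if [ "$actual" = "$expected" ]; then
  echo "checksum OK - safe to unzip"
else
  echo "checksum MISMATCH - re-download before patching"
fi
```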

John
groenveld@acm.org
groenvel (550)
5/24/2012 7:39:19 AM
On May 23, 5:46 pm, Paul Vanderhoof <swingbo...@gmail.com> wrote:
> [quoted text snipped]

I'm checking with someone more knowledgeable about LU and zones.
It would be helpful to get some info from the zone-related core, such
as the stack info, like this:

# mdb core
Loading modules:
> ::stack
> <Control-d>

Thanks,

Cindy
5/24/2012 7:53:41 PM
On May 24, 8:53 pm, cindy <cindy.swearin...@oracle.com> wrote:
> [quoted text snipped]

Hi
So if I am reading this thread correctly you applied the recommended
patch cluster.  If so, can I see
/var/sadm/install_data/*verbose*.log

Check in /var/tmp for any files relating to patching ( ie
*??????-??*.log )

pkginfo -p

run mdb on the core and give

::status
$C

But you need to somehow upload an explorer and the log files from
patching ( the *verbose* logs mentioned above ).

If this is not possible you need to closely examine the log file for
things like
egrep -i "error|fail|var/tmp|fatal" /var/sadm/install_data/*verbose*.log

in /var/sadm/patch

grep "Re-installing Patch" */log
egrep -i "fail|fatal|pkgadd|pkginstall|postinstall|checkinstall|preinstall|could not|test|mv|cp" */log

In the above egrep output, be aware that an exit code of 2 from compress
is expected in some cases ( it just means that the compressed file would
be larger than uncompressed ), so ignore errors from compress.
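[Editor's note: the grep-based triage above can be exercised against a
stand-in log; the sample lines below are invented for the demo, and on a
real system the logs live under /var/sadm/install_data and /var/sadm/patch:]

```shell
# Build a stand-in verbose install log with one injected failure line.
mkdir -p /tmp/patchlogs
cat > /tmp/patchlogs/verbose.log <<'EOF'
Installing patch 123456-01 ... ok
pkgadd: ERROR: postinstall script did not complete successfully
Installing patch 123457-02 ... ok
EOF
# Same pattern suggested above for spotting trouble in the install log;
# only the ERROR line should come back.
egrep -i "error|fail|var/tmp|fatal" /tmp/patchlogs/verbose.log
```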

But a service contract would be good right about now, ie upload an
explorer etc.
For explorer info and where to grab it from MOS, read doc id 1153444.1
on supporthtml.oracle.com

Enda
aleahy4 (1)
5/24/2012 9:33:36 PM
On May 24, 2:33 pm, Ann Marie leahy <alea...@gmail.com> wrote:
> On May 24, 8:53=A0pm, cindy <cindy.swearin...@oracle.com> wrote:
>
>
>
> > On May 23, 5:46=A0pm, Paul Vanderhoof <swingbo...@gmail.com> wrote:
>
> > > On May 23, 4:11=A0pm, Paul Vanderhoof <swingbo...@gmail.com> wrote:
>
> > > > On May 23, 12:25=A0pm, cindy <cindy.swearin...@oracle.com> wrote:
>
> > > > > On May 23, 10:51=A0am, Paul Vanderhoof <swingbo...@gmail.com> wro=
te:
>
> > > > > > On May 22, 8:26=A0pm, cindy swearingen <cindy.swearin...@gmail.=
com>
> > > > > > wrote:
>
> > > > > > > On May 22, 7:38=A0pm, Paul Vanderhoof <swingbo...@gmail.com> =
wrote:
>
> > > > > > > > > A few comments. I'm the one who needs to improve my readi=
ng...
>
> > > > > > > > > I. I think you are saying that you recovered by using boo=
t -L and
> > > > > > > > > attempted to
> > > > > > > > > activate your good know BE, but after the activation, the=
 system
> > > > > > > > > booted from
> > > > > > > > > the bad BE any way? Is this right?
>
> > > > > > > > > The original system is s10u9, right? I see some older CRs=
 about having
> > > > > > > > > ZFS
> > > > > > > > > file systems mounted inside your NGZ with legacy mount op=
tion causes
> > > > > > > > > the activation of the BE to fail but they are fixed in s1=
0u8.
>
> > > > > > > > > 2. I think your assessment of the zones flight checker is=
 correct and
> > > > > > > > > mine was
> > > > > > > > > wrong. I apologize.
>
> > > > > > > > > 3. Troubleshooting your zone config is harder. I think yo=
u will need
> > > > > > > > > to describe your zones using the
> > > > > > > > > supported zones config info I sent you, such as my zonero=
ot is /rpool/
> > > > > > > > > abc and I do not have
> > > > > > > > > any nested zone paths and so son.
>
> > > > > > > > > Thanks,
>
> > > > > > > > > Cindy
>
> > > > > > > > A few more points to that above:
>
> > > > > > > > -- after boot -L to boot into the patched new_BE i had runn=
ing zones,
> > > > > > > > but the zone path had been changed to /zoneds.
> > > > > > > > My zoneadm, zlogin commands, etc, still resulted in <segmen=
tation
> > > > > > > > fault> core dumped. =A0However, I was able to ssh
> > > > > > > > into the zones normally from outside and everything seemed =
to function
> > > > > > > > normally within the zone.
>
> > > > > > > > -- after again using boot -L to boot into the patched new_B=
E, I tried
> > > > > > > > to
> > > > > > > > create another new BE and activate it =A0-- lets call it ne=
w_BE_cpy --
> > > > > > > > on the premise that doing so would "fix" and "straighten
> > > > > > > > out" the weird scenario where I kept booting into the "bad"=
 BE. =A0This
> > > > > > > > has actually worked and my system now boots cleanly
> > > > > > > > into the 3rd, new_BE_cpy BE without any errors. =A0However,=
 at this
> > > > > > > > point, I have no zone dirs, files, data, etc under /zoneds,
> > > > > > > > but all my original zone dirs, files, data, etc, still exis=
ted under /
> > > > > > > > export/zones, which is where I created them initially with =
the
> > > > > > > > first, unpatched, s10u9 install.
>
> > > > > > > > -- I plan to try to copy those zone dirs from /export/zones=
, check
> > > > > > > > that my relevant /etc/zones config xml stuff is still there=
 in
> > > > > > > > the new_BE_cpy BE, and then try a zoneadm -z xxx attach -U =
option.
> > > > > > > > This is talked about in the Zone Migration link and I
> > > > > > > > will be reading over this carefully tonight.
>
> > > > > > > > -- before patching my s10u9 system with the latest 10_Recom=
mended I
> > > > > > > > did have any issues with the zones or the zoneadm
> > > > > > > > commands and I did not have any <sementation fault> core du=
mped. =A0This
> > > > > > > > appeared only after my apparently successful
> > > > > > > > patch install and reboot into the (patched) new_BE.
>
> > > > > > > > Thanks
>
> > > > > > > > Paul
>
> > > > > > > Sounds like you are making good progress. Yesterday morning. =
I talked
> > > > > > > to the s10 install support
> > > > > > > manager about your zone login core dumping after the LU and h=
e had not
> > > > > > > heard of this
> > > > > > > either so we're really stumped. I'll try again with one of th=
e LU guys
> > > > > > > who did a lot of zones
> > > > > > > work.
>
> > > > > > > Just to recap:
>
> > > > > > > 1. A system running 10u9 with zones that are completely funct=
ional.
> > > > > > > The zone configs
> > > > > > > verify and halt and reboot without error.
>
> > > > > > > 2. During the LU to s10u10 migration process, no zone-related=
 error
> > > > > > > messages occur.
>
> > > > > > > 3. After the LU and the system reboots successfully, the zone=
 login
> > > > > > > core dumps.
>
> > > > > > > Thanks,
>
> > > > > > > Cindy
>
> > > > > > All the above is correct except for one mistake on my part in t=
hat I
> > > > > > did not LU to s10u10, but did an LU to latest
> > > > > > 10_Recommended. =A0cat /etc/release still shows me at Oracle So=
laris 10
> > > > > > 9/10 s10s_u9wos_14a SPARC. =A0I apologize
> > > > > > for mistakenly assuming that the 10_Recommended was taking my s=
ystem
> > > > > > to the equivalent of s10u10. =A0If I can
> > > > > > return this system to a stable reliable state I would like to d=
o an LU
> > > > > > to s10u10 but Ihave never previosly attempted
> > > > > > a full version LU upgrade.
>
> > > > > Could be I misunderstood. Diagnosis by email can be pretty painfu=
l,
> > > > > particularly
> > > > > if I have to read a lot of text.
>
> > > > > I think there might some overlapping problems here:
>
> > > > > 1. If you new patched BE (the ABE) is in the same pool as the cur=
rent
> > > > > BE (PBE),
> > > > > then the onscreen instructions fail. This is CR 6996301, fixed
> > > > > recently.
>
> > > > Yes new patched the BE in the sam pool as the current BE.
>
> > > > > 2. I believe your BEs are ZFS file systems.
>
> > > > Yes all BE live on ZFS file systems.
>
> > > > > 3. Are the zone roots also ZFS and are they in the root pool or a
> > > > > separate pool?
>
> > > > Zone roots are ZFS and on the root pool. =A0File systems lofs mount=
ed
> > > > from the root pool and the data pool.
>
> > > > > 4. Another long-time s10 install support engineer suggests the de=
tach,
> > > > > upgrade your
> > > > > BE with the patch set, and re-attach and update your zones.
>
> > > > Trying this but so far any zoneadm or zonecfg command that referenc=
es
> > > > the zones fails with <segmentation fault> core dumped.
> > > > I tried to copy the existing zone data from /export/zones/xxxx_name=
 of
> > > > old BE (LU renamed the original zone dirs) to the
> > > > /zoneds/"zonename" dir and then run zoneadm -z zonename attach -U a=
nd
> > > > this fails with <segmentation fault> core dump.
> > > > Now going to try creating a new zone and the copy my data from the =
/
> > > > export/zones/xxx location.
>
> > > > I also brought the system to single user from >ok and ran the
> > > > 10_recommended "installpatchset" script, and the process showed all
> > > > patches skipped -- so everything was up to date.
>
> > > > > I haven't talked to the guys who have worked on LU/zones issues
> > > > > recently, but
> > > > > the answer to question #3 will help.
>
> > > > > Thanks,
>
> > > > > Cindy
>
> > > > Thanks for your help.
>
> > > So all my attempts to attach a zone fail with <segmentation
> > > fault> (core dump).
> > > zoneadm -z my_zone attach -U fails and zonecfg -z my_zone create -a
> > > /zoneds/xxxx fails.
> > > Next I tried to configure and create an entirely new zone.  zonecfg -z
> > > my_new_zone results in:
>
> > > zone1: No such zone configured
> > > Use 'create' to begin configuring a new zone.
> > > zonecfg:zone1> create
> > > Segmentation Fault(coredump)
>
> > > So I am now at a standstill.  Do I try ./installpatchset -R
> > > /zoneds/my_zone and then try
> > > to zoneadm -z my_zone attach -U ?
>
> > I'm checking with someone more knowledgeable about LU and zones.
> > It would be helpful to get some info from the zone-related core, such
> > as the stack info, like this:
>
> > # mdb core
> > Loading modules:
>
> > > ::stack
> > > <Control-d>
>
> > Thanks,
>
> > Cindy
>
> Hi
> So if I am reading this thread correctly you applied the recommended
> patch cluster; if so, can I see
> /var/sadm/install_data/*verbose*.log
>
> check in /var/tmp for any files relating to patching ( ie *??????-??
> *.log )
>
> pkginfo -p
>
> run mdb on the core and give
>
> ::status
> $C
>
> But you need to somehow upload an explorer and the log files from
> patching ( the *verbose* mentioned above )
>
> if this is not possible you need to closely examine the log file for
> things like
> egrep -i "error|fail|var/tmp|fatal" /var/sadm/install_data/*verbose*.log
>
> in /var/sadm/patch
>
> grep "Re-installing Patch" */log
> egrep -i "fail|fatal|pkgadd|pkginstall|postinstall|checkinstall|preinstall|could not|test|mv|cp" */log
>
> in the above egrep output, be aware that an exit code of 2 from compress
> is expected in some cases ( it just means that the compressed file would
> be larger than uncompressed )
>
> so just ignore errors from compress.
>
> But a service contract would be good right about now, ie upload an
> explorer etc
> for explorer info and where to grab it from MOS, read doc id 1153444.1 on
> supporthtml.oracle.com
>
> Enda

OK -- so after poking around with mdb and ldd and with all the help
provided here by all you great people I have resolved this issue -- at
least so it appears -- still testing everything.

The issue with the zone commands turns out to be conflicting Apache2
and PHP libraries.  Just after the Live Upgrade patching, and before we
saw any of the zone command problems (we actually had not had any
reason to use them at the time), we had compiled new versions of
Apache2 and PHP for this system and changed LD_LIBRARY_PATH and
LD_RUN_PATH to include the libraries needed by Apache2 and PHP.  I
added them after the original paths, but apparently the zone commands
were still getting confused by these.  Took them out of LD_LIBRARY_PATH
and LD_RUN_PATH and the zone commands began to behave properly, and ldd
against the zone commands no longer shows any "=>  (version not
found)" issues.  After that I made backup copies of my /export/zones
zone dirs and then used zoneadm -z zone_xxxx attach -U to upgrade the
zones to the current patch level and bring them to the installed and
running state.  That worked just fine and I was back in business.
Then I used the zoneadm -z zone_xxxx move command to get them back to
their original path names, since they still had the renamed paths that
LU had created.
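
For anyone hitting the same wall, the recovery sequence above sketches
out roughly like this (zone names and paths here are illustrative, not
the actual ones from this system):

```shell
# Sketch only -- zone names and paths are made up; adjust for your setup.

# 1. Confirm the zone tools resolve their libraries cleanly.
#    A healthy binary prints nothing here.
unset LD_LIBRARY_PATH LD_RUN_PATH
ldd /usr/sbin/zoneadm | grep 'not found'

# 2. Back up the renamed zone root that LU left behind.
cp -rp /export/zones/zone1-s10u9 /export/zones/zone1-s10u9.bak

# 3. Re-attach with update-on-attach so the zone's packages and patches
#    are brought up to the new BE's level, then boot it.
zoneadm -z zone1 attach -U
zoneadm -z zone1 boot
zoneadm list -cv

# 4. Move the zone root back to its original path name.
zoneadm -z zone1 halt
zoneadm -z zone1 move /export/zones/zone1
zoneadm -z zone1 boot
```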

So now we have to work out how to give the newly compiled Apache2 and
PHP (which are required by our business application) access to their
libraries while playing nice with the Solaris libraries.  I am not a
developer nor an Apache admin, but I will work with the company
software developers to see if we can get this straightened out.  A
quick google brings up others with similar issues, and it looks like
we can use compiler directives to hard-code the library paths or
something.

Again -- gigantic THANKS to everyone who wrote back to my plea for
help and for all the helpful suggestions.

Paul
swingboyla (12)
5/30/2012 4:42:45 PM
Paul Vanderhoof <swingboyla@gmail.com> wrote:
[ TWO HUNDRED AND SIXTY lines clipped.  My god man, learn to quote!]

> compiled new versions of Apache2 and PHP for this system and changed
> the LD_LIBRARY_PATH and LD_RUN_PATH to include the libraries needed by

Whenever LD_LIBRARY_PATH and LD_RUN_PATH enter the picture, you're asking
for pain... either immediately or down the line.

Tell your developer to investigate compiling with the '-R' flag wherever
he has the -L.  For example, if you compile with -L/my/personal/libraries,
there should be a matching -R/my/personal/libraries.

Basically, it moves the contents of LD_LIBRARY_PATH from the environment,
where it poisons and breaks everything, into the binary itself.
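
As a concrete (hypothetical) example of the -R/-L pairing: the library
directory and library name below are placeholders, and on Solaris you
can confirm the runpath was recorded with elfdump:

```shell
# Hypothetical link line -- /opt/myapp/lib and libfoo are placeholders.
# -L tells the link editor where to find libfoo at BUILD time;
# -R records that same directory in the binary for RUN time,
# so LD_LIBRARY_PATH is no longer needed in the environment.
cc -o myapp myapp.c -L/opt/myapp/lib -R/opt/myapp/lib -lfoo

# Verify the runpath made it into the binary:
elfdump -d myapp | grep 'RUNPATH\|RPATH'
```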

-- 
Brandon Hume    - hume -> BOFH.Ca, http://WWW.BOFH.Ca/
5/30/2012 5:02:23 PM
In article <jq5jqv$alr$1@Kil-nws-1.UCIS.Dal.Ca>,
 <hume.spamfilter@bofh.ca> wrote:
>Whenever LD_LIBRARY_PATH and LD_RUN_PATH enter the picture, you're asking
>for pain... either immediately or down the line.

See Xah Lee's archive of Dave Barr's treatise,
"Why LD_LIBRARY_PATH is bad"
<URL:http://xahlee.org/UnixResource_dir/_/ldpath.html>

John
groenveld@acm.org
groenvel (550)
5/30/2012 5:23:44 PM