Oracle 19c RAC ASM Password File Recovery, Method 3: Upgrade to 19.8 and Repair with asmcmd --nocp credfix


Introduction

Problem description: when the CRS stack is started on an Oracle 19c RAC, the ASM instance on one node fails to start automatically and has to be started manually with the startup command; once the instance is up, the cluster services run normally.

This symptom is often caused by a bad ASM password file: it appears when the password file has been lost, corrupted, or manually replaced.
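A quick way to check whether the password file is the likely culprit (run as the grid user; +DG_OCR/orapwasm is the location in this test environment, adjust to yours):

```shell
# Show where the ASM password file is registered
srvctl config asm

# Check that the file actually exists at that location;
# an error here points to a missing or inaccessible password file
asmcmd ls +DG_OCR/orapwasm

# List the users stored in the password file
asmcmd lspwusr
```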

To resolve this error, I will run four tests, each demonstrating a different way to recover the ASM password file.

1. Recover with the asmcmd --nocp credfix command.

2. Recover from a password file backup.

3. With no password file backup and a version below 19.8, upgrade via patch and then run asmcmd --nocp credfix.

4. Recover by creating a new password file directly.

Test 3:

With no password file backup and a version below 19.8, recover the ASM password file by applying a patch upgrade and then running asmcmd --nocp credfix.

Test environment:

Oracle 19c two-node RAC cluster

rac1: node 1

rac2: node 2

Applicable scenario:

Oracle version below 19.8

No ASM password file backup

Patching to 19.8 or later is possible

Repair method:

After the failure occurs, apply the patch upgrade, check the credential state with the credverify command, and repair the credentials with the credfix command.
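On 19.8 or later, the verify-then-fix sequence looks roughly like this (run as the grid user; output varies by environment):

```shell
# Verify that the CRS user credentials match the ASM password file
asmcmd --nocp credverify

# If verification fails, regenerate the credentials in the password file
asmcmd --nocp credfix

# Confirm the repaired entries
asmcmd lspwusr
```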

1. Roll back the patch

This environment already has the patch applied, so to simulate a version below 19.8, roll it back first.

bash
[root@rac1 ~]# /u01/app/19.3.0/grid/OPatch/opatchauto rollback /soft/34130714/ -oh /u01/app/19.3.0/grid
[root@rac2 ~]# /u01/app/19.3.0/grid/OPatch/opatchauto rollback /soft/34130714/ -oh /u01/app/19.3.0/grid

2. Check the versions

Node 1

bash
[grid@rac1 ~]$ cd /u01/app/19.3.0/grid/OPatch
[grid@rac1 OPatch]$ ./opatch lspatches
29585399;OCW RELEASE UPDATE 19.3.0.0.0 (29585399)
29517247;ACFS RELEASE UPDATE 19.3.0.0.0 (29517247)
29517242;Database Release Update : 19.3.0.0.190416 (29517242)
29401763;TOMCAT RELEASE UPDATE 19.0.0.0.0 (29401763)

OPatch succeeded.
[grid@rac1 OPatch]$ sqlplus / as sysasm

SQL*Plus: Release 19.0.0.0.0 - Production on Tue Dec 2 16:37:41 2025
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> select banner from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production

Node 2

bash
[grid@rac2 ~]$ cd /u01/app/19.3.0/grid/OPatch
[grid@rac2 OPatch]$ ./opatch lspatches
29585399;OCW RELEASE UPDATE 19.3.0.0.0 (29585399)
29517247;ACFS RELEASE UPDATE 19.3.0.0.0 (29517247)
29517242;Database Release Update : 19.3.0.0.190416 (29517242)
29401763;TOMCAT RELEASE UPDATE 19.0.0.0.0 (29401763)

OPatch succeeded.

[grid@rac2 OPatch]$ !sql
sqlplus / as sysasm

SQL*Plus: Release 19.0.0.0.0 - Production on Tue Dec 2 16:44:04 2025
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> select banner from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production

The credverify command is no longer available:

bash
[grid@rac1 ~]$ asmcmd --nocp credverify 
ASMCMD-8022: unknown command 'credverify' specified

3. Check the cluster status and configuration

bash
[grid@rac1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.chad
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.net1.network
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.ons
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     STABLE
      2        ONLINE  ONLINE       rac2                     STABLE
      3        ONLINE  OFFLINE                               STABLE
ora.DATANEW.dg(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     STABLE
      2        ONLINE  ONLINE       rac2                     STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.DG_DATA.dg(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     STABLE
      2        ONLINE  ONLINE       rac2                     STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.DG_FRA.dg(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     STABLE
      2        ONLINE  ONLINE       rac2                     STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.DG_OCR.dg(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     STABLE
      2        ONLINE  ONLINE       rac2                     STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac1                     STABLE
ora.asm(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     Started,STABLE
      2        ONLINE  ONLINE       rac2                     Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.asmnet1.asmnetwork(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     STABLE
      2        ONLINE  ONLINE       rac2                     STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       rac1                     STABLE
ora.emrep.db
      1        ONLINE  ONLINE       rac1                     Open,HOME=/u01/app/o
                                                             racle/product/19.3.0
                                                             /dbhome_1,STABLE
      2        ONLINE  ONLINE       rac2                     Open,HOME=/u01/app/o
                                                             racle/product/19.3.0
                                                             /dbhome_1,STABLE
ora.qosmserver
      1        ONLINE  ONLINE       rac1                     STABLE
ora.rac1.vip
      1        ONLINE  ONLINE       rac1                     STABLE
ora.rac2.vip
      1        ONLINE  ONLINE       rac2                     STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       rac1                     STABLE
--------------------------------------------------------------------------------

[grid@rac1 ~]$ asmcmd lspwusr
        Username sysdba sysoper sysasm 
             SYS   TRUE    TRUE  TRUE 
CRSUSER__ASM_005   TRUE   FALSE   TRUE 
         ASMSNMP   TRUE   FALSE  FALSE 
      ORACLE_148   TRUE   FALSE  FALSE 

[grid@rac1 ~]$ srvctl config asm
ASM home: <CRS home>
Password file: +DG_OCR/orapwasm
Backup of Password file: +DG_OCR/orapwASM_backup
ASM listener: LISTENER
ASM instance count: 3
Cluster ASM listener: ASMNET1LSNR_ASM

4. Delete the ASM password file to simulate the failure

bash
[grid@rac1 ~]$ asmcmd
ASMCMD> cd +dg_ocr
ASMCMD> ls
DB_UNKNOWN/
orapwasm
rac-cluster/
ASMCMD> rm -rf orapwasm

5. Restart CRS and observe the failure

Restart CRS on all nodes

bash
[root@rac1 ~]# crsctl stop crs
[root@rac2 ~]# crsctl stop crs
[root@rac1 ~]# crsctl start crs
[root@rac2 ~]# crsctl start crs

Node 1

bash
[grid@rac1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1                     STABLE
ora.chad
               ONLINE  ONLINE       rac1                     STABLE
ora.net1.network
               ONLINE  ONLINE       rac1                     STABLE
ora.ons
               ONLINE  ONLINE       rac1                     STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     STABLE
      2        ONLINE  OFFLINE                               STABLE
      3        ONLINE  OFFLINE                               STABLE
ora.DATANEW.dg(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     STABLE
      2        OFFLINE OFFLINE                               STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.DG_DATA.dg(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     STABLE
      2        OFFLINE OFFLINE                               STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.DG_FRA.dg(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     STABLE
      2        OFFLINE OFFLINE                               STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.DG_OCR.dg(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     STABLE
      2        OFFLINE OFFLINE                               STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac1                     STABLE
ora.asm(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     Started,STABLE
      2        ONLINE  OFFLINE                               STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.asmnet1.asmnetwork(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     STABLE
      2        ONLINE  OFFLINE                               STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       rac1                     STABLE
ora.emrep.db
      1        ONLINE  ONLINE       rac1                     Open,HOME=/u01/app/o
                                                             racle/product/19.3.0
                                                             /dbhome_1,STABLE
      2        ONLINE  OFFLINE                               STABLE
ora.qosmserver
      1        ONLINE  ONLINE       rac1                     STABLE
ora.rac1.vip
      1        ONLINE  ONLINE       rac1                     STABLE
ora.rac2.vip
      1        ONLINE  INTERMEDIATE rac1                     FAILED OVER,STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       rac1                     STABLE
--------------------------------------------------------------------------------

Node 2

bash
[grid@rac2 ~]$ crsctl stat res -t
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Status failed, or completed with errors.
[grid@rac2 ~]$ crsctl stat res -t -init
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       rac2                     STABLE
ora.cluster_interconnect.haip
      1        ONLINE  ONLINE       rac2                     STABLE
ora.crf
      1        ONLINE  ONLINE       rac2                     STABLE
ora.crsd
      1        ONLINE  OFFLINE                               STABLE
ora.cssd
      1        ONLINE  ONLINE       rac2                     STABLE
ora.cssdmonitor
      1        ONLINE  ONLINE       rac2                     STABLE
ora.ctssd
      1        ONLINE  ONLINE       rac2                     OBSERVER,STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.evmd
      1        ONLINE  ONLINE       rac2                     STABLE
ora.gipcd
      1        ONLINE  ONLINE       rac2                     STABLE
ora.gpnpd
      1        ONLINE  ONLINE       rac2                     STABLE
ora.mdnsd
      1        ONLINE  ONLINE       rac2                     STABLE
ora.storage
      1        ONLINE  OFFLINE      rac2                     STARTING
--------------------------------------------------------------------------------

6. Start the ASM instance manually

bash
[grid@rac2 ~]$ !sql
sqlplus / as sysasm

SQL*Plus: Release 19.0.0.0.0 - Production on Tue Dec 2 16:54:48 2025
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup
ASM instance started

Total System Global Area 1137173320 bytes
Fixed Size		    8905544 bytes
Variable Size		 1103101952 bytes
ASM Cache		   25165824 bytes
ASM diskgroups mounted

7. Create a password file

bash
[grid@rac1 ~]$ orapwd file='+dg_ocr/orapwasm' asm=y force=y password=Password123*
[grid@rac1 ~]$ asmcmd lspwusr
Username sysdba sysoper sysasm 
     SYS   TRUE    TRUE  FALSE 

8. Patch node 1

When patching with opatchauto, the database can stay open, but to avoid unnecessary risk it is safer to shut it down first.

Two errors came up while patching node 1.

The first was unexpected; after checking that everything was in order, I ran resume to continue patching.

The second error is caused by the broken ASM password file and appears near the end of the patching run. It can be ignored for the moment: by that point the binary patches are essentially all applied and the credfix command is available, so once the ASM password file is fixed, running resume again completes the patch.

bash
[grid@rac1 ~]$ srvctl stop database -d emrep
[root@rac1 ~]# /u01/app/19.3.0/grid/OPatch/opatchauto apply /soft/34130714 -oh /u01/app/19.3.0/grid

Patching output:

OPatchauto session is initiated at Tue Dec  2 17:07:32 2025

System initialization log file is /u01/app/19.3.0/grid/cfgtoollogs/opatchautodb/systemconfig2025-12-02_05-07-36PM.log.

Session log file is /u01/app/19.3.0/grid/cfgtoollogs/opatchauto/opatchauto2025-12-02_05-07-53PM.log
The id for this session is V1N9

Executing OPatch prereq operations to verify patch applicability on home /u01/app/19.3.0/grid
Patch applicability verified successfully on home /u01/app/19.3.0/grid


Executing patch validation checks on home /u01/app/19.3.0/grid
Patch validation checks successfully completed on home /u01/app/19.3.0/grid


Performing prepatch operations on CRS - bringing down CRS service on home /u01/app/19.3.0/grid
Prepatch operation log file location: /u01/app/grid/crsdata/rac1/crsconfig/crs_prepatch_apply_inplace_rac1_2025-12-02_05-08-28PM.log
CRS service brought down successfully on home /u01/app/19.3.0/grid


Start applying binary patch on home /u01/app/19.3.0/grid
Failed while applying binary patches on home /u01/app/19.3.0/grid

Execution of [OPatchAutoBinaryAction] patch action failed, check log for more details. Failures:
Patch Target : rac1->/u01/app/19.3.0/grid Type[crs]
Details: [
---------------------------Patching Failed---------------------------------
Command execution failed during patching in home: /u01/app/19.3.0/grid, host: rac1.
Command failed:  /u01/app/19.3.0/grid/OPatch/opatchauto  apply /soft/34130714 -oh /u01/app/19.3.0/grid -target_type cluster -binary -invPtrLoc /u01/app/19.3.0/grid/oraInst.loc -jre /u01/app/19.3.0/grid/OPatch/jre -persistresult /u01/app/19.3.0/grid/opatchautocfg/db/sessioninfo/sessionresult_rac1_crs_1.ser -analyzedresult /u01/app/19.3.0/grid/opatchautocfg/db/sessioninfo/sessionresult_analyze_rac1_crs_1.ser
Command failure output: 
==Following patches FAILED in apply:

Patch: /soft/34130714/33575402
Log: /u01/app/19.3.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2025-12-02_17-09-29PM_1.log
Reason: Failed during Patching: oracle.opatch.opatchsdk.OPatchException: Prerequisite check "CheckActiveFilesAndExecutables" failed.

Patch: /soft/34130714/34133642
Log: /u01/app/19.3.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2025-12-02_17-09-29PM_1.log
Reason: Failed during Patching: oracle.opatch.opatchsdk.OPatchException: Prerequisite check "CheckActiveFilesAndExecutables" failed.

Patch: /soft/34130714/34139601
Log: /u01/app/19.3.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2025-12-02_17-09-29PM_1.log
Reason: Failed during Patching: oracle.opatch.opatchsdk.OPatchException: Prerequisite check "CheckActiveFilesAndExecutables" failed.

Patch: /soft/34130714/34160635
Log: /u01/app/19.3.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2025-12-02_17-09-29PM_1.log
Reason: Failed during Patching: oracle.opatch.opatchsdk.OPatchException: Prerequisite check "CheckActiveFilesAndExecutables" failed.

Patch: /soft/34130714/34318175
Log: /u01/app/19.3.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2025-12-02_17-09-29PM_1.log
Reason: Failed during Patching: oracle.opatch.opatchsdk.OPatchException: Prerequisite check "CheckActiveFilesAndExecutables" failed. 

After fixing the cause of failure Run opatchauto resume

]
OPATCHAUTO-68061: The orchestration engine failed.
OPATCHAUTO-68061: The orchestration engine failed with return code 1
OPATCHAUTO-68061: Check the log for more details.
OPatchAuto failed.

OPatchauto session completed at Tue Dec  2 17:09:47 2025
Time taken to complete the session 2 minutes, 16 seconds

 opatchauto failed with error code 42


It was unclear what went wrong; possibly some process had not stopped cleanly. After confirming that both the database and the cluster were down, I ran resume and patching continued normally.
[root@rac1 ~]# /u01/app/19.3.0/grid/OPatch/opatchauto resume

OPatchauto session is initiated at Tue Dec  2 17:16:53 2025
Session log file is /u01/app/19.3.0/grid/cfgtoollogs/opatchauto/opatchauto2025-12-02_05-16-54PM.log
Resuming existing session with id V1N9

Start applying binary patch on home /u01/app/19.3.0/grid
Binary patch applied successfully on home /u01/app/19.3.0/grid

Checking shared status of home.....

Performing postpatch operations on CRS - starting CRS service on home /u01/app/19.3.0/grid
Postpatch operation log file location: /u01/app/grid/crsdata/rac1/crsconfig/crs_postpatch_apply_inplace_rac1_2025-12-02_05-22-18PM.log
Failed to start CRS service on home /u01/app/19.3.0/grid

Execution of [GIStartupAction] patch action failed, check log for more details. Failures:
Patch Target : rac1->/u01/app/19.3.0/grid Type[crs]
Details: [
---------------------------Patching Failed---------------------------------
Command execution failed during patching in home: /u01/app/19.3.0/grid, host: rac1.
Command failed:  /u01/app/19.3.0/grid/perl/bin/perl -I/u01/app/19.3.0/grid/perl/lib -I/u01/app/19.3.0/grid/opatchautocfg/db/dbtmp/bootstrap_rac1/patchwork/crs/install -I/u01/app/19.3.0/grid/opatchautocfg/db/dbtmp/bootstrap_rac1/patchwork/xag /u01/app/19.3.0/grid/opatchautocfg/db/dbtmp/bootstrap_rac1/patchwork/crs/install/rootcrs.pl -postpatch
Command failure output: 
Using configuration parameter file: /u01/app/19.3.0/grid/opatchautocfg/db/dbtmp/bootstrap_rac1/patchwork/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/rac1/crsconfig/crs_postpatch_apply_inplace_rac1_2025-12-02_05-22-18PM.log
2025/12/02 17:22:26 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2672: Attempting to start 'ora.evmd' on 'rac1'
CRS-2676: Start of 'ora.evmd' on 'rac1' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'rac1'
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.crf' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rac1'
CRS-2672: Attempting to start 'ora.ctssd' on 'rac1'
CRS-2676: Start of 'ora.ctssd' on 'rac1' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'rac1'
ORA-01017: invalid username/password; logon denied
CRS-5055: unable to connect to an ASM instance because no ASM instance is running in the cluster
CRS-2883: Resource 'ora.storage' failed during Clusterware stack start.
CRS-4406: Oracle High Availability Services synchronous start failed.
CRS-41053: checking Oracle Grid Infrastructure for file permission issues
PRVH-0116 : Path "/u01/app/19.3.0/grid/crs/install/cmdllroot.sh" with permissions "rw-r--r--" does not have execute permissions for the owner, file's group, and others on node "rac1".
PRVG-2031 : Owner of file "/u01/app/19.3.0/grid/crs/install/cmdllroot.sh" did not match the expected value on node "rac1". [Expected = "grid(1100)" ; Found = "root(0)"]
PRVG-2032 : Group of file "/u01/app/19.3.0/grid/crs/install/cmdllroot.sh" did not match the expected value on node "rac1". [Expected = "oinstall(1005)" ; Found = "root(0)"]
CRS-4000: Command Start failed, or completed with errors.
2025/12/02 17:33:15 CLSRSC-117: Failed to start Oracle Clusterware stack from the Grid Infrastructure home /u01/app/19.3.0/grid 

After fixing the cause of failure Run opatchauto resume

]
OPATCHAUTO-68061: The orchestration engine failed.
OPATCHAUTO-68061: The orchestration engine failed with return code 1
OPATCHAUTO-68061: Check the log for more details.
OPatchAuto failed.

OPatchauto session completed at Tue Dec  2 17:33:16 2025
Time taken to complete the session 16 minutes, 23 seconds

 opatchauto failed with error code 42

Check the patch status

bash
[root@rac1 ~]# su - grid
Last login: Tue Dec  2 17:33:16 CST 2025
[grid@rac1 ~]$ cd /u01/app/19.3.0/grid/OPatch
[grid@rac1 OPatch]$  ./opatch lspatches
34318175;TOMCAT RELEASE UPDATE 19.0.0.0.0 (34318175)
34160635;OCW RELEASE UPDATE 19.16.0.0.0 (34160635)
34139601;ACFS RELEASE UPDATE 19.16.0.0.0 (34139601)
34133642;Database Release Update : 19.16.0.0.220719 (34133642)
33575402;DBWLM RELEASE UPDATE 19.0.0.0.0 (33575402)

OPatch succeeded.

The binary patches are essentially all applied. Once the ASM password file problem is fixed, running resume completes the remaining patch steps.
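Finishing up at this point looks roughly like the following sketch, assuming the same Grid home as this environment (credfix as the grid user, resume as root):

```shell
# Repair the CRS credentials in the ASM password file (grid user)
asmcmd --nocp credfix

# Resume the interrupted opatchauto session (root)
/u01/app/19.3.0/grid/OPatch/opatchauto resume
```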

9. Start the ASM instance manually and check the status

bash
[grid@rac1 OPatch]$ crsctl stat res -t
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Status failed, or completed with errors.
[grid@rac1 OPatch]$ !sql
sqlplus / as sysasm

SQL*Plus: Release 19.0.0.0.0 - Production on Tue Dec 2 17:37:52 2025
Version 19.16.0.0.0

Copyright (c) 1982, 2022, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup
ASM instance started

Total System Global Area 1137173320 bytes
Fixed Size		    8905544 bytes
Variable Size		 1103101952 bytes
ASM Cache		   25165824 bytes
ASM diskgroups mounted
SQL> Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.16.0.0.0
[grid@rac1 OPatch]$ crsctl stat res -t
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Status failed, or completed with errors.
[grid@rac1 OPatch]$ crsctl stat res -t -init
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       rac1                     STABLE
ora.cluster_interconnect.haip
      1        ONLINE  ONLINE       rac1                     STABLE
ora.crf
      1        ONLINE  ONLINE       rac1                     STABLE
ora.crsd
      1        ONLINE  OFFLINE                               STABLE
ora.cssd
      1        ONLINE  ONLINE       rac1                     STABLE
ora.cssdmonitor
      1        ONLINE  ONLINE       rac1                     STABLE
ora.ctssd
      1        ONLINE  ONLINE       rac1                     OBSERVER,STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.evmd
      1        ONLINE  ONLINE       rac1                     STABLE
ora.gipcd
      1        ONLINE  ONLINE       rac1                     STABLE
ora.gpnpd
      1        ONLINE  ONLINE       rac1                     STABLE
ora.mdnsd
      1        ONLINE  ONLINE       rac1                     STABLE
ora.storage
      1        ONLINE  ONLINE       rac1                     STABLE
--------------------------------------------------------------------------------

crsd did not start. In theory crsd could have been started manually at this point, but at the time I chose to restart CRS instead.
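For reference, starting crsd by itself would look roughly like this (a sketch; not the path taken in this test):

```shell
# Start only the crsd daemon within the OHAS stack (run as root)
crsctl start res ora.crsd -init

# Confirm it is ONLINE
crsctl stat res ora.crsd -init
```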

bash
[root@rac1 ~]# crsctl stop crs
CRS-2796: The command may not proceed when Cluster Ready Services is not running
CRS-4687: Shutdown command has completed with errors.
CRS-4000: Command Stop failed, or completed with errors.
The plain stop fails; the -f flag is required:
[root@rac1 ~]# crsctl stop crs -f
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac1'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac1'
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac1'
CRS-2673: Attempting to stop 'ora.storage' on 'rac1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac1'
CRS-2677: Stop of 'ora.ctssd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.evmd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.storage' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'rac1'
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac1'
CRS-2677: Stop of 'ora.crf' on 'rac1' succeeded
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac1'
CRS-2677: Stop of 'ora.cssd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac1'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac1'
CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
[root@rac1 ~]# crsctl start crs
CRS-4123: Oracle High Availability Services has been started.

Start the ASM instance manually

bash
[grid@rac1 ~]$ sqlplus / as sysasm

SQL*Plus: Release 19.0.0.0.0 - Production on Tue Dec 2 17:43:15 2025
Version 19.16.0.0.0

Copyright (c) 1982, 2022, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup
ASM instance started

Total System Global Area 1137173320 bytes
Fixed Size		    8905544 bytes
Variable Size		 1103101952 bytes
ASM Cache		   25165824 bytes
ASM diskgroups mounted

Check the status

bash
[grid@rac1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.chad
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.net1.network
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.ons
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     STABLE
      2        ONLINE  ONLINE       rac2                     STABLE
      3        ONLINE  OFFLINE                               STABLE
ora.DATANEW.dg(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     STABLE
      2        ONLINE  ONLINE       rac2                     STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.DG_DATA.dg(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     STABLE
      2        ONLINE  ONLINE       rac2                     STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.DG_FRA.dg(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     STABLE
      2        ONLINE  ONLINE       rac2                     STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.DG_OCR.dg(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     STABLE
      2        ONLINE  ONLINE       rac2                     STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac2                     STABLE
ora.asm(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     Started,STABLE
      2        ONLINE  ONLINE       rac2                     Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.asmnet1.asmnetwork(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     STABLE
      2        ONLINE  ONLINE       rac2                     STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       rac2                     STABLE
ora.emrep.db
      1        OFFLINE OFFLINE                               Instance Shutdown,ST
                                                             ABLE
      2        OFFLINE OFFLINE                               STABLE
ora.qosmserver
      1        ONLINE  ONLINE       rac2                     STABLE
ora.rac1.vip
      1        ONLINE  ONLINE       rac1                     STABLE
ora.rac2.vip
      1        ONLINE  ONLINE       rac2                     STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       rac2                     STABLE
--------------------------------------------------------------------------------

10. Patch the other node

bash
[root@rac2 ~]# /u01/app/19.3.0/grid/OPatch/opatchauto apply /soft/34130714 -oh /u01/app/19.3.0/grid

OPatchauto session is initiated at Tue Dec  2 17:46:47 2025

System initialization log file is /u01/app/19.3.0/grid/cfgtoollogs/opatchautodb/systemconfig2025-12-02_05-46-50PM.log.

Session log file is /u01/app/19.3.0/grid/cfgtoollogs/opatchauto/opatchauto2025-12-02_05-47-00PM.log
The id for this session is 4X47

Executing OPatch prereq operations to verify patch applicability on home /u01/app/19.3.0/grid
Patch applicability verified successfully on home /u01/app/19.3.0/grid


Executing patch validation checks on home /u01/app/19.3.0/grid
Patch validation checks successfully completed on home /u01/app/19.3.0/grid


Performing prepatch operations on CRS - bringing down CRS service on home /u01/app/19.3.0/grid
Prepatch operation log file location: /u01/app/grid/crsdata/rac2/crsconfig/crs_prepatch_apply_inplace_rac2_2025-12-02_05-47-35PM.log
CRS service brought down successfully on home /u01/app/19.3.0/grid


Start applying binary patch on home /u01/app/19.3.0/grid
Binary patch applied successfully on home /u01/app/19.3.0/grid


Performing postpatch operations on CRS - starting CRS service on home /u01/app/19.3.0/grid
Postpatch operation log file location: /u01/app/grid/crsdata/rac2/crsconfig/crs_postpatch_apply_inplace_rac2_2025-12-02_05-52-27PM.log
Failed to start CRS service on home /u01/app/19.3.0/grid

Execution of [GIStartupAction] patch action failed, check log for more details. Failures:
Patch Target : rac2->/u01/app/19.3.0/grid Type[crs]
Details: [
---------------------------Patching Failed---------------------------------
Command execution failed during patching in home: /u01/app/19.3.0/grid, host: rac2.
Command failed:  /u01/app/19.3.0/grid/perl/bin/perl -I/u01/app/19.3.0/grid/perl/lib -I/u01/app/19.3.0/grid/opatchautocfg/db/dbtmp/bootstrap_rac2/patchwork/crs/install -I/u01/app/19.3.0/grid/opatchautocfg/db/dbtmp/bootstrap_rac2/patchwork/xag /u01/app/19.3.0/grid/opatchautocfg/db/dbtmp/bootstrap_rac2/patchwork/crs/install/rootcrs.pl -postpatch
Command failure output: 
Using configuration parameter file: /u01/app/19.3.0/grid/opatchautocfg/db/dbtmp/bootstrap_rac2/patchwork/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/rac2/crsconfig/crs_postpatch_apply_inplace_rac2_2025-12-02_05-52-27PM.log
2025/12/02 17:52:34 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.evmd' on 'rac2'
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac2'
CRS-2676: Start of 'ora.evmd' on 'rac2' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac2'
CRS-2676: Start of 'ora.gpnpd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac2'
CRS-2676: Start of 'ora.gipcd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'rac2'
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac2'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac2'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac2'
CRS-2676: Start of 'ora.diskmon' on 'rac2' succeeded
CRS-2676: Start of 'ora.crf' on 'rac2' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rac2'
CRS-2672: Attempting to start 'ora.ctssd' on 'rac2'
CRS-2676: Start of 'ora.ctssd' on 'rac2' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac2'
CRS-2676: Start of 'ora.asm' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'rac2'
ORA-01017: invalid username/password; logon denied
CRS-5055: unable to connect to an ASM instance because no ASM instance is running in the cluster
CRS-2883: Resource 'ora.storage' failed during Clusterware stack start.
CRS-4406: Oracle High Availability Services synchronous start failed.
CRS-41053: checking Oracle Grid Infrastructure for file permission issues
PRVH-0116 : Path "/u01/app/19.3.0/grid/crs/install/cmdllroot.sh" with permissions "rw-r--r--" does not have execute permissions for the owner, file's group, and others on node "rac2".
PRVG-2031 : Owner of file "/u01/app/19.3.0/grid/crs/install/cmdllroot.sh" did not match the expected value on node "rac2". [Expected = "grid(1100)" ; Found = "root(0)"]
PRVG-2032 : Group of file "/u01/app/19.3.0/grid/crs/install/cmdllroot.sh" did not match the expected value on node "rac2". [Expected = "oinstall(1005)" ; Found = "root(0)"]
CRS-4000: Command Start failed, or completed with errors.
2025/12/02 18:03:34 CLSRSC-117: Failed to start Oracle Clusterware stack from the Grid Infrastructure home /u01/app/19.3.0/grid 

After fixing the cause of failure Run opatchauto resume

]
OPATCHAUTO-68061: The orchestration engine failed.
OPATCHAUTO-68061: The orchestration engine failed with return code 1
OPATCHAUTO-68061: Check the log for more details.
OPatchAuto failed.

OPatchauto session completed at Tue Dec  2 18:03:36 2025
Time taken to complete the session 16 minutes, 49 seconds

 opatchauto failed with error code 42
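
The postpatch failure above is the expected ORA-01017 symptom, but the CRS-41053 check also flagged ownership/permission problems on cmdllroot.sh. A minimal sketch of the same kind of file check (the check_gi_script helper is hypothetical, not part of the Oracle tooling, and assumes GNU stat on Linux):

```shell
#!/bin/sh
# Hypothetical helper mirroring the CRS-41053-style file check:
# verify owner, group, and the owner execute bit of a Grid script.
check_gi_script() {
  f=$1; want_owner=$2; want_group=$3
  [ -e "$f" ] || { echo "$f: missing"; return 1; }
  owner=$(stat -c %U "$f")
  group=$(stat -c %G "$f")
  perms=$(stat -c %A "$f")
  [ "$owner" = "$want_owner" ] || echo "$f: owner $owner, expected $want_owner"
  [ "$group" = "$want_group" ] || echo "$f: group $group, expected $want_group"
  case $perms in
    -??x*) : ;;                               # owner execute bit present
    *) echo "$f: owner has no execute bit ($perms)" ;;
  esac
}
# On a real node, check the file named in the PRVH-0116 message:
# check_gi_script /u01/app/19.3.0/grid/crs/install/cmdllroot.sh grid oinstall
```

Note that the real blocker here is still the ASM credential, so fixing only the file permissions would not let `opatchauto resume` succeed.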

11. Manually start the ASM instance and check its status

bash
[grid@rac2 ~]$ !sql
sqlplus / as sysasm

SQL*Plus: Release 19.0.0.0.0 - Production on Wed Dec 3 15:35:52 2025
Version 19.16.0.0.0

Copyright (c) 1982, 2022, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup
ASM instance started

Total System Global Area 1137173320 bytes
Fixed Size		    8905544 bytes
Variable Size		 1103101952 bytes
ASM Cache		   25165824 bytes
ASM diskgroups mounted

Check the status

bash
[grid@rac2 ~]$ crsctl stat res -t -init
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       rac2                     STABLE
ora.cluster_interconnect.haip
      1        ONLINE  ONLINE       rac2                     STABLE
ora.crf
      1        ONLINE  ONLINE       rac2                     STABLE
ora.crsd
      1        ONLINE  ONLINE       rac2                     STABLE
ora.cssd
      1        ONLINE  ONLINE       rac2                     STABLE
ora.cssdmonitor
      1        ONLINE  ONLINE       rac2                     STABLE
ora.ctssd
      1        ONLINE  ONLINE       rac2                     OBSERVER,STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.evmd
      1        ONLINE  ONLINE       rac2                     STABLE
ora.gipcd
      1        ONLINE  ONLINE       rac2                     STABLE
ora.gpnpd
      1        ONLINE  ONLINE       rac2                     STABLE
ora.mdnsd
      1        ONLINE  ONLINE       rac2                     STABLE
ora.storage
      1        ONLINE  ONLINE       rac2                     STABLE
--------------------------------------------------------------------------------
[grid@rac2 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.chad
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.net1.network
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.ons
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     STABLE
      2        ONLINE  ONLINE       rac2                     STABLE
      3        ONLINE  OFFLINE                               STABLE
ora.DATANEW.dg(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     STABLE
      2        ONLINE  ONLINE       rac2                     STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.DG_DATA.dg(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     STABLE
      2        ONLINE  ONLINE       rac2                     STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.DG_FRA.dg(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     STABLE
      2        ONLINE  ONLINE       rac2                     STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.DG_OCR.dg(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     STABLE
      2        ONLINE  ONLINE       rac2                     STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac1                     STABLE
ora.asm(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     Started,STABLE
      2        ONLINE  ONLINE       rac2                     Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.asmnet1.asmnetwork(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     STABLE
      2        ONLINE  ONLINE       rac2                     STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       rac1                     STABLE
ora.emrep.db
      1        OFFLINE OFFLINE                               STABLE
      2        OFFLINE OFFLINE                               Instance Shutdown,ST
                                                             ABLE
ora.qosmserver
      1        ONLINE  ONLINE       rac1                     STABLE
ora.rac1.vip
      1        ONLINE  ONLINE       rac1                     STABLE
ora.rac2.vip
      1        ONLINE  ONLINE       rac2                     STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       rac1                     STABLE
--------------------------------------------------------------------------------

After node 2 was patched and the ASM instance started manually, crsd came up along with it as well.

12. Add users and privileges, then run credfix to repair the credentials

If a password file was never created, first create one with the orapwd command, using a statement like orapwd file='+dg_ocr/orapwasm' asm=y force=y password=Password123*.

Check the credential status

bash
[grid@rac1 ~]$ asmcmd lspwusr
Username sysdba sysoper sysasm 
     SYS   TRUE    TRUE  FALSE 
bash
[grid@rac1 ~]$ asmcmd --nocp credverify  
credverify: No credentials in password file, please run 'credfix' to fix the credentials.
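
A quick way to spot this broken state in a script is to parse the `asmcmd lspwusr` output for the SYS row. The check_sysasm helper below is a hypothetical sketch; the column order (sysdba, sysoper, sysasm) matches the output above:

```shell
#!/bin/sh
# Hypothetical helper: report whether SYS still holds SYSASM in the
# password file. Feed it real output with:  asmcmd lspwusr | check_sysasm
check_sysasm() {
  awk '$1 == "SYS" { print (($4 == "TRUE") ? "SYS has sysasm" : "SYS is missing sysasm") }'
}
# Demo against the broken output captured above:
printf 'Username sysdba sysoper sysasm\n     SYS   TRUE    TRUE  FALSE\n' | check_sysasm
```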

You can add the other users first, or run credfix first to create the CRSUSER user; the order does not matter.

If you did not record the user and privilege information beforehand, the following can serve as a reference:

An ASM instance's password file typically contains the SYS, ASMSNMP, and CRSUSER__ASM_001 users.

bash
[grid@rac1 ~]$ asmcmd orapwusr --grant sysasm SYS
[grid@rac1 ~]$ asmcmd orapwusr --add ASMSNMP
Enter password: ******
[grid@rac1 ~]$ asmcmd orapwusr --grant sysdba ASMSNMP
[grid@rac1 ~]$ asmcmd orapwusr --add ORACLE_148
Enter password: ******
[grid@rac1 ~]$ asmcmd orapwusr --grant sysdba ORACLE_148
[grid@rac1 ~]$ asmcmd lspwusr
  Username sysdba sysoper sysasm 
       SYS   TRUE    TRUE   TRUE 
   ASMSNMP   TRUE   FALSE  FALSE 
ORACLE_148   TRUE   FALSE  FALSE 

Before running asmcmd --nocp credfix as the root user, you also need to set up SSH mutual trust between the root users on each node.

SSH trust setup command: /u01/app/19.3.0/grid/oui/prov/resources/scripts/sshUserSetup.sh -user root -hosts "rac1 rac2" -advanced -noPromptPassphrase
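
Since credfix opens SSH sessions to the other nodes, it can help to confirm the trust actually works first. A minimal sketch (check_root_ssh and the SSH_CMD override are hypothetical helpers, not part of the Oracle tooling):

```shell
#!/bin/sh
# Hypothetical pre-check: confirm passwordless root SSH to every node.
# SSH_CMD is overridable purely so the loop can be exercised without a cluster.
SSH_CMD=${SSH_CMD:-"ssh -o BatchMode=yes -o ConnectTimeout=5"}
check_root_ssh() {
  rc=0
  for h in "$@"; do
    if $SSH_CMD "root@$h" true 2>/dev/null; then
      echo "$h: ok"
    else
      echo "$h: passwordless root SSH NOT working"
      rc=1
    fi
  done
  return $rc
}
# On a real cluster:
# check_root_ssh rac1 rac2
```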

Run the credfix command to repair the credentials

bash
[root@rac1 ~]# asmcmd --nocp credfix
credfix: Credentials for CRSUSER__ASM_005 not in password file, trying next credential.
op=addcrscreds wrap=/tmp/creds0.xml 
credfix: Creating new credentials, no valid credentials in OCR.
credfix: New user CRSUSER__ASM_006 created. 
op=credimport wrap=/tmp/creds0.xml olr=true force=true 
credfix: OLR for rac1 has been fixed if credentials were created incorrectly. 
credfix: Starting SSH session on node rac2.
credfix: OLR for rac2 has been fixed if credentials were created incorrectly. Exiting SSH session.
op=delcrscreds crs_user=CRSUSER__ASM_005 
credfix: Deleted CRSUSER__ASM_005 from OCR.
credverify: Credentials created correctly on rac1.
credverify: Starting SSH session on node rac2
credverify: Credentials created correctly on rac2. Exiting SSH session.
credfix: Credentials have been fixed if they were created incorrectly.

Check the credential status

bash
[grid@rac1 ~]$ asmcmd --nocp credverify  
credverify: Credentials created correctly on rac1.
credverify: Starting SSH session on node rac2
credverify: Credentials created correctly on rac2. Exiting SSH session.
[grid@rac1 ~]$ asmcmd lspwusr
        Username sysdba sysoper sysasm 
             SYS   TRUE    TRUE   TRUE 
         ASMSNMP   TRUE   FALSE  FALSE 
      ORACLE_148   TRUE   FALSE  FALSE 
CRSUSER__ASM_006   TRUE   FALSE   TRUE 
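
To confirm the repair in a script, check that a CRSUSER__ASM_* entry with SYSASM now exists in the password file. The has_crsuser helper is a hypothetical sketch; the column layout matches the lspwusr output above:

```shell
#!/bin/sh
# Hypothetical helper: verify a CRSUSER__ASM_* user with SYSASM exists.
# Feed it real output with:  asmcmd lspwusr | has_crsuser
has_crsuser() {
  awk '$1 ~ /^CRSUSER__ASM_/ && $4 == "TRUE" { found = 1 }
       END { print (found ? "CRSUSER entry ok" : "CRSUSER entry missing") }'
}
# Demo against the repaired output captured above:
printf 'CRSUSER__ASM_006   TRUE   FALSE   TRUE\n' | has_crsuser
```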

13. Resume patching

Run the opatchauto resume command on each node.

bash
[root@rac1 ~]# /u01/app/19.3.0/grid/OPatch/opatchauto resume

OPatchauto session is initiated at Wed Dec  3 15:57:21 2025
Session log file is /u01/app/19.3.0/grid/cfgtoollogs/opatchauto/opatchauto2025-12-03_03-57-22PM.log
Resuming existing session with id V1N9

Performing postpatch operations on CRS - starting CRS service on home /u01/app/19.3.0/grid
Postpatch operation log file location: /u01/app/grid/crsdata/rac1/crsconfig/crs_postpatch_apply_inplace_rac1_2025-12-03_03-57-33PM.log
CRS service started successfully on home /u01/app/19.3.0/grid

OPatchAuto successful.

--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:rac1
CRS Home:/u01/app/19.3.0/grid
Version:19.0.0.0.0
Summary:

==Following patches were SUCCESSFULLY applied:

Patch: /soft/34130714/33575402
Log: /u01/app/19.3.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2025-12-02_17-17-07PM_1.log

Patch: /soft/34130714/34133642
Log: /u01/app/19.3.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2025-12-02_17-17-07PM_1.log

Patch: /soft/34130714/34139601
Log: /u01/app/19.3.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2025-12-02_17-17-07PM_1.log

Patch: /soft/34130714/34160635
Log: /u01/app/19.3.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2025-12-02_17-17-07PM_1.log

Patch: /soft/34130714/34318175
Log: /u01/app/19.3.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2025-12-02_17-17-07PM_1.log



OPatchauto session completed at Wed Dec  3 15:59:46 2025
Time taken to complete the session 2 minutes, 25 seconds
bash
[root@rac2 ~]# /u01/app/19.3.0/grid/OPatch/opatchauto resume

OPatchauto session is initiated at Wed Dec  3 16:01:06 2025
Session log file is /u01/app/19.3.0/grid/cfgtoollogs/opatchauto/opatchauto2025-12-03_04-01-06PM.log
Resuming existing session with id 4X47

Performing postpatch operations on CRS - starting CRS service on home /u01/app/19.3.0/grid
Postpatch operation log file location: /u01/app/grid/crsdata/rac2/crsconfig/crs_postpatch_apply_inplace_rac2_2025-12-03_04-01-17PM.log
CRS service started successfully on home /u01/app/19.3.0/grid

OPatchAuto successful.

--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:rac2
CRS Home:/u01/app/19.3.0/grid
Version:19.0.0.0.0
Summary:

==Following patches were SUCCESSFULLY applied:

Patch: /soft/34130714/33575402
Log: /u01/app/19.3.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2025-12-02_17-48-29PM_1.log

Patch: /soft/34130714/34133642
Log: /u01/app/19.3.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2025-12-02_17-48-29PM_1.log

Patch: /soft/34130714/34139601
Log: /u01/app/19.3.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2025-12-02_17-48-29PM_1.log

Patch: /soft/34130714/34160635
Log: /u01/app/19.3.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2025-12-02_17-48-29PM_1.log

Patch: /soft/34130714/34318175
Log: /u01/app/19.3.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2025-12-02_17-48-29PM_1.log



OPatchauto session completed at Wed Dec  3 16:03:56 2025
Time taken to complete the session 2 minutes, 51 seconds

14. Restart CRS and verify the recovery

Restart CRS

bash
[root@rac1 ~]# crsctl stop crs
[root@rac2 ~]# crsctl stop crs
[root@rac1 ~]# crsctl start crs
[root@rac2 ~]# crsctl start crs
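
After the restart, the key point is that ora.asm comes ONLINE without a manual `startup`. The asm_online helper below is a hypothetical sketch that parses `crsctl stat res -t -init` output in the layout shown in this article:

```shell
#!/bin/sh
# Hypothetical helper: in `crsctl stat res -t -init` output, the line after
# the bare "ora.asm" resource name carries "<n> <TARGET> <STATE> <server> ...".
asm_online() {
  awk '/^ora\.asm$/ { getline
         print (($3 == "ONLINE") ? "ora.asm ONLINE" : "ora.asm not online") }'
}
# On a real node:  crsctl stat res -t -init | asm_online
# Demo against the status lines captured above:
printf 'ora.asm\n      1        ONLINE  ONLINE       rac2   STABLE\n' | asm_online
```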

Check the status

bash
[grid@rac1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.chad
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.net1.network
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.ons
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     STABLE
      2        ONLINE  ONLINE       rac2                     STABLE
      3        ONLINE  OFFLINE                               STABLE
ora.DATANEW.dg(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     STABLE
      2        ONLINE  ONLINE       rac2                     STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.DG_DATA.dg(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     STABLE
      2        ONLINE  ONLINE       rac2                     STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.DG_FRA.dg(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     STABLE
      2        ONLINE  ONLINE       rac2                     STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.DG_OCR.dg(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     STABLE
      2        ONLINE  ONLINE       rac2                     STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac1                     STABLE
ora.asm(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     Started,STABLE
      2        ONLINE  ONLINE       rac2                     Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.asmnet1.asmnetwork(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     STABLE
      2        ONLINE  ONLINE       rac2                     STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       rac1                     STABLE
ora.emrep.db
      1        OFFLINE OFFLINE                               Instance Shutdown,ST
                                                             ABLE
      2        OFFLINE OFFLINE                               Instance Shutdown,ST
                                                             ABLE
ora.qosmserver
      1        ONLINE  ONLINE       rac1                     STABLE
ora.rac1.vip
      1        ONLINE  ONLINE       rac1                     STABLE
ora.rac2.vip
      1        ONLINE  ONLINE       rac2                     STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       rac1                     STABLE
--------------------------------------------------------------------------------

Start the database

bash
[grid@rac1 ~]$ srvctl start database -d emrep

The cluster starts normally and all resources are healthy; the recovery is successful.
