This article shows how to add a node to an Oracle Grid Infrastructure 12cR2 cluster.

Before starting, you'll need:
- One server with:
    # Oracle Grid Infrastructure 12cR2 for RAC already installed
      (if not, check how to install Oracle Grid Infrastructure 12cR2 for RAC)
- A second server with:
    # root access
    # 8GB of RAM or more
    # Linux already configured for Oracle 12c
      (if not, check how to Configure Oracle Linux for Oracle Database 12c)
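
Both servers must also resolve each other's public, private (interconnect), and VIP names consistently. Below is a minimal /etc/hosts sketch based on this lab's subnets (192.168.0.0 public, 192.168.10.0 private, as seen in the outputs later in this article); the VIP addresses and the -vip/-priv hostnames shown here are assumptions, so adjust them to your environment:

# Example /etc/hosts entries (VIP and -priv IPs/names are assumed values)
192.168.0.31    oralab01.uxora.com      oralab01
192.168.0.32    oralab02.uxora.com      oralab02
192.168.0.41    oralab01-vip.uxora.com  oralab01-vip
192.168.0.42    oralab02-vip.uxora.com  oralab02-vip
192.168.10.31   oralab01-priv.uxora.com oralab01-priv
192.168.10.32   oralab02-priv.uxora.com oralab02-priv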

Adding a Node

In this scenario, the oralab02 server will be added to the Oracle GI 12cR2 cluster, which is installed on the oralab01 server.

Configure ssh between nodes

Execute the following on the oralab01 server as the grid user:

# Generate ssh keys
[grid@oralab01]$ ssh-keygen
    Generating public/private rsa key pair.
    Enter file in which to save the key (/home/grid/.ssh/id_rsa):
    Enter passphrase (empty for no passphrase):
    Enter same passphrase again:
    Your identification has been saved in /home/grid/.ssh/id_rsa.
    Your public key has been saved in /home/grid/.ssh/id_rsa.pub.
    The key fingerprint is:
    28:91:ac:5a:ac:b0:68:f8:be:fb:3b:c9:95:8b:a7:da grid@oralab01.uxora.com
    The key's randomart image is:
    +--[ RSA 2048]----+
    |                 |
    |   . .           |
    |    +            |
    | . . . .         |
    |. + . ..S        |
    |+=   .o          |
    |*. . + .         |
    |.. .= o          |
    | .*=E=           |
    +-----------------+

# Copy ssh keys to oralab02
[grid@oralab01]$ ssh-copy-id oralab02
    The authenticity of host 'oralab02 (192.168.0.32)' can't be established.
    ECDSA key fingerprint is 9f:8b:d5:42:22:f5:fa:a6:5f:e5:8f:8a:26:c0:f0:50.
    Are you sure you want to continue connecting (yes/no)? yes
    /bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
    /bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
    grid@oralab02's password:

    Number of key(s) added: 1

    Now try logging into the machine, with:   "ssh 'oralab02'"
    and check to make sure that only the key(s) you wanted were added.

# Check if it works
[grid@oralab01]$ ssh oralab02 hostname
    oralab02.uxora.com

Do the same on the oralab02 server as the grid user:

# Generate ssh keys
[grid@oralab02]$ ssh-keygen
    Generating public/private rsa key pair.
    Enter file in which to save the key (/home/grid/.ssh/id_rsa):
    Enter passphrase (empty for no passphrase):
    Enter same passphrase again:
    Your identification has been saved in /home/grid/.ssh/id_rsa.
    Your public key has been saved in /home/grid/.ssh/id_rsa.pub.
    The key fingerprint is:
    SHA256:rO80Ns9MYkHOpokJ88Q7IHouRJquhKllEEO7EvF56Eo grid@oralab02.uxora.com
    The key's randomart image is:
    +---[RSA 2048]----+
    |..               |
    |.o.o             |
    |+.+ .   .        |
    | *.o   =         |
    |*E= o   S        |
    |B* * + = .       |
    |B.+ * + B .      |
    |+*   . = O       |
    |+..    .o +      |
    +----[SHA256]-----+

# Copy ssh keys to oralab01
[grid@oralab02]$ ssh-copy-id oralab01
    /bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/grid/.ssh/id_rsa.pub"
    The authenticity of host 'oralab01 (192.168.0.31)' can't be established.
    ECDSA key fingerprint is SHA256:ZXsjEH2nrRyU4STczflbLozmq1TLkf50N1BKhPzqemo.
    ECDSA key fingerprint is MD5:9f:8b:d5:42:22:f5:fa:a6:5f:e5:8f:8a:26:c0:f0:50.
    Are you sure you want to continue connecting (yes/no)? yes
    /bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
    grid@oralab01's password:

    Number of key(s) added: 1

    Now try logging into the machine, with:   "ssh 'oralab01'"
    and check to make sure that only the key(s) you wanted were added.

# Check if it works
[grid@oralab02]$ ssh oralab01 hostname
    oralab01.uxora.com
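
Optionally, before going further, you can confirm that user equivalence is correctly set up across the two nodes with the cluster verification utility:

# Verify grid user equivalence between nodes (optional)
[grid@oralab01]$ cluvfy comp admprv -n oralab01,oralab02 -o user_equiv -verbose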

Install cvuqdisk

On the oralab01 server as the grid user:

# Copy rpm package
[grid@oralab01]$ scp $ORACLE_HOME/cv/rpm/cvuqdisk*.rpm grid@oralab02:~
    cvuqdisk-1.0.10-1.rpm                            100% 8860     8.7KB/s   00:00

# Install package (need oralab02 root password)
[grid@oralab01]$ ssh oralab02
[grid@oralab02]$ su - -c "rpm -Uvh ~grid/cvuqdisk*"
    Password:
    Preparing...                          ################################# [100%]
    Using default group oinstall to install package
    Updating / installing...
       1:cvuqdisk-1.0.10-1                ################################# [100%]

[grid@oralab02]$ exit
    logout
    Connection to oralab02 closed.

[grid@oralab01]$
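
To confirm that the package is now installed on both nodes, query rpm; each command should return cvuqdisk-1.0.10-1:

# Check cvuqdisk on both nodes
[grid@oralab01]$ rpm -q cvuqdisk
[grid@oralab01]$ ssh oralab02 rpm -q cvuqdisk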

Check prerequisites

On the oralab01 server as the grid user, run the cluster verification utility as follows:

# Check nodeadd prerequisites
[grid@oralab01]$ cluvfy stage -pre nodeadd -n oralab02

    Verifying Physical Memory ...PASSED
    Verifying Available Physical Memory ...PASSED
    Verifying Swap Size ...PASSED
    Verifying Free Space: oralab02:/usr,oralab02:/var,oralab02:/etc,oralab02:/sbin,oralab02:/tmp ...PASSED
    Verifying Free Space: oralab02:/u01/app/12.2.0/grid ...PASSED
    Verifying Free Space: oralab01:/usr,oralab01:/var,oralab01:/etc,oralab01:/sbin,oralab01:/tmp ...PASSED
    Verifying Free Space: oralab01:/u01/app/12.2.0/grid ...PASSED
    Verifying User Existence: grid ...
      Verifying Users With Same UID: 54322 ...PASSED
    Verifying User Existence: grid ...PASSED
    Verifying User Existence: root ...
      Verifying Users With Same UID: 0 ...PASSED
    Verifying User Existence: root ...PASSED
    Verifying Group Existence: asmadmin ...PASSED
    Verifying Group Existence: asmoper ...PASSED
    Verifying Group Existence: asmdba ...PASSED
    Verifying Group Existence: oinstall ...PASSED
    Verifying Group Membership: oinstall ...PASSED
    Verifying Group Membership: asmdba ...PASSED
    Verifying Group Membership: asmadmin ...PASSED
    Verifying Group Membership: asmoper ...PASSED
    Verifying Run Level ...PASSED
    Verifying Hard Limit: maximum open file descriptors ...PASSED
    Verifying Soft Limit: maximum open file descriptors ...PASSED
    Verifying Hard Limit: maximum user processes ...PASSED
    Verifying Soft Limit: maximum user processes ...PASSED
    Verifying Soft Limit: maximum stack size ...PASSED
    Verifying Architecture ...PASSED
    Verifying OS Kernel Version ...PASSED
    Verifying OS Kernel Parameter: semmsl ...PASSED
    Verifying OS Kernel Parameter: semmns ...PASSED
    Verifying OS Kernel Parameter: semopm ...PASSED
    Verifying OS Kernel Parameter: semmni ...PASSED
    Verifying OS Kernel Parameter: shmmax ...PASSED
    Verifying OS Kernel Parameter: shmmni ...PASSED
    Verifying OS Kernel Parameter: shmall ...PASSED
    Verifying OS Kernel Parameter: file-max ...PASSED
    Verifying OS Kernel Parameter: ip_local_port_range ...PASSED
    Verifying OS Kernel Parameter: rmem_default ...PASSED
    Verifying OS Kernel Parameter: rmem_max ...PASSED
    Verifying OS Kernel Parameter: wmem_default ...PASSED
    Verifying OS Kernel Parameter: wmem_max ...PASSED
    Verifying OS Kernel Parameter: aio-max-nr ...PASSED
    Verifying OS Kernel Parameter: panic_on_oops ...PASSED
    Verifying Package: binutils-2.23.52.0.1 ...PASSED
    Verifying Package: compat-libcap1-1.10 ...PASSED
    Verifying Package: libgcc-4.8.2 (x86_64) ...PASSED
    Verifying Package: libstdc++-4.8.2 (x86_64) ...PASSED
    Verifying Package: libstdc++-devel-4.8.2 (x86_64) ...PASSED
    Verifying Package: sysstat-10.1.5 ...PASSED
    Verifying Package: ksh ...PASSED
    Verifying Package: make-3.82 ...PASSED
    Verifying Package: glibc-2.17 (x86_64) ...PASSED
    Verifying Package: glibc-devel-2.17 (x86_64) ...PASSED
    Verifying Package: libaio-0.3.109 (x86_64) ...PASSED
    Verifying Package: libaio-devel-0.3.109 (x86_64) ...PASSED
    Verifying Package: nfs-utils-1.2.3-15 ...PASSED
    Verifying Package: smartmontools-6.2-4 ...PASSED
    Verifying Package: net-tools-2.0-0.17 ...PASSED
    Verifying Users With Same UID: 0 ...PASSED
    Verifying Current Group ID ...PASSED
    Verifying Root user consistency ...PASSED
    Verifying Package: cvuqdisk-1.0.10-1 ...PASSED
    Verifying Node Addition ...
      Verifying CRS Integrity ...PASSED
      Verifying Clusterware Version Consistency ...PASSED
      Verifying '/u01/app/12.2.0/grid' ...PASSED
    Verifying Node Addition ...PASSED
    Verifying Node Connectivity ...
      Verifying Hosts File ...PASSED
      Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED
      Verifying subnet mask consistency for subnet "192.168.0.0" ...PASSED
      Verifying subnet mask consistency for subnet "192.168.10.0" ...PASSED
    Verifying Node Connectivity ...PASSED
    Verifying Multicast check ...PASSED
    Verifying ASM Integrity ...
      Verifying Node Connectivity ...
        Verifying Hosts File ...PASSED
        Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED
        Verifying subnet mask consistency for subnet "192.168.0.0" ...PASSED
        Verifying subnet mask consistency for subnet "192.168.10.0" ...PASSED
      Verifying Node Connectivity ...PASSED
    Verifying ASM Integrity ...PASSED
    Verifying Device Checks for ASM ...PASSED
    Verifying OCR Integrity ...PASSED
    Verifying Time zone consistency ...PASSED
    Verifying Network Time Protocol (NTP) ...
      Verifying '/etc/chrony.conf' ...PASSED
      Verifying '/var/run/chronyd.pid' ...PASSED
      Verifying Daemon 'chronyd' ...PASSED
      Verifying NTP daemon or service using UDP port 123 ...PASSED
      Verifying chrony daemon is synchronized with at least one external time source ...PASSED
    Verifying Network Time Protocol (NTP) ...PASSED
    Verifying User Not In Group "root": grid ...PASSED
    Verifying resolv.conf Integrity ...
      Verifying (Linux) resolv.conf Integrity ...PASSED
    Verifying resolv.conf Integrity ...PASSED
    Verifying DNS/NIS name service ...PASSED
    Verifying User Equivalence ...PASSED
    Verifying /boot mount ...PASSED
    Verifying zeroconf check ...PASSED

    Pre-check for node addition was successful.

    CVU operation performed:      stage -pre nodeadd
    Date:                         20-Aug-2017 07:24:51
    CVU home:                     /u01/app/12.2.0/grid/
    User:                         grid
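
If some checks fail, cluvfy can generate fixup scripts for the issues it knows how to correct (kernel parameters, missing groups, and the like); for example:

# Re-run the checks and generate fixup scripts for fixable failures
[grid@oralab01]$ cluvfy stage -pre nodeadd -n oralab02 -fixup -verbose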

Add node in silent mode

On the oralab01 server as the grid user:

# Add node in silent mode
[grid@oralab01]$ $ORACLE_HOME/addnode/addnode.sh -silent "CLUSTER_NEW_NODES={oralab02}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={oralab02-vip}" "CLUSTER_NEW_NODE_ROLES={hub}"

    Prepare Configuration in progress.

    Prepare Configuration successful.
    ..................................................   7% Done.

    Copy Files to Remote Nodes in progress.
    ..................................................   12% Done.
    ..................................................   17% Done.
    ..............................
    Copy Files to Remote Nodes successful.
    You can find the log of this install session at:
     /u01/app/oraInventory/logs/addNodeActions2017-09-07_01-30-05-AM.log

    Instantiate files in progress.

    Instantiate files successful.
    ..................................................   49% Done.

    Saving cluster inventory in progress.
    ..................................................   83% Done.

    Saving cluster inventory successful.
    The Cluster Node Addition of /u01/app/12.2.0/grid was successful.
    Please check '/u01/app/12.2.0/grid/inventory/silentInstall2017-09-07_1-30-03-AM.log' for more details.

    Setup Oracle Base in progress.

    Setup Oracle Base successful.
    ..................................................   90% Done.

    Update Inventory in progress.

    Update Inventory successful.
    ..................................................   97% Done.

    As a root user, execute the following script(s):
            1. /u01/app/oraInventory/orainstRoot.sh
            2. /u01/app/12.2.0/grid/root.sh

    Execute /u01/app/oraInventory/orainstRoot.sh on the following nodes:
    [oralab02]
    Execute /u01/app/12.2.0/grid/root.sh on the following nodes:
    [oralab02]

    The scripts can be executed in parallel on all the nodes.

    ..................................................   100% Done.
    Successfully Setup Software.
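
Note: the command above adds oralab02 as a hub node. On an Oracle Flex Cluster, a node can also be added as a leaf node, in which case no VIP is needed; a variant:

# Variant: add the node as a leaf node (Flex Cluster only)
[grid@oralab01]$ $ORACLE_HOME/addnode/addnode.sh -silent "CLUSTER_NEW_NODES={oralab02}" "CLUSTER_NEW_NODE_ROLES={leaf}"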

Then, on the oralab02 server as the root user:

# Execute scripts on oralab02
[root@oralab02]$ /u01/app/oraInventory/orainstRoot.sh
    Changing permissions of /u01/app/oraInventory.
    Adding read,write permissions for group.
    Removing read,write,execute permissions for world.

    Changing groupname of /u01/app/oraInventory to oinstall.
    The execution of the script is complete.

[root@oralab02]$ /u01/app/12.2.0/grid/root.sh
    Check /u01/app/12.2.0/grid/install/root_oralab02.uxora.com_2017-09-07_01-47-07-661108446.log for the output of root script
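
Once root.sh completes, the clusterware stack should be up on the new node; a quick check:

# Check the clusterware stack on the new node
[root@oralab02]$ /u01/app/12.2.0/grid/bin/crsctl check crs
    CRS-4638: Oracle High Availability Services is online
    CRS-4537: Cluster Ready Services is online
    CRS-4529: Cluster Synchronization Services is online
    CRS-4533: Event Manager is online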

Check if successful

On the oralab01 server as the grid user, verify that the node addition was successful:

# Verify successful nodeadd
[grid@oralab01]$ cluvfy stage -post nodeadd -n oralab02

    Verifying Node Connectivity ...
      Verifying Hosts File ...PASSED
      Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED
      Verifying subnet mask consistency for subnet "192.168.0.0" ...PASSED
      Verifying subnet mask consistency for subnet "192.168.10.0" ...PASSED
    Verifying Node Connectivity ...PASSED
    Verifying Cluster Integrity ...PASSED
    Verifying Node Addition ...
      Verifying CRS Integrity ...PASSED
      Verifying Clusterware Version Consistency ...PASSED
      Verifying '/u01/app/12.2.0/grid' ...PASSED
    Verifying Node Addition ...PASSED
    Verifying Multicast check ...PASSED
    Verifying Node Application Existence ...PASSED
    Verifying Single Client Access Name (SCAN) ...
      Verifying DNS/NIS name service 'oralab-cluster-scan' ...
        Verifying Name Service Switch Configuration File Integrity ...PASSED
      Verifying DNS/NIS name service 'oralab-cluster-scan' ...PASSED
    Verifying Single Client Access Name (SCAN) ...PASSED
    Verifying User Not In Group "root": grid ...PASSED
    Verifying Clock Synchronization ...
    CTSS is in Observer state. Switching over to clock synchronization checks using NTP

      Verifying Network Time Protocol (NTP) ...
        Verifying '/etc/chrony.conf' ...PASSED
        Verifying '/var/run/chronyd.pid' ...PASSED
        Verifying Daemon 'chronyd' ...PASSED
        Verifying NTP daemon or service using UDP port 123 ...PASSED
        Verifying chrony daemon is synchronized with at least one external time source ...PASSED
      Verifying Network Time Protocol (NTP) ...PASSED
    Verifying Clock Synchronization ...PASSED

    Post-check for node addition was successful.

    CVU operation performed:      stage -post nodeadd
    Date:                         08-Sep-2017 00:16:08
    CVU home:                     /u01/app/12.2.0/grid/
    User:                         grid

List resource status

[grid@oralab01]$ olsnodes -s -t
    oralab01        Active  Unpinned
    oralab02        Active  Unpinned

[grid@oralab01]$ crsctl stat res -t
    --------------------------------------------------------------------------------
    Name           Target  State        Server                   State details
    --------------------------------------------------------------------------------
    Local Resources
    --------------------------------------------------------------------------------
    ora.ASMNET1LSNR_ASM.lsnr
                   ONLINE  ONLINE       oralab01                 STABLE
                   ONLINE  ONLINE       oralab02                 STABLE
    ora.DATA.dg
                   ONLINE  ONLINE       oralab01                 STABLE
                   ONLINE  ONLINE       oralab02                 STABLE
    ora.LISTENER.lsnr
                   ONLINE  ONLINE       oralab01                 STABLE
                   ONLINE  ONLINE       oralab02                 STABLE
    ora.chad
                   ONLINE  ONLINE       oralab01                 STABLE
                   ONLINE  ONLINE       oralab02                 STABLE
    ora.net1.network
                   ONLINE  ONLINE       oralab01                 STABLE
                   ONLINE  ONLINE       oralab02                 STABLE
    ora.ons
                   ONLINE  ONLINE       oralab01                 STABLE
                   ONLINE  ONLINE       oralab02                 STABLE
    ora.proxy_advm
                   OFFLINE OFFLINE      oralab01                 STABLE
                   OFFLINE OFFLINE      oralab02                 STABLE
    --------------------------------------------------------------------------------
    Cluster Resources
    --------------------------------------------------------------------------------
    ora.LISTENER_SCAN1.lsnr
          1        ONLINE  ONLINE       oralab02                 STABLE
    ora.LISTENER_SCAN2.lsnr
          1        ONLINE  ONLINE       oralab01                 STABLE
    ora.LISTENER_SCAN3.lsnr
          1        ONLINE  ONLINE       oralab01                 STABLE
    ora.MGMTLSNR
          1        ONLINE  ONLINE       oralab01                 169.254.147.90 192.1
                                                                 68.10.31,STABLE
    ora.asm
          1        ONLINE  ONLINE       oralab01                 Started,STABLE
          2        ONLINE  ONLINE       oralab02                 Started,STABLE
          3        OFFLINE OFFLINE                               STABLE
    ora.cvu
          1        ONLINE  ONLINE       oralab01                 STABLE
    ora.mgmtdb
          1        ONLINE  ONLINE       oralab01                 Open,STABLE
    ora.oralab01.vip
          1        ONLINE  ONLINE       oralab01                 STABLE
    ora.oralab02.vip
          1        ONLINE  ONLINE       oralab02                 STABLE
    ora.qosmserver
          1        ONLINE  ONLINE       oralab01                 STABLE
    ora.scan1.vip
          1        ONLINE  ONLINE       oralab02                 STABLE
    ora.scan2.vip
          1        ONLINE  ONLINE       oralab01                 STABLE
    ora.scan3.vip
          1        ONLINE  ONLINE       oralab01                 STABLE
    --------------------------------------------------------------------------------
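
You can also check the clusterware stack on every node at once:

# Check the cluster stack on all nodes
[grid@oralab01]$ crsctl check cluster -all
    **************************************************************
    oralab01:
    CRS-4537: Cluster Ready Services is online
    CRS-4529: Cluster Synchronization Services is online
    CRS-4533: Event Manager is online
    **************************************************************
    oralab02:
    CRS-4537: Cluster Ready Services is online
    CRS-4529: Cluster Synchronization Services is online
    CRS-4533: Event Manager is online
    **************************************************************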

Add an instance to the new node

First, you need to "Configure ssh between nodes" with the oracle user, as explained in the first part of this article.
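
The steps are the same as for the grid user earlier; a condensed sketch (run the key exchange in both directions, from oralab01 and from oralab02):

# Exchange ssh keys as oracle user (repeat in the other direction from oralab02)
[oracle@oralab01]$ ssh-keygen
[oracle@oralab01]$ ssh-copy-id oralab02
[oracle@oralab01]$ ssh oralab02 hostname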

Then execute the following on the oralab01 server as the oracle user:

# Add instance to a node oralab02
[oracle@oralab01]$ dbca -silent -addInstance -gdbName UXOCDBRAC -nodeName oralab02
    Adding instance
    1% complete
    2% complete
    6% complete
    13% complete
    20% complete
    26% complete
    33% complete
    40% complete
    46% complete
    53% complete
    66% complete
    Completing instance management.
    76% complete
    100% complete
    Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/UXOCDBRAC/UXOCDBRA.log" for further details.
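
You can also verify the new instance from the clusterware side; assuming the database resource is named UXOCDBRAC as above:

# Check database instances across the cluster
[oracle@oralab01]$ srvctl status database -d UXOCDBRAC
    Instance UXOCDB1 is running on node oralab01
    Instance UXOCDB2 is running on node oralab02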

[oracle@oralab01]$ sqlplus / as sysdba

SQL> select * from v$active_instances;

    INST_NUMBER INST_NAME                            CON_ID
    ----------- -------------------------------- ----------
              1 oralab01.uxora.com:UXOCDB1               0
              2 oralab02.uxora.com:UXOCDB2               0

What's next ...

- Remove a node (check Remove a Node from a cluster Oracle 12cR2 RAC)
- Install Oracle Enterprise Manager Cloud Control 13cR2 (check how to install OEM Cloud Control 12c)
- Install Oracle 12c RDBMS (check Silent install of Oracle 12c RDBMS)

Please leave comments and suggestions,
Michel.
