ASM Proxy: New Instance Type in Oracle 12c
Prior to Oracle Database 12c, an ASM instance ran on every node in the cluster, and the ASM Cluster File System (ACFS) service on a node connected to the local ASM instance running on the same host to fetch the required metadata. If the ASM instance on a node failed, ACFS file systems could no longer be accessed on that node.
With the introduction of Flex ASM in Oracle 12c, the hard dependency between ASM and its clients has been relaxed: only a small number of ASM instances need to run, on a subset of servers in the cluster. To keep ACFS services available on nodes without an ASM instance, Flex ASM introduces a new instance type, the ASM proxy instance, which works on behalf of a real ASM instance. The ASM proxy instance fetches the metadata about ACFS volumes and file systems from an ASM instance and caches it. If no ASM instance is available locally, the ASM proxy instance connects to another ASM instance over the network to fetch the metadata. Additionally, if the local ASM instance fails, the ASM proxy instance can fail over to a surviving ASM instance on a different server, so shared storage and ACFS file systems remain available without interruption.
Whenever I/O needs to be performed on an ACFS file system, the ASM proxy instance passes the extent map and disk list information to the ADVM driver, which then caches this metadata. ADVM directs all ACFS I/O to the appropriate ASM disk group disk locations, including any mirrored extent updates. In other words, all ACFS I/O is written through the ADVM OS kernel driver directly to storage; no I/O is delivered through the ASM proxy or the ASM instance.
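Because the I/O path runs through the operating system kernel rather than through the instances, one way to see the pieces involved is to check that the ADVM/ACFS kernel modules are loaded and to ask the driver about a mounted file system. The two commands below are only a minimal sketch: the module names (oracleacfs, oracleadvm, oracleoks) are the ones normally seen on Linux, the mount point is the one created later in this demonstration, and the output is omitted here.

[root@host01 ~]# lsmod | grep oracle
[root@host01 ~]# acfsutil info fs /mnt/acfsmounts/acfs1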
Which nodes can host the ASM proxy instance?
The ASM proxy instance needs to run only in clusters employing Flex ASM, and only on the nodes where access to ACFS is required. In a standard cluster it can run on any node, whereas in a Flex cluster only Hub nodes can host it (Fig. 1). It can run on the same node as an ASM instance or on a different node, and it can be shut down when ACFS is not in use; the srvctl commands for checking and managing it are sketched after Fig. 1.
Fig. 1
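For reference, the ASM proxy is managed with srvctl like other Grid Infrastructure resources. The commands below are a minimal sketch using the 12.1 srvctl asm -proxy options and the node names from this demonstration:

[root@host01 ~]# srvctl status asm -proxy
[root@host01 ~]# srvctl stop asm -proxy -n host02
[root@host01 ~]# srvctl start asm -proxy -n host02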
The ASM proxy instance has:
- INSTANCE_TYPE initialization parameter set to ASMPROXY
- ORACLE_SID set to +APX<node number> (a quick way to confirm both is sketched after this list)
- Metadata related to ACFS is cached in the ASM proxy instance rather than the ASM instance
- The ASM proxy instance obtains the metadata related to ACFS from:
  - An ASM instance running locally
  - An ASM instance running remotely, if the local ASM instance is not running
- Availability of ACFS on a node:
  - Requires that an ASM proxy instance be running on that node
  - Is not affected by the failure of the local ASM instance
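As a quick check of the first two attributes, you can connect to the proxy instance directly. This is only a minimal sketch: +APX1 assumes node number 1, and it assumes the grid user's environment points at the Grid Infrastructure home.

[grid@host01 ~]$ export ORACLE_SID=+APX1
[grid@host01 ~]$ sqlplus / as sysasm
SQL> show parameter instance_type
SQL> show parameter instance_name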
Overview:
- Create a Cloud File System Resource on DATA disk group.
- Verify that:
- All the ASM and ACFS-related resources are running on both nodes
- Metadata related to ACFS is cached in ASM-Proxy instance rather than ASM instance
- ASM-Proxy instance obtains the metadata related to ACFS from ASM instance running locally
- Verify that on stopping ASM on a node (host02):
- The ASM-Proxy instance obtains the metadata related to ACFS from an ASM instance running remotely
- Availability of ACFS on the node is not affected
- Verify that on stopping the ASM proxy instance on a node (host02), ACFS cannot be accessed on that node even if the ASM instance is available.
Demonstration:
[root@host01 ~]# crsctl get cluster mode status
Cluster is running in "flex" mode

[root@host01 ~]# crsctl get node role status -all
Node 'host01' active role is 'hub'
Node 'host02' active role is 'hub'

ASMCMD> showclustermode
ASM cluster : Flex mode enabled
- Create a Cloud File System Resource on DATA disk group:
- Check that the ASM proxy instance is not running yet
[root@host01 ~]# crsctl stat res ora.proxy_advm -t
CRS-2613: Could not find resource 'ora.proxy_advm'.
- Create a volume VOL1 on DATA disk group
[grid@host01 root]$ asmcmd setattr -G DATA compatible.advm 12.1.0.0.0
[grid@host01 root]$ asmcmd volcreate -G DATA -s 200m VOL1
[grid@host01 root]$ asmcmd volinfo -G DATA VOL1
Diskgroup Name: DATA
         Volume Name: VOL1
         Volume Device: /dev/asm/vol1-104
         State: ENABLED
         Size (MB): 256
         Resize Unit (MB): 64
         Redundancy: MIRROR
         Stripe Columns: 8
         Stripe Width (K): 1024
         Usage:
         Mountpath:
Note that the requested size of 200 MB has been rounded up to 256 MB, the next multiple of the volume's 64 MB resize unit.
- As soon as the volume is created, an ASM proxy instance is automatically started on both nodes.
[root@host01 ~]# crsctl stat res ora.proxy_advm -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.proxy_advm
               ONLINE  ONLINE       host01                   STABLE
               ONLINE  ONLINE       host02                   STABLE

[root@host01 ~]# ps -ef |grep pmon
grid      7209     1  0 14:28 ?        00:00:00 asm_pmon_+ASM1
grid     10076     1  0 14:34 ?        00:00:00 apx_pmon_+APX1
root     11297  7103  0 14:37 pts/1    00:00:00 grep pmon

[root@host02 ~]# ps -ef |grep pmon
grid     13901     1  0 14:33 ?        00:00:00 apx_pmon_+APX2
root     15113 12648  0 14:37 pts/3    00:00:00 grep pmon
grid     16229     1  0 13:13 ?        00:00:00 asm_pmon_+ASM2
grid     20548     1  0 13:16 ?        00:00:00 mdb_pmon_-MGMTDB
- Create an ACFS File System on the newly created volume VOL1
[root@host01 ~]# mkfs -t acfs /dev/asm/vol1-104
mkfs.acfs: version                   = 12.1.0.2.0
mkfs.acfs: on-disk version           = 39.0
mkfs.acfs: volume                    = /dev/asm/vol1-104
mkfs.acfs: volume size               = 268435456  ( 256.00 MB )
mkfs.acfs: Format complete.
- Create Corresponding Mount Points on both nodes
[root@host01 ~]# mkdir -p /mnt/acfsmounts/acfs1

[root@host02 ~]# mkdir -p /mnt/acfsmounts/acfs1
- Configure and start the Cloud File System resource on the volume device VOL1 with the mount point /mnt/acfsmounts/acfs1
[root@host01 ~]# srvctl add filesystem -path /mnt/acfsmounts/acfs1 -device /dev/asm/vol1-104
[root@host01 ~]# srvctl start filesystem -d /dev/asm/vol1-104
[root@host01 ~]# srvctl status filesystem -device /dev/asm/vol1-104
ACFS file system /mnt/acfsmounts/acfs1 is mounted on nodes host01,host02
- Verify the Cloud File System resource by creating a small text file on it from host01 and then reading it successfully from host02
[root@host01 ~]# echo "Test File on ACFS" > /mnt/acfsmounts/acfs1/testfile.txt

[root@host02 asm]# cat /mnt/acfsmounts/acfs1/testfile.txt
Test File on ACFS
- Verify that all the ASM and ACFS-related resources are running on both nodes
[root@host01 ~]# crsctl stat res ora.asm ora.DATA.dg ora.DATA.VOL1.advm ora.data.vol1.acfs -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.VOL1.advm
               ONLINE  ONLINE       host01                   Volume device /dev/asm/vol1-104 is online,STABLE
               ONLINE  ONLINE       host02                   Volume device /dev/asm/vol1-104 is online,STABLE
ora.DATA.dg
               ONLINE  ONLINE       host01                   STABLE
               ONLINE  ONLINE       host02                   STABLE
ora.asm
               ONLINE  ONLINE       host01                   Started,STABLE
               ONLINE  ONLINE       host02                   Started,STABLE
ora.data.vol1.acfs
               ONLINE  ONLINE       host01                   mounted on /mnt/acfsmounts/acfs1,STABLE
               ONLINE  ONLINE       host02                   mounted on /mnt/acfsmounts/acfs1,STABLE
--------------------------------------------------------------------------------
- Verify that metadata for ACFS is not cached in the ASM instance
+ASM1> sho parameter instance_name

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
instance_name                        string      +ASM1

+ASM1> select name, state from v$asm_diskgroup;

NAME                           STATE
------------------------------ -----------
DATA                           MOUNTED

+ASM1> select fs_name, vol_device from v$ASM_ACFSVOLUMES;

no rows selected

+ASM1> select fs_name, state from v$asm_filesystem;

no rows selected

+ASM1> select volume_name, volume_device, mountpath from v$asm_volume;

VOLUME_NAM VOLUME_DEVICE        MOUNTPATH
---------- -------------------- -------------------------
VOL1       /dev/asm/vol1-104    /mnt/acfsmounts/acfs1
- Verify that Metadata for ACFS is cached in ASM-Proxy instance:
[grid@host01 ~]$ cat /etc/oratab | grep APX
+APX1:/u01/app/12.1.0/grid:N          # line added by Agent

[grid@host01 ~]# ps -ef |grep pmon
grid      8341     1  0 12:10 ?        00:00:00 asm_pmon_+ASM1
grid     10072     1  0 12:12 ?        00:00:00 apx_pmon_+APX1
grid     10752     1  0 12:13 ?        00:00:00 mdb_pmon_-MGMTDB
root     19167  7644  0 12:22 pts/1    00:00:00 grep pmon

+APX1> sho parameter instance_type

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
instance_type                        string      ASMPROXY

+APX1> sho parameter instance_name

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
instance_name                        string      +APX1

+APX1> select name, state from v$asm_diskgroup;

NAME                           STATE
------------------------------ -----------
DATA                           CONNECTED

+APX1> select fs_name, vol_device from v$ASM_ACFSVOLUMES;

FS_NAME                        VOL_DEVICE
------------------------------ --------------------
/mnt/acfsmounts/acfs1          /dev/asm/vol1-104

+APX1> select fs_name, state from v$asm_filesystem;

FS_NAME                        STATE
------------------------------ -------------
/mnt/acfsmounts/acfs1          AVAILABLE

+APX1> select volume_name, volume_device, mountpath from v$asm_volume;

VOLUME_NAM VOLUME_DEVICE        MOUNTPATH
---------- -------------------- -------------------------
VOL1       /dev/asm/vol1-104    /mnt/acfsmounts/acfs1
- Verify that both ASM proxy instances obtain the metadata related to ACFS from the ASM instance running locally:
+ASM1> SELECT DISTINCT i.instance_name asm_instance_name,
                       i.host_name asm_host_name,
                       c.instance_name client_instance_name,
                       c.status
       FROM   gv$instance i, gv$asm_client c
       WHERE  i.inst_id = c.inst_id;

ASM_INSTANCE_NAM ASM_HOST_NAME        CLIENT_INSTANCE_NAME STATUS
---------------- -------------------- -------------------- ------------
+ASM1            host01.example.com   +APX1                CONNECTED
+ASM1            host01.example.com   +ASM1                CONNECTED
+ASM2            host02.example.com   +APX2                CONNECTED
- Verify that on stopping ASM on a node (host02):
- ASM-Proxy instance obtains the metadata related to ACFS from an ASM instance running remotely
- Availability of ACFS on the node is not affected
[root@host01 ~]# srvctl stop asm -n host02
PRCR-1014 : Failed to stop resource ora.asm
PRCR-1065 : Failed to stop resource ora.asm
CRS-2529: Unable to act on 'ora.asm' because that would require stopping or relocating 'ora.DATA.dg', but the force option was not specified

[root@host01 ~]# srvctl stop asm -n host02 -f

[root@host01 ~]# srvctl status asm
ASM is running on host01

+ASM1> SELECT DISTINCT i.instance_name asm_instance_name,
                       i.host_name asm_host_name,
                       c.instance_name client_instance_name,
                       c.status
       FROM   gv$instance i, gv$asm_client c
       WHERE  i.inst_id = c.inst_id;

ASM_INSTANCE_NAM ASM_HOST_NAME        CLIENT_INSTANCE_NAME STATUS
---------------- -------------------- -------------------- ------------
+ASM1            host01.example.com   +APX1                CONNECTED
+ASM1            host01.example.com   +APX2                CONNECTED
+ASM1            host01.example.com   +ASM1                CONNECTED

[root@host01 ~]# crsctl stat res ora.asm ora.DATA.dg ora.DATA.VOL1.advm ora.data.vol1.acfs -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.VOL1.advm
               ONLINE  ONLINE       host01                   Volume device /dev/asm/vol1-104 is online,STABLE
               ONLINE  ONLINE       host02                   Volume device /dev/asm/vol1-104 is online,STABLE
ora.DATA.dg
               ONLINE  ONLINE       host01                   STABLE
               OFFLINE OFFLINE      host02                   STABLE
ora.data.vol1.acfs
               ONLINE  ONLINE       host01                   mounted on /mnt/acfsmounts/acfs1,STABLE
               ONLINE  ONLINE       host02                   mounted on /mnt/acfsmounts/acfs1,STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       host01                   Started,STABLE
      2        OFFLINE OFFLINE                               Instance Shutdown,STABLE
--------------------------------------------------------------------------------
- Verify that on stopping the ASM proxy instance on a node (host02), ACFS cannot be accessed on that node even though the ASM instance is present:
[root@host01 ~]# srvctl start asm -n host02

[root@host01 ~]# srvctl stop asm -proxy -n host02
PRCR-1014 : Failed to stop resource ora.proxy_advm
PRCR-1065 : Failed to stop resource ora.proxy_advm
CRS-2529: Unable to act on 'ora.proxy_advm' because that would require stopping or relocating 'ora.DATA.VOL1.advm', but the force option was not specified

[root@host01 ~]# srvctl stop asm -proxy -n host02 -f

[root@host01 ~]# crsctl stat res ora.asm ora.DATA.dg ora.DATA.VOL1.advm ora.data.vol1.acfs -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.VOL1.advm
               ONLINE  ONLINE       host01                   Volume device /dev/asm/vol1-104 is online,STABLE
               OFFLINE OFFLINE      host02                   Volume device /dev/asm/vol1-104 is offline,STABLE
ora.DATA.dg
               ONLINE  ONLINE       host01                   STABLE
               ONLINE  ONLINE       host02                   STABLE
ora.data.vol1.acfs
               ONLINE  ONLINE       host01                   mounted on /mnt/acfsmounts/acfs1,STABLE
               OFFLINE OFFLINE      host02                   volume /mnt/acfsmounts/acfs1 is unmounted,STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       host01                   Started,STABLE
      2        ONLINE  ONLINE       host02                   Started,STABLE
--------------------------------------------------------------------------------

Thus, in a Flex cluster, all metadata about ACFS is cached in the ASM proxy instance, and the availability of the ADVM volume and the ACFS file system on a node depends only on the availability of the ASM proxy instance on that node, irrespective of the status of the local ASM instance.
Summary
- In a cluster with an ASM instance running on every node:
- Metadata related to ACFS is cached in the ASM instance
- Failure of the local ASM instance disrupts the availability of ACFS on that node
- Flex ASM introduces a new instance type, the ASM proxy instance, which obtains metadata from an ASM instance on behalf of ACFS.
- In a cluster with Flex ASM:
- ASM instances run on a subset of nodes
- Metadata related to ACFS is cached in the ASM-Proxy instance rather than the ASM instance
- The ASM proxy instance obtains the metadata related to ACFS from an ASM instance running locally or, if necessary, remotely
- Availability of ACFS on a node depends only on the availability of the ASM proxy instance on that node, irrespective of the status of the local ASM instance.