>> I need your help to enable hot switchover of iSCSI under a
>> Pacemaker/Corosync cluster, which has an iSCSI device based on a two-node
>> DRBD replication. I've got the Pacemaker/Corosync cluster working, and
>> DRBD replication is also working, but it is stuck at iSCSI: I can manually
>> start a tgtd on one node, so the VCSA can recognize the iSCSI disk and
>> create a VMFS/storage object on it, and then I can create a test VM on
>> that VMFS. But when I switch the Primary/Secondary roles of DRBD, the
>> test VM keeps running, but the underlying disk becomes read-only. As far
>> as I know, tgtd should be handled by Pacemaker so that it automatically
>> starts on the primary DRBD instance, but in my situation it sadly is NOT.
>>
>> I've tried all kinds of resources/manuals/documents, but they all mix in
>> extra information, other systems, and other software versions. One of my
>> BEST references (the closest configuration to mine) is this url: […]
>> The difference between me and that article, I think, is that I don't have
>> an LVM volume but only a raw iSCSI disk, and I have to translate the CRM
>> commands […]. But after I "copied" the configuration from that article,
>> my cluster cannot start anymore. I've tried removing the LVM resource
>> (which had caused a "device not found" error), but the resource group
>> still can't start, without any explicit "reason" from Pacemaker.
>>
>> The whole configuration runs under a two-node ESXi 6.5 cluster, which has
>> a VCSA installed on one ESXi host. I have a simple diagram in the
>> attachment, which may illustrate the deployment. The involved hosts are
>> all mapped through local DNS, which also includes the floating VIP; the
>> local domain is s-ka.local: […]
>> Both DRBD servers are CentOS 7.5; the installed packages are here: […]
>
> Pacemaker only handles resources that were started by Pacemaker.
> According to your output below, in all cases the resource was stopped from
> Pacemaker's point of view, and all Pacemaker attempts to start the
> resource […] of the specific resource agent; sadly, I am not familiar
> with iSCSI targets. The Pacemaker logs may include more information from
> the resource agent than […]

I've checked the logs the whole time, but there is nothing helpful, just a
bunch of heartbeat messages. Anyway, I've read the book "Packt - CentOS
High Availability", published in 2015, got some new ideas, and tried them
out. The situation is:

    pcs resource create p_iSCSITarget ocf:heartbeat:iSCSITarget
    pcs resource create p_iSCSILogicalUnit ocf:heartbeat:iSCSILogicalUnit \
        implementation="tgt" target_iqn=":disk" lun="10"
    pcs resource group add p_iSCSI ClusterIP p_iSCSITarget p_iSCSILogicalUnit
    pcs constraint colocation set ClusterIP p_iSCSITarget p_iSCSILogicalUnit

The difference from the previous version is here: use the iqn ":disk.1",
where the last ".1" maybe means the "tid".

Now I have a new problem: the resources and tgtd are started, but although
I set a "colocation constraint", Pacemaker always tries to […]

    Current DC: […] (version 1.1.18-11.el7_5.3-2b07d5c5a9)
    Last change: Wed Oct 24 08:43:24 2018 by root via cibadmin on […]
    Online: […]
    ClusterIP (ocf::heartbeat:IPaddr2): Started
    p_iSCSITarget (ocf::heartbeat:iSCSITarget): Started
    p_iSCSILogicalUnit (ocf::heartbeat:iSCSILogicalUnit): Started
    * p_iSCSITarget_start_0 on […] 'unknown error' (1):
    * p_iSCSILogicalUnit_start_0 on […] 'unknown error' (1):
    Set ClusterIP p_iSCSITarget p_iSCSILogicalUnit
        (id:pcs_rsc_set_ClusterIP_p_iSCSITarget_p_iSCSILogicalUnit) setoptions
        (id:pcs_rsc_colocation_set_ClusterIP_p_iSCSITarget_p_iSCSILogicalUnit)

How do I solve this? Thank you, people, in advance!
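The pcs commands in the follow-up group ClusterIP, p_iSCSITarget, and p_iSCSILogicalUnit together, but nothing in the posted configuration ties that group to the DRBD Primary, which matches the symptom that tgtd does not follow a Primary/Secondary switch. Below is a minimal sketch, not from the thread, of one way to add that link with pcs on CentOS 7. The DRBD resource name `r0`, the device `/dev/drbd0`, and the IQN are invented for illustration; substitute your own values.

```shell
# SKETCH ONLY: assumed names -- DRBD resource "r0", backing device
# "/dev/drbd0", and an example IQN. Only ClusterIP, p_iSCSITarget,
# p_iSCSILogicalUnit, and the p_iSCSI group come from the original post.

# DRBD as a master/slave resource, so Pacemaker manages promotion:
pcs resource create p_drbd ocf:linbit:drbd drbd_resource=r0 \
    op monitor interval=29s role=Master \
    op monitor interval=31s role=Slave
pcs resource master ms_drbd p_drbd \
    master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true

# iSCSI target and LU, grouped with the floating IP so they move as a unit:
pcs resource create p_iSCSITarget ocf:heartbeat:iSCSITarget \
    implementation=tgt iqn=iqn.2018-10.local.s-ka:disk
pcs resource create p_iSCSILogicalUnit ocf:heartbeat:iSCSILogicalUnit \
    implementation=tgt target_iqn=iqn.2018-10.local.s-ka:disk \
    lun=10 path=/dev/drbd0
pcs resource group add p_iSCSI ClusterIP p_iSCSITarget p_iSCSILogicalUnit

# The two constraints the posted configuration appears to lack:
# run the group only where DRBD is Master, and only after promotion.
pcs constraint colocation add p_iSCSI with master ms_drbd INFINITY
pcs constraint order promote ms_drbd then start p_iSCSI
```

With the colocation on the Master role plus the order on promote, moving the DRBD Master should drag the VIP, target, and logical unit along with it, instead of leaving tgtd behind on the demoted node.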
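The status output ends with `p_iSCSITarget_start_0` and `p_iSCSILogicalUnit_start_0` failing with 'unknown error' (1), which is just the generic OCF exit code. A sketch of how one might dig past it on CentOS 7; the resource names are the ones from the post, everything else is standard tooling rather than anything confirmed in the thread.

```shell
# SKETCH ONLY: standard CentOS 7 diagnostics for a failing OCF agent.

# Run the agent's start action in the foreground with verbose output
# (bypasses the cluster, so make sure the resource is stopped first):
pcs resource debug-start p_iSCSITarget --full

# Resource-agent and Pacemaker messages usually land here on CentOS 7:
grep -E 'iSCSITarget|iSCSILogicalUnit' /var/log/messages
journalctl -u pacemaker

# Ask tgtd directly what targets and LUNs it actually has configured:
tgtadm --lld iscsi --mode target --op show

# After fixing the cause, clear the failure history so Pacemaker retries:
pcs resource cleanup p_iSCSITarget
```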