Ceph cluster error (Monitor clock skew detected)


[root@ceph-node1 ~]# ceph -s
    cluster 79eb607c-7c60-4738-a0eb-3a5add7dcb05
     health HEALTH_WARN
            clock skew detected on mon.ceph-node2, mon.ceph-node3
            Monitor clock skew detected 
     monmap e5: 3 mons at {ceph-node1=192.168.2.128:6789/0,ceph-node2=192.168.2.129:6789/0,ceph-node3=192.168.2.130:6789/0}
            election epoch 62, quorum 0,1,2 ceph-node1,ceph-node2,ceph-node3
     mdsmap e49: 1/1/1 up {0=ceph-node1=up:active}
     osdmap e124: 3 osds: 3 up, 3 in
      pgmap v296: 84 pgs, 3 pools, 65770 bytes data, 21 objects
            15522 MB used, 201 GB / 228 GB avail
                  84 active+clean
  client io 3566 B/s wr, 1 op/s

[ceph@ceph-node1 ceph]$ vim ~/ceph/ceph.conf
mon clock drift allowed = 2
mon clock drift warn backoff = 30    
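
These two options relax the monitors' clock-skew check: mon clock drift allowed is the drift, in seconds, tolerated between monitor clocks before the warning is raised (the stock default is much tighter, on the order of 0.05 s), and mon clock drift warn backoff throttles how often the warning is re-issued. A minimal sketch of the resulting stanza in the deploy directory's ceph.conf, assuming the two lines are appended under [global] (placing them under [mon] works as well):

[global]
mon clock drift allowed = 2
mon clock drift warn backoff = 30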
[ceph@ceph-node1 ceph]$ ceph-deploy --overwrite-conf admin ceph-node1 ceph-node2 ceph-node3
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.37): /usr/bin/ceph-deploy --overwrite-conf admin ceph-node1 ceph-node2 ceph-node3
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : True
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x15944d0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  client                        : ['ceph-node1', 'ceph-node2', 'ceph-node3']
[ceph_deploy.cli][INFO  ]  func                          : <function admin at 0x14b0a28>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph-node1
[ceph-node1][DEBUG ] connection detected need for sudo
[ceph-node1][DEBUG ] connected to host: ceph-node1 
[ceph-node1][DEBUG ] detect platform information from remote host
[ceph-node1][DEBUG ] detect machine type
[ceph-node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph-node2
[ceph-node2][DEBUG ] connection detected need for sudo
[ceph-node2][DEBUG ] connected to host: ceph-node2 
[ceph-node2][DEBUG ] detect platform information from remote host
[ceph-node2][DEBUG ] detect machine type
[ceph-node2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph-node3
[ceph-node3][DEBUG ] connection detected need for sudo
[ceph-node3][DEBUG ] connected to host: ceph-node3 
[ceph-node3][DEBUG ] detect platform information from remote host
[ceph-node3][DEBUG ] detect machine type
[ceph-node3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
Error in sys.exitfunc:
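
The trailing "Error in sys.exitfunc:" line is a harmless exit-time message seen with older ceph-deploy releases; the push itself completed on all three hosts. As a quick sanity check (a hypothetical step, not part of the original session), the pushed file can be inspected on each node:

[root@ceph-node2 ~]# grep "mon clock drift" /etc/ceph/ceph.conf
mon clock drift allowed = 2
mon clock drift warn backoff = 30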
[root@ceph-node1 ~]# service ceph restart mon    (restart the mon on each monitor node)
=== mon.ceph-node1 === 
=== mon.ceph-node1 === 
Stopping Ceph mon.ceph-node1 on ceph-node1...kill 2540...done
=== mon.ceph-node1 === 
Starting Ceph mon.ceph-node1 on ceph-node1...
Starting ceph-create-keys on ceph-node1...
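
The restart above only reloads the configuration on ceph-node1; each monitor reads the clock-drift options from its own /etc/ceph/ceph.conf, so the same restart has to be repeated on ceph-node2 and ceph-node3. A minimal sketch, assuming ssh access from ceph-node1 to the other monitor hosts:

[root@ceph-node1 ~]# ssh ceph-node2 service ceph restart mon
[root@ceph-node1 ~]# ssh ceph-node3 service ceph restart mon
[root@ceph-node1 ~]# ceph health detail    # while the warning is still active, shows the measured skew per mon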
[root@ceph-node1 ~]# ceph -s
    cluster 79eb607c-7c60-4738-a0eb-3a5add7dcb05
     health HEALTH_OK
     monmap e5: 3 mons at {ceph-node1=192.168.2.128:6789/0,ceph-node2=192.168.2.129:6789/0,ceph-node3=192.168.2.130:6789/0}
            election epoch 68, quorum 0,1,2 ceph-node1,ceph-node2,ceph-node3
     mdsmap e49: 1/1/1 up {0=ceph-node1=up:active}
     osdmap e124: 3 osds: 3 up, 3 in
      pgmap v299: 84 pgs, 3 pools, 65770 bytes data, 21 objects
            15522 MB used, 201 GB / 228 GB avail
                  84 active+clean


Root cause: clock drift between the nodes.
Stop the ntpd service, sync the time once with ntpdate ntp1.aliyun.com, then start ntpd again, as sketched below.
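
A minimal sketch of that sequence, run on every node (ntp1.aliyun.com is the server mentioned above; any reachable NTP server will do):

[root@ceph-node1 ~]# service ntpd stop
[root@ceph-node1 ~]# ntpdate ntp1.aliyun.com
[root@ceph-node1 ~]# service ntpd start
[root@ceph-node1 ~]# chkconfig ntpd on    # optional: keep ntpd enabled across reboots

Raising mon clock drift allowed only hides the symptom; keeping the nodes' clocks in sync with NTP is the real fix.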