I installed MongoDB in Docker and ran into a problem setting up sharding.

mongos --configdb config:27017 --keyFile /opt/keyfile/mongodb-keyfile

2015-07-04T11:25:28.926+0800 W SHARDING running with 1 config server should be done only for testing purposes and is not recommended for production
2015-07-04T11:25:28.930+0800 I CONTROL  ** WARNING: You are running this process as the root user, which is not recommended.
2015-07-04T11:25:28.930+0800 I CONTROL
2015-07-04T11:25:28.949+0800 I SHARDING [mongosMain] MongoS version 3.0.3 starting: pid=23 port=27017 64-bit host=2e9001af0e19 (--help for usage)
2015-07-04T11:25:28.949+0800 I CONTROL  [mongosMain] db version v3.0.3
2015-07-04T11:25:28.949+0800 I CONTROL  [mongosMain] git version: b40106b36eecd1b4407eb1ad1af6bc60593c6105
2015-07-04T11:25:28.949+0800 I CONTROL  [mongosMain] OpenSSL version: OpenSSL 1.0.1e-fips 11 Feb 2013
2015-07-04T11:25:28.949+0800 I CONTROL  [mongosMain] build info: Linux ip-10-165-116-142 3.10.0-121.el7.x86_64 #1 SMP Tue Apr 8 10:48:19 EDT 2014 x86_64 BOOST_LIB_VERSION=1_49
2015-07-04T11:25:28.949+0800 I CONTROL  [mongosMain] allocator: tcmalloc
2015-07-04T11:25:28.949+0800 I CONTROL  [mongosMain] options: { security: { keyFile: "/opt/keyfile/mongodb-keyfile" }, sharding: { configDB: "config:27017" } }
2015-07-04T11:25:29.002+0800 I SHARDING [LockPinger] creating distributed lock ping thread for config:27017 and process 2e9001af0e19:27017:1435980328:1804289383 (sleeping for 30000ms)
2015-07-04T11:25:29.003+0800 I SHARDING [LockPinger] cluster config:27017 pinged successfully at Sat Jul 4 11:25:29 2015 by distributed lock pinger 'config:27017/2e9001af0e19:27017:1435980328:1804289383', sleeping for 30000ms
2015-07-04T11:25:40.016+0800 I SHARDING [mongosMain] waited 11s for distributed lock configUpgrade for upgrading config database to new format v6: LockBusy Lock for upgrading config database to new format v6 is taken.
2015-07-04T11:25:51.030+0800 I SHARDING [mongosMain] waited 22s for distributed lock configUpgrade for upgrading config database to new format v6: LockBusy Lock for upgrading config database to new format v6 is taken.
2015-07-04T11:25:59.004+0800 I SHARDING [LockPinger] cluster config:27017 pinged successfully at Sat Jul 4 11:25:59 2015 by distributed lock pinger 'config:27017/2e9001af0e19:27017:1435980328:1804289383', sleeping for 30000ms
2015-07-04T11:26:02.043+0800 I SHARDING [mongosMain] waited 33s for distributed lock configUpgrade for upgrading config database to new format v6: LockBusy Lock for upgrading config database to new format v6 is taken.
2015-07-04T11:26:13.057+0800 I SHARDING [mongosMain] waited 44s for distributed lock configUpgrade for upgrading config database to new format v6: LockBusy Lock for upgrading config database to new format v6 is taken.
2015-07-04T11:26:24.071+0800 I SHARDING [mongosMain] waited 55s for distributed lock configUpgrade for upgrading config database to new format v6: LockBusy Lock for upgrading config database to new format v6 is taken.
2015-07-04T11:26:29.006+0800 I SHARDING [LockPinger] cluster config:27017 pinged successfully at Sat Jul 4 11:26:29 2015 by distributed lock pinger 'config:27017/2e9001af0e19:27017:1435980328:1804289383', sleeping for 30000ms
2015-07-04T11:26:35.084+0800 I SHARDING [mongosMain] waited 66s for distributed lock configUpgrade for upgrading config database to new format v6: LockBusy Lock for upgrading config database to new format v6 is taken.
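The repeated "LockBusy" message usually means the configUpgrade distributed lock was left behind by an earlier mongos that did not shut down cleanly. As a diagnostic sketch (not a guaranteed fix), the lock document can be inspected on the config server, using the "config" hostname from the log above; clearing it by hand is only safe when no other mongos is running:

```shell
# Inspect the configUpgrade lock document in the config database.
mongo config:27017/config --eval 'printjson(db.locks.find({ _id: "configUpgrade" }).toArray())'

# If the lock is held (state != 0) by a process that no longer exists,
# release it manually. Only do this while no other mongos is running:
mongo config:27017/config --eval 'db.locks.update({ _id: "configUpgrade" }, { $set: { state: 0 } })'
```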

I checked whether it was a time synchronization problem, but both containers already mount /etc/localtime:/etc/localtime:ro, so they point at the host's clock settings.
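For reference, this is how the containers were started with the read-only localtime mount (the container name, image tag, and keyfile path here are illustrative, matching the log above):

```shell
# Read-only bind mount keeps the container's local time zone in step
# with the host; the keyfile mount matches the --keyFile path in the log.
docker run -d --name config \
  -v /etc/localtime:/etc/localtime:ro \
  -v /opt/keyfile:/opt/keyfile \
  mongo:3.0 mongod --configsvr --port 27017
```

Note that /etc/localtime only covers the time zone; if the host and container clocks themselves drift, that has to be fixed at the host level (e.g. NTP), since containers share the host kernel clock.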

But the same messages still appear. Can anyone help?
